Digital Photography Articles
Everything you need to know about digital photography. Articles on digital photography workflow, technical topics, JPEG compression, file naming strategies, photo cataloging software (digital asset management), photo software reviews, equipment reviews, and archiving digital photos on CD or DVD.
Learn how to use programs for organizing digital photographs. Also includes a beginner's guide to digital photography for those who are making the change from film to digital.
Is there a question or article that you would like to see?
If so, please feel free to leave a comment below!
Latest Photography Articles
2016-02-04 | JPEGsnoop Open Source Code |
2015-12-23 | JPEGsnoop - Options |
2015-11-29 | JPEGsnoop - Interesting Uses |
2015-11-29 | JPEGsnoop - JPEG Decoding Utility |
2011-04-11 | Do-it-Yourself Offsite Backup |
2011-04-01 | EXIF Orientation and Rotation |
2010-02-07 | Undelete your Photos! |
2009-03-27 | JPEGsnoop - Identify Edited Photos |
2008-07-22 | India Photo Gallery - Updates! |
2008-05-03 | Rights Managed vs Royalty Free Stock Photos |
2008-04-06 | Comparison of Photo Catalog Software |
2007-12-15 | Fix Corrupt JPEG Photos! |
Latest Other Articles
2016-02-07 | Robotics and Electronics Tutorials |
Photo Catalog Software
Everything about photo catalog software, including versioning, software comparison, features, exporting, etc.
General Digital Photography
Miscellaneous topics on a range of issues relating to digital photography and software.
Latest Comments in Photography Sections
2017-04-10 | What is an Optimized JPEG? Very nice guide and explanation. I started ... |
2017-04-04 | JPEGsnoop - Options Hi! I'm currently helping a customer that got ... |
2017-03-31 | JPEG Compression Quantization Tables Last post need more this codefor tabX function ... |
2017-03-31 | JPEG Compression Quantization Tables Mathematical vs wiki table wiki 16 11 10 16 24 40 ... |
2017-03-01 | JPEGsnoop - JPEG Decoding Utility Hello Calvin! Thank you for this terrific ... |
2017-02-26 | EXIF Orientation and Rotation Actually, saving the photo to Windows Paint ... |
2017-02-26 | JPEGsnoop - JPEG Decoding Utility We have a DAT file from a DVR. DVR puts mjpeg fı... |
2017-02-12 | JPEGsnoop - JPEG Decoding Utility Thanks for the great software! Got it while on a ... |
2017-01-14 | JPEGsnoop - Options This looks very promising for my need. I have a ... |
2017-01-14 | JPEGsnoop - JPEG Decoding Utility THANKS A LOT! I don't have a way to donate but ... |
2017-01-14 | JPEG Huffman Coding Tutorial Ha, turned out my way of dealing with stuffed ... |
2017-01-06 | Digital Photography Articles Could you please tell me what this means on the ... |
2016-12-23 | JPEG Huffman Coding Tutorial I don't know if you're familiar with the TIFF ... |
2016-12-23 | JPEG Huffman Coding Tutorial Hallo, I'm implementig a jpeg-decoder and I'm now ... |
2016-12-23 | What is an Optimized JPEG? Thanks so much for all your helpful references ... |
2016-12-20 | JPEG Huffman Coding Tutorial Just curious, do you know what to make of bit ... |
2016-12-12 | Designing a JPEG Decoder Hey, I'm currently working on a JPEG encoder out ... |
2016-12-07 | JPEGsnoop Open Source Code Hi, how can i get the source code of JpegSnoop? ... |
2016-11-22 | JPEGsnoop - JPEG Decoding Utility In the position marker data, what are the block ... |
2016-11-17 | JPEGsnoop - JPEG Decoding Utility Hi, Calvin May be you know free program for ... |
Organizing and Naming Photos
How to be organized when you have thousands of photos on your computer. File name schemes that allow a mix of digital photos, scanned photos and non-photos on the same drive.
Technical Articles
For those interested in knowing the details of how digital photos are stored, articles on how JPEG compression works, and other in-depth tutorials:
Archiving & Storing Photos
Your entire photo collection can vanish in an instant, so a proper methodology in archiving your photos is crucial. Here you will find strategies to help automate the digital photo backup process.
Importing Digital Photos
Software used to transfer your images from memory cards to your hard drive.
Beginner's Guide to Digital Photography
Articles for those who are either new to photography and want to start with digital, or those who are experienced with film and want to upgrade to digital.
Digital Photography Equipment
Articles on digital cameras and related equipment.
Reader's Comments:
Please leave your comments or suggestions below!
Date/Time Created 2007:05:24 15:21:29+00:00
Flash Auto, Did not fire, Red-eye reduction
Image Size 3072x2304
Shutter Speed 1/1000
Circle Of Confusion 0.006 mm
Field Of View 19.6 deg
Hyperfocal Distance 18.82 m
The flash was set to auto mode (meaning that it was ready to fire but didn't, and that it would have issued a multi-flash burst to help reduce the red-eye effect in the photo). That is followed by the JPEG image resolution (6 megapixels). The shutter speed of 1/1000 probably indicates that it was taken in sun / daylight, as very little time was required to capture enough light on the sensor for a properly exposed photo. The Field of View refers to the apparent angle at which the lens + sensor combination can observe the scene -- a narrower field of view (telephoto) means that only a small portion of what is in front of the camera will be seen. A wide-angle lens would show a much wider Field of View. The hyperfocal distance is a rough measure of the minimum distance objects need to be from the camera to be in focus if the camera were focused at infinity. In other words, if you were taking a landscape photo and the camera focused on the mountains in the background, any objects more than 18 m away would also appear in focus. If you had taken a wider-angle photo, then the hyperfocal distance would have become shorter. Greatly simplified, the circle of confusion is a measure of the size of a point of light that falls upon the image sensor (in focus). The smaller the number, the sharper the image can be.
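For reference, the hyperfocal distance can be computed from the focal length, aperture, and circle of confusion. The sketch below uses the standard formula; the focal length and f-number are assumed example values, since the EXIF summary above does not list them.

```python
# Hyperfocal distance: H = f^2 / (N * c) + f
# f = focal length (mm), N = f-number, c = circle of confusion (mm)
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# Assumed example values (NOT taken from the EXIF dump above):
# a compact camera at f = 17.8 mm and f/2.8, with the listed 0.006 mm CoC.
h_m = hyperfocal_mm(17.8, 2.8, 0.006) / 1000
print(f"{h_m:.2f} m")  # about 18.9 m
```

Plugging in a slightly shorter focal length makes the hyperfocal distance drop quickly, which is why the wider-angle shot mentioned above would have a shorter one.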
I have used your excellent tool JPEGsnoop for quite some time now and am very happy with it. I use it for my own images that I take with a camera, but also as a tool in puzzle solving in geocaching. Geocaching is the game where you have a GPS coordinate and try to find the cache at that coordinate.
Puzzle caches or mystery caches include a puzzle/problem to solve to be able to get the correct GPS coordinate. Many times it involves manipulated images of various kinds.
Now I have one mystery problem with a JPG image that I very much suspect contains steganography: an embedded message or embedded image of some kind. I have tried JPEGsnoop, but I can't get anything vital out.
Do you have any suggestion how to use JPEGsnoop in case of suspected steganography?
Can you please help us and analyze data from one image, because the developers can't understand where the difference is between images from, for example, LG and Meizu mobile phones?
Best regards
Rafael
(JPEGsnoop output log trimmed)
A note on MJPEG. Some webcams that support raw MJPEG frames reduce data overhead by removing the Huffman tables from the image frame, which makes the images unviewable by most software (Firefox seems OK, although it also supports server-push JPEG streams). I recently found a standard Huffman table set, took streamed data from an ancient DVR, and combined the two to make a complete JPEG image.
The initial JPEG only contained the following markers...
SOI, DQT, SOF0, DRI, SOS, EOI.
I added an APP0 and the 4 DHT tables after SOI to make
a stand-alone viewable image file.
I think it would be good to include default Huffman tables as a fallback in JPEGsnoop for rendering previews when they are not included in the file.
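To illustrate the fix described above, here is a rough Python sketch (not JPEGsnoop code; `extract_dht` and `add_dht` are names introduced for illustration) that copies the DHT segments from a donor JPEG that decodes correctly into a headless MJPEG frame, right after its SOI marker:

```python
def extract_dht(jpeg_bytes):
    """Collect all DHT (FFC4) segments, marker and length bytes included."""
    out = b""
    i = 2  # skip the SOI marker
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # lost sync; stop
        marker = jpeg_bytes[i + 1]
        seglen = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xC4:  # DHT
            out += jpeg_bytes[i:i + 2 + seglen]
        if marker == 0xDA:  # SOS: entropy-coded data follows, stop scanning
            break
        i += 2 + seglen
    return out

def add_dht(frame, dht_segments):
    """Insert DHT segments right after the SOI of a headless frame."""
    assert frame[:2] == b"\xff\xd8", "not a JPEG/MJPEG frame"
    return frame[:2] + dht_segments + frame[2:]

# Hypothetical donor with one tiny (fake) DHT segment, for illustration:
donor = b"\xff\xd8" + b"\xff\xc4\x00\x04ab" + b"\xff\xda\x00\x02"
print(extract_dht(donor).hex())  # ffc400046162
```

In practice the donor would be any complete JPEG from the same encoder family (or one built from the standard Annex K tables), and the patched frame would then carry the SOI, DHT, and remaining markers the commenter lists.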
Again great work and best regards.
Thanks!
I've been trying to repair some corrupted JPEG files, as per your guide posted on luvbug comments dated 2008-04-18 on http://www.impulseadventure.com/photo/fix-corrupt-jpeg-photo.html
I use WinHex as the hex editor.
Using a good JPEG, I read the data from offset 000000 to 00001602. On the bad JPEG, I deleted the hex data from 000000 to 00000334 (thus reducing the size of the BAD file). Then I pasted the hex data from the GOOD file at the start of the BAD file (offset 000000) and saved it under a different filename.
What was produced was a thumbnail from the good pix, but still with the broken/corrupted pix when viewing in normal mode.
What did I do wrong?
The pictures are both from the same camera.
Is there anywhere I can upload the GOOD & BAD pictures so that you can take a look?
Thanks.
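For anyone scripting the header-transplant repair described above, a sketch of the idea in Python (a sketch under the assumption that both files come from the same camera at the same settings; `scan_start` is a helper name introduced here): the good file supplies everything up to the start of the entropy-coded scan data, and the damaged file supplies the scan data itself. Splicing at an arbitrary offset instead may only replace the EXIF block, which could explain why only the good file's embedded thumbnail showed up.

```python
def scan_start(data):
    """Offset of the first entropy-coded byte (just past the SOS segment header)."""
    i = 2  # skip SOI
    while i + 4 <= len(data):
        assert data[i] == 0xFF, "expected a marker"
        marker = data[i + 1]
        seglen = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + seglen
        if marker == 0xDA:  # SOS
            return i
    raise ValueError("no SOS marker found")

# Demo with a minimal synthetic stream (SOI, one DQT-like segment, SOS header):
hdr = b"\xff\xd8" + b"\xff\xdb\x00\x04\x01\x02" + b"\xff\xda\x00\x02"
print(scan_start(hdr + b"scan-data"))  # 12

# Usage with real files (filenames are placeholders):
#   good, bad = open("good.jpg", "rb").read(), open("bad.jpg", "rb").read()
#   repaired = good[:scan_start(good)] + bad[scan_start(bad):]
```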
UPDATE: You are right -- there was indeed a bug in the compression ratio calculation. This affected images (like yours) that contained restart markers. I have now fixed the code, which will appear in upcoming release 1.7.6. Thanks!
When I use JPEGsnoop on a picture straight off the SD card, it says to add the camera details to the database, saying there is no compression signature match for this given camera.
I add the camera to the database and reprocess that same photo, and it still says there is no matching compression signature?
I use JPEGsnoop (currently 1.7.5) to look into the Exif metadata in camera files.
I have recently begun the use of a program called Silkypix Developer Studio to do my routine processing of JPG files from various cameras. (It is nominally a raw converter, but does a nice job in processing JPG files from the camera)
I discovered that in the "developed" file this program writes, the MakerNote area does not carry the MakerNote data from the source camera file. Rather, I understand it is used to hold information related to the processing of the file by SilkyPix. This in fact causes some (small) problems when I am adding IPTC metadata with Exiftool via Exiftool GUI. That program in some cases reports the MakerNote directory to be bad.
JPEGsnoop reports for any file generated by Silkypix Developer Studio that the size of the MakerNote directory is 0x5349. It then indicates that the MakerNote area contains "excessive # components", and then states the number of components as 1,330,532,933. It then says that it is "Limiting to first 4000".
Certainly the size of the MakerNote directory as 0x5349 is "startling", but it is hard for me to believe that the directory indicates that there are 1,330,532,933 components. Is this possible? For one thing, I do not think that the count field of the IFD has that capacity.
And what does it mean that JPEGsnoop is "Limiting to first 4000"? Does that mean that it will attempt to "decode" the first 4000 components of the MakerNote? (Clearly in this case it cannot, so no MakerNote component values are reported.)
Thanks for any help you can give me here.
Best regards,
Doug Kerr
Great question. Without seeing the actual files, it appears that you may be observing the result of some "corruption" of the MakerNotes due to processing by the Studio software. This is a common problem when edits are made to files that contain proprietary MakerNotes. Two scenarios often occur:
1) The editor adds in extra EXIF metadata and then doesn't adjust any of the offsets within the MakerNotes segment. If the offsets within the MakerNotes segment are absolute addresses, a shift applied to the MakerNotes should cause all of the MakerNotes pointers to move as well. Unfortunately this generally requires fully decoding all of the tags within the MakerNotes segment which is difficult since it is vendor-specific.
2) The editor makes changes to the EXIF metadata and in doing so flips the endianness of the EXIF (eg. from little endian to big endian). The problem is that the editor is highly unlikely to parse all of the MakerNotes and change its endianness as well. So, we will be left with a MakerNotes segment that was actually encoded with a different endianness than the EXIF header at the start of the file! JPEGsnoop will attempt to decode the MakerNote IFD directory with the opposite endianness which will lead to a massive difference in the directory entry count (this is why you might see millions of entries reported, followed by the warning that the report will be limited). In reality, the interpretation will already be corrupted so very few values will actually be shown since random data is unlikely to match known MakerNote tags. I put corruption in quotation marks earlier since one could argue that the editor tool didn't corrupt anything, but it certainly adds to ambiguity in the resulting file.
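The byte-order mix-up in scenario 2 is easy to demonstrate in a couple of lines: the same bytes read with the wrong endianness yield a wildly different number (the values below are illustrative, not from the file in question).

```python
import struct

# An IFD entry count of 20, stored little-endian (a TIFF "II" header):
raw = struct.pack("<H", 20)
print(struct.unpack("<H", raw)[0])  # 20   -- correct byte order
print(struct.unpack(">H", raw)[0])  # 5120 -- byte order flipped
```

With a wider field, or with the parser landing on a misaligned offset, the misread value can easily reach into the millions or billions, matching the kind of entry count reported above.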
Some advanced EXIF software, such as the excellent exiftool by Phil Harvey, has worked around these corruption scenarios by implementing clever heuristics to guess at what may have happened and attempt to fix the offsets. JPEGsnoop doesn't attempt such manipulations in the current version.
Hope that helps!
PS> If you are the same Doug Kerr who authored The Pumpkin, I want to thank you for sharing a wealth of information on digital photography -- such great technical detail is hard to find anywhere else!
Thanks!
First, I am really sorry to hear about the loss of your photos from the wedding -- that would be so devastating.
Unfortunately, I don't have experience in analyzing the NEF file format so I can't help specifically with these files. There are a few tools advertised online for recovering NEF files, but regrettably I think you may find that the files you were supplied might be insufficient for a full recovery. Having the original memory card is vital in many cases (as often the image data is there, but split into fragments across the card). When copying these corrupted files off the card, many of the fragments are left behind and not retained.
That said, I do hope that you still managed to get a number of good shots from the photographer that were not damaged.
There appears to be a problem with Blackberry 10 devices in that they are corrupting their own camera shots. The JPEG files will display on the device as thumbnails, but if you try to open them they won't open. No program that I can find will open them either. Looking at the files in a hex editor, they have a weird format in which four out of every sixteen bytes are FF FF FF FF in a block together. Could I send you a JPEG to have a look at?
thanks, Philip
Great for helping me create Python code to decode the JPEG format onto a tkinter canvas.
Thanks a lot; God bless you.
I was wondering if you could help me with a DVD-R issue.
Here is the thing: Each DVD-R is supposedly 4.7GB, so I was going to backup all my home videos from the DVD-R's to a 1TB external hard drive.
But when I dragged the files from the first DVD-R onto the external hard drive, it said the files totalled 12.8 GB. How can this be if the DVD-R is only supposed to hold 4.7 GB? That would be too much space, considering I have about 70 DVD-R's to back up.
I'm going to put my purpose and questions in order so you can answer me more easily:
Purpose: I want to backup videos (with their audios, obviously) from DVD-R's into a hard drive, so that I can later burn them into other DVD-R's (in the event that the original DVD-R's suffer any damage):
1. Do I need the "VIDEO_RM" folders?
2. Inside the "VIDEO_TS" folder there are BUP, IFO, and VOB files (they seem to be the same videos in different formats). Do I need to copy all these formats (all these files), keeping in mind that I may need to make new DVD-R's later from what I back up onto my ext hard drive?
Thanks for any help you can give me.
Have a nice day.
I'm working on embedded lossless rotate code, and thought I had all the pieces right, but don't. I'm able to: 1) decode to the DCT blocks properly, 2) encode those blocks again properly, and 3) transpose and reflect the blocks.
When I semi-decode, then re-encode with no transformations, I get the original image back fine. But, after transposing/reflecting each block in the MCU and spitting out the MCUs in the new order, I get a junk image. Using both jpegtran & the irfanview plugin to rotate, then JPEGsnoop to look at the MCU blocks, there seems to be more going on than a simple rotate. There are two pieces I'm unsure about - do I need to de-quantize & re-quantize? (I am rotating the quant tables) Second, do I need to do the block-to-block DC accumulation in the lossless rotate? Thanks for your help!
Have you ever come across a camera with changeable quantization tables, rather than several fixed ones?
Recently, I tested a database taken by an Agfa 505x, and I found 55 different pairs of quantization tables! I am confused by this situation. Are there some unwritten rules?
Thx!
my question is about signatures. I analyzed 34 images in an EXIF reader, where the model of the camera is shown as Canon Powershot G12. Then I analyzed all these images in JPEGsnoop with this result:
*** Searching Compression Signatures ***
Signature: 01F0A31D3842CFD4B7E09178F141E14B
Signature (Rotated): 016A9F39EDF7E9DCAF8BE822C2266077
File Offset: 0 bytes
Chroma subsampling: 2x2
EXIF Make/Model: OK [Canon] [Canon PowerShot G12]
EXIF Makernotes: OK
EXIF Software: NONE
Searching Compression Signatures: (3347 built-in, 0 user(*) )
EXIF.Make / Software EXIF.Model Quality Subsamp Match?
------------------------- ----------------------------------- ---------------- --------------
CAM:[OLYMPUS OPTICAL CO.,LTD ] [C700UZ ] [ ] No
SW :[IJG Library ] [072 ]
... snip ... ASSESSMENT: Class 4 - Uncertain if processed or original
Then I made images myself with a Canon Powershot G12, and after analysis JPEGsnoop gave me a different signature with this result:
01A84EC0DDFAE937A0336DB825C85028.
ASSESSMENT: Class 3 - Image has high probability of being original.
What could cause the differences between these signatures? If I suppose that the controversial images were edited in a graphic editor, why is there no signature of this graphic editor in the result of the analysis? Is it possible that the images could have been edited in a graphic editor which has no signature in the database of JPEGsnoop, for example on an iPod?
Thanks for the answer.
Peter.
I know what they are, but what do the terms stand for?
I suspect the answer is lost to history, but ...
Is there some analogy to electricity, or what?
Since you have a clear understanding of the JPG format, I was wondering if you would be interested in a project to code, in C, a converter from a .bmp file to .jpg. Of course we would pay for it. Let me know.
thanks for your software. I was telling a friend to use it, and we would like to exchange our databases of the various digital cameras, but I cannot find a way in the current version.
Is there a way to export/import my DB to my friend?
Thanks!
I found your webpage by having troubles with JPEG files, that we cannot open after a data recovery procedure. There is only a white field with a red cross. With JPEGsnoop it says "File did not start with JPEG marker. Consider using [Tools->Img Search Fwd] to locate embedded JPEG". After doing this it says "No SOI Marker found". Is there a way to save the lost information in these JPEG files?
Regards
Chris
orig has X colors per Irfan (uses IJG) and per ImageMagick (also IJG?)
rot180 of orig has Y colors per both apps, delta ~ 100 colors
orig and rot180 of orig both have Z colors per Gimp (colors->info->color cube analysis). this is what you want to see with lossless! But is it true?
rot180 again and both Irfan and I.M. report X again
the orig test file is subsampled 2x1,1x1,1x1. hmm, suspicion...
-dave
I am writing a JPEG decoder, and I can only decode my test image partially. It seems that before the second MCU's chroma there are some bits in the scan stream that I don't know what they are.
The first MCU's luma and chroma and the second MCU's luma are decoded without error.
Currently I am aware of :
1)huffman coded run/size byte
2)size(length) coded coefficient(number)
3)stuff bytes and restart markers
Is there anything else in huffman coded scan stream?
Please help.
Many thanks.
I actually have two questions.
First off, I've been using a combination of Irfanview and Jpegcrop to losslessly optimize the images on my new laptop. (I've been doing it from the start, so it's not such an overwhelming process.) I've hit a snag though with some CMYK JPEGs. Irfanview only optimizes the Huffman tables; it won't convert to progressive coding (unless you re-save the image completely). Jpegcrop will perform a progressive conversion, but it only accepts RGB colorspace JPEGs. Here's the JPEGsnoop output for the jpg transform plugin Irfanview uses:
*** Searching Executable for DQT ***
Filename: [Jpg_transform.dll]
Size: [72192]
Searching for DQT Luminance tables:
DQT Ordering: pre-zigzag
Matching [JPEG Standard]
Searching patterns with 1-byte DQT entries
Searching patterns with 2-byte DQT entries
Searching patterns with 2-byte DQT entries, endian byteswap
Searching patterns with 4-byte DQT entries
Searching patterns with 4-byte DQT entries, endian byteswap
DQT Ordering: post-zigzag
Matching [JPEG Standard]
Searching patterns with 1-byte DQT entries
Searching patterns with 2-byte DQT entries
Searching patterns with 2-byte DQT entries, endian byteswap
Searching patterns with 4-byte DQT entries
Searching patterns with 4-byte DQT entries, endian byteswap
Done Search
******
Is there some way to tweak the plugin to apply a progressive conversion as well? Or do you know of a tool that will do such a thing with CMYK images?
My second one is more of a hypothetical, but bear with me.
Conventionally, the only truly "8-bit" JPEGs are simply grayscale images where the CbCr data has been discarded. I've been wondering, though, whether it would be possible to create 256-color "sepia-tone" JPEGs by performing a grayscale conversion and then inserting flat, mono-color CbCr data?
JPEGsnoop - great tool
One aspect of JPEGs I am not entirely sure on is how to define quantization tables within a JPEG.
For example, suppose I have an image with the following Q tables:
And the hex value of the DQT marker at 0xDB is 58, with the next 4 bytes consisting of the hex values 59, 5a, 00 and 00.
How exactly do these hex values break down and define the above tables?
Any help would be greatly appreciated, as I cannot find any good examples of this after hours of searching.
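For reference, a baseline DQT segment breaks down as follows: after the FFDB marker come a 2-byte segment length, then for each table one precision/ID byte (high nibble 0 = 8-bit entries, 1 = 16-bit; low nibble = table ID), followed by 64 entries in zig-zag order. A small parser sketch (the sample segment below is constructed for illustration, not taken from the question):

```python
def parse_dqt(seg):
    """Parse one DQT marker segment into {table_id: [64 entries]}."""
    assert seg[:2] == b"\xff\xdb"
    length = int.from_bytes(seg[2:4], "big")
    tables, i = {}, 4
    while i < 2 + length:
        pq, tq = seg[i] >> 4, seg[i] & 0x0F  # precision nibble, table ID
        n = 64 * (2 if pq else 1)            # 64 entries, 1 or 2 bytes each
        raw = seg[i + 1:i + 1 + n]
        if pq:
            vals = [int.from_bytes(raw[j:j + 2], "big") for j in range(0, n, 2)]
        else:
            vals = list(raw)
        tables[tq] = vals                    # entries are in zig-zag order
        i += 1 + n
    return tables

# A hypothetical 8-bit luminance table (all entries 16, just for illustration):
seg = b"\xff\xdb" + (67).to_bytes(2, "big") + b"\x00" + bytes([16] * 64)
t = parse_dqt(seg)
print(len(t[0]), t[0][0])  # 64 16
```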
I'm doing some research on quantization tables and how they change depending on which device takes the image.
Having read a few papers, I stumbled across this presentation:
http://www.dfrws.org/2008/proceedings/p21-kornblum_pres.pdf
On slide 24, it is mentioned that there are 99 known standard Q tables, etc.
Is this true? Was there an initial set of Q tables created by JPEG, while nowadays software generates its own?
If so, is there a list of these tables somewhere?
But software JPEG encoders, cell phones and digital cameras are and were always free to select their own quantization tables for encoding the JPEG bitstream. Devising one's own quantization table that gives the best tradeoff in optimizing file size versus human perception quality is probably quite a challenge, and is likely the reason that some encoders stuck with the Annex tables in combination with the scaling factor formula (which provides more control over final quality). More recent digicams have attempted to generate / select an appropriate quantization matrix on the basis of image content or file size (see my page on variable quantization). That methodology presents new challenges to signature-based encoder deduction techniques.
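The scaling-factor formula mentioned above is the one popularized by the IJG library: a quality setting Q (1-100) scales the Annex K base tables, with Q = 50 reproducing the base table exactly. A minimal sketch:

```python
# IJG-style quality scaling of a base (Annex K) quantization table entry.
# quality must be in 1..100; 50 reproduces the base entry unchanged.
def ijg_scale(base_entry, quality):
    scale = 5000 // quality if quality < 50 else 200 - 2 * quality
    return min(max((base_entry * scale + 50) // 100, 1), 255)

# First Annex K luminance entry is 16:
print(ijg_scale(16, 50))  # 16 (Q=50 reproduces the base table)
print(ijg_scale(16, 90))  # 3  (higher quality -> finer quantization)
print(ijg_scale(16, 10))  # 80 (lower quality -> coarser quantization)
```

This is why encoders that reuse the Annex tables plus this formula produce a small, predictable family of tables, while encoders that derive tables from image content or a target file size produce signatures that are much harder to match.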
I saved a DNG file as a TIFF, reopened it, saved it as a JPEG at quality 11, closed and reopened it, and saved it as a TIFF again.
JPEGsnoop reports "No SOI marker found" in a forward search.
Can I do this?
I stumbled onto your site and you seem to reference this a few times, but I could not find the information. Any help you could provide would be most appreciated.
For example:
In the above, the height is 0x0798 (1944) and width is 0x0A20 (2592).
Have a look at JPEGsnoop as it should help you identify this and other related details.
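Those fields live in the SOF0 segment: after the FFC0 marker and the 2-byte segment length come the sample precision (1 byte), then height and width as big-endian 16-bit values. A minimal sketch (the sample segment below is constructed for illustration):

```python
def sof0_dimensions(seg):
    """Return (height, width) from a baseline SOF0 marker segment."""
    assert seg[:2] == b"\xff\xc0"
    height = int.from_bytes(seg[5:7], "big")  # after length (2) + precision (1)
    width = int.from_bytes(seg[7:9], "big")
    return height, width

# FFC0, length 0x0011, precision 8, height 0x0798, width 0x0A20, 3 components
seg = b"\xff\xc0\x00\x11\x08" + b"\x07\x98" + b"\x0a\x20" + b"\x03"
print(sof0_dimensions(seg))  # (1944, 2592)
```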
... I have a technical question for you about JPEG compression, I've looked through the documentation and done a fair bit of Googling but couldn't find the answer.
What I was curious about was the "YCC Clipped" notes in the "Decoding SCAN Data" section - I seem to get these even with images in Photoshop saved at the highest level.
Am I correct in guessing that it is a symptom of the quantisation? Because the dequantised coefficients aren't exactly the same as the originals, it's possible that the inverse DCT will result in a block which doesn't only contain values between 0-255?
Thanks,
Rob.
Because JPEG quantization will round to the nearest value (not truncate), large coefficients in the quantization matrix could lead to larger excursions in the YCC range. However, I wouldn't have expected that this would be possible with Photoshop set at the highest level (12, where coefficients are set to 1, from what I recall). The only other causes that come to mind are color space conversions or corruption in the differential decoding.
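A tiny illustration of the rounding effect (the values are arbitrary, chosen only to show the mechanism): each DCT coefficient is reconstructed as the nearest multiple of its quantization table entry, so the reconstruction error can be up to half the table entry, and after the inverse DCT those errors can push samples outside the 0-255 range.

```python
q = 24                       # a typical mid-frequency quantization table entry
coef = 250.0                 # original DCT coefficient
recon = round(coef / q) * q  # value the decoder reconstructs
print(coef - recon)          # 10.0 -- error can be as large as q/2
```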
Thanks in advance
I am interested in having JPEGsnoop included in the operating procedures for a standards setting organization. Can you please reply with contact information so we can talk.
I have been doing my final-year project on a JPEG decoder for a multi-processor system running on an FPGA. I am now desperately running out of time. I am trying to replace my Loeffler IDCT with a Chen-Wang IDCT. I found such an algorithm already written, so I tried incorporating it with my own code. However, the image looks very sketchy and the colour is very off. This code was written for an MPEG decoder.
My question is simple: is the IDCT used in MPEG cross-compatible with the one used for JPEG decoding?
Thank you so much for your time,
Peter.
I was wondering - if different types of cameras leave their own quantization tables, is it possible that different cameras of the same model might leave their own unique signature?
In other words, does every camera have a unique signature?
All the best!
~ John
While I have you "on the phone," so to speak, my friend and I were doing some Photochops on a photo he took of his new car to virtually "pimp it out."
He was running CS3 on a Mac (OS X), while I was running CS2 on a Vista PC.
The only ground rules were to produce an image with the same dimensions and print resolution, and to save at quality 08.
We ran both of them through JPEGSnoop - just in case - and the only difference I found was that his JPG had an APP0 marker and mine did not.
Is that due to any difference in CS2 vs. CS3 or is it an issue with Photoshop on the Mac vs. Photoshop on the PC ?
Thanks!
First, thanks for creating such an informative website as well as your amazing JpegSnoop!
Next, my question. I guess this is more in the area of Digital Image Analysis, but I thought I'd ask it here...is it possible to determine the quality of a jpeg image (low quality and blurry versus high quality and detailed) by examining the jpeg file itself (instead of viewing the image)?
The reason I ask is that I have a folder of approx. 10,000 jpeg files and I'd like to sort them from highest quality to lowest quality, but I'd like a way to sort them automatically without having to view each individual picture and adding metadata to each file (which would be quite time consuming). Thanks for any info!
-Bill
I have been playing recently with Python to make a similar tool to work under Linux; however, I have a small query about quantization tables. We seem to be getting slightly different results, and I was wondering if you could take a look and see what might be wrong.
Contact me for more info if you're interested.
I wanted to follow up on my question last year about detecting layers in a flattened image (Yes, I know that is part of the process in saving a PSD to a JPG. Photoshop has that warning every time you go to the SAVE screen).
I have tried using error level analysis to find the answer, but from my understanding of it, there has to be at least one resave of a JPG along with a physical change to the original image, for it to spot anything.
Here is the scenario:
I received a letter of recommendation, an email with attachment, from a chap I wish to hire. He claims that he had it scanned for him.
It looks like it was created in MS Word, using Arial, and then printed on white stationery with a black-and-white business logo at the top and an address line along the bottom after the closing.
The letter was not signed (somewhat of a red flag).
The text part of the letter looked darker than the logo -- which may mean that the logo was in color or greyscale -- but I could not find anything unusual - like repeating patterns, different blocking, ringing artifacts, etc. - that would lead me to conclude the text was added to an existing JPG.
After I "Snooped" it, it identified the software as CS2 from the Exif data, but the compression signatures came out as something else, like IrfanView.
Given the Photoshop IRB that JPEGsnoop found, with a matching pair of signatures that are not from PS, I take this to mean that the scan may have been made within Photoshop, saved as a JPG (let's say "Save at 60"), and then dragged through IrfanView for a resave.
Do you think that the resave was to hide any evidence of layers?
I also tried using pixel equalization to spot any unnaturally light or dark pixels, as well as Edge Detection.
I am thinking that this dude scanned a sheet of blank letterhead (creating one layer) and then created a text layer to merge with the first when saving it as a flattened JPG.
But, how do I prove that?
Thanks for your help.
------
One thing that may not be apparent is that when you save an image with Photoshop, all layers are usually "flattened" in generating the single JPEG image. This single image uses only one set of quantization tables. In fact, if you resave an image with various tools, it is generally only the last tool that will define the quantization tables used to encode the final JPEG image.
So, to answer your question: while JPEGsnoop can often identify that an image was generated by Photoshop, it cannot infer further details about the layers that were used to generate the image (at least on the basis of the quantization tables). Cropping therefore won't reveal anything about the layers either; in fact it may change the quantization tables, depending on the tool that you are using for the cropping.
The best way to accomplish what you are after is to use a tool that implements one of the many imaging algorithms, such as error level analysis, that can expose characteristics pointing to the fact that an image was created from a composite.