Digital Photography Articles

Everything you need to know about digital photography: articles on digital photography workflow, technical topics such as JPEG compression, file naming strategies, photo cataloging software (digital asset management), photo software reviews, equipment reviews, and archiving digital photos to CD or DVD.

Learn how to use programs for organizing digital photographs. Also includes a beginner's guide to digital photography for those who are making the change from film to digital.


Is there a question or article that you would like to see?
If so, please feel free to leave a comment below!

Latest Photography Articles


Latest Comments in Photography Sections

2017-04-10 - What is an Optimized JPEG?
Very nice guide and explanation. I started ...
2017-04-04 - JPEGsnoop - Options
Hi! I'm currently helping a customer that got ...
2017-03-31 - JPEG Compression Quantization Tables
Last post need more this codefor tabX function ...
2017-03-31 - JPEG Compression Quantization Tables
Mathematical vs wiki table wiki 16 11 10 16 24 40 ...
2017-03-01 - JPEGsnoop - JPEG Decoding Utility
Hello Calvin! Thank you for this terrific ...
2017-02-26 - EXIF Orientation and Rotation
Actually, saving the photo to Windows Paint ...
2017-02-26 - JPEGsnoop - JPEG Decoding Utility
We have a DAT file from a DVR. DVR puts mjpeg fı...
2017-02-12 - JPEGsnoop - JPEG Decoding Utility
Thanks for the great software! Got it while on a ...
2017-01-14 - JPEGsnoop - Options
This looks very promising for my need. I have a ...
2017-01-14 - JPEGsnoop - JPEG Decoding Utility
THANKS A LOT! I don't have a way to donate but ...
2017-01-14 - JPEG Huffman Coding Tutorial
Ha, turned out my way of dealing with stuffed ...
2017-01-06 - Digital Photography Articles
Could you please tell me what this means on the ...
2016-12-23 - JPEG Huffman Coding Tutorial
I don't know if you're familiar with the TIFF ...
2016-12-23 - JPEG Huffman Coding Tutorial
Hallo, I'm implementing a jpeg-decoder and I'm now ...
2016-12-23 - What is an Optimized JPEG?
Thanks so much for all your helpful references ...
2016-12-20 - JPEG Huffman Coding Tutorial
Just curious, do you know what to make of bit ...
2016-12-12 - Designing a JPEG Decoder
Hey, I'm currently working on a JPEG encoder out ...
2016-12-07 - JPEGsnoop Open Source Code
Hi, how can I get the source code of JPEGsnoop? ...
2016-11-22 - JPEGsnoop - JPEG Decoding Utility
In the position marker data, what are the block ...
2016-11-17 - JPEGsnoop - JPEG Decoding Utility
Hi, Calvin. Maybe you know a free program for ...




Organizing and Naming Photos

How to be organized when you have thousands of photos on your computer. File name schemes that allow a mix of digital photos, scanned photos and non-photos on the same drive.

Read More


Technical Articles

For those interested in knowing the details of how digital photos are stored, articles on how JPEG compression works, and other in-depth tutorials:



Archiving & Storing Photos

Your entire photo collection can vanish in an instant, so a proper methodology in archiving your photos is crucial. Here you will find strategies to help automate the digital photo backup process.

Read More


Importing Digital Photos

Software used to transfer your images from memory cards to your hard drive.

Read More


Beginner's Guide to Digital Photography

Articles for those who are either new to photography and want to start with digital, or those who are experienced with film and want to upgrade to digital.



Digital Photography Equipment

Articles on digital cameras and related equipment.




Reader's Comments:

Please leave your comments or suggestions below!
 Could you please tell me what this means in the EXIF data?

Date/Time Created 2007:05:24 15:21:29+00:00
Flash Auto, Did not fire, Red-eye reduction
Image Size 3072x2304
Shutter Speed 1/1000
Circle Of Confusion 0.006 mm
Field Of View 19.6 deg
Hyperfocal Distance 18.82 m
 The main things this tells us are:

The flash was set to auto mode: it was ready to fire but didn't, and it would have issued a multi-flash burst to help reduce the red-eye effect in the photo.

The image size of 3072x2304 is the JPEG image resolution (roughly 7 megapixels).

The shutter speed of 1/1000 probably indicates that the photo was taken in sun / daylight, as very little time was required to capture enough light on the sensor for a properly exposed photo.

The Field of View refers to the apparent angle over which the lens + sensor combination can observe the scene. A narrower field of view (telephoto) means that only a small portion of what is in front of the camera will be seen; a wide-angle lens would show a much wider field of view.

The hyperfocal distance is a rough measure of the minimum distance objects need to be from the camera to appear in focus if the camera were focused at infinity. In other words, if you were taking a landscape photo and the camera focused on the mountains in the background, any objects more than about 18 m away would also appear in focus. If you had taken a wider-angle photo, the hyperfocal distance would have been shorter.

Greatly simplified, the circle of confusion is a measure of the size of a point of light that falls upon the image sensor (in focus). The smaller the number, the sharper the image can be.
 Hello Calvin,

I have used your excellent tool JPEGsnoop for quite some time now and am very happy with it. I use it for my own images that I take with a camera, but also as a tool for puzzle solving in geocaching. Geocaching is the game where you have a GPS coordinate and try to find the cache at that coordinate.

Puzzle caches or mystery caches include a puzzle/problem to solve to be able to get the correct GPS coordinate. Many times it involves manipulated images of various kinds.

Now I have one mystery problem with a JPG image that I very much suspect contains steganography: an embedded message or embedded image of some kind. I have tried JPEGsnoop, but I can't get anything vital out of it.

Do you have any suggestions on how to use JPEGsnoop in cases of suspected steganography?
 Hi Peter -- the current version of JPEGsnoop could potentially help in identifying any content after the EOF as well as any data recorded elsewhere in the metadata, but those are not very common forms of steganography. For cases of steganography based on LSB (least-significant bit) replacement in the frequency domain (ie. DCT coefficients), I have considered adding a feature to JPEGsnoop that would report out the differences between LSB=0 and LSB=1 across the coefficients in the histogram. This may help reveal any asymmetries which could lead one to suspect steganography is at play.
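As a rough illustration of the histogram idea (this is not an existing JPEGsnoop feature), the LSB balance of nonzero quantized DCT coefficients could be tallied like this; the coefficient list is made up for the example:

```python
from collections import Counter

def lsb_balance(coeffs):
    """Tally the least-significant bits of nonzero coefficients.
    LSB-replacement embedding tends to equalize counts that are
    naturally skewed in unmodified images."""
    counts = Counter(abs(c) & 1 for c in coeffs if c != 0)
    return counts[0], counts[1]

# Natural coefficient histograms are heavily skewed toward +/-1, so a
# near 50/50 LSB split across many blocks can hint at embedding.
print(lsb_balance([1, -1, 2, 1, -3, 1, 5, -2, 1, 1]))  # (2, 8)
```

A real detector would compare counts within each (2k, 2k+1) value pair rather than a single global tally, but the asymmetry principle is the same.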
 I have a problem with a software development team. They created a small piece of software for me for an Android device. This software generates a gallery from my pictures. I can see all images except images from Chinese mobile phones; only the Chinese mobile phones' images fail to generate a preview.
Can you please help us and analyze the data from one image, because the developers can't understand the difference between images from, e.g., LG and Meizu mobile phones?

Best regards

(JPEGsnoop output log trimmed)
 Hi Rafael -- if you can email me an example image from LG and another from Meizu, I can take a look. Thanks, Cal
 I like JPEGsnoop and found it useful in imaging software development.

A note on MJPEG: some webcams that support raw MJPEG frames reduce data overhead by removing the Huffman tables from the image frame, which makes the images unviewable by most software (Firefox seems OK, although it also supports server-push JPEG streams). I recently found a standard Huffman table set, took streamed data from an ancient DVR, and combined the two to make a complete JPEG image.

The initial JPEG only contained the following markers...

I added an APP0 and the 4 DHT tables after SOI to make
a stand-alone viewable image file.

I think it would be good to include default Huffman tables as a fallback in JPEGsnoop for rendering previews when they are not included in the file.

Again great work and best regards.
 Thanks very much for the info... The Tools -> Export JPEG... feature includes a check-box, Insert MJPEG DHT (for AVI Frame export), which might perform a similar function, but it sounds like I could be missing this during the MJPEG preview stage. I'm interested to know whether using the Export function (with the DHT option) on one of your MJPEG frames creates a viewable image. Better yet, if you could send me a link to an example MJPEG that has dropped the DHT, I'll ensure it works correctly in the next version of JPEGsnoop.
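The repair described in this exchange amounts to splicing a complete DHT marker segment in right after the SOI marker. A hedged sketch; the actual `dht_segment` bytes (e.g. the well-known standard MJPEG Huffman tables) are not reproduced here:

```python
def insert_dht(jpeg: bytes, dht_segment: bytes) -> bytes:
    """Insert a DHT marker segment (0xFFC4 ...) immediately after SOI."""
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG: missing SOI marker")
    return jpeg[:2] + dht_segment + jpeg[2:]

# Demonstration with stub segments (not real table data):
fixed = insert_dht(b"\xff\xd8\xff\xdb", b"\xff\xc4\x00\x02")
print(fixed.hex())  # ffd8ffc40002ffdb
```

Decoders read DQT/DHT segments before the scan, so inserting the tables anywhere between SOI and SOS works; right after SOI is simply the easiest splice point.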

 Hi Calvin,

I've been trying to repair some corrupted JPEG files, as per your guide posted on luvbug comments dated 2008-04-18 on

I use WinHex as the hex editor.

Using a good JPEG, I read the data from offset 000000 to 00001602. On the bad JPEG, I deleted the hex data from 000000 to 00000334 (thus reducing the size of the bad file). Then I pasted the hex code from the GOOD file at the start of the BAD file (offset 000000) and saved it under a different filename.

What was produced was the thumbnail from the good picture, but still with the broken/corrupted picture when viewing in normal mode.

What did I do wrong?

The pictures are both from the same camera.

Is there anywhere I can upload the GOOD & BAD pictures so that you can take a look?

 Hi Francis - The JPEG recovery method that you are referring to only fixes one type of corruption: JPEG header corruption. Unfortunately, most images have corruption in the main image data and not just the header. Therefore, it is very likely that your particular damaged image suffers from corruption in the main datastream, which requires more advanced/custom methods to recover (if it can be recovered at all). You are observing the thumbnail from the good picture because it was included in the portion of the header that was copied over.
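One reason fixed byte-count header transplants often fail is that header length varies from file to file; the natural cut point is the SOS marker, which can be located by walking the marker segments. A simplified sketch (it ignores standalone markers, which do not normally appear before SOS):

```python
import struct

def find_sos_offset(jpeg: bytes) -> int:
    """Walk marker segments from SOI and return the byte offset of the
    SOS (0xFFDA) marker, after which the compressed scan data begins."""
    assert jpeg[:2] == b"\xff\xd8", "missing SOI"
    pos = 2
    while pos + 2 <= len(jpeg):
        if jpeg[pos] != 0xFF:
            raise ValueError("lost marker sync at offset %d" % pos)
        marker = jpeg[pos + 1]
        if marker == 0xDA:                 # SOS found
            return pos
        seg_len = struct.unpack(">H", jpeg[pos + 2:pos + 4])[0]
        pos += 2 + seg_len                 # 2 marker bytes + segment
    raise ValueError("no SOS marker found")

# Tiny synthetic file: SOI, a 4-byte APP0 stub, then SOS.
demo = b"\xff\xd8" + b"\xff\xe0\x00\x04\x00\x00" + b"\xff\xda"
print(find_sos_offset(demo))  # 8
```

Copying the good file's bytes up to (or through) this offset transplants the entire header, regardless of how long each file's header happens to be.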
2016-05-19 Charlie Davis
 The reported Compression Ratio for JPEG files written with LR and Elements both show unbelievable values, like 5564.18:1. Is this a known bug, or perhaps something I'm doing wrong? I can send a file if necessary.
 It does sound like there could be a bug somewhere. Send me an example file and I will take a look!
UPDATE: You are right -- there was indeed a bug in the compression ratio calculation. This affected images (like yours) that contained restart markers. I have now fixed the code, which will appear in upcoming release 1.7.6. Thanks!
 I'm working on an app to talk directly to an Epson projector. I've got all the control working but am struggling with the image data. I'd like to send you a few data files I've captured from packet capturing and have you look at it and help me decipher it. I have some very specific questions as I understand about 95% of the file. Specifically 2 bytes in a special header just before the jfif data.
 Sure, please send me some example files and I'll see if I can decode it!
2016-04-04 Paul Wilmore
 Hi, great program!
When I use JPEGsnoop on a picture straight off the SD card, it says to add the camera details to the database, saying there is no compression signature match for this camera.
I add the camera to the database and reprocess that same photo, and it still says there is no matching compression signature..??
 Thanks Paul -- you're right... You have identified a bug that affects the way the DB Submit function works. I plan to fix this in the next release of JPEGsnoop. Thanks again for taking the time to bring this to my attention!
2016-03-09 Doug Kerr
 Hi, Calvin,

I use JPEGsnoop (currently 1.7.5) to look into the Exif metadata in camera files.

I have recently begun the use of a program called Silkypix Developer Studio to do my routine processing of JPG files from various cameras. (It is nominally a raw converter, but does a nice job in processing JPG files from the camera)

I discovered that in the "developed" file this program writes, the MakerNote area does not carry the MakerNote data from the source camera file. Rather, I understand it is used to hold information related to the processing of the file by SilkyPix. This in fact causes some (small) problems when I am adding IPTC metadata with Exiftool via Exiftool GUI. That program in some cases reports the MakerNote directory to be bad.

JPEGsnoop reports for any file generated by Silkypix Developer Studio that the size of the MakerNote directory is 0x5349. It then indicates that the MakerNote area contains "excessive # components", and then states the number of components as 1,330,532,933. It then says that it is "Limiting to first 4000".

Certainly the size of the MakerNote directory as 0x5349 is "startling", but it is hard for me to believe that the directory indicates that there are 1,330,532,933 components. Is this possible? For one thing, I do not think that the count field of the IFD has that capacity.

And what does it mean that JPEGsnoop is "Limiting to first 4000"? Does that mean that it will attempt to "decode" the first 4000 components of the MakerNote? (Clearly in this case it cannot, so no MakerNote component values are reported.)

Thanks for any help you can give me here.

Best regards,

Doug Kerr
 Hi Doug!

Great question. Without seeing the actual files, it appears that you may be observing the result of some "corruption" of the MakerNotes due to processing by the Studio software. This is a common problem when edits are made to files that contain proprietary MakerNotes. Two scenarios often occur:

1) The editor adds in extra EXIF metadata and then doesn't adjust any of the offsets within the MakerNotes segment. If the offsets within the MakerNotes segment are absolute addresses, then any shift applied to the MakerNotes requires all of the MakerNotes pointers to be updated as well. Unfortunately this generally requires fully decoding all of the tags within the MakerNotes segment, which is difficult since it is vendor-specific.

2) The editor makes changes to the EXIF metadata and in doing so flips the endianness of the EXIF (eg. from little endian to big endian). The problem is that the editor is highly unlikely to parse all of the MakerNotes and change its endianness as well. So, we will be left with a MakerNotes segment that was actually encoded with a different endianness than the EXIF header at the start of the file! JPEGsnoop will attempt to decode the MakerNote IFD directory with the opposite endianness which will lead to a massive difference in the directory entry count (this is why you might see millions of entries reported, followed by the warning that the report will be limited). In reality, the interpretation will already be corrupted so very few values will actually be shown since random data is unlikely to match known MakerNote tags. I put corruption in quotation marks earlier since one could argue that the editor tool didn't corrupt anything, but it certainly adds to ambiguity in the resulting file.

Some advanced EXIF software such as the excellent exiftool by Phil Harvey have worked around these corruption scenarios by implementing clever heuristics to guess at what may have happened and attempt to fix the offsets. JPEGsnoop doesn't attempt such manipulations in the current version.
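The effect in scenario 2 is easy to reproduce: the same bytes read with the wrong byte order produce a wildly inflated count. A small illustration (a generic multi-byte field, not Silkypix's actual data):

```python
import struct

# A 16-bit IFD entry count of 17, stored little-endian...
raw = struct.pack("<H", 17)           # bytes 0x11 0x00
# ...but decoded big-endian becomes nonsense:
wrong = struct.unpack(">H", raw)[0]
print(wrong)  # 4352
```

The same inflation happens for 4-byte counts and offsets, which is why a mis-read directory can appear to hold over a billion components.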

Hope that helps!

PS> If you are the same Doug Kerr who authored The Pumpkin, I want to thank you for sharing a wealth of information on digital photography -- such great technical detail is hard to find anywhere else!
2015-07-22 Help =(
 My photographer had one memory card from our wedding fail on her. They are NEF files; she has given me the files, and unfortunately disposed of the bad memory card. Do you have software that can repair these NEF files without the card itself? The files are raw NEF but on a USB drive.

 Hi --

First, I am really sorry to hear about the loss of your photos from the wedding -- that would be so devastating.

Unfortunately, I don't have experience in analyzing the NEF file format so I can't help specifically with these files. There are a few tools advertised online for recovering NEF files, but regrettably I think you may find that the files you were supplied might be insufficient for a full recovery. Having the original memory card is vital in many cases (as often the image data is there, but split into fragments across the card). When copying these corrupted files off the card, many of the fragments are left behind and not retained.

That said, I do hope that you still managed to get a number of good shots from the photographer that were not damaged.
2015-06-04 Philip Christian
 Hello Calvin
There appears to be a problem with BlackBerry 10 devices: they are corrupting their own camera shots. The JPEG files will display on the device as a thumbnail, but if you try to open them they won't open. No program that I can find will open them either. Looking at the files in a hex editor, they have a weird format: four out of every sixteen bytes are FF FF FF FF in a block together. Could I send you a JPEG to have a look at?
thanks, Philip
 Hi Philip -- If you could send me a sample picture to examine, I might be able to determine what has happened.
2015-04-20 Supot Sawangpiriyakij
 Hi! I was surprised by the raw information about JPEG on this site.
It's great for helping me create Python code to decode the JPEG format to a tkinter canvas.

Thanks a lot, and God bless you.
 Hello, I run a business and have been using Adobe Lightroom 5.7 bridged with CS6. I had a hard drive fail about 6 weeks ago, and since then I have replaced everything possible. I have two 3 TB external drives and two 1 TB hard drives (2 of them are in my machine), with 160,000 photos in Lightroom. My CR2 raw files are doing just what your pictures show: turning pink, orange, red, bottom only, right side. Has anyone anywhere seen this? What do I do, or buy, to fix the issue? ~Brian~ Can someone please help me? HP and Adobe have done all they can. I'm running Lightroom 5.7 with Adobe Camera Raw 8.1 and a new AMD digital processor, updated 6 weeks ago, which came out in November. I don't know how to fix this and have never seen anything like it. What is it? What causes it? Please help.
 So sorry to hear that, Brian... it sounds like the drive directory may have become corrupted. Are you running RAID across the drives? Do you find corrupt images on only one drive or multiple drives? Your best bet will be to search for an "image file carving" recovery utility to see how many images you can get back. Unfortunately it is going to be a slow process, and most recovery programs don't handle "file fragments" very well.
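For what it's worth, the first step of most file-carving tools is just a signature scan of the raw device image for JPEG start-of-image markers. A toy sketch (real carvers must also reassemble fragmented files, which is the hard part noted above):

```python
def carve_jpeg_offsets(blob: bytes):
    """Return offsets of candidate JPEG starts in a raw image by
    scanning for the SOI marker (0xFFD8) followed by 0xFF (an APPn
    or other marker byte)."""
    sig = b"\xff\xd8\xff"
    offsets, pos = [], blob.find(sig)
    while pos != -1:
        offsets.append(pos)
        pos = blob.find(sig, pos + 1)
    return offsets

# Toy "disk image" with two embedded JPEG headers:
image = b"junk" + b"\xff\xd8\xff\xe0" + b"x" * 10 + b"\xff\xd8\xff\xe1"
print(carve_jpeg_offsets(image))  # [4, 18]
```

Each candidate offset then has to be validated by parsing markers forward until an EOI (or a parse failure), which is where carving utilities differ in quality.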
 Great site
2014-06-12 Renato Rodrigoson

I was wondering if you could help me with a DVD-R issue.

Here is the thing: Each DVD-R is supposedly 4.7GB, so I was going to backup all my home videos from the DVD-R's to a 1TB external hard drive.

But when I dragged the files from the first DVD-R onto the external hard drive, it said the files totalled 12.8 GB. How can this be if the DVD-R is only supposed to hold 4.7 GB? That would be too much space, considering I have about 70 DVD-Rs to back up.

I'm going to put my purpose and questions in order so you can answer me more easily:

Purpose: I want to back up videos (with their audio, obviously) from DVD-Rs onto a hard drive, so that I can later burn them onto other DVD-Rs (in the event that the original DVD-Rs suffer any damage):

1. Do I need the "VIDEO_RM" folders?

2. Inside the "VIDEO_TS" folder there are BUP, IFO, and VOB files (they seem to be the same videos in different formats). Do I need to copy all these formats (all these files), keeping in mind that I may need to make new DVD-Rs later from what I back up onto my external hard drive?

Thanks for any help you can give me.

Have a nice day.
 Greetings -

I'm working on embedded lossless rotate code, and thought I had all the pieces right, but don't. I'm able to: 1) decode to the DCT blocks properly, 2) encode those blocks again properly, and 3) transpose and reflect the blocks.
When I semi-decode, then re-encode with no transformations, I get the original image back fine. But after transposing/reflecting each block in the MCU and spitting out the MCUs in the new order, I get a junk image. Using both jpegtran and the IrfanView plugin to rotate, then JPEGsnoop to look at the MCU blocks, there seems to be more going on than a simple rotate. There are two pieces I'm unsure about: do I need to de-quantize and re-quantize? (I am rotating the quant tables.) Second, do I need to do the block-to-block DC accumulation in the lossless rotate? Thanks for your help!
 You shouldn't need to dequantize/re-quantize, as that could undermine the "lossless" qualification. I haven't looked at this for a long time, but yes, I think you might have to re-do the DC accumulation, as the sequencing of the MCUs will now be different. You should see this if you compare the first few blocks in the JPEGsnoop MCU decode view. Does the rest of the MCU matrix look correct otherwise? Good luck!
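The DC re-accumulation comes from the fact that each block's DC value is stored as a difference from the previous block of the same component, so reordering blocks means rebuilding the chain. A simplified sketch (a real rotate remaps blocks in 2-D rather than reversing a list):

```python
def undiff_dc(diffs):
    """Convert stored DC differences to absolute DC values."""
    out, prev = [], 0
    for d in diffs:
        prev += d
        out.append(prev)
    return out

def rediff_dc(absolutes):
    """Convert absolute DC values back to stored differences."""
    out, prev = [], 0
    for a in absolutes:
        out.append(a - prev)
        prev = a
    return out

# Rotating reorders the blocks, so the differential chain must be rebuilt:
stored = [50, 3, -2, 7]               # DC diffs in original scan order
absolute = undiff_dc(stored)          # [50, 53, 51, 58]
rotated = list(reversed(absolute))    # toy reordering for illustration
print(rediff_dc(rotated))             # [58, -7, 2, -3]
```

Emitting the old differences in the new order is exactly the bug that produces a plausible first block followed by junk.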
2014-04-17 Zoe W
 Dear Hass,
have you encountered cameras with changeable quantization tables, rather than several fixed ones?
Recently, I tested a database taken with an Agfa 505x, and I found 55 different pairs of quantization tables! I am confused by this situation. Are there some unwritten rules?
 Hi Zoe! Yes, I have encountered cameras that produce a wide range of quantization tables. Have a look at my article on variable quantization tables.
My question is about signatures. I analyzed 34 images in an EXIF reader, where the model of the camera is shown as Canon PowerShot G12. Then I analyzed all these images in JPEGsnoop with this result:
*** Searching Compression Signatures ***

Signature: 01F0A31D3842CFD4B7E09178F141E14B
Signature (Rotated): 016A9F39EDF7E9DCAF8BE822C2266077
File Offset: 0 bytes
Chroma subsampling: 2x2
EXIF Make/Model: OK [Canon] [Canon PowerShot G12]
EXIF Makernotes: OK
EXIF Software: NONE

Searching Compression Signatures: (3347 built-in, 0 user(*) )

EXIF.Make / Software EXIF.Model Quality Subsamp Match?
------------------------- ----------------------------------- ---------------- --------------
SW :[IJG Library ] [072 ]
... snip ... ASSESSMENT: Class 4 - Uncertain if processed or original

Then I took images myself with a Canon PowerShot G12, and after analysis in JPEGsnoop it gave me a different signature with the result:
ASSESSMENT: Class 3 - Image has high probability of being original.

What could cause the differences between these signatures? If I suppose that the questionable images were edited in a graphic editor, why is there no signature of this graphic editor in the result of the analysis? Is it possible that the images could have been edited in a graphic editor which has no signature in the JPEGsnoop database, for example on an iPod?

Thanks for the answer.
 Yes, it is very possible that the first images were edited in a program that wasn't included in the sample signature set. Note that "IJG Library" is used by most/many image editors available today.
My question is about signatures. I analyzed 34 images in an EXIF reader, where the model of the camera is shown as Canon PowerShot G12. Then I analyzed all these images in JPEGsnoop with this result:
EXIF Make/Model: OK [Canon] [Canon PowerShot G12],
ASSESSMENT: Class 4 - Uncertain if processed or original
While the EXIF fields indicate original, no compression signatures
in the current database were found matching this make/model

Then I took images myself with a Canon PowerShot G12, and after analysis in JPEGsnoop it gave me a different signature with the result:
ASSESSMENT: Class 3 - Image has high probability of being original.

What could cause the differences between these signatures? If I suppose that the questionable images were edited in a graphic editor, why is there no signature of this graphic editor in the result of the analysis? Is it possible that the images could have been edited in a graphic editor which has no signature in the JPEGsnoop database, for example on an iPod?

Thanks for the answer.
 The same digital camera can use multiple compression signatures (eg. depending on quality level, etc.). What probably happened is that the first image had a signature that wasn't already in the database, whereas the second one's was. Or, the first one may have been edited in an image editor (one that JPEGsnoop wasn't aware of at that time).
2013-05-21 dave jordan
 Can you explain the meaning behind the terms AC coefficient and DC coefficient?

I know what they are, but what do the terms stand for?

I suspect the answer is lost to history, but ...

Is there some analogy to electricity, or what?
 That's right. Apparently, the use of DC & AC originated in the use of the DCT for analyzing electrical currents.
 I really enjoy the software, thanks, but I have a question. Is there a way to verify whether video captured via cellphone, saved as a .3gp file, has been edited? If so, can you please point me in the direction of such an application? Again, thank you very much.
 Hi Dorian -- it is generally much harder to edit a video without having some very obvious visual "inconsistencies" show up. That said, it may be possible to perform some similar compression or noise analysis on the video stream, but I'm not aware of any utilities that do this. Best of luck with the search!
 Been searching for a while now for something like this. Great program!
Since you have a clear understanding of the JPG format, I was wondering if you would be interested in a project to write code in C that would convert a .bmp file to .jpg. Of course we would pay for it. Let me know.
 Hi Jim -- I have something even better for you :) The IJG has published source code to their JPEG encoder library. There, you will see source code for the "cjpeg" executable -- it is a pure C implementation that converts a .bmp to a .jpg file. You'll probably want to strip out a lot of the extras, but it should all be there. More importantly, this source code is probably some of the most robust JPEG encoder code you'll find anywhere.
 Hi Calvin,

thanks for your software. I was telling a friend to use it, and we would like to exchange our camera databases, but I cannot find a way to do this in the current version.
Is there a way to export/import my DB to my friend?

 Sure... the database file is called "JPEGsnoop_db.dat" and it is located in a directory specified in the Options -> Configuration menu item dialog. You should be able to update the "Directory for User Database" to point to the directory containing your friend's database, and work from there.

I found your webpage while having trouble with JPEG files that we cannot open after a data recovery procedure. There is only a white field with a red cross. With JPEGsnoop it says "File did not start with JPEG marker. Consider using [Tools->Img Search Fwd] to locate embedded JPEG". After doing this it says "No SOI Marker found". Is there a way to recover the lost information in these JPEG files?


 Unfortunately, given the steps you have tried, it is unlikely that you can recover these files without detailed recovery analysis, if at all. Normally, the Image Search Forward will at least detect the embedded JPEG thumbnail within the original JPEG file, in cases where only the header was damaged.
 If apps are counting colors in different color spaces, that would certainly explain some of it. But how can it explain discrepancies between orig and rot180 in a single app? My understanding is that there need be no color conversion or any f.p. math in a lossless rotation.
2012-05-05 dave jordan
 I have noticed anomalous color counts between different apps and between original JPEGs and losslessly rotated ones (the dimensions were multiples of 8). For instance:

orig has X colors per Irfan (uses IJG) and per ImageMagick (also IJG?)

rot180 of orig has Y colors per both apps, delta ~ 100 colors

orig and rot180 of orig both have Z colors per Gimp (Colors -> Info -> Colorcube Analysis). This is what you want to see with lossless! But is it true?

rot180 again and both Irfan and I.M. report X again

the orig test file is subsampled 2x1,1x1,1x1. hmm, suspicion...

 Interesting... without seeing the files, it's pretty hard to say. Nonetheless, I suspect that the likely cause is differences in the way that the programs report "number of colors" and/or some degree of roundoff error in the color conversion process (YCC to RGB). If they reported number of "YCC" colors, then one could eliminate the above as causes of the difference.
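The roundoff effect mentioned in that answer is easy to demonstrate: an RGB -> YCbCr -> RGB round trip with the standard JFIF equations and integer rounding does not always return the starting color, so RGB-based unique-color counts can shift even when the underlying YCC data is untouched. A sketch:

```python
def rgb_to_ycbcr(r, g, b):
    """JFIF RGB -> YCbCr with integer rounding."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return tuple(round(v) for v in (y, cb, cr))

def ycbcr_to_rgb(y, cb, cr):
    """JFIF YCbCr -> RGB with rounding and clamping to 0..255."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return tuple(max(0, min(255, round(v))) for v in (r, g, b))

# The round trip lands on a different RGB triple:
print(ycbcr_to_rgb(*rgb_to_ycbcr(1, 2, 3)))  # (1, 2, 4)
```

Counting unique colors in the YCC domain, as the answer suggests, sidesteps this conversion entirely.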
 I'm also interested in looking at the 1.53 JPEGSnoop beta if I can trouble you for the download link. I've got a Motion JPEG file that seems to be getting "re-interpreted" when displayed via some video playback utilities (or their codecs) so I'm hoping to dump the raw image sequence to compare. :)
 For sure... sent you an email. Hope it helps!

I am writing a JPEG decoder, and I can only decode my test image partially. It seems that before the second MCU's chroma there are some bits in the scan stream that I don't know what they are. The first MCU's luma and chroma, and the second MCU's luma, are decoded without error.

Currently I am aware of:
1) Huffman-coded run/size bytes
2) size (length) coded coefficients
3) stuffed bytes and restart markers

Is there anything else in the Huffman-coded scan stream?

Please help.
Many thanks.
 The only other things that you are likely to find in the scan stream are: 1) Restart markers (look for RSTn in the ITU-T spec) or 2) custom bits from proprietary encoders (some webcams do this).
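The byte-stuffing and restart-marker handling from that list can be sketched as a pre-pass over the entropy-coded data (a simplified illustration; real decoders usually consume these inline while reading bits):

```python
def unstuff(scan: bytes) -> bytes:
    """Strip byte stuffing (0xFF 0x00) and restart markers (0xFFD0-D7)
    from JPEG entropy-coded data, leaving only the Huffman bit stream."""
    out = bytearray()
    i = 0
    while i < len(scan):
        b = scan[i]
        out.append(b)
        if b == 0xFF and i + 1 < len(scan):
            nxt = scan[i + 1]
            if nxt == 0x00:
                i += 1            # skip the stuffed zero byte
            elif 0xD0 <= nxt <= 0xD7:
                out.pop()         # restart marker carries no data bits
                i += 1
        i += 1
    return bytes(out)

print(unstuff(b"\x12\xff\x00\x34\xff\xd0\x56").hex())  # 12ff3456
```

Remember that a restart marker also resets the DC predictors and realigns the decoder to a byte boundary, which this pre-pass alone does not capture.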
 Hi Calvin, I love using JPEGsnoop to sort through embedded images, but I have over a million files to sort through after a catastrophic system crash. Could you send me the link the beta of 1.5.3?
 Sure. Private email sent to you.
 I like the search forward and back functions in JPEGsnoop; it's perfect for extracting screenshots from MJPEG video.

I actually have two questions.
First off, I've been using a combination of IrfanView and Jpegcrop to losslessly optimize the images on my new laptop. (I've been doing it from the start, so it's not such an overwhelming process.) I've hit a snag, though, with some CMYK JPEGs. IrfanView only optimizes the Huffman tables; it won't convert to progressive coding (unless you re-save the image completely). Jpegcrop will perform a progressive conversion, but it only accepts RGB colorspace JPEGs. Here's the JPEGsnoop output for the JPG transform plugin IrfanView uses:
*** Searching Executable for DQT ***
Filename: [Jpg_transform.dll]
Size: [72192]
Searching for DQT Luminance tables:
DQT Ordering: pre-zigzag
Matching [JPEG Standard]
Searching patterns with 1-byte DQT entries
Searching patterns with 2-byte DQT entries
Searching patterns with 2-byte DQT entries, endian byteswap
Searching patterns with 4-byte DQT entries
Searching patterns with 4-byte DQT entries, endian byteswap
DQT Ordering: post-zigzag
Matching [JPEG Standard]
Searching patterns with 1-byte DQT entries
Searching patterns with 2-byte DQT entries
Searching patterns with 2-byte DQT entries, endian byteswap
Searching patterns with 4-byte DQT entries
Searching patterns with 4-byte DQT entries, endian byteswap
Done Search
Is there some way to tweak the plugin to apply a progressive conversion as well? Or do you know of a tool that will do such a thing with CMYK images?

My second one is more of a hypothetical, but bear with me.
Conventionally, the only truly "8-bit" JPEGs are simply grayscale images where the CbCr data has been discarded. I've been wondering, though, whether it would be possible to create 256-color "sepia-tone" JPEGs by performing a grayscale conversion and then inserting flat, mono-color CbCr data?
 Interesting problem... For the first item, I'm not aware of a tool that will perform the progressive conversion on CMYK JPEGs. CMYK is often not supported very well by many utilities. As for the second question, yes, I believe one could mimic sepia-tone output by doing grayscale conversion (retaining Y intact, not blending the channels as most photographers prefer) and then adding in the two dummy channels with a DC offset at the first MCU. Note that most tools that output grayscale will leave you with a single component, so you'd have to add back in the CrCb components first and then insert the "EOB" huffman codes for each of the CrCb MCUs, which may be a challenge. Easiest method might be to use a tool that converts RGB -> grayscale, then reconvert it back with grayscale -> RGB. At that point one would have to modify the DC offset for the first MCU for CrCb, but that may require shifting the entire file by several bits, depending on the DC offset you're looking for.
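The flat-chroma idea can be previewed numerically: pairing every luma value with one fixed Cb/Cr pair yields a single hue whose lightness tracks Y. A sketch using the standard JFIF conversion; the chroma constants below are illustrative guesses, not derived from any standard sepia tone:

```python
def sepia_pixel(y, cb=114, cr=144):
    """Map a luma value to an RGB tone by pairing it with fixed
    chroma values (cb/cr defaults here are illustrative only)."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return tuple(max(0, min(255, round(v))) for v in (r, g, b))

# Every gray level maps to the same hue; only lightness varies --
# the 256-color "sepia-tone" effect described above:
print(sepia_pixel(128))  # (150, 121, 103)
```

In the compressed domain this corresponds to the DC-offset-per-chroma-channel approach in the reply: constant Cb/Cr planes cost only one DC value plus EOB codes per chroma block.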
 Hello cal

jpegsnoop - great tool

One aspect of JPEGs I am not entirely sure about is how the quantization tables are defined within a JPEG file.
For example, if I have an image with the following Q tables:
*** Marker: DQT (xFFDB) ***
  Define a Quantization Table.
  OFFSET: 0x00000C0E
  Table length = 67
  Precision=8 bits
  Destination ID=0 (Luminance)
    DQT, Row #0:   5   3   3   5   7  12  15  18
    DQT, Row #1:   4   4   4   6   8  17  18  17
    DQT, Row #2:   4   4   5   7  12  17  21  17
    DQT, Row #3:   4   5   7   9  15  26  24  19
    DQT, Row #4:   5   7  11  17  20  33  31  23
    DQT, Row #5:   7  11  17  19  24  31  34  28
    DQT, Row #6:  15  19  23  26  31  36  36  30
    DQT, Row #7:  22  28  29  29  34  30  31  30
    Approx quality factor = 84.93 (scaling=30.13 variance=1.05)
*** Marker: DQT (xFFDB) ***
  Define a Quantization Table.
  OFFSET: 0x00000C53
  Table length = 67
  Precision=8 bits
  Destination ID=1 (Chrominance)
    DQT, Row #0:   5   5   7  14  30  30  30  30
    DQT, Row #1:   5   6   8  20  30  30  30  30
    DQT, Row #2:   7   8  17  30  30  30  30  30
    DQT, Row #3:  14  20  30  30  30  30  30  30
    DQT, Row #4:  30  30  30  30  30  30  30  30
    DQT, Row #5:  30  30  30  30  30  30  30  30
    DQT, Row #6:  30  30  30  30  30  30  30  30
    DQT, Row #7:  30  30  30  30  30  30  30  30
    Approx quality factor = 84.93 (scaling=30.15 variance=0.29)
And the hex value after the DQT marker at 0xDB is 58, with the next 4 bytes consisting of the hex values 59, 5a, 00 and 00.

How exactly do these hex values break down to define the above tables?

Any help would be greatly appreciated, as I cannot find any good examples of this after hours of searching.
 Starting with 0xFFDB (DQT marker), you'll find the following:
  • 2B: Section length
  • Table 0:
    • 1B: Destination ID
    • 64B: Quantization Matrix (in zig-zag order)
    • ...
I wonder if it is possible that you may have been decoding a different portion of your file as I would have expected you to see the following (given the tables above):
0xFF DB 00 43 00 05 05 05 07 06 07 0E ...
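For illustration, the marker layout described above can be walked with a short sketch like this (hypothetical code, not JPEGsnoop's implementation; 16-bit-precision tables are skipped over correctly but returned as raw bytes):

```python
def parse_dqt_segments(data):
    """Yield (table_id, values) for each DQT table found.

    For 8-bit tables, `values` is the 64-entry quantization table in
    zig-zag order, exactly as stored in the file.
    """
    tables = []
    i = 0
    while i < len(data) - 1:
        if data[i] == 0xFF and data[i + 1] == 0xDB:        # DQT marker
            length = (data[i + 2] << 8) | data[i + 3]      # 2-byte section length
            pos = i + 4
            end = i + 2 + length
            while pos < end:                               # may hold several tables
                pq_tq = data[pos]                          # precision (high nibble) / dest ID (low)
                precision = pq_tq >> 4                     # 0 = 8-bit entries, 1 = 16-bit
                table_id = pq_tq & 0x0F
                n = 64 * (2 if precision else 1)
                tables.append((table_id, list(data[pos + 1:pos + 1 + n])))
                pos += 1 + n
            i = end
        else:
            i += 1
    return tables

# The expected bytes from the answer above: FF DB 00 43 00 05 05 05 07 06 07 0E ...
segment = bytes([0xFF, 0xDB, 0x00, 0x43, 0x00] + [5, 5, 5, 7, 6, 7, 14] + [0] * 57)
tables = parse_dqt_segments(segment)
print(tables[0][0], tables[0][1][:7])   # table ID 0, first zig-zag entries
```

Note that the section length (0x43 = 67) covers the two length bytes, the one ID byte and the 64 table entries, which is why a valid 8-bit DQT section is always 67 bytes long.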

I'm doing some research on quantization tables and how they change depending on which device takes the image.

Having read a few papers, I stumbled across this presentation.

On slide 24, it is mentioned that there are 99 known standard Q tables.

Is this true? Was there an initial set of Q tables created by JPEG, while nowadays software generates its own?

If so, is there a list of these tables somewhere?
 It may be a bit misleading to state that there are 99 standard Q tables... Annex K of the ITU-T standard defines a single example set of luminance and chrominance tables that were based on psychovisual thresholding and derived empirically. The IJG quality formula provides an easy means of generating new quantization matrices based on these "example" tables using a single scaling factor (1-100).

But software JPEG encoders, cell phones and digital cameras are and were always free to select their own quantization tables for encoding the JPEG bitstream. Devising one's own quantization table that gives the best tradeoff in optimizing file size versus human perception quality is probably quite a challenge, and is likely the reason that some encoders stuck with the Annex tables in combination with the scaling factor formula (which provides more control over final quality). More recent digicams have attempted to generate / select an appropriate quantization matrix on the basis of image content or file size (see my page on variable quantization). That methodology presents new challenges to signature-based encoder deduction techniques.
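To make the scaling concrete, here is a small sketch of the IJG quality formula applied to the Annex K luminance table (constants from ITU-T T.81 and libjpeg's `jpeg_quality_scaling`; treat it as an illustration, not a drop-in encoder component):

```python
# Annex K (ITU-T T.81) example luminance quantization table, row order.
ANNEX_K_LUMA = [
    16, 11, 10, 16, 24, 40, 51, 61,
    12, 12, 14, 19, 26, 58, 60, 55,
    14, 13, 16, 24, 40, 57, 69, 56,
    14, 17, 22, 29, 51, 87, 80, 62,
    18, 22, 37, 56, 68, 109, 103, 77,
    24, 35, 55, 64, 81, 104, 113, 92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103, 99,
]

def ijg_scale_table(base, quality):
    """Scale a base table by the IJG quality factor (1-100), as in libjpeg."""
    quality = max(1, min(100, quality))
    # Below 50 the scale grows (coarser tables); above 50 it shrinks
    # toward 0, so at quality 100 every entry clamps to 1.
    scale = 5000 // quality if quality < 50 else 200 - quality * 2
    return [max(1, min(255, (q * scale + 50) // 100)) for q in base]

print(ijg_scale_table(ANNEX_K_LUMA, 50)[:8])   # scale=100 -> unchanged base row
print(ijg_scale_table(ANNEX_K_LUMA, 100)[:8])  # scale=0   -> all ones
```

This is also why tools like JPEGsnoop can estimate an "approx quality factor": they invert this scaling against the Annex K baseline.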
 Hi, I need to discover if a TIFF image was previously saved as JPEG.
I tested a DNG file saved as TIFF, reopened, saved as JPEG at quality 11, closed, reopened and saved as TIFF again.
JPEGsnoop reports "No SOI marker found in fwd search".
Can I do this?
 To detect that a TIFF image was previously saved as JPEG, it may be best to perform an analysis of JPEG blocking compression artifacts (there are some free utilities on the web that can do this). This will look for discontinuities around the block boundaries (usually 8x8 or 16x8 pixels in size). Unless the TIFF file has embedded a JPEG file within it, you probably won't be able to use JPEGsnoop to locate anything within it.
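As an illustration of the idea (a toy metric, not any specific utility's algorithm), one can compare luminance steps that fall on block boundaries against steps inside blocks:

```python
def blockiness(pixels, width, height, block=8):
    """Crude JPEG blocking score: mean absolute luminance step across
    block boundaries minus the mean step within blocks.
    `pixels` is a flat row-major list of grayscale values.
    A score well above 0 suggests prior JPEG compression."""
    def px(x, y):
        return pixels[y * width + x]
    boundary, interior = [], []
    for y in range(height):
        for x in range(1, width):
            step = abs(px(x, y) - px(x - 1, y))
            (boundary if x % block == 0 else interior).append(step)
    return sum(boundary) / len(boundary) - sum(interior) / len(interior)

# Synthetic 16x8 strip: flat value 100 in the left block, 120 in the right,
# producing a discontinuity exactly on the 8-pixel boundary.
img = [(100 if x < 8 else 120) for y in range(8) for x in range(16)]
print(blockiness(img, 16, 8))   # boundary step 20, interior steps 0 -> 20.0
```

A real detector would also check vertical boundaries and normalize against image content, but the principle is the same.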
 I have been trying to learn how to decode the header information in a JPEG by reading the raw hex and have only met with partial success. It seems like from what I have read that it should be possible to find the width and height in pixels of an image.

I stumbled onto your site and you seem to reference this a few times, but I could not find the information. Any help you could provide would be most appreciated.
 If you are trying to locate the image dimensions from within a hex editor, you can try searching for 0xFFC0 and then skip the next three bytes. Following that are two 16-bit values, the first for the height and the second for the width.

For example:
... 1A 1A FF C0 00 11 08 07 98 0A 20 03 ...
In the above, the height is 0x0798 (1944) and width is 0x0A20 (2592).

Have a look at JPEGsnoop as it should help you identify this and other related details.
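The byte-level walk above can be sketched as follows (a naive scan; 0xFFC0 can in principle also occur inside entropy-coded data, so a robust parser should step from marker to marker instead):

```python
def jpeg_dimensions(data):
    """Scan for the SOF0 marker (0xFFC0) and read the frame dimensions.
    After the marker come 2 length bytes and 1 precision byte, then the
    16-bit height followed by the 16-bit width, both big-endian."""
    i = data.find(b"\xFF\xC0")
    if i < 0:
        return None
    height = (data[i + 5] << 8) | data[i + 6]
    width = (data[i + 7] << 8) | data[i + 8]
    return width, height

# Bytes from the example above: ... FF C0 00 11 08 07 98 0A 20 ...
sample = bytes.fromhex("1A1AFFC000110807980A2003")
print(jpeg_dimensions(sample))   # (2592, 1944)
```

Progressive JPEGs use SOF2 (0xFFC2) instead, with the same dimension layout.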
 I'd really like to give Snoop a try, but can't figure out how to install it. Any instructions for us less tech-savvy operators?
 There's actually no installation required! Simply unzip the download, double-click on the application icon and then select an image to analyze. Or, you can drag a photo on top of the application icon to automatically load it.
 Hi Calvin,

... I have a technical question for you about JPEG compression, I've looked through the documentation and done a fair bit of Googling but couldn't find the answer.

What I was curious about was the "YCC Clipped" notes in the "Decoding SCAN Data" section - I seem to get these even with images in Photoshop saved at the highest level.

 Am I correct in guessing that it is a symptom of the quantisation? Because the de-quantised coefficients aren't exactly the same as the originals, it's possible that the inverse DCT will produce values that don't all lie between 0-255?

 Hi Rob -- The "YCC Clipped" warnings indicate that the cumulative YCC values have exceeded the normal range (eg. +/- 1024) prior to scaling. Normally, the result of the RGB color conversion step can fit within the YCC range. In example images I have created in PS (including full-gamut RGB color wheels), I don't recall seeing YCC clipping. Were these RGB or CMYK? Were they tagged with a profile?

Because JPEG quantization will round to the nearest value (not truncate), large coefficients in the quantization matrix could lead to larger excursions in the YCC range. However, I wouldn't have expected this to be possible with Photoshop set at the highest level (12, where the coefficients are set to 1, from what I recall). The only other causes that come to mind are color space conversions or corruption in the differential decoding.
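As a minimal illustration (a hypothetical helper, not part of any decoder) of why in-gamut RGB rarely produces clipped YCC on its own, here is the JFIF-style forward conversion checked at every extreme RGB corner:

```python
def rgb_to_ycbcr(r, g, b):
    """JFIF RGB -> YCbCr (ITU-R BT.601 coefficients), unclamped floats."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

# The forward conversion only barely grazes the YCC limits (pure red/blue
# reach Cr/Cb = 255.5), so substantial clipped-YCC counts seen during decode
# generally come from quantization round-off, not from this step.
worst = 0.0
for r in (0, 255):
    for g in (0, 255):
        for b in (0, 255):
            for v in rgb_to_ycbcr(r, g, b):
                worst = max(worst, v - 255, -v)
print(worst)  # 0.5
```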
 Thanks for your time; in the meanwhile I resolved many of my problems. Now I generate the Huffman tables at runtime (before, I used hardcoded Huffman tables) and I think my problem was just the AC Huffman table. Now I can create my own JPEG files (only grayscale, of course). The only "strange" thing that happens is that I have to shift all pixel values by 127. Basically my input pixels take values from 0 to 255, but to ensure that the file is decoded correctly I have to shift everything so that the values run from -127 to 127, and I don't understand how that's possible... but it works. Anyway, congratulations on the very useful software JPEGsnoop.
 Glad you figured it out Andrea!!
 I'm a student and I'm writing my own software to encode images from a little camera. I have only grayscale images and I need personalized software to encode them. I found JPEGsnoop very useful for testing my software's output. I think my software now writes correct JPEG files, but I can't open them with software like GIMP, and while JPEGsnoop extracts correct Huffman tables it gives me an error @ 0x00000184.2 -- but my file is only 386 (=0x182) bytes, so byte #184 does not exist. Could I send you my file to get a suggestion on what's wrong with it?
Thanks in advance
 Hi Andrea -- sure... if you post another private msg with your email, I'll see what I can do once I get a bit of spare time.
 I had a picture CD made on vacation in New Zealand. When I put the CD in my laptop at home some of the picture files do not show, I just get an error message saying "Can not read from the source file or disk." My guess is that these pictures I can't see now, are ones that I previously downloaded off of the camera (onto a computer on vacation) just to look at some, before I made CDs for all the pics. Could they have deleted from the memory card (by looking at them/saving them to a computer) OR did they get corrupted? I'm trying to decided if there is a tool to fix whatever is the problem and recover them.
 Hi Melinda. Sorry to hear that your images no longer open from the picture CD. I'm not totally clear from your question about what must have happened, but it sounds like the images may have been fine before writing them to the CD. Compact discs can become damaged and give you these sort of errors. There are also some cases where a bad filename has triggered this error. For the first case, I recommend searching the internet for CD recovery utility, as sometimes they are able to re-read the file in alternate ways that may let you recover the file.
 I would like to read a good technical article about JPEG-XR, like the ones here on JPEG.

I am interested in having JPEGsnoop included in the operating procedures for a standards-setting organization. Can you please reply with contact information so we can talk?
 Private message sent.
 Hi Calvin,

 I have been doing my final-year project on a JPEG decoder for a multi-processor system running on an FPGA, and I am now desperately running out of time. I am trying to replace my Loeffler IDCT with a Chen-Wang IDCT. I found such an algorithm already written, so I tried incorporating it with my own code. However, the image looks very sketchy and the colour is very off. The code was written for an MPEG decoder.

My question is simple: is the idct used in mpeg cross-compatible with that used for jpeg decoding?

Thank you so much for your time,
 Although I have not spent any real time looking at MPEG coding, I had assumed that the DCT (at least for I-frames) would be the same as for JPEG, but I don't know for sure. Presumably you are handling the chroma subsampling correctly?
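For reference, the textbook 8x8 inverse DCT can be written directly from the standard's equation; a slow but unambiguous sketch like this is useful for sanity-checking a fast Loeffler or Chen-Wang kernel against:

```python
import math

def idct_8x8(coeffs):
    """Reference 8x8 inverse DCT (direct evaluation of the defining sum).
    Slow, but a good golden model for validating fast butterfly kernels."""
    out = [[0.0] * 8 for _ in range(8)]
    for y in range(8):
        for x in range(8):
            s = 0.0
            for v in range(8):
                for u in range(8):
                    cu = 1 / math.sqrt(2) if u == 0 else 1.0
                    cv = 1 / math.sqrt(2) if v == 0 else 1.0
                    s += (cu * cv * coeffs[v][u]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[y][x] = s / 4
    return out

# A lone DC coefficient of 8 should reconstruct a flat block of value 1.
flat = idct_8x8([[8 if (u, v) == (0, 0) else 0 for u in range(8)] for v in range(8)])
print(round(flat[0][0], 6))  # 1.0
```

Feeding the same coefficient blocks through both this reference and the fast kernel should expose whether the problem is the transform itself or (as suggested above) the chroma handling around it.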
 I just wanted to comment on how great and informative I have found this site to be. I stumbled upon your JPEG rotation article through a Google search, and then noticed how many other useful articles you have on managing digital photographs. When I saw that you have sections on bodybuilding and RC helicopters, I was shocked because these are also hobbies of mine. I am going to have to spend some time reading the rest of your articles. Really an amazing site, thanks.
 Hello Calvin!

I was wondering - if different types of cameras leave their own quantization tables, is it possible that different cameras of the same model might leave their own unique signature?

In other words, does every camera have a unique signature?

All the best!
~ John
 No... every camera (of the same model and settings) does not have its own independent "signature", at least with respect to the JPEG compression quantization tables.
2010-04-16 Charan Shetty
 Do you have any program, or any links, for detecting tampered images based on the quantization table? Kindly reply please.
 JPEGsnoop is able to perform some basic detection in this manner.
 Thanks Cal. I knew you had the answer I needed.

While I have you "on the phone," so to speak, my friend and I were doing some Photochops on a photo he took of his new car to virtually "pimp it out."

He was running CS3 on an OS2 Mac, while I was running CS2 on a Vista PC.

The only ground rule was to produce an image with the same dimensions and print resolution, and Save at 08 quality.

We ran both of them through JPEGsnoop - just in case - and the only difference I found was that his JPG had an APP0 marker and mine did not.

Is that due to any difference in CS2 vs. CS3 or is it an issue with Photoshop on the Mac vs. Photoshop on the PC ?

2010-03-13 william wallace
 Hi Sir,

First, thanks for creating such an informative website as well as your amazing JpegSnoop!

Next, my question. I guess this is more in the area of digital image analysis, but I thought I'd ask: is it possible to determine the quality of a JPEG image (low quality and blurry versus high quality and detailed) by examining the JPEG file itself (instead of viewing the image)?

The reason I ask is that I have a folder of approx. 10,000 jpeg files and I'd like to sort them from highest quality to lowest quality, but I'd like a way to sort them automatically without having to view each individual picture and adding metadata to each file (which would be quite time consuming). Thanks for any info!

 Thanks! Interesting question. I am sure that there are specialized tools out there that can assess the degree of "blurriness" for a given photo, but I'm not aware of any. Without being too fancy, it may be possible to count the number of MCUs that have an increased proportion of high-frequency image components (suggesting that the image has areas with more detail). This type of analysis is not foolproof, but it could help differentiate images that are mostly blurred from those that contain a lot of detail. However, it would not catch cases where the focus was wrong (ie. autofocus locked on the background instead of your subject). JPEGsnoop does count the frequency of Huffman entries, but the counts would need to be presented differently to be useful as described above. So, in short, no, the current tool won't directly help you with this search :)
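As a sketch of the idea (a hand-rolled proxy, not JPEGsnoop functionality): rather than running a full DCT, a squared-gradient measure approximates the proportion of high-frequency content per image:

```python
def sharpness_score(pixels, width, height):
    """Rough sharpness proxy: mean squared horizontal/vertical gradient
    over a flat row-major list of grayscale values. Detailed (in-focus)
    images score higher than blurred ones; this approximates
    'high-frequency energy' without computing a DCT."""
    total = n = 0
    for y in range(height - 1):
        for x in range(width - 1):
            p = pixels[y * width + x]
            total += (pixels[y * width + x + 1] - p) ** 2
            total += (pixels[(y + 1) * width + x] - p) ** 2
            n += 2
    return total / n

# A checkerboard (lots of detail) versus a flat gray patch (no detail):
detailed = [255 * ((x + y) % 2) for y in range(8) for x in range(8)]
flat = [128] * 64
print(sharpness_score(detailed, 8, 8) > sharpness_score(flat, 8, 8))  # True
```

To rank a folder of 10,000 files you would decode each to grayscale, compute the score, and sort descending; as noted above, though, this flags low detail, not necessarily wrong focus.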

 I have been playing recently with Python to make a similar tool that works under Linux; however, I have a small query about quantization tables. We seem to be getting slightly different results and I was wondering if you could take a look and see what might be wrong.

Contact me for more info if you're interested.
 Hey Cal:

I wanted to follow up on my question last year about detecting layers in a flattened image (Yes, I know that is part of the process in saving a PSD to a JPG. Photoshop has that warning every time you go to the SAVE screen).

I have tried using error level analysis to find the answer, but from my understanding of it, there has to be at least one resave of a JPG along with a physical change to the original image, for it to spot anything.

Here is the scenario:

I received a letter of recommendation, an email with attachment, from a chap I wish to hire. He claims that he had it scanned for him.

It looks like it was created in MS Word, using Arial, and then printed on white stationery with a black-and-white business logo at the top and an address line along the bottom after the closing.

The letter was not signed (somewhat of a red flag).

The text part of the letter looked darker than the logo -- which may mean that the logo was in color or greyscale -- but I could not find anything unusual - like repeating patterns, different blocking, ringing artifacts, etc. - that would lead me to conclude the text was added to an existing JPG.

After I "Snooped" it, it identified the software as CS2 from the Exif data, but the compression signatures came out as something else, like IrfanView.

Given the Photoshop IRB that JPEGsnoop found alongside a matching pair of signatures that are not from PS, I take this to mean that the scan may have been made within Photoshop, saved as a JPG (let's say "Save at 60"), and then dragged through IrfanView for a resave.

Do you think that the resave was to hide any evidence of layers?

I also tried using pixel equalization to spot any unnaturally light or dark pixels, as well as Edge Detection.

I am thinking that this dude scanned a sheet of blank letterhead (creating one layer) and then created a text layer to merge with the first when saving it as a flattened JPG.

But, how do I prove that?

Thanks for your help.


One thing that may not be apparent is that when you save an image with Photoshop, all layers are usually "flattened" in generating the single JPEG image. This single image uses only one set of quantization tables. In fact, if you resave an image with various tools, it is generally only the last tool that will define the quantization tables used to encode the final JPEG image.

So, to answer your question: while JPEGsnoop can often identify that an image was generated by Photoshop, it cannot infer further details about the layers that were used to generate the image (at least on the basis of the quantization tables). Likewise, cropping reveals nothing through the quantization tables (in fact, resaving after a crop may change them, depending on the tool you use for the cropping).

The best way to accomplish what you are after is to use a tool that supports one of the many imaging algorithms, such as error level analysis, that look for characteristics indicating that an image was created as a composite.
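The intuition behind error level analysis can be shown with a toy model using plain scalar quantization (a real ELA tool recompresses the actual JPEG and inspects per-block differences, but the principle is the same):

```python
def quantize(vals, q):
    """Quantize then dequantize with step q, mimicking lossy round-off."""
    return [round(v / q) * q for v in vals]

# Simulated "image": the left half was previously quantized with step 16
# (i.e., already compressed once), the right half is untouched source data,
# as if pasted in from elsewhere.
left = quantize(range(0, 128), 16)
right = list(range(128, 256))
image = left + right

# "Resave" the whole thing with the same step and measure the residual
# error per half: the previously-compressed half requantizes with zero
# error, while the pasted half shows fresh round-off error.
resaved = quantize(image, 16)
err_left = sum(abs(a - b) for a, b in zip(image[:128], resaved[:128]))
err_right = sum(abs(a - b) for a, b in zip(image[128:], resaved[128:]))
print(err_left == 0, err_right > 0)
```

This asymmetry in error levels between regions is exactly what ELA visualizes when hunting for composites.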
 I can't help you with the "proof", as that image analysis can best be answered by others. However, to answer your question regarding the mismatch of quantization tables (signature) from the Photoshop IRB metadata: it is quite possible that Photoshop was indeed used previously and then the file was again resaved using another graphics editor (ie. other than Photoshop). The Photoshop signatures are reasonably unique.

