
jpeg v. jpeg


John Anderson


Hi, I'm not sure how many people realize that 'jpeg' compression at any arbitrary quality setting depends on the manufacturer of the camera or the conversion software used. The information that tells the software how to compress an image is contained in quantization tables, which are inaccessible to the end user.
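For anyone who wants to check their own files, here is a minimal sketch (assuming Python with the Pillow library; the file name is a placeholder) that prints the tables a given camera or converter wrote into a JPEG:

```python
# A minimal sketch, assuming Python with the Pillow library installed.
# Pillow exposes a JPEG's quantization tables via the .quantization
# attribute: a dict mapping table slot (0 is typically luminance,
# 1 chrominance) to a list of 64 divisors. Bigger divisors throw away
# more detail. The file name is a placeholder.
from PIL import Image

img = Image.open("test_shot.jpg")  # hypothetical file
for slot, table in img.quantization.items():
    print(f"Table {slot}:")
    for i in range(0, 64, 8):
        print(" ".join(f"{v:3d}" for v in table[i:i + 8]))
```

Comparing the output from two different cameras at the "same" quality setting makes the difference very plain.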

 

We've discovered (not entirely to our surprise) that we can obtain equally good images from our 10MP Olympus E-3 as from our 21MP Canon 5D2 after conversion from RAW to jpeg via Adobe Camera Raw.

 

I know it's a tall order, but does anyone know of a standalone converter which allows fully-variable compression? We're trying to get the best image quality possible to preserve detail.

 

Regards,

 

John Anderson.

 

 

PS I might add I wasn't quite sure where to post this. Hope it's OK to put it here!


Thanks for the post John! I don't actually know how to address or answer your question, as this topic is somewhat new to me, but I look forward to the information that this thread produces. Very curious, indeed.


John, 

 

The RAW vs. JPEG thing is a minefield I try to stay out of. What are you currently using to convert from RAW to JPEG - Adobe Camera Raw via Bridge, or Lightroom? I've been curious about RawTherapee, an open-source RAW processing program: http://www.rawtherapee.com/. I haven't had much time to play with it, though, and it seemed a bit unstable last time I used it. I've heard extremely good things about Phase One's Capture One package for RAW, but haven't tried it personally.

 

What exactly do you mean by "equally good images"? What metric are you using? Just curious.  

 

You've hit on a big issue with RAW workflows, though. Naturally we want our image processing to be as open as possible, but if we shoot RAW we have to deal with a big black box in the form of the manufacturer-specific conversion tables. I suppose the benefits of shooting RAW outweigh the lack of openness.


Hi George, We imaged a test chart (fine lines, patterns, etc.) using the two cameras, converting from RAW to jpeg with Adobe Camera Raw at the maximum (12) quality setting. Although the converted Olympus file is only slightly smaller than the Canon jpeg file, a visual check reveals much greater detail in the Olympus jpeg file, more than can be explained by the different optics used.

 

Some research into image compression reveals that quantization tables vary considerably from manufacturer to manufacturer. There is no mandated 'standard' - the JPEG specification only provides example tables.

 

We'll carry out some more research, because the jpeg conversion process is losing considerable high-frequency information in the captured images - information that is essential for preserving fine detail in RTIs.

 

Regards,

 

John.


Were you using the ISO 12233 test chart? Would you be willing to post either the photos or the MTF curves, if you have them? I'm amazed that RAW conversion causes such a change in spatial resolution. Do you think it has to do with anti-aliasing applied during the conversion? That's the only thing I can think of that would cause such a change in spatial resolution.
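Incidentally, if a full MTF measurement is too much work, one crude way to put a number on "visible detail" is the variance of the image's Laplacian - blurrier images score lower. A rough sketch, assuming Python with NumPy and Pillow (file names are placeholders):

```python
# Rough sketch of a simple detail metric (not MTF): the variance of the
# image's Laplacian. Sharper images (more high-frequency content) score
# higher. Assumes Python with NumPy and Pillow; file names are
# placeholders.
import numpy as np
from PIL import Image

def laplacian_variance(path):
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    # 3x3 Laplacian via shifted differences: 4*center - N - S - E - W
    lap = (4 * gray[1:-1, 1:-1]
           - gray[:-2, 1:-1] - gray[2:, 1:-1]
           - gray[1:-1, :-2] - gray[1:-1, 2:])
    return lap.var()

for path in ("olympus_chart.jpg", "canon_chart.jpg"):  # hypothetical files
    print(path, laplacian_variance(path))
```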


Hi George, at this stage I can't say. I don't know enough about the algorithms. From what I am able to determine, jpeg conversion is a three-stage process: a Discrete Cosine Transform, quantization of the resulting coefficients (which is where the tables come in), followed by Huffman encoding. It is possible to tinker with the math if the quantization table is accessible.
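To make that concrete, here is a small sketch of the lossy step, assuming Python with NumPy and SciPy. The table used is the example luminance table from the JPEG specification; camera vendors substitute their own tables at exactly this point, which is the variation discussed above:

```python
# Sketch of the lossy heart of JPEG, assuming Python with NumPy and
# SciPy: 2-D DCT of one 8x8 block, element-wise division by a
# quantization table, then rounding. The rounding discards detail
# (mostly high frequencies, where the divisors are large); the Huffman
# coding that follows is lossless.
import numpy as np
from scipy.fft import dctn, idctn

# Example luminance table from the JPEG specification (Annex K);
# camera vendors substitute their own tables here.
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float) - 128  # level-shifted

coeffs = dctn(block, norm="ortho")
quantized = np.round(coeffs / Q)                  # the lossy step
restored = idctn(quantized * Q, norm="ortho") + 128

print("non-zero coefficients kept:",
      int(np.count_nonzero(quantized)), "of 64")
print("max pixel error after round trip:",
      float(np.abs(restored - (block + 128)).max()))
```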

 

A good introduction is at http://en.wikipedia.org/wiki/JPEG.

I'm going to repeat the tests over the weekend, just to be sure. I'll happily email you the two images of the target (12233 Resolution Test Chart from Edmund Optics) if we get the same results.

 

Regards,

 

John.


I've been trying to use an older release of RawTherapee on my Windows system, but it crashes on a regular basis. There is a more recent release available for Mac and Linux, but it hasn't been ported to Windows yet. I'm just starting to experiment with RawHide to see if it will work any better.


I came across a conversion program called ImageMagick:

 

http://www.imagemagick.org/script/index.php

 

It's command-line only (no GUI), but it can read, write, and convert over 100 image formats.

 

Googling 'ImageMagick quantization tables' turns up a number of contributors who are manipulating ImageMagick's quantization matrices to improve jpeg compression quality.

 

It's all much beyond me, I'm afraid, but someone with good math in this area might want to look at it.
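As a concrete starting point, here is a rough sketch of driving ImageMagick from Python. The -quality and -sampling-factor options are standard, and the "jpeg:q-table" define (documented by ImageMagick) is the hook for supplying your own quantization tables from an XML file. The image names and the table file are placeholders:

```python
# Rough sketch: drive ImageMagick from Python for maximum-quality JPEG
# output. -quality 100 uses the flattest quantization and
# -sampling-factor 4:4:4 disables chroma subsampling. The
# "jpeg:q-table" define is ImageMagick's hook for custom quantization
# tables read from an XML file; "my_tables.xml" and the image names
# are placeholders.
import subprocess

subprocess.run([
    "convert", "input.tif",
    "-quality", "100",
    "-sampling-factor", "4:4:4",
    "-define", "jpeg:q-table=my_tables.xml",  # hypothetical table file
    "output.jpg",
], check=True)
```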

 

Everyone have a good weekend.

 

 

John Anderson.


A couple of notes about this, related to long-term preservation and reuse.

 

We highly recommend a workflow of shooting RAW, then converting to DNG and embedding the original raw data. That is your archive file. We do make jpegs to process today, because that is the workflow supported by the tools (it's actually a bigger deal than you might think to support a 16-bit workflow, but that's more than I want to get into in this post). At any rate, you have the original raw data, and you have that data converted to a 16-bit tiff, which is a far better archive format than the raw. The DNG format includes an embedded xmp metadata structure, which not only has all your EXIF camera settings, but also has all the conversion information: when you converted to DNG, using what software and version, etc. Embedding the raw data, while doubling the file size, is an extra protection: if there turns out to be a bug or problem in the conversion from RAW to tiff, you still have your raw data.
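For anyone who wants to script that workflow: Adobe's free DNG Converter has a command-line interface, and (as I understand its documentation) the -e switch embeds the original raw file. A rough sketch, with placeholder paths and an Olympus .ORF extension as the example:

```python
# Sketch: batch RAW -> DNG conversion via Adobe DNG Converter's
# command-line interface. The -e switch should embed the original raw
# file inside the DNG (check the converter's documentation for your
# version and platform). Paths and the executable name are
# placeholders.
import pathlib
import subprocess

raw_dir = pathlib.Path("capture_session")   # hypothetical folder
out_dir = raw_dir / "dng"
out_dir.mkdir(exist_ok=True)

for raw in sorted(raw_dir.glob("*.ORF")):   # e.g. Olympus raw files
    subprocess.run([
        "Adobe DNG Converter",               # adjust for your install
        "-e",                                # embed original raw data
        "-d", str(out_dir),                  # output directory
        str(raw),
    ], check=True)
```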

 

For clarity, I'll add that while I'm talking about both tiffs and dngs here, there is a relationship. The DNG format is actually an extension of the tiff format, so the core image data is stored as a 16-bit tiff. Any modifications you make to the data (white balance, exposure compensation, etc.) are stored in the xmp data and not actually applied to the image data. When you open the image in a tool that knows how to read DNG files, the modifications are applied then. This means you have a record, and you can back anything out. So, if you accidentally applied sharpening to your images because you forgot to "zero out" your settings, you can back that out. If you save a standard tiff file, any modifications are "baked in" and you can't undo them - and depending on your workflow, you may not have a record of them either.
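You can see that record for yourself: the XMP packet is stored as plain text inside the file, so even a crude scan will pull it out. A proper tool such as ExifTool parses it correctly; this sketch (Python, with a placeholder file name) just shows the record exists:

```python
# Crude sketch: find the plain-text XMP packet inside a DNG/TIFF and
# print it. Real tools (e.g. ExifTool) parse this properly; this only
# demonstrates that the edit history travels with the file.
# The file name is a placeholder.
with open("IMG_0001.dng", "rb") as f:
    data = f.read()

start = data.find(b"<x:xmpmeta")
end = data.find(b"</x:xmpmeta>")
if start != -1 and end != -1:
    print(data[start:end + len(b"</x:xmpmeta>")].decode("utf-8", "replace"))
else:
    print("no XMP packet found")
```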

 

The bottom line here is that good practice and good workflows give you the highest-quality data to preserve, so you can reprocess as the software gets better. Also, the tiff image file format is likely to be around for a long time, and repositories know how to deal with it. So saving these images means the likelihood goes way up that someone in the future can reprocess your data and get an even better result. This is true for photogrammetry as well. Even if you only save the original images (converted to dngs) and information about how you captured the data, you can have a digital preservation story that works with archives and repositories today, even if they don't support saving 3D models or RTI files.

 

Carla


I've been requested to provide smaller-sized versions of some of my .ptm files, which are too large (about 250 MB) for the PTMViewer. This is also relevant for sharing PTMs online using the java applet viewer, since it has limited bandwidth. It seems this topic would also fit under the "Processing" heading. What's the best workflow to produce a smaller PTM file from the original images? Would you recommend rebuilding the PTMs using compressed versions of the original images to make the final output file size smaller? Cropping the images could also help, but I don't want to lose too much information. I'm trying to go from ~250 MB down to about 10 MB for each .ptm file.


The size of a ptm file or an RTI file is a function of the resolution in pixels and the fitting method chosen. Compressing the input jpegs will not reduce the size. The smallest usable RTI is an lrgb ptm; an rgb ptm is about 2 times that file size. For HSH, the first order doesn't work in the current RTIBuilder; 2nd order is about 1.3-1.5 times the size of an lrgb ptm, and third order is the largest, at about 2 times the size of a 2nd-order HSH.
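As a rough rule of thumb, you can turn those multipliers into a size estimate. In the sketch below (Python), the 9-bytes-per-pixel baseline for an lrgb ptm - 6 luminance coefficients plus RGB color - is my assumption about the uncompressed format; the other ratios are the ones quoted above:

```python
# Back-of-the-envelope PTM/RTI size estimator. The lrgb baseline of
# 9 bytes per pixel (6 luminance coefficients + RGB color) is an
# assumption about the uncompressed format; the other multipliers are
# the ratios quoted above. Real files will vary.
BYTES_PER_PIXEL = {
    "lrgb ptm": 9.0,             # assumed baseline
    "rgb ptm": 9.0 * 2,          # ~2x an lrgb ptm
    "hsh 2nd order": 9.0 * 1.4,  # ~1.3-1.5x an lrgb ptm
    "hsh 3rd order": 9.0 * 2.8,  # ~2x a 2nd-order hsh
}

def estimate_mb(width, height, kind):
    return width * height * BYTES_PER_PIXEL[kind] / 1e6

# e.g. full-resolution Canon 5D Mark II frames (5616 x 3744 pixels):
for kind in BYTES_PER_PIXEL:
    print(f"{kind}: ~{estimate_mb(5616, 3744, kind):.0f} MB")
```

Run at full 21MP resolution, these estimates land in the same ballpark as Taylor's ~250 MB files, which is why resizing is the lever that matters.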

 

For anyone who wants an explanation of what is stored in the ptm and RTI files for each of these choices, there is a basic description in the Guide to Highlight Image Processing.

 

So, back to Taylor's question. To get a smaller file size, you are going to want to crop and/or resize the images. Both of these operations can happen in RTIBuilder. Follow the instructions in the processing guide to reprocess an existing RTI. After you load the XML file and RTIBuilder loads the images, you can choose to resize the images in that last screen.
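If you would rather resize the capture set outside the builder first, a simple batch sketch with Pillow (folder names are placeholders; every image must be resized by the same factor or the fit will be inconsistent):

```python
# Sketch: batch-resize a capture set before rebuilding a smaller
# ptm/RTI. Every image must get exactly the same treatment or the
# fitting will be inconsistent. Folder names are placeholders.
import pathlib
from PIL import Image

src = pathlib.Path("jpeg-exports")
dst = pathlib.Path("jpeg-exports-small")
dst.mkdir(exist_ok=True)

SCALE = 0.25  # quarter size per dimension => roughly 1/16 the pixels
for path in sorted(src.glob("*.jpg")):
    img = Image.open(path)
    size = (int(img.width * SCALE), int(img.height * SCALE))
    img.resize(size, Image.LANCZOS).save(dst / path.name, quality=95)
```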

 

Carla


  • 4 years later...

Old topic, but I think the OP overlooked at least one point - JPEG compression works on blocks of pixels, not individual pixels, so a high-contrast subject can give misleading results, especially when the target patterns closely align with the sensor resolution. To see the effect of different compression settings and different sensor resolutions, try shooting a subject with low-contrast patterns - say a waveform varying rhythmically from white to black, possibly through several colours on the way. That's also probably a more appropriate test for RTI shooting than standard resolution charts anyhow, since we're trying to bring out minute differences in the surface of the subject. If we had nice contrasty subjects, we wouldn't need RTI.
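If anyone wants to try this, here is a small sketch (Python with NumPy and Pillow; the contrast band and frequency sweep are arbitrary choices) that generates a low-contrast frequency sweep of that sort:

```python
# Sketch: generate a low-contrast horizontal frequency sweep ("chirp")
# as a compression test target. The contrast band (96-160) and the
# 2-to-200-cycle sweep are arbitrary choices; adjust to taste.
import numpy as np
from PIL import Image

W, H = 1600, 400
x = np.linspace(0.0, 1.0, W)
freq = 2 + 198 * x                        # cycles ramp from 2 to 200
phase = 2 * np.pi * np.cumsum(freq) / W   # integrate frequency -> phase
row = 128 + 32 * np.sin(phase)            # low contrast, mid-grey centred
pattern = np.tile(row, (H, 1)).astype(np.uint8)
Image.fromarray(pattern, mode="L").save("chirp_target.png")
```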

