Re-focusing PTM files


Taylor


A while back, Tom Malzbender of HP gave a Google Tech Talk in which he discussed many applications of PTMs and RTIs, among which is the ability to re-focus images.  He demonstrated an example using only six images of a subject taken using different focus settings (see 50:20 in this video):

 

I was wondering if others have tried this using the PTMViewer, and could anyone describe the technique for processing and viewing the images?  I'd like to take macro RTIs of an object with a lot of relief, and it would be very useful to be able to interactively refocus the images to see details, since the depth of field of the macro images is very thin.  Could this capability be built into the new RTIViewer?


I hope Tom might fill us in on how it's done, if he gets a chance.  I'll look again at the HP website and see if I can find out more about it there.  It's part of what gave me the idea that a light-field (or plenoptic) camera could capture PTMs/RTIs.  If a PTM can be used to re-focus images, then why not the inverse: using a light-field camera to generate a PTM or RTI?  Although I'm not sure if the example in Tom's Google Tech Talk was a PTM, since it only used six images.  Perhaps each image is focused differently, and the PTMBuilder can interpolate to allow refocusing?  It would be very useful to be able to do this by processing an existing PTM/RTI data set, however.


This feature was described back in the 2001 paper on PTM. It is distinct from the use of PTM to represent light directions as we commonly do. This is a way to give a compact, parametric representation of multiple images at different focal distances, and to interpolate focal distances between the constituent images. It seems to me straightforward to implement and may be available through an undocumented command-line option for the PTMFitter. The refocusing example Tom posted on HP Labs crashes RTIViewer, btw.

 

Today, since computing power is so much greater, Helicon Focus + Stack-shot is going to do what you want quite nicely. 


Sorry, I should have provided the link to Stack-Shot: http://www.cognisys-inc.com/stackshot/stackshot.php

 

The advantage of moving the camera as opposed to changing the focal distance on the lens is that you avoid the effects of "focus breathing", which is particularly acute in the macro realm. In other words, a change of focal distance creates an effective change of focal length, thus changing the composition of the shot and making pixel-to-pixel stacking difficult, if not impossible. 


Thanks, George!  We had considered using the Helicon software, and might go that route.  When I recalled from TM's tech talk that PTMs could be used to re-focus images, I thought we might be able to do it without using additional proprietary software by processing the images differently.  As you say, it's described in the 2001 paper (Section 4.2).  As Carla has mentioned, the PTMViewer won't run on the latest Mac OSX platforms.  It would be nice if this option could be integrated into the new RTIViewer software with a simple GUI. 

 

We have a macro focusing rail to avoid the focus breathing you mentioned, but it's not automated like the StackShot gadget, which does look useful for its ability to automatically capture a focus stack with HDR bracketing.  I understand there are also Photoshop procedures for processing a stack of variably focused images, but they're more labor-intensive.


In terms of processing speed we've found Helicon is definitely the fastest, and the best-of-breed in terms of functionality. If you're working at high magnification (of bugs etc.) you may need to process hundreds of individual images.

 

There's quite a powerful open-source plug-in for ImageJ that does focus stacking: http://bigwww.epfl.ch/demo/edf/  Have you tried that already?


Taylor, I asked Tom Malzbender about this topic. Let me speak for a moment about the photographic methodology, and then I'll give you Tom's answer about how to get the variable focus point. If I mis-remember the process, it may act as bait to bring Tom into the discussion.

 

In the simplest example, let's say you have a fixed camera pointing horizontally at the subject. The subject is on a translation stage, or its manual equivalent. A manual equivalent could be a square piece of plastic bounded on two sides by two horizontal rails, so that the square plastic base could move closer to or farther away from the camera. You would then determine the focal depth you require. For example, if your subject was half of a walnut shell and the distance from the bottom of the shell to the top of the shell facing the camera was 1 inch, your focal depth would be one inch.  You would move the subject into the position where the distant edge of the walnut shell is in focus. You could then take 10 photographs, with the subject translated away from the camera one tenth of an inch for each successive shot. As your subject is translated backward from the camera, different levels of the walnut shell will fall into the camera's depth-of-field "sweet spot". Take care to align the camera's optical axis (the direction passing through the center of the lens and striking the center of the sensor) with the axis of the subject's motion. Small shifts of the subject position due to minor misalignment can be eliminated using the forthcoming Alignment Tool from the CARE project with CHI and Princeton, written by Sema Berkiten.
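The arithmetic of the capture sequence above can be sketched in a few lines (a hypothetical helper for planning the shots, not part of any of the tools mentioned in this thread):

```python
# Sketch: compute the subject translation offsets for a focus stack.
# Uses the walnut-shell example: 1 inch of total focal depth covered
# in 10 evenly spaced shots. Names here are illustrative only.

def stack_positions(total_depth, num_shots):
    """Return the translation offset from the starting position for each shot."""
    step = total_depth / num_shots
    # round() just tidies floating-point noise for display
    return [round(i * step, 6) for i in range(num_shots)]

print(stack_positions(1.0, 10))
# [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
```

Each offset is how far the stage has moved (in inches) from the position where the distant edge of the shell was in focus.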

 

I'll now give you Tom's answer. You will need to open a text editor to build a new type of LP file. On the first line, write the number 10. This indicates the number of images. On the next line, enter the absolute pathname for the image that was shot when the subject was farthest from the camera (with the focus on the top of the walnut shell). Following the pathname, hit the space bar or tab key, then enter 0.0. On the next line, enter the absolute pathname for the next photo in the sequence, followed by a space or tab, and then enter the value 0.1. Continue to increment the numerical value following each photo's absolute pathname: 0.2, 0.3, 0.4, and so on. The photo of the edge of the walnut shell, which was taken closest to the camera, would have the numerical value 0.9. So now you have a new type of LP file which uses one of the two available dimensions of the PTM, in this case depth. Tom said that you could also use the second available dimension, for example, to translate the light along the x-axis. This is all I remember from Tom.
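A short script could generate that LP file rather than typing it by hand. This is only a sketch of my reading of the format described above (image count on the first line, then one absolute pathname and an increasing parameter value per line); the paths and function name are made up for illustration:

```python
# Sketch: write the "focus" LP file described above.
# The images must be ordered farthest-from-camera first, so the
# parameter runs from 0.0 (farthest) up to 0.9 (closest) for 10 shots.

def write_focus_lp(image_paths, lp_path):
    """image_paths: absolute pathnames, ordered farthest-from-camera first."""
    n = len(image_paths)
    with open(lp_path, "w") as f:
        f.write(f"{n}\n")                      # first line: image count
        for i, path in enumerate(image_paths):
            f.write(f"{path}\t{i / n:.1f}\n")  # pathname, tab, parameter value

# Hypothetical file names, for illustration only:
images = [f"/projects/walnut/shot_{i:02d}.jpg" for i in range(10)]
write_focus_lp(images, "focus.lp")
```

The resulting `focus.lp` would then be fed to the fitter in place of a normal light-position file.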

 

At this point, the rest of my answer is speculative, because I have never actually built this kind of PTM. My best guess is that you would save your LP file into the project's assembly-files folder for RTIViewer, or into the same directory as your images for the PTM fitter. I would then see what happens using this data in the RTIBuilder software and/or the command-line HP PTM fitter. If either of these methods works, go ahead and try to view the PTM in RTIViewer or the PTMviewer.

 

Taylor, if you try this, PLEASE let us know what happens!


Thanks, Mark!  Your description sounds consistent with Tom Malzbender et al.'s 2001 paper, but with a bit more detail on how to do it.  I'm not very experienced with command-line programs--although I've done a little with them and keep coming up with reasons to learn more--and there are easier software alternatives to do this type of operation (such as the Helicon software George mentioned).  I was hoping that it might be possible to re-focus images using the same data sets collected to create RTIs, but this is clearly not the case, based on your answer.  However, I think the topic is very interesting from the point of view of understanding what PTMs can do and how they work. 

 

The re-focusing idea is what got me interested in the question of whether a plenoptic (or light-field) camera could be used to generate a PTM or RTI file, if the software were available to do this, given that a plenoptic camera collects so much data from a single exposure about the directions of light entering the camera.  I think you would at least need to know the light source direction (from spheres or other means), and you would also need to know how the surface reflectance changes with varying light directions.  So, my guess is that even with a plenoptic camera, more than one image would be required with different source light directions to create a PTM or RTI.  Perhaps it could be done with a small array of plenoptic cameras taking simultaneous exposures from different positions, or perhaps a sort of "relative" PTM could be created from a plenoptic image file in which the relative angles of normal vectors could be computed?  I'm just thinking about ways in which all the data captured in plenoptic images could be leveraged (there must be more that can be done with all that information about light directions than just re-focusing images!).  I admit I'm speculating beyond my depth of knowledge of how PTMs work and what's the minimum amount of data required to create PTMs.


Archived

This topic is now archived and is closed to further replies.
