Carla, Mark, Marlin and Community – I have some questions about reflective spheres and optimal normal reflection vector precision (that is, the repeatability) and accuracy (how closely the measured normal reflection vector values come to the actual value). The CHI "Guide to Highlight Image Capture, 2.0" (http://culturalheritageimaging.org/What_We_Offer/Downloads/RTI_Hlt_Capture_Guide_v2_0.pdf) explains that: "Depending on the size and portability of the target object, you must compose the camera's field of view so it can encompass both the object and two reflecting spheres of an appropriate size. The spheres should have a diameter of at least 250 pixels in the resulting photograph." (Pg. 3, Target Object with Reflective Spheres.)

THE SET-UP – As a practical example, let's say I'm capturing a 36" wide x 24" high painting using a 50mm lens. With space on either side of the frame for the spheres to be mounted so that they do not cast a shadow onto the canvas during the 15° flash positions, the total width of the frame area is roughly 45". With my 5D Mark II and a 50mm lens, shooting the captures in RAW, I get photos with a total frame size of 5616 horizontal pixels x 3744 vertical pixels, a 21.026-megapixel file (21.0 MP). That equates to roughly 125 pixels per inch on the canvas. Let's assume that when I manufacture the assembly JPEGs, I first correct for the distortion around the outer areas of the lens frame so that every pixel is metrological. A one-inch reflective sphere is 125 pixels in this set-up – HALF the recommended pixel diameter. In my experience, it really takes about 20 pixels, MINIMUM, to resolve a condition I am interested in documenting so that I may track its changes accurately. So in reality the smallest features that will resolve clearly in this composition are about 0.16 inches in diameter, roughly 4 mm. Let's assume that, from a qualitative standpoint, I'm happy with that resolution.
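The arithmetic above can be sanity-checked in a few lines (a sketch; the 45-inch frame width, the 5616-pixel sensor width, and the 20-pixel resolving threshold are the figures from this post):

```python
# Back-of-the-envelope check of the set-up described above.
frame_width_in = 45.0      # painting plus room for the spheres
sensor_width_px = 5616     # Canon 5D Mark II horizontal resolution

pixels_per_inch = sensor_width_px / frame_width_in
sphere_diameter_px = 1.0 * pixels_per_inch    # a one-inch reflective sphere
smallest_feature_in = 20 / pixels_per_inch    # ~20 px minimum to resolve

print(f"{pixels_per_inch:.1f} px/in")                # 124.8 px/in
print(f"{sphere_diameter_px:.0f} px sphere")         # 125 px, half the 250 px guideline
print(f"{smallest_feature_in * 25.4:.1f} mm smallest resolvable feature")  # 4.1 mm
```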
THE QUESTIONS – Normal reflection vectors from the HSH assembly code are calculated from the brightest-to-darkest RGB luminance values, where the light source direction is taken as the inverse of the highlight position on the spherical surface. To what degree am I decreasing the accuracy and precision, introducing extra variability or noise into the processing of normal reflection vectors, by having a reflective sphere only HALF the recommended size? Do the HSH algorithms require a nearly 250-pixel-diameter hemisphere to accurately calculate the light source positions and inverted reflection vectors? How much does variability (precision) depend upon having a minimum 250-pixel hemisphere? My guess is that the 250-pixel recommendation is based upon some optimization tests. But if nobody knows, perhaps I should gather that data? Thanks – Dale Kronkright (GOKConservator), Head of Conservation, Georgia O'Keeffe Museum
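For what it's worth, here is a rough way to see how sphere size could affect light-position accuracy. It assumes the standard highlight geometry (orthographic viewing, a distant light, and the light direction recovered by reflecting the view vector about the sphere normal at the highlight); the numbers are only a sensitivity sketch for highlight localization error, not a statement about what the HSH fitter actually does internally:

```python
import math

def light_direction(dx, dy, radius_px):
    """Estimate the incident light direction from a highlight on a mirrored
    sphere. (dx, dy) is the highlight offset from the sphere center in
    pixels; radius_px is the sphere radius in pixels. Assumes orthographic
    viewing along +z and a distant light source."""
    # Surface normal at the highlight point on the sphere.
    nx, ny = dx / radius_px, dy / radius_px
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    # Reflect the view vector v = (0, 0, 1) about the normal: l = 2(n.v)n - v
    return (2 * nz * nx, 2 * nz * ny, 2 * nz * nz - 1.0)

def angular_error_per_pixel(radius_px, dx, dy):
    """Angle (degrees) between the light directions obtained when the
    highlight is mislocalized by one pixel -- a rough sensitivity measure."""
    a = light_direction(dx, dy, radius_px)
    b = light_direction(dx + 1, dy, radius_px)
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot))

# A 125 px sphere (radius 62.5) vs. the recommended 250 px (radius 125),
# with the highlight about halfway out from the sphere center:
print(angular_error_per_pixel(62.5, 30, 0))   # ~2.1 degrees per pixel of error
print(angular_error_per_pixel(125.0, 60, 0))  # ~1.0 degree: twice the sphere, half the error
```

Under these assumptions the halved sphere roughly doubles the angular error contributed by each pixel of uncertainty in the highlight position, which is one plausible ingredient of the 250-pixel recommendation.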
On the LinkedIn discussion group Cultural Heritage Conservation Science. Research and practice's discussion on 3-D digital imaging and photogrammetry for scientific documentation of heritage sites and collections, http://linkd.in/RZMpFj , Greg Bearman wrote the following question:

"Does RTI give a repeatable and quantitative set of normals good enough for looking for change? If I take an RTI set, rotate the object, and let it warp a bit (flexible substrate), what do I get the second time? How do I align the datasets for comparison? What is the system uncertainty? I.e., if I just take repeated images of the same object without moving anything, how well does the RTI data line up? Second, suppose I take something that has some topography but is totally inflexible and cannot distort (make up a test object here!) and I do repeated RTI on it in different orientations? Can I make the data all the same? If you are going to use an imaging method to determine changes in an object, the first thing to do is understand the inherent noise and uncertainty in the measuring system. It could be some combination of software, camera or inherent issues with the method itself."

I wrote back:

"Hey Greg – tried sending a response earlier last week but I do not see it!? Sorry. I'm on vacation until the 22nd – trying to recover and recharge. It is going well, but I wanted to jot down my initial thoughts. One of my interns – Greg Williamson – is working on aberration recognition software that can recognize and highlight changes in condition captured by different H-RTI computational image assemblies – obviously taken at different times, but also with different equipment and with randomly different highlight flash positions.
It seems, initially, that normal reflection is normal reflection, regardless of object or flash position, and that the software correctly interpolates 3D positions of surface characteristics regardless of the precise position of the flash, because it is accustomed to calculating the highlights both at the capture points and everywhere in between! Likewise, we have had promising results with photogrammetry when the resolution of the images used to create the mesh and solids is similar. What may turn out to be key is a calibration set that will allow correction of the various lens distortions that would naturally come from different lenses. I know Mark Mudge at Cultural Heritage Imaging has suggested that we begin taking a calibration set before RTI capture, as we had before photogrammetry. He may be working on incorporating a calibration correction into the highlight RTI Builder that CHI has made available. I'm sending this discussion along to the CHI forum at http://forums.cultur...ageimaging.org/ to see what others might have to add. When I return to work, I'll ask Greg to give this some additional thought."

Forum Members: any thoughts?
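On Greg's "system uncertainty" question: one simple measure would be the per-pixel angular difference between normal maps from repeated captures of an unmoved object. A minimal sketch, assuming the two captures are already aligned and loaded as (H, W, 3) arrays of surface normals (the array names are hypothetical placeholders, not output of any CHI tool):

```python
import numpy as np

def normal_map_discrepancy(n1, n2):
    """Per-pixel angular difference in degrees between two aligned normal
    maps of shape (H, W, 3). Summary statistics of the result from repeated
    captures of an unmoved object would estimate the system's noise floor."""
    # Normalize each normal, then clamp the dot product before arccos
    # to guard against floating-point values slightly outside [-1, 1].
    u = n1 / np.linalg.norm(n1, axis=-1, keepdims=True)
    v = n2 / np.linalg.norm(n2, axis=-1, keepdims=True)
    dots = np.clip(np.sum(u * v, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(dots))

# Usage with two repeated captures (hypothetical arrays):
# err = normal_map_discrepancy(capture_a, capture_b)
# print(err.mean(), np.percentile(err, 95))
```

The mean and 95th-percentile angular error from such repeated, unmoved captures would be a first answer to "how well does the RTI data line up" before any object rotation or warping enters the picture.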