
Minimum reflective sphere diameter and HSH precision


GOKConservator


Carla, Mark, Marlin and Community – I have some questions about reflective spheres and the precision (that is, the repeatability) and accuracy (how closely measured values match the actual value) of normal reflection vectors.

 

The CHI “Guide to Highlight Image Capture, 2.0” (http://culturalheritageimaging.org/What_We_Offer/Downloads/RTI_Hlt_Capture_Guide_v2_0.pdf) explains that:

 

“Depending on the size and portability of the target object, you must compose the camera’s field of view so it can encompass both the object and two reflecting spheres of an appropriate size. The spheres should have a diameter of at least 250 pixels in the resulting photograph.” (Pg. 3, Target Object with Reflective Spheres.)

 

THE SET-UP-

 

As a practical example, let’s say I’m capturing a 36” wide x 24” high painting using a 50mm lens. With space on either side of the frame for the spheres to be mounted so that they do not cast a shadow onto the canvas during 15° flash positions, the total width of the frame area is roughly 45”.

 

With my 5D Mark II and a 50mm lens, shooting the captures in RAW, I get photos with a total frame size of 5616 horizontal pixels x 3744 vertical pixels, a 21.026 megapixel file (21.0 MP). That equates to roughly 125 pixels per inch on the canvas. Let’s assume that when I prepare the assembly JPEGs, I first correct for the distortion around the outer areas of the lens frame so that every pixel is metrologically usable. A one-inch reflective sphere is therefore about 125 pixels across in this set-up – HALF the recommended pixel diameter.
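For anyone who wants to sanity-check that arithmetic, here is a minimal Python sketch (the 45-inch frame width and one-inch sphere diameter are the assumptions stated above, not measured values):

```python
# Rough sanity check of the pixel arithmetic described above.
# Assumed values from this post: 5616-px-wide RAW frame, ~45" framed width,
# 1" reflective spheres, and CHI's 250-px recommendation.

frame_width_px = 5616                    # 5D Mark II horizontal resolution
frame_width_in = 45.0                    # approximate framed width including sphere margins

ppi = frame_width_px / frame_width_in    # ~124.8 pixels per inch at the canvas plane
sphere_px = 1.0 * ppi                    # a 1" sphere spans ~125 px
shortfall = 250 / sphere_px              # how far below the 250-px recommendation

print(f"{ppi:.1f} px/in; sphere diameter ~{sphere_px:.0f} px "
      f"({shortfall:.1f}x below the recommended 250 px)")
```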


In my experience, it really takes about 20 pixels, MINIMUM, to resolve a condition I am interested in documenting so that I can track its changes accurately. So in reality the smallest features that will resolve clearly in this composition are about 1/8th of an inch in diameter, roughly 3.2 mm. Let’s assume that, from a qualitative standpoint, I’m happy with that resolution.

 

THE QUESTIONS-

 

  • The HSH assembly code calculates normal reflection vectors from the brightest-to-darkest RGBL values, where the light source direction is the inverse of the highlight position on the spherical surface. To what degree am I decreasing the accuracy and precision, or introducing extra variability or noise, in the processing of normal reflection vectors by having a reflective sphere only HALF the recommended size?

  • Do the HSH algorithms require a nearly 250 pixel diameter hemisphere to accurately calculate the light sources and inverted reflection vectors?

 

  • How much does variability (precision) depend upon having a minimum 250 pixel hemisphere? 

 

  • My guess is that the 250 pixel recommendation is based upon some optimization tests. But if nobody knows, perhaps I should gather that data?


Thanks – Dale Kronkright (GOKConservator), Head of Conservation, Georgia O’Keeffe Museum


Here is the reasoning behind CHI's recommendation that there be at least 250 pixels along the diameter of the reflective spheres in an RTI image. The RTIBuilder software finds the pixel in the exact center of the highlight produced by the illumination source. The more pixels there are across a reflective sphere, the more the incident illumination direction can be refined.
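Conceptually (this is my own illustration, not RTIBuilder’s actual code), the highlight centre can be estimated by averaging the coordinates of the brightest pixels inside the sphere’s outline; the more pixels the sphere covers, the more finely that centre, and therefore the incident light direction, can be located:

```python
import numpy as np

def highlight_center(sphere_crop: np.ndarray, threshold: float = 0.98):
    """Estimate the specular highlight centre in a grayscale crop of the sphere.

    Illustrative sketch only: the centroid of the near-saturated pixels stands
    in for the highlight position that RTIBuilder detects. More pixels across
    the sphere means this centre is located with finer granularity.
    """
    cutoff = threshold * sphere_crop.max()
    ys, xs = np.nonzero(sphere_crop >= cutoff)   # near-saturated pixel coordinates
    return xs.mean(), ys.mean()                  # highlight centroid (x, y)
```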

If you look across a reflective black sphere (or any shiny sphere), the middle part of the sphere reflects the hemisphere of light in the direction of the camera. The outer part of the sphere reflects the hemisphere behind the reflective black sphere. This is how a sphere, often called a light probe, can capture the illumination information of an entire environment.

When building an RTI we are only concerned with the central part of the reflective black sphere, because the light positions used to illuminate the subject lie in the hemisphere facing the camera. The number of pixels across this central region of the reflective black sphere determines the angular resolution of the incident light direction: if there are 180 pixels across this central region, the angular resolution will be 1°; if there are 90 pixels, it will be 2°, and so forth. CHI recommends 250 pixels across the entire reflective black sphere because that ensures there will be at least 180 pixels across the central region used in calculating RTI incident light positions (which are stored as x, y, z coordinates in a light position file, calculated from the highlight data). For an explanation of how the light positions are calculated from the highlight data, see our paper with Tom Malzbender from VAST 2006.
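For readers who want to see where those numbers come from, here is a small sketch of the standard highlight-to-light-direction geometry (my own illustration under an assumed orthographic view along +z, not the code from the VAST 2006 paper):

```python
import numpy as np

def light_direction(dx: float, dy: float, radius_px: float) -> np.ndarray:
    """Incident light direction from a highlight found (dx, dy) pixels from the
    sphere centre, assuming an orthographic view along +z.

    The sphere normal at the highlight is n = (dx/r, dy/r, nz); the light is
    the mirror reflection of the view vector V = (0, 0, 1) about n:
        L = 2 (n . V) n - V
    """
    nx, ny = dx / radius_px, dy / radius_px
    nz = np.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    n = np.array([nx, ny, nz])
    v = np.array([0.0, 0.0, 1.0])
    return 2.0 * np.dot(n, v) * n - v            # unit vector toward the light

# The camera-facing hemisphere of light directions maps onto roughly the central
# 70% of the sphere's diameter, so a 250-px sphere gives ~175-180 px across that
# region (about 1 degree of incident angle per pixel of highlight movement near
# the centre), while a 125-px sphere gives roughly 2 degrees per pixel.
if __name__ == "__main__":
    print(light_direction(dx=30.0, dy=0.0, radius_px=125.0))
```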

If your purpose is to use your RTI for visual, interpretive purposes, there will likely be no perceptible difference between an angular resolution of 1° and 2°. If, however, your purpose is to refine or generate a three-dimensional surface, the angular resolution of the incident light source and the resulting surface normals contribute significantly to the accuracy of the 3-D surface. As we cannot foresee how the documentation we produce today will be re-purposed by others in the future, we suggest capturing the highest quality data when practical.

 

--Mark
 

