Everything posted by GOKConservator

  1. Great advice, George, and great question, Sian. Would UV-induced autofluorescence photogrammetry, shooting in the visible, also be useful? I've done this with RTI: because there is no specular component, the surface behaves as if it were totally Lambertian and the assembled normal map comes out pretty flat. This also works in photometric stereo using Matlab.
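For anyone curious about the photometric stereo step mentioned above: under the Lambertian assumption, each pixel's intensity under a known light direction is albedo times the dot product of light and normal, so with three or more light directions the normal falls out of a per-pixel least-squares solve. A minimal sketch in Python/NumPy (the function name and array shapes are my own, not from any particular toolkit):

```python
import numpy as np

def photometric_stereo(images, lights):
    """Estimate per-pixel surface normals from a Lambertian surface.

    images: (k, h, w) float array, one grayscale image per light position
    lights: (k, 3) array of unit light-direction vectors
    Returns (3, h, w) unit normals and an (h, w) albedo map.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                        # (k, h*w) intensities
    # Solve lights @ G = I in the least-squares sense; G is albedo-scaled normals
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)           # normalize to unit length
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

For a genuinely Lambertian fluorescence response this recovers normals directly; shadowed or clipped pixels would need masking in practice.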
  2. Carla - Likewise, with the advent of wireless in all the collection and conservation workspaces, I have started sending all my captures directly to our image server and bulk processing the DNG and assembly jpegs there, rather than on my local disc. In building an HSH-filtered RTI, I can get everything done right up to the final build. I suspect that the pathway to the server is the issue but not certain. There are no spaces or special characters in the file pathway. Can you confirm that the build needs to occur with the jpegs and processing files on a local hard drive?
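Until someone can confirm whether the final build requires local storage, one cheap way to test the hypothesis is to copy the assembly files from the server share to a local temp folder just before the final build step. A throwaway Python sketch (the function name, folder layout, and file extensions are hypothetical, not part of any CHI tool):

```python
import os
import shutil
import tempfile

def stage_locally(server_dir, patterns=(".jpg", ".lp")):
    """Copy assembly files from a network share to a local temp folder,
    so the final HSH build can be pointed at a local path (hypothetical
    workaround; extensions to copy are an assumption)."""
    local_dir = tempfile.mkdtemp(prefix="rti_build_")
    for name in os.listdir(server_dir):
        if name.lower().endswith(patterns):
            shutil.copy2(os.path.join(server_dir, name), local_dir)
    return local_dir
```

If the build succeeds from the temp folder but fails from the share, that would point at the network pathway rather than the files themselves.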
  3. Greg Bearman sent me this reference and I may be the last to know... (!) But this paper - “3D Surface Reconstruction Using Polynomial Texture Mapping” by Mohammed ElFarargy, Amr Rizq and Marwa Rahswan, http://bit.ly/1k7Xyoe - from G. Bebis et al. (eds.), "Advances in Visual Computing", Lecture Notes in Computer Science (LNCS), Vols. 8033-8034, 9th International Symposium, ISVC 2013, Rethymnon, Crete, Greece, July 29-31, 2013 - SEEMS to offer a reliable pathway towards automated comparison of chronologically separated RTIs for the discovery and tracking of morphological changes in heritage materials and works of art. By generating displacement (height) maps from RTIs, iteratively improving contrast in the surface normal data extracted from the RGB values, the investigators were able to generate TRUE 3D surface models. If calibrated and distortion-corrected images are used to assemble the RTIs, these models are accurate and precise. If I'm reading between the lines correctly, this means that regardless of the alignment (tip, tilt or rotation) of the work in the capture images, the 3D models should have sufficient precision to allow for automated morphological comparison! Anybody else working on this? I am hopeful that Greg Williamson and I can run our data sets through the iterative algorithms and check for precision. Thoughts?
  4. A set of RTI still images of typical lead-soap micro-protrusions seen in commercially primed Belgian canvases used by O'Keeffe are now posted on my Google+ page: https://plus.google.com/u/0/+DaleKronkright/posts Color, specular and normal vector visualizations. Enjoy!
  5. Carla and Sema - Great explanation! I'm just viewing an HSH of an O'Keeffe oil painting on canvas with numerous changes in normal vector: high ridges of paint, lifted cracks, dents, and of course, the edges falling away at the tacking edges. The representation of these changes in vector direction is fantastic. At 200x, the differences between the vector changes of the canvas twill-weave pattern and O'Keeffe's subtle brushwork and paint textures are not simply value changes, they are COLOR changes. This makes even the most subtle crack, buckle and crease clearly documented. Thanks so much for this great feature in the new reader. Likewise, THANKS for the light, zoom and pan values and the fantastic bookmarking functions. These make metadata capture and interpretation discussions so much more agile! Great work and congratulations to the team!
  6. Wow! Nice work Marlin, et al!! This is very cool. Next step - embedded videos! Thank you all for putting together such a thoughtful and complete guide.
  7. Carla, Mark, Marlin and Community – I’ve some questions about reflective spheres and optimal normal reflection vector precision (that is, the repeatability) and accuracy (how closely measured normal reflection vector values are to the actual values). The CHI “Guide to Highlight Image Capture, 2.0” (http://culturalheritageimaging.org/What_We_Offer/Downloads/RTI_Hlt_Capture_Guide_v2_0.pdf) explains that: “Depending on the size and portability of the target object, you must compose the camera’s field of view so it can encompass both the object and two reflecting spheres of an appropriate size. The spheres should have a diameter of at least 250 pixels in the resulting photograph.” (Pg. 3, Target Object with Reflective Spheres.)
THE SET-UP – As a practical example, let’s say I’m capturing a 36” wide x 24” high painting using a 50mm lens. With space on either side of the frame for the spheres to be mounted so that they do not cast a shadow onto the canvas during the 15° flash positions, the total width of the frame area is roughly 45”. With my 5D Mark II and a 50mm lens, shooting the captures in RAW, I get photos with a total frame size of 5616 horizontal x 3744 vertical pixels, a 21.0 megapixel file. That equates to roughly 125 pixels per inch on the canvas. Let’s assume that when I manufacture the assembly jpegs, I first correct for the distortion around the outer areas of the lens frame so that every pixel is metrological. A one-inch reflective sphere is 125 pixels in this set-up – HALF the recommended pixel diameter. In my experience, it really takes about 20 pixels, MINIMUM, to resolve a condition I am interested in documenting so that I may track its changes accurately. So in reality the smallest features that will resolve clearly in this composition are about 1/8th of an inch in diameter, about 3.2 mm. Let’s assume that, from a qualitative standpoint, I’m happy with that resolution.
THE QUESTIONS – Normal reflection vectors from the HSH assembly code are calculated from the brightest-to-darkest RGB values, where each light source direction is taken as the mirror reflection of the highlight position on the spherical surface. To what degree am I decreasing accuracy, and introducing extra variability or noise into the processing of normal reflection vectors, by having a reflective sphere only HALF the recommended size? Do the HSH algorithms require a nearly 250-pixel-diameter hemisphere to accurately calculate the light sources and inverted reflection vectors? How much does variability (precision) depend upon having a minimum 250-pixel hemisphere? My guess is that the 250-pixel recommendation is based upon some optimization tests. But if nobody knows, perhaps I should gather that data? Thanks – Dale Kronkright (GOKConservator), Head of Conservation, Georgia O’Keeffe Museum
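While waiting on an authoritative answer, the geometry itself is easy to sketch. Under an orthographic (distant-camera) assumption, the surface normal at the highlight comes straight from the highlight's pixel offset from the sphere center, and the light direction is the mirror reflection of the view ray about that normal. It also suggests why diameter matters: a highlight localization error of e pixels perturbs the recovered normal by roughly e/r radians near the sphere center, so halving the sphere radius roughly doubles the angular noise for the same centroid error. A sketch (coordinate conventions are my assumption; real builders may flip the image y-axis):

```python
import numpy as np

def light_from_highlight(cx, cy, r, hx, hy):
    """Recover a unit light direction from a specular highlight on a sphere.

    (cx, cy): sphere center in pixels; r: sphere radius in pixels;
    (hx, hy): highlight centroid in pixels. Assumes an orthographic view
    along +z (an idealization; not any specific builder's convention).
    """
    nx, ny = (hx - cx) / r, (hy - cy) / r
    nz = np.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    n = np.array([nx, ny, nz])                 # unit normal at the highlight
    v = np.array([0.0, 0.0, 1.0])              # view direction toward camera
    L = 2.0 * np.dot(n, v) * n - v             # mirror reflection of view ray
    return L / np.linalg.norm(L)
```

Running this with r = 125 versus r = 250 and a fixed +/-1 pixel jitter on (hx, hy) would be one way to gather exactly the optimization data the post asks about.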
  8. Just a note - we had the very same issue with a 64 bit PC running windows 7. The solution mentioned below - making a shortcut directly to the RTIbuilder.jar file, bypassing the launchers entirely, works perfectly! Many thanks!
  9. Two remote sensing articles in the Spiegel Online International “Picture This” feature: underwater photogrammetry and light detection and ranging (LIDAR/LADAR), with great images! Florian Huber of the University of Kiel's Institute of Prehistoric and Protohistoric Archaeology, documenting Yucatan’s cenotes using underwater photogrammetry: http://www.spiegel.de/international/zeitgeist/german-archaeologists-explore-the-mysterious-cenotes-of-mexico-a-869940.html And Axel Posluschny of Archaeolandscapes Europe (ArcLand), which operates under the Roman-Germanic Commission of the German Archaeological Institute, participating in the €5 million undertaking to increase the archaeological use of modern remote-sensing technology such as LIDAR, ground-penetrating radar and other electric and magnetic techniques: http://www.spiegel.de/international/zeitgeist/remote-scanning-techniques-revolutionize-archaeology-a-846793.html
  10. On the LinkedIn discussion group Cultural Heritage Conservation Science. Research and practice’s discussion on 3-D digital imaging and photogrammetry for scientific documentation of heritage sites and collections http://linkd.in/RZMpFj , Greg Bearman posed the following question: “Does RTI give a repeatable and quantitative set of normals good enough for looking for change? If I take an RTI set, rotate the object, let it warp a bit (flexible substrate), what do I get the second time? How do I align the datasets for comparison? What is the system uncertainty? I.e., if I just take repeated images of the same object without moving anything, how well does the RTI data line up? Second, suppose I take something with some topography but that is totally inflexible and cannot distort (make up a test object here!) and I do repeated RTI on it in different orientations? Can I make the data all the same? If you are going to use an imaging method to determine changes in an object, the first thing to do is understand what the inherent noise and uncertainty in the measuring system is. It could be some combination of software, camera or inherent issues with the method itself.” I wrote back: “Hey Greg - I tried sending a response earlier last week but I do not see it!? Sorry. I'm on vacation until the 22nd - trying to recover and recharge. It is going well, but I wanted to jot down my initial thoughts. One of my interns - Greg Williamson - is working on aberration recognition software that can recognize and highlight changes in condition captured by different H-RTI computational image assemblies - obviously taken at different times, but also with different equipment and with randomly different highlight flash positions.
It seems, initially, that normal reflection is normal reflection, regardless of object or flash position, and that the software correctly interpolates 3D positions of surface characteristics regardless of the precise position of the flash, because it is accustomed to calculating the highlights both at the capture points and everywhere in between! Likewise, we have had promising results with photogrammetry when the resolution of the images used to create the mesh and solids is similar. What may turn out to be key is a calibration set that will allow correction of the various lens distortions that would naturally come from different lenses. I know Mark Mudge at Cultural Heritage Imaging has suggested that we begin taking a calibration set before RTI capture, as we do before photogrammetry. He may be working on incorporating a calibration correction into the highlight RTI Builder that CHI has made available. I'm sending this discussion along to the CHI forum at http://forums.cultur...ageimaging.org/ to see what others might have to add. When I return to work, I'll ask Greg to give this some additional thought.” Forum members: any thoughts?
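On Greg's "how well does the RTI data line up" question: once two captures are co-registered, the per-pixel comparison itself is simple - the angle between corresponding unit normals, whose distribution over an unchanged object is exactly the system noise floor he asks about. A sketch (registration, the genuinely hard part, is not shown; array layout is my assumption):

```python
import numpy as np

def normal_angle_diff(n1, n2):
    """Per-pixel angular difference, in degrees, between two co-registered
    (h, w, 3) unit-normal maps. Values over an unchanged, rigid test object
    estimate the measurement system's inherent noise."""
    dot = np.clip(np.sum(n1 * n2, axis=-1), -1.0, 1.0)  # cos of the angle
    return np.degrees(np.arccos(dot))
```

A histogram of these angles from repeated captures of a rigid test object, with nothing moved, would be a direct measurement of the uncertainty Greg describes.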
  11. Everything up to final PTM fitting works. Highlight recognition works, cropping works. When we do the final execute, about 5 minutes go by (on a PC) and then we get the "unknown error" message. The PTM fitter log has the following in the dialog box:

Polynomial Texture Map (PTM) Fitter
Copyright Hewlett-Packard Company 2001. All rights reserved.
See included readme and license files.
C:\Documents and Settings\Administrator\My Documents\RTI - CHI\Software\Windows-software\PTMViewer-and-fitter-windows\PTMfitter.exe
usage:
  -i filename    Full filename for lp file specifing input files and light positions.
  -PTM <path>/<file.ptm> -o <path>/<file.ptm>    Output file name.
  -RGB | -LRGB -f format (format either 0 for RGB or 1 for LRGB)
                 Create either an RGB or LRGB PTM. (Default: LRGB)
  -BIVARIATE | -UNIVARIATE -b basis (basis either 0 for biquadratic or 1 for univariate)
                 Calculate a least squares fit of one independent variable: -UNIVARIATE
                 or two independent variables: -BIVARIATE (Default: BIVARIATE)
  -version       Prints software version
  -h             List command line options

Any ideas?