GOKConservator

Does RTI give repeatable and reliable normals of objects taken at different times and positions to facilitate detection of changes?


In the LinkedIn group Cultural Heritage Conservation Science. Research and practice, in a discussion on 3-D digital imaging and photogrammetry for scientific documentation of heritage sites and collections (http://linkd.in/RZMpFj), Greg Bearman posed the following question:

 

“Does RTI give a repeatable and quantitative set of normals good enough for looking for change? If I take an RTI set, rotate the object, and let it warp a bit (flexible substrate), what do I get the second time? How do I align the datasets for comparison?

 

What is the system uncertainty? I.e., if I just take repeated images of the same object without moving anything, how well does the RTI data line up? Second, suppose I take something that has some topography but is totally inflexible and cannot distort (make up a test object here!), and I do repeated RTI on it in different orientations. Can I make the data all the same? If you are going to use an imaging method to determine changes in an object, the first thing to do is understand the inherent noise and uncertainty in the measuring system. It could be some combination of software, camera, or inherent issues with the method itself.”
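A first pass at the "system uncertainty" question can be made numerically once two captures have been decoded to per-pixel normals: register them, then look at the distribution of per-pixel angular differences. A minimal numpy sketch (the function name and array shapes are illustrative, and it assumes the two maps are already pixel-registered):

```python
import numpy as np

def angular_difference_deg(n1, n2):
    """Per-pixel angle (degrees) between two unit-normal maps of shape (H, W, 3)."""
    # Normalize defensively in case the maps are not exactly unit length.
    n1 = n1 / np.linalg.norm(n1, axis=-1, keepdims=True)
    n2 = n2 / np.linalg.norm(n2, axis=-1, keepdims=True)
    dot = np.clip(np.sum(n1 * n2, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(dot))

# Repeated captures of an unmoved object: the median and 95th percentile of
# this error map would characterize the inherent noise of the whole pipeline.
flat = np.zeros((4, 4, 3)); flat[..., 2] = 1.0   # all normals pointing at +Z
err = angular_difference_deg(flat, flat)
```

With repeated captures of an unmoved object, the statistics of `err` are exactly the noise floor Greg asks about; anything a change-detection method reports below that floor is not evidence of change.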

 

I wrote back: “Hey Greg - I tried sending a response earlier last week, but I do not see it!? Sorry. I'm on vacation until the 22nd - trying to recover and recharge. It is going well, but I wanted to jot down my initial thoughts.

One of my interns - Greg Williamson - is working on aberration-recognition software that can recognize and highlight changes in condition captured by different H-RTI computational image assemblies - obviously taken at different times, but also with different equipment and with randomly different highlight flash positions. It seems, initially, that normal reflection is normal reflection, regardless of object or flash position, and that the software correctly interpolates 3D positions of surface characteristics regardless of the precise position of the flash, because it is accustomed to calculating the highlights both at the capture points and everywhere in between! Likewise, we have had promising results with photogrammetry when the resolution of the images used to create the mesh and solids is similar.

What may turn out to be key is a calibration set that will allow correction of the various lens distortions that would naturally come from different lenses. I know Mark Mudge at Cultural Heritage Imaging has suggested that we begin taking a calibration set before RTI capture, as we have before photogrammetry. He may be working on incorporating a calibration correction into the highlight RTI Builder that CHI has made available.

I'm sending this discussion along to the CHI forum at http://forums.cultur...ageimaging.org/ to see what others might have to add. When I return to work, I'll ask Greg to give this some additional thought.”
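For reference, the lens-distortion correction such a calibration set would feed is usually a radial (Brown-Conrady) polynomial fitted per lens. A hypothetical numpy sketch of the forward model (parameter names are mine); note that actually removing distortion inverts this map, typically iteratively:

```python
import numpy as np

def distort_points(pts, k1, k2, cx, cy, f):
    """Apply a two-term Brown-Conrady radial distortion model to ideal
    (undistorted) pixel coordinates. Coefficients k1, k2 come from a
    calibration set; (cx, cy) is the principal point, f the focal length
    in pixels. pts is an (N, 2) array."""
    x = (pts[:, 0] - cx) / f            # normalized image coordinates
    y = (pts[:, 1] - cy) / f
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return np.stack([x * scale * f + cx, y * scale * f + cy], axis=1)
```

Comparing normals from two different lenses without first undoing this warp would show spurious "change" that is really just optics.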

 

Forum Members: any thoughts?


It would seem to me that the best way to accomplish change detection quantitatively would be to use photogrammetry. At present I know of no tools for comparing sets of surface normals generated by RTI (provided they are even registered texel for texel) before and after a particular conservation treatment. In theory, if one of the images used for the photogrammetry capture were also one of the images used for the RTI capture (i.e., you did an RTI at one of the positions where you captured a stereo pair), the resulting epipolar image (one corrected for the position of the camera and the distortion of the lens) from the photogrammetry could be used to match the texels of the RTI to 3D points in space. I know Diego Nehab put some scripts up online that he used in his 2005 paper, where he used photometric stereo to correct the normals on range maps, but I don't think they would be usable "out of the box" to do this. I can explain further if you want. Just my two cents...
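To make the texel-to-3D matching concrete: if the shared image has been rectified and the photogrammetry yields a per-pixel depth map in that view, each texel back-projects through a pinhole camera model to a point in camera space. A sketch under those assumptions (the names are mine, not from any RTI or photogrammetry tool):

```python
import numpy as np

def texels_to_points(depth, f, cx, cy):
    """Back-project a depth map (one depth value per texel, e.g. from dense
    photogrammetry) to 3D points in the camera frame of the shared image.

    depth: (H, W) array of depths along the optical axis;
    f: focal length in pixels; (cx, cy): principal point.
    Returns an (H, W, 3) array of XYZ coordinates, so each RTI texel
    acquires a 3D position that can be compared across sessions.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / f * depth
    y = (v - cy) / f * depth
    return np.stack([x, y, depth], axis=-1)
```

Once both sessions' texels carry 3D coordinates, before/after comparison becomes a point-cloud registration problem rather than a pixel-alignment one.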

 

This is a very complicated problem and one that the community should discuss at length. It would be great if we had freely available code for extracting the numbers from an RTI, i.e., a tool that would convert the RTI to an array of surface normal vectors that we could then process quantitatively. I've done this a bit in Matlab with mixed success. The open-source statistical computing language R might be the best package with which to create such a tool.
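For the PTM flavor of RTI, at least, the conversion to normals is well defined: the fitted biquadratic luminance function peaks in the estimated normal direction, as in Malzbender et al.'s original PTM paper. A numpy sketch, assuming the six per-pixel coefficients have already been read out of the file (the function name is illustrative):

```python
import numpy as np

def ptm_normals(a):
    """Estimate per-pixel surface normals from PTM luminance coefficients.

    a: (H, W, 6) coefficients of
       L(u, v) = a0*u^2 + a1*v^2 + a2*u*v + a3*u + a4*v + a5,
    where (u, v) is the projected light direction. Setting the gradient of
    L to zero gives the light direction of maximum luminance, taken as the
    normal's (x, y); z follows from unit length.
    """
    a0, a1, a2, a3, a4, a5 = np.moveaxis(a, -1, 0)
    denom = 4.0 * a0 * a1 - a2 ** 2
    u0 = (a2 * a4 - 2.0 * a1 * a3) / denom
    v0 = (a2 * a3 - 2.0 * a0 * a4) / denom
    w = np.sqrt(np.clip(1.0 - u0 ** 2 - v0 ** 2, 0.0, None))
    return np.stack([u0, v0, w], axis=-1)
```

Something this small could be the seed of the freely available tool described above, whatever language it ends up in.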


This is a great topic that I'm going to follow closely. I'm sure the questions of error measurement and reproducibility will be studied intensively as RTI becomes more widely adopted in the field of forensics, as well as conservation. I just wanted to mention that Mark and Carla's plenary address at the 2012 CAA conference in Southampton presented a good overview of the uses of RTI, photogrammetry, and quantitative multispectral imaging to monitor spectral and physical changes over time (see the video here: http://vimeo.com/39497935).

The subject of multispectral monitoring to assess pigment changes begins at about 34:30, and the discussion of motion tracking using the CARE tools begins at about 39:00.


This is a big topic with a lot of aspects to it. I agree that for measurability photogrammetry is a good way to go. I'll add, though, that "monitoring change" in a conservation environment may be done through visual inspection of RTIs captured at different times. For example, in the case of material we worked on with Dale at the O'Keeffe Museum, we were able to detect in the paint surface evidence of soaps that form and move through the paint layers in some situations. These would be really hard to detect with any other method. I do think that you could image that painting again later, say after it had gone out on loan, and look to see whether the soaps are in the same relative location or whether they have moved or changed in size or shape. I get that one goal here is to do this quantitatively, but my point is that there are situations where visual inspection using RTIs can be very effective, where other methods may not pick up the detail you are tracking.

 

As George points out, there are a lot of tricky, complicated issues, even with just aligning photographs taken at different times, and even without getting into all the other issues of different wavelengths, RTI, 3D, etc. There are many approaches to aligning image data, and several teams we know of are working on this problem from slightly different angles, to solve specific problems they are facing. I also know that the team at Princeton has used normal fields to do pattern matching for various problems. The pattern matching is fairly specific to the data sets at hand and detects certain kinds of patterns they expect within that data.
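As one small illustration of an alignment building block: for captures that differ only by a translation, FFT phase correlation recovers the shift directly. A minimal numpy sketch (real captures would also need rotation, scale, and lens-distortion handling, which is where the hard work lives):

```python
import numpy as np

def translation_offset(img_a, img_b):
    """Estimate the integer (dy, dx) shift between two same-size grayscale
    images by FFT phase correlation, i.e. img_b ~= img_a shifted by (dy, dx)."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross = fb * np.conj(fa)
    cross /= np.abs(cross) + 1e-12        # whiten to a pure phase term
    corr = np.fft.ifft2(cross).real       # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = img_a.shape
    # Peaks past the midpoint wrap around to negative shifts.
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)
```

The same correlation idea extends to normal fields by correlating a derived scalar channel (e.g. the z component), which may be more stable than raw intensity when the lighting differs between sessions.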

 

As for kicking out a normal field from one of the viewers, that is something we are looking at doing because it is useful for a variety of things. I know there are some researchers who have code to do this now, but it's likely to be command line, or use Matlab, or have other reasons to be unsuitable for broad distribution. No specific plan or date on this yet, but we'll post once we get to that stage.
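Once a viewer can hand the normals back as an array, export is mostly a packing question; the common convention maps components in [-1, 1] into 8-bit RGB so the field travels as an ordinary image. A sketch with illustrative function names (quantization costs roughly 1/255 per channel, which matters if the exported map is meant for measurement rather than display):

```python
import numpy as np

def normals_to_rgb(normals):
    """Pack unit normals in [-1, 1] into 8-bit RGB, the usual normal-map
    convention, so the field can be saved as an ordinary image."""
    return np.clip((normals + 1.0) * 0.5 * 255.0, 0, 255).astype(np.uint8)

def rgb_to_normals(rgb):
    """Invert the packing and re-normalize to undo quantization drift."""
    n = rgb.astype(np.float64) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

For quantitative comparison work a float format (e.g. TIFF or plain arrays) would avoid the quantization entirely; 8-bit export is mainly for interchange and visualization.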

