
Possible 3D Modeling Pathway for Automated Change-Detection of Chronologically Separated RTIs



Greg Bearman sent me this reference and I may be the last to know... (!)

But this paper - "3D Surface Reconstruction Using Polynomial Texture Mapping" by Mohammed Elfarargy, Amr Rizq and Marwa Rashwan, http://bit.ly/1k7Xyoe, in G. Bebis et al. (eds.), "Advances in Visual Computing", 9th International Symposium, ISVC 2013, Rethymnon, Crete, Greece, July 29-31, 2013, Lecture Notes in Computer Science (LNCS) vols. 8033-8034 - SEEMS to offer a reliable pathway toward automated comparison of chronologically separated RTIs for the discovery and tracking of morphological changes in heritage materials and works of art.

By generating displacement (height) maps from RTIs - iteratively improving contrast between the surface normal data extracted from the RGB values - the investigators were able to generate TRUE 3D surface models. If calibrated, distortion-corrected images are used to assemble the RTIs, these models are accurate and precise. If I'm reading between the lines correctly, this means that regardless of the alignment (tip, tilt or rotation) of the work in the capture images, the 3D models should have sufficient precision to allow automated morphological comparison!

Anybody else working on this? I am hopeful that Greg Williamson and I can run our data sets through the iterative algorithms and check for precision. Thoughts?
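For anyone who wants to experiment, the core idea - integrating a field of surface normals into a relative height map - can be sketched in a few lines. Note this is a generic Frankot-Chellappa frequency-domain integration, not the paper's specific iterative algorithm; the function and variable names are mine.

```python
# Minimal sketch (NOT the paper's algorithm): recover a relative height
# map from an RTI-derived unit-normal map via frequency-domain integration.
import numpy as np

def normals_to_height(normals):
    """Integrate a (H, W, 3) unit-normal map into a relative height map.

    Surface gradients p = dz/dx and q = dz/dy follow from the normals;
    integration in the Fourier domain enforces an integrable surface.
    """
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    nz = np.where(np.abs(nz) < 1e-8, 1e-8, nz)    # avoid divide-by-zero
    p, q = -nx / nz, -ny / nz                     # surface gradients

    h, w = p.shape
    wx = np.fft.fftfreq(w) * 2 * np.pi            # angular frequency grids
    wy = np.fft.fftfreq(h) * 2 * np.pi
    WX, WY = np.meshgrid(wx, wy)

    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = WX**2 + WY**2
    denom[0, 0] = 1.0                             # avoid 0/0 at DC
    Z = (-1j * WX * P - 1j * WY * Q) / denom
    Z[0, 0] = 0.0                                 # absolute height is arbitrary
    return np.real(np.fft.ifft2(Z))
```

The output is height up to an unknown offset (and scale, unless the pixel size is known from a calibrated capture), which is why the calibration point in the paper matters so much.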



Dale -

 

Thanks for posting!  I had the pleasure of meeting Mohammed last month at the RTI meeting in Cyprus.  He is indeed doing some interesting work.

 

I think that using the calculated surface normals directly - for comparison of change - still makes sense.  When the data are converted to a PLY, there are known issues (specifically, warping).  In Mohammed's initial work he was using surface normals from PTM files, and we know those are less accurate and less repeatable than HSH.  He thought he could add code to work with HSH-produced files.  The folks we are working with at Princeton and at Simon Fraser also have code to convert normal maps to 3D surfaces, and not all are using the same approach.

The nice thing about Mohammed's work is that he integrated it into the RTIViewer, so users can play with it more easily.  It can help a user visualize a surface and tell whether a certain area is convex or concave (not always easy to tell), but I think the normal maps are still a more reliable data set at this point.
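For what it's worth, the direct normal-map comparison Carla describes can be prototyped very simply, assuming the two maps are already registered pixel-to-pixel. The function name and the 5-degree threshold below are illustrative choices, not part of any published workflow:

```python
# Sketch: per-pixel angular change between two registered unit-normal maps.
import numpy as np

def normal_angle_diff(n1, n2):
    """Angle in degrees between corresponding normals of two (H, W, 3) maps."""
    dot = np.clip(np.sum(n1 * n2, axis=-1), -1.0, 1.0)  # clip for arccos safety
    return np.degrees(np.arccos(dot))

# Example: flag pixels whose normals rotated by more than 5 degrees
# between two captures (threshold is arbitrary and capture-dependent).
# changed = normal_angle_diff(before, after) > 5.0
```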

 

There is additional discussion of converting normal maps to 3D in a couple of other threads.  Most recently, see Tom Malzbender's (and others') notes in this topic:

http://forums.culturalheritageimaging.org/index.php?/topic/301-contouring-rti-data/&do=findComment&comment=846


Good topic, Dale and Carla.  The master's thesis of Mingjing Zhang that Carla pointed to a little while ago on the CHI blog also discusses a method to improve the accuracy of normal maps and reconstruct a 3D model by reprocessing the photometric stereo data.  Automating comparisons over time intervals would be a big benefit.

 

I'm also interested in the idea of applying lens corrections to the images captured for RTI before processing the normal maps.  I've got a lens calibration from a 3D model generated by photogrammetry (PhotoScan).  I've aligned one of the images from each of my RTI capture sequences with the 3D model, so I can export an undistorted image from PhotoScan, but only for the image that's aligned with the model.  Can I apply the same lens calibration to all of the RTI captures using Lightroom, Photoshop, or other image-processing software?  Of course, the lens calibration is only as good as the 3D model, so optimization and processing of the photogrammetry is another potential source of error.
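For reference, the distortion model that photogrammetry packages such as PhotoScan fit is the Brown-Conrady model (radial terms k1-k3 plus decentering terms p1/p2), and applying the same correction to every image in an RTI set is reasonable as long as focus and zoom stayed fixed across the sequence. A minimal numpy sketch of the forward model (naming is mine; undistorting an image means inverting this mapping per pixel, usually via a remap lookup table):

```python
# Sketch of the Brown-Conrady distortion model on normalized image
# coordinates. Calibration fits these coefficients; undistortion
# numerically inverts the mapping for every pixel.
import numpy as np

def brown_distort(xy, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Map ideal normalized coords (N, 2) to distorted coords (N, 2)."""
    x, y = xy[:, 0], xy[:, 1]
    r2 = x**2 + y**2                              # squared radius
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    yd = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return np.stack([xd, yd], axis=1)
```

With all coefficients zero the mapping is the identity, which is a handy sanity check before plugging in exported calibration values.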

 

Also, I know the normals from some of my RTI captures have errors because of shadowing, specularity of the varnish, etc.  I'd be interested if there's a way to improve the accuracy of both the normal maps and the 3D model by iteratively comparing and/or combining them.

 

Finally, a method of stitching together overlapping RTIs for objects that are too large to capture in a single RTI would be really useful.  Where the RTIs overlap, a way to reconcile the slight differences in the normal maps (some type of statistical averaging?) is needed.
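As a starting point for the "statistical averaging" idea, blending the registered unit normals in the overlap region and renormalizing is a reasonable first approximation when the two maps disagree only slightly. A sketch under that assumption (function and parameter names are mine):

```python
# Sketch: reconcile two registered normal maps in an overlap region by
# weighted averaging of the unit vectors, then renormalizing. Sensible
# only when the per-pixel disagreement is small.
import numpy as np

def blend_normals(n1, n2, w1=0.5):
    """Weighted blend of two (H, W, 3) unit-normal maps, renormalized."""
    blended = w1 * n1 + (1.0 - w1) * n2
    norm = np.linalg.norm(blended, axis=-1, keepdims=True)
    return blended / np.where(norm < 1e-8, 1.0, norm)  # guard degenerate pixels
```

In practice one would feather w1 from 1 to 0 across the overlap so the seam is invisible, the same way panorama stitchers blend color.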


Taylor, 

 

I'm not sure the camera calibration parameters used in the thick-lens model will easily translate to the still fairly simple lens-correction tools in Photoshop. For instance, I don't think the principal point, decentering distortion, or in-plane distortion are modeled in the Photoshop tool. You could do what you are proposing in Matlab, and possibly in Panotools as well. You'll need to export the PhotoScan calibration into another format, since PhotoScan expresses all of its calibration parameters in pixels (UV coordinates), not in X/Y units, which are the ones used in the system of equations commonly used for photogrammetric distortion correction. I wonder whether DxO Optics Pro can import the standard photogrammetric parameters? In CalibCam I can export undistorted RTI image stacks for measurement, but I am unsure what tools to chain together to accomplish this with PhotoScan.
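On the units question: converting a focal length and principal point from pixels to millimetres is just a multiplication by the sensor's pixel pitch, though sign and origin conventions differ between tools. A hedged sketch (the centre-offset convention for the principal point below is an assumption; verify it against the target tool's manual, and note the distortion coefficients themselves may need separate rescaling):

```python
# Sketch: convert focal length and principal point from pixel units to
# millimetres, given the sensor's pixel pitch. The principal point is
# returned as an offset from the image centre - a common photogrammetric
# convention, but NOT guaranteed to match every package.
def calib_px_to_mm(f_px, cx_px, cy_px, width_px, height_px, pitch_mm):
    """Return (focal_mm, x0_mm, y0_mm) from pixel-based calibration values."""
    f_mm = f_px * pitch_mm
    x0_mm = (cx_px - width_px / 2.0) * pitch_mm   # offset from image centre
    y0_mm = (cy_px - height_px / 2.0) * pitch_mm
    return f_mm, x0_mm, y0_mm
```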


Taylor,

 

Once you have the camera calibration in PhotoScan for one photo of an RTI set, simply export it from the camera calibration dialog - make sure you have picked the Adjusted tab. Then load all the photos of the RTI set into a brand-new PhotoScan project, go to the camera calibration dialog, and load the calibration you just saved. Then export all the undistorted photos. They do not need to be processed in any other way; simply loading a camera calibration for a set of photos means it will be applied during export.

 

Don't make it harder than it needs to be.

 

Tom


Archived

This topic is now archived and is closed to further replies.
