Tom

CARE tool and surface normals


Is the CARE tool available yet? I'm struggling to produce a clear image of a very crude engraving on a piece of stone, and since the lines are fairly neat yet at different depths (the stone is 'stepped') I'm hoping some algorithmic rendering may well help me to get some decent images. I'd like to give it a shot, anyway.

 

Also - is there also a tool for Macs that will produce an image of the surface normals? I know the original PTM viewer could do this, but it no longer seems to work in OS X 10.8 ("bad cpu type in executable" - presumably it requires Rosetta).

 

Thanks,

 

Tom Goskar


Hi Tom,

 

We don't have a CARE tool to share yet.  We have been working on various aspects of the whole pipeline, and it will probably be another year before we have something to release for that.

 

As for the surface normal visualization, you are correct that saving this file works in the PC version of the PTM viewer but not in the Mac version. That software is from HP, and the code isn't publicly available; the Mac version can display the normals on screen, but saving them was never implemented. However, we are working to add this feature in a small update to the RTIViewer, which will support exporting the normal vectors for both PTM and HSH RTIs. We had a small budget for some bug fixes in that software and were able to add this. We have a long list of features we would like to implement, but those will require more funding. I don't have a release date for the RTIViewer update yet, but I'm hoping we can get it out in April.

 

Carla


Hello,

I am a conservation student and came across RTI about a year ago. Immediately fascinated by the possibilities and the easy processing, I told many of my fellow students and professors about it. To my surprise, none of them had ever heard of it before, but I was asked to change that by giving a presentation on the technique. I am very happy to do so, as I think it is absolutely necessary to know how to make an RTI: it is such an easy method for finding out more about the objects we work with. (Sorry, just a quick introduction, because I am new to this forum.)

I have been experimenting a lot with RTIs and have already used them for various problems, and I kept coming across the need for a normal map, so I was very happy to find this post: I was desperately searching for a way to extract normal maps from the files I created. For example, I tried to compare mockups before and after changing the relative humidity, or after scratching the surface, using RTIs and Photoshop filters.

What I always found curious was that, for example, the printed letters and dark checkers on the scale, or any area with high contrast, appeared to be "higher", although I could not imagine that the tiny amount of ink on the paper would be detected, especially bearing in mind the size of the area covered in the image (I just cannot imagine the resolution being THAT good). After reading this post and extracting the normal maps from the PTM viewer, I saw that the phenomenon already occurs in these. So I am a bit confused: are the high areas really "higher", or, to be more precise, facing in another direction? Or is it not possible to completely separate the surface information from the brightness of the scale, the writing, and the colour card (or the object itself)?

Thanks,

Alex

 

PS: Sorry if I do not know all the proper expressions. I hope you understand what I am wondering about anyway  :)

color_card.jpeg


This is a very interesting issue! Do you think you could send out the "blend image" of the sphere(s) for the above RTI so we could get a sense of the light distribution?

 

Quantitative RTI is really only in its infancy. I'm hoping to write some code in the next few months to bring PTM files into R and analyze the normals with directional statistics using CircStats (http://cran.r-project.org/web/packages/CircStats/index.html). If we can develop a library in an open-source package like R for this kind of analysis, it will be much more accessible to the community than code in Matlab. What I have already noticed when extracting normals from PTMs in Matlab is how often the parameters fail to fit, especially in cases of self-shadowing. It would be very interesting to see how colour affects the fitting of normals in PTM; it's not something I'd given much thought to.
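For anyone wanting to try this outside Matlab, the standard PTM normal estimate simply takes the light direction that maximises the fitted biquadratic luminance (this is the method from Malzbender et al.'s original PTM paper). A minimal sketch in Python with numpy, standing in for R or Matlab; the coefficient ordering a0..a5 follows the PTM file format:

```python
import numpy as np

def ptm_normal(a):
    """Estimate the surface normal from the six PTM biquadratic
    coefficients a = (a0..a5) of one pixel, where luminance is
    L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5.
    The normal is taken as the light direction that maximises L.
    Returns None when the fit has no usable maximum."""
    a0, a1, a2, a3, a4, a5 = a
    det = 4.0 * a0 * a1 - a2 * a2
    # A proper maximum needs a negative-definite Hessian:
    # a0 < 0 and 4*a0*a1 - a2^2 > 0.
    if a0 >= 0 or det <= 0:
        return None
    lu0 = (a2 * a4 - 2.0 * a1 * a3) / det
    lv0 = (a2 * a3 - 2.0 * a0 * a4) / det
    r2 = lu0 * lu0 + lv0 * lv0
    if r2 > 1.0:      # maximum lies outside the unit disc of
        return None   # valid light directions (common with shadows)
    return np.array([lu0, lv0, np.sqrt(1.0 - r2)])

# Example: coefficients whose luminance peaks straight overhead
print(ptm_normal([-1.0, -1.0, 0.0, 0.0, 0.0, 1.0]))  # -> [0. 0. 1.]
```

The two `return None` branches are exactly where the fit "fails": either the fitted quadratic has no proper maximum, or the maximum falls outside the disc of physically valid light directions, which is common under self-shadowing.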

 

HSH is, of course, far more robust in generating normals, especially in cases of self-shadowing. Is there any published specification for HSH RTIs? I can't seem to find one...


This paper by Lindsay MacDonald contains a very important discussion of the issues of error in the calculation of surface normals with RTI: http://ewic.bcs.org/upload/pdf/ewic_ev11_s8paper4.pdf

What Lindsay shows is that by selecting three of the many light positions used (in his case a dome of 64 lights) and computing surface normals with the photometric stereo algorithm, the resulting normal field was more accurate than the one produced by fitting an RTI to all 64 light positions.
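For readers who haven't met it, the photometric stereo step is just a 3x3 linear solve per pixel under a Lambertian model: the three measured intensities satisfy I_i = albedo * (L_i . n). A minimal Python sketch (the light directions and albedo below are invented for illustration):

```python
import numpy as np

def ps_triplet_normal(I, L):
    """Classic three-light photometric stereo for one Lambertian pixel.
    I : the 3 measured intensities, one per light
    L : 3x3 matrix whose rows are the unit light directions
    Solves I = albedo * (L @ n) for the scaled normal g = albedo * n,
    then splits it into albedo (its length) and the unit normal."""
    g = np.linalg.solve(L, np.asarray(I, dtype=float))
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

# Three tilted lights and a surface facing straight up:
L = np.array([[ 0.5, 0.0, np.sqrt(0.75)],
              [-0.5, 0.0, np.sqrt(0.75)],
              [ 0.0, 0.5, np.sqrt(0.75)]])
n_true = np.array([0.0, 0.0, 1.0])
I = 0.8 * L @ n_true          # synthetic intensities, albedo 0.8
n, rho = ps_triplet_normal(I, L)
```

Choosing a well-spread, shadow-free triplet keeps the matrix L well conditioned, which is the crux of making the triplet method robust when many lamp positions are available to pick from.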

 

The exciting thing about this result is that existing RTI datasets can be recalculated using this technique of photometric-stereo lamp triplets. I'd be interested to see what the normal field looks like when calculated from PS triplets of your dataset above.


George,



thank you for taking the time to think about this issue. Um... what do you mean by "R"? Honestly, this topic appears quite complicated to me, but it might just be all these formulas that frighten me off. Anyway, here is the blend sphere of the PTM used for the normal map I posted above. All of my RTIs are done with a hand-held lamp, so the light patterns are rather irregular.


39184_blend1.jpg

Here are specular enhancement images, normal maps and blend spheres of two other RTIs we did recently. On the scale, the same phenomenon occurred again.

 

adhesive_film.jpeg

 

 

painted_iron.jpeg

 

 

If you PM me your email address, I could also put the original pictures into a Dropbox folder for you to experiment with. The article is very interesting, and the results look really promising. When googling it, it seemed that the PS method can also be used to generate real 3D data. Did I get that right? I guess this would be awesome, as it would mean that two datasets of the same object recorded at different times (e.g. before and after a loan) could be aligned and compared even if the object was not recorded from exactly the same position.



Alex


I find the result of Lindsay MacDonald's paper really interesting, and it is a little surprising that normals calculated using RTI were less accurate than the three other methods evaluated. It appears that in this paper only PTM normals were compared to the triplets of photometric stereo images; I wonder how normals calculated using the more robust HSH fitter would compare?

 

It's very reassuring that the same image sets captured for RTI can also be used to calculate normals from triplets of photometric stereo images, even for large data captures. It's also reassuring that RTI and photometric stereo compared very well with 3D scanning, which is not as practical a method for most users. However, the RTI data sets in this paper were collected with a 64-lamp dome, whereas most practitioners of RTI use the highlight method, with typical capture sequences of as few as 24 images. The paper points to the most likely sources of error for RTI captures under controlled, indoor conditions; in the field, the error bars on the sources Lindsay identified are typically larger.

 

At the recent International Conference on Computational Photography, there was a paper titled Outdoor Photometric Stereo (Yu, Yeung, Tai, Terzopoulos, and Chan) that proposed a method for capturing 3D models of objects using 6-10 images under natural illumination.  I'd like to see a similar analysis of accuracy comparing their method to the techniques in Lindsay's paper. 
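As an aside on how a normal field becomes "3D data": the usual step is to integrate the normals into a relative height map, for example with the Frankot-Chellappa Fourier method. A minimal Python sketch (my own illustration, not code from any of the papers mentioned):

```python
import numpy as np

def integrate_normals(normals):
    """Frankot-Chellappa integration: recover a relative height map
    from a unit-normal field of shape (H, W, 3). Assumes nz is
    bounded away from zero and the field is roughly integrable."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    p, q = -nx / nz, -ny / nz          # surface gradients dz/dx, dz/dy
    h, w = p.shape
    u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                       np.fft.fftfreq(h) * 2 * np.pi)
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                  # avoid division by zero at DC
    Z = (-1j * u * np.fft.fft2(p) - 1j * v * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                      # absolute height is unrecoverable
    return np.real(np.fft.ifft2(Z))
```

Only relative heights come out of this: the DC term (the overall offset) is set to zero, so two height maps of the same object would still need alignment before comparison.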

 

For the inevitable "it would be nice if" comment, in the future it would be nice if the RTI Builder could incorporate the triplet photometric stereo method into the processing pipeline and estimate the accuracy of the normal calculations using different methods in a histogram, similar to Figure 12 in Lindsay's paper.  This would allow the user to select an appropriate method for the particular situation and the purposes for which the documentation is intended to be used.


I'd really love to see exactly what the normals are doing on that scale bar and how much they're deviating from the expected direction. Given that the scale is placed at the edge of the image, there can be problems with the normals. I suspect a big part of the problem was the shadowing from the ball under certain light angles; as I mentioned above, self-shadowing is a leading cause of the mis-estimation of normals with PTM. I'm really hesitant at this point to say that surface colour is causing a problem (RTI is usually very robust in this area).

 

It would certainly be interesting to look at the normals from HSH and compare them quantitatively to PTM and PS. Does anyone have a document describing the RTI file specification? There's one for PTM, but not for RTI as far as I can tell. Without it, it will be hard to write Matlab code to extract a normal field from an RTI built with HSH.

 

Photometric stereo is certainly a very powerful technique! I should ask Lindsay to try PS triplets on highlight-based RTI, if he hasn't already.

 

R is an open-source statistical programming language. It has a huge user base and some good tools for analysing arrays of vectors by direction. Matlab, by contrast, is commercial and quite expensive for non-academic users.
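Whatever the language, the core comparison between two normal fields is simple. Here is a minimal sketch in Python/numpy (the function name is mine) of the per-pixel angular deviation that a directional-statistics package would then summarise:

```python
import numpy as np

def angular_errors(n1, n2):
    """Per-pixel angle in degrees between two unit-normal fields,
    each an (H, W, 3) array of unit vectors."""
    dots = np.clip(np.sum(n1 * n2, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(dots))

# e.g. to compare a PTM-derived field with a PS-triplet-derived one:
# errs = angular_errors(n_ptm, n_ps)
# print(errs.mean(), np.median(errs))
```

The clip guards against floating-point dot products straying just outside [-1, 1], which would otherwise make arccos return NaN.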

