  1. Yes, the x, y, z components of the normal are just mapped to r, g, b values, with -1.0 mapped to 0, 0.0 mapped to 127, and +1.0 mapped to 255. As George and others say, the z value doesn't exploit the full range, since the normal never points down.
  2. Photometric stereo on its own introduces ambiguities when dealing with surface discontinuities. This is true regardless of how many lights were used to determine the surface normals; three is the minimum, and four are often used, allowing one to discard, per pixel, a single sample contaminated by specular highlights or shadows. In addition, assumptions of simplified (typically diffuse) material properties also lead to normal estimation errors. When we wrote the original PTM viewer, we just found the maximum of the reflectance function, which we had in analytic form as a polynomial, to compute surface normals. As the RTI viewer was built on the PTM viewer, I'm assuming the same computations are performed there, but I'm not 100% positive. Literally hundreds of papers have been written on other variants of photometric stereo for surface normal estimation and subsequent surface reconstruction, but they all suffer from the two issues mentioned above until combined with some sort of triangulation, photogrammetry, SLAM, or other camera pose/geometry estimation. In short, much has already been published on improving surface normals within RTI, and more could be done; it's a fruitful topic.
  3. Gautier, offhand I don't know of any way of doing this without a little bit of programming, sorry. I'd be curious which software you used to convert the PTM file to coefficient images; I'm not aware of one. T
  4. Gautier, the PTM coefficients you are trying to read out are single unsigned 8-bit values (unsigned chars). When you look at them with a text editor like Notepad, they get interpreted as text characters, so they will all look like gibberish. In a C program (or another language), read them in as unsigned chars instead, and they will span 0-255. There is an implicit offset of 127, so that values below this are negative and values above it are positive: the most negative coefficient will be stored as 0, and the most positive one as 255. The scale and bias values map these stored values to their actual floating-point representation as coefficients. Yes, if you read off each plane of coefficients separately you can make six planes of images corresponding to the coefficients. Hope this helps, Tom Malzbender
  5. Recent work using PTMs for rock art panels in Brazil. Article is online below. Riris, P. and Corteletti, R. (2015). A New Record of Pre-Columbian Engravings in Urubici (SC), Brazil using Polynomial Texture Mapping, Internet Archaeology 38. http://dx.doi.org/10.11141/ia.38.7
  6. Gilles, sorry if this wasn't articulated well in the 2001 SIGGRAPH paper. To figure out what your Lu, Lv values are, construct a vector that points to the light source from the center of the image. The first coordinate of that vector is measured along the x axis of the images, the second along the y axis, and the third is perpendicular to those two, pointing up. Now normalize that vector, which just means dividing each component by the length of the vector. You should now have each component between -1 and +1, with their squares adding up to 1. Now just drop the last component; the first two are the Lu, Lv coordinates that specify the lighting direction. cheers, Tom Malzbender
  7. Testae, interesting stuff! It does make sense that in the transparency case you will have contributions to the reflectance function at each pixel from any surface that crosses that pixel's ray, in addition to contributions from the opaque background surface. Keeping the background dark would minimize the effects from that background. It's kind of a trick to keep the fact that the subject is transparent from mattering. RTI does seem useful in the transparency case since it can potentially 'amplify' the visibility of any surface structure, like bubbles, etc., in the medium, as you point out. Your examples make a strong case for helping see the first reflecting surface better; do you have any examples of internal surface structure being visible that is otherwise difficult to discern?
  8. Alex, I've done similar experiments with high-contrast writing on paper as well, and get similar results. I think there are two factors at work here. First of all, as Taylor suggests, the process of printing does change the surface somewhat. Some geometric deformation seems unavoidable, especially when using enhancement methods that are sensitive enough to bring out even faint indented writing without ink. The more important factor, though, is that the computation of the normals is not error free, even in the best photometric stereo methods. The normals are computed from some assumption about the form of the reflectance function, polynomial in the case of PTMs. In the PTM viewer, we simply find the maximum of the polynomial fit to the reflectance function to establish the normal direction, a reasonable approximation for surfaces close to Lambertian. I believe this is also done in the RTI viewer currently, although work is in progress on more advanced techniques. However, even for a perfect Lambertian surface this is only an approximation, as is the assumption that your surface is Lambertian to start with. Certainly ink is a much less diffuse surface than paper, so differences in normal estimation will be introduced. Even self-shadowing from microfacets will affect the reflectance measurements that are used for normal estimation. For a more advanced method of estimating surface normals from RTI input, take a look at the paper by Mingjing that Carla references, as well as the earlier work by Mark Drew et al. This technique works quite well in practice. There are dozens if not hundreds of papers on variants of photometric stereo out there, though; it's a subfield of its own. On the topic of transparency, I would concur with Carla's comments. The enhancement techniques in RTI and PTM were designed to be used with opaque surfaces, so all bets are off in that case. Tom
  9. Graeme and James, I'm not sure I understand, but in general I don't think it's a good idea to extract geometric detail from RTI, whether you use that for getting contours or otherwise. As in photometric stereo, errors in the surface normals wind up producing low-frequency errors in geometric shape as you spatially integrate the normals. The small-scale detail winds up being pretty good, but the overall shape is usually way off. Discontinuities in the surface, edges, etc., are also problematic. Tom
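The normal-to-RGB mapping described in post 1 can be sketched as a small, hypothetical helper; the exact rounding convention varies between tools, and this version maps 0.0 to 127 as the post describes:

```python
import numpy as np

def normal_to_rgb(n):
    """Map unit-normal components (nx, ny, nz), each in [-1, 1],
    to 8-bit r, g, b values: -1.0 -> 0, 0.0 -> 127, +1.0 -> 255."""
    n = np.asarray(n, dtype=np.float64)
    return np.floor((n + 1.0) * 127.5).clip(0, 255).astype(np.uint8)
```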
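Post 2's point about three lights being the minimum comes from classic Lambertian photometric stereo: with three known, non-coplanar light directions, the albedo-scaled normal at each pixel solves a 3x3 linear system. A minimal sketch, with made-up example light directions:

```python
import numpy as np

# Hypothetical known unit light directions, one per row; they must
# not be coplanar or the system below becomes singular.
L = np.array([[0.0, 0.0, 1.0],
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])

def lambertian_normal(intensities, L):
    """Solve I = L @ (rho * n) for one pixel by least squares, assuming
    a Lambertian surface; returns the unit normal and the albedo rho.
    With 4+ lights one could first discard the sample most likely
    contaminated by a specular highlight or shadow."""
    b, *_ = np.linalg.lstsq(L, np.asarray(intensities, dtype=float), rcond=None)
    rho = np.linalg.norm(b)
    return b / rho, rho
```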
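The byte layout in post 4 can be decoded as sketched below: the formula value = (raw - bias) * scale, with per-coefficient scale and bias taken from the PTM file, maps the stored 0-255 bytes back to signed floating-point coefficients. The numbers in the usage comment are made up for illustration:

```python
def decode_ptm_coeffs(raw_bytes, scale, bias):
    """Decode stored PTM coefficient bytes (unsigned chars, 0-255)
    to floating-point values: value = (raw - bias) * scale.
    A bias near 127 makes bytes below it negative, bytes above positive."""
    return [(b - bias) * scale for b in raw_bytes]

# e.g. with scale=0.01, bias=127:
#   byte 127 -> 0.0, byte 0 -> -1.27, byte 255 -> 1.28
```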
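The recipe in post 6 is small enough to write down directly. This is a sketch; axis conventions, such as whether y points up or down in your images, are something to check against your own setup:

```python
import math

def light_to_lu_lv(lx, ly, lz):
    """Normalize the vector from the image center to the light source,
    then drop the third (up) component; the first two are (lu, lv)."""
    length = math.sqrt(lx * lx + ly * ly + lz * lz)
    return lx / length, ly / length

# A light 45 degrees up along the x axis: (1, 0, 1) -> lu ~ 0.707, lv = 0.0
```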
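The normal estimation that posts 2 and 8 describe, finding the maximum of the fitted polynomial, has a closed form for the standard six-term PTM biquadratic l(u, v) = a0*u^2 + a1*v^2 + a2*u*v + a3*u + a4*v + a5: set both partial derivatives to zero and solve the resulting 2x2 linear system. A sketch, with the degenerate cases a real viewer must handle noted in the comments:

```python
import math

def ptm_normal(a):
    """Surface normal from one pixel's PTM coefficients (a0..a5).
    Solves dl/du = 2*a0*u + a2*v + a3 = 0 and
           dl/dv = a2*u + 2*a1*v + a4 = 0
    for the extremum (u0, v0), then lifts it onto the unit hemisphere.
    Degenerate cases (saddle points, u0^2 + v0^2 > 1, a near-zero
    determinant) need extra handling in practice."""
    a0, a1, a2, a3, a4, a5 = a
    det = 4.0 * a0 * a1 - a2 * a2
    u0 = (a2 * a4 - 2.0 * a1 * a3) / det
    v0 = (a2 * a3 - 2.0 * a0 * a4) / det
    nz = math.sqrt(max(0.0, 1.0 - u0 * u0 - v0 * v0))
    return (u0, v0, nz)
```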