
Color mapping to normals direction


leszekp


I'm not exactly sure what you are asking.  I did an explanation of the false color mapping in this post: http://forums.culturalheritageimaging.org/index.php?/topic/302-geeky-details-about-the-normals-visualization-available-in-the-new-viewer/

 

If you are asking how the normals are calculated from the PTM and RTI file format data - I'm not the one to answer that.


Sorry, I wasn't clear. In the post, you say "It is common to represent normal fields through false color visualization, where the x, y, and z coordinates are mapped to RGB: red, green and blue, respectively." The z coordinate will always be positive and could be mapped directly to the blue value of RGB (just multiply by 255). But the x and y values can be either positive or negative, while the R and G values can only be positive. So you have to somehow re-map the x and y components of the normal vector so that they can be converted into positive RGB values (and remap the z component the same way).

One possible mapping would be to divide each component by 2, add 0.5, then multiply the result by 255 to get the corresponding RGB values for the normals image (R for the x component, G for the y component, B for the z component). This approach ensures that the RGB values will always be non-negative. My question was about the specific mapping used in RTIViewer to convert the x, y and z components of the normal vector into R, G and B values in the normals visualization.
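For concreteness, here is a minimal sketch of the "divide by 2, add 0.5, multiply by 255" mapping proposed above. The function name is just for illustration; it is not from RTIViewer:

```python
# Hypothetical sketch of the proposed mapping: each unit-normal
# component in [-1, 1] is shifted into [0, 1] and scaled to [0, 255].
def normal_to_rgb(nx, ny, nz):
    """Map a unit normal (x, y, z) to an (R, G, B) byte triple."""
    return tuple(int(round((c / 2 + 0.5) * 255)) for c in (nx, ny, nz))

# A normal pointing straight at the viewer gets mid-gray R and G
# and full blue:
print(normal_to_rgb(0.0, 0.0, 1.0))  # (128, 128, 255)
```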


I'm not sure of the code - but my understanding is that to do the visualization the data is all normalized to between 0-1.  It is possible to create a normal field where the data is normalized to between -1 and 1 and this could be more accurate for statistical purposes, but doesn't yield a useful visualization.  I'm not sure this helps.


I had the idea of calculating the angle of a surface point from the normals picture using the RGB values. But when I assumed a normalized value for RGB to xyz, I was getting an angle far steeper than what was actually present on the artifact. Comparing normals colors on the artifact with the colored "sphere" in your post (and in the RTIViewer manual), I saw the same thing - the colors on the normals pic corresponded to angles far steeper than reality. Did some playing around with the data, and found out that if I divided the color values for R and G (corresponding to x and y) by two, then tried calculating the angles with the new values for normals, the values appeared to be correct. Here's the normals-colored sphere from the viewer manual, which does reflect a linear mapping of normals to color:

 

old_ball.jpg

And here's that sphere with the x and y values multiplied by 2:

 

correct_ball.jpg

I'm guessing that the non-uniformities come from working with the original JPEG data; I may try to fix this at some point. The colors that correspond to various angles on the sphere now match up quite well with the colors on the artifact I was observing. You actually want a steeper color gradient like this to pull out details, since it accentuates surface curvature to a greater degree than a one-to-one mapping of x/y normals to R/G colors.
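The angle calculation described above can be sketched as follows. This is my own illustration, not RTIViewer code; it assumes an 8-bit encoding with [0, 255] mapped linearly to [-1, 1], and an optional `xy_scale` divisor (e.g. 2.0) to undo the doubled x/y gradient discussed in this post:

```python
import math

# Sketch of recovering the surface tilt angle from an RGB normal-map
# pixel (assumed 8-bit linear encoding, [0, 255] -> [-1, 1]).
def rgb_to_angle(r, g, b, xy_scale=1.0):
    """Return the angle (degrees) between the decoded normal and the
    viewing (z) axis. xy_scale divides the decoded x and y components,
    e.g. 2.0 to undo a doubled color gradient."""
    nx = (r / 255.0 * 2 - 1) / xy_scale
    ny = (g / 255.0 * 2 - 1) / xy_scale
    nz = b / 255.0 * 2 - 1
    return math.degrees(math.atan2(math.hypot(nx, ny), nz))

# A pixel encoding a straight-up normal is nearly flat (the small
# residual angle comes from 8-bit quantization):
print(rgb_to_angle(128, 128, 255))
```

With `xy_scale=2.0`, a pixel that would decode to a 45-degree slope under the linear mapping decodes to roughly 27 degrees instead, which matches the kind of discrepancy described above.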

 

I've been getting some interesting and useful results on lithic artifacts using a -1 to 1 normals mapping, and other manipulations of the normals data; time permitting, I may show some of these at my SAA presentation next week.


  • 2 weeks later...

Just a quick question related to this. In principle the x and y components should be between -1 and 1, mapped onto values between 0 and 255. Since the z component must be between 0 and 1 (the z component is produced by a square root in PTM/RTI and so cannot be negative), is it mapped from 0-1 to 0-255, or is the mapping only effectively to 0-128? Having looked at some histograms of the z component, it seems to me that it has less effective bit depth than x and y. I could be totally off on this...


The x, y, and z components are mapped from [-1, 1] to [0, 255], even though z cannot be negative. So yes, it means you will never use the range between 0 and 128 for the z component. You can always compute the z component as z = square_root(1 - x*x - y*y) after you map the x and y components back to [-1, 1].
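A quick sketch of that recovery step, assuming the 8-bit linear encoding described in this thread ([0, 255] mapped to [-1, 1]); the clamp guards against quantization pushing 1 - x² - y² slightly below zero:

```python
import math

# Sketch: recover z from the R and G bytes of a normal-map pixel,
# using z = sqrt(1 - x^2 - y^2) for a unit normal.
def z_from_xy(r, g):
    nx = r / 255.0 * 2 - 1
    ny = g / 255.0 * 2 - 1
    return math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))

print(z_from_xy(128, 128))  # very close to 1.0
```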


z appears to be mapped to values of 0 to 255 corresponding to -1 to 1, but only values of 128 and above are used for normal colors, since z never points down into the screen. The B value in all normals images I've looked at ranges from 128 to 255.


  • 2 years later...

Yes, the x,y,z components of the normal are just mapped to r,g,b values with -1.0 getting mapped to 0, 0.0 getting mapped to 127 and +1.0 mapping to 255. As George and others say, the z value doesn't exploit the full range since it never points down. 
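As an illustration only (not RTIViewer source), the anchor points stated above (-1.0 to 0, 0.0 to 127, +1.0 to 255) correspond to scaling by 127.5 and truncating:

```python
# Sketch of the stated mapping; int() truncation gives 0.0 -> 127,
# matching the anchor points above.
def component_to_byte(c):
    """Map a normal component in [-1, 1] to a byte in [0, 255]."""
    return int((c + 1.0) * 127.5)

print(component_to_byte(-1.0), component_to_byte(0.0), component_to_byte(1.0))
# 0 127 255
```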


  • 10 months later...
On 4/7/2015 at 9:07 PM, leszekp said:

I saw the same thing - the colors on the normals pic corresponded to angles far steeper than reality. Did some playing around with the data, and found out that if I divided the color values for R and G (corresponding to x and y) by two, then tried calculating the angles with the new values for normals, the values appeared to be correct.

I'm arriving a bit late to the party here, but I think this has something to do with the problems described by Lindsay MacDonald on p. 106 of his PhD thesis. He claims (and I'd agree) that there's an error in the original 2001 paper by Malzbender et al. - specifically, that the surface normal is stated incorrectly in eq. 18. It looks as though this error has persisted into the PTM fitter, so what is reported by RTIBuilder to be the surface normal vector is actually the specular vector. This means that if you take the 'surface normals' image from RTIBuilder and apply the linear mapping described by several contributors above (i.e. [0 to 255] for [-1 to 1] for each of the three vector components), the result is out by a factor of roughly 2 (assuming that the camera is remote from the subject).

Can anyone else confirm this? Maybe someone involved with fitter development can lend some insight. Looking at the results from the HSH fitter there seems to be a similar thing going on, although in that case there is no mathematical reason (that I can think of) why there should be an issue...


Archived

This topic is now archived and is closed to further replies.
