
Contouring RTI data


Graeme


Hi all,

 

We are interested in the possibilities of using RTI data to generate contours, largely as part of our scanning pipeline. There is a range of contouring approaches relevant to surface datasets, and rather fewer for point cloud data. The algorithmic rendering tools we have seen presented so far include a range of contouring approaches, and we wondered whether there are any tools we could trial on some virtual RTI datasets derived from laser scanning at Portus. We could then vectorise the contours automatically (or semi-automatically, with supervision) if they were produced as raster datasets. Any ideas very gratefully received.
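As a rough illustration of the raster-to-vector step mentioned above, here is a minimal sketch using scikit-image. It assumes the scan data has already been rasterised onto a regular height grid (the grid here is synthetic, just a stand-in); `find_contours` returns polylines directly, so the contours come out already vectorised.

```python
import numpy as np
from skimage import measure

# Synthetic stand-in for a rasterized height map (e.g. z values from a
# laser-scan point cloud binned onto a regular grid).
y, x = np.mgrid[0:100, 0:100]
height = np.sin(x / 10.0) * np.cos(y / 10.0)

# Extract iso-height contours at regular intervals; each contour is
# already a vector polyline (an (N, 2) array of row/col coordinates),
# so no separate raster-to-vector tracing step is needed.
levels = np.linspace(height.min() + 0.1, height.max() - 0.1, 5)
contours = {lvl: measure.find_contours(height, lvl) for lvl in levels}

# Simplify each polyline (Douglas-Peucker) for cleaner line work in
# publication drawings.
simplified = {
    lvl: [measure.approximate_polygon(c, tolerance=0.5) for c in cs]
    for lvl, cs in contours.items()
}
```

The simplification tolerance trades fidelity against file size and visual clutter, which matters for publication illustrations.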

 

Best wishes,

 

Graeme Earl and James Miles


I can't help myself on this, but I must add a cautionary note about making virtual RTIs.

 

Collecting 3D data with a laser scanner, structured light scanner, or photogrammetry, and then making virtual RTIs, is a fine thing to try as part of a research project, or to compare data sets collected in different ways. However, and this is a big however, the resulting RTIs are not the same as RTIs produced by imaging the actual surface, and in fact they can be misleading to users. The 3D model will not contain the same level of detail as the actual surface, and might contain artifacts from the scanning process that are not found on the actual surface. So you might be missing things from, or adding things to, the RTI. Further, the user of an RTI must know that they are looking at a synthesized RTI and therefore that it isn't as reliable as one taken from the actual surface.

 

I want to reiterate that I think this is fine for comparison or research as long as you are totally clear to any user of the result what the potential issues are in the result.

 

I don't have comments on Graeme's actual question here, but I'll see if I can get someone who knows more than me to comment.  I know Graeme is totally aware of the issues I'm raising here, but I do see the virtual RTI topic come up enough that I am using this thread to post my thoughts on it.

 

Carla


Graeme and James, 

 

I'm not sure I completely understand this question. Could you tell us a little bit more about what you want to accomplish? Did you want to export a surface normal map into a contouring program like Surfer?

 

As an occasional user of virtual RTIs I can say that their chief advantage is to produce a vastly more compact representation of 3D datasets for dynamic relighting by non-expert end-users. Virtual RTIs can be very helpful in cases where actual RTI would be difficult or impossible, such as with aerial LiDAR or laser profilometry data. Where an actual RTI capture is feasible, I completely agree with Carla that a virtual RTI generated from laser or photogrammetry data will be inferior in many respects, all things being equal. 

 

This question of the merits of virtual RTIs is an interesting one!

 

George


  • 5 weeks later...

Thanks for the replies and sorry for the late reply!

 

What we want to do is take the laser scan data that we have and create a system of vectorised drawings that we can use for publications and illustrations. I have tried various methods of meshing the data and using automatic line extraction tools in 3D software and graphics editing software, but the results unfortunately simplify the model to the point where we lose too much detail.

 

The data is currently represented as a point cloud. High resolution images can be taken from it to produce a virtual RTI, but this would only represent the point cloud. We can apply a height ramp to the points based on their z values, but this doesn't provide a system that would allow the vectors to be drawn. I could try to create a normal map of the images using a plugin for Photoshop, but our thought was to integrate algorithmic rendering within our process somehow. We can see from the CHI webpage that this could prove a worthwhile tool, and I know that the system works from the RGBN of the images. As this is a virtual dataset, I am unsure how this would work, considering that the images would be at a much lower resolution than those from a DSLR camera and would have gaps where points are missing.
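For what it's worth, if the z values are rasterised onto a grid, per-pixel normals can be estimated directly with finite differences rather than via a Photoshop plugin. This is only a sketch under that assumption; the function name is illustrative, and gaps in the point cloud can be carried through as NaN so downstream tools can mask them.

```python
import numpy as np

def height_to_normal_map(height, spacing=1.0):
    """Estimate per-pixel surface normals from a rasterized height (z) grid.

    Pixels where the point cloud has gaps can be passed in as NaN; the
    corresponding normals come out as NaN for masking downstream.
    """
    dz_dy, dz_dx = np.gradient(height, spacing)
    # The normal of a height field z(x, y) is proportional to
    # (-dz/dx, -dz/dy, 1); normalize to unit length per pixel.
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(height)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals

# Encode as an 8-bit RGB "normal map" image of the kind RGBN-based
# tools expect: each component mapped from [-1, 1] to [0, 255].
height = np.random.rand(64, 64)
n = height_to_normal_map(height)
rgb = ((n + 1.0) * 0.5 * 255).astype(np.uint8)
```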

 

As the algorithmic rendering software isn't currently available, I was wondering how best to do this. I am not focused on outlining small details within the scan data, but rather on a system that would produce a generic outline of the detail. The work we are currently trying this on is from Portus, and I have attached an image to show the resolution we have available. What I would like is a simple outline of each brick within the data. I know this may not be possible because of the original data, and the rendered image may not contain all of it, but if anyone has any ideas on how this could be done, through RTI or other means, it would be great to hear from you. I have tried everything possible with the software I have available (except for normal extraction) and am finding it extremely hard to do! I have imported the data into GIS and tried contouring from the point set, from a TIN, and even from an image, and I am not getting the results I would like.

 

Any ideas would be welcome!

 

James

 

http://www.southampton.ac.uk/~jm1706/portustest2.jpg

 

http://www.southampton.ac.uk/~jm1706/Room_00435.jpg


Graeme and James,

 

I'm not sure I understand, but in general I don't think it's a good idea to extract geometric detail from RTI, whether for contours or otherwise. As in photometric stereo, errors in the surface normals turn into low-frequency errors in geometric shape as you spatially integrate the normals. The small-scale detail winds up being pretty good, but the overall shape is usually way off. Discontinuities in the surface, edges, etc., are also problematic.
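The point about low-frequency error can be illustrated with a one-dimensional toy example (entirely synthetic, not any real pipeline): integrating slopes that carry small random errors, as photometric stereo does with normals, behaves like a random walk, so neighbouring samples stay accurate while the overall shape drifts.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
true_slope = np.zeros(n)                              # a perfectly flat surface
noisy_slope = true_slope + rng.normal(0.0, 0.01, n)   # small per-sample normal error

# Spatially integrating the slopes accumulates the per-sample errors
# into a random walk along the profile.
profile = np.cumsum(noisy_slope)

# Small-scale detail stays good: adjacent samples differ by ~0.01 ...
local_error = np.abs(np.diff(profile)).max()
# ... but the overall shape drifts: the standard deviation of the
# accumulated error grows like sqrt(distance) from the start.
global_error = np.abs(profile).max()
```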

 

Tom


My interpretation of James' and Graeme's description is that you'd like something like the technical illustrations that have been demonstrated using the algorithmic rendering in the CARE tools (under development). However, you don't have RTIs; you're considering extracting normal information from virtual RTIs, i.e., from the scanned point cloud, and somehow integrating those normals with RGB images of the same surface.

 

There are a number of edge detection tools, e.g., "Canny" edge detection, and I wonder whether any of these might serve to outline the bricks in those walls using only the high-res RGB image data. You might have difficulties with shadowing and occlusion, but that's a limitation of the RGB data set regardless of the tools you have.
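A minimal sketch of the Canny suggestion, using scikit-image on a synthetic brick-like image (a stand-in for the high-res wall photograph; the pattern and parameters are illustrative only):

```python
import numpy as np
from skimage import feature

# Synthetic stand-in for a high-resolution wall photograph: light
# "bricks" separated by a dark "mortar" grid.
img = np.full((120, 160), 0.2)
for r in range(0, 120, 30):
    for c in range(0, 160, 40):
        img[r + 3:r + 27, c + 3:c + 37] = 0.8

# Canny marks intensity edges; sigma trades noise robustness against
# how much fine detail is retained.
edges = feature.canny(img, sigma=2.0)
```

On real photographs, shadow boundaries will register as edges too, so some masking or manual cleanup would likely be needed.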

 

It would be great if you could improve on edge detection of the RGB data by incorporating shape information from virtual normals generated from the point cloud data, but this approach has the limitations that Carla, George, and Tom have noted. Another difficulty you'd face is registering the RGB images with the point clouds, which is not a simple problem for data captured with different systems. If the point clouds have sufficiently high resolution, the brick edges may stand out enough to be detected within the noisiness of the data, but here I'm just speculating, and as noted above, converting the virtual normals to a vector line drawing isn't a simple problem either. Perhaps something like Xshade would let you enhance the contrast between the bricks and the joints by creating a virtual lighting model for the point cloud. Could you then take snapshots of the Xshade model and use one of the edge detection tools to create a vector drawing?
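The snapshot-to-vector-drawing step might look something like the following sketch, again with scikit-image and a synthetic image standing in for a relit rendering: outline each bright region at a threshold level, then simplify the resulting polylines.

```python
import numpy as np
from skimage import measure

# Stand-in for a rendered snapshot in which bricks come out brighter
# than the joints between them (e.g. an Xshade-style relit rendering).
snap = np.full((60, 80), 0.1)
snap[10:30, 10:35] = 0.9   # two "bricks"
snap[35:55, 45:70] = 0.9

# Trace the outline of each bright region at the half-intensity level,
# then simplify each polyline into a compact vector drawing.
outlines = measure.find_contours(snap, 0.5)
vectors = [measure.approximate_polygon(c, tolerance=1.0) for c in outlines]
```

Each entry in `vectors` is an (N, 2) polyline that could be exported to SVG or a GIS vector format for the final illustration.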


  • 3 weeks later...

Hi Taylor

 

Thanks for the advice. The outline you gave is close to the work I want to complete. I have been working with Mark Nixon using his Image Ray Transform tool, another example of automatic line extraction, but I have had problems extracting the necessary data because of the lower-resolution images required.

 

I will try the tools that you have suggested to see if I can get the level of detail required. I am aware of the problems associated with virtual RTIs and I think the best thing to do would be to concentrate on the extraction of line data from the high resolution RGB images.

 

I will let you know if I have any success.

 

James


  • 3 months later...

Hi James,

 

I just came across this paper that proposes a look-up table for edge detection, which it says is more computationally efficient and effective than the Canny method or other methods:  Edge detection by using look-up table

 

You've probably already worked out a solution, but maybe this could be of some use for future situations, until the CARE tools are available.

 

Cheers,

Taylor


  • 4 weeks later...

Dear Graeme and James,

I've only just come across this topic, so I don't know if this post is already out of date.

Nevertheless, I would like to contribute to this discussion with a different approach directly from the 3D data sets.

Depending on the average resolution of your 3D point cloud data (is it high enough to model the joints between the bricks?), I can produce textured 3D models and high-definition images that depict the surface morphology, using a technique I have been developing since 2008 called the Morphological Residual Model.

I'd be very happy to process some of your data to see if MRM fits your needs.
If you're interested, please let me know.

Cheers,

Hugo


Archived

This topic is now archived and is closed to further replies.
