Taylor posted a topic in Photogrammetry

I have generated a dense point cloud of 62,500,000 points (approximately 35,000 points per square inch) and a medium-resolution 3D mesh of a painting using Agisoft PhotoScan. It would be possible to generate the dense point cloud and mesh at a higher density, but that is beyond the memory capacity of my Mac mini (16 GB of RAM), so I might out-source further processing of the photogrammetry data. I also have a mosaic of 36 RTIs at a ground-sample resolution of approximately 500 pixels per inch (250,000 pixels per square inch), plus higher-resolution RTIs of some details. The mosaic RTIs overlap by approximately 10 percent horizontally and vertically.

I'm interested in combining the point cloud and normal maps into a 3D model of the painting surface using the method described in the 2005 SIGGRAPH paper "Efficiently Combining Positions and Normals for Precise 3D Geometry" by Nehab, Rusinkiewicz, Davis, and Ramamoorthi (or any other methods people here might suggest). However, the link to the authors' source code for the algorithm isn't working. I'm wondering if others here have tried this technique, whether anyone can provide the source code (with appropriate permissions), and what advice you might offer for implementing it. I haven't yet written to the authors to ask for the code, but I may do so. I'd also like to hear about others' experience with this or similar techniques on large data sets.

Another tool that looks like it might be useful here is xNormal, which "bakes" texture maps from high-resolution 3D meshes onto lower-resolution meshes. Could it also accurately combine high-resolution RTI normal maps with a high-resolution 3D mesh? I'm not sure whether that baking approach would produce the same result as the algorithm from the 2005 paper cited above.

I'm also interested in suggestions for an appropriate workflow for cleaning, repairing, and decimating the 3D mesh.
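In case it helps frame the question: as I understand it, the core of the Nehab et al. paper is a linear least-squares problem that balances fidelity to measured positions against fidelity to measured normals. The following is only a toy 1D sketch of that idea (a height profile with slopes standing in for normals, and made-up synthetic data), not the authors' algorithm or code:

```python
import numpy as np

# Toy 1D position + normal fusion: recover a height profile z from noisy
# measured heights z_meas and accurate slopes g (slopes stand in for
# normals). Synthetic data; NOT the Nehab et al. implementation.
n = 100
x = np.linspace(0.0, 1.0, n)
true_z = np.sin(2 * np.pi * x)
rng = np.random.default_rng(0)
z_meas = true_z + 0.05 * rng.standard_normal(n)   # noisy positions
g = 2 * np.pi * np.cos(2 * np.pi * x)             # slopes from "normals"
g_mid = (g[:-1] + g[1:]) / 2                      # slopes at midpoints

h = x[1] - x[0]
# Forward-difference operator D so that (D @ z)[i] ~ slope between i, i+1
D = (np.eye(n, k=1) - np.eye(n))[:-1] / h
# Minimise lam * ||z - z_meas||^2 + ||D z - g_mid||^2 in one lstsq solve
lam = 0.1
A = np.vstack([np.sqrt(lam) * np.eye(n), D])
b = np.concatenate([np.sqrt(lam) * z_meas, g_mid])
z_fused, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The intuition is that the slope (normal) term recovers the fine shape while the position term pins down the low-frequency placement; the real method does this over a mesh with proper normal constraints, but the least-squares structure is the same.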
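One preliminary step I expect I'll need either way is decoding the RTI normal maps from RGB images into unit vectors. I believe the usual encoding is n = 2c - 1 per channel, though I'd want to confirm the convention my RTI exporter uses (some tools flip the green/Y channel). A minimal sketch under that assumption:

```python
import numpy as np

# Decode an 8-bit RGB normal map into unit normal vectors, assuming the
# common n = 2*c - 1 encoding. Check your RTI exporter's convention:
# some flip the Y (green) channel.
def decode_normal_map(rgb_u8):
    n = rgb_u8.astype(np.float64) / 255.0 * 2.0 - 1.0
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.clip(norm, 1e-8, None)   # renormalise quantised values

# Tiny synthetic 2x2 map: straight-up normals encode as (128, 128, 255).
rgb = np.full((2, 2, 3), [128, 128, 255], dtype=np.uint8)
normals = decode_normal_map(rgb)
```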
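On the cleaning question, to make it concrete: the kind of operation I mean is along the lines of MeshLab's duplicate-vertex merging and degenerate-face removal. A rough numpy sketch of that one step (not a substitute for MeshLab's filters, just to illustrate what I'm asking about):

```python
import numpy as np

# Merge vertices that coincide within a tolerance and drop faces that
# collapse to an edge or point afterwards. A sketch of one mesh clean-up
# step, in the spirit of MeshLab's duplicate-vertex removal.
def merge_vertices(verts, faces, tol=1e-6):
    key = np.round(verts / tol).astype(np.int64)         # quantise positions
    uniq, remap = np.unique(key, axis=0, return_inverse=True)
    new_verts = np.zeros((len(uniq), 3))
    np.add.at(new_verts, remap, verts)                   # sum merged copies
    counts = np.bincount(remap, minlength=len(uniq)).reshape(-1, 1)
    new_verts /= counts                                  # average them
    new_faces = remap[faces]
    keep = ((new_faces[:, 0] != new_faces[:, 1])
            & (new_faces[:, 1] != new_faces[:, 2])
            & (new_faces[:, 0] != new_faces[:, 2]))
    return new_verts, new_faces[keep]

# Two triangles that share an edge but duplicate its two vertices:
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                  [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [3, 4, 5]])
v2, f2 = merge_vertices(verts, faces)   # 6 verts -> 4, faces re-indexed
```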
Would it be better to start with the highest-density point cloud and mesh I can generate from the photogrammetry data set, then combine this with the normal maps from the RTIs? Or should I clean, repair, and decimate the 3D mesh first and then apply an algorithm to combine it with the normal maps? I'm learning to use MeshLab, but I find the number of possible filters a bit daunting, and it crashes fairly frequently with large data sets (they may be too much for my Mini).

I also have approximately 53 GB of RAW multispectral images at resolutions of approximately 500 to 1,000 pixels per inch, captured with a different camera system than the one used for the photogrammetry and RTIs. The 500 ppi images were captured in a 4x4 mosaic, or 16 images per waveband. There are 12 discrete wavebands (1 UV, 6 visible, and 5 IR), plus visible-fluorescence images captured with emission filters. I'm interested in texturing the 3D mesh generated from the combined photogrammetry and RTI data sets with each of the discrete multispectral wavebands and with reconstructed visible, fluorescence, and false-color IR images. I'd like to know what would be involved in registering these images to a 3D mesh generated from a different set of images.

I'm hoping the result of this project will be a 3D model with accurate surface normals that allows interactive relighting, tiled zooming, algorithmic enhancement, and selective re-texturing at various wavebands in a web viewer, if a suitable viewer becomes available, such as the one being developed by Graeme Earl and the AHRC project, or perhaps one of the Web3D viewers.

Any advice and assistance would be appreciated!

Best,
Taylor
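To clarify what I mean by registering the multispectral images to the mesh: my understanding is that once a camera pose and intrinsics are estimated for each multispectral shot (e.g. by matching it against the photogrammetry images), texturing reduces to projecting mesh vertices into the image and sampling. A toy pinhole-projection sketch with entirely made-up camera parameters, in case someone can tell me whether this is the right mental model:

```python
import numpy as np

# Project mesh vertices into a 2-D image with a pinhole camera model and
# sample a per-vertex value. K (intrinsics), R, t (pose) are hypothetical
# inputs that would come from registering the multispectral shot against
# the photogrammetry; no visibility/occlusion handling here.
def sample_texture(verts, K, R, t, image):
    cam = verts @ R.T + t                 # world -> camera coordinates
    uvw = cam @ K.T                       # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]         # perspective divide
    h, w = image.shape[:2]
    px = np.clip(np.round(uv).astype(int), 0, [w - 1, h - 1])
    return image[px[:, 1], px[:, 0]]      # nearest-neighbour sample

# Toy example: identity pose, 4 vertices on the z = 2 plane,
# a 100x100 single-band "multispectral" image.
K = np.array([[50.0, 0.0, 50.0], [0.0, 50.0, 50.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
verts = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0],
                  [0.0, 0.5, 2.0], [-0.5, -0.5, 2.0]])
image = np.arange(100 * 100, dtype=float).reshape(100, 100)
vals = sample_texture(verts, K, R, t, image)
```

A real pipeline would also need occlusion tests and probably bilinear sampling, but I mainly want to know whether per-band camera registration followed by projection like this is the expected workflow.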