Showing results for tags 'photogrammetry'.
Found 30 results

  1. I have generated a dense point cloud of 62,500,000 points (approximately 35,000 points per square inch) and a medium-resolution 3D mesh of a painting using Agisoft PhotoScan. It would be possible to generate the dense point cloud and mesh at a higher density, but that's beyond the memory capacity of my Mac mini (16 GB of RAM), so I might outsource further processing of the photogrammetry data. I also have a mosaic of 36 RTIs at a ground-sample resolution of approximately 500 pixels per inch (250,000 pixels per square inch), plus higher-resolution RTIs of some details. The mosaic RTIs have approximately 10 percent overlap horizontally and vertically.

     I'm interested in combining the point cloud and normal maps into a 3D model of the painting surface using the methods described in the 2005 SIGGRAPH paper whose title is quoted above, "Efficiently Combining Positions and Normals for Precise 3D Geometry," by Nehab, Rusinkiewicz, Davis, and Ramamoorthi (or other methods anyone here might suggest); a rough height-field sketch of the idea appears after this list of posts. However, the link to the authors' source code for the algorithm isn't working. I'm wondering if others here have tried this technique, and if anyone can provide the source code (with appropriate permissions) and offer advice for implementing it. I haven't written to the authors to ask for the code, but I may do so. I'm interested in hearing about others' experience with this or similar techniques when working with large data sets. Another tool that looks like it might be useful here is XNormal, which "bakes" texture maps from high-resolution 3D meshes into lower-resolution meshes. Could it also accurately combine high-resolution RTI normal maps with high-resolution 3D meshes? I'm not sure whether that would produce the same result as the algorithm from the 2005 paper cited above.

     I'm also interested in suggestions for an appropriate workflow for cleaning, repairing, and decimating the 3D mesh. Would it be better to start with the highest-density point cloud and mesh I can generate from the photogrammetry data set and then combine this with the normal maps from the RTIs? Or should I clean, repair, and decimate the 3D mesh first and then apply algorithms to combine it with the normal maps? I'm learning to use MeshLab, but I find it a bit daunting given the number of possible filters, and it crashes pretty frequently with large data sets (it might be too much for my Mini).

     I also have approximately 53 GB of RAW multispectral images at resolutions of approximately 500 to 1,000 pixels per inch that were captured with a different camera system than the one used for the photogrammetry and RTIs. The 500 ppi images were captured in a 4x4 mosaic, or 16 images per waveband. There are 12 discrete wavebands (1 UV, 6 visible, and 5 IR), plus visible fluorescence images captured with emission filters. I'm interested in texturing the 3D mesh generated from the combined photogrammetry and RTI datasets with each of the discrete multispectral wavebands and with reconstructed visible, fluorescence, and false-color IR images. I'd like to know what would be involved in registering these images to a 3D mesh generated from a different set of images.

     I'm hoping that the result of this project will be a 3D model with accurate surface normals that allows interactive relighting, tiled zooming, algorithmic enhancement, and selective re-texturing at various wavebands in a web viewer, if a suitable viewer becomes available, such as the one being developed by Graeme Earl and the AHRC project, or perhaps one of the Web3D viewers.

     Any advice and assistance would be appreciated! Best, Taylor
  2. The Smithsonian has launched an online 3D viewer they refer to as X 3D, with examples of objects from their collections. The online viewer has some interesting features ("advanced tools"), described here. The X 3D web viewer is "powered by Autodesk," and it appears to remain proprietary for the time being. The 3D effort is being managed by the Digitization Program Office, which states, "The Smithsonian digitization challenge and opportunity can be measured by the total number of collection items: at 137 million objects, artworks and specimens, capturing the entire collection at a rate of 1 item per minute would take over 260 years of 24/7 effort. At the present moment, the Smithsonian has prioritized the digitization of about 10% of its collections for digitization. To rise to this challenge, the Digitization Program Office is promoting rapid capture photography workflows for two-dimensional collections, and exploring innovations to speed up the capture of our three-dimensional collections, preferably in 3D."

     Such a high-profile effort is likely to set standards for other institutions, and I'm curious to what extent the Digitization Program Office is coordinating with other organizations to conform with open standards for this type of dissemination effort. For example, they state, "For many of the 3D models, raw data can be downloaded to support further inquiry and 3D printing." I wonder whether the raw data will be provided with complete metadata. I would hope and expect so, since they envision the 3D future as extending into field data collection efforts. I haven't tried downloading any of the data sets, but some examples are available here.

     I've noticed (e.g., here and Carla's response) that the University of Leuven has developed its own viewer and file format for RTI, CyArk has developed its own 3D viewer for point clouds, and other institutions such as Oxford University are engaged in massive efforts to digitize portions, or the entirety, of their collections for online dissemination. With public-private partnerships such as the Smithsonian's X 3D project, there's a downside risk of fragmented, proprietary standards and restrictive copyrights for certain subsets of data collections (e.g., CyArk), but also an upside potential for groundbreaking research through access to collections across multiple institutions, and even private collections, if open standards can be used to better harmonize these data sets and make them available.

     Laser scanning appears to be the core technology for the X 3D project, while this video suggests they're also using CT scanning and photogrammetry. Scanning might be fine for some objects but may provide incomplete information for others (such as manuscripts, paintings, etchings, and items with significant fine details). If this is the future of museums, how will the public and scholars know the extent to which the objects rendered on their screens accurately represent the objects in the collection? What are the guidelines and standards for digitizing objects (e.g., density of point clouds, and accuracy of mesh reconstructions and RGB data)? Will the Smithsonian integrate other technologies, such as RTI, multispectral imaging, photometric stereo, white-light interferometry, confocal microscopy, etc.? The Smithsonian recently held a conference built around the rollout of X 3D, and I'd be interested to hear whether anyone who attended has thoughts about these issues.
  3. I want to image this object in our collection here at the Minneapolis Institute of Arts: https://collections.artsmia.org/index.php?page=detail&id=738 It's a low-relief sarcophagus that I think would image well with either RTI or photogrammetry, but I can't decide which to use. I know the object would be easy for RTI, and I could do it in sections or capture entire sides at once. But I think I'd like to end up with a file that's shareable outside the RTI Viewer, like an OBJ or an STL. I know we're photographing two spheres in our RTIs to make the data ready for 3D modeling, but I haven't heard an update on that since I saw the CHI team in DC in 2012. For photogrammetry I would follow the advice in the BLM publication and use either a trial or the $179 version of PhotoScan. I think that technique would let me take the meshes of each of the object's six sides and combine them into a beautiful, shareable object. I'm going to need to figure out PhotoScan in the near future regardless: should this be the object I start with?
  4. Two remote-sensing articles in Spiegel Online International's "Picture This" feature cover underwater photogrammetry and light detection and ranging (LIDAR/LADAR), with great images! Florian Huber of the University of Kiel's Institute of Prehistoric and Protohistoric Archaeology is documenting Yucatan's cenotes using underwater photogrammetry: http://www.spiegel.de/international/zeitgeist/german-archaeologists-explore-the-mysterious-cenotes-of-mexico-a-869940.html And Axel Posluschny of Archaeolandscapes Europe (ArcLand), which operates under the Roman-Germanic Commission of the German Archaeological Institute, is participating in a €5 million undertaking to increase the archaeological use of modern remote-sensing technology such as LIDAR, ground-penetrating radar, and other electric and magnetic techniques: http://www.spiegel.de/international/zeitgeist/remote-scanning-techniques-revolutionize-archaeology-a-846793.html
  5. For capturing high-resolution multispectral images of a 19th-century landscape painting (36 x 50 inches), I found it helpful to create a spreadsheet for planning purposes, and also to supplement the shooting notes as a record of what was done. To get complete coverage of the painting at the desired resolution for RTIs and photogrammetry (500 ppi for RTIs, and up to 2,600 ppi for certain details), it was necessary to take a series of overlapping images.

     The spreadsheet takes basic information about the object (size, material, UV sensitivity) and the camera and lens (format, sensor and pixel dimensions, focal length, and settings), and calculates various project parameters (working distances for the camera sensor and light source, minimum size of the reflective spheres, number of images, and storage requirements), given the desired target resolution and the wavebands (UV, visible, and IR) to be captured; a sketch of the core calculations appears after this list of posts. This information is useful for estimating the space and time requirements for capturing RTIs and photogrammetry. Since the painting is on the east coast of the U.S. and I'm in California, it was important to have a good understanding of these parameters before shipping equipment across the country and arranging studio space in which to do the work. The spreadsheet was also helpful for selecting the macro lens for the project.

     The storage estimates are based on a RAW image file size of 20 MB (a slight overestimate for my 16 MP camera) and don't take the processed file sizes into consideration. For example, generating .dng files with embedded RAW images approximately doubles the RAW file size, and exporting .jpg images adds approximately 50 percent to the storage. The final processed .ptm and .rti files range from approximately 250 MB to 350 MB per RTI, so additional storage will be needed to process the files; the spreadsheet only estimates the storage needed for RAW image acquisition.

     Another variable is the amount of overlap between images. For general imaging and RTIs at a given resolution, the spreadsheet uses 10 percent overlap, and for photogrammetry it uses 66 percent overlap with the camera oriented horizontally. The spreadsheet calculates the distance to shift the camera horizontally and vertically to get complete coverage of the object. It assumes three images per position for photogrammetry (horizontal and two vertical orientations) and 36 images per position for RTIs. These parameters can be adjusted for particular project needs.

     The input parameters are entered into the spreadsheet in metric units. A companion worksheet mirrors the format of the metric spreadsheet and automatically converts all the distances from metric to English units for convenience. An example of the spreadsheet is attached, showing the calculations for this project. [see update below.]
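Regarding the position/normal combination discussed in post 1: below is a minimal, hedged sketch of the general idea behind Nehab et al. 2005, written for a height field rather than a general mesh (a painting surface is nearly planar, so the simplification is reasonable for illustration). It blends grid heights resampled from the dense point cloud with gradients derived from an RTI normal map through one sparse linear least-squares solve. The weighting scheme, grid resampling, and normal-map sign conventions are my assumptions, not the authors' released implementation, and it is intended for modest grid sizes (e.g., one RTI tile at reduced resolution), not the full 62.5-million-point cloud.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def fuse_heights_and_normals(z_meas, normals, lam=0.1, spacing=1.0):
    """Refine a height field by combining measured heights (photogrammetry)
    with gradients implied by measured normals (RTI), in the spirit of
    Nehab et al. 2005. z_meas: (H, W) heights; normals: (H, W, 3) unit normals."""
    H, W = z_meas.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)

    # Gradients implied by the unit normals: dz/dx = -nx/nz, dz/dy = -ny/nz.
    # Normal-map sign conventions vary; the y component may need flipping.
    gx = -normals[..., 0] / normals[..., 2]
    gy = -normals[..., 1] / normals[..., 2]

    def diff_rows(left, right):
        # Sparse rows encoding (z[right] - z[left]) / spacing for each pixel pair.
        m = left.size
        rows = np.concatenate([np.arange(m), np.arange(m)])
        cols = np.concatenate([left, right])
        vals = np.concatenate([np.full(m, -1.0 / spacing), np.full(m, 1.0 / spacing)])
        return sp.csr_matrix((vals, (rows, cols)), shape=(m, n))

    # Position-fidelity term: lam * (z - z_meas) ~ 0 (trust photogrammetry at low frequencies).
    A_pos = lam * sp.identity(n, format="csr")
    b_pos = lam * z_meas.ravel()

    # Gradient-fidelity terms: (1 - lam) * (finite difference - target gradient) ~ 0
    # (trust the RTI normals for fine surface detail).
    lx, rx = idx[:, :-1].ravel(), idx[:, 1:].ravel()
    A_gx = (1.0 - lam) * diff_rows(lx, rx)
    b_gx = (1.0 - lam) * gx[:, :-1].ravel()

    ty, by = idx[:-1, :].ravel(), idx[1:, :].ravel()
    A_gy = (1.0 - lam) * diff_rows(ty, by)
    b_gy = (1.0 - lam) * gy[:-1, :].ravel()

    A = sp.vstack([A_pos, A_gx, A_gy], format="csr")
    b = np.concatenate([b_pos, b_gx, b_gy])
    z = lsqr(A, b, atol=1e-8, btol=1e-8)[0]
    return z.reshape(H, W)

The lam parameter trades position fidelity against normal fidelity: small values keep only the low-frequency shape from the photogrammetry and take the fine relief from the RTI normals, which is the basic intuition of the paper.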
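And regarding the planning spreadsheet in post 5: here is a small sketch of the kind of calculations it performs, using a thin-lens approximation. The 20 MB RAW size, 10 percent overlap, and 36 exposures per RTI position come from the post; the sensor dimensions, 4.9 µm pixel pitch, and 100 mm focal length in the example are placeholder assumptions, not the actual equipment.

import math

def capture_plan(object_w_mm, object_h_mm, target_ppi,
                 sensor_px_w, sensor_px_h, pixel_pitch_mm, focal_length_mm,
                 overlap=0.10, shots_per_position=36, raw_mb=20.0):
    """Estimate working distance, mosaic size, and RAW storage for one waveband."""
    gsd_mm = 25.4 / target_ppi                      # object-side sample size per pixel
    magnification = pixel_pitch_mm / gsd_mm         # sensor pixel : object sample
    # Thin-lens working distance (lens to object); ignores pupil magnification.
    working_distance_mm = focal_length_mm * (1.0 + 1.0 / magnification)

    footprint_w = sensor_px_w * gsd_mm              # object area covered per frame
    footprint_h = sensor_px_h * gsd_mm
    step_w = footprint_w * (1.0 - overlap)          # camera shift between positions
    step_h = footprint_h * (1.0 - overlap)
    cols = max(1, math.ceil((object_w_mm - footprint_w) / step_w) + 1)
    rows = max(1, math.ceil((object_h_mm - footprint_h) / step_h) + 1)

    positions = cols * rows
    images = positions * shots_per_position
    storage_gb = images * raw_mb / 1024.0
    return {"working_distance_mm": round(working_distance_mm),
            "mosaic": (cols, rows),
            "images": images,
            "storage_gb": round(storage_gb, 1)}

# Example: a 36 x 50 inch painting at 500 ppi with a hypothetical 16 MP camera
# (4928 x 3264 pixels, 4.9 micron pitch) and a 100 mm macro lens.
print(capture_plan(50 * 25.4, 36 * 25.4, 500, 4928, 3264, 0.0049, 100))

Running the same function for the photogrammetry pass would just mean changing the parameters per the post, e.g., shots_per_position=3 and overlap=0.66; totals for all wavebands follow by multiplying by the number of wavebands captured.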