Search the Community

Showing results for tags '3d'.



Found 7 results

  1. Hi all, We recently released to the public our photogrammetry & RTI project, which we have been working on for over 2 years. We were inspired by some of your posts here on the dome approach and decided we should share our results with you.

     Our aim was to create a cheap, affordable photogrammetric modeling and RTI imaging device that anyone can build and that does not cost thousands of euros. For that we needed multiple cameras, LED lights and a stepper motor, which lets us rotate an object placed on a turntable and take photos all around it. At the heart of the device is a Raspberry Pi 4 with 4 ArduCAM cameras. We were aiming at small objects: artifacts no bigger than 100-120 mm.

     In the first iteration of the device we used a small dome (32 cm in diameter, an aluminum bowl from IKEA...) with 40 LEDs and 4 cameras. This version was successfully tested on Cyprus about 2 years ago, where we gathered extremely valuable experience. In the second iteration we moved away from the bowl approach and built a spider-like frame, with 10 arms for the LED strips (12 LEDs on each) and 1 separate arm for the cameras. It is much more mobile than the previous version because we can disassemble it, which makes it easier to work with. Because depth of field was an issue in the first iteration, we switched to higher-resolution cameras with motorized focus, so we can use focus stacking to increase our depth of field. For the LED strips we used flexible PCBs connected together with FPC cables, controlled through our own shield that sits on top of the Raspberry Pi 4. (A rough sketch of this kind of rotate-light-capture loop appears after this results list.)

     Below are 2 links to my LinkedIn posts, as I cannot upload images here. https://www.linkedin.com/posts/marcin-k%C5%82%C4%99bowski-458bb5146_photogrammetry-rti-textured-activity-6766022368184860672-n2MF https://www.linkedin.com/posts/marcin-k%C5%82%C4%99bowski-458bb5146_photogrammetry-rti-ptm-activity-6766744323838013440-z9yn

     Thanks for the inspiration, and let us know what you think about our device. I am sure we will share more results as we start testing and creating more RTI images of objects. Marcin
  2. 3DHOP (3D Heritage Online Presenter) is an open-source software package for the creation of interactive Web presentations of high-resolution 3D models, oriented to the Cultural Heritage field. http://3dhop.net/
  3. Nowadays, the capabilities of photogrammetric software are amazing. You may wonder, "Can video captured by a drone be used to create a 3D model of a cultural heritage object?" The answer is yes. In this tutorial, you will create a georeferenced, measurable 3D model in Agisoft PhotoScan from a YouTube video captured by a DJI Inspire 1, and view it with Sputnik GIS. (A short frame-extraction sketch appears after this results list.) Source video (4K / 2160p): http://www.youtube.com/watch?v=f5nznXK5IrQ 3D model created using Agisoft PhotoScan: If you're interested, you can read the tutorial.
  4. We are pleased to announce that our next 4-day photogrammetry class will take place in San Francisco on May 18-21, 2015. More details and a registration form can be found here: http://culturalheritageimaging.org/What_We_Offer/Training/photogram_training/index.html The training at CHI is focused on core photogrammetric principles that produce high-quality, measurable, scientific data. The capture method (how you take the images) is independent of any software package. You will collect image sets that can be used now and in the future. The processing is done in Agisoft PhotoScan Pro. We have worked in collaboration with senior photogrammetrists at the US Bureau of Land Management to develop the course and the methodology. The optimized processing workflow that we teach can't be found anywhere else. Come, learn, add new skills! Carla
  5. Hi folks, We are pleased to announce that our next 4-day photogrammetry training class will take place at our studio in San Francisco on May 18-21. Learn more, and find a registration form, here: http://culturalheritageimaging.org/What_We_Offer/Training/photogram_training/index.html Note that this class has been rescheduled from April 20-23 to May 18-21, for those of you who might have seen an earlier announcement. Carla
  6. I have generated a dense point cloud of 62,500,000 points (approximately 35,000 points per square inch) and a medium-res 3D mesh of a painting using Agisoft PhotoScan. It would be possible to generate the dense point cloud and mesh at a higher density, but that is beyond the memory capacity of my Mac mini (16 GB of RAM), so I might outsource further processing of the photogrammetry data. I also have a mosaic of 36 RTIs at a ground-sample resolution of approximately 500 pixels per inch (250,000 pixels per square inch), plus higher-res RTIs of some details. The mosaic RTIs have approximately 10 percent overlap horizontally and vertically.

     I'm interested in combining the point cloud and normal maps into a 3D model of the painting surface using methods described in the 2005 SIGGRAPH paper whose title is quoted above, "Efficiently Combining Positions and Normals for Precise 3D Geometry," by Nehab, Rusinkiewicz, Davis, and Ramamoorthi (or other methods anyone here might suggest). However, the link to the authors' source code for the algorithm isn't working. I'm wondering if others here have tried this technique, and if anyone can provide the source code (with appropriate permissions) and offer advice for implementing it. I haven't written to the authors to ask for the code, but I may do so. I'm interested in hearing others' experience with it, or with similar techniques, when working with large data sets. (A simplified illustration of the position-plus-normal idea appears after this results list.) Another tool that looks like it might be useful here is xNormal, which "bakes" texture maps from high-resolution 3D meshes into lower-resolution meshes. Could it also accurately combine high-resolution RTI normal maps with high-res 3D meshes? I'm not sure whether this modeling technique would produce the same result as the algorithm from the 2005 paper cited above.

     I'm also interested in suggestions for an appropriate workflow for cleaning, repairing, and decimating the 3D mesh. Would it be better to start with the highest-density point cloud and mesh I can generate from the photogrammetry data set, then combine this with the normal maps from the RTIs? Or perhaps clean, repair, and decimate the 3D mesh and then apply algorithms to combine it with the normal maps? I'm learning to use MeshLab, but I find it a bit daunting given the number of possible filters, and it crashes pretty frequently with large data sets (they might be too much for my Mini).

     I also have approximately 53 GB of RAW multispectral images at resolutions of approximately 500 to 1,000 pixels per inch that were captured using a different camera system than the one used for the photogrammetry and RTIs. The 500 ppi images were captured in a 4x4 mosaic, or 16 images per waveband. There are 12 discrete wavebands (1 UV, 6 visible, and 5 IR), plus visible fluorescence images captured with emission filters. I'm interested in texturing the 3D mesh generated from the combined photogrammetry and RTI datasets using each of the discrete multispectral wavebands and reconstructed visible, fluorescence, and false-color IR images. I'd like to know what would be involved in registering these images to a 3D mesh generated from a different set of images.

     I'm hoping that the result of this project will be a 3D model with accurate surface normals that allows interactive relighting, tiled zooming, algorithmic enhancement, and selective re-texturing at various wavebands in a web viewer, if a suitable viewer becomes available, such as the one being developed by Graeme Earl and the AHRC project, or perhaps one of the Web3D viewers.

     Any advice and assistance would be appreciated! Best, Taylor
  7. The Smithsonian has launched an online 3D viewer they refer to as X 3D, with examples of objects from their collections. The online viewer has some interesting features ("advanced tools"), described here. The X 3D web viewer is "powered by Autodesk" and it appears to remain proprietary for the time being. Their 3D effort is being managed by the Digitization Program Office, which states, "The Smithsonian digitization challenge and opportunity can be measured by the total number of collection items: at 137 million objects, artworks and specimens, capturing the entire collection at a rate of 1 item per minute would take over 260 years of 24/7 effort. At the present moment, the Smithsonian has prioritized the digitization of about 10% of its collections for digitization. To rise to this challenge, the Digitization Program Office is promoting rapid capture photography workflows for two-dimensional collections, and exploring innovations to speed up the capture of our three-dimensional collections, preferably in 3D."

     Such a high-profile effort is likely to set standards for other institutions, and I'm curious to what extent the Digitization Program Office is coordinating with other organizations to conform to open standards for this type of dissemination effort. For example, they state, "For many of the 3D models, raw data can be downloaded to support further inquiry and 3D printing." I wonder if the raw data will be provided with complete metadata? I would hope and expect so, since they envision the 3D future as extending into field data collection efforts. I haven't tried downloading any of the data sets, but some examples are available here.

     I've noticed (e.g., here and in Carla's response) that the University of Leuven has developed its own viewer and file format for RTI, CyArk has developed its own 3D viewer for point clouds, and other institutions like Oxford University are engaged in massive efforts to digitize portions, or the entirety, of their collections for online dissemination. With public-private partnerships such as the Smithsonian's X 3D project, there is a potential downside of fragmented, proprietary standards and restrictive copyrights for certain subsets of data collections (e.g., CyArk), and a potential upside of groundbreaking research enabled by access to collections across multiple institutions, and even private collections, if open standards can be used to better harmonize these data sets and make them available.

     Laser scanning appears to be the core technology for the X 3D project, while this video suggests they're also using CT scanning and photogrammetry. Scanning might be fine for some objects, but may provide incomplete information for others (such as manuscripts, paintings, etchings, and items with significant fine detail). If this is the future of museums, how will the public and scholars know the extent to which the objects rendered on their screens accurately represent the objects in the collection? What are the guidelines and standards for digitizing objects (e.g., density of point clouds, and accuracy of mesh reconstructions and RGB data)? Will the Smithsonian integrate other technologies, such as RTI, multispectral imaging, photometric stereo, white-light interferometry, confocal microscopy, etc.? The Smithsonian recently held a conference built around the rollout of X 3D, and I'd be interested to hear whether anyone who attended has thoughts on these issues.
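For the turntable rig described in result 1, here is a minimal sketch of the rotate-light-capture sequencing. Everything concrete in it is an assumption: the GPIO pin numbers, the step/direction stepper driver, one GPIO per LED group, and capture via the stock libcamera-still command. The actual device uses a custom Raspberry Pi shield and ArduCAM cameras, so treat this only as an illustration of the loop, not that implementation.

```python
# Illustrative capture loop for a turntable photogrammetry/RTI rig.
# Assumed hardware (not from the original post): a stepper behind a
# step/direction driver on two GPIO pins, one GPIO per LED group, and
# image capture via the stock `libcamera-still` command.
import subprocess
import time

import RPi.GPIO as GPIO

STEP_PIN = 17                    # hypothetical step pin of a stepper driver
DIR_PIN = 27                     # hypothetical direction pin
LED_PINS = list(range(2, 12))    # hypothetical: one GPIO per LED group
STEPS_PER_POSITION = 50          # turntable steps between capture positions
POSITIONS = 40                   # stops per full revolution

def setup():
    GPIO.setmode(GPIO.BCM)
    GPIO.setup([STEP_PIN, DIR_PIN] + LED_PINS, GPIO.OUT, initial=GPIO.LOW)

def rotate_one_position():
    """Advance the turntable by one capture position."""
    GPIO.output(DIR_PIN, GPIO.HIGH)
    for _ in range(STEPS_PER_POSITION):
        GPIO.output(STEP_PIN, GPIO.HIGH)
        time.sleep(0.001)
        GPIO.output(STEP_PIN, GPIO.LOW)
        time.sleep(0.001)

def capture(path):
    """Grab a still with the default camera stack (stand-in for ArduCAM capture)."""
    subprocess.run(["libcamera-still", "-n", "-o", path], check=True)

def run():
    setup()
    try:
        for pos in range(POSITIONS):
            for led_index, led_pin in enumerate(LED_PINS):
                GPIO.output(led_pin, GPIO.HIGH)   # light one direction at a time
                capture(f"pos{pos:02d}_led{led_index:02d}.jpg")
                GPIO.output(led_pin, GPIO.LOW)
            rotate_one_position()
    finally:
        GPIO.cleanup()

if __name__ == "__main__":
    run()
```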
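For the drone-video workflow in result 3, here is a short sketch of pulling still frames out of a video so they can be loaded into photogrammetry software such as Agisoft PhotoScan/Metashape. The frame interval, file names, and use of OpenCV are assumptions, not part of the tutorial.

```python
# Sketch: save every n-th frame of a video as a JPEG for photogrammetry.
import os

import cv2  # OpenCV

def extract_frames(video_path, out_dir, every_n=30):
    """Write every n-th frame of the video to out_dir; return frames saved."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    index, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Example: roughly one frame per second from a 30 fps clip.
# extract_frames("inspire1_flight.mp4", "frames", every_n=30)
```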
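For the position-plus-normal question in result 6, here is a much-simplified 2.5D illustration of the idea in Nehab et al. 2005: keep the low-frequency shape from measured positions and the high-frequency detail from measured normals by solving one sparse least-squares problem. This is not the authors' code (their method operates on full 3D meshes); the depth-map setting, the single scalar weight, and the assumption that depth and normals are already registered on one pixel grid are simplifications for illustration, and at these data sizes it is only practical on small tiles.

```python
# Fuse a measured depth map (from photogrammetry) with a normal map (from RTI)
# by least squares: depths constrain absolute height with a small weight,
# finite differences of the solution are asked to match the slopes implied
# by the normals. Inputs: depth (H, W), normals (H, W, 3), unit length.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

def fuse_depth_and_normals(depth, normals, position_weight=0.1):
    H, W = depth.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)

    # Target slopes from the normal map: dz/dx = -nx/nz, dz/dy = -ny/nz.
    nz = np.clip(normals[..., 2], 1e-3, None)
    p = -normals[..., 0] / nz
    q = -normals[..., 1] / nz

    rows, cols, vals, rhs = [], [], [], []
    r = 0

    # Position (data) term: z ~ measured depth, weighted low.
    for i in range(n):
        rows.append(r); cols.append(i); vals.append(position_weight)
        rhs.append(position_weight * depth.flat[i]); r += 1

    # Normal (gradient) term: finite differences should match p and q.
    w = 1.0 - position_weight
    for y in range(H):
        for x in range(W - 1):
            rows += [r, r]; cols += [idx[y, x + 1], idx[y, x]]
            vals += [w, -w]; rhs.append(w * p[y, x]); r += 1
    for y in range(H - 1):
        for x in range(W):
            rows += [r, r]; cols += [idx[y + 1, x], idx[y, x]]
            vals += [w, -w]; rhs.append(w * q[y, x]); r += 1

    A = sparse.coo_matrix((vals, (rows, cols)), shape=(r, n)).tocsr()
    z = lsqr(A, np.asarray(rhs))[0]
    return z.reshape(H, W)
```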