Showing results for tags 'photogrammetry'.

Found 6 results

  1. Hi there, I'm starting an imaging program at a maritime museum. I know from speaking with colleagues that many maritime collections are not well recorded. I'm hoping to use an RTI/photogrammetry method to record these works and create 3D models of them for study. Because time is a major factor, I hope to optimize my image capture through automation. I will have a smallish studio, a 50 MP Canon 5DS, and an excellent computer (which I'm building, optimized for photogrammetry, and which should support RTI).

     Jorge Cano at the Factum Foundation has proposed a method using just four lights (at 15 degrees, N/S/E/W) to capture up to 50nm resolution. The four images are put into Adobe Substance Maker to obtain a height map, which is then brought into GIS software for stitching and precise height/normal mapping.

     Whether I use Cano's method or RTI, I would like to build a folding dome capable of imaging at least an 8 in x 8 in section of each artwork. (I will be visiting multiple museums, so portability is important.) I will probably build my own dome, attach lights to it, and mount my camera at its center. I need guidance on which lights to use (strobe, flash, LED, etc.) and how to trigger them sequentially in sync with the camera's shutter. There are a lot of references to Arduino boards, custom PCBs, etc. A company called RTI-Dome in France has a fully automated system for image capture and filing; Custom Imaging also has an automated system. I don't know whether either of these can fire LEDs strong enough for larger-scale objects. Any help/direction would be much appreciated.

     Nick Raposo
     americanmarineart.com
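The sequential triggering Nick asks about boils down to a simple loop: for each light, switch it on, fire the shutter, hold the light through the exposure, switch it off, and pause while the camera writes the file. A minimal Python sketch of that ordering, not a finished controller — the `fire_light` and `trigger_shutter` callbacks are hypothetical stand-ins for whatever GPIO or serial calls the actual hardware (Arduino, Raspberry Pi, custom PCB) would make:

```python
import time

def capture_sequence(light_ids, exposure_s=0.5, settle_s=0.2,
                     fire_light=None, trigger_shutter=None):
    """Fire each dome light in turn, synced with the camera shutter.

    light_ids: identifiers for the dome's lights (e.g. GPIO pin numbers).
    fire_light(light_id, on): hypothetical callback switching one light.
    trigger_shutter(): hypothetical callback closing the camera's
        remote-release circuit.
    Returns the ordered event log so the sequence can be checked.
    """
    log = []
    for lid in light_ids:
        fire_light(lid, True)           # light on before the exposure starts
        log.append(("light_on", lid))
        trigger_shutter()               # open the shutter
        log.append(("shutter", lid))
        time.sleep(exposure_s)          # hold the light through the exposure
        fire_light(lid, False)          # light off
        log.append(("light_off", lid))
        time.sleep(settle_s)            # let the camera write the file
    return log
```

The same ordering applies whether the lights are constant LEDs driven by MOSFETs or strobes fired through an optoisolated trigger; only the callbacks change. With strobes, the usual variant is to open the shutter first and fire the flash via the hot shoe or PC sync instead of timing it in software.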
  2. Hi all, we have recently released to the public our photogrammetry & RTI project, which we have been working on for over two years. We were inspired by some of your posts here about the dome approach, and we decided we should share our results with you.

     Our aim was to create a cheap, affordable photogrammetric modeling and RTI imaging device that anyone can build and that does not cost thousands of euros. For that we needed multiple cameras, LED lights, and a stepper motor, which lets us rotate an object placed on a turntable and take photos all around it. At the heart of our device are a Raspberry Pi 4 and four ArduCAM cameras. We were aiming at small objects, artifacts no bigger than 100-120 mm.

     In the first iteration of our device we used a small dome (32 cm in diameter; an aluminum bowl from IKEA...) with 40 LED lights and 4 cameras. This version was successfully tested on Cyprus about two years ago, where we gathered extremely valuable experience. In the second iteration we left the bowl approach and built a spider-like frame, with 10 arms for the LED strips (12 LEDs on each) and one separate arm for the cameras. It is much more mobile than the previous version because we can disassemble it, which makes it easier to work with. Because we noticed in the first iteration that depth of field was an issue, we went with higher-resolution cameras that have motorized focus, so we can use focus stacking to increase our depth of field. For the LED strips we used flexible PCBs, connected together with FPC cables; we control them through our own shield that sits on top of the Raspberry Pi 4.

     Below are two links to my LinkedIn posts, as I cannot upload images here:
     https://www.linkedin.com/posts/marcin-k%C5%82%C4%99bowski-458bb5146_photogrammetry-rti-textured-activity-6766022368184860672-n2MF
     https://www.linkedin.com/posts/marcin-k%C5%82%C4%99bowski-458bb5146_photogrammetry-rti-ptm-activity-6766744323838013440-z9yn

     Thanks for the inspiration, and let us know what you think about our device. I am sure we will share more results as we start testing and creating more RTI images of objects.

     Marcin
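The focus-stacking capture Marcin describes comes down to two small steps: pick evenly spaced focus-motor positions between the near and far limits, then grab one frame at each position for later stacking. A hedged Python sketch of that logic; `set_focus` and `capture` are hypothetical callbacks standing in for whatever driver the motorized lenses actually use:

```python
def focus_stack_positions(near, far, steps):
    """Evenly spaced focus-motor positions from near to far, inclusive."""
    if steps < 2:
        return [near]
    span = (far - near) / (steps - 1)
    return [round(near + i * span) for i in range(steps)]

def capture_stack(positions, set_focus, capture):
    """Drive the (hypothetical) focus motor and grab one frame per position."""
    frames = []
    for p in positions:
        set_focus(p)              # move the motorized lens
        frames.append(capture())  # grab a frame for later stacking
    return frames
```

The frames would then go to a stacking tool (e.g. enfuse or a commercial stacker) before entering the photogrammetry pipeline; the sketch only covers the capture side.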
  3. Greetings! Our article "Two new ways of documenting miniature incisions using a combination of Image-Based Modelling and Reflectance Transformation Imaging" is available in Open Access at https://www.mdpi.com/720360. Best regards, Dag-Øyvind Solem and Erich Nau, Norwegian Institute of Cultural Heritage Research.

     Abstract: Digital 3D documentation methods such as Image-Based Modelling (IBM) and laser scanning have become increasingly popular for the recording of entire archaeological sites and landscapes, excavations, and single finds during the last decade. However, they have not been applied to any significant degree to miniature incisions such as graffiti. In the same period, Reflectance Transformation Imaging (RTI) has become one of the most popular methods used to record and visualize this kind of heritage, though it lacks the benefits of 3D documentation. The aim of this paper is to introduce two new ways of combining IBM and RTI, and to assess these different techniques in relation to factors such as usability, time-efficiency, cost-efficiency, and accuracy. A secondary aim is to examine the influence of two different 3D processing software packages on these factors: the widely used Metashape (MS) and a more expensive option, RealityCapture (RC). The article shows that there is currently no recording technique that is optimal with regard to all four aforementioned factors, and the way to record and produce results must be chosen based on a prioritization of these. However, we argue that the techniques combining RTI and IBM may be the overall best ways to record miniature incisions. One of these combinations is time-efficient and relatively cost-efficient, and the results have high usability even though the 3D models generated have low accuracy. The other combination has low time- and cost-efficiency but generates the most detailed 3D models of the techniques tested. In addition to cost-efficiency, the main difference between the 3D software packages tested is that RC is much faster than MS. The accuracy assessment remains inconclusive; while RC generally produces more detailed 3D models than MS, there are also areas of these models where RC creates more noise than MS.
  4. Cultural Heritage Imaging (CHI) offers some free resources to people adopting the practice of photogrammetry. In addition, our experts are available for paid consulting and/or training. Here are some resources not to be missed.

     1. Videos describing key principles of good photogrammetric capture: https://vimeo.com/channels/practicalphotogrammetry — see also our photogrammetry technology overview: http://culturalheritageimaging.org/Technologies/Photogrammetry/
     2. This free user forum, where folks in the community help answer questions about RTI and photogrammetry. We aim to complement the resources offered by Agisoft PhotoScan and other software packages, as they have their own communities; however, discussions about equipment, capture tips, and so on are welcome here: http://forums.culturalheritageimaging.org/
     3. We sell calibrated scale bars that help you get precise, real-world measurements into your projects, and we offer a free "tips and tricks" guide for working with scale bars on the PhotoScan website (find the link on this page): http://culturalheritageimaging.org/What_We_Offer/Gear/Scale_Bars/index.html
     4. We offer regular 4-day training classes in photogrammetry in our studio in San Francisco and in other locations. Sometimes a host institution will offer space, purchase some seats, and allow the remaining seats to be sold. You can learn more about our photogrammetry training here: http://culturalheritageimaging.org/What_We_Offer/Training/photogram_training/index.html
     5. Finally, we offer custom consulting to help folks adopt and use photogrammetry and RTI. That can take a variety of forms, including video, email, and Dropbox projects where we can review work and give feedback. Learn more about our consulting here: http://culturalheritageimaging.org/What_We_Offer/Consulting/
  5. Hi, I've been using the photogrammetry workflow CHI teaches to record archaeological sites/features. I've found that mounting a camera on a painter's pole with an adapter at the end, holding the camera about 10 feet above the site, works quite well and allows a lot of control over the pictures. But I quickly end up with hundreds of photographs for even a modest-sized site. So I'm looking at using a drone to cover larger areas. I've been looking at the DJI Inspire 2 drone with the Zenmuse X5S camera: a 20 MP Micro Four Thirds camera that allows interchangeable lenses, shoots DNG RAW, and seems to allow manual focus and aperture-priority shooting. Has anyone used it? Or are there any good drone/camera combinations anyone would recommend? Brian
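For planning coverage like Brian describes, the photo count follows from the ground footprint of a single frame (sensor dimension divided by focal length, scaled by altitude) and the chosen forward/side overlaps. A rough Python sketch of that arithmetic; the sensor and lens numbers in the usage note below are illustrative assumptions, so check the actual camera's spec sheet:

```python
import math

def ground_footprint(sensor_mm, focal_mm, altitude_m):
    """Ground coverage (m) of one photo along one sensor dimension."""
    return sensor_mm / focal_mm * altitude_m

def photo_count(site_w_m, site_h_m, foot_w, foot_h,
                forward_overlap=0.8, side_overlap=0.6):
    """Rough number of photos to cover a site with the given overlaps."""
    step_w = foot_w * (1 - side_overlap)     # spacing between flight lines
    step_h = foot_h * (1 - forward_overlap)  # spacing along each line
    lines = math.ceil(site_w_m / step_w) + 1
    per_line = math.ceil(site_h_m / step_h) + 1
    return lines * per_line
```

For example, assuming a roughly 17.3 mm wide sensor with a 15 mm lens at 30 m altitude, one frame covers about 34.6 m of ground; a 100 m x 100 m site at 80 % forward / 60 % side overlap then needs on the order of 190 photos, which is why a drone with waypoint flight quickly beats a pole for larger sites.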
  6. Hello everybody! I'm new to this forum, so I thank you all in advance for any suggestions. On the same day, I carried out two consecutive photogrammetric captures of the same object (an oil on canvas) using, for both sessions, controlled illumination and the same camera network, similar to the one described on the CHI website (that is, normal landscape photos plus overlapping convergent portrait photos). The images were processed with the same workflow in PhotoScan Pro up to the generation of the orthophotos, using only the normal images to avoid unwanted specular reflections; the color correction option was enabled to minimise exposure differences between the single orthos. A Digital ColorChecker SG was also captured in the scene to provide a colour reference and to check the quality of the final result against the Metamorfoze guidelines.

     The aim of my test is to compare the orthorectified images of the painting to check for possible colour variations in paintings. I exported the orthomosaics of the two models, and neither passed the quality check on Delt.ae, since the patches of the DCSG were altered and fell outside the Metamorfoze tolerance ranges after the assignment of the ICC profile. For the sake of research, I registered them and computed their difference in ImageJ2. In theory the two orthomosaics should be identical (or at least very similar), but the result (below) shows a sort of random blending between the orthos.

     What does PhotoScan do when it creates the single orthos?! How does it blend them to create the orthomosaics? Has anyone else ever noticed this? Given this result, I wonder whether photogrammetry in general can accurately record only 3D information while making random approximations when forming orthorectified images (or is it just a problem with PhotoScan?). Thank you in advance for any suggestions, literature, processes, or hints! Camilla
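One way to quantify the mismatch Camilla saw in ImageJ2 is to compute per-pixel difference statistics between the two registered orthomosaics. A minimal pure-Python sketch of that check, working on grayscale values in 0-1 (one channel is enough for a sanity check; run it per channel for RGB, and in practice load the exported orthos with an image library first):

```python
def ortho_difference_stats(a, b, tol=2 / 255):
    """Per-pixel absolute difference between two registered orthomosaics.

    a, b: equal-sized 2D lists of pixel values in 0-1.
    Returns (mean difference, max difference, fraction of pixels whose
    difference exceeds tol), a rough check of whether two supposedly
    identical captures really match.
    """
    diffs = [abs(pa - pb)
             for row_a, row_b in zip(a, b)
             for pa, pb in zip(row_a, row_b)]
    n = len(diffs)
    over = sum(1 for d in diffs if d > tol)
    return sum(diffs) / n, max(diffs), over / n
```

Identical orthos give (0.0, 0.0, 0.0); a large fraction of pixels over tolerance, concentrated in patch-shaped regions, would point at the mosaic blending step rather than at the underlying geometry.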