Showing results for tags 'photogrammetry'.

Found 32 results

  1. Hi there, I'm starting an imaging program at a maritime museum. I know from speaking to colleagues that many maritime collections are not well recorded. I'm hoping to use an RTI/photogrammetry method to record these works and create 3D models of them for study. Because time is a major factor, I hope to optimize my image capture through automation. I will have a smallish studio, a 50 MP Canon 5DS, and an excellent computer (which I'm building, optimized for photogrammetry, and which should also support RTI). Jorge Cano at the Factum Foundation has proposed a method using just four lights (15 degrees at NSEW) to capture up to 50nm resolution. The four images are put into Adobe Substance Maker to obtain a heightmap, which is then brought into GIS software for stitching and precise height/normal mapping. Whether I use Cano's method or RTI, I would like to build a folding dome capable of imaging at least an 8in x 8in section of each artwork. (I will be visiting multiple museums, so portability is important.) I will probably build my own dome, to which I will attach lights, and which will allow me to mount my camera at the center. I need guidance on what lights to use (strobe, flash, LED, etc.) and how to trigger the lights sequentially in sync with the camera's shutter. There are a lot of references to Arduino boards, custom PCBs, etc. A company called RTI-Dome in France has a fully automated system for image capture and filing. Custom Imaging also has an automated system. I don't know whether either of these can fire LEDs strong enough for larger-scale objects. Any help/direction would be much appreciated. Nick Raposo americanmarineart.com
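Whatever hardware ends up driving the dome, the control loop for sequential capture is simple: light one LED, trip the shutter, move on. Here is a minimal sketch of that loop in Python; `set_light` and `fire_shutter` are hypothetical callbacks standing in for whatever interface (Arduino over serial, Raspberry Pi GPIO, a commercial controller) actually switches the hardware — this is not any vendor's real API.

```python
import time

def capture_sequence(light_count, set_light, fire_shutter, settle=0.05):
    """One capture pass: light each LED in turn and trip the shutter
    while it is lit. set_light(index, on) and fire_shutter() are
    user-supplied callbacks for the actual hardware."""
    fired = []
    for i in range(light_count):
        set_light(i, True)
        time.sleep(settle)      # let the LED reach full, stable output
        fire_shutter()
        fired.append(i)
        set_light(i, False)
    return fired
```

A dry run with recording callbacks confirms the ordering before any hardware is attached.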
  2. Hi all, Recently we released to the public our Photogrammetry & RTI project, which we have been working on for over 2 years. We were inspired by some of your posts here about the dome approach, and we decided we should share our results with you. Our aim was to create a cheap, affordable device for photogrammetric modeling as well as RTI imaging, one that anyone can build and that does not cost thousands of euros. For that, we needed multiple cameras, LED lights, and a stepper motor, which allowed us to rotate an object placed on the rotating table and take photos around it. At the heart of the device we used a Raspberry Pi 4 and 4 ArduCAM cameras. We were aiming at small objects, artifacts no bigger than 100-120mm. In the first iteration of our device we used a small (32cm in diameter) dome (an aluminum bowl from IKEA….) with 40 LED lights and 4 cameras. This version was successfully tested in Cyprus about 2 years ago, where we gathered extremely valuable experience. In the second iteration we decided to leave the bowl approach and create a spider-like shape, with 10 arms for the LED strips (12 LEDs on each) and 1 separate arm for the cameras. It is much more mobile than the previous version, as we can disassemble it, which makes it easier to work with. Because we noticed in the first iteration that depth of field was an issue, we decided to go with higher-resolution cameras that have motorized focus, so we can use focus stacking to increase our depth of field. For the LED strips we used flexible PCBs, which we connected together with FPC cables. We control them through our shield that sits on top of the Raspberry Pi 4. Below are 2 links to my LinkedIn posts, as I cannot upload images here. 
https://www.linkedin.com/posts/marcin-k%C5%82%C4%99bowski-458bb5146_photogrammetry-rti-textured-activity-6766022368184860672-n2MF https://www.linkedin.com/posts/marcin-k%C5%82%C4%99bowski-458bb5146_photogrammetry-rti-ptm-activity-6766744323838013440-z9yn Thanks for the inspiration, guys, and let us know what you think about our device. I am sure we will share more results as we start testing and creating more RTI images of objects. Marcin
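The focus-stacking idea described above comes down to this: for each pixel, keep the sample from whichever exposure is locally sharpest. A toy one-dimensional sketch of that selection rule (my own simplification for illustration, not the project's actual firmware):

```python
def sharpness(row, i):
    """Crude local-sharpness proxy: absolute difference to the next sample."""
    j = min(i + 1, len(row) - 1)
    return abs(row[i] - row[j])

def focus_stack(rows):
    """Merge several 1-D 'slices' of the same scene, each sharp in a
    different region, by taking the locally sharpest sample at each pixel."""
    width = len(rows[0])
    out = []
    for i in range(width):
        best = max(rows, key=lambda r: sharpness(r, i))
        out.append(best[i])
    return out
```

Real focus stacking works on 2-D images with a windowed sharpness measure (e.g. a Laplacian), but the per-pixel "take the sharpest source" decision is the same.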
  3. Greetings! Our article "Two new ways of documenting miniature incisions using a combination of Image-Based Modelling and Reflectance Transformation Imaging" is available in Open Access at https://www.mdpi.com/720360. Best regards, Dag-Øyvind Solem and Erich Nau, Norwegian Institute of Cultural Heritage Research Abstract: Digital 3D documentation methods such as Image-Based Modelling (IBM) and laser scanning have become increasingly popular for the recording of entire archaeological sites and landscapes, excavations and single finds during the last decade. However, they have not been applied in any significant degree to miniature incisions such as graffiti. In the same period, Reflectance Transformation Imaging (RTI) has become one of the most popular methods used to record and visualize this kind of heritage, though it lacks the benefits of 3D documentation. The aim of this paper is to introduce two new ways of combining IBM and RTI, and to assess these different techniques in relation to factors such as usability, time-efficiency, cost-efficiency and accuracy. A secondary aim is to examine the influence of two different 3D processing software packages on these factors: the widely used MetaShape (MS) and a more expensive option, RealityCapture (RC). The article shows that there is currently no recording technique that is optimal regarding all four aforementioned factors, and the way to record and produce results must be chosen based on a prioritization of these. However, we argue that the techniques combining RTI and IBM might be the overall best ways to record miniature incisions. One of these combinations is time-efficient and relatively cost-efficient, and the results have high usability even though the 3D models generated have low accuracy. The other combination has low time- and cost-efficiency but generates the most detailed 3D models of the techniques tested. 
In addition to cost-efficiency, the main difference between the 3D software packages tested is that RC is much faster than MS. The accuracy assessment remains inconclusive; while RC generally produces more detailed 3D models than MS, there are also areas of these models where RC creates more noise than MS.
  4. Cultural Heritage Imaging (CHI) offers some free resources to people adopting the practice of photogrammetry. In addition, our experts are available for paid consulting and/or training. Here are some resources not to be missed. 1. Videos describing key principles of good photogrammetric capture: https://vimeo.com/channels/practicalphotogrammetry See also our Photogrammetry technology overview: http://culturalheritageimaging.org/Technologies/Photogrammetry/ 2. This, our free user forum, where folks in the community help answer questions about RTI and photogrammetry. We aim to complement the resources offered by Agisoft PhotoScan and other software packages, as they have their own communities; however, discussions about equipment, capture tips, and so on are welcome here: http://forums.culturalheritageimaging.org/ 3. We sell calibrated scale bars that help users get precise, real-world measurements into their projects. We also offer a free "tips and tricks" guide for working with scale bars in PhotoScan (find the link on this page): http://culturalheritageimaging.org/What_We_Offer/Gear/Scale_Bars/index.html 4. We offer regular 4-day training classes in photogrammetry in our studio in San Francisco and in other locations. Sometimes a host institution will offer space, purchase some seats, and allow the remaining seats to be sold. You can learn more about our photogrammetry training here: http://culturalheritageimaging.org/What_We_Offer/Training/photogram_training/index.html 5. And finally, we offer custom consulting to help folks adopt and use photogrammetry and RTI. That can take a variety of forms, including video, email, and projects in Dropbox where we can review work and give feedback. Learn more about our consulting offering here: http://culturalheritageimaging.org/What_We_Offer/Consulting/
  5. Hi, I've been using the photogrammetry workflow CHI teaches to record archaeological sites/features. I've found that mounting a camera on a painter's pole with a camera adapter at the end, holding the camera about 10 feet or so above the site, works quite well and allows for a lot of control over the pictures. But I quickly end up with hundreds of photographs for even a modest-sized site. So, I'm looking at using a drone to cover larger areas. I've been looking at the DJI Inspire 2 drone with the Zenmuse X5S camera. It's a 20mp micro 4/3 camera that allows for interchangeable lenses, shoots DNG RAW, and seems to allow manual focus and aperture-priority shooting. Has anyone used it? Or are there any good drone/camera combinations anyone would recommend? Brian
  6. Hello to everybody! I'm new to this forum, so I thank you all in advance for any suggestions. On the same day, I carried out two consecutive photogrammetric captures of the same object -- an oil on canvas -- using, for both sessions, controlled illumination and the same camera network, similar to the one indicated on the CHI website (that is, normal landscape photos plus overlapping convergent portrait photos). The images were processed using the same workflow in PhotoScan Pro up to the generation of the orthophotos; only the normal images were used, to avoid unwanted specular reflections, and the color correction option was enabled to minimise exposure differences between the single orthos. A Digital ColorChecker SG was also captured in the scene to provide a colour reference and to check the quality of the final result against the Metamorfoze guidelines. The aim of my test is to compare the orthorectified images of the painting to check for possible colour variations in paintings. I have exported the orthomosaics of the two models, and both of them failed the quality check on Delt.ae, since the patches of the DCSG were altered and fell outside the Metamorfoze tolerance ranges after the assignment of the ICC profile. For the sake of research, I registered them and computed their difference in ImageJ2. Theoretically, the two orthomosaics should be identical (or at least very similar), but the result (below) shows a sort of random blending between the orthos. What does PhotoScan do when it creates the single orthos?! How does it blend them to create the orthomosaics? Has any of you ever noticed this? Given this result, I am wondering whether photogrammetry in general is capable of accurately recording only 3D information, making random approximations in the formation of orthorectified images (or is it just a problem with PhotoScan?). Thank you in advance for any suggestions, solutions, literature, processes, or hints! Camilla
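For anyone wanting to reproduce the ImageJ2 difference check described above, the core operation is just a per-pixel absolute difference plus a summary statistic. A minimal sketch, treating grayscale images as nested lists (a hypothetical simplification, not ImageJ's implementation):

```python
def image_difference(a, b):
    """Per-pixel absolute difference of two equal-sized grayscale images
    (lists of rows), plus the mean difference as a quick similarity score.
    A mean near zero suggests the two orthomosaics agree; structured,
    nonzero regions reveal blending differences."""
    diff = [[abs(p - q) for p, q in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]
    flat = [p for row in diff for p in row]
    return diff, sum(flat) / len(flat)
```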
  7. Hi there, I am very new to photogrammetry and I have an image capture/lighting question. I want to do photogrammetry on a somewhat large indoor sculpture. I will be working in the round, but I am going to have to use external flashes on stands for lighting, and I only have 2 lights at my disposal. My question is: will I need to move the lights around the sculpture incrementally (every ten degrees or so) as I reposition the camera by moving around the sculpture? Or can I leave the lights in more of a fixed position and move them only at, say, every 9th camera position, assuming there are 36 camera positions (i.e., every quarter of the way around)? Thank you in advance. Crystal
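For planning a shoot like this, the 36 stations are just evenly spaced points on a circle around the object. A small sketch that generates them (a hypothetical helper for planning, not from any particular package):

```python
import math

def camera_ring(n_stations, radius):
    """Return (x, y, azimuth_degrees) for n evenly spaced camera stations
    on a circle of the given radius around an object at the origin."""
    step = 360.0 / n_stations
    stations = []
    for i in range(n_stations):
        az = i * step
        stations.append((radius * math.cos(math.radians(az)),
                         radius * math.sin(math.radians(az)),
                         az))
    return stations
```

With 36 stations the step is 10 degrees, so "every 9th position" is indeed a quarter of the way around (90 degrees).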
  8. CHI has updated its Photogrammetry technology web page with detailed, careful instructions and images that explain how to capture photos properly for photogrammetry. The page also includes a new example: an embedded interactive 3D model of an Assyrian genie bas relief, hosted on Sketchfab. The imaging was done as part of a National Endowment for the Humanities (NEH) sponsored training session, hosted by the Los Angeles County Museum of Art (LACMA). See the updated "How to Capture" area of the web page with the Sketchfab example below.
  9. Don't miss CHI's next 4-day photogrammetry class in the fall: Monday, September 25 through Thursday, September 28, 2017 These classes have limited seating and tend to sell out, so please don't hesitate too long to register. You will find the reg form and other information about the class at this link: http://culturalheritageimaging.org/What_We_Offer/Training/photogram_training/index.html We hope you can join us!
  10. The US Bureau of Land Management has updated its link to the well-known Tech Note 428 (a downloadable PDF) by Neffra Matthews and Tom Noble: https://www.blm.gov/nstc/library/pdf/TN428.pdf CHI has updated the link on the Photogrammetry technology page of our website: http://culturalheritageimaging.org/Technologies/Photogrammetry/
  11. We are pleased to announce the next photogrammetry training class at Cultural Heritage Imaging in San Francisco for January 30 - February 2. Learn more and register here: http://culturalheritageimaging.org/What_We_Offer/Training/photogram_training/index.html Read blog posts from 2 different attendees about their training experiences during 2016: https://culturalheritageimaging.wordpress.com/2016/02/22/four-days-with-chi-reflections-on-januarys-photogrammetry-training/ http://www.conservationaffair.com/blogreel/everything-is-better-in-3d Hope you can join us this winter! Carla
  12. At the Minneapolis Institute of Art we're now doing photogrammetry of medium-sized objects with a robot turntable/swing arm, and with each object we've been photographing a data set where the CHI photogrammetry scales occlude the object in many images. For now, I'm also photographing the objects and scales as 'flat,' just like I learned at CHI, but I'm theorizing that the measurement data we'll get from the scales on the turntable will be much more robust. Here's the shooting and PhotoScan breakdown:
      - Photograph the object on the turntable with no scale bars from multiple rotations and elevations (we've been making 36 columns and nine elevations, from 0-88 degrees, but fewer columns for the top elevations). Also photograph empty backgrounds for auto masking.
      - Photograph the object with two scales occluding the object and as close as possible to the object, and two scales on the turntable's surface (with fewer photos in this set; four elevations, from 0-66 degrees).
      - Use the first set of images as one Camera Group in PhotoScan, and the scales as another Camera Group. Align photos to make a sparse point cloud.
      - Refine the sparse cloud using CHI/BLM's magical method. Add scale.
      - Remove (or turn off) all images from the scales dataset.
      - Build the dense cloud, et cetera.
      PhotoScan is identifying the scales on the turntable with no problem; it feels to me like having a much larger data set full of scales will produce better scale information than a set where the three-dimensional art is treated as a flat object. And I'm happy to report that scale is traveling with exported objects - this figure arrived in an OBJ-reading program with a size of .67 units (apparently there's no set unit in a lot of these programs), and it's 67 cm tall: https://sketchfab.com/models/8217886808944db3b3a01734d604cdd6 What do you think?
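The unit behaviour described above (.67 units for a 67 cm figure) follows from a single scale factor derived from one known distance, such as a calibrated scale bar. A quick sketch of the arithmetic (a hypothetical helper, not PhotoScan's API):

```python
def scale_factor(model_distance, real_distance_cm):
    """Multiplier converting model units to centimetres, derived from one
    distance that is known in both systems (e.g. a calibrated scale bar)."""
    return real_distance_cm / model_distance

def apply_scale(points, factor):
    """Rescale a list of (x, y, z) model-space points into real units."""
    return [tuple(c * factor for c in p) for p in points]
```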
  13. Despite all my efforts to defeat this problem, it continues to be a bug in my workflow for photogrammetry. To get a good lens calibration in Photoscan, you shoot calibration images by rotating the camera -90 and +90 degrees along with the horizontal images in a sequence of overlapping positions. I've set my camera not to rotate pictures, so the calibration shots should appear as normal horizontal images, with the top of the object in the image facing right or left, depending on the camera rotation. However, the camera exif data apparently still records the image orientation and when I import the images into Lightroom, they're all rotated vertically so the top of the object is pointing up and the images are in portrait orientation, not landscape. Lightroom 4 doesn't allow you to set a preference to ignore image orientation during import, so it automatically rotates the images whether you like it or not. When you export the images as TIFFs or JPEGs, they're all rotated so the object appears oriented right-side-up. I've tried using Photoshop CS6 to open the same DNGs that were imported into Lightroom, after I set PS6 preferences to ignore image orientation, but it still rotates them into portrait mode. Photoscan has a pair of buttons that allow you to rotate the images right or left 90 degrees after they're uploaded to the workspace window, but this doesn't affect how Photoscan uses the images for calibration--Photoscan still thinks the sensor has portrait dimensions with the shorter side at the bottom, instead of the longer side. Therefore, Photoscan groups all the vertically oriented images into a separate calibration instead of using them to refine the calibration for the horizontal images. This is a maddening problem because once you've created masks for your images, the masks have to have the same orientation as the images or Photoscan won't allow you to apply the same masks that you created using rotated images to the unrotated images. 
If you un-rotate the original images, you also have to remove rotation information for the masks to allow them to align properly, or Photoscan won't let you re-import the masks. I've heard that some use Windows Explorer to remove image orientation data, but I don't have Windows on all my Macs, and I've heard there are also problems with Windows applying lossy compression to JPEGs when it rotates them--very bad behavior! How do I defeat the image rotation problem? This gets very time consuming for projects with hundreds of images.
  14. Various folks have asked about turntables for doing subjects in the round. Here's a good option that has very smooth movement and can handle fairly heavy weights: Shimpo banding wheels: http://shimpoceramics.com/bandingwheels.html They aren't too expensive. They don't have degree markings, but these could be added with tape or other means. We modified our turntables that way. Make sure to put any marks on the edge, not the top, so they are less likely to show up in your images. It would be great if others could post manual turntable suggestions here as well. (If someone wants to start another topic for automated ones, I think that's great too.) Carla
  15. Hi Folks, We are pleased to announce that we are now offering calibrated scale bars specifically designed for photogrammetry use. We partnered with Tom Noble and Neffra Matthews at the US Bureau of Land Management on this product. The design comes from decades of photogrammetry experience on their end, including making their own scale bars, since there wasn't a product that did what they wanted. These use both coded and non-coded targets. Cultural Heritage Imaging calibrates them to within 1/10 mm accuracy. We also provide a user guide, and we package them for sale and manage orders. If you want to learn more and/or order a set, check out our scale bars page: http://culturalheritageimaging.org/What_We_Offer/Gear/Scale_Bars/index.html Carla
  16. Nowadays, the capabilities of photogrammetric software are amazing. You may wonder, "Can a video captured by drone be used to create a 3D model of a cultural heritage object?" The answer is yes. In this tutorial, you will create a georeferenced and measurable 3D model in Agisoft PhotoScan from a YouTube video captured by a DJI Inspire 1, and view it with Sputnik GIS. Source video (4k / 2160p): http://www.youtube.com/watch?v=f5nznXK5IrQ 3D model created using Agisoft PhotoScan: If you're interested, you can read the tutorial.
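When pulling stills from a video for photogrammetry, the main decision is the sampling interval: frames must be far enough apart to change perspective but close enough to overlap well. A tiny sketch of that choice (my own hypothetical helper, not part of PhotoScan or Sputnik GIS):

```python
def frame_indices(total_frames, fps, seconds_between_stills):
    """Indices of the video frames to extract so that consecutive stills
    are seconds_between_stills apart -- a crude way to control overlap."""
    step = max(1, round(fps * seconds_between_stills))
    return list(range(0, total_frames, step))
```

For a 10-second 30 fps clip sampled every half second, this yields 20 stills; the extraction itself would then be done with a tool such as ffmpeg.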
  17. I wanted to let folks know that the 1.2 version of PhotoScan is now available. It's been in beta for a while, but its status just changed to be the currently shipping version. It is a free upgrade. Download it here: http://www.agisoft.com/downloads/installer/ Full change log is here: http://www.agisoft.com/pdf/photoscan_changelog.pdf Carla
  18. A panel of Style 7 Martis Complex petroglyphs at Donner Pass in the Sierra Nevada range in California was recently documented using a combination of photogrammetry and DStretch, a plug-in for ImageJ that uses Principal Components Analysis (PCA) to enhance color contrasts. Although DStretch has been used effectively to enhance rock art, wall paintings, and other art works, it has been assumed that it would not be effective for petroglyphs because they're created by carving or pecking, rather than using pigments. However, the Style 7 petroglyphs at Donner Pass were pecked into pink granite (Lake Mary tonalite, dated to approximately 95-120 million years before present). There is enough contrast between the pecked glyphs and weathered pink granite to allow DStretch to work. Photogrammetry was used to create a textured 3D model of the panel of petroglyphs using RGB images. The images were enhanced using the LRD setting in DStretch, and the model was re-textured using the DStretched images. Orthophotos from both versions of the model were then produced. A comparison of orthophotos without DStretch and with DStretch to enhance the contrast of the petroglyphs is here. A higher resolution orthophoto of the petroglyphs with DStretch can be found here (7 Mb).
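The decorrelation stretch behind DStretch can be illustrated on two image bands: rotate the data into its principal axes, equalise the spread along each axis, and rotate back, which exaggerates subtle colour differences. A toy pure-Python version for two bands (a simplification of the PCA-based idea, not DStretch's actual code):

```python
import math

def decorrelation_stretch(band1, band2):
    """Toy 2-band decorrelation stretch: rotate into principal axes,
    normalise the variance along each axis, rotate back, restore means."""
    n = len(band1)
    m1, m2 = sum(band1) / n, sum(band2) / n
    x = [v - m1 for v in band1]
    y = [v - m2 for v in band2]
    # 2x2 covariance matrix entries
    a = sum(v * v for v in x) / n
    c = sum(v * v for v in y) / n
    b = sum(u * v for u, v in zip(x, y)) / n
    # angle of the principal axis (closed form for the 2x2 case)
    theta = 0.5 * math.atan2(2 * b, a - c)
    ct, st = math.cos(theta), math.sin(theta)
    # project onto principal axes
    p = [u * ct + v * st for u, v in zip(x, y)]
    q = [-u * st + v * ct for u, v in zip(x, y)]
    # equalise the spread along each axis (guard against zero variance)
    sp = math.sqrt(sum(v * v for v in p) / n) or 1.0
    sq = math.sqrt(sum(v * v for v in q) / n) or 1.0
    p = [v / sp for v in p]
    q = [v / sq for v in q]
    # rotate back and restore the original means
    out1 = [m1 + u * ct - v * st for u, v in zip(p, q)]
    out2 = [m2 + u * st + v * ct for u, v in zip(p, q)]
    return out1, out2
```

After the stretch the two output bands are uncorrelated with equal variance, which is what makes faint contrasts (like pecked glyphs against weathered granite) stand out once the result is rescaled for display.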
  19. There are many techniques one can use to date a painting, but it is usually best to start with non-invasive methods. A good place to begin is by examining the painting's verso for canvas makers' stamps, gallery stamps, the construction of the stretcher, canvas weave and thread counts, and primings, among other features. In many cases, paintings have undergone past treatments, such as relining. In such instances, the original canvas and hence the canvas supplier's stamps or other stamps (e.g., duty stamps), if present, are not visible because they are covered by the relining. Non-invasive techniques such as multispectral imaging can be useful to reveal hidden features, but art works are unpredictable, and unique situations arise where a combination of methods is needed. Transmitted infrared imaging can sometimes reveal canvas makers' stamps on a relined canvas where other techniques, such as x-ray imaging, might fail. This is sometimes true in the case of a painting with a lead white ground layer, where the lead absorbs x-radiation but happens to be relatively transparent to transmitted infrared radiation (TIR). But what to do if the canvas stamps are partly obscured by the horizontal cross-brace of the stretcher? In this case study, an on-the-spot solution was devised, with the aid of 3D photogrammetry, to identify a previously unknown artists' colourman in mid-19th century London as the supplier of the canvas and to bracket the range of dates when the canvas could have been supplied, probably between c. 1844 and c. 1860. Because the canvas supplier's stamps were mostly hidden behind the central horizontal cross-brace of the stretcher, thin shims were placed between the canvas and stretcher to create a narrow gap, allowing TIR images to be captured at an angle, revealing most of the text of the stamps that would otherwise have remained hidden. However, reconstructing accurately scaled images of the stamps from the angled TIR images was a challenge. 
This is where photogrammetry proved to be very useful. A high-resolution 3D model of the painting's verso was constructed by capturing overlapping reflected visible-wavelength images that were processed using Agisoft PhotoScan software. Each of the four quadrants of the painting's verso (defined by the vertical and horizontal cross-braces of the stretcher) was also captured in both reflected visible light and TIR. The visible and TIR images were registered, so the TIR images could then be aligned with the 3D model by swapping them for the visible reflected images, and an orthorectified TIR image of the painting's verso could be constructed. The TIR orthophoto contained accurate scale information, which could be used to measure the overall dimensions of the stamps. By overlaying the angled TIR images of the stamps, which contained more of the stamps' textual information, onto the TIR orthophoto, the perspective distortion resulting from the capture angle could be accurately removed, and nearly complete, stitched images of the hidden stamps were reconstructed. Besides creating a more accurate and measurable record of the previously unrecorded stamps, the reconstructions allowed the canvas supplier's name, address, and dates of his business operations to be determined through further research, including London trade and post office directories and genealogical data. In addition to providing a date range for the stamps, the position of the stamps on the 3-ft x 4-ft landscape painting was significant, since they were rotated 90 degrees from horizontal and centered behind the horizontal cross-brace. This suggested that the canvas had been purchased separately and stretched by the artist (or perhaps by an assistant) on the original stretcher, since it was not a standard size that was widely commercially available during the period.
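The measurement step in this case study -- reading real dimensions off an orthophoto whose scale is known -- reduces to pixel distance times ground-sample distance. A minimal sketch (a hypothetical helper illustrating the arithmetic, not the software used in the study):

```python
import math

def ortho_measure(pt_a, pt_b, mm_per_pixel):
    """Real-world distance in mm between two pixel coordinates on an
    orthophoto with a known scale (mm per pixel), e.g. to measure the
    overall dimensions of a canvas stamp."""
    dx = pt_a[0] - pt_b[0]
    dy = pt_a[1] - pt_b[1]
    return math.hypot(dx, dy) * mm_per_pixel
```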
  20. We are pleased to announce the next 4-day training class offerings at Cultural Heritage Imaging's studio in San Francisco. Also note that CHI training can come to you! And if you want to be informed about, and even influence, the dates of future training classes, write to us to get on our interest list. Photogrammetry Training - October 6-9, 2015 This is your last chance in 2015 to learn how to apply photogrammetry, the practice of deriving 3D measurements from overlapping sequences of digital photographs to determine the size, shape, position, and texture of objects. The result is extremely dense, accurate quantitative data captured with standard digital camera equipment. Recent trainees say this about the class: “Very informative, very technical, useful to people in different industries” “Wonderful, amazing, and full of applicable techniques” “In-depth knowledge sharing that is not available anywhere else” Reflectance Transformation Imaging (RTI) Training - October 13-16 Get hands-on training in Reflectance Transformation Imaging (RTI), a core practice for creating digital representations of objects. You will leave this class able to implement the digital imaging workflow, including steps to capture, process, and view RTI digital representations. Some testimonials from previous trainees: “Extremely informative and incredibly useful” “Well thought-out and thorough: the small class size was a bonus for me” “Instructors were extremely engaging and explained in a way even I could understand!”
  21. Greetings, all! I have been following this forum for some time, but only recently became a member in order to ask the community for a bit of assistance. I am a graduate student studying archaeology at the University of Colorado Boulder, and I'm putting together an interactive museum exhibit that informs the public about the many applications of aerial photogrammetry. As part of the CU Aerospace Engineering School's "Grand Challenge", this exhibit will allow guests to simulate a drone mission over a scale model of the ancient Maya archaeological site of Tikal, Guatemala. Guests will take photos using a remotely operated camera suspended over the model and will be assisted by exhibit staff in processing the data using an automated script written for Agisoft Photoscan. Our team recently set up a crowdfunding page to raise some funds to improve the exhibit and buy promotional materials. I'm not sure if this is the appropriate place to submit a request like this, but any donations to the effort would be much appreciated. Here is the crowdfunding website: http://www.colorado.edu/crowdfunding/?cfpage=project&project_id=11371 Cheers to all of you fine folks for making this forum really useful! Best, Jeff Brzezinski
  22. Here's a story on BBC about Project Mosul to use crowd-sourced photos to virtually reconstruct objects destroyed in Mosul using photogrammetry. It's an opportunity to help with the virtual reconstruction of the lost artifacts. The authors of Project Mosul are coordinating with several organizations to apply a similar scheme to virtually reconstruct other damaged cultural heritage sites. It would be better to document these sites before they get destroyed, but as the song said, "you don't know what you've got 'til it's gone."
  23. We are pleased to announce that our next 4-day photogrammetry class will take place in San Francisco on May 18-21, 2015. More details and a registration form can be found here: http://culturalheritageimaging.org/What_We_Offer/Training/photogram_training/index.html The training at CHI is focused on core photogrammetric principles that produce high-quality, measurable, scientific data. The capture method (how you take the images) is independent of any software package, so you will collect image sets that can be used now and in the future. The processing is done in Agisoft PhotoScan Pro. We have worked in collaboration with senior photogrammetrists at the US Bureau of Land Management to develop the course and the methodology. The optimized processing workflow that we teach can't be found anywhere else. Come, learn, add new skills! Carla
  24. Hi folks, We are pleased to announce that our next 4-day photogrammetry training class will take place at our studio in San Francisco on May 18-21. Learn more, and find a registration form, here: http://culturalheritageimaging.org/What_We_Offer/Training/photogram_training/index.html Note that this class has been rescheduled from April 20-23 to May 18-21, for those of you who might have seen an earlier announcement. Carla
  25. Here's an interesting paper about the application of multiple techniques for the study of murals and graffiti: http://www.ijcs.uaic.ro/public/IJCS-15-03_Cosentino.pdf Although it's not mentioned in the paper, one of the authors (A. Cosentino) describes an Arduino distance meter to check the position of the speedlight while capturing RTIs of the murals: http://chsopensource.org/reflectance-transformation-imaging-rti-with-arduino/