Showing results for tags 'photogrammetry'.

Found 29 results

  1. Hi, I've been using the photogrammetry workflow CHI teaches to record archaeological sites and features. I've found that mounting a camera on a painter's pole with an adapter at the end, holding the camera about 10 feet above the site, works quite well and allows a lot of control over the pictures. But I quickly end up with hundreds of photographs for even a modest-sized site. So I'm looking at using a drone to cover larger areas. I've been looking at the DJI Inspire 2 drone with the Zenmuse X5S camera. It's a 20 MP Micro Four Thirds camera that allows interchangeable lenses, shoots DNG RAW, and appears to support manual focus and aperture-priority shooting. Has anyone used it? Or are there any good drone/camera combinations anyone would recommend? Brian
  2. Cultural Heritage Imaging (CHI) offers free resources to people adopting the practice of photogrammetry. In addition, our experts are available for paid consulting and/or training. Here are some resources not to be missed. 1. Videos describing key principles of good photogrammetric capture: https://vimeo.com/channels/practicalphotogrammetry See also our Photogrammetry technology overview: http://culturalheritageimaging.org/Technologies/Photogrammetry/ 2. This free user forum, where folks in the community help answer questions about RTI and photogrammetry. We aim to complement the resources offered by Agisoft PhotoScan and other software packages, since they have their own communities; discussions about equipment, capture tips, and so on are welcome here: http://forums.culturalheritageimaging.org/ 3. We sell calibrated scale bars that help users get precise, real-world measurements into their projects. We also offer a free "tips and tricks" guide for working with scale bars in PhotoScan (find the link on this page): http://culturalheritageimaging.org/What_We_Offer/Gear/Scale_Bars/index.html 4. We offer regular 4-day training classes in photogrammetry in our San Francisco studio and in other locations. Sometimes a host institution will offer space, purchase some seats, and allow the remaining seats to be sold. You can learn more about our photogrammetry training here: http://culturalheritageimaging.org/What_We_Offer/Training/photogram_training/index.html 5. And finally, we offer custom consulting to help folks adopt and use photogrammetry and RTI. That can take a variety of forms, including video, email, and shared Dropbox projects where we can review work and give feedback. Learn more about our consulting here: http://culturalheritageimaging.org/What_We_Offer/Consulting/
  3. Camilla Perondi

    Quality of orthophotos in PhotoScan

    Hello everybody! I'm new to this forum, so I thank you all in advance for any suggestions. On the same day, I carried out two consecutive photogrammetric captures of the same object (an oil on canvas) using, for both sessions, controlled illumination and the same camera network similar to the one described on the CHI website (normal landscape photos plus overlapping convergent portrait photos). The images were processed with the same workflow in PhotoScan Pro up to the generation of the orthophotos, using only the normal images to avoid unwanted specular reflections, and with the color correction option enabled to minimise exposure differences between the single orthos. A Digital ColorChecker SG was also captured in the scene as a colour reference, to check the quality of the final result against the Metamorfoze guidelines. The aim of my test is to compare the orthorectified images of the painting to check for possible colour variations in paintings. I have exported the orthomosaics of the two models, and neither passed the quality check on Delt.ae: after assignment of the ICC profile, the patches of the DCSG were altered and fell outside the Metamorfoze tolerance ranges. For the sake of research, I registered the two orthomosaics and computed their difference in ImageJ2. In theory the two orthomosaics should be identical (or at least very similar), but the result (below) shows a sort of random blending between the orthos. What does PhotoScan do when it creates the single orthos? How does it blend them to create the orthomosaics? Has anyone else noticed this? Given this result, I am wondering whether photogrammetry in general can only record 3D information accurately, making random approximations when forming orthorectified images (or is this just a PhotoScan problem?). Thank you in advance for any suggestions, solutions, literature, or hints! Camilla
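    For anyone who wants to reproduce this difference test outside ImageJ2, here is a minimal Python sketch. It assumes the two exported orthomosaics are already registered and share the same pixel dimensions; the file names are placeholders.

```python
# Minimal sketch of the orthomosaic difference test described above.
# Assumes both orthos are already registered, share the same dimensions,
# and can be read as 8-bit RGB; file names are placeholders.
import numpy as np
from PIL import Image

ortho_a = np.asarray(Image.open("ortho_session1.tif").convert("RGB"), dtype=np.int16)
ortho_b = np.asarray(Image.open("ortho_session2.tif").convert("RGB"), dtype=np.int16)

diff = np.abs(ortho_a - ortho_b)  # per-pixel, per-channel absolute difference
print("mean abs difference per channel:", diff.mean(axis=(0, 1)))
print("max abs difference:", diff.max())

# Save a visualization; two truly identical captures would be near black.
Image.fromarray(diff.astype(np.uint8)).save("ortho_difference.tif")
```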
  4. Hi there, I am very new to photogrammetry and I have an image capture/lighting question. I want to do photogrammetry on a somewhat large indoor sculpture. I will be working in the round, but I am going to have to use external flashes on stands for lighting, and I only have 2 lights at my disposal. My question is: will I need to move the lights around the sculpture incrementally (every ten degrees or so) as I reposition the camera around the sculpture? Or can I leave the lights in a more fixed position and move them at, say, every 9th camera position, assuming there are 36 camera positions (i.e., every quarter of the way around)? Thank you in advance. Crystal
  5. CHI has updated its Photogrammetry technology web page with detailed, careful instructions and images that explain how to capture photos properly for photogrammetry. The page also includes a new example: an embedded interactive 3D model of an Assyrian genie bas-relief, hosted on Sketchfab. The imaging was done as part of a National Endowment for the Humanities (NEH) sponsored training session hosted by the Los Angeles County Museum of Art (LACMA). See the updated "How to Capture" area of the web page with the Sketchfab example below.
  6. Don't miss CHI's next 4-day photogrammetry class this fall: Monday, September 25 through Thursday, September 28, 2017. These classes have limited seating and tend to sell out, so please don't wait too long to register. You will find the registration form and other information about the class at this link: http://culturalheritageimaging.org/What_We_Offer/Training/photogram_training/index.html We hope you can join us!
  7. The US Bureau of Land Management has updated its link to the well-known Tech Note 428 (a downloadable PDF) by Neffra Matthews and Tom Noble: https://www.blm.gov/nstc/library/pdf/TN428.pdf CHI has updated the link on the Photogrammetry technology page on our website: http://culturalheritageimaging.org/Technologies/Photogrammetry/
  8. We are pleased to announce the next photogrammetry training class at Cultural Heritage Imaging in San Francisco for January 30 - February 2. Learn more and register here: http://culturalheritageimaging.org/What_We_Offer/Training/photogram_training/index.html Read blog posts from 2 different attendees about their training experiences during 2016: https://culturalheritageimaging.wordpress.com/2016/02/22/four-days-with-chi-reflections-on-januarys-photogrammetry-training/ http://www.conservationaffair.com/blogreel/everything-is-better-in-3d Hope you can join us this winter! Carla
  9. Charles Walbridge

    Photographing scales on the turntable

    At the Minneapolis Institute of Art we're now doing photogrammetry of medium-sized objects with a robot turntable/swing arm, and with each object we've been photographing a data set where the CHI photogrammetry scales occlude the object in many images. For now, I'm also photographing the objects and scales 'flat,' just as I learned at CHI, but my theory is that the measurement data we'll get from the scales on the turntable will be much more robust. Here's the shooting and PhotoScan breakdown:
    - Photograph the object on the turntable with no scale bars from multiple rotations and elevations (we've been making 36 columns and nine elevations, from 0-88 degrees, with fewer columns for the top elevations). Also photograph empty backgrounds for auto-masking.
    - Photograph the object with two scales occluding the object, as close to it as possible, and two scales on the turntable's surface (with fewer photos in this set; four elevations, from 0-66 degrees).
    - Use the first set of images as one camera group in PhotoScan, and the scales as another camera group. Align photos to make a sparse point cloud.
    - Refine the sparse cloud using CHI/BLM's magical method. Add scale.
    - Remove (or turn off) all images from the scales dataset.
    - Build the dense cloud, et cetera.
    PhotoScan is identifying the scales on the turntable with no problem; it feels to me like a much larger data set full of scales will produce better scale information than a set where the three-dimensional art is treated as a flat object. And I'm happy to report that scale travels with exported objects: this figure arrived in an OBJ-reading program with a size of 0.67 units (apparently there's no set unit in many of these programs), and it's 67 cm tall: https://sketchfab.com/models/8217886808944db3b3a01734d604cdd6 What do you think?
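    If you script PhotoScan Pro, the "remove or turn off" step can be automated. Below is a rough sketch for PhotoScan's built-in Python console; the group label "Scales" is a placeholder for whatever the second camera group is actually called, and attribute names can vary between PhotoScan versions.

```python
# Sketch for PhotoScan Pro's Python console: disable every photo in the
# scale-bar camera group before building the dense cloud. "Scales" is a
# placeholder group label; API details vary between PhotoScan versions.
import PhotoScan

chunk = PhotoScan.app.document.chunk

scales_group = None
for group in chunk.camera_groups:
    if group.label == "Scales":
        scales_group = group

if scales_group is not None:
    for camera in chunk.cameras:
        if camera.group == scales_group:
            # Keeps the photo and its tie points in the project but
            # excludes it from dense-cloud generation.
            camera.enabled = False

PhotoScan.app.update()
```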
  10. Despite all my efforts to defeat it, this problem continues to be a bug in my photogrammetry workflow. To get a good lens calibration in PhotoScan, you shoot calibration images by rotating the camera -90 and +90 degrees along with the horizontal images in a sequence of overlapping positions. I've set my camera not to rotate pictures, so the calibration shots should appear as normal horizontal images, with the top of the object facing right or left depending on the camera rotation. However, the camera's EXIF data apparently still records the image orientation, and when I import the images into Lightroom they're all rotated so the top of the object points up and the images are in portrait orientation, not landscape. Lightroom 4 has no preference to ignore image orientation during import, so it rotates the images whether you like it or not; when you export the images as TIFFs or JPEGs, they're all rotated so the object appears right-side-up. I've tried using Photoshop CS6 to open the same DNGs that were imported into Lightroom, after setting CS6 preferences to ignore image orientation, but it still rotates them into portrait mode. PhotoScan has a pair of buttons for rotating images 90 degrees right or left after they're loaded into the workspace window, but this doesn't affect how PhotoScan uses the images for calibration: PhotoScan still thinks the sensor has portrait dimensions, with the shorter side at the bottom instead of the longer side. It therefore groups all the vertically oriented images into a separate calibration instead of using them to refine the calibration for the horizontal images. This is maddening because once you've created masks for your images, the masks must have the same orientation as the images, or PhotoScan won't let you apply masks created on rotated images to the unrotated images. If you un-rotate the original images, you also have to remove the rotation information from the masks so they align properly, or PhotoScan won't re-import them. I've heard that some people use Windows Explorer to remove image orientation data, but I don't have Windows on all my Macs, and I've also heard Windows applies lossy compression to JPEGs when it rotates them--very bad behavior! How do I defeat the image rotation problem? It gets very time-consuming for projects with hundreds of images.
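    One approach that may help is resetting the EXIF Orientation tag itself before import, so every application sees the frames as landscape. A command-line tag editor such as ExifTool can rewrite the tag losslessly; the Python sketch below does the same for exported JPEGs with Pillow. The folder names are placeholders, and note that re-saving a JPEG recompresses it, which is why a metadata-only tool is preferable for originals.

```python
# Minimal sketch: reset the EXIF Orientation flag on exported JPEGs so
# Lightroom/PhotoScan treat every frame as landscape. Re-saving a JPEG
# recompresses it; for a lossless fix (or for TIFF/DNG originals) a
# metadata editor such as ExifTool is the usual choice.
# Folder names are placeholders.
from pathlib import Path
from PIL import Image

ORIENTATION = 274  # standard EXIF tag ID for Orientation

out_dir = Path("exported_jpegs_fixed")
out_dir.mkdir(exist_ok=True)

for path in Path("exported_jpegs").glob("*.jpg"):
    with Image.open(path) as im:
        exif = im.getexif()
        if exif.get(ORIENTATION, 1) != 1:
            exif[ORIENTATION] = 1  # 1 = "normal", i.e. no rotation applied
        im.save(out_dir / path.name, exif=exif.tobytes(), quality=95)
```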
  11. cdschroer

    Manual turntable options

    Various folks have asked about turntables for doing subjects in the round. Here's a good option that has very smooth movement and can handle fairly heavy weights: Shimpo banding wheels: http://shimpoceramics.com/bandingwheels.html They aren't too expensive. They don't have degree markings, but these can be added with tape or other means; we modified our turntables that way. Make sure to put any marks on the edge, not the top, so they're less likely to appear in your images. It would be great if others could post manual turntable suggestions here as well. (If someone wants to start another topic for automated ones, I think that's great too.) Carla
  12. Hi Folks, We are pleased to announce that we are now offering calibrated scale bars specifically designed for photogrammetry use. We partnered with Tom Noble and Neffra Matthews at the US Bureau of Land Management on this product. The design draws on decades of photogrammetry experience on their end; they had been making their own scale bars because no existing product did what they wanted. The bars use both coded and non-coded targets. Cultural Heritage Imaging calibrates them to within 1/10 mm accuracy. We also provide a user guide, package them for sale, and manage orders. If you want to learn more and/or order a set, check out our scale bars page: http://culturalheritageimaging.org/What_We_Offer/Gear/Scale_Bars/index.html Carla
  13. Nowadays, the capabilities of photogrammetric software are amazing. You may wonder, "Can video captured by a drone be used to create a 3D model of a cultural heritage object?" The answer is yes. In this tutorial, you will create a georeferenced, measurable 3D model from a YouTube video captured by a DJI Inspire 1, process it in Agisoft PhotoScan, and view it with Sputnik GIS. Source video (4K / 2160p): http://www.youtube.com/watch?v=f5nznXK5IrQ 3D model created using Agisoft PhotoScan: If you're interested, you can read a tutorial.
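    PhotoScan does not ingest video directly, so the usual first step is extracting stills from the footage. Here is a minimal sketch with OpenCV; the file names and the sampling interval are placeholders, and the interval should be chosen so consecutive stills still overlap by roughly 60-80%.

```python
# Minimal sketch: extract every Nth frame from drone footage as stills
# for photogrammetry. File names and the sampling interval are
# placeholders; pick the interval to preserve 60-80% overlap.
import os
import cv2

EVERY_N = 30  # e.g. one still per second of 30 fps footage

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("drone_footage.mp4")
index = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % EVERY_N == 0:
        cv2.imwrite(f"frames/frame_{saved:05d}.jpg", frame)
        saved += 1
    index += 1
cap.release()
print(f"saved {saved} stills")
```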
  14. I wanted to let folks know that version 1.2 of PhotoScan is now available. It's been in beta for a while, but its status just changed to the current shipping version. It is a free upgrade. Download it here: http://www.agisoft.com/downloads/installer/ Full change log is here: http://www.agisoft.com/pdf/photoscan_changelog.pdf Carla
  15. A panel of Style 7 Martis Complex petroglyphs at Donner Pass in the Sierra Nevada range of California was recently documented using a combination of photogrammetry and DStretch, a plug-in for ImageJ that uses Principal Components Analysis (PCA) to enhance color contrasts. Although DStretch has been used effectively to enhance rock art, wall paintings, and other art works, it had been assumed that it would not work for petroglyphs, because they are created by carving or pecking rather than with pigments. However, the Style 7 petroglyphs at Donner Pass were pecked into pink granite (Lake Mary tonalite, dated to approximately 95-120 million years before present), and there is enough contrast between the pecked glyphs and the weathered pink granite for DStretch to work. Photogrammetry was used to create a textured 3D model of the panel from RGB images. The images were then enhanced using the LRD setting in DStretch, and the model was re-textured with the DStretched images. Orthophotos were produced from both versions of the model. A comparison of orthophotos without and with DStretch enhancement of the petroglyphs is here. A higher-resolution orthophoto of the petroglyphs with DStretch can be found here (7 MB).
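    For readers curious about what the enhancement actually does: a decorrelation stretch rotates pixel values into their principal components, equalizes the variance of each component, and rotates back. Below is a bare-bones RGB-only sketch of that core idea; DStretch's named presets such as LRD also transform into other colorspaces, which this sketch does not reproduce, and the file names are placeholders.

```python
# Bare-bones decorrelation stretch, the core idea behind DStretch:
# rotate pixels into their principal components, scale each component
# to a common standard deviation, then rotate back. DStretch presets
# (LRD, YDS, ...) also change colorspace; this RGB-only sketch omits that.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("petroglyph.jpg").convert("RGB"), dtype=np.float64)
pixels = img.reshape(-1, 3)

mean = pixels.mean(axis=0)
cov = np.cov(pixels - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Equalize variance across principal components (guard tiny eigenvalues).
target_sigma = 50.0
scale = np.diag(target_sigma / np.sqrt(np.maximum(eigvals, 1e-6)))
transform = eigvecs @ scale @ eigvecs.T

stretched = (pixels - mean) @ transform.T + mean
out = np.clip(stretched, 0, 255).reshape(img.shape).astype(np.uint8)
Image.fromarray(out).save("petroglyph_stretched.jpg")
```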
  16. There are many techniques one can use to date a painting, but it is usually best to start with non-invasive methods. A good place to begin is the painting's verso, looking for canvas makers' stamps, gallery stamps, the construction of the stretcher, canvas weave and thread counts, and primings, among other features. In many cases, paintings have undergone past treatments, such as relining; in such instances, the original canvas, and hence any canvas supplier's stamps or other stamps (e.g., duty stamps), are not visible because they are covered by the relining. Non-invasive techniques such as multispectral imaging can be useful for revealing hidden features, but art works are unpredictable, and unique situations arise where a combination of methods is needed. Transmitted infrared imaging can sometimes reveal canvas makers' stamps on a relined canvas where other techniques, such as x-ray imaging, might fail. This is sometimes true of a painting with a lead white ground layer, where the lead absorbs x-radiation but happens to be relatively transparent to transmitted infrared radiation (TIR). But what to do if the canvas stamps are partly obscured by the horizontal cross-brace of the stretcher? In this case study, an on-the-spot solution was devised, with the aid of 3D photogrammetry, to identify a previously unknown artists' colourman in mid-19th-century London as the supplier of the canvas and to bracket the range of dates when the canvas could have been supplied, probably between c. 1844 and c. 1860. Because the canvas supplier's stamps were mostly hidden behind the central horizontal cross-brace of the stretcher, thin shims were placed between the canvas and stretcher to create a narrow gap, allowing TIR images to be captured at an angle and revealing most of the text of the stamps that would otherwise have remained hidden. However, reconstructing accurately scaled images of the stamps from the angled TIR images was a challenge, and this is where photogrammetry proved very useful. A high-resolution 3D model of the painting's verso was constructed by capturing overlapping reflected visible-wavelength images that were processed in Agisoft PhotoScan. Each of the four quadrants of the painting's verso (defined by the vertical and horizontal cross-braces of the stretcher) was also captured in both reflected visible light and TIR. Because the visible and TIR images were registered, the TIR images could be aligned with the 3D model by swapping them in for the visible reflected images, and an orthorectified TIR image of the painting's verso could be constructed. The TIR orthophoto contained accurate scale information, which could be used to measure the overall dimensions of the stamps. By overlaying the angled TIR images of the stamps, which contained more of the stamps' textual information, onto the TIR orthophoto, the perspective distortion resulting from the capture angle could be accurately removed, and nearly complete stitched images of the hidden stamps were reconstructed. Besides creating a more accurate and measurable record of the previously unrecorded stamps, the reconstructions allowed the canvas supplier's name, address, and dates of business operation to be determined through further research, including London trade and post office directories and genealogical data.
In addition to providing a date range for the stamps, the position of the stamps on the 3-ft x 4-ft landscape painting was significant: they were rotated 90 degrees from horizontal and centered behind the horizontal cross-brace. This suggested that the canvas had been purchased separately and stretched by the artist (or perhaps an assistant) on the original stretcher, since it was not a standard size that was widely available commercially during the period.
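    For anyone reproducing the final step, the perspective removal amounts to estimating a planar homography between each angled TIR image and the TIR orthophoto and warping accordingly. Here is a hedged OpenCV sketch; the file names and detector settings are placeholders, and for low-texture areas hand-picked control points may work better than automatic matching.

```python
# Sketch of the perspective-removal step: warp an angled TIR image onto
# the TIR orthophoto via a planar homography estimated from feature
# matches. File names and detector settings are placeholders.
import cv2
import numpy as np

ortho = cv2.imread("tir_orthophoto.tif", cv2.IMREAD_GRAYSCALE)
angled = cv2.imread("tir_angled_stamp.tif", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(angled, None)
kp2, des2 = orb.detectAndCompute(ortho, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects mismatches; 5.0 px is the inlier reprojection threshold.
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
warped = cv2.warpPerspective(angled, H, (ortho.shape[1], ortho.shape[0]))
cv2.imwrite("tir_stamp_rectified.tif", warped)
```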
  17. We are pleased to announce the next 4-day training class offerings at Cultural Heritage Imaging's studio in San Francisco. Also note that CHI training can come to you! And if you want to be informed about, and even influence, the dates of future training classes, write to us to get on our interest list. Photogrammetry Training - October 6-9, 2015 This is your last chance in 2015 to learn how to apply photogrammetry, the practice of deriving 3D measurements from overlapping sequences of digital photographs to determine the size, shape, position, and texture of objects. The result is extremely dense, accurate quantitative data captured with standard digital camera equipment. Recent trainees say this about the class: "Very informative, very technical, useful to people in different industries" "Wonderful, amazing, and full of applicable techniques" "In-depth knowledge sharing that is not available anywhere else" Reflectance Transformation Imaging (RTI) Training - October 13-16 Get hands-on training in Reflectance Transformation Imaging (RTI), a core practice for creating digital representations of objects. You will leave this class able to implement the digital imaging workflow, including steps to capture, process, and view RTI digital representations. Some testimonials from previous trainees: "Extremely informative and incredibly useful" "Well thought-out and thorough: the small class size was a bonus for me" "Instructors were extremely engaging and explained in a way even I could understand!"
  18. Greetings, all! I have been following this forum for some time, but only recently became a member in order to ask the community for a bit of assistance. I am a graduate student studying archaeology at the University of Colorado Boulder, and I'm putting together an interactive museum exhibit that informs the public about the many applications of aerial photogrammetry. As part of the CU Aerospace Engineering School's "Grand Challenge", this exhibit will allow guests to simulate a drone mission over a scale model of the ancient Maya archaeological site of Tikal, Guatemala. Guests will take photos using a remotely operated camera suspended over the model and will be assisted by exhibit staff in processing the data using an automated script written for Agisoft Photoscan. Our team recently set up a crowdfunding page to raise some funds to improve the exhibit and buy promotional materials. I'm not sure if this is the appropriate place to submit a request like this, but any donations to the effort would be much appreciated. Here is the crowdfunding website: http://www.colorado.edu/crowdfunding/?cfpage=project&project_id=11371 Cheers to all of you fine folks for making this forum really useful! Best, Jeff Brzezinski
  19. Here's a BBC story about Project Mosul, an effort to use crowd-sourced photos and photogrammetry to virtually reconstruct objects destroyed in Mosul. It's an opportunity to help with the virtual reconstruction of the lost artifacts. The people behind Project Mosul are coordinating with several organizations to apply a similar scheme to other damaged cultural heritage sites. It would be better to document these sites before they are destroyed, but as the song says, "you don't know what you've got 'til it's gone."
  20. We are pleased to announce that our next 4-day photogrammetry class will take place in San Francisco on May 18-21, 2015. More details and a registration form can be found here: http://culturalheritageimaging.org/What_We_Offer/Training/photogram_training/index.html The training at CHI is focused on core photogrammetric principles that produce high-quality, measurable, scientific data. The capture method (how you take the images) is independent of any software package, so you will collect image sets that can be used now and in the future. The processing is done in Agisoft PhotoScan Pro. We have worked in collaboration with senior photogrammetrists at the US Bureau of Land Management to develop the course and the methodology. The optimized processing workflow that we teach can't be found anywhere else. Come, learn, add new skills! Carla
  21. Hi folks, We are pleased to announce that our next 4-day photogrammetry training class will take place at our studio in San Francisco on May 18-21. Learn more, and find a registration form, here: http://culturalheritageimaging.org/What_We_Offer/Training/photogram_training/index.html Note that this class has been rescheduled from April 20-23 to May 18-21, for those of you who might have seen an earlier announcement. Carla
  22. Here's an interesting paper about the application of multiple techniques for the study of murals and graffiti: http://www.ijcs.uaic.ro/public/IJCS-15-03_Cosentino.pdf Although it's not mentioned in the paper, one of the authors (A. Cosentino) describes an Arduino distance meter to check the position of the speedlight while capturing RTIs of the murals: http://chsopensource.org/reflectance-transformation-imaging-rti-with-arduino/
  23. Can anyone suggest a source for, or methods to fabricate, target scales that allow accurate measurements to be taken from photogrammetry images? I've seen examples of target scales with machine-readable circular codes and other useful scaling aids, but haven't found a good, affordable source for them. I've also heard that some photogrammetry software packages come with printable coded targets that can be attached to a suitable support, but I don't see this in the trial version of Agisoft's software that I have. Different sizes/lengths would be useful for smaller objects as well as for larger objects and scenes (within the limits of close-range photogrammetry). I've been using various metal scales, a foldable wood ruler, and a contractor's level, but would like to get the most accurate measurements possible. Many thanks! Taylor
  24. andrea.fusiello@3dflow.net

    3DF Zephyr Pro released

    You might be interested to know that there is a new photogrammetry package on the market, 3DF Zephyr Pro, which turns photos into 3D models. You can find more information and download the evaluation version at: http://www.3dflow.net/3df-zephyr-pro-3d-models-from-photos/. We would really appreciate feedback from CHI users.
  25. I have generated a dense point cloud of 62,500,000 points (approximately 35,000 points per square inch) and a medium-res 3D mesh of a painting using Agisoft PhotoScan. It would be possible to generate the dense point cloud and mesh at higher density, but that's beyond the memory capacity of my Mac mini (16 GB of RAM), so I might outsource further processing of the photogrammetry data. I also have a mosaic of 36 RTIs at a ground-sample resolution of approximately 500 pixels per inch (250,000 pixels per square inch), plus higher-res RTIs of some details. The mosaic RTIs have approximately 10 percent overlap horizontally and vertically.
    I'm interested in combining the point cloud and normal maps into a 3D model of the painting surface using the methods described in the 2005 SIGGRAPH paper whose title is quoted above, "Efficiently Combining Positions and Normals for Precise 3D Geometry," by Nehab, Rusinkiewicz, Davis, and Ramamoorthi (or other methods anyone here might suggest). However, the link to the authors' source code for the algorithm isn't working. I'm wondering if others here have tried this technique, whether anyone can provide the source code (with appropriate permissions), and what advice you have for implementing it. I haven't written to the authors to ask for the code, but I may do so. I'm interested in hearing about others' experience with this or similar techniques when working with large data sets. Another tool that looks like it might be useful here is xNormal, which "bakes" texture maps from high-resolution 3D meshes into lower-resolution meshes. Could it also accurately combine high-resolution RTI normal maps with high-res 3D meshes? I'm not sure whether this modeling technique would produce the same result as the algorithm from the 2005 paper cited above.
    I'm also interested in suggestions for an appropriate workflow for cleaning, repairing, and decimating the 3D mesh. Would it be better to start with the highest-density point cloud and mesh I can generate from the photogrammetry data set, then combine this with the normal maps from the RTIs? Or perhaps clean, repair, and decimate the 3D mesh first, and then apply algorithms to combine it with the normal maps? I'm learning to use MeshLab, but I find the number of possible filters a bit daunting, and it crashes pretty frequently with large data sets (they might be too much for my Mini).
    I also have approximately 53 GB of RAW multispectral images at resolutions of approximately 500 to 1,000 pixels per inch, captured with a different camera system than the one used for the photogrammetry and RTIs. The 500-ppi images were captured in a 4x4 mosaic, or 16 images per waveband. There are 12 discrete wavebands (1 UV, 6 visible, and 5 IR), plus visible fluorescence images captured with emission filters. I'm interested in texturing the 3D mesh generated from the combined photogrammetry and RTI datasets with each of the discrete multispectral wavebands and with reconstructed visible, fluorescence, and false-color IR images, and I'd like to know what would be involved in registering these images to a 3D mesh generated from a different set of images.
    I'm hoping that the result of this project will be a 3D model with accurate surface normals that allows interactive relighting, tiled zooming, algorithmic enhancement, and selective re-texturing at various wavebands in a web viewer, if a suitable viewer becomes available, such as the one being developed by Graeme Earl and the AHRC project, or perhaps one of the Web3D viewers.
    Any advice and assistance would be appreciated! Best, Taylor
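    In case it helps others thinking about the same problem: for a near-planar surface like a painting, the core idea of the Nehab et al. approach can be illustrated as a single sparse least-squares system over a height field. A data term anchors the solution to the photogrammetric depth (the low frequencies), while finite-difference terms push the surface slopes toward the RTI normals (the high frequencies). The sketch below is a simplification under that height-field assumption, not the authors' code, and the plain Python loops are only practical for small grids; a real implementation would vectorize the matrix assembly.

```python
# Height-field sketch of the idea in Nehab et al. 2005: keep low-frequency
# shape from the photogrammetric depth and high-frequency detail from the
# RTI normals by solving one sparse least-squares system. Assumes the
# surface is resampled to a regular grid: depth z0[h, w] and unit-length
# normals n[h, w, 3] in the same frame. A simplification, not the paper's
# full mesh formulation.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

def fuse_depth_and_normals(z0, n, lam=0.1):
    h, w = z0.shape
    idx = np.arange(h * w).reshape(h, w)

    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    # Data term: lam * (z - z0) = 0, anchoring low frequencies to depth.
    for i in range(h):
        for j in range(w):
            rows.append(eq); cols.append(idx[i, j]); vals.append(lam)
            rhs.append(lam * z0[i, j]); eq += 1
    # Normal terms: for z(x, y) with normal (nx, ny, nz), the slopes are
    # dz/dx = -nx/nz and dz/dy = -ny/nz; match them with finite differences.
    for i in range(h):
        for j in range(w - 1):
            rows += [eq, eq]; cols += [idx[i, j + 1], idx[i, j]]
            vals += [1.0, -1.0]
            rhs.append(-n[i, j, 0] / n[i, j, 2]); eq += 1
    for i in range(h - 1):
        for j in range(w):
            rows += [eq, eq]; cols += [idx[i + 1, j], idx[i, j]]
            vals += [1.0, -1.0]
            rhs.append(-n[i, j, 1] / n[i, j, 2]); eq += 1

    A = csr_matrix((vals, (rows, cols)), shape=(eq, h * w))
    z = lsqr(A, np.asarray(rhs))[0]
    return z.reshape(h, w)
```

    The lam parameter controls the blend: smaller values trust the normals more (sharper surface detail), larger values trust the measured depth more.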