
Leaderboard


Popular Content

Showing content with the highest reputation since 07/09/2012 in all areas

  1. 2 points
    Yours for the low, low price of $24,000 (not including camera). Or you can build your own for less than 5% of the cost: https://hackaday.io/project/11951-affordable-reflectance-transformation-imaging-dome
  2. 2 points
    I had the same thought a few years ago, but decided that it didn't make sense. You're fitting a curve to the (primarily) Lambertian light scattering at every point as a function of lighting angle. If you change the lighting intensity with angle, you'll skew the fitted curve toward those higher intensities, which will introduce errors. Shadows are areas where the main light source is blocked, so any signal from those areas comes from light scattered within the dome, which doesn't offer any useful signal for curve fitting.
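    For readers who want to see what that fit looks like, here is a minimal per-pixel sketch in Python/numpy (my own illustration of the biquadratic model from the original HP PTM paper, not code from this post). If lamp intensity varied with angle, the measured luminances would be scaled per capture and the fitted coefficients biased accordingly:
    ```python
    import numpy as np

    def fit_ptm_pixel(lu, lv, L):
        """Least-squares fit of the 6 PTM coefficients for one pixel.

        lu, lv : arrays of light-direction components, one per capture
        L      : array of measured luminances at this pixel, one per capture

        Model: L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
        """
        A = np.column_stack([lu**2, lv**2, lu*lv, lu, lv, np.ones_like(lu)])
        coeffs, *_ = np.linalg.lstsq(A, L, rcond=None)
        return coeffs  # a0..a5

    # Boosting L for the low-angle captures would pull the fitted paraboloid
    # toward those inflated values, biasing the recovered surface normals.
    ```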
  3. 2 points
    You might give this site a look (not that I'm biased or anything): https://hackaday.io/project/11951-affordable-reflectance-transformation-imaging-dome
  4. 2 points
    You can embed it in a WP page using an iframe tag. Example: <iframe src="http://swvirtualmuseum.nau.edu/wp/RTI/Powell_Watch_2.html" width="600" height="640" frameborder="0" allowfullscreen="allowfullscreen"></iframe> View the result on this page: http://swvirtualmuseum.nau.edu/wp/index.php/national-parks/grand-canyon-national-park/rti-gallery/
  5. 1 point
    We recently found out about this. I've set up a dropbox folder with both the Windows and Mac versions of the PTMFitter, along with the license from HP: https://www.dropbox.com/sh/jfsy0lhxu6zv4i4/AADJpq6E_GJmNw_s5C8r94CVa?dl=0 Carla
  6. 1 point
    Cultural Heritage Imaging (CHI) offers some free resources to people adopting the practice of photogrammetry. In addition, our experts are available for paid consulting and/or training. Here are some resources not to be missed.
    1. Videos describing key principles of good photogrammetric capture: https://vimeo.com/channels/practicalphotogrammetry See also our Photogrammetry technology overview: http://culturalheritageimaging.org/Technologies/Photogrammetry/
    2. This, our free user forum, where folks in the community help answer questions about RTI and photogrammetry. We aim to complement the resources offered by Agisoft PhotoScan and other software packages, as they have their own communities; however, discussions about equipment, capture tips, and so on are welcome here: http://forums.culturalheritageimaging.org/
    3. We sell calibrated scale bars that help users get precise, real-world measurements into their projects. We also offer a free "tips and tricks" guide for working with scale bars in PhotoScan (find the link on this page): http://culturalheritageimaging.org/What_We_Offer/Gear/Scale_Bars/index.html
    4. We offer regular 4-day training classes in photogrammetry in our studio in San Francisco and in other locations. Sometimes a host institution will offer space, purchase some seats, and allow the remaining seats to be sold. You can learn more about our photogrammetry training here: http://culturalheritageimaging.org/What_We_Offer/Training/photogram_training/index.html
    5. Finally, we offer custom consulting to help folks adopt and use photogrammetry and RTI. That can take a variety of forms, including video, email, and projects in Dropbox where we can review work and give feedback. Learn more about our consulting here: http://culturalheritageimaging.org/What_We_Offer/Consulting/
  7. 1 point
    We are focusing on good practice for collecting image sets for photogrammetry, which is independent of the software used. We feel that if people collect good image sets that follow the rules (as described in our videos), then those image sets contain useful information for now and for the future, especially if they also collect some metadata about what they are doing: their methodology, who was involved, the subjects and locations, etc. We are developing tools for that, called the Digital Lab Notebook. While we do use Metashape (formerly PhotoScan) at this time, the methodology is not dependent on the software, so our freely available public information is not about Metashape. We agree that the Metashape community forums are a great place to go for information about working with that software. We do use an error-reduction workflow in Metashape that is a bit different from the process most folks follow; we teach that in our training classes. We spend time looking at metrics and what they mean, and also try to impart an understanding of the SfM photogrammetry approach and how it informs the guidance we give on collecting image sets. Our goal is to impart knowledge rather than give a "cookbook" approach, so folks can deal with a variety of situations in their own work and make the appropriate trade-offs to meet their needs. This kind of material doesn't fit into a forum post. Carla
  8. 1 point
    Shahin, I think the answer is "it depends"! It depends on what you're trying to image (a small artefact or a building); on what version of PhotoScan you're using (and, just to confuse things, they have renamed it from PhotoScan to Metashape with the release of version 1.5); on the sensor used; etc. Far and away the best place to search for help on PhotoScan is the dedicated Agisoft forum; just search for "workflow" (almost 80 hits). One thread to which I would particularly refer you is https://www.agisoft.com/forum/index.php?topic=9485.msg43931#msg43931 which discusses the workflow developed by Tom Noble et al. of the USGS, and that of Bob Meij. (I would also suggest this might be better in a thread of its own.) Dave
  9. 1 point
    The old ptmfitter had an effective image-size limit of 24 megapixels on Windows, even on systems with lots of RAM. The new ptmfitter from Kirk Martinez's group has no problems handling images up to 100 megapixels; I don't go any higher because the RTIViewer software has problems above 100 megapixels. https://custom-imaging.co.uk/software/
  10. 1 point
    Failure to align two sections when you have masked the images is usually due to a lack of overlapping detail between the two sets of images. For very small objects, differences in focus can also be problematic, especially on turntables and with objects that have a high length:width ratio. When planning a shoot, I pick where the join between the two sections will fall on the object and then position it so that images in both sets provide adequate coverage across this area. For complex objects you may find that two orientations are not enough and you may need to shoot three or four. To ensure focus overlap as well, I either shoot focus stacks (rarely) or shoot handheld and move around the object so that I can carefully position the plane of focus.
  11. 1 point
    Hmmmm... https://broncolor.swiss/scope/ https://truvis.ch/ For a brand-new breakthrough visualization technology, it looks awfully familiar ...
  12. 1 point
    I just found another reply that solved the problem. The hard drive name had spaces; I changed the spaces to underscores, and that seems to have solved the problem. I also read that I need to download the PTMfitter from HP. Sorry I missed these issues in the documentation. Best, Graham
  13. 1 point
    The RTI group have a new website - this is the new address with the fix, it worked for me on macOS Sierra 10.12.6 https://www.rtigroup.org/news/a-workaround-for-rtibuilder-2-0-2-on-macos-sierra
  14. 1 point
    Thanks for reporting this! I wasn't aware of it, and don't think it has been reported before. We don't have any plans for updates to the viewer in the short term, but we will put this on the list for the next time we do updates. Carla
  15. 1 point
    Thank you both for your input. I did read through the various articles on capture and processing, but the idea of modifying the lighting levels was just niggling away in my head, so I asked the question. I have programmed the LEDs so they are outputting the same amount of light; the only correction I've made is to calibrate all of the LEDs' output so they are identical within 1-2%. The errors are just from tolerances of the components and the LEDs themselves; the difference between the 'worst' and 'best' LED was less than 5%, but the correction was a simple fix. Best regards, Kev
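    For anyone building a similar dome, the per-LED correction Kev describes amounts to something like the following sketch (a hypothetical illustration; the LED names, measured values, and drive scheme are made up, not from this post):
    ```python
    # Measure each LED's relative output with the sensor of your choice,
    # then scale each LED's drive level so all match the dimmest one.
    measured = {"led_01": 1.043, "led_02": 0.998, "led_03": 1.021}

    reference = min(measured.values())
    correction = {led: reference / value for led, value in measured.items()}
    # e.g. drive each LED at pwm_duty * correction[led] to equalize output
    ```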
  16. 1 point
    I'm with leszekp on this. The software expects that nothing changes except the light positions. The images from the lower angles should be darker; don't compensate with the lights, and don't apply exposure compensation to those images. You might want to read through the guide to highlight image capture on the CHI website, even though you are building a dome, because core principles like this are described there. Carla
  17. 1 point
    Thank you Dave and Carla for the quick responses! Changing the path to the ptmviewer was the fix! Thanks for catching that mistake!
  18. 1 point
    Katie - have you downloaded the PTMfitter from the Hewlett-Packard research site?
  19. 1 point
    It is simply the stationary point of the paraboloid described by the 6 coefficients. The formula is described in the original PTM paper: http://www.hpl.hp.com/research/ptm/papers/ptm.pdf
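    As a worked example, here is a short sketch (my own derivation from the biquadratic model in the linked paper, not code from this post) of solving for that stationary point. Setting both partial derivatives of L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5 to zero gives a 2x2 linear system:
    ```python
    def ptm_stationary_point(a0, a1, a2, a3, a4, a5):
        """Light direction (lu0, lv0) that extremizes the PTM luminance.

        From dL/dlu = 2*a0*lu + a2*lv + a3 = 0 and
             dL/dlv = a2*lu + 2*a1*lv + a4 = 0.
        a5 only shifts the paraboloid vertically, so it drops out.
        """
        d = 4*a0*a1 - a2**2  # determinant; zero for degenerate paraboloids
        lu0 = (a2*a4 - 2*a1*a3) / d
        lv0 = (a2*a3 - 2*a0*a4) / d
        return lu0, lv0
    ```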
  20. 1 point
    We at CHI are thrilled to announce the release of the first two tools (Version 1 Beta) in a new software suite, the Digital Lab Notebook (DLN). The DLN is a metadata toolkit designed to simplify the collection of standards-based metadata, essential to scientific imaging. The goal is widespread democratization of tools that cultural caretakers worldwide can deploy to digitally capture, build, archive, and reuse digital representations. We hope you will download and try out these new tools!
    DLN:CaptureContext - With a user-friendly interface, this tool expedites and simplifies user input of metadata (such as locations, institutions, imaging subjects, and image rights) with a template process.
    DLN:Inspector - This tool automatically ensures that each image set meets the requirements for high-quality computational photography imaging, checking for image-processing errors, such as sharpening, that should not be applied to photogrammetry or RTI image data.
    Instructional Videos for DLN - See our new Vimeo channel, Simplifying Scientific Imaging, where we have posted an instructional video series about the DLN: what it does, how to use the software tools, and more.
  21. 1 point
    It's hard to answer this without understanding more about the subject and your lighting setup, but I'll give some general advice. The key thing you want to avoid is moving shadows. The matching software in photogrammetry may try to match pixels using shadow lines, and if these are moving around, you may not get a good alignment, or a bunch of stuff may end up in the wrong place. We recommend using soft boxes or umbrellas on your lights (or diffusers, depending on your lights). Then look at what is lit, whether you have shadowed areas, and how those are affected as you move around the subject. I'll also note that you should plan to do more than one circuit around your subject. Three circuits is roughly the minimum to get enough look angles (redundancy), and more may be required depending on the size and shape of your subject, how far away you are, your field of view, etc. Good luck! Carla
  22. 1 point
    At the Minneapolis Institute of Art we're now doing photogrammetry of medium-sized objects with a robot turntable/swing arm, and with each object we've been photographing a data set where the CHI photogrammetry scales occlude the object in many images. For now, I'm also photographing the objects and scales 'flat,' just like I learned at CHI, but I'm theorizing that the measurement data we'll get from the scales on the turntable will be much more robust. Here's the shooting and PhotoScan breakdown:
    - Photograph the object on the turntable with no scale bars from multiple rotations and elevations (we've been making 36 columns and nine elevations, from 0-88 degrees, but fewer columns for the top elevations). Also photograph empty backgrounds for auto masking.
    - Photograph the object with two scales occluding the object, placed as close to it as possible, and two scales on the turntable's surface (with fewer photos in this set: four elevations, from 0-66 degrees).
    - Use the first set of images as one Camera Group in PhotoScan, and the scales as another Camera Group. Align photos to make a sparse point cloud.
    - Refine the sparse cloud using CHI/BLM's magical method. Add scale.
    - Remove (or turn off) all images from the scales dataset.
    - Build the dense cloud, et cetera.
    PhotoScan is identifying the scales on the turntable with no problem; it feels to me like having a much larger data set full of scales will produce better scale information than a set where the three-dimensional art is treated as a flat object. And I'm happy to report that scale is traveling with exported objects - this figure arrived in an OBJ-reading program with a size of 0.67 units (apparently there's no set unit in a lot of these programs), and it's 67 cm tall: https://sketchfab.com/models/8217886808944db3b3a01734d604cdd6 What do you think?
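    To double-check that exported scale outside of PhotoScan, something like this works (my own sketch; it assumes the trimesh Python library, and the file name is illustrative):
    ```python
    import trimesh

    # force="mesh" flattens multi-object OBJ scenes into a single mesh
    mesh = trimesh.load("exported_figure.obj", force="mesh")
    print(mesh.bounding_box.extents)
    # If the project was scaled in meters, expect ~0.67 along the height axis,
    # i.e. 67 cm for the figure described above.
    ```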
  23. 1 point
    P.S.: It helps to have notes on your capture sequence, especially if there's any variation in your calibration sequence (order of camera rotation). I tried to be consistent in the direction I rotated the camera, but I was glad I had notes for the one exception among hundreds of images, so I could reverse the rotation in Lightroom. LR doesn't display metadata on the image orientation (at least I couldn't easily find it), and once the images are auto-rotated, they look the same whether they were rotated clockwise or counter-clockwise. It's important to unrotate them back to their original orientation if you want PhotoScan to use them for calibration. In Bridge, I had to add the image orientation to the list of camera EXIF data that are displayed; it's not in the default list.
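    If you'd rather audit the orientation tag in bulk than hunt for it in Bridge, a short script can list it for every image (a sketch assuming the Pillow library; the folder path is illustrative):
    ```python
    import glob
    from PIL import Image

    ORIENTATION_TAG = 274  # standard EXIF tag id for Orientation

    for path in sorted(glob.glob("capture/*.jpg")):
        exif = Image.open(path).getexif()
        # 1 = normal; 6 and 8 indicate the camera was rotated 90 degrees
        print(path, exif.get(ORIENTATION_TAG))
    ```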
  24. 1 point
    I've been exploring the use of an open-source image processing program developed by the National Institutes of Health (NIH) called ImageJ, and a related plug-in called DStretch. While RTI is very effective for examining texture, ImageJ and DStretch provide tools for analyzing color information, among other capabilities, which can complement the use of RTI. (ImageJ and DStretch are Java-based programs, so you should check that you have installed the latest Java security update.) Both programs can be found easily by a simple search, or you can use the following links: ImageJ: http://rsb.info.nih.gov/ij/index.html DStretch: http://www.dstretch.com/
    DStretch is short for "decorrelation stretch," an image enhancement technique originally developed at JPL for remote sensing applications. It has been successfully applied at rock art sites for identifying pictographs, where the composition consists primarily of pigments, as distinct from petroglyphs, which are defined by texture (apologies to archeologists for my loose definitions). An advantage of these tools over a proprietary program such as Photoshop is that they allow users to modify color spaces in very specific, controlled ways, using defined algorithms. The DStretch algorithm allows one to "reset" the image back to its original color values, so you can "undo" the effects of any filter you apply. A more detailed explanation of the algorithms in DStretch is provided here: http://www.dstretch.com/AlgorithmDescription.html
    The approach I'm using to integrate DStretch into RTI workflows is to process the RTIs first (creating a .ptm or .rti file), then use the RTIViewer to relight the image and apply PTM/RTI algorithms to bring out textural details. When I'm happy with the RTI image, I save a snapshot from the viewer. The snapshot can then be opened in ImageJ, and the DStretch algorithms can be applied to enhance color features. A limitation of this approach is that it's generally best to leave the colors unchanged when saving the image in the RTIViewer. This can be done either by using the "default" setting or, if using Diffuse Gain or Specular Enhancement, by setting the "gain" or "Kd" controls at the positions that leave the most color in the image. This constrains the application of algorithmic enhancements in the RTIViewer somewhat, but it allows both textural and color features to be rendered in complementary ways.
    Another approach, which might be preferable from a scientific perspective, is to process the images for color and texture separately, using ImageJ/DStretch and RTI, and then compare or combine the images using fade features available in other software. Keeping a log of how the images are processed is very important in any case. (Thanks to CHI Forums member Dr. George Bevan for pointing me to ImageJ and to Dr. Jon Harman for the DStretch plug-in.)
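    For readers curious about what a decorrelation stretch actually does, here is a minimal numpy sketch (my own illustration of the core idea; DStretch implements this far more completely, with color-space options and controls): rotate the pixel cloud onto its decorrelated axes, equalize the variance of each axis, and rotate back.
    ```python
    import numpy as np

    def decorrelation_stretch(img, target_sigma=50.0):
        """img: HxWx3 float array with 0-255 pixel values."""
        h, w, c = img.shape
        flat = img.reshape(-1, c).astype(np.float64)
        mean = flat.mean(axis=0)
        cov = np.cov(flat - mean, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        # scale each decorrelated axis to a common standard deviation
        stretch = np.diag(target_sigma / np.sqrt(np.maximum(eigvals, 1e-12)))
        transform = eigvecs @ stretch @ eigvecs.T
        out = (flat - mean) @ transform + mean
        return np.clip(out, 0, 255).reshape(h, w, c)
    ```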
  25. 0 points
    Hi, I'm trying to find a stand to help with photogrammetry of small, thin, or awkward objects like coins or projectile points: objects thin enough that it would be difficult to align two chunks modeled lying flat, but which should work if stood upright on a turntable. Thanks