

Popular Content

Showing content with the highest reputation since 07/09/2012 in all areas

  1. 2 points
    Yours for the low, low price of $24,000 (not including camera). Or you can build your own for less than 5% of the cost: https://hackaday.io/project/11951-affordable-reflectance-transformation-imaging-dome
  2. 2 points
    I had the same thought a few years ago, but decided that it didn't make sense. You're fitting a curve to the (primarily) Lambertian light scattering at every point based on the lighting angle. If you change the lighting intensity with angle, you'll skew the lighting curve to fit those higher intensities, which will introduce errors. Shadows are areas where the main light source is blocked, so any signal from those areas is a result of scattered light within the dome, which doesn't really offer you any useful signal for curve fitting.
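A minimal sketch of the point being made, with invented numbers: fit the standard 6-coefficient PTM biquadratic per pixel by least squares, then refit after artificially boosting the low-elevation lights. The boosted fit no longer recovers the true coefficients, which is the skew described above. The light directions and coefficient values here are hypothetical.

```python
import numpy as np

# PTM biquadratic: L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5,
# where (lu, lv) are the projected components of the light direction.
rng = np.random.default_rng(0)
n_lights = 50
lu = rng.uniform(-0.8, 0.8, n_lights)
lv = rng.uniform(-0.8, 0.8, n_lights)

# Design matrix for the biquadratic model.
A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones(n_lights)])

true_coeffs = np.array([-0.4, -0.3, 0.1, 0.2, -0.1, 0.8])
intensity = A @ true_coeffs

# With uniform lights, least squares recovers the coefficients.
fit_ok, *_ = np.linalg.lstsq(A, intensity, rcond=None)

# Boosting lights by angle (larger lu^2 + lv^2 = lower elevation here)
# injects terms the model cannot represent, so the fit is biased.
boost = 1.0 + 0.5 * (lu**2 + lv**2)
fit_skewed, *_ = np.linalg.lstsq(A, intensity * boost, rcond=None)

print(np.max(np.abs(fit_ok - true_coeffs)))      # essentially zero
print(np.max(np.abs(fit_skewed - true_coeffs)))  # clearly nonzero
```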
  3. 2 points
    You might give this site a look (not that I'm biased or anything): https://hackaday.io/project/11951-affordable-reflectance-transformation-imaging-dome
  4. 2 points
    You can embed it in a WP page using an iframe tag. Example:
    <iframe src="http://swvirtualmuseum.nau.edu/wp/RTI/Powell_Watch_2.html" width="600" height="640" frameborder="0" allowfullscreen="allowfullscreen"></iframe>
    View the result on this page: http://swvirtualmuseum.nau.edu/wp/index.php/national-parks/grand-canyon-national-park/rti-gallery/
  5. 1 point
    Hmmmm... https://broncolor.swiss/scope/ https://truvis.ch/ For a brand-new breakthrough visualization technology, it looks awfully familiar ...
  6. 1 point
    I just found another reply that solved the problem. The hard drive name had spaces; I changed the spaces to underscores, and that seems to solve the problem. I also read that I need to download PTMfitter from HP. Sorry I missed these issues in the documentation. Best, Graham
  7. 1 point
    The RTI group has a new website - this is the new address with the fix; it worked for me on macOS Sierra 10.12.6: https://www.rtigroup.org/news/a-workaround-for-rtibuilder-2-0-2-on-macos-sierra
  8. 1 point
    Thanks for reporting this! I wasn't aware of it, and don't think it has been reported before. We don't have any plans for updates to the viewer in the short term, but we will put this on the list for the next time we do updates. Carla
  9. 1 point
    Thank you both for your input. I did read through the various articles on capture and processing, but the idea of modifying the lighting levels was just niggling away in my head, so I asked the question. I have programmed the LEDs so they are outputting the same amount of light; the only correction I've made is to calibrate all of the LEDs' output so they are identical within 1-2%. The errors are just from tolerances of the components and the LEDs themselves; the difference between the 'worst' and 'best' LED was less than 5%, but the correction was a simple fix. Best regards, Kev
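The calibration step described above can be sketched in a few lines. This is a hypothetical example (the measured values are invented, and the actual dome firmware is not shown): measure each LED's output, then scale every LED down to the dimmest one so all outputs match.

```python
# Invented per-LED measurements in arbitrary units; a real calibration
# would use a light meter or the camera itself.
measured = [98.0, 100.0, 101.5, 97.2, 99.4]

target = min(measured)                        # match everything to the dimmest LED
correction = [target / m for m in measured]   # drive-level scale factor per LED

corrected = [m * c for m, c in zip(measured, correction)]
spread = (max(corrected) - min(corrected)) / min(corrected)
print(f"residual spread after correction: {spread:.4%}")
```

After applying the per-LED factors, the residual spread is essentially zero, well inside the 1-2% tolerance mentioned above.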
  10. 1 point
    I'm with leszekp on this. The software expects that nothing changes except the light positions. The images from the lower angles should be darker. Don't compensate with the lights, and don't exposure comp those images. You might want to read through the guide to highlight image capture on the CHI website - even though you are building a dome - because some core principles like this are described there. Carla
  11. 1 point
    Thank you Dave and Carla for the quick responses! Changing the path to the ptmviewer was the fix! Thanks for catching that mistake!
  12. 1 point
    Katie - have you downloaded the PTM from the Hewlett Packard research site?
  13. 1 point
    It is simply the stationary point of the paraboloid described by the 6 coefficients. The formula is described in the original PTM paper: http://www.hpl.hp.com/research/ptm/papers/ptm.pdf
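For concreteness, here is a small sketch of that stationary point. Setting both partial derivatives of the biquadratic L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5 to zero gives the closed form in the PTM paper linked above; the coefficient values below are made up for illustration.

```python
def ptm_stationary_point(a0, a1, a2, a3, a4, a5):
    """Stationary point of the PTM biquadratic, from
    dL/dlu = 2*a0*lu + a2*lv + a3 = 0 and
    dL/dlv = a2*lu + 2*a1*lv + a4 = 0."""
    det = 4.0 * a0 * a1 - a2 * a2   # zero means a degenerate paraboloid
    lu0 = (a2 * a4 - 2.0 * a1 * a3) / det
    lv0 = (a2 * a3 - 2.0 * a0 * a4) / det
    return lu0, lv0

# For a downward-opening paraboloid this is the light direction
# of maximum reflectance.
lu0, lv0 = ptm_stationary_point(-1.0, -1.0, 0.0, 0.5, -0.25, 0.8)
print(lu0, lv0)  # 0.25 -0.125
```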
  14. 1 point
    We at CHI are thrilled to announce the release of the first two tools (Version 1 Beta) in a new software suite, the Digital Lab Notebook (DLN). DLN is a metadata toolkit designed to simplify the collection of standards-based metadata, essential to scientific imaging. The goal is widespread democratization of tools that worldwide cultural caretakers can deploy to digitally capture, build, archive, and reuse digital representations. We hope you will download and try out these new tools!
    DLN:CaptureContext - With a user-friendly interface, this tool expedites and simplifies user input of metadata -- such as locations, institutions, imaging subjects, image rights -- with a template process.
    DLN:Inspector - This tool automatically ensures that each image set meets the requirements for high-quality computational photography imaging, checking for image-processing errors, such as sharpening, that should not be applied to photogrammetry or RTI image data.
    Instructional Videos for DLN - See our new Vimeo channel -- Simplifying Scientific Imaging -- where we have posted an instructional video series about the DLN, what it does, how to use the software tools, and more.
  15. 1 point
    It's hard to answer this without understanding more about the subject and about your lighting setup. I'll give some general advice, though. The key thing you want to avoid is moving shadows. The matching software in photogrammetry can try to match pixels using shadow lines, and if these are moving around, you may not get a good alignment, or a bunch of stuff may be in the wrong place. We recommend using soft boxes or umbrellas on your lighting (or diffusers, depending on your lights). Then look at what is lit, whether you have shadowed areas, and how those are affected as you move around the subject. I'll also note that you should plan to do more than one circuit around your subject. Three circuits is kind of a minimum to get enough look angles (redundancy), and more may be required, depending on the size and shape of your subject, how far away you are with what field of view, etc. Good luck! Carla
  16. 1 point
    At the Minneapolis Institute of Art we're now doing photogrammetry of medium-sized objects with a robot turntable/swing arm, and with each object we've been photographing a data set where the CHI photogrammetry scales occlude the object in many images. For now, I'm also photographing the objects and scales as 'flat,' just like I learned at CHI, but I'm theorizing that the measurement data we'll get from the scales on the turntable will be much more robust. Here's the shooting and PhotoScan breakdown:
    - Photograph the object on the turntable with no scale bars from multiple rotations and elevations (we've been making 36 columns and nine elevations, from 0-88 degrees, but fewer columns for the top elevations). Also photograph empty backgrounds for auto masking.
    - Photograph the object with two scales occluding the object and as close as possible to the object, and two scales on the turntable's surface (with fewer photos in this set; four elevations, from 0-66 degrees).
    - Use the first set of images as one Camera Group in PhotoScan, and the scales as another Camera Group. Align photos to make a sparse point cloud.
    - Refine the sparse cloud using CHI/BLM's magical method. Add scale.
    - Remove (or turn off) all images from the scales dataset.
    - Build the dense cloud, et cetera.
    PhotoScan is identifying the scales on the turntable with no problem; it feels to me like having a much larger data set full of scales will produce better scale information than a set where the three-dimensional art is treated as a flat object. And I'm happy to report that scale is traveling with exported objects - this figure arrived in an OBJ-reading program with a size of 0.67 units (apparently there's no set unit in a lot of these programs), and it's 67 cm tall: https://sketchfab.com/models/8217886808944db3b3a01734d604cdd6 What do you think?
  17. 1 point
    P.S.: It helps to have notes on your capture sequence, especially if there's any variation in your calibration sequence (order of camera rotation). I tried to be consistent in the direction I rotated the camera, but I was glad I had notes for the exception among hundreds of images, so I could reverse the rotation in Lightroom. LR doesn't provide metadata on the image orientation (at least I couldn't easily find it), and once the images are auto-rotated, they look the same whether they were rotated clockwise or counter-clockwise. It's important to unrotate them back to their original orientation if you want PhotoScan to use them for calibration. In Bridge, I had to add the image orientation to the list of camera EXIF data that are displayed--it's not in the default list.
  18. 0 points
    Hi, I'm trying to find a stand to help with photogrammetry of small, thin, or awkward objects like coins or projectile points: objects thin enough that it would be difficult to align two chunks of the object modeled lying flat, but which should work if put upright on a turntable. Thanks