At the Minneapolis Institute of Art we're now doing photogrammetry of medium-sized objects with a robot turntable/swing arm, and with each object we've also been shooting a data set in which the CHI photogrammetry scales occlude the object in many images. For now I'm still photographing the objects and scales laid out 'flat,' just as I learned at CHI, but my theory is that the measurement data we'll get from the scales on the turntable will be much more robust.
Here's the shooting and PhotoScan breakdown:
- Photograph the object on the turntable with no scale bars from multiple rotations and elevations (we've been making 36 columns and nine elevations, from 0-88 degrees, but fewer columns for the top elevations). Also photograph empty backgrounds for auto masking.
- Photograph the object with two scales placed as close to it as possible (occluding it in many frames) and two more scales on the turntable's surface. This set gets fewer photos: four elevations, from 0-66 degrees.
- Use the first set of images as one Camera Group in PhotoScan and the scales set as another Camera Group. Align photos to make a sparse point cloud. (The script sketch after this list shows the same bookkeeping in the Python API.)
- Refine the sparse cloud using CHI/BLM's magical method (Gradual Selection passes with a camera optimization after each). Add scale from the detected scale-bar targets.
- Remove (or disable) all images from the scales data set.
- Build the dense cloud, et cetera.
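To make those PhotoScan steps concrete, here's a rough sketch in the Pro edition's Python API. I'm assuming 1.2-era names (the matchPhotos/buildDenseCloud keywords shift between versions), and the "scale_" filename prefix, the marker pairing, and the 0.10 m bar length are all placeholders for whatever your own files and bars use - an outline, not a drop-in script:

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Split the photos into two Camera Groups: the clean turntable set
# and the smaller set that includes the scale bars.
object_group = chunk.addCameraGroup()
object_group.label = "object"
scale_group = chunk.addCameraGroup()
scale_group.label = "scales"
for camera in chunk.cameras:
    # Placeholder convention: scale-set photos share a filename prefix.
    camera.group = scale_group if camera.label.startswith("scale_") else object_group

# Align both groups together so the scale photos tie into the sparse cloud.
# (Background auto-masking, if used, happens before this step.)
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()

# Detect the coded targets on the scale bars and enter the known distances.
chunk.detectMarkers(PhotoScan.TargetType.CircularTarget12bit)
bar = chunk.addScalebar(chunk.markers[0], chunk.markers[1])  # pair targets from one bar
bar.reference.distance = 0.10  # bar length in meters -- placeholder value
chunk.updateTransform()

# Sparse-cloud refinement (the CHI/BLM Gradual Selection passes) goes here,
# with an optimizeCameras() after each pass.
chunk.optimizeCameras()

# Turn off the scale-set cameras so only the clean photos feed the dense cloud.
for camera in chunk.cameras:
    if camera.group == scale_group:
        camera.enabled = False

chunk.buildDenseCloud(quality=PhotoScan.HighQuality)
```

The GUI sequence is the same: two camera groups in the Workspace pane, Align Photos on everything, scale bars built from the detected markers, then disable the scale cameras before Build Dense Cloud.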
PhotoScan is identifying the coded targets on the turntable scales with no problem, and it feels to me like a much larger data set full of scales, seen from every angle, should produce better scale information than one where the three-dimensional art is treated as a flat object.
And I'm happy to report that scale travels with exported objects - this figure arrived in an OBJ-reading program at 0.67 units (apparently a lot of these programs have no set unit, but PhotoScan's coordinates are in meters once the scale bars are applied), and the figure is in fact 67 cm tall: https://sketchfab.co...3a01734d604cdd6
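If anyone wants to sanity-check an export without trusting a viewer's readout, the OBJ vertex lines can be read directly. A minimal sketch, assuming the Y axis is up and using "figure.obj" as a stand-in filename - it just prints the bounding-box height, which should land near 0.670 units for a meter-scaled export of a 67 cm figure:

```python
# Report the vertical extent of an OBJ's vertices.
# Assumes Y-up; "figure.obj" is a placeholder filename.
ys = []
with open("figure.obj") as f:
    for line in f:
        if line.startswith("v "):      # geometry vertex line: "v x y z"
            ys.append(float(line.split()[2]))
print("height: %.3f units" % (max(ys) - min(ys)))
```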
What do you think?