
Charles Walbridge

About Charles Walbridge

  1. Matt, the auto-masking is a feature in Photoscan. If you have a photo of the empty set and feed that into Photoscan's Import Mask, it will make masks for you for all the photos in that set. In our case with the turntable/swing arm combination, we're making one empty background for each camera elevation, then having Photoscan auto-mask the 36 images from that elevation. (Again, the highest elevations get fewer than 36 images per elevation.)
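The idea behind auto-masking from an empty-set photo can be sketched in a few lines. This is an illustration of the background-difference principle only, not PhotoScan's actual implementation; the function name and tolerance value are my own.

```python
# Sketch of background-difference masking, the principle behind masking
# from a photo of the empty set. Pixels that differ from the background
# image beyond a tolerance are treated as object (1); the rest are
# background (0). Illustration only -- not PhotoScan's actual algorithm.

def background_mask(photo, background, tolerance=10):
    """photo, background: equal-length lists of grayscale values 0-255."""
    if len(photo) != len(background):
        raise ValueError("photo and background must be the same size")
    return [1 if abs(p - b) > tolerance else 0
            for p, b in zip(photo, background)]

# A 6-pixel "row": the object occupies the middle two pixels.
background = [200, 201, 199, 200, 202, 200]
photo      = [200, 202,  80,  75, 201, 199]
print(background_mask(photo, background))  # [0, 0, 1, 1, 0, 0]
```

One empty-background exposure per camera elevation, as described above, gives every image at that elevation a matching background to difference against.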
  2. At the Minneapolis Institute of Art we're now doing photogrammetry of medium-sized objects with a robot turntable/swing arm, and with each object we've been photographing a data set where the CHI photogrammetry scales occlude the object in many images. For now, I'm also photographing the objects and scales as 'flat,' just like I learned at CHI, but I'm theorizing that the measurement data we'll get from the scales on the turntable will be much more robust. Here's the shooting and PhotoScan breakdown:
     - Photograph the object on the turntable with no scale bars from multiple rotations and elevations (we've been making 36 columns and nine elevations, from 0-88 degrees, but fewer columns for the top elevations). Also photograph empty backgrounds for auto-masking.
     - Photograph the object with two scales occluding the object and as close as possible to the object, and two scales on the turntable's surface (with fewer photos in this set; four elevations, from 0-66 degrees).
     - Use the first set of images as one Camera Group in PhotoScan, and the scales as another Camera Group. Align photos to make a sparse point cloud.
     - Refine the sparse cloud using CHI/BLM's magical method. Add scale.
     - Remove (or turn off) all images from the scales dataset.
     - Build the dense cloud, et cetera.
     PhotoScan is identifying the scales on the turntable with no problem; it feels to me like having a much larger data set full of scales will produce better scale information than a set where the three-dimensional art is treated as a flat object. And I'm happy to report that scale is traveling with exported objects - this figure arrived in an OBJ-reading program with a size of 0.67 units (apparently there's no set unit in a lot of these programs), and it's 67 cm tall: https://sketchfab.com/models/8217886808944db3b3a01734d604cdd6 What do you think?
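The turntable capture pattern above can be sketched as a shot-list generator. The nine elevations from 0-88 degrees and 36 columns come straight from the post; the exact thinning rule for the top elevations (halving above 60 degrees here) is my assumption, since the post only says the highest elevations get fewer than 36 images.

```python
# Sketch of the turntable shot list: 9 elevation rings from 0-88 degrees
# with 36 azimuth columns each, thinning the columns at the top.
# The thin_above=60 halving rule is an assumption for illustration.

def shot_list(columns=36, elevations=9, max_elev=88, thin_above=60):
    shots = []
    step = max_elev / (elevations - 1)           # 11 degrees between rings
    for i in range(elevations):
        elev = round(i * step)
        cols = columns if elev <= thin_above else columns // 2
        for j in range(cols):
            shots.append((elev, round(j * 360 / cols)))
    return shots

shots = shot_list()
print(len(shots))  # 270: six full rings of 36, plus three thinned rings of 18
```

A generated list like this also doubles as a checklist against the images that actually come off the camera.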
  3. Update: I've added a wifi-enabled microcontroller (like an Arduino) and a relay to my RTI lighting array so I can control the four lights from my phone while I'm shooting at the computer. I still need to move the array to each of 12 spots on the floor (when I'm shooting 48 source files for the RTI), but it's definitely a time-saver. I'd like to write a program to trigger the lights and the camera in sequence, but that's beyond my ability right now. There's a little more info at the page I made here: http://goo.gl/fMdQtW Marlin, I forgot to answer your light questions in the comment above: the lights in the array are the same LEDs we use to light art in the galleries at the Minneapolis Institute of Arts. They're Philips Endura PAR 38s with a color temperature of 2700 K, and they're rated for 45,000 hours of use. They draw 18 watts, they don't get hot, and we've found them to be very consistent from bulb to bulb. At about three feet, my exposure was f/13 at half a second and 50 ASA, which is no trouble with my solid tripod and concrete floors.
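The light-and-camera sequencing wished for above is mostly bookkeeping: 12 arm positions times 4 lights is the 48 exposures. Here is a minimal sketch of that loop; `fire_light` and `fire_shutter` are hypothetical callbacks (on real hardware they would toggle the relay and trigger the camera), and here they just record the firing order.

```python
# Sketch of the 48-exposure RTI sequence: 12 clock positions x 4 lights.
# fire_light and fire_shutter are hypothetical hooks for the relay and
# camera trigger; this version only logs the order they would fire in.

def rti_sequence(positions=12, lights=4, fire_light=None, fire_shutter=None):
    log = []
    for pos in range(1, positions + 1):        # clock positions 1-12
        for light in range(1, lights + 1):     # lights on the array
            if fire_light:
                fire_light(pos, light)
            if fire_shutter:
                fire_shutter()
            log.append((pos, light))
    return log

seq = rti_sequence()
print(len(seq))  # 48 exposures
```

Plugging real relay and shutter functions into those two hooks would turn the manual cord-switch routine into one button press per arm position.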
  4. I've put together a cheap and quick lighting array for making RTIs of daguerreotypes we have in the collection of the Minneapolis Institute of Arts. I used lighting equipment we have in the photo studio, LED spots we use in the galleries, and about $30 worth of lamp wire and cord switches. The light stand I've used has wheels, so now two of us can shoot the 48 source images for an RTI in about 10 minutes. Because the four lights on the array are the same distance from the object, I can position the array at 12 o'clock, measure the distance to the object with my fancy RTI string, and then turn the lights on and off from the cord switches on the individual lamps. Then I'll roll the stand to the one o'clock position and repeat the process. I've put up pictures of the array on a Google Plus page here: https://plus.google.com/u/0/b/112094054932297248042/112094054932297248042/posts and I can share them with the RTI community on Facebook too. Let me know what you think --
  5. Hey all - I'm pleased to tell you that our article featuring RTI is on the cover of the December 2013 issue of Verso, the Minneapolis Institute of Arts' free iPad magazine. It's here in the iTunes Store: https://itunes.apple.com/us/app/minneapolis-institute-arts/id569985601?mt=8 The web version of the magazine, which has most of the functionality, is here: http://adobe.ly/1bSt2DM I'm pretty pleased with it. Our curator Roberta Bartoli has asked me to present RTI to her Art History students, and last week we showed the technique to some of the museum's trustees. Progress!
  6. I want to image this object in our collection here at the Minneapolis Institute of Arts: https://collections.artsmia.org/index.php?page=detail&id=738 It's a low-relief sarcophagus that I think would image well with either RTI or photogrammetry, but I can't decide which to use. I know the object would be easy for RTI, and I could capture it in sections or whole sides at a time. But I think I'd like to end up with a file that's shareable outside the RTI Viewer, like an OBJ or an STL. I know we're photographing two spheres in our RTIs to make the data ready for 3D modeling, but I haven't heard an update on that since I saw the CHI team in DC in 2012. With photogrammetry I would use the advice from the BLM publication and either a trial or the $179 version of PhotoScan. I think that technique would let me take the meshes of each of the object's six sides and make them into a beautiful, shareable object. I'm going to need to figure out PhotoScan in the near future regardless: should this be the object I start with?
  7. George, I think we here at the Minneapolis Institute of Arts could make good use of an RTI viewer for iPad. I've made RTIs of saints in tempera and gold on panel like St. Sirus here - https://collections.artsmia.org/index.php?page=detail&id=1610 - that show the artist's techniques better than you can see them with the naked eye.
  8. George, do you have a shareable version of the iPad PTM viewer? I have RTIs of art that I'd love to show to some curators on an iPad.
  9. Thanks for the links and info, guys. The NCPTT video is very informative. I'm going to find a project that needs PhotoScan. Marlin, I've been making my RTIs with two spheres in the frame so the Builder can someday build me a 3D model from those same data sets; has the Builder progressed to that stage yet? And while I'm here on the forum, I'm going to go look for progress on the RTI to iPad front...
  10. I've been following the progress our colleagues at the O'Keeffe Museum are making with RTI and photogrammetry, and as a separate project I've been learning about 3D scanning for art objects. Both paths have led me to the PhotoScan software, which looks as capable as anything for both photogrammetry and 3D modeling. I'm thinking CHI has recommended PhotoScan to the O'Keeffe Museum, but I'm not seeing anything about it here in the forum: are there many cultural heritage organizations using it for photogrammetry and 3D scanning? And is there something better for 3D scanning of art objects, assuming I want to start with photographs and not lasers? And can I get by with the $200 version of the PhotoScan software, or should I budget for the $3500 version? Useful links: The O'Keeffe's blog about RTI and photogrammetry: http://okeeffeimagingproject.wordpress.com/daily-documenting/video/ 3D recording of archaeological remains, processed with PhotoScan: http://www.academia.edu/1922635/Three-dimensional_recording_of_archaeological_remains_in_the_Altai_Mountains From Agisoft's website, making a 3D model from a statue tutorial: http://www.agisoft.ru/tutorials/photoscan/04/
  11. I've written up recommendations for how we should archive our RTIs and all their supporting files, and I've attached a PDF here. If I make major changes to the document I'll post a new one. One of the things I'll discuss with our database guru is whether we can put all the files into a single ZIP file or something like it, and whether that will enter and exit our digital asset management system unscathed. That ZIP wouldn't let us preview or download individual files, but it may retain the directory structure, which we don't preserve with other assets imported into the system. Thanks, Carla, for clarifying what RTI Builder does with the cropped JPEGs - I was imagining that the Viewer was reading layers assembled from the JPEGs, but now I have a better understanding of RTIs. 20121114_MIA_RTIarchiving.pdf
  12. Thanks for the additional information. I'll do more experimentation with the builder and viewer before we come to any semi-final archiving strategy, but for now I'm recommending we archive, at a minimum:
     - the DNGs from the capture session, with our object metadata embedded and with the presets zeroed out. In our workflow, we process from CR2 to DNG as a very last step, even after making JPEGs, so every change we made to the CR2 is included in the final DNG.
     - the entire assembly-files folder, because in the example I'm looking at it's 111 KB, and it contains both the blend and the LP file
     - the .ptm file
     - the Shooting Log spreadsheet
     And here's what we can leave out of the archive:
     - the jpeg-exports folder: these can be easily recreated from the DNGs (but as ACR changes, interpretations of DNGs may change, so maybe keeping a processed JPEG is worthwhile)
     - the cropped-files folder: each of these JPEG layers, I think, is included pixel-for-pixel inside the PTM
     And here's what I don't know:
     - is the XML file that sits at the same level as the assembly-files folder (not inside it) the log file?
     - how can I wrap all these files up into a neat little package that we'll be able to find and reinterpret in a few decades?
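The "neat little package" question above can be sketched with Python's standard zipfile module, which does preserve directory structure inside the archive. The folder layout and file names in the usage line are assumptions for illustration; the real capture folder would be substituted.

```python
# Minimal sketch of bundling one RTI capture folder (DNGs, assembly-files,
# .ptm, shooting log) into a single ZIP while keeping the directory
# structure. Folder names here are assumptions for illustration.

import zipfile
from pathlib import Path

def archive_rti(capture_dir, zip_path):
    capture_dir = Path(capture_dir)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(capture_dir.rglob("*")):
            if path.is_file():
                # arcname keeps paths relative to the capture folder's
                # parent, so the folder structure survives inside the ZIP
                zf.write(path, arcname=path.relative_to(capture_dir.parent))

# Hypothetical usage:
# archive_rti("20121114_RTI_capture", "20121114_RTI_capture.zip")
```

Whether a DAMS passes such a ZIP through unscathed is a separate question, but the container itself is lossless and the structure can be verified on the way back out with `zipfile.ZipFile(...).namelist()`.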
  13. I'm writing recommendations for how we at the Minneapolis Institute of Arts will archive and access RTIs, both inside and outside our DAMS; we are using TMS for collections management, and Virage MediaBin for asset management. I don't know how MediaBin will treat groups of images plus XML and LP and other supporting files, but I'm going to be looking for a way to keep them all associated with each other. My first steps will be to establish which of the files that RTI Builder creates that RTI Viewer needs, both for archiving and dissemination of RTIs. For example, does RTI Viewer need the original jpeg-exports folder, or just cropped-files? I'm eagerly awaiting the arrival of CHI's Digital Lab Notebook (http://culturalheritageimaging.org/Technologies/Digital_Lab_Notebook/index.html) but we need an interim solution and in-house documentation for best practices for archiving RTIs, and that's why I'm posting here. I'll keep the group updated with my progress.