Everything posted by Taylor

  1. Eleni Kotoula and colleagues at the Archaeological Computing Research Group at the University of Southampton did some work on transmitted RTI using visible light and infrared radiation some years ago; see, for example, this demonstration at the CAA conference in 2013: https://uk.caa-international.org/2013/02/19/reflectance-transformation-imaging/ As you mentioned, one challenge of this method is that the setup violates a basic assumption of RTI: that the reflected light can be modeled using a biquadratic function (in the case of PTMs) or hemispherical harmonics (in the case of RTIs). That doesn't mean you couldn't find a way to process transmitted "RTI" microscopy and get an interesting result (once you supply a set of light positions), or that the information gained using transmitted "RTI" microscopy wouldn't be useful. With more transmitted light positions, you should get more information than is possible using a single light position. There are examples of reflected RTI microscopy using a light array dome; for example, see Paul Messier's "monkey brain": http://www.paulmessier.com/single-post/2014/10/01/The-first-batch-of-studios-Monkey-Brain-light-arrays-for-RTI-microscopy-have-shipped-to-museum-conservation-departments There are also some interesting results using an LED array microscope with phase contrast, bright-field, and dark-field illumination to achieve extremely high resolution and 3D imaging: http://www.laurawaller.com/research/
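To make that modeling assumption concrete, here is a minimal sketch (Python with NumPy; the function names are my own illustration, not from any RTI tool) of fitting the six per-pixel PTM coefficients by least squares, given projected light directions and the pixel's luminance in each capture:

```python
import numpy as np

def fit_ptm_coefficients(light_dirs, intensities):
    """Fit the 6 biquadratic PTM coefficients for one pixel by least squares:
    L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
    light_dirs: (N, 2) array of projected light directions (lu, lv)
    intensities: (N,) array of the pixel's luminance in each image
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Design matrix: one row per light position, one column per coefficient
    A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def evaluate_ptm(coeffs, lu, lv):
    """Relight the pixel for a new light direction (lu, lv)."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0*lu**2 + a1*lv**2 + a2*lu*lv + a3*lu + a4*lv + a5
```

Whether a least-squares fit like this gives sensible normals for transmitted light is exactly the open question, since the biquadratic model was derived for reflection.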
  2. RTI claims a new success, the discovery of a drawing of a windmill on a wall of Isaac Newton's childhood home, Woolsthorpe Manor, in Lincolnshire, UK: https://www.theguardian.com/science/2017/dec/08/windmill-drawing-found-on-wall-of-isaac-newton-childhood-home
  3. Taylor

    Reflected UV with RTI

    There's no inherent reason why an RTI generated from your UV reflectance images should be blurry, as long as they're as sharp as you say. The first thing I'd check is whether the images are registered (aligned), and one way to do this is to check that the spheres are all in the same position in the images you captured to construct the RTI. After you detect the spheres (pp. 15-17 of the RTI Guide to Highlight Image Processing), the circle and the center of each sphere should be in the same position in all the images. If any of the spheres is out of position, then something has moved (e.g., the camera, object, or possibly just a sphere). If the camera or object moved, you could remove the images captured from that point on from the RTI, resulting in a limited range of light positions in the final RTI. Also, make sure you're set up on a stable surface. If only the sphere you chose for sphere detection has moved (I've seen it happen), you could try detecting and aligning the images on the alternate sphere instead. Alternatively, if the camera or object moved, you could either re-do the capture sequence or try aligning the images using a tool such as ImAlign (available from CHI), PTGui, or Photoshop. If all the images are sharp and aligned properly, I'd check if anything else could have changed during the capture sequence, such as aperture or focus, zoom, or other exposure settings. Check also to make sure you have image stabilization turned off. Normally, these settings shouldn't change if you follow the steps in the guide, but it's easy to forget and leave something on "auto," for example, or gravity could have moved the zoom lens if you're using one. You can get elastic bands to hold the zoom and focus rings, or just use a piece of gaffer's tape to hold them in place.
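If it helps to automate that sanity check, here's a small illustrative sketch (plain Python/NumPy, my own helper, not part of RTIBuilder) that flags images whose detected sphere center strays from the median center by more than a pixel tolerance:

```python
import numpy as np

def find_moved_images(centers, tol_px=2.0):
    """Flag images whose detected sphere center deviates from the median
    center by more than tol_px pixels, suggesting that the camera, object,
    or sphere moved during the capture sequence.
    centers: dict mapping image name -> (x, y) sphere center in pixels.
    Returns a list of (image_name, deviation_px) for the flagged images.
    """
    names = list(centers)
    xy = np.array([centers[n] for n in names], dtype=float)
    median = np.median(xy, axis=0)          # robust reference position
    dev = np.linalg.norm(xy - median, axis=1)
    return [(n, d) for n, d in zip(names, dev) if d > tol_px]
```

You'd still detect the sphere centers with your usual tool; this only makes the "did anything move, and starting from which image?" question quantitative.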
  4. Perhaps of interest primarily to the forum readers in the U.S.: if only Thomas Jefferson could settle the issue. Happy 4th of July!
  5. I've been looking for focusing aids for IR and UV multispectral imaging, and while browsing on a whim at our local salvage store, Urban Ore, in Berkeley, I happened across an item that might be useful. It's a vintage face tanning device with both a UV lamp and IR resistance-heating sources. I'm guessing it dates to the '50s or '60s. It's a German-made device by "Dr. Kern & Sprenger, KG" and it runs on 220V AC. I picked it up for $10. I haven't checked, but I wouldn't be surprised if you could find one of these on eBay. The IR heaters would radiate mostly in the thermal range (3,000 to 30,000 nm, or 3-30 microns), outside the range of the CMOS sensors on most cameras, but they certainly also radiate in the near IR (700 to 1,300 nm). However, I'm thinking of disconnecting them or rewiring the switches because they draw a lot of power and produce mostly heat. They can't be switched off while the UV lamp is on, unfortunately (the switches offer only IR "Wärme," or IR+UV "Sonne + Wärme"). I also have an IR LED flashlight from maxmax.com. I'm trying to find out more about the UV lamp in this device, particularly about the spectrum it emits and whether there are modern lamps that I can replace it with to get specific UV wavebands. It appears to be a high-pressure mercury lamp, but could also be a low-pressure lamp. A label on the back of the device identifies it ("Typ UV Brenner") as a "UV800". I've tried, but haven't yet found a way to remove the lamp in order to replace it. It emits quite a bit of visible light, and while testing it I've been wearing sunglasses with a good UV-blocking rating to protect my eyes, and I avoid looking at the lamp directly. I'd also like to find out if I can attach dichroic or other filters in front of the UV lamp to specifically select certain wavelengths or wavebands of UV light. For safety reasons, I'd like to filter out any UVB and UVC.
If anyone has experience with filtering for specific UV bands on the light source, I'd be interested to hear about it. Suggestions for other UV light sources to use as focusing aids would also be welcome. I know others in this forum have made good suggestions for UV sources for actual imaging. For UV imaging, here's another example of a tunable UV light source: "High Power UV LED Radiation System: 365nm 385nm 395nm 400nm 405nm" http://photographyoftheinvisibleworld.blogspot.de/2012/09/high-power-uv-led-radiation-system.html Dr. Schmitt's blog is my favorite resource for UV photography, and his inventory of macro lenses is another fantastic resource: The Macro Lens Collection Database http://www.macrolenses.de/ Thanks to Dr. Schmitt for permission to link to his sites here. Anyway, apart from the novelty appeal and low cost, the tanning device is relatively small and light, and it has a nice hinge that allows the light to be directed at various angles. It's an interesting addition to my inventory for multispectral RTI.
  6. Taylor

    AIC PhD Targets

    Sorry to hear that the AIC PhD targets may not be made available. They have a nice layout, a range of sizes, and useful features. I'd thought about buying a couple of different sizes for imaging paintings and other subjects if I had the budget, but maybe I missed the opportunity. You might also want to look at the Artist Paint Target (APT) developed by Roy Berns' team at the Rochester Institute of Technology: http://www.rit-mcsl.org/Mellon/PDFs/Artist_Paint_Target_TR_2014.pdf It's relatively small, similar to the discontinued X-Rite mini ColorChecker. You might contact Dr. Berns to find out how to order one.
  7. Taylor

    Slow dense cloud generation on MacBook Pro

    You'll probably do better running Photoscan in OS X rather than running it in Windows through Parallels on a MacBook. And for whatever reason, Photoscan seems to play nicer with Nvidia graphics cards than with the Radeon GPUs, perhaps because it takes advantage of the CUDA architecture on Nvidia cards. That said, I've easily processed much larger datasets on a 2012 Mac Mini that has lesser specs (quad-core i7 processor with integrated HD 4000 graphics and 16 GB of RAM) than your newer MacBook Pro, so I don't think the specs of your MacBook are the concern; it's more likely to be the settings you're using or possibly the image quality. Processing time depends on a number of factors and settings in Photoscan, which makes the most use of discrete graphics cards (GPUs) during the step of generating a dense point cloud from a sparse cloud, so I'd focus on getting the GPU running. You should certainly disable one core of the CPU, as recommended in the Photoscan User's Guide (via the OpenCL tab under Preferences in Photoscan), to take advantage of the discrete GPU on your laptop. If you have Xcode on your laptop, you might also try disabling multithreading by opening "Instruments" and using Preferences. It used to be said that Photoscan doesn't run as well with multithreading enabled, but I'm not sure if this is true of the current version of Photoscan, so I'd try it and see. I would also check the quality of the images (you can do this in the Photos pane) and make sure you've carefully optimized the sparse point cloud before you try generating the dense point cloud. It wasn't really clear if the 40 hours required to generate the dense point cloud (Step 2 of the Photoscan workflow) on your laptop also included the time to align the images and generate the sparse point cloud (Step 1 of the workflow).
It would, for example, take much longer to align the images and generate a sparse point cloud if the number of tie points is increased to 80,000 and pair preselection is turned off. Try using 60,000 tie points and the default "generic" setting rather than "disabled" in the pair preselection box. 75 cameras isn't that many images, and I wouldn't expect you'd have so much trouble processing a dense point cloud from this data set on your MacBook Pro. However, lots of things can affect performance, and the time required isn't always predictable. As an example, I recently processed a dense point cloud from an identical sparse point cloud using both "aggressive" and "mild" filtering settings, and the combined CPU + GPU performance differed by nearly a factor of two and a half simply because of the filtering setting (roughly 850 million samples/sec using "aggressive" filtering vs. 350 million samples/sec using "mild" filtering). Sometimes, trial and error is the best way to improve performance.
  8. Taylor

    Reflected-UV photography

    I've been doing reflected UV using a modified mirrorless, micro four-thirds camera, which allows you to focus in live view without having to refocus when you capture the images. I use a small, 1-watt UV-LED flashlight as both a focusing aid and as the UV-A radiation source for reflected UV imaging. I use a relatively inexpensive Novoflex Noflexar 35mm f/3.5 macro, which has good UV transmission because it lacks UV-blocking lens coatings. Here are links to information about this lens as well as other specialized lens options on Dr. Klaus Schmitt's UV photography and macro lens websites (for which I'm relying on his prior permission to cite): http://photographyoftheinvisibleworld.blogspot.de/2011/01/psychedelic-lilly-uv-and-vis-comparison_30.html http://www.macrolenses.de/ Some versions of this lens reportedly perform better than others in the UV, so it's worth checking before buying. [Disclosure: I purchased my Noflexar macro from Dr. Schmitt.] But they're much less expensive than the apochromatic lenses and allow UV imaging with reasonable exposure times (10-30 sec on my camera, usually). As George points out, you'll still get some focus shift with this lens when you compare IR and visible images to the UV image. For stacking and image registration, there are several options (Photoshop, CHI's ImAlign tool, and others). I agree with George that the Baader Venus filter is the best for UV reflectance. I find I get better contrast with shorter exposures using the Baader filter.
The Baader filter is expensive and delicate; you can also use a variety of combinations of UV-pass + IR-blocking filters, which have varying performance, such as those discussed here: http://photographyoftheinvisibleworld.blogspot.de/2012/08/leakage-in-reflected-uv-ultraviolet.html and here: http://www.ultravioletphotography.com/content/index.php/topic/1313-filter-transmission-charts/ It's important to block any IR component in the UV source to get good UV reflectance images, because silicon CCD and CMOS sensors are more sensitive in the IR than in the UV, so even a small amount of IR leakage will ruin the UV image. Since the Baader filter is so effective at blocking IR, you can even use daylight or any source of UV-A with the Baader filter if you give adequate consideration to the sensitivity of the object to UV, but this might not be a concern with skeletal remains. Wear UV-protective plastic glasses to protect your eyes from UV-A (they're inexpensive and easily available). There are several sources for how to process the images; probably the best source is the AIC Guide. For UV-induced visible fluorescence that Dale suggested, you'll need a dark room with little to no ambient visible light; a good UV + IR cutoff filter (visible bandpass filter) such as a Hoya IR/UV cutoff filter; and a clean source of UV radiation with no visible blue component. This usually requires filtering the UV source, even if it says "365 nm LED" or such. These LEDs nearly always have a blue tail that extends to wavelengths around 400-410 nm, which interferes with the induced visible fluorescence and degrades the images. I put a Hoya U-340 filter on the UV-LED flashlight to reduce the blue tail, which gives me a peak wavelength of 371 nm and near-zero blue tail. For UV reflectance and UV-induced fluorescence using the flashlight, I use a long exposure (usually between 10 and 30 sec at around f/5.6 to f/11) and sweep the flashlight beam across the surface to get an even exposure. 
I might do this 3-4 times and choose the best image. Because photogrammetry requires lots of images with good, even exposure, this sweeping technique won't work for it. So you'll probably want a stronger UV source, or a pair of sources, that you can leave stationary relative to the object. Strong sources of UV-A with good filtration to remove the blue tail tend to be either expensive or bulky and awkward to handle, which is why you may want to follow George's suggestion of modifying strobes. I've attempted this but found the strobes didn't produce enough UV-A for my needs; that's a topic for another discussion. The trick to getting good UV-induced visible fluorescence images is good filtration on the UV source to eliminate the visible blue tail; good filtration on the lens to cut off the UV source and any stray IR (the IR filtration matters only if you're using a modified full-spectrum camera); plus a dark room to reduce stray ambient light. On the plus side, this technique doesn't require a modified camera or a special lens, and it has other benefits, as Dale suggested. Because UV-C has shorter wavelengths, it has special optical properties that can be useful, but you can't capture UV-C reflectance images using any consumer camera with a silicon-based sensor. You need both special optics and a camera with a specialized sensor to capture UV-C reflectance, and as George mentioned, UV-C is dangerous to work with. You absolutely need to wear protective eyewear, cover your skin, and do your due diligence. However, you can capture UV-C-induced visible fluorescence with an unmodified consumer camera if you put the right filtration on both the UV-C source and the lens (to eliminate any visible components of the source or stray ambient light). These topics are also covered in the AIC Guide and elsewhere.
  9. Here are some additional tools to enable web publication and shared scholarship of 3D and complex, high-resolution data sets, in addition to the CHER-Ob project at Yale that Eleni Kotoula mentioned in the previous post: ARIADNE Visual Media Service: http://visual.ariadne-infrastructure.eu/ Archaeological Data Service: http://archaeologydataservice.ac.uk/archives/ Both of these sites are based on the 3D Heritage Online Presenter (3DHOP) developed by the Visual Computing Lab (CNR-ISTI), which, as Carla noted, also developed MeshLab and the RTIViewer. Along with the free RTI Mobile App, there are lots of new tools being developed for data sharing and scholarship.
  10. Thanks, Eleni--this looks really useful and I look forward to trying it out. Taylor
  11. Taylor

    The Raw and the Cooked

    Tom, thanks very much for those tips and observations. I wasn't aware of the importance of capturing the +/- 90-degree camera rotations from camera positions different from the 0-degree positions to reduce the z-errors, and I'll make this part of my workflow. The +/- 90-degree rotations aren't exactly at the same positions and optical axis as the 0-degree camera positions, but they're close. I use a camera rotation device made by Really Right Stuff, but it would be very easy to change the baseline positions before rotating the camera for better calibration, as you suggested. I realize the 35-mm macro lens (70 mm equivalent on a full-frame camera) isn't ideal for geometry, which is why I don't use it as the primary lens for photogrammetry (instead, I use the 14mm or 20mm [equivalent to 28mm or 40mm prime lenses on a full-frame camera] for photogrammetry); I use it because of its uncoated optics and better transmissivity and contrast in the UV range. I also capture UV-induced visible fluorescence (UVF), visible, and IR images for the same camera positions, so to get good registration of the spectral images at various wavelengths, I only change the on-lens filtration and light source, not the lens. Essentially, I'm using photogrammetry as an aid for producing spectral orthomosaics and also as a tool for spatial correlation of spectral data and other analyses. I thought I'd get better alignment of the various camera positions for macro and spectral imaging with the 3D model if I also captured a sequence of visible calibration images for the 35mm macro lens at each camera position. After I align the set of visible calibration images with the model, I rely on registration of the spectral images using Photoshop or ImAlign (which has a feature to align IR and VIS images) to spatially correlate the spectral data with the visible 3D model.
As I understand your suggestions, I'd be better off simply aligning the single visible camera position for the whole sequence of UVR, UVF, VIS, and various IR wavebands, rather than aligning an entire calibration sequence for each position. I've encountered the stair-stepping problem you mentioned when I merge chunks of the same surface captured with different sets of overlapping images (for example, if the capture sequence is repeated with a slightly different focus or a different lens). Slight differences in the calibration of each chunk or calibration group can result in two overlapping surfaces because of slight errors in the z-depth. Turning off pair preselection before aligning the images from different groups seems to help, although it takes much longer to align this way. My assumption has been that this establishes tie points between images from different calibration groups that overlap the same area to reduce the z-errors, and it has worked for several projects (although perhaps not without the rippling effect mentioned above). Finally, here are two references published in 2014 that discuss the algorithms that Photoscan uses. The first seems to indicate that earlier versions of Agisoft Photoscan used a version of the semi-global matching (SGM, or "SGM-like") algorithm (although this might no longer be true): Dall'Asta, E. and Roncella, R., 2014. A comparison of semiglobal and local dense matching algorithms for surface reconstruction. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XL(5/WG V/1), pp. 1–8. ISPRS Technical Commission V Symposium, 23–25 June, Riva del Garda, Italy. The second discusses a different set of algorithms and includes a link to the Photoscan Forum (http://www.agisoft.com/forum/index.php?topic=89.0) for further discussion: Remondino, F., Spera, M.G., Nocerino, E., Menna, F., Nex, F., 2014. State of the art in high density image matching.
The Photogrammetric Record, 29(146), pp. 144–166, DOI: 10.1111/phor.12063 Taylor
  12. Taylor

    The Raw and the Cooked

    And George, feel free to contact me about exchanging a set of TIFFs for testing, if you'd like. Taylor
  13. Taylor

    The Raw and the Cooked

    Tom, et al., I recently aligned, in the latest version of Photoscan, a set of JPEGs captured using a 14 mm focal length lens on a micro four-thirds camera (equivalent to 28 mm on a full-frame sensor), which I would have thought would provide pretty good geometry and overlap. I forget the exact number, but it was about 150 images in total to cover the entire surface of the painting, with about 70 percent horizontal and 35 percent vertical overlap at a ground sample distance of about 250 pixels per inch. I captured 3 images for each camera position, including +/- 90-degree rotations. In addition to these, the data set also included close-up images captured with a longer, 35 mm focal length lens (70 mm equivalent for a full-frame sensor), for which I also captured at least 3 to 6 sets of overlapping images with +/- 90-degree rotations at each camera position. Finally, I aligned a set of images of the entire painting captured with the 35 mm lens (70 mm equivalent). This set had 5 or 6 camera positions with 70 percent horizontal overlap and +/- 90-degree rotations at each camera position. The idea was to obtain a set of calibration images for every camera position used to document the painting and for each camera position to be aligned with the model, so I could spatially correlate the spectral data from every camera position using the model. I aligned using the highest setting for the sparse cloud with pair preselection turned off (this seems to help establish tie points between the different calibration groups), and Photoscan successfully aligned the 200+ images on the first attempt. After the initial alignment, I removed some of the obviously stray points, adjusted the bounding box, and optimized the reprojection uncertainty a few times (Photoscan very quickly gets the reprojection uncertainty to less than 0.6 arbitrary units after a few iterations).
I then used gradual selection once to reduce the reconstruction uncertainty to 10 arbitrary units or less, which eliminated roughly 75 percent of the points but still left well over 1,000 (and up to 10,000) projections per image to finish the optimization. I did several more iterations of optimization with gradual selection to get the reprojection uncertainty to less than 0.4, set my scale bars, and continued gradual selection and optimizing until the reprojection uncertainty was less than 0.25 on the slider. I don't have the final error statistics handy, but as I recall the sigma error was about 0.3 pixel before I built the dense point cloud. Still, I noticed the slight rippling effect. Maybe this was due to the longer focal-length images with less favorable geometry? Maybe I'll try the whole process again using TIFFs. I also often notice a color shift in the orthomosaics compared to the original images. The orthos often appear darker and a shade or two cooler in color temperature.
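For planning capture sequences like this, the relationship between overlap and image count is simple enough to sketch. This back-of-the-envelope calculator (my own illustrative helper, not part of Photoscan) estimates camera positions for a flat subject given one frame's on-object footprint and the fractional overlaps:

```python
import math

def images_needed(surface_w, surface_h, footprint_w, footprint_h,
                  overlap_h=0.7, overlap_v=0.35):
    """Rough count of camera positions needed to cover a flat surface.
    surface_w/h: subject dimensions; footprint_w/h: on-object coverage
    of a single frame (same units); overlap_h/v: fractional overlaps.
    Planning math only; rotations and close-up sets add more images.
    """
    step_w = footprint_w * (1 - overlap_h)   # horizontal advance per frame
    step_h = footprint_h * (1 - overlap_v)   # vertical advance per row
    cols = math.ceil((surface_w - footprint_w) / step_w) + 1 if surface_w > footprint_w else 1
    rows = math.ceil((surface_h - footprint_h) / step_h) + 1 if surface_h > footprint_h else 1
    return cols * rows
```

Multiplying the result by 3 (for the 0-degree and +/- 90-degree shots per position) gets you into the ballpark of the counts above.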
  14. Taylor

    The Raw and the Cooked

    George, I'd be happy to run some comparison tests between TIFFs and JPEGs of the same images. As I mentioned above, I'm often not sure why some data sets seem to process much more quickly than others. I suspect there are other factors affecting the speed of the dense-cloud generation in addition to bit-depth, since, as I recall, I've gotten both faster and relatively slower samples/sec with 8-bit JPEGs. Thanks for the link to Hirschmüller's paper on SGM and the observations about the different dense-cloud settings in Photoscan--the menu descriptions of "ultra-high," "medium," etc. could be made more explicit. I recall discussions about how the settings relate to downsampling somewhere on Photoscan's website and maybe also in their forums, but their impact on accuracy isn't always made clear. The rippling effect you mentioned is an interesting phenomenon, and I wonder if certain kinds of data tend to cause more or less of it--for example, the range and frequency of contrast differences, texture, and other characteristics of the images.
  15. Taylor

    The Raw and the Cooked

    Good points. I save all of my RAW files as DNGs with embedded RAW, which doubles the file size. I generally export a TIFF of images I plan to use (3 times the RAW file size at 16 bits per RGB channel) after I've done my basic processing in Lightroom. Then I export JPEGs for processing with Photoscan or RTIBuilder. If I want to add notations (boxes, arrows, and scale bars), I usually do this on JPEG images at maximum quality in ImageJ, and then I export a smaller compressed JPEG (usually limited to 500 KB) for embedding into reports, and I include a link to a higher-res version of the JPEG. On occasion, I'll use Photoshop on the TIFFs to convert to gray-scale and for false-color IR, or for processing prior to other software such as DCRAW, ImageJ, or DStretch for additional post-processing of false-color IR, UV, or Principal Components Analysis. Saving a false-color IR as a TIFF increases the number of layers, which increases the file size. I keep a log file of all the image processing steps in Photoshop. For a recent project documenting a relatively small portrait painting (recto and verso) with dimensions of 48 x 50 cm (roughly 19 x 20 inches), I captured approximately 700 images for visible, IR (various wavebands), UV, UV-induced fluorescence, visible-induced IR fluorescence, visible- and IR-RTIs, false-color infrared, and photogrammetry. Every camera position used to document the recto included a calibration set of 9 or more images to align all the data with the 3D model. By the time all the post-processing was complete, I had accumulated about 1500 images and nearly 70 GB of data. Not all of these images yielded data that was ultimately used (e.g., calibration and bracketing exposures, visible-induced IR that didn't reveal pigments that fluoresced in the IR, and dark-field exposures). All of this adds up very quickly, so at some point I may need to decide whether to keep all the data or let some of the derivatives and unused images go.
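That arithmetic adds up fast, so here's a rough sketch of a storage estimator (my own helper; the per-image sizes are placeholder assumptions to adjust for your camera) using the ratios above: a DNG with embedded RAW at ~2x the RAW size, a 16-bit TIFF at ~3x, plus an exported JPEG per image:

```python
def project_storage_gb(n_images, raw_mb, dng_factor=2.0, tiff_factor=3.0,
                       jpeg_mb=8.0):
    """Back-of-the-envelope storage estimate for a documentation project.
    n_images: number of captures kept; raw_mb: RAW file size in MB;
    dng_factor/tiff_factor: size ratios relative to the RAW;
    jpeg_mb: assumed size of the exported maximum-quality JPEG.
    Returns total storage in GB (1 GB = 1024 MB here).
    """
    per_image_mb = raw_mb * dng_factor + raw_mb * tiff_factor + jpeg_mb
    return n_images * per_image_mb / 1024.0
```

With a 20 MB RAW, 1500 images comes out around 150-160 GB once every keeper has a DNG, TIFF, and JPEG, which is why deciding what derivatives to keep matters.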
  16. Taylor

    The Raw and the Cooked

    Lots of interesting observations here, George, especially about the downside of using 16-bit TIFFs for speed and accuracy. I have processed some data sets with 16-bit TIFFs under the assumption, rather than the knowledge, that more bit-depth would give better results, but it's a question that has been nagging me for a while. I wouldn't mind running some comparisons between 16-bit TIFFs and 8-bit JPEGs. I get a wide range of processing speeds on my GPU, and I'm not sure what determines the speed. I once achieved a combined 1 billion samples per second between the GPU and CPU (Nvidia GTX 980Ti and 12-core Xeons on a 2012 Mac Pro), but more often I get something in the range of 250 million to 750 million samples/second. It's good to know, if I understand your post correctly, that processing the sparse point cloud at less than the highest quality setting can still yield accurate results. I also rarely process the dense point cloud at the ultra-high setting except for smaller data sets, mostly because of memory consumption (128 GB of RAM). I've also noticed the rippling effect in some models. At first, I thought it was just a moiré effect of rendering the point cloud or mesh at different scales relative to my screen resolution, but it's definitely there in some models. On your third point, storage, I've been reluctant to give up my RAW files, but I tend to store everything. I'm running up against the limits of this habit, however. I have a total of 9 TB of storage, including a 4 TB external backup drive and 5 TB of primary storage on two internal drives and one external drive. I use Apple's Time Machine for backing up my files. The 4 TB backup drive is essentially full, since Time Machine preferentially overwrites the oldest files first. The external drives start to fill up power strips, since they require an external AC power source, so it's an issue I'll have to face up to sooner or later.
  17. Taylor

    RTI standard suggestion

    This strikes me as a good idea, and not that difficult to implement. A wish list of features for future versions of the RTIViewer might include the ability to automatically annotate snapshots with a scale bar and the light-direction track-ball from the viewer. It's an interesting Master's thesis, considering the application of RTI to transparent or semi-transparent objects that don't fit the Lambertian, non-transmissive model assumptions that the RTI algorithms are designed for. I've had some qualitative success capturing RTIs of translucent and specular objects, but they produce artifacts and the resulting normal directions are quantitatively questionable. I don't suppose the thesis is available in English translation?
  18. Taylor

    New publication:

    This new book looks like it could be useful: "3D Recording, Documentation and Management of Cultural Heritage" http://3dom.fbk.eu/sites/3dom.fbk.eu/files/pdf/3DRecordingFlyer.pdf If anyone picks up a copy for £85 plus shipping, write us a review!
  19. Despite all my efforts to defeat this problem, it continues to be a bug in my workflow for photogrammetry. To get a good lens calibration in Photoscan, you shoot calibration images by rotating the camera -90 and +90 degrees along with the horizontal images in a sequence of overlapping positions. I've set my camera not to rotate pictures, so the calibration shots should appear as normal horizontal images, with the top of the object in the image facing right or left, depending on the camera rotation. However, the camera EXIF data apparently still records the image orientation, and when I import the images into Lightroom, they're all rotated vertically so the top of the object is pointing up and the images are in portrait orientation, not landscape. Lightroom 4 doesn't allow you to set a preference to ignore image orientation during import, so it automatically rotates the images whether you like it or not. When you export the images as TIFFs or JPEGs, they're all rotated so the object appears oriented right-side-up. I've tried using Photoshop CS6 to open the same DNGs that were imported into Lightroom, after setting the CS6 preferences to ignore image orientation, but it still rotates them into portrait mode. Photoscan has a pair of buttons that allow you to rotate the images right or left 90 degrees after they're loaded into the workspace window, but this doesn't affect how Photoscan uses the images for calibration--Photoscan still thinks the sensor has portrait dimensions, with the shorter side at the bottom instead of the longer side. Therefore, Photoscan groups all the vertically oriented images into a separate calibration group instead of using them to refine the calibration for the horizontal images. This is a maddening problem, because once you've created masks for your images, the masks have to have the same orientation as the images, or Photoscan won't allow you to apply the masks you created using rotated images to the unrotated images.
If you un-rotate the original images, you also have to remove rotation information for the masks to allow them to align properly, or Photoscan won't let you re-import the masks. I've heard that some use Windows Explorer to remove image orientation data, but I don't have Windows on all my Macs, and I've heard there are also problems with Windows applying lossy compression to JPEGs when it rotates them--very bad behavior! How do I defeat the image rotation problem? This gets very time consuming for projects with hundreds of images.
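One workaround I've been considering is resetting the EXIF Orientation tag myself before import. Here's a sketch using the Pillow library (assuming Pillow is installed; note that it re-encodes the JPEG, so keep the quality high and work on copies, never the originals):

```python
from PIL import Image

ORIENTATION_TAG = 274  # standard EXIF Orientation tag number

def clear_orientation(src_path, dst_path):
    """Rewrite a JPEG with its EXIF Orientation reset to 1 ("normal"),
    so downstream tools (Lightroom, Photoscan) stop auto-rotating it.
    Caution: this re-encodes the JPEG; use copies of your images.
    """
    img = Image.open(src_path)
    exif = img.getexif()
    if exif.get(ORIENTATION_TAG, 1) != 1:
        exif[ORIENTATION_TAG] = 1
    img.save(dst_path, exif=exif.tobytes(), quality=95)
```

For DNGs or lossless handling you'd want a metadata-only tool (e.g., exiftool) instead, but for JPEG exports a loop over the capture folder with something like this would at least avoid the Windows Explorer lossy-rotation trap.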
  20. Fortunately, I haven't had this difficulty recur in a long time, but if I do, I'll try running ozbigben's script. Thanks for the discussion! Taylor
  21. 3DHOP (3D Heritage Online Presenter) is an open-source software package for the creation of interactive Web presentations of high-resolution 3D models, oriented to the Cultural Heritage field. http://3dhop.net/
  22. Taylor

    Lens movement

    I sometimes use a focusing rail, and it has its uses. But I don't think it will help with the problem you're having with the lens extending, because it's gravity plus vibration that's causing your lens to creep. That would still be the case if you put the camera on a rail and shot vertically (effectively, the rail would just be an extension of the copy stand). Jon's suggestions are your best bet and are the easiest, least expensive options, I think.
  23. This should be of interest to fans of RTI, underwater Greek archaeology, and ancient astronomical computers: http://www.getty.edu/museum/programs/lectures/foley_lecture.html
  24. Taylor

    OrthoPhoto from Photoscan?

    Hi Rich, Running out of memory while processing meshes in Photoscan is a very common problem, and once the program freezes from lack of memory, you really can't proceed. FWIW, I usually monitor memory with the Activity Monitor utility in OS X. If the memory demand moves into the yellow and red zones, I generally quit and generate the dense point cloud at the next-lower quality setting. Occasionally I can squeak by with a few excursions into the red, but usually it means the dense point cloud is just too big for the available memory, so I go back and generate a less dense point cloud.

    For a given size of point cloud, the mesh-generation step takes just as much memory regardless of the settings you choose in the Generate Mesh step. For example, if you select medium quality, Photoscan generates the mesh at the highest possible number of polygons and then decimates it to the size you selected, so it's just as memory-intensive.

    Another option is to make copies of the point cloud as new chunks and shrink each chunk's bounding box to an overlapping region of the point cloud. You can then process the mesh for each chunk at higher settings, produce orthophotos, and stitch the orthophotos together in Photoshop or other software. This alternative is now easier with the new file format (PSX) in v. 1.2.1, since it doesn't dramatically increase the file size the way the older PSZ format did.

    My 2012 12-core Mac currently has 32 GB of RAM, and I can sometimes generate a mesh at the highest quality settings (ultra-high point cloud and high-polygon mesh) for about 200 images that are ~18 MB TIFFs. I'm upgrading it now to 128 GB of RAM and an Nvidia GTX 980 Ti GPU modified for Mac. It will be interesting to see how much this helps with mesh generation, and I'll let you know when it's running. I could try processing a mesh for you if it would help. Best, Taylor
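Splitting the bounding box into overlapping regions is just tiling arithmetic. Here's a minimal, hypothetical sketch of that subdivision along one axis (the function name and the 10% default overlap are my own choices, not anything from Photoscan's interface or API):

```python
def overlapping_tiles(extent, n, overlap=0.1):
    """Split a 1-D extent (lo, hi) into n tiles, each padded by `overlap`
    of the tile width on both sides so that neighboring chunks share
    geometry for stitching the orthophotos later."""
    lo, hi = extent
    width = (hi - lo) / n
    pad = width * overlap
    tiles = []
    for i in range(n):
        a = max(lo, lo + i * width - pad)        # clamp to the full extent
        b = min(hi, lo + (i + 1) * width + pad)
        tiles.append((a, b))
    return tiles
```

For example, a 10-unit-wide scene split into two chunks would give bounding boxes spanning (0.0, 5.5) and (4.5, 10.0), so each chunk's orthophoto has a shared strip to align on in Photoshop.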
  25. Taylor

    OrthoPhoto from Photoscan?

    Hi Richard, It sounds like a really interesting project. I have an ongoing mural project as well and hope we can exchange information. If you haven't already, I'd recommend updating to the latest version of Photoscan (currently v. 1.2.1). Based on the results I've seen so far, it seems to produce orthomosaics with fewer artifacts and better resolution.

    It's not necessary to generate a texture for the mesh in order to generate an orthophoto; however, the resolution of the orthomosaic depends partly on the quality of the mesh. The pixel size in the "Export Orthomosaic" dialogue box defaults to the highest possible resolution, so I wouldn't change those values. If the image is too large, it might help to break the orthomosaic into blocks, which you can choose in the Export Orthophoto dialogue box.

    First, I'd generate the highest-quality point cloud that your computer's memory is capable of using to generate a mesh (the mesh step is the most demanding of RAM). It's better to start with the most points possible in the dense point cloud and decimate the mesh afterward if necessary than to start with a less dense point cloud and generate the mesh at the highest possible settings. After you've generated the mesh at the highest quality settings your computer's memory will allow, I usually use Tools->Mesh->View Mesh Statistics->Fix Topology to repair defects in the mesh. If there are holes or occlusions in the mesh, you can also use Tools->Mesh->Close Holes. Then you can export the orthomosaic at the best quality settings. I've also found the tutorial on generating orthophotos to be a useful reference. Best of luck, Taylor
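As a sanity check on the default pixel size, it should be on the order of the ground sample distance of your source photos, which you can estimate by similar triangles. This is the generic photogrammetric formula, not Photoscan's own calculation, and the function name and example numbers are mine:

```python
def ground_sample_distance(pixel_pitch_mm: float,
                           focal_length_mm: float,
                           distance_mm: float) -> float:
    """Size of one sensor pixel projected onto the subject (mm per pixel),
    by similar triangles: GSD = pixel pitch * distance / focal length."""
    return pixel_pitch_mm * distance_mm / focal_length_mm

# e.g., a 5-micron pixel pitch, 50 mm lens, camera 2 m from a mural:
gsd = ground_sample_distance(0.005, 50.0, 2000.0)  # about 0.2 mm per pixel
```

If the export dialogue proposes a pixel size much smaller than this estimate, the extra resolution is interpolated rather than real detail, which is another reason not to override the default values.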