Taylor last won the day on March 7 2019

Taylor had the most liked content!

Community Reputation

44 Excellent

About Taylor

  • Rank
    Advanced Member

  1. It has taken me a long time to try out the beta DLN-CC and DLN-Inspector, but they look to be really promising tools for organizing projects and managing metadata. Some initial observations:

The RTI Image Set Details tab titled "General" is currently set up for highlight RTIs. It would be helpful to have some fields in the "RTI Properties" section (or on the "Setup" tab, below) for dome-based RTI parameters, such as the radius of the dome (I used string length as a substitute, but it's not exactly the same geometry), the number of lights, etc.

The "Setup" tab allows a single photo of the setup to be added, but it might also include a section for the geometry of the capture setup (currently under "RTI Properties" on the "General" tab). For our RTI dome, such measurements could include the height of the stage on which objects are placed, the vertical distance between the stage and the plane of the dome's equator, the distance between the camera sensor and that plane, and the number and type of lights, etc.

For dome-RTI capture setups, a way to point to LP files and other calibration files (e.g., flat fields, color checker profiles) that are used for processing multiple image sets would be helpful.

An easier way to define custom directory structures for image sets, without manually entering details into JSON files, would be useful, especially for legacy projects whose existing directory structures don't fit the examples in the user guides.

These are just some suggestions that I'd find useful. I realize there's a balance between tracking all the details and keeping the DLN tools simple and easy to use for the majority of people doing RTI and photogrammetry. Thanks for all your hard work on these DLN tools! Best wishes, Taylor
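On the custom-directory-structure point: until the tools support this directly, a small script can generate a first-draft JSON description of a legacy directory tree for later hand-editing. This is a generic sketch only; the field names are hypothetical and are not the actual DLN schema.

```python
import json
from pathlib import Path

IMAGE_SUFFIXES = {".jpg", ".jpeg", ".tif", ".tiff", ".dng"}

def describe_image_sets(root):
    """Walk a legacy capture directory and emit a JSON summary of every
    subfolder that contains images -- a starting point for hand-editing
    into whatever schema the DLN tools expect (field names are made up)."""
    sets = []
    for folder in sorted(Path(root).rglob("*")):
        if not folder.is_dir():
            continue
        images = sorted(p.name for p in folder.iterdir()
                        if p.suffix.lower() in IMAGE_SUFFIXES)
        if images:
            sets.append({"path": str(folder.relative_to(root)),
                         "image_count": len(images),
                         "images": images})
    return json.dumps({"image_sets": sets}, indent=2)
```

The output is just a skeleton to edit; the real mapping onto DLN's JSON would still be manual.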
  2. Hi Carla, I've set up a project, an image set group, and a single RTI image set for initial testing in the DLN-CC tool. I then tried to inspect the image set with the DLN-Inspector tool, after updating the DLN database according to the instructions in Appendix A1 of the User Guide: DLN Inspector.

When I open the DLN Inspector, it shows the Project Name in the first drop-down selection box, but no Image Groups appear in the second drop-down selection box. I've tried quitting and reopening both the DLN-CC and DLN-Inspector tools, but this makes no difference. I've also checked all my entries in the DLN-CC tool, but I can't find anything wrong. Do you have any suggestions for what else I should be checking?

Other items: I found a couple of typos in Appendix A.1.1 of the User Guide: DLN Inspector. The quoted instruction in step 7 appears to refer to Step 3 rather than Step 2, and there isn't a scripts subfolder in the directory (Step 4 also refers to the scripts subfolder, which doesn't exist).

In DLN-CC, when I use the "Save RDF" button in the Image Sets Overview window, I get a series of Java errors. However, I checked the directory for the image set, and it does contain a new file named "dln-cptset.xml" that appears to contain all the relevant information for the image set. Best wishes, Taylor
  3. Eleni Kotoula and colleagues at the Archaeological Computing Research Group at the University of Southampton did some work on transmitted RTI using visible light and infrared radiation some years ago; for example, this demonstration at the CAA conference in 2013: https://uk.caa-international.org/2013/02/19/reflectance-transformation-imaging/

As you mentioned, one challenge of this method is that the setup violates a basic assumption on which RTI is based: that the reflected light can be modeled using a biquadratic function (in the case of PTMs) or hemispherical harmonics (in the case of RTIs). That doesn't mean you couldn't find a way to process transmitted "RTI" microscopy and get an interesting result (once you supply a set of light positions), or that the information gained wouldn't be useful. With more transmitted light positions, you should get more information than is possible from a single light position.

There are examples of reflected RTI microscopy using a light array dome; for example, see Paul Messier's "monkey brain": http://www.paulmessier.com/single-post/2014/10/01/The-first-batch-of-studios-Monkey-Brain-light-arrays-for-RTI-microscopy-have-shipped-to-museum-conservation-departments

There are also some interesting results using an LED array microscope with phase contrast, bright-field, and dark-field illumination to achieve extremely high resolution and 3D imaging: http://www.laurawaller.com/research/
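For readers unfamiliar with the biquadratic assumption mentioned above: a PTM fits, per pixel, a six-coefficient polynomial in the projected light direction (lu, lv) by least squares. A minimal numpy illustration of that model (not CHI's actual fitter):

```python
import numpy as np

def fit_ptm_pixel(lu, lv, L):
    """Fit the six biquadratic PTM coefficients for a single pixel.

    lu, lv : arrays of projected light-direction components, one per image
    L      : observed luminance of this pixel in each image
    Model:   L = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
    """
    A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(A, L, rcond=None)
    return coeffs

def relight_pixel(coeffs, lu, lv):
    """Evaluate the fitted model for a new light direction."""
    return coeffs @ np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
```

Transmitted-light capture breaks the physical assumption behind this model, but nothing stops you from running the same fit on transmitted images and inspecting the result, which is the point made above.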
  4. RTI claims a new success, the discovery of a drawing of a windmill on a wall of Isaac Newton's childhood home, Woolsthorpe Manor, in Lincolnshire, UK: https://www.theguardian.com/science/2017/dec/08/windmill-drawing-found-on-wall-of-isaac-newton-childhood-home
  5. There's no inherent reason why an RTI generated from your UV reflectance images should be blurry, as long as the images are as sharp as you say. The first thing I'd check is whether the images are registered (aligned). One way to do this is to check that the spheres are in the same position in all the images you captured to construct the RTI. After you detect the spheres (pp. 15-17 of the RTI Guide to Highlight Image Processing), the circle and center of each sphere should be in the same position in every image.

If any sphere is out of position, then something has moved (e.g., the camera, the object, or possibly just a sphere). If only the sphere you used for detection has moved (I've seen it happen), you could try detecting and aligning on the alternate sphere instead. If the camera or object moved, you could remove the images captured from that point on, at the cost of a limited range of light positions in the final RTI, or re-do the capture sequence, or try aligning the images with a tool such as ImAlign (available from CHI), PTGui, or Photoshop. Also, make sure you're set up on a stable surface.

If all the images are sharp and properly aligned, I'd check whether anything else could have changed during the capture sequence, such as aperture, focus, zoom, or other exposure settings. Also make sure image stabilization is turned off. Normally these settings shouldn't change if you follow the steps in the guide, but it's easy to forget and leave something on "auto," for example, and gravity can shift the zoom ring if you're using a zoom lens. You can get elastic bands to hold the zoom and focus rings, or just use a piece of gaffer's tape to hold them in place.
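The sphere-position check described above is easy to automate once you have detected sphere centers for each image. A small sketch (the detection step itself is assumed done elsewhere, e.g., in RTIBuilder):

```python
import numpy as np

def flag_moved_frames(centers, tol_px=2.0):
    """Given one detected sphere center (x, y) per image, flag the images
    whose center drifts more than tol_px pixels from the median position --
    a quick registration sanity check before building the RTI."""
    c = np.asarray(centers, dtype=float)
    ref = np.median(c, axis=0)          # robust reference position
    drift = np.linalg.norm(c - ref, axis=1)
    return [i for i, d in enumerate(drift) if d > tol_px]
```

Any flagged indices mark the point in the capture sequence where the camera, object, or sphere moved, which tells you which images to drop or re-align.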
  6. Sorry to hear that the AIC PhD targets may no longer be available. They have a nice layout, different sizes, and useful features. I'd thought about buying a couple of different sizes for imaging paintings and other subjects if I had the budget, but maybe I missed the opportunity. You might also want to look at the Artist Paint Target (APT) developed by Roy Berns' team at the Rochester Institute of Technology: http://www.rit-mcsl.org/Mellon/PDFs/Artist_Paint_Target_TR_2014.pdf They're relatively small, similar to the discontinued X-Rite mini ColorChecker. You might contact Dr. Roy Berns to find out how to order one.
  7. You'll probably do better running Photoscan in OS X rather than in Windows through Parallels on a MacBook. And for whatever reason, Photoscan seems to play nicer with Nvidia graphics cards than with Radeon GPUs, perhaps because it takes advantage of the CUDA architecture on Nvidia cards. That said, I've easily processed much larger datasets on a 2012 Mac Mini with lesser specs (quad-core i7 with integrated HD 4000 graphics and 16 GB of RAM) than your newer MacBook Pro, so I don't think the specs of your MacBook are the concern; it's more likely the settings you're using, or possibly the image quality.

Processing time depends on a number of factors and settings in Photoscan, which makes the most use of discrete graphics cards (GPUs) when generating a dense point cloud from a sparse cloud, so I'd focus on getting the GPU running. You should certainly disable one core of the CPU, as recommended in the Photoscan User's Guide (using Preferences and the OpenCL tab in Photoscan), to take advantage of the discrete GPU on your laptop. If you have Xcode on your laptop, you might also try disabling multithreading by opening "Instruments" and using Preferences. It used to be said that Photoscan doesn't run as well with multithreading enabled, but I'm not sure whether this is true of the current version, so I'd try it and see.

I would also check the quality of the images (you can do this in the Photos pane) and make sure you've carefully optimized the sparse point cloud before you try generating the dense point cloud. It wasn't clear whether the 40 hours required to generate the dense point cloud (Step 2 of the Photoscan workflow) on your laptop also included the time to align the images and generate the sparse point cloud (Step 1). It would, for example, take much longer to align the images and generate a sparse point cloud if the number of tie points is increased to 80,000 and pair preselection is turned off. Try using 60,000 tie points and the default "generic" setting rather than "disabled" in the pair preselection box.

75 cameras isn't that many images, and I wouldn't expect you to have so much trouble processing a dense point cloud from this data set on your MacBook Pro. However, lots of things can affect performance, and the time required isn't always predictable. As an example, I recently processed dense point clouds from an identical sparse point cloud using both "aggressive" and "mild" depth filtering, and the combined CPU + GPU throughput differed by nearly a factor of 2.5, simply because of the filtering setting (roughly 850 million samples/sec with "aggressive" filtering vs. 350 million samples/sec with "mild"). Sometimes trial and error is the best way to improve performance.
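One way to see why disabling pair preselection slows alignment so much is to count the image pairs the matcher must consider. A toy illustration (the per-image neighbor count for "generic" preselection is an assumed placeholder, not Photoscan's actual figure):

```python
def candidate_pairs(n_images, preselection=False, neighbors=20):
    """Image pairs the matcher must consider: every possible pair when
    pair preselection is disabled, or roughly a fixed number of candidate
    neighbors per image when preselection is on (illustrative model)."""
    all_pairs = n_images * (n_images - 1) // 2
    if preselection:
        return min(n_images * neighbors, all_pairs)
    return all_pairs
```

With every pair considered, the matching cost grows quadratically with image count, while preselection keeps it roughly linear, which is why "generic" is so much faster on large sets.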
  8. I've been doing reflected UV with a modified mirrorless, micro four-thirds camera, which lets you focus in live view without having to refocus when you capture the images. I use a small, 1-watt UV-LED flashlight as both a focusing aid and the UV-A radiation source for reflected UV imaging. I use a relatively inexpensive Novoflex Noflexar 35mm f/3.5 macro, which has good UV transmission because it lacks UV coatings. Here are links to information about this lens, as well as other specialized lens options, on Dr. Klaus Schmidt's UV photography and macro lens websites (cited with his prior permission): http://photographyoftheinvisibleworld.blogspot.de/2011/01/psychedelic-lilly-uv-and-vis-comparison_30.html http://www.macrolenses.de/ Some versions of this lens reportedly perform better than others in the UV, and it's worth checking before buying. [Disclosure: I purchased my Noflexar macro from Dr. Schmidt.] But they're much less expensive than apochromatic lenses and allow UV imaging with reasonable exposure times (usually 10-30 sec on my camera). As George points out, you'll still get some focus shift with this lens when you compare IR and visible images to the UV image. For stacking and image registration, there are several options (Photoshop, CHI's ImAlign tool, and others).

I agree with George that the Baader Venus filter is best for UV reflectance; I get better contrast with shorter exposures using it. The Baader filter is expensive and delicate, though. You can also use various combinations of UV-pass + IR-blocking filters, with varying performance, such as those discussed here: http://photographyoftheinvisibleworld.blogspot.de/2012/08/leakage-in-reflected-uv-ultraviolet.html and here: http://www.ultravioletphotography.com/content/index.php/topic/1313-filter-transmission-charts/ It's important to block any IR component in the UV source to get good UV reflectance images: silicon CCD and CMOS sensors are much more sensitive in the IR than in the UV, so even a small amount of IR leakage will ruin the UV image. Since the Baader filter is so effective at blocking IR, you can even use daylight or any other source of UV-A with it, provided you give adequate consideration to the sensitivity of the object to UV (this might not be a concern with skeletal remains). Wear UV-protective plastic glasses to protect your eyes from UV-A; they're inexpensive and easily available. There are several sources for how to process the images; probably the best is the AIC Guide.

For the UV-induced visible fluorescence that Dale suggested, you'll need a dark room with little to no ambient visible light; a good UV + IR cutoff filter (visible bandpass filter), such as a Hoya UV/IR cutoff filter; and a clean source of UV radiation with no visible blue component. This usually requires filtering the UV source, even if it's sold as a "365 nm LED" or similar: these LEDs nearly always have a blue tail extending to around 400-410 nm, which interferes with the induced visible fluorescence and degrades the images. I put a Hoya U-340 filter on the UV-LED flashlight to reduce the blue tail, which gives me a peak wavelength of 371 nm and a near-zero blue tail. For UV reflectance and UV-induced fluorescence with the flashlight, I use a long exposure (usually 10-30 sec at around f/5.6 to f/11) and sweep the flashlight beam across the surface to get an even exposure. I might do this 3-4 times and choose the best image.

Because photogrammetry requires lots of images with good, even exposure, this sweeping technique won't work for it; you'll probably want a stronger UV source, or a pair of sources, that you can leave stationary relative to the object. Strong UV-A sources with good filtration to remove the blue tail tend to be either expensive or bulky and awkward to handle, which is why you may want to follow George's suggestion of modifying strobes. I've attempted this but found they didn't produce enough UV-A for my needs; that's a topic for another discussion. The trick to good UV-induced visible fluorescence images is good filtration on the UV source to eliminate the visible blue tail; good filtration on the lens to cut off the UV source and any stray IR (the IR filtration matters only if you're using a modified full-spectrum camera); plus a dark room to reduce stray ambient light. At least it doesn't require a modified camera or a special lens, and it has other benefits, as Dale suggested.

Because UV-C has shorter wavelengths, it has special optical properties that can be useful, but you can't capture UV-C reflectance images with any consumer camera with a silicon-based sensor. You need both special optics and a camera with a specialized sensor, and as George mentioned, UV-C is dangerous to work with: you absolutely need to wear protective eyewear, cover your skin, and do your due diligence. However, you can capture UV-C-induced visible fluorescence with an unmodified consumer camera if you put the right filtration on both the UV-C source and the lens (to eliminate any visible components of the source or stray ambient light). These topics are also covered in the AIC Guide and elsewhere.
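The reason a small IR leak ruins a UV reflectance image can be made concrete with a back-of-the-envelope calculation. All the numbers below are illustrative placeholders, not measured values for any particular sensor or filter:

```python
def band_signal(source_power, filter_transmission, sensor_qe):
    """Relative recorded signal for one waveband:
    source power x filter transmission x sensor quantum efficiency."""
    return source_power * filter_transmission * sensor_qe

# Illustrative (made-up) numbers: a silicon sensor ~10x more sensitive
# in the IR than in the UV, and a UV-pass filter leaking just 1% of IR.
uv_signal = band_signal(1.0, 0.80, 0.05)   # wanted UV component
ir_signal = band_signal(1.0, 0.01, 0.50)   # unwanted IR leak
contamination = ir_signal / uv_signal      # fraction of "UV" image that is IR
```

Even with only a 1% filter leak, these assumed numbers put the IR contribution at roughly an eighth of the recorded "UV" signal, which is why aggressive IR blocking of the kind the Baader filter provides matters so much.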
  9. Here are some additional tools for web publication and shared scholarship of 3D and complex, high-resolution data sets, in addition to the CHER-Ob project at Yale that Eleni Kotoula mentioned in the previous post: ARIADNE Visual Media Service: http://visual.ariadne-infrastructure.eu/ Archaeological Data Service: http://archaeologydataservice.ac.uk/archives/ Both of these sites are based on the 3D Heritage Online Presenter (3DHOP), developed by the Visual Computing Lab (CNR-ISTI), which, as Carla noted, also developed MeshLab and the RTIViewer. Along with the free RTI Mobile App, there are lots of new tools being developed for data sharing and scholarship.
  10. Thanks, Eleni--this looks really useful and I look forward to trying it out. Taylor
  11. Tom, thanks very much for those tips and observations. I wasn't aware of the importance of capturing the +/- 90-degree camera rotations from different camera positions than the 0-degree positions to reduce the z-errors, and I'll make this part of my workflow. The +/- 90-degree rotations aren't exactly at the same positions and optical axis as the 0-degree positions, but they're close. I use a camera rotation device made by Really Right Stuff, and it would be very easy to change the baseline positions before rotating the camera for better calibration, as you suggested.

I realize the 35 mm macro lens (70 mm equivalent on a full-frame camera) isn't ideal for geometry, which is why I don't use it as the primary lens for photogrammetry (instead, I use the 14 mm or 20 mm lenses [equivalent to 28 mm and 40 mm primes on a full-frame camera]); I use it because of its uncoated optics and better transmissivity and contrast in the UV range. I also capture UV-induced visible fluorescence (UVF), visible, and IR images from the same camera positions, so to get good registration of the spectral images at various wavelengths, I change only the on-lens filtration and light source, not the lens. Essentially, I'm using photogrammetry as an aid for producing spectral orthomosaics and as a tool for spatial correlation of spectral data and other analyses.

I thought I'd get better alignment of the various camera positions for macro and spectral imaging with the 3D model if I also captured a sequence of visible calibration images with the 35 mm macro lens at each camera position. After I align the set of visible calibration images with the model, I rely on registration of the spectral images using Photoshop or ImAlign (which has a feature to align IR and VIS images) to spatially correlate the spectral data with the visible 3D model. As I understand your suggestions, I'd be better off simply aligning the single visible camera position for the whole sequence of UVR, UVF, VIS, and various IR wavebands, rather than aligning an entire calibration sequence for each position.

I've encountered the stair-stepping problem you mentioned when I merge chunks of the same surface captured with different sets of overlapping images (for example, when the capture sequence is repeated with a slightly different focus or a different lens). Slight differences in the calibration of each chunk or calibration group can result in two overlapping surfaces because of small errors in the z-depth. Turning off pair preselection before aligning the images from different groups seems to help, although alignment takes much longer this way. My assumption has been that this establishes tie points between images from different calibration groups that overlap the same area, reducing the z-errors, and it has worked for several projects (although perhaps not without the rippling effect mentioned above).

Finally, here are two references published in 2014 that discuss the algorithms Photoscan uses. The first seems to indicate that earlier versions of Agisoft Photoscan used a version of the semi-global matching (SGM, or "SGM-like") algorithm (although this might no longer be true): Dall'Asta, E. and Roncella, R., 2014. A comparison of semiglobal and local dense matching algorithms for surface reconstruction. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XL(5/WG V/1), pp. 1-8. ISPRS Technical Commission V Symposium, 23-25 June, Riva del Garda, Italy. The second discusses a different set of algorithms and includes a link to the Photoscan Forum (http://www.agisoft.com/forum/index.php?topic=89.0) for further discussion: Remondino, F., Spera, M.G., Nocerino, E., Menna, F., Nex, F., 2014. State of the art in high density image matching. The Photogrammetric Record, 29(146), pp. 144-166, DOI: 10.1111/phor.12063 Taylor
  12. And George, feel free to contact me about exchanging a set of TIFFs for testing, if you'd like. Taylor
  13. Tom, et al., I recently aligned a set of JPEGs captured with a 14 mm lens on a micro four-thirds camera (equivalent to 28 mm on a full-frame sensor), which I would have thought would provide pretty good geometry and overlap, in the latest version of Photoscan. I forget the exact number, but it was about 150 images in total to cover the entire surface of the painting, with about 70 percent horizontal and 35 percent vertical overlap at a ground sample distance of about 250 pixels per inch. I captured 3 images at each camera position, including the +/- 90-degree rotations. In addition, the data set included close-up images captured with a longer, 35 mm lens (70 mm full-frame equivalent), for which I also captured at least 3 to 6 sets of overlapping images with +/- 90-degree rotations at each camera position. Finally, I aligned a set of images of the entire painting captured with the 35 mm lens (70 mm equivalent); this set had 5 or 6 camera positions with 70 percent horizontal overlap and +/- 90-degree rotations at each position. The idea was to obtain a set of calibration images for every camera position used to document the painting, and to align each camera position with the model, so I could spatially correlate the spectral data from every camera position using the model.

I aligned using the highest setting for the sparse cloud with pair preselection turned off (this seems to help establish tie points between the different calibration groups), and Photoscan successfully aligned the 200+ images on the first attempt. After the initial alignment, I removed some of the obviously stray points, adjusted the bounding box, and optimized on reprojection uncertainty a few times (Photoscan very quickly gets the reprojection uncertainty below 0.6 arbitrary units after a few iterations). I then used gradual selection once to reduce reconstruction uncertainty to 10 arbitrary units or less, which eliminated roughly 75 percent of the points but still left well over 1,000 to 10,000 projections per image to finish the optimization. I did several more iterations of optimization with gradual selection to get the reprojection uncertainty below 0.4, set my scale bars, and continued gradual selection and optimizing until the reprojection uncertainty was below 0.25 on the slider. I don't have the final error statistics handy, but as I recall the sigma error was about 0.3 pixel before I built the dense point cloud.

Still, I noticed the slight rippling effect. Maybe this was due to the longer focal-length images with less favorable geometry? Maybe I'll try the whole process again using TIFFs. I also often notice a color shift in the orthomosaics compared to the original images: the orthos often appear darker and a shade or two cooler in color temperature.
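The repeated filter-then-optimize loop described above can be sketched in the abstract. This is a generic illustration of the culling logic only, not the Photoscan API; the step factor and the safety floor are made-up parameters:

```python
def gradual_selection(errors, target, step=0.9, floor_fraction=0.5):
    """Iteratively tighten an error threshold toward `target`, never
    culling more than (1 - floor_fraction) of the remaining points in a
    single pass. `errors` is a list of per-point error values (e.g.,
    reprojection uncertainty); returns the surviving values, sorted.
    Mimics repeated rounds of gradual selection + optimization."""
    pts = sorted(errors)
    threshold = max(pts) if pts else target
    while pts and threshold > target:
        threshold = max(target, threshold * step)
        keep = [e for e in pts if e <= threshold]
        if len(keep) < len(pts) * floor_fraction:
            break  # one pass would cull too aggressively; stop here
        pts = keep
    return pts
```

Tightening in small steps with an optimization pass between rounds, rather than jumping straight to the final threshold, mirrors the iterative workflow described in the post.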
  14. George, I'd be happy to run some comparison tests between TIFFs and JPEGs of the same images. As I mentioned above, I'm often not sure why some data sets process much more quickly than others; I suspect there are factors affecting the speed of dense-cloud generation in addition to bit depth, since I've seen both faster and slower samples/sec with 8-bit JPEGs, from my recollection. Thanks for the link to Hirschmüller's paper on SGM and the observations about the different dense-cloud settings in Photoscan; the menu descriptions of "ultra high," "medium," etc. could be made more explicit. I recall discussions of how the settings relate to downsampling somewhere on Photoscan's website, and maybe also in their forums, but their impact on accuracy isn't always made clear. The rippling effect you mentioned is an interesting phenomenon, and I wonder whether certain kinds of data tend to cause more or less of it; for example, the range and frequency of contrast differences, texture, and other characteristics of the images.
  15. Good points. I save all of my RAW files as DNGs with embedded RAW, which doubles the file size. I generally export a TIFF of the images I plan to use (3 times the RAW file size at 16 bits per RGB channel) after I've done my basic processing in Lightroom. Then I export JPEGs for processing with Photoscan or RTIBuilder. If I want to add annotations (boxes, arrows, and scale bars), I usually do this on maximum-quality JPEGs in ImageJ, and then I export a smaller compressed JPEG (usually limited to 500 KB) for embedding in reports, with a link to a higher-resolution version.

On occasion, I'll use Photoshop on the TIFFs to convert to grayscale and for false-color IR, or for processing prior to other software such as DCRAW, ImageJ, or DStretch for additional post-processing of false-color IR, UV, or principal components analysis. Saving a false-color IR as a TIFF increases the number of layers, which increases the file size. I keep a log file of all the image processing steps in Photoshop.

For a recent project documenting a relatively small portrait painting (recto and verso) with dimensions of 48 x 50 cm (roughly 19 x 20 inches), I captured approximately 700 images for visible, IR (various wavebands), UV, UV-induced fluorescence, visible-induced IR fluorescence, visible and IR RTIs, false-color infrared, and photogrammetry. Every camera position used to document the recto included a calibration set of 9 or more images to align all the data with the 3D model. By the time all the post-processing was complete, I had accumulated about 1,500 images and nearly 70 GB of data. Not all of these images yielded data that was ultimately used (e.g., calibration and bracketing exposures, visible-induced IR that didn't reveal pigments fluorescing in the IR, and dark-field exposures). All of this adds up very quickly, so at some point I may need to decide whether to keep all the data or let some of the derivatives and unused images go.
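The file-size accounting above follows simple multipliers, so it's easy to turn into a quick estimator. The per-image RAW size and export fraction below are assumptions to replace with your own figures:

```python
def project_storage_gb(n_images, raw_mb, tiff_fraction):
    """Back-of-the-envelope storage estimate using the multipliers above:
    a DNG with embedded RAW is ~2x the RAW size, and a 16-bit TIFF is
    ~3x the RAW size (exported only for a fraction of the images).
    raw_mb is an assumed per-image RAW size, not a measured value."""
    dng_mb = n_images * raw_mb * 2
    tiff_mb = n_images * tiff_fraction * raw_mb * 3
    return (dng_mb + tiff_mb) / 1024.0
```

Running this with a project's real numbers makes the keep-or-cull decision at the end of the post concrete: you can see how much of the total is embedded RAW and derivative TIFFs before deciding what to let go.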