
GeorgeBevan
Members · Content Count: 95 · Days Won: 19

Everything posted by GeorgeBevan

  1. It's an enormous frustration of mine that the "Open GPS" projects of the early 2000s, GPL-GPS and GRINGO, seem to have expired (http://gps.psas.pdx.edu/ and http://www.helenav.nl/). It isn't even possible to get a RINEX file (the raw data from the satellites) out of a hand-held unit to reprocess, although the technology would certainly allow it. With long enough measurement times it is possible to get under a metre with a single frequency (http://www.geod.nrcan.gc.ca/products-produits/images/p1a_4_e.jpg). The publication from the GRINGO project in 2001 presented preliminary results showing that decimetre accuracy was possible from a 12-channel carrier-phase GPS with software reprocessing of measurements as short as 30 minutes: http://www.ingentaconnect.com/content/maney/sre/2001/00000036/00000280/art00006 It would be fantastic if we could reprocess the GPS data from a geotagger over the course of a long gigapan... sadly I don't think that will ever happen. Trimble and the others have the whole field sewn up tight.
  2. I can't speak to the Canon product, but I can say something a bit more general (I'm sure Tom will want to chime in on this). The best handheld GPS units with a 100ms clock will get you to within 3m horizontally. Height accuracy with GPS is considerably inferior (4.5 m or more; 1.5x the horizontal accuracy). All of the geotaggers for cameras on the market essentially use these chip-sets. By surveying standards this is not very good. Generally, though, geotaggers are becoming a must-have for those of us in the field. They provide great, "free" meta-data on how the photos were shot. As a nice bonus, the geotagger we're using also functions as a kick-ass shutter release for RTI.
     The problem Tom and I were discussing is how this low-quality, absolute GPS positioning data can be used in high-accuracy, close-range photogrammetry. It can certainly be used to put the models in the right place in the world, but can it be used to scale and orient the models accurately? At present, if I want to scale a model I can use scale sticks, or I can use a total station or differentially corrected GPS (accurate to 1cm horizontally) to put four or more control points into the scene. Getting correct orientation on photogrammetry models without surveyed points gets a little bit trickier... the scale bars need to be aligned using a spirit level so that they are perfectly in line with the x and y axes of the model. In the software one then stipulates that the scale bars, as well as providing scale, also correspond with the x and y axes (the z axis is then constrained to a unique direction). At least this is the way I deal with the issue in ADAM CalibCam... PhotoScan appears to handle camera location and pose data in a rather more relaxed fashion.
     This question is one I've been wrestling with vis-à-vis laser scanning. LiDAR always knows which way is down and what the absolute scale is without adding anything to the scene being scanned. It would be ideal if a geotagging device added to a DSLR could provide high-accuracy position and camera pose information to solve this problem. For most close-range work, like rock art, precise absolute orientation isn't going to matter that much (scaling is more of an issue). Personally I'm more concerned about cases where I want to measure how far, say, a distant tall monument is leaning using photogrammetry. If I can't access the scene, I can't put in place the ground-control points to establish verticality. I've certainly been inspired by Tom's recent experience with PhotoScan to reprocess some of my data with ground control. How PhotoScan will deal with GPS data from UAVs with no surveyed GCPs is an urgent concern of mine for an upcoming project.
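As an illustration of what surveyed control points buy you: given four or more points known in both model space and survey space, the scale, rotation, and translation of the model can be solved in one step. Below is a minimal sketch of the standard absolute-orientation (Umeyama/Horn) solution in Python; the coordinates are made up and the function is only illustrative, not how CalibCam or PhotoScan exposes this.

```python
import numpy as np

def absolute_orientation(model_pts, survey_pts):
    """Return scale s, rotation R, translation t with survey ~= s * R @ model + t."""
    mu_m, mu_s = model_pts.mean(axis=0), survey_pts.mean(axis=0)
    Mc, Sc = model_pts - mu_m, survey_pts - mu_s
    # SVD of the cross-covariance gives the least-squares rotation (Umeyama 1991).
    U, D, Vt = np.linalg.svd(Sc.T @ Mc)
    sign = np.sign(np.linalg.det(U @ Vt))        # guard against a reflection
    R = U @ np.diag([1.0, 1.0, sign]) @ Vt
    s = (D * [1.0, 1.0, sign]).sum() / (Mc ** 2).sum()
    t = mu_s - s * R @ mu_m
    return s, R, t

# Four hypothetical control points: model coordinates vs. surveyed coordinates (metres).
model = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.2]])
survey = np.array([[512.10, 4810.30, 101.00], [512.95, 4810.80, 101.02],
                   [511.60, 4811.15, 101.05], [512.45, 4811.65, 101.25]])
s, R, t = absolute_orientation(model, survey)
```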
  3. I use the free Bulk Rename Utility quite a bit: http://www.bulkrenameutility.co.uk/Main_Intro.php It's probably overkill for converting upper-case extensions to lower-case, but it sure is handy. Personally I find it a bit easier to use than Bridge or Lightroom for renaming.
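For the one-off case of upper-case extensions mentioned above, even a few lines of Python will do it; a minimal sketch (the folder path is hypothetical):

```python
from pathlib import Path

folder = Path("D:/photos/session_01")     # hypothetical folder of camera output
for f in folder.glob("*.JPG"):
    f.rename(f.with_suffix(".jpg"))       # IMG_0001.JPG -> IMG_0001.jpg
```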
  4. So you're exporting the relative-only bundle points from CalibCam into PhotoScan? How much low-level access does PhotoScan give you to the matching points? I've only worked with PhotoScan briefly for some underwater work. It didn't give me the level of control I needed when working with low quality imagery, although I know others have used it with some success.
  5. Interesting! So you've got decent orientation + scaling from GPS EXIF data? I've had lots of trouble getting usable results in CalibCam, even with a big sigma. I'll be very interested to see how PhotoScan deals with control point data.
  6. Tom, How does the latest version of PhotoScan work with GPS data embedded in EXIF files from geotaggers? Is it able to use this lower quality GPS data to scale and orient the models? ADAM has really shied away from using this sort of control data, although I understand the new version will allow the integration of GPS data from UAVs. We're just starting to use a really nice geotagger for our work (Solmeta Geotagger Pro 2). The triple axis compass is very handy! George
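For anyone curious what a geotagger actually writes into the files, here is a minimal sketch of reading the GPS block back out of the EXIF with Pillow; the file name is a placeholder, and this is not how PhotoScan ingests the data, just a way to see what is there.

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

exif = Image.open("DSC_0001.jpg")._getexif() or {}
gps_raw = exif.get(34853, {})                       # 34853 is the EXIF GPSInfo tag
gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}
print(gps.get("GPSLatitude"), gps.get("GPSLatitudeRef"),
      gps.get("GPSLongitude"), gps.get("GPSLongitudeRef"),
      gps.get("GPSAltitude"), gps.get("GPSImgDirection"))
```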
  7. What sort of camera/lens are you using for your work on these stones? Detailed images would definitely be desirable.
  8. Sigmund, I talked to Taylor and his images are of paintings. I've worked with his data before and I'm not sure it would be as suitable as your rock-art examples (it tends not to produce the really dense point clouds you can get from stone). In particular, I'd like to compare the resulting photogrammetry data-sets with adaptive depth mapping (see below) as well as an ambient occlusion filter. How well do the fine features in your stones come out with the data produced by each software package? If you've not seen it, the 2012 English Heritage report on Stonehenge shows what can be done with depth mapping of range data to reveal otherwise hidden surface features, in this case daggers. http://services.english-heritage.org.uk/ResearchReportsPdfs/032_2012web.pdf While this data was generated with lasers, photogrammetry definitely produces data of sufficient, if not better, quality to do similar work. George
  9. This paper by Lindsay Macdonald contains a very important discussion of the issues of error in the calculation of surface normals with RTI: http://ewic.bcs.org/upload/pdf/ewic_ev11_s8paper4.pdf What Lindsay shows is that by selecting three light positions from the many used (in his case a dome of 64 lights) and computing surface normals with the photometric stereo algorithm, he obtained a normal field more accurate than the one produced by RTI when normals are calculated from all 64 light positions. The exciting thing about this result is that existing RTI data-sets can be recalculated using this technique of photometric stereo lamp-triplets. I'd be interested to see what the normal field looks like calculated using PS triplets of your data-set above.
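For anyone who wants to try the triplet idea on an existing capture, the core of photometric stereo is only a few lines: with three known light directions, the Lambertian model I = L · (albedo · n) inverts directly per pixel. A minimal sketch; the light vectors and file names are placeholders for three images picked out of an RTI sequence.

```python
import numpy as np
from imageio import imread

L = np.array([[ 0.50,  0.000, 0.866],      # unit light-direction vectors (lx, ly, lz)
              [-0.25,  0.433, 0.866],
              [-0.25, -0.433, 0.866]])
# Three images from the capture, converted to grey intensities (assumes RGB input).
I = np.stack([imread(f).astype(float).mean(axis=2)
              for f in ("light_a.jpg", "light_b.jpg", "light_c.jpg")], axis=-1)

G = np.einsum("ij,hwj->hwi", np.linalg.inv(L), I)   # G = albedo * normal, per pixel
albedo = np.linalg.norm(G, axis=-1)
normals = G / np.maximum(albedo, 1e-9)[..., None]   # unit normal field
```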
  10. This is a very interesting issue! Do you think you could send out the "blend image" of the sphere(s) for the above RTI so we could get a sense of the light distribution? Quantitative RTI is really only in its infancy. I'm hoping to write some code in the next few months to bring PTM files into R to do some analysis of the normals with directional statistics using CircStats (http://cran.r-project.org/web/packages/CircStats/index.html). If we can develop a library in an open source package like R to do this kind of analysis, it will be much more accessible to the community than any code in Matlab. What I have noticed already in extracting normals from PTMs in Matlab is how often the parameters fail to fit, especially in cases of self-shadowing. It would be very interesting to see how colour affects the fitting of normals in PTM. It's not something I'd given much thought to. HSH is, of course, far more robust in generating normals, especially in cases of self-shadowing. Is there any published specification for HSH RTIs? I can't seem to find any...
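The fitting failures show up very directly when you recover normals from the PTM coefficients yourself. Each pixel of an LRGB PTM stores six biquadratic coefficients a0..a5 for L(lu, lv) = a0·lu² + a1·lv² + a2·lu·lv + a3·lu + a4·lv + a5, and the normal is taken at the luminance maximum. A minimal sketch in Python (the coefficient array is assumed to have been parsed out of the file already):

```python
import numpy as np

def ptm_normals(a):                      # a: (..., 6) array of per-pixel PTM coefficients
    a0, a1, a2, a3, a4, a5 = np.moveaxis(a, -1, 0)
    d = 4.0 * a0 * a1 - a2 ** 2
    d = np.where(np.abs(d) < 1e-12, np.nan, d)      # the degenerate fits fail here
    lu0 = (a2 * a4 - 2.0 * a1 * a3) / d             # luminance maximum in (lu, lv)
    lv0 = (a2 * a3 - 2.0 * a0 * a4) / d
    lz = np.sqrt(np.clip(1.0 - lu0 ** 2 - lv0 ** 2, 0.0, None))
    return np.stack([lu0, lv0, lz], axis=-1)
```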
  11. Tom, This is just intended as a fun exercise to process a single set of imagery in different packages. In the first instance it's the number of points generated, but beyond that it's a question of where points get placed, particularly on edges. The latter will be pretty clear when the point clouds get compared. At the moment there's very little out there in terms of quantitative comparison of different packages. A lot of the time when photogrammetry/SfM results are presented, it's as a textured surface that tells one very little about how useful the model is going to be.
  12. Great. Which program do you want to use? Do you have Agisoft?
  13. Sigmund, If you have Dropbox you could share a folder with a set of images with me. I'm sure Taylor would also like to be involved. If the Georgia O'Keeffe people (Dale?) are into it, they could process in Agisoft. Let's aim to try Arc3D, 123D Catch, Agisoft, ADAMTech and Bundler/SfM. It is crucial that there is some sort of scale bar in your image set so we can create models that can be registered and compared to each other. I could then generate some statistics on the differences between the models. George
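Once two models are registered and in the same units, the statistics I have in mind are just nearest-neighbour distances from one cloud to the other. A minimal sketch, assuming the clouds have been exported as plain XYZ text files (the file names are placeholders):

```python
import numpy as np
from scipy.spatial import cKDTree

cloud_a = np.loadtxt("model_arc3d.xyz")         # hypothetical exported clouds, one XYZ point per row
cloud_b = np.loadtxt("model_adamtech.xyz")

d, _ = cKDTree(cloud_b).query(cloud_a)          # distance from each A point to its nearest B point
print("mean %.4f  median %.4f  95th pct %.4f  max %.4f"
      % (d.mean(), np.median(d), np.percentile(d, 95), d.max()))
```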
  14. Sigmund, It seems to me we should try a little experiment in this thread! Why don't we take the same set of photos and run them through three or four different programs to generate data? I have used 123D Catch and it makes nice, quick models, particularly 360 degrees around objects. I know many people are also getting good results with Agisoft Photoscan. George
  15. The simplest way, IMHO, would be to use a colourized PTS (ASCII) file. Each point would be described by XYZ, RGB, and Nx, Ny, Nz (the surface normal). Such files can be read into CloudCompare. The chief difficulty is registering a point-cloud generated by photogrammetry to an RTI. One would have to apply the same transformation to the RTI as to the epipolar images in a stereo pair, as well as undistort the image to account for the lens. All this can be done... in theory.
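To make the file layout concrete, here is a minimal sketch of writing such a file: one header line with the point count, then X Y Z R G B Nx Ny Nz per point. The arrays are placeholders, and the column order is simply what I would assign when importing into CloudCompare, not a fixed standard.

```python
import numpy as np

xyz = np.random.rand(1000, 3)                    # placeholder geometry
rgb = np.random.randint(0, 256, (1000, 3))       # placeholder colours, 0-255
normals = np.tile([0.0, 0.0, 1.0], (1000, 1))    # placeholder unit normals

with open("cloud_with_normals.pts", "w") as f:
    f.write("%d\n" % len(xyz))                   # point count on the first line
    for p, c, n in zip(xyz, rgb, normals):
        f.write("%.4f %.4f %.4f %d %d %d %.4f %.4f %.4f\n" % (*p, *c, *n))
```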
  16. These are some great points, Denis. I've been amazed at the relatively low quality imagery produced by most commercial ROVs. The video cameras are fine for underwater repair tasks, but are really sub-par compared to terrestrial archaeological photography. I still think that photogrammetry is among the most promising tools for mapping deepwater sites, not least because it gathers all of the indispensable colour information as well as 3D information.
  17. In terms of processing speed we've found Helicon is definitely the fastest, and the best-of-breed in terms of functionality. If you're working at high magnification (of bugs etc.) you may need to process hundreds of individual images. There's quite a powerful open source plug-in for ImageJ that does focus stacking: http://bigwww.epfl.ch/demo/edf/ Have you tried that already?
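The underlying idea is simple enough to sketch: for each pixel, keep the value from whichever (already aligned) frame is locally sharpest. The snippet below uses the absolute Laplacian as the sharpness measure with OpenCV; the file names are placeholders, and a real stack would need alignment and some smoothing of the sharpness maps first.

```python
import cv2
import numpy as np

files = ["stack_%03d.jpg" % i for i in range(1, 51)]        # hypothetical aligned frames
frames = [cv2.imread(f) for f in files]
sharpness = np.stack([np.abs(cv2.Laplacian(cv2.cvtColor(im, cv2.COLOR_BGR2GRAY),
                                           cv2.CV_64F, ksize=5)) for im in frames])
best = sharpness.argmax(axis=0)                              # sharpest frame index per pixel
h, w = best.shape
stacked = np.stack(frames)[best, np.arange(h)[:, None], np.arange(w)[None, :]]
cv2.imwrite("stacked.jpg", stacked)
```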
  18. Sorry, I should have provided the link to Stack-Shot: http://www.cognisys-inc.com/stackshot/stackshot.php The advantage of moving the camera as opposed to changing the focal distance on the lens is that you avoid the effects of "focus breathing", which is particularly acute in the macro realm. In other words, a change of focal distance creates an effective change of focal length, thus changing the composition of the shot and making pixel-to-pixel stacking difficult, if not impossible.
  19. This feature was described back in the 2001 paper on PTM. It is distinct from the use of PTM to represent light directions as we commonly do. This is a way to give a compact, parametric representation of multiple images at different focal distances, and to interpolate focal distances between the constituent images. It seems to me straightforward to implement and may be available through an undocumented command-line option for the PTMFitter. The refocussing example Tom posted on HPLabs crashes RTIViewer, btw. Today, since computing power is so much greater, Helicon Focus + Stack-shot is going to do what you want quite nicely.
  20. You've probably seen this already, but the Raytrix R29 seems to be the best commercial light-field camera available. They have CUDA-driven software for reconstruction: http://www.raytrix.de/index.php/Optical_inspection.html I'm going to try to get over to Germany in a few months to try it with a test object. I could certainly imagine applications for it in the underwater environment.
  21. It seems to me you'll definitely want to have a very specific goal in mind to go to the time and expense of attempting RTI underwater. Besides photogrammetry, which can get accuracies of <1mm underwater, there are a bunch of other remote sensing tools that are either recently on the market or about to come onto it. To name a few: BlueView BV5000 (http://www.blueview.com/Bv-5000.html) 2G Robotics ULS-500 (http://www.2grobotics.com/products/underwater-laser-scanner-uls-500/) 3D at Depth DP1 underwater LiDAR (http://www.3datdepth.com/) Enea REVUE (http://www.enea.it/it/produzione-scientifica/energia-ambiente-e-innovazione-1/anno-2012/knowledge-diagnostics-and-preservation-of-cultural-heritage/terrestrial-and-subsea-3d-laser-scanners-for-cultural-heritage-applications) Currently there's a lot of buzz about the DP1 in the offshore industry, while NOAA seems very keen on the BlueView. Each has its advantages. You've probably already encountered a few of these. Taylor, would a light-field camera give you surface normals? I've looked at the Raytrix systems for an industrial metrology project but it seems all you get is pure 3D information, much of which is, at present, of an inferior quality to what can be obtained by lasers or structured light.
  22. Sigmund, For what you want to do CloudCompare is the way to go. It has very good tools for creating depth maps that can reveal very small features. We have a few examples here: http://wadihafirsurvey.info/photogrammetry.html Meshlab's depth-mapping features just aren't as good. CloudCompare can take very high resolution snapshots. If you can send me some sample data I'd be happy to show you some of the processing techniques we use. I think we need to have some discussion of Structure from Motion vs. Photogrammetry. While the difference is, to some extent, terminological, there are some fundamental differences between the two.
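For a sense of what the depth-mapping step does, a minimal sketch: subtract a heavily smoothed copy of a rasterized depth image from the original, so the broad shape of the surface drops out and only the fine relief remains. The depth raster and the sigma value here are placeholders; in practice we work on data exported from CloudCompare.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

depth = np.load("panel_depth.npy")                   # hypothetical 2D depth raster
relief = depth - gaussian_filter(depth, sigma=25)    # local relief = high-pass of the depth
# Stretch to 8-bit for inspection, clipping at +/- 2 standard deviations.
lo, hi = -2 * relief.std(), 2 * relief.std()
img = (np.clip((relief - lo) / (hi - lo), 0, 1) * 255).astype(np.uint8)
```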
  23. Sigmund, For post-processing photogrammetric data-sets many people use Meshlab. At the moment, however, our group uses almost exclusively CloudCompare (http://www.danielgm.net/cc/). While it doesn't have as many filters as Meshlab, it is much more stable with large data-sets. It also mimics the functionality of the best commercial 3D packages like PolyWorks. What is your specific application? In general, using a colourized point-cloud will produce better images than layering many OBJ files with separate photographic textures. If you save your data-set as an RGB PTS file it becomes rather easy to manipulate colour (you could even do this in MS Excel). George
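As an example of how easy the colour manipulation becomes once the cloud is a plain ASCII table, a minimal sketch (the file name and brightness factor are arbitrary, and the column order X Y Z R G B is assumed):

```python
import numpy as np

pts = np.loadtxt("model.pts", skiprows=1)            # skip the point-count header line
pts[:, 3:6] = np.clip(pts[:, 3:6] * 1.2, 0, 255)     # simple 20% brightness boost
np.savetxt("model_brighter.pts", pts, fmt="%.4f %.4f %.4f %d %d %d",
           header=str(len(pts)), comments="")
```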
  24. Interesting project, David. I have thought of doing RTI underwater but there are formidable technical challenges in deploying the highlight method underwater, not least stirring up sediment. I'd suggest that a fixed dome+camera system, possibly mounted on an ROV, would be ideal. The movement as the ROV hovers could be compensated for by post-processed alignment. Using photometric stereo rather than RTI to process the data-set may also be advantageous, as you'll need many fewer images than for PTM/HSH. Alternatively, I think the best way to do what you want is to shoot the photogrammetry (preferably using ADAMTech) and then create a "Virtual RTI" in Blender or another rendering package. The reason I suggest ADAMTech is that it will generate vastly more points than any other package... the calibration of the camera needs to be very precise to get good stereo alignment to generate points. One significant application we've seen for this kind of surface metrology underwater is understanding machining or woodworking marks on vessels (provided there isn't a huge amount of corrosion). I work extensively with the underwater archaeology unit at Parks Canada... they may be interested as well in some of the applications you have in mind.
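On the "Virtual RTI" idea: the renderer does the hard part, but the light geometry is easy to script. A minimal sketch that spreads virtual light directions over a hemisphere and writes them as a .lp light-position file for a PTM/HSH fitter; the number of lights, the spiral layout, and the file names are arbitrary choices on my part.

```python
import numpy as np

n = 48
az = np.linspace(0.0, 8.0 * np.pi, n)                        # azimuth spirals around the object
el = np.linspace(np.radians(15), np.radians(80), n)          # elevation climbs from 15 to 80 degrees
lx, ly, lz = np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)

with open("virtual_rti.lp", "w") as f:
    f.write("%d\n" % n)                                      # .lp files start with the image count
    for i in range(n):
        f.write("render_%02d.png %.6f %.6f %.6f\n" % (i, lx[i], ly[i], lz[i]))
```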
  25. I'd reiterate Klaus' word of caution. UV-B and UV-C sources are very dangerous. Generally for filtering UV sources like quartz and xenon bulbs for reflected UV or fluorescence you'd want Schott UG-11: http://www.schott.com/advanced_optics/english/download/schott_uv_bandpass_ug11_2008_eng.pdf or (what we use) the Schott UG-1 (much cheaper): http://www.schott.com/advanced_optics/english/download/schott_uv_bandpass_ug1_2008_eng.pdf It won't get you the specific wavelengths you want but it's quite effective. Schott has some high-pass filters in their KV series but I haven't used them: http://www.schott.com/advanced_optics/english/download/schott_interference_filter_catalogue_july_2008_en.pdf As for focusing in UV, if you get the Coastal Optics 60mm you won't regret it. Just focus in visible light, drop in your UV filter and you get nice sharp images. You'll never look back.