
GeorgeBevan

Members
  • Content count: 95
  • Days Won: 19

Everything posted by GeorgeBevan

  1. The Phantom does permit manual focus (it can be locked at infinity). It's questionable how stable that setting is between flights in terms of calibration (I am doing experiments with it), but it definitely is possible. I hope that helps!
  2. Hi Brian, I can't speak to the specifics of the CHI workflow, but I can say more generally that there are now relatively inexpensive options in the UAV market that can achieve quite nice results. In particular, the DJI Phantom 4 Professional has a 20mp camera with a 24mm (35mm-equivalent) lens that has relatively little complex distortion and produces quite high-quality images, at least for a small sensor. This fixed lens can achieve good base-to-distance ratios if proper planning is observed -- it sounds as if you've already learned this with CHI -- and pretty good image accuracy, in my experience. The Professional version of the Phantom can shoot in RAW and has a mechanical shutter, so you won't get the rolling-shutter effects commonly seen with GoPros and other lower-quality miniature cameras. The key is to find good flight-planning software. Map Pilot works pretty well (https://www.dronesmadeeasy.com/), as does an Australian-developed desktop program that connects to the popular Litchi app (http://www.djiflightplanner.com/). There is also the excellent UGCS flight-planning package (https://www.ugcs.com/en/page/photogrammetry-tool-for-land-surveying), but this requires you to use an Android tablet with the UAV; in my experience iOS is generally more reliable with DJI systems. As I understand it, the CHI workflow strongly encourages the use of cross-strips to increase the overall robustness of the block triangulation. I think UGCS can do this, but with the other apps you'll need to plan a second flight with cross-strips. Fortunately this isn't terribly hard. One more point I would make is that ground control is really essential for a high-quality project in archaeology, if what you want to do is map features in a site co-ordinate system; the onboard GPS of the UAV will not provide a good solution. Hope that helps.
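A rough way to sanity-check the base-to-distance ratio during flight planning is to work it out from the flying height and the planned forward overlap. Below is a minimal sketch; the sensor and lens numbers, the 40 m height and the 80% overlap are assumed example inputs, not values from any particular mission plan:

```python
# Sketch: estimate the forward photo spacing (the stereo "base") and the
# base-to-height ratio from flying height and planned forward overlap.
# Sensor/lens numbers below are assumed example values; check your own camera.

def base_to_height(flying_height_m, forward_overlap, sensor_height_mm, focal_length_mm):
    """Return (ground footprint along track, photo spacing, base:height ratio)."""
    # Ground footprint of one frame in the flight direction.
    footprint_m = flying_height_m * sensor_height_mm / focal_length_mm
    # Distance flown between exposures for the requested overlap.
    base_m = footprint_m * (1.0 - forward_overlap)
    return footprint_m, base_m, base_m / flying_height_m

# Example: 8.8 mm focal length, 8.8 mm sensor dimension along track (assumed),
# flying at 40 m with 80% forward overlap.
footprint, base, ratio = base_to_height(40.0, 0.80, 8.8, 8.8)
print(f"footprint {footprint:.1f} m, base {base:.1f} m, base:height {ratio:.2f}")
```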
  3. GeorgeBevan

    AIC PhD Targets

    I just heard back from Robin Myers that they are going to develop a new target that is less expensive to manufacture in the new year.
  4. GeorgeBevan

    AIC PhD Targets

    Does anyone have suggestions for alternatives to the AIC PhD Targets available through Robin Myers Imaging? I just emailed this week to inquire about purchasing another set and they said that it was uncertain whether they would put them into production. In the past I've also used MacBeth charts, as well as the X-Rite Passport. Suggestions would be greatly appreciated!
  5. Ingenious! Thanks for making the plans available.
  6. GeorgeBevan

    Photogrammetry with 3d camera?

    The issue you're going to have with tools like the Kula is that the short stereo-base, similar to the distance between our eyes, is designed for stereo photography and video, not photogrammetry. Depth accuracy in photogrammetry is dependent on the ratio of the distance to the object and the distance between the cameras ("base"). The base is so short with the Kula that no additional accuracy would be gained, IMHO. You're right that there are a number of photogrammetry systems that do use stereo cameras, or even rigs with dozens of cameras, but this is primarily for instantaneous capture of moving subjects, like bodies and faces. I should say also that camera calibration with the types of split images produced by the Kula would be rather difficult.
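To make the base-to-distance point concrete, the usual normal-case stereo estimate puts depth precision in proportion to Z^2 / (c * B). A minimal sketch, where the focal length in pixels and the image measurement precision are assumed example values, and the 65 mm base stands in for an eye-like separation:

```python
# Sketch: normal-case stereo depth precision, sigma_Z ~ (Z^2 / (c * B)) * sigma_px,
# where Z = distance to the object, B = stereo base, c = focal length in pixels,
# and sigma_px = image measurement precision. Example numbers are assumptions.

def depth_precision_mm(distance_m, base_m, focal_px, sigma_px=0.5):
    return (distance_m ** 2) / (focal_px * base_m) * sigma_px * 1000.0

focal_px = 4000.0          # assumed focal length in pixels
for base in (0.065, 0.5):  # eye-like base vs. a longer photogrammetric base
    sigma = depth_precision_mm(2.0, base, focal_px)
    print(f"base {base * 100:.0f} cm -> sigma_Z ~ {sigma:.1f} mm at 2 m")
```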
  7. There are some powerful tools in the iWitness package for working with scanned photos taken with unknown cameras: http://www.iwitnessphoto.com/solutions/foom.html Note that to get the sort of results you want, you'll need to find features in the photographs that have not changed over the past 50+ years and to take (ideally) accurate GNSS measurements at those positions. With that information it is possible to establish the focal length of the lens. If there is sufficient stereo-base between the photos you have, it may be possible to calculate the position of the tent in the foreground. From the three images you attached, though, I wouldn't have thought this process possible. There is another piece of software, sv3DVision, that may also be worth looking at, if you can find it (I can't). There are several publications that use it to derive 3D measurements from single historic photos. Here's a newspaper piece on it: http://www.economist.com/node/15595689 No doubt there are others on the forum who have experience with this kind of historic image interpretation...
  8. GeorgeBevan

    The Raw and the Cooked

    My experience is similar to Taylor's. Even with carefully pre-calibrated cameras (0.2 pixel accuracy) and proper camera networks, this same rippling appears on horizontal surfaces. The problem can actually be worse in my experience when classical camera networks are used with longer baselines. I would beg to differ with Tom on the origin of this problem. Clearly these ripples are an interference pattern generated during dense matching by a mismatch of sampling frequencies between the images. Mis-estimation of calibration parameters would result in global deviations, like curving of the entire model if radial distortion were not properly solved. This is not a "bug" but an intrinsic limitation of Semi-Global Matching, and one that Heiko Hirschmüller observed after the development of SGM. Other implementations of SGM exhibit the same problem. What is especially troubling about it is that this quantizing effect, as Tom notes, is still visible on so-called "Ultra High" matching. The quantizing in this case is happening at the level of the pixel. SGM cannot on its own perform sub-pixel matching but requires a biquadratic interpolation function, or similar, to accomplish sub-pixel matching. I don't know if anyone outside of Agisoft knows what they're using. For most users the problem does not disappear when calibration is improved; it only appears to go away when high-overlap, short-baseline image-sets are used and then meshed. The Poisson algorithm effectively filters out the high-frequency rippling and applies, in effect, local smoothing. This gives a pleasing surface, but one that also loses information at high spatial frequencies. The indication of this is that when "Ultra High" point clouds are meshed they invariably have many fewer vertices than the original points. This is true of most meshing operations, but the cleaner the original points the more vertices will be retained in the mesh. As for multi-view 3D saving the day, as far as I know Photoscan uses multi-view only for the sparse alignment on the points generated by SIFT, but processes by pairs when it comes to dense matching with SGM. This may well have changed since the algorithms used by Photoscan were last made public in a 2011 forum post.
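For readers unfamiliar with what "sub-pixel matching on top of SGM" means in practice, the generic trick is to fit a parabola through the matching cost at the best integer disparity and its two neighbours. A minimal sketch of that idea (not Agisoft's actual implementation, which has not been published):

```python
# Sketch: sub-pixel disparity refinement by fitting a parabola through the
# matching cost at the winning integer disparity d and its neighbours d-1, d+1.
# This is the textbook approach; Photoscan's actual interpolation is unknown.

def subpixel_disparity(d, cost_prev, cost_best, cost_next):
    denom = cost_prev - 2.0 * cost_best + cost_next
    if denom <= 0:          # degenerate (flat or inverted) cost curve: keep the integer value
        return float(d)
    offset = 0.5 * (cost_prev - cost_next) / denom
    return d + offset       # offset lies in (-0.5, +0.5)

# Example: costs 12.0, 9.0, 10.5 around d = 37 give a refined disparity of about 37.17
print(subpixel_disparity(37, 12.0, 9.0, 10.5))
```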
  9. GeorgeBevan

    The Raw and the Cooked

    I have some 60 megapixel digital images taken with a Trimble Aerial Camera. There's EO data, as well as GCPs, along with cross-strips for auto-calibration. Most of my other TIFF projects are scanned film projects.
  10. GeorgeBevan

    The Raw and the Cooked

    Just to clarify, I was only addressing the issue of whether an increase in radiometric depth produces an increase in photogrammetric accuracy, a question posed by IainS at the beginning of the thread, if I understand it properly. At present, the answer seems to be no. It just results in a big increase in compute-time. Perhaps future algorithms will make use of greater dynamic range and that could be an argument for keeping all the RAW/DNG files. I don't know. The really important issue, it seems to me, is that the dense-cloud processing stage gives you the option to resample the images. I find that most people are processing at "medium" in Photoscan without knowing what the implications of this are for accuracy. This setting means that if they are using a fancy 36-megapixel camera to capture their images in RAW, they are only using 9-megapixel images to actually do the stereo-matching. Doing the initial alignment at the highest quality, i.e., with full-resolution images, can give good exterior orientations, but when it comes to building a surface this good work can be lost by increasing the ground-pixel size during the dense match.
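To make the resolution penalty concrete, here is a minimal sketch of how a per-side downscale translates into the pixel count actually used for matching; which factor corresponds to which named quality setting should be confirmed against the documentation for your own software version:

```python
# Sketch: relationship between a per-side image downscale factor and the pixel
# count actually used for dense matching. Which factor each named quality
# setting applies is an assumption to verify for your own software version.

def effective_megapixels(native_mp, downscale_per_side):
    return native_mp / (downscale_per_side ** 2)

for factor in (1, 2, 4):   # full, half and quarter resolution per side
    print(f"36 MP images downscaled {factor}x per side -> ~{effective_megapixels(36, factor):.2f} MP")
```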
  11. GeorgeBevan

    The Raw and the Cooked

    I would like to add a couple of points to what Carla said above. First, there are definite drawbacks in terms of computing time from processing uncompressed, full bit-depth images, at least in theory (16 bits, although as Carla says the sensor only reads out 14 bits). It is unclear that full bit-depth images produce more accurate stereo matches, all things being equal, e.g., no sharpening or other interpolations applied after capture. In Semi-Global Matching, the cost function, Mutual Information (MI), used to calculate matches along the eight radial paths emanating from each pixel, degrades in efficiency as the bit-depth is increased, according to Hirschmueller, the developer of SGM: "Furthermore, MI does not scale well with increasing radiometric depth (i.e. 12 or 16 bit instead of 8 bit quantization per pixel), since the joint histogram becomes too sparse in this case." (http://www.ifp.uni-stuttgart.de/publications/phowo11/180Hirschmueller.pdf). It is not known exactly how Agisoft Photoscan implements SGM -- they clearly have figured out some very clever ways of speeding up the matching process -- so this observation may not be valid. I would be interested to see a speed and accuracy comparison on the same data-set with 16-bit uncompressed TIFFs vs. JPEGs (Taylor, are you up for this?). The second point is that there is another level of interpolation that most users of Photoscan choose to ignore. Tom Noble, Neffra Matthews and CHI teach, as I understand it, that the camera alignment should be done to the highest accuracy. In other words, the initial sparse match that allows the cameras to be resected from the matching points is performed on full-resolution images. In testing, full-resolution alignments in Photoscan are very good, even with low-quality images and poor camera networks. The problem comes during the dense matching stage. Most users don't have the time or computing power to generate point-clouds with full-resolution images (I seldom do except for critical image sets, even though I have relatively powerful workstations). Ground-pixel size is one of the most important determinants of photogrammetric accuracy. When the user selects "Medium" for the dense cloud, the images are resampled to 1/4th of their original size, thus leading to significant loss in accuracy. The software would more accurately characterize the options not as "ultra-high", "high" and "medium" but as "full resolution", "half resolution" and "quarter resolution". As soon as the images are resampled at any stage, there are serious artifacts that can be observed in the models due to quantizing effects. In particular, if you look closely and turn off texture you will notice a sort of rippling effect that radiates out across the model. This has been observed in several ISPRS publications (I'm thinking here mainly of Dall'Asta's work on SfM accuracy). It's something to really watch for. If you're using check-points (control points or scaling targets withheld from the bundle adjustment) the degradation in the accuracy of the model is quite clear. The third point I wanted to make on the RAW/DNG vs. JPEG debate, if I dare, is that much of the discussion assumes that the cost of storage is effectively zero. In real-world projects this is not the case. Although I do shoot in RAW when I'm working in changing light conditions -- this practice has saved my bacon on a number of occasions -- I seldom keep the RAW images once I've done an exposure adjustment.
The data is so enormous that I cannot afford to store it redundantly on multiple drives in multiple locations, especially if I have to upload the files through FTP. I would much rather have JPEGs stored on three separate drives in three locations than a single back-up of my RAWs/DNGs. Of course, the cost of storage is always coming down, but the amount of data generated by new cameras (50 megapixels) is going up at perhaps a higher rate. Just some random thoughts! I'm sure others will disagree...
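On the check-point approach mentioned above, the usual test is simply the RMSE between the photogrammetric coordinates of the withheld points and their surveyed coordinates. A minimal sketch, with made-up coordinates purely for illustration:

```python
# Sketch: 3D RMSE of check-points (targets withheld from the bundle adjustment)
# against their surveyed coordinates. The coordinates below are made up.
import math

def checkpoint_rmse(model_pts, survey_pts):
    # Mean of squared 3D distances between model and survey positions, then root.
    sq = [sum((m - s) ** 2 for m, s in zip(mp, sp)) for mp, sp in zip(model_pts, survey_pts)]
    return math.sqrt(sum(sq) / len(sq))

model  = [(10.012, 5.003, 1.498), (20.990, 4.989, 1.505)]
survey = [(10.000, 5.000, 1.500), (21.000, 5.000, 1.500)]
print(f"check-point RMSE: {checkpoint_rmse(model, survey) * 1000:.1f} mm")
```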
  12. Usually the ASCII format of a point file runs something like this: X Y Z Nx Ny Nz R G B, where X,Y,Z is the location of the point in 3D space, R,G,B are the colours (each 0-255), and Nx,Ny,Nz (each between -1.0 and 1.0) are the components of the surface normal vector at that point.
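A minimal parser for a file in that layout might look like this, assuming whitespace-separated values and no header line (adjust for whatever your exporter actually writes):

```python
# Sketch: read an ASCII point cloud laid out as "X Y Z Nx Ny Nz R G B" per line.
# Assumes whitespace-separated values and no header; adjust for your exporter.

def read_points(path):
    points = []
    with open(path) as f:
        for line in f:
            vals = line.split()
            if len(vals) != 9:
                continue  # skip blank or malformed lines
            x, y, z, nx, ny, nz = map(float, vals[:6])
            r, g, b = (int(float(v)) for v in vals[6:])
            points.append({"xyz": (x, y, z), "normal": (nx, ny, nz), "rgb": (r, g, b)})
    return points
```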
  13. GeorgeBevan

    Photogrammetry and RTI

    You can simply export either point clouds or surfaces from Photoscan with surface normals to another software package, like Meshlab. You must have the normals on the exported 3D data to change the light position. If the data is exported with normals, it is then trivial to change the light angle or the shader model (the Phong model in Meshlab corresponds to the Specular Enhancement filter in RTIViewer). Mapping RTI normals onto 3D models generated by other processes, like photogrammetry, is rather more challenging and is discussed elsewhere in the forum.
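For anyone curious what relighting from normals amounts to numerically, the per-vertex idea behind a Phong-style shader (diffuse plus specular) can be sketched in a few lines. This is a simplified illustration only, not Meshlab's or RTIViewer's actual shader code:

```python
# Sketch: per-vertex relighting from a surface normal, as a simplified stand-in
# for a Phong-style shader (diffuse + specular); not Meshlab's or RTIViewer's code.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def shade(normal, light_dir, view_dir=(0, 0, 1), ks=0.4, shininess=30):
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    diff = max(0.0, sum(a * b for a, b in zip(n, l)))          # Lambertian term
    # Reflect the light direction about the normal, then compare with the view
    # direction to get the specular lobe.
    r = tuple(2.0 * diff * a - b for a, b in zip(n, l))
    spec = max(0.0, sum(a * b for a, b in zip(r, v))) ** shininess
    return min(1.0, diff + ks * spec)

# Moving light_dir around re-lights the same normal without re-photographing.
print(shade((0.1, 0.2, 0.97), light_dir=(1, 1, 1)))
```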
  14. GeorgeBevan

    Reflected-UV photography

    Hi Sian, We've done lots of reflected UV photography and it is a very powerful technique for observing certain types of surface features. I assume you're talking about the UVA range, not UVB and UVC. There have been some experiments with film in the deeper UV range, but it is very tricky and dangerous light to work with, particularly UVC. Recently I worked on a project using reflected UV to identify the degradation mechanisms of the patterns on the wings of moths in natural history collections. It remains, as you indicated, a much less explored technique than IR, but there are some practical reasons for that.
    1) You need lots and lots of UVA light across the 300-400nm range. Be prepared for long exposure times. I would recommend using strobes converted to output UV. Remember that most commercial strobes have coatings intended to reduce UV output, so you'll have to do some research.
    2) Normal photographic glass absorbs UVA to a high degree. If you're serious about reflected UV, consider purchasing the Coastal Optics 60mm UV/IR lens: http://www.jenoptik-inc.com/coastalopt-standard-lenses/uv-vis-nir-60mm-slr-lens-mainmenu-155/80-uv-vis-ir-60-mm-apo-macro.html It is VERY expensive but worth every penny, particularly because it is focus-corrected across the UV-VIS-IR range. That means you can focus through the viewfinder in visible light and then drop the filter in place and get the same tack-sharp focus. Non-focus-corrected lenses require an adjustment when you move out of the visible range to keep focus.
    3) You need a really good filter. The Baader "Venus" filter is currently the best UVA-pass filter on the market but is quite expensive in itself: http://www.company7.com/baader/options/u-filter_bpu2.html You'll also need a pile of adapters to put this relatively small astronomical filter on a DSLR lens.
    4) You'll need to get your DSLR converted to remove the Internal Cut-off Filter that is intended to remove extraneous UV and IR light so as to improve exposure and metering in the visible range. A monochrome sensor with no Bayer filter would be ideal, since the Bayer filter further cuts down UV transmission. I've been meaning to experiment with Foveon sensors in this application to see if they can get us better sensitivity.
    Once you've mastered reflected UV for single images, you could consider using it for RTI and photogrammetry. I hope that helps get you started!
  15. Very interesting! Can you show us a version of this data without texture, either a mesh or a point-cloud?
  16. GeorgeBevan

    Ground Control Points for aerial photographs - question

    Thanks! I just worked through this data-set. I was getting an overall accuracy on the GPS of about 17.8cm with about 40 GCPs observed. I would be very interested in doing some Round Robin Testing with this dataset with you and others on the forum. The GCPs are rather difficult to pick out, but I did manage to centroid on quite a few of them. Which GCPs did you pick to orient the model? Could you share a mesh of the model? I'm going to generate my georeferenced mesh and will share it with a public Dropbox link. It's a good set to work with because of the good pick-up on the survey (46 points).
  17. GeorgeBevan

    Ground Control Points for aerial photographs - question

    Very interesting! Could you share the link to the aerial data? I'd be interested to see what sort of error numbers can be obtained with your technique.
  18. GeorgeBevan

    Ground Control Points for aerial photographs - question

    Quick question, Taylor. When you say GCPs, are you referring to markers used for scaling or to markers with known 3D co-ordinates? Centroiding is certainly a very handy way to digitize points. The accuracy, in theory, can be 0.01 pixels or lower.
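Centroiding in this sense is just an intensity-weighted mean of the pixels covering the target; the weighting is what gives the sub-pixel precision. A minimal sketch on a small grey-level window (the values are made up):

```python
# Sketch: intensity-weighted centroid of a small image window around a target.
# Sub-pixel precision comes from the weighting; the window values are made up.

def weighted_centroid(window):
    total = sum(sum(row) for row in window)
    cx = sum(x * val for row in window for x, val in enumerate(row)) / total
    cy = sum(y * val for y, row in enumerate(window) for val in row) / total
    return cx, cy

window = [
    [10,  20, 12],
    [25, 240, 30],
    [11,  22, 14],
]
print(weighted_centroid(window))   # roughly the centre of the bright blob
```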
  19. GeorgeBevan

    Manual turntable options

    We're currently using a Sherline CNC rotary table, more or less in this configuration: http://www.sherline.com/3700cncpg.htm A stepper motor and USB controller have been added. We've rewritten some control software to have it work with a PC for automated capture and shutter release on the camera. Other than that we use a pretty standard lazy-susan in the field. Looking forward to hearing what others are using!
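For anyone building a similar setup, the control loop is conceptually very simple: step the table, let it settle, fire the camera, repeat. A rough sketch follows; the serial command string and the fire_shutter() call are placeholders for whatever your controller and tethering software actually accept, not the commands our Sherline setup uses:

```python
# Sketch of an automated turntable capture loop. The serial command string and
# the shutter trigger are placeholders; substitute whatever your controller and
# camera-tethering software actually accept.
import time
import serial  # pyserial, assumed installed

def fire_shutter():
    # Placeholder: call out to gphoto2, a vendor SDK, or a wired remote here.
    pass

def capture_turn(port="/dev/ttyUSB0", shots=36, settle_s=2.0):
    step_deg = 360.0 / shots
    with serial.Serial(port, 9600, timeout=1) as ctrl:
        for _ in range(shots):
            ctrl.write(f"ROTATE {step_deg:.2f}\n".encode())  # placeholder command
            time.sleep(settle_s)                             # let vibration die down
            fire_shutter()
```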
  20. GeorgeBevan

    Ground Control Points for aerial photographs - question

    Daniela, Generally you will want to use a minimum of 4 GCPs in an aerial project like this. While 3 GCPs can effectively scale and orient your model, there is no redundancy in case one of your measurements is incorrect, as sometimes happens, or is obscured in the photos. Ideally you would shoot in about 7 GCPs so that the software can actually improve on the accuracy of the survey (the inherent accuracy of photogrammetry, particularly in plan, will always be superior to that of a total station or even good GNSS receivers). These GCPs should be evenly and randomly distributed throughout the area you are photographing. Avoid putting the GCPs in straight lines ("co-linear"). Are you using a total station or GNSS? If the former, will you be using a reflector pole or shooting the targets reflectorlessly? Without knowing more about the camera and the software it is difficult to say how large the targets should be. You'll need to calculate the ground sample size for your project, a value that is dependent on the camera, sensor size, focal length of the lens and flying height. Many software packages will detect targets automatically with the right sort of targets. I couldn't determine what sort of targets Aspect3D uses from the online documentation. Perhaps the fastest solution would be to purchase targets made for a total station that can be easily picked out from the air. There are also some LiDAR targets printed on rigid substrates that would work. Others have used targets made for archery and secured them to the ground or taped them to a heavier material so they don't blow away. I'd look for something at least 15cm in size given your flying height, but I can't be 100% sure. Hope that helps.
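If it helps, the ground sample size calculation mentioned above is a one-liner. In the sketch below the camera numbers, the 50 m flying height, and the "roughly ten pixels per target" rule of thumb are assumed examples, not Aspect3D requirements:

```python
# Sketch: ground sample distance (GSD) and a rough minimum target size.
# Camera numbers, flying height and the pixels-per-target rule are assumed examples.

def gsd_cm(flying_height_m, focal_length_mm, sensor_width_mm, image_width_px):
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return flying_height_m * pixel_pitch_mm / focal_length_mm * 100.0  # cm per pixel

gsd = gsd_cm(flying_height_m=50.0, focal_length_mm=8.8,
             sensor_width_mm=13.2, image_width_px=5472)
print(f"GSD ~ {gsd:.2f} cm/pixel; a target spanning ~10 pixels would be "
      f"~{gsd * 10:.0f} cm across")
```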
  21. GeorgeBevan

    Color mapping to normals direction

    Just a quick question related to this. In principle the x and y components should be between -1 and 1 and mapped onto values between 0 and 255. Since the z component must be between 0 and 1 (the z component is produced by a square root in PTM/RTI and so cannot be negative), is it mapped 0-1 to 0-255, or is the mapping only effectively to 0-128? Having looked at some of the histograms of the z-component, it seems to me that it has less effective bit-depth than x and y. I could be totally off on this...
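For concreteness, the two candidate mappings being contrasted are the usual signed one, (n + 1) / 2 * 255 applied to all three components, versus mapping z directly from 0-1 to 0-255. A small sketch of both, purely to illustrate the difference, not a claim about what RTIViewer or any particular exporter actually does:

```python
# Sketch: two ways a unit normal (nx, ny, nz) might be packed into 8-bit RGB.
# This only illustrates the difference raised above; it is not a statement of
# what RTIViewer or any particular normal-map exporter actually does.

def encode_signed(n):
    # Map each component from [-1, 1] to [0, 255]; a non-negative z then only
    # spans half of the available 8-bit range.
    return tuple(round((c + 1.0) / 2.0 * 255) for c in n)

def encode_z_unsigned(n):
    # Map x, y from [-1, 1] but z from [0, 1], so z uses the full 0-255 range.
    x, y, z = n
    return (round((x + 1.0) / 2.0 * 255), round((y + 1.0) / 2.0 * 255), round(z * 255))

n = (0.30, -0.20, 0.93)
print(encode_signed(n), encode_z_unsigned(n))
```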
  22. GeorgeBevan

    Photoscan Pro vs Photomodeler Scanner

    It seems to me that the terminological waters are very muddy here. I'm not sure I would contrast SfM directly with stereo photogrammetry (it is not uncommon in publications to hear of "SfM photogrammetry"). My understanding is that SfM is an outgrowth of traditional photogrammetry developed by the machine vision community to provide quick 3D data. The emphasis in SfM was on speed, and not necessarily extremely high accuracy. For a while it could be meaningfully contrasted with the sort of analytical stereo-photogrammetry practiced mainly in aerial mapping applications for decades. At that point SfM was a huge innovation because it didn't require detailed and expensive-to-acquire information about camera position and pose, or special pre-calibrated metric cameras. Since then the innovations of SfM have been rolled back into the mainstream. Today most stereo photogrammetry packages, at least from my limited experience, allow for the simultaneous calculation of the interior and exterior orientation. Indeed, Photoscan itself permits the separation of the two steps, if the user desires (a separate calibration process can sometimes be advantageous when large numbers of images are being processed, or when the object does not fill the frame and provide a good look at all parts of the lens). Photoscan also uses all the same camera calibration parameters from the Brown/Fryer thick-lens model. I'm told even venerable "traditional" stereo-photogrammetry systems like Geodetic's VStars allow for autocalibration in the field provided there is a good distribution of coded targets. Though I'm not 100% sure, I gather the Photomodeler "SmartPoints" technology can allow for SfM-like autocalibration in the field, provided the scene has enough texture. I don't know whether Photomodeler is as robust as Photoscan in dealing with poorly shot projects or problematic lenses. IMHO, the innovation of Photoscan, apart from its high level of automation, lies mainly in its use of multi-view photogrammetry and Semi-Global Matching. Multi-view is used at the initial alignment stage with the sparse cloud. Triangulating points using many different rays should, in principle, be more accurate than using only two rays in the stereo method. The next stage of the process in Photoscan, dense reconstruction, is a bit of a black box. It remains unclear to me from what I've seen published that multi-view gets used in dense reconstruction. It has been suggested by some photogrammetrists that Photoscan is doing this final reconstruction stage by stereo pairs and then merging the resulting data into a single cloud (it's clear from the data that the software is doing a lot of smoothing at this stage as well, to give relatively inexperienced operators pleasing final results). Semi-Global Matching was a major innovation in stereo matching that Heiko Hirschmuller published in 2005 (global matching has been proven to be an NP-complete problem and would take longer than the life of the universe to solve). It offers the possibility of better matching in areas where NCC/LSM would have problems, particularly on the outside parts of the scene. The downside of SGM is that it takes a lot longer than NCC/LSM. Agisoft have done a really impressive job in using GPU computing to improve the processing speed for SGM. I'd say from my own experience that SGM is also problematic in modelling sharp edges. There is a commonly observed "wavy" effect on edges and a tendency to over-match, particularly with the sky/background, a problem usually remedied by masking.
The paper I cited in an earlier post shows an example of how SGM can result in systematic error, rather than the sort of random error seen with NCC/LSM. I guess this is a question of how you like your error: systematic or random. This is just my two cents. I know others will have different views on the history here. These thoughts come out of my own ongoing struggle to clarify the terminology for myself and get a grip on the underlying technology.
  23. GeorgeBevan

    Photoscan Pro vs Photomodeler Scanner

    Another good recent paper looking at accuracy is this one: Remondino, F., Spera, M. G., Nocerino, E., Menna, F., and Nex, F. (2014). State of the art in high density image matching. The Photogrammetric Record, 29(146), 144-166. Photomodeler is not one of the packages compared, but Photoscan is. One thing you should look at closely with Photomodeler is how their SmartPoints technology has developed over the past few years. This is what they call the automated generation of relative matching points between images to provide a solution for the interior and exterior orientation of the cameras. I had heard that SmartPoints weren't very reliable, but my information may be out of date. Before SmartPoints, coded targets were needed in the scene to link the images and to perform calibration. This group used Photomodeler to produce a model of an entire cathedral: Martínez, S., Ortiz, J., Gil, M. L., and Rego, M. T. (2013). Recording Complex Structures Using Close Range Photogrammetry: The Cathedral of Santiago De Compostela. The Photogrammetric Record, 28(144), 375-395. They do note at the end that the project might have benefited from the new generation of multi-view software (like Photoscan). They used check-points to test the accuracy of the model against a total station and usually got an average error of about 1.3mm. The first paper shows that sub-mm accuracy is attainable at close range with most of the new packages they tested.
  24. GeorgeBevan

    Photoscan Pro vs Photomodeler Scanner

    I've never worked with Photomodeler personally, but it has been on the market for quite some time. A good question to ask when comparing photogrammetry packages, particularly with your requirements for high accuracy, is which dense matching algorithm the software uses. Photoscan appears to use the comparatively recent Semi-Global Matching algorithm, while I think Photomodeler uses the older Normalized Cross-Correlation/Least-Squares Matching. Some of the advantages/disadvantages of the two algorithms are demonstrated empirically in this ISPRS paper: http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-5/187/2014/isprsarchives-XL-5-187-2014.pdf There are a few other recent papers that do accuracy testing on a variety of algorithms/software packages, usually against a "ground truth" acquired by some other scanning technique. Generally, the quality of the photography in the field and the parameters chosen for the dense matching can have such a huge impact on the eventual quality of the model that true "apples to apples" comparisons are quite hard to make.
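For reference, normalized cross-correlation between two image windows, the basic similarity measure behind the older NCC/LSM matchers mentioned above, is short enough to sketch directly. This is a textbook illustration, not Photomodeler's implementation:

```python
# Sketch: normalized cross-correlation (NCC) between two equally sized image
# windows, the textbook similarity measure behind NCC/LSM matchers.
import math

def ncc(a, b):
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    da = [v - mean_a for v in a]
    db = [v - mean_b for v in b]
    denom = math.sqrt(sum(v * v for v in da) * sum(v * v for v in db))
    return sum(x * y for x, y in zip(da, db)) / denom if denom else 0.0

left  = [12, 45, 78, 44, 13, 40, 80, 42, 11]   # 3x3 windows, flattened (made-up values)
right = [14, 47, 75, 46, 15, 38, 82, 40, 12]
print(f"NCC = {ncc(left, right):.3f}")          # close to 1.0 for a good match
```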
  25. GeorgeBevan

    Meshlab

    Just an FYI for everyone working with point clouds and meshes....a new version of Meshlab was released yesterday (2 April 2014) that promises to fix a lot of the bugs that had cropped up since the last release in 2012.