
GeorgeBevan

Members

  • Content count: 95
  • Joined
  • Last visited
  • Days Won: 19

GeorgeBevan last won the day on July 18 2017 and had the most liked content!

Community Reputation: 42 Excellent

About GeorgeBevan

  • Rank: Advanced Member
  1. The Phantom does permit manual focus (it can be locked at infinity). It's questionable how stable that setting is between flights in terms of calibration (I am doing experiments with it), but it definitely is possible. I hope that helps!
  2. Hi Brian, I can't speak to the specifics of the CHI workflow, but I can say more generally that there are now relatively inexpensive options in the UAV market that can achieve quite nice results. In particular, the DJI Phantom 4 Professional has a 20mp camera with a 24mm (35mm equivalent) lens that has relatively little complex distortion and produces quite high quality images, at least for a small sensor. With this fixed lens you can achieve good base-to-distance ratios if proper flight planning is observed -- it sounds as if you've already learned this with CHI -- and pretty good image accuracy, in my experience. The Professional version of the Phantom can shoot in RAW and has a mechanical shutter, so you won't get the rolling-shutter effects commonly seen with GoPros and other lower-quality miniature cameras. The key is to find good flight-planning software. MapPilot works pretty well (https://www.dronesmadeeasy.com/), as will an Australian-designed desktop package that connects to the popular Litchi app (http://www.djiflightplanner.com/). There is also the excellent UGCS flight-planning package (https://www.ugcs.com/en/page/photogrammetry-tool-for-land-surveying), but this requires you to use an Android tablet with the UAV; in my experience iOS is generally more reliable with DJI systems. As I understand it, the CHI workflow strongly encourages the use of cross-strips to increase the overall robustness of the block triangulation. I think UGCS can do this, but with the other apps you'll need to plan a second flight with cross-strips. Fortunately this isn't terribly hard. One more point I would make is that ground control is really essential for a high-quality project in archaeology, if what you want to do is map features in a site co-ordinate system. The onboard GPS of the UAV will not provide a good solution. Hope that helps.
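    As a rough sketch of the flight-planning arithmetic behind the base-to-distance and ground-resolution comments above: the sensor and lens figures below are approximate Phantom 4 Pro-class values, not exact specifications, and the flying height and overlap are purely illustrative.

```python
# Back-of-envelope flight-planning numbers for a small-sensor UAV camera.
# The sensor and lens values are approximate Phantom 4 Pro-class figures;
# substitute the exact specifications of your own system.

sensor_width_mm = 13.2      # 1" sensor, long side (approximate)
sensor_height_mm = 8.8      # 1" sensor, short side (approximate)
image_width_px = 5472
focal_length_mm = 8.8       # roughly a 24 mm full-frame equivalent

flying_height_m = 40.0      # height above ground level
forward_overlap = 0.80      # 80% overlap between successive exposures

# Ground footprint of one image along the flight direction (assuming the
# short side of the sensor points along-track; swap sides if yours differs).
footprint_along_track_m = sensor_height_mm / focal_length_mm * flying_height_m

# Stereo base between successive exposures and the base-to-height ratio
# that, together with image measurement precision, drives depth accuracy.
base_m = (1.0 - forward_overlap) * footprint_along_track_m
base_to_height = base_m / flying_height_m

# Ground sample distance (GSD) across-track, in centimetres per pixel.
gsd_cm_per_px = sensor_width_mm * flying_height_m / (focal_length_mm * image_width_px) * 100.0

print(f"footprint along track: {footprint_along_track_m:.1f} m")
print(f"stereo base: {base_m:.1f} m  (base-to-height {base_to_height:.2f})")
print(f"GSD: {gsd_cm_per_px:.2f} cm/px")
```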
  3. GeorgeBevan

    AIC PhD Targets

    I just heard back from Robin Myers that, in the new year, they are going to develop a new target that is less expensive to manufacture.
  4. GeorgeBevan

    AIC PhD Targets

    Does anyone have suggestions for alternatives to the AIC PhD Targets available through Robin Myers Imaging? I just emailed this week to inquire about purchasing another set and they said that it was uncertain whether they would put them into production. In the past I've also used MacBeth charts, as well as the X-Rite Passport. Suggestions would be greatly appreciated!
  5. Ingenious! Thanks for making the plans available.
  6. GeorgeBevan

    Photogrammetry with 3d camera?

  6. The issue you're going to have with tools like the Kula is that the short stereo base, similar to the distance between our eyes, is designed for stereo photography and video, not photogrammetry. Depth accuracy in photogrammetry depends on the ratio of the distance to the object to the distance between the camera positions (the "base"). The base is so short with the Kula that no additional accuracy would be gained, IMHO. You're right that there are a number of photogrammetry systems that do use stereo cameras, or even rigs with dozens of cameras, but this is primarily for instantaneous capture of moving subjects, like bodies and faces. I should say also that camera calibration with the types of split images produced by the Kula would be rather difficult.
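    To see the base-to-distance effect in numbers, here is a minimal sketch using the standard normal-case (parallel-axis) stereo relation; the focal length, distances and matching precision are illustrative assumptions only.

```python
# Rough depth precision for the normal (parallel-axis) stereo case:
#   sigma_Z ~= Z^2 / (f_px * B) * sigma_px
# where Z is the distance to the object, B the stereo base, f_px the focal
# length in pixels and sigma_px the image matching precision. All numbers
# below are illustrative assumptions.

def depth_precision_m(distance_m, base_m, focal_px, match_sigma_px=0.5):
    """Approximate 1-sigma depth uncertainty in metres."""
    return distance_m ** 2 / (focal_px * base_m) * match_sigma_px

focal_px = 4000.0      # assumed focal length in pixels
distance_m = 3.0       # metres to the subject

# An eye-like base (roughly what a beamsplitter rig gives you) versus a
# deliberately wider base between two separate exposures.
for base_m in (0.065, 0.5):
    sigma_z = depth_precision_m(distance_m, base_m, focal_px)
    print(f"base {base_m * 100:5.1f} cm -> ~{sigma_z * 1000:.1f} mm depth uncertainty")
```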
  7. There are some powerful tools in the iWitness package for working with scanned photos taken with unknown cameras: http://www.iwitnessphoto.com/solutions/foom.html Note that to get the sort of results you want, you'll need to find features in the photographs that have not changed over the past 50+ years and to take (ideally) accurate GNSS measurements at those positions. With that information it is possible to establish the focal length of the lens. If there is sufficient stereo base between the photos you have, it may be possible to calculate the position of the tent in the foreground. From the three images you attached, though, I wouldn't have thought this is possible. There is another piece of software, sv3DVision, that may also be worth looking at, if you can find it (I can't). There are several publications that use it to derive 3D measurements from single historic photos. Here's a newspaper piece on it: http://www.economist.com/node/15595689 No doubt there are others on the forum who have experience with this kind of historic image interpretation...
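    To make the geometry concrete, here is a crude pinhole approximation of the focal-length estimate. This is not the rigorous space resection a package like iWitness performs, and every number below is invented for illustration.

```python
# Crude pinhole approximation, not the rigorous space resection a package
# like iWitness performs: if two stable features a known distance apart lie
# at roughly the same, roughly known range from the camera, similar triangles
# give a ballpark focal length in pixels. Every value here is invented.

pixel_separation_px = 850.0     # separation of the two features in the scan
real_separation_m = 12.0        # distance between the features from GNSS
range_to_features_m = 180.0     # approximate camera-to-feature distance

focal_length_px = pixel_separation_px * range_to_features_m / real_separation_m
print(f"approximate focal length: {focal_length_px:.0f} px")

# If the physical width of the original negative is known, convert to mm:
scan_width_px = 6000.0
negative_width_mm = 60.0        # e.g. a 6 x 6 cm negative (assumed)
focal_length_mm = focal_length_px * negative_width_mm / scan_width_px
print(f"roughly {focal_length_mm:.0f} mm on a {negative_width_mm:.0f} mm-wide negative")
```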
  8. GeorgeBevan

    The Raw and the Cooked

    My experience is similar to Taylor's. Even with carefully pre-calibrated cameras (0.2 pixel accuracy) and proper camera networks, this same rippling appears on horizontal surfaces. The problem can actually be worse, in my experience, when classical camera networks are used with longer baselines. I would beg to differ with Tom on the origin of this problem. Clearly these ripples are an interference pattern generated during dense matching by a mismatch of sampling frequencies between the images. Mis-estimation of calibration parameters would result in global deviations, like curving of the entire model if radial distortion were not properly solved. This is not a "bug" but an intrinsic limitation of Semi-Global Matching, and one that Heiko Hirschmüller observed after the development of SGM. Other implementations of SGM exhibit the same problem. What is especially troubling about it is that this quantizing effect, as Tom notes, is still visible on so-called "Ultra High" matching. The quantizing in this case is happening at the level of the pixel. SGM cannot on its own perform sub-pixel matching but requires a biquadratic interpolation function, or similar, to accomplish it. I don't know if anyone outside of Agisoft knows what they're using. For most users the problem does not go away when calibration is improved; it only appears to go away when high-overlap, short-baseline image sets are used and then meshed. The Poisson algorithm effectively filters out the high-frequency rippling and applies, in effect, local smoothing. This gives a pleasing surface, but one that also loses information at high spatial frequencies. The indication of this is that when "Ultra High" point clouds are meshed they invariably have many fewer vertices than the original points. This is true of most meshing operations, but the cleaner the original points, the more vertices will be retained in the mesh. As for multi-view 3D saving the day, as far as I know Photoscan uses multi-view only for the sparse alignment on the points generated by SIFT, but processes by pairs when it comes to dense matching with SGM. This may well have changed since the algorithms used by Photoscan were last made public in a 2011 forum post.
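    For reference, this is the generic textbook form of the sub-pixel step mentioned above: a parabola fitted through the matching cost at the best integer disparity and its two neighbours. Whatever interpolation Photoscan actually uses is not public; this is only a sketch of the idea.

```python
# Generic parabola-fit sub-pixel refinement: given the matching cost at the
# best integer disparity d and at d-1 and d+1, place the disparity at the
# vertex of the parabola through those three samples. This is the textbook
# step only; Photoscan's actual interpolation function is not public.

def subpixel_disparity(d, cost_minus, cost_at, cost_plus):
    """Refine integer disparity d using costs at d-1, d and d+1."""
    denom = cost_minus - 2.0 * cost_at + cost_plus
    if denom <= 0.0:                 # flat or degenerate cost curve
        return float(d)
    offset = 0.5 * (cost_minus - cost_plus) / denom
    return d + max(-0.5, min(0.5, offset))   # offset never exceeds half a pixel

# Example: costs sampled at disparities 41, 42 and 43 (arbitrary units).
print(subpixel_disparity(42, 10.0, 4.0, 7.0))   # ~42.17
```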
  9. GeorgeBevan

    The Raw and the Cooked

    I have some 60-megapixel digital images taken with a Trimble Aerial Camera. There's exterior orientation (EO) data, as well as GCPs, along with cross-strips for auto-calibration. Most of my other TIFF projects are scanned film projects.
  10. GeorgeBevan

    The Raw and the Cooked

    Just to clarify, I was only addressing the issue of whether an increase in radiometric depth produces an increase in photogrammetric accuracy, a question posed by IainS at the beginning of the thread, if I understand it properly. At present, the answer seems to be no. It just results in a big increase in compute time. Perhaps future algorithms will make use of greater dynamic range, and that could be an argument for keeping all the RAW/DNG files. I don't know. The really important issue, it seems to me, is that the dense-cloud processing step gives you the option to resample the images. I find that most people are processing at "medium" in Photoscan without knowing what the implications of this are for accuracy. This setting means that if they are using a fancy 36-megapixel camera to capture their images in RAW, they are only using 9-megapixel images to actually do the stereo matching. Doing the initial alignment at the highest quality, i.e., with full-resolution images, can give good exterior orientations, but when it comes to building a surface that good work can be lost by increasing the ground-pixel size during the dense match.
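    As a small sketch of what resampling at the dense-matching stage does to effective resolution and ground pixel size: the downscale factors and the starting GSD below are treated purely as parameters, since the exact factors applied per quality setting are part of what is being debated in this thread.

```python
# Effect of downscaling at the dense-matching stage on the resolution that
# is actually used for matching, and on the effective ground pixel size.
# The downscale factors and the GSD below are parameters, not documented values.

def effective_megapixels(megapixels, linear_downscale):
    """Megapixels left after downscaling each image dimension by the factor."""
    return megapixels / linear_downscale ** 2

original_mp = 36.0
original_gsd_cm = 1.0      # assumed ground sample distance of the raw imagery

for factor in (1, 2, 4):   # 1 = full resolution; 2 and 4 are illustrative
    mp = effective_megapixels(original_mp, factor)
    gsd = original_gsd_cm * factor
    print(f"downscale x{factor}: ~{mp:.2f} MP used for matching, "
          f"effective GSD ~{gsd:.0f} cm/px")
```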
  11. GeorgeBevan

    The Raw and the Cooked

    I would like to add a couple of points to what Carla said above. First, there are definite drawbacks in terms of computing time to processing uncompressed, full bit-depth images (in theory 16 bits, although as Carla says the sensor only reads out 14 bits). It is unclear that full bit-depth images produce more accurate stereo matches, all things being equal, e.g., with no sharpening or other interpolation applied after capture. In Semi-Global Matching, the cost function, Mutual Information (MI), used to calculate matches along the eight radial paths emanating from each pixel, degrades in efficiency as the bit depth is increased, according to Hirschmüller, the developer of SGM: "Furthermore, MI does not scale well with increasing radiometric depth (i.e. 12 or 16 bit instead of 8 bit quantization per pixel), since the joint histogram becomes too sparse in this case." (http://www.ifp.uni-stuttgart.de/publications/phowo11/180Hirschmueller.pdf). It is not known exactly how Agisoft Photoscan implements SGM -- they clearly have figured out some very clever ways of speeding up the matching process -- so this observation may not be valid. I would be interested to see a speed and accuracy comparison on the same data-set with 16-bit uncompressed TIFFs vs. JPEGs (Taylor, are you up for this?).

    The second point is that there is another level of interpolation that most users of Photoscan choose to ignore. Tom Noble, Neffra Matthews and CHI teach, as I understand it, that the camera alignment should be done to the highest accuracy. In other words, the initial sparse match that allows the cameras to be resected from the matching points is performed on full-resolution images. In testing, full-resolution alignments in Photoscan are very good, even with low-quality images and poor camera networks. The problem comes during the dense matching stage. Most users don't have the time or computing power to generate point clouds with full-resolution images (I seldom do except for critical image sets, even though I have relatively powerful workstations). Ground-pixel size is one of the most important determinants of photogrammetric accuracy. When the user selects "Medium" for the dense cloud, the images are resampled to 1/4th of their original size, thus leading to a significant loss in accuracy. The software would more accurately characterize the options not as "ultra-high", "high" and "medium" but as "full resolution", "half resolution" and "quarter resolution". As soon as the images are resampled at any stage, there are serious artifacts that can be observed in the models due to quantizing effects. In particular, if you look closely and turn off texture you will notice a sort of rippling effect that radiates out across the model. This has been observed in several ISPRS publications (I'm thinking here mainly of Dall'Asta's work on SfM accuracy). It's something to really watch for. If you're using check-points (control points or scaling targets withheld from the bundle adjustment), the degradation in the accuracy of the model is quite clear.

    The third point I wanted to make on the RAW/DNG vs. JPEG debate, if I dare, is that much of the discussion assumes that the cost of storage is effectively zero. In real-world projects this is not the case. Although I do shoot in RAW when I'm working in changing light conditions -- this practice has saved my bacon on a number of occasions -- I seldom keep the RAW images once I've done an exposure adjustment. The data is so enormous that I cannot afford to store it redundantly on multiple drives in multiple locations, especially if I have to upload it through FTP. I would much rather have JPEGs stored on three separate drives in three locations than a single back-up of my RAWs/DNGs. Of course, the cost of storage is always coming down, but the amount of data generated by new cameras (50 megapixels) is going up at perhaps a higher rate. Just some random thoughts! I'm sure others will disagree...
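    To illustrate the joint-histogram sparsity that the quoted Hirschmüller passage describes, here is a toy calculation: the number of joint-histogram bins grows with the square of the number of grey levels, while the number of pixel pairs that fill them is fixed by the image size (the 36 MP figure is just an assumption).

```python
# Why Mutual Information scales poorly with radiometric depth: the joint
# histogram has (levels x levels) bins, but the number of pixel pairs filling
# it is fixed by the image size, so at 16 bits most bins stay empty.

image_megapixels = 36                       # assumed image size
pixel_pairs = image_megapixels * 1_000_000  # roughly one correspondence per pixel

for bits in (8, 12, 16):
    levels = 2 ** bits
    bins = levels * levels
    pairs_per_bin = pixel_pairs / bins
    print(f"{bits:2d}-bit: {bins:.2e} joint-histogram bins, "
          f"~{pairs_per_bin:.4f} samples per bin on average")
```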
  12. Usually the ASCII format of a point file runs something like this:

    X Y Z Nx Ny Nz R G B

    where X,Y,Z is the location of the point in 3D space, R,G,B are the colours (each 0-255), and Nx,Ny,Nz (each between -1.0 and 1.0) are the components of the surface normal vector at that point.
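    A minimal reader for that layout, assuming whitespace-separated values, one point per line and no header (adjust to whatever your exporter actually writes):

```python
# Minimal reader for the layout above: X Y Z Nx Ny Nz R G B, whitespace
# separated, one point per line, no header line.

def read_points(path):
    points = []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) < 9:
                continue                       # skip blank or malformed lines
            x, y, z, nx, ny, nz = (float(v) for v in parts[:6])
            r, g, b = (int(v) for v in parts[6:9])
            points.append({"pos": (x, y, z),
                           "normal": (nx, ny, nz),
                           "rgb": (r, g, b)})
    return points

# A line in such a file might look like:
#   12.345 -3.210 101.006 0.12 -0.08 0.99 183 172 150
```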
  13. GeorgeBevan

    Photogrammetry and RTI

    You can simply export either point clouds or surfaces from Photoscan with surface normals to another software package, like Meshlab. You must have the normals on the exported 3D data to change the light position. If the data is exported with normals, it is then trivial to change the light angle or the shader model (the Phong model in Meshlab corresponds to the Specular Enhancement filter in RTIViewer). Mapping RTI normals onto 3D models generated by other processes, like photogrammetry, is rather more challenging and is discussed elsewhere in the forum.
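    In miniature, "changing the light position" once you have per-point normals amounts to recomputing a shading value from each normal and a chosen light direction. The sketch below uses a generic Lambert-plus-Phong term; it is not Meshlab's exact shader, and all the vectors and the shininess value are illustrative.

```python
import math

# What re-lighting with exported normals amounts to, in miniature: recompute
# a shading value per point from the unit normal and a unit light direction.
# Generic Lambert + Phong-style term, not Meshlab's exact shader.

def _normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def shade(normal, light_dir, view_dir=(0.0, 0.0, 1.0), shininess=32.0):
    n = _normalize(normal)
    l = _normalize(light_dir)
    v = _normalize(view_dir)
    diffuse = max(0.0, sum(a * b for a, b in zip(n, l)))
    if diffuse == 0.0:
        return 0.0, 0.0
    # Reflect the light direction about the normal, then compare to the view.
    r = tuple(2.0 * diffuse * nc - lc for nc, lc in zip(n, l))
    specular = max(0.0, sum(a * b for a, b in zip(r, v))) ** shininess
    return diffuse, specular

# Move the light from upper-left to upper-right and watch the shading change.
for light in ((-1.0, 1.0, 1.0), (1.0, 1.0, 1.0)):
    d, s = shade((0.2, 0.0, 1.0), light)
    print(f"light {light}: diffuse {d:.2f}, specular {s:.4f}")
```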
  14. GeorgeBevan

    Reflected-UV photography

    Hi Sian, We've done lots of reflected-UV photography and it is a very powerful technique for observing certain types of surface features. I assume you're talking about the UVA range, not UVB and UVC. There have been some experiments with film in the deeper UV range, but it is very tricky and dangerous light to work with, particularly UVC. Recently I worked on a project using reflected UV to identify the degradation mechanisms of the patterns on the wings of moths in natural history collections. It remains, as you indicated, a much less explored technique than IR, but there are some practical reasons for that:

    1) You need lots and lots of UVA light across the 300-400nm range. Be prepared for long exposure times. I would recommend using strobes converted to output UV. Remember that most commercial strobes have coatings intended to reduce UV output, so you'll have to do some research.

    2) Normal photographic glass absorbs UVA to a high degree. If you're serious about reflected UV, consider purchasing the Coastal Optics 60mm UV/IR lens: http://www.jenoptik-inc.com/coastalopt-standard-lenses/uv-vis-nir-60mm-slr-lens-mainmenu-155/80-uv-vis-ir-60-mm-apo-macro.html It is VERY expensive but worth every penny, particularly because it is focus-corrected across the UV-VIS-IR range. That means you can focus through the viewfinder in visible light and then drop the filter in place and get the same tack-sharp focus. Non-focus-corrected lenses require an adjustment when you move out of the visible range to keep focus.

    3) You need a really good filter. The Baader "Venus" filter is currently the best UV-pass filter on the market but is quite expensive in itself: http://www.company7.com/baader/options/u-filter_bpu2.html You'll also need a pile of adapters to put this relatively small astronomical filter on a DSLR lens.

    4) You'll need to get your DSLR converted to remove the internal cut-off filter that is intended to remove extraneous UV and IR light so as to improve exposure and metering in the visible range. A monochrome sensor would be ideal, with no Bayer filter to further cut down on UV transmission. I've been meaning to experiment with Foveon sensors in this application to see if they can get us better sensitivity.

    Once you've mastered reflected UV for single images, you could consider using it for RTI and photogrammetry. I hope that helps get you started!
  15. Very interesting! Can you show us a version of this data without texture, either a mesh or a point-cloud?