About macsurveyr
Rank: Advanced Member. Community Reputation: 17 (Good). macsurveyr last won the day on May 30, 2017 and had the most liked content!
  1. Martin, There is a more exact formula that can be used to calculate the footprint at the closer distances you are working at, but it is still an approximation. You can explore this website for more information: Depth of Field (DoF), Angle of View, and Equivalent Lens Calculator. Make sure to click on the See Notes link under the 2 methods for calculating the field of view. Once there, be sure to follow the other link, Calculating the Angle of View: When Theory Meets Practice. Additionally, lens distortion can account for a few percent less (usually) field of view. Tom
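The gist of the closer-distance correction can be sketched with a thin-lens approximation (my own illustration, not the linked calculator's exact method; function and variable names are mine):

```python
# Simple (infinity-focus) footprint vs. a closer-distance estimate.
# Thin-lens sketch; still an approximation, as noted above.

def footprint_simple(sensor_mm, focal_mm, distance_mm):
    """Infinity-focus approximation: footprint scales linearly with distance."""
    return sensor_mm * distance_mm / focal_mm

def footprint_close(sensor_mm, focal_mm, distance_mm):
    """Closer-distance estimate: accounts for magnification m = f / (s - f)
    when the lens is focused near; footprint = sensor / m."""
    return sensor_mm * (distance_mm - focal_mm) / focal_mm

# Full-frame width (36 mm), 50 mm lens, subject at 1 m:
print(footprint_simple(36, 50, 1000))  # 720.0 mm
print(footprint_close(36, 50, 1000))   # 684.0 mm -> ~5% smaller up close
```

The difference between the two estimates shrinks as the subject distance grows, which is why the simple formula is fine at survey distances but noticeably off close up.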
  2. Hmmm, Not sure I can explain it, since the calculated footprint can be confirmed. That leaves what you are seeing as the mystery. I believe, spec-wise, the Canon viewfinder is supposed to cover ~96% of the actual footprint, so again, not sure I can explain it based on the numbers. Tom
  3. Hello Camilla, Very interesting test results! As I understand it, choosing the Enable color correction option in PhotoScan will cause a least squares best fit of the color values for all the images used, and will then alter every image so that the color is the best fit. That means that if you had done a very careful job of color correction for each image, in, say, Photoshop, that work would be altered by the color correction in PhotoScan. If you want the original color of each photo maintained, you must uncheck the Enable color correction option. Out of curiosity, what Blending mode did you choose? Tom
  4. The question is whether the in-camera correction is only for chromatic aberration, or if it includes other corrections, such as lens distortion, which you do NOT want applied in camera.
  5. There are several issues that may be confusing the discussion. First, Taylor, if I understand correctly, you captured 3 photos from almost exactly the same position - each camera location - at 0, +90, and -90 degree rotations? That is not recommended. Multiple images from the same location that are aligned together will create problems, as points matched on those cameras will have nearly infinite potential error in Z. Potential error in Z would be infinite if no rotation were done, and is probably close to infinite between pixels matched on the +90 and -90 rotated images, although I doubt the camera locations have exactly the same nodal point. If they do - perhaps you are using a calibrated nodal ninja mount - then the same-nodal-point images should be grouped as Station camera groups. Aligning, and then keeping during surface reconstruction, images that are at or very nearly at the same location will cause surface noise. Also, a stereo pair from the 70 mm lens at 70% overlap will have ~6.5 times more potential error in Z. Even if the camera calibration is good to 0.2 pixels, the potential Z error exceeds a pixel (~1.3) for a 70 mm, 70% overlap stereo pair. Having additional images that actually contribute to geometry will reduce that potential error, of course. Most of the rippling that I have seen is in fact due to poor geometry, perhaps made worse if the camera calibration is not optimized. Having said that, I have also seen slightly different artifacts - more like stair steps, though I suppose they might be called ripples - in some dense surfaces. I have had some exchanges with Agisoft about it since I noticed it; it was introduced with version 1.1 when they changed their stereo rectification approach, and was made more noticeable by a bug that has since been fixed. Since it is still there to a lesser degree, it may in fact be due to some quantizing at the level of the pixel, but they have indicated that some of it was/is due to sub-pixel disparity estimation.
I am continuing to follow up on what is actually going on. I do not have any inside knowledge of what algorithms Agisoft is using, but I don't believe the sparse alignment is actually using SIFT, and it really does match only a pair at a time, then tries to match additional image pixels to previously matched points on other images, weighting them according to the size of the pixels being matched. I am also guessing that they are not using true SGM for dense matching, but I could be wrong. I will try to post what I find out; however, I am going to be traveling for a bit and won't be checking in very often. Plus, I will need to test some things as well, and that will take a while. Tom
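The ~6.5x figure quoted above for a 70 mm lens at 70% overlap can be reproduced with a back-of-envelope calculation (a sketch assuming a full-frame 36 mm sensor with overlap along the sensor width; function names are mine):

```python
# Height-to-base ratio for a stereo pair: the base B is the un-overlapped
# fraction of the ground footprint, B = (1 - overlap) * H * w / f, so
# H/B = f / ((1 - overlap) * w). Z error scales roughly with H/B.

def height_to_base(focal_mm, sensor_width_mm, overlap):
    """H/B ratio of a stereo pair with the given forward overlap."""
    return focal_mm / ((1.0 - overlap) * sensor_width_mm)

def z_error_px(focal_mm, sensor_width_mm, overlap, match_error_px):
    """Potential Z error (in pixel units) amplifies the matching error by H/B."""
    return height_to_base(focal_mm, sensor_width_mm, overlap) * match_error_px

print(round(height_to_base(70, 36, 0.70), 1))       # 6.5
print(round(z_error_px(70, 36, 0.70, 0.2), 1))      # 1.3
```

With a 0.2 pixel calibration, the amplification pushes the potential Z error past a full pixel, matching the numbers in the post.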
  6. First of all, Iain, when you say that you ran the RAW through PhotoScan, what did you actually do? Since PhotoScan can now open DNG files directly, and previous versions opened CR2 files, I want to be sure what is going on. Currently, PhotoScan only opens DNG files and uses the "as shot" settings; even if you process the RAW files and export DNG files, any tweaks to the DNG file are NOT used by PhotoScan. I, like Carla, strongly recommend shooting RAW, removing chromatic aberration, minimizing vignetting, NOT sharpening, and NOT removing lens distortion. The results in PhotoScan will improve - every time. It may not be by much, as it really depends on the quality of the camera and lens used, as well as the quality of the in-camera JPEGs that are produced - but results will improve. And yes, Neffra and I, and CHI, teach that it is very important to align - and optimize that alignment - using full resolution images. It is critical that a very high quality camera calibration be determined to tenths of pixels - full resolution pixels - so that subsequent products are the best they can be. The rippling effect is not really due to the downsampling of images at the dense point cloud stage. Yes, the accuracy of the surface can be affected by the downsampling, but that is because, in effect, not all the original pixels are being matched. Some detail may not be represented in the surface because those pixels are at best averaged. But it is not really the same as downsampling from 36 megapixels to 9, as the pixels are being matched across - hopefully - multiple images, and the downsampling will be different for each image. Which leads back to the rippling effect. It is actually mostly due to poor base to height - poor geometry, insufficient parallax - as well as the quality of the sub-pixel interpolation; a higher quality alignment/optimization and camera calibration does help. Also, almost always, only two photos are being used or cover the subject area.
The effect can be seen even at Ultra high - full resolution images - using only two images with insufficient parallax. The rippling artifacts were introduced in version 1.1 and exacerbated by an integer rounding bug for a while. The ripples have been mostly eliminated in the latest version, but very noisy surfaces will still be produced using photos with weak geometry. As for 16 bit: if the subject has high dynamic range, or there are deep shadows and/or highlights, and tweaking the shadows and highlights reveals more detail, then tweaking less but saving as 16 bit TIFF files will provide more pixel detail for PhotoScan to work with at all stages. The biggest difference in processing time between 16 bit TIFF and 8 bit JPEG is the loading time. According to Agisoft, the actual processing time is the same no matter what the file type, but my experience is that TIFF files can take substantially longer to load on some computers. Macs do not seem to have much different load times, but on Windows I have experienced a 5 second JPEG load time versus a 90 second load time for TIFF, even at the same bit depth. Of course, bit depth - image size - also affects load time. However, using 16 bit TIFF files has produced better results on some tests and yes, a little more processing time, since more points may be matched. That is a good thing. The important thing is that 16 bit TIFF files going in will allow 16 bit products to be output, thus preserving the original dynamic range. Often the subjects I have worked with don't actually warrant the extra range, but I am sure others may benefit from more bit depth in some outputs. Orthos and textures can be enhanced differently with more bits. So, back to RAW and DNG. Hopefully future versions of PhotoScan will support DNG files which have been corrected for chromatic aberration, exposure, etc., and eliminate some of the redundant images we all currently create. Tom
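The value of 16 bit under heavy tonal tweaks can be illustrated by counting how many distinct levels survive a shadow push (illustrative only; the function name is mine):

```python
# Pushing shadows by N stops stretches the darkest 1/2**N of the tonal
# range across the output; the levels available in that range are what
# remain to describe the detail.

def levels_after_push(bits, stops):
    """Distinct tonal levels left in a region pushed up by `stops` stops."""
    return 2 ** bits // 2 ** stops

print(levels_after_push(8, 3))   # 32 levels -> visible banding
print(levels_after_push(16, 3))  # 8192 levels -> smooth gradations
```

This is why tweaking less and saving 16 bit TIFFs, as suggested above, hands PhotoScan more real pixel detail than a heavily pushed 8 bit file can.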
  7. The orientation flag is a problem. There are a couple of ways the flag can get changed that cause problems. The simplest is to look at an image in Windows Explorer and rotate it while viewing it. The image will get saved as if the NORMAL viewing rotation is the way you were last viewing it - essentially destroying the flag. Say you copy the image first and then rotate the copy - the flag is destroyed. Now add both of those photos to PhotoScan. PhotoScan will think they were taken by two different cameras and separate them. For camera calibration purposes, they should be the same camera. It can be very difficult to know which way to unrotate images back to the original orientation, especially with images taken from a UAS. RAW files can have the flag too. If images are opened in Camera Raw and appear rotated, then exporting them as a TIF or JPG will create a file that is rotated but whose flag says it is not. Again, two cameras in PhotoScan when there should only be one. Just be sure before destroying the flag.
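For reference, the eight EXIF orientation values and the display transforms they imply (per the EXIF specification) can be tabulated like this; the helper name is mine:

```python
# EXIF Orientation tag values: (description, clockwise rotation in
# degrees applied by the viewer, whether a horizontal mirror is applied
# first). Value 1 is "as stored"; 5-8 swap width and height on display.
EXIF_ORIENTATION = {
    1: ("normal", 0, False),
    2: ("mirror horizontal", 0, True),
    3: ("rotate 180", 180, False),
    4: ("mirror vertical", 180, True),
    5: ("mirror horizontal, rotate 270 CW", 270, True),
    6: ("rotate 90 CW", 90, False),
    7: ("mirror horizontal, rotate 90 CW", 90, True),
    8: ("rotate 270 CW", 270, False),
}

def swaps_axes(flag):
    """Orientations with a 90/270 rotation swap width and height on display."""
    return EXIF_ORIENTATION[flag][1] in (90, 270)

print(swaps_axes(6))  # True: a landscape sensor frame displays as portrait
```

This is exactly why rewriting the flag silently matters for calibration: a flag of 6 and a flag of 1 describe the same sensor, but software that trusts the flag sees two different image shapes and hence two "cameras".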
  8. I have been testing Reality Capture just a bit. Unfortunately, I can currently only test it using Windows 10 in a Parallels virtual environment, so I have not tested anything beyond alignment. However, learning how good the alignment and subsequent camera calibration are is really my biggest curiosity/concern. Everyone seems to think that RC is faster for alignment, but I am not so certain. I believe that by default the alignment is done using downsampled images - at least half the original size, perhaps even more. If you do an alignment using PhotoScan at Medium or Low, I believe you will see nearly the same speed increase. With that speed, of course, there is a compromise: the camera calibration will never be as good as is possible on High. I am, of course, a stickler for the highest quality camera calibration, as that is the single biggest source of error in a photogrammetry solution. Meshing, of course, will always be faster if CUDA is available, and RC may very well be faster at meshing than PhotoScan and others, but I cannot test that currently. I would be very interested in any testing that others might do. That is not to say that RC is not good. I think it is quite good, and fast. I am not sure it is better than what is already available, but it is too early to tell. It will be interesting once they announce pricing. Of course, PhotoScan is available and runs well on Macs, which is a big deal for me. It will also be interesting to see if RC becomes available on something other than Windows, and without the CUDA requirement. I hope to do more testing before the end of the trial period, but there never seems to be enough time. The interface is a bit different - a little more black box than I would like. It seems a little difficult to figure out what is really going on and how to verify the quality of the results. If anyone else does some testing, please send some feedback. Tom
  9. Hello, Putting scales on the turntable works just fine, as you have proven yourself. In our testing it won't necessarily provide a better solution than scale from a flat project. We have found that a proper flat project - proper lens and base to height, photos at 90's and 270's, etc. - will provide a highly reliable camera calibration, perhaps very slightly better than from a single circuit in the round. The number of tie points that connect the in-the-round captures and the flat capture is usually in the thousands and will more than adequately tie the multiple captures together for highly accurate transfer of scale. That being said, the biggest reason we tend to use the flat project method is that it always seems more difficult to make sure the target sticks don't move, stay in good focus, and don't occlude the subject when capturing them on a turntable. If you like that approach better, that is great. There should not be any difference in the final results. Tom
  10. Taylor, Once you have the camera calibration in PhotoScan for one photo of an RTI set, simply export the camera calibration from the camera calibration dialog - make sure you have picked the Adjusted tab. Then just load all the photos of the RTI set into a brand new PhotoScan project. Go to the camera calibration dialog and load the calibration you just saved. Then export all the undistorted photos. They do not need to be processed in any other way; simply loading the camera calibration for a set of photos means it will be used during the export. Don't make it harder than it needs to be. Tom
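For the curious, the lens distortion that such a calibration describes is typically the Brown-Conrady model; a minimal sketch of the forward mapping (coefficient names follow the common k1/k2/p1/p2 convention, not any specific PhotoScan export format):

```python
# Brown-Conrady distortion sketch: maps an ideal (undistorted) point in
# normalized image coordinates to where the real lens actually images it.
# "Undistorting" an export resamples the image through the inverse of this.

def distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# With all coefficients zero the mapping is the identity:
print(distort(0.1, -0.2))           # (0.1, -0.2)
# A little barrel distortion (negative k1) pulls points toward center:
print(distort(0.5, 0.0, k1=-0.1))   # (0.4875, 0.0)
```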
  11. Coded targets used to be essential in order to be efficient with PhotoModeler. Lots of circular targets, coded or not, also used to be necessary in order to quickly orient photos to each other, at least initially. No targets, coded or otherwise, are actually necessary for PhotoScan or other software these days, including PhotoModeler. However, targets are very useful for marking the objects used for scale and/or control points. Circular targets are ideal because most software can assist in finding the center of the target very accurately - far more accurately than a human can. If the circular targets happen to be compatible coded targets, PhotoScan and PhotoModeler can find them automatically, with almost no user interaction required. AdamTech CalibCam finds the centroids of targets - like PhotoScan and PhotoModeler - very accurately, but does not support or require coded targets. So coded targets can be very useful, but are not at all mandatory. PhotoModeler supports several variations of coded targets. The ones in the kit are the latest version, but those are not supported by PhotoScan. As I said, we are in the process of redesigning the scale bars that many of you have seen - that we use all the time - and are going to look for sources to print them. We have been in the field the last couple of days and are packing today for a week and a half in the field, so I will have limited access to the internet until July 2. If I get done with what we shot yesterday, I might post some shots or data with some scale bars in action. Tom
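The sub-pixel center finding mentioned above boils down to a weighted centroid of the target blob; a toy sketch (real implementations weight by grayscale intensity and fit ellipses, this just shows the idea; names are mine):

```python
# Weighted centroid of a target blob: averaging many pixel positions,
# weighted by how strongly each pixel belongs to the target, lands on a
# sub-pixel center far more repeatably than a human click can.

def centroid(pixels):
    """Weighted centroid of a {(col, row): weight} blob."""
    total = sum(pixels.values())
    cx = sum(c * w for (c, _), w in pixels.items()) / total
    cy = sum(r * w for (_, r), w in pixels.items()) / total
    return cx, cy

# A symmetric 3x3 blob centered on pixel (10, 20):
blob = {(c, r): 1 for c in (9, 10, 11) for r in (19, 20, 21)}
print(centroid(blob))  # (10.0, 20.0)
```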
  12. Taylor, We made our own a couple of different times, but are actually looking into getting some printed options. We hope to have some better ideas about sources soon. We are going to the field next week, so I am not sure it will be that soon. PhotoScan Professional has printable coded targets, but not target sticks or scale bars. We hope to find out about the cost of scale bars that have coded targets, and some other things. Bug me or this forum again in a while if there are no updates. Tom
  13. Sigmund, The new images are very nice. They seem consistent, and the camera calibration is much better. When combined with the 4 images from farther away, the alignment and surface are now seamless between the two sets. Much better results. Do you have some way of providing scale? If any comparisons are to be made, it would be better if things were scaled correctly and put into the same arbitrary coordinate system. George, I am still pondering some things in your last post, but I have had a nasty cold and am not sure I have the energy to respond just yet. I just wanted to give Sigmund some feedback. Tom
  14. George, PhotoScan uses multi-view stereo reconstruction to generate the dense surface model, but employs a couple of different algorithms depending on the type of object. It tries to match every Nth pixel - N being 1, 2, 4, 8, or 16 - of each photo across multiple photos. Some filtering is performed, but I do not know what they use. Some artifacts can be created depending on how aggressive the surface matching is, but those are easy to identify. On the other hand, the differences between stereo models are minimized or eliminated and do not have to be reconciled after the fact. Tom
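The every-Nth-pixel scheme implies an effective image size at each dense-cloud quality setting; a small sketch (the quality-to-N mapping here is my assumption from the description above, with Ultra matching every pixel and each step down doubling N):

```python
# Matching every Nth pixel in both x and y divides the pixel count by N^2.
# Quality-to-N mapping assumed: Ultra=1, High=2, Medium=4, Low=8, Lowest=16.

QUALITY_N = {"Ultra": 1, "High": 2, "Medium": 4, "Low": 8, "Lowest": 16}

def effective_megapixels(megapixels, quality):
    """Megapixels actually participating in dense matching at a setting."""
    n = QUALITY_N[quality]
    return megapixels / (n * n)

print(effective_megapixels(36, "Medium"))  # 2.25 MP of a 36 MP frame
```

Note this is not the same as simply shooting a 2.25 MP camera: each image is matched against others whose sampling grids fall differently on the subject, so some of the skipped detail is still recovered.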
  15. Sigmund, A few things about photos, and about the questions of focus, angles, lenses, and whether certain things are necessary. Well, obviously, some results can be created without all the "necessary" things. But there are errors. I can see them. Sure, they are very small, but that is why some things are necessary - to eliminate, or at least minimize, very small errors. With consistent, correctly captured sets of images, I can solve for and eliminate errors at the sub-pixel level - approaching 1/10th of a pixel. Changing focus; using longer lenses, which degrades the geometry; the opposite problem - moving too much, which changes the look angle and causes fewer matched points; having image stabilization turned on, which has to change the relationship of the lens to the sensor; and, to a much lesser extent, even changing aperture, which can cause light to refract ever so slightly differently. All those things, and more, can add up - making it nearly impossible to solve to the sub-pixel. These days, I can make some fabulous looking 3D models from images that really should not work at all. I really struggle with whether that is a good thing or a bad thing. Your Hangvar images: some have been rotated - probably in Windows Explorer, which destroys the original EXIF tag, so they all say they have an orientation of 1 (Normal) - and we know that is not true. In order to have a chance at camera calibration, I need to know the relationship of the lens to the sensor as captured. Some images were captured with image stabilization turned on. They all had some image compression - JPEG compression artifacts can be a problem at high compression. The 50 mm lens has an effective focal length of 75 mm, which for normal strip photos and the prescribed 60% to 70% overlap does not make for good base to height. You did take convergent photos, which improves the geometry - very necessary with longer lenses - but can cause fewer matches if too convergent.
A couple of photos were a little blurry - not surprising considering the low light - but that does impact a sub-pixel solution. And yet the surface looks pretty good. Like I said, I am not sure if that is good or bad. So, even though I was able to align the far images with the detail images, rotated and not, the camera calibration(s) are less than perfect, and I can detect a slight Z offset between the two sets of images. Without scale, of course, I can't really say how much it is. I could manually intervene and add points to minimize the differences, but that takes time and would not be necessary with good "sets" of images. Hope this helps in the event you do take more photos next week. George, the data set is a 3D surface, with normals, in an arbitrary coordinate system, not to real scale, oriented in an arbitrary plane. Tom