
The Raw and the Cooked


IainS


I am learning, and last weekend I shot a rock in a park near where I live. I shot in RAW plus JPEG. I ran the RAW files through Photoscan and felt that the results were not great, lacking sharpness and detail. However, today I ran the JPEGs through Photoscan and, although they were not 100% what I was hoping for, they were considerably sharper and the final image had more definition. I would have thought the reverse would be true, with the JPEG containing less information than the RAW images?

 

Iain

 

 


Iain,

 

There are a couple of issues here that could explain what you are seeing.  First, it is correct that the RAW file carries more data than a JPEG.  For most DSLR sensors it is 14 bits per pixel per color channel, while a JPEG is 8 bits per pixel per color channel.  But more importantly, the RAW data is unprocessed by the camera.  A JPEG produced by the camera is processed and likely has sharpening, contrast, and saturation applied.  Even if you plan to use JPEGs, if you make the JPEGs from the RAW, you are in control of, and have a record of, the processing.
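
To put the bit-depth difference in perspective, here is a quick back-of-the-envelope sketch (Python; the 14-bit figure is the typical DSLR value mentioned above, and your camera may differ):

```python
# Number of tonal levels per color channel at different bit depths.
for name, bits in [("camera JPEG", 8), ("typical DSLR RAW", 14), ("16-bit TIFF container", 16)]:
    print(f"{name:22s} {bits:2d} bits -> {2 ** bits:6d} levels per channel")
```

So an 8-bit JPEG has 256 levels per channel against 16,384 in a 14-bit RAW, which is why shadow and highlight detail survives far more editing in the RAW.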

 

What is done to produce the JPEGs depends on your camera and its settings.  I suspect what you are seeing is the result of sharpening.  Note that sharpening is a really bad idea for all computational photography techniques if your goal is high precision, low uncertainty, and reproducibility.  Sharpening changes the pixels.  A good photogrammetry workflow and optimization should yield RMS errors in the tenths of a pixel, and if you are modifying the pixels by sharpening, then all bets are off about the metrics on your results.
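
To see how much sharpening actually rewrites the data, here is a minimal sketch (Python with NumPy and SciPy; the synthetic image and the sharpening amount are arbitrary choices) that applies a simple unsharp mask and measures how far the pixel values move:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(200, 200)).astype(float)  # stand-in for an 8-bit photo

# Basic unsharp mask: add back a scaled difference between the image and a blurred copy.
blurred = gaussian_filter(image, sigma=1.0)
sharpened = np.clip(image + 1.5 * (image - blurred), 0, 255)

change = np.abs(sharpened - image)
print(f"mean change: {change.mean():.1f} levels, max change: {change.max():.1f} levels")
```

Every one of those changed values is a pixel the matcher sees differently from what the sensor recorded, which is why sub-pixel RMS figures stop meaning much once sharpening is in the chain.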

 

Go take a look at the RAW file, the camera-produced JPEG, and a JPEG you produce from your RAW with no processing applied except a white balance.  Can you see any difference at 100%?  (Note that to avoid processing the RAW, you may need to "zero out" the default values in programs like Adobe Camera Raw and Lightroom, because they want to process your images by default as well.)

 

Our strong recommendation is to shoot RAW, save it as DNG for archiving, and create controlled TIFFs or JPEGs for processing into photogrammetry.  In general we don't process the images other than a white balance and exposure compensation.  If your lens is in the database you can remove chromatic aberration as well.  We strongly recommend NEVER applying sharpening, tone curves, or distortion correction if you are doing scientific imaging.  If you are making games and entertainment, then do whatever makes it look better.
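
One way to keep that JPEG-making step controlled and repeatable is to script it. Here is a rough sketch using the open-source rawpy and imageio libraries (my tool choice for illustration, not a requirement; the folder name and file extension are made up):

```python
import glob
import rawpy
import imageio

# Convert every RAW file in a folder to an 8-bit JPEG with only the camera
# white balance applied: no sharpening, no tone curve, no auto-brightening.
for path in glob.glob("capture/*.CR2"):
    with rawpy.imread(path) as raw:
        rgb = raw.postprocess(
            use_camera_wb=True,    # keep the white balance recorded at capture
            no_auto_bright=True,   # do not auto-stretch the exposure
            output_bps=8,          # 8 bits per channel, ready for JPEG
        )
    imageio.imwrite(path.rsplit(".", 1)[0] + ".jpg", rgb, quality=95)
```

The script itself then doubles as the processing record.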

 

Carla


  • 2 weeks later...

I would like to add a couple of points to what Carla said above. First, there are definite drawbacks in terms of computing time from processing uncompressed, full bit-depth images, at least in theory (16 bits, although as Carla says the sensor only reads out 14 bits). It is also unclear that full bit-depth images produce more accurate stereo matches, all things being equal, e.g., no sharpening or other interpolations applied after capture. In Semi-Global Matching, the cost function, Mutual Information (MI), used to calculate matches along the eight radial paths emanating from each pixel, degrades in efficiency as the bit depth is increased, according to Hirschmüller, the developer of SGM: "Furthermore, MI does not scale well with increasing radiometric depth (i.e. 12 or 16 bit instead of 8 bit quantization per pixel), since the joint histogram becomes too sparse in this case." (http://www.ifp.uni-stuttgart.de/publications/phowo11/180Hirschmueller.pdf). It is not known exactly how Agisoft Photoscan implements SGM (they clearly have figured out some very clever ways of speeding up the matching process), so this observation may not be valid. I would be interested to see a speed and accuracy comparison on the same dataset with 16-bit uncompressed TIFFs vs. JPEGs (Taylor, are you up for this?).
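
Hirschmüller's sparsity point is easy to see with a bit of arithmetic. A minimal sketch (Python; the 24-megapixel figure is just a stand-in for a typical image pair):

```python
n_pixels = 24_000_000  # roughly the number of pixel correspondences from one 24 MP pair

for bits in (8, 14, 16):
    levels = 2 ** bits
    joint_bins = levels ** 2  # cells in the joint (left intensity, right intensity) histogram
    max_occupancy = min(n_pixels / joint_bins, 1.0)
    print(f"{bits:2d}-bit: {joint_bins:>13,} joint bins, at most {max_occupancy:.2%} occupied")
```

Even in the best case, a 16-bit joint histogram is more than 99% empty, which is exactly the sparsity problem MI runs into.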

 

The second point is that there is another level of interpolation that most users of Photoscan choose to ignore. Tom Noble, Neffra Matthews and CHI teach, as I understand it, that the camera alignment should be done at the highest accuracy. In other words, the initial sparse match that allows the cameras to be resected from the matching points is performed on full-resolution images. In testing, full-resolution alignments in Photoscan are very good, even with low-quality images and poor camera networks. The problem comes during the dense matching stage. Most users don't have the time or computing power to generate point clouds with full-resolution images (I seldom do except for critical image sets, even though I have relatively powerful workstations). Ground-pixel size is one of the most important determinants of photogrammetric accuracy. When the user selects "Medium" for the dense cloud, the images are resampled to a quarter of their original size, leading to a significant loss in accuracy. The software would more accurately characterize the options not as "ultra-high", "high" and "medium" but as "full resolution", "half resolution" and "quarter resolution". As soon as the images are resampled at any stage, serious artifacts can be observed in the models due to quantizing effects. In particular, if you look closely and turn off texture, you will notice a sort of rippling effect that radiates out across the model. This has been observed in several ISPRS publications (I'm thinking here mainly of Dall'Asta's work on SfM accuracy). It's something to really watch for. If you're using check points (control points or scaling targets withheld from the bundle adjustment), the degradation in the accuracy of the model is quite clear.
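
To put numbers on the resampling, here is a quick sketch for a hypothetical 36-megapixel capture with a 0.05 mm ground pixel (both figures invented for illustration; the exact downscale factor behind each named quality setting is Agisoft's implementation detail):

```python
full_mp = 36.0       # hypothetical full-resolution capture, in megapixels
full_gsd_mm = 0.05   # hypothetical ground pixel size at full resolution

for label, linear_factor in [("full resolution", 1), ("half resolution", 2), ("quarter resolution", 4)]:
    effective_mp = full_mp / linear_factor ** 2
    effective_gsd = full_gsd_mm * linear_factor
    print(f"{label:18s} ~{effective_mp:5.2f} MP used for matching, ground pixel ~{effective_gsd:.2f} mm")
```

Whatever the exact mapping of the menu names, every halving of linear resolution doubles the effective ground-pixel size, and ground-pixel size drives accuracy.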

 

The third point I wanted to make on the RAW/DNG vs. JPEG debate, if I dare, is that much of the discussion assumes that the cost of storage is effectively zero. In real-world projects this is not the case. Although I do shoot in RAW when I'm working in changing light conditions (a practice that has saved my bacon on a number of occasions), I seldom keep the RAW images once I've done an exposure adjustment. The data is so enormous that I cannot afford to store it redundantly on multiple drives in multiple locations, especially if I have to upload it through FTP. I would much rather have JPEGs stored on three separate drives in three locations than a single backup of my RAWs/DNGs. Of course, the cost of storage is always coming down, but the amount of data generated by new cameras (50 megapixels) is going up at perhaps a higher rate.
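
For a sense of the trade-off, here is a rough sketch with invented but plausible per-file sizes for a high-resolution camera (adjust for your own gear and capture volumes):

```python
n_images = 1000                      # a modest project
raw_mb, jpeg_mb = 60, 15             # ballpark per-file sizes for a ~50 MP camera

strategies = [
    ("RAW/DNG, original plus one backup", raw_mb, 2),
    ("JPEG on three drives in three locations", jpeg_mb, 3),
]
for label, size_mb, copies in strategies:
    print(f"{label}: ~{n_images * size_mb * copies / 1024:.0f} GB total")
```

Three geographically separated JPEG copies can still cost well under half the storage (and upload time) of a single redundant RAW set.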

 

Just some random thoughts! I'm sure others will disagree...


Lots of interesting observations here, George, especially about the downside of using 16-bit TIFFs for speed and accuracy.  I have processed some data sets with 16-bit TIFFs under the assumption, rather than the knowledge, that more bit-depth would give better results, but it's a question that has been nagging me for a while.  I wouldn't mind running some comparisons between 16-bit TIFFs and 8-bit JPEGs.

 

I get a wide range of processing speeds on my GPU, and I'm not sure what determines the speed.  I once achieved a combined 1 billion samples per second between the GPU and CPU (Nvidia GTX 980Ti and 12-core Xeons on a 2012 Mac Pro), but more often I get something in the range of 250 million to 750 million samples/second.

 

It's good to know, if I understand your post correctly, that processing the sparse point cloud at less than the highest quality setting can still yield accurate results.  I also rarely process the dense point cloud at the ultra-high setting except for smaller data sets, mostly because of memory consumption (128 GB of RAM).

 

I've also noticed the rippling effect in some models.  At first, I thought it was just a moire effect of rendering the point cloud or mesh at different scales relative to my screen resolution, but it's definitely there in some models. 

 

On your third point, storage, I've been reluctant to give up my RAW files, but I tend to store everything.  I'm running up against the limits of this habit, however.  I have a total of 9 TB of storage, including a 4 TB external backup drive and 5 TB of primary storage on two internal and one external drive.  I use Apple's Time Machine for backing up my files.  The 4 TB backup drive is essentially full, since Time Machine preferentially overwrites the oldest files first.  The external drives also start to fill up power strips, since they require an external AC power source, so it's an issue I'll have to face up to sooner or later.


I think there are a few things getting conflated here, so I'll weigh in on RAW vs JPEG and 8-bit vs 16-bit in PhotoScan.

 

I'll say again that I think there is enormous value in shooting RAW and controlling, and having a record of, the processing for your JPEGs (if you are using them).  JPEGs from the camera are outside of your control, and you have no record of how they were processed because the camera is a black box.  Further, most of our work and the people we work with are creating documentation and want to use scientific imaging practices.  One goal for scientific imaging is future reuse, and also the ability for others to assess the data.  Collecting and keeping data that can be much more accurately corrected for color, exposure, etc., along with a clear record of its processing, seems like a no-brainer to me.  Pretty much every museum and library imaging staffer we talk to says the same thing.  They may not all use DNG for archiving, but they pretty much all shoot RAW.

 

As for JPEG vs TIFF inside PhotoScan: it is proprietary software, so we don't know for sure.  However, it's my understanding that the choice of JPEG vs TIFF will not affect the alignment, optimization, and geometry produced by PhotoScan.  The higher bit-depth TIFF images only give an advantage when building texture maps or orthophotos.  We generally use JPEGs for our photogrammetry projects, unless we have a specific need for better, richer texture maps.  However, the fact that we have shot RAW, controlled the processing, and saved the DNGs means that we (or anyone else who might want to use the data) can get back to 16-bit data for any changes in the image processing they might want to make.  Also note that other software, and future improvements and modifications to the processing, might give greater advantages to the 16-bit images.  We want to "future proof" our data as much as possible, so that the image sets we collect have the maximum reuse and payback over time.

 

An example of this is new research software that uses a technique called "unstructured lumigraphs" to make a real-time renderer with much better rendering of the textured surface for specular materials and other complex surfaces.  This work is being done at the University of Minnesota and the University of Wisconsin Stout, and we expect to release an open source version of this software in collaboration with them before the end of the year.  It relies on software like PhotoScan to create the model and align the photos, but it can then take the results and give the user a much better experience of the surface.  Unlike approaches that do this all with algorithms in software (like making a surface appear metallic), this tool uses the original captured photos for the real-time rendering.  There is definitely a difference in this case between using 16-bit TIFFs and JPEGs.  More things like this will be coming.  If you want maximum reuse for your data, you should collect RAW and archive DNGs.  You can then choose a lower-resolution path, like JPEGs, for your current processing needs, knowing that you can recreate images that take advantage of new breakthroughs in the future.

 

Carla


Good points.  I save all of my RAW files as DNGs with embedded RAW, which doubles the file size.  I generally export a TIFF of images I plan to use (3 times the RAW file size at 16 bits per RGB channel) after I've done my basic processing in Lightroom.  Then I export JPEGs for processing with Photoscan or RTIBuilder.  If I want to add notations (boxes, arrows, and scale bars), I usually do this on maximum-quality JPEG images in ImageJ, and then I export a smaller compressed JPEG (usually limited to 500 KB) for embedding into reports, with a link to a higher-resolution version of the JPEG.  On occasion, I'll use Photoshop on the TIFFs to convert to gray-scale and for false-color IR, or for processing prior to other software such as DCRAW, ImageJ, or Dstretch for additional post-processing of false-color IR, UV, or Principal Components Analysis.  Saving a false-color IR as a TIFF increases the number of layers, which increases the file size.  I keep a log file of all the image processing steps in Photoshop.
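
Roughly what those multipliers add up to per image, using an invented 25 MB RAW as the starting point (the JPEG sizes are guesses, not measurements):

```python
raw_mb = 25.0  # hypothetical RAW file straight from the camera

derivatives = {
    "DNG with embedded RAW (~2x RAW)": 2 * raw_mb,
    "16-bit TIFF, 3 channels (~3x RAW)": 3 * raw_mb,
    "maximum-quality JPEG for Photoscan/RTIBuilder (guess)": 10.0,
    "report JPEG (capped at 500 KB)": 0.5,
}
for name, size_mb in derivatives.items():
    print(f"{name}: ~{size_mb:.1f} MB")
```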

 

For a recent project documenting a relatively small portrait painting (recto and verso) with dimensions of 48 x 50 cm (roughly 19 by 20 inches), I captured approximately 700 images for visible, IR (various wavebands), UV, UV-induced fluorescence, visible-induced IR fluorescence, visible- and IR-RTIs, false-color infrared, and photogrammetry.  Every camera position used to document the recto included a calibration set of 9 or more images to align all the data with the 3D model.  By the time all the post-processing was complete, I had accumulated about 1,500 images and nearly 70 GB of data.  Not all of these images yielded data that was ultimately used (e.g., calibration and bracketing exposures, visible-induced IR that didn't reveal pigments that fluoresce in the IR, and dark-field exposures).  All of this adds up very quickly, so at some point I may need to decide whether to keep all the data or let some of the derivatives and unused images go.


Just to clarify, I was only addressing the issue of whether an increase in radiometric depth produces an increase in photogrammetric accuracy, a question posed by IainS at the beginning of the thread, if I understand it properly. At present, the answer seems to be no; it just results in a big increase in compute time. Perhaps future algorithms will make use of the greater dynamic range, and that could be an argument for keeping all the RAW/DNG files. I don't know. The really important issue, it seems to me, is that the dense-cloud processing step gives you options to resample images. I find that most people are processing at "Medium" in Photoscan without knowing what the implications are for accuracy. This setting means that if they are using a fancy 36-megapixel camera to capture their images in RAW, they are only using 9-megapixel images to actually do the stereo matching. Doing the initial alignment at the highest quality, i.e., on full-resolution images, can give good exterior orientations, but when it comes to building a surface this good work can be lost by increasing the ground-pixel size during the dense match.


George, I'd be happy to run some comparison tests between TIFFs and JPEGs of the same images.  As I mentioned above, I'm often not sure why some data sets seem to process much more quickly than others.  I suspect there are other factors affecting the speed of the dense-cloud generation in addition to bit-depth, since I've gotten both faster and relatively slower samples/sec with 8-bit JPEGs, from my recollection. 

 

Thanks for the link to Hirschmüller's paper on SGM and the observations about the different dense-cloud settings in Photoscan--the menu descriptions of "ultra-high," "medium," etc. could be made more explicit.  I recall discussions about how the settings relate to downsampling somewhere on Photoscan's website and maybe also in their forums, but their impact on accuracy isn't always made clear.  The rippling effect you mentioned is an interesting phenomenon and I wonder if there are certain kinds of data that tend to cause more or less of this effect--for example, the range and frequency of contrast differences, texture, and other characteristics of the images.


First of all, Iain, when you say that you ran the RAW through Photoscan, what did you actually do? Since PhotoScan can now open DNG files directly, and previous versions opened CR2 files, I want to be sure what is going on. Currently, PhotoScan only opens DNG files and uses "as shot" settings, so even if you process the RAW files and export DNG files, any tweaks to the DNG file are NOT used by PhotoScan.

 

I, like Carla, strongly recommend shooting RAW, removing chromatic aberration, minimizing vignetting, NOT sharpening, and NOT removing lens distortion. The results in PhotoScan will improve every time. It may not be by much, as it really depends on the quality of the camera and lens used, as well as the quality of the on-camera JPEGs that are produced, but results will improve. And yes, Neffra and I, and CHI, teach that it is very important to align, and optimize that alignment, using full-resolution images. It is critical that a very high quality camera calibration be determined to the tenths of a pixel (full-resolution pixels) so that subsequent products are the best they can be.

 

The rippling effect is not really due to the downsampling of images at the dense point cloud stage. Yes, the accuracy of the surface can be affected by the downsampling, but that is because, in effect, not all the original pixels are being matched. Some detail may not be represented in the surface because those pixels are at best averaged. But it is not really the same as downsampling from 36 megapixels to 9, since the pixels are being matched across, hopefully, multiple images, and the downsampling will be different for each image. Which leads back to the rippling effect. It is actually mostly due to poor base-to-height ratio (poor geometry, insufficient parallax), as well as the quality of the sub-pixel interpolation; a higher quality alignment/optimization and camera calibration does help. Also, almost always, only two photos are being used or cover the subject area. The effect can be seen even at Ultra High (full-resolution images) using only two images with insufficient parallax. The rippling artifacts were introduced in version 1.1 and exacerbated for a while by an integer rounding bug. The ripples have been mostly eliminated in the latest version, but very noisy surfaces will still be produced using photos with weak geometry.

 

As far as 16 bit: if the subject has a high dynamic range, or there are deep shadows and/or highlights, and tweaking the shadows and highlights reveals more detail, then tweaking less but saving as 16-bit TIFF files will provide more pixel detail for PhotoScan to work with at all stages.
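
A minimal sketch of why the extra bits help when the shadows get pushed (Python with NumPy; the synthetic gradient and the 3-stop push are arbitrary choices, just to show the quantizing):

```python
import numpy as np

# A smooth shadow gradient occupying only the bottom 5% of the tonal range.
shadow = np.linspace(0.0, 0.05, 10_000)
as_8bit = np.round(shadow * 255).astype(np.uint16)
as_16bit = np.round(shadow * 65535).astype(np.uint16)

# Push the shadows up 3 stops (x8) and count how many distinct levels survive.
for name, img, top in [("8-bit", as_8bit, 255), ("16-bit", as_16bit, 65535)]:
    lifted = np.clip(img * 8, 0, top)
    print(f"{name}: {np.unique(lifted).size} distinct levels left after a 3-stop lift")
```

The 8-bit version posterizes into a handful of bands, while the 16-bit version still has thousands of levels for matching and textures to work with.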

 

The biggest difference in time when processing 16-bit TIFF vs 8-bit JPEG is the loading time. According to Agisoft, the actual processing time is the same no matter what the file type, but my experience is that TIFF files can take substantially longer to load on some computers. Macs seem not to have much different load times, but on Windows I have experienced a 5-second JPEG load time vs a 90-second TIFF load time, even at the same bit depth. Of course, bit depth (image size) also affects load time.

 

However, using 16-bit TIFF files has given better results on some tests, and yes, a little more processing time, since more points may be matched. That is a good thing. The important thing is that 16-bit TIFF files going in allow 16-bit products to be output, thus preserving the original dynamic range. Often the subjects I have worked with don't actually warrant the extra range, but I am sure others may benefit from more bit depth in some outputs. Orthos and textures can be enhanced differently with more bits.

 

So back to RAW and DNG. Hopefully future versions of PhotoScan will support DNG files which have been corrected for chromatic aberration, exposure, etc. and eliminate some of the redundant images we all currently create.

 

Tom


Tom, et al., I recently aligned, in the latest version of Photoscan, a set of JPEGs captured using a 14 mm focal length lens on a micro four-thirds camera (equivalent to 28 mm on a full-frame sensor), which I would have thought would provide pretty good geometry and overlap.  I forget the exact number, but it was about 150 images in total to cover the entire surface of the painting, with about 70 percent horizontal and 35 percent vertical overlap at a ground sample distance of about 250 pixels per inch.  I captured 3 images for each camera position, including +/- 90-degree rotations.

 

In addition to these, the data set also included close-up images captured with a longer, 35 mm focal length lens (70 mm equivalent for a full-frame sensor), for which I also captured at least 3 to 6 sets of overlapping images with +/- 90-degree rotations at each camera position.  Finally, I aligned a set of images of the entire painting captured with the 35 mm lens (70 mm equivalent).  This set had 5 or 6 camera positions with 70 percent horizontal overlap and +/- 90 degree rotations at each camera position.  The idea was to obtain a set of calibration images for every camera position used to document the painting and for each camera position to be aligned with the model, so I could spatially correlate the spectral data from every camera position using the model. 

 

I aligned using the highest setting for the sparse cloud with pair preselection turned off (this seems to help establish tie points between the different calibration groups), and Photoscan successfully aligned the 200+ images on the first attempt.  After the initial alignment, I removed some of the obviously stray points, adjusted the bounding box, and optimized on reprojection uncertainty a few times (Photoscan very quickly gets the reprojection uncertainty to less than 0.6 arbitrary units after a few iterations).

 

I then used gradual selection only once to reduce reconstruction uncertainty to 10 arbitrary units or less, which eliminated roughly 75 percent of the points but still left well over 1,000, and up to 10,000, projections per image to finish the optimization.  I did several more iterations of optimization with gradual selection to get the reprojection uncertainty to less than 0.4, set my scale bars, and continued gradual selection and optimizing until the reprojection uncertainty was less than 0.25 on the slider.  I don't have the final error statistics handy, but as I recall the sigma error was about 0.3 pixel before I built the dense point cloud.  Still, I noticed the slight rippling effect.  Maybe this was due to the longer focal-length images with less favorable geometry?  Maybe I'll try the whole process again using TIFFs.
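
For anyone who wants to make this gradual-selection loop repeatable, here is a rough sketch for the PhotoScan Pro Python console.  I'm writing the class and method names from memory and they vary between versions, so treat this as a starting point to check against the API reference rather than a working recipe; the thresholds are the ones I used above.

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk
Filter = PhotoScan.PointCloud.Filter

def cull(criterion, threshold):
    # Select tie points worse than the threshold for this criterion,
    # delete them, then re-optimize the cameras.
    f = Filter()
    f.init(chunk, criterion=criterion)
    f.selectPoints(threshold)
    chunk.point_cloud.removeSelectedPoints()
    chunk.optimizeCameras()

cull(Filter.ReconstructionUncertainty, 10)      # one pass on reconstruction uncertainty
for threshold in (0.6, 0.4, 0.25):              # then tighten reprojection in steps
    cull(Filter.ReprojectionError, threshold)
```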

 

I also often notice a color shift on the orthomosaics compared to the original images.  The orthos often appear darker and a shade or two cooler in color temperature.


My experience is similar to Taylor's. Even with carefully pre-calibrated cameras (0.2 pixel accuracy) and proper camera networks, this same rippling appears on horizontal surfaces. The problem can actually be worse, in my experience, when classical camera networks are used with longer baselines. I would beg to differ with Tom on the origin of this problem. Clearly these ripples are an interference pattern generated during dense matching by a mismatch of sampling frequencies between the images. Mis-estimation of calibration parameters would result in global deviations, like curving of the entire model if radial distortion were not properly solved. This is not a "bug" but an intrinsic limitation of Semi-Global Matching, and one that Heiko Hirschmüller observed after the development of SGM. Other implementations of SGM exhibit the same problem. What is especially troubling is that this quantizing effect, as Tom notes, is still visible with so-called "Ultra High" matching. The quantizing in this case is happening at the level of the pixel. SGM cannot on its own perform sub-pixel matching; it requires a biquadratic interpolation function, or similar, to accomplish sub-pixel matching, and I don't know if anyone outside of Agisoft knows what they're using. For most users the problem does not disappear when calibration is improved; it only appears to go away when high-overlap, short-baseline image sets are used and then meshed. The Poisson algorithm effectively filters out the high-frequency rippling and applies, in effect, local smoothing. This gives a pleasing surface, but one that also loses information at high spatial frequencies. The indication of this is that when "Ultra High" point clouds are meshed they invariably have many fewer vertices than the original points. This is true of most meshing operations, but the cleaner the original points, the more vertices will be retained in the mesh.
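
For what it's worth, the textbook way to get sub-pixel disparities out of a discrete matcher is to fit a parabola through the cost values around the best integer disparity; whether Agisoft uses this, something biquadratic, or something else entirely is exactly what we don't know. A minimal sketch of the parabolic version (Python; the cost values are invented):

```python
def subpixel_disparity(cost, d):
    # Fit a parabola through the matching cost at d-1, d, d+1 and return
    # the fractional disparity at its minimum (standard three-point quadratic fit).
    c_m, c_0, c_p = cost[d - 1], cost[d], cost[d + 1]
    denom = c_m - 2.0 * c_0 + c_p
    if denom == 0.0:                 # flat cost curve: no sub-pixel information
        return float(d)
    return d + 0.5 * (c_m - c_p) / denom

costs = [9.0, 7.0, 5.0, 3.2, 2.1, 1.8, 2.0, 3.1, 5.0]   # minimum at integer disparity 5
print(subpixel_disparity(costs, 5))                      # ~5.1, a fractional estimate
```

The quality of that interpolation step is one obvious place for pixel-level quantizing to creep in.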

 

As for multi-view 3D saving the day, as far as I know Photoscan uses multi-view only for the sparse alignment on the points generated by SIFT, but processes by pairs when it comes to dense matching with SGM. This may well have changed since the algorithms used by Photoscan were last made public in a 2011 forum post.  


There are several issues that may be confusing the discussion.

 

First, Taylor, if I understand correctly, you captured 3 photos from almost exactly the same position at each camera location, at 0, +90, and -90 degree rotations? That is not recommended. Multiple images from the same location that are aligned together will create problems, as points matched on those cameras will have nearly infinite potential error in Z. The potential error in Z would be infinite if no rotation were done, and probably close to infinite between pixels matched on the +90 and -90 rotated images, although I doubt the camera locations have exactly the same nodal point. If they do (perhaps you are using a calibrated nodal ninja mount), then the same-nodal-point images should be grouped as Station camera groups. Aligning, and then keeping during surface reconstruction, images that are at or very nearly at the same location will cause surface noise. Also, a stereo pair from the 70 mm lens at 70% overlap will have ~6.5 times more potential error in Z. Even if the camera calibration is good to 0.2 pixels, the potential Z error exceeds a pixel (~1.3) for a 70 mm, 70% overlap stereo pair. Having additional images that actually contribute to geometry will reduce that potential error, of course. Most of the rippling that I have seen is in fact due to poor geometry, and perhaps made worse if the camera calibration is not optimized.
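
For reference, the rule of thumb behind those numbers is the standard normal-case stereo relation (this is the generic textbook form; the exact factors depend on the sensor, lens, and overlap used):

```latex
\sigma_Z \;\approx\; \frac{Z}{B}\cdot\frac{Z}{c}\cdot\sigma_x
```

where Z is the distance to the subject, B the baseline between the two camera positions, c the principal distance (focal length), and sigma_x the image measurement precision. At a fixed overlap percentage, a longer lens narrows the footprint and therefore the baseline, so B/Z shrinks and the potential Z error grows; two images taken from essentially the same spot have B close to zero, which is why their contribution to Z is effectively unbounded.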

 

Having said that, I have also seen slightly different artifacts (more like stair steps, though I suppose they might be called ripples) in some dense surfaces. I have had some exchanges with Agisoft about this since I noticed it, and again, it was introduced with version 1.1 when they changed their stereo rectification approach, and was made more noticeable by a bug that has since been fixed. Since it is still there to a lesser degree, it may in fact be due to some quantizing at the level of the pixel, but they have indicated that some of it was/is due to sub-pixel disparity estimation. I am continuing to follow up on what is actually going on.

 

I do not have any inside knowledge of what algorithms are being used by Agisoft, but I don't believe the sparse alignment is actually using SIFT, and it really does do only a pair at a time, though it tries to match additional image pixels to previously matched points on other images and weights them according to the size of the pixels being matched. I am also guessing that they are not using true SGM for dense matching, but I could be wrong.

 

I will try to post what I find out; however, I am going to be traveling for a bit and won't be checking in very often. Plus, I will need to test some things as well, and that will take a while.

 

Tom


Tom, thanks very much for those tips and observations.  I wasn't aware of the importance of capturing +/- 90-degree camera rotations from different camera positions than the 0-degree camera positions to reduce the z-errors, and I'll make this part of my workflow.  The +/- 90-degree rotations aren't exactly at the same positions and optical axis as the 0-degree camera positions, but they're close.  I use a camera rotation device made by Really Right Stuff, but it would be very easy to change the baseline positions before rotating the camera for better calibration, as you suggested.

 

I realize the 35-mm macro lens (70 mm equivalent on a full-frame camera) isn't ideal for geometry, which is why I don't use it as the primary lens for photogrammetry (instead, I use the 14mm or 20mm [equivalent to 28mm to 40mm prime lenses on a full-frame camera] for photogrammetry), but I use it because of its uncoated optics and better transmissivity and contrast in the UV range.  I also capture UV-induced visible fluorescence (UVF), visible and IR images for the same camera positions,  so to get good registration of the spectral images at various wavelengths, I only change the on-lens filtration and light source, not the lens.  Essentially, I'm using photogrammetry as an aid for producing spectral orthomosaics and also as a tool for spatial correlation of spectral data and other analyses.

 

I thought I'd get better alignment of the various camera positions for macro and spectral imaging with the 3D model if I also captured a sequence of visible calibration images for the 35mm macro lens at each camera position.  After I align the set of visible calibration images with the model, I rely on registration of the spectral images using Photoshop or Imalign (which has a feature to align IR and VIS images) to spatially correlate the spectral data with the visible 3D model.  As I understand your suggestions, I'd be better off simply aligning the single visible camera position for the whole sequence of UVR, UVF, VIS, and various IR wavebands, rather than aligning an entire calibration sequence for each position.

 

I've encountered the stair-stepping problem you mentioned when I merge chunks of the same surface captured with different sets of overlapping images (for example, if the capture sequence is repeated with a slightly different focus or a different lens).  Slight differences in the calibration of each chunk or calibration group can result in two overlapping surfaces because of slight errors in the z-depth.  Turning off pair preselection before aligning the images from different groups seems to help, although it takes much longer to align this way.  My assumption has been that this establishes tie points between images from different calibration groups that overlap the same area to reduce the z-errors, and it has worked for several projects (although perhaps not without the rippling effect mentioned above).

 

Finally, here are two references published in 2014 that discuss the algorithms that Photoscan uses.  The first seems to indicate that earlier versions of Agisoft Photoscan used a version of the semi-global matching (SGM, or "SGM-like") algorithm (although this might no longer be true):

 

Dall’Asta, E. and Roncella, R., 2014. A comparison of semiglobal and local dense matching algorithms for surface reconstruction. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XL(5/WG V/1), pp. 1–8. ISPRS Technical Commission V Symposium, 23–25 June, Riva del Garda, Italy.

 

The second discusses a different set of algorithms and includes a link to the Photoscan Forum (http://www.agisoft.com/forum/index.php?topic=89.0) for further discussion:

 

Remondino, F., Spera, M.G., Nocerino, E., Menna, F., Nex, F., 2014. State of the art in high density image matching.  The Photogrammetric Record, 29(146), pp. 144–166, DOI: 10.1111/phor.12063

 

Taylor


  • 9 months later...

Hello!

 

Thought this thread might be the best place to ask a clarifying question about chromatic aberration: is it better to remove/correct chromatic aberration in-camera (if the camera has such built-in correction), or post-capture in a third-party application like Lightroom, CaptureOne, etc.? Will it make a difference either way?

 

Thanks!

 

b

