
Photoscan Pro vs Photomodeler Scanner


7 replies to this topic

#1 jlutgen

    Member

  • Members
  • 18 posts

Posted 16 March 2015 - 03:50 PM

We currently use the standard Photoscan product. We are considering an upgrade to Agisoft Photoscan Pro, which CHI recommends.  We are also taking a look at EOS Photomodeler Scanner.  Has CHI or anyone else done a side-by-side comparison of these two products and made a case for choosing one over the other?

 

We use 3D photogrammetry on a wide variety of objects, ranging from very small objects shot in the studio to large stone objects shot in the field.  We almost always have a requirement for high resolution and highly accurate measurement.  Down the line we may also get involved in 3D aerial photogrammetry.

 

Jerry Lutgen

HistoryTec



#2 GeorgeBevan

    Advanced Member

  • Members
  • 93 posts

Posted 17 March 2015 - 05:59 PM

I've never worked with Photomodeler personally, but it has been on the market for quite some time. A good question to ask when comparing photogrammetry packages, particularly with your requirements for high accuracy, is which dense matching algorithm the software uses. Photoscan appears to use the comparatively recent Semi-Global Matching algorithm while I think Photomodeler uses the older Normalized Cross Correlation/Least-squares matching. Some of the advantages/disadvantages of the two algorithms are demonstrated empirically in this ISPRS paper:

 

http://www.int-arch-...-5-187-2014.pdf

 

There are a few other recent papers that do accuracy testing on a variety of algorithms/software packages, usually against a "ground truth" acquired by some other scanning technique. Generally the quality of the photography in the field and the parameters chosen for the dense matching can have such huge impact on the eventual quality of the model that true "apples to apples" comparisons are quite hard to make.  



#3 jlutgen

    Member

  • Members
  • 18 posts

Posted 18 March 2015 - 03:07 AM

Thanks, George, for the reference.  While some of it was beyond me, I think I got the general idea.

 

Jerry Lutgen



#4 GeorgeBevan

    Advanced Member

  • Members
  • 93 posts

Posted 19 March 2015 - 05:31 PM

Another good recent paper looking at accuracy is this one:

 

Remondino, F., Spera, M. G., Nocerino, E., Menna, F., and Nex, F. (2014). State of the art in high density image matching. The Photogrammetric Record, 29(146), 144-166.

Photomodeler is not one of the packages compared, but Photoscan is. One thing you should look at closely with Photomodeler is how their SmartPoint technology has developed over the past few years. This is what they call the automated generation of relative matching points between images to provide a solution for the interior and exterior orientation of the cameras. I had heard that SmartPoint wasn't very reliable, but my information may be out of date. Before SmartPoint, coded targets were needed in the scene to link the images and to perform calibration. 
 
This group used Photomodeler to produce a model of an entire cathedral: 
 
Martínez, S., Ortiz, J., Gil, M. L., Rego, M. T. (2013). Recording Complex Structures Using Close Range Photogrammetry: The Cathedral of Santiago De Compostela. The Photogrammetric Record, 28(144), 375-395.
 
They do note at the end that the project might have benefited from the new generation of multi-view software (like Photoscan). They used check-points to test the accuracy of the model against a total station and usually got an average error of about 1.3mm. The first paper shows that sub-mm accuracy is attainable at close range with most of the new packages they tested.


#5 Carla Schroer

    Advanced Member

  • Administrators
  • 365 posts
  • LocationSan Francisco, CA

Posted 24 March 2015 - 08:15 PM

Thanks for the discussion so far!  I'm a bit late to the party because we were doing photogrammetry training last week.
 
At CHI we have looked at and followed a number of photogrammetry solutions.  Our assessment has mostly been based on the features in the product, along with the published info about the approach/algorithms.  In some cases we have seen side by side comparisons using the same image sets.
 
A couple of notes to start with.  Getting good results from photogrammetry requires good photos.  The photos need to be in focus and properly exposed.  They also need to be shot with a good "base to height" ratio, proper 2/3 overlap, and multiple "look angles" at each point on the surface (this is sometimes called redundancy) in order to get high-quality, quantifiable geometry.  In addition, the focus, aperture, and other settings (except shutter speed) need to be locked down, at least for each set of photos; you can mix different groups of images in the same project, but within each "calibration group" these settings must be identical to get a good calibration.
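To make the "base to height" point concrete, here is a small Python sketch of the standard stereo error-propagation rule, sigma_Z ≈ (Z/B) · (Z/f) · sigma_px. The camera numbers below are purely illustrative assumptions, not measurements from any particular rig:

```python
def depth_precision(distance_m, baseline_m, focal_px, match_sigma_px=0.5):
    """Approximate 1-sigma depth uncertainty for a stereo pair.

    distance_m    : camera-to-object distance Z (metres)
    baseline_m    : baseline B between the two camera stations (metres)
    focal_px      : focal length f expressed in pixels
    match_sigma_px: image-matching precision in pixels (assumed 0.5 px)
    """
    return (distance_m / baseline_m) * (distance_m / focal_px) * match_sigma_px

# Hypothetical example: an object 5 m away, ~8300 px focal length.
weak = depth_precision(5.0, 0.5, 8300)   # base-to-height 1:10
good = depth_precision(5.0, 1.5, 8300)   # base-to-height ~1:3.3
print(f"B/H 1:10 -> {weak*1000:.2f} mm, B/H 1:3.3 -> {good*1000:.2f} mm")
```

Widening the baseline from 0.5 m to 1.5 m improves the expected depth precision by a factor of three, which is why the base-to-height ratio matters so much for quantifiable geometry.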
 
With that said, we chose PhotoScan Pro at this time because it is based on the approach of Structure from Motion (SfM), and we also like the feature set.  It is my understanding that Photomodeler uses stereo pairs.  As George points out, there are other algorithms involved.  In an SfM system, the camera calibration, the photo pose (alignment, pitch, roll, and yaw), and the 3D points in space all influence each other and are adjusted and improved together.  In a stereo-pair system, there is generally a separate step to do camera calibration, and then the calibration is locked down and applied to the other processes.  This is in the initial steps of the processing.  Once you are ready to create a dense cloud, mesh, and texture, different algorithms are applied, which use the data you created during alignment, refinement, and optimization.  The bottom line is that having a very high-quality camera calibration, photo pose, and 3D points is critical to getting a high-quality result.  We prefer the way that PhotoScan Pro does this, and we like its workflow for getting this done.
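The quantity that ties calibration, pose, and 3D points together in such a joint adjustment is the reprojection error. This is a minimal Python sketch of that residual using a simple pinhole model (an illustration of the idea, not PhotoScan's actual implementation):

```python
import numpy as np

def project(point_3d, R, t, f, cx, cy):
    """Pinhole projection: world point -> pixel, given rotation matrix R,
    translation t, focal length f (pixels), and principal point (cx, cy)."""
    p_cam = R @ point_3d + t             # world frame -> camera frame
    u = f * p_cam[0] / p_cam[2] + cx     # perspective division
    v = f * p_cam[1] / p_cam[2] + cy
    return np.array([u, v])

def reprojection_error(observed_px, point_3d, R, t, f, cx, cy):
    """Residual minimized in bundle adjustment.  In an SfM system the
    calibration (f, cx, cy), the pose (R, t), and the 3D points are all
    free parameters, adjusted together to shrink this error."""
    return np.linalg.norm(observed_px - project(point_3d, R, t, f, cx, cy))

# A point straight down the optical axis projects to the principal point:
err = reprojection_error(np.array([960.0, 540.0]),
                         np.array([0.0, 0.0, 5.0]),
                         np.eye(3), np.zeros(3), 2000.0, 960.0, 540.0)
print(err)  # 0.0
```

Because every parameter appears in the same residual, improving any one of them (calibration, pose, or point positions) can reduce the error, which is why they are solved simultaneously.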
 
Carla
 


#6 GeorgeBevan

    Advanced Member

  • Members
  • 93 posts

Posted 25 March 2015 - 05:09 PM

It seems to me that the terminological waters are very muddy here. I'm not sure I would contrast SfM directly with stereo photogrammetry (it is not uncommon in publications to hear of "SfM photogrammetry"). My understanding is that SfM is an outgrowth of traditional photogrammetry, developed by the machine vision community to provide quick 3D data. The emphasis in SfM was on speed, and not necessarily extremely high accuracy. For a while it could be meaningfully contrasted with the sort of analytical stereo-photogrammetry practiced mainly in aerial mapping applications for decades. At that point SfM was a huge innovation because it didn't require detailed and expensive-to-acquire information about camera position and pose, or special pre-calibrated metric cameras.

 

Since then the innovations of SfM have been rolled back into the mainstream. Today most stereo photogrammetry packages, at least in my limited experience, allow for the simultaneous calculation of the interior and exterior orientation. Indeed, Photoscan itself permits the separation of the two steps if the user desires (a separate calibration process can be advantageous when large numbers of images are being processed, or when the object does not fill the frame and provide a good look at all parts of the lens). Photoscan also uses the same camera calibration parameters from the Brown/Fryer thick-lens model. I'm told even venerable "traditional" stereo-photogrammetry systems like Geodetic's VStars allow for autocalibration in the field provided there is a good distribution of coded targets. Though I'm not 100% sure, I gather the Photomodeler "SmartPoints" technology can allow for SfM-like autocalibration in the field, provided the scene has enough texture. I don't know whether Photomodeler is as robust as Photoscan in dealing with poorly shot projects or problematic lenses.  
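For reference, the Brown model mentioned above is compact enough to write down. This is a minimal Python sketch using the common k1–k3 radial and p1/p2 tangential coefficient convention (an illustration of the model, not any vendor's exact implementation):

```python
def brown_distort(x, y, k1, k2, k3, p1, p2):
    """Apply Brown radial + tangential distortion to normalized image
    coordinates (x, y), returning the distorted coordinates."""
    r2 = x * x + y * y                                  # squared radius
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3  # radial term
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the point is unchanged; a negative k1
# (barrel distortion) pulls an off-axis point toward the center:
print(brown_distort(0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0))   # (0.5, 0.0)
print(brown_distort(0.5, 0.0, -0.1, 0.0, 0.0, 0.0, 0.0))  # (0.4875, 0.0)
```

Estimating these few coefficients per calibration group is what "autocalibration" amounts to in practice, whether it is done in a separate step or inside the joint adjustment.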

 

IMHO, the innovation of Photoscan, apart from its high level of automation, lies mainly in its use of multi-view photogrammetry and Semi-Global Matching. Multi-view is used at the initial alignment stage with the sparse cloud. Triangulating points using many different rays should, in principle, be more accurate than using only two rays in the stereo method. The next stage of the process in Photoscan, dense reconstruction, is a bit of a black box. It remains unclear to me from what I've seen published that multi-view gets used in dense reconstruction. It has been suggested by some photogrammetrists that Photoscan is doing this final reconstruction stage by stereo pairs and then merging the resulting data into a single cloud (it's clear from the data that the software is doing a lot of smoothing at this stage as well to give relatively inexperienced operators pleasing final results). 
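The multi-ray advantage described above can be sketched as a small least-squares ray intersection (illustrative only; real packages solve this inside the bundle adjustment rather than as a standalone step):

```python
import numpy as np

def triangulate(centers, directions):
    """Least-squares intersection point of N rays, each given by a
    camera center and a direction.  With N > 2 rays the redundancy
    averages down random matching error, which is the advantage of
    multi-view over two-ray stereo triangulation."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Three rays from hypothetical camera stations, all aimed at (1, 1, 5):
target = np.array([1.0, 1.0, 5.0])
centers = [np.array([0.0, 0.0, 0.0]),
           np.array([2.0, 0.0, 0.0]),
           np.array([0.0, 2.0, 0.0])]
dirs = [target - c for c in centers]
print(triangulate(centers, dirs))  # ~[1. 1. 5.]
```

With noisy real-world rays the solver returns the point minimizing the summed squared distances to all rays, so each extra well-placed look angle tightens the estimate.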

 

Semi-Global Matching was a major innovation in stereo matching that Heiko Hirschmüller published in 2005 (fully global matching has been proven to be an NP-complete problem and would take longer than the life of the universe to solve). It offers the possibility of better matching in areas where NCC/LSM would have problems, particularly on the outside parts of the scene. The downside of SGM is that it takes a lot longer than NCC/LSM. Agisoft have done a really impressive job of using GPU computing to improve the processing speed for SGM. I'd say from my own experience that SGM is also problematic in modelling sharp edges. There is a commonly observed "wavy" effect on edges and a tendency to over-match, particularly with the sky/background, a problem usually remedied by masking. The paper I cited in an earlier post shows an example of how SGM can result in systematic error, rather than the sort of random error seen with NCC/LSM. I guess this is a question of how you like your error: systematic or random.  
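For comparison, the NCC score at the heart of the older matchers is simple to write down. This minimal Python sketch (illustrative, not any package's implementation) also shows why NCC is popular: it is invariant to linear brightness and contrast changes between the two images:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two image patches.
    Returns 1.0 for patches identical up to brightness/contrast,
    values near 0 for unrelated patches."""
    a = patch_a - patch_a.mean()   # remove brightness offset
    b = patch_b - patch_b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
p = rng.random((7, 7))                 # a random 7x7 "image patch"
print(ncc(p, 2.0 * p + 5.0))           # ~1.0 despite the exposure change
```

Matching with NCC means sliding a template patch along the epipolar line and taking the best score per pixel independently, which is what produces the random (rather than systematic) error pattern mentioned above; SGM instead adds smoothness penalties aggregated along scanline paths.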

 

This is just my two cents. I know others will have different views on the history here. These thoughts come out of my own ongoing struggle to clarify the terminology for myself and get a grip on the underlying technology. 



#7 Carla Schroer

    Advanced Member

  • Administrators
  • 365 posts
  • LocationSan Francisco, CA

Posted 06 April 2015 - 06:00 PM

We have just done a major update to our Photogrammetry technology page - which includes some explanatory material about structure from motion based systems.  We will be updating the graphics and examples over the next week or two as well.  Since some of these issues came up in this thread, I thought I'd put in a link to our new page.

 

http://culturalherit...etry/index.html

 

Carla



#8 niman

    Newbie

  • Members
  • 1 post

Posted 28 September 2016 - 01:28 PM

Hi Carla.

 

I just read that link you posted above and noticed that it states:

 

"PhotoScan then uses multi-viewpoint stereo algorithms to build a dense point cloud"

 

However, in his post above yours George Bevan says:

 

"It remains unclear to me from what I've seen published that multi-view gets used in dense reconstruction"

 

Do you know for sure that the Cultural Heritage Imaging "photogrammetry" article is accurate with respect to this point?

 

Any further comment would be very appreciated.





