Slow dense cloud generation on MacBook Pro


2 replies to this topic

#1 Blackbyrde


    Newbie

  • Members
  • 4 posts

Posted 19 September 2016 - 07:05 PM

I just purchased a new 15-inch MacBook Pro (Intel Core i7, 16 GB RAM, AMD Radeon graphics), expecting much better Agisoft Photoscan processing times than my old 2010 MacBook Pro, and roughly the same performance as a PC laptop with similar specs. I've found, though, that whether I run models in Mac OS X or in Windows through Parallels, it takes over 40 hours to build the dense cloud for a model with 75 cameras. I've run models with similar and greater numbers of cameras in 3-6 hours on the Surface Book with an i7 processor and discrete graphics, and I'm wondering why the difference is so great. 

 

I know that a custom desktop rig would be better all around for this, and that I may need to disable a processor core to take advantage of the graphics card. Given the limitations of these platforms, though, with everything else being equal, why would the Mac be performing so poorly? Do I need to switch platforms? 



#2 Taylor Bennett


    Advanced Member

  • Members
  • 134 posts

Posted 22 September 2016 - 11:37 AM

You'll probably do better running Photoscan natively in OS X rather than running it in Windows through Parallels on a MacBook.  And for whatever reason, Photoscan seems to play nicer with Nvidia graphics cards than with Radeon GPUs, perhaps because it can take advantage of the CUDA architecture on Nvidia cards.  That said, I've easily processed much larger datasets on a 2012 Mac Mini with lesser specs than your newer MacBook Pro (quad-core i7 with integrated HD 4000 graphics and 16 GB of RAM), so I don't think your MacBook's specs are the problem; it's more likely the settings you're using, or possibly the image quality.

 

Processing time depends on a number of factors and settings in Photoscan. The program makes the most use of discrete graphics cards (GPUs) during the step that generates a dense point cloud from the sparse cloud, so I'd focus on getting the GPU running.  You should certainly disable one CPU core, as recommended in the Photoscan User's Guide (under Preferences, on the OpenCL tab), to take advantage of the discrete GPU in your laptop.  If you have Xcode on your laptop, you might also try disabling multithreading by opening Instruments and using its Preferences.  It used to be said that Photoscan doesn't run as well with multithreading enabled; I'm not sure whether that's still true of the current version, so I'd try it and see.  I would also check the quality of the images (you can do this in the Photos pane) and make sure you've carefully optimized the sparse point cloud before you try generating the dense point cloud.

 

It wasn't really clear whether the 40 hours required to generate the dense point cloud (Step 2 of the Photoscan workflow) on your laptop also included the time to align the images and generate the sparse point cloud (Step 1 of the workflow).  It would, for example, take much longer to align the images and generate a sparse point cloud if the tie point limit is raised to 80,000 and pair preselection is turned off.  Try a limit of 60,000 tie points, and use the default "generic" setting rather than "disabled" in the pair preselection box.
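To see why turning pair preselection off hurts so much: with preselection disabled, Photoscan attempts to match every possible pair of images, and that count grows quadratically with the number of cameras. A quick back-of-the-envelope sketch (illustrative arithmetic only; the real matching cost also depends on the key/tie point limits and image content):

```python
# Illustrative only: number of unique image pairs that must be attempted
# when pair preselection is set to "disabled" (exhaustive matching).
def exhaustive_pairs(n_cameras):
    # Every unordered pair of images: n * (n - 1) / 2
    return n_cameras * (n_cameras - 1) // 2

print(exhaustive_pairs(75))   # the 75-camera project above: 2775 pairs
print(exhaustive_pairs(150))  # doubling the cameras roughly quadruples the work: 11175 pairs
```

Generic preselection first matches downscaled copies of the images to pick out likely overlapping pairs, so full-resolution matching only runs on a fraction of those pairs.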

 

75 cameras isn't that many images, and I wouldn't expect you to have so much trouble processing a dense point cloud from this data set on your MacBook Pro.  However, lots of things can affect performance, and the time required isn't always predictable.  As an example, I recently processed a dense point cloud from an identical sparse point cloud using both "aggressive" and "mild" filtering settings, and the combined CPU + GPU performance differed by a factor of about 2.4, simply because of the filtering setting (roughly 850 million samples/sec using "aggressive" filtering vs. 350 million samples/sec using "mild" filtering).  Sometimes, trial and error is the best way to improve performance.
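For what it's worth, that speedup factor is just arithmetic on the two throughput figures quoted above:

```python
# Back-of-the-envelope check of the filtering benchmark quoted above.
aggressive = 850e6  # samples/sec with "aggressive" depth filtering
mild = 350e6        # samples/sec with "mild" depth filtering

speedup = aggressive / mild
print(f"aggressive filtering was {speedup:.1f}x faster")  # 2.4x
```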
 



#3 Blackbyrde


    Newbie

  • Members
  • 4 posts

Posted 22 September 2016 - 01:54 PM

Thanks so much, Taylor - this is fantastic! I'll make those adjustments and see how it goes! 





