You'll probably do better running PhotoScan natively in OS X rather than in Windows through Parallels on a MacBook. And for whatever reason, PhotoScan seems to play nicer with Nvidia graphics cards than with AMD Radeon GPUs, perhaps because it can take advantage of the CUDA architecture on Nvidia cards. That said, I've easily processed much larger datasets on a 2012 Mac Mini with lesser specs than your newer MacBook Pro (quad-core i7 with integrated HD 4000 graphics and 16 GB of RAM), so I don't think the specs of your MacBook are the concern. It's more likely the settings you're using, or possibly the image quality.
Processing time depends on a number of factors and settings in PhotoScan. The program makes the most use of a discrete graphics card (GPU) during the step of generating a dense point cloud from the sparse cloud, so I'd focus on getting the GPU running. You should certainly disable one CPU core, as recommended in the PhotoScan User's Guide (under Preferences, on the OpenCL tab), to take advantage of the discrete GPU on your laptop. If you have Xcode on your laptop, you might also try disabling multithreading by opening Instruments and changing its Preferences. It used to be said that PhotoScan doesn't run as well with multithreading enabled, but I'm not sure whether that's still true of the current version, so I'd try it and see. I would also check the quality of the images (you can do this in the Photos pane) and make sure you've carefully optimized the sparse point cloud before you try generating the dense point cloud.
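As an aside, if you ever script this instead of clicking through Preferences, PhotoScan stores the GPU selection as a bitmask (bit N enables device N). Here's a minimal sketch of building that mask in plain Python; the `PhotoScan.app.gpu_mask` attribute mentioned in the comment is from the PhotoScan Python API reference, so verify it against your version:

```python
# PhotoScan exposes GPU selection as a bitmask: bit N enables device N.
# In the PhotoScan Python console you would assign the result to
# PhotoScan.app.gpu_mask (attribute name per the PhotoScan 1.x API
# reference; check your version's documentation).

def gpu_mask(device_indices):
    """Build a bitmask enabling the listed GPU device indices."""
    mask = 0
    for i in device_indices:
        mask |= 1 << i
    return mask

# Enable only the first device (typically the discrete GPU on a MacBook Pro):
print(gpu_mask([0]))  # -> 1
```

On a single-discrete-GPU laptop the mask is simply 1, but the helper generalizes to multi-GPU workstations.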
It wasn't really clear whether the 40 hours required to generate the dense point cloud (Step 2 of the PhotoScan workflow) on your laptop also included the time to align the images and generate the sparse point cloud (Step 1 of the workflow). It would, for example, take much longer to align the images and build the sparse cloud if the tie point limit is raised to 80,000 and pair preselection is turned off. Try a tie point limit of 60,000 and use the default "generic" setting rather than "disabled" in the pair preselection box.
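For what it's worth, those alignment settings can also be applied from PhotoScan's built-in Python console instead of the dialog. This is only a sketch against the PhotoScan 1.x scripting API (the parameter and constant names, `tiepoint_limit` and `GenericPreselection` in particular, may differ in your version, so check the API reference):

```python
import PhotoScan  # only available inside PhotoScan's built-in Python console

chunk = PhotoScan.app.document.chunk

# Step 1: align photos with a 60,000 tie point limit and generic
# pair preselection (the dialog's "generic" setting).
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection,
                  keypoint_limit=40000,
                  tiepoint_limit=60000)
chunk.alignCameras()
```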
75 cameras isn't that many images, and I wouldn't expect you'd have so much trouble processing a dense point cloud from this data set on your MacBook Pro. However, lots of things can affect performance, and the time required isn't always predictable. As an example, I recently processed dense point clouds from an identical sparse point cloud using both "aggressive" and "mild" filtering settings, and the combined CPU + GPU throughput differed by a factor of nearly 2.5, simply because of the filtering setting (roughly 850 million samples/sec using "aggressive" filtering vs. 350 million samples/sec using "mild"). Sometimes trial and error is the best way to improve performance.
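To put those numbers in perspective, here's the arithmetic on how that throughput gap translates into wall-clock time (the 10-hour job duration is hypothetical, purely for illustration):

```python
# Throughput figures from the dense-cloud runs described above.
aggressive = 850e6  # samples/sec with "aggressive" filtering
mild = 350e6        # samples/sec with "mild" filtering

ratio = aggressive / mild
print(round(ratio, 2))  # -> 2.43

# For the same amount of work, a job that takes 10 hours with
# "aggressive" filtering would take about a day with "mild"
# (10 hours is a hypothetical duration, for illustration only).
hours_aggressive = 10.0
hours_mild = hours_aggressive * ratio
print(round(hours_mild, 1))  # -> 24.3
```

So a single dropdown choice can turn an overnight job into a full day of processing, which is why it's worth experimenting with these settings before blaming the hardware.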