A few things about photos, prompted by the questions about focus, angles, lenses, and whether all of those things are really necessary. Obviously, some results can be produced without all the "necessary" things. But there are errors, and I can see them. Sure, they are very small, but that is exactly why some things are necessary - to eliminate, or at least minimize, those very small errors. With consistent, correctly captured sets of images, I can solve for and eliminate errors at the sub-pixel level - approaching 1/10th of a pixel. Many things work against that: changing focus; using longer lenses, which degrades the geometry; the opposite - moving too much between exposures, which changes the look angle and produces fewer matched points; leaving image stabilization turned on, which changes the relationship of the lens to the sensor; and, to a much lesser extent, even changing aperture, which can refract the light ever so slightly differently. All of those things, and more, can add up, making it nearly impossible to solve to the sub-pixel.
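Roughly speaking, the number behind that "approaching 1/10th of a pixel" is an RMS reprojection error: how far, in pixels, the solved camera model re-projects each tie point from where it was actually measured. A minimal sketch of the metric - the function name, arrays, and values here are purely illustrative:

```python
import numpy as np

def rms_reprojection_error(measured_px, reprojected_px):
    """RMS distance, in pixels, between measured tie-point positions
    and their re-projections through the solved camera model."""
    residuals = np.asarray(measured_px) - np.asarray(reprojected_px)
    return np.sqrt(np.mean(np.sum(residuals**2, axis=1)))

# Illustrative numbers only: a consistent, well-captured set can settle
# near 0.1 px; mixed focus, lenses, or stabilization settings push it up.
measured    = np.array([[512.20, 300.10], [1024.55, 760.40]])
reprojected = np.array([[512.31, 300.02], [1024.47, 760.51]])
print(f"RMS error: {rms_reprojection_error(measured, reprojected):.3f} px")
```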
These days I can make some fabulous-looking 3D models from images that really should not work at all. I really struggle with whether that is a good thing or a bad thing.
Your Hangvar images. Some have been rotated - probably in Windows Explorer, which overwrites the original EXIF orientation tag so they all claim an orientation of 1 (normal) - and we know that is not true. To have a chance at camera calibration, I need to know the relationship of the lens to the sensor as captured. Some images were captured with image stabilization turned on. They all had some image compression, and JPEG compression artifacts can be a problem at high compression. The 50 mm lens has an effective focal length of 75 mm (a 1.5x crop factor), which for normal strip photos at the prescribed 60% to 70% overlap does not make for a good base-to-height ratio. You did take convergent photos, which improves the geometry - very necessary with longer lenses - but convergence can also cause fewer matches if taken too far. A couple of photos were a little blurry - not surprising given the low light - but blur does impact a sub-pixel solution.
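Two of those problems are easy to check for with a short script: whether the orientation tag still carries a real value, and what base-to-height ratio a given lens and overlap will deliver. A sketch, assuming a reasonably recent Pillow; the tag numbers are the standard EXIF ones, and the example values match the 50 mm / 1.5x crop / 70% overlap case above:

```python
from PIL import Image

ORIENTATION = 0x0112  # EXIF Orientation tag, stored in the main IFD
EXIF_IFD    = 0x8769  # pointer to the Exif sub-IFD
FOCAL_LEN   = 0x920A  # FocalLength tag, stored in the Exif sub-IFD

def capture_info(path):
    """Return (orientation, focal_length_mm) as recorded in the file.
    Orientation 1 on a frame you know was shot rotated means the
    original tag has been rewritten (e.g. by Windows Explorer)."""
    exif = Image.open(path).getexif()
    focal = exif.get_ifd(EXIF_IFD).get(FOCAL_LEN)
    return exif.get(ORIENTATION), float(focal) if focal is not None else None

def base_to_height(overlap, sensor_mm, focal_mm):
    """B/H for a normal strip: the base between exposures is
    (1 - overlap) of the footprint, and footprint / distance
    equals sensor dimension / focal length."""
    return (1.0 - overlap) * sensor_mm / focal_mm

# 50 mm lens on a 1.5x crop body (~24 mm sensor along the strip) at
# 70% overlap gives B/H of roughly 0.14 - the weak geometry noted above.
print(f"B/H ~ {base_to_height(0.70, 24.0, 50.0):.2f}")
```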
And yet the surface looks pretty good. Like I said, I am not sure if that is good or bad.
So, even though I was able to align the far images with the detail images, rotated and not, the camera calibration(s) are less than perfect, and I can detect a slight Z offset between the two sets of images. Without scale, of course, I can't say how much it is in real units. I could intervene manually and add points to minimize the differences, but that takes time and would not be necessary with good "sets" of images.
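For the curious, that kind of check can be scripted: sample both aligned surfaces as points, pair them in plan (XY), and look at the vertical residual. A sketch, assuming both sets come in as N x 3 NumPy arrays in the same arbitrary model frame - which is also why the answer comes out in model units rather than a real distance:

```python
import numpy as np
from scipy.spatial import cKDTree

def median_z_offset(points_a, points_b):
    """Median vertical difference between two aligned point sets,
    pairing each point in A with its nearest plan-view (XY) neighbour
    in B. The result is in arbitrary model units, not real distance."""
    a, b = np.asarray(points_a), np.asarray(points_b)
    tree = cKDTree(b[:, :2])       # index B on XY only
    _, idx = tree.query(a[:, :2])  # nearest XY neighbour in B for each A
    return np.median(a[:, 2] - b[idx, 2])
```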
Hope this helps in the event you do take more photos next week.
George, the data set is a 3D surface, with normals, in an arbitrary coordinate system, not to real scale, oriented in an arbitrary plane.
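If you do put it on real scale and level it later, one thing to keep in mind: the vertices take the full similarity transform, but the normals take only the rotation - scaling or translating them would wreck their unit length and direction. A sketch with NumPy, assuming vertices and unit normals as N x 3 arrays and a 3 x 3 rotation matrix R:

```python
import numpy as np

def apply_similarity(vertices, normals, scale, R, t):
    """Move a surface from the arbitrary model frame into a real-world
    frame: vertices get scale, rotation, and translation; unit normals
    get the rotation only, so they keep their length and meaning."""
    v = scale * (np.asarray(vertices) @ np.asarray(R).T) + np.asarray(t)
    n = np.asarray(normals) @ np.asarray(R).T
    return v, n
```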