
Photogrammetry


Sigmund


I love these picture stones! We captured some wonderful examples at Gotlands Museum in Visby.

 

I'm not sure focus is the issue. You need to get a better range of angles around the stone. If your camera is sufficiently high resolution, I prefer to handle this sort of subject with "convergent pairs". We're finding with a 36 MP D800 that we can get fantastic results on moderately sized panels without using a "strip project". You definitely don't want to change focus during the shoot, nor do you want to change the aperture. Shutter speed, however, can change without altering the photogrammetric parameters of the lens.


Those are indeed extraordinarily beautiful stones, Sigmund!  I'm looking forward to learning more about them and seeing the results of George's, Tom's, and your efforts on the photogrammetry. 

 

Also, I'd like to know more about the different ways of doing photogrammetry--when it's better to use a strip, convergent pairs, or other techniques--especially in situations where a very high density of points or better depth resolution is needed. There are many sources of information online, but it can be hard to sift through them for answers to specific questions. The BLM Tech Note 248 that can be downloaded from CHI's website has a lot of information, but it doesn't cover everything (for example, it doesn't say very much about the base-to-height ratio).

 

I wonder if this topic has generated enough interest to move it, together with the other photogrammetry thread started under Projects, into a new forum, as Carla suggested above?


I did a quick project on the sets of photos.

 

More photos would be much better. 

 

Taylor - you are right - there's not much mention of base-to-height. We usually try to shoot with a 24 to 28 mm lens (full-frame equivalent) at 60 to 70% overlap, which gives you a base-to-height ratio in the range of 1:1.67 to 1:2.5. A base-to-height anywhere from 1:1.5 up to 1:4.0 is good. Over 1:8 is the beginning of bad geometry, but with software that takes advantage of multiple photos, the additional redundancy can overcome much of that.
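For anyone who wants to sanity-check their own setup, these numbers follow from a simple rule of thumb: with sensor width w, focal length f, and overlap fraction o, the stereo base is the un-overlapped fraction of the ground coverage, so B/H ≈ (w/f) × (1 − o). A minimal sketch of the arithmetic (an illustration, not a prescribed formula):

```python
def base_to_height(focal_mm, overlap, sensor_mm=36.0):
    """Approximate base-to-height for a normal strip of photos.

    Assumes a full-frame-equivalent sensor width of 36 mm and that the
    stereo base is the un-overlapped fraction of the ground coverage.
    Returns H/B, i.e. the 'x' in a '1:x' ratio.
    """
    b_over_h = (sensor_mm / focal_mm) * (1.0 - overlap)
    return 1.0 / b_over_h

# Reproduces the range quoted above: 24 mm at 60% and 28 mm at 70% overlap.
print(f"1:{base_to_height(24, 0.60):.2f}")  # -> 1:1.67
print(f"1:{base_to_height(28, 0.70):.2f}")  # -> 1:2.59, i.e. roughly 1:2.5
```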

 

When taking photos, try not to refocus within a "set" of photos: a series of overlapping photos - some convergent shots are okay and may be desirable - plus photos taken with the camera rotated.

 

It is okay to then refocus from a different distance - perhaps for detail stereo photos - for another "set". Make a note of, or otherwise know, which photos belong to each set - take a "tourist" photo to separate them.

 

Turn off the auto-rotate function in the camera and NEVER rotate the photos when looking at them in Windows.
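As a quick sanity check before processing, something like this Pillow sketch can dump each file's reported EXIF orientation alongside its pixel dimensions (the folder name is hypothetical). Note that a file rotated in Explorer may still claim orientation 1, so the tag alone isn't proof - look for tags or dimensions that disagree with how you actually held the camera:

```python
from pathlib import Path
from PIL import Image  # Pillow

ORIENTATION = 0x0112  # standard EXIF Orientation tag

for path in sorted(Path("stone_photos").glob("*.jpg")):  # hypothetical folder
    with Image.open(path) as img:
        tag = img.getexif().get(ORIENTATION)
        size = img.size
    # tag 1 = "normal"; other values (or a missing tag) mean the file was
    # flagged or rewritten somewhere between the camera and your project.
    print(path.name, "orientation:", tag, "pixels:", size)
```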

 

Try to keep the aperture at f/8 or smaller (a bigger f-number) to maintain a good depth of field, but by all means do what you can to get a crisp picture - without autofocus, that is.

 

The attached files show the normals produced from photogrammetry and the RGB point cloud - no actual photo draped on the surface, just 3D points with RGB values.

 

The first set of photos also worked - surprisingly, actually - and I can upload a surface if you like. It has holes because of too few photos and limited look angles, but it's not bad considering.

 

Again, more photos from different locations will always help. You can move up and down also.

 

Tom

post-240-0-31210600-1366918376_thumb.jpg

post-240-0-20382900-1366918377_thumb.jpg


I got about 350K points just working in 3DM Analyst and without tweaking the epipolar image settings. Here's the colourized PTS file that I got: https://dl.dropboxusercontent.com/u/17766689/Hangvar.pts  Tom, did you generate your final dense surface in PhotoScan?
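For anyone unfamiliar with the format: a PTS file is typically plain text - a point count on the first line, then one point per line as X Y Z intensity R G B. A minimal loading sketch under that assumption (the exact column layout of any given export should be checked against the file):

```python
import numpy as np

with open("Hangvar.pts") as f:
    count = int(f.readline())             # header line: number of points
    data = np.loadtxt(f, max_rows=count)  # assumed: X Y Z intensity R G B

xyz = data[:, :3]                         # coordinates
rgb = data[:, 4:7] / 255.0                # colours rescaled to 0-1 floats
print(xyz.shape, rgb.min(), rgb.max())
```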

 

Sigmund, did you refocus at all when you were shooting the second set of photos? 

 

Has anyone tried some of the web services on this data yet?


Thank you for all these tests and information!

 

These photos were just some samples. Here are more photos of a detail of the second stone and the model I made with Autodesk 123D Catch:

 

https://www.dropbox.com/sh/1pdurml53znovvf/hzRG31OmIj

 

I will take new and more photos today and next week! Did I understand correctly that I should try to take all photos from approximately the same distance, without changing the focus? For the second set of photos I tried not to refocus at all. Is that really necessary? Another question: how many rows at which angles would you recommend? I made two rows, one a little from above and one a little from below - perhaps 110° and 70°.

 

George - you worked with the Gotland picture stones? Tell me more about it! I'm here for three months and my aim is to document the most interesting slabs with RTI and perhaps photogrammetry. At Gotlands Museum I'm done with RTI for now, and I have just started trying photogrammetry there. It was hard work because I had no assistance and the slabs are standing upright. I only captured small portions, approx. 30-40 cm, so I could manage the measurements. Here is an example: post-233-0-01394300-1366959958_thumb.jpg

 

For all who are interested in the stones in general, I have put my new article in the photo folder.

 

 


Hi,

 

In Autodesk 123D Catch you can refine your models to standard or maximum quality. However, when I submit photos at 16 megapixels this does not work. I asked Autodesk and they wrote to me:

 

"Sorry for the confusion. Please do not submit images
higher than 6 megapixels. Any images higher than this may be able to
process at mobile mesh setting, but run into issues when processing at a
higher mesh (standard, maximum)resolution. When you submit photos
between 3-6 megapixels, you can process at any of the desired mesh
resolution settings (mobile, standard, maximum). For best results, and
what I do, I submit 6 megapixel images. Then, when I get my result, I
select the part of the mesh I am focused on and resubmit at a higher
mesh resolution setting. When you resubmit for a different mesh
resolution and have a portion of the mesh selected, only the bounding
box of that mesh will be reprocessed at the requested mesh processing
setting."

 

I wonder if it is better to take photos at 16 megapixels and leave the model as it was created, or to take photos at only 4-6 MP and then refine them to maximum quality. I have to admit that I do not really understand the difference. How do I get the best model?
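One practical middle ground (a sketch, not an official 123D Catch workflow): shoot at the full 16 MP so you keep high-resolution originals for other software, then downsample copies to about 6 MP before uploading. A rough Pillow example, with hypothetical folder names:

```python
from pathlib import Path
from PIL import Image

MAX_PIXELS = 6_000_000                       # the ceiling Autodesk suggests above
src, dst = Path("originals"), Path("for_123d")
dst.mkdir(exist_ok=True)

for path in sorted(src.glob("*.jpg")):
    img = Image.open(path)
    w, h = img.size
    if w * h > MAX_PIXELS:
        s = (MAX_PIXELS / (w * h)) ** 0.5    # uniform scale down to ~6 MP
        img = img.resize((round(w * s), round(h * s)), Image.LANCZOS)
    img.save(dst / path.name, quality=95)    # high quality to limit JPEG artifacts
```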


Sigmund, 

 

We were at Gotlands Museum doing an RTI/photogrammetry workshop with the RAA (Swedish National Heritage Board) in May. I take it you've met Laila Kitzler Ahfeldt already? She's done some amazing work on surface metrology with picture stones using structured light scanning. One of the exciting things we demonstrated was that the photogrammetry could produce data equivalent to the structured light, without the requirement that the stone be in a shaded enclosure. Laila should have most of the data we collected (RTI and photogrammetry), although some of the photogrammetry projects were only built this fall.

 

It's crucial for these sorts of projects that some sort of scale bar be put in the images, especially if you want to get metrics on the carvings.  
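In case it's useful, here is a minimal sketch of how a scale bar translates into a scaled model, assuming you can pick out the bar's two endpoints in the unscaled point cloud (all file names and coordinates below are hypothetical):

```python
import numpy as np

points = np.loadtxt("model_points.xyz")   # hypothetical Nx3 unscaled cloud

# Endpoints of the scale bar as picked in model units, and its real length.
p1 = np.array([0.112, 0.547, 0.031])
p2 = np.array([0.608, 0.551, 0.029])
bar_length_m = 0.50                       # e.g. a 50 cm scale bar

scale = bar_length_m / np.linalg.norm(p2 - p1)
points_scaled = points * scale            # the whole cloud, now in metres
np.savetxt("model_points_scaled.xyz", points_scaled)
```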


Sigmund,

 

A few things about photos, and about the questions on focus, angles, lenses, and whether these things are really necessary. Obviously, some results can be created without all the "necessary" things. But there are errors - I can see them. Sure, they are very small, but that is why some things are necessary: to eliminate, or at least minimize, very small errors. With consistent, correctly captured sets of images, I can solve for and eliminate errors at the subpixel level - approaching 1/10th of a pixel. Changing focus; using longer lenses, which degrades the geometry; the opposite - moving too much, which changes the look angle and causes fewer matched points; having image stabilization turned on, which changes the relationship of the lens to the sensor; and, to a much lesser extent, even changing aperture, which can cause light to refract ever so slightly differently - all those things, and more, can add up, making it nearly impossible to solve to the sub-pixel.
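To make "solve to the sub-pixel" concrete: the adjustment minimizes the distance, in pixels, between where each tie point was observed in an image and where the solved cameras reproject it. A minimal illustration of that error metric:

```python
import numpy as np

def rms_reprojection_error(observed_px, projected_px):
    """RMS distance in pixels between observed and reprojected tie points."""
    residuals = observed_px - projected_px      # N x 2 pixel residuals
    return float(np.sqrt(np.mean(np.sum(residuals**2, axis=1))))

# A consistent, correctly captured set can approach ~0.1 px RMS; the error
# sources listed above (refocusing, stabilization, rotation) push it up.
```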


These days, I can make some fabulous looking 3D models from images that really should not work at all. I really struggle with whether that is a good thing, or a bad thing.


Your Hangvar images: some have been rotated - probably in Windows Explorer, which destroys the original EXIF tag, so they all say they have an orientation of 1 (Normal) - and we know that is not true. In order to have a chance at camera calibration, I need to know the relationship of the lens to the sensor as captured. Some images were captured with image stabilization turned on. They all had some image compression - JPEG compression artifacts can be a problem at high compression. The 50 mm lens has an effective focal length of 75 mm, which for normal strip photos and the prescribed 60% to 70% overlap does not make for a good base-to-height ratio. You did take convergent photos, which improves the geometry - very necessary with longer lenses - but that can cause fewer matches if too convergent. A couple of photos were a little blurry - not surprising considering the low light - but that does impact a sub-pixel solution.
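Putting numbers on that with the same rule of thumb as in the earlier sketch, a 75 mm (full-frame equivalent) lens in a normal strip gives:

```python
for overlap in (0.60, 0.70):
    b_over_h = (36 / 75) * (1 - overlap)       # 75 mm full-frame equivalent
    print(f"{overlap:.0%} overlap -> 1:{1 / b_over_h:.1f}")
# -> 1:5.2 and 1:6.9, drifting toward the 1:8 'bad geometry' territory
# flagged earlier in the thread.
```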


And yet the surface looks pretty good. Like I said, I am not sure if that is good or bad.


So, even though I was able to align the far images with the detail images, rotated and not, the camera calibration(s) are less than perfect and I can detect a slight Z offset between the two sets of images. Without scale of course, I can't really say how much it is. I could manually intervene and add points to minimize the differences, but that takes time and would not be necessary with good "sets" of images.

 

Hope this helps in the event you do take more photos next week.

 

George, the data set is a 3D surface, with normals, in an arbitrary coordinate system, not to real scale, oriented in an arbitrary plane. 

 

Tom


George,

 

PhotoScan uses multi-view stereo reconstruction to generate the dense surface model, but employs a couple of different algorithms depending on the type of object. It tries to match every Nth pixel - N being 1, 2, 4, 8, or 16 - of each photo across multiple photos. Some filtering is performed, but I do not know what they use. Some artifacts can be created depending on how aggressive the surface matching is, but those are easy to identify. On the other hand, the differences between stereo models are minimized or eliminated and do not have to be reconciled after the fact.
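As a back-of-the-envelope illustration of what that every-Nth-pixel setting means for point density (the exact sampling PhotoScan performs internally is not documented here, so treat this as rough scaling only):

```python
def depth_samples(width, height, n):
    """Rough per-photo depth samples if every Nth pixel (per axis) is matched."""
    return (width // n) * (height // n)

# Using the 7360 x 4912 frame of the 36 MP D800 mentioned earlier:
for n in (1, 2, 4, 8, 16):
    print(f"N={n:2d}: ~{depth_samples(7360, 4912, n):,} samples")
# -> from ~36 million samples per photo at N=1 down to ~141,000 at N=16.
```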

 

Tom


That's interesting to know. DTM Generator, I suppose, matches only where the matches are strongest (with certain constraints on feature rate, WinSize, etc.). I do like the final PhotoScan surface. It's very clean...almost too clean. Comparison with my output is quite difficult unless I also clean and remesh. Although it doesn't show that much, I attach a difference map made after the data sets were aligned by ICP and compared in PolyWorks: https://dl.dropboxusercontent.com/u/17766689/image_6.png  I'm pretty sure 123D Catch is performing similar cleaning and remeshing operations. On the other hand, maybe multi-view stereo is just so good that this is "raw" data!
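For readers without PolyWorks, a roughly equivalent align-and-compare step can be sketched with the open-source Open3D library (assuming a recent Open3D, both surfaces exported as point clouds at comparable density, and a reasonable initial alignment; file names and the correspondence threshold are hypothetical and depend on the model's units):

```python
import open3d as o3d

source = o3d.io.read_point_cloud("photoscan.ply")     # hypothetical exports
target = o3d.io.read_point_cloud("3dm_analyst.ply")

# Refine the rough alignment with point-to-point ICP.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.005,  # tune to the cloud's units and scale
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
source.transform(result.transformation)

# Per-point distances to the other cloud: raw material for a difference map.
distances = source.compute_point_cloud_distance(target)
print("mean deviation:", sum(distances) / len(distances))
```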

 

At this point it would be nice to have a common set of high-quality "reference images" to work on. I'm involved in a similar project for evaluating rock outcrops with LiDAR and photogrammetry here: http://geol.queensu.ca/faculty/harrap/RockBench/  There's also a nice comparison of LiDAR and photogrammetry on a quantitative level here: http://www.rocksense.ca/Research/PlaneDetect.html  One could even imagine similar reference data sets for RTI. Because texture is so important to photogrammetric matching, the sorts of test objects used for laser scanning - e.g., metal gauge blocks - are inappropriate. I think good, scaled photos of, say, a granite surface plate would be incredibly useful. These surface plates have certificates specifying flatness, so one could compare the nominal variance of the surface plate to the variance in the photogrammetric data - or the laser data, for that matter.
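A sketch of the comparison such a reference set would enable: fit a plane to the reconstructed plate and compare the residuals against the certified flatness (the file name is hypothetical; the points are assumed to be scaled):

```python
import numpy as np

pts = np.loadtxt("surface_plate.xyz")      # Nx3 scaled photogrammetry points
centroid = pts.mean(axis=0)

# Best-fit plane: its normal is the singular vector with the smallest
# singular value of the centred point matrix.
_, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
normal = vt[-1]

residuals = (pts - centroid) @ normal      # signed distances to the plane
print(f"RMS deviation:  {residuals.std():.6f}")
print(f"peak-to-valley: {residuals.ptp():.6f}")
# Compare peak-to-valley against the plate's certified flatness tolerance.
```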

 

I'm belabouring this point not because I'm particularly unhappy with the output of any particular package. As you point out, it's stunning what sorts of models even the free web portals pump out from seemingly poor inputs. But routinely I'm asked in workshops what the differences are between software at the free, $500, $5,000, or $15,000 levels (or, if you include Sirovision and VStars, $150,000). The answer is going to depend a lot on the final application and what sort of post-processing the end user wants. Is it just a "cool model" to show in a display, or is it intended for depth-mapping to reveal features that may only be tens of microns deep? As you know, many users haven't even formulated what questions they'll be asking of the data...but for those who have, how are they best to spend their hard-won research dollars?


Sigmund,

 

The new images are very nice. They seem consistent and the camera calibration is much better. When combined with the 4 images from farther away, the alignment and surface is now seamless between the two sets. Much better results.

 

Do you have some way of providing scale? If any comparisons are to be made, it would be better if things were scaled correctly and put into the same arbitrary coordinate system. 

 

 

George,

 

I am still pondering some things in your last post, but I have had a nasty cold and I'm not sure I have the energy to respond just yet. I just wanted to give Sigmund some feedback.

 

Tom


  • 2 years later...
Hello everyone,
Have you tried RealityCapture? https://www.capturingreality.com/Home
 
RealityCapture is state-of-the-art photogrammetry software that automatically extracts beautiful and accurate 3D models from images, laser scans and other inputs.
It can process laser scans and it scales linearly, which sets it apart from anything else. It is the only software to mix laser scans and photographs easily, without seams, at astounding speed. The algorithms are out-of-core, so you can reconstruct unlimited-size models on limited hardware (a PC with just 8 GB RAM). The models are complete and without seams. Of course, geo-referencing using ground control points, flight logs, and GPS data in EXIF goes without saying. There are also more advantages, like camera-rig support, arbitrary ortho-projections, a nice and intuitive UI, etc.
 
Have a closer look at what RealityCapture is capable of: a reconstruction of a castle - interior and exterior, all in one model - from 28,975 images at 36 or 80 Mpx resolution and 118 terrestrial laser scans (Leica). It was all processed on a single PC. The model consists of 1,500,000,000 (1.5B) triangles: https://youtu.be/E7LLELllus4.
A few others:


Here you can see the advantages of automatically mixing images and laser scans: https://youtu.be/1-4RsCIuKCw.
 
You can download a test dataset here: http://rcdata.capturingreality.com/testset1.zip.
 
The software also allows you to filter, simplify, texture, and export models. You can also export -> post-process (retopologize, clean) -> re-import -> texture your models...
 
 

 

