
Photogrammetry


Sigmund


Hi,

 

I hope a question concerning photogrammetry is not out of place here. The topic comes up in some other postings, so I dare to ask the following:

 

In addition to RTI I would like to create 3D models, and so far I have made some tests with Autodesk 123D. There are two features I need for my research: I need to make high-resolution snapshots, and I need to change the original colour of the surface in the model.

 

Can somebody recommend freeware for these purposes? Does ARC3D have these functions?

 

Thank you!

 

Sigmund

 


Sigmund, 

 

For post-processing photogrammetric data-sets many people use Meshlab. At the moment, however, our group uses almost exclusively CloudCompare (http://www.danielgm.net/cc/). While it doesn't have as many filters as Meshlab, it is much more stable with large data-sets. It also mimics the functionality of the best commercial 3D packages like PolyWorks. 

 

What is your specific application? In general, using a colourized point-cloud will produce better images than layering many OBJ files with separate photographic textures. If you save your data-set as an RGB PTS file it becomes rather easy to manipulate colour (you could even do this in MS Excel). 
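For example, here is a rough Python sketch of that kind of colour manipulation. The "x y z r g b" column layout is an assumption on my part -- PTS exports vary, and some prepend a point count or include an intensity column, so adjust the indices for your own files:

```python
# Rough sketch only: recolour an ASCII point cloud where each line is
# "x y z r g b". That column layout is an assumption -- PTS exports vary
# (some prepend a point count, some include an intensity column).

def recolour_points(lines, colour_fn):
    """Apply colour_fn(r, g, b) -> (r, g, b) to every point record."""
    out = []
    for line in lines:
        parts = line.split()
        if len(parts) < 6:          # header or malformed line: pass through
            out.append(line)
            continue
        x, y, z = parts[:3]
        r, g, b = (int(v) for v in parts[3:6])
        r, g, b = colour_fn(r, g, b)
        out.append(f"{x} {y} {z} {r} {g} {b}")
    return out

def to_grey(r, g, b):
    """Collapse colour to luminance -- one way to strip secondary paint colour."""
    y = int(0.299 * r + 0.587 * g + 0.114 * b)
    return y, y, y

print(recolour_points(["1.0 2.0 0.5 200 40 40"], to_grey))
```

Since each point is just a line of text, the same pattern works for boosting contrast or dropping colour channels entirely.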

 

George


I've had mixed success with Arc3D, which requires 90 percent overlap of images to create a 3D point cloud.  Sometimes, Arc3D has returned a message that the processing was successful, but it was unable to render a 3D model.  In other instances, it has worked very well.  I've used Photosynth successfully, but it only renders a 3D model in the cloud, and doesn't provide a way to download the results.

 

I'm also curious about others' experiences with open-source applications such as SFMToolkit and VisualSFM for Mac.  I'm aware that SFMToolkit requires a 64-bit Windows platform and runs best on machines with an NVIDIA GPU.  I'm trying to decide which of these applications would be easier to install and run on my 64-bit Mac running OS X 10.8.1, which doesn't have a discrete GPU (its integrated graphics are reportedly pretty good).  From my reading so far, it has been difficult to find answers to some pretty basic questions for a beginner at photogrammetry processing, such as:

  1. Can SFMToolkit run on a Mac with BootCamp or Parallels, in either a 64-bit version of Windows XP or Windows 7? 
  2. Would VisualSFM for Mac provide similar functions as SFMToolkit with Meshlab or CloudCompare? 
  3. Are there pros and cons to running any of these on a 64-bit Mac that doesn't have a GPU?  I understand VisualSFM can be run on Mac laptops without a GPU, so I think it would likely be easier to install and run on my machine, but perhaps it has less functionality.  I can be patient if processing is slower. 
  4. Any recommendations for open-source or low-cost lens calibration software (e.g., Agisoft Lens)?
  5. Would lens calibration provide better 3D point cloud results using the open-source tools mentioned above?
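On questions 4 and 5, my understanding is that calibration tools like Agisoft Lens estimate the coefficients of a radial distortion model (commonly Brown's model). A toy Python sketch of what those coefficients do to a point -- the k1/k2 values here are invented purely for illustration:

```python
# Toy sketch of the Brown radial distortion model that calibration tools
# estimate coefficients (k1, k2) for. The coefficient values used below
# are invented purely for illustration.

def distort(x, y, k1, k2):
    """Apply radial distortion to normalised image coordinates (x, y)."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# A point near the image edge moves outward under barrel-style coefficients:
print(distort(1.0, 0.5, 0.1, 0.01))
```

Undistorting goes the other way -- inverting this mapping numerically -- which is what the software does with the estimated coefficients before matching.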

Sorry if this takes up too much bandwidth here.  I see that CHI is giving a workshop that includes RTI, algorithmic rendering, and SFM at the 2013 International Rock Art Conference, which I hope I might be able to attend.  Thanks for any helpful responses!

 

 


If there is enough interest in discussing photogrammetry here, we can set up a new forum for it, and move this content over to it.  I'll wait and see the responses and interest in this topic over the next couple of weeks, and we can make the call.  We are happy to support that if folks think it is useful.

 

Carla


Thank you for the answers!

 

I deal with picture stones bearing low reliefs that are today very weathered; most of them were painted over in recent times. In order to detect and document remains of the carvings, I need to make good snapshots and I need to take out the secondary colour.

 

I have already made some models with Autodesk 123D Catch and I really like the results. It's very easy. I saved them as OBJ files and can open them with Meshlab (apparently, only the 32-bit version of Meshlab can open OBJ files). There I can make snapshots and remove the colour with the "texture" button. Do you think this is a good way?

 

Can CloudCompare also make snapshots (suitable for publishing)?

 

I'm also disappointed by Arc3D.

 

A new forum for photogrammetry would be great!

 

Sigmund


Sigmund, 

 

For what you want to do CloudCompare is the way to go. It has very good tools for creating depth maps that can reveal very small features. We have a few examples here: 

http://wadihafirsurvey.info/photogrammetry.html

Meshlab's Depth Mapping features just aren't as good. CloudCompare can take very high resolution snapshots. 

 

If you can send me some sample data I'd be happy to show you some of the processing techniques we use. 

 

I think we need to have some discussion of Structure from Motion vs. Photogrammetry. While the difference is, to some extent, terminological there are some fundamental differences between the two. 


3 weeks later...

Mark and Carla have mentioned a procedure in the planning stage to use photogrammetry to stitch together mosaic RTIs of larger objects to create a 3D digital surrogate with more accurate shape and texture.  As I understand it, this would combine the advantages of both techniques to capture both low-frequency and high-frequency information in a single model.  I really hope funding becomes available to complete this work.  I'm wondering what type of photogrammetry output would be needed to accomplish this--would it be a point cloud, mesh, digital elevation map, all of these, or another type of output?


The simplest way, IMHO, would be to use a colourized PTS (ASCII) file. Each point would be described by XYZ, RGB, and Nx, Ny, Nz (the surface normal). Such files can be read into CloudCompare. The chief difficulty is registering a point-cloud generated by photogrammetry to an RTI. One would have to apply the same transformation to the RTI as for the epipolar images in a stereo pair, as well as undistorting the same image to account for the lens. All this can be done...in theory. 
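A sketch of what one of those records might look like, in Python. The exact field order is an assumption -- check it against what CloudCompare's ASCII-import dialog expects before relying on it:

```python
# Sketch of one record in the layout described above: XYZ, RGB, then the
# surface normal. The exact field order is an assumption -- check it
# against CloudCompare's ASCII-import dialog before relying on it.
import math

def make_record(x, y, z, r, g, b, nx, ny, nz):
    """Format one point; the normal is unit-normalised so shading behaves."""
    length = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
    nx, ny, nz = nx / length, ny / length, nz / length
    return f"{x:.4f} {y:.4f} {z:.4f} {r} {g} {b} {nx:.6f} {ny:.6f} {nz:.6f}"

print(make_record(1.0, 2.0, 0.5, 180, 170, 160, 0.0, 0.0, 2.0))
```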


Thanks for the information.

 

As I said, I'm not experienced with photogrammetry. I'm on location now and would like to take the opportunity to get started with this technique. I use Autodesk 123D Catch, convert the model to OBJ, and work with it in Meshlab. Does anybody have experience with Autodesk? Generally, I wonder how many pictures I can use; I'm under the impression that Autodesk can't manage more than approximately 50 photos. Have you already worked with Autodesk, and what do you think about it? To me - but I'm a beginner - the results I have obtained are rather good and suitable for my purposes.

 

Sigmund


Sigmund, 

 

It seems to me we should try a little experiment in this thread! Why don't we take the same set of photos and run them through three or four different programs to generate data? 

 

I have used 123D Catch and it makes nice, quick models, particularly for going 360 degrees around objects. I know many people are also getting good results with Agisoft PhotoScan. 

 

George


Hi Folks,

 

I think there have been some tests like this done, and I was trying to find links.  Of course, it never hurts to do your own experiments.

 

Here are a couple of photogrammetry resources I'll throw out there.  They don't answer the specific question, but might be of use.

 

First - the team at the Georgia O'Keeffe Museum did a summer project last summer where they used both RTI and photogrammetry to document a variety of material.  They shot some great how-to videos, and they also did some videos discussing their experience - what was easy, what was hard, etc.  They used Agisoft for processing the photogrammetry, and they also posted a couple of video tutorials of their workflow.  They put up a blog site with all this material.  It can be a bit hard to find things on the site, but if you poke around you will find them. A bunch of the videos are on this page: http://okeeffeimagingproject.wordpress.com/daily-documenting/video/

 

There were a number of talks at the NCPTT-sponsored 3D summit held in San Francisco last summer.  They have been putting up the videos of the talks a few at a time.  Here's an interesting one comparing photogrammetry to laser scanning: http://ncptt.nps.gov/blog/close-range-photogrammetry-vs-3d-scanning-for-archaeological-documentation/

 

There were other talks at that event that would be useful too, but I don't see them up yet.  Not all the talks from the event seem to be tagged in a way that makes them easy to find.

 

I hope these resources are useful.

 

Carla


Sigmund

 

If you have Dropbox you could share a folder with a set of images with me. I'm sure Taylor would also like to be involved. If the Georgia O'Keeffe people (Dale?) are into it, they could process in Agisoft. Let's aim to try Arc3D, 123D Catch, Agisoft, ADAMTech, and Bundler/SfM. 

 

It is crucial that there is some sort of scale bar in your image set so we can create models that can be registered and compared to each other. I could then generate some statistics on the differences between the models. 
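The core statistic would be the cloud-to-cloud nearest-neighbour distance. A brute-force Python sketch of the computation -- CloudCompare does the same thing with octrees so it scales to millions of points; this is only to show what the numbers mean:

```python
# Brute-force sketch of the comparison statistics: for each point in
# cloud A, the distance to its nearest neighbour in cloud B.
import math
import statistics

def cloud_to_cloud(a, b):
    """Nearest-neighbour distance from every point in a to the cloud b."""
    return [min(math.dist(p, q) for q in b) for p in a]

a = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
b = [(0, 0, 0.1), (1, 0, 0.1), (2, 0, 0.3)]
d = cloud_to_cloud(a, b)
print(f"mean={statistics.mean(d):.3f} max={max(d):.3f}")
```

With a scale bar in the images, those distances come out in real units rather than arbitrary model units, which is what makes the models comparable.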

 

George


Hello everyone.

 

I agree with Carla that it never hurts to do your own experiments, but a lot of testing has already been done. I am not sure I can point to any comparison of all the software and online services, but I have tried many of them myself.

 

I have tried the online services Arc3D, My3DScanner, Hypr3D (now Cubify Capture), and 123D Catch - including several that were later purchased by Autodesk - and others that either no longer exist or are not free. I have used several traditional, high-end photogrammetry softcopy systems, as well as several free Bundler/SfM variations, from PhotoSynth to VisualSFM.  I also have many, many hours using several somewhat non-traditional commercial photogrammetry solutions - namely PhotoModeler Scanner, ADAMTech, and Agisoft PhotoScan.

 

I can tell you that all the above can deliver some truly stunning results, visually. However, not all the results are equal with regard to precision, accuracy, and repeatability. Having said that - they all are getting better and better.

 

I don't claim to have all the answers, nor do I claim to have tried everything. I am also truly excited by all the options that are available these days with more showing up constantly it seems. It is difficult and time consuming to try and test everything.

 

So, what do you want any testing to accomplish? If images are acquired properly - and for the most part the requirements are the same, with a couple of exceptions - then they all work. What do you hope to actually compare?

 

Tom


Tom, 

 

This is just intended as a fun exercise to process a single set of imagery in different packages. In the first instance it's the number of points generated, but beyond that it's a question of where points get placed, particularly on edges. The latter will be pretty clear when the point clouds get compared. At the moment there's very little out there in terms of quantitative comparison of different packages. A lot of the time when photogrammetry/SfM results are presented, it's as a textured surface that tells one very little about how useful the model is going to be.  


Tom,

 

thank you for sharing your experiences; this was helpful. As you said, it is really, really time consuming to try and test these things. The bottom line appears to be, according to your experience, that all these services and software packages provide good results - as long as the images meet certain requirements.

 

Nevertheless, I would like to run the tests as George suggests. I would like to test 123D Catch; let's take Taylor's images.

 

Sigmund


Sigmund, 

 

I talked to Taylor and his images are of paintings. I've worked with his data before and I'm not sure it would be as suitable as your rock-art examples (it tends not to produce the really dense point clouds you can get from stone). In particular, I'd like to compare the resulting photogrammetry data-sets with adaptive depth mapping (see below) as well as an ambient occlusion filter. How well do the fine features in your stones come out with the data produced by each software package?

 

If you've not seen it, the 2012 English Heritage report on Stonehenge shows what can be done with depth mapping of range data to reveal otherwise hidden surface features, in this case daggers. http://services.english-heritage.org.uk/ResearchReportsPdfs/032_2012web.pdf  While this data was generated with lasers, photogrammetry definitely produces data of sufficient, if not better, quality to do similar work.  
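The idea behind a depth map is simple to sketch: fit a reference plane to the surface and report each point's signed offset from it, so shallow carvings stand out when those offsets are colour-mapped. A minimal Python version using plain least squares -- the "adaptive" part, fitting the plane locally rather than globally, is left out:

```python
# Minimal sketch of depth mapping: fit a reference plane by least
# squares, then report each point's signed offset from it. The adaptive
# variant fits the plane locally rather than globally.

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c; returns (a, b, c)."""
    sx = sy = sz = sxx = syy = sxy = sxz = syz = 0.0
    n = float(len(points))
    for x, y, z in points:
        sx += x; sy += y; sz += z
        sxx += x * x; syy += y * y; sxy += x * y
        sxz += x * z; syz += y * z

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    # Solve the 3x3 normal equations with Cramer's rule.
    m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    v = [sxz, syz, sz]
    d = det3(m)
    sols = []
    for col in range(3):
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = v[r]
        sols.append(det3(mc) / d)
    return tuple(sols)

def depths(points):
    """Signed offset of each point from the fitted plane (along z)."""
    a, b, c = fit_plane(points)
    return [z - (a * x + b * y + c) for x, y, z in points]

pts = [(x, y, 2 * x + 3 * y + 1) for x in range(3) for y in range(3)]
print(depths(pts))  # a perfectly flat surface gives depths of ~0 everywhere
```

In practice you would run this over a cloud exported from the photogrammetry package and colour the points by depth, which is essentially what CloudCompare's depth-mapping tools automate.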

 

George


George,

 

this looks amazing! If you could help me do something similar with my stones I would be deeply grateful.

 

I have a set of about 70 images. I only made images of the entire stone, no detail shots. The pictures are 16 megapixels. Is that OK, or should I take new images? As far as I know, Autodesk has problems with images larger than 6 megapixels.

 

We can use Dropbox if you want.

 

Sigmund


Sigmund,

 

If you upload the images, I will take a look at them. Don't alter the images in any way; just upload the full 16-megapixel photos. As I understand it, 123D Catch does not necessarily have a problem with large images, but downsamples all uploads before processing to save time and space. 

 

What can you tell us about the images? Are they all taken at the same settings - no change in focus or zoom? 

 

Tom


Sorry for answering late!

 

Here is a folder with some of the pictures: https://www.dropbox.com/home/Bro%20Kyrka%20I_Photos

 

I use an Alpha 55 with a 2.8/50 macro. In this case I used autofocus. The light source was the sun. Unfortunately, the images are not very detailed. I also made some photos of certain details, but I don't believe that Autodesk will be able to fit them in. Please check them; I can easily take new pictures, that would not be a problem!

 

Sigmund


Hi,

 

OK, I have to admit that I do not use Dropbox often. Does this link work?

 

https://www.dropbox.com/sh/dfzu0ju1jevxl1f/ux-QbNiE0Q

 

 

What can you tell me about focus? Should I use manual or auto focus, and is it advisable to always focus on the same point in all images?

 

 

Here is another sample, made at 4 megapixels with manual focus:

 

https://www.dropbox.com/sh/1pdurml53znovvf/hzRG31OmIj


Archived

This topic is now archived and is closed to further replies.
