Everything posted by GeorgeBevan

  1. Taylor, I'm not sure the camera calibration parameters used in the thick lens model will easily translate to the still fairly simple lens correction tools in Photoshop. For instance, I don't think principal point, decentering distortion or in-plane distortion are modeled in the Photoshop tool. You could do what you are proposing in Matlab and possibly Panotools as well. You'll need to export the Photoscan calibration into another format, since it expresses all of its calibration parameters in pixels (UV co-ordinates) rather than in the X/Y units used in the standard photogrammetric distortion equations. I wonder if DXO Optics Pro might be able to import the standard photogrammetric parameters? In CalibCam I can export undistorted RTI image stacks for measurement but am unsure about what tools to chain together to accomplish this with Photoscan.
  2. Just an FYI for everyone working with point clouds and meshes....a new version of Meshlab was released yesterday (2 April 2014) that promises to fix a lot of the bugs that had cropped up since the last release in 2012.
  3. Very interesting work! Out of curiosity, what sort of normal map do you get from the mesh surface of the object without applying the normal map from RTI as texture? This would be an informative comparison. I think the coding you propose is actually quite simple, especially if you save your point cloud as an xyz rgb file. You just need to apply the inverse transformation from the 8-bit integer back to the normal vector and call the result Nx, Ny, Nz instead of R, G, B (see the sketch below). You could even do it in something like Excel. In CloudCompare you can very easily change the columns from RGB to Nx, Ny, Nz when you import the cloud.
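A minimal sketch of that inverse transformation, assuming a plain-text xyz rgb point cloud and the usual value/255*2-1 normal-map encoding; the file names and column order here are only illustrative:

```python
# Sketch: decode 8-bit RGB normal-map colours back into unit normals
# for an xyz rgb point cloud. Assumes columns x y z r g b.
import numpy as np

data = np.loadtxt("cloud_with_normalmap_colours.xyz")      # hypothetical file
xyz, rgb = data[:, :3], data[:, 3:6]

normals = rgb / 255.0 * 2.0 - 1.0                           # invert the encoding
normals /= np.linalg.norm(normals, axis=1, keepdims=True)   # re-normalise

np.savetxt("cloud_with_normals.xyz", np.hstack([xyz, normals]), fmt="%.6f")
```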
  4. To effectively undistort the normal maps from RTI you'd need the camera parameters from exactly the camera set-up you used for the RTI, i.e. same focal length, focal distance and f-stop. As CHI has indicated in their workflow, it is always prudent to do a calibration sequence with your RTI set-up just in case you need those parameters at some time in the future. I don't know how accurate it would be to recover those parameters after the fact with the lens calibration software. It may be good enough for government work. In principle it should be possible to align photogrammetry and RTI data quite precisely. After all, the image used for texture in an OBJ file (the image pointed to by the MTL file) is just an undistorted image taken from the original photogrammetry image series. If this texture image could be swapped for a normal map that had undergone the same transformation as the texture image (assuming that you've used imagery for photogrammetry taken with the same set-up and positions as your RTIs) you'd have your alignment (see the sketch below). Maybe someone is working on this sort of thing right now? I'm very interested to hear how you find XNormal/Blender for applying RTI normals to a mesh! A flat painting is a good candidate for this since the normals should suffer a lot less from the errors introduced by objects that have more depth and self-shadowing.
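For what it's worth, once a normal map has been warped and undistorted to match the texture image, pointing the OBJ's material at it is trivial. A hypothetical sketch (the file names are assumptions, not part of any published workflow):

```python
# Sketch: swap the diffuse texture referenced by an OBJ's .mtl file for an
# RTI normal map that has undergone the same undistortion as the texture.
from pathlib import Path

mtl = Path("model.mtl")                        # hypothetical material file
out_lines = []
for line in mtl.read_text().splitlines():
    if line.strip().startswith("map_Kd"):      # diffuse texture entry
        line = "map_Kd rti_normals_undistorted.png"
    out_lines.append(line)
Path("model_normals.mtl").write_text("\n".join(out_lines) + "\n")
```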
  5. Taylor, My two cents... The algorithm developed by Nehab et al. is amazingly powerful, but I fear it would be extraordinarily difficult to implement with your data even if you had the source code. The scanner Nehab developed used the same two machine vision cameras to capture both the range data by structured light AND the photometric stereo produced by the six (?) lamps. Note in the article he specifies that both cameras were calibrated, i.e. their photogrammetric distortion parameters were all calculated. What this all means is that the normal maps and the range maps were perfectly aligned right from the start. Each normal generated by photometric stereo could be put in a one-to-one correspondence with a range point because they were gathered by the same cameras. In your case you have two evidently rich data sets, but no way of precisely aligning them that I can see (did you have any fiducial markers that were shared by the RTI and the photogrammetry?). If there's the slightest misalignment the two data sets will not correct each other but will instead propagate even more errors (as Nehab et al. note), the opposite of what you want. Alignment of RTI data on your photogrammetry data is further complicated by the fact that the RTIs you have produced have not been "undistorted" according to the photogrammetric parameters of the camera. Even if you have been careful to use a prime lens with minimal distortion, like a 50mm or 105mm prime, there will still be enough distortion to prevent perfect point-to-texel alignment. I suggested in another thread on the forum that alignment of RTIs and photogrammetry could be done, but it would require code to "undistort" normal maps using the parameters collected by photogrammetry. Why, you may ask, is this code not already out there (it may be, correct me if I'm wrong)? The (game) designers who use XNormal and equivalent packages aren't concerned that the texture of a brick wall perfectly aligns with the building they're modelling. The normal map just gets repeated across the structure of the building to give the impression of photorealism. For them the surface normals are just a vastly more computationally efficient way of creating realistic scenes than using, say, ray-tracing on a detailed 3D surface. What you could do as an interesting experiment would be to put a light Laplacian filter on your mesh to smooth it. This means the low-frequency structure in the range data will be retained but the high-frequency detail (little bumps, cracks etc.) removed. You could then filter your normal map to increase the high-frequency content and decrease the low-frequency content (XNormal should be able to do this; a sketch of the idea follows below). Then align and bake your normal map onto the range data. In theory the small features of the painting would be handled by the RTI-generated normals, and larger structures, like unevenness in the canvas, would be represented by the photogrammetry data. What you don't want is the more error-prone high-frequency texture from your photogrammetry conflicting with the more accurate high-frequency texture from the RTI normals. With the size of your mesh and point cloud, I would suggest using CloudCompare over Meshlab. The former is regularly updated and can deal quite well with big data sets, provided you have a good enough video card (I don't think Mac Minis have discrete graphics, do they?). The meshing algorithms in CloudCompare aren't quite the equal of what's in Meshlab, but Photoscan probably generated a pretty clean mesh for you already, no? Just a thought. George
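A rough sketch of that frequency-split idea, assuming both normal maps have already been aligned and rendered at the same resolution; the file names and the blur radius are illustrative only:

```python
# Sketch: keep low-frequency shape from the mesh-derived normal map and
# high-frequency detail from the RTI normal map, then recombine.
import numpy as np
from scipy.ndimage import gaussian_filter
from imageio.v3 import imread, imwrite

def decode(img):   # 8-bit RGB -> components in [-1, 1]
    return img.astype(np.float32) / 255.0 * 2.0 - 1.0

def encode(n):     # components in [-1, 1] -> 8-bit RGB
    return np.clip((n + 1.0) * 0.5 * 255.0, 0, 255).astype(np.uint8)

rti  = decode(imread("rti_normals.png")[..., :3])    # detailed but metrically weaker
mesh = decode(imread("mesh_normals.png")[..., :3])   # smooth but metrically reliable

sigma = (15, 15, 0)   # pixels; sets where "low" frequency ends and "high" begins
detail   = rti - gaussian_filter(rti, sigma)          # high-pass the RTI normals
combined = gaussian_filter(mesh, sigma) + detail      # low-pass mesh + RTI detail

combined /= np.linalg.norm(combined, axis=2, keepdims=True)   # re-normalise
imwrite("combined_normals.png", encode(combined))
```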
  6. Graeme and James, I'm not sure I completely understand this question. Could you tell us a little bit more about what you want to accomplish? Did you want to export a surface normal map into a contouring program like Surfer? As an occasional user of virtual RTIs I can say that their chief advantage is to produce a vastly more compact representation of 3D datasets for dynamic relighting by non-expert end-users. Virtual RTIs can be very helpful in cases where actual RTI would be difficult or impossible, such as with aerial LiDAR or laser profilometry data. Where an actual RTI capture is feasible, I completely agree with Carla that a virtual RTI generated from laser or photogrammetry data will be inferior in many respects, all things being equal. This question of the merits of virtual RTIs is an interesting one! George
  7. Since I don't work with coded targets, I prefer to create my own in Adobe Illustrator and have them printed on a heavy substrate like Alupanel at a sign printing company. This gives me the most flexibility in terms of target size. The target kits that are commercially available are generally designed for only one scale of project. If you're working at the macro-scale you'll need much smaller targets, and likewise for really big rock faces you'll need much larger ones. Macro-scale targets are a bit challenging since the imperfections of the printing process for signs become rapidly apparent. Some sort of custom etched solution is probably best in that case. Does Agisoft use coded targets in the way Photoscan does? Unless you absolutely need the coded targets, I wouldn't bother.
  8. David, If you use the HP Viewer you can select the filter "Surface Normal Visualization". It's a very good way to determine whether a PTM was successful in estimating the surface normals. This filter is not, at present, available on the RTIViewer but may well be included in a future release. I suspect shadowing is very likely the cause of the irregularities I observed. Your new dome looks really nice!
  9. David, Have you looked at the "Surface Normal Visualization" of this RTI? I just looked at it...very strange results. The upper part of your subject looks ok, but there seems to be some strange effects elsewhere. George
  10. This is very similar to a couple of inscriptions I was working on recently. On the advice of Kathryn Piquette I dropped the exposure by about 0.8EV to get a better result in specular enhancement (a very similar situation to what you have). It worked rather nicely. As a matter of course I usually over-expose my RTI shoots, but in this case I overdid it. Dropping exposure is something I generally avoid since it can cause more internal shadowing in low-angle shots, and thus more inaccuracies in surface normal generation. I think the sort of grainy specular enhancement Sigmund has is from a different source.
  11. Sigmund, If the sun is coming in and out of clouds during the shoot the total amount of light entering the scene is going to change substantially. In principle only your flash unit should be changing the lighting. The sun will consequently cause problems in the estimation of normals. There are a couple of ways of dealing with this: either shade the entire relief or use a powerful flash to effectively overpower the sun. The latter technique will require you to use a neutral density filter to bring the aperture/shutter speed down to values that will produce the best spatial resolution (no diffraction at very small f-stops) and will sync with your flash unit (the remotes we use won't permit a sync at any speed above 1/400s). The former technique -- manually shading the scene -- often requires someone to hold a large photographic reflector or umbrella, sometimes up on a ladder. Outdoor RTI in changing light conditions, or very bright light, is extremely challenging... I think there is discussion elsewhere on the forum about these challenges. I'd appreciate hearing from other forum members about their experiences. Could you and John post the "surface normal visualizations" of both your PTMs? George
  12. Sigmund, I'm not sure this is a camera angle issue...in my experience this grainy quality is due to lots of bad surface normals. It may have a lot to do with the light distribution and the light distance.
  13. Were you using the ISO 12233 test chart? Would you be willing to post either the photos or the MTF curves, if you have them? I'm amazed that RAW conversion causes such a change in spatial resolution. Do you think it has to do with anti-aliasing applied during the conversion? That's the only thing I can think of in the conversion that would cause such a difference.
  14. John, The RAW vs. JPEG thing is a minefield I try to stay out of. What are you currently using to convert from RAW to JPEG? Adobe CameraRAW via Bridge or Lightroom? I've been curious about RawTherapee, an open source RAW processing program: http://www.rawtherapee.com/ I haven't had much time to play with it, though. It seemed a bit unstable last time I used it. I've heard extremely good things about the PhaseOne CaptureOne package for RAW, but haven't tried it personally. What exactly do you mean by "equally good images"? What metric are you using? Just curious. You've hit on a big issue with RAW workflows, though. Naturally we want our image processing to be as open as possible, but if we shoot RAW we have to deal with a big black box in the form of the manufacturer-specific conversion tables. I suppose the benefits of shooting RAW outweigh the lack of openness.
  15. David, Have you seen the Triggerfish by Hedwig Dieraert? http://wetpixel.com/articles/review-triggerfish-remote-slave-trigger We were considering using them to trigger two Ikelite DS160s off camera to give us a fill light to improve photogrammetric matching. I wonder if they could also be used for your project to trigger an Ikelite strobe off camera. You'd probably need to have a small master flash on the camera, but given that the camera isn't moving it shouldn't alter your RTI data (it would just be like doing an RTI with the overhead lights on). I haven't seen another good off-camera underwater flash trigger. Has anyone else seen anything? Another thing to consider is a hand-held sonar unit connected to your off-camera flash. Something like this: http://www.mantasonar.com/diveray.htm They've really come down in price. If you could set an acoustic alarm to tell you you're at the right distance it would replace the string used in terrestrial RTI. This may unduly complicate things, though. One more thing to consider is something like a Lastolite white balance card or a DSC Labs Splash Underwater EFP Chart. We haven't been using gray cards or colour cards underwater (we just do a correction in Photoshop or Lightroom), but I definitely think they should be routine if terrestrial technical photography is to be matched underwater. One final thought... LED lights have very poor throw through the water column (at least the ones I've seen). They don't seem to me as good a solution for RTI as the sort of HID you have.
  16. Wow. I'm impressed with how far you've got with this. Looking forward to seeing the final RTI. Can you try shooting some convergent pairs underwater using the same camera for some photogrammetry as well?
  17. That's interesting to know. DTM Generator, I suppose, matches only where the matches are strongest (with certain constraints on feature rate, WinSize etc.). I do like the final PS surface. It's very clean... almost too clean. Comparison with my output is quite difficult unless I also clean and remesh. Although it doesn't show that much, I attach a difference map made after the data-sets were aligned by ICP and compared in PolyWorks: https://dl.dropboxusercontent.com/u/17766689/image_6.png I'm pretty sure 123D Catch is performing similar cleaning and remeshing operations. On the other hand, maybe multi-view stereo is just so good that this is "raw" data! At this point it would be nice to have a common set of high-quality "reference images" to work on. I'm involved in a similar project for evaluating rock outcrops with LiDAR and photogrammetry here: http://geol.queensu.ca/faculty/harrap/RockBench/ There's also a nice quantitative comparison of LiDAR and photogrammetry here: http://www.rocksense.ca/Research/PlaneDetect.html One could even imagine similar reference data-sets for RTI. Because texture is so important to photogrammetric matching, the sort of test objects used for laser scanning, e.g. metal gauge blocks, are inappropriate. I think good, scaled photos of, say, a granite surface plate would be incredibly useful. These surface plates have certificates specifying flatness. One could compare the nominal variance of the surface plate to the variance in the photogrammetry data, or the laser data for that matter (a sketch of this comparison follows below). I'm belabouring this point not because I'm particularly unhappy with the output of any particular package. As you point out, it's stunning the sorts of models even the free web portals pump out from seemingly poor inputs. But routinely I'm asked in workshops what the difference is between software at the free, $500, $5,000 or $15,000 levels (or, if you include Sirovision and VStars, $150,000). The answer is going to depend a lot on the final application and what sort of post-processing the end user wants. Is it just a "cool model" to show in a display, or is it intended for depth-mapping to reveal features that may be only tens of microns deep? As you know, many users haven't even formulated what questions they'll be asking of the data... but for those who have, how are they best to spend their hard-won research dollars?
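By way of illustration, the surface-plate comparison could be as simple as fitting a plane to the scaled point cloud and checking the residuals against the certificate; a sketch, with the file name and flatness tolerance invented for the example:

```python
# Sketch: fit a least-squares plane to a point cloud of a granite surface
# plate and compare the deviations to the certified flatness.
import numpy as np

pts = np.loadtxt("surface_plate.xyz")        # x y z, already scaled to mm

centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid)     # plane normal = direction of least variance
normal = vt[-1]

dist = (pts - centroid) @ normal             # signed point-to-plane distances (mm)
rms, pv = dist.std(), dist.max() - dist.min()

certified_flatness_mm = 0.004                # example value from a certificate
print(f"RMS {rms*1000:.1f} um, peak-to-valley {pv*1000:.1f} um, "
      f"certificate {certified_flatness_mm*1000:.1f} um")
```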
  18. Tom, One thing I notice about the data you put up is that it has evidently been remeshed (the even point spacing is a give-away) and likely filtered for noise (a light Laplacian?). Is this something you did or is this how PhotoScan spit it out? George
  19. I just did a grey-scale depth map on Tom's data: https://dl.dropboxusercontent.com/u/17766689/HangvarPS_DepthMap.JPG The depth values are assigned according to a "mean surface" not according to a flat plane, as is the case in Meshlab's "depthmap" shader. The latter is still a very, very useful exploratory tool for rock art.
  20. Charles...I answered in the other thread...I think it could be a few weeks out, even months. The programmers are still optimizing the code for specular enhancement (the other filters aren't implemented).
  21. Charles. I've seen PTMs viewed on an iPhone but specular enhancement is still slow! Last time I checked the programmers were still optimizing the code. We're hoping for a YouTube demo in a few weeks. I wouldn't be at all surprised if there are other groups doing this as well...they might be further ahead. Is a PTM viewer for iPad something you'd be interested in for a museum display?
  22. Sigmund, We were at Gotlands Museum doing an RTI/photogrammetry workshop with the RAA (Swedish National Heritage Board) in May. I take it you've met Laila Kitzler Ahfeldt already? She's done some amazing work on surface metrology of picture stones using structured light scanning. One of the exciting things we demonstrated was that the photogrammetry could produce data equivalent to the structured light, without the requirement that the stone be in a shaded enclosure. Laila should have most of the data we collected (RTI and photogrammetry), although some of the photogrammetry projects were only built this fall. It's crucial for these sorts of projects that some sort of scale bar be put in the images, especially if you want to get metrics on the carvings.
  23. I'd really love to see exactly what the normals are doing on that scale bar and how much they're deviating from the expected direction. Given that the scale is placed at the edge of the image there can be problems with the normals. I suspect a big part of the problem was the shadowing from the ball under certain light angles. As I mentioned above, self-shadowing is a leading cause of the mis-estimation of normals with PTM. I'm really hesitant at this point to say that surface colour is causing a problem (RTI is usually very robust in this area). It would certainly be interesting to look at the normals from HSH and compare them quantitatively to PTM and PS. Does anyone have a document describing the RTI file specification? There's one for PTM but not RTI as far as I can tell. Without this it will be hard to write code to extract a normal field from an RTI built with HSH into Matlab (for PTM, by contrast, the extraction is straightforward; a sketch follows below). Photometric Stereo is certainly a very powerful technique! I should ask Lindsay to try PS Triplets on highlight-based RTI, if he hasn't already. R is an open source statistical programming language... it has a huge user base and some good tools for analyzing arrays of vectors by direction. Matlab, by contrast, is commercial and quite expensive for non-academic users.
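For PTM, at least, the normal extraction is documented: the normal is taken as the light direction that maximises the biquadratic luminance model. A sketch, assuming the six per-pixel coefficients a0..a5 have already been parsed out of the .ptm file into arrays:

```python
# Sketch: per-pixel normals from PTM coefficients. The luminance model is
# L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5; setting its
# partial derivatives to zero gives the light direction of maximum luminance,
# which is taken as the surface normal (a5 only offsets luminance and drops out).
import numpy as np

def ptm_normals(a0, a1, a2, a3, a4):
    denom = 4.0 * a0 * a1 - a2**2
    lu = (a2 * a4 - 2.0 * a1 * a3) / denom
    lv = (a2 * a3 - 2.0 * a0 * a4) / denom
    nz = np.sqrt(np.clip(1.0 - lu**2 - lv**2, 0.0, None))
    n = np.stack([lu, lv, nz], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```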
  24. I got about 350K points just working in 3DM Analyst and without tweaking the epipolar image settings. Here's the colourized PTS file that I got: https://dl.dropboxusercontent.com/u/17766689/Hangvar.pts Tom, did you generate your final dense surface in PhotoScan? Sigmund, did you refocus at all when you were shooting the second set of photos? Has anyone tried any of the web services on this data yet?
  25. I love these Picture Stones! We did some wonderful examples at the Gotlands Museum in Visby. I'm not sure focus is the issue. You need to get a better range of angles around the stone. If your camera is sufficiently high resolution I prefer to handle this sort of subject with "convergent pairs". We're finding with a 36MP D800 that we can get fantastic results on moderately sized panels without using a "strip project". You definitely don't want to change focus during the shoot, nor do you want to change the aperture. Shutter speed, however, can be changed without altering the photogrammetric parameters of the lens.