About ozbigben
  1. Two common options include:

     Texture the laser model inside the photogrammetry software:
       • Export the photogrammetry model
       • Align the laser model to the photogrammetry model
       • Import the aligned laser model into the photogrammetry software and generate the texture

     Texture baking:
       • Export the textured photogrammetry model
       • Align the laser model
       • Bake the texture from the photogrammetry model to the laser model

     The photogrammetry mesh needs to be accurate enough for alignment with the laser model. As long as the camera alignment is good, the texture map should be good as well, even if the photogrammetry mesh has some defects from shiny/featureless surfaces.
  2. Hi Paul.

     Shooting: I place the scale bars in a position where I can get both the scale bar and a portion of the object in focus, and then shoot sufficient images to accurately capture the scale bar. I repeat this for each orientation of the object, or dispersed throughout a scene. After removing the scale bar I shoot the object or scene, repeating any angles that had the scale bar.

     Processing: All images are aligned together. Add CPs, set scales etc... then disable the images with scale bars just prior to producing the dense cloud/mesh. In Reality Capture you can select the scale bar images and disable them from meshing/texturing. In Agisoft, disable the images prior to alignment. If the scale bars aren't touching the object you can always crop them out later, but this saves you a bit of work. By separating the two sets of images I find you worry less about where to place the scale bars (wanting them to be less obtrusive), allowing you to focus on placing them where they're needed. In this scene for example (https://skfb.ly/6JUGE), there were scale bars obscuring parts of the sculptures (resting on arms etc.) as well as on the ground.
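The "disable the scale-bar images" step above can be scripted if the two image sets are distinguishable by name. This is a hedged sketch, not the author's method: the `scalebar_` filename prefix is an assumption (adjust to your own convention), and the commented Agisoft Metashape snippet is untested — check the Metashape Python API reference for exact names.

```python
# Hypothetical helper: partition image labels into scale-bar shots and
# object shots so the former can be toggled off in bulk before meshing.

SCALEBAR_PREFIX = "scalebar_"  # assumed naming convention, not from the post

def is_scalebar_image(label, prefix=SCALEBAR_PREFIX):
    """Return True if an image label marks a scale-bar shot."""
    return label.startswith(prefix)

def split_labels(labels, prefix=SCALEBAR_PREFIX):
    """Partition image labels into (scale-bar shots, object shots)."""
    scalebar = [l for l in labels if is_scalebar_image(l, prefix)]
    obj = [l for l in labels if not is_scalebar_image(l, prefix)]
    return scalebar, obj

# In Agisoft Metashape's Python console the same idea would look roughly
# like this (untested sketch):
#
#   import Metashape
#   chunk = Metashape.app.document.chunk
#   for camera in chunk.cameras:
#       if is_scalebar_image(camera.label):
#           camera.enabled = False  # keep alignment, skip for dense cloud/mesh
```

Keeping the filter as a plain function means the same naming rule works whether you drive Metashape, Reality Capture's image selection, or just sort files into folders.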
  3. Tiny objects

     Failure to align 2 sections together if you have masked the images is usually due to a lack of overlapping detail between the two sets of images. For very small objects, differences in focus can also be problematic, especially on turntables and with an object with a high length:width ratio. When planning a shoot I pick where the join between the 2 sections will be on the object and then position it so that I can ensure that images in both sets provide adequate coverage across this area. In the case of complex objects you may find that 2 orientations are not enough and you may need to shoot 3 or 4. To ensure focus overlap as well, I either shoot focus stacking (rarely) or shoot handheld and move around the object so that I can carefully position the plane of focus.
  4. There are a couple of variations on this. My preference is to mask all images as it simplifies supporting the object and reduces alignment issues. Photographing the flat surfaces is the easy bit. Connecting the 2 surfaces as one object requires images that can connect around the thin profile of the object, preferably at each end of the object. To achieve this you'll need to be able to shoot horizontally to the object, or even better, from slightly below the object. This in turn means supporting the object a little higher above the table. Alternatively, if you can support the object on its side it's easier, as you're shooting from above and can easily move across the edge. For the thicker end you may be able to get enough surface detail in each image as you move from front to back, but for the tapered end I'd add a small roundish object just beyond the tip to provide a continuous connection of images. Use a different object for the left/right sides if you need to move the object in between shooting each side. Focus is critical when shooting near the edge because you have very little area of detail for alignment. A slight focus shift can break the alignment.
  5. If I understand correctly, your objects are flatter (as in plates). In this case the main problems are linking images between the top and bottom of the object. This usually requires more images with more overlap as the camera approaches the edge. Depth of field can also be a problem if similar views from top and bottom image sets don't have matching areas that are completely in focus.
  6. It depends a bit on the distance between the glass and the object. I've used a step-up ring with some felt attached to prevent damage to the glass, and then photographed the object with the lens pressed against the glass. This avoids the reflections from the glass. Reflections from the back of the case can be removed by masking the images. If you need more distance from the glass you can make up a black velvet "cone" to block out reflections from the front of the case... though that's not always practical either.
  7. Yes, if you do different things to the same set of images it will cause problems... but then that would be potentially causing problems irrespective of whether the flag exists or not. I was merely providing this for a situation where you're assuming that the image set is relatively raw, direct from the camera. If you're manually correcting orientation in Lightroom or Bridge when the flag exists, there is the potential to make an error, setting the flag to either 0 or 180° rotation. Removing the flag in this instance sets it to the original sensor image. In Taylor's situation, manually rotating images even with the value of the flag visible is going to take a lot of time that could be saved by merely removing the flag. I run this script prior to any image processing, and if anyone has provided me with images that have already had some processing done, then that would be their problem for not following instructions... but most people are understandably happy not to do anything extra themselves.
  8. It shouldn't really make any difference which derivative is being processed, as the flag doesn't change the image data; it's just an instruction to software for what to do with the image when it opens. It is not a setting that defines the orientation of the camera. E.g. if you take a 4000x3000px image and rotate it 90° in Lightroom, it will set the flag to correspond with the rotation angle required to put the image in the displayed orientation when opened. Explorer will still show the image as 4000x3000px. If you open the image in Photoshop it will have an asterisk next to its name in the window title indicating that it has been modified. When you save this image, it will now be a 3000x4000px image and the orientation flag will be set to 0 (horizontal) as no rotation is required. As long as the files haven't been edited, changing the orientation of any derivatives can be undone by removing the orientation flag.
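The flag-versus-pixels distinction above can be sketched in a few lines. The value-to-rotation mapping is from the EXIF specification (1 = as stored, 3 = 180°, 6 = 90° CW, 8 = 270° CW; the mirrored variants 2, 4, 5 and 7 are omitted for brevity); the helper itself is just an illustration, not part of any workflow here.

```python
# EXIF Orientation values -> clockwise rotation a viewer applies on open.
# The stored pixels never change; only the displayed dimensions do.
ORIENTATION_TO_CW_DEGREES = {1: 0, 3: 180, 6: 90, 8: 270}

def displayed_size(width, height, flag):
    """Pixel dimensions a viewer shows after honouring the orientation flag."""
    degrees = ORIENTATION_TO_CW_DEGREES.get(flag, 0)  # unknown flag: no rotation
    if degrees in (90, 270):
        return height, width  # quarter turns swap width and height
    return width, height
```

So a 4000x3000px file carrying flag 6 is displayed as 3000x4000px, while removing the flag (equivalent to value 1) shows it exactly as the sensor recorded it — which is the behaviour the post describes.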
  9. I use that script whenever the orientation has been set by the camera, to remove the flag from the source files before any further processing. Images load in landscape in every application after this. It's essentially the same thing as correcting the orientation in Lightroom/Bridge... changing the orientation from a variable to a fixed value (you could alternatively set the orientation to a specific setting), but it's automated and not dependent on the user knowing the actual direction of rotation. If you need to document the original camera orientation for whatever reason you could also use EXIFTool to create a table of these values prior to removing them (in the same script), but then the orientation tends to be quite random when the camera is pointing downwards, so it's not a particularly reliable flag. I always advise people to turn the orientation setting off in the camera too, but I always have a backup plan.
  10. Very interesting. Definitely have to look into this one.
  11. I'm starting to deal with images from multiple photographers now. To get around any camera settings I created a drag-and-drop batch file (Windows) for EXIFTool that strips out the offending metadata:

      FOR %%a in (*.*) DO c:\cmd\exiftool.exe -overwrite_original -orientation= %%a
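For non-Windows setups, the batch file above can be mirrored in a short Python sketch. This is an assumption-laden illustration, not the author's script: it assumes `exiftool` is on the PATH, the folder path and glob pattern are hypothetical, and only the two flags from the batch file (`-overwrite_original`, `-orientation=`) are used.

```python
import subprocess
from pathlib import Path

def strip_orientation_cmd(image_path, exiftool="exiftool"):
    """Return the argv list that clears the EXIF orientation tag in place."""
    return [exiftool, "-overwrite_original", "-orientation=", str(image_path)]

def strip_folder(folder, pattern="*.jpg", dry_run=True):
    """Build (and optionally run) the command for every matching file."""
    cmds = [strip_orientation_cmd(p) for p in sorted(Path(folder).glob(pattern))]
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)  # requires exiftool installed
    return cmds
```

The `dry_run` default lets you inspect the commands before touching any files — useful when the images came from someone else's camera settings.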
  12. I'm currently running the basic licence if anyone wants any info/comparisons. I'm sure many will baulk at the idea of a subscription licence, especially when compared to Photoscan's academic price... Crossing my fingers that some other option may be available for academic/cultural heritage licencing in the near future. A lot of my experimentation over the last year with photogrammetry has involved the use of fisheye lenses since the release of support in Photoscan. RC doesn't have a true fisheye distortion model (yet), but I have been able to coax it to produce some good results with the distortion models it does have. E.g. a large scene including several detailed sculptures: http://files.digitisation.unimelb.edu.au/potree/pointclouds/rc-fisheye-test2.html I'll be mentoring a student doing a project in this area over the year, which will include a comparison of Photoscan and Reality Capture, including an estimated timeline for producing 3D models of all sculptures.
  13. When I butt up against the RAM limit for generating a mesh in Photoscan I export the dense point cloud and do the surface reconstruction (Poisson) in CloudCompare. You can do it in Meshlab as well, but I find CloudCompare is more robust when handling large point clouds. It takes a bit of juggling of settings to get the best mesh for the available RAM, but it saves generating a smaller point cloud. CloudCompare also has a nice feature where you can compute the vertex density as a scalar field and then use that to filter out low-quality polygons. Then import the model back into Photoscan to generate the texture. For really large textures I export tiled image sets and recombine them later in GlobalMapper (but then I have GM because I'm a map nerd). GIS applications are good at handling huge amounts of data with relatively low memory requirements and are quite useful for converting to other formats.
  14. Hi Carla. Fair question, since this was also my first post here. I am not affiliated with them in any way. I'm a Technical Support Officer at the University of Melbourne http://digitisation.unimelb.edu.au/ and I'm also active on the Photoscan forum (bigben). I'm a scientific photographer by qualification (BAppSci Photography, RMIT) and have been exploring photogrammetry for a couple of years now to provide support and training for staff at our university. You can see some of my experiments here: https://sketchfab.com/uomdigitisation/models and I also participate in the Cultural Heritage lounge of Sketchfab's forum. It was a pretty big call to make so early on in an evaluation, but I am now continuing experiments that I had previously abandoned because I had reached the limits of our hardware. [edit] I also agree with your comments on metrics, and I don't know enough on that yet to be able to provide meaningful comment on RC. Given the performance though, I think it is worth in-depth investigation. Cheers, Ben
  15. https://www.youtube.com/watch?v=d8naLEtLqDY

      Each pane can be set to 1D, 2D (for images), 3D (model) or CON (console). The image list is best at 1D to start with. Control points are listed below the images, with two items to create a point and create a distance. There are a few ways to make GCPs:
        • Drag from the list onto the image (click and hold on the point in the image to zoom in)
        • Click on a point in one image and then click on the other image to place a matching point
        • Hold down the CTRL key and click and drag a point from a 3D view onto an image
        • With the control point tool active, clicking in an empty space on an image creates a new point
      The control point tool can be toggled with F3.