ozbigben

Members
  • Content count: 14
  • Days Won: 3

ozbigben last won the day on April 3 2016 and had the most liked content!

Community Reputation: 5 (Neutral)

About ozbigben
  • Rank: Member

Recent Profile Visitors: 159 profile views
  1. ozbigben

    tiny objects

    Failure to align 2 sections when you have masked the images is usually due to a lack of overlapping detail between the two sets of images. For very small objects, differences in focus can also be problematic, especially on turntables and with objects that have a high length:width ratio. When planning a shoot I pick where the join between the 2 sections will be on the object, then position it so that images in both sets provide adequate coverage across this area. For complex objects you may find that 2 orientations are not enough and you may need to shoot 3 or 4. To ensure focus overlap as well, I either use focus stacking (rarely) or shoot handheld and move around the object so that I can carefully position the plane of focus.
  2. ozbigben

    flat run question

    There are a couple of variations on this. My preference is to mask all images as it simplifies supporting the object and reduces alignment issues. Photographing the flat surfaces is the easy bit. Connecting the 2 surfaces as one object requires images that can connect around the thin profile of the object, preferably at each end. To achieve this you'll need to be able to shoot level with the object, or even better, from slightly below it, which in turn means supporting the object a little higher above the table. Alternatively, if you can support the object on its side it's easier, as you're shooting from above and can easily move across the edge. For the thicker end you may be able to get enough surface detail in each image as you move from front to back, but for the tapered end I'd add a small roundish object just beyond the tip to provide a continuous connection of images. Use a different object for the left/right sides if you need to move the object between shooting each side. Focus is critical when shooting near the edge because you have very little area of detail for alignment; a slight focus shift can break the alignment.
  3. ozbigben

    Problems with Flat Runs on Pottery

    If I understand correctly, your objects are flatter (as in plates). In this case the main problem is linking images between the top and bottom of the object. This usually requires more images with more overlap as the camera approaches the edge. Depth of field can also be a problem if similar views from the top and bottom image sets don't have matching areas that are completely in focus.
  4. ozbigben

    photogrammetry of objects behind glass

    It depends a bit on the distance between the glass and the object. I've used a step-up ring with some felt attached to prevent damage to the glass, then photographed the object with the lens pressed against the glass. This avoids the reflections from the glass. Reflections from the back of the case can be removed by masking the images. If you need more distance from the glass you can make up a black velvet "cone" to block out reflections from the front of the case... though that's not always practical either.
  5. ozbigben

    Calibration and image rotation in Photoscan

    Yes, if you do different things to the same set of images it will cause problems... but that would potentially cause problems whether the flag exists or not. I was providing this for the situation where you can assume the images are a relatively raw set direct from the camera. If you're manually correcting orientation in Lightroom or Bridge while the flag exists, there is the potential to make an error, setting the flag to either a 0° or 180° rotation. Removing the flag in this instance returns the image to its original sensor orientation. In Taylor's situation, manually rotating images even with the value of the flag visible is going to take a lot of time that could be saved by simply removing the flag. I run this script prior to any image processing, and if anyone has provided me with images that have already had some processing done, then that would be their problem for not following instructions... but most people are understandably happy not to do anything extra themselves.
  6. ozbigben

    Calibration and image rotation in Photoscan

    It shouldn't really make any difference which derivative is being processed, as the flag doesn't change the image data; it's just an instruction telling software what to do with the image when it opens. It is not a setting that defines the orientation of the camera. For example, if you take a 4000x3000px image and rotate it 90° in Lightroom, it will set the flag to the rotation angle required to display the image in the correct orientation when opened. Explorer will still show the image as 4000x3000px. If you open the image in Photoshop it will have an asterisk next to its name in the window title, indicating that it has been modified. When you save this image, it will now be a 3000x4000px image and the orientation flag will be reset to horizontal as no rotation is required. As long as the files haven't been edited, changing the orientation of any derivatives can be undone by removing the orientation flag.
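
    A quick way to see this for yourself is to read the tags back with EXIFTool (a sketch, assuming exiftool is on your PATH; photo.jpg is just a hypothetical example file):

        REM -n prints Orientation as its raw numeric value (1 = horizontal, no rotation)
        exiftool -Orientation -ImageWidth -ImageHeight -n photo.jpg

    The stored pixel dimensions stay the same after a Lightroom rotation; only the Orientation value changes.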
  7. ozbigben

    Calibration and image rotation in Photoscan

    I use that script whenever the orientation has been set by the camera, to remove the flag from the source files before any further processing. Images load in landscape in every application after this. It's essentially the same as correcting the orientation in Lightroom/Bridge... changing the orientation from a variable to a fixed value (you could alternatively set the orientation to a specific value), but it's automated and not dependent on the user knowing the actual direction of rotation. If you need to document the original camera orientation for whatever reason, you could also use EXIFTool to create a table of these values prior to removing them (in the same script, sketched below), but the orientation tends to be quite random when the camera is pointing downwards, so it's not a particularly reliable flag. I always advise people to turn the orientation setting off in the camera too, but I always have a backup plan.
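
    A minimal sketch of that combined log-then-strip step, assuming the same c:\cmd\ EXIFTool install as the batch file later in this list; the CSV filename is just an example:

        REM Record each file's original Orientation value to a CSV, then remove the flag
        c:\cmd\exiftool.exe -csv -Orientation -n *.jpg > orientation_log.csv
        c:\cmd\exiftool.exe -overwrite_original -Orientation= *.jpg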
  8. ozbigben

    3DHOP: 3D Heritage Online Presenter

    Very interesting. Definitely have to look into this one.
  9. ozbigben

    Calibration and image rotation in Photoscan

    I'm starting to deal with images from multiple photographers now. To get around any camera settings I created a drag-and-drop batch file (Windows) for EXIFTool that strips out the offending metadata:

        FOR %%a in (*.*) DO c:\cmd\exiftool.exe -overwrite_original -orientation= %%a
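
    As an aside, EXIFTool can also iterate over files itself, so the FOR loop isn't strictly necessary; an equivalent one-liner, assuming JPEG sources in the current folder:

        REM One exiftool invocation for all JPEGs; avoids launching one process per file
        c:\cmd\exiftool.exe -overwrite_original -orientation= -ext jpg .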
  10. I'm currently running the basic licence if anyone wants any info/comparisons. I'm sure many will baulk at the idea of a subscription licence, especially when compared to Photoscan's academic price... Crossing my fingers that some other option may become available for academic/cultural heritage licensing in the near future.

    A lot of my experimentation with photogrammetry over the last year has involved fisheye lenses, since support for them was released in Photoscan. RC doesn't have a true fisheye distortion model (yet), but I have been able to coax it into producing some good results with the distortion models it does have, e.g. a large scene including several detailed sculptures: http://files.digitisation.unimelb.edu.au/potree/pointclouds/rc-fisheye-test2.html

    I'll be mentoring a student doing a project in this area over the year, which will include a comparison of Photoscan and Reality Capture and an estimated timeline for producing 3D models of all sculptures.
  11. ozbigben

    OrthoPhoto from Photoscan?

    When I butt up against the RAM limit for generating a mesh in Photoscan, I export the dense point cloud and do the surface reconstruction (Poisson) in CloudCompare. You can do it in Meshlab as well, but I find CloudCompare more robust when handling large point clouds. It takes a bit of juggling of settings to get the best mesh for the available RAM, but it saves having to generate a smaller point cloud. CloudCompare also has a nice feature where you can set the vertex density as a scalar field and then use it to filter out low-quality polygons. Then import the model back into Photoscan to generate the texture. For really large textures I export tiled image sets and recombine them later in GlobalMapper (but then I have GM because I'm a map nerd). GIS applications are good at handling huge amounts of data with relatively low memory requirements and are quite useful for converting to other formats.
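
    For anyone wanting to script the Poisson step: CloudCompare's plugin wraps Kazhdan's open-source PoissonRecon, and the standalone build of that tool runs from a batch file. A minimal sketch, assuming PoissonRecon.exe is on your PATH and dense.ply is the exported dense cloud (it must include normals); the depth value is just an example to tune against your RAM:

        REM --density writes a per-vertex density value you can later filter low-quality areas on
        PoissonRecon.exe --in dense.ply --out mesh.ply --depth 11 --density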
  12. Hi Carla. Fair question, since this was also my first post here. I am not affiliated with them in any way. I'm a Technical Support Officer at the University of Melbourne http://digitisation.unimelb.edu.au/ ...and I'm also active on the Photoscan forum (bigben). I'm a scientific photographer by qualification (BAppSci Photography, RMIT) and have been exploring photogrammetry for a couple of years now to provide support and training for staff at our university. You can see some of my experiments here: https://sketchfab.com/uomdigitisation/models and I also participate in the Cultural Heritage lounge of Sketchfab's forum. It was a pretty big call to make so early on in an evaluation, but I am now continuing experiments that I had previously abandoned because I had reached the limits of our hardware.

    [edit] I also agree with your comments on metrics, and I don't know enough yet to be able to provide meaningful comment on RC. Given the performance though, I think it is worth in-depth investigation.

    Cheers, Ben
  13. https://www.youtube.com/watch?v=d8naLEtLqDY

    Each pane can be set to 1D, 2D (for images), 3D (model) or CON (console). The image list is best set to 1D to start with. Control points are listed below the images, with two items: create a point and create a distance. There are a few ways to make GCPs:

    • drag a point from the list onto the image (click and hold on the point in the image to zoom in)
    • click on a point in one image, then click on the other image to place a matching point
    • hold down the CTRL key and click and drag a point from a 3D view onto an image
    • with the control point tool active, click in an empty space on an image to create a new point

    The control point tool can be toggled with F3.
  14. I had this after my first model too, but realised that if not all of the images align into a single model you will get multiple components. By default, after alignment it shows you the component with the highest number of aligned images. When you open a project it shows you the first component, which may only contain a small number of images and thus not show much.

    I've been using it for a couple of weeks now. I stopped a Photoscan project 5/6 of the way through creating a dense cloud, 9 days after starting the initial alignment (~800 20MP images). I had a mesh using RC within 12 hours. Without much documentation it can be pretty quirky to use, but it is definitely worth the effort. It splits tasks up into manageable chunks for your hardware, extending the range of possibilities, and alignment is very fast. Detected points in images are cached for the session (even if you start a new project), so tweaking and realigning doesn't have to wait for point detection each time.

    There is now a Facebook group for sharing experiences and getting help: https://www.facebook.com/groups/CapturingRealityArena/

    For me, this is potentially a game changer: increasing productivity 7-10x and coping with at least 2x the number of images for a project on the same hardware. Definitely one to watch.