Charles Walbridge

Moderators
  1. Hey Rich - some thoughts for you:
     - At Mia we've had good luck with flash-on-camera and smaller objects: we often use longer lenses for smaller objects and try to get the camera 2 or 3 feet (60-90 cm) away from the object.
     - I think Kintsugi loses its ability to estimate specularity at some combination of distance and light size: when the light source is so close to the object that the highlight is always huge on the surface facing the camera, Kintsugi can't tell what part of the surface is really specular. (Michael will have a better way to explain this.) I wouldn't trust Kintsugi to recover accurate highlights with a macro lens, close focus, and a ring light, because the light would be so big compared to the object. But we've had great luck with a ring light that's about 30 cm (12 inches) in diameter, with the light and camera about a meter from the object.
     - I absolutely think you should try the ring light to get into the shadowed areas on the lamp and box. But then the whole image set should be shot with the ring light.
     - For the pixel matching and uneven lighting: we often lighten the shadows and darken the highlights in photogrammetry data sets so Metashape has an easier time matching areas of interest on the object - would that work in this case? And the grey tone (re)calibration step in Kintsugi 3D Builder will re-map your light and dark tones on the surface of the object when it makes a new diffuse map for you.
     - And about TTL: don't do it! Kintsugi expects your light output to be the same for all your images and therefore to fall off in a consistent way. In photo terms, if your light was f/16 at one meter, Kintsugi expects that light to be f/8 at two meters (a two-stop difference, i.e. one quarter of the light).
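     A minimal worked example of that falloff (a Python sketch with made-up numbers, just to illustrate the inverse-square relationship Kintsugi assumes):

        import math

        def stops_lost(near_distance_m, far_distance_m):
            """Inverse-square law: the light reaching the subject falls off with
            the square of the distance, so doubling the distance leaves 1/4 of
            the light, i.e. two stops less."""
            fraction_remaining = (near_distance_m / far_distance_m) ** 2
            return -math.log2(fraction_remaining)

        print(stops_lost(1.0, 2.0))  # 2.0 -> f/16 at one meter needs f/8 at two meters for the same exposure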
  2. @KurtH , this week I had some image alignment issues that I investigated and solved, and they led me back to a mistake (or a 'variation in the workflow') that I had made early in the process.

     Our normal turntable photogrammetry workflow is to use Canon CR2 or CR3 raw files in the capture process, then add metadata in Adobe Bridge and process the images to DNGs and full-resolution (but sometimes cropped) JPEGs in Adobe Camera Raw. We use lightly compressed JPEGs because we used to transfer Metashape projects over our networks, and the JPEGs are about a third the size of the equivalent TIFFs. I don't think it makes Metashape processing any faster. With our turntable photogrammetry process I often crop the source images before I put them into a Metashape project, because there are a lot of unused pixels in the source images - but that's not the way CHI teaches, so don't do that. (But let me know if you want to see a thread about that, because I want to talk to Carla about it some more.)

     I have a Metashape project that's the lid of a shiny silver censer. I had good alignment and mesh-building in Metashape, but of course I want the improved color, specularity, and normal maps from Kintsugi 3D Builder. When I ran the model through the Kintsugi 3D Builder process the images didn't align properly, and it looks like the offset that you've noted in your photos here. I could see the misalignment both in the early Kintsugi workflow and in the built textures.

     Looking back at the whole Metashape project I realized I had mistakenly used the DNGs as the source images and not the cropped JPEGs. I could see in the image information in Metashape that Metashape doesn't see the crop that you've put on the DNG - and I've since learned from Carla that Metashape also doesn't use the color balance or any other adjustments that you might make to a DNG in Camera Raw - it just loads the TIFF that's stored in the DNG (and I assume that TIFF is like the quick preview JPEG a camera makes right when it takes a photo). When I asked Kintsugi 3D Builder to make textures, it used the cropped JPEGs, and the images didn't align. But I went back and made new uncropped JPEGs with the same filenames and used those, and now everything has aligned well.

     I don't know if that's the issue you're having with the model you show here, but maybe there's something like that going on. Let us know if any of this helps --
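     One quick way to catch that kind of cropped-versus-uncropped mix-up before (or after) alignment goes wrong is to confirm that every image going into the project has the same pixel dimensions. A minimal Python sketch (assumes Pillow is installed; the folder path is just a placeholder):

        from pathlib import Path
        from collections import defaultdict
        from PIL import Image

        folder = Path("path/to/source_jpegs")  # placeholder - point this at your source images
        sizes = defaultdict(list)

        for img_path in sorted(folder.glob("*.jpg")):
            with Image.open(img_path) as im:
                sizes[im.size].append(img_path.name)  # im.size is (width, height)

        for (width, height), names in sizes.items():
            print(f"{width}x{height}: {len(names)} images")
        if len(sizes) > 1:
            print("Warning: mixed pixel dimensions - a cropped image may be mixed into the set.")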
  3. @KurtH The scale bars are definitely reflected in an object like the censer in the attached photos. One workaround I've developed for turntable photography is to do a full or partial photogrammetry set with the scale bars in place, then another turntable set with the camera moved a bit - in this case the camera is raised about five centimeters, and the object is turned five degrees, before starting the set. Then I use the scale-bars set and the not-scale-bars set to build in Metashape. This has the advantage of giving me twice as many angles of the object, and Metashape has no trouble separating the two different heights. For the final model texture in Metashape - and for the texture build in Kintsugi 3D Builder - I'll turn off the set of photos with the scale bars in them.
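     If anyone wants to script that last step, here's a rough sketch using the Metashape Python API (check the attribute names against your Metashape version; the "scalebar" label prefix is just an assumption about how the first set of photos was named):

        import Metashape  # the Python module bundled with Metashape Pro

        chunk = Metashape.app.document.chunk

        # Disable the scale-bar set before the final texture build in Metashape
        # (and before exporting for Kintsugi 3D Builder), so only the second,
        # scale-bar-free turntable set contributes to the texture.
        for camera in chunk.cameras:
            camera.enabled = not camera.label.lower().startswith("scalebar")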
  4. Kurt, a general question that I don't think is the problem here: is Kintsugi using the masked images as its source images? (As a review for this forum post:) You would have to export new images from Metashape using the masks that you made via Mask From Model; then you'd need to rename the images to match the image names in the XML file that specifies the camera locations; and for that you may need to take the exported PNGs with their masks and save them as TIFFs or JPEGs, whichever your source images are. I don't know if Kintsugi 3D Builder sees alpha-channel masks, so we'll put that on the list of things to test. In the near future, Kintsugi 3D Builder will be able to read Metashape PSX files directly - we're testing that now - but I don't know if Builder will be aware of masks that are stored only in Metashape rather than baked into the source images. We'll talk more and test this. But as I mentioned, with the example image you've shared here, I don't think that's what's happening in this case. I'd like to see that data too.
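     For the export-rename-flatten step above, here's a minimal Python sketch (assumes Pillow; the folder names, the black fill, and the PNG-to-JPEG conversion are assumptions about how the masked images were exported and what your original source format is):

        from pathlib import Path
        from PIL import Image

        masked_pngs = Path("masked_exports")  # PNGs exported from Metashape with alpha-channel masks
        out_dir = Path("kintsugi_sources")    # renamed, flattened copies for Kintsugi 3D Builder
        out_dir.mkdir(exist_ok=True)

        for png_path in sorted(masked_pngs.glob("*.png")):
            with Image.open(png_path) as im:
                im = im.convert("RGBA")
                background = Image.new("RGBA", im.size, (0, 0, 0, 255))  # masked pixels become black
                flattened = Image.alpha_composite(background, im).convert("RGB")
            # Keep the camera's name (the file stem) so it still matches the labels in the
            # cameras XML, but save in the same format as the original source images (JPEG here).
            flattened.save(out_dir / (png_path.stem + ".jpg"), quality=95)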
  5. Rich, here at Minneapolis Institute of Art we have a specific (and I think common) workflow for our models from Metashape:
     - we align photos and refine our sparse cloud using the CHI / BLM protocols;
     - we build a model either from the depth maps (most of the time now) or from a dense cloud;
     - the first model will be around a million polygons. I'll then clean that mesh as necessary, and Close Holes if the model needs it; then:
     - decimate (and duplicate) the model down to around 64,000 polygons;
     - have Metashape make a diffuse texture map from the photos (usually our textures are 4096x4096-pixel JPEGs);
     - have Metashape make the normal, occlusion, and displacement maps - and for all those maps it references the larger (million-poly) mesh;
     - export that 64,000-poly model as an OBJ, and most of the textures are exported alongside it. I haven't seen Metashape export the displacement map, so I need to look into that;
     - use the 64,000-poly model for sharing.
     And - here's my actual point - your object viewer (Sketchfab or Kintsugi 3D Viewer in this case) is using a lower-resolution model, but it uses the occlusion and normal textures, which are derived from the higher-resolution mesh, to give the illusion that you're looking at a high-res model. I think a million-poly OBJ is around 100 MB, but a 64,000-poly OBJ is around 10 MB. In separate forum posts we should talk about Kintsugi 3D Builder-generated diffuse textures and normal maps, but let's leave that for another day. I hope that helps -
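     For anyone who wants to script the decimate-and-texture part of that workflow, a rough outline with the Metashape Python API (treat it as a sketch - exact argument names vary between Metashape versions, and it only bakes the diffuse texture; the normal and occlusion bakes from the million-poly mesh are still done in the GUI):

        import Metashape

        chunk = Metashape.app.document.chunk

        # Decimate the active model down to ~64,000 faces for sharing
        # (in the GUI we duplicate the model first so the million-poly mesh
        # is kept around for the normal and occlusion bakes).
        chunk.decimateModel(face_count=64000)

        # UV-unwrap the decimated model and bake a 4096x4096 diffuse texture from the photos.
        chunk.buildUV()
        chunk.buildTexture(texture_size=4096)

        # Export the low-poly model as an OBJ; the textures are written alongside it.
        chunk.exportModel("censer_64k.obj")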