Michael Tetzlaff

Posts

  1. It's actually configurable in the settings, but I think by default it's 5 images blended. The tricky thing is that in theory it actually varies by pixel (depending on the focal length of the real camera compared to the virtual camera, it's possible that at extreme angles the weights, and even the views selected, would be different -- and this is even more true if you turn relighting on and the normal vector starts playing a role). But I think we could still figure out a way to make it work, since for most pixels, with relighting off, the weights and view selection shouldn't be too different.
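
To make the per-pixel weighting concrete, here's a rough sketch of the idea (illustrative only, not Kintsugi's actual algorithm -- the cosine-based weighting and top-k selection are assumptions):

```python
import numpy as np

def blend_weights(virtual_dir, view_dirs, k=5):
    """Illustrative sketch: at one surface point, pick the k captured
    views whose directions best match the virtual camera direction and
    weight them by angular proximity. Because view_dirs (directions
    from the point to each camera) change across the surface, both the
    weights and the selected views can vary per pixel."""
    virtual_dir = virtual_dir / np.linalg.norm(virtual_dir)
    view_dirs = view_dirs / np.linalg.norm(view_dirs, axis=1, keepdims=True)
    cosines = view_dirs @ virtual_dir        # alignment with each real view
    top = np.argsort(cosines)[-k:]           # indices of the k best views
    w = np.maximum(cosines[top], 0.0)
    return top, w / max(w.sum(), 1e-8)       # normalized blend weights
```
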
  2. Hi Kurt, Both of those are great ideas. Currently, the only way to force it to reload all images is to clear the cache under File > Settings > Cache Settings. This will reset all projects, though (which shouldn't really matter apart from some longer loading times assuming all photos are still there). Clearing the cache for just a single project would be a good feature that we could probably implement pretty easily. I'll add that along with the view filtering idea to our backlog.
  3. @sjoerv Ideally, yes. We've used Kintsugi successfully with projects that included photos that did cut off part of the object (@Charles Walbridge might be able to comment more on that), but there is always a chance of seams if such photos are included (for now, until we have time to implement a better solution for this use case). If you have a dataset where you're seeing issues because of that and are willing to share it, we might be able to use it as a test case for addressing this situation in a future version of Kintsugi.
  4. Interesting -- if you are willing to zip up the project that's causing problems and send it to me, I or one of my students could look into what's causing the problem for the full project import method. And I still need to look at the other dataset you sent me, assuming you didn't figure out what the problem was there. It's been on my to-do list, but I just haven't had the time.
  5. Sounds good -- I'll also add looking into support for Cycles to the backlog. It looks like it might be possible using an Open Shading Language script.
  6. Thanks, Kurt. In the original Metashape project, were there images with the same name and folder but different file extensions (i.e. IMAGE0001.png and IMAGE0001.jpg in the same directory)? That's my best guess as to the source of the issue -- if you had two capture sessions differentiated only by file extension in the same folder, Kintsugi strips the file extension and tries all recognized extensions in a predetermined (arbitrary) order.
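
To illustrate the failure mode (the extension list and search order below are hypothetical, just to show the collision -- they are not Kintsugi's actual ones):

```python
from pathlib import Path

# Hypothetical recognized extensions and search order.
RECOGNIZED = [".png", ".tif", ".jpg", ".jpeg"]

def resolve_image(folder: Path, name_from_project: str):
    """Sketch of extension-stripping lookup: the first matching
    extension wins, so IMAGE0001.png would always shadow
    IMAGE0001.jpg in the same folder."""
    stem = Path(name_from_project).stem      # extension is stripped
    for ext in RECOGNIZED:
        candidate = folder / (stem + ext)
        if candidate.exists():
            return candidate
    return None
```
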
  7. Hi, thanks for trying out Kintsugi 3D! Can I first ask where you're viewing the model in the Kintsugi screenshot? Is it in Kintsugi 3D Viewer, Kintsugi 3D Builder, Sketchfab, Blender, or somewhere else? My first instinct is that it looks like it might actually be a shadow being cast by a light source. I'd first see if you can move the light and check whether the line moves. If it is truly baked into the textures, sometimes that can happen when you have a photograph in the input set that cuts off part of the object -- this causes a seam between the part of the object visible in the photo and the part not visible. One temporary solution, if you can identify the image that might be causing this, would be to edit the photo in Photoshop and add an alpha mask that fades to transparent at the edge of the photo -- Kintsugi 3D Builder will read the alpha channel and decrease the influence of the masked/semi-transparent pixels accordingly. I'd like to eventually have a more robust solution integrated into Kintsugi 3D for scenarios like this.
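
If you'd rather script the mask than paint it in Photoshop, here's a minimal sketch using Pillow (the feather width is an arbitrary starting point, and the file names are placeholders):

```python
from PIL import Image, ImageDraw, ImageFilter

def add_edge_fade_alpha(src_path, dst_path, feather_px=100):
    """Add an alpha channel that fades to transparent near the photo
    border, so Kintsugi 3D Builder down-weights pixels where the
    object is cut off at the edge of the frame."""
    img = Image.open(src_path).convert("RGBA")
    w, h = img.size
    # Opaque rectangle inset from the border, blurred for a soft fade.
    mask = Image.new("L", (w, h), 0)
    ImageDraw.Draw(mask).rectangle(
        [feather_px, feather_px, w - feather_px, h - feather_px], fill=255)
    mask = mask.filter(ImageFilter.GaussianBlur(feather_px / 2))
    img.putalpha(mask)
    img.save(dst_path)  # save as PNG so the alpha channel is kept

add_edge_fade_alpha("IMAGE0001.png", "IMAGE0001_masked.png")
```
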
  8. Thanks for sharing! Glad that was able to resolve the issue, and I agree that the specular count = 4 version looks the best.
  9. Hi Kurt, Sorry for the delay in responding. Here's what I expect will give you the best results in Blender:

     First of all, I suspect you've already figured this out (or maybe we already had an exchange about this), but for the benefit of others: there's a known issue where Blender does not open GLB files exported by Kintsugi. I don't know if this is an issue with Kintsugi or with Blender -- we use a third-party library for saving the GLB files and I haven't dug into it deeper than that yet. The workaround is just to import the OBJ or PLY exported from Metashape, which should be the same geometry but without the textures set up.

     For the surface shader type, I would recommend using the "Specular BSDF" rather than "Principled BSDF." This only works with the EEVEE renderer, so it won't work if you need to use Cycles. But it's going to be the most accurate to Kintsugi's export, comparable to Sketchfab for material accuracy but with Blender's improved lighting capabilities.

     In general, you want to hook the textures up as follows:
     • diffuse.png -> Base Color
     • specular.png -> Specular
     • roughness.png -> Roughness
     • normal.png -> Normal

     However, there are a few important adjustments to make besides just hooking those up. First, make sure each texture has its color space set correctly:
     • diffuse.png: sRGB
     • specular.png: sRGB
     • normal.png: Non-Color
     • roughness.png: Non-Color

     Second, by default, Blender will try to use the alpha channel of roughness.png for the Roughness map. To fix this, you need to open the "Shader Editor" and connect the "Color" pin from roughness.png to the "Roughness" pin on the Specular BSDF node (by default it will use the Alpha pin instead). Here's what it should all look like (with the Roughness fix circled in the attached screenshot).

     Regarding GLSL shaders, we'd need to go into the Blender source code or at least write a plugin for Blender in order to support custom shaders -- which could be done in theory, but would be a more involved project that probably isn't going to be in the priority backlog right now (unless I come across an honors student interested in taking it up or something like that).
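
For anyone who wants to script that node setup instead of building it by hand, here's a sketch using Blender's Python API (node and socket names as in recent Blender releases; the texture paths are placeholders, and this is an illustrative sketch rather than an official Kintsugi script):

```python
import bpy

# Texture -> (target socket on the Specular BSDF, color space), per the post above.
TEXTURES = {
    "diffuse.png":   ("Base Color", "sRGB"),
    "specular.png":  ("Specular",   "sRGB"),
    "roughness.png": ("Roughness",  "Non-Color"),
    "normal.png":    ("Normal",     "Non-Color"),
}

mat = bpy.data.materials.new("KintsugiMaterial")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

out = nodes.new("ShaderNodeOutputMaterial")
spec = nodes.new("ShaderNodeEeveeSpecular")   # the EEVEE-only Specular BSDF node
links.new(spec.outputs["BSDF"], out.inputs["Surface"])

for filename, (socket, space) in TEXTURES.items():
    tex = nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images.load("//" + filename)  # path relative to the .blend
    tex.image.colorspace_settings.name = space
    if socket == "Normal":
        nmap = nodes.new("ShaderNodeNormalMap")        # tangent-space normal map
        links.new(tex.outputs["Color"], nmap.inputs["Color"])
        links.new(nmap.outputs["Normal"], spec.inputs["Normal"])
    else:
        # Use the Color output, never Alpha -- this is the roughness fix above.
        links.new(tex.outputs["Color"], spec.inputs[socket])
```
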
  10. I think the issue here is that Kintsugi never gets a good head-on shot of the backside of the rings, so it doesn't have good specular data there. I think it's probably overfitting to the images that it has there. One more thing you could try would be to reduce the "specular count" when doing the process textures step, to maybe 4 or even 2. That will force it to use fewer distinct materials when generating the textures, which might be enough to make that bright yellow one go away. Note that we haven't tested that feature much recently, so if you run into any strange issues (e.g. it gets a lot worse after doing that), check in here as there might be a bug to fix. In the future, I want to add a feature to let you manually edit the materials Kintsugi "discovered" so that you can discard ones that you know are problematic... but that's probably a 2.0 feature that's a couple of years away at least, to ensure that we do it right with a good user experience.
  11. Thanks for the feedback, Kurt. I put that in the backlog earlier, but will note that it's a higher priority QoL feature.
  12. You can either suppress the highlights with tonemapping and then try to bring them back in Kintsugi using tone calibration -- or redo the tonemapping after processing in Metashape (and still do tone calibration in Kintsugi, but it will have less of an effect). The one caveat about Kintsugi's tone calibration is that it doesn't have any data from the ColorChecker beyond "diffuse white" -- so it has to assume a linear sRGB tonemapping curve beyond that point. We haven't done a good head-to-head experiment between using Kintsugi's tone calibration vs. just going back to a linear sRGB-encoded image -- so I can't say definitively how much this matters. (@Charles Walbridge, maybe we could look into this next time you have a dataset that might be good for testing this?)

     Regarding cropping, I believe that Carla recommends (in general, for all photogrammetry projects) not to do cropping. As mentioned above, I'm not sure if one set or two sets is better -- Kintsugi will do its best if you have heavily tonemapped images, but isn't necessarily perfect -- and I'd love to hear about the results if anyone wants to do a head-to-head comparison.

     For the box with holes, I think that's going to be tricky even with conventional photogrammetry. You could try using some black cloth in the interior, but it might also just be something that requires some manual editing. I have less experience with the actual photogrammetry process than many on this forum, but I've always found holes like that to be extremely difficult to get Metashape to model correctly (it likes to fill holes whether you want it to or not).

     I think I've answered all your questions, Rich -- let me know if I missed something.
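
For reference, this is the standard sRGB decoding curve that the calibration would be assuming beyond the diffuse-white point (the standard IEC 61966-2-1 formula, not Kintsugi-specific code):

```python
def srgb_to_linear(c: float) -> float:
    """Standard sRGB decoding: maps an encoded value in [0, 1]
    back to linear light (IEC 61966-2-1)."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4
```
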
  13. Hi Rich, to add to what Charles said: As Charles said, definitely don't do TTL. You want all exposure-related settings (ISO, f-stop, exposure time, flash power) set to manual and locked down; otherwise Kintsugi doesn't know if the difference is due to the material of the object or the lighting (and will assume it's from the object's material).

     Don't worry too much about whether the image is underexposed, especially if the object is oriented away from the camera and flash -- those angles are less important for Kintsugi since they don't have much highlight information (and it aggregates the texture information from all images -- unlike Metashape, which stitches together what it thinks are the "best views"). If the images seem too underexposed for Metashape, you can always tonemap them differently for Metashape than for Kintsugi -- so long as you are working from the same raw images and don't do any cropping, the camera calibration and 3D model will still be valid. As Charles noted, the tone calibration step in Kintsugi CAN undo tonemapping applied consistently across all images -- the key is that you need to do the same exposure / tonemapping for all images in the set.

     I think what Charles was trying to get at with the distance / light size combination is that you want the light to act as small as possible, so that it's as close to a "point light" as possible. This isn't the biggest deal if it doesn't happen -- it's a what-you-see-is-what-you-get situation: if you use a larger light or have it closer to the object, you'll get more diffuse highlights that can't be sharpened easily in software (whereas, conversely, it is possible in theory to simulate softer/larger lights if you capture the "point light" reflectance).

     For the hotshoe flash, you'd want to use the "light calibration" task in the Kintsugi workflow, which was in the 2and3d workshop and the documentation... but is admittedly still not intuitive at all from a user experience perspective. But if you can learn how to do that, you can compensate for the offset and mask out the self-shadows. Of course, if the hotshoe flash is causing problems in Metashape, this won't help with that. Go for a ring light if you feel more comfortable with that! We'd like to get more feedback on how the ring light workflow works.
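
To put a number on "acts as small as possible": what matters is the light's apparent angular size from the object, which shrinks with distance. A quick back-of-the-envelope calculation (the flash size and distances are made-up examples):

```python
import math

def angular_size_deg(light_diameter_m: float, distance_m: float) -> float:
    """Apparent angular diameter of a light as seen from the object;
    smaller means closer to an ideal point light."""
    return math.degrees(2 * math.atan(light_diameter_m / (2 * distance_m)))

# Hypothetical: a 5 cm flash head at 1 m vs. the same flash at 3 m.
print(angular_size_deg(0.05, 1.0))  # ~2.9 degrees
print(angular_size_deg(0.05, 3.0))  # ~1.0 degree
```
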
  14. Kintsugi should read in the alpha channel of a PNG. We could potentially support other workflows for importing masks -- that was asked in another thread -- but that's not currently a feature in the software. For now, if you're able to put the masks in the alpha channel, that's the best workflow for the time being.
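
If your masks live in separate files, here's a minimal Pillow sketch for baking them into the alpha channel until a dedicated import feature exists (the file names are hypothetical):

```python
from PIL import Image

def bake_mask_into_alpha(photo_path, mask_path, out_path):
    """Copy a separate grayscale mask into the photo's alpha channel
    so Kintsugi can pick it up from the PNG."""
    photo = Image.open(photo_path).convert("RGBA")
    mask = Image.open(mask_path).convert("L").resize(photo.size)
    photo.putalpha(mask)
    photo.save(out_path)  # save as PNG to preserve the alpha channel

bake_mask_into_alpha("IMAGE0001.jpg", "IMAGE0001_mask.tif", "IMAGE0001.png")
```
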
  15. Regarding masks: Kintsugi should see alpha channel masks. We could also add the ability to import masks stored as separate files -- that feature was in the code a long time ago and got removed in a UI update (predating even the Kintsugi development push, I think). If we do that, it should be able to handle different file extensions (TIF vs. JPEG vs. PNG). We should also look into importing the masks from the PSX. Let me know if you discover anything, Kurt. I'm on vacation for a couple of weeks with my family, so I'm not sure when I'll get around to looking at your bugged dataset -- it is one of my top priorities when I find some time to work on Kintsugi.