
KurtH

Members
  • Posts: 22
  • Joined
  • Last visited
  • Days Won: 1

KurtH last won the day on September 5 2023

KurtH had the most liked content!

Profile Information

  • Location: DC


KurtH's Achievements

Apprentice (3/14)

Recent Badges

  • Collaborator (Rare)
  • Conversation Starter (Rare)
  • First Post (Rare)
  • Week One Done (Rare)
  • One Month Later (Rare)

Reputation: 1

  1. A 90-degree click would be nice (though at least I can just type in 90 or -90 if needed), but the option to respect the original orientation would be a much bigger QoL improvement (though it's probably a few more lines of code and/or re-thinking of some processes than adding a 90-degree button).
  2. I like the idea of having a primary view to give models a default orientation. But because we apply no rotation to the images when processing, half the time the images used may be sideways, and we often already orient the object in Metashape when optimizing the alignment. I wonder if it would be valuable to have an option to not orient to the image and instead respect the model's orientation (though that may then raise yet another question of axes: Y-up vs. Z-up, left-handed vs. right-handed).
  3. I'm uncertain whether the initial image-based view is projecting one image, a couple, or far too many at a single position. But if it's one or only a few, a simple view toggle that lists the images currently being projected at the current view, perhaps with the name (or list of names) shown in the top corner, might be enough to help us narrow things down: "oh wait, at this angle I'm seeing a photogrammetry target projected onto the model, I must have missed that in processing; I should either disable that image or mask it."
  4. It's clear that a number of projects will need additional masking so that occlusions, or in some cases undesirable reflections from the object itself (or from other objects in the room that cannot be mitigated otherwise), don't negatively impact the processing. Two features that hopefully wouldn't be too difficult to implement down the road: first, in the initial image-based render, a view option to see which image(s) are being used at that view, so if you see a big white pillow showing up on a dark bronze at a specific angle, you know exactly which image to look for to mask. Second (and this may be a feature I have missed), a way to force the Kintsugi builder to reload all (or even better, just a selection) of the images if we did re-mask; at the moment I'm just re-creating the project. Of course more automated or on-model image masking would be even nicer, but that's understandably far more complex and probably not in a reasonable scope.
  5. The files all had unique names that denoted whether each image was of the object on its side or from circuits (or details) of a turntable set, for example: 17661-20240726-24mmOCF-OnesideCalSet-001.jpg, 17661-20240726-24mmOCF-c01-01.jpg, 17661-20240726-50mmOCF-OnesideCalSet-001.jpg, 17661-20240726-50mmOCF-c01-01.jpg. Only JPGs were used in this model, no PNGs or TIFFs. The only thing I could think of is that the Agisoft file was referring to JPGs spread across 4 different folders. Knowing I could only reference one JPG folder when using the Camera.XML/Model.OBJ manual setup, I copied all the JPGs to one folder before using the manual mode, and then it worked fine.
  6. Had another model where the images were not being aligned properly. This was using the newer version of the Kintsugi builder and the Metashape import. I THINK the problem was that the Metashape project had the JPG files spread across multiple folders (one for 24mm on a turntable, one for 50mm on a turntable, one for 24mm with the object lying down to get the bottom, and one at 50mm for the bottom). I copied all the JPGs to one folder and used the manual OBJ/Cameras.XML/JPG-folder setup, and then it worked fine (see the copy-script sketch after these posts).
  7. Thanks. I'll play with EEVEE a bit more. The method I mentioned above was trying to get the closest approximation in Cycles (which historically produced more realistic-looking renderings, albeit slower). But as you point out, a recent update to Cycles changed the specularity options.
  8. Hi Rich, I've been working with some smaller objects that require getting close. A few things that I've been doing:
     Like Charles said, try to get back a little bit. Not only will you have light fall-off issues, you'll also have DoF issues. So we've been capturing sets at 24mm but backed up 2 feet, which leaves a lot of empty space in the frame (but gives a solid basis for alignment and scale accuracy), and then following up with a 50mm set.
     I'm trying to fashion a nice potato masher for a Picolite like Charles has, but in the meantime we've been using 580EX flashes, which at 24mm aren't the most perfectly even illumination. So once I fix the distance of the lens, I tape a color checker to a white wall, square up and focus on the color checker, take a shot, then slide to the left/right and take a shot of the white wall (same distance, still squared up). I now have a file I can use to create an LCC in C1 (or a flat field in Lightroom) along with the color checker file. I think that helps a bit. (CHI's guides recommend this for turntable work.)
     If the object is not filling the frame, mask. Either create a low-res mediocre model and have Agisoft create masks of the object, or make some Photoshop actions. If on a turntable, even if Agisoft is set to "ignore static tie points" it will still waste its key point count on areas of the static background that won't be used.
     I have been rotating with camera flash, particularly to fill in areas of occlusion or to get some more lighting angles, and they seem to align alright. You can have a 2nd set of images to aid with alignment and then disable them before texturing. We shot one set with a cross-polarized ring flash and another set with a non-polarized speedlight. We were able to align everything, but didn't find the polarized set gave us substantially more, so for subsequent objects we've been using the cross-polarized ring flash less.
     That can be the case, but working about 2 feet back I haven't had as much issue.
     When processing the RAWs, try to use a flatter/linear profile or tone curve. Don't be afraid if you get some small areas of blown-out speculars, but you may need to capture a few more images so those areas are covered at more of an angle where they don't blow out. I've had some things where occlusions put areas completely in shadow; I took additional photos with extra rotations and positions. Keep in mind that every point needs to be lit in at least 3 (preferably 9+) images from different positions; if an area is in shadow in an image, that image doesn't help the build.
     I've had some promising results with ring flash, but I haven't tested enough on super-shiny objects (if the specular shows up as a donut, what happens? I don't know). The nice thing with the ring flash is that the lighting calibration step can be skipped, as the light is basically at 0-0. Use as flat a profile as you can to minimize the contrast, and accept that there will be some blown-out areas and nearly blocked-up areas.
     I really think that would be problematic the way things are working, but the ring flash would be a better option.
     Don't hesitate to give me a call at some point; you helped me out a ton early on with RTI and photogrammetry. If you want me to take a look at anything I'm happy to repay the favor if I can.
  9. So we've been playing with getting the object to render nicely for producing movies of spins with better lighting quality. There's still some room for improvement, but the results have been pleasing thus far. Here's where we're at with settings; if I can get a screen grab of the nodes I'll add that later. I am curious whether we can get the GLSL shaders Kintsugi is using to work in Blender... but I have a way to go to level up my skills to that point. For the Blender renders we had to mess a bit with the textures. We used the Principled BSDF in Blender 4.1 with these connections (see the bpy sketch after these posts):
     Diffuse -> Base Color (depending on the object we may have had a slight hue/sat adjustment along the way to tweak the color)
     ORM -> isolate the green (roughness) channel -> Roughness
     Specular -> Specular IOR Level
     Normal -> Normal
     (We had Blender export TIFFs and then used Premiere to turn those TIFFs into video, as Blender's video rendering can sometimes add posterization on the gray background we use.)
  10. We had a 2nd model recently that had bad alignment without masking. We were able to export PNGs with alphas, which solved the problem, but for high-res images that is a bit painful (PNGs save VERY slowly and are large). Ideally, a check-box/preference to optionally read masks from the Agisoft file when using that workflow would be a very nice quality-of-life improvement; alternatively, Kintsugi could read a folder of black/white mask files named the same as the images, or read TIFF files with alpha channels. (A sketch of the interim mask-to-alpha step is after these posts.)
  11. Thanks for the idea, Charles. I ran all the image masks to create JPGs with the object floating in white space to see how it would behave, and the images were still misaligned. When I have a moment I'm going to completely re-align the project and see what I can get, and probably run the model through at a few stages: 1) straight out of Metashape, holes and all; 2) after MeshLab cleaning and rebuilding textures in Metashape; 3) bottom hole filled in Blender and retextured in Metashape. Knowing at what stage it breaks will give more clues to where the problem is.
  12. I have a model that is causing me a number of headaches. It's something I shot before thinking about Kintsugi, but I captured polarized and non-polarized ring-flash images. It was shot on a turntable, but with a square wood platform (not built as part of the model) that sticks out and occludes part of the sculpture in the lowest-angle circuits. Additionally, we were not able to capture the bottom of this object, so I filled the hole and brought it back into Agisoft. To make sure the issues I've been having weren't due to me somehow misaligning the model while hole-filling, I re-decimated the model in Agisoft and built a new set of textures to confirm the images are aligned in Agisoft. I then exported the model and cameras (only the non-polarized ones), and triple-checked that I grabbed the folder of non-polarized JPGs used in this last round in Agisoft. I am unsure where the issue is coming from, but I'm wondering if the occlusion (which is masked out in Agisoft) combined with the hole filling is causing problems. Attached is an image showing the object on the turntable with the platform occluding part of it, along with two screen captures of the initial state in Kintsugi showing the images misaligned (proceeding anyway and trying to render textures does not improve things; I tried in case it was just an issue with the initial render). I can package up the images, object, and cameras. If you want the whole Agisoft project, I can probably do that, but it might be next week so I can get a colleague to send it via a file transfer service.
  13. One thing I've been thinking about as I do more Kintsugi tests: the CHI scale bars are kind of bright and reflective, which may not be ideal for on-camera flash. Obviously they can be removed from the scene at the lower angles, but I'm wondering if dark/black scale bars would be advantageous. For Carla/Mark: I don't know how much revenue the scale bars make for you, but this could be an opportunity for a "pro" scale bar in black. If it's a higher-end product, there are places that make coded targets printed in a finer-resolution process for very high resolution photogrammetry; if those were tipped into the scale bars, it would be a way to justify a higher price for a useful but smaller-market item.
  14. As far as decimation goes, we're working similarly to what Charles is doing, though there are some models we make that have much higher face counts. At the moment I'm targeting 160k faces... still playing a little to find the sweet spot; Charles has done this a lot more, and I'm erring a little on the large side as I'm looking at other resizing options that might be over the horizon. We try not to decimate more than 10x in one go, so if we have an 8-million-face model we might go 8m -> 1m -> 160k (a small sketch of this rule is after these posts). And we might need to evaluate how much high-frequency data is in the 8m-face mesh; often it may be better to build normals on the 1m-face mesh, as the 8m-face mesh could make them too noisy (though Kintsugi does seem to remake, or at least improve, the normals, so it might be a moot point).
  15. Went ahead and tested it... Just removing the images from the chunk got it down to around 860 images; I exported a new Camera Locations file and it works fine (I didn't need to remove the JPGs from the folder), and now after loading, the object shows up. If it's graphics-related, for your notes: we are running dual RTX 4090s with 24 GB each. And it turns out I got this to load, after cutting down the images, while my intern had a rather large alignment running in Metashape at the same time (which should be leveraging the GPUs), though it's probably something dumber, like Nvidia wanting you to go to an A-series card.
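
A minimal sketch of the copy-everything-into-one-folder workaround from posts 5 and 6, assuming Python is handy; the folder names below are placeholders, not the actual project layout, and the duplicate-name check simply relies on the post's note that filenames were unique across folders:

```
# Copy JPGs scattered across several Metashape image folders into one folder
# so the manual Cameras.XML/Model.OBJ setup can reference a single JPG folder.
# Folder names are hypothetical placeholders.
import shutil
from pathlib import Path

source_folders = [
    Path("24mm_turntable"),
    Path("50mm_turntable"),
    Path("24mm_bottom"),
    Path("50mm_bottom"),
]
destination = Path("all_jpgs")
destination.mkdir(exist_ok=True)

for folder in source_folders:
    for jpg in sorted(folder.glob("*.jpg")):
        target = destination / jpg.name
        if target.exists():
            # Filenames are expected to be unique across folders; warn if not.
            print(f"Skipping duplicate name: {jpg.name}")
            continue
        shutil.copy2(jpg, target)
        print(f"Copied {jpg} -> {target}")
```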
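
A rough bpy sketch of the node hookup described in post 9, assuming Blender 4.x (where the "Specular IOR Level" socket replaced the old "Specular" input); the texture file paths and material name are placeholders:

```
# Wire Kintsugi-style texture maps into a Principled BSDF (Blender 4.x).
# Texture paths and the material name are hypothetical placeholders.
import bpy

mat = bpy.data.materials.new(name="KintsugiSpin")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
bsdf = nodes["Principled BSDF"]

def tex(path, non_color=False):
    """Create an image texture node for the given file."""
    node = nodes.new("ShaderNodeTexImage")
    node.image = bpy.data.images.load(path)
    if non_color:
        node.image.colorspace_settings.name = "Non-Color"
    return node

# Diffuse -> Base Color
diffuse = tex("//textures/diffuse.png")
links.new(diffuse.outputs["Color"], bsdf.inputs["Base Color"])

# ORM -> isolate the green (roughness) channel -> Roughness
orm = tex("//textures/orm.png", non_color=True)
split = nodes.new("ShaderNodeSeparateColor")
links.new(orm.outputs["Color"], split.inputs["Color"])
links.new(split.outputs["Green"], bsdf.inputs["Roughness"])

# Specular -> Specular IOR Level (the 4.x name for the old Specular socket)
specular = tex("//textures/specular.png", non_color=True)
links.new(specular.outputs["Color"], bsdf.inputs["Specular IOR Level"])

# Normal -> Normal Map node -> Normal
normal_tex = tex("//textures/normal.png", non_color=True)
normal_map = nodes.new("ShaderNodeNormalMap")
links.new(normal_tex.outputs["Color"], normal_map.inputs["Color"])
links.new(normal_map.outputs["Normal"], bsdf.inputs["Normal"])
```

Assign the material to the imported mesh and render as usual; the TIFF-export-then-Premiere step from the post is unchanged.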
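
A small sketch of the interim workaround mentioned in post 10 — burning same-named black/white masks into the images' alpha channels to produce PNGs — assuming Pillow is installed; the folder names are placeholders:

```
# Apply black/white mask files (same base name) as alpha channels and save PNGs.
# Folder names are hypothetical placeholders.
from pathlib import Path
from PIL import Image

image_dir = Path("images")      # source JPGs
mask_dir = Path("masks")        # black/white masks with matching base names
out_dir = Path("masked_pngs")
out_dir.mkdir(exist_ok=True)

for image_path in sorted(image_dir.glob("*.jpg")):
    mask_path = mask_dir / (image_path.stem + ".png")
    if not mask_path.exists():
        print(f"No mask for {image_path.name}, skipping")
        continue
    img = Image.open(image_path).convert("RGB")
    mask = Image.open(mask_path).convert("L")   # white = keep, black = discard
    img.putalpha(mask)
    img.save(out_dir / (image_path.stem + ".png"))
```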
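
A tiny sketch of the "no more than 10x per decimation step" rule of thumb from post 14; the face counts are just the illustrative numbers from that post:

```
# Split a decimation into steps so no single step reduces faces by more than 10x,
# in the spirit of the 8m -> 1m -> 160k example.
import math

def decimation_schedule(start_faces, target_faces, max_ratio=10):
    """Return face counts from start to target with at most max_ratio per step."""
    total_ratio = start_faces / target_faces
    n_steps = max(1, math.ceil(math.log(total_ratio, max_ratio)))
    per_step = total_ratio ** (1 / n_steps)  # equal reduction ratio per step
    schedule = [start_faces]
    for i in range(1, n_steps):
        schedule.append(round(start_faces / per_step ** i))
    schedule.append(target_faces)
    return schedule

print(decimation_schedule(8_000_000, 160_000))  # roughly [8000000, 1131371, 160000]
```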