3D model export too big for Sketchfab



Hi. I am looking for a little clarification on how texture files work with .obj or .glb files. I ended up processing an Ultra High Quality model in Metashape, then used that model in Kintsugi 3D to rebuild the texture files. When I attempted to upload to Sketchfab, I realized the model is too big for my account's upload limit. I am not sure whether I can just go back to Metashape, decimate the UHQ model down to 64K faces, and still use the texture files built from the Ultra High Quality model, or whether I need to redo the Kintsugi 3D process using the 64K decimated model. Thanks for any clarification on how texture files relate to meshes of differing quality.

Rich House


Hi Rich, currently, you'd need to reprocess the model in Kintsugi 3D Builder after decimating -- the decimation process typically changes the UVs used by Kintsugi 3D when building textures.  (I'd like to streamline this use case in the future -- perhaps reducing the processing time by reusing some of the previously processed data -- but for now the solution is to just start from scratch.)
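One quick way to see the mismatch for yourself is to compare the `vt` (texture-coordinate) lines of the OBJ exported before and after decimation; once they differ, textures built against the old UV layout won't line up. A minimal stdlib sketch, with made-up OBJ fragments:

```python
def uv_signature(obj_text):
    """Collect the vt (texture-coordinate) entries from OBJ file text."""
    return [line.split()[1:] for line in obj_text.splitlines()
            if line.startswith("vt ")]

# Hypothetical fragments of the same surface before and after decimation;
# re-unwrapping during decimation changes both the count and the values.
original = "v 0 0 0\nvt 0.00 0.00\nvt 1.00 0.00\nvt 1.00 1.00\n"
decimated = "v 0 0 0\nvt 0.10 0.05\nvt 0.90 0.05\n"

if uv_signature(original) != uv_signature(decimated):
    print("UV layouts differ - textures built for the old mesh won't line up")
```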


Rich, not directly answering your Kintsugi 3D question, but if the files aren't massively too big, there may be a short-term workaround. I don't know if it's still the case, but the Sketchfab limit used to apply to the physical upload file size, and while you can upload the model files as generated, Sketchfab also used to accept them as a zip archive. The compression can help if the upload is marginally over the limit (but not if it's orders of magnitude too big!).
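If you want to check how much headroom a zip buys before uploading, the stdlib makes it a quick test. The OBJ content below is synthetic; real OBJ geometry is similarly repetitive ASCII and compresses well, while already-compressed PNG/JPEG textures barely shrink:

```python
import os
import zipfile

# Write a fake ASCII OBJ-like file: repetitive text geometry deflates well.
with open("model.obj", "w") as f:
    for i in range(50_000):
        f.write(f"v {i * 0.001:.6f} {i * 0.002:.6f} {i * 0.003:.6f}\n")

with zipfile.ZipFile("model.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write("model.obj")

raw = os.path.getsize("model.obj")
packed = os.path.getsize("model.zip")
print(f"{raw / 1e6:.2f} MB raw -> {packed / 1e6:.2f} MB zipped")
```

So whether zipping rescues an over-limit upload mostly depends on how much of the total is mesh text versus texture images.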

Dave


Thank you for these replies, Michael and Dave.

Michael, I suspected as much and will be mindful of the intended final viewing environment when I decimate my model and build its textures in Metashape before working on it in Kintsugi 3D.

Dave, that is a very good tip. This particular model was a fair amount too big, but I have previously had issues where the PNG texture map format pushed the total file size just over the limit and I had to re-export in JPEG format instead. I'll try a zip archive the next time that happens.

Rich


  • 3 weeks later...

Rich, here at Minneapolis Institute of Art we have a specific (and I think common) workflow for our models from Metashape:

- we'll align photos and refine our sparse cloud using the CHI / BLM protocols;

- we'll either build a model from the depth maps (most of the time now) or from a dense cloud;

- the first model will be around a million polygons.

I'll then clean that mesh as necessary, Close Holes if the model needs it, then:

- decimate (and duplicate) the model down to around 64,000 polygons;

- have Metashape make a diffuse texture map from the photos (usually our textures are 4096x4096-pixel JPEGs);

- have Metashape make the normal, occlusion and displacement maps - and for all those maps it references the larger (million-poly) mesh.

- export that 64,000-poly model as an OBJ, and most of the textures are exported alongside it. I haven't seen Metashape export the displacement map, so I need to look into that.

- use the 64,000-poly model for sharing.

And - here's my actual point - your object viewer (Sketchfab or Kintsugi 3D Viewer in this case) is using a lower-resolution model but using the occlusion and normal textures, which are derived from the higher-resolution mesh, to give the illusion that you're looking at a high-res model. I think a million-poly OBJ is around 100MB, but a 64,000-poly OBJ is around 10MB.
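Those ballpark sizes follow from OBJ being a plain-text format: roughly one `f` line per face plus `v`/`vt`/`vn` lines per vertex, each a few dozen bytes. A back-of-envelope estimator (the 40-bytes-per-line figure and the faces ≈ 2 × vertices assumption for a closed triangle mesh are rough):

```python
def estimate_obj_size_mb(faces, bytes_per_line=40):
    """Rough ASCII OBJ size: a closed triangle mesh has about faces / 2
    vertices, each emitting v, vt, and vn lines, plus one f line per face."""
    vertices = faces / 2
    lines = faces + 3 * vertices           # f lines + v/vt/vn lines
    return lines * bytes_per_line / 1e6    # decimal megabytes

print(estimate_obj_size_mb(1_000_000))  # ~100 MB, matching the figure above
print(estimate_obj_size_mb(64_000))     # single-digit MB
```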

In separate forum posts we should talk about Kintsugi 3D Builder-generated diffuse textures and normal maps, but let's leave that for another day.

I hope that helps -

 


As far as decimation goes, we're working similarly to Charles. Some of the models we make have much higher face counts. At the moment I'm targeting 160k faces; I'm still experimenting to find the sweet spot, though Charles has done this a lot more. I'm erring a little on the large side because I'm looking at other resizing options that might be over the horizon. We try not to decimate more than 10x in one go, so if we have an 8-million-face model we might go 8m -> 1m -> 160k. We also might need to evaluate how much high-frequency detail is in the 8m-face mesh. It may often be better to build normals from the 1m-face mesh, since the 8m-face mesh could make them too noisy (though Kintsugi does seem to remake, or at least improve, the normals, so it might be a moot point).
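The "no more than 10x per pass" rule can be turned into a small planning helper. This is just a sketch of the arithmetic; it spaces the passes geometrically, so for 8m -> 160k it suggests a balanced 8m -> ~1.1m -> 160k rather than the exact 8m -> 1m -> 160k used in practice:

```python
import math

def decimation_steps(start_faces, target_faces, max_ratio=10):
    """Plan intermediate face counts so that no single decimation pass
    reduces the mesh by more than max_ratio."""
    total = start_faces / target_faces
    passes = max(1, math.ceil(math.log(total, max_ratio)))
    per_pass = total ** (1 / passes)
    counts = [round(start_faces / per_pass ** i) for i in range(passes)]
    return counts + [target_faces]

print(decimation_steps(8_000_000, 160_000))  # [8000000, 1131371, 160000]
```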


@KurtH Because Kintsugi refines the normals but can use the Metashape normal maps as input, that probably reinforces the workflow of building Metashape's normal map from the 1m model. You'd rather start with a cleaner model that might be lacking detail than one with noisy, incorrect detail that could persist through the Kintsugi refinement step -- unless you decide to discard the Metashape normal maps altogether and start from scratch in Kintsugi.

@Charles Walbridge I didn't realize that Metashape can also generate displacement maps -- that might be interesting to look into supporting in Kintsugi down the road; among other things, it could improve texture projection alignment, resulting in sharper / more accurate texture maps.


@Richard House That's an open question; we need to do some more experiments -- does starting with a Metashape normal map lead to better results than not exporting/importing the normal map and starting from scratch? It probably depends on how accurate the Metashape normal map is -- or maybe it doesn't matter at all -- but that's mostly speculation on my part at this point. I should be clear, though, that in most cases I would still recommend optimizing the normal map in Kintsugi regardless (which is why it might be a moot point) -- except in edge cases like the example you shared, where oddities in the source images messed up the Kintsugi normal map; in that case you'd disable normal optimization entirely. Otherwise, it's just a question of whether you start with the Metashape normal map and optimize from there, or start from the assumption of a smooth mesh. Let me know if this doesn't make sense.

What I was trying to say to Kurt is that if you are going to import normal maps from Metashape, it's probably safer to bake the normal map from a lower detail version of the model (i.e. 1m rather than 8m) if there's concern about noise in the 8m map - since you don't want that noise to get baked into the Metashape normal map and potentially persist in some fashion in the Kintsugi normal map even if you do still optimize in Kintsugi.
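The "noise bakes in" concern can be illustrated with a toy 1D analogy (pure Python, nothing Metashape-specific): treat the mesh surface as a height profile and a baked normal map as its slopes; slopes computed from the noisier profile stay noisier.

```python
import math
import random

random.seed(0)

# 1D stand-ins for a surface: a clean profile (think decimated 1m mesh)
# and the same profile with high-frequency noise (think noisy 8m mesh).
n = 200
clean = [math.sin(i / 20) for i in range(n)]
noisy = [h + random.uniform(-0.05, 0.05) for h in clean]

def slopes(profile):
    """Finite-difference slope: a 1D analogue of a baked normal map."""
    return [b - a for a, b in zip(profile, profile[1:])]

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

# "Normals" baked from the noisy surface carry the noise with them.
print(variance(slopes(noisy)) > variance(slopes(clean)))  # True
```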

