
Relight Lab 2024.01 - Different Highlight Region Size When Defining Spheres, Time to Detect Highlights, Understanding Exporting Normals Algorithm



Hello!

I've recently started using the 2024.01 RelightLab release and have noticed a few differences from the 2023.02 release I was using previously.

-The sphere outlines in my scenes are generally circular, albeit not perfectly, due to camera distortion. Firstly, is it recommended to correct lens distortion in another program (like Photoshop) before processing data in Relight? Secondly, in the 2024.01 release, when I outline the spheres using fewer than four points, the inner circle in which highlights are detected is much larger, but when I define the sphere with a large number of points, it becomes significantly smaller (see attached screenshots). Is there a reason for this? This did not happen in the 2023.02 release.

-Also, what is the significance of the 90-degree axis marks in the sphere defined with many points ("SmallSphereDetail2"), and what does the red dot in the upper-left corner of the "LargeSphere" image represent?

-All of my datasets are of similar size and quality, but I've noticed that for some, highlight detection progresses rather quickly (a few minutes), while for others it can take hours. What might be causing this difference?

-Lastly, when using the "Export Normals" function after highlights are detected, what algorithm/process is Relight using to create the normals image? At present, the only selectable option is "least squares", but what exactly does that mean? Is it PTM? HSH? Something else?

In case it is relevant, I've been experiencing these issues both on a Windows 10 and Windows 11 computer.

Apologies for my ignorance, but I greatly appreciate any help or insight anyone can offer!

LargeSphere.jpg

LargeSphereDetail.jpg

SmallSphere.jpg

SmallSphereDetail2.jpg


1 month later...

I have introduced ellipse fitting (but still using the old code for the reflection computation). The major axis should point toward the center of the image; it is drawn to verify that the fitting is working (of course it won't work if the sphere appears as a circle).
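To make the idea concrete, here is a minimal sketch of one way to do a least-squares ellipse fit; this is not the actual RelightLab code, and the function names and the conic parametrization are my own. It fits the general conic a·x² + b·xy + c·y² + d·x + e·y = 1 to the clicked outline points, then recovers the center and axis orientation from the coefficients:

```cpp
// Minimal sketch of a least-squares ellipse (conic) fit; NOT the actual
// RelightLab code, and all names here are mine. We fit the general conic
//   a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
// to the clicked outline points, then recover the center and axis angle.
#include <array>
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

// Gaussian elimination with partial pivoting on an augmented N x (N+1) matrix.
template <int N>
std::array<double, N> solve(std::array<std::array<double, N + 1>, N> M) {
    for (int i = 0; i < N; i++) {
        int p = i;
        for (int r = i + 1; r < N; r++)
            if (std::fabs(M[r][i]) > std::fabs(M[p][i])) p = r;
        std::swap(M[i], M[p]);
        for (int r = i + 1; r < N; r++) {
            double f = M[r][i] / M[i][i];
            for (int c = i; c <= N; c++) M[r][c] -= f * M[i][c];
        }
    }
    std::array<double, N> x{};
    for (int i = N - 1; i >= 0; i--) {
        double s = M[i][N];
        for (int c = i + 1; c < N; c++) s -= M[i][c] * x[c];
        x[i] = s / M[i][i];
    }
    return x;
}

struct Ellipse { double cx, cy, angle; };

// Needs at least 5 points (a general conic has 5 degrees of freedom); extra
// points are averaged by the least-squares fit, so more clicks = more stable.
Ellipse fitEllipse(const std::vector<std::array<double, 2>>& pts) {
    std::array<std::array<double, 6>, 5> M{};   // normal equations A^T A | A^T b
    for (const auto& p : pts) {
        double x = p[0], y = p[1];
        double row[5] = {x * x, x * y, y * y, x, y};
        for (int r = 0; r < 5; r++) {
            for (int c = 0; c < 5; c++) M[r][c] += row[r] * row[c];
            M[r][5] += row[r];                  // right-hand side is 1
        }
    }
    auto s = solve<5>(M);                       // s = {a, b, c, d, e}
    double a = s[0], b = s[1], c = s[2], d = s[3], e = s[4];
    // Center: where the conic's gradient vanishes, [2a b; b 2c][cx cy] = [-d -e].
    double det = 4 * a * c - b * b;
    double cx = (b * e - 2 * c * d) / det;
    double cy = (b * d - 2 * a * e) / det;
    // Orientation of one principal axis; the other is 90 degrees away.
    double angle = 0.5 * std::atan2(b, a - c);
    return {cx, cy, angle};
}

int main() {
    // Toy check: sample an ellipse centered at (100, 50), rotated 30 degrees.
    const double PI = 3.141592653589793, th = PI / 6;
    std::vector<std::array<double, 2>> pts;
    for (int i = 0; i < 12; i++) {
        double t = 2 * PI * i / 12, ex = 40 * std::cos(t), ey = 25 * std::sin(t);
        pts.push_back({100 + std::cos(th) * ex - std::sin(th) * ey,
                        50 + std::sin(th) * ex + std::cos(th) * ey});
    }
    Ellipse E = fitEllipse(pts);
    std::printf("center=(%.1f, %.1f) angle=%.3f rad\n", E.cx, E.cy, E.angle);
}
```

Note that a general conic has five degrees of freedom, so five clicked points are the bare minimum; with more points the least-squares fit averages out clicking error, which may be related to why the detected region changes with the number of points.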

The red dot appears when a highlight could not be detected; it replaces the regular green dot placed on the highlight.

Yes, removing lens distortion is advised if you have wide lenses, at least until I have figured out all the issues with the ellipse fitting and reflection calculations (apart from giving you a less distorted image, which is still a nice thing to have).

Highlight processing time can vary with the position of the sphere: we need to decode each image line by line down to the sphere's position, so a sphere near the top of the frame is much faster to process. In any case, hours is definitely too much; if you can reproduce the issue, I would love to have the sample so I can investigate.
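For context, baseline JPEG decodes strictly top-down, so reaching the sphere's rows means decompressing every scanline above them first. A minimal sketch of that pattern with libjpeg (assumed here for illustration; RelightLab's actual I/O code may differ, and decodeUpTo is a name I made up):

```cpp
// Sketch of row-limited JPEG decoding; assumed for illustration, not
// RelightLab's actual code. Baseline JPEG can only be decoded top-down,
// so every scanline above the sphere must be decompressed to reach it.
#include <cstdio>
#include <vector>
#include <jpeglib.h>

// Decode rows [0, lastRow): cost grows with lastRow, which is why a sphere
// near the top of the frame is cheaper to reach than one near the bottom.
void decodeUpTo(const char* path, unsigned lastRow) {
    FILE* f = std::fopen(path, "rb");
    if (!f) return;
    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);
    std::vector<unsigned char> row(cinfo.output_width * cinfo.output_components);
    JSAMPROW rows[1] = {row.data()};
    while (cinfo.output_scanline < lastRow &&
           cinfo.output_scanline < cinfo.output_height) {
        jpeg_read_scanlines(&cinfo, rows, 1);
        // ... inspect only the rows intersecting the sphere's bounding box ...
    }
    jpeg_abort_decompress(&cinfo);  // stop early; skip everything below the sphere
    jpeg_destroy_decompress(&cinfo);
    std::fclose(f);
}
```

The loop cost grows roughly linearly with lastRow, matching the explanation above; by itself, though, it would not account for an hours-long run.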

Finally, there are many algorithms for normal computation. One of the simpler ones assumes rotational symmetry of the BRDF around the normal and fits a plane (using least squares) to each pixel. Other algorithms are planned; they might take longer (even waaaay longer) but be more robust to bad light sampling. Most of the available software is in Python, or worse uses deep learning, which is a bit of a pain to convert to C++ (or, worse, requires training).
This is pretty close to the PTM (LPTM, actually) way of extracting normals: instead of a plane, a quadric function is fitted, and the normal is then extracted from it.
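To show what the least-squares fit boils down to, here is a toy sketch; it is not RelightLab's actual code, all names are mine, and it assumes "fitting a plane" is read as the classic photometric-stereo linear model I ≈ L·n (rotationally symmetric around n), solved per pixel via the normal equations (LᵀL) n = Lᵀi:

```cpp
// Toy sketch of per-pixel least-squares normal fitting; NOT RelightLab's
// actual code, and all names are mine. Assumption: the per-pixel intensity
// is modeled as I ~ L . n (linear in the 3D light direction), and n is the
// least-squares solution of the stacked system over all lights.
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Solve the 3x3 system A x = b by Cramer's rule (fine for this toy case).
Vec3 solve3x3(const Mat3& A, const Vec3& b) {
    auto det = [](const Mat3& M) {
        return M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
             - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
             + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]);
    };
    double d = det(A);
    Vec3 x{};
    for (int c = 0; c < 3; c++) {
        Mat3 Ac = A;
        for (int r = 0; r < 3; r++) Ac[r][c] = b[r];
        x[c] = det(Ac) / d;
    }
    return x;
}

// Fit one pixel's normal from K light directions and K observed intensities.
Vec3 fitNormal(const std::vector<Vec3>& lights, const std::vector<double>& I) {
    Mat3 A{};  // A = L^T L
    Vec3 b{};  // b = L^T i
    for (size_t k = 0; k < lights.size(); k++)
        for (int r = 0; r < 3; r++) {
            b[r] += lights[k][r] * I[k];
            for (int c = 0; c < 3; c++) A[r][c] += lights[k][r] * lights[k][c];
        }
    Vec3 n = solve3x3(A, b);
    double len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    for (double& v : n) v /= len;  // keep the direction; the length is albedo
    return n;
}

int main() {
    // Toy check: synthesize intensities from a known normal and recover it.
    Vec3 truth = {0.3, -0.2, 0.93};
    std::vector<Vec3> lights = {{0, 0, 1}, {0.7, 0, 0.7}, {0, 0.7, 0.7},
                                {-0.7, 0, 0.7}, {0, -0.7, 0.7}};
    std::vector<double> I;
    for (const auto& l : lights)
        I.push_back(l[0] * truth[0] + l[1] * truth[1] + l[2] * truth[2]);
    Vec3 n = fitNormal(lights, I);
    std::printf("n = (%.3f, %.3f, %.3f)\n", n[0], n[1], n[2]);
}
```

If I read the PTM/LPTM comparison correctly, the quadric version replaces this linear model with a (bi)quadratic in the light direction and takes the normal from the quadric's maximum; the plane fit above is the simpler, linear counterpart.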

