ponchio

Members
  • Posts: 10
  1. I have introduced ellipse fitting (but I am still using the old code for the reflection computation). The major axis should point toward the center of the image (this is drawn to verify that the fitting is working... it won't, of course, work if the sphere appears as a circle). The red dot appears when a highlight could not be detected, in place of the regular green dot placed on the highlight. Yes, removing lens distortion is advised if you have wide lenses, at least until I have figured out all the issues with the ellipse fitting and reflection calculations (apart from giving you a less distorted image, which is still a nice thing to have). Highlight processing time can vary depending on the position of the sphere, as we need to decode every image line by line down to the position of the sphere, so a sphere near the top of the image is much faster to process. In any case, hours is definitely too long; if you can reproduce the issue I would love to have the sample and investigate. Finally, there are many algorithms for normal computation; one of the simplest assumes rotational symmetry of the BRDF around the normal and fits a plane (using least squares) to each pixel (see the sketch after this list). Other algorithms are planned (they might take longer, even waaaay longer, but be more robust to bad light sampling); most of the available software is in Python, or worse uses deep learning, which is a bit of a pain to convert to C++ (or worse, requires training). This is pretty close to the PTM (LPTM actually) way of extracting normals: instead of a plane a quadric function is fitted, and the plane is then extracted from it.
  2. Once you have marked the spheres and detected the highlights (Menu->Edit->FindHighlights) you can save the .lp files from Menu->File->Save LPs... The files are saved alongside the images as sphere.lp for the computed directions and sphere1.lp, sphere2.lp, etc. for each reflective sphere (see the .lp sketch after this list). In case it is not working for you, you can share the images and the .relight file and I will investigate the issue, or we can set up a video call (write me at ponchio@gmail.com). Sorry for the messy interface. Federico
  3. The latest version now has rotation implemented. Ciao, Federico
  4. Hi, I think I solved the problem; the latest version should have no problems with spaces and accents (or emoji, should you be so inclined). I tested the code only on Windows 11 (that's all I got). Federico
  5. Hi, I am the developer of RelightLab. I totally forgot to implement the image rotation! You seem to be the first one to notice, or at least to tell me. I will implement it this weekend (look for it in a new release), sorry. The RelightLab browser requires the 'relight' web format: json + .jpg (or deepzoom); .rti files are extremely web unfriendly, and the browser does not support them. If you wish to inspect the .rti, use RTIViewer. The browser is looking in the folder for the info.json file and failing; sorry for the confusing error message. .relight is the RelightLab project file, it's not an RTI (just to be clear). If you need further assistance, just ask. Ciao, Federico
  6. Hi everyone, I could not replicate the issue on Linux (it worked even with emoji!), but before I get someone to test it on Windows, could you tell me exactly how to replicate it? Federico
  7. Hi Noam, I am the coder of RelightLab. I can't really tell what the reason could be, but if you are available we could have a call and try to solve the issue together. Write me at ponchio@gmail.com or just call me on Skype (username: ponchietto). Federico
  8. Sorry, I forgot to submit the draft. I am planning to add this feature to RelightLab (both command line and UI), I am just a bit swamped right now.
  9. Hi, author of RelightLab here. The specular and diffuse renderings you see in openlime are computed on the fly from the normal map and the light direction (plus a shininess parameter for the specular one); they are not maps (see the shading sketch after this list). You are probably thinking of the glossiness map (https://kcoley.github.io/glTF/extensions/2.0/Khronos/KHR_materials_pbrSpecularGlossiness/), which is a totally different approach. That requires fitting some BRDF parameters (albedo, shininess, metallicity, etc.) to the RTI dataset, and I plan to support it in RelightLab in the (hopefully) near future, unless you have a pressing need... Otherwise you might want to describe what the purpose of those specular and diffuse maps would be, and that might help clarify the issue. Federico.
  10. There is also a service that converts it for you: http://visual.ariadne-infrastructure.eu/ You can host it there (directly or using an iframe) if you want, or just download a webpage + data in a zip.
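Below is a minimal sketch of the least-squares plane fit mentioned in post 1: for one pixel, given its intensity under each known light direction, fit the normal minimizing the squared error of intensity ≈ light · normal. This illustrates the standard technique under that assumption, not RelightLab's actual code; Eigen is used for the small linear solve.

```cpp
// Sketch: per-pixel normal from least squares, assuming intensity ~ dot(light, n).
// Not RelightLab's code; Eigen handles the 3x3 solve.
#include <Eigen/Dense>
#include <iostream>
#include <vector>

// One sample per photo: light direction (unit vector) and observed intensity.
struct Sample {
    Eigen::Vector3d light;
    double intensity;
};

// Fit n minimizing sum_i (light_i . n - intensity_i)^2, then normalize.
// The length of the unnormalized solution is proportional to the albedo.
Eigen::Vector3d fitNormal(const std::vector<Sample> &samples) {
    Eigen::Matrix3d A = Eigen::Matrix3d::Zero();
    Eigen::Vector3d b = Eigen::Vector3d::Zero();
    for (const Sample &s : samples) {
        A += s.light * s.light.transpose();  // accumulate normal equations
        b += s.intensity * s.light;
    }
    Eigen::Vector3d n = A.ldlt().solve(b);   // solve the 3x3 system
    return n.normalized();
}

int main() {
    // Hypothetical samples for a single pixel.
    std::vector<Sample> samples = {
        {{0.0, 0.0, 1.0}, 0.9},
        {{0.7, 0.0, 0.7}, 0.6},
        {{0.0, 0.7, 0.7}, 0.6},
        {{-0.7, 0.0, 0.7}, 0.3},
    };
    std::cout << "normal: " << fitNormal(samples).transpose() << "\n";
}
```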
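The .lp files mentioned in post 2 follow the light-position layout common to PTM/RTI tools: a first line with the number of images, then one line per image with the file name and a light direction. Here is a minimal sketch of a reader, assuming that layout; the struct and function names are illustrative, not RelightLab's API.

```cpp
// Sketch of a reader for the common .lp layout:
//   N
//   image_0.jpg lx ly lz
//   ...
// Names are illustrative; this is not RelightLab's API.
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

struct LpEntry {
    std::string image;  // image file name
    double x, y, z;     // light direction
};

std::vector<LpEntry> readLp(const std::string &path) {
    std::ifstream in(path);
    size_t count = 0;
    in >> count;                      // first line: number of images
    std::vector<LpEntry> entries;
    LpEntry e;
    while (entries.size() < count && (in >> e.image >> e.x >> e.y >> e.z))
        entries.push_back(e);         // one "name x y z" line per image
    return entries;
}

int main() {
    for (const LpEntry &e : readLp("sphere.lp"))
        std::cout << e.image << ": " << e.x << " " << e.y << " " << e.z << "\n";
}
```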
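Post 9 says the diffuse and specular renderings are computed on the fly from the normal map, the light direction, and a shininess parameter. A minimal sketch of that kind of per-pixel shading follows (standard Lambertian diffuse plus Blinn-Phong specular); it illustrates the computation the post describes, not openlime's actual shader.

```cpp
// Sketch: per-pixel diffuse and specular from a normal map sample,
// as standard Lambert + Blinn-Phong. Illustrative only, not openlime's shader.
#include <algorithm>
#include <cmath>
#include <iostream>

struct Vec3 { double x, y, z; };

double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 normalize(const Vec3 &v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Lambertian term: clamp(n . l, 0, 1)
double diffuse(const Vec3 &n, const Vec3 &l) {
    return std::max(0.0, dot(n, l));
}

// Blinn-Phong term: clamp(n . h)^shininess, h halfway between light and view.
double specular(const Vec3 &n, const Vec3 &l, const Vec3 &v, double shininess) {
    Vec3 h = normalize({l.x + v.x, l.y + v.y, l.z + v.z});
    return std::pow(std::max(0.0, dot(n, h)), shininess);
}

int main() {
    Vec3 n = normalize({0.1, 0.2, 1.0});  // normal map sample
    Vec3 l = normalize({0.5, 0.5, 1.0});  // interactive light direction
    Vec3 v = {0.0, 0.0, 1.0};             // viewer, straight on
    std::cout << "diffuse: "  << diffuse(n, l) << "\n"
              << "specular: " << specular(n, l, v, 64.0) << "\n";
}
```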