
All Activity


  3. I have introduced ellipse fitting (but still using the old code for the reflection computation). The major axis should point toward the center of the image (this is drawn to verify that the fitting is working; it won't, of course, work if the sphere appears as a circle). The red dot appears when a highlight could not be detected, in place of the regular green dot placed on the highlight. Yes, removing lens distortion is advised if you have wide lenses, at least until I have figured out all the issues with the ellipse fitting and reflection calculations (apart from giving you a less distorted image, which is still a nice thing to have). Highlight processing time can vary depending on the position of the sphere, as we need to decode every image line by line down to the sphere's position, so a sphere near the top of the frame is much faster to process. In any case, hours is definitely too long; if you can reproduce the issue I would love to have the sample so I can investigate. Finally, there are many algorithms for normal computation: one of the simpler ones assumes rotational symmetry of the BRDF around the normal and fits a plane (using least squares) to each pixel's per-light intensities (see the sketch below). Other algorithms are planned; they might take longer (even waaaay longer) but be more robust to bad light sampling. Most of the available software is in Python, or worse uses deep learning, which is a bit of a pain to convert to C++ (or worse, requires training). This is pretty close to the PTM (LPTM, actually) way of extracting normals: instead of a plane, a quadratic function is fitted, and then the plane is extracted.
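For readers wondering what that least-squares fit looks like in practice, here is a minimal sketch of the classic Lambertian formulation (my illustration, not RelightLab's actual code): each pixel's intensities across the n lights are modeled as a linear function of the light directions, and the normal is the normalized solution.

```python
import numpy as np

def estimate_normals(intensities, light_dirs):
    """Per-pixel least-squares normal estimation (Lambertian assumption).

    intensities: (n_lights, H, W) grayscale images, one per light.
    light_dirs:  (n_lights, 3) unit light directions, e.g. from an .lp file.
    Returns (H, W, 3) unit normals.
    """
    n, h, w = intensities.shape
    I = intensities.reshape(n, -1)          # stack pixels: (n_lights, H*W)
    # Solve light_dirs @ g = I for all pixels at once via the pseudoinverse.
    g = np.linalg.pinv(light_dirs) @ I      # (3, H*W), albedo-scaled normals
    norms = np.linalg.norm(g, axis=0)
    return (g / np.maximum(norms, 1e-8)).T.reshape(h, w, 3)
```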
  4. Once you have marked the spheres and detected the highlights (Menu->Edit->FindHighlights) you can save the .lp files from Menu->File->Save LPs... The files are saved alongside the images, as sphere.lp for the computed directions and sphere1.lp, sphere2.lp, etc. for each reflective sphere (the .lp layout is sketched below). In case it is not working for you, you can share the images and the .relight project file and I will investigate the issue; otherwise we can set up a video call (write me at ponchio@gmail.com). Sorry for the messy interface. Federico
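For anyone who has not seen one, an .lp file is plain text in the format used by RTIBuilder-style tools: the first line is the image count, and each following line gives an image name and the x, y, z components of its light direction. The file names and values below are made up for illustration.

```
3
image_001.jpg 0.1132 0.6911 0.7139
image_002.jpg -0.2274 0.5611 0.7959
image_003.jpg 0.4518 -0.3311 0.8284
```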
  5. What funny timing! I had just checked last night, as I am putting together a parts and instructions list for an easy, cheap RTI dome for classroom use, and this would make the whole process much easier. When I tried again yesterday, I could neither save nor load calibration files using the buttons, nor is it clear what the process is.
  6. I have the same question that was posted by seanwinslow last summer, so just reposting here to see if folks have suggestions. For dome-based RTI, is it possible to create a light position (LP) file in Relight and then use that file for processing subsequent RTI image sets in Relight?
  7. In case people in the thread are interested, we are the team behind RTI-dome.com, and we've been building domes for 4 years now, shipped all around the world. We aim to provide portable, autonomous, easy-to-use, robust and versatile RTI domes. The project started as a weekend project among friends in a fablab at the Museum of Natural History in Paris, and we ended up creating a small company to support researchers everywhere and make this imaging technique more accessible. We build domes of varying sizes (30 cm, 50 cm and 90 cm are classics, but we can fit other needs) with lighting options like RGB, white light, UV/IR and other special needs, for prices starting in the 2k€ range. Don't hesitate to contact us through our website to discuss your project. Best!
  8. Hi Annalisa, Using a red carpet and CDs for GCPs sounds innovative! Just ensure precise measurement, visibility, and stability for accurate results. Best of luck with your project! Best regards, John Ethan
  9. Ian, that's brilliant, thanks for sharing.
  10. Here is the image of my stitched subject (quality very low due to upload limits but it gives you an idea of what my post-processing stitching looks like). There are really no seams present and the normals are quite accurate compared to the actual geometry of the timber surface all things considered. If anyone copies/references this please credit: Ian Dunshee, East Carolina University Program in Maritime Studies. Thanks!
  11. I think there are several possibilities that might give you a usable product:
1. If the RTI data collection is very controlled and with a flat subject (as is mentioned by Carla Schroer in the post Dave linked) then it might work. I know that Klaus Wagensonner at Yale (see https://www.academia.edu/45614487/RTI_Stitching) has successfully done this and we had a brief email conversation about it. Basically he suggested using a dome or dome-skeleton setup to capture full overlapping RTI sequences for each section of the object (i.e. capture all light angles for one section without moving the camera, then move on to the next), keeping the dome orientation and light positions exactly the same between image sequences. He then uses PTGui Pro software to create a template for aligning and merging all of the images (using tie points between the pictures) to combine those with the same lighting angle together. After that he just repeats using that template for the other image groups to get a standard-looking RTI dataset. The image stitching could probably be done with other software like Photoshop; it would just take longer. Lastly, he processes the RTI with the light position data of the dome, not using the highlight method. That's at least my understanding of his process; I still see some potential issues with it, particularly regarding the light distances changing across the scene, but it seems to have worked for him. I would encourage you to read through the linked presentation and maybe reach out to him!
2. I think the above could be repeated using the highlight method, but you would have to have some way to get the lighting position in the exact same place for each image sequence, which seems very difficult. I don't know the size of your object, but maybe a much larger dome framework with enough room inside to change the camera position between sequences while keeping the light positions the same. But this might only be feasible in a macro photography setting... or with a gigantic light, a polished bowling ball, and an old playground climbing dome!
3. Lastly, I personally have had good experiences just processing RTI normals images (highlight method) in segments with 20% overlap or more and then aligning them in Photoshop (a programmatic alternative is sketched below). Taking that many RTI sequences of a large object can be painstaking (and it was), but the alignment could also be done manually if you wanted to avoid the extra image sequences needed to achieve the overlap. The trick was again keeping the lighting positions the same, but for me this has been pretty forgiving considering I did it free-hand. I had the camera on rolling scaffolding, taking each image sequence with the same light positions, and moving the scaffolding for each new section. I've attached an image of one of my subjects. It is a 4-5 ft. timber that I did in 5 sections. I will say it is not perfect, as there are tiny inconsistencies in the normals that don't line up. However, my intention wasn't to use the normal data for anything other than visualization, and I don't think it would work to create an interactive RTI in the viewer, but then again I haven't tried. My light position accuracy between segments was good enough to get a good normals image but was likely not precise enough to merge the images together before processing like Klaus. Hopefully this at least gives you ideas!
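As an aside, the segment alignment step described in option 3 could also be scripted instead of done in Photoshop. The sketch below is my own illustration, not part of Ian's workflow: it uses OpenCV feature matching and a RANSAC homography to register two overlapping normal-map renders (file names are placeholders). Note that warping only repositions pixels; the normal vectors encoded in the colors are not re-oriented, so this suits visualization rather than metric use.

```python
import cv2
import numpy as np

img_a = cv2.imread("segment_a_normals.png")   # reference segment
img_b = cv2.imread("segment_b_normals.png")   # segment to align

# Detect and match ORB features in the ~20% overlap region.
orb = cv2.ORB_create(4000)
kp_a, des_a = orb.detectAndCompute(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY), None)
kp_b, des_b = orb.detectAndCompute(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY), None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]

# Robustly estimate the transform mapping segment B into A's frame.
pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(pts_b, pts_a, cv2.RANSAC, 5.0)

# Warp B onto an enlarged canvas; blending the seam is left out for brevity.
h, w = img_a.shape[:2]
canvas = cv2.warpPerspective(img_b, H, (w * 2, h))
canvas[:h, :w] = img_a
```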
  12. Hello! I've recently started using the 2024.01 RelightLab release and have noticed a few differences from the 2023.02 release I was using previously.
- The sphere outlines in my scenes are generally circular, albeit not perfectly, due to camera distortion. Firstly, is it recommended to correct lens distortion in another program (like Photoshop) before processing data in Relight? Secondly, in the 2024.01 release, when I outline the spheres using fewer than four points, the inner circle in which highlights are detected is much larger, but when I define the sphere with a large number of points, it becomes significantly smaller (see attached screenshot). Is there a reason for this? This did not happen in the 2023.02 release.
- Also, what is the significance of the 90-degree axis marks in the sphere defined with many points ("SmallSphereDetail2"), and what does the red dot in the upper left corner of the "LargeSphere" image represent?
- All of my datasets are of similar size and quality, but I've noticed that for some, highlight detection progresses rather quickly (a few minutes) while for others it can take hours. What might be causing this difference?
- Lastly, when using the "Export Normals" function after highlights are detected, what algorithm/process is Relight using to create the normals image? At present, the only selectable option is "least squares", but what exactly does that mean? Is it PTM? HSH? Something else?
In case it is relevant, I've been experiencing these issues both on a Windows 10 and a Windows 11 computer. Apologies for my ignorance, but I greatly appreciate any help or insight anyone can offer!
  13. Hi Duilio, Can you provide any more details on how you separated out the coefficients in Photoshop? The article didn't go into much detail. I'd be curious to try and replicate your results with my own data. Thanks for your help!
  14. As you suspected, you definitely don't want to stitch images before processing. One thread I've found is: https://forums.culturalheritageimaging.org/topic/542-stitching-together-multiple-rtis/ I seem to recall a discussion on merging RTIs of columns, but I can't spot that thread right now. As well as the long-standing RTI generator, there is the new 'Relight Lab' under development; I would suggest you enquire there (there is a sub-forum here as well). Dave
  15. I'm researching ways to stitch multiple RTIs of one artifact. I haven't found much info on the web. I would like to have a very detailed scan of a larger artifact, with more detail than I could get with a FOV that would fit the whole artifact in one photo. I've seen some mention of stitching photos, which I'm very familiar with, but how would that fit into producing RTIs? If that route is feasible, would one first stitch the raw photos and then generate the RTI? I see a lot of issues with that, as the light direction and angle would not be the same across frames. Maybe stitching the final RTIs would be a way to go, but I don't see any tools that would help. Any suggestions?
  16. Hi Annalisa, first, a few quick thoughts.
GCP targets can be coded or un-coded; I guess from your question you're thinking about un-coded. The first thing I would suggest is that you need to work from your proposed flight(s) and know the GSD and the accuracy you aspire to; that will help guide the size of target you plan to lay (a worked example is sketched below).
I don't know what background or substrate you plan to lay your targets on, but from the air I would be concerned that the red carpet might not be that distinct. You mention using CDs - whilst the playing surface of a CD can 'blink' when, say, the sun strikes it, at other times it can be quite dark from other directions; the un-printed white top surface of a printable CD/DVD is, though, visible against a dark background. A very significant proportion of targets used for UAS/drone aerial photogrammetry use black and white for maximum contrast, in a chequerboard pattern of two quarters white and two quarters black, with the reference point at the central intersection - that type of target is distinctive and also draws the eye in when working with them on-screen. (If you were using coded targets, then that centre can be found automatically to pretty good accuracy.) If you want to use carpet as your target material, could I suggest investing in cans of black and white paint and a roll of masking tape?
In terms of logistics, you also need to remember that as well as a suitably-distributed set of GCPs, you also need a similar number of check points in your area that won't be used to constrain the resultant model, but will be used to assess the accuracy of the survey/model.
As this is an issue wider than (just) cultural heritage, I might also suggest reading/searching/joining groups on UAS/drone aerial photogrammetry and photogrammetry more generally. Three that I could suggest on Facebook are: Drone Photogrammetry; Drone mapping, 3D modelling and GIS; Agisoft Metashape (not sure if you're using Metashape, but target info will transcend the software used). (Disclosure: I'm admin/moderator for those groups.) Dave
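To make the GSD point concrete, here is a small worked example (my addition; the camera numbers are made up, and the 5-10x GSD target sizing is a common rule of thumb rather than a hard standard):

```python
def ground_sample_distance(sensor_width_mm, focal_length_mm,
                           altitude_m, image_width_px):
    """GSD in metres per pixel for a nadir photograph."""
    return (sensor_width_mm / 1000.0) * altitude_m / (
        (focal_length_mm / 1000.0) * image_width_px)

# Example: 13.2 mm wide sensor, 8.8 mm lens, 4000 px wide images, 60 m altitude.
gsd = ground_sample_distance(13.2, 8.8, 60.0, 4000)
print(f"GSD: {gsd * 100:.2f} cm/px")   # ~2.25 cm/px
# At 5-10x the GSD, a target should then be at least ~0.11-0.23 m across.
```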
  17. Hello everyone! I'm a PhD student in the field of geomorphology. I'm currently working on a photogrammetry project and I could use some advice on obtaining Ground Control Points (GCPs) for my drone survey. I was considering using a red carpet and placing CDs on it to create GCPs. What are your thoughts? Any tips for me? All the best, Annalisa
  18. Dear Federico, Thank you so much for your quick reaction and for implementing rotation. Thank you for the new release; I will still be able to use it at the excavation site this season. Great! I do use RTIViewer for the RTIs. I have never thought about posting them online, but I know that this season my boss wants to do it. So thank you for this solution in the program, as well as your explanation here of how it works. All the best, Kamila
  19. Thank you, Federico!!!
  20. The latest version now has rotation implemented. Ciao, Federico
  21. Hi, I think I solved the problem; the latest version should have no problems with spaces and accents (or emoji, should you be so inclined). I tested the code only on Windows 11 (that's all I got). Federico
  22. Hi, I am the developer of RelightLab. I totally forgot to implement the image rotation! You seem to be the first one to notice, or at least the first to tell me. I will implement it this weekend, sorry (look for it in a new release). The RelightLab browser requires the 'relight' web format: JSON + .jpg (or deepzoom); .rti files are extremely web-unfriendly, and the browser does not support them (an example of the web layout is sketched below). If you wish to inspect the .rti, use RTIViewer. The browser is looking in the folder for the info.json file and failing; sorry for the confusing error message. .relight is the RelightLab project file; it's not an RTI (just to be clear). If you need further assistance, just ask. Ciao, Federico
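For orientation, a 'relight' web export is a folder the browser can read directly. The layout below reflects my reading of the format (an info.json plus JPEG coefficient planes); the exact file names and plane count depend on the chosen basis, so treat this as illustrative:

```
my_rti/
├── info.json      # basis/material parameters; the first file the viewer looks for
├── plane_0.jpg    # coefficient planes encoded as JPEGs
├── plane_1.jpg
└── ...
```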
  23. Hello. I wonder if any of you face the same problem as I do: RelightLab does not react at all to the image rotation commands. Another issue is that after creating an .rti file (HSH27), the RelightLab browser cannot find it. It is displayed well in the old RTIViewer, but here I get a localhost message: "Could not detect an RTI here". Not a big issue, contrary to the first one, but I wonder how it works. I understand that the browser is designed for the .relight files only, as these are intended for the web, but why does it search for an .rti then, and why, although the file is there, can it not detect it? I run RelightLab on Windows 11 Home. I would be most grateful for any clues, particularly to this "Rotate all images" issue. Greetings from the middle of a desert (field season in Egypt)! Kamila
  24. Hello. I have faced the same problem of crashing after the "Build" command, but in my case it was a space in the name of a folder somewhere in the file path. Spaces were not allowed in RTI Builder, and it seems that restriction still applies here.
  25. Good day to everyone. I am very new to photogrammetry, so this will be an overview of my setup and process in support of a field dig in England in 2024, with a request for input from the group.
Some background on me: I am a retired civilian from the U.S. Air Force with a degree in Engineering Technology from long ago. I have supported archeological field work in England in 2022 and 2023 as a digger and junior assistant supervisor. For 2024 I am moving up in the world to support digital documentation of the dig. One part of that will be photogrammetry of small finds / trench sections and possibly documenting some standing buildings.
My setup is built around the Nikon D850 with a 200mm macro lens, a 40mm macro lens, a 50mm lens, and a 24mm lens; all are prime lenses. I will also have a zoom lens and a Nikon D70S for general photography. I have a Cognisys macro rail and turntable. The macro rail will support focus stacking for very small items and the turntable will be for anything that can fit on it. I have attached a Syrp turntable top to the Cognisys turntable, which gives me a nine-inch table with a very high weight limit. The turntable and macro rail are controlled by the Cognisys StackShot3 controller, which is tied to my laptop running Helicon Remote and Helicon Focus stacking software. Turntable shooting will be done in tethered mode. I have a folding light box with MISO black background fabric and MISO black backgrounds for the turntable, LED lighting from Lume Cube (two cubes and two panels), two GODOX TTL600 flash units, and a GODOX ring light. I also have a polarizing filter for the ring light and all the lenses. Then there are two tripods, a boom arm, and two tabletop tripods for the LED lights. I have two 2-terabyte SSDs and two 5-terabyte standard hard drives. I have a backup turntable from Edelkrone. Then there are all the interconnect cables for flash, shutter control, and turntable control. Battery charging cables and power bricks are all rated for 110 and 220 volts. I have a set of CHI scales on order. Then there is the color checker card for white balance and color correction.
I will be running Metashape software on a MacBook Pro M3 Max with 198GB of memory. The laptop will also be running the Helicon software. I will be using DLN to document the work. I will also be using Lightroom Classic for color correction.
Over the next several months I will be exercising the setup and working out my workflow. I expect I will have one workflow for basic turntable work, one for focus stacking with the turntable, one for large objects that will not fit on the turntable, one for trench section documentation, and finally one for structure documentation.
Through all the research on assembling this setup, there are a couple of questions that I still have:
1. Is there a recommended file naming convention for the RAW images, then the images for import into Metashape, and finally the Metashape outputs?
2. What are the recommended workflows? I have a lot of information on general steps, but I suspect I will be discovering a lot as I go along.
I look forward to comments and input. Chuck Mason
  26. Good morning, I'm working with RelightLab to produce models of archeological objects. I'd like to share the results online with other colleagues, especially because some of them cannot use RTIViewer on their computers and, for this reason, cannot see the results. I think online sharing can be the solution. I saw the "OpenLime" functionality that can be embedded with .rti and .ptm files. I think I've produced files that contain OpenLime, because I ticked the option "Add OpenLime Viewer" (see the image below), but I cannot understand how I can use it to share projects. Do I have to download an application? Thank you so much. Laura
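A hedged pointer while waiting for an answer from the developers: my reading of the "Add OpenLime Viewer" option is that it writes a small HTML viewer (e.g. an index.html) alongside the exported data, so no separate application is needed; the folder just has to be served over HTTP. If that assumption holds, a quick local test could look like this, and sharing then means uploading the same folder to any static web host:

```
# Run from the exported folder (the one containing the viewer page and data):
python -m http.server 8080
# Then open http://localhost:8080 in a browser. Opening the HTML file
# directly from disk usually fails, since browsers block local file requests.
```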
  27. There is not yet a manual, as the software is still under development. There is a short video that explains the basic functionality. You can see it here: Not all functionality is implemented - for example Set Scale is not implemented. There is some information about the OpenLime viewer with RTI on this page: https://vcg.isti.cnr.it/relight/ Post your questions here and we will try to answer them. Carla