Hello to everybody!
I'm new to this forum, so I thank you all in advance for any suggestion.
On the same day, I carried out two consecutive photogrammetric captures of the same object (an oil painting on canvas), using controlled illumination and the same camera network for both sessions, similar to the one described on the CHI website (i.e. normal landscape shots plus overlapping convergent portrait shots). The images were processed with the same workflow in Photoscan Pro up to the generation of the orthophotos, using only the normal images to avoid unwanted specular reflections, and with the color correction option enabled to minimise exposure differences between the individual orthos. A Digital ColorChecker SG was also captured in the scene to provide a colour reference and to check the quality of the final result against the Metamorfoze guidelines. The aim of my test is to compare the orthorectified images of the painting to check for possible colour variations in the painting.
I exported the orthomosaics of the two models, and neither of them passed the quality check on Delt.ae: the patches of the DCSG were altered and fell outside the Metamorfoze tolerance ranges after the assignment of the ICC profile. For the sake of research, I registered the two orthomosaics and computed their difference in ImageJ2. In theory, the two orthomosaics should be identical (or at least very similar), but the result (below) shows a sort of random blending between the orthos.
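In case it helps anyone reproduce the comparison outside ImageJ2, here is a minimal sketch of the difference step in Python with NumPy. It assumes the two orthomosaics are already registered and cropped to the same pixel grid; the synthetic arrays below just stand in for the exported images (in practice you would load them with an image library such as imageio).

```python
import numpy as np

def diff_metrics(a: np.ndarray, b: np.ndarray):
    """Per-pixel absolute difference and summary stats for two
    registered images of identical shape (H, W, 3), dtype uint8."""
    # Promote to a signed type so the subtraction cannot wrap around.
    d = np.abs(a.astype(np.int16) - b.astype(np.int16))
    return d.astype(np.uint8), float(d.mean()), float(d.max())

# Synthetic stand-ins for the two exported orthomosaics.
rng = np.random.default_rng(0)
ortho1 = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
noise = rng.integers(-5, 6, size=ortho1.shape)
ortho2 = np.clip(ortho1.astype(np.int16) + noise, 0, 255).astype(np.uint8)

diff_img, mean_abs, max_abs = diff_metrics(ortho1, ortho2)
print(f"mean |diff| = {mean_abs:.2f}, max |diff| = {max_abs:.0f}")
```

A mean absolute difference well above sensor noise level, or a spatially structured (patchy) difference image, would point to the blending behaviour rather than capture variation.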
What does Photoscan do when it creates the single orthos?
How does it blend them to create the orthomosaics?
Has any of you ever noticed this?
Given this result, I am wondering whether photogrammetry in general can only record 3D information accurately, making random approximations when forming orthorectified images (or is this just a Photoscan problem?).
Thank you in advance for any suggestion to solutions/literature/processes/hints!