I have introduced ellipse fitting (but still using the old code for the reflection computation). The major axis should point toward the center of the image; it is drawn to verify that the fitting is working... it won't, of course, work if the sphere appears as a circle.
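Roughly, the check looks like this (a minimal sketch using OpenCV's `fitEllipse`, not the actual project code; the handling of the angle convention and the 0.9 tolerance are my placeholder choices here):

```cpp
#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

// Fit an ellipse to the detected sphere contour (needs >= 5 points)
// and check that its major axis points roughly toward the image center.
bool majorAxisPointsToCenter(const std::vector<cv::Point2f> &contour,
                             const cv::Size &imageSize) {
    cv::RotatedRect e = cv::fitEllipse(contour);  // least-squares ellipse fit

    // Direction of the major axis: the box width lies along 'angle'
    // (degrees), so pick angle or angle + 90 depending on the longer side.
    double deg = (e.size.width >= e.size.height) ? e.angle : e.angle + 90.0;
    double rad = deg * CV_PI / 180.0;
    cv::Point2f axis((float)std::cos(rad), (float)std::sin(rad));

    // Direction from the ellipse center to the image center.
    cv::Point2f toCenter(imageSize.width / 2.0f - e.center.x,
                         imageSize.height / 2.0f - e.center.y);
    double len = std::hypot(toCenter.x, toCenter.y);
    if (len < 1.0)  // sphere at the center: axis direction is meaningless
        return true;
    toCenter /= (float)len;

    // |cos| of the angle between the two directions (axis sign is arbitrary).
    double c = std::abs(axis.dot(toCenter));
    return c > 0.9;  // ~25 degrees tolerance, arbitrary threshold
}
```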
The red dot appears when a highlight could not be detected, in place of the green dot that normally marks the highlight.
Yes, removing lens distortion is advised if you have wide lenses, at least until I have figured out all the issues with the ellipse fitting and reflection calculations (apart from giving you a less distorted image, which is still a nice thing to have).
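If you have a calibration for your camera, the undistortion itself is straightforward, e.g. with OpenCV (a sketch; the intrinsics and distortion coefficients below are placeholder values you would get from a calibration tool):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

int main() {
    cv::Mat img = cv::imread("photo.jpg");

    // Intrinsics and distortion coefficients from a prior calibration
    // (placeholder values, NOT real numbers).
    cv::Mat K = (cv::Mat_<double>(3, 3) << 3000, 0, 2000,
                                           0, 3000, 1500,
                                           0,    0,    1);
    cv::Mat dist = (cv::Mat_<double>(1, 5) << -0.2, 0.05, 0, 0, 0);

    cv::Mat undistorted;
    cv::undistort(img, undistorted, K, dist);
    cv::imwrite("photo_undistorted.jpg", undistorted);
    return 0;
}
```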
Highlight processing time can vary with the position of the sphere in the frame: we need to decode each image line by line down to the sphere's position, so a sphere near the top of the image is much faster to process. In any case, hours is definitely too much; if you can reproduce the issue I would love to have the sample and investigate.
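To give an idea of why the vertical position matters: with libjpeg the decoding can stop as soon as enough scanlines have been read, roughly like this (a sketch, not the actual implementation):

```cpp
#include <cstdio>
#include <jpeglib.h>

// Decode only the first 'maxRow' scanlines of a JPEG, then stop.
// Cost is roughly proportional to maxRow, which is why spheres near
// the top of the frame are faster to process.
void decodeTopRows(const char *path, unsigned maxRow) {
    FILE *f = fopen(path, "rb");
    if (!f) return;

    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);

    int stride = cinfo.output_width * cinfo.output_components;
    JSAMPARRAY row = (*cinfo.mem->alloc_sarray)(
        (j_common_ptr)&cinfo, JPOOL_IMAGE, stride, 1);

    while (cinfo.output_scanline < maxRow &&
           cinfo.output_scanline < cinfo.output_height) {
        jpeg_read_scanlines(&cinfo, row, 1);
        // ... look for the highlight in row[0] ...
    }

    jpeg_abort_decompress(&cinfo);  // we did not read to the end
    jpeg_destroy_decompress(&cinfo);
    fclose(f);
}
```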
Finally, there are many algorithms for normal computation. One of the simpler ones assumes rotational symmetry of the BRDF around the normal and fits a plane (using least squares) to the samples of each pixel. Other algorithms are planned (they might take longer, even waaaay longer, but be more robust to bad light sampling); most of the available software is in Python, or worse uses deep learning, which is a bit of a pain to convert to C++ (or worse, requires training).
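The plane-fitting approach boils down to a tiny least-squares solve per pixel, something like this (a sketch with Eigen, assuming light directions and intensities are already extracted, and ignoring outliers such as shadows and clipped highlights that the real code would have to deal with):

```cpp
#include <Eigen/Dense>

// Per-pixel normal from N light directions and the N observed
// intensities: fit I ~= n . l in the least-squares sense, then
// normalize. Assumes a BRDF roughly symmetric around the normal.
Eigen::Vector3d fitNormal(const Eigen::MatrixXd &lights,          // N x 3, unit rows
                          const Eigen::VectorXd &intensities) {   // N samples
    // Solve lights * n = intensities for n (overdetermined, N >= 3).
    Eigen::Vector3d n = lights.colPivHouseholderQr().solve(intensities);
    return n.normalized();
}
```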
This is pretty close to the PTM (LPTM, actually) way of extracting normals: instead of a plane, a quadratic function is fitted, and the normal is then extracted from it.
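For comparison, the quadric version: fit the six PTM coefficients per pixel (the same kind of least squares, just six unknowns), then take the normal from the luminance maximum. A sketch of the extraction step, using the closed-form maximum from the original PTM paper (Malzbender et al.):

```cpp
#include <Eigen/Dense>
#include <cmath>

// PTM-style normal: given the six fitted coefficients of
//   I(u, v) = a0*u^2 + a1*v^2 + a2*u*v + a3*u + a4*v + a5
// (u, v = projected light direction), the luminance maximum
// gives the normal direction.
Eigen::Vector3d ptmNormal(const double a[6]) {
    double det = 4.0 * a[0] * a[1] - a[2] * a[2];
    if (std::abs(det) < 1e-12)
        return Eigen::Vector3d(0, 0, 1);  // degenerate fit, bail out

    // Stationary point of the biquadratic (dI/du = dI/dv = 0).
    double u = (a[2] * a[4] - 2.0 * a[1] * a[3]) / det;
    double v = (a[2] * a[3] - 2.0 * a[0] * a[4]) / det;

    // Clamp inside the unit disc so the z component stays real.
    double r2 = u * u + v * v;
    if (r2 > 1.0) { u /= std::sqrt(r2); v /= std::sqrt(r2); r2 = 1.0; }
    return Eigen::Vector3d(u, v, std::sqrt(1.0 - r2));
}
```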