Very true - and believe it or not, I'm actually involved in large-scale computing professionally. In fact, I wrote a paper a few years back on using the Procrustes method to match two sets of measurements of the electromagnetic field to the instrument orientations - the same problem.
However, registering these images is an optimization problem of finding 4 numbers to describe the scale, shift and rotation between a set of 4 or more images with hundreds to thousands of stars in each. This is not an ill-posed problem, and regardless of which method you choose to arrive at the final set of numbers, I would argue they should match to several decimal places, or you've done something wrong! The stacking and noise-rejection process is where the differences are going to come from.
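(If anyone is curious, here's a rough Python/NumPy sketch of the kind of fit I mean - the function name and the simple closed-form Procrustes solution are just for illustration, and it assumes the star lists are already matched one-to-one; real registration software also has to do the matching and reject bad pairs.)

[code]
import numpy as np

def fit_similarity(ref, tgt):
    """Closed-form least-squares fit of the four registration numbers
    (scale, rotation angle, x/y shift) mapping ref onto tgt.

    ref, tgt : (N, 2) arrays of matched star centroids (illustrative only).
    Model    : tgt ~= scale * R(theta) @ ref + shift
    """
    ref = np.asarray(ref, dtype=float)
    tgt = np.asarray(tgt, dtype=float)

    # Centre both point sets so the shift decouples from scale/rotation.
    ref_c = ref - ref.mean(axis=0)
    tgt_c = tgt - tgt.mean(axis=0)

    # SVD of the cross-covariance gives the optimal rotation (Procrustes).
    U, S, Vt = np.linalg.svd(ref_c.T @ tgt_c)
    d = np.ones(2)
    if np.linalg.det(Vt.T @ U.T) < 0:   # guard against an accidental reflection
        d[-1] = -1.0
    R = Vt.T @ np.diag(d) @ U.T

    scale = (S * d).sum() / (ref_c ** 2).sum()
    shift = tgt.mean(axis=0) - scale * R @ ref.mean(axis=0)
    theta = np.arctan2(R[1, 0], R[0, 0])
    return scale, theta, shift
[/code]

Fed the same matched star lists, any sensible implementation of this fit should reproduce those four numbers to many decimal places - which is my point above.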
Still - I really didn't think it necessary to go down the rabbit hole of floating-point maths in this case. But you are completely correct - although I would argue it's a second-order effect here.
Cheers,
Andrew.
Quote:
Originally Posted by naskies
Hi Andrew,
I agree that the maths behind Lanczos resampling is unambiguous, but I'm not sure that I agree with your assertion.
Digital calculations using fixed-precision floating point are not numerically stable, e.g. adding and multiplying the same numbers in different orders will often give slightly different results (unlike in pure maths). The 0.04 px difference in FWHMs reported in the opening post is conceivably within rounding errors across interpolation and stacking (as Rick points out).
A programmer would typically expect floating point variations due to hardware, programming language, and even operating systems. In fact, this is such a huge problem that on large scientific computing projects there is often at least one computer scientist/programmer whose job is just to deal with floating point precision issues.
Also, there is flexibility in Lanczos implementation in the way that boundary cases are handled, e.g. clipping artefacts and handling of constant input signals.
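(For anyone following along who hasn't bumped into this before, here's a tiny Python example of the order-dependence naskies describes - the numbers are arbitrary, but the behaviour is standard IEEE-754 double precision.)

[code]
a, b, c = 0.1, 0.2, 0.3

# The same three numbers summed in two mathematically equivalent orders:
print((a + b) + c)                  # 0.6000000000000001
print(a + (b + c))                  # 0.6
print((a + b) + c == a + (b + c))   # False - float addition isn't associative
[/code]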