26-08-2013, 10:34 PM - alocky (PI popular people's front)
I should point out that if you can't exactly reproduce the results using the same algorithm in a different program, then one of the pieces of software has a bug. There is no flexibility in the implementation.
There is unlikely to be any difference in the actual bit of code (I still call them subroutines!) doing either the Lanczos resampling or the bilinear spline: in fact, I'll bet they've been lifted straight out of LAPACK, the Naval Surface Warfare Center library, or Numerical Recipes. The maths should be identical and reproducible. As for stacking, it's not a very complicated algorithm. However, I wonder if gains might be made by weighting each subframe's contribution to a final pixel by its FWHM, or some similar data-quality metric?
The real gains from one piece of software to the next will be in optimising the rejection criteria in the stack for your data. The optimal parameters will depend on the type of noise, and its particular realisation, that you're trying to stack out.
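(For the curious, both halves of this are easy to sketch. Below is the textbook Lanczos-a kernel that any correct implementation must reproduce, plus a toy FWHM-weighted mean stack along the lines suggested above. A minimal Python/NumPy illustration only, not code from any package discussed here; the 1/FWHM^2 weighting is an assumed choice of quality metric.)
[CODE]
import numpy as np

def lanczos_kernel(x, a=3):
    """Standard Lanczos-a kernel: sinc(x)*sinc(x/a) for |x| < a, else 0.
    Two correct implementations must agree on these values exactly,
    up to floating-point rounding."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def fwhm_weighted_stack(subs, fwhms):
    """Mean stack where each registered subframe is weighted by 1/FWHM**2,
    so sharper frames contribute more to every output pixel (the exact
    weighting scheme is an assumption for illustration)."""
    subs = np.asarray(subs, dtype=float)            # shape: (n_subs, ny, nx)
    w = 1.0 / np.square(np.asarray(fwhms, dtype=float))
    w /= w.sum()                                    # normalise weights to 1
    return np.tensordot(w, subs, axes=1)            # weighted sum over subs
[/CODE]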
cheers,
Andrew.

27-08-2013, 07:54 AM - RickS (PI cult recruiter)
Quote: Originally Posted by alocky (post quoted in full above)
G'day Andrew,
Interpolation is only part of the registration process, so there's plenty of opportunity for variation in results even with Lanczos-3 used in both cases. CCDStack and PI use different methods for star detection and matching.
There's some interesting stuff in the PI documentation for StarAlignment, including a comparison of the different interpolation algorithms.
The idea of weighting subs by a quality metric is a good one. Integration in PI normally applies a weighting based on the estimated S/N ratio of each sub. There are other options, of course, but star quality is currently not one of them.
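(A common concrete realisation of noise-based weighting is an inverse-variance combine, sketched below in Python/NumPy. This is an assumed stand-in only: PI's actual S/N-based weighting and noise evaluation in ImageIntegration are more sophisticated.)
[CODE]
import numpy as np

def inverse_variance_stack(subs):
    """Combine registered, normalised subs with weights proportional to
    1/sigma**2. Each frame's noise is estimated with a robust scaled
    median-absolute-deviation; a hedged sketch, not PI's actual scheme."""
    subs = np.asarray(subs, dtype=float)
    sigma = np.array([1.4826 * np.median(np.abs(s - np.median(s)))
                      for s in subs])               # robust per-frame noise
    w = 1.0 / np.square(sigma)
    w /= w.sum()                                    # normalise weights to 1
    return np.tensordot(w, subs, axes=1)
[/CODE]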
Cheers,
Rick.

27-08-2013, 10:32 AM - naskies (Registered User)
Quote: Originally Posted by alocky (on the maths of Lanczos resampling being identical and reproducible, above)
Hi Andrew,
I agree that the maths behind Lanczos resampling is unambiguous, but I'm not sure I agree with your assertion.
Fixed-precision floating-point arithmetic is neither associative nor distributive: adding and multiplying the same numbers in different orders will often give slightly different results (unlike in pure maths). The 0.04 px difference in FWHMs reported in the opening post is conceivably within the accumulated rounding error across interpolation and stacking (as Rick points out).
A programmer would typically expect floating-point variations across hardware, programming languages, and even operating systems. In fact, this is such a pervasive problem that large scientific computing projects often have at least one computer scientist/programmer whose job is just to deal with floating-point precision issues.
Also, there is flexibility in a Lanczos implementation in the way boundary cases are handled, e.g. clipping artefacts and the treatment of constant input signals.
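(The non-associativity is trivial to demonstrate with IEEE 754 doubles; here in Python:)
[CODE]
a, b, c = 0.1, 0.2, 0.3

print((a + b) + c)                 # 0.6000000000000001
print(a + (b + c))                 # 0.6
print((a + b) + c == a + (b + c))  # False: the order of addition matters
[/CODE]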

27-08-2013, 11:33 AM - alocky (PI popular people's front)
Very true - and believe it or not, I'm actually involved in large-scale computing professionally. In fact, I wrote a paper a few years back on using the Procrustes method to match two sets of measurements of the electromagnetic field to the instrument orientations - same problem.
However, registering these images is an optimisation problem of finding four numbers to describe the scale, shift and rotation between a set of four or more images with hundreds to thousands of stars in each. This is not an ill-posed problem, and regardless of the method you choose to arrive at the final set of numbers, I would argue they should match to several decimal places, or you've done something wrong! The stacking and noise-rejection process is where the differences are going to come from.
Still - I really didn't think it necessary to go down the rabbit hole of floating-point maths in this case. But you are completely correct - although I would argue it's a second-order effect here.
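(To make the "four numbers" concrete: given matched star lists, the least-squares similarity transform has a closed-form orthogonal Procrustes solution, sketched below in Python/NumPy. This is the generic textbook construction, often attributed to Umeyama, not the code of any package in this thread. Fed the same matched lists, any two correct implementations should agree on scale, rotation and shift to near machine precision, which is the point being made above.)
[CODE]
import numpy as np

def fit_similarity(src, dst):
    """Least-squares fit of dst ~ s * R @ src + t for matched 2-D points:
    four parameters in total (scale s, rotation angle in R, shifts t).
    Closed-form orthogonal Procrustes solution."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d                       # centred coordinates
    U, S, Vt = np.linalg.svd(B.T @ A)                   # cross-covariance SVD
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))   # guard vs reflection
    R = U @ np.diag([1.0, d]) @ Vt                      # best-fit rotation
    s = (S @ [1.0, d]) / np.square(A).sum()             # best-fit scale
    t = mu_d - s * (R @ mu_s)                           # best-fit shift
    return s, R, t
[/CODE]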
Cheers,
Andrew.
Quote: Originally Posted by naskies (post quoted in full above)

27-08-2013, 08:01 PM - Mostly harmless...
The other elephant in the room, of course, is algorithm availability. PI has consistently been adding new options in that regard over the years. I haven't been keeping up with the other packages, but I liked where PI was headed a few years ago.
But it would be a brave man who said that starting with the best tools necessarily assured the best piece of art at the end of the process (err, science!)

28-08-2013, 10:52 AM - RickS (PI cult recruiter)
Quote: Originally Posted by RickS
I ran a test on 40 x 600 sec uncalibrated Luminance images (NGC 7424 @ 2760mm focal length).
CCDStack: registered with CCDIS/High Precision, interpolated with Lanczos 36, followed by a Mean stack, no normalization, no rejection.
PI: Star alignment with default interpolation (Lanczos 3) followed by integration with no normalization or rejection.
Maxim: Auto star match registration followed by Average combine.
CCDInspector FWHM for integrated result:
PI: 2.71 arcsec
CCDStack: 2.72 arcsec
Maxim: 2.86 arcsec
Looks like a tie between CCDStack and PixInsight with Maxim in second place.
I also tried to do a registration with RegiStar but couldn't get it to work. I have used it successfully before so I'll try again later and see if I can figure out what I'm doing wrong.
Cheers,
Rick.
I still can't get RegiStar to work.
I went a bit further analysing the star shapes in my original integrations by doing a PSF estimate for the same ten stars in each. PI was consistently better than CCDStack on all ten stars, but only by 2-3%. Maxim was consistently worse by 14-15%.
I also did a full calibration and integration run on the same data with PI and CCDStack. In PI, I followed my normal procedure of tweaking the rejection algorithm and parameters to get as close as possible to the maximum S/N (determined by an integration with no rejection) while checking by visual inspection that the rejection was adequate. Linear Fit gave the best result, as expected for a large number of subs. In CCDStack I used the STD Sigma-reject algorithm and tweaked parameters to get rejection percentages similar to PI's. A noise estimate on the two integrations showed very little difference between them (~1% advantage to PI). Visually, there was no clear difference either.
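(At its core, the rejection step described here is iterative sigma clipping; a minimal per-pixel version is sketched below in Python/NumPy. The real implementations, PI's Linear Fit clipping and CCDStack's STD Sigma-reject, normalise frames first and differ in detail; the thresholds and iteration count below are assumptions.)
[CODE]
import numpy as np

def sigma_clip_stack(subs, sigma_low=4.0, sigma_high=3.0, iters=3):
    """Mean stack with iterative per-pixel sigma rejection across subs.
    Pixels more than sigma_low below or sigma_high above the running
    stack mean are excluded and the statistics recomputed."""
    data = np.asarray(subs, dtype=float)        # shape: (n_subs, ny, nx)
    keep = np.ones(data.shape, dtype=bool)
    for _ in range(iters):
        stack = np.where(keep, data, np.nan)
        mu = np.nanmean(stack, axis=0)
        sd = np.nanstd(stack, axis=0)
        keep &= (data >= mu - sigma_low * sd) & (data <= mu + sigma_high * sd)
    return np.nanmean(np.where(keep, data, np.nan), axis=0)
[/CODE]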
So far there's no significant advantage to either package. If I get a chance I might try playing around with some smaller/poorer data sets.
For me there is one significant benefit to using PI: I get better calibration by using the overscan region of my camera to counter bias drift. AFAIK, PI is the only amateur package that supports this. For the purposes of the comparison above I didn't use overscan.
Cheers,
Rick.

28-08-2013, 05:22 PM - gregbradley (Registered User)
Hi Rick,
Can you elaborate on using the overscan region of the chip for countering bias drift?
Is this for flats?
Greg.

28-08-2013, 06:56 PM - RickS (PI cult recruiter)
Quote: Originally Posted by gregbradley (question above)
G'day Greg,
I have noticed that at least two of the cameras I have used show a significant amount of bias variation over time. The latest, an Apogee Alta U16M, can show a difference between bias frames of as much as 16 ADU (with the temperature accurately regulated). That corresponds to 40 e-, or five times the camera's read noise, so it's a significant effect.
My attempt to compensate for this is to use the overscan area for calibration, which appears to be a fairly standard technique in professional astronomy. The overscan region (or regions... some sensors have more than one) is an area of the readout that receives no light, so it can be used as a reference for calibrating all types of frames.
The way it works in PI is that the median value of the overscan area is calculated for each frame, and that pedestal is then subtracted from every pixel value. This is done for bias frames, dark frames, flat frames and light frames.
I don't see a lot of difference on bright targets, or when there's a lot of sky glow, but for narrowband images and dim targets at dark sites it makes a measurable difference.
Most (all?) sensors have an overscan region, but not all camera drivers give you access to it. I know that Apogee and FLI support this. My old SX and QHY cameras didn't, and I don't think the STL11K does either, at least not in Maxim. Access to the overscan area is also useful if you ever feel the urge to calculate a Photon Transfer Curve for your camera.
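(A bare-bones version of the pedestal subtraction described above might look like the following Python/NumPy sketch. The overscan column range is a made-up example; the real one depends on the sensor and driver.)
[CODE]
import numpy as np

def overscan_correct(frame, overscan=np.s_[:, 4096:4140]):
    """Subtract the per-frame overscan pedestal: take the median of the
    never-illuminated overscan area and remove it from every pixel.
    Applied identically to bias, dark, flat and light frames.
    The column range 4096:4140 is purely illustrative."""
    frame = np.asarray(frame, dtype=float)
    pedestal = np.median(frame[overscan])
    return frame - pedestal
[/CODE]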
Cheers,
Rick.

30-08-2013, 10:44 PM - gregbradley (Registered User)
Thanks for that write-up, Rick.
I'll have to check that out on my FLI Proline to see how it performs. I have PI, and now I have some tutorials to help me learn it better, so that is helpful - thanks.
Greg.