Pixinsight vs CCDstack - FWHM comparison of registration
cventer
26-08-2013, 01:26 PM
At the AAIC it was suggested via numerical evidence that Pixinsight does the best job of aligning/stacking images keeping the tightest stars.
I thought I would test it for myself.
I aligned and stacked 8 identical frames across 2 sets of images. One set was 2x2-binned red filter data; the other was full-frame, unbinned red filter data.
I only own CCD Stack and downloaded a trial copy of Pixinsight last night.
The results are not conclusive at all and suggest there is very little in it, if anything. In fact, zoomed in, I like the look of the stars CCDStack produced for the 2x2-binned images much better. For the unbinned data I could not visually tell the difference.
I have attached a screenshot of the results.
But summary is:
2x2 Bin
PixInsight default setting: 0.87px
PixInsight Lanczos 3 setting: 0.87px
CCDStack Lanczos 36 setting: 0.89px
CCDStack nearest neighbour setting: 1px
CCDStack Bicubic B-spline setting: 1.5px
Reference image: 0.98px
1x1 (unbinned) data
CCDStack Lanczos 36: 2.69px
PixInsight Lanczos 3 setting: 2.73px
Reference image: 2.07px
The difference is probably down to the handling of floating point data within each package and not enough to worry about.
What it shows, though, is that the default setting in PixInsight seems to be Lanczos, while the default in CCDStack is nearest neighbour, so this is why some may report a difference in FWHM values when aligning and registering.
I cannot get a conclusive view on the implementation of Lanczos. Some say it does a very mild deconvolution.
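For what it's worth, the Lanczos kernel itself is just a windowed sinc, so it's easy to poke at. Here is a rough Python sketch of the textbook formula (not the actual code in either package):

[code]
import numpy as np

def lanczos_kernel(x, a=3):
    # Textbook Lanczos-a kernel: sinc(x) * sinc(x/a) for |x| < a, else 0.
    # np.sinc is the normalised sinc, sin(pi*x)/(pi*x).
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

# The kernel dips negative between the integer sample points:
print(lanczos_kernel(1.5))  # approx -0.135
[/code]

Those negative lobes give Lanczos a slight sharpening character, which is probably where the "mild deconvolution" impression comes from.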
Interested in thoughts and results from others and other packages.
DavidTrap
26-08-2013, 01:43 PM
I have stacked in Maxim and PI. To my eye, PI stacks are cleaner. I haven't done any numerical analysis, but now that the batch processing script is available in PI, I won't be going back to Maxim for stacking.
DT
cventer
26-08-2013, 01:46 PM
Just did another test, this time with 15 unbinned images, both packages using Lanczos and an average stack.
Got exactly the same FWHM from both this time.
multiweb
26-08-2013, 01:56 PM
CCDIS in CCDStack definitely does a better job than PI, although I haven't checked out the latest star registration routine in PI that came out not so long ago. The most accurate star rego I get is with Goodlook though. Hands down.
RickS
26-08-2013, 02:20 PM
I did a CCDStack vs PixInsight bakeoff about a year ago for my own amusement. I did calibration as well as registration and stacking in each of the packages. I didn't make an attempt to use the same registration algorithms in both (IIRC, I used Bicubic B-Spline in CCDStack, as recommended in the Adam Block tutorials, and the default in PI). I saw a 5-10% better result, measured by FWHM, from PI over a few examples.
If I get some time I'll redo my tests under more controlled conditions...
Cheers,
Rick.
cventer
26-08-2013, 03:21 PM
Rick
Be interested to see your results if you do them again.
You definitely can't accept defaults though. Bicubic B-spline does interpolation and smoothing, which definitely increases the fuzziness around star edges and hence the FWHM goes up. The only way to make it a fair comparison is to use the exact same registration algorithm across packages. That 5-10% you saw is because PixInsight used Lanczos as its default while in CCDStack you used Bicubic, which produces the worst result of all the available options.
In fact I deliberately use Bicubic B-spline on my colour/RGB data because of the smoothing effect it induces in colour data. I would never use it on luminance frames. Nearest neighbour has been my go-to for lum, but I will use Lanczos 36 from now on.
FYI I do use the CCDIS plug-in in CCDStack and it was used for the results I posted above.
My tests show there is nothing in it between them, so use the one you find easiest if you have both. I did this as I am trying to decide if I need to take the plunge on PixInsight vs my current CCDStack and Photoshop routines.
cventer
26-08-2013, 03:30 PM
What is this? Google reveals nothing.
Bassnut
26-08-2013, 04:00 PM
Yes, I suspect Goodlook is the best, but I haven't made a direct comparison.
I tried nearest neighbour on Marc's recommendation (CCDStack), but I ended up with extreme skewed scaling all over the pic and the stars were chopped to bits.
Going by that experience, and everyone at AAIC having their own different favourite, I think it must depend totally on the condition of the data; one method is not best for all data. I find Bicubic in CCDStack always works for me, but I'll be sure to try Lanczos 36 now.
gregbradley
26-08-2013, 04:37 PM
Good work Chris.
I am sure you've nailed what the difference was. The different default algorithm could make a large difference.
CCDIS is also superb; I found it was the end of any misregistration I had experienced without it.
I've bought the PI tutorials to study and intend to become adept in both programs. Even if PI is not used all the time I am sure it offers tools that could be very useful.
Nice meeting you at AAIC. It's got me invigorated to pursue several projects.
Greg.
multiweb
26-08-2013, 04:55 PM
It's a stacking utility written by Mike Berthon-Jones, Sculptor on IIS.
I think there are two different and distinct parts in the star registration process.
1_ how good the stars are matched from frame to frame.
2_ how they're interpolated once they are registered.
#1 varies with your image scale. CCDIS works well at around 2 arcsec/pixel; anything wider field and it will fall apart, especially if you're using lenses with a bit of coma and/or field curvature.
I also found CCDIS useless with some of the pictures I took at 0.5 arcsec/pixel. Not sure if there weren't enough stars around, but it couldn't find anything.
#2 then becomes less important, because once the distortion in the subs has been matched properly the algorithm interpolating the stars has an easier job to do.
That's why Goodlook is so good at it. I'll have to post two subs before and after to show how it works. I have an animated GIF somewhere that shows the transforms very clearly.
gregbradley
26-08-2013, 05:13 PM
Hi Marc,
Is there a link to this software? I am not having any luck tracking it down either.
Greg.
Bassnut
26-08-2013, 05:30 PM
You won't find it on the web. The only way to get it is to front up to an ASNSW imaging meeting ;). That's just one bit of Goodlook; it does everything, literally.
gregbradley
26-08-2013, 05:31 PM
Fred, is that the software that does the multiple star autoguiding? Or is that a separate piece of software?
Greg.
Bassnut
26-08-2013, 05:44 PM
Yes, same software. And that is simultaneous dual-guider, multi-star guiding BTW :eyepop:. The guiding and capture are specific to Mike's SBIG 11K though (he wrote it for himself), but the stacking and all the processing (and much more) in Goodlook is generic, in separate "modules".
Martin Pugh
26-08-2013, 05:49 PM
Good detective work Chris.
It goes to show that choosing defaults across different packages can yield different results. I am pleased you got identical results using the same algorithm across different packages.
A worthwhile test for sure and at least the pointer has got people re-assessing what they use.
cheers
Martin
gregbradley
26-08-2013, 06:00 PM
I was impressed, Martin, by the thoroughness of your approach. You leave no stone unturned and constantly inspect and check all your processes.
You leave nothing to chance and are obviously a very hard worker and don't shy away from a tough job!
Impressive.
Of all the talks, I got the most out of yours, so thank you.
Greg.
Martin Pugh
26-08-2013, 06:58 PM
That's very nice of you to say so Greg.
However, I am currently trying out my new 5D MKIII with some of those settings you were discussing, along with a few from Phil's book.
cheers
Martin
RickS
26-08-2013, 10:09 PM
I ran a test on 40 x 600 sec uncalibrated Luminance images (NGC 7424 @ 2760mm focal length).
CCDStack: registered with CCDIS/High Precision, interpolated with Lanczos 36, followed by a Mean stack, no normalization, no rejection.
PI: Star alignment with default interpolation (Lanczos 3) followed by integration with no normalization or rejection.
Maxim: Auto star match registration followed by Average combine.
CCDInspector FWHM for integrated result:
PI: 2.71 arcsec
CCDStack: 2.72 arcsec
Maxim: 2.86 arcsec
Looks like a tie between CCDStack and PixInsight with Maxim in second place.
I also tried to do a registration with RegiStar but couldn't get it to work. I have used it successfully before so I'll try again later and see if I can figure out what I'm doing wrong.
Cheers,
Rick.
cventer
26-08-2013, 10:17 PM
Thanks Rick
Looks like it's a tie then, which is good. I did not want to learn PI just for stacking. But it sounds like I will learn it anyway, as the background tool alone seems amazing.
Interesting read. I've never considered myself proficient at CCDStack by any means, but using trial versions of CCDStack against PI I thought I could see an improvement in PI. Pretty sure it wasn't a stringent test by the rules here (same algorithm and parameters) and I can't find the pics any more.
Thanks for sharing guys.
alocky
26-08-2013, 10:34 PM
Should point out that if you can't exactly reproduce the results using the same algorithm in a different program, then one of the bits of software has a bug. There is no flexibility in the implementation.
There is unlikely to be any difference in the actual bit of code (I still call them subroutines!) doing either the Lanczos resampling or the bilinear spline: in fact I'll bet they've been lifted straight out of LAPACK, the Naval Surface Warfare Center library, or Numerical Recipes. The maths should be identical and reproducible. As for stacking, it's not a very complicated algorithm. However, I wonder if gains might be made by weighting the contribution to a final pixel from individual subframes in the summation by the FWHM values in each, or some similar data quality metric?
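Something like this, perhaps. A rough NumPy sketch of the idea; the 1/FWHM^2 weighting is just one plausible choice, not something I know either package to do:

[code]
import numpy as np

def weighted_mean_stack(subs, fwhms):
    # Average registered subframes, weighting each sub by 1/FWHM^2 so
    # that the sharper frames contribute more to every output pixel.
    # subs: (n_subs, height, width); fwhms: per-sub FWHM in pixels.
    weights = 1.0 / np.asarray(fwhms, dtype=float) ** 2
    return np.average(subs, axis=0, weights=weights)

# Example: three fake 100x100 subs with FWHMs of 2.1, 2.7 and 3.4 px
subs = np.random.default_rng(0).normal(100.0, 5.0, size=(3, 100, 100))
stacked = weighted_mean_stack(subs, [2.1, 2.7, 3.4])
[/code]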
The real gains from one bit of software to the next will be in optimising the rejection criteria in the stack for your data. The optimal parameters will depend on the type of noise and its particular realisation that you're trying to stack out.
cheers,
Andrew.
RickS
27-08-2013, 07:54 AM
G'day Andrew,
Interpolation is only part of the registration process so there's plenty of opportunity for variation in results even using Lanczos-3 in both cases. CCDStack and PI use different methods for star detection and matching.
There's some interesting stuff in the PI documentation for StarAlignment including a comparison of the different interpolation algorithms.
The idea of weighting subs based on a quality metric is a good one. Integration in PI normally applies a weighting based on the estimated S/N ratio of the subs. There are other options, of course, but using star quality is currently not one of them.
Cheers,
Rick.
naskies
27-08-2013, 10:32 AM
Hi Andrew,
I agree that the maths behind Lanczos resampling is unambiguous, but I'm not sure that I agree with your assertion.
Calculations using fixed-precision floating point are not exact: adding and multiplying the same numbers in different orders will often give slightly different results (unlike in pure maths), because floating point arithmetic is not associative. The 0.04 px difference in FWHMs reported in the opening post is conceivably within rounding errors across interpolation and stacking (as Rick points out).
A programmer would typically expect floating point variations due to hardware, programming language, and even operating systems. In fact, this is such a huge problem that on large scientific computing projects there is often at least one computer scientist/programmer whose job is just to deal with floating point precision issues.
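A trivial Python demonstration of the point, since addition alone is enough to show it:

[code]
import math

a = (0.1 + 0.2) + 0.3  # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)  # 0.6
print(a == b)          # False: same numbers, different order
print(math.fsum([0.1, 0.2, 0.3]))  # fsum tracks partials to compensate
[/code]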
Also, there is flexibility in a Lanczos implementation in the way boundary cases are handled, e.g. clipping artefacts and the handling of constant input signals.
alocky
27-08-2013, 11:33 AM
Very true - and believe it or not, I'm actually involved in large scale computing professionally. In fact I wrote a paper a few years back on using the Procrustes method to match two sets of measurements of the electromagnetic field to the instrument orientations. Same problem.
However, registering these images is an optimization problem: finding 4 numbers to describe the scale, shift and rotation between a set of 4 or more images with 100s to 1000s of stars in each. This is not an ill-posed problem, and regardless of what method you choose to arrive at the final set of numbers, I would argue they should match to several decimal places, or you've done something wrong! The stacking and noise rejection process is where the differences are going to come from.
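To illustrate, here is a rough Python sketch of the closed-form least-squares solution (the Umeyama/Procrustes result) for those 4 numbers, given matched star positions from two frames. Any correct implementation, whatever its internals, should agree with this to many decimal places:

[code]
import numpy as np

def fit_similarity(p, q):
    # Least-squares scale s, rotation R and shift t with q ~ s*R*p + t.
    # p, q: (n, 2) arrays of matched star positions.
    mu_p, mu_q = p.mean(axis=0), q.mean(axis=0)
    pc, qc = p - mu_p, q - mu_q
    u, sv, vt = np.linalg.svd(qc.T @ pc)    # 2x2 cross-covariance
    d = np.sign(np.linalg.det(u @ vt))      # guard against reflections
    r = u @ np.diag([1.0, d]) @ vt
    s = (sv * [1.0, d]).sum() / (pc ** 2).sum()
    t = mu_q - s * r @ mu_p
    return s, r, t

# Sanity check: recover a known transform from synthetic "stars"
rng = np.random.default_rng(1)
p = rng.uniform(0, 4096, size=(500, 2))
theta = np.deg2rad(0.3)
r_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
q = 1.001 * p @ r_true.T + [5.0, -3.0]
print(fit_similarity(p, q))  # ~ (1.001, r_true, [5.0, -3.0])
[/code]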
Still - I really didn't think it necessary to go down the burrow of floating point maths in this case. But you are completely correct - although I would argue that it's a second order effect in this case.
Cheers,
Andrew.
The other elephant in the room of course is algorithm availability. PI has consistently been adding new options in that regard over the years. I haven't been keeping up with other packages, but liked where PI was headed a few years ago.
But, it would be a brave man who said starting with the best tools necessarily assured the best piece of art at the end of the process (err, science ;))
RickS
28-08-2013, 10:52 AM
I still can't get RegiStar to work :shrug:
I went a bit further analysing the star shapes in my original integrations by doing a PSF estimate for the same ten stars in each. PI was consistently better than CCDStack on all ten stars but only by 2-3%. Maxim was consistently worse by 14-15%.
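For anyone who wants to reproduce that sort of measurement outside these packages, here's a rough Python/SciPy sketch of a Gaussian PSF fit on a star cutout (the real estimators offer fancier models like Moffat; the FWHM = 2*sqrt(2*ln 2)*sigma conversion is the standard one for a Gaussian):

[code]
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sigma, offset):
    # Circular 2D Gaussian, flattened so curve_fit can digest it.
    x, y = xy
    return offset + amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2)
                                 / (2 * sigma ** 2))

def star_fwhm(cutout):
    # Fit the Gaussian to a small cutout centred on a star; for a
    # Gaussian, FWHM = 2*sqrt(2*ln 2)*sigma, about 2.3548*sigma.
    yy, xx = np.indices(cutout.shape)
    p0 = [cutout.max() - cutout.min(), cutout.shape[1] / 2.0,
          cutout.shape[0] / 2.0, 2.0, float(cutout.min())]
    popt, _ = curve_fit(gauss2d, (xx.ravel(), yy.ravel()),
                        cutout.ravel(), p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[3])
[/code]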
I also did a full calibration and integration run on the same data with PI and CCDStack. Using PI, I followed my normal procedure of tweaking the rejection algorithm and parameters to get as close as possible to the maximum S/N (determined by an integration with no rejection) while checking that the rejection was adequate by visual inspection. Linear Fit gave the best result, as expected for a large number of subs. In CCDStack I used the STD Sigma-reject algorithm and tweaked parameters to get similar rejection percentages to PI. A noise estimate on the two integrations showed very little difference between them (~1% advantage to PI). Visually, there was no clear difference either.
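For reference, the basic kappa-sigma rejection that both packages implement variants of looks roughly like this. A simplified sketch, not either package's exact algorithm; the real ones add normalization and smarter variance estimates:

[code]
import numpy as np

def sigma_clip_stack(subs, kappa=2.5, iterations=3):
    # Iteratively reject pixels more than kappa standard deviations
    # from the per-pixel mean across subs, then average the survivors.
    # subs: (n_subs, height, width) array of registered frames.
    data = np.asarray(subs, dtype=np.float64)
    keep = np.ones(data.shape, dtype=bool)
    for _ in range(iterations):
        masked = np.where(keep, data, np.nan)
        mu = np.nanmean(masked, axis=0)
        sigma = np.nanstd(masked, axis=0)
        keep = np.abs(data - mu) <= kappa * sigma
    return np.nanmean(np.where(keep, data, np.nan), axis=0)
[/code]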
So far there's no significant advantage to either package. If I get a chance I might try playing around with some smaller/poorer data sets.
For me there is one significant benefit from using PI. I get better calibration using the overscan region in my camera to counter bias drift. AFAIK, PI is the only amateur package that supports this. For the purpose of the comparison above I didn't use overscan.
Cheers,
Rick.
gregbradley
28-08-2013, 05:22 PM
Hi Rick,
Can you elaborate on using the overscan region of the chip for countering bias drift?
Is this for flats?
Greg.
RickS
28-08-2013, 06:56 PM
G'day Greg,
I have noticed that at least two of the cameras I have used show a significant amount of bias variation over time. The latest, an Apogee Alta U16M, can show a difference between bias frames of as much as 16 ADU (this is with the temperature accurately regulated). That corresponds to 40e-, or 5 times the camera read noise, so it's a significant effect.
My attempt to compensate for this is to use the overscan area for calibration. This appears to be a fairly standard technique in the professional astronomy world. The overscan region (or regions... some sensors have more than one) is an area of the sensor which is masked so that light can't reach it. It can be used as a reference for calibrating all types of frames.
The way it works in PI is that the median value of the overscan area is calculated for each frame and that pedestal is then subtracted from every pixel value. This is done for bias frames, dark frames, flat frames and light frames.
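In code terms it's nothing fancy. A rough sketch of that step (the overscan column range here is made up; the real region depends on the sensor and driver):

[code]
import numpy as np

def subtract_overscan(frame, overscan=np.s_[:, 4096:4196]):
    # Estimate the bias pedestal from the (hypothetical) overscan
    # columns and subtract it from the whole frame.
    pedestal = np.median(frame[overscan])
    return frame.astype(np.float64) - pedestal
[/code]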
I don't see a lot of difference with bright targets or when there's a lot of sky glow but for narrowband images and dim targets at dark sites it makes a measurable difference.
Most (all?) sensors have an overscan region but not all camera drivers give you access to it. I know that Apogee and FLI support this. My old SX and QHY cameras didn't, and I don't think the STL11K does either, at least not in Maxim. Access to the overscan area is also useful if you ever feel the urge to calculate a Photon Transfer Curve for your camera.
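On the Photon Transfer Curve point: the quickest version only needs a pair of matched flats and a pair of bias frames. A rough sketch of the standard mean-variance estimate (a full PTC uses many exposure levels; this is the single-point version):

[code]
import numpy as np

def estimate_gain_and_read_noise(flat1, flat2, bias1, bias2):
    # Single-point photon transfer estimate from two matched flats and
    # two bias frames. Differencing each pair cancels fixed pattern
    # noise; for matched frames var(A - B) = 2 * var(A).
    f1, f2 = flat1.astype(np.float64), flat2.astype(np.float64)
    b1, b2 = bias1.astype(np.float64), bias2.astype(np.float64)
    signal = (f1.mean() + f2.mean()) - (b1.mean() + b2.mean())
    shot_var = np.var(f1 - f2) - np.var(b1 - b2)
    gain = signal / shot_var                          # e-/ADU
    read_noise = gain * np.std(b1 - b2) / np.sqrt(2)  # e- RMS
    return gain, read_noise
[/code]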
Cheers,
Rick.
gregbradley
30-08-2013, 10:44 PM
Thanks for that writeup Rick.
I'll have to check that out on my FLI Proline to see how it performs. I have PI and now have some tutorials to learn it better. So that is helpful - thanks.
Greg.