PixInsight vs CCDStack - FWHM comparison of registration
At the AAIC it was suggested, with numerical evidence, that PixInsight does the best job of aligning/stacking images while keeping the tightest stars.
I thought I would test it for myself.
I aligned and stacked 8 identical frames across 2 sets of images: one set was binned red-filter data, the other full-frame red-filter data.
I only own CCD Stack and downloaded a trial copy of Pixinsight last night.
The results are not conclusive at all and suggest there is very little in it, if anything. In fact, zoomed in, I like the look of the stars CCDStack produced for the 2x2 binned images much better. For the unbinned data I could not visually tell the difference.
The difference is probably down to how each package handles floating point data, and not enough to worry about.
What it does show, though, is that the default setting for PixInsight seems to be Lanczos, while the default in CCDStack is nearest neighbour, so this is why some may report a difference in FWHM values when aligning and registering.
I cannot get a conclusive view on the implementation of Lanczos. Some say it does a very mild deconvolution.
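For what it's worth, the "mild deconvolution" look most likely just comes from the negative side lobes of the Lanczos windowed-sinc kernel, not from any actual deconvolution. A minimal sketch of the standard Lanczos-3 kernel (plain numpy, not PixInsight's actual code):

```python
import numpy as np

def lanczos(x, a=3):
    """Standard Lanczos windowed-sinc kernel of order a (Lanczos-3 here)."""
    x = np.asarray(x, dtype=float)
    w = np.sinc(x) * np.sinc(x / a)   # np.sinc is the normalised sinc: sin(pi*x)/(pi*x)
    return np.where(np.abs(x) < a, w, 0.0)

# Sample the kernel at half-pixel steps to show the negative side lobes.
xs = np.arange(-3.0, 3.25, 0.5)
for x, w in zip(xs, lanczos(xs)):
    print(f"x = {x:+.1f}   weight = {w:+.4f}")
# The negative weights either side of the central lobe give Lanczos its slightly
# "sharpened" look around star edges; it is a resampling kernel, not a deconvolution.
```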
Interested in thoughts and results from others and other packages.
I have stacked in Maxim and PI. To my eye, PI stacks are cleaner. I haven't done any numerical analysis, but now that there is the batch processing script available in PI, I won't be going back to Maxim for stacking.
CCDIS in CCDStack definitely does a better job than PI, although I haven't checked out the latest star registration routine in PI that came out not so long ago. The most accurate star rego I get is with Goodlook though. Hands down.
I did a CCDStack vs PixInsight bakeoff about a year ago for my own amusement. I did calibration as well as registration and stacking in each of the packages. I didn't make any attempt to use the same registration algorithms in both (IIRC, I used Bicubic B-Spline in CCDStack, as recommended in the Adam Block tutorials, and the default in PI). I saw a 5-10% better result, measured by FWHM, from PI over a few examples.
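For anyone wanting to reproduce that kind of number, the FWHM figure is just a Gaussian fit to a star profile. A minimal sketch of how one could measure it (Python/scipy, nothing to do with either package's own FWHM tool; the profile below is synthetic, purely for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, sigma, bg):
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + bg

def fwhm_from_profile(profile):
    """Fit a 1-D Gaussian to a star's cross-section and return its FWHM in pixels."""
    x = np.arange(profile.size)
    p0 = [profile.max() - profile.min(), float(np.argmax(profile)), 2.0, float(profile.min())]
    popt, _ = curve_fit(gaussian, x, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])   # FWHM = 2.3548 * sigma

# Synthetic star profile with sigma = 1.8 px, so the expected FWHM is ~4.24 px.
x = np.arange(21)
star = gaussian(x, 1000.0, 10.0, 1.8, 50.0)
print(f"measured FWHM: {fwhm_from_profile(star):.2f} px")
```

Measure the same handful of stars in both stacks and the percentage difference falls straight out.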
If I get some time I'll redo my tests under more controlled conditions...
Be interested to see your results if you do them again.
You definitely can't accept defaults though. Bicubic B-spline does interpolation and smoothing, which definitely increases the fuzziness around star edges and hence the FWHM goes up. The only way to make it a fair comparison is to use the exact same registration algorithm across packages. That 5-10% you saw is because PixInsight used Lanczos as its default and in CCDStack you used bicubic B-spline, which produces the worst result of all the available options.
In fact I deliberately use bicubic B-spline on my colour/RGB data because of the smoothing effect it induces in colour data. I would never use it on luminance frames. Nearest neighbour has been my go-to for lum, but I will use Lanczos36 from now on.
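If anyone wants to see the effect without firing up either package, here's a minimal sketch (Python, using numpy/scipy/OpenCV purely as stand-in resamplers on a synthetic Gaussian star; none of this is the packages' actual code, and scipy's un-prefiltered spline is only a loose stand-in for CCDStack's Bicubic B-spline option). It shifts the star by half a pixel with different kernels and compares the resulting FWHM:

```python
import numpy as np
import cv2                    # opencv-python, used only as a convenient resampler
from scipy import ndimage

# Synthetic Gaussian star, sigma = 1.6 px, on a 33x33 grid.
n, sigma = 33, 1.6
y, x = np.mgrid[:n, :n] - n // 2
star = np.exp(-(x**2 + y**2) / (2 * sigma**2)).astype(np.float32)

def fwhm(img):
    """Gaussian-equivalent FWHM (pixels) from the image's second moments."""
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    tot = img.sum()
    cx, cy = (xx * img).sum() / tot, (yy * img).sum() / tot
    var = (((xx - cx) ** 2 + (yy - cy) ** 2) * img).sum() / (2 * tot)
    return 2.3548 * np.sqrt(var)

shift = 0.5                                       # half-pixel offset: worst case for resampling
M = np.float32([[1, 0, shift], [0, 1, shift]])    # pure translation

resampled = {
    "nearest":  cv2.warpAffine(star, M, (n, n), flags=cv2.INTER_NEAREST),
    "bicubic":  cv2.warpAffine(star, M, (n, n), flags=cv2.INTER_CUBIC),
    "lanczos4": cv2.warpAffine(star, M, (n, n), flags=cv2.INTER_LANCZOS4),
    # prefilter=False applies the raw cubic B-spline basis (the "smoothing" flavour).
    "b-spline": ndimage.shift(star, (shift, shift), order=3, prefilter=False),
}
print(f"original: FWHM = {fwhm(star):.3f} px")
for name, img in resampled.items():
    print(f"{name:>8}: FWHM = {fwhm(img):.3f} px")
```

Nearest neighbour leaves the profile untouched (it just lands up to half a pixel off), while the smoothing B-spline comes out visibly softer, consistent with the point above about B-spline fuzziness.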
FYI I do use the CCDIS plug-in in CCDStack and it was used for the results I obtained below.
My tests are showing there is nothing in it between them, so use the one you find easiest if you have both. I did this as I am trying to decide whether I need to take the plunge on PixInsight versus my current CCDStack and Photoshop routines.
Yes, I suspect Goodlook is the best, but I haven't made a direct comparison.
I tried nearest neighbour on Marc's recommendation (CCDStack), but I ended up with extremely skewed scaling all over the pic and the stars were chopped to bits.
Going by that experience, and everyone at AAIC having their own favourite, I think it must depend entirely on the condition of the data; one method is not best for all data. I find bicubic in CCDStack always works for me, but I'll be sure to try Lanczos36 now.
I am sure you've nailed what the difference was. The different default algorithm could make a large difference.
CCDIS is also superb, and I found it was the end of any misregistration I had experienced without it.
I've bought the PI tutorials to study and intend to become adept in both programs. Even if PI is not used all the time I am sure it offers tools that could be very useful.
Nice meeting you at AAIC. It's got me invigorated to pursue several projects.
It's a stacking utility written by Mike Berhton-Jones, Sculptor on IIS.
I think there are two distinct parts to the star registration process (rough sketch at the end of this post):
1_ how well the stars are matched from frame to frame.
2_ how they're interpolated once they are registered.
#1 varies with your image scale. CCDIS works well at 2asp; anything wider field and it will fall apart, especially if you're using lenses with a bit of coma and/or field curvature.
I also found CCDIS useless with some of the pictures I took at 0.5asp. Not sure if there weren't enough stars around, but it couldn't find anything.
#2 then becomes less important, because if the subs are distorted (warped) well enough in step #1 the algorithm interpolating the stars has an easier job to do.
That's why Goodlook is so good at it. I'll have to post two subs, before and after, to show how it works. I have an animated GIF somewhere that shows the transforms very clearly.
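To make that two-part split concrete, here's a minimal sketch of the idea (Python/numpy with made-up star coordinates, purely illustrative; it is not CCDIS's or Goodlook's actual method): step 1 fits a transform to matched star centroids, and step 2, the resampling, is where the interpolation kernel choice comes in.

```python
import numpy as np

# Step 1: fit a transform to matched star positions.
# ref / tgt are matched star centroids (x, y) in the reference and target
# frames -- hypothetical values for illustration only.
ref = np.array([[120.3,  88.1], [412.7,  95.6], [250.2, 310.9], [390.5, 402.4]])
tgt = np.array([[122.1,  90.0], [414.4,  97.7], [252.0, 312.7], [392.2, 404.3]])

A = np.hstack([tgt, np.ones((len(tgt), 1))])        # one [x, y, 1] row per matched star
affine, *_ = np.linalg.lstsq(A, ref, rcond=None)    # least-squares affine fit (3x2 matrix)
rms = np.sqrt(((A @ affine - ref) ** 2).mean())
print(f"fitted affine coefficients:\n{affine}")
print(f"RMS matching error: {rms:.3f} px")

# Step 2: resample the target frame onto the reference grid using that transform.
# This is where nearest neighbour / bicubic B-spline / Lanczos differ, e.g. via
# scipy.ndimage.affine_transform(...) or cv2.warpAffine(..., flags=...).
```

A wide or distorted field needs a higher-order model than this simple affine one, which is exactly where #1 starts to matter more than #2.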
Is there a link to this software? I am not having any luck tracking it down either.
Greg.
You won't find it on the web. The only way to get it is to front up to an ASNSW imaging meeting. That's just one bit of Goodlook; it does everything, literally.
Fred, is that the software that does the multiple star autoguiding? Or is that a separate piece of software?
Greg.
Yes, same software. And that is simultaneous dual-guide multi-star guiding, BTW. The guiding and capture are specific to Mike's SBIG 11k though (he wrote it for himself), but the stacking and all the processing (and much more) in Goodlook is generic, in separate "modules".
It goes to show that choosing defaults across different packages can yield different results. I am pleased you got identical results using the same algorithm across different packages.
A worthwhile test for sure and at least the pointer has got people re-assessing what they use.
I was impressed, Martin, by your thoroughness of approach. You leave no stone unturned and constantly inspect and check all your processes.
You leave nothing to chance and are obviously a very hard worker and don't shy away from a tough job!
Impressive.
Of all the talks, I got the most out of yours, so thank you.
Looks like a tie between CCDStack and PixInsight with Maxim in second place.
I also tried to do a registration with RegiStar but couldn't get it to work. I have used it successfully before so I'll try again later and see if I can figure out what I'm doing wrong.
Looks like it's a tie then, which is good. I did not want to learn PI just for stacking. But sounds like I will learn it anyway, as the background tool alone seems amazing.
Interesting read. I've never considered myself proficient at CCDStack by any means, but using trial versions of CCDStack against PI I thought I could see an improvement in PI. Pretty sure it wasn't a stringent test by the rules here (same algorithm and parameters), and I can't find the pics any more.