M83 repro: PixInsight batch stacking vs DSS


DiscoDuck
11-05-2015, 08:46 PM
I had a go at reprocessing a previous M83 image I put up in this thread
http://www.iceinspace.com.au/forum/showthread.php?t=134220&referrerid=17584
which I'm no longer happy with, as I think it looked a bit overprocessed. The new result is below, now including 12 frames - so about 4 hours of data with a QHY8 on an RC8 and EQ6. Any better? Worse? Gee, so hard to tell after staring at the screen for so long!!

I have a query though. This time I stacked in PixInsight using the Batch Preprocessing script. The attached crops of the galaxy compare the result from all 12 frames in PixInsight with the 9-best-of-12 Deep Sky Stacker version from the other thread (both with some simple histogram and other manipulations, but nothing substantially different between the two). What surprised me is how much clearer the PixInsight version is. Does that script do some sort of deconvolution?

The issue, though, is that it left all the hot pixels in. I haven't managed to persuade it to get rid of these yet, even with the Cosmetic Correction tool and the dark master generated above. Any tips? As it was, I removed most of them manually in Photoshop in the full (non-cropped), processed image :rolleyes:

Paul

pluto
11-05-2015, 08:58 PM
That's a really good looking M83.

I was amazed at how much better PI stacks than DSS (and coincidentally it was an image of M83 that I first tried PI on too :))

In PI, when you use the Batch Preprocessing script you need to use a CosmeticCorrection process to get rid of the hot pixels. You should also stack the registered images that come out of that script with an ImageIntegration process.
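
If it helps to picture what that cosmetic correction step is doing, here's a rough Python sketch of the general idea (replace pixels that sit well above their local neighbourhood with the local median). It's only an illustration of the concept, not PI's actual implementation, and the 5-sigma threshold is an arbitrary example:

import numpy as np
from scipy.ndimage import median_filter

def remove_hot_pixels(img, k_sigma=5.0):
    # Work in floating point so the subtraction below can't underflow.
    img = np.asarray(img, dtype=np.float64)
    # Local median of each pixel's 3x3 neighbourhood.
    local_med = median_filter(img, size=3)
    residual = img - local_med
    # Robust noise estimate from the median absolute deviation of the residuals.
    noise = 1.4826 * np.median(np.abs(residual - np.median(residual)))
    hot = residual > k_sigma * noise
    cleaned = img.copy()
    cleaned[hot] = local_med[hot]
    return cleaned

Something like that applied to each calibrated sub is roughly the effect you're after from CosmeticCorrection.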

RickS
11-05-2015, 08:59 PM
BPP doesn't do any more than calibration, registration and integration. Certainly no deconvolution!

The default rejection algorithm and parameters for integration in PI may not suit your data. Integration is best done outside BPP and it is an important step to optimize. For 12 frames I'd use Winsorized Sigma Clipping and adjust the rejection parameters to give only a smidge of rejection on the low side (0.05% or so) and more on the high side (for good quality data, less than 1%, but maybe a percent or two for dodgy data).
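
If you're curious, the core idea of Winsorized Sigma Clipping is roughly the Python sketch below. The sigma_low/sigma_high parameters are in sigma units, like the sliders in the ImageIntegration dialog, while the 0.05% and 1% figures above are the rejection percentages you read off the console afterwards. This is only the gist - the real ImageIntegration also normalises and weights the frames, which the sketch ignores entirely:

import numpy as np

def winsorized_sigma_clip(frames, sigma_low=4.0, sigma_high=3.0, iters=3):
    # frames: registered subs as an array of shape (n_subs, height, width).
    data = np.asarray(frames, dtype=np.float64)
    rej = np.zeros(data.shape, dtype=bool)
    for _ in range(iters):
        work = np.where(rej, np.nan, data)
        centre = np.nanmedian(work, axis=0)
        spread = np.nanstd(work, axis=0)
        # Winsorize: clamp extreme values to the clip boundaries before
        # re-estimating the spread, so outliers can't inflate their own sigma.
        wins = np.clip(work, centre - sigma_low * spread, centre + sigma_high * spread)
        spread = np.nanstd(wins, axis=0)
        rej |= (data < centre - sigma_low * spread) | (data > centre + sigma_high * spread)
    kept = np.where(rej, np.nan, data)
    low_pct = 100.0 * np.mean(rej & (data < centre))   # rejection on the low side
    high_pct = 100.0 * np.mean(rej & (data > centre))  # rejection on the high side
    return np.nanmean(kept, axis=0), low_pct, high_pct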

There's a more scientific/data driven approach but that can wait for another day...

Cheers,
Rick.

DiscoDuck
13-05-2015, 07:57 PM
Thanks Hugh and Rick. Will give the separate integration + cosmetic correction a go as suggested.

What's this other method Rick? More science sounds good! :)

BTW, how does PI compare to CCDStack or other alternatives?

pluto
13-05-2015, 08:19 PM
Haven't used CCDStack but I've heard it's even better than PI for stacking.

RickS
13-05-2015, 08:38 PM
I used CCDStack before I moved to PI. I found that CCDStack is relatively easy to use and produces good results. I still use it for checking my data in the field. You probably need to use something else to put the finishing touches to your images (Photoshop or PI, for example.) PI is a bit more challenging to master but IMHO that extra effort is rewarded with better results.

OK... if you use the PixInsight ImageIntegration process to do a trial integration on your data with Rejection Algorithm set to "No rejection" you'll see a bunch of info printed to the Process Console. The last number printed is the Median Noise Reduction. This number is as good as it gets for noise reduction. An integration without any pixel rejection gives you maximal SNR.
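
To get a feel for what that ceiling means, here's a toy Python demo (nothing to do with PI's actual noise evaluation, which is more sophisticated than a plain standard deviation): averaging N frames of pure noise cuts the noise by about sqrt(N), and that's the figure any rejection will eat into.

import numpy as np

rng = np.random.default_rng(0)
n_frames = 12
# Synthetic subs: flat signal of 1000 ADU with Gaussian noise of 50 ADU.
frames = rng.normal(loc=1000.0, scale=50.0, size=(n_frames, 512, 512))

single_noise = frames[0].std()
stacked_noise = frames.mean(axis=0).std()

print(f"single sub noise  : {single_noise:.2f}")
print(f"stack of {n_frames} noise : {stacked_noise:.2f}")
print(f"noise reduction   : {single_noise / stacked_noise:.2f}x "
      f"(theory: sqrt({n_frames}) = {np.sqrt(n_frames):.2f})")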

The next step is to pick an appropriate rejection algorithm. I use Percentile Clipping if I only have a few subs (say 3-5). With a few more I'll try one of the sigma clipping methods; around 8 or 9+ subs, Winsorized Sigma Clipping is good. If I get to 15 or more then Linear Fit Clipping is worth a try.
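
Percentile Clipping is the easiest of those to picture; in rough Python terms it's something like the sketch below. The parameter names and numbers are just illustrative, not PI's exact definition:

import numpy as np

def percentile_clip(frames, low=0.2, high=0.1):
    # Reject pixels that deviate from the per-pixel median by more than a
    # given fraction of that median, then average whatever survives.
    data = np.asarray(frames, dtype=np.float64)
    med = np.median(data, axis=0)
    rej = (data < med * (1.0 - low)) | (data > med * (1.0 + high))
    kept = np.where(rej, np.nan, data)
    return np.nanmean(kept, axis=0)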

Using the selected algorithm, play with the rejection parameters and try to reject as little data as possible (watch the pixel rejection counts, especially the total at the bottom) while still getting rid of hot pixels and other unwanted artifacts, and check the result carefully at 1:1. What you're trying to do is reject all the bad stuff while getting as close as possible to the target median noise reduction we got with no rejection. If you can get within a percent or two of that then you've maximized your SNR.
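
Putting the earlier sketches together, the tuning loop looks something like this. It's illustrative only - it reuses the winsorized_sigma_clip sketch from my post above and assumes frames holds your registered subs as a NumPy array:

import numpy as np

target = frames.mean(axis=0)  # no rejection: the SNR ceiling
stacked, low_pct, high_pct = winsorized_sigma_clip(frames, sigma_low=4.0, sigma_high=3.0)

print(f"rejected: {low_pct:.3f}% low, {high_pct:.3f}% high")

# Crude check: compare noise in a star-free background patch (coordinates are
# just an example) instead of trusting the whole-image standard deviation.
patch = np.s_[100:200, 100:200]
print(f"background noise, no rejection  : {target[patch].std():.2f}")
print(f"background noise, with rejection: {stacked[patch].std():.2f}")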

Sometimes it's worth comparing a couple of different rejection algorithms to see which does better. You can also improve things slightly with the right choice of Scale Estimator (try k-sigma, average absolute deviation and median absolute deviation).
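
Those scale estimators are just different ways of measuring the spread of the pixel stack at each position. Roughly, in Python (the normalisation constants are the standard Gaussian ones; PI's exact definitions may differ, and its k-sigma is an iterative robust variant rather than a plain standard deviation):

import numpy as np

def scale_estimates(x):
    x = np.asarray(x, dtype=np.float64)
    med = np.median(x)
    return {
        "k-sigma (plain std dev here)": np.std(x),
        "average absolute deviation":   1.2533 * np.mean(np.abs(x - med)),    # sqrt(pi/2)
        "median absolute deviation":    1.4826 * np.median(np.abs(x - med)),  # 1 / Phi^-1(0.75)
    }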

This can be quite a lot of work, but if you improve your SNR by 20% over the default parameters you may have achieved an improvement that would otherwise have cost you 10 more hours of data capture :)

NB: the noise estimation isn't infallible, so don't be surprised if you occasionally see anomalous results, e.g. a median noise reduction that seems too good to be true. In those cases I use the rejection counts to judge where I am.

Cheers,
Rick.

DiscoDuck
14-05-2015, 11:26 PM
Thanks guys.