I currently have 60 × 10-minute Ha frames of Thor's Helmet, taken at -15°C. I have been using CCDStack for stacking. I need to do this in batches, as with 30 frames the program runs out of memory or will not accept any more frames.
Stacking in batches of 10 and then summing, I cannot get rid of all the noise in the image. This is partly because I have to do data rejection on each run to eliminate the hot pixels. If I only do data rejection on the first runs and none on the later ones, the noise disappears but the hot pixels line up in rows. If I do data rejection on every run, I get latent noise (see the cropped image).
What do I do to get rid of the noise and the hot pixels? I would have thought that with 10 hours of Ha data the noise would be non-existent. What on earth am I doing wrong? Could it be that the flux level is low on a 10-minute sub, and using the Ha data as my luminance just forces the program to stretch the data, which shows up the noise? Do the subs need to be, say, 20 minutes?
Some thoughts and advice would provide an interesting discussion on this subject.
Paul, you stack large numbers of exposures exactly the way I do, and I find it works pretty well. If your ADUs are too low and don't overcome read noise, then read noise will add up with the number of subs and the noise won't decrease, although I have lately stacked subs with very low Ha ADUs and the noise did go down.
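The read-noise argument above can be sketched numerically. This is a back-of-the-envelope illustration with made-up electron counts (the signal, sky, and read-noise figures are assumptions, not measurements from anyone's camera): SNR of a stack always grows as the square root of the number of subs, but read noise sets the per-sub floor, which is why one long sub can beat two short ones.

```python
import math

def sub_snr(signal_e, sky_e, read_noise_e):
    """SNR of a single sub (in electrons): shot noise from target + sky,
    plus read noise added in quadrature."""
    return signal_e / math.sqrt(signal_e + sky_e + read_noise_e**2)

def stack_snr(n, signal_e, sky_e, read_noise_e):
    """SNR of a sum of n subs: signal and variance both scale with n,
    so SNR improves by sqrt(n) over a single sub."""
    return n * signal_e / math.sqrt(n * (signal_e + sky_e + read_noise_e**2))

# Illustrative (invented) numbers for a faint Ha target:
# 50 e- target, 20 e- sky, 9 e- read noise per 10-minute sub.
one = sub_snr(50, 20, 9)
sixty = stack_snr(60, 50, 20, 9)
print(sixty / one)  # sqrt(60), about 7.75x better than one sub

# Why longer subs help when read noise dominates: one 20-minute sub
# pays the read-noise penalty once, two 10-minute subs pay it twice.
print(sub_snr(100, 40, 9), stack_snr(2, 50, 20, 9))
```

Under these assumed numbers the single 20-minute sub comes out ahead of two stacked 10-minute subs, which matches the 20-minute advice later in the thread.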
With lots of noisy subs, Poisson data rejection works far better than sigma. It's more aggressive and can distort stars, but stars tend to average out OK with large stacks.
Also, increase the reject percentage. I usually use 2, but 5 or even 10 can improve things; again, a large stack compensates for the increased data loss.
If your guiding is very good, e.g. with an internal guider or OAG, I found the bad pixel columns and hot pixels land in the same place on every sub, so they aren't removed. Dithering fixed that completely.
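The dithering point can be demonstrated with a small simulation (1-D "frames" for brevity, and the same toy rejection as above rather than CCDStack's actual method): a hot pixel that sits in the same place on every sub is the consensus value there, so no rejection scheme can touch it; dither the frames and, after alignment, the hot value becomes an outlier at each location and gets clipped.

```python
import numpy as np

def reject_mean(stack, sigma=2.0):
    """Toy per-pixel sigma rejection + mean combine on 1-D frames."""
    med = np.median(stack, axis=0)
    keep = np.abs(stack - med) <= sigma * np.std(stack, axis=0) + 1e-9
    return np.where(keep, stack, 0).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)

rng = np.random.default_rng(1)
n, w = 20, 16
base = rng.normal(1000, 10, size=(n, w))
hot_col = 8

# No dithering: the hot pixel is in the same column on every sub,
# so it IS the median there and survives any rejection.
undithered = base.copy()
undithered[:, hot_col] = 50000
print(reject_mean(undithered)[hot_col])   # still ~50000

# Dithering: the scope is nudged between subs, so after alignment the
# hot pixel falls on a different column each time and gets rejected.
dithered = base.copy()
for i in range(n):
    dithered[i, (hot_col + i) % w] = 50000
print(reject_mean(dithered)[hot_col])     # back near 1000
```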
Mean combine rejects bad data by definition; the manual says you don't need to do data rejection first (although you can, and I do). If you find the noise gets worse using data rejection plus mean combine, then I would guess the rejection settings or method you're using aren't optimal.
Looking at your picture, it looks like noise, not hot pixels.
I usually run each sub through the hot pixel removal process straight after calibration. In CCDStack, go to Process -> Data Reject -> Remove hot/cold pixels. I use a strength of 5 and an ADU count of 100000. Then impute the rejected pixels, width 0.2, 3 iterations.
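Conceptually, that remove-then-impute step amounts to flagging pixels that disagree strongly with their neighbourhood and filling them from the local median. Here is a very rough stand-in in NumPy; the 3×3 window and the ADU threshold are illustrative choices, not CCDStack's strength/ADU settings.

```python
import numpy as np

def remove_hot_pixels(img, threshold_adu=1000.0):
    """Replace any pixel that deviates from its 3x3 neighbourhood median
    by more than threshold_adu with that median (flag + impute in one go).
    Simplified sketch; CCDStack's actual strength/iterations differ."""
    pad = np.pad(img, 1, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
    med = np.median(win, axis=(2, 3))          # local median per pixel
    out = img.copy()
    hot = np.abs(img - med) > threshold_adu
    out[hot] = med[hot]
    return out

frame = np.full((8, 8), 1200.0)
frame[3, 4] = 65535.0                          # one stuck/hot pixel
fixed = remove_hot_pixels(frame)
print(fixed[3, 4])                             # -> 1200.0
```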
You're getting swamped by readout noise, IMO. Go to 20-minute subs, especially in Ha; Thor's Helmet is very, very faint. When you do your data rejection, use sigma or Poisson, whichever you prefer working with. Poisson is a little aggressive, and make sure you explicitly set the threshold rather than a percentage: don't let CCDStack work it out for you, enter a value. Normalisation is critical for good data rejection. Dithering during data acquisition helps too.
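To see why normalisation matters for rejection: if subs have different sky or transparency levels, the same pixel legitimately varies from frame to frame, and a rejection threshold will flag good data. Scaling each sub to a common median first removes that frame-to-frame offset. A minimal multiplicative sketch (simplified relative to whatever CCDStack actually does):

```python
import numpy as np

def normalize_to_first(stack):
    """Multiplicatively scale each sub so its median matches the first
    sub's median, so that sky-level differences between frames aren't
    mistaken for bad pixels during rejection. Simplified illustration."""
    meds = np.median(stack, axis=(1, 2), keepdims=True)
    return stack * (meds[0] / meds)

rng = np.random.default_rng(3)
base = rng.normal(1000, 10, size=(8, 8))
# Three fake subs of the same field shot through different sky brightness.
stack = np.stack([base * s for s in (0.6, 1.0, 1.8)])
norm = normalize_to_first(stack)
print(np.median(norm, axis=(1, 2)))  # all three medians now agree
```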
Yes, I always go 20 minutes in Ha on dim objects now with a 10" (80% QE camera). 10 or 15 minutes were passable on the 12" with a reducer; I would imagine 20 minutes would be the go on an 8".
Yep, on faint objects I think 20-minute subs might be necessary.
Thanks for all the advice.
I have tried a combination of the techniques suggested and summed the Ha with the luminance data, and the noise seems to have diminished a lot. Here is the completed image.
But the stars are a bit bloated, and the ones in front(?) of the nebula show the effects of too much sharpening (they are oversaturated and some have rings).
Benefits of a 30" monitor...
Can you post a before and after of the noisy region up close?
Stuart, sorry mate, no oversharpening there. I have deliberately been leaving the stars out of the sharpening masks. As for bloated, well, you get that with Ha and luminance added together on a 25K ADU sensor. I am less concerned about that than about the noise I was getting.
I will remember to cast a keen eye over your attempt at this target when the time comes.
I have included an image of the old version and you can look at the linked version.
Hey, nothing personal Paul. I still don't like the stars; they're what mine look like when I've oversharpened, and when I see that I back off a bit on whatever I've been doing. Hence I thought you might have been a bit heavy-handed with the sharpening.
I also know what it's like to spend days processing an image, post it and then have someone say it's a bit insert comment here. Your first instinct is "No it bloody well isn't, I checked for that" (see my NGC 1964 image), then you check it later after a break and doing something else and you see their point.
Personally I'd run a minimum filter over the stars to lessen their impact on the picture. It's a great picture of Thor's Helmet, but an average picture of the star field around it (IMHO), so reduce the stars with the minimum filter.
Please, please, cast a very critical eye over my posts, it's why I post them anyway. I don't post to get the "Nice picture" comments, pleasing as they are, they don't improve my imaging at all. I promise I won't snap at you (well, not much anyway).
Cheers
Stuart (now out in the observatory checking the NGC1964 reprocessing to see if I removed the brownness).
Last edited by rat156; 27-01-2010 at 09:03 PM.
Reason: Added rather average attempt at Thor's Helmet.
Try selecting the bright stars in PS using the Color Range tool (on your luminance sub). Once you have the really bright ones selected, expand the selection by 3-5 pixels (depends on the resolution of the shot; I used 3 for the Rosette I took with the ED80, but usually use 5 for the stuff with the RC), then feather the selection by 1-3 pixels (depending on how much you expanded). Then go to Filters -> Other -> Minimum and select 1 or 2 pixels (I have a tendency to use 2). It will look bad, but don't worry: go to Edit -> Fade Minimum and adjust the slider to taste. Be aware that the ability to fade a filter is only available until you perform another operation.
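The Minimum-then-Fade combination is easy to try outside Photoshop too. Here is a rough Pillow equivalent (no selection/feather step, and the function name is my own): the minimum filter erodes bright stars, and blending back with the original approximates Edit -> Fade, or equivalently a filtered layer at reduced opacity. Pillow's MinFilter size must be odd, so size=3 stands in for a 1-2 pixel Minimum.

```python
from PIL import Image, ImageFilter

def minimum_then_fade(img, size=3, fade=0.6):
    """Approximate Photoshop's Filter->Other->Minimum + Edit->Fade:
    erode bright stars with a minimum filter, then blend the result
    back with the original. fade=0.6 ~ a 60% fade (or 60% layer opacity)."""
    shrunk = img.filter(ImageFilter.MinFilter(size))
    return Image.blend(img, shrunk, fade)

# Tiny grayscale test frame: dark sky with one bright "star" pixel.
img = Image.new("L", (9, 9), 20)
img.putpixel((4, 4), 255)
out = minimum_then_fade(img)
print(out.getpixel((4, 4)))  # pulled well below 255
```

The fade parameter here plays the same role as the 60% layer opacity mentioned below: 1.0 is the full minimum filter, 0.0 is the untouched original.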
I tried creating another layer, ran the Minimum filter at 1 pixel, then set the layer opacity to 60%, and this seemed to work quite well. Take a look on the link now if you like. That should sort it a bit.