I'm just after some clarification on improving SNR by stacking images (as an average, median or sigma combine, not simple addition).
I've noticed some marathon imaging recently. One was 48 x 5min subs through a one shot colour camera.
From what I've read in Ron Wodaski's book, the "additional exposures improve the SNR by the square root of the number of exposures".
Thus,
4 exposures improves SNR by a factor of 2
9 exposures improves SNR by a factor of 3
16 exposures improves SNR by a factor of 4 etc...
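The square-root rule above is easy to demonstrate numerically. This is a toy simulation (signal level and noise figures are invented, not from any real camera): average-stacking N frames of pure Gaussian noise around a fixed signal and measuring the resulting SNR.

```python
import numpy as np

# Toy demonstration: stacking N noisy frames improves SNR by roughly sqrt(N).
# Signal level and noise sigma are made-up values, not real camera figures.
rng = np.random.default_rng(42)
signal = 100.0       # "true" pixel value (arbitrary ADU)
noise_sigma = 10.0   # per-frame noise standard deviation

def stack_snr(n_frames, n_pixels=100_000):
    """Average-combine n_frames noisy frames and measure the resulting SNR."""
    frames = signal + rng.normal(0.0, noise_sigma, size=(n_frames, n_pixels))
    stacked = frames.mean(axis=0)   # average combine
    return stacked.mean() / stacked.std()

snr_1 = stack_snr(1)
snr_4 = stack_snr(4)
snr_16 = stack_snr(16)
# snr_4 / snr_1 comes out close to 2, snr_16 / snr_1 close to 4,
# matching the "square root of the number of exposures" rule
```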
There is obviously a diminishing return to any improvement in SNR from taking more and more subs. To me it just reduces precious time available to image other objects.
I've been taking between 10 & 15 subs as a compromise between quantity and quality. What do others do?
Has anyone shown a visible improvement from 15 vs 30+ subs?
I use a cooled one shot colour camera. I usually like to get at least 5 hours (in 10 min subs, although that depends on the well depth of your cam) on most objects, especially those with faint nebulosity. And make sure you dither. Also, if you're cooled, go for longer subs on fainter stuff.
I prefer to use median combine; it has better data rejection than 'average'.
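The difference between the two combines shows up clearly at a single pixel. In this toy example (the pixel values are invented), the same pixel across five subs gets hit by one cosmic-ray outlier:

```python
import numpy as np

# The same pixel sampled across five subs, one of which is hit by a
# cosmic ray / hot-pixel outlier (all values made up for illustration).
pixel_stack = np.array([100.0, 102.0, 99.0, 101.0, 5000.0])

average = pixel_stack.mean()     # dragged way up by the single outlier
median = np.median(pixel_stack)  # outlier rejected entirely
# average = 1080.4, median = 101.0
```

The average keeps a fifth of the cosmic ray's energy; the median ignores it completely, which is why median (or sigma-clipped) combines are preferred for data rejection.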
15 to 30 subs? Yep it makes a huge difference to noise with my setup.
There is definitely a big plus to taking more images. For one, you can pick the best of them: out of 20 images you might have, say, 12 that are rock solid, 5 or 6 that are good but not great, and a few that are duds. Normally you will get a few that are decent, most that are average, and a few that are duds.
It's this selectiveness that is advantageous, but that is just the first part. The main reason you take multiple images is to give the program/algorithm something to base its data rejection routines on, i.e. deciding what is real data and what isn't. And the reason you take darks and flats is to get rid of hot pixels and optical defects, which land in the same position in every frame and would otherwise be recognised by the program as real, wanted data. That is why you take more and more. There are other reasons why this helps with the SNR, which Ron Wodaski has already presented.
As for your question about scientific evidence that more data = cleaner images: mmm, yes and no.
I don't really have a scientific way to show you that, i.e. I cannot keep everything else about the images the same, as cooling plays a big part in noise reduction.
There will be others that have more advice for you.
Your question is a very hard one to quantify, but be assured that longer and more is definitely better. The big plus of going more and longer seems to be in the depth of the image and the increase in signal in the dim areas. You increase signal to the point where the extra detail in these dim areas produces a very smooth and detailed image. Noise has to be the hardest thing to control well in astroimaging, and the "longer and more" approach certainly makes that control easier. Not so much scientific, but more an observation.
Do yourselves all a favour and dither your images. It is pointless stacking hot pixels on hot pixels, or the 'holes' these pixels leave after dark over-correction due to mismatched temperatures between darks and lights. In fact, with dithering it is hard to see the difference between dark-corrected and uncorrected image stacks.
This also has the effect of any sporadic erroneous pixels being totally eliminated. Cosmic rays come to mind.
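Why dithering kills hot pixels is easy to see in a one-dimensional sketch (synthetic data, made-up values): a hot pixel sits at the same sensor coordinate in every sub, so without dithering it survives even a median stack; with dithering, after re-alignment it lands on a different sky coordinate in each sub and the median rejects it.

```python
import numpy as np

n_subs, width = 9, 50
sky = np.full(width, 100.0)           # flat synthetic sky

# No dithering: the hot pixel is at column 25 in every single sub,
# so the median at column 25 is still 5000.
undithered = np.tile(sky, (n_subs, 1))
undithered[:, 25] = 5000.0

# Dithering: bump the pointing by a few pixels between subs, then
# re-align the frames on the sky. The hot pixel stays put on the
# sensor, so after alignment it falls on a different column each time.
shifts = [-4, -3, -2, -1, 0, 1, 2, 3, 4]
dithered = []
for shift in shifts:
    frame = sky.copy()
    frame[25] = 5000.0                # hot pixel, fixed sensor position
    dithered.append(np.roll(frame, -shift))  # re-align on the sky
dithered = np.array(dithered)

stack_fixed = np.median(undithered, axis=0)   # hot pixel survives
stack_dither = np.median(dithered, axis=0)    # hot pixel fully rejected
```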
Noise control starts with very good darks and flats. I routinely make flats from forty frames, with forty darks for the flats. This eliminates the 'bias' problem, which can otherwise lead to banding in really faint stuff when stretched.
If you are scaling darks, a stack of forty bias frames is also needed for proper scaling of the dark. Noise is an almost linear function of exposure and temperature, so mathematically it is not difficult to deal with. The DC level of the bias can throw this out, so it has to be subtracted before any scaling of darks can be contemplated. Software like ImagesPlus does this automatically.
The reason for so many darks, flats and bias frames is to minimise the statistical (Poisson) errors, or noise, in the frames that then modulate your light frames, so they are at least as small as the statistical errors of the lights. It is almost useless to correct with fewer dark or flat frames than the number of light frames, i.e. your DATA; doing so can corrupt perfectly good data by 'correcting' it with noisy darks and flats.
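The dark-scaling arithmetic described above can be sketched with invented numbers: thermal signal is roughly linear in exposure time, but the bias (DC offset) is not, so the bias must be subtracted before scaling and is implicitly added back afterwards.

```python
import numpy as np

# All numbers invented for illustration.
bias_level = 500.0    # fixed DC offset present in every frame
dark_rate = 2.0       # thermal ADU accumulated per minute

master_bias = np.full(4, bias_level)
master_dark_10min = master_bias + dark_rate * 10.0   # 10-minute master dark

# Scale the 10-minute master dark to match 5-minute lights:
scale = 5.0 / 10.0
scaled_dark = master_bias + scale * (master_dark_10min - master_bias)
# thermal component correctly becomes dark_rate * 5 = 10 ADU above bias

# Naively scaling without first removing the bias gets it badly wrong,
# because it halves the DC offset as well as the thermal signal:
naive = scale * master_dark_10min
```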
As far as the darks go, I have total control over temperature, so my darks match my lights. I also make master darks from forty darks.
If this is not possible then dithering really helps.
Yes there is a diminishing return. But still a return.
As Bert mentioned, dithering helps a lot. You also have to consider your site's light conditions and your camera. Some cameras have high read noise, some don't, so the more subs you take, the more read noise you add. From a light-polluted site you won't be able to expose for very long before your data is swamped by the background sky glow and its noise. So it's OK to shoot a lot of shorter subs even with read noise, as it is insignificant compared to the background noise.
At a dark site, though, you can push exposures to 15-20 min and not suffer from sky glow. Then, because you have little background noise, the read noise of your camera will start peeking through. So the longer the subs, and the fewer of them, the better.
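The trade-off in the last two paragraphs can be put into a back-of-envelope formula. This sketch uses invented sky rates and read noise: for a fixed total exposure, each sub contributes read noise once, while sky shot noise depends only on total time.

```python
import math

# All rates and noise figures are made up for illustration.
def stack_noise(total_minutes, sub_minutes, sky_rate, read_noise):
    """Noise (electrons) in a summed stack covering total_minutes,
    split into subs of sub_minutes each. Sky shot noise depends only
    on total time; read noise is paid once per sub."""
    n_subs = total_minutes / sub_minutes
    sky_electrons = sky_rate * total_minutes
    return math.sqrt(sky_electrons + n_subs * read_noise**2)

# Light-polluted site: sky glow dominates, so sub length barely matters.
bright_short = stack_noise(60, 2, sky_rate=500.0, read_noise=10.0)
bright_long = stack_noise(60, 15, sky_rate=500.0, read_noise=10.0)

# Dark site: little sky glow, so the read noise of many short subs
# shows up and long subs win clearly.
dark_short = stack_noise(60, 2, sky_rate=5.0, read_noise=10.0)
dark_long = stack_noise(60, 15, sky_rate=5.0, read_noise=10.0)
```

With these (invented) numbers, going from 2-minute to 15-minute subs changes total noise by only a few percent at the bright site, but roughly halves it at the dark site.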
There are also some critical steps prior to data rejection when stacking. Calibrating your subs (darks/flats/bias) is important, but above all you need to normalise them; effective data rejection will only work if you do. There are also a few different algorithms used to register your subs that can improve noise rejection while keeping detail, such as nearest neighbour.
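Why normalisation matters for rejection can be shown with synthetic subs (all numbers invented): sigma-style rejection compares each sub against the stack, so subs taken through changing sky glow must first be brought to a common background level, or real data gets flagged as outliers.

```python
import numpy as np

rng = np.random.default_rng(1)
true_sky = rng.normal(100.0, 5.0, size=1000)   # one strip of "real" sky
offsets = [0.0, 40.0, 80.0]                    # rising light pollution
subs = np.array([true_sky + off for off in offsets])

# Additive normalisation: shift every sub so its median matches the
# first sub's median (a common, simple normalisation scheme).
reference = np.median(subs[0])
normalised = subs - (np.median(subs, axis=1, keepdims=True) - reference)

# Before normalisation the per-pixel spread across subs is dominated
# by the background offsets; after, the subs agree exactly and any
# rejection algorithm sees only genuine outliers.
spread_raw = subs.std(axis=0).mean()
spread_norm = normalised.std(axis=0).mean()
```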
Those are just a few pointers. The topic is far too vast to summarise in one post.
I am curious to know how you "dither your images". I have just looked at all my software and I don't see any options.
Is it an "easy" process?
Thanks
Darrell
What software do you have?
MaximDL does it: Camera Control window > Expose > Autosave window > Dither radio button at the top of the screen.
If you hook up Nebulosity to PHD, you can also do it, but I've never tried.
Geoff
Mostly it is done manually, by moving the guide star (using the mount) a couple of pixels between exposures. It takes an awful lot of time and effort, and I doubt most amateurs would do it.
Quote: "Mostly it is done manually, by moving the guide star (using the mount) a couple of pixels between exposures. It takes an awful lot of time and effort, and I doubt most amateurs would do it."
I do it all the time and I've been doing it for years now. It takes only seconds to bump the mount and pick up the guiding again. PHD and nebulosity work very well together in doing this. Set it up and forget it.
Quote: "There is obviously a diminishing return to any improvement in SNR from taking more and more subs. To me it just reduces precious time available to image other objects."
Regards
David T
Once you've got all the elements of astrophotography working well (polar alignment, composition, focus, tracking and guiding), then you owe it to yourself to go longer.
While you're learning (or because it's what you want to do), shooting lots of objects a night is fine. But if you want one or two great shots, instead of a bunch of 'run of the mill' ones, then try exposing one object for four hours or more. You will notice a difference.
I now consider anything shorter than four hours to be underexposed.
Thanks for the replies Gents. I was just about to go to sleep when I found all of these replies. Am now out of bed, reading through this and trying to digest things.
I'm starting to understand the depth of my question, by the multiple facets discussed in the replies.
I understand that dimmer objects will have a lower SNR initially, so they will benefit from more exposures to improve that SNR. Also, taking more lets you throw a few out. I'm fortunate with my mount and guiding, in that I'm not having trouble with egg-shaped stars. What criteria are you throwing images out on, other than egg-shaped stars? What sort of FWHM numbers define good focus? What about the other quality indices in Maxim?
Dithering also makes sense - again the accuracy of my mount means the stars aren't shifting much between exposures. I'll have to try guiding through Maxim rather than PHD to get that working.
I've been using 20 darks & flats for 10-15 lights. I don't have a regulated camera, so a dark library isn't an option yet. I've only had my decent rig out under dark skies for 2 nights so far (and at home under ugly skies for 3 nights), so I'm a long way from finding the balance between quality and quantity. I have been imaging for longer runs each night I setup.
Multiweb - what do you mean by "normalisation" as opposed to "calibration"?
Peter - I'm ticking a couple of boxes already, but will have to throw more money at this to tick the rest, namely aperture and sensor quality...
What other books would you recommend to help one come to terms with the world of post processing. I'm a bit baffled at the moment as to where to start.
Shooting under suburban skies in LRGB, which most of us do, you will find that your exposure times are limited by the inherent sky glow of your location. Under dark skies, your subs can go as long as your ability to track will allow, but under light polluted skies rarely longer than 10 minutes (and usually much less). Using narrowband filters helps here and you can go much longer.
There's a calculator on the Starizona website for calculating optimal sub-exposure times, I usually double it (just to be on the safe side). There comes a time when you are just accumulating more signal and noise equally for dim objects. In fact I'd say that if you need 10 minutes subs to see something dim, then you're pushing the proverbial uphill to get a decent image from the 'burbs.
I'm taking a picture of the Antennae Galaxies at present, I love the faint wispy tails, but boy are they dim. Have a look at a 10 minute sub, then a stack of two hours...
Take no notice that I need to redo my flats, but look at the antennae. In PS3 for the stacked image there is about 4% difference in the k value about halfway along the tails, this is about the same as in the single sub. In CCDStack the mean value for a small section of tail around halfway along is 1233, the mean for the background next to that is 1226. For the stacked image the values are 1237 and 1228, I'm really up against it with this target...
These are 10 minute exposures. I'm going to try to expose for the same total time in 5 minute exposures next time we get clear skies, and maybe even add in some 15s just to complete the experiment. The final image won't be anything to write home about, but it'll at least satisfy my inquiring mind about sub length. If I can get the same result with more 5 minute exposures, then I will do it that way.
Oh, dithering is a good idea, but sometimes it's impractical. As I use an AO unit, I can't always dither, as, like in this image, the guide star is placed at the top edge of the guide chip, automated dithering would almost certainly move it off the guide chip. I use electronic methods for hot pixel removal. I'm always left with a couple, but CCDStack is excellent and removes most of the artifacts, once you learn how to use it.
Don't forget you can also use a trick called 'binning' (e.g. trading resolution for exposure and/or noise reduction). It's a great way to vastly increase the amount of light you can gather in light polluted areas in a single exposure.
And depending on your equipment and seeing conditions, the reduction in resolution may not have an adverse effect at all on the amount of detail perceived.
Not if you subtract the skyglow first (leaving the real sky), then bin the result. Then you're left with only the useful data binned.
I use this technique to gain more light quickly to avoid longer exposures as my tracking and alignment is usually shoddy. No such thing as a free lunch; you lose resolution, but that resolution is mostly wasted in my case anyway (bad seeing + afocal projection with cheap eyepieces).
If you're after a better signal-to-noise ratio and want to use binning, you can scale the binned result back to the original brightness levels (it's the same operation as taking the average of the binned pixels), provided the levels weren't capped.
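Software 2×2 binning as described above is just a block average, which keeps the original brightness scale while cutting per-pixel noise by about a factor of 2 (the square root of the 4 pixels averaged). A toy sketch on a synthetic frame (made-up levels):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic flat frame: true level 100 ADU, per-pixel noise sigma 8.
frame = 100.0 + rng.normal(0.0, 8.0, size=(200, 200))

# Software 2x2 binning: average each 2x2 block, halving resolution.
binned = frame.reshape(100, 2, 100, 2).mean(axis=(1, 3))

noise_before = frame.std()   # about 8
noise_after = binned.std()   # about 4: sqrt(4) pixels averaged per bin
```

Because it's an average rather than a sum, the binned result stays at the same brightness levels as the original, so nothing clips as long as the source pixels weren't already saturated.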
Quote: "Not if you subtract the skyglow first (leaving the real sky), then bin the result. Then you're left with only the useful data binned."
Hi Ivo, are you talking about software binning? I thought hardware binning was different. On a camera you lose colour information with an OSC, and resolution with a mono as well, but apparently your SNR is better because the ratio of signal to readout noise increases too, which is one real advantage. But if you have skyglow to start with and you bin 2x2, then flatfield/calibrate, would you get a better result than no binning plus calibration followed by software binning?