Improving Signal to Noise Ratio
DavidTrap
06-07-2010, 09:12 PM
Just after some clarification on improving SNR by stacking images (as an average, median or sigma combine method, not adding).
I've noticed some marathon imaging recently. One was 48 x 5min subs through a one shot colour camera.
From what I've read in Ron Wodaski's book, the "additional exposures improve the SNR by the square root of the number of exposures".
Thus,
4 exposures improves SNR by a factor of 2
9 exposures improves SNR by a factor of 3
16 exposures improves SNR by a factor of 4 etc...
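(Not from the thread, just an illustration of that square-root rule: the signal and noise values below are made up. The sketch average-combines simulated subs and measures the SNR of the result.)

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 100.0        # hypothetical flux per sub (arbitrary units)
noise_sigma = 10.0         # hypothetical per-sub noise

def stacked_snr(n_subs, n_pixels=100_000):
    # average-combine n_subs noisy subs and measure the SNR of the stack
    subs = true_signal + rng.normal(0.0, noise_sigma, size=(n_subs, n_pixels))
    stack = subs.mean(axis=0)
    return stack.mean() / stack.std()

for n in (1, 4, 9, 16):
    print(n, round(stacked_snr(n), 1))   # roughly 10, 20, 30, 40 - SNR grows as sqrt(n)
```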
There is obviously a diminishing return on any improvement in SNR from taking more and more subs. To me it just reduces precious time available to image other objects.
I've been taking between 10 & 15 subs as a compromise between quantity and quality. What do others do?
Has anyone shown a visible improvement from 15 vs 30+ subs?
Regards
David T
Hi David,
I use a cooled one shot colour. I usually like to get at least 5 hours (in 10 min subs, although that depends on the well depth of your cam) on most objects, especially those with faint nebulosity. And make sure you dither. Also, if you're cooled, go for longer subs on fainter stuff.
I prefer to use median combine; it has better data rejection than 'average'.
15 to 30 subs? Yep, it makes a huge difference to noise with my setup.
Brett
bmitchell82
07-07-2010, 12:41 AM
There is definitely a big plus to taking more images. For one, you can pick the best of them: out of 20 images you might have, say, 12 that are rock solid, 5 or 6 that are good but not great, and a few that are duds. Normally you'll get a few that are decent, most that are average, and a few that are duds.
That selectiveness is advantageous, but it's only the first part. The other reason you take multiple images is to give the program/algorithm something to base its data rejection routines on, i.e. to decide what is data and what isn't. And the reason you take darks and flats is to get rid of hot pixels and optical defects, which land in the same position every frame and would otherwise be recognised by the program as real, wanted data. That is the reason you take more and more. There are other reasons why this helps the SNR, which Ron Wodaski has already presented.
As for your question of whether there is scientific evidence that more data = cleaner images: mmm, yes and no.
I really don't have a scientific way to show you that, i.e. I can't keep the images identical, as cooling plays a big part in noise reduction.
There will be others that have more advice for you.
Hagar
07-07-2010, 11:21 AM
Your question is a very hard one to quantify, but be assured that longer and more is definitely better. The big plus with going more and longer seems to be in the depth of the image and the increase in signal in the dim areas. You seem to increase signal to the point where the extra detail in these dim areas gives a very smooth and detailed image. Noise has to be the hardest thing to control well in astroimaging, and the longer-and-more approach certainly helps make that control easier. Not so much scientific, more an observation.
avandonk
07-07-2010, 11:52 AM
Do yourselves all a favour and dither your images. It is pointless stacking hot pixels on hot pixels, or the 'holes' these pixels leave after over-correction due to mismatched temperatures of darks and lights. In fact, with dithering it is hard to see the difference between dark-corrected image stacks and uncorrected image stacks.
This also has the effect of totally eliminating any sporadic erroneous pixels. Cosmic rays come to mind.
Noise control starts with very good darks and flats. I routinely make flats from forty frames, with forty darks for the flats. This eliminates the 'bias' problem, which can otherwise lead to banding in really faint stuff when stretched.
If you are scaling darks, a stack of forty bias frames is also needed for proper scaling of the dark. Noise is an almost linear function of exposure and temperature, so mathematically it is not difficult to deal with. The DC level of the bias can throw this out; it has to be subtracted before any scaling of darks can be contemplated. Software like ImagesPlus does this automatically.
The reason for so many darks, flats and bias frames is that you want to minimise the statistical (Poisson) errors, or noise, in the frames that are used to modulate your light frames, down to at least the level of the statistical errors in your light frames, i.e. your data. It is almost useless to correct with fewer darks or flats than the number of light frames: that can corrupt perfectly good data by 'correcting' it with noisy darks and flats.
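(To put a rough number on that last point, here's a quick simulation, with a made-up dark-noise figure, of the residual noise a master dark injects into every calibrated light, depending on how many darks go into it.)

```python
import numpy as np

rng = np.random.default_rng(1)
per_frame_sigma = 12.0      # hypothetical noise in a single dark frame (ADU)

def master_dark_noise(n_darks, n_pixels=200_000):
    darks = rng.normal(0.0, per_frame_sigma, size=(n_darks, n_pixels))
    master = darks.mean(axis=0)       # the master dark you subtract from every light
    return master.std()               # residual noise it adds to each calibrated light

print(master_dark_noise(5), master_dark_noise(40))   # roughly 5.4 vs 1.9 ADU: more darks, less injected noise
```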
As far as the darks I have total control over temperature so darks match lights. I also make master darks from forty darks.
If this is not possible then dithering really helps.
Bert
multiweb
07-07-2010, 01:03 PM
Yes there is a diminishing return. But still a return. :)
As Bert mentioned, dithering helps a lot. You also have to consider your site's light conditions and your camera. Some cameras have big read noise, some don't, so the more subs you take the more read noise you'll add. From a light-polluted site you won't be able to expose for very long before your data is swamped by the background sky glow and its noise. So it's OK to shoot a lot of shorter subs even with read noise, as it is insignificant compared to the background noise.
At a dark site, though, you can push exposures to 15-20 min and not suffer from sky glow. Then, because you have little background noise, the read noise of your camera will start to show. So the longer the subs, and the fewer of them, the better.
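(A rough back-of-the-envelope way to see this trade-off; the sky and read-noise numbers below are invented purely for illustration.)

```python
import math

def stack_noise(total_time_s, sub_length_s, sky_rate, read_noise):
    """Approximate background-plus-read noise (electrons) of a stack of equal subs.
    sky_rate is sky glow in e-/pixel/s, read_noise is e- per read; both are hypothetical."""
    n_subs = total_time_s / sub_length_s
    return math.sqrt(sky_rate * total_time_s + n_subs * read_noise ** 2)

# bright suburban sky: many short subs cost almost nothing (read noise is swamped)
print(stack_noise(3600, 60, sky_rate=10.0, read_noise=8.0), stack_noise(3600, 600, 10.0, 8.0))
# dark site: read noise shows through, so fewer, longer subs win
print(stack_noise(3600, 60, sky_rate=0.2, read_noise=8.0), stack_noise(3600, 600, 0.2, 8.0))
```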
There are also some critical steps prior to data rejection when stacking. Calibrating your subs (darks/flats/bias) is important, but most of all you need to normalise them. Effective data rejection will only work if you do this. There are also a few different algorithms used to register your subs that can improve noise rejection while keeping detail, such as nearest neighbour.
Those are just a few pointers. The topic is far too vast to summarise in one post.
darrellx
07-07-2010, 03:24 PM
Bert
I am curious to know how you "dither your images". I have just looked at all my software and I don't see any options.
Is it an "easy" process?
Thanks
Darrell
Geoff45
07-07-2010, 03:52 PM
What software do you have?
MaximDL does it. Camera control window > Expose > Autosave window > dither radio button at the top of the screen.
If you hook up Nebulosity to PHD, you can also do it, but I've never tried.
Geoff
Astrobserver99
07-07-2010, 04:40 PM
Mostly it is done manually, by moving the guide star (using the mount) a couple of pixels between exposures. It takes an awful lot of time and effort, and I doubt most amateurs would do it.
multiweb
07-07-2010, 05:40 PM
I do it all the time and I've been doing it for years now. It takes only seconds to bump the mount and pick up the guiding again. PHD and nebulosity work very well together in doing this. Set it up and forget it. :thumbsup:
Phil Hart
07-07-2010, 08:31 PM
Once you've got all the elements of astrophotography working well.. polar alignment, composition, focus, tracking and guiding, then you owe it to yourself to go longer.
While you're learning (or because it's what you want to do), shooting lots of objects a night is fine. But if you want one or two great shots, instead of a bunch of 'run of the mill' ones, then try exposing one object for four hours or more. You will notice a difference.
I now consider anything shorter than four hours to be underexposed.. ;)
Phil
Peter Ward
07-07-2010, 09:33 PM
This topic is well covered elsewhere...but the solution is in the description
i.e. we want to maximize signal, and minimize noise.
The first thing you can do is shoot under a dark sky, and only when the seeing is great.
Then it comes down to instrumentation pure and simple.
More signal comes from:
larger aperture
higher quality optics
better tracking
better guiding
higher sensor QE
Less noise comes from:
low noise sensor design
perfecting image calibration
cooling the sensor
high quality optics
optimised readout electronics
just my 2 cents worth.....
DavidTrap
07-07-2010, 10:45 PM
Thanks for the replies Gents. I was just about to go to sleep when I found all of these replies. Am now out of bed, reading through this and trying to digest things.
I'm starting to understand the depth of my question, by the multiple facets discussed in the replies.
I understand that dimmer objects will have a lower SNR initially, so they will benefit from more exposures to improve that SNR. Also, taking more lets you throw a few out. I'm fortunate with my mount and guiding that I'm not having trouble with egg-shaped stars. What criteria are you throwing images out on, other than egg-shaped stars? What sort of FWHM numbers define good focus? What about the other quality indices in Maxim?
Dithering also makes sense - again the accuracy of my mount means the stars aren't shifting much between exposures. I'll have to try guiding through Maxim rather than PHD to get that working.
I've been using 20 darks & flats for 10-15 lights. I don't have a regulated camera, so a dark library isn't an option yet. I've only had my decent rig out under dark skies for 2 nights so far (and at home under ugly skies for 3 nights), so I'm a long way from finding the balance between quality and quantity. I have been imaging for longer runs each night I setup.
Multiweb - what do you mean by "normalisation" as opposed to "calibration"?
Peter - I'm ticking a couple of boxes already, but will have to throw more money at this to tick the rest, namely aperture and sensor quality...
What other books would you recommend to help one come to terms with the world of post-processing? I'm a bit baffled at the moment as to where to start.
Thanks again,
DT
Octane
07-07-2010, 11:14 PM
David,
Normalisation is the act of setting the median level of each image to 0. This helps to increase dynamic range when it comes time to stack.
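(A minimal sketch of that idea, purely illustrative; real stacking software also scales subs for transparency changes rather than just shifting them.)

```python
import numpy as np

def normalise(sub):
    # shift the sub so its median (background) level sits at zero, as described above
    return sub - np.median(sub)

# hypothetical subs with different background pedestals
a = np.array([1200.0, 1210.0, 1500.0])
b = np.array([1320.0, 1330.0, 1620.0])
print(normalise(a), normalise(b))   # backgrounds now line up, so pixel rejection compares like with like
```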
H
DavidTrap
08-07-2010, 06:52 AM
ta H
DT
rat156
08-07-2010, 07:17 AM
Shooting under suburban skies in LRGB, which most of us do, you will find that your exposure times are limited by the inherent sky glow of your location. Under dark skies, your subs can go as long as your ability to track will allow, but under light polluted skies rarely longer than 10 minutes (and usually much less). Using narrowband filters helps here and you can go much longer.
There's a calculator on the Starizona website for calculating optimal sub-exposure times; I usually double it (just to be on the safe side). There comes a point where you are just accumulating signal and noise equally for dim objects. In fact, I'd say that if you need 10 minute subs to see something dim, then you're pushing the proverbial uphill to get a decent image from the 'burbs.
I'm taking a picture of the Antennae Galaxies at present, I love the faint wispy tails, but boy are they dim. Have a look at a 10 minute sub, then a stack of two hours...
Take no notice of the fact that I need to redo my flats, but look at the antennae. In PS3, for the stacked image, there is about a 4% difference in the K value about halfway along the tails; this is about the same as in the single sub. In CCDStack the mean value for a small section of tail around halfway along is 1233, and the mean for the background next to it is 1226. For the stacked image the values are 1237 and 1228. I'm really up against it with this target...
These are 10 minute exposures. I'm going to try exposing for the same total time in 5 minute exposures next time we get clear skies, maybe even add in some 15s just to complete the experiment. The final image won't be anything to write home about, but it'll at least satisfy my inquiring mind about sub length. If I can get the same result with more 5 minute exposures then I'll do it that way.
Oh, dithering is a good idea, but sometimes it's impractical. As I use an AO unit, I can't always dither: as in this image, the guide star is placed at the top edge of the guide chip, and automated dithering would almost certainly move it off the chip. I use electronic methods for hot pixel removal. I'm always left with a couple, but CCDStack is excellent and removes most of the artifacts once you learn how to use it.
Cheers
Stuart
irwjager
08-07-2010, 12:35 PM
Don't forget you can also use a trick called 'binning' (e.g. trading resolution for exposure and/or noise reduction). It's a great way to vastly increase the amount of light you can gather in light polluted areas in a single exposure.
And depending on your equipment and seeing conditions, the reduction in resolution may not have an adverse effect at all on the amount of detail perceived.
rat156
08-07-2010, 08:38 PM
Hi Ivo,
Binning may help if the noise is truly random, unfortunately skyglow is not random, it's more like unwanted signal.
Binning skyglow results in brighter skyglow. You cannot artificially increase the signal to noise ratio above that which occurs naturally.
Cheers
Stuart
irwjager
09-07-2010, 12:47 AM
Not if you subtract the skyglow first (leaving the real sky), then bin the result. :) Then you're left with only the useful data binned.
I use this technique to gain more light quickly and avoid longer exposures, as my tracking and alignment are usually shoddy. No such thing as a free lunch: you lose resolution, but that resolution is mostly wasted in my case anyway (bad seeing + afocal projection with cheap eyepieces).
If you're after a better signal-to-noise ratio and want to use binning, you can scale the binned result back to the original brightness levels (it's the same operation as taking the average of the binned pixels), provided the levels weren't capped.
multiweb
09-07-2010, 08:41 AM
Hi Ivo, are you talking about software binning? I thought hardware binning was different. On a camera you lose colour information with an OSC, and resolution with a mono as well, but apparently your SNR is better because the ratio of signal to readout noise increases too, which is one real advantage. But if you have skyglow to start with and you bin 2x2 then flat-field/calibrate, would you get a better result than no binning + calibration, then software binning?
irwjager
09-07-2010, 10:22 AM
You're absolutely right - I'm talking about software binning and I should've mentioned that. IMHO hardware binning should be avoided, as binning is best done in software, which gives you more flexibility.
You're bringing up a very interesting point here which is very relevant to this discussion as well. A CCD with an on-chip Color Filter Array (e.g. a 'color camera') needs to somehow conjure up a lot of missing data, since its pixels are distributed over different color channels. In the case of the most-used CFA, the Bayer filter (RGB), this means that only 1/3rd of the data is available and 2/3rds needs to be interpolated (aka 'demosaicing' or 'debayering'). This process too is a great source of noise, and it's usually noise of the 'worst' kind: correlated noise.
Whereas uncorrelated noise (aka 'random' noise) can be dealt with by the methods posted in this forum or even software noise filters, correlated noise is much harder to deal with because it affects neighboring pixels as well.
Worse, most demosaicing algorithms assume a correlation between the brightness levels of the green channel and the blue and red channel, which means that noise in any of these channels may 'migrate' into other channels. This makes any traditional software noise filtering useless and can really confuse stacking software and quality estimation algorithms.
Some demosaicing routines are worse than others for noisy images, and I encourage anyone to try as many as possible to see what works best. Funnily enough, I've found that the best algorithms for natural photography sometimes perform the worst when it comes to noisy astrophotography images. This usually has to do with high-frequency detail detection, which, when given noiseless input, works as intended and reduces color artefacts (blue, yellow, green, purple fringes). However, the high-frequency detail detection is often thrown off by random noise (which is, by its very nature, high frequency).
Other algorithms look at neighbouring pixels to aid in edge and/or gradient detection. If such a pixel is incorrectly set due to noise, the effect can be a cascade of errors in the debayered output image affecting neighbouring pixels, introducing yet more artifacts and thus correlated noise.
Nebulosity, for example, uses a demosaicing algorithm that is very simple - no serious natural photographer would touch it with a barge pole. However, it's exactly its simplicity that makes it a much better candidate for dealing with noisy images than the more complicated algorithms found in the various photo processing suites.
The reason why you would choose software binning over hardware binning is that, when using hardware binning, your image gets saturated quicker and you need to stop your exposure sooner to avoid ending up with a completely white image (due to skyglow).
A longer exposure gives you a better signal-to-noise ratio (the signal goes up linearly with the exposure, the noise goes up as the square root of the exposure). Also my rule is to only throw away data at the last possible moment, only if I get something for it in return and, if I can help it, on my terms under my control.
When I process images, at every step, I actually envision what I'm doing to my pixels and why - it's not just a matter of 'what looks good at the moment'. It really does pay to learn about the algorithms behind the operations you perform on your image - it can help you combine processing steps in your head and think ahead.
Sorry, went a bit on a tangent there... :P
JohnH
09-07-2010, 11:37 AM
Nobody has mentioned setting the GAIN correctly. Can this not help S/N?
I am pretty sure it does on my CCD, where the signal can be boosted more rapidly than noise (well, it looks that way in the graphs I have seen) - the compromise is that saturation occurs faster for the bright elements.
The comment about h'ware binning is the opposite of what I have heard - s'ware binning is not really binning at all, it is just a 2x2 or 3x3 averaging. H'ware binning 2x2 will reduce the required exposure time by a factor of 4 (at the cost of resolution) for a given signal level, and is used to great effect in LRGB imaging where L is taken at 1x1 and colours are taken at 2x2.
irwjager
09-07-2010, 12:47 PM
Very good point!
Gain in digital systems comes down to multiplication of the signal (and thus also multiplication of the noise). When using gain settings that are higher than unity (e.g. a factor > 1.0), as you're suggesting, saturation is reached sooner and there is less time to average noise events, resulting in noisier images.
Gain factors that are lower than unity (e.g. < 1.0) may help, in that there is more time before saturation occurs, so there's more time to collect light/expose and average out noise events.
In the real world, however, your mileage may vary depending on your hardware's characteristics (triggering voltages, thermal noise, circuit noise, rounding errors due to gain multiplication, etc.).
Actually, the term 'binning' does not define how the binned data is used. Binning is pretty cool; you can use binning to increase exposure (e.g. add the pixel values), use binning for noise reduction (e.g. add the pixels, then divide the result by the number of pixels you binned), use binning to increase bit-depth (e.g add the pixels and promote the image to a higher bit-depth at the same time), or any other exotic operation such as median filtering, adaptive binning, fractional binning (1.75x1.75, 3.14x3.14), etc.
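(As a concrete illustration of a couple of those flavours, here's a rough numpy sketch of 2x2 software binning by summing and by averaging; it assumes a mono frame with even dimensions.)

```python
import numpy as np

def bin2x2(img, mode="sum"):
    """2x2 software binning of a mono frame (height and width assumed even)."""
    h, w = img.shape
    blocks = img.reshape(h // 2, 2, w // 2, 2).astype(np.float64)
    summed = blocks.sum(axis=(1, 3))       # 'increase exposure' flavour: add the four pixels
    if mode == "sum":
        return summed
    if mode == "average":
        return summed / 4.0                # noise-reduction flavour: same level, roughly half the noise
    raise ValueError(f"unknown mode: {mode}")

frame = np.random.default_rng(2).normal(1000.0, 50.0, size=(8, 8))
print(bin2x2(frame, "sum").shape, bin2x2(frame, "average").std())   # (4, 4), and a std of roughly 25
```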
It doesn't matter whether this is done at the hardware level or the software level; the operation is *exactly* the same and there's no difference. Hardware binning is still software binning, only it's performed by the software in the firmware instead of the software on your computer.
EDIT: Though they are almost the same, RickS points out that the latter statement is nevertheless untrue (http://www.iceinspace.com.au/forum/showpost.php?p=612758&postcount=31) (it is done at the hardware level) and there is one important benefit to hardware (summed) binning; read out noise reduction.
'Software' binning allows more flexibility in two ways; 1. It allows you to manipulate the image before it is binned (ex. subtract light pollution - stuff that you don't want binned). And 2. It allows you to use more advanced types of binning than your hardware supports (for example, StarBright determines how much a group of pixels needs to be fractionally binned for maximum contrast and no over exposure, reducing resolution only by the smallest amount possible).
And that's exactly what I use binning for most of the time: I take my shorter-than-normal exposure without binning (i.e. dimmer, but at full resolution), then run a binning algorithm on my computer after I have taken out the light pollution.
JohnH
09-07-2010, 01:04 PM
Is there really no difference between an image of, say, 120s binned 2x2 in h'ware and one of 120s binned 1x1 and then processed in s'ware? Is it just that the default behaviours in h'ware (sum?) and s'ware (average?) binning are not the same?
irwjager
09-07-2010, 01:36 PM
There should be no difference. Unless there's some other magic performed in the firmware and they're not telling us about it (personally, I've never come across a proprietary summing algorithm, but we can't say for sure).
EDIT: RickS points out that there is in fact a difference (http://www.iceinspace.com.au/forum/showpost.php?p=612758&postcount=31) (it is done at the hardware level) and there is one important benefit to hardware (summed) binning; read out noise reduction.
I guess that depends on what you define as default behavior; In Nebulosity 2.x, for example, you can choose from the menu Image->Bin/Blur-> and you'll have a choice of 3 operations; "2x2 bin: Sum", "2x2 bin: Average" and "2x2 bin: Adaptive". So I guess, at least in Nebulosity, there is no default behavior when it comes to binning.
multiweb
09-07-2010, 02:57 PM
Very interesting. I'm definitely going to look at my subs, debayer them with different software and compare the results. I wasn't aware that noise from other channels in the bayer matrix could bleed into neighbouring pixels during the process. Thanks for pointing all this out Ivo. :thumbsup:
avandonk
09-07-2010, 04:08 PM
For one shot colour it is far better to do all corrections for flats and darks in FIT form. I also do the initial stretch in fit form and only then do the interpolation to tif for stacking.
Try doing just one image, correcting for flats and darks in TIFF form and in FIT form, and comparing the results. I use ImagesPlus for this.
Bert
multiweb
09-07-2010, 04:46 PM
I always calibrate and flat-field the raw files with the bayer matrix in, as advised by Stan Moore, then debayer to individual channels and stack them separately. I also tried debayering first and then calibrating the debayered channels, but the result was the same in CCDStack, so it wasn't worth the extra work.
irwjager
09-07-2010, 05:29 PM
Interesting... So you stack the channels separately? Does that include per-channel quality estimation? Or do you use the quality estimation for the composite RGB image?
multiweb
09-07-2010, 05:36 PM
Yes - my workflow is no different from working with a mono. I'm not sure about the quality estimation bit, but I use the sub with the best FWHM as per the CCD Inspector report, still with its bayer matrix in, for everything: registration, normalisation and data rejection. I make a note of it and use it as a base to work from in each channel.
RickS
09-07-2010, 05:45 PM
This short document I read recently claims there is a signal to noise advantage in hardware binning because it reduces the effect of read noise: http://www.roperscientific.de/binning.html. Seems like a plausible argument to me...
Cheers,
Rick.
irwjager
09-07-2010, 05:55 PM
Very interesting. You just gave me an idea! :D Since most debayering algorithms use data from other channels to help with the interpolation, they may 'inherit' each other's noise (as I described above).
Now, what if you would extract all the red, green1, green2 and blue pixels from the bayered image, and put these in their own image?
You would end up with 4 images, all 1/4th the size. You could then stack those, do your usual thing and then, at the very end, put them together in a bayer matrix again and, only then, debayer them with the algorithm of your choice.
If I'm correct, this could yield a better quality image since you're delaying the potential noise crosstalk until the very end when you have a single, much better quality image... :question: Any noise would stay confined to its own channel and be rejected by the quality estimator during the per-channel stacking, with no chance of spreading to other channels.
Hmmm.... :)
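(The splitting itself is trivial to sketch in numpy; this assumes an RGGB layout, which is just a guess - the actual channel order depends on the camera.)

```python
import numpy as np

def split_cfa(raw):
    """Pull the four CFA planes out of an undebayered frame (RGGB layout assumed)."""
    return {"R": raw[0::2, 0::2], "G1": raw[0::2, 1::2],
            "G2": raw[1::2, 0::2], "B": raw[1::2, 1::2]}

def rebuild_cfa(planes):
    """Re-interleave the four (stacked) planes into a Bayer mosaic for the final debayer."""
    h, w = planes["R"].shape
    mosaic = np.empty((2 * h, 2 * w), dtype=planes["R"].dtype)
    mosaic[0::2, 0::2] = planes["R"]
    mosaic[0::2, 1::2] = planes["G1"]
    mosaic[1::2, 0::2] = planes["G2"]
    mosaic[1::2, 1::2] = planes["B"]
    return mosaic

# usage sketch: stack each plane of split_cfa(raw) separately, then rebuild_cfa() and debayer once
```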
irwjager
09-07-2010, 06:08 PM
Interesting read and I stand corrected regarding hardware binning; "This process is performed prior to digitization in the on-chip circuitry of the CCD by specialized control of the serial and parallel registers." as opposed to in the firmware after digitizing.
You've got to wonder, though, whether the increased read noise (4 reads instead of 1 for 2x2 binning) would really register when performing a single readout per pixel at the end of a long exposure; you certainly wouldn't hope so! :) I can definitely see how this would impact video though.
RickS
09-07-2010, 06:37 PM
That's the big question! Perhaps on faint targets it could make a difference. I'm still very much a newbie on this stuff and keeping an open mind!
rat156
10-07-2010, 03:20 PM
An interesting discussion, but how do you "subtract skyglow"?
Skyglow is NOT noise, it's unwanted signal. If there's a way to subtract unwanted signal I want to know about it.
WRT binning, I really should bin, as my resolution is a little over 1"/pixel, which, considering the seeing I usually contend with, is way too much. But then I give up image scale, which I like. Also, some of the deconvolution algorithms rely on oversampled data.
Cheers
Stuart
irwjager
10-07-2010, 03:50 PM
You're absolutely right; skyglow is certainly not noise, and that's the sole reason you can subtract it at all. Skyglow is not random like noise, so fortunately you can do some meaningful processing on it.
For a very crude subtraction of skyglow, take the value of the darkest pixel you can find in your image (don't count dead pixels of course :) ), then subtract this value from all pixels in the image.
Of course this isn't likely to remove all the skyglow, as skyglow usually ends up being uneven (there are other tools for that, such as my StarWipe tool, PixInsight's DBE tool, GradientXterminator, etc.). However, it should take out the bulk of the unwanted signal without 'under-clipping'. A simple way to do this in any photo processing suite is to create an extra layer, filled with the darkest pixel value you could find in the image, then set the layer to 'subtract'.
From here you can software bin the image. Because you took out the skyglow, you can bin the image more than you could with the skyglow still in, without the image starting to clip/overexpose. You cannot do this with hardware binning, and this is the reason I don't recommend hardware binning.
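(A rough numpy version of that recipe; the percentile is a made-up safeguard against dead/cold pixels rather than using the absolute minimum.)

```python
import numpy as np

def subtract_skyglow(img, percentile=0.1):
    # crude skyglow removal: subtract (roughly) the darkest live pixel value, as described above
    pedestal = np.percentile(img, percentile)
    return np.clip(img - pedestal, 0.0, None)

def bin2x2_sum(img):
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# usage sketch: cleaned = subtract_skyglow(raw_sub); binned = bin2x2_sum(cleaned)
# with the skyglow pedestal gone, the summed super-pixels spend the freed headroom on real signal
```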
Bassnut
11-07-2010, 04:37 PM
Your ideas about binning generally are very wrong there Ivo :P.
Hardware binning is not done in firmware (though the hardware process is controlled by firmware, which is totally different). As per Rick's excellent link, the electrons, say in bin 2, are dumped from 2 adjacent pixels into one serial "super pixel" register and read out as one pixel, halving the readout noise. In bin 4 the readout noise is 4 times less, a significant S/N improvement, especially with low signal.
In fact, for low signal levels, hardware binning improves read S/N so much that it can make the difference between unusable data at bin 1 (due to read noise domination) and more than usable data at bin 2.
Many, many times I have wasted imaging time at bin 1 on dim nebs with 20 min exposures through very NB filters, and reimaged at bin 2 with excellent results.
Your statement about avoiding hardware binning and using software binning is very misleading; I would say the reverse (from hard experience). Software binning is next to useless, just unnecessary data manipulation for next to no S/N gain at all, while hardware binning is not only necessary, but often critical for getting any result at all!
With advanced CCD chips (especially those with ABGs), binning is unlikely to cause saturation as described in the link, and anyway, if the signal is so high that blooming occurs, then you simply don't need to bin, or you can take shorter exposures.
Hardware binning is very common BTW; all this is not just my opinion.
irwjager
11-07-2010, 06:53 PM
As I have graciously admitted (http://www.iceinspace.com.au/forum/showpost.php?p=612772&postcount=33) :)
We're both right! :) I'm talking from the context of a light-polluted site (e.g. lots of skyglow). In that case your readout noise is drowned out by sky noise at relatively short exposure times.
If you're doing your photography from dark sites (lucky you!), read noise definitely becomes an issue and binning can really help.
This is a great read on the subject;
http://www.starrywonders.com/ccdcameraconsiderations.html
I take issue with saying that software binning is next to useless though.
For starters, some of us have OSCs and have no way of doing hardware binning without sacrificing colour.
Aside from that, the reduction of read noise in hardware binning is merely a (very welcome) side effect of implementing the binning in the hardware. There are further benefits to binning, also in the realm of reducing noise; that's why you'll frequently find 'averaging' as one of the binning options in the different software suites.
EDIT: Removed "For example, averaging the 4 pixels in a 2x2 area drops noise by a factor of 2 (it's the same as stacking 4 subs). This can *far* outweigh any read noise gains made by hardware (summing) binning." - It's more complicated than that :)
As I've said before in this thread, "binning" is just the process of putting multiple samples together and choosing a single sample to represent them. This leaves totally open the method/algorithm you choose to assign the single value. There are many ways to avail yourself of the benefits of binning; all of them can be accessed in software, and only one (summing) is accessed through hardware (with the added benefit of readout noise reduction).
I was referring to excessive skyglow, not blooming. When you have a lot of skyglow at your site, your wells fill up quickly and skyglow drowns out the signal you're after, to the point where everything is overexposed. Binning makes this worse: you collect light faster (4x for 2x2), but on some sensors your well capacity does not increase by the same factor due to ADU limitations, so the increased dynamic range you collect cannot be fully encoded by the A/D converter. Therefore it is better to record everything you can get, export it to your computer and bin it there. However, if you image very faint objects at dark sites, this is obviously of no concern or use to you at all.
Add to that the flexibility of removing data that you don't want binned and software binning becomes a lot more useful.
That said, hardware binning definitely has its uses and I apologize if I came across as dismissing it completely.
multiweb
11-07-2010, 07:39 PM
I believe hardware binning has its advantages on extremely faint objects, because I have seen it done in practice on narrowband test shots (OIII on the Helix). The improvement was just second to none. The signal (and contrast in the details) was overwhelming any noise.
If you're in a light-polluted environment or imaging a very bright object, then hardware binning has very diminished returns. The data will saturate very quickly and the skyglow will also be worse.
So hardware binning definitely has its place. As far as software binning goes, I hear what you're saying Ivo, but I've yet to see a real-life example of the kind of improvement you'd get. Are you saying that if you remove the skyglow first then software bin the data, the result would be better?
I can provide some light polluted uncalibrated raw data if you wish to demonstrate. Got stacks of that. :thumbsup:
irwjager
11-07-2010, 08:52 PM
The removal of the skyglow frees up dynamic range, which can be used to bin the image more, until the whole of the dynamic range is used up. This would be fractional binning, obviously, and not a neat 2x2 - think 2.25x2.25 or thereabouts. The image would be smaller than the 2x2 hardware-binned image with the light pollution still in.
You could do the same thing to the hardware binned image though - take out the light pollution and then software bin it again until the full dynamic range is used.
I'd be happy to demonstrate it!
multiweb
11-07-2010, 10:07 PM
Good stuff. I'll email you some raw fits.
irwjager
12-07-2010, 08:54 AM
Cool, thanks Marc! I'll record the intermediate images, so it's easier for people to see what's going on and reproduce (or criticize and shoot down :P). I'll host them somewhere else, so we can scrutinize some high quality TIFFs instead of 200k blotchy JPEGs.
I'll also show how different debayering algorithms are more suitable than others for dealing with noisy input.
Bassnut
12-07-2010, 10:42 AM
Here's a thread on software binning on OSC. Click on each post down the list. http://forums.dpreview.com/forums/read.asp?forum=1032&message=29133466
irwjager
12-07-2010, 11:23 AM
This thread is about the merits (or not) of software binning in order to decrease noise by means of averaging (not summing).
It's mostly useless for normal photography; there are a good few articles about it on the web. Averaging may look better in normal photography, but what you're really doing is applying a low-pass filter. Any high-frequency information is lost from the image (along with the noise; random noise is also high frequency).
However, astrophotography is a bit of a different beast, in that high-frequency information may not be present in the first place, due to your CCD's resolving power being greater than the seeing conditions require. You can tell that's the case if your image looks 'soft' and is devoid of high-frequency signal (except for noise). An image like this is a good candidate for binning by averaging.
It can be a boon for astrophotography under the right conditions, but don't expect it to do anything for the signal-to-noise ratio in your holiday snaps... :P
Bassnut
12-07-2010, 11:59 AM
Yes, terrestrial would be different there, but the MaxIm DL help file mentions this about averaging in binning:
Binning and Resizing
Sometimes it is useful to shrink images. The Binning command does this in the same manner as binning inside the camera – simply combine the adjacent pixels together into a single ”super-pixel”. Unlike a CCD camera, this function averages the values instead of summing them; however the effect is otherwise identical. (Binning inside the camera may reduce total read noise, though, if the binning is done "on chip").
Simple binning does not ensure that the result meets the Nyquist Sampling Criterion. This means that small point sources like stars can all but disappear. The correct way to resize an image is to first low-pass filter it, so that no spatial frequencies exceed one half the new sample interval. This prevents the addition of aliasing distortion into the image. The Half Size command includes such a Nyquist filter.
irwjager
12-07-2010, 03:36 PM
That's a great find Fred!
To see how binning by averaging works and why you need an input that is a bit blurry (due to seeing conditions, or artificially introduced by a low pass filter) download this (http://www.xs4all.nl/%7Ebvdwolf/main/foto/down_sample/down_sample_files/Rings1.gif) image (from Bart van der Wolf's excellent site (http://www.xs4all.nl/%7Ebvdwolf/main/foto/down_sample/down_sample.htm) on comparing downsampling methods).
View it at 100% and it should look like ever narrowing concentric rings. It should be a single ring in the center, with no other rings being visible.
Now software bin (simple bin) this image at 2x2 and look at the result (again at 100%). Due to the presence of high frequency signals in the original, the binning (averaging) has introduced artifacts (you will see multiple rings), however any random noise that would've been present would have been greatly reduced.
Undo everything until you have your original back.
Now perform a simple blur (anything with a 1-pixel radius will do) and perform the 2x2 bin again. You should now see a perfect binned copy of the original image without aliasing. Again, any random noise that would've been present would have been greatly reduced.
The blur we applied acted as a low-pass filter, to eliminate any high frequency signal from the original (analogous to the slightly blurry image you get when imaging with a CCD that resolves more than seeing conditions permit).
We just simulated the "perfect" situation whereby binning by average can be used to improve the signal-to-noise ratio. Any noise that would have been present in the image would have been averaged and greatly reduced.
This little experiment also shows why the signal-to-noise ratio does not improve by binning willy-nilly (e.g. without suitable source material); yes you reduce the random noise, but you pay for it by introducing artifacts (aliasing).
This experiment also provides a plausible reason for why most people perceive a downsampled (e.g. binned by averaging) image as 'better', regardless of the suitability of the source; the aliasing that was introduced is far less noticeable than the noise in the original image. The aliasing can be quite subtle, even pleasing to human eyes. However, it is still information that doesn't belong in the image and still counts towards noise. It's just "pretty" noise. :)
So to recap;
Got a soft/blurry image at a high resolution and want to get rid of some noise? Perfect! You don't have any detail to lose anyway and your image is sufficiently blurred to counter any noticeable aliasing. Simply bin it by averaging.
Got a crisp image, but don't mind losing some detail (and resolution) to get rid of some noise? Blur it, then bin it by averaging (or choose a binning algorithm that does both steps for you - like Fred found in Maxim DL).
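(If you want to try that last recipe yourself, here's a rough sketch, with scipy's Gaussian blur standing in for whatever low-pass filter you prefer; the sigma value is just a placeholder.)

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bin2x2_average(img):
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def downsample_with_lowpass(img, sigma=1.0):
    # low-pass first so nothing exceeds the new Nyquist limit, then bin by averaging
    return bin2x2_average(gaussian_filter(img, sigma))

# binning a crisp test pattern (like the Rings image) directly introduces aliasing;
# blurring it first, as above, avoids the extra rings while still averaging down the noise
```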
Bassnut
12-07-2010, 05:23 PM
I admire your persistence Ivo, good work. You've got me thinking more now; I'll look into this further. Can't help thinking, though, that just stacking large numbers of (sharpish) subs will get the same result without losing resolution. I'm a bit like Marc, I'd like to see the results without the bother of doing it myself first up.
I haven't heard of anyone else bothering with software binning, I must say.
Thanks for your effort and time on this Ivo.
irwjager
12-07-2010, 08:27 PM
Thanks Fred! You're right of course - more subs is definitely the way to go. And no matter how clever your processing, you can't beat more and better quality data!
This would make the whole discussion rather academic, were it not for my situation, where my equipment is 'basic' (to put it mildly), light pollution is very high (inner Melbourne), and all I have at my disposal is cheap processing power, the oversized CCD of a hacked $90 consumer digital compact camera and my 'mad coding skillz' :P.
My limitations are by choice, and some would (rightfully) say I am indeed mad. However, I absolutely love the challenges these limitations pose. The old adage "necessity is the mother of invention" rings very true in my case. Through writing software (and a basic understanding of optics), I have overcome the chromatic aberration of my cheap first eyepieces, overcome the light pollution in my backyard, put the cheap (but big) CCD in the camera to good use, now have a debayering routine that better suits my astrophotography needs, and can now autoguide my scope via my $10 homebrew interface.
I love how cheap digital compact cameras & webcams, even cheaper CPU cycles and free Open Source Software have put astrophotography within reach of the ordinary sidewalk astronomer (i.e me). Software is absolutely essential in order to capitalize on the low cost of these commodities - it's what ties it all together and repurposes it for astrophotography.
With my huge, but lesser quality, uncooled CCD, I imagine my processing chain looks totally different from yours. I got resolution in spades, but am drowning in noise and fuzz. I *need* things like software binning, and have to use every trick I know to get rid of noise and recover precious signal.
It is my hope that more enthusiasts on a shoestring budget will follow suit, recognize the potential of off-the-shelf hardware and use my tools (or contribute their own). Hopefully you'll see quite a few more people bothering with software binning then! ;)
ericwbenson
13-07-2010, 02:08 AM
Adding my two cents here:
If you examine the CCD equation and rearrange it a bit, you can make some observations about binning and other stuff. First the simplified equation for the SNR of square object (easier math!) spanning at least a few pixels in width (so we are above the critical sampling) recorded on a CCD:
SNR = S / sqrt(S + Nsky + Nd + npix² * nr²)
S: Signal [e-]
Nsky: sky signal [e-]
Nd: dark signal [e-]
nr: CCD read noise [e-/pixel]
npix: object digital width [pixel]
So we can expand the terms in the equation into other measurable/comparable factors using:
S = QE * t * A * w² * Cobj
QE: CCD average quantum efficiency [e-/photon]
t: time [sec]
A: telescope collection area [m²]
w: object angular width [arc sec]
Cobj: average photon rate from the object hitting the earth [photons/sec/m²/arcsec²]; this is related to the object brightness quoted in [magnitudes/arcsec²] via a conversion factor, since flux is actually measured in Janskys, but we'll skip that part for now! Suffice to say that a 22 mag star = 14 photons/sec for every square meter of telescope area.
Nsky = QE * t * A * w² * Csky
Csky: average photon rate from the sky background [photons/sec/m²/arcsec²], due to light pollution, moon, aurora and sky glow; again it is related to what a sky quality meter reads in [magnitudes/arcsec²]
Nd = t * Cd * npix²
Cd: average dark signal per pixel from heat [e-/sec/pixel]. This is a function of CCD temperature and usually halves for every ~6 deg. C. drop. For Kodak chips it is in the vicinity of 0.1 e-/pix/sec at -20C.
So the telescope and CCD parameters give us the image scale:
Image scale = 206 * p / f [arcsec /pixel]
p: CCD pixel size [µm]
f: telescope focal length [mm]
and now we can relate the object digital size to its angular size in the sky:
npix = w * f / (206 * p)
Introducing these factors into the original equation gives:
SNR = QE * t * A * Cobj * w² / sqrt[ (QE * t * A * w² * Cobj) + (QE * t * A * w² * Csky) + (t * Cd * npix²) + (npix² * nr²) ]
grouping terms and replacing npix:
SNR = QE * t * A * Cobj * w² / sqrt[ (QE * t * A * w²) * (Cobj + Csky) + (t * (w*f/(206*p))²) * (Cd + nr²/t) ]
pulling the t and w out of the sqrt and dividing the numerator:
SNR = QE * sqrt(t) * A * Cobj * w / sqrt[ QE * A * (Cobj + Csky) + (f/(206*p))² * (Cd + nr²/t) ]
So if you can ignore light pollution, dark current and read noise (a perfect camera in outer space), telescope area, integration time, CCD QE and object brightness are all equally important, as is the angular area of the object.
Pixel size only matters if QE*A*(Csky+Cobj) is small as compared to dark current and read noise. Of course pixel size matters very much for system resolution!
Therefore, when the object is faint under a dark sky, or when using a narrowband filter (where Cobj is not affected but Csky is greatly reduced), increasing the pixel size (i.e. binning) reduces the effect of read noise. N.B. the dark current noise remains unchanged, since binning 2x2 makes Cd 4x bigger!
Lets look at some real numbers:
Csky (naked eye limiting mag ~ 6.0) ~ 21 mag/arcsec² = 18 photons/sec/m² in green filter
Cobj (e.g. a 12th mag. galaxy 30"x30" in size) ~ 19 mag/arcsec² = 82 photons/sec/m² in green filter
A (for a 10" scope) = pi * 0.254² / 4 = 0.05 m²
QE = 0.5
f = 2500 mm, p = 9 µm
Cd = 0.1 e-/pix/sec
nr = 15 e-
exposure time t = 600 sec
Inside the sqrt we have:
0.5 * 0.05 * (82+18) + (2500/(206*9))² * (0.1 + 225/600)
0.025*100 + 1.8 * (0.1 + 0.375)
2.5 + 0.855
So the read noise is more important than the dark noise, but the shot noise from the galaxy itself dominates all other noise sources. In light polluted skies (18 mag/arcsec² = 300 photons/sec/m²), the Csky term easily dominates above all else, making binning, pixel size, and to some extent cooling, sorta moot.
BTW Csky is easily measured if you know your CCD QE and gain [e-/ADU]. Take an exposure of a few minutes, record the pixel value in an empty area, subtract any software pedestal (MaxIm/CCDSoft add 100 ADU for technical reasons), multiply by the gain to get electrons, divide by QE, telescope area and exposure time, and you get the flux in photons/sec/m².
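(Transcribed into a quick calculator, using the equation and the worked-example numbers exactly as above, so the two noise terms inside the square root come out at roughly 2.5 and 0.86.)

```python
import math

def ccd_snr(QE, t, A, Cobj, Csky, w, f, p, Cd, nr):
    """Simplified CCD SNR equation from the post above; units as defined there."""
    shot = QE * A * (Cobj + Csky)                        # object + sky shot-noise term
    pixel = (f / (206.0 * p)) ** 2 * (Cd + nr ** 2 / t)  # dark current + read noise term
    return QE * math.sqrt(t) * A * Cobj * w / math.sqrt(shot + pixel)

# 10" scope, f = 2500 mm, 9 um pixels, 12th-mag 30"x30" galaxy under a mag-21 sky, 600 s sub
print(ccd_snr(QE=0.5, t=600, A=0.05, Cobj=82, Csky=18, w=30, f=2500, p=9, Cd=0.1, nr=15))
```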
EB
irwjager
13-07-2010, 10:01 AM
I'd rate that at a little more than two cents! :lol:
Awesome explanation (and application) of the math behind the whole concept. :thumbsup:
If anyone wants to toy around with numbers of their own, there's a bunch of on-line calculators and spreadsheets available via this (http://www.starrywonders.com/ccdcameraconsiderations.htm) site (scroll down to about 3/4ths of the page).
One of them (http://www.ccdware.com/resources/subexposure.cfm) (a great on-line calculator by John Smith (http://www.hiddenloft.com/)) is particularly instrumental in quickly demonstrating the effect of varying your different circumstances on the different proportions of noise in your signal, as well as at which point read-out noise does become a serious issue (hint: it's exactly the sort of circumstances under which Bassnut's experience (http://www.iceinspace.com.au/forum/showthread.php?p=613442#post613442) taught him to resort to hardware binning - dark site, broad area, faint object, narrow band).
irwjager
14-07-2010, 01:26 PM
I received Marc's images and did the following. (Please be aware the image files are quite big).
This (http://www.siliconfields.net/marc/bin_test.tiff) is the image I got from Marc.
First I created a light pollution map. I used StarWipe, but you might want to use something else;
StarWipe --in=bin_test.tif --out=bin_test_lpmap.tiff --scale=5 --window=30 --mode=global --maponly
That resulted in this (http://www.siliconfields.net/marc/bin_test_lpmap_compressed.tiff) light pollution map.
Next I subtracted the light pollution map from the original image. Again, I used StarWipe for this as well, but importing the original image in PhotoShop, then importing the light pollution map in a separate layer and setting that layer to 'subtract' will do the same thing;
StarWipe --in=bin_test.tif --inmap=bin_test_lpmap.tiff --mode=global --out=bin_test_wiped.tiff --nonormalize
Now we're left with a dimmer version of the original (http://www.siliconfields.net/marc/bin_test_wiped_compressed.tiff), but with the light pollution removed. However, now I'm no longer using the full dynamic range I have at my disposal - I can still crank the brightness up without clipping my histogram to the right.
We should increase the signal in the image so that the image is bright again, but not so bright that my histogram starts clipping to the right.
At this point, I use (fractional) software binning (approx 1.41 x 1.41 in this case) to crank up the brightness again, instead of stretching the brightness levels of the pixels.
StarBright --in=bin_test_wiped.tiff --scale=71 --mode=cap
With this as the final result (http://www.siliconfields.net/marc/bin_test_wiped_brightened_compressed.tiff). (Notice though that the image is 71% of the original size in X and Y directions.)
So what's the difference between stretching and binning?
If I stretch the brightness levels, I trade off precision for signal.
If I bin the image, I trade resolution for signal. Or, put another way:
If I stretch the brightness levels, gaps will start to appear in the histogram. Stretch the levels enough and you'll start to see banding, and noise will become more apparent.
If I bin the image, the histogram stays intact and smooth. I can bin as much as I want without ever seeing any banding (though my image will get smaller and smaller and uselessly bright).
That's how (and why) I use software binning in a nutshell!
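(To see that histogram difference on toy numbers, here's a small made-up comparison of brightening by multiplication versus brightening by summing samples; the Poisson level and factors are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(3)
raw = rng.poisson(200, size=1_000_000).astype(np.float64)   # a dim, noisy sub in integer ADU

stretched = raw * 40                           # stretching: used levels are now 40 ADU apart -> banding risk
binned = raw.reshape(-1, 16).sum(axis=1)       # 'binning' 16 samples: a similar brightness boost

print(np.diff(np.unique(stretched)).min())     # 40 -> gaps open up in the histogram
print(np.diff(np.unique(binned)).min())        # 1  -> the histogram stays contiguous
```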
ericwbenson
14-07-2010, 08:41 PM
Wait a sec, it sounds as if you are working in only 8 bits? CCDs produce images with much more than 8 bits of dynamic range (10-14 is typical), and stacked images have even more (16 images in a stack could in theory gain 4 more bits). Also, the camera quantization levels are generally set to be smaller than the read noise, so that the discrete digital levels are not a limiting factor. That's why MaxIm DL and Mira do their processing in 32-bit floating point: none of this banding occurs and there is no trade-off between signal range and precision. Although I have seen banding in the display image when I had a large, smooth, ultra-low-noise profile, like that produced by a giant elliptical galaxy for which I had tons of exposure time.
You would only want to software bin to make the image a) smaller and/or b) smoother by giving up resolution.
Cheers,
EB
irwjager
14-07-2010, 09:35 PM
Nope, 10-bit source/CCD, 64-bit integer processing and 16-bit TIFF output (as in the links).
I couldn't disagree more. There *is* signal degradation and you *will* start to notice it when you start manipulating the signal. See how you go when you try to multiply the signal, such as when you're trying to make a high dynamic range composite from a high bit-depth image (i.e. you multiply the signal progressively and blend the non-overexposed parts with the original to bring out more detail in, let's say, 8-bit intervals).
I do agree that there's a certain colour depth beyond which it becomes hard to see any difference (your screen can only represent 8 bits per colour channel). That doesn't mean, however, that the imprecision isn't still there, waiting to bite you in the a** once you start manipulating your data further (such as in the HDR scenario above).
Also, you can still get banding with 32-bit floating point numbers. Floating point numbers can't represent every real number, and rounding errors creep in quickly and become larger the more you manipulate the data. Unless MaxIm and Mira use floats to store integers, in which case you can store 2^24 integers correctly until things go wrong. But that would defeat the purpose - why not just use a 32-bit integer? :) Fidelity and accuracy are not a float's strong points once you start performing arithmetic on them; for example, one of the biggest sins you can commit as a programmer in the finance industry is to use floats... :D