View Full Version here: how long should subs be with low read noise CCDs?


Shiraz
29-10-2013, 12:23 AM
A generally accepted rule for choosing the sub length is to make the subs long enough that shot noise in a sub (from the sky background) will overwhelm the read noise that is added at the end of the sub exposure. A widely used equation for determining an appropriate sub length is that from Starizona, which is:

Sub length = constant * readnoise^2 / skyflux

The key thing about this equation is that the sub length is proportional to the square of the read noise (for a given sky flux). To put some numbers on this, if a system using a camera with a read noise of 10 electrons requires 10 minute subs, a similar camera, but with a read noise of 5 electrons, would only require 2.5 minute subs for the same conditions (half the read noise requires a quarter the sub length). You could use longer subs, but you wouldn’t gain anything practical in signal to noise ratio, since read noise has already been taken out of consideration once subs are longer than 2.5 minutes.
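
If it helps, here is the rule as a couple of lines of Python - the constant of ~9 (which corresponds to read noise adding only a few percent to the total noise) and the sky flux are assumed values for illustration, not measured figures:

# Starizona-style sky-limited sub length: long enough that sky shot
# noise swamps the read noise added once per sub.
def sub_length_s(read_noise_e, sky_flux_e_per_px_s, constant=9.0):
    return constant * read_noise_e ** 2 / sky_flux_e_per_px_s

sky = 1.5  # e-/pixel/s, assumed
print(sub_length_s(10.0, sky) / 60.0)  # 10 min for 10 e- read noise
print(sub_length_s(5.0, sky) / 60.0)   # 2.5 min for 5 e- read noise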

The new breed of low read noise chips (eg from Sony) really can operate very effectively with short subs – the game is changing.

The other thing to note from the equation is that the sub length is inversely proportional to sky flux – if the sky is bright, you can use shorter subs than if it is dark. Shorter subs do not help with SNR under bright sky, but they do not make it any worse - the shot noise from the sky is so high that it doesn’t matter if you add a bit more read noise.

All this goes out the window with NB - the sky flux in the equation above will be very small and the basic rule is that subs should be as long as possible - low read noise chips will have better SNR, but even they can benefit from long subs in NB.

thanks for reading - discussion welcomed. regards ray

naskies
29-10-2013, 01:11 AM
Hi Ray,



Rick posted this link in another discussion thread recently on this very topic:

http://www.cloudynights.com/item.php?item_id=1622

(One issue being that the sky-noise-dominating-read-noise equation ignores the target brightness and noise.)



I'm not sure that this is correct... SNR = Signal / Noise for a single sub, i.e. it's a linear relationship. If you halve the noise, you can only halve the signal to keep the same SNR.

Adding n separate subs together gives you SNR = n * Signal / (Sqrt(n) * Noise), i.e. the "four times the integration time for half the noise" relationship that we're all familiar with.
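
As a quick sanity check of that sqrt(n) behaviour, here's a small simulation sketch (the signal and noise figures are made up):

import numpy as np

rng = np.random.default_rng(1)
signal, noise_sd, n = 100.0, 50.0, 16
# 100,000 simulated stacks, each the sum of n noisy subs
stacks = (signal + noise_sd * rng.normal(size=(100_000, n))).sum(axis=1)
print(stacks.mean() / stacks.std())  # measured SNR of the stack
print(np.sqrt(n) * signal / noise_sd)  # predicted: sqrt(16) * 2 = 8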



There's a zone where the SNR depends purely on total integration time, i.e. where the number of subs and the sub length are interchangeable provided that the total integration time remains constant.

However, the main reason for longer subs is to go deeper by avoiding quantisation errors. If your target is so dim that you're only getting 1 photon every 5 mins (e.g. a faint jet from a galaxy) - but you're taking 2.5 min subs - then it'll just be lost in the read noise and you won't record a signal at all.

On the other hand, if you take 5 min subs you'd be able to detect a 1 photon brightening over the surrounding background with lots of stretching. However, with an average of 1 photon you won't be able to detect any surface detail - there's no room for contrast. With 30 min subs, you'd average 6 photons per sub... and there'd be enough room to have brighter (e.g. 8 photons) or darker (e.g. 4 photons) regions within the surface for detail.

This is also why long narrowband subexposures show more detail.
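
A quick Poisson sketch of the 1-photon-every-5-mins example (the rates are assumed from above):

import numpy as np

rng = np.random.default_rng(1)
rate = 1 / 5.0  # target photons per minute (1 every 5 min)
for sub_min in (2.5, 5.0, 30.0):
    photons = rng.poisson(rate * sub_min, size=100_000)
    print(sub_min, photons.mean(), (photons == 0).mean())
# ~61% of 2.5 min subs catch no target photon at all;
# at 30 min almost every sub records several.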

Don't forget the other sources of noise: non-Poisson/non-random noise such as pixel defects, cosmic rays, planes, satellites, and so on. These are reduced only by dithering and stacking a larger number of subs (individual sub length has little effect).



Absolutely - for bright regions where plenty of photons are coming in. However, even low noise sensors will still have quantisation issues.

The complete game changer will be if we get effectively *zero* noise sensors one day: stacking will be done by sum rather than average/median combine. This would also make lucky imaging possible - just take the sharp frames of a video and sum them over the subexposure time. Maybe we'll also have to carry tanks of liquid nitrogen or helium out to our dark sites one day? :lol:



The SNR is worse under bright skies. Although you don't have to expose for as long, the sky shot noise contribution is proportional to the square root of the brightness, i.e. 4x the sky brightness leads to 2x the sky shot noise. To cancel out 2x the sky shot noise, you'll have to increase the total integration time by 4x (but you can use shorter subexposures to do so). Intuitively, most of us have experienced this - shooting shorter subs from light pollution, but stacking a huge number of them to overcome the noise.
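
In sketch form (arbitrary target and sky fluxes, background-limited case only):

import numpy as np

def stack_snr(target_flux, sky_flux, total_time_s):
    # background-limited: only sky shot noise matters
    return target_flux * total_time_s / np.sqrt(sky_flux * total_time_s)

print(stack_snr(5.0, 100.0, 3600.0))      # baseline: SNR = 30
print(stack_snr(5.0, 400.0, 4 * 3600.0))  # 4x sky, 4x time: SNR = 30 again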



It's not technically just a NB thing - this applies whenever the target is *much* brighter than the sky background, such as NB exposures or even, say, the Orion Nebula under mag 22 dark skies.

This is a good zone to be in, because SNR increases linearly with sub duration (sky noise contribution is effectively zero)... which is why it's still worth shooting Orion Nebula under dark skies.



Thanks for the discussion. My comments above are based on my understanding of the maths/stats behind image sampling (e.g. the formulae in the Anstey article). Feel free to correct the inevitable mistakes that I've made :)

Shiraz
29-10-2013, 11:26 AM
thanks for the comprehensive response Dave - pleasure to have a detailed discussion on this topic.

I will try to summarise my response to the points you raise, but that will require a bit of a critique of the Anstey paper - so sorry to go off track a bit. Also, if it sounds a bit pompous in places - sorry..

first off, you questioned whether sub length is proportional to read noise squared. I think that it is - within a sub, the signal increases linearly with exposure time but the shot noise only goes up with the square root. If you double the read noise, you need to increase the signal by 4x to get a doubling in shot noise, ie you need to expose for 4x as long if you want the shot noise to cover a doubling of read noise.

The other main points come from the underlying assumptions of the Anstey article, which are:
- you need to look at the noise in the target when you set sub length
- short subs will be messed up by quantisation noise.

I disagree on both points:
1. In any imaging I have done, the bright bits of the target are never an issue - they take care of themselves. The big problem is the noise that blights the dim parts of the scene when the target is barely visible or not present at all. For broadband imaging the primary source of noise in this region is shot noise from the sky background and target signal/noise is quite unimportant. Hence, I consider that the Smith/Starizona approach is valid and that subs should be chosen to keep read noise well below shot noise from the sky background. Noise is noise and it doesn't matter where it came from - there is nothing special about the target noise.

2. I think that the quantisation issue is a non-starter. To take your example, consider a target with an average of 1 photon every five minutes. Some of the 2.5 minute subs (on average slightly less than half of them) will have no target photons, some will have one and maybe the odd one will have two or more. Add the signals over 12 such subs and you will typically get around 6 target photons - the same as you would get from one long 30 minute exposure. This of course depends on the stacking method - it won't work if the stacking is done by simply averaging the camera signals at the same bit resolution as the original (then you might get either 1 or 0 for the target). However, any stacking system worth its salt will at least add up the total signal and then divide by the number of subs to give an average. If the internal signal representation is floating point, you may get something like "averagesignal = 0.5" and if you want to know how many photons you collected then just multiply by the number of subs - no signal has been lost by having sub signals below 1 photon on average. The final average signal will need to be stretched more than it would for the longer exposure, but the SNR will be exactly the same, since the noise will also have been divided by the number of subs to get an average, ie there is no quantisation issue at all. Even if the stacking system produces a fixed point representation, all is not lost - see http://www.stark-labs.com/craig/resources/Articles-&-Reviews/BitDepthStacking.pdf. Maybe I have it wrong on stacking, but with my current understanding, I do not accept any of the Anstey arguments based on quantisation error.
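
A quick simulation of that example (read noise deliberately ignored, to isolate the quantisation question):

import numpy as np

rng = np.random.default_rng(1)
rate, n, sub_min = 1 / 5.0, 12, 2.5  # 1 photon/5 min; 12 x 2.5 min subs
trials = 100_000
summed = rng.poisson(rate * sub_min, size=(trials, n)).sum(axis=1)
single = rng.poisson(rate * n * sub_min, size=trials)  # one 30 min exposure
print(summed.mean(), single.mean())  # both ~6.0 photons
print(summed.std(), single.std())    # both ~sqrt(6): identical statistics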

A couple of other comments:
- cameras with 1 electron read noise are available - they just cost an arm and a leg.
- your conclusion that shooting something like the Orion nebula is similar to NB is valid, but only for the bright bits. I find that, to bring out the fine detail around it, you need to take into account the sky noise - the Smith/Starizona approach is still valid. Maybe it isn't on something like the moon or a planet, but then the issue is not one of long exposures anyway.
- the simplest way to resolve some of these questions is by experiment, will try to do so if I can ever again find a nice clear night with good transparency and then force myself to devote some imaging time to something that may be of limited interest.

Cheers and regards Ray

naskies
29-10-2013, 01:12 PM
Actually, with the light of day I think we're talking about slightly different things... my mistake, sorry :ashamed:

Yes, I agree that with all else equal, a camera with half the read noise will only need one quarter the sub length to reach a sky-limited exposure.

(I was thinking of the SNR of an individual sub, where sub length is indeed a linear inverse relationship with read noise, but that doesn't factor into sky-limited exposure duration calculations.)



You don't just need to look at the target noise, but also the signal in the dim areas as you mentioned... [continued below]



If you don't accept quantisation noise/errors, then may I ask what your explanation is for the limiting magnitude under a given set of conditions? :question:

It clearly applies, otherwise we could all just take huge numbers of 1 min sky limited exposures in the heart of an urban centre, and get nicely detailed mag 30 galaxies...?

In both the Smith and Anstey models, object signals are modelled as Flux*t + ShotNoise*sqrt(t).

The issue that Smith ignores is that the ObjectFlux*t signal term needs to be an integer - you physically can't record a fraction of a target electron in one sub. If you're recording 0 electrons for a substantial proportion of subs, then with mean/median combine the signal term is actually 0 under a least-squares fit... those occasional 1-electron subs become shot noise (and are indistinguishable from camera read noise).

With a sum combine as you suggest, then yes - those 1 electron subs will register a signal. However, with sum combine the noise increases with the number of subs, so stacking doesn't increase SNR; with mean combine SNR ∝ sqrt(n), hence the noise is effectively reduced. For our current cameras where read noise >> 0 e-, sum combine isn't practical beyond a few frames at most.

Even with low read noise cameras, a decent sub length is still required if you want to chase the really, really faint stuff... and the limiting factor will still be the target signal dominating over sky shot noise.



Yep. The maths should still work out the same as with 5 or 10 e- read noise cameras. The absolute game changer would be zero (or very, very, very close to zero) read noise cameras - much like a short wire at room temperature is very low resistance, but still nowhere near having superconductive properties.



Yes, these SNR calculations are only valid at the individual pixel level.



Anstey included empirical data in his article. I've also done a few experiments myself, though nothing rigorous enough to share publicly.



Thanks Ray, always a pleasure :thumbsup:

SpaceNoob
29-10-2013, 03:27 PM
The newer generation low noise sensors tend to have smaller pixels too, which in effect increases the impact of read noise... I am wondering if there is a true reduction in required sub exposure duration, unless binned to optimum sampling. Then again, would binning further reduce your overall read noise, given that it is applied to the logical "binned" pixel?

I understand that smaller pixels give more data (assuming oversampling) for later processing such as deconvolution etc, but if you're binning to a decent sample rate that theoretically matches both optics and seeing, you're further improving read noise too. The smaller well depth of these sensors doesn't mean a whole lot when you're stacking so many subs, you get the dynamic range anyway.

I'm excited to see where the technology takes things, one can only hope the overall size of the sensors increases, say to the size of an 8300 at a minimum ;)

SpaceNoob
29-10-2013, 03:49 PM
I just checked my bias for the 8300 both binned 2x2 and unbinned 1x1.

The read noise value does increase in the 2x2 binned bias; however it is by a factor of ~ 2.1 or so. With there being 4 pixels in the single logical pixel, would I be correct in assuming there is an improvement of around half in this case?

Not sure how the Sony sensor would look here or if I am heading down a rabbit hole.
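
Working the numbers with an assumed 8 e- unbinned read noise (substitute the measured value for the 8300):

rn = 8.0  # e- per unbinned pixel, an assumed figure
ideal_hw = 1.0 * rn     # one read of the summed 2x2 charge
measured_hw = 2.1 * rn  # the factor measured above
software = 2.0 * rn     # 4 reads added in quadrature: sqrt(4) = 2
print(ideal_hw, measured_hw, software)  # 8.0, 16.8, 16.0
# Signal per binned pixel is ~4x in all three cases, so the read-noise-limited
# SNR gains over 1x1 are 4.0x, ~1.9x and 2.0x respectively.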

Merlin66
29-10-2013, 04:57 PM
Interesting discussions...
My "Bible" "Handbook of CCD Astronomy" by Steve Howell, covers the whole gambut of noise and noise generation (p 50 - 82)
on the effects of On Chip binning he says-
"Binning of CCD pixels decreases the image resolution, usually increases the final SNR....and reduces the total readout time"
"we would get a final signal level equal to ~4 times each single pixel value, but only one times the read noise"

He explains (pp. 75-82) in painful detail the "CCD Equation" for SNR calculation used by professional astronomers around the world.
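
From memory the CCD Equation has the simplified form below - do check the book rather than trusting my transcription, and the example counts here are made up:

def ccd_snr(star_e, n_pix, sky_e_per_px, dark_e_per_px, read_noise_e):
    # SNR = N* / sqrt(N* + n_pix*(N_S + N_D + N_R^2)), per-pixel noise
    # terms scaled by the number of pixels in the measurement aperture
    return star_e / (star_e + n_pix * (sky_e_per_px + dark_e_per_px
                                       + read_noise_e ** 2)) ** 0.5

print(ccd_snr(30_000, 200, 1000, 10, 10))  # ~60 with these made-up counts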

RickS
29-10-2013, 05:14 PM
Not always true in practice, unfortunately... at least not with some KAF sensors.

gregbradley
29-10-2013, 07:28 PM
A lot of this theory is tempered by the reality of imaging.
Clouds, light pollution, poor tracking, bad autoguider performance, flexure, lack of clear nights, lack of time due to work.

So you tend to end up with some sort of subexposure length that optimises both performance of the camera and performance of your tracking in your setup.

Also 40 minute subs sound great if you have the tracking and weather for it. The occasional fast cloud would mean 40 minute subs are unwise.

Poor tracking would make it impractical anyway.

Another factor is the well depth of the camera. These smaller pixel cameras have small well depth and that is one of their weaknesses.
So bright stars can bloat in fast systems in long exposures doing LRGB.

So well depth is another factor to consider. It's not really an issue if you are doing narrowband. Shorter exposures on bright objects using a camera with a shallow well depth is the strategy to prevent bloat of bright stars or losing star colours.

Greg.

Placidus
29-10-2013, 08:29 PM
Assuming floating point (not integer) arithmetic, there is no difference in signal to noise ratio between a sum combine and a mean combine.

Let's define signal to noise ratio of anything at all as coefficient of variation, i.e. value divided by standard deviation. Suppose you have 100 subs. The mean is the sum divided by 100. Dividing by 100 is just like changing centimeters to meters. It is a change of scale.

Suppose you have measured the length of a road in centimeters, and you are accurate to 10%. Re-expressing the length of the road in meters, or in kilometres, or in light years, won't increase the percentage accuracy or the fractional accuracy.

snr(sum) = sum / sd(sum)
snr(mean) = mean / sd(mean)
= (sum/100) / sd(sum/100)

what is sd(sum/100) ? Easy:

var(sum/100) = var(sum) / (100 * 100)
sd(sum/100) = sqrt[var(sum) / (100 * 100)]
= sd(sum) / 100

snr(mean) = sum/100 / sd(sum / 100)
= sum/100 / [ sd(sum) / 100 ]
= sum / sd(sum)
= snr(sum)

It's late and the wine was good and I've probably made 27 howling typos in the above but:

Summary: Dividing by a constant does not change the coefficient of variation. Changing from sum to mean is just a change of scale, like meters to centimeters. It does not change the accuracy of our photo. It does not change the signal to noise ratio.
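
For the sceptical, a numerical check with arbitrary signal and noise values:

import numpy as np

rng = np.random.default_rng(1)
subs = 100.0 + 20.0 * rng.normal(size=(50_000, 100))  # 100 subs per trial
total, mean = subs.sum(axis=1), subs.mean(axis=1)
print(total.mean() / total.std())  # ~50
print(mean.mean() / mean.std())    # ~50: identical coefficient of variation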

Peter.M
29-10-2013, 09:02 PM
This seems counter intuitive to me. If a one second sub is insufficient to overwhelm read noise but a 2 minute sub can overwhelm it, that means that the image signal is increasing at a faster rate than the read noise of the camera. Logically then a longer sub would increase the separation between the read noise and the signal. Obviously this is assuming that the only source of noise is the read noise, which is not the case.

Shiraz
29-10-2013, 10:13 PM
Hi again Dave. As I understand it, you reach limiting magnitude when a dim target has an SNR of about 3. My system typically produces a few hundred electrons of noise per sub from background sky, so an object at the limiting magnitude needs to produce maybe 1000 electrons. This is way above any possible quantisation noise, which really is limited to 1 electron either way - quantisation noise seems to be a non-event at my system limit. We cannot reach very dim targets simply because we have to image through the bright sky and we have a fixed noise background. It just takes too long to integrate much below the nominal sky levels.

Sum combine and mean combine have the same result if implemented in floating point - the only difference is a fixed scaling factor (the sub number). SNR increases with Sqrt(N) in both cases.

Anstey talked about empirical results, but I couldn't find any hard data in his article - might have been me not understanding what he was presenting though.


From what I have read, it seems that all sensors implement a compromise process in binning - you don't get the full gain expected from binning and there is even some argument that software binning after the event may be a better approach. I understand that manufacturers may change the internal gain to keep the output stages from severe overload, so test data may be hard to interpret. I haven't tested the performance of the 694 when binned, except to ensure that it worked OK.


Hi Greg - agree, the theory is only ever a starting point, but it is way better than waving one's finger in the wind and hoping - which is more or less what I used to do.


agree :thumbsup:


Hi Peter, the read noise is a fixed injection at the end of a sub. The signal from the sky rises as the sub length increases, and eventually you get to a point where the signal is large enough that the associated shot noise completely swamps the single burst of read noise. Beyond that point there is no benefit in having longer subs: read noise has been removed from consideration and the SNR is determined for all practical purposes by the shot noise, which you cannot do anything about. That is the basis for all of the sub length calculators. You could use fewer but longer subs if you wished, but you would not significantly change the SNR in the final combined image.
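
To illustrate, here is a small model of a one hour stack (the sky flux and read noise figures are just assumptions):

import numpy as np

sky, rn, total_s = 1.5, 7.0, 3600.0  # e-/px/s sky and e- read noise, assumed
for sub_s in (30, 60, 150, 600, 1800):
    n = total_s / sub_s
    # summed stack: sky shot variance is fixed by total time,
    # read noise variance grows with the number of subs
    noise = np.sqrt(sky * total_s + n * rn ** 2)
    print(sub_s, round(noise, 1))
# 30 s subs: ~106 e-; 600 s subs: ~75 e-; 1800 s subs: ~74 e-.
# Once sky*sub_length >> read_noise^2, longer subs buy almost nothing.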

rally
29-10-2013, 10:33 PM
Ray,

I assume we are talking about sky limited exposures? And that this is about DSO style imaging where we have faint targets with relatively low flux levels.

Re " . . . to make the subs long enough that shot noise in a sub (from the sky background . . .) "
Is it the Shot Noise or is it actually the Sky Noise itself that we are using to hide the Read Noise? I think it's the Sky Noise.

I'll have an attempt at this!

The purpose of determining a sub exposure time (using this method) is usually to provide the imager with a minimum exposure time that satisfies the purpose of reducing as much image noise as one has control over - that is, BTW, after all the normal image calibration and gear tuning has been done.

Its not "The" time or the best time or even the maximum time - its just the minimum time that satisfies a particular basic goal given the local sky conditions, the imaging system and the camera system.

After all - there are many other constraints or preferred goals that might also dictate a longer or shorter exposure time for a given situation!

More specifically, since we cannot control Sky Noise at our given site (other than by relocating!), the goal of producing a Sky Limited exposure is to minimise camera Read Noise as a significant contributor of total image noise,
ie to reduce or minimise the effects of noise over which we have some control, compared to the noise over which we have no control.

This is achieved by limiting Read Noise to about 3-10% of sky noise, so that the total contribution of Read Noise, which is a 'fixed' constant determined by the camera's CCD and electronics, is swamped by the higher level of Sky Noise over which we have no control.

The 'fixed' value of Read Noise, as provided by the camera manufacturer, is always present no matter how short or long the exposure, but it isn't truly fixed since it's also random in nature.

By using statistical methods with multiple exposures in our stacking we can also reduce the effect of Read Noise, since it does not scale with the exposure times, nor is it consistent per pixel across multiple exposures - ie it's random.

So this process is just another tool to use to improve our SNR, along with our normal calibration processes and everything else.

As mentioned, if you want faint detail and the flux rate is low, then you have no option but to expose for very long times just to capture enough signal to get above the read noise, or even above the quantisation levels to register as one bit!
So needing more bits of faint flux data to get more detail is a given, rendering this minimum exposure time meaningless.

My 2c

Rally

PS
Greg,
Passing clouds at a 'Dark site' don't add noise, they only reduce signal, and therefore don't really affect your image after normalisation!

Shiraz
29-10-2013, 11:19 PM
Hi Rally. No argument from me overall, but there are two points that possibly need clarification:
1. shot noise from the sky is the main source of noise - the average sky signal itself is a nice pedestal that can be just subtracted - it is only the shot noise part of the sky signal that cannot be easily removed.
2. this discussion is about exposures for subs, not the total overall exposure. clearly, longer overall exposure is always better - it is just longer subs that may be a waste of effort.

Merlin66
30-10-2013, 07:29 AM
Hmmm
Just a question...
You talk about average combine and median combine....
Surely the need is to sum the subs - to lift the total signal... wouldn't a higher total signal improve the shot noise SNR??

BTW in spectroscopy a spectrum with a SNR of around 10 is taken as the minimum for "useable data" (close to limiting magnitude), SNR>50 is good and SNR>100 is almost mandatory for ProAm contributions.....

Shiraz
30-10-2013, 07:45 AM
thanks Ken for the SNR info - I guess in determining limiting magnitude all you need to say is "it is there" for which SNR=3 may be enough for a reasonable level of confidence - all seems a bit arbitrary though. As far as I can see, apart from a scaling factor, there is no SNR difference between sum and average if the stacking result is floating point - with average, the noise is scaled as well as the signal so the SNR is the same as for summation. In practice any sensible average would be implemented by a summation followed by a scaling. Median stacking works very well if there are enough subs, but I do not yet understand the relationship to summation - there is a summary in the Pixinsight documentation.

Merlin66
30-10-2013, 07:57 AM
Hmmm
If the sub has an average (in spectroscopy the Planck distribution curve always has a peak/rise somewhere, so KISS, and use an "average") of say 6400 ADU, then the shot noise = sqrt(6400) = 80 and SNR = 6400/80 = 80.
If you sum 5 of these subs, the total signal becomes 32000 ADU, the shot noise = sqrt(32000) ≈ 179, so SNR = 32000/179 ≈ 179 - a significant improvement?

If you average/median combine then I'd think you'd still end up with (5 x 6400)/5 = 6400 ADU???? and SNR probably less than 80 due to the manipulations???

multiweb
30-10-2013, 08:56 AM
See above from Placidus. Just a different scale.

Merlin66
30-10-2013, 09:05 AM
Marc,
I don't believe it's a scaling issue....
the sd of a 500 ADU sub is not the same as the sd of a 5000 ADU sub:
one is sqrt(500) ≈ 22, the other sqrt(5000) ≈ 71

gregbradley
30-10-2013, 12:28 PM
PS
Greg,
Passing clouds at a 'Dark site' don't add noise they only reduce signal and therefore don't really affect your image after normalisation!

Perhaps but they are still ruined. You usually get nasty halos around bright stars from the cloud at a minimum.

Greg.

Peter.M
30-10-2013, 01:00 PM
Here is the point that I use for my decision to take longer subs: if you agree that longer integration will always be better, then surely it's better to take 20 long subs, where the stacking isn't hit by diminishing returns too hard.

gregbradley
30-10-2013, 03:44 PM
Also when you use these big chips the file sizes become an issue as well.
10 x 32 MB files become slow to process.

Greg.

Bassnut
30-10-2013, 05:42 PM
Ken said Median, not Mean.

In my experience, Median does excellent noise reduction.

To quote directly from the CCD stack manual (with Median) "Data rejection is not necessary (though it is acceptable) because median is a de-facto data-rejection algorithm".

RickS
30-10-2013, 05:52 PM
The diminishing returns are related to the total exposure, not the number of subs (at least so long as your subs are long enough to make read noise insignificant).



That's true but it comes at a cost in SNR - you lose around 20% using median instead of average. An average combination with a well chosen rejection algorithm and parameters will generally give you the best of both worlds. At least, that's what the maths says :) It seems to work in practice as well, at least the couple of times that I've done a comparison.

Cheers,
Rick.

Bassnut
30-10-2013, 06:13 PM
Yes, median has worked well for me only when there was lots of data but the subs were swamped with artifacts, and data rejection was way too severe and hard to set. Otherwise 2% or so reject and average combine was better.

Peter.M
30-10-2013, 07:15 PM
So you think that the 21st sub will give the same noise reduction as the third? Personally, regardless of the maths, top photographers are getting better results from longer subs and that is enough for me.

The table here shows clearly how each subexposure gives you a lower and lower SNR gain regardless of sub length.
http://starizona.com/acb/ccd/advimagingdetail.aspx

Shiraz
30-10-2013, 07:34 PM
if of any interest Peter, modelling suggests a sub length for your system under dark sky of about 20 minutes. How does that compare with your experience?

I am very wary of the concept of diminishing returns for 2 reasons:
1. you restrict your exposure options without any consideration of the system design - scary
2. planetary imagers happily use 10,000 frames and up - each frame helps, even though any individual one could be left out with no practical effect

Peter.M
30-10-2013, 08:08 PM
20 minutes seems about right actually, I am yet to experiment too much at a dark site. I have to admit being able to expose out to 30 minutes now successfully has really helped with narrowband imaging.

I guess the thing about diminishing returns is you still get a return; planetary imagers usually use more frames out of convenience I think. If I could somehow capture 10000 20 minute subs at 50fps I would too :lol:.

Shiraz
30-10-2013, 08:32 PM
that's comforting :) "as long as possible" for NB seems to be right as well - your helix is a beaut

RickS
30-10-2013, 09:05 PM
I believe what I said... sqrt(10 x 10) = sqrt(5 x 20). Once you can ignore read noise the SNR from a stack of 10 x 10 minutes subs is the same as a stack of 5 x 20 minute subs. This is not inconsistent with the Starizona table.

For narrowband filters the number of photons collected is much smaller and longer subs are needed to minimize the effects of read noise.

There's nothing wrong with long subs but there's a point where the gain is small and the risk of losing a whole sub due to a cloud or a tracking error becomes more significant.

avandonk
30-10-2013, 10:25 PM
Here are two threads where you should be able to get the full res raw FITS data of 16 minute subs in 3nm NII, i.e. stacked and only corrected for darks and flats: 9 frames, 21 frames and 33 frames.

http://www.iceinspace.com.au/forum/showthread.php?t=111601

http://www.iceinspace.com.au/forum/showthread.php?t=112594&page=2

Here is an animated gif showing the improvement from nine frames to twenty one.

http://d1355990.i49.quadrahosting.com.au/2013_09/lmc_9vs21.gif

In my opinion more frames would improve the signal to noise even over the 33 frame stack. Unfortunately weather always beats greed!

By the way, 16 minutes at f/3 collects the same number of target photons per pixel as 44 minutes at f/5, with about a third of the thermal noise.

This thread has been very interesting so far.
Bert