IceInSpace > Equipment > Astrophotography and Imaging Equipment and Discussions

  #1  
Old 28-10-2013, 11:23 PM
Shiraz (Ray)
Registered User

Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
how long should subs be with low read noise CCDs?

A generally accepted rule for choosing the sub length is to make the subs long enough that shot noise in a sub (from the sky background) will overwhelm the read noise that is added at the end of the sub exposure. A widely used equation for determining an appropriate sub length is that from Starizona, which is:

Sub length = constant*(readnoise*readnoise)/skyflux

The key thing about this equation is that the sub length is proportional to the square of the read noise (for a given sky flux). To put some numbers on this, if a system using a camera with a read noise of 10 electrons requires 10 minute subs, a similar camera, but with a read noise of 5 electrons, would only require 2.5 minute subs for the same conditions (half the read noise requires a quarter the sub length). You could use longer subs, but you wouldn’t gain anything practical in signal to noise ratio, since read noise has already been taken out of consideration once subs are longer than 2.5 minutes.
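The rule can be put into a few lines of code. This is a minimal sketch with assumed numbers: the constant k and the sky flux below are invented for illustration, not measured values.

```python
def min_sub_length(read_noise_e, sky_flux_e_per_min, k=5.0):
    """Minimum sub length (minutes) so that the sky shot noise variance
    (sky_flux * t) exceeds the read noise variance (read_noise^2)
    by an assumed factor k."""
    return k * read_noise_e ** 2 / sky_flux_e_per_min

sky = 50.0  # assumed sky background, electrons/pixel/minute

print(min_sub_length(10, sky))  # 10 e- read noise -> 10.0 minutes
print(min_sub_length(5, sky))   # 5 e- read noise  -> 2.5 minutes
```

Halving the read noise quarters the sub length, exactly as in the 10 minute / 2.5 minute example above.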

The new breed of low read noise chips (eg from Sony) really can operate very effectively with short subs – the game is changing.

The other thing to note from the equation is that the sub length is inversely proportional to sky flux – if the sky is bright, you can use shorter subs than if it is dark. Shorter subs do not help with SNR under bright sky, but they do not make it any worse - the shot noise from the sky is so high that it doesn’t matter if you add a bit more read noise.

All this goes out the window with NB - the sky flux in the equation above will be very small and the basic rule is that subs should be as long as possible - low read noise chips will have better SNR, but even they can benefit from long subs in NB.

thanks for reading - discussion welcomed. regards ray

Last edited by Shiraz; 28-10-2013 at 11:40 PM.
  #2  
Old 29-10-2013, 12:11 AM
naskies (Dave)
Registered User
Join Date: Jul 2011
Location: Brisbane
Posts: 1,865
Hi Ray,

Quote:
Originally Posted by Shiraz View Post
A generally accepted rule for choosing the sub length is to make the subs long enough that shot noise in a sub (from the sky background) will overwhelm the read noise that is added at the end of the sub exposure. A widely used equation for determining an appropriate sub length is that from Starizona, which is:

Sub length = constant*(readnoise*readnoise)/skyflux

The key thing about this equation is that the sub length is proportional to the square of the read noise (for a given sky flux).
Rick posted this link in another discussion thread recently on this very topic:

http://www.cloudynights.com/item.php?item_id=1622

(One issue being that the sky-noise-dominating-read-noise equation ignores the target brightness and noise.)

Quote:
To put some numbers on this, if a system using a camera with a read noise of 10 electrons requires 10 minute subs, a similar camera, but with a read noise of 5 electrons, would only require 2.5 minute subs for the same conditions (half the read noise, a quarter the sub length).
I'm not sure that this is correct... SNR = Signal / Noise for a single sub, i.e. it's a linear relationship. If you halve the noise, you can only halve the signal to keep the same SNR.

Adding n separate subs together gives you SNR = n * Signal / (Sqrt(n) * Noise), i.e. the "four times the integration time for half the noise" relationship that we're all familiar with.
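That stacking relationship can be checked numerically. A small sketch (the per-sub signal and noise figures are arbitrary):

```python
import math

def stacked_snr(n_subs, signal_per_sub, noise_per_sub):
    # Signal adds linearly over n subs; independent random noise
    # adds in quadrature, i.e. grows as sqrt(n).
    return n_subs * signal_per_sub / (math.sqrt(n_subs) * noise_per_sub)

one_sub = stacked_snr(1, 100.0, 20.0)    # SNR = 5.0
four_subs = stacked_snr(4, 100.0, 20.0)  # SNR = 10.0: 4x the time, 2x the SNR
```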

Quote:
You could use longer subs, but you wouldn’t gain anything practical in signal to noise ratio, since read noise has already been taken out of consideration once subs are longer than 2.5 minutes.
There's a zone where the SNR depends purely on total integration time, i.e. where the number of subs and the sub length are interchangeable provided that the total integration time remains constant.

However, the main reason for longer subs is to go deeper by avoiding quantisation errors. If your target is so dim that you're only getting 1 photon every 5 mins (e.g. a faint jet from a galaxy) - but you're taking 2.5 min subs - then it'll just be lost in the noise and you won't record a signal at all.

On the other hand, if you take 5 min subs you'd be able to detect a 1 photon brightening over the surrounding background with lots of stretching. However, with an average of 1 photon you won't be able to detect any surface detail - there's no room for contrast. With 30 min subs, you'd average 6 photons per sub... and there'd be enough room to have brighter (e.g. 8 photons) or darker (e.g. 4 photons) regions within the surface for detail.
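The photon-counting picture here can be simulated. The sketch below assumes a rate of one photon per five minutes and draws Poisson counts per sub (stdlib only); averaged over many runs, an hour of short subs and an hour of longer subs collect the same total photons:

```python
import math
import random

def poisson(lam, rng):
    # Knuth's algorithm: count uniform draws whose running
    # product stays above exp(-lam).
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

rng = random.Random(42)
rate = 0.2  # assumed: ~1 photon per 5 minutes

# One hour of imaging split two ways:
short_subs = [poisson(rate * 2.5, rng) for _ in range(24)]  # 2.5 min subs
long_subs = [poisson(rate * 5.0, rng) for _ in range(12)]   # 5 min subs

# Many short subs record 0 photons each, but the total is not lost:
print(sum(short_subs), sum(long_subs))  # both average ~12 over many runs
```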

This is also why long narrowband subexposures show more detail.

Don't forget the other source of noise: non-Poisson/non-random noise such as pixel defects, cosmic rays, planes, satellites, and so on. These will decrease only by dithering and stacking a larger number of subs (individual sub length has little effect).

Quote:
The new breed of low read noise chips (eg from Sony) really can operate very effectively with short subs – the game is changing.
Absolutely - for bright regions where plenty of photons are coming in. However, even low noise sensors will still have quantisation issues.

The complete game changer will be if we get effectively *zero* noise sensors one day: stacking will be done by sum rather than average/median combine. This would also make lucky imaging possible - just take the sharp frames of a video and sum them over the subexposure time. Maybe we'll also have to carry tanks of liquid nitrogen or helium out to our dark sites one day?

Quote:
The other thing to note from the equation is that the sub length is inversely proportional to sky flux – if the sky is bright, you can use shorter subs than if it is dark. Shorter subs do not help with sensitivity under bright sky, but they do not make it any worse - the shot noise from the sky is so high that it doesn’t matter if you add a bit more read noise.
The SNR is worse under bright skies. Although you don't have to expose for as long, the sky shot noise contribution is proportional to the square root of the brightness, i.e. 4x the sky brightness leads to 2x the sky shot noise. To cancel out 2x the sky shot noise, you'll have to increase the total integration time by 4x (but you can use shorter subexposures to do so). Intuitively, most of us have experienced this - shooting shorter subs from light pollution, but stacking a huge number of them to overcome the noise.
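A quick numeric check of that 4x relationship (a sketch only: the target and sky rates are invented, and read noise is ignored since the subs are assumed sky limited):

```python
import math

def sky_limited_snr(target_rate, sky_rate, total_minutes):
    # Faint-target SNR when sky shot noise dominates:
    # signal grows as t, sky shot noise as sqrt(sky_rate * t).
    return target_rate * total_minutes / math.sqrt(sky_rate * total_minutes)

dark = sky_limited_snr(1.0, 25.0, 60.0)           # dark sky, 1 hour
bright = sky_limited_snr(1.0, 100.0, 60.0)        # 4x brighter sky, 1 hour
bright_long = sky_limited_snr(1.0, 100.0, 240.0)  # 4x brighter sky, 4 hours
```

`bright` comes out at exactly half of `dark`, and `bright_long` matches `dark` again: 4x the sky brightness costs 4x the total integration time.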

Quote:
All this goes out the window with NB - the sky flux in the equation above will be very small and the basic rule is that subs should be as long as possible - low read noise chips will have better SNR, but even they can benefit from long subs in NB.
It's not technically just a NB thing - this applies whenever the target is *much* brighter than the sky background, such as NB exposures or even say Orion Nebula under mag 22 dark skies.

This is a good zone to be in, because SNR increases linearly with sub duration (sky noise contribution is effectively zero)... which is why it's still worth shooting Orion Nebula under dark skies.

Quote:
thanks for reading - discussion welcomed. regards ray
Thanks for the discussion. My comments above are based on my understanding of the maths/stats behind image sampling (e.g. the formulae in the Anstey article). Feel free to correct the inevitable mistakes that I've made
  #3  
Old 29-10-2013, 10:26 AM
Shiraz (Ray)
Registered User
Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
thanks for the comprehensive response Dave - pleasure to have a detailed discussion on this topic.

I will try to summarise my response to the points you raise, but that will require a bit of a critique of the Anstey paper - so sorry to go off track a bit. Also, if it sounds a bit pompous in places - sorry..

first off, you questioned whether sub length is proportional to read noise squared. I think that it is - the read noise and signal both increase linearly with multiple subs, but the shot noise only goes up with the square root. If you double the read noise, you need to increase the signal by 4x to get a doubling in shot noise, ie you need to expose for 4x as long if you want the shot noise to cover a doubling of read noise.

The other main points come from the underlying assumptions of the Anstey article, which are:
- you need to look at the noise in the target when you set sub length
- short subs will be messed up by quantisation noise.

I disagree on both points:
1. In any imaging I have done, the bright bits of the target are never an issue - they take care of themselves. The big problem is the noise that blights the dim parts of the scene when the target is barely visible or not present at all. For broadband imaging the primary source of noise in this region is shot noise from the sky background and target signal/ noise is quite unimportant. Hence, I consider that the Smith/Starizona approach is valid and that subs should be chosen to keep read noise well below shot noise from the sky background. Noise is noise and it doesn't matter where it came from - there is nothing special about the target noise.

2. I think that the quantisation issue is a non-starter. To take your example, consider a target with an average of 1 photon every five minutes. Some of the 2.5 minute subs (on average slightly more than half of them) will have no target photons, some will have one and maybe the odd one will have two or more. Add the signals over 12 such subs and you will typically get around 6 target photons - the same as you would get from one long 30 minute exposure. This of course depends on the stacking method - it won't work if the stacking is done by simply averaging the camera signals at the same bit resolution as the original (then you might get either 1 or 0 for the target). However, any stacking system worth its salt will at least add up the total signal and then divide by the number of subs to give an average. If the internal signal representation is floating point, you may get something like "averagesignal = 0.5" and if you want to know how many photons you collected then just multiply by the number of subs - no signal has been lost by having sub signals below 1 photon on average. The final average signal will need to be stretched more than it would for the longer exposure, but the SNR will be exactly the same, since the noise will also have been divided by the number of subs to get an average, ie there is no quantisation issue at all. Even if the stacking system produces a fixed point representation, all is not lost - see http://www.stark-labs.com/craig/reso...thStacking.pdf. Maybe I have it wrong on stacking, but with my current understanding, I do not accept any of the Anstey arguments based on quantisation error.
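Ray's floating-point averaging argument in miniature (the counts below are illustrative; any real stacker's arithmetic will differ in detail):

```python
# 12 short subs of an extremely faint source: most record nothing.
counts = [0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1]

average = sum(counts) / len(counts)  # 0.5 "photons" per sub, kept as a float
recovered = average * len(counts)    # scale back up: 6.0 photons

# Nothing was lost by the per-sub signal being below 1 photon;
# the average is just the sum divided by a constant.
```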

A couple of other comments:
- cameras with 1 electron read noise are available - they just cost an arm and a leg.
- your conclusion that shooting something like the Orion nebula is similar to NB is valid, but only for the bright bits. I find that, to bring out the fine detail around it, you need to take into account the sky noise - the Smith/Starizona approach is still valid. Maybe it isn't on something like the moon or a planet, but then the issue is not one of long exposures anyway.
- the simplest way to resolve some of these questions is by experiment, will try to do so if I can ever again find a nice clear night with good transparency and then force myself to devote some imaging time to something that may be of limited interest.

Cheers and regards Ray

Last edited by Shiraz; 29-10-2013 at 11:33 AM.
  #4  
Old 29-10-2013, 12:12 PM
naskies (Dave)
Registered User
Join Date: Jul 2011
Location: Brisbane
Posts: 1,865
Quote:
Originally Posted by Shiraz View Post
first off, you questioned whether sub length is proportional to read noise squared. I think that it is - the read noise and signal both increase linearly with multiple subs, but the shot noise only goes up with the square root. If you double the read noise, you need to increase the signal by 4x to get a doubling in shot noise, ie you need to expose for 4x as long if you want the shot noise to cover a doubling of read noise.
Actually, with the light of day I think we're talking about slightly different things... my mistake, sorry

Yes, I agree that with all else equal, a camera with half the read noise will only need one quarter the sub length to reach a sky-limited exposure.

(I was thinking of the SNR of an individual sub, where sub length is indeed a linear inverse relationship with read noise, but that doesn't factor into sky-limited exposure duration calculations.)

Quote:
The other main points come from the underlying assumptions of the Anstey article, which are:
- you need to look at the noise in the target when you set sub length
- short subs will be messed up by quantisation noise.

I disagree on both points:
1. In any imaging I have done, the bright bits of the target are never an issue - they take care of themselves. The big problem is the noise that blights the dim parts of the scene when the target is barely visible or not present at all. For broadband imaging the primary source of noise in this region is shot noise from the sky background and target signal/ noise is quite unimportant. Hence, I consider that the Smith/Starizona approach is valid and that subs should be chosen to keep read noise below shot noise from the sky background. Noise is noise and it doesn't matter where it came from - there is nothing special about the target noise.
You don't just need to look at the target noise, but also the signal in the dim areas as you mentioned... [continued below]

Quote:
2. I think that the quantisation issue is a non-starter. To take your example, consider a target with an average of 1 photon every five minutes. Some of the 2.5 minute subs (on average slightly more than half of them) will have no target photons, some will have one and maybe the odd one will have two or more. Add the signals over 12 such subs and you will typically get around 6 target photons - the same as you would get from one long 30 minute exposure. This of course depends on the stacking method - it won't work if the stacking is done by simply averaging the camera signals at the same bit resolution as the original (where you might get either 1 or 0 for the target). However, any stacking system worth its salt will at least add up the total signal and then divide by the number of subs. Then, if the internal signal representation is floating point, you may get something like "averagesignal = 0.5" and if you want to know how many photons you collected then just multiply by the number of subs - nothing has been lost by having sub signals below 1 photon on average. The final average signal will need to be stretched a bit more than it would for the longer exposure, but the SNR will be exactly the same, since the noise will also have been divided by the number of subs to get an average, ie there is no quantisation issue at all. Even if the stacking system produces a fixed point representation, all is not lost - see http://www.stark-labs.com/craig/reso...thStacking.pdf. Maybe I have it wrong on stacking, but with my current understanding, I do not accept any of the Anstey arguments based on quantisation error.
If you don't accept quantisation noise/errors, then may I ask what your explanation is for the limiting magnitude under a given set of conditions?

It clearly applies, otherwise we could all just take huge numbers of 1 min sky limited exposures in the heart of an urban centre, and get nicely detailed mag 30 galaxies...?

In both the Smith and Anstey models, object signals are modelled as Flux*t + ShotNoise*sqrt(t).

The issue that Smith ignores is that the ObjectFlux*t signal term needs to be an integer constant - you physically can't record fractions of a target object electron in one sub. If you're recording 0 electrons for a substantial proportion of subs, then with mean/median combine the signal term is actually 0 under a least-squares fit... those occasional 1-electron events become part of the shot noise (and are indistinguishable from camera read noise).

With a sum combine as you suggest, then yes - those 1 electron subs will register a signal. However, with sum combine noise increases linearly with the number of subs (noise ∝ n), therefore stacking doesn't increase SNR. For mean combine it's SNR ∝ sqrt(n), hence noise is effectively reduced. For our current cameras where read noise >> 0 e-, sum combine isn't practical beyond a few frames at most.

Even with low read noise cameras, a decent sub length is still required if you want to chase the really, really faint stuff... and the limiting factor will still be the target signal dominating over sky shot noise.

Quote:
A couple of other comments:
- cameras with 1 electron read noise are available - they just cost an arm and a leg.
Yep. The maths should still work out the same as with 5 or 10 e- read noise cameras. The absolute game changer would be zero (or very, very, very close to zero) read noise cameras - much like a short wire at room temperature is very low resistance, but still nowhere near having superconductive properties.

Quote:
- your conclusion that shooting something like the Orion nebula is similar to NB is valid, but only for the bright bits. I find that, to bring out the fine detail around it, you need to take into account the sky noise - the Smith/Starizona approach is still valid. Maybe it isn't on something like the moon or a planet, but then the issue is not one of long exposures anyway.
Yes, these SNR calculations are only valid at the individual pixel level.

Quote:
- the simplest way to resolve some of these questions is by experiment, will try to do so if I can ever again find a nice clear night with good transparency.
Anstey included empirical data in his article. I've also done a few experiments myself, though nothing rigorous enough to share publicly.

Quote:
Cheers and regards Ray
Thanks Ray, always a pleasure
  #5  
Old 29-10-2013, 02:27 PM
SpaceNoob (Chris)
Atlas Observatory
Join Date: May 2012
Location: Canberra
Posts: 268
The newer generation low noise sensors tend to have smaller pixels too, which in effect increases the impact of read noise... I am wondering if there is a true reduction in required sub exposure duration, unless binned to optimum sampling. Then again, would binning further reduce your overall read noise, given that it is applied to the logical "binned" pixel?

I understand that smaller pixels give more data (assuming oversampling) for later processing such as deconvolution etc, but if you're binning to a decent sample rate that theoretically matches both optics and seeing, you're further improving read noise too. The smaller well depth of these sensors doesn't mean a whole lot when you're stacking so many subs, you get the dynamic range anyway.
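One way to frame the binning question is the standard read-noise accounting below. This is a sketch with an assumed 9 e- read noise, and it assumes *ideal* hardware binning, which real sensors only approximate (as discussed later in the thread):

```python
import math

read_noise = 9.0  # assumed per-readout noise in electrons

# Ideal hardware 2x2 binning: the charge from 4 pixels is combined
# on-chip and read out once, so the superpixel gets one dose of
# read noise against 4x the signal.
hw_binned = read_noise                 # 9.0 e- per superpixel

# Software 2x2 binning: 4 pixels are read out independently and then
# summed, so their read noises add in quadrature.
sw_binned = read_noise * math.sqrt(4)  # 18.0 e- per superpixel
```

Relative to the 4x larger signal, ideal hardware binning improves the read-noise contribution by a factor of 4, software binning by only 2.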

I'm excited to see where the technology takes things, one can only hope the overall size of the sensors increases, say to the size of an 8300 at a minimum
  #6  
Old 29-10-2013, 02:49 PM
SpaceNoob (Chris)
Atlas Observatory
Join Date: May 2012
Location: Canberra
Posts: 268
I just checked my bias for the 8300 both binned 2x2 and unbinned 1x1.

The read noise value does increase in the 2x2 binned bias; however it is by a factor of ~ 2.1 or so. With there being 4 pixels in the single logical pixel, would I be correct in assuming there is an improvement of around half in this case?

Not sure how the sony sensor would look here or if I am heading down a rabbit hole.
  #7  
Old 29-10-2013, 03:57 PM
Merlin66 (Ken)
Registered User
Join Date: Oct 2005
Location: Junortoun Vic
Posts: 8,904
Interesting discussions...
My "Bible", "Handbook of CCD Astronomy" by Steve Howell, covers the whole gamut of noise and noise generation (p 50 - 82)
on the effects of On Chip binning he says-
"Binning of CCD pixels decreases the image resolution, usually increases the final SNR....and reduces the total readout time"
"we would get a final signal level equal to ~4 times each single pixel value, but only one times the read noise"

He explains (p75-p82) in painful detail the "CCD Equation" for SNR calculation used by the professional astronomers around the world.
  #8  
Old 29-10-2013, 04:14 PM
RickS (Rick)
PI cult recruiter
Join Date: Apr 2010
Location: Brisbane
Posts: 10,584
Quote:
Originally Posted by Merlin66 View Post
"we would get a final signal level equal to ~4 times each single pixel value, but only one times the read noise"
Not always true in practice, unfortunately... at least not with some KAF sensors.
  #9  
Old 29-10-2013, 06:28 PM
gregbradley
Registered User
Join Date: Feb 2006
Location: Sydney
Posts: 17,871
A lot of this theory is tempered by the reality of imaging.
Clouds, light pollution, poor tracking, bad autoguider performance, flexure, lack of clear nights, lack of time due to work.

So you tend to end up with some sort of subexposure length that optimises both performance of the camera and performance of your tracking in your setup.

Also 40 minute subs sound great if you have the tracking and weather for it. The occasional fast cloud would mean 40 minute subs are unwise.

Poor tracking would make it impractical anyway.

Another factor is the well depth of the camera. These smaller pixel cameras have small well depth and that is one of their weaknesses.
So bright stars can bloat in a fast system in long exposures doing LRGB.

So well depth is another factor to consider. Its not really an issue if you are doing narrowband. Shorter exposures on bright objects using a camera with a shallow well depth is the strategy to prevent bloat of bright stars or losing star colours.

Greg.
  #10  
Old 29-10-2013, 07:29 PM
Placidus (Mike and Trish)
Narrowing the band
Join Date: Mar 2011
Location: Euchareena, NSW
Posts: 3,719
Sum versus mean combine

Assuming floating point (not integer) arithmetic, there is no difference in signal to noise ratio between a sum combine and a mean combine.

Let's define signal to noise ratio of anything at all as coefficient of variation, i.e. value divided by standard deviation. Suppose you have 100 subs. The mean is the sum divided by 100. Dividing by 100 is just like changing centimeters to meters. It is a change of scale.

Suppose you have measured the length of a road in centimeters, and you are accurate to 10%. Re-expressing the length of the road in meters, or in kilometres, or in light years, won't increase the percentage accuracy or the fractional accuracy.

snr(sum) = sum / sd(sum)
snr(mean) = mean / sd(mean)
= (sum/100) / sd(sum/100)

what is sd(sum/100) ? Easy:

var(sum/100) = var(sum) / (100 * 100)
sd(sum/100) = sqrt[var(sum) / (100 * 100)]
= sd(sum) / 100

snr(mean) = sum/100 / sd(sum / 100)
= sum/100 / [ sd(sum) / 100 ]
= sum / sd(sum)
= snr(sum)

It's late and the wine was good and I've probably made 27 howling typos in the above but:

Summary: Dividing by a constant does not change the coefficient of variation. Changing from sum to mean is just a change of scale, like meters to centimeters. It does not change the accuracy of our photo. It does not change the signal to noise ratio.
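This argument can be checked with a small simulation. A sketch, with invented Gaussian sub statistics:

```python
import math
import random

def coeff_of_variation(samples):
    # "SNR" as defined above: mean divided by standard deviation.
    m = sum(samples) / len(samples)
    sd = math.sqrt(sum((s - m) ** 2 for s in samples) / len(samples))
    return m / sd

rng = random.Random(0)
sums, means = [], []
for _ in range(500):  # 500 independent stacks...
    subs = [rng.gauss(10.0, 3.0) for _ in range(100)]  # ...of 100 subs each
    sums.append(sum(subs))
    means.append(sum(subs) / 100)

# Dividing by a constant rescales mean and sd identically, so the
# coefficient of variation of sum-stacks and mean-stacks agrees.
```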
  #11  
Old 29-10-2013, 08:02 PM
Peter.M
Registered User
Join Date: Sep 2011
Location: Adelaide
Posts: 947
Quote:
Originally Posted by Shiraz View Post
You could use longer subs, but you wouldn’t gain anything practical in signal to noise ratio
This seems counter intuitive to me. If a one second sub is insufficient to overwhelm read noise but a 2 minute sub can, that means that the image signal is increasing at a faster rate than the read noise of the camera. Logically then a longer sub would increase the separation between the read noise and the signal. Obviously this is assuming that the one source of noise is the read noise which is not the case.
  #12  
Old 29-10-2013, 09:13 PM
Shiraz (Ray)
Registered User
Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
Quote:
Originally Posted by naskies View Post

If you don't accept quantisation noise/errors, then may I ask what your explanation is for the limiting magnitude under a given set of conditions?

It clearly applies, otherwise we could all just take huge numbers of 1 min sky limited exposures in the heart of an urban centre, and get nicely detailed mag 30 galaxies...?

With a sum combine as you suggest, then yes - those 1 electron subs will register a signal. However, with sum combine noise increases linearly with the number of subs (SNR ∝ n) therefore stacking doesn't increase SNR. For mean combine it's SNR ∝ sqrt(n)), hence noise is effectively reduced. For our current cameras where read noise >> 0 e-, sum combine isn't practical beyond a few frames at most.

Anstey included empirical data in his article. I've also done a few experiments myself, though nothing rigorous enough to share publicly.
Hi again Dave. As I understand it, you reach limiting magnitude when a dim target has an SNR of about 3. My system typically produces a few hundred electrons of noise per sub from background sky, so an object at the limiting magnitude needs to produce maybe 1000 electrons. This is way above any possible quantisation noise, which really is limited to 1 electron either way - quantisation noise seems to be a non-event at my system limit. We cannot reach very dim targets simply because we have to image through the bright sky and we have a fixed noise background. It just takes too long to integrate much below the nominal sky levels.
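To put rough numbers on that (a sketch; the per-sub sky noise figure is assumed, loosely matching the "few hundred electrons" description above):

```python
sky_noise_per_sub = 300.0  # assumed sky shot noise in one sub, electrons

# A marginal detection needs SNR ~ 3 over that noise floor:
needed_signal = 3 * sky_noise_per_sub  # 900 electrons from the target

# Stacking n subs relaxes this (noise grows as sqrt(n) while target
# signal grows as n), but the threshold remains hundreds of electrons
# above the ~1 e- scale at which quantisation effects operate.
```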

Sum combine and mean combine have the same result if implemented in floating point - the only difference is a fixed scaling factor (the sub number). SNR increases with Sqrt(N) in both cases.

Anstey talked about empirical results, but I couldn't find any hard data in his article - might have been me not understanding what he was presenting though.

Quote:
Originally Posted by SpaceNoob View Post
I just checked my bias for the 8300 both binned 2x2 and unbinned 1x1.

The read noise value does increase in the 2x2 binned bias; however it is by a factor of ~ 2.1 or so. With there being 4 pixels in the single logical pixel, would I be correct in assuming there is an improvement of around half in this case?

Not sure how the sony sensor would look here or if I am heading down a rabbit hole.
From what I have read, it seems that all sensors implement a compromise process in binning - you don't get the full gain expected from binning, and there is even some argument that software binning after the event may be a better approach. I understand that manufacturers may change the internal gain to keep the output stages from severe overload, so test data may be hard to interpret. I haven't tested the performance of the 694 when binned, except to ensure that it worked OK.

Quote:
Originally Posted by gregbradley View Post
A lot of this theory is tempered by the reality of imaging.
Clouds, light pollution, poor tracking, bad autoguider performance, flexure, lack of clear nights, lack of time due to work.

So you tend to end up with some sort of subexposure length that optimises both performance of the camera and performance of your tracking in your setup.

Also 40 minute subs sound great if you have the tracking and weather for it. The occasional fast cloud would mean 40 minute subs are unwise.

Poor tracking would make it impractical anyway.

Greg.
Hi Greg - agree, the theory is only ever a starting point, but it is way better than waving one's finger in the wind and hoping - which is more or less what I used to do.
Quote:
Originally Posted by Placidus View Post
Assuming floating point (not integer) arithmetic, there is no difference in signal to noise ratio between a sum combine and a mean combine.


Suimmary: Dividing by a constant does not change the coefficient of variation. Changing from sum to mean is just a change of scale, like meters to centimeters. It does not change the accuracy of our photo. It does not change the signal to noise ratio.
agree

Quote:
Originally Posted by Peter.M View Post
This seems counter intuitive to me. If a one second sub is insufficient to overwhelm read noise but a 2 minute sub can, that means that the image signal is increasing at a faster rate than the read noise of the camera. Logically then a longer sub would increase the separation between the read noise and the signal. Obviously this is assuming that the one source of noise is the read noise which is not the case.
Hi Peter, the read noise is a fixed injection at the end of a sub. The signal from the sky rises as the sub length increases, and eventually you get to a point where the signal is large enough that the associated shot noise completely swamps the single burst of read noise. Beyond that there is no point in having longer subs: read noise has been removed from consideration and the SNR is determined for all practical purposes by the shot noise, which you cannot do anything about. That is the basis for all of the sub length calculators. You could use fewer but longer subs if you wished, but you would not significantly change the SNR in the final combined image.

Last edited by Shiraz; 29-10-2013 at 11:15 PM.
  #13  
Old 29-10-2013, 09:33 PM
rally
Registered User
Join Date: Sep 2007
Location: Australia
Posts: 896
Ray,

I assume we are talking about sky limited exposures, and that this is about DSO style imaging where we have faint targets with relatively low flux levels?

Re " . . . to make the subs long enough that shot noise in a sub (from the sky background . . .) "
Is it the Shot Noise or is it actually the Sky Noise itself that we are using to hide the Read Noise ?
I think its the Sky Noise

I'll have an attempt at this !

The purpose of determining a sub exposure time (using this method) is usually to provide the imager with a minimum exposure time that reduces as much image noise as one has control over - that is, BTW, after all the normal image calibration and gear tuning has been done.

It's not "The" time or the best time or even the maximum time - it's just the minimum time that satisfies a particular basic goal given the local sky conditions, the imaging system and the camera system.

After all, there are many other constraints or preferred goals that might also dictate a longer or shorter exposure time for a given situation!

More specifically, since we cannot control Sky Noise at our given site (other than by relocating !), the goal of producing a Sky Limited exposure is to minimise camera Read Noise as a significant contributor of total image noise.
ie to reduce or minimise the effects of noise over which we have some control compared to the noise over which we have no control.

This is achieved by limiting Read Noise to about 3-10% of sky noise, so that the total contribution of Read Noise, a 'fixed' constant determined by the camera's CCD and electronics, is swamped by the higher level of Sky Noise over which we have no control.
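As arithmetic, with an assumed 5 e- camera and read noise held to 5% of sky noise (an invented point inside that 3-10% range):

```python
read_noise = 5.0  # assumed camera read noise, e-
noise_ratio = 20  # sky shot noise 20x read noise, i.e. read noise at 5%

sky_shot_noise = read_noise * noise_ratio  # 100 e- of sky shot noise
sky_signal = sky_shot_noise ** 2           # 10,000 e- of sky background

# So the sub must run long enough to accumulate roughly 10,000 sky
# electrons per pixel before read noise is comfortably buried.
```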

The 'fixed' value of Read Noise, as provided by the camera manufacturer, is always present no matter how short or long the exposure, but it isn't truly fixed since it's also random in nature.

By using statistical methods with multiple exposures in our stacking we can also reduce the effect of Read Noise, since it does not scale with exposure time, nor is it consistent per pixel across multiple exposures - ie it's random.

So this process is just another tool to use to improve our SNR, along with our normal calibration processes and everything else.

As mentioned, if you want faint detail and the flux rate is low, then you have no option but to expose for very long times just to capture enough signal to get above the read noise - or even above the quantisation level, to register as even one bit!
That more faint-flux data is needed to get more detail is a given, rendering this minimum exposure time meaningless in that case.
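The sky-limited criterion being discussed can be sketched numerically. A minimal Python sketch, assuming one common form of the rule in which read noise is allowed to inflate the per-sub noise by at most a chosen fraction (5% here); the read-noise and sky-flux numbers are hypothetical:

```python
def min_sub_length(read_noise_e, sky_flux_e_per_s, max_noise_increase=0.05):
    """Shortest sub (seconds) for which adding read noise inflates the
    per-sub noise by no more than `max_noise_increase` over the
    sky-shot-noise-only case.

    Per-sub noise: sqrt(sky_flux*t + read_noise**2).
    Requiring this <= (1 + p) * sqrt(sky_flux*t) and solving for t gives
        t >= read_noise**2 / (sky_flux * ((1 + p)**2 - 1)),
    i.e. the Starizona form: constant * readnoise^2 / skyflux.
    """
    p = max_noise_increase
    return read_noise_e**2 / (sky_flux_e_per_s * ((1.0 + p) ** 2 - 1.0))

# Hypothetical numbers: 1 e-/s sky flux, 10 e- vs 5 e- read noise.
t10 = min_sub_length(10.0, 1.0)  # ≈ 976 s
t5 = min_sub_length(5.0, 1.0)    # ≈ 244 s: half the read noise, a quarter the sub length
```

Note how halving the read noise quarters the minimum sub length, which is the scaling described at the top of the thread.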

My 2c

Rally

PS
Greg,
Passing clouds at a 'Dark site' don't add noise, they only reduce signal, and therefore don't really affect your image after normalisation!
  #14  
Old 29-10-2013, 10:19 PM
Shiraz's Avatar
Shiraz (Ray)
Registered User

Shiraz is offline
 
Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
Hi Rally. no argument from me overall, but there are two points that possibly need clarification:
1. shot noise from the sky is the main source of noise - the average sky signal itself is a nice pedestal that can be just subtracted - it is only the shot noise part of the sky signal that cannot be easily removed.
2. this discussion is about exposures for subs, not the total overall exposure. clearly, longer overall exposure is always better - it is just longer subs that may be a waste of effort.

Last edited by Shiraz; 30-10-2013 at 06:31 AM.
  #15  
Old 30-10-2013, 06:29 AM
Merlin66's Avatar
Merlin66 (Ken)
Registered User

Merlin66 is offline
 
Join Date: Oct 2005
Location: Junortoun Vic
Posts: 8,904
Hmmm
Just a question...
You talk about average combine and median combine....
Surely the need is to sum the subs - to lift the total signal...a higher total signal would reduce the shot SNR??

BTW in spectroscopy a spectrum with a SNR of around 10 is taken as the minimum for "useable data" (close to limiting magnitude), SNR>50 is good and SNR>100 is almost mandatory for ProAm contributions.....
  #16  
Old 30-10-2013, 06:45 AM
Shiraz's Avatar
Shiraz (Ray)
Registered User

Shiraz is offline
 
Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
Quote:
Originally Posted by Merlin66 View Post
Hmmm
Just a question...
You talk about average combine and median combine....
Surely the need is to sum the subs - to lift the total signal...a higher total signal would reduce the shot SNR??

BTW in spectroscopy a spectrum with a SNR of around 10 is taken as the minimum for "useable data" (close to limiting magnitude), SNR>50 is good and SNR>100 is almost mandatory for ProAm contributions.....
thanks Ken for the SNR info - I guess in determining limiting magnitude all you need to say is "it is there" for which SNR=3 may be enough for a reasonable level of confidence - all seems a bit arbitrary though. As far as I can see, apart from a scaling factor, there is no SNR difference between sum and average if the stacking result is floating point - with average, the noise is scaled as well as the signal so the SNR is the same as for summation. In practice any sensible average would be implemented by a summation followed by a scaling. Median stacking works very well if there are enough subs, but I do not yet understand the relationship to summation - there is a summary in the Pixinsight documentation.
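The sum-versus-average equivalence described here is easy to demonstrate. A minimal NumPy sketch with synthetic subs - the signal level, noise level and sub count are arbitrary stand-ins, not measured values:

```python
import numpy as np

rng = np.random.default_rng(42)
n_subs, signal, noise = 50, 100.0, 10.0

# Synthetic subs: constant signal plus Gaussian noise standing in for
# shot + read noise, one row per sub.
subs = signal + rng.normal(0.0, noise, size=(n_subs, 10_000))

summed = subs.sum(axis=0)
averaged = subs.mean(axis=0)

snr_sum = summed.mean() / summed.std()
snr_mean = averaged.mean() / averaged.std()

# Averaging is summation followed by division by n_subs, which scales the
# signal and the noise identically, so the two SNRs agree.
```

In floating point the two results differ only by rounding; the distinction only matters with integer arithmetic, where averaging can clip or quantise.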

Last edited by Shiraz; 30-10-2013 at 06:55 AM.
  #17  
Old 30-10-2013, 06:57 AM
Merlin66's Avatar
Merlin66 (Ken)
Registered User

Merlin66 is offline
 
Join Date: Oct 2005
Location: Junortoun Vic
Posts: 8,904
Hmmm
If the sub has an average (in spectroscopy the Planck distribution curve always has a peak/rise somewhere, so KISS, and use an "average") of say 6400 ADU, then shot noise = sqrt(6400) = 80, and SNR = 6400/80 = 80.
If you sum 5 of these subs, the total signal becomes 32000 ADU, the shot noise = sqrt(32000) ≈ 179, and SNR ≈ 179 - a significant improvement?

If you average/median combine then I'd think you'd still end up with (5 x 6400)/5 = 6400 ADU???? and SNR probably less than 80 due to the manipulations???
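The arithmetic in this post, written out as an idealised photon-counting sketch (6400 ADU and 5 subs are the post's example numbers); it also shows why an average combine keeps the summed SNR rather than dropping back to 80:

```python
import math

adu_per_sub, n = 6400, 5

shot_single = math.sqrt(adu_per_sub)     # 80.0
snr_single = adu_per_sub / shot_single   # 80.0

total = n * adu_per_sub                  # 32000
shot_sum = math.sqrt(total)              # ≈ 178.9
snr_sum = total / shot_sum               # ≈ 178.9, i.e. 80 * sqrt(5)

# An average combine divides the summed signal *and* the summed noise by n,
# so the SNR of the average equals the SNR of the sum, not the single-sub 80.
snr_mean = (total / n) / (shot_sum / n)  # ≈ 178.9
```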
  #18  
Old 30-10-2013, 07:56 AM
multiweb's Avatar
multiweb (Marc)
ze frogginator

multiweb is offline
 
Join Date: Oct 2007
Location: Sydney
Posts: 22,060
Quote:
Originally Posted by Merlin66 View Post
Hmmm
Just a question...
You talk about average combine and median combine....
Surely the need is to sum the subs - to lift the total signal...a higher total signal would reduce the shot SNR??
See above from Placidus. Just a different scale.

Quote:
Assuming floating point (not integer) arithmetic, there is no difference in signal to noise ratio between a sum combine and a mean combine.
  #19  
Old 30-10-2013, 08:05 AM
Merlin66's Avatar
Merlin66 (Ken)
Registered User

Merlin66 is offline
 
Join Date: Oct 2005
Location: Junortoun Vic
Posts: 8,904
Marc,
I don't believe it's a scaling issue....
the sd of a 500 ADU sub is not the same as the sd of a 5000 ADU sub.
One is sqrt(500) ≈ 22, the other sqrt(5000) ≈ 71.
  #20  
Old 30-10-2013, 11:28 AM
gregbradley's Avatar
gregbradley
Registered User

gregbradley is offline
 
Join Date: Feb 2006
Location: Sydney
Posts: 17,871
Quote:
Originally Posted by rally View Post
PS
Greg,
Passing clouds at a 'Dark site' don't add noise, they only reduce signal, and therefore don't really affect your image after normalisation!

Perhaps, but such subs are still ruined. You usually get nasty halos around bright stars from the cloud, at a minimum.

Greg.