naskies (Dave), Brisbane - 20-10-2012, 09:21 PM

As Rick mentioned, in statistical theory only the total integration time matters (i.e. the sum of multiple independent Poisson random variables is another Poisson random variable with mean equal to the sum of the individual means).
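
To make that concrete, here's a quick toy numpy simulation (the photon rate, sub length and number of subs are just made-up numbers): sixty stacked 1-sec subs end up with the same mean and variance as a single 60-sec sub.

import numpy as np

rng = np.random.default_rng(0)
rate = 5.0                 # photons/sec landing on one pixel (made-up number)
n_subs, sub_len = 60, 1.0  # sixty 1-sec subs vs one 60-sec sub
trials = 100_000

# Stack of short subs: sum of 60 independent Poisson(5) counts per trial
stacked = rng.poisson(rate * sub_len, size=(trials, n_subs)).sum(axis=1)
# Single long sub: one Poisson(300) count per trial
single = rng.poisson(rate * sub_len * n_subs, size=trials)

print(stacked.mean(), stacked.var())  # both ~300
print(single.mean(), single.var())    # both ~300

Same mean, same variance - in the idealised photon-counting picture there's nothing to choose between them.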

In practice with CCD/CMOS sensors, you're limited by a whole range of factors, such as noise, non-linearity, quantisation, quantum efficiency, well depth, and chip defects. Noise is fairly straightforward, but the others aren't:

* Non-linearity: a perfectly linear sensor would record signal in direct proportion to exposure time (e.g. 10,000 ADU for 30 sec vs 20,000 ADU for 60 sec). Most astro CCDs are very good in this respect, but no CCD is perfectly linear.

* Quantisation: the number of electrons in a pixel is converted into a digital number with limited precision - e.g. a value between 0 and 65,535 for a 16-bit converter. The consequence is that there's a minimum signal strength you can capture for a given exposure duration - anything less is effectively lost (see the sketch after this list).

* Quantum efficiency: similar to the previous point - imperfect QE means you effectively "lose" photons, which also raises the minimum signal strength needed to record *any* signal.

* Well depth: too long an exposure will saturate the wells, causing blooming and a big loss of information in the surrounding pixels. Shorter subs with a higher gain may give you a higher effective dynamic range than a single long sub.

* Chip defects: if you have a dead pixel/column, it doesn't matter how long you expose a single frame for - you'll never recover the missing information. With multiple subs and dithering, you can "fill in the gaps".
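
Here's the quantisation/QE sketch I mentioned - again with made-up numbers for the signal rate and camera gain, and ignoring bias offset and read noise (which in real cameras partly dither the quantisation). A source too faint to clear one ADU step per sub records essentially nothing no matter how many subs you stack, while one long exposure of the same total length does record it:

import numpy as np

rng = np.random.default_rng(1)
rate = 0.3          # source electrons/sec (made-up faint signal)
gain = 5.0          # electrons per ADU (made-up camera gain)
n_subs, sub_len = 1000, 1.0

# Electrons collected, then rounded down to whole ADUs at readout
short_e = rng.poisson(rate * sub_len, size=n_subs)  # mostly 0 or 1 e- per sub
long_e = rng.poisson(rate * sub_len * n_subs)       # ~300 e- in one 1000-sec sub

short_adu = np.floor(short_e / gain).sum()  # essentially every sub rounds down to 0 ADU
long_adu = np.floor(long_e / gain)          # ~60 ADU survives quantisation

print(short_adu, long_adu)  # ~0 vs ~60

Real cameras are kinder than this simplistic model, but the basic "minimum signal per sub" hurdle is real.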

Due to a combination of the above (and other factors), you end up with three zones of behaviour:

(1) Signal is too weak for the short subs: if you don't clear the minimum signal hurdle, you'll never record any information - which is why we can't image mag 36 objects using lots of 1/1000 sec exposures.

(2) Multiple short subs ≈ single long sub: there's a region where stacking short subs gives much the same result as a single long sub (most of us use exposures in this region). However, stacking multiple subs is still beneficial due to chip defects, tracking limitations, unwanted objects passing through the sky, etc.

(3) Signal is too strong for the long sub: the wells saturate in the long sub, but not in the short subs. For example, in my light-polluted back yard I can't see the Horsehead Nebula at all on the back of my DSLR in a single shot - it's overwhelmed by skyglow. By stacking multiple subs, though, it easily pops out.
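
A similar toy sketch for zone (3) - made-up numbers again, no read noise or quantisation, and saturation modelled as simple clipping at the full-well value. A bright sky pins the long sub at full well so the faint object vanishes, while the same total time in short subs keeps every sub in range and the object survives the stack:

import numpy as np

rng = np.random.default_rng(2)
sky_rate, obj_rate = 500.0, 5.0  # e-/sec: bright skyglow plus a faint target (made-up)
full_well = 50_000               # e- well depth (made-up)
n_subs, sub_len = 60, 10.0       # sixty 10-sec subs vs one 600-sec sub

# One 600-sec sub: both the object pixel and a sky-only pixel hit full well
long_obj = np.minimum(rng.poisson((sky_rate + obj_rate) * sub_len * n_subs), full_well)
long_sky = np.minimum(rng.poisson(sky_rate * sub_len * n_subs), full_well)
print(long_obj - long_sky)  # 0 - the object signal is clipped away

# Sixty 10-sec subs: each sub stays well below full well, so the stack keeps the object
short_obj = np.minimum(rng.poisson((sky_rate + obj_rate) * sub_len, n_subs), full_well).sum()
short_sky = np.minimum(rng.poisson(sky_rate * sub_len, n_subs), full_well).sum()
print(short_obj - short_sky)  # roughly 3000 e- of object signal survives

Obviously this leaves out blooming into neighbouring pixels and per-sub read noise, but it's the same effect that lets the Horsehead pop out of my skyglow.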

Interesting discussion! (Please feel free to correct me on any inaccuracies.)