#21  
Old 19-10-2012, 11:11 PM
Joshua Bunn (Joshua)
Registered User

Join Date: Oct 2011
Location: Albany, Western Australia
Posts: 1,087
thanks Rick,

On page 40 of the book you referenced, there is an example showing that the S/N ratio is the same whether you stack by summing or by averaging. Both methods produce the same S/N ratio. But wouldn't the summed set have higher pixel values? (Greater signal, but also greater noise, since it hasn't been divided by the number of sub-frames taken.) I know summing isn't used because it has some artifact drawbacks in the final image, but that's not my point.

The 10-minute example you gave: would this be a sum or an average? Not for the purposes of S/N ratio, but for detecting faint objects. How can you not go deeper with one 10-minute exposure than with ten 1-minute exposures? I guess the average of ten 1-minute exposures may show some faint detail.

thanks for the link
Josh

Quote:
Originally Posted by RickS View Post
Not quite right, Josh, but it takes a while to grasp and it's not entirely intuitive.

You don't go any deeper with a 10 minute exposure than you do with a stack of 10 one minute exposures. The difference is that you will incur read noise once in the first example and ten times in the second. How big a difference this makes will depend on the characteristics of your camera.

I'd recommend reading a decent reference on the topic. Craig Stark has written some good articles. See the Signal to Noise series here: http://www.stark-labs.com/craig/articles/articles.html

The book The Handbook of Astronomical Image Processing is a very interesting and informative read too.

Cheers,
Rick.
  #22  
Old 19-10-2012, 11:35 PM
RickS (Rick)
PI cult recruiter

Join Date: Apr 2010
Location: Brisbane
Posts: 10,582
Quote:
Originally Posted by Joshua Bunn View Post
On page 40 of the book you referenced, there is an example showing that the S/N ratio is the same whether you stack by summing or by averaging. Both methods produce the same S/N ratio. But wouldn't the summed set have higher pixel values? (Greater signal, but also greater noise, since it hasn't been divided by the number of sub-frames taken.) I know summing isn't used because it has some artifact drawbacks in the final image, but that's not my point.
Yes, you do get greater pixel values by summing instead of averaging but the absolute values don't matter when you stretch the data. The amount you can stretch is determined by the S/N not by the size of the numbers. You can only stretch until the noise starts to become objectionable and that happens at the same endpoint whether you start with a summed stack or an averaged one.
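A quick simulation makes this concrete. The sketch below is only an illustration; the flux, sub count and pixel count are arbitrary made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

n_subs = 16
flux = 50.0        # mean photons per pixel per sub (made-up number)
n_pix = 100_000

# Each sub is an independent Poisson realisation of the same flux.
subs = rng.poisson(flux, size=(n_subs, n_pix)).astype(float)

summed = subs.sum(axis=0)      # big pixel values
averaged = subs.mean(axis=0)   # small pixel values

def snr(img):
    # Signal-to-noise of a flat field: mean level over pixel scatter.
    return img.mean() / img.std()

# Pixel values differ by a factor of n_subs, but the S/N is identical,
# so both stacks survive exactly the same amount of stretching.
print(snr(summed), snr(averaged))
```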

Quote:
Originally Posted by Joshua Bunn View Post
The 10-minute example you gave: would this be a sum or an average? Not for the purposes of S/N ratio, but for detecting faint objects. How can you not go deeper with one 10-minute exposure than with ten 1-minute exposures? I guess the average of ten 1-minute exposures may show some faint detail.
Once again, sum or average doesn't matter. How deep you go depends mainly on how many photons you collect in total from the target object. You are trying to collect a lot of photons not to make the pixel values bigger and make the object "brighter" but to reduce shot noise as much as possible. If shot noise is low then you can stretch the image more and see more detail.

I say "mainly" above because there are several other factors such as read noise, dark noise, noise from the skyglow, etc. that come into it.

Cheers,
Rick.
  #23  
Old 19-10-2012, 11:41 PM
Joshua Bunn (Joshua)
Registered User

Join Date: Oct 2011
Location: Albany, Western Australia
Posts: 1,087
OMG Rick,

You hit the nail on the head... for me... so to speak. I understand now, but don't count on me not asking another question. Thank you so much. Where/how did you learn that?

Josh
  #24  
Old 19-10-2012, 11:53 PM
RickS (Rick)
PI cult recruiter

Join Date: Apr 2010
Location: Brisbane
Posts: 10,582
Quote:
Originally Posted by Joshua Bunn View Post
OMG Rick,

You hit the nail on the head... for me... so to speak. I understand now, but don't count on me not asking another question. Thank you so much. Where/how did you learn that?

Josh
Glad it helped, Josh, and feel free to ask more questions. I'll answer them if I can.

I have done quite a bit of reading, including HAIP and Craig Stark's articles. I also had a bit of a head start having done image processing and a bunch of relevant maths and stats when studying Computer Science many years ago...
  #25  
Old 19-10-2012, 11:54 PM
alocky (Andrew lockwood)
PI popular people's front

Join Date: Aug 2010
Location: perth australia
Posts: 1,289
Sorry if it's addressed in the links, but it seems that a few people don't 'get' the effect of stacking on random noise. Signal-to-noise ratio (where the noise is random and uncorrelated between frames) improves with the square root of the number of frames stacked. So stacking 9 subs only gives you a threefold improvement in signal to noise. There is a law of diminishing returns with stacking; eventually you reach the point where it doesn't really improve things much. With astro images that point is likely to be at N = 100 or greater - I've never taken enough to find out.
Longer integrations will have a similar effect on thermal noise, as it is random and will tend to cancel out over time, but the desired signal strength increases linearly with integration time - so longer subs should be more effective at improving S/N until you start saturating the sensor. Darks should get rid of the non-random (also called systematic) component.
As other posters have pointed out, if the data is manipulated with sufficient precision in the computer (i.e. 32-bit) it doesn't matter whether you average or just add.
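The square-root behaviour is easy to demonstrate with a quick Monte Carlo. A sketch only; the signal and noise levels are made-up assumptions, and the noise is Gaussian for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)

signal = 100.0   # true signal level (assumption)
sigma = 10.0     # per-frame random noise (assumption)
n_pix = 200_000

def snr_of_stack(n_frames):
    # Average n_frames independent noisy realisations of the same signal.
    frames = signal + rng.normal(0.0, sigma, size=(n_frames, n_pix))
    stacked = frames.mean(axis=0)
    return signal / stacked.std()

snr1 = snr_of_stack(1)
snr9 = snr_of_stack(9)
print(snr9 / snr1)   # close to 3, i.e. sqrt(9): nine subs, threefold improvement
```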
cheers,
Andrew.
  #26  
Old 19-10-2012, 11:59 PM
Joshua Bunn (Joshua)
Registered User

Join Date: Oct 2011
Location: Albany, Western Australia
Posts: 1,087
Thankyou Rick and andrew.
  #27  
Old 20-10-2012, 12:15 AM
RickS (Rick)
PI cult recruiter

Join Date: Apr 2010
Location: Brisbane
Posts: 10,582
Quote:
Originally Posted by alocky View Post
Signal-to-noise ratio (where the noise is random and uncorrelated between frames) improves with the square root of the number of frames stacked. So stacking 9 subs only gives you a threefold improvement in signal to noise. There is a law of diminishing returns with stacking; eventually you reach the point where it doesn't really improve things much.
All true, but the improvement in S/N comes from the additional exposure time, not the stacking process itself. If you do a 10 minute exposure and a 90 minute exposure then the improvement in S/N in the longer exposure relative to the short one will also be a factor of three, the same as if you stack nine 10 minute exposures.

Fun stuff, eh?
  #28  
Old 20-10-2012, 12:29 AM
alocky (Andrew lockwood)
PI popular people's front

Join Date: Aug 2010
Location: perth australia
Posts: 1,289
Quote:
Originally Posted by RickS View Post
All true, but the improvement in S/N comes from the additional exposure time, not the stacking process itself. If you do a 10 minute exposure and a 90 minute exposure then the improvement in S/N in the longer exposure relative to the short one will also be a factor of three, the same as if you stack nine 10 minute exposures.

Fun stuff, eh?
Not entirely true - the signal strength is indeed increased by the effective integration time, but the noise decreases relative to the signal as a result of the stack. Consider the image to be composed of signal + random noise. The next sub is composed of the same signal, but a different realisation of the random noise. Adding the two together doubles the signal, but because the noise components are independent (the 'noisy' pixels don't occur in the same places) they add in quadrature, so the noise amplitude only grows by the square root of 2. With a bit of simple maths it's easy to show that if the noise is truly random, the signal-to-noise is now 1.41 times better than either of the originals. Here's a link to one of the awful books I remember from my undergrad last century that describes a related application of the theory...
<http://books.google.com.au/books?id=oRP5fZYjhXMC&pg=PA185&lpg=PA185&dq=stacking+random+noise&source=bl&ots=C9-fMrZslb&sig=VBTxJVg1HUhsRmiZEIwUO_y7bf8&hl=en&sa=X&ei=MlWBUIeTI5G0iQfSrIC4BA&ved=0CEkQ6AEwBg>

cheers,
Andrew.
  #29  
Old 20-10-2012, 12:53 AM
RickS (Rick)
PI cult recruiter

Join Date: Apr 2010
Location: Brisbane
Posts: 10,582
Quote:
Originally Posted by alocky View Post
Not entirely true - the signal strength is indeed increased by the effective integration time, but the noise decreases relative to the signal as a result of the stack. Consider the image to be composed of signal + random noise. The next sub is composed of the same signal, but a different realisation of the random noise. Adding the two together doubles the signal, but because the noise components are independent (the 'noisy' pixels don't occur in the same places) they add in quadrature, so the noise amplitude only grows by the square root of 2. With a bit of simple maths it's easy to show that if the noise is truly random, the signal-to-noise is now 1.41 times better than either of the originals. Here's a link to one of the awful books I remember from my undergrad last century that describes a related application of the theory...
<http://books.google.com.au/books?id=oRP5fZYjhXMC&pg=PA185&lpg=PA185&dq=stacking+random+noise&source=bl&ots=C9-fMrZslb&sig=VBTxJVg1HUhsRmiZEIwUO_y7bf8&hl=en&sa=X&ei=MlWBUIeTI5G0iQfSrIC4BA&ved=0CEkQ6AEwBg>

cheers,
Andrew.
Sorry Andrew, I'm not convinced. Ignoring inconvenient practical issues like read noise, whether I collect and integrate exactly the same stream of photons in a single exposure or across multiple exposures with the same total length I get exactly the same data in the end.
  #30  
Old 20-10-2012, 12:59 AM
alocky (Andrew lockwood)
PI popular people's front

Join Date: Aug 2010
Location: perth australia
Posts: 1,289
OK - try a reductio ad absurdum then. Why not just add the same sub to itself 10 times?
cheers,
Andrew.
  #31  
Old 20-10-2012, 01:38 AM
RickS (Rick)
PI cult recruiter

Join Date: Apr 2010
Location: Brisbane
Posts: 10,582
Quote:
Originally Posted by alocky View Post
OK - try a reductio ad absurdum then. Why not just add the same sub to itself 10 times?
cheers,
Andrew.
Are we talking at cross purposes? I don't see the relevance of that to my argument.

Simple thought experiment: three photons arrive at a perfect single pixel detector at times t1, t2 and t3 where: 0 < t1 < 10, 0 < t2 < 10, 10 < t3 < 20. If we take a 20 second exposure then we capture 3 photons. If we take two 10 second exposures we capture 2 photons in the first and 1 in the second. We stack/sum the exposures and get 3 photons again. The two situations are equivalent and whether we stack or not has no influence on the outcome.
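The thought experiment scales up to realistic counts, too. A quick simulation (the photon rate and trial count are arbitrary assumptions) compares one 20 s Poisson exposure with the sum of two 10 s exposures:

```python
import numpy as np

rng = np.random.default_rng(2)

rate = 5.0        # photons per second (made-up)
trials = 500_000

single = rng.poisson(rate * 20, size=trials)                    # one 20 s exposure
stacked = rng.poisson(rate * 10, size=(2, trials)).sum(axis=0)  # two 10 s exposures, summed

# Same mean and same variance: splitting the exposure changes nothing,
# as long as read noise is ignored.
print(single.mean(), stacked.mean())
print(single.var(), stacked.var())
```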

I'm happy to be corrected if my simplistic argument has holes in it...
  #32  
Old 20-10-2012, 02:23 AM
alocky (Andrew lockwood)
PI popular people's front

Join Date: Aug 2010
Location: perth australia
Posts: 1,289
None of your subs will have only one or two (or even zero) signal counts per pixel. Each will have thousands, and quantum effects are unlikely to matter at the photon flux levels you need to form an image.
My point about stacking the same sub was to demonstrate that it is the noise that differs between the subs, not the signal.
Anyway - I've said enough.
Cheers,
Andrew.

Quote:
Originally Posted by RickS View Post
Are we talking at cross purposes? I don't see the relevance of that to my argument.

Simple thought experiment: three photons arrive at a perfect single pixel detector at times t1, t2 and t3 where: 0 < t1 < 10, 0 < t2 < 10, 10 < t3 < 20. If we take a 20 second exposure then we capture 3 photons. If we take two 10 second exposures we capture 2 photons in the first and 1 in the second. We stack/sum the exposures and get 3 photons again. The two situations are equivalent and whether we stack or not has no influence on the outcome.

I'm happy to be corrected if my simplistic argument has holes in it...
  #33  
Old 20-10-2012, 12:15 PM
Poita (Peter)
Registered User

Join Date: Jun 2011
Location: NSW Country
Posts: 3,585
I find it interesting that we aren't just trying to increase the signal and reduce the noise; we also have to contend with skyglow, which is signal, not noise, but we want to eradicate it as well.
  #34  
Old 20-10-2012, 12:16 PM
Poita (Peter)
Registered User

Join Date: Jun 2011
Location: NSW Country
Posts: 3,585
In a perfect world, Andrew, you would be right: the signal would not vary between subs. But as photons don't arrive in a perfectly consistent stream, there will be variation in the signal between subs - less variation between long subs than between really short ones.
  #35  
Old 20-10-2012, 12:21 PM
Poita (Peter)
Registered User

Join Date: Jun 2011
Location: NSW Country
Posts: 3,585
Quote:
Originally Posted by RickS View Post
Sorry Andrew, I'm not convinced. Ignoring inconvenient practical issues like read noise, whether I collect and integrate exactly the same stream of photons in a single exposure or across multiple exposures with the same total length I get exactly the same data in the end.
Well, not the same data, as the noise is higher in a single exposure than in stacked exposures.
I think, from memory, that if you stack two frames the noise goes down by the square root of 2.
*edit* found it here:
Quote:
The more images you take the further the random noise will be reduced relative to the signal. If the signal to noise ratio of one image is λ/sqrt(λ) then the signal to noise ratio of the ensemble average is λ/(sqrt(λ)/sqrt(n)) where n is the number of images in the ensemble. In other words, since noise sums in quadrature the noise in an average of n images is reduced by the sqrt(n) compared to the noise in a single image.
source: http://jethomson.wordpress.com/spect...ise-reduction/

Which is why, as Andrew points out, you can't stack the same image multiple times to reduce noise.
  #36  
Old 20-10-2012, 12:49 PM
alocky (Andrew lockwood)
PI popular people's front

Join Date: Aug 2010
Location: perth australia
Posts: 1,289
Nope - in a perfect world there would be no noise.
I think you'll find that the variation in 'measured signal' between subs is in fact well approximated by zero-mean, Gaussian-distributed noise of exactly the type I'm describing. When we talk about signal-to-noise ratios it is the 'desired signal' compared to the noise; what we measure is the sum of both. We need to be careful in discussions of this type to discriminate between the actual measurement, the desired signal and the noise (random and systematic), and not muddle them up.
In terms of the spirit of this thread - there seems to be a significant trade-off between sub length and the number of subs per stack. Obviously 1/1000th of a second is too short a sub, and there are practical limits (aeroplanes, CCD well depth, mount alignment/PE) on how long a sub can be. The optimal sub length is going to depend on aperture, target, focal ratio and the camera.
What is not in question is that the more 'useful' subs you take and subsequently combine, the better the image. The reason I brought up the 'root N' thing is that 30 subs is not 3 times better than 10 in terms of S/N - only root 3 times better.
There are only two important things to know in image processing: Nyquist's theorem and the principle of ensemble averaging. Everything else is a consequence of these.
Anyway - rather than spouting off about it, this is a good opportunity to experiment and share results. Something I'd be doing if the weather hadn't been so uncooperative and the petri dish of daycare hadn't delivered yet another plague into our domicile.

Quote:
Originally Posted by Poita View Post
In a perfect world, Andrew, you would be right: the signal would not vary between subs. But as photons don't arrive in a perfectly consistent stream, there will be variation in the signal between subs - less variation between long subs than between really short ones.
  #37  
Old 20-10-2012, 07:03 PM
RickS (Rick)
PI cult recruiter

Join Date: Apr 2010
Location: Brisbane
Posts: 10,582
Stand back... I'm going to use mathematics

Ignoring read noise, dark noise, and the noise from sky glow the SNR for a stack of images is:

SNR = SQRT(N*F*t), where N is the number of subs, F is the target object flux and t is the time for an individual exposure. (The total signal collected is N*F*t and its shot noise is SQRT(N*F*t), so SNR = N*F*t / SQRT(N*F*t) = SQRT(N*F*t).)

You can find this equation in the Handbook of Astronomical Image Processing, here: http://www.hiddenloft.com/notes/SubExposures.pdf and various other places. It is a consequence of the fact that individual photons arrive as independent random events hence following a Poisson distribution.

If N = 1 and t = 20 mins (a single exposure of 20 minutes) then you get the same SNR as N = 20 and t = 1 min (20 stacked exposures of 1 minute).

There is no magic to stacking compared to a single long exposure of the same total time. The shot noise depends only on the total flux collected.

In the real world where all sorts of noise exist there is one good reason to do as few subs as possible - it minimizes read noise which is incurred every time you read a frame from the sensor.
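The read-noise trade-off can be sketched by adding a term to the equation above: the noise becomes SQRT(N*F*t + N*R^2), because the read noise R is paid once per sub. The camera and target numbers below are hypothetical, picked only to make the effect visible:

```python
import math

def snr(n_subs, flux, t_sub, read_noise):
    """Per-pixel SNR for n_subs exposures of t_sub seconds each,
    counting only target shot noise and read noise (sky glow and
    dark noise are left out, matching the simplification above)."""
    signal = n_subs * flux * t_sub
    noise = math.sqrt(signal + n_subs * read_noise ** 2)
    return signal / noise

# Hypothetical camera/target: 1 e-/s flux, 9 e- read noise.
print(snr(1, 1.0, 1200, 9.0))   # one 20-minute sub
print(snr(20, 1.0, 60, 9.0))    # twenty 1-minute subs
# With read_noise = 0 the two come out identical; with it, the single
# long sub wins because the read-noise penalty is paid once, not 20 times.
```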

There are also good reasons to do many shorter subs - it is difficult to guide accurately for very long exposures, it is better if a cloud or other external event only ruins one of many short subs, sensors have finite well depth and will saturate if the exposure is too long, etc.

So, in practice we balance these factors and do subs of a practical length, say 5 or 10 minutes. Maybe shorter for sensors with shallow well depth and/or fast scopes and longer for sensors with deep wells and/or slow scopes. With narrowband filters the flux is lower (read noise becomes a bigger problem) and sky fog is reduced (so the sensor doesn't saturate so quickly) so longer exposures are typical, perhaps 15 to 30 minutes.

If you're so inclined there are calculations you can do to determine an "optimal" exposure time if you know your camera characteristics and sky glow flux:
http://www.cloudynights.com/item.php?item_id=1622
http://www.hiddenloft.com/notes/SubExposures.pdf
http://starizona.com/acb/ccd/calc_ideal.aspx
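For a flavour of what those calculators do, here is one common rule of thumb: pick the shortest sub for which the sky shot-noise variance swamps the read-noise variance by some factor. The function, the swamp factor and the camera numbers are all assumptions for illustration, not the exact method of any of the linked calculators:

```python
def min_sub_length(read_noise_e, sky_rate_e_per_s, swamp_factor=10.0):
    """Shortest sub (seconds) such that the sky shot-noise variance
    (sky_rate * t) exceeds the read-noise variance (R^2) by swamp_factor."""
    return swamp_factor * read_noise_e ** 2 / sky_rate_e_per_s

# Hypothetical camera/site: 9 e- read noise, 2 e-/s/pixel sky glow.
print(min_sub_length(9.0, 2.0) / 60.0)   # 6.75 minutes for these numbers
```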

Hope this helps somebody and dispels some of the common misconceptions.

Cheers,
Rick.
  #38  
Old 20-10-2012, 07:35 PM
alocky (Andrew lockwood)
PI popular people's front

Join Date: Aug 2010
Location: perth australia
Posts: 1,289
Rick - clearly said. I think we may be in complete agreement, which is odd for the internet.
I stand corrected on the Poisson vs Gaussian nature of the photon noise, but I suspect the central limit theorem will rear its head when the flux goes up to reasonable numbers. Either way, stacking and integrating are the only tools we've got to attack noise.
I've ended up with 10 minute subs for precisely the reasons you outline, the most significant of which seems to be the level of polar alignment I can get with the PAS on my G11 @ 530mm fl. Lazy me.
cheers,
Andrew.
  #39  
Old 20-10-2012, 07:44 PM
RickS (Rick)
PI cult recruiter

Join Date: Apr 2010
Location: Brisbane
Posts: 10,582
Andrew,

I'm glad we got there

The Poisson distribution is almost identical to the Gaussian unless the number of events is small.

I mainly use 5 and 10 minute exposures for LRGB and 20 and 30 minutes for narrowband with a FSQ-106ED (f/5) and a STL11K camera. I have a new camera with a deeper well depth (KAF16803 sensor) and I may play with some longer exposures. Even at 5 minutes I often need to add some shorter exposures with HDR to stop the stars from saturating and losing colour.

Cheers,
Rick.
  #40  
Old 20-10-2012, 08:16 PM
gregbradley
Registered User

Join Date: Feb 2006
Location: Sydney
Posts: 15,215
There are other factors that make 60-minute exposures impractical.

Clouds are one!

Tracking is another.

Flexure is another.

Smoothness of PE is another.

We have to find the sweet spot for our particular setup. Too long an exposure with a KAF8300 is likely to give you bloated bright stars from overexposure, but with a narrowband filter that is just fine.

10 minutes is good for a lot of cameras; 15 minutes is probably better if your tracking and lack of flexure are up to it.

The reality is most systems won't allow too long an exposure due to the above factors. If it's not tracking then it's clouds moving in; if it's not clouds it could be wind; if not that, then small well capacity, etc.

10-15 minutes suits most CCD cameras, and longer for narrowband (the images are noisier as there is less signal and you have to get the signal above the noise floor). I see John Gleason very often uses 40 minutes for his Ha images. His are about as good as you'll see, so that is a good goal to be able to achieve with your system.

As far as combine methods go, it's quite educational to combine a series of Ha images with different combine methods. Ha images often have artifacts in them. Sum leaves the rubbish in place but gets a nice strong signal. Sigma reject gets rid of outliers and even satellite trails if you have enough subs. Median and mean also work well, and I think mean works pretty well at clearing out rubbish from the images.

I am sure there are some who use different combines for different types of images. Perhaps sum for galaxies, sigma reject for nebula shots, mean for Ha, mean or median for flats. It's worth clicking on the various types to see the differences (which tend to be subtle) when next combining some exposures.
Images Plus had a great one for DSLRs which was sigma reject on each RGB channel separately. It worked really well.

Greg.