#1
19-10-2012, 05:32 PM
Joshua Bunn (Joshua)
Question: subs vs single long exposure

Hi everyone,

I looked through the archives but couldn't find what I was looking for, so...

I've been doing some research on other sites about the total exposure length of a single 60 min sub vs, say, 10 x 6 min subs: which one would be deeper, and how does the signal-to-noise ratio (SNR) compare? One such thread here http://cs.astronomy.com/asy/astro_im...5/t/49285.aspx asks a similar question: which collects more, 10 x 6 min subs combined or a single 60 min sub? So which final calibrated image would have captured more data? Does this depend on whether the subs are averaged or summed?

One member on the above forum says either way it's still the same amount of total exposure time, which I would agree with. However, which exposure method would give the deeper image, or pixels with higher readings (for signal, that is)?

When you hear an image has been "summed", does that mean literal addition, i.e. 2+2=4? Or is "adding" just a loosely used term? If it is literal addition, would the pixel values of a final stack of 6 sub-exposures summed this way be 6 times the pixel value of 1 sub (talking about photon signal here)?

Then when you combine by the median or averaging method, I would imagine the pixel values from signal in the final stack would be much closer to those in a single sub-exposure, because the pixel values have not been summed, they have been averaged.

If pixels were summed by addition and you exposed to half your full well capacity, then summed 3 subs together, would that overexpose that area of sky?
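A quick numerical sketch of the sum-vs-average question (all numbers here are made up for illustration: a hypothetical 100 ADU signal and 10 ADU of random noise per sub): the summed stack comes out about n times one sub, the averaged stack about equal to one sub, and the two differ only by that constant factor n.

```python
import random
import statistics

random.seed(42)

SIGNAL = 100.0   # hypothetical signal per sub (ADU)
NOISE = 10.0     # hypothetical random noise per sub (ADU, 1-sigma)
N = 6            # number of subs

# The same pixel in 6 sub-exposures: true signal plus random noise.
subs = [SIGNAL + random.gauss(0, NOISE) for _ in range(N)]

summed = sum(subs)                # roughly N times one sub
averaged = statistics.mean(subs)  # roughly equal to one sub

print(f"one sub  ~ {subs[0]:.1f}")
print(f"summed   ~ {summed:.1f}  (about {N}x one sub)")
print(f"averaged ~ {averaged:.1f}  (about 1x one sub)")
```

Since the sum is exactly n times the average, the two stacks carry identical information; neither is "deeper" than the other, they just sit at different brightness scales.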

Some of these paragraphs may contradict each other because of my uncertainty about how it all works.

Thanks for any advice
Josh
#2
19-10-2012, 05:40 PM
Poita (Peter)
No, they are not equal, otherwise you could take 600 one second subs and it would be the equivalent of one 10 minute sub. Which of course is not true. Many faint objects will not be there at all in a one second sub, so it doesn't matter how many you take, there is no information there to sum/average/median or otherwise process.

Long subs are needed when there are very few photons around and the sky is nice and light-pollution-free. You need a long exposure to pick up the fainter details.

Now of course the problem is that, even under perfect skies, the noise in the camera increases the longer the sub, hence the need to take lots of subs to cancel out the noise and keep the faint detail.

After a certain point though, the CCD sensor gets saturated (i.e. the wells are full, or 'the buckets overflow'), detail gets washed out, so a longer sub isn't always better. It very much depends on the object in question, the camera, the amount of light pollution, the seeing etc.

So in general (this is just what I do, which is probably wrong) I take subs of a long enough length to capture the detail without blowing out any areas, then I take longer subs that blow out the brighter areas but capture more detail in the darker bits. Then I combine the whole mess, swear a lot, wonder why I bother, get a very disappointing image, start drafting a 'for sale' thread, have a juice, try again and get that buzz when the data seems to be revealed out of nowhere, then start planning my next night....
#3
19-10-2012, 06:10 PM
Joshua Bunn (Joshua)
Haha... that's funny, in a good way. Thanks Peter, point taken. So what combining method do you use?

Quote:
Originally Posted by Poita View Post

Now of course the problem is that, even under perfect skies, the noise in the camera increases the longer the sub, hence the need to take lots of subs to cancel out the noise and keep the faint detail.....
Would this be thermal noise you are referring to? In which case, a good dark library and cooling are essential?

So my query is really about the 7th post in that link I referenced in my OP, and the ninth post's response to it (the 8th post is also interesting). What does everyone think about what's being said there?

Josh
#4
19-10-2012, 06:12 PM
multiweb (Marc)
Hmmm.... I don't completely agree with the above. If you think of your aperture as a bucket and photons as drops of rain, you will collect as much water whether you get the bucket out for 10x60s or 1x600s.

The faint stuff might drop only one or two photons per minute on your CCD, but that will still compound as data vs. noise when you stack and increase your SNR.

The only disadvantage of having a lot of shorter subs, in my experience, is if your camera's readout noise is significant. Then 1x600s is better than 10x60s.
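One way to see the read-noise trade-off is to put rough numbers on it. This is only a sketch with made-up values (2 photons/s from the target, 1 photon/s of sky, 8 e- of read noise per readout); the point is that the total signal is identical in every case, and only the per-sub read-noise term changes:

```python
import math

flux = 2.0        # hypothetical target photons/sec on one pixel
sky = 1.0         # hypothetical sky background photons/sec
read_noise = 8.0  # hypothetical read noise, electrons RMS per readout

def stack_snr(n_subs, sub_len):
    """SNR of n_subs stacked subs of sub_len seconds each."""
    signal = flux * n_subs * sub_len
    # Shot noise from target + sky, plus read noise paid once per sub.
    noise = math.sqrt(signal + sky * n_subs * sub_len
                      + n_subs * read_noise ** 2)
    return signal / noise

print(f"1 x 600s : SNR = {stack_snr(1, 600):.1f}")
print(f"10 x 60s : SNR = {stack_snr(10, 60):.1f}")
print(f"600 x 1s : SNR = {stack_snr(600, 1):.1f}")
```

With these numbers the 10x60s stack is only slightly behind the single 600s sub, but 600 one-second subs come out far worse, because the read noise is paid 600 times while the signal stays the same.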
#5
19-10-2012, 06:21 PM
Poita (Peter)
I see what you mean, but the photons are not necessarily appearing every second, and the subs aren't contiguous.
So if, say, a photon (drop of rain) arrives every 10 seconds or so, and your bucket is being emptied every other second (your chip is being read), then you may never collect those photons at all.
Also, if you were only capturing one photon, instead of 10 (with a sub 10x longer) then it would be harder to distinguish from the noise floor. With a 'quiet' sensor, this could make a big difference.

But I could be completely wrong, I'm only going on a thought experiment here, I haven't gone back to the books to get hard data.
Just doing some more imagining: sensitivity may be an issue. It might take a certain number of photons to trigger a level change in that pixel on the camera. Say it takes three photons within 10 seconds to push it over the boundary; if you had 5 second subs, only one or two photons would hit, and it would never 'click over' to the next level, so you would lose that info.
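The photon-arrival half of this thought experiment can actually be simulated. The sketch below uses an arbitrary rate of one photon every ~10 s and an idealised sensor (no readout dead time, no quantisation threshold): it draws Poisson arrival times and counts what lands in one 600 s bucket vs 600 one-second buckets.

```python
import random

random.seed(1)

RATE = 0.1  # hypothetical: one photon every ~10 seconds on average

def photons_in(duration):
    """Count Poisson photon arrivals during one exposure."""
    t, count = 0.0, 0
    while True:
        t += random.expovariate(RATE)  # waiting time to the next photon
        if t > duration:
            return count
        count += 1

one_long = photons_in(600.0)
many_short = sum(photons_in(1.0) for _ in range(600))

print(f"1 x 600s caught {one_long} photons")
print(f"600 x 1s caught {many_short} photons in total")
```

Both totals hover around 60, so the photons themselves aren't lost at sub boundaries; readout dead time, read noise, and any per-pixel trigger threshold are separate (and real) effects.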

I'll try taking 60 one second subs and a 1 minute sub and compare the two.

Also, the longer the shutter is open, the more 'averaging' happens automatically rather than averaging later if you know what I mean.

If short subs could capture everything, then I'd always be doing one second ones; no aircraft or satellite trails to worry about.

Surely someone has already given it a go.

There was a discussion here a while back on how to calculate the maximum effective sub time for your light pollution and camera, I'll try and find it.

Quote:
Originally Posted by multiweb View Post
Hmmm.... don't completely agree with the above. If you think of your aperture as a bucket and photons as drop of rain you will collect as much water if you get the bucket out 10x60s or 1x600s.

The faint stuff might drop one or two photons per minute on your CCD but that will still compound as data vs. noise when you stack and increase your SNR.

The only disadvantage of having a lot of shorter subs in my experience is if your camera readout noise is important. Then 1x600s is better than 10x60s.
#6
19-10-2012, 06:29 PM
multiweb (Marc)
Quote:
Originally Posted by Poita View Post
I see what you mean, but the photons are not necessarily appearing every second, and the subs aren't contiguous.
So if say a photon (drop of rain) arrives every 10 seconds or so, and your bucket is being emptied every other second (your chip is being read) then you may never collect those photons at all.
No, because some of your subs will catch some and others won't.

Quote:
Originally Posted by Poita View Post
Also, if you were only capturing one photon, instead of 10 (with a sub 10x longer) then it would be harder to distinguish from the noise floor. With a 'quiet' sensor, this could make a big difference.
That's what calibration takes care of.

Quote:
Originally Posted by Poita View Post
But I could be completely wrong, I'm only going on a thought experiment here, I haven't gone back to the books to get hard data.

I'll try taking 600 one second subs and a 10 minute sub and compare the two.

Also, the longer the shutter is open, the more 'averaging' happens automatically rather than averaging later if you know what I mean.

If short subs could capture everything, then I'd be always doing one second ones, no aircraft or satellite trails to worry about

Surely someone has already given it a go.

There was a discussion here a while back on how to calculate the maximum effective sub time for your light pollution and camera, I'll try and find it.
I'd agree that, practically, the longer you can go the better, under dark skies. I have a mate who routinely does subs in excess of 1 hour, but he has done a lot of homework on guiding and flexure. And it works.
#7
19-10-2012, 06:33 PM
Poita (Peter)
There is a great post here:
http://www.cloudynights.com/ubbthrea.../o/all/fpart/1

Go down to the post by Arkalius
#8
19-10-2012, 06:37 PM
Poita (Peter)
Quote:
Originally Posted by multiweb View Post
No because some of your subs will catch some and others won't.
Not if you were always reading out the chip when those single photons arrived. They don't always come at the expected rate. The fewer photons, the less predictable the rate, and the greater the potential error.

Calibration won't help if the readout noise is higher than the signal, which with really faint stuff could be the issue with shorter subs.
#9
19-10-2012, 06:38 PM
Poita (Peter)
The Arkalius posts really explain it well, I've just finished reading them now and have a much better understanding.

It is always easier to understand when formulas are involved... it suddenly makes sense.
#10
19-10-2012, 06:39 PM
Terry B
The stacking method also will depend on what the final aim of the image is.
If you are measuring photometry rather than just pretty pics then the only acceptable stacking methods are sum or average. Any sort of weighted stacking will bring artifacts into the counts.
As others have stated, long exposures will have less noise than shorter ones due to readout noise etc. There are other factors that limit the length of exposures though. These include trailing due to flexure, background sky noise, blooming, and artifacts like cosmic ray hits, satellite trails, planes etc.
I limit exposures to 10mins as it is pretty frustrating to lose a longer exposure due to a plane trail or other artifact.
#11
19-10-2012, 07:30 PM
Joshua Bunn (Joshua)
Quote:
Originally Posted by Poita View Post
I'll try taking 60 one second subs and a 1 minute sub and compare the two.
Also, the longer the shutter is open, the more 'averaging' happens automatically rather than averaging later if you know what I mean.
This would depend on whether they were summed or averaged. Wouldn't a summed image be n times more luminous than the averaged one, n being the number summed?

The S/N ratio should be the same in both cases, disregarding read noise.

Combining subs with the mean or median method reduces random noise, thereby allowing the signal to come through more. But does it raise the signal?

Page 568 of "handbook of astronomical image processing" says
...with digital images you can add (or "stack") multiple images together to make one very "deep" image...

This is my very question: can this be done? Is it deeper? Can it pick up fainter objects? Would this differ depending on your combining method? I know it will produce as clean a picture in terms of random noise, but that's not my point.

thanks
Josh
#12
19-10-2012, 07:33 PM
Joshua Bunn (Joshua)
Here's another reference on the subject: http://keithwiley.com/astroPhotograp...Stacking.shtml
#13
19-10-2012, 07:41 PM
Joshua Bunn (Joshua)
Maybe one could gather some data of an object, measure a pixel value on a single sub-exposure, then after stacking n exposures measure the same pixel to see what its value is. Does its value increase n times? Keep all the data luminance or B&W. Maybe try this with the summing method and with the averaging method and see if there is a magnitude increase of n for the summed data. I would try this myself but I'm not set up yet; as soon as I am, I will give it a go.

Thanks
Josh
#14
19-10-2012, 07:51 PM
RickS (Rick)
There's no effective difference between summing and taking the average (mean) so long as you do it with reasonable numeric precision (like most decent image processing packages will). You can just multiply an averaged stack by the number of subs to recover the sum.

There is a difference between averaging and doing a median combine. A median combine will reject things like satellite trails better but will give you a poorer signal to noise ratio (roughly 20% worse).

Cheers,
Rick.
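The roughly-20%-worse figure for a median combine can be checked with a toy simulation (made-up numbers: stacks of 15 subs, Gaussian noise of sigma 10 on a flat signal of 100). For Gaussian noise the median's scatter approaches sqrt(pi/2), about 1.25x the mean's, as the stack grows:

```python
import random
import statistics

random.seed(0)

N_SUBS = 15      # subs per stack
N_PIXELS = 4000  # independent pixels to measure the scatter over
SIGMA = 10.0     # per-sub noise

mean_vals, median_vals = [], []
for _ in range(N_PIXELS):
    subs = [100.0 + random.gauss(0, SIGMA) for _ in range(N_SUBS)]
    mean_vals.append(statistics.mean(subs))
    median_vals.append(statistics.median(subs))

mean_noise = statistics.stdev(mean_vals)
median_noise = statistics.stdev(median_vals)
ratio = median_noise / mean_noise

print(f"noise after mean combine   : {mean_noise:.2f}")
print(f"noise after median combine : {median_noise:.2f}")
print(f"ratio: {ratio:.2f}")
```

Both combines leave the signal sitting at ~100; the median just rejects outliers (satellite trails and the like) at the cost of that extra noise.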
#15
19-10-2012, 07:57 PM
multiweb (Marc)
Quote:
Originally Posted by RickS View Post
There's no effective difference between summing and taking the average (mean) so long as you do it with reasonable numeric precision (like most decent image processing packages will). You can just multiply an averaged stack by the number of subs to recover the sum.

There is a difference between averaging and doing a median combine. A median combine will reject things like satellite trails better but will give you a poorer signal to noise ratio (roughly 20% worse).

Cheers,
Rick.
Here's an interesting thread on Yahoo about it.
#16
19-10-2012, 07:59 PM
multiweb (Marc)
Quote:
Originally Posted by Poita View Post
Not if you were always reading out the chip when those single photons arrived. They don't always come at the expected rate. The fewer photons, the less predictable the rate, and the greater the potential error.

Calibration won't help if the readout noise is higher than the signal, which with really faint stuff could be the issue with shorter subs.
I think the time taken during the readout, in proportion to the exposure time, is negligible in the end. It's not like you're going to miss that many photons. The SNR is the real issue for faint stuff.
#17
19-10-2012, 07:59 PM
multiweb (Marc)
Quote:
Originally Posted by Poita View Post
There is a great post here:
http://www.cloudynights.com/ubbthrea.../o/all/fpart/1

Go down to the post by Arkalius
Yeah very interesting discussion.
#18
19-10-2012, 08:05 PM
Peter Ward
An interesting topic.

Longer exposures do collect more data.

Sensors do not have perfect QE, and while the rain gauge theory is not bad, it ignores the fact that photons, unlike rain, fall at discrete locations, at which enough photons have to collect to trigger the sensing pixel.

Pixels can also be "triggered" by thermal noise, sky noise, readout noise....well suffice to say there are lots of noise sources.

When using sub-exposures, the trick is to make sure you can collect signal faster than noise, in which case you'll be happy. Which begs the question: how do you do that?

Things that help here are: more signal (large aperture, longer exposure, quality optics, excellent tracking etc.) and a high-QE, low-noise sensor (cooling, low-noise electronics etc.).


Hope that helps.

Last edited by Peter Ward; 19-10-2012 at 09:12 PM.
#19
19-10-2012, 09:38 PM
Joshua Bunn (Joshua)
Thanks everyone for your input.

So, I understand that it doesn't matter if subs are averaged or summed; either way will produce the same S/N ratio. What I was trying to figure out was whether one can image deep with many short exposures, and I think I understand now: to go deep, you need to expose long enough to detect the faint objects, i.e. long exposures.

I guess a smooth image is sometimes as aesthetically pleasing as a deep image.

Short exposures won't pick up the faint objects that longer ones would. You could sum, i.e. 2+2=4, but I would have thought this would run the risk of reaching saturation and losing data before a desirable S/N ratio is reached (too much noise). So sigma combine or median is used; this way the S/N ratio can be achieved without pixel saturation.
See Here down the bottom half and at the bottom.

Please let me know what you think, and whether you think I'm talking trash or have it wrong.

thanks
Josh
#20
19-10-2012, 09:56 PM
RickS (Rick)
Not quite right, Josh, but it takes a while to grasp and it's not entirely intuitive.

You don't go any deeper with a 10 minute exposure than you do with a stack of 10 one minute exposures. The difference is that you will incur read noise once in the first example and ten times in the second. How big a difference this makes will depend on the characteristics of your camera.
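The "maximum effective sub length" idea mentioned earlier in the thread falls out of the same arithmetic: once the sky background collected in one sub swamps the read noise, making subs longer buys almost nothing. A sketch with made-up numbers (8 e- read noise, 1.5 e-/s/pixel of sky background):

```python
import math

read_noise = 8.0  # hypothetical read noise, electrons RMS per readout
sky_rate = 1.5    # hypothetical sky background, electrons/sec/pixel

def read_noise_penalty(sub_len):
    """Noise of one sub relative to an ideal read-noise-free sub."""
    sky = sky_rate * sub_len
    return math.sqrt(sky + read_noise ** 2) / math.sqrt(sky)

for t in (10, 60, 300, 600):
    extra = 100 * (read_noise_penalty(t) - 1)
    print(f"{t:4d}s subs: +{extra:.1f}% noise from readout")
```

Under these numbers, 10 s subs pay a large read-noise penalty, while by 300-600 s the penalty is only a few percent, so subs much longer than that mainly add risk (planes, satellites, tracking errors) rather than depth. Heavier light pollution (a larger sky_rate) pushes the break-even point to shorter subs.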

I'd recommend reading a decent reference on the topic. Craig Stark has written some good articles. See the Signal to Noise series here: http://www.stark-labs.com/craig/articles/articles.html

The book The Handbook of Astronomical Image Processing is a very interesting and informative read too.

Cheers,
Rick.