ICEINSPACE
19-10-2012, 06:32 PM | Registered User | Join Date: Oct 2011 | Location: Albany, Western Australia | Posts: 1,472
Subs vs single long exposure
Hi everyone,
Looked through the archives, but couldn't find what I was looking for, so...
I've been doing some research on other sites about total exposure length, a single 60 min sub vs, say, 10 x 6 min subs, as to which one would be deeper and how the signal-to-noise ratio (SNR) compares. One such thread here http://cs.astronomy.com/asy/astro_im...5/t/49285.aspx asks a similar question: which is the deeper exposure, 10 x 6 min subs combined or a single 60 min sub? Which final calibrated image would have captured more data? Does this depend on whether the subs are averaged or summed?
One member on the above forum says either way it's still the same amount of total exposure time, which I would agree with. However, which exposure method would give the deeper image, or pixels with higher readings (for signal, that is)?
When you hear an image has been "summed", is it referring to adding, i.e. 2+2=4, or is adding just a loosely used term? If that were the case, would the pixel values of a final stack of 6 sub-exposures summed this way have 6 times the pixel value of 1 sub (talking about photon signal here)?
Then when you combine by the median or averaging method, I would imagine the pixel values from signal in the final stack would be much closer to those in a single sub-exposure, because the pixel values have not been summed; they have been averaged.
If pixels were summed as in addition, and you exposed to half your full-well capacity, then summed 3 subs together, would this overexpose that area of sky?
Some of these paragraphs contradict each other because of my uncertainty about how it works.
Thanks for any advice
Josh
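Not something posted in the thread, but the sum-vs-average part of the question is easy to test numerically. A minimal sketch with made-up numbers (10 subs, 100 electrons of signal per sub, shot noise only, read noise ignored): summing multiplies the pixel values by the sub count, averaging leaves them at the single-sub scale, and the SNR comes out identical because the two stacks differ only by a constant factor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers, not from the thread: 10 subs, a signal of
# 100 electrons per sub, pure shot (Poisson) noise, no read noise.
n_subs = 10
signal_per_sub = 100.0
n_pixels = 100_000

subs = rng.poisson(signal_per_sub, size=(n_subs, n_pixels)).astype(float)

summed = subs.sum(axis=0)     # pixel values near n_subs * signal_per_sub
averaged = subs.mean(axis=0)  # pixel values near signal_per_sub

# The average is just the sum divided by a constant, so the constant
# cancels in the signal-to-noise ratio: SNR is identical either way.
snr_sum = summed.mean() / summed.std()
snr_avg = averaged.mean() / averaged.std()

print(round(summed.mean()), round(averaged.mean()))
print(round(snr_sum, 2), round(snr_avg, 2))
```

So "summed" really does mean 2+2=4, and the summed stack is brighter by a factor of n, but no deeper in the SNR sense than the average of the same subs.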
19-10-2012, 06:40 PM | Registered User | Join Date: Jun 2011 | Location: NSW Country | Posts: 3,586
No, they are not equal, otherwise you could take 600 one second subs and it would be the equivalent of one 10 minute sub. Which of course is not true. Many faint objects will not be there at all in a one second sub, so it doesn't matter how many you take, there is no information there to sum/average/median or otherwise process.
Long subs are needed when there are very few photons around and the sky is nice and light-pollution-free. You need a long exposure to pick up the fainter details.
Now of course the problem is that, even under perfect skies, the noise in the camera increases the longer the sub, hence the need to take lots of subs to cancel out the noise and keep the faint detail.
After a certain point though, the CCD sensor gets saturated (i.e. the wells are full, or 'the buckets overflow') and detail gets washed out, so a longer sub isn't always better. It very much depends on the object in question, the camera, the amount of light pollution, the seeing etc.
So in general (this is just what I do, which is probably wrong) I take subs of a long enough length to capture the detail without blowing out any areas, then I take longer subs that blow out the brighter areas but capture more detail in the darker bits. Then I combine the whole mess, swear a lot, wonder why I bother, get a very disappointing image, start drafting a 'for sale' thread, have a juice, try again and get that buzz when the data seems to be revealed out of nowhere, then start planning my next night....
19-10-2012, 07:10 PM | Registered User | Join Date: Oct 2011 | Location: Albany, Western Australia | Posts: 1,472
Haha... that's funny, in a good way. Thanks, Peter. Point taken. So what combining method do you use?
Quote:
Originally Posted by Poita
Now of course the problem is, even under perfect skies that the noise in the camera increases the longer the sub, hence the need to take lots of subs to cancel out the noise and keep the faint detail.....
Would this be thermal noise you are referring to? In which case a good dark library and cooling are essential?
So my query is really about the 7th post in that link I referenced in my OP and the ninth post's response to it (the 8th post is also interesting). What does everyone think about what's being said there?
Josh
19-10-2012, 07:12 PM | ze frogginator | Join Date: Oct 2007 | Location: Sydney | Posts: 22,068
Hmmm.... I don't completely agree with the above. If you think of your aperture as a bucket and photons as drops of rain, you will collect as much water whether you get the bucket out for 10x60s or 1x600s.
The faint stuff might drop one or two photons per minute on your CCD, but that will still compound as data vs. noise when you stack and increase your SNR.
The only disadvantage of having a lot of shorter subs, in my experience, is if your camera's readout noise is significant. Then 1x600s is better than 10x60s.
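The read-noise trade-off above can be simulated directly. This sketch uses illustrative, made-up camera numbers (0.5 e-/s/pixel of source flux, 8 e- RMS read noise): the same 600 s of total exposure is split into 1, 10, or 600 subs, and the stack SNR drops as the number of readouts grows.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative, made-up values: a faint source of 0.5 e-/s/pixel and a
# camera with 8 e- RMS read noise, which is paid once per readout.
flux = 0.5          # electrons per second per pixel
read_noise = 8.0    # electrons RMS per readout
n_pixels = 50_000

def stack_snr(n_subs, sub_len):
    """Mean-combine n_subs subs of sub_len seconds; return the stack SNR."""
    shot = rng.poisson(flux * sub_len, size=(n_subs, n_pixels))
    read = rng.normal(0.0, read_noise, size=(n_subs, n_pixels))
    stack = (shot + read).mean(axis=0)
    return stack.mean() / stack.std()

# Same 600 s total exposure, split three ways: the more readouts,
# the more times the read-noise penalty is paid.
print(round(stack_snr(1, 600), 1))   # one readout
print(round(stack_snr(10, 60), 1))   # ten readouts
print(round(stack_snr(600, 1), 1))   # read noise swamps the faint signal
```

With the read noise set to zero, the three splits come out statistically identical, which is the bucket argument; the read-noise term is exactly where the objection to very short subs bites.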
19-10-2012, 07:21 PM | Registered User | Join Date: Jun 2011 | Location: NSW Country | Posts: 3,586
I see what you mean, but the photons are not necessarily appearing every second, and the subs aren't contiguous.
So if say a photon (drop of rain) arrives every 10 seconds or so, and your bucket is being emptied every other second (your chip is being read) then you may never collect those photons at all.
Also, if you were only capturing one photon, instead of 10 (with a sub 10x longer) then it would be harder to distinguish from the noise floor. With a 'quiet' sensor, this could make a big difference.
But I could be completely wrong, I'm only going on a thought experiment here, I haven't gone back to the books to get hard data.
Just doing some more imagining: sensitivity may be an issue. It might take a certain number of photons to trigger a level change in that pixel on the camera. Say it takes three photons within 10 seconds to push it over the boundary; with 5-second subs, only one or two photons would hit, and it would never 'click over' to the next level, so you would lose that info.
I'll try taking 60 one second subs and a 1 minute sub and compare the two.
Also, the longer the shutter is open, the more 'averaging' happens automatically, rather than averaging later, if you know what I mean.
If short subs could capture everything, then I'd always be doing one-second ones, with no aircraft or satellite trails to worry about.
Surely someone has already given it a go.
There was a discussion here a while back on how to calculate the maximum effective sub time for your light pollution and camera, I'll try and find it.
Quote:
Originally Posted by multiweb
Hmmm.... don't completely agree with the above. If you think of your aperture as a bucket and photons as drop of rain you will collect as much water if you get the bucket out 10x60s or 1x600s.
The faint stuff might drop one or two photons per minute on your CCD but that will still compound as data vs. noise when you stack and increase your SNR.
The only disadvantage of having a lot of shorter subs in my experience is if your camera readout noise is important. Then 1x600s is better than 10x60s.
19-10-2012, 07:29 PM | ze frogginator | Join Date: Oct 2007 | Location: Sydney | Posts: 22,068
Quote:
Originally Posted by Poita
I see what you mean, but the photons are not necessarily appearing every second, and the subs aren't contiguous.
So if say a photon (drop of rain) arrives every 10 seconds or so, and your bucket is being emptied every other second (your chip is being read) then you may never collect those photons at all.
No, because some of your subs will catch some and others won't.
Quote:
Originally Posted by Poita
Also, if you were only capturing one photon, instead of 10 (with a sub 10x longer) then it would be harder to distinguish from the noise floor. With a 'quiet' sensor, this could make a big difference.
That's what calibration takes care of.
Quote:
Originally Posted by Poita
But I could be completely wrong, I'm only going on a thought experiment here, I haven't gone back to the books to get hard data.
I'll try taking 600 one second subs and a 10 minute sub and compare the two.
Also, the longer the shutter is open, the more 'averaging' happens automatically rather than averaging later if you know what I mean.
If short subs could capture everything, then I'd be always doing one second ones, no aircraft or satellite trails to worry about
Surely someone has already given it a go.
There was a discussion here a while back on how to calculate the maximum effective sub time for your light pollution and camera, I'll try and find it.
I'd agree that practically, the longer you can go the better, under dark skies. I have a mate who routinely does in excess of 1 h subs, but he has done a lot of homework on guiding and flexure. And it works.
19-10-2012, 07:37 PM | Registered User | Join Date: Jun 2011 | Location: NSW Country | Posts: 3,586
Quote:
Originally Posted by multiweb
No because some of your subs will catch some and others won't.
Not if you were always reading out the chip when those single photons arrived. They don't always come at the expected rate. The fewer the photons, the less predictable the rate, and the greater the potential error.
Calibration won't help if the readout noise is higher than the signal, which with really faint stuff could be the issue with shorter subs.
19-10-2012, 07:38 PM | Registered User | Join Date: Jun 2011 | Location: NSW Country | Posts: 3,586
The Arkalius posts really explain it well, I've just finished reading them now and have a much better understanding.
It is always easier to understand when formulas are involved... it suddenly makes sense.
19-10-2012, 07:39 PM | Country living & viewing | Join Date: Mar 2006 | Location: Armidale | Posts: 2,790
The stacking method will also depend on the final aim of the image.
If you are measuring photometry rather than just making pretty pics, then the only acceptable stacking methods are sum or average. Any sort of weighted stacking will bring artifacts into the counts.
As others have stated, long exposures will have less noise than shorter ones due to readout noise etc. There are other factors that limit the length of exposures though. These include trailing due to flexure, background sky noise, blooming, and artifacts like cosmic ray hits, satellite trails, planes etc.
I limit exposures to 10 mins as it is pretty frustrating to lose a longer exposure to a plane trail or other artifact.
19-10-2012, 08:30 PM | Registered User | Join Date: Oct 2011 | Location: Albany, Western Australia | Posts: 1,472
Quote:
Originally Posted by Poita
I'll try taking 60 one second subs and a 1 minute sub and compare the two.
Also, the longer the shutter is open, the more 'averaging' happens automatically rather than averaging later if you know what I mean.
This would depend on whether they were summed or averaged. Wouldn't a summed image be n times more luminous than the averaged one, n being the number summed?
The S/N ratio should be the same in both cases, disregarding read noise.
Combining subs with the mean or median method reduces random noise, thereby allowing the signal to come through more, but does it raise the signal?
Page 568 of the "Handbook of Astronomical Image Processing" says:
...with digital images you can add (or "stack") multiple images together to make one very "deep" image...
This is my very question: can this be done? Is it deeper? Can it pick up fainter objects? Would this differ depending on your combining method? I know it will produce as clean a picture in terms of random noise, but that's not my point.
Thanks
Josh
19-10-2012, 08:41 PM | Registered User | Join Date: Oct 2011 | Location: Albany, Western Australia | Posts: 1,472
Maybe one could gather some data on an object, measure a pixel value in a single sub-exposure, then after stacking n exposures measure the same pixel to see what its value is. Does its value increase n times? Keep all the data as luminance or B&W. Maybe try this with the summing method and with the averaging method and see if there is a magnitude increase of n for the summed data. I would try this myself but I'm not set up yet; as soon as I am, I will give it a go.
Thanks
Josh
19-10-2012, 08:51 PM | PI cult recruiter | Join Date: Apr 2010 | Location: Brisbane | Posts: 10,584
There's no effective difference between summing and taking the average (mean) so long as you do it with reasonable numeric precision (like most decent image processing packages will). You can just multiply an averaged stack by the number of subs to recover the sum.
There is a difference between averaging and doing a median combine. A median combine will reject things like satellite trails better but will give you a poorer signal to noise ratio (roughly 20% worse).
Cheers,
Rick.
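The median-vs-mean penalty mentioned above is easy to check numerically. A small sketch (pure Gaussian noise, 15 subs, arbitrary numbers): for Gaussian noise the scatter of a median combine approaches sqrt(pi/2), about 1.25 times that of a mean combine, which matches the "roughly 20% worse SNR" figure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pure Gaussian noise, 15 subs: compare mean-combine and median-combine.
subs = rng.normal(0.0, 1.0, size=(15, 100_000))

mean_std = subs.mean(axis=0).std()
median_std = np.median(subs, axis=0).std()

# For Gaussian noise the median's scatter is about sqrt(pi/2) ~ 1.25x
# the mean's, i.e. roughly 20-25% worse SNR, consistent with the post.
print(round(median_std / mean_std, 2))
```

The median earns that penalty back by rejecting outliers like satellite trails, which is why sigma-clipped mean combines are popular: most of the mean's SNR with most of the median's robustness.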
19-10-2012, 08:57 PM | ze frogginator | Join Date: Oct 2007 | Location: Sydney | Posts: 22,068
Quote:
Originally Posted by RickS
There's no effective difference between summing and taking the average (mean) so long as you do it with reasonable numeric precision (like most decent image processing packages will). You can just multiply an averaged stack by the number of subs to recover the sum.
There is a difference between averaging and doing a median combine. A median combine will reject things like satellite trails better but will give you a poorer signal to noise ratio (roughly 20% worse).
Cheers,
Rick.
Here's an interesting thread on Yahoo about it.
19-10-2012, 08:59 PM | ze frogginator | Join Date: Oct 2007 | Location: Sydney | Posts: 22,068
Quote:
Originally Posted by Poita
Not if you were always reading out the chip when those single photons arrived. They don't always come at the expected rate. The less photons, the less predictable the rate, and the greater the potential error.
Calibration won't help if the readout noise is higher than the signal, which with really faint stuff could be the issue with shorter subs.
I think the time taken during the readout, in proportion to the exposure time, is pretty negligible in the end. It's not like you're going to miss that many photons. The SNR is the real issue for faint stuff.
19-10-2012, 08:59 PM | ze frogginator | Join Date: Oct 2007 | Location: Sydney | Posts: 22,068
Quote:
Originally Posted by Poita
Yeah very interesting discussion.
19-10-2012, 09:05 PM | Galaxy hitchhiking guide | Join Date: Dec 2007 | Location: The Shire | Posts: 8,269
An interesting topic.
Longer exposures do collect more data.
Sensors do not have perfect QE, and while the rain-gauge theory is not bad, it ignores the fact that photons, unlike rain, fall at discrete locations, at which enough photons have to collect to trigger the sensing pixel.
Pixels can also be "triggered" by thermal noise, sky noise, readout noise... well, suffice to say there are lots of noise sources.
When using sub-exposures, the trick is to make sure you can collect signal faster than noise, in which case you'll be happy. Which raises the question: how do you do that?
Things that help here are more signal (large aperture, longer exposure, quality optics, excellent tracking etc.), high QE, and a low-noise sensor (cooling, low-noise electronics etc.).
Hope that helps.
Last edited by Peter Ward; 19-10-2012 at 10:12 PM.
19-10-2012, 10:38 PM | Registered User | Join Date: Oct 2011 | Location: Albany, Western Australia | Posts: 1,472
Thanks everyone for your input.
So, I understand that it doesn't matter whether subs are averaged or summed; either way will produce the same S/N ratio. What I was trying to figure out was whether one can image deep with many short exposures, and I think I understand now: to go deep, you need to expose long enough to detect the faint objects, i.e. long exposures.
I guess a smooth image is sometimes as aesthetically pleasing as a deep image.
Short exposures won't pick up the faint objects that longer ones would. You could sum, i.e. 2+2=4, but I would have thought this would run the risk of reaching saturation and losing data before a desirable S/N ratio is reached (too much noise). So sigma combine or median is used; this way a good S/N ratio can be achieved without pixel saturation.
See here, down the bottom half and at the bottom.
Please let me know what you think, and whether I'm talking trash or have it wrong.
Thanks
Josh
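One note on the saturation worry above, as a small numeric sketch (a hypothetical 16-bit well depth and made-up pixel levels, not values from the thread): saturation only happens in the sensor's wells during a single exposure. Summing during stacking is ordinary floating-point arithmetic in software, so the stacked total can exceed the single-sub full-well value without clipping anything.

```python
import numpy as np

full_well = 65535            # hypothetical 16-bit saturation level
sub = np.full(4, 40000.0)    # four subs, each exposed to ~60% of full well

# In-camera, a single exposure collecting that much charge would clip:
single_long = np.clip(sub.sum(), 0, full_well)

# In software, summing is plain floating-point arithmetic, so the
# stacked value keeps all the information:
software_sum = sub.sum()
software_avg = sub.mean()

print(single_long)    # clipped at the well depth
print(software_sum)   # 160000.0, no clipping
print(software_avg)   # 40000.0, same scale as a single sub
```

So the choice of sum vs average in the stacking software has no bearing on saturation; only overexposing an individual sub does.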
19-10-2012, 10:56 PM | PI cult recruiter | Join Date: Apr 2010 | Location: Brisbane | Posts: 10,584
Not quite right, Josh, but it takes a while to grasp and it's not entirely intuitive.
You don't go any deeper with a 10 minute exposure than you do with a stack of 10 one minute exposures. The difference is that you will incur read noise once in the first example and ten times in the second. How big a difference this makes will depend on the characteristics of your camera.
I'd recommend reading a decent reference on the topic. Craig Stark has written some good articles. See the Signal to Noise series here: http://www.stark-labs.com/craig/articles/articles.html
The book The Handbook of Astronomical Image Processing is a very interesting and informative read too.
Cheers,
Rick.
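The explanation above can be put into the standard CCD SNR equation. A sketch with illustrative camera numbers (the source flux, sky flux, dark current and read noise below are all assumptions, not measurements from the thread): for equal total exposure, the only term that differs between one long sub and a stack of short ones is how many times the read-noise contribution is incurred.

```python
import math

# Standard CCD SNR formula for a stack of n subs of t seconds each:
#   SNR = n*t*s / sqrt(n*t*(s + b + d) + n*r**2)
# where s = source flux, b = sky flux, d = dark current (all e-/s/pixel)
# and r = read noise in electrons. All numbers are illustrative guesses.
def snr(n, t, s=0.5, b=2.0, d=0.05, r=8.0):
    signal = n * t * s
    noise = math.sqrt(n * t * (s + b + d) + n * r * r)
    return signal / noise

# Equal 600 s total exposure: only the read-noise term n*r^2 changes.
print(round(snr(1, 600), 2), round(snr(10, 60), 2))
```

How big the gap is depends on the camera: with a very low read noise, or a sky-noise-dominated suburban site, the two splits converge, which is why the "best sub length" answer varies from setup to setup.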