#1 - 07-05-2016, 07:25 AM
codemonkey (Lee "Wormsy" Borsboom), Kilcoy, QLD
SNR: short subs vs long exposures

Thought this might be interesting to some; though there's nothing unexpected here, it's sometimes nice to put something visual next to the numbers.

I'd decided to play around with short exposures again, this time on the Sombrero with 36s L subs. I chose 36 seconds because, based on my attempt at following someone else's math, I believed I could get around 90% of the SNR of my 480s subs given the same overall integration time, and I thought this might help avoid clipping brighter stars. In fact, the core of M104 is so bright that my normal 480s L subs were blowing it out, let alone the stars.

I'd also hoped that by having more, shorter subs and weighting the images by FWHM/eccentricity, smart stacking algorithms might make slight improvements to the overall sharpness compared with what essentially amounts to a "dumb average" from just exposing longer.

I wanted to do a side-by-side comparison of the integrations to see how much practical difference each additional 10 subs made. How much detail was becoming apparent that was previously hidden by the noise?

I never really thought about it before, but as the chart shows, when we're dealing with this number of short subs, the SNR trend is almost linear.

Signal was calculated as the mean of the cropped area; noise was estimated using PixInsight's NoiseEvaluation script. Both were measured while the image was still linear, of course.

These images are all crops of drizzled (scale 2, drop shrink 0.9) integrations with the exact same histogram adjustment applied. I had to scale down the image for submission here, but the full res is available on Astrobin.
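For anyone wanting to replicate the measurement, here is a rough Python stand-in for the approach described above: signal as the mean of a linear crop, noise from an iterative sigma-clip. PixInsight's NoiseEvaluation uses a more sophisticated estimator, so absolute numbers will differ, and the filename is hypothetical.

```python
# Signal = mean of the crop; noise = std dev after iteratively clipping
# stars and hot pixels. A simple stand-in, not PixInsight's estimator.
import numpy as np
from astropy.io import fits

def measure_snr(path, k=3.0, iters=5):
    data = fits.getdata(path).astype(np.float64)
    signal = data.mean()                      # mean of the cropped area
    clipped = data.ravel()
    for _ in range(iters):                    # clip stars and outliers
        mu, sigma = clipped.mean(), clipped.std()
        clipped = clipped[np.abs(clipped - mu) < k * sigma]
    return signal / clipped.std()             # noise = clipped std dev

print(measure_snr("m104_crop_10subs.fits"))   # hypothetical filename
```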
Attached Thumbnails: composite_small.jpg (195.9 KB), Screen Shot 2016-05-07 at 8.10.46 am.png (146.8 KB)
#2 - 07-05-2016, 08:16 AM
codemonkey (Lee), Kilcoy, QLD
And here's an animated GIF; it makes it a bit easier to compare the integrations. Too large to upload here (1.4 MB), so it's on Astrobin.
#3 - 07-05-2016, 08:58 AM
Shiraz (Ray), Ardrossan, South Australia
Very informative results Lee - thanks. The short exposures clearly work for your system and the extra subs all help with SNR.

I think the SNR trend is the expected SNR = const × sqrt(frames), i.e. doubling the number of frames gives 1.4× the SNR, but the graph certainly does not have much obvious curvature in this region. I plotted your SNR against sqrt(frames) and got a nearly perfect linear relationship, so that is an effective way to measure SNR.
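That check is easy to reproduce: fit the measured SNRs against sqrt(frames) and look for a near-unity R². The SNR values below are illustrative placeholders, not the measured data from this thread.

```python
# Fit SNR against sqrt(frames); R^2 near 1 confirms the
# SNR = const * sqrt(frames) trend. Placeholder values only.
import numpy as np

frames = np.array([10, 20, 30, 40, 50, 60])
snr = np.array([16.2, 22.9, 28.0, 32.5, 36.2, 39.6])   # placeholder SNRs

x = np.sqrt(frames)
slope, intercept = np.polyfit(x, snr, 1)
pred = slope * x + intercept
r2 = 1 - ((snr - pred) ** 2).sum() / ((snr - snr.mean()) ** 2).sum()
print(f"const ~ {slope:.2f}, intercept ~ {intercept:.2f}, R^2 = {r2:.4f}")
```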

Any idea how much resolution improvement is likely to be available from smarter stacking of shorter subs? I tried (again) to test this last night, but clouds appeared halfway through.

regards Ray

#4 - 07-05-2016, 09:06 AM
RickS (Rick), Brisbane
Interesting experiment, Lee.

Were the noise estimates done on drizzled integrations? That would "hide" some of the noise.

Did you get an improvement in FWHM with shorter subs?

Cheers,
Rick.
#5 - 07-05-2016, 10:40 AM
Atmos (Colin), Melbourne
Very interesting results Lee. How does the overall SNR compare between the 36s and 480s subs for the same integration time? How close is it to the calculated 90%?
#6 - 07-05-2016, 11:47 AM
codemonkey (Lee), Kilcoy, QLD
Quote:
Originally Posted by Shiraz View Post
Very informative results Lee - thanks. The short exposures clearly work for your system and the extra subs all help with SNR.

I think the SNR trend is the expected SNR = const × sqrt(frames), i.e. doubling the number of frames gives 1.4× the SNR, but the graph certainly does not have much obvious curvature in this region. I plotted your SNR against sqrt(frames) and got a nearly perfect linear relationship, so that is an effective way to measure SNR.

Any idea how much resolution improvement is likely to be available from smarter stacking of shorter subs? I tried (again) to test this last night, but clouds appeared halfway through.

regards Ray
Thanks Ray, appreciate the confirmation! As I've said many times before, math is sadly not my strong suit, so it's good to have it checked by someone such as yourself.

The (potential) resolution increase is going to be tricky to measure, I think, without a boatload of data. There are so many variables at play (seeing, cable drag, mount tracking, etc.). How are you planning to measure/compare it?

I can compare the FWHM of the resulting images, but it's hard to measure the real impact when the two sets of subs were captured under different conditions.

Quote:
Originally Posted by RickS View Post
Interesting experiment, Lee.

Were the noise estimates done on drizzled integrations? That would "hide" some of the noise.

Did you get an improvement in FWHM with shorter subs?

Cheers,
Rick.
Good point about drizzle; I hadn't thought much about its impact on the noise profile and what that might mean for NoiseEvaluation's ability to measure it. What I'm interested in there is the relative noise, though, so as long as it remains consistent I think we're OK.

I think I need more data to be able to say whether I got better/worse/neutral FWHM. At the moment all I can say is that it looks about on par.

Quote:
Originally Posted by Atmos View Post
Very interesting results Lee. How does the overall SNR compare between the 36s and 480s subs for the same integration time? How close is it to the calculated 90%?
Not very! But that's because I made a silly mistake converting ADU to e-. It turns out 36s should give me ~82.3% of the SNR of the 480s subs, not ~90%. And the measured result was 82%, so basically exactly what the theory predicted.

Interestingly enough, this means that the 36s subs gave me (relative to integration time) ~80.7% of the SNR that would be achieved with a single 240min sub.

Based on the above, I'm going to up my luminance sub length to 96s, which should give me about 91.3% of the SNR of a 240min sub. I'm happy with that trade-off. It also means that my RGB subs can be significantly shorter and still be relatively well exposed, which is nice, since those sub lengths bothered me more than the luminance ones.

I've attached a spreadsheet (in snr.zip; the forum won't allow me to upload the xlsx directly) plotting SNR vs sub length for a 240min integration. The second tab contains my actual results. Please note the results in that tab are based on a normal integration, not the drizzled integrations measured/shown in the original post.

The object/sky flux values in the spreadsheet are measurements from a single 36s exposure, with the object flux being that of PGC 962963, the small galaxy near M104.

This is based on the math from Steve Cannistra's article, Signal to Noise Ratio and the Subexposure Duration.
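For reference, here is a small sketch of that math: the relative SNR of a stack of t-second subs versus a single sub spanning the whole integration. The flux and read-noise values are assumptions for illustration, not the measured inputs from the spreadsheet, but they land close to the percentages quoted above.

```python
# Relative SNR of a (total/t)-sub stack vs one sub spanning the whole
# integration (after Cannistra). FLUX and RN are assumed values.
import numpy as np

def stack_snr(t, total, flux, read_noise):
    """SNR of a stack of t-second subs; flux = object + sky, e-/s/pixel."""
    n_subs = total / t
    noise = np.sqrt(flux * total + n_subs * read_noise ** 2)
    return flux * total / noise    # numerator cancels in the ratio below

TOTAL = 240 * 60                   # 240 min of integration, in seconds
FLUX = 1.5                         # assumed object + sky flux (e-/s/pixel)
RN = 5.0                           # assumed read noise (e- RMS)

ideal = stack_snr(TOTAL, TOTAL, FLUX, RN)    # a single 240-min sub
for t in (36, 96, 144, 480):
    frac = stack_snr(t, TOTAL, FLUX, RN) / ideal
    print(f"{t:4d}s subs: {100 * frac:5.1f}% of single-sub SNR")
```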
Attached Thumbnails: Screen Shot 2016-05-07 at 12.41.50 pm.jpg (195.4 KB)
Attached Files: snr.zip (46.3 KB)
#7 - 07-05-2016, 12:35 PM
Atmos (Colin), Melbourne
From that graph it looks like around the 130s mark would be a sweet spot.

EDIT: One other thing to contemplate: shorter exposure times mean considerably more sub-frames. If you imaged an object for a solid 8 hours and have a 7s download time, you would get ~670×36s versus 59×480s. Without taking into consideration any dithering between subs or any other factors, purely one image after another, you would lose well over an hour for every 8 hours.

Factoring in maybe 10 seconds for dithering and guider settling on top of the 7s download, you get 543×36s versus 58×480s; that's 8,292 seconds more exposure with the longer subs, or 2.3 hours over an 8-hour period. It's a loss of 15 minutes vs 2.5 hours!
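That arithmetic as a reusable sketch, with the per-frame overheads assumed in the post (7s download, plus 10s dither/settle in the second case):

```python
# Subs per 8-hour session and exposure lost to overhead
# (rounded to whole frames, as in the post).
def frames(sub_len, overhead, hours=8):
    total = hours * 3600
    n = round(total / (sub_len + overhead))
    return n, total - n * sub_len

for overhead in (7, 17):        # download only; download + 10s dither/settle
    for sub_len in (36, 480):
        n, lost = frames(sub_len, overhead)
        print(f"{sub_len:3d}s subs, {overhead:2d}s overhead: "
              f"{n:3d} frames, {lost / 60:5.1f} min lost")
```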

#8 - 07-05-2016, 12:58 PM
codemonkey (Lee), Kilcoy, QLD
Quote:
Originally Posted by Atmos View Post
From that graph it looks like around the 130s mark would be a sweet spot.
At 130s you've definitely got the lion's share of the SNR curve going for you. Because I'm lazy and the chart already shows 144s instead of 130s, I'm going to use that as the basis for the following.

144s gets you 94% of the SNR of a 240min sub.
96s gets you 91.3% of the SNR of a 240min sub.

50% extra sub length for ~2.7% extra SNR - I'd personally rather take the shorter sub.

Having said that, it's not all upside for shorter subs either. I suspect they may be trickier to process (the signal you care about is closer to the noise floor), and they mean a lot more exposures, so more storage, more computation to process, etc.

Edit: I didn't see your edit before my post. Very good point, and something I've been thinking about but hadn't fully factored in. I lost a huge amount of time on the first night due to (1) stuffing up the backlash settings for automatic focusing, making it do multiple runs, and (2) auto-focusing too frequently given the new sub length. Combined with the extra download and dithering/settle time, it made a really big difference, and not a good one.

My camera reports a 2s download time at full resolution, though I haven't timed it. After a dither I require the guider to settle below 1px for 5 seconds... it definitely wouldn't be more than 10s per frame for dither + settle.

So let's forget about AF, which I can do as a function of time rather than sub count, and say each 96s sub costs 96 + 10 (dither) + 2 (download) = 108s. 8hrs nets me ~267 subs; 267 × 96s = 25,632s of net integration time out of a possible 28,800s, or ~89% efficiency.

And if I go back to 480s subs: 480 + 10 + 2 = 492s. 8hrs nets me ~58 subs; 58 × 480s = 27,840s of net integration time out of a possible 28,800s, or ~97% efficiency.

So compounding the loss of "inactive time" (down ~8%) and the SNR (down 6.8 points from 98.1%), I'll net about 85.7% of the SNR vs the 480s subs. I'd prefer to be around 90%, so that's a bit low and might be enough for me to up the sub length a little, but not by much.
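Here is that compounding as a sketch, using the sub counts from the post. The second figure is an aside, not from the post: if SNR grows as the square root of total integration time, the efficiency loss arguably enters under a square root, giving a slightly milder penalty.

```python
# Compounding session efficiency with the per-sub SNR fractions.
import math

session = 8 * 3600                       # 28,800 s
eff_short = 267 * 96 / session           # ~89% at a 108 s cadence
eff_long = 58 * 480 / session            # ~97% at a 492 s cadence
snr_short, snr_long = 0.913, 0.981       # fractions of a 240-min sub

linear = (eff_short / eff_long) * (snr_short / snr_long)
sqrt_t = math.sqrt(eff_short / eff_long) * (snr_short / snr_long)
print(f"{linear:.1%} as compounded above, {sqrt_t:.1%} via sqrt(time)")
```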

#9 - 07-05-2016, 01:31 PM
Slawomir (Suavi), North Queensland
A great discussion. I will just add that those of us using mass-produced mounts, with flexing optical trains and imperfect polar alignment, would potentially be throwing away fewer subs when keeping exposures short, so the loss in SNR with short subs might in practice be less than calculated.
#10 - 07-05-2016, 02:26 PM
codemonkey (Lee), Kilcoy, QLD
Very good point, Suavi. There's a lot going on, and it doesn't make it easy to pick an exact number, but I'm leaning towards shorter rather than longer.
#11 - 07-05-2016, 03:38 PM
Atmos (Colin), Melbourne
And then it comes down to whether you're doing LRGB or narrowband. Try 144s exposures with SII!
#12 - 07-05-2016, 09:31 PM
Shiraz (Ray), Ardrossan, South Australia
Quote:
Originally Posted by codemonkey View Post
The (potential) resolution increase is going to be tricky to measure, I think, without a boatload of data. There are so many variables at play (seeing, cable drag, mount tracking, etc.). How are you planning to measure/compare it?

I can compare the FWHM of the resulting images, but it's hard to measure the real impact when the two sets of subs were captured under different conditions.
I was intending to use FWHM in the first instance. Even though there are many variables that affect resolution, the one thing you can count on is that adding one more will always make the situation worse and increase the FWHM. If a technique like using short subs can go the other way and reduce the FWHM, it is going in the right direction, and the FWHM will show whether the technique can work and what sub lengths might help. Then it will be time to start a more detailed analysis.
#13 - 09-05-2016, 09:39 AM
Camelopardalis (Dunk), Brisbane
Interesting stuff Lee; it just shows what you can get away with when your sensor has low read noise.
#14 - 09-05-2016, 01:33 PM
gregbradley (Greg), Sydney
FWHM may be a little misleading, as I would expect a shorter exposure to always have a lower FWHM, if only because tracking errors, PE and seeing have less chance to do their damage to the FWHM.

Also, this test is more relevant to the low-noise Sony sensors; for Kodak sensors with higher read noise it may not work as well. There is also a sub-exposure calculator on the CCDWare site, and as I recall from using it in the past, the results were more like 7 minutes for many popular Kodak CCDs.

It also goes against the advice of the very top imagers, who recommend long sub-exposures (again, it depends on the sensor). Sony sensors have pretty small wells, so you are getting an advantage there by not filling up the wells and not making the outer Airy-disk halos really bright, which can also make stars look fatter.

This is a massive advantage of the 16803 chip with its 100K+ e- well depth. It's hard to overexpose a star unless it's one of the super bright ones.

Your example, though, is quite compelling, and opens the door for nice exposures from a mount with high PE. It's not as useful for narrowband, though, where the noise factor is more difficult to overcome.

Greg.
#15 - 09-05-2016, 01:50 PM
Shiraz (Ray), Ardrossan, South Australia
Quote:
Originally Posted by gregbradley View Post
FWHM may be a little misleading, as I would expect a shorter exposure to always have a lower FWHM, if only because tracking errors, PE and seeing have less chance to do their damage to the FWHM.

Sony sensors have pretty small wells, so you are getting an advantage there by not filling up the wells and not making the outer Airy-disk halos really bright, which can also make stars look fatter.


Greg.
Exactly the point, Greg. FWHM provides an estimate of the combination of all errors, and if you get fewer errors with shorter subs, you get sharper images. It isn't misleading if you get better results.

The halos we see on our stars have almost nothing to do with the Airy disk; the star shape is almost totally dictated by seeing + guiding. Filling up the wells on any ABG chip makes no difference to the star shape, as excess charge just bleeds away to ground. Stars will look bigger with longer exposures because you are seeing more signal in the skirts of the point spread function of your optics, but that has nothing to do with well depth.

With Kodak chips, you need to expose for a long time to overcome read noise; that means you must have deep wells to minimise star saturation. With Sony chips, you cannot expose for as long, because the smaller wells fill up, but you don't need to, because they have such low read noise. In the end you get the same SNR result, but you need only about 1/4 the sub length with the Sony chips. However, if the short Sony subs can also give you better resolution, you end up in a better place.
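A sketch of that trade-off: the sub length needed to reach a given fraction of the sky-limited SNR scales with read noise squared. The sky flux is an assumed value, and the 5 e- vs 10 e- read noises are illustrative figures that happen to give the ~1/4 ratio mentioned above.

```python
# Sub length to reach a given fraction f of sky-limited SNR:
# t = (R^2 / sky_flux) * f^2 / (1 - f^2), proportional to R^2.
def sub_length(read_noise, sky_flux=1.5, frac=0.95):
    return (read_noise ** 2 / sky_flux) * frac ** 2 / (1 - frac ** 2)

for rn in (5.0, 10.0):                  # low-noise Sony vs typical Kodak
    print(f"read noise {rn:4.1f} e-: ~{sub_length(rn):4.0f} s subs")
```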

#16 - 09-05-2016, 01:59 PM
gregbradley (Greg), Sydney
Quote:
Originally Posted by Shiraz View Post
Exactly the point, Greg. FWHM provides an estimate of the combination of all errors, and if you get fewer errors with shorter subs, you get sharper images. It isn't misleading if you get better results.
Yes, I see your point, but it also adds in the huge variable of mount accuracy, which we all know is the massive one. So the test would be more valid within a single setup than as a comparison between mounts with different PE.

Certainly a convincing case has been made that short subs can give superb sharpness (your images, Ray, being the benchmark for that), but as a general rule for all setups, CCDs and types of images it may not hold.

Also, some CCDs produce large files. For example, 16803 1×1 files are 32.4 MB, so open 60 of those and you will get memory errors in CCDStack. They would also be very slow and tedious to process.

How large are the ICX674 images at 1×1 binning - about 4 MB?

Greg.
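Back-of-envelope arithmetic for uncompressed 16-bit frames suggests an answer; the ICX674 dimensions below are assumed (~1940×1460), and real FITS files add header overhead on top.

```python
# Uncompressed 16-bit frame sizes: width * height * 2 bytes.
for name, w, h in (("KAF-16803", 4096, 4096), ("ICX674", 1940, 1460)):
    print(f"{name}: ~{w * h * 2 / 2**20:.1f} MiB per frame")
```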
#17 - 09-05-2016, 06:20 PM
codemonkey (Lee), Kilcoy, QLD
Quote:
Originally Posted by Shiraz View Post
I was intending to use FWHM in the first instance. Even though there are many variables that affect resolution, the one thing you can count on is that adding one more will always make the situation worse and increase the FWHM. If a technique like using short subs can go the other way and reduce the FWHM, it is going in the right direction, and the FWHM will show whether the technique can work and what sub lengths might help. Then it will be time to start a more detailed analysis.
So far I've not seen anything that points to a better FWHM in individual subs between 36s, 144s or 480s, but there's been too much at play for me to be sure of anything, in particular tweaks to the auto-focus settings.

I think it's still possible, and I think the shorter you go the greater the probability of better results, but I'm just not seeing it yet.

Quote:
Originally Posted by Camelopardalis View Post
Interesting stuff Lee; it just shows what you can get away with when your sensor has low read noise.
True, true. There are some very low-noise CMOS sensors coming out for us to play with now, and there's lots of noise on the overseas forums about very short exposures... could be some interesting times ahead.

Quote:
Originally Posted by gregbradley View Post
FWHM may be a little misleading, as I would expect a shorter exposure to always have a lower FWHM, if only because tracking errors, PE and seeing have less chance to do their damage to the FWHM.

Also, this test is more relevant to the low-noise Sony sensors; for Kodak sensors with higher read noise it may not work as well. There is also a sub-exposure calculator on the CCDWare site, and as I recall from using it in the past, the results were more like 7 minutes for many popular Kodak CCDs.

It also goes against the advice of the very top imagers, who recommend long sub-exposures (again, it depends on the sensor). Sony sensors have pretty small wells, so you are getting an advantage there by not filling up the wells and not making the outer Airy-disk halos really bright, which can also make stars look fatter.

This is a massive advantage of the 16803 chip with its 100K+ e- well depth. It's hard to overexpose a star unless it's one of the super bright ones.

Your example, though, is quite compelling, and opens the door for nice exposures from a mount with high PE. It's not as useful for narrowband, though, where the noise factor is more difficult to overcome.

Greg.
Cheers for your thoughts, Greg.

At this point I'm not convinced that I'll see much gain in FWHM in individual frames at these sorts of exposure lengths. I think very short subs might be a different story though, due to mount tracking, seeing etc., as you say.

To be honest, I expected I might at least get worse eccentricity in the individual subs. I'd wondered, though, whether stacking those potentially less ideal individual subs might still result in a better image, due to the stacking algorithms being a bit more intelligent about what's rejected or included in the final integration.

I'd argue that this test works as well for Sony sensors as it does for Kodak sensors. I'm not advocating arbitrarily short subs; my interest is in making the subs as short as I can whilst not significantly compromising the overall SNR for a given integration time. If I had a sensor with more read noise, I'd use longer subs. If I were targeting a fainter object, I'd use longer subs.

The spreadsheet I attached (not pretty, I know) lets you input actual sky flux, object flux, system read noise and an integration time, from which you can draw your own conclusions about how long the sub duration should be. I had a look at the CCDWare one you mentioned, and while it's a great tool for sure, I prefer to think about things in terms of how much SNR I'm getting vs the exposure length. That enables me to make what I consider a more informed decision, rather than just taking as gospel the time some tool tells me to expose for: give me the data, and let me make the decision.

Out of curiosity, with the 16803 you say it's hard to overexpose a star unless it's one of the "super bright ones". Do you have any idea what magnitude qualifies as "super bright"?

One thing I do get irritated by in my images is huge stars (mag ~9 or brighter, at least). I've seen other images of the same targets where the same stars appear much tighter, and yet I know my images are sharp (usually 1.8-2.2").

I've wondered whether it's (a) my imaging scale, (b) my processing, (c) my well depth, or (d) microlenses on the sensor (not even sure if that's a factor). Maybe all of the above, I dunno, but it does bother me. I actually like diffraction spikes on large stars because they make them less of an eyesore, imo.
#18 - 09-05-2016, 07:01 PM
gregbradley (Greg), Sydney
Quote:
Originally Posted by codemonkey View Post
True, true. There are some very low-noise CMOS sensors coming out for us to play with now, and there's lots of noise on the overseas forums about very short exposures... could be some interesting times ahead.

I see QHY have listed some low-read-noise Sony CMOS sensors. I agree CMOS is likely to be the way of the future at some point. My Sony A7R II mirrorless camera, for example, has read noise well under 1 electron according to one test. Sony is concentrating on CMOS over CCD, so I expect the advances to be more in that area. It would be great to get one of these A7R II sensors, which are backside-illuminated with copper circuitry and on-chip analogue-to-digital converters, in a mono version. If Sony ever bring out a mono camera like Leica have, it could be a good one.


Quote:
Originally Posted by codemonkey View Post
To be honest, I expected I might at least get worse eccentricity in the individual subs. I'd wondered, though, whether stacking those potentially less ideal individual subs might still result in a better image, due to the stacking algorithms being a bit more intelligent about what's rejected or included in the final integration.


I am not sure what intelligent stacking you are referring to, as usually it's quite simplistic, like an average or median. I do notice that 2×2 binning tends to round out stars as well.




Quote:
Originally Posted by codemonkey View Post
Out of curiosity, with the 16803 you say it's hard to overexpose a star unless it's one of the "super bright ones". Do you have any idea what magnitude qualifies as "super bright"?

One thing I do get irritated by in my images is huge stars (mag ~9 or brighter, at least). I've seen other images of the same targets where the same stars appear much tighter, and yet I know my images are sharp (usually 1.8-2.2").

It's been my experience that small-well cameras tend to show more bloated stars, due to what I think is the outer halo of the Airy disc (normally dim) being brighter. But I think it's also because there is less bit depth in the star image itself, so when you stretch the image the star data can break down more easily; less so when it's got greater depth, as with deeper wells. That may not be the correct explanation, but I do see my 16803 images as having much more robust stars that stand up to processing better than small-well camera stars.

But there are also plenty of examples of excellent images with great stars from small-well cameras, so perhaps it's more about the processing steps.

Greg.
#19 - 09-05-2016, 07:05 PM
SkyViking (Rolf), Waitakere Ranges, New Zealand
Interesting experiment Lee, always good to see this kind of stuff.
I did a similar comparison of SNR increase as a function of subframes, up to 103 hours: http://www.rolfolsenastrophotography...er/i-Sb22cmT/A
Each step represents roughly a doubling of the number of subexposures (mine are 5 minutes each).
#20 - 09-05-2016, 07:08 PM
gregbradley (Greg), Sydney
Great post Rolf. It really shows the diminishing returns from extra exposure: quite an improvement up to 32 hours, and then only a little extra for a lot of additional exposure time.

Greg.