PDA

View Full Version here: SNR


codemonkey
26-09-2015, 09:40 AM
I did a brief experiment last night, after seeing some threads on CN lately where people are taking huge amounts (thousands) of very short (<= 4secs) subs and integrating them with seemingly good results; and I'm talking DSOs, not planetary. This had me questioning what I believed about SNR and image acquisition.

In the end, my brief experiment (no doubt flawed in many ways!) showed the conventional wisdom to be true: longer subs will give you better SNR, at least when you're "read noise limited".

All three of the attached images show a tiny crop of the edge of the Helix nebula. All images were calibrated. All images were stretched using PI's STF tool in combination with HT.

Where stacking was involved, Winsorized Sigma Clipping was used. Maybe not appropriate for the 5x1, but eh.

One is a single sub, 5mins in length.
One is a stack of 5 subs, each 1min in length.
One is a stack of 57 subs, each 1min in length.

Of the three images, one has an SNR of 4.8:1, another has an SNR of 5.5:1, and the other has an SNR of 3.5:1.

Can you guess which SNR belongs to which image?

RickS
26-09-2015, 10:27 AM
It's more than conventional wisdom, Lee. It's basic physics and mathematics. As they say, Science works, *****es :lol:

I can easily pick the lowest SNR image. I'd be more confident of differentiating 2 and 3 if you hadn't changed the image scale. Was that meant to be a trick question?

codemonkey
26-09-2015, 10:38 AM
There's no change in image scale as such, but the previews aren't quite the same since I roughly created them manually. Apologies for the inconsistency. Have a crack anyway.

As you say, it's all physics and math, and science works. Trouble is, a lot of people just parrot information they heard/read and apply it to situations where it no longer holds. There were some questions I hadn't seen satisfactory answers to, and since it's easier for a lay person like me to just do it and see the results for myself, I'm happy to do just that.

Back on topic...which is which?

peter_4059
26-09-2015, 10:44 AM
There's a good Craig Stark read on SNR (4 parts) here:

http://www.cloudynights.com/page/articles/cat/fishing-for-photons/signal-to-noise-understanding-it-measuring-it-and-improving-it-part-1-r1895

http://www.cloudynights.com/page/articles/cat/fishing-for-photons/signal-to-noise-understanding-it-measuring-it-and-improving-it-part-2-understanding-one-pixel-r1902

http://www.cloudynights.com/page/articles/cat/fishing-for-photons/signal-to-noise-part-3-measuring-your-camera-r1929

http://www.cloudynights.com/page/articles/cat/fishing-for-photons/image-sampling-r1970

codemonkey
26-09-2015, 10:50 AM
Thanks Peter :-)

Does anyone want to play? Maybe everyone already knows this, but I think it's an interesting topic and if it helps shed some light on things for just one person I'll be happy.

RickS
26-09-2015, 11:05 AM
1, 3, 2 from best to worst.

codemonkey
26-09-2015, 11:08 AM
Afraid not. That's what I would have thought just looking at them, and this is why I posted this thread.

For the record, I measured the ratio the same way that Craig Stark did in the articles Peter mentioned; stddev vs mean.
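For anyone wanting to reproduce the measurement, here's a minimal sketch of that mean/stddev ratio (the synthetic patch and its numbers are invented; in practice you'd feed in a flat preview from a calibrated, unstretched sub):

```python
import numpy as np

def snr(region):
    """SNR the way Craig Stark measures it: mean divided by standard deviation."""
    region = np.asarray(region, dtype=np.float64)
    return region.mean() / region.std()

# Synthetic stand-in for a flat preview: mean level 1000 e-, noise 200 e-.
rng = np.random.default_rng(0)
patch = rng.normal(loc=1000.0, scale=200.0, size=(64, 64))
print(snr(patch))  # close to 5, i.e. roughly 5:1
```

Note this only works on a genuinely flat region; any real gradient or structure in the patch inflates the stddev and drags the "SNR" down.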

RickS
26-09-2015, 11:23 AM
Is that based on a comparison of the mismatched previews?

codemonkey
26-09-2015, 11:35 AM
I doubt a couple of pixels is going to make that much of a difference, and I did it twice with different previews and got roughly the same numbers.

Let's stop playing guessing games now.

#1, whilst clearly the cleanest of the images, is the 57x1min integration and has the 4.8:1 SNR; it's our midrange image in terms of SNR.

#2 has the worst SNR; it's the 5x1min sub integration.

#3 our SNR winner, is the 1x5min sub.

So "conventional wisdom" holds: if you're read noise limited, longer subs are better in terms of SNR.

But looking at these images, I question why we optimise for SNR. Honestly, which of these would you prefer to process? Without doubt it's #1.

Obviously #1 has more than 10 times the integration time of the others, but that's not the point. The point is I think we should be optimising for absolute noise, rather than SNR.

peter_4059
26-09-2015, 11:43 AM
I would have thought you'd do a comparison based on constant total integration time, i.e. if you have 3 hours total on a target, how is your time best spent in terms of number of subs vs sub exposure length? Here's how Craig's calcs look for my SN10/QSI at Duckadang vs home (albeit on different targets). This pretty clearly shows that for a constant total duration it is better to use longer exposures, up to a point where you get diminishing returns.

codemonkey
26-09-2015, 11:48 AM
Thanks Peter, nice post! And yes, that's exactly what I had in mind. Unfortunately all of my 5min subs except one got ruined. So I compared 5x1 to 1x5 and then did the 57x1 just for giggles and then I saw something I didn't expect and I thought I'd share that.

My point still remains though: after seeing the images I captured here, do you not even start to question why we optimise for SNR? There's no doubt that when read noise limited, longer subs will give you better SNR. But again, looking at those images, do you really care which one has the better SNR? If you did, you'd discard the cleanest of the images due to its inferior SNR.

So while you've measured the relative SNR, I'm not convinced that SNR in itself means much at all in practical application.

peter_4059
26-09-2015, 11:52 AM
I think you need to compare 50x1 min subs to 10x5 min and see which one looks cleaner. I suspect the 10x5 stack is going to have the higher SNR and look cleaner.

RickS
26-09-2015, 12:09 PM
My experience is that things work in practice pretty much as expected in theory and that higher SNR images look better. Not sure what has happened with your experiment but I wouldn't jump to any conclusions ;)

Noise is random and hard to estimate accurately unless you have a statistically significant number of samples.

Cheers,
Rick.

Slawomir
26-09-2015, 01:40 PM
Hi Lee,

STF will stretch different data differently, and from what you wrote I gather you just applied STF three times, once for each preview, am I right?

This is what I would do to visually compare noise in different images: Apply STF to one of the previews, then copy that to histogram transformation tool, and then apply that histogram transformation to the remaining previews - in this way you are applying the same stretch for all 3 previews.

EDIT: I think perhaps it would be interesting to compare regions with stronger signal in these images, just for fun :)

EDIT2: Another random thought... it does not seem right just to look at the background and on that basis decide the potentially most effective integration. Taking that approach to the extreme... 500 frames with millisecond exposures will generate an even cleaner "background", but there will be nearly zero signal from a DSO. That's why my suggestion to compare areas with stronger signal as well :)

Shiraz
26-09-2015, 03:09 PM
+1 - SNR really is the only measure that matters.

a couple of points:
you have used an averaging stack, which means that the signal will be proportional to the sub length and will not change as you add more subs. However, it will be 5x higher for the 5 min sub. To get a reasonable idea of how much noise you really have, you need to stretch differently so that the signal in each image is the same.
How have you measured SNR? You need a totally flat region to get an SD measure that incorporates just the noise and not some variation in the background level as well. FWIW, the method used in Nebulosity is very reliable.

codemonkey
26-09-2015, 03:18 PM
Yeah, that's the plan. Actually, the original plan was to take a crazy number of very short subs (~5s) and compare, but it becomes very difficult to register narrowband subs of that length.



Fair enough, thanks for your thoughts, Rick.



Hey S :-)

That's an interesting point about the presentation/stretch. If I do the same stretch for all of them, one is going to be much brighter than the others (because its exposure was 5 times longer), which I think might make it more difficult to compare noise. Having said that, a non-linear transformation probably isn't the best choice for presenting this either, because it could have impacted the comparison.

This isn't background by the way, it's the edge of the brighter region of the nebula; it captures probably the strongest area of signal in the image and where it falls off a bit.

Slawomir
26-09-2015, 03:46 PM
:hi: Lee

Maybe linear fit applied before a stretch? Just for fun :rofl:

Shiraz
26-09-2015, 05:08 PM
Maybe a silly question, but I assume you measured SNR on the unstretched data?

with your system, the turnover point on Peter's curves will be at about 5 minute lum subs in reasonably dark sky. For RGB, about 10 minutes.

codemonkey
26-09-2015, 05:42 PM
I'll have a play later and see what I can come up with :-)



Yep, SNR was measured on unstretched data.

I found that of the 5min subs I mentioned before, only two of them were ruined by clouds, the others just had tracking/guiding issues, which means they're usable for SNR comparisons.

I combined 40x1min and 8x5min and found that the former had better SNR. That's not what I expected to see either. Not sure which looked better, I had difficulty eyeballing them. Might post up some samples a bit later.

I'd encourage anyone interested to have a look around on CN. There are people posting samples of DSOs taken with subs as short as 0.1s, just using heaps (thousands) of subs. This is what caused me to start this experiment.

RickS
26-09-2015, 06:12 PM
There are a few folks doing short subs with low read noise cameras. That's perfectly fine because their subs, though short, aren't read noise limited. One day we will all probably be doing lucky imaging of DSOs with very short subs but we'll need affordable very low read noise sensors first.

Anybody doing very short subs with the sort of cameras that most of us currently use is getting suboptimal results. That doesn't mean that they aren't getting OK results. It does mean that they could get the same results in less time with longer subs.

This is not just an opinion. It's the way that photons and sensors work.

By all means keep experimenting, Lee. I'm sure you'll learn a lot from it and hopefully others will too. If nothing else, you'll start some more lively discussions :)

Cheers,
Rick.

Shiraz
26-09-2015, 06:30 PM
Suggest you have a close look at exactly what you are measuring, Lee - there is no way that you will get better SNR from more reads and the same overall exposure. The theory has stood the test of 40+ years of EO system design and evaluation - it is not likely to be wrong now.
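For anyone who wants to poke at that theory themselves, the standard noise model only takes a few lines of Python. This sketch assumes just object shot noise, sky shot noise and read noise (dark current ignored), and the rates below are completely made up for a faint target under a dark sky:

```python
import math

def stack_snr(total_secs, sub_secs, obj_rate, sky_rate, read_noise):
    """SNR of a stack: signal grows with total time, shot noise with
    sqrt(total time), but read noise is paid once per sub read."""
    n_subs = total_secs / sub_secs
    signal = obj_rate * total_secs
    noise = math.sqrt(obj_rate * total_secs
                      + sky_rate * total_secs
                      + n_subs * read_noise ** 2)
    return signal / noise

# Same 1 hour total; 0.5 e-/s object, 0.2 e-/s sky, 5.4 e- read noise:
for sub_len in (60, 300, 900):
    print(sub_len, round(stack_snr(3600, sub_len, 0.5, 0.2, 5.4), 1))
# Longer subs win, with diminishing returns, because the n_subs * RN^2 term shrinks.
```

Set read_noise to zero and the sub length stops mattering at all for a fixed total - which is exactly the intuition behind "read noise limited".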

Slawomir
26-09-2015, 06:42 PM
What Rick wrote makes sense to me. I have found that with my camera (4e- read noise) there is not much benefit in going longer than 15 minutes (with narrowband imaging) for the fainter DSOs, also due to shallow wells and sky glow. For bright DSOs, such as Carina, 1 to 2 minute subs with narrowband filters are really plenty with my camera.

Maybe the next generation of sensors for low light applications (that will also be affordable) will have near zero read noise? That would be very nice.

RobF
26-09-2015, 06:56 PM
I imagine on bright objects, where the SNR is immediately far higher, you can go with much shorter sub times. The only reason to do so might be if your exposure times are short enough to allow you to pick through the seeing - basically as the planetary guys now do with fast sensors.

It wasn't that long ago that stacking video frames for planetary work was cutting edge, mind you...

codemonkey
26-09-2015, 07:05 PM
Yeah, the original person I saw doing the tiny exposures was working with low read noise uncooled CMOS cameras. I saw another guy who got a good result (only saw the one) using a camera which, according to what I could find in a Google search, had in the area of 10e- of read noise, which made me wonder what I could do with a 5.4e- cooled CCD.

So it looks like Ray was on the money and this was a measurement error. Craig Stark calculates SNR in his article as mean / stddev, which is exactly what I did. However, just now I stacked without any normalisation and without any rejection (a straight average) to see what that would do to the SNR, and at that point I saw a stddev that was 4x the mean... so I don't think I can trust these numbers.

So now what I don't understand is: when read noise limited, why would fewer, longer (but still read noise limited) exposures always be better than more, shorter exposures?

The whole reason we stack images is that noise is random but the signal is "more constant" (obviously Poisson noise comes into it), so we can "average out" the noise, so I don't find it that intuitive.

codemonkey
26-09-2015, 07:11 PM
I've got pretty dark skies, so for me, I think PI's CalculateSkyLimitedExposure tool gave me very high numbers... might have been 30 or 40mins for NB. I should check that.

ZWO already have a CMOS camera that's apparently producing images with 0.75 - 1.5e- read noise. According to a post I saw on CN they're also bringing out cooled versions, I believe in the next few months. Interesting times ahead!

Slawomir
26-09-2015, 08:39 PM
Hi Lee,

Thank you for mentioning the CalculateSkyLimitedExposure tool - I have not used it before. Below I have collated suggested optimum sub exposure times from various sources. Relevant data was extracted from a 15-minute 3nm Halpha sub collected with a QSI 690 in Milton near Brisbane's CBD.

1. PI CalculateSkyLimitedExposure tool: E-readout limit pointed to 80 minutes, while Anstey limit suggested 15-20 minutes.
2. A method from this article (http://www.hiddenloft.com/notes/SubExposures.pdf)
resulted in a 40-minute optimal exposure time.
3. SGP suggested around 15 minutes.

So anything between 15 minutes up to 80 minutes...wow...

Well, with my reliable yet imperfect mount, with the planes overhead and occasional clouds I think I will stick with what I have been doing so far :lol:

codemonkey
27-09-2015, 07:46 AM
Glad to have helped :-)

Hard to know which one to trust, isn't it? I'd probably be inclined to favour the one that inspected my data, but still...

As you say, there's tradeoffs here; a lot can go wrong in 80mins. If 15 works for you I say keep doing 15.

peter_4059
27-09-2015, 08:51 AM
You could also try this one:

http://www.stark-labs.com/craig/resources/Articles-&-Reviews/SNR-Calculator.xls

It doesn't point to a single optimum sub length but presents a series of charts that you can use to decide based on your circumstances. This is what I used in the charts I posted above.

RickS
27-09-2015, 01:28 PM
Best of all, it is quite simple to measure your subs to check if they are read noise limited or not:

http://www.iceinspace.com.au/forum/showthread.php?t=117010
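A common back-of-envelope version of this check (not necessarily Ray's exact method - see the link for that) says a sub is roughly sky limited once the sky glow's Poisson variance swamps the read noise variance by some chosen factor:

```python
def sky_limited_sub_secs(read_noise_e, sky_rate_e_per_s, factor=10.0):
    """Sub length at which the sky background's Poisson variance
    (sky_rate * t) is `factor` times the per-sub read noise variance."""
    return factor * read_noise_e ** 2 / sky_rate_e_per_s

# Invented numbers: 5.4 e- read noise, very dark narrowband sky of 0.05 e-/s/pixel.
print(sky_limited_sub_secs(5.4, 0.05) / 60)  # about 97 minutes
```

The darker the sky (or the tighter the filter), the smaller sky_rate gets and the longer the suggested sub - which is why the narrowband numbers quoted in this thread come out so large.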

Cheers,
Rick.

RobF
27-09-2015, 03:49 PM
Many thanks for reposting about that thread Rick.
I was guilty of ignoring it the first time around, and realise now just how helpful Ray's suggested TargetADU approach is! :thumbsup:

Results are in the same ballpark as the PixInsight SkyLimitedExposure script, but it's a great rule of thumb to work off to get a feel for any given sky conditions with a given OTA/camera combination.

Slawomir
27-09-2015, 05:41 PM
Thanks Rick for the suggestion.

I have also followed Ray's method that resulted in recommended 50-minute subs for my 3nm Ha filter...maybe when I get a better mount :)

So it looks like we really should be aiming for significantly longer exposures if we want the highest practically possible SNR, or for a greater number of shorter subs if we want smooth images, but at the expense of detail...

I think that skilful use of TGV Denoise and the like can help to lower noise from long subs with high SNR if needed, but extracting detail from short and smooth subs might be a completely different challenge.

RobF
27-09-2015, 05:54 PM
The other major practicality in all this of course is trying to get a decent number of subs for stacking. Most stacking rejection algorithms get better as you have more subs. I don't like having less than 7, but preferably many more, for each filter.

So if you're shooting LRGB with a mono camera, that means you need the object to be reliably visible for 28 times your chosen sub length for the period you have available. Most of us with portable rigs are often scrounging 1 to 2 nights maximum at a dark sky site. Nothing worse than setting out with best intentions then having cloud or weather intervene. You may have maximal SNR L,R and G but it ain't going to amount to a nice image until next year perhaps when you manage to get the long Blue subs you missed out on :P

Does anyone set out with a target number of subs you feel you need to collect, in addition to the important requirements of SNR?

Slawomir
27-09-2015, 06:14 PM
Good point Rob,

I do set a targeted number of subs, usually these days I aim for at least twenty 15-minute subs per filter, but ideally go for 30 subs or more (narrowband).

With the weather and setting up the gear, I first collect about 15 subs per filter, so I have something to play with, and then add more subs weather and time permitting.

Shiraz
27-09-2015, 06:14 PM
That's a good point. Lately I have been using an incremental approach: baseline is 30 subs lum and 15 each colour, then add subs in increments of about 15 lum and 5 for each colour until the image is deep enough, although lum takes extra precedence if the seeing is good. SGP makes this sort of approach dead easy, by automating everything and keeping track of where the process has got to.

edit: Slawomir just beat me to it - snap!

RobF
27-09-2015, 06:31 PM
Agree totally about the benefits of SGP tracking how many subs you have on each filter/object. Blessed with a week-plus of great weather at Astro Fest this year, it was so great to set the counters for the goal number of subs and then watch them increment each night.

Particularly since they've added the option to manually increase/decrease collected subs. I find this a big advantage over CCD Commander and using manual spreadsheets. Nothing worse than sitting down to do final processing and finding you're short on subs/exposure for a filter.

Slawomir
27-09-2015, 10:10 PM
I have just come across this one: http://www.gibastrosoc.org/sections/astrophotography/optimum-exposures-calculator

Might be worth giving it a go.

RickS
28-09-2015, 08:06 AM
I'll be interested to hear how it goes for you, Suavi.

If I had more time I'd dig into the maths of the different models... but I'd rather use my limited spare time for processing :lol:

RobF
28-09-2015, 08:51 AM
Yes, will have to look at that one a bit more closely thanks Slawomir.
I can't help wondering if there should be a field for "is your christian name Rolf"? (if so, multiply required exposure time by 50 :))

Slawomir
28-09-2015, 10:49 AM
Just reporting back in regards to the calculator.

It returned a value of circa 16 minutes for my setup, so it must be using a similar algorithm to the one that is incorporated in SGP (SGP also usually suggests around 15-minute exposures).

See the attached screenshot for reference.

On a personal note, I have been experimenting with subs ranging from seconds to one-hour long, and eventually I settled for 15-minute subs, unless imaging really bright nebulae. So it is a nice confirmation that some algorithms suggest a similar length of exposure.

EDIT: I have just realised that I did not use calibrated images for calculations :doh:

So using the correct data, the algorithm suggests 45-minute exposures (not 15-minute ones), confirming the results obtained from the PixInsight SkyLimitedExposure script and from Ray's method.

I think SGP does not take into account the bias, thus it gives a different result.

Atmos
28-09-2015, 12:27 PM
I had a look at that calculator about 6 months ago, I'd never seen the recommended number of exposures change from 27. I was just putting in random numbers however as I didn't have a CCD camera at that stage.

Slawomir
28-09-2015, 12:37 PM
It does change, but you need to update the numbers in the SNR section.

codemonkey
06-10-2015, 12:57 PM
So I ran the aforementioned PI script on some new images. Turns out I need 135 mins of Ha, 38mins of B to be "sky limited". The curse of dark skies I guess...

RickS
06-10-2015, 01:00 PM
A blessing and a curse ;)

Atmos
10-10-2015, 06:48 PM
Apparently my imaging time for Luminance without a moon is ~1min... Not very long at all :(
I think I need to head up to my rural property and test it up there!

Slawomir
11-10-2015, 11:27 AM
Hi Colin,

To my understanding, the darker the skies, the cleaner the camera and the brighter the object the shorter the subs to get a good SNR. And inversely, with light polluted skies and with a noisy camera and when imaging a dim object exposures need to be much longer to achieve the same SNR. Or am I missing something obvious?

So the optimum exposure times should vary for different DSOs even for the same gear.

RickS
11-10-2015, 05:52 PM
Suavi,

To restate what you said more correctly: the darker the skies, the cleaner the camera and the brighter the object the shorter the total exposure time needed to get good SNR.

The effect of dark skies on sub length is the tricky bit. Because they are dark there is little sky glow and hence little sky glow noise (which is the sqrt of the number of photons detected). That means that read noise makes a much bigger contribution to the total noise and long subs are needed to overcome this. This seems counter intuitive but it makes sense if you think about it.

Note that the SNR and quality of a sky limited sub taken under dark skies will be much, much better than a sky limited sub taken under bright skies. The latter sub will contain a lot of noise associated with the sky glow (as well as the unwanted sky glow signal itself.) Despite needing long subs under dark skies, the total exposure time required to get acceptable overall SNR will be much shorter.
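To put rough numbers on that (all rates invented purely for illustration): solving the usual noise model for the total time needed to hit a target SNR shows the dark-sky advantage clearly, even though the dark-sky subs themselves are longer.

```python
def time_to_snr(target_snr, obj_rate, sky_rate, read_noise, sub_secs):
    """Total integration (seconds) to reach target_snr when stacking
    subs of fixed length: SNR^2 = obj^2*T^2 / (T*(obj+sky) + (T/t)*RN^2)."""
    per_sec_variance = obj_rate + sky_rate + read_noise ** 2 / sub_secs
    return target_snr ** 2 * per_sec_variance / obj_rate ** 2

# Target SNR 30 on a 0.5 e-/s object with a 5.4 e- read noise camera:
dark = time_to_snr(30, 0.5, sky_rate=0.2, read_noise=5.4, sub_secs=900)   # long subs, dark sky
bright = time_to_snr(30, 0.5, sky_rate=5.0, read_noise=5.4, sub_secs=60)  # short subs, bright sky
print(dark / 60, bright / 60)  # roughly 44 vs 359 minutes
```

Same target, same camera: the dark site gets there in well under an hour while the bright site needs roughly six, mostly because of the sky glow noise that comes along with the unwanted sky glow signal.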

Cheers,
Rick.

Atmos
11-10-2015, 06:10 PM
With my sky-limited exposure being 60-80s in Luminance, is it worthwhile pushing that out to 600s just to shift the histogram further to the right?

I personally am finding that 600s exposures are easier to work with than 200s, even though I am getting like 12,000 ADU backgrounds.

Slawomir
11-10-2015, 06:33 PM
Thank you Rick for your clarification and disposing of my confusion, it makes perfect sense now :thanks:

RickS
11-10-2015, 10:02 PM
You lose a bit of dynamic range and run a larger risk of losing a sub due to cloud or guiding error, but there's no major downside to going longer unless you start running out of well depth for details you want to keep.



Glad I could help, Suavi. It took me a while to make sense of it too.