#1
Old 21-03-2018, 10:29 AM
PKay (Peter)
Join Date: May 2017
Location: DEPOT BEACH
Posts: 1,643
Ideal Number of Subs. Take 2

Please see the attached image (Figure 3: How many images).
It is from the PixInsight 'Image Integration' support documentation.

In my last essay, I attempted to understand why this may be the case.
That prompted several replies, and it appears from experience that, in some scenarios, more images can indeed be useful.
I have also looked at the results of others, and the techniques used vary considerably. Why is this so?
And finally, I want to address the importance of total integration time.

Please remember, this is only my take on things. I am new to this game and trying to make sense of it all. Please feel free to contribute.


Looking at the equation in Figure 3, there is no mention of any other important variables: variables such as target brightness, average background noise level, aperture, focal length, pixel size, exposure length and so on.
The only variable is N, the number of images.
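The curve behind Figure 3 reflects the standard result that averaging N subs with uncorrelated noise improves SNR by the square root of N. A minimal sketch of that relationship (my own illustration, not from the PixInsight documentation):

```python
import math

def stacked_snr(snr_single, n):
    """SNR after averaging n equally noisy subs: signal adds as n,
    uncorrelated noise as sqrt(n), so SNR grows as sqrt(n)."""
    return snr_single * math.sqrt(n)

# Diminishing returns: each doubling of n buys only ~41% more SNR.
print(round(stacked_snr(5, 1), 1))   # 5.0
print(round(stacked_snr(5, 30), 1))  # 27.4
print(round(stacked_snr(5, 60), 1))  # 38.7
```

This is why the graph flattens out: going from 30 to 60 subs improves SNR by the same factor as going from 1 to 2.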

I then did some light reading:
Probability and Stochastic Processes for Engineers. I so hate that book.
It turns out the formula used to create the graph is applicable to any random variable (at least I think it is). Terms such as 'Central Limit Theorem' were mentioned.
The Central Limit Theorem states that the sampling distribution of the sample means approaches a normal distribution as the sample size gets larger. This holds especially well for sample sizes over 30. All this is saying is that as you take more samples, especially large ones, your graph of the average value will look more and more like a normal distribution.
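This can be seen in a quick simulation (my own toy example, not from the book): draw repeated samples from a flat, decidedly non-normal distribution and watch the spread of the sample means shrink as the sample size grows.

```python
import random
import statistics

random.seed(42)

def sample_mean_spread(sample_size, trials=2000):
    """Standard deviation of the means of `trials` samples of `sample_size`
    draws from a uniform (non-normal) distribution."""
    means = [statistics.mean(random.uniform(0, 1) for _ in range(sample_size))
             for _ in range(trials)]
    return statistics.stdev(means)

# Quadrupling the sample size roughly halves the spread of the mean,
# i.e. the spread shrinks like 1/sqrt(N).
print(sample_mean_spread(10))
print(sample_mean_spread(40))
```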

An example:
Take a bag of tennis balls to the top of a high-rise building and drop them one by one from the same spot. Assume good conditions, i.e. no hurricane blowing.
Mark on the ground where each lands.
Theory says that after 30-50 balls, you can safely predict where they will land. After that you are wasting tennis balls.

But then we can change one of the variables.
In the case of the tennis balls, give each ball an initial velocity at the point of release. The statistics will indeed change.
You can then throw another 50 balls and get new information.

How does this apply to astrophotography?
By changing the exposure time (or gain, or filter) you are changing an important variable.
That means you can indeed get more information.

Once again I will draw a conclusion:
Taking 30-50 images at x seconds, and then taking 30-50 images at 2x seconds, will yield more information (assuming that the seeing remains constant).
That is not forgetting the considerations of choosing the right exposure, dependent on your equipment, target brightness, seeing conditions (no hurricane) and so on.

It should also be noted that seeing conditions can change (another of the variables) and the same theory applies, i.e. if it gets clearer, take another 50 images (at the same exposure) and new information can be gained.

The resulting image sets are integrated separately and then combined.

Integration time was mentioned more than once as being of greatest importance, and it is, but I see it as a result of the above considerations.

Anyway, the proof is in the pudding and I am willing to give it a go.

For example I hope one day that one of my images will read:
50 of 2 sec. Gain 40 Good seeing
50 of 240 sec. Gain 139 Good seeing
50 of 240 sec. Gain 139 Next night. Average seeing
50 of 480 sec. Gain 139 Next week. Very good seeing
Total 13 hrs integration time.

And so on.
Attached: SNR Number of Images.jpg
#2
Old 21-03-2018, 11:03 AM
Atmos (Colin)
Join Date: Aug 2011
Location: Melbourne
Posts: 6,982
I have 250x60s subs on NGC4038 so I’ll do a comparison tonight to help illustrate total integration time.

Better yet, I’ve got something like 400x60s on PGC9408 which could be an even better one given how faint everything in that area is.
#3
Old 21-03-2018, 11:20 AM
Amaranthus (Barry)
Join Date: Jan 2014
Location: Judbury, Tasmania
Posts: 1,203
Your tennis ball example has nothing to do with the central limit theorem; it simply illustrates that the precision of a statistical estimate (its standard error) improves with the square root of the sample size.

Let's say you were trying to estimate the skyglow of your image, and so you sampled an area of 'background' sky and recorded the mean pixel intensity. That would be one estimate, and its precision would depend on how many pixels constituted your sample (this is akin to your tennis ball example). Then, if you did that over repeated images, the distribution of those estimates would be Gaussian, irrespective of the underlying distribution of the pixels within the sampling area (might be uniform, lognormal, whatever). THAT is the central limit theorem.
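Barry's point can be checked with a toy simulation (my own sketch; the lognormal is just a stand-in for a heavily skewed pixel distribution): the raw 'pixel' values are strongly skewed, yet the distribution of the per-image mean estimates comes out near-symmetric, i.e. approximately Gaussian.

```python
import random
import statistics

random.seed(1)

def skewness(xs):
    """Sample skewness: 0 for a symmetric (e.g. Gaussian) distribution."""
    m = statistics.mean(xs)
    s = statistics.stdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

# One 'background patch' = 400 lognormal pixel values; then the mean of such
# a patch, recomputed over 1500 simulated 'images'.
pixels = [random.lognormvariate(0, 1) for _ in range(400)]
means = [statistics.mean(random.lognormvariate(0, 1) for _ in range(400))
         for _ in range(1500)]

print(round(skewness(pixels), 2))  # clearly positive (skewed)
print(round(skewness(means), 2))   # much closer to zero
```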
#4
Old 21-03-2018, 11:31 AM
cometcatcher (Kevin)
Join Date: Jan 2005
Location: Cloudy Mackay
Posts: 6,542
Here's a few more things to think about. While total integration time is the key factor, the way to get there will be different for different setups. I shoot 30-60 sec subs for a couple of reasons.

With 30 sec subs I can do away with computer guiding and all the hassle it involves. It saves the weight of guiding gear. In my case I already have an overloaded HEQ5 Pro.

With 30 sec subs one can shoot in the cloud gaps. I live in the tropics and rarely get cloud-free sky time. There are guys here who shoot 1-hour subs. No way could I do that here.

Short subs are sharper. Seeing can blur out longer subs.

Short subs avoid focus change with temperature drift. On nights where temperature drops quickly, the focus of my 10" F4 can shift in a short time.

Dew. I'd rather lose a minute than half an hour. Yes you can get dew heaters. More complexity lol. I used to use one but found they shift the focus too much and the camera dews up anyway. Probably less important with an astro cam.
#5
Old 21-03-2018, 12:52 PM
Amaranthus (Barry)
Join Date: Jan 2014
Location: Judbury, Tasmania
Posts: 1,203
Basically, as Kevin says, for any given integration time short subs rule for virtually everything (e.g., they avoid the need for guiding and good mounts, problems with intermittent clouds, star saturation and bloat, satellite trails, wind, etc.). EXCEPT for the nasty reality of the read-noise floor (...okay, and maybe excessive processing time). That read-noise limit is the real kicker in astroimaging, as I explained in your previous thread on this topic. If we ever get a near-zero-read-noise camera that otherwise performs well on QE etc., then everyone would be doing super-short subs and 'lucky imaging' for DSOs. Right now only planetary imaging can get away with this, because the inherent SNR is so high that even millisecond subs get well above the read-noise floor.
#6
Old 21-03-2018, 09:34 PM
Atmos (Colin)
Join Date: Aug 2011
Location: Melbourne
Posts: 6,982
Okay, I've got 250x60s images of NGC 4038 (the Antennae Galaxies), so I cropped around two small background galaxies and the fainter antennae. I've uploaded 6 images which, if uploaded correctly, should be:

250x60s
100x60s
50x60s
25x60s
10x60s
5x60s

What I've done with these is stack the cropped area, do a LinearFit on them with respect to the 250x60s stack, and then apply the same stretch to all six. Every stage is roughly a doubling. The 5 stack was a Percentile Clip while all the others were a Winsorized Sigma Clip with the default settings. With each rough doubling there is a clear increase in contrast and smoothness and a big drop in noise. Even between 100x and 250x there is a big decrease in the amount of background noise, and the faint arm stands out from the background considerably better. Going to 500x would make this even smoother.

I dither every sub with either a Medium or High dither movement in SGP so there is a good statistical sampling. I picked the middle Lum sub (#125), and the background ADU value is ~350 on the CALIBRATED subs.
My LRGB imaging is done at Gain 80 with the QHY163M (the same as Gain 76 on the ASI1600). This gives me a Read Noise of 1.9e- and a Gain of 1.9e-/ADU. With LRGB imaging you want your background to be about 10x the read noise, while with narrowband getting to 3-5x is good, but 10x is still better.

The equation is:
Target ADU = (RN^2 / Gain) * ARN * AMP

RN = Read Noise in e-
Gain = System Gain in e-/ADU
ARN = Above Read Noise factor (e.g. 10x)
AMP = Amplification from 12/14-bit to 16-bit
The amplification accounts for the bit-depth difference: to get from 12-bit to 16-bit you multiply by 2^4 = 16; from 14-bit to 16-bit, by 2^2 = 4.

(1.9^2 / 1.9) * 10 * 16 = 304 ADU
As I was getting 350 ADU, that means I am at 11.51x the read noise, which is good sampling.
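Colin's formula can be wrapped in a small function to make it easy to plug in your own camera's numbers (the function and parameter names are mine; the arithmetic is straight from the post above):

```python
def target_adu(read_noise_e, gain_e_per_adu, swamp_factor, native_bits):
    """Minimum background level (in 16-bit ADU) on CALIBRATED subs so that
    sky shot noise swamps the sensor's read noise by `swamp_factor`."""
    amp = 2 ** (16 - native_bits)  # scale native 12/14-bit ADU up to 16-bit
    return (read_noise_e ** 2 / gain_e_per_adu) * swamp_factor * amp

# QHY163M at Gain 80: RN 1.9e-, gain 1.9 e-/ADU, 12-bit sensor, 10x swamp:
print(round(target_adu(1.9, 1.9, 10, 12)))  # 304

# Unity-gain figures quoted later in the post (~1.6e- RN, gain ~1 e-/ADU):
print(round(target_adu(1.6, 1.0, 10, 12)))  # 410
```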

My point in showing this is that stacking 50x at various exposure lengths doesn't really help at all, as long as each exposure is around the 10x read noise mark. If you're shooting at Unity you will have a RN of ~1.6e- and a Gain of ~1 e-/ADU. This means that all you need to do is have your calibrated subs at ~410 ADU and continue at whatever exposure time achieves that, whether it be 60s or 480s. Doubling or tripling the exposure time needed to reach that point likely doesn't do anything other than start saturating more and more stars, which will cause havoc when it comes to stacking everything together.

Also, as changing the Gain on your camera only changes the Full Well Capacity and the Read Noise, all you're changing is what background ADU value you need and how quickly your pixel wells fill up.
In short, you are best just finding the best exposure time for whatever Gain setting, filter and location you have, and sticking to that.
Attached: X250.jpeg, X100.jpeg, X50.jpeg, X25.jpeg, X10.jpeg, X5.jpeg
#7
Old 21-03-2018, 10:18 PM
Amaranthus (Barry)
Join Date: Jan 2014
Location: Judbury, Tasmania
Posts: 1,203
Quote:
In short, you are best just finding the best exposure time for whatever Gain setting, filter and location you have, and sticking to that.
Higher gain has the advantage of lower read noise (on CMOS sensors like this) and fills the wells faster, which necessitates shorter subs. Why then have a lower gain? Higher dynamic range. But often the latter is not worth it (and can be 'filled' by stacking, which with enough frames can almost drag it up to the equivalent of 14+ bits), which is why (along with amp glow) many advise using a high gain setting and short subs on these sensors.

Thanks for the demo Colin, that is useful, but it might be even more useful for Peter's understanding to hold the integration time constant and change the sub number (and axiomatically, the sub length). But I doubt you have the data at hand for that!
#8
Old 22-03-2018, 08:17 AM
PKay (Peter)
Join Date: May 2017
Location: DEPOT BEACH
Posts: 1,643
Very good Colin, you have given us all a lot to think about.

Thank you Barry and Cometcatcher for your valuable insights and experience.

Regarding the PixInsight documentation:
maybe it is indeed wrong.
I intend to follow this up with the PI team.

That may take a few days.

And a quote from one amongst our fold (who wishes to stay anon):

'The question is also a high level question in that all the individual issues that could go into the answer are all topics of knowledge in their own right'.
#9
Old 22-03-2018, 11:36 PM
ChrisV (Chris)
Join Date: Aug 2015
Location: Sydney
Posts: 1,738
Follow what Colin says about sorting exposure time based on swamping read noise.

But that equation - should the bias level (measured from the bias frames) be added?
#10
Old 23-03-2018, 07:04 AM
Atmos (Colin)
Join Date: Aug 2011
Location: Melbourne
Posts: 6,982
Quote:
Originally Posted by ChrisV
Follow what Colin says about sorting exposure time based on swamping read noise.

But that equation - should the bias level (measured from the bias frames) be added?
You are right: when looking at uncalibrated frames you do need to add the bias. My bias is 793 ADU, so I aim for about 1,100 ADU with uncalibrated frames.
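As a quick sketch (my own wrapper, using the numbers from this thread), the raw-frame target is just the bias offset added on top of the calibrated swamp target:

```python
def uncalibrated_target_adu(calibrated_target_adu, bias_adu):
    """Background ADU to aim for on RAW subs: bias offset plus swamp target."""
    return bias_adu + calibrated_target_adu

# 304 ADU swamp target + 793 ADU bias = 1097, roughly the 1,100 quoted above.
print(uncalibrated_target_adu(304, 793))  # 1097
```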