PKay (Peter), 14-03-2018
Gain vs Exposure Time. Image Integration. Take 2

This is an attempt to sort out my thoughts in writing. If you can spot an error, please let me know.
Signal to Noise Ratio is another story, and for me it is getting little attention here (image integration processing techniques put SNR theory into the background).
Having said that, knowing how exposure time affects the SNR is a very important concept.
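
For the record, in the shot-noise-limited case the SNR grows as the square root of the exposure time. A minimal sketch, assuming a made-up photon rate (the rate is illustrative only, not a measurement):

    import math

    photon_rate = 100.0  # assumed photons per second on one pixel (illustrative only)
    for t in (2, 30, 60, 120, 240):
        signal = photon_rate * t          # photons collected in t seconds
        snr = signal / math.sqrt(signal)  # shot noise = sqrt(signal), so SNR = sqrt(signal)
        print(f"t = {t:3d} s   signal = {signal:7.0f}   SNR = {snr:5.1f}")

Doubling the exposure time does not double the SNR; it improves it by a factor of about 1.4 (sqrt of 2).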


I am using a ZWO ASI1600MC camera; however, the ideas below would apply to any camera.

I have just finished an experiment imaging a star field.
First I varied the GAIN widely (20, 50, 70, 139 (unity) and 200), keeping the EXPOSURE TIME constant at 60 sec.
Then I varied the EXPOSURE TIME widely (2, 30, 60, 120 and 240 sec), keeping the GAIN constant.
The results, for me, are quite conclusive.
EXPOSURE TIME is the key element.
GAIN SETTING? In all further work I am setting it at unity (139), and there it will stay.
Further proof came when taking my morning FLATS. A correctly exposed FLAT will reveal optical artifacts such as dust particles. With GAIN held constant, a variation in exposure time of only tenths of a second can make a huge difference.
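
A quick simulation illustrates why gain matters so much less than exposure time (the photon rate and read noise below are invented numbers, and snr is just a helper for this sketch): raising the gain multiplies signal and noise together, while a longer exposure collects genuinely new photons.

    import numpy as np

    rng = np.random.default_rng(0)
    photon_rate, read_noise = 50.0, 2.0   # assumed e-/s and e- RMS (made-up numbers)

    def snr(exposure_s, gain):
        # Photon (shot) noise is Poisson; read noise is Gaussian; gain scales both.
        electrons = rng.poisson(photon_rate * exposure_s, 100_000)
        adu = gain * (electrons + rng.normal(0.0, read_noise, electrons.size))
        return adu.mean() / adu.std()

    print(snr(60, 1.0), snr(60, 4.0))    # 4x the gain: SNR essentially unchanged
    print(snr(60, 1.0), snr(240, 1.0))   # 4x the exposure: SNR roughly doubles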

In land-based photography, exposure time (shutter speed) is critical for correct exposure.
The same goes for astrophotography.
And the right exposure depends on the target and your FIELD OF VIEW (FOV).
This needs clarifying: the FOV is set by the focal length and the sensor's physical size (pixel size times pixel count), and the pixel size also sets the image scale, i.e. how much sky each pixel sees.
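
For reference, the standard image-scale arithmetic looks like this (the focal length below is an assumed number for illustration; the 3.8 um pixels and 4656 x 3520 sensor are the published ASI1600 figures):

    # Illustrative: pixel scale and FOV from focal length, pixel size and sensor size.
    focal_length_mm = 1000.0                       # assumed telescope focal length
    pixel_um, width_px, height_px = 3.8, 4656, 3520
    scale = 206.265 * pixel_um / focal_length_mm   # arcsec per pixel
    fov_w = scale * width_px / 60.0                # FOV width in arcminutes
    fov_h = scale * height_px / 60.0               # FOV height in arcminutes
    print(f"{scale:.2f} arcsec/px, FOV {fov_w:.0f}' x {fov_h:.0f}'")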

A bright star such as Sirius is sending out a lot of photons per second.
If you have a small FOV, then an exposure time of 2 sec might be appropriate.
If you have a large FOV, then an exposure time of 30 sec may be the best choice.

For a faint nebula, an exposure time of 240 sec (small FOV) would be a better choice.

What if you have both in the same image?
I think that each element has to be treated separately.
And in this case an understanding of how IMAGE INTEGRATION works is required.
There is more than one way of integrating a set of images, and the maths involved is horrendous.
A fundamental understanding of how it works is enough.

This is my take on it:
You have 30 images to integrate (30 is the ideal choice, by the way, but more on that some other time).
Draw a line (say 5 pixels long) through the centre of a star in the first image (Pixel line 01).
Each pixel has a number associated with it (for simplicity the number range is from 1 to 10).
I will just use 5 images (out of the 30).
Image 01: Pixel line 01: has the count (1, 2, 4, 8, 1)
Image 02: Pixel line 01: has the count (1, 2, 4, 8, 1)
Image 03: Pixel line 01: has the count (1, 2, 4, 8, 1)
Image 04: Pixel line 01: has the count (1, 2, 4, 8, 1)
Image 05: Pixel line 01: has the count (1, 2, 4, 8, 1)

After integration the AVERAGE VALUE = (1, 2, 4, 8, 1)

With the above example, it is easy to see that if Image 05 was (8, 8, 8, 8, 8),
the AVERAGE VALUE would be ~ (2, 3, 5, 8, 2).
The AVERAGE VALUE is not the only way of combining pixel data.
The MEDIAN VALUE (in both cases above) = (1, 2, 4, 8, 1).
AND there are many other ways.
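
To see that in working code, here is a minimal numpy sketch of the same numbers (the outlier frame stands in for something like a satellite trail): the mean lets the outlier leak into every pixel, while the median rejects it outright.

    import numpy as np

    # The five "Pixel line 01" samples from above, with Image 05 replaced
    # by an outlier frame (a satellite trail, say).
    stack = np.array([
        [1, 2, 4, 8, 1],
        [1, 2, 4, 8, 1],
        [1, 2, 4, 8, 1],
        [1, 2, 4, 8, 1],
        [8, 8, 8, 8, 8],
    ])
    print(np.mean(stack, axis=0))    # [2.4 3.2 4.8 8.  2.4]  the outlier leaks in
    print(np.median(stack, axis=0))  # [1. 2. 4. 8. 1.]       the outlier is rejected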


So what does that mean, and how do we use it?

So, in the example of a bright star and a dim nebula (small FOV):
Take 30 images at 2 sec exposure.
Take 30 images at 240 sec exposure.
Integrate each data set separately.
The two resulting images can then be treated separately with further processing (such as a curves transformation).

At this point the two resulting images need to be combined.
HDRComposition (in PI) is designed for exactly this: combining short exposure images with long exposure images.
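
Conceptually, something like the sketch below happens (a simplified illustration of the general HDR idea, not PixInsight's actual HDRComposition maths; hdr_combine and all the numbers are made up for the example). Both stacks are assumed linear and normalised so that 1.0 means saturation:

    import numpy as np

    def hdr_combine(short_img, long_img, t_short=2.0, t_long=240.0, sat=0.95):
        # Rescale the short exposure into long-exposure units, then use it
        # only where the long exposure has clipped (values near saturation).
        scaled_short = short_img * (t_long / t_short)
        return np.where(long_img >= sat, scaled_short, long_img)

The long exposure supplies the faint nebula; wherever its pixels have clipped (the bright star's core), the rescaled short exposure fills in.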