17-01-2019, 12:37 AM
kens (Ken)
Quote:
Originally Posted by Merlin66
Hmmm
My summary version:
1. Bit depth is used by camera suppliers to "define" both the well depth and (say a unity gain where 1e=1ADU) the number of "shades"/ levels which can be obtained.
Actually they use electrons, not bits, to define the full well capacity (FWC). FWC is more relevant to dynamic range than to contrast.
2. Bit depth is used in processing to maximise the "usable" number of "shades"/levels.
3. Bit depth in the final image is used to present the "best" resulting image.

Example:
The camera:
An "8 bit" mono camera, at unity gain would have a well depth of 256bits and be capable of handling 256 shades of grey, each shade would be 1bit.
A bit is a binary digit; you mean 256 units, or ADU. It's not a good idea to mix units this way. The well depth here is an effective well depth, because the ADC saturates even though the well is not full, and that limits dynamic range. It's the ADC that is rated by its bit depth: 8 bits in this case. Assuming appropriate use of gain to manage dynamic range, bit depth affects contrast stretching, or tonal gradients (if that's the right term).
A "16 bit" mono camera, at unity gain would have a well depth of 65000bits and be capable of handling 65536 shades/levels of grey, each shade would be 1bit.
No. The limit is more likely to be the sensor. My ASI1600 has a FWC of 20000 electrons and a 12-bit (4096 level) ADC. At zero gain the 20k electrons are shoehorned into 4096 levels; at unity gain the ADC saturates at 4096 electrons, when the well is only 20% full. "16 bit" here refers to either the ADC output or the image bit depth: my 12-bit ADC output is multiplied by 16 to make it a 16-bit value for saving to file. Conversely, in 8-bit mode the ADC outputs 10 bits, which are divided by 4 to convert to 8 bits for faster transmission and a higher frame rate.
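To put some numbers on that, here's a quick Python sketch (a simplified model; the only real figures are the ASI1600 ones quoted above):
Code:
FULL_WELL = 20000      # electrons: ASI1600 full well capacity
ADC_LEVELS = 2 ** 12   # 4096 levels from a 12-bit ADC

# At zero gain the whole well is squeezed into the ADC range:
print(FULL_WELL / ADC_LEVELS, "e-/ADU at zero gain")              # ~4.88 e-/ADU

# At unity gain (1 e- = 1 ADU) the ADC clips well before full well:
print(100 * ADC_LEVELS / FULL_WELL, "% of the well at clipping")  # ~20%

# The 12-bit reading is multiplied by 16 to fill a 16-bit file value:
adu_12bit = 1234
print(adu_12bit * 16)   # same as a left shift by 4 bits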
The Processing:
If we use a stack (say 1000) of say 8 bit images (as above) then when summed the total ADU count would be 256,000. Each ADU would be a shade level, giving 256,000 different levels in the image (!!!)
To manipulate each level available we would need to process at 18bit (2^18 = 262144)

If we used 16bit processing, then the 65536 limit would mean that 256,000/65536 =3.9=4bit per level. Inferring that the "finer detail" contained in the 4bit step would be compromised(?)
Another way to look at it is that you have introduced rounding, or quantization, noise.
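A quick numpy sketch of the summing arithmetic (the frame contents are made up):
Code:
import numpy as np

# 1000 made-up 8-bit frames; sum in a wide accumulator (uint8 would overflow).
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(1000, 64, 64), dtype=np.uint8)
total = frames.sum(axis=0, dtype=np.int64)

print(total.max())                    # up to 255 * 1000 = 255000
print(int(np.ceil(np.log2(255001)))) # 18 bits needed to address every level

# Forcing the sum into 16 bits divides by ~4 and rounds off the difference:
as_16bit = np.round(total / 4).astype(np.uint16)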
A side note: stacking, either by summing or (more commonly) by averaging, always improves the SNR, all things being equal (see Howell's "Handbook of CCD Astronomy", p. 71).

If we "average" stack the 8 bit camera image, the result is still an 8 bit image (1000 x 256)/1000 =256. BUT the SNR is improved due to the reduction in "variation", which is also smoothed.
But you have added some quantization noise, which reduces the SNR. In effect you lose some of the smoothing benefit by rounding it off.
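A sketch of that trade-off with made-up numbers (a 100 ADU signal with 10 ADU of noise, averaged over 1000 frames, repeated for 10000 trials):
Code:
import numpy as np

rng = np.random.default_rng(1)
# Each trial averages 1000 8-bit frames of a noisy 100 ADU signal.
frames = np.clip(np.round(rng.normal(100.0, 10.0, (10000, 1000))), 0, 255)
means = frames.mean(axis=1)

print(means.std())            # ~0.32 ADU: the ~sqrt(1000) noise reduction
print(np.round(means).std())  # ~0.43 ADU: rounding back to 8 bits adds
                              # ~0.29 ADU rms of quantization noise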
The Final Image
If we save an 8 bit camera file to an 8 bit image format then it will have 256 levels of grey.
If we save an 8 bit camera file as a 16 bit image, it will still have 256 levels, due to the unity gain: each level can't have less than 1e. This means that not all the possible 65536 levels are used.
Due to read noise, all levels will be output by the ADC. Stacking will reduce the read noise and other noise. Averaging in stacking also causes all the levels to be filled.
If we save a 16bit camera file to an 8 bit format, the number of levels is restricted to 256 - each level in this case would have 65536/256 = 256 electrons. Again inferring that fine detail could be compromised.
Or that you have added quantization noise.
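For example, with a made-up 16-bit ramp image:
Code:
import numpy as np

# A 16-bit ramp squeezed into 8 bits: each output level covers 256 inputs.
img16 = np.arange(65536, dtype=np.uint16).reshape(256, 256)
img8 = (img16 // 256).astype(np.uint8)

print(np.unique(img16).size)   # 65536 distinct levels in
print(np.unique(img8).size)    # 256 distinct levels out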
As always, open to comment/correction and ridicule.
Comments above. I've tried to explain in terms of noise. The two main things we are concerned with are noise and contrast. Quantization noise is quantified as 1 LSB/sqrt(12), where LSB = least significant bit, or 1 unit.
All noise, including quantization noise, is amplified when we stretch to get contrast. So more bits, especially during processing, are better.
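That sqrt(12) figure is easy to check numerically, since the rounding error is roughly uniform over half a unit either side:
Code:
import numpy as np

# Rounding error is ~uniform on [-0.5, 0.5), whose rms is 1/sqrt(12).
rng = np.random.default_rng(2)
x = rng.uniform(0, 1000, size=1_000_000)
print((np.round(x) - x).std())   # ~0.2887
print(1 / np.sqrt(12))           # 0.28867...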