09-09-2017, 10:28 PM
DiscoDuck (Paul)
Quote:
Originally Posted by Shiraz
Normal, Paul.

The signal can only take values in steps of 16, so e.g. in the bias the values can only be ...304, 320, 336... etc. At a gain of 139, these steps are equivalent to single-electron changes in the signal, so there is no point in getting finer resolution. With shot and read noise, this will show as a smattering of noticeably lighter and darker pixels - all of which can take only a limited number of values.
Thanks Ray. Understood. But that's a lot of points at 10 sigma from the mean that I'm seeing, I guess. Still - happy to hear it's normal!
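To see why the subs look speckled, here is a minimal simulation of the quantisation Ray describes. It is only a sketch: the 16-ADU step and the roughly 1 e-/ADU conversion at gain 139 follow his post, while the bias level, read-noise figure, and sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

STEP = 16           # raw values occur only in multiples of 16 (12-bit data shifted into 16 bits)
READ_NOISE_E = 1.7  # assumed read noise in electrons - illustrative, not a measured figure
BIAS_ADU = 320      # assumed nominal bias level, chosen to sit on a 16-ADU step

# At ~1 e-/ADU-step, each electron of noise moves the output by exactly one step,
# so the recorded value is the bias plus 16 times a rounded electron count.
electrons = rng.normal(0.0, READ_NOISE_E, size=100_000)
raw = BIAS_ADU + STEP * np.round(electrons)

values = np.unique(raw)
print(values)  # only a handful of distinct levels: ..., 288, 304, 320, 336, 352, ...
```

The histogram of such a frame is a comb of a dozen or so spikes rather than a smooth Gaussian, which is exactly the "smattering of lighter and darker pixels" with only a limited set of values.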

Quote:
Originally Posted by Shiraz
Stacking fills in the intermediate values in the final image, but the data distribution in the subs is very ragged. That means standard non-linear rejection methods have to be used carefully. Sigma rejection in particular can throw away lots of data and leave distinct holes in the stack histogram, because it may reject a quite large number of pixels that all share the same value (e.g. if you ask it to reject 5% outliers, it may not be able to: if the steps in the histogram only give the option of rejecting, say, 1%, 3% or 10%, it will reject 10% and make a noticeable difference to the data). I have found that min/max rejection works best on dithered short subs - throwing away a fixed percentage of the data at the top and bottom gets rid of almost all cosmic-ray events and most hot-pixel leakage without disturbing the real data distribution too much.
So that would be for each integration, i.e. bias, dark, flat and light? And normalisation options as per normal on them? Min/max rejection takes the numbers of pixels to reject as its parameters, it seems (just looking now) - what sort of values do you use in your case?
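For reference, the per-pixel operation behind min/max rejection can be sketched as below. The function name and the reject-one-at-each-end defaults are my own illustration, not Ray's actual settings or any particular software's implementation:

```python
import numpy as np

def minmax_reject_stack(subs, n_low=1, n_high=1):
    """Average a stack of subs after discarding, independently at each
    pixel, the n_low lowest and n_high highest values across the stack.
    subs: array of shape (n_subs, height, width)."""
    subs = np.asarray(subs, dtype=np.float64)
    srt = np.sort(subs, axis=0)                  # sort each pixel's values across the stack
    kept = srt[n_low : subs.shape[0] - n_high]   # drop the extremes at both ends
    return kept.mean(axis=0)

# Toy example: six "subs" of a flat 100-ADU field, one hit by a cosmic ray.
stack = np.full((6, 2, 2), 100.0)
stack[3, 0, 0] = 5000.0                 # cosmic-ray hit in one sub
result = minmax_reject_stack(stack, n_low=1, n_high=1)
print(result[0, 0])                     # 100.0 - the hit was rejected
```

Because the same fixed count is clipped at every pixel regardless of the data values, the quantised, ragged histogram of short subs never forces it to over-reject the way a sigma threshold can.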