Here's my understanding - others feel free to chime in if you see it differently...
The rejection algorithm is trying to reject outliers. It does this on a pixel-by-pixel basis. To determine what is real signal and what is an outlier for a given pixel, the only information it has is the value of that pixel in each of the subs.
Sigma refers to the standard deviation of the pixel values. If you take ten subs of increasing exposure, for a given pixel (let's say part of a star) you might get values of 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. The mean value is 5.5 and the standard deviation is 3.03.
If the algorithm is set to reject anything outside the range of mean +/- one standard deviation (5.5 + 3.03 = 8.53; 5.5 - 3.03 = 2.47), then data with values less than 2.47 or greater than 8.53 are not included in the stack. So the subs with pixel values 1, 2, 9 and 10 are not included.
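Using the ten-value example above, that one-pass rejection can be sketched in a few lines of Python/NumPy. This is just an illustration of the arithmetic, not DSS's actual code:

```python
import numpy as np

# The ten hypothetical pixel values from the example above.
values = np.arange(1, 11, dtype=float)

mean = values.mean()
sigma = values.std(ddof=1)  # sample standard deviation, ~3.03

# Reject anything outside mean +/- 1 sigma; 1, 2, 9 and 10 go.
kept = values[np.abs(values - mean) <= sigma]
print(kept.tolist())  # [3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
```

Note the stacked result would then be the mean of the surviving values only.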
The significance of this for you is that if you take a whole lot of different exposures and then stack them all together, the rejection algorithm might be rejecting the shortest and longest exposures as outliers. If you have taken 20 x 180 sec subs (i.e. one hour's worth) in an attempt to capture the faint detail, it would be a shame if these were rejected from your final image. The same goes for the short exposures, however you obviously spent a lot less time capturing these.
"Kappa-Sigma Clipping
This method is used to reject deviant pixels iteratively.
Two parameters are used: the number of iterations and the standard deviation multiplier used (Kappa).
For each iteration, the mean and standard deviation (Sigma) of the pixels in the stack are computed.
Each pixel whose value is farther from the mean than Kappa * Sigma is rejected.
The mean of the remaining pixels in the stack is computed for each pixel."
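A rough sketch of that iterative loop in Python/NumPy. This is my own illustration of the quoted description, not DeepSkyStacker's actual code, and the kappa value and use of masked arrays are my choices:

```python
import numpy as np

def kappa_sigma_clip(stack, kappa=1.5, iterations=3):
    """Iteratively mask pixels more than kappa*sigma from the per-pixel
    mean, then average the survivors. stack shape: (n_subs, H, W)."""
    data = np.ma.masked_array(stack)
    for _ in range(iterations):
        mean = data.mean(axis=0)    # per-pixel mean over the subs
        sigma = data.std(axis=0)    # per-pixel standard deviation
        data = np.ma.masked_where(np.abs(data - mean) > kappa * sigma, data)
    # Final stacked value: mean of whatever was not rejected.
    return data.mean(axis=0)

# Five subs of a single pixel; one sub has a hot-pixel value of 200.
stack = np.array([[[10.0]], [[11.0]], [[9.0]], [[10.0]], [[200.0]]])
print(kappa_sigma_clip(stack, kappa=1.5, iterations=2))  # [[10.]]
```

One thing the toy example shows: a single extreme outlier inflates sigma on the first pass, which is exactly why the clipping is run for several iterations.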
There are other rejection routines that are designed to combine different exposures for HDR purposes:
Entropy Weighted Average (High Dynamic Range)
This method is based on the work of German, Jenkin and Lesperance (see Entropy-Based Image Merging - 2005) and is used to stack the pictures while keeping the best dynamic range for each pixel.
It is particularly useful when stacking pictures taken with different exposure times and ISO speeds, and it creates an averaged picture with the best possible dynamic range. To put it simply, it avoids burning out galaxy and nebula centers.
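I haven't seen the exact formula DSS uses published, but the general idea of an entropy-weighted combine can be sketched like this. It's a deliberately simplified illustration: it weights each whole sub by its histogram entropy, whereas the real method works on local regions around each pixel:

```python
import numpy as np

def frame_entropy(img, bins=16):
    # Shannon entropy of the frame's intensity histogram (values in 0..1).
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_weighted_average(stack):
    # Subs carrying more tonal information get a larger weight.
    weights = np.array([frame_entropy(f) for f in stack])
    weights = weights / weights.sum()
    return np.tensordot(weights, stack, axes=1)

# A flat (zero-entropy) sub contributes nothing; the detailed sub wins.
flat = np.full((16, 16), 0.5)
detailed = np.linspace(0.0, 1.0, 256).reshape(16, 16)
result = entropy_weighted_average(np.stack([flat, detailed]))
print(np.allclose(result, detailed))  # True
```

The point is that a blown-out (uniformly saturated) region carries little entropy, so subs that still hold detail there dominate the average, which is how the method protects bright cores.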
Last edited by peter_4059; 10-05-2022 at 06:02 PM.
will only stack images that contain at least eight stars that are common between all light frames.
Kappa-Sigma Clipping
This method is used to reject deviant pixels iteratively.
Two parameters are used: the number of iterations and the standard deviation multiplier used (Kappa).
For each iteration, the mean and standard deviation (Sigma) of the pixels in the stack are computed.
Each pixel whose value is farther from the mean than Kappa * Sigma is rejected.
The mean of the remaining pixels in the stack is computed for each pixel.
But it doesn't say whether that means across all the images or just within each individually timed lot, so I have no idea.