#28, 16-06-2014, 07:12 PM
Shiraz (Ray), Registered User
Join Date: Apr 2010 | Location: Ardrossan, South Australia | Posts: 4,918
Quote:
Originally Posted by LightningNZ View Post
Hi Ray, really excellent stuff you've done here regarding correct use of flats.

I just want to make some comments regarding the use of the median vs the mean.

If we have 10 subs and for a given pixel the ADU values are:
sub 1 = 13212
sub 2 = 15234
sub 3 = 12424
sub 4 = 14243
sub 5 = 14234
sub 6 = 14700
sub 7 = 14532
sub 8 = 0
sub 9 = 12430
sub 10 = 0

The mean is 11100.9 and the median is 13723.0.

The median is the value with an equal number of smaller and larger values flanking it. Because we have an even number of subs, the median is the average of the two middle values: (13212 + 14234) / 2 = 13723.

So, which of these is more "accurate"? In this case the median, because the pixel values of 0 are clearly rubbish. The mean here is said to be "biased", while the median is said to be "robust" to outliers - that's why people use it.

Also note that if sub 1 had a value of 13213 then the mean would be 11101.0 and the median would be 13723.5. The median can be fractional.

Edit: I should add that the "sample mean" and the "sample median" (what you're calculating) are both estimates of the "population mean" - the average you would get if you took an infinite number of subs. In that case the number of outliers you had would be irrelevant because they would be overwhelmed by true signal, and your mean and median would converge.

Hope this is helpful,
Cam
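Cam's figures above are easy to verify. A minimal sketch using only Python's standard-library statistics module reproduces both the mean and the median of the ten-sub pixel stack:

```python
import statistics

# Cam's ten pixel values; subs 8 and 10 dropped out to 0 ADU.
pixels = [13212, 15234, 12424, 14243, 14234, 14700, 14532, 0, 12430, 0]

print(statistics.mean(pixels))    # 11100.9 - dragged down by the two zeros
print(statistics.median(pixels))  # 13723.0 - (13212 + 14234) / 2
```

With an even number of values, statistics.median automatically averages the two middle values, which is why the result can be fractional.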
Thanks for that, Cam - and Rick for the response.

Quote:
Originally Posted by RickS View Post
Cam: using a median combine instead of a mean combine will give you a degree of implicit outlier rejection, but it comes at a cost of approximately 20% less improvement in SNR (for larger sets of images; results are worse for small sets). You'll almost certainly get a better overall result using an average combine with an explicit rejection algorithm.

Ray: I haven't had a chance to think about your model much but at a superficial level I was wondering whether dithering reduces FPN. Have you already considered this?

I only recently got back from a month away. When I catch up I will do some experiments with my large collection of narrowband flats and see if my empirical results fit your curve.

Cheers,
Rick.
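Rick's point about an average combine with explicit rejection can be illustrated on Cam's pixel stack. This is only a one-pass sigma-clip sketch, not the algorithm from any particular stacking package, and the k = 1.5 threshold is deliberately aggressive so a single pass rejects the zero-valued outliers:

```python
import statistics

# Pixel stack from Cam's example: two subs read out as 0 ADU.
stack = [13212, 15234, 12424, 14243, 14234, 14700, 14532, 0, 12430, 0]

def sigma_clip_mean(values, k=1.5):
    """One-pass sigma clip: drop values more than k sigma from the mean,
    then average what remains. (Real implementations usually iterate.)"""
    mean = statistics.mean(values)
    sigma = statistics.pstdev(values)
    kept = [v for v in values if abs(v - mean) <= k * sigma]
    return statistics.mean(kept)

print(statistics.median(stack))  # 13723.0    - robust, but costs some SNR
print(sigma_clip_mean(stack))    # 13876.125  - zeros rejected, rest averaged
```

The clipped mean averages eight good values instead of picking the midpoint of two, which is where the SNR advantage Rick mentions comes from.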
Hi Rick - yes, dithering should decorrelate the FPN between subs, so it should behave like normal random noise. I still have to validate that bit of it, but if it works that way, it should make a large difference to the required number of flats.
The model is currently based on fully correlated FPN across the subs, so it is a worst case. I posted the 10x rule of thumb for two reasons:
1. a rule of thumb developed in this way should apply to any image gathering technique - it may be overkill for dithering, but that is not a disaster;
2. my test data was dithered in one axis, but was still not too far from the "worst case" result, and the FPN was still visible in all but the 11x image - so there is presumably some partial correlation left? Not sure what this means yet, but it added weight to the need for a conservative rule of thumb.
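The idea that dithering decorrelates FPN can be illustrated with a toy simulation. This is a hypothetical 1-D sketch, with random circular shifts standing in for dither offsets, not a model of real image registration:

```python
import random
import statistics

random.seed(42)
N_PIX, N_SUBS = 1000, 16

# A fixed spatial noise pattern (e.g. residual flat-field error), sigma ~= 1.
fpn = [random.gauss(0, 1) for _ in range(N_PIX)]

# Undithered: the pattern lands on the same pixels in every sub, so
# averaging N identical copies leaves it completely unchanged.
undithered = [sum(fpn[i] for _ in range(N_SUBS)) / N_SUBS
              for i in range(N_PIX)]

# Dithered: each sub sees the pattern at a random offset (a circular
# shift here for simplicity), so the FPN averages down like random noise.
shifts = [random.randrange(N_PIX) for _ in range(N_SUBS)]
dithered = [sum(fpn[(i + s) % N_PIX] for s in shifts) / N_SUBS
            for i in range(N_PIX)]

print(statistics.pstdev(undithered))  # ~1.0: FPN survives the stack intact
print(statistics.pstdev(dithered))    # ~1/sqrt(16), i.e. about 0.25
```

The fully correlated case keeps the pattern at full strength no matter how many subs are stacked, while the decorrelated case falls off roughly as 1/sqrt(N), which is why dithering should relax the required number of flats.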

Narrowband flats should be OK for testing, but the rule of thumb analysis is based on LRGB imaging where the sky is dominant.

Looking forward to seeing some more real world results - I have only tested it with one set of subs and 4 sets of flats to date, but I have also gone back and improved some earlier images by using more flats.

regards, Ray

Last edited by Shiraz; 16-06-2014 at 11:00 PM.