#21  
Old 15-06-2014, 03:26 PM
Shiraz's Avatar
Shiraz (Ray)
Registered User

Shiraz is offline
 
Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
Quote:
Originally Posted by Screwdriverone View Post
Hi Ray,

Greg mentioned he uses the cursor in CCDStack to find the ADU value, although, as I use an Atik as well and am a bit of a noob still when it comes to taking flats with my light box, what do you suggest I use to determine the ADU value of the fits files? I think it works in Nebulosity when I debayer the fits file and go to the info page, but I am not confident this is correct?

Any tips on determining the correct ADU per flat would be of great help, as I find some flats (say 0.5 sec with my light box) give me a histogram in Nebulosity between say 18,000 and 30,000 so I guess that is somewhere in the 24,000 range for ADU?

Is there an easier way?

Thanks

Chris
Hi Chris, there is no need for extreme precision. Flats in the 18,000-30,000 range should be fine if you take one for each light. I think that Nebulosity allows you to read out the ADU value under the cursor after you have taken an image, which should be fine for determining the average flat exposure. The other thing that would work OK is to look at the real time histogram display (normally in the top right corner) and set the flat exposure so that the main peak is roughly in the centre of the histogram.

From memory, your Atik has a Sony chip, so bear in mind that you might be able to get by without flats for short to medium exposures if your sky is OK. Suggest that you try processing one of your image sequences with and without flats to see if they are worth the effort for your system and the imaging you do - although of course, you will need flats if you need to correct vignetting or donuts.

regards ray

Last edited by Shiraz; 15-06-2014 at 05:07 PM.
Reply With Quote
  #22  
Old 15-06-2014, 03:59 PM
Merlin66's Avatar
Merlin66 (Ken)
Registered User

Merlin66 is offline
 
Join Date: Oct 2005
Location: Junortoun Vic
Posts: 8,904
Ray,
help me here....
Neglecting vignetting/ shadowing/ fringing (the usual target for flats...)
Why would dividing by what I see as an "evenly illuminated but random pixel distribution" image improve the SNR????
What is it actually doing to the signal?
For AP I can see the background "graininess" caused by the low SNR.....
but I can't see how dividing by basically a "constant" is going to lift the target signal above the background and effectively add to the SNR???
Reply With Quote
  #23  
Old 15-06-2014, 04:24 PM
Shiraz's Avatar
Shiraz (Ray)
Registered User

Shiraz is offline
 
Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
Quote:
Originally Posted by Merlin66 View Post
Ray,
help me here....
Neglecting vignetting/ shadowing/ fringing (the usual target for flats...)
Why would dividing by what I see as an "evenly illuminated but random pixel distribution" image improve the SNR????
What is it actually doing to the signal?
For AP I can see the background "graininess" caused by the low SNR.....
but I can't see how dividing by basically a "constant" is going to lift the target signal above the background and effectively add to the SNR???
Hi Ken - hope this makes sense.

Ignoring read and dark noise, there are 2 other major noise components - the shot noise from the sky and the fixed pattern variations in pixel sensitivity that modulate the sky and target signal and give you additional noise (fixed pattern noise, FPN). You can get rid of the FPN by using flats, which compensate for the variations in pixel sensitivity - but the flats have to be perfect to completely remove FPN. The analysis shows how much flat signal you need to get the flats close enough to perfect - and thereby to ensure that the noise introduced by the flats is submerged under the sky shot noise and not a significant component of the total noise.

Put another way, evenly illuminated flats are not constant at the pixel scale - they have embedded graininess due to the pixel-level fixed pattern variability in the sensor sensitivity. The lights have exactly the same graininess, so you can use the flats to compensate for the fixed pattern graininess in the lights.
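For anyone who wants to see this numerically, here is a rough numpy sketch (an illustration only, with made-up numbers - 1% pixel sensitivity spread, 2000e sky per sub, 20,000e per flat - not the actual model). It stacks some sky-limited subs with and without flat correction: the shot noise averages down with the number of subs, but the fixed pattern does not unless the flats divide it out.

Code:
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_subs, n_flats = 50_000, 25, 16           # toy numbers, all assumed

# fixed pattern: per-pixel sensitivity variation, 1% rms (assumed)
prnu = 1.0 + 0.01 * rng.standard_normal(n_pix)

sky = 2000.0                                      # mean sky electrons per pixel per sub (assumed)
lights = rng.poisson(sky * prnu, size=(n_subs, n_pix)).astype(float)

# master flat built from n_flats frames of ~20,000 electrons per pixel (assumed)
master = rng.poisson(20_000 * prnu, size=(n_flats, n_pix)).mean(axis=0)
master /= master.mean()

stack_raw = lights.mean(axis=0)
stack_raw /= stack_raw.mean()
stack_cal = (lights / master).mean(axis=0)
stack_cal /= stack_cal.mean()

print("rms, no flats  :", stack_raw.std())        # ~0.011: shot noise averages down, FPN does not
print("rms, with flats:", stack_cal.std())        # ~0.005: FPN divided out, flat noise is small
print("shot noise only:", np.sqrt(sky / n_subs) / sky)   # ~0.0045

With less total flat signal the "with flats" number creeps back up towards the uncorrected one, which is essentially what the analysis quantifies.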

You have a Sony chip with low inherent FP variation, so this is all a bit academic for your system unless you do very long exposures or have bright sky. However, the majority of cameras out there have Kodak sensors and they need flats, since they have significantly higher FP variation.

Last edited by Shiraz; 21-06-2014 at 09:04 AM.
Reply With Quote
  #24  
Old 16-06-2014, 04:03 PM
LightningNZ's Avatar
LightningNZ (Cam)
Registered User

LightningNZ is offline
 
Join Date: Oct 2011
Location: Canberra
Posts: 951
Hi Ray, really excellent stuff you've done here regarding correct use of flats.

I just want to make some comments regarding the use of the median vs the mean.

If we have 10 subs and for a given pixel the ADU values are:
sub 1 = 13212
sub 2 = 15234
sub 3 = 12424
sub 4 = 14243
sub 5 = 14234
sub 6 = 14700
sub 7 = 14532
sub 8 = 0
sub 9 = 12430
sub 10 = 0

The mean is 11100.9 and the median is 13723.0.

The median is the value that has an equal number of smaller and larger values flanking it. Because we have an even number of subs, the median is the average of the two middle values: (13212 + 14234) / 2 = 13723.

So, which one of these is more "accurate"? In this case the median, because the pixel values of 0 are clearly rubbish. The mean is said in this case to be "biased". The median is said to be "robust" to outliers - that's why people use it.

Also note that if sub 1 had a value of 13213 then the mean would be 11101.0 and the median would be 13723.5. The median can be fractional.
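For anyone who wants to check the arithmetic, a quick numpy sketch of exactly the numbers above:

Code:
import numpy as np

subs = np.array([13212, 15234, 12424, 14243, 14234,
                 14700, 14532, 0, 12430, 0], dtype=float)

print(np.mean(subs))            # 11100.9   - dragged down by the two zeros
print(np.median(subs))          # 13723.0   - average of the two middle values
print(np.mean(subs[subs > 0]))  # 13876.125 - mean with the obvious outliers rejected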

Edit: I should add that the "sample mean" and the "sample median" (what you're calculating) are both estimates of the "population mean", which is the average you would get if you took an infinite number of subs. If that were the case, the number of outliers you had would be irrelevant because they would be overwhelmed by true signal, in which case your mean and median would be exactly equal.

Hope this is helpful,
Cam

Quote:
Originally Posted by Shiraz View Post
Hi Ken.

As I understand it, median combine will pull out the single value that is the median of all of the inputs - it cannot have a fractional ADU value, so the scaling factor will be subject to quantisation noise.

For SNR, the basic question is "of what". To fix a consistent reference for the modelling and measurements, I measured the average signal of a dim part of the galaxy to be 1/10 the sky background on one of the lights. Since the sky and the galaxy did not change much during the imaging run, I thereafter used 1/10 sky as the Signal and so the SNR values refer to the chosen part of the galaxy. I could choose some other reference point and all of the SNR results would be scaled, but the same picture would emerge when comparing model and test data. With a dim target, the noise is totally dominated by the sky, so I measured the RMS variability of a featureless part of the sky to find the noise. I turned off all of the smart bits while stacking in PI (to get an undoctored result) and used the pixel stats function of Nebulosity to do the measurements.

Interesting observation on the 314. I would guess that the Sony chip in the Atik has very low fixed pattern variability, so flats will not help unless you expose for very long periods of time.

Regards Ray

Last edited by LightningNZ; 16-06-2014 at 04:19 PM. Reason: Added a bit about the sample vs population mean.
Reply With Quote
  #25  
Old 16-06-2014, 05:28 PM
RickS's Avatar
RickS (Rick)
PI cult recruiter

RickS is offline
 
Join Date: Apr 2010
Location: Brisbane
Posts: 10,584
Quote:
Originally Posted by LightningNZ View Post
The median is said to be "robust" to outliers - that's why people use it.
Cam: using a median combine instead of mean will give you a degree of implicit outlier rejection but it comes at a cost of approximately 20% less improvement in SNR (for larger sets of images - results are worse for small sets). You'll almost certainly get a better overall result using an average combine with an explicit rejection algorithm.

Ray: I haven't had a chance to think about your model much but at a superficial level I was wondering whether dithering reduces FPN. Have you already considered this?

I only recently got back from a month away. When I catch up I will do some experiments with my large collection of narrowband flats and see if my empirical results fit your curve.

Cheers,
Rick.
Reply With Quote
  #26  
Old 16-06-2014, 06:02 PM
LightningNZ's Avatar
LightningNZ (Cam)
Registered User

LightningNZ is offline
 
Join Date: Oct 2011
Location: Canberra
Posts: 951
Rick, do you have a link or some stats to back this up? 20% is a big amount. While I can see that mean with sigma rejection should be superior, I find the 20% value a little hard to fathom.

For instance in my example above, leaving out the bottom two values of 0 (obvious outliers) would result in a mean of 13876.125 - not much different at all from the median value.
-Cam

Quote:
Originally Posted by RickS View Post
Cam: using a median combine instead of mean will give you a degree of implicit outlier rejection but it comes at a cost of approximately 20% less improvement in SNR (for larger sets of images - results are worse for small sets). You'll almost certainly get a better overall result using an average combine with an explicit rejection algorithm.

Cheers,
Rick.
Reply With Quote
  #27  
Old 16-06-2014, 06:49 PM
RickS's Avatar
RickS (Rick)
PI cult recruiter

RickS is offline
 
Join Date: Apr 2010
Location: Brisbane
Posts: 10,584
Quote:
Originally Posted by LightningNZ View Post
Rick, do you have a link or some stats to back this up? 20% is a big amount. While I can see that mean with sigma rejection should be superior, I find the 20% value a little hard to fathom.

For instance in my example above, leaving out the bottom two values of 0 (obvious outliers) would result in a mean of 13876.125 - not much different at all from the median value.
-Cam
Cam,

Using the median effectively ignores much of the data so I don't find it difficult to understand intuitively. One of the more complete analyses I've seen is in the PI doc: http://pixinsight.com/doc/tools/Imag...tegration.html

See the section on median combination.

Cheers,
Rick.
Reply With Quote
  #28  
Old 16-06-2014, 07:12 PM
Shiraz's Avatar
Shiraz (Ray)
Registered User

Shiraz is offline
 
Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
Quote:
Originally Posted by LightningNZ View Post
Hi Ray, really excellent stuff you've done here regarding correct use of flats.

I just want to make some comments regarding the use of the median vs the mean.

If we have 10 subs and for a given pixel the ADU values are:
sub 1 = 13212
sub 2 = 15234
sub 3 = 12424
sub 4 = 14243
sub 5 = 14234
sub 6 = 14700
sub 7 = 14532
sub 8 = 0
sub 9 = 12430
sub 10 = 0

The mean is 11100.9 and the median is 13723.0.

The median is the value that has an equal number of smaller and larger values flanking it. Because we have an even number of subs, the median is the average of the two middle values: (13212 + 14234) / 2 = 13723.

So, which one of these is more "accurate"? In this case the median, because the pixel values of 0 are clearly rubbish. The mean is said in this case to be "biased". The median is said to be "robust" to outliers - that's why people use it.

Also note that if sub 1 had a value of 13213 then the mean would be 11101.0 and the median would be 13723.5. The median can be fractional.

Edit: I should add that the "sample mean" and the "sample median" (what you're calculating) are both estimates of the "population mean", which is the average you would get if you took an infinite number of subs. If that were the case, the number of outliers you had would be irrelevant because they would be overwhelmed by true signal, in which case your mean and median would be exactly equal.

Hope this is helpful,
Cam
Thanks for that Cam - and Rick for the response

Quote:
Originally Posted by RickS View Post
Cam: using a median combine instead of mean will give you a degree of implicit outlier rejection but it comes at a cost of approximately 20% less improvement in SNR (for larger sets of images - results are worse for small sets). You'll almost certainly get a better overall result using an average combine with an explicit rejection algorithm.

Ray: I haven't had a chance to think about your model much but at a superficial level I was wondering whether dithering reduces FPN. Have you already considered this?

I only recently got back from a month away. When I catch up I will do some experiments with my large collection of narrowband flats and see if my empirical results fit your curve.

Cheers,
Rick.
Hi Rick - yes, dithering should decorrelate the FPN between subs, so it should behave as normal random noise - still to validate that bit of it, but if it works that way, it should make a large difference to the required number of flats.
The model is currently based on fully correlated FPN across the subs, so it is a worst case. I posted the 10x rule of thumb for two reasons:
1. a rule of thumb developed in this way should apply to any image gathering technique - it may be overkill for dithering, but that is not a disaster;
2. my test data was dithered in one axis only, but it was still not too far from the "worst case" result and the FPN was still visible in all but the 11x image - so there is presumably some partial correlation left? Not sure what this means yet, but it added weight to the need for a conservative rule of thumb.

Narrowband flats should be OK for testing, but the rule of thumb analysis is based on LRGB imaging where the sky is dominant.

Looking forward to seeing some more real world results - I have only tested it with one set of subs and 4 sets of flats to date, but I have also gone back and improved some earlier images by using more flats.

regards ray

Last edited by Shiraz; 16-06-2014 at 11:00 PM.
Reply With Quote
  #29  
Old 16-06-2014, 08:41 PM
LightningNZ's Avatar
LightningNZ (Cam)
Registered User

LightningNZ is offline
 
Join Date: Oct 2011
Location: Canberra
Posts: 951
Thanks for the link Rick, the explanation there is gold. If no one minds too much, I'll quote the last paragraph because it states the trade-off between the mean and median combine methods very clearly.

Quote:
By comparing equations [10] and [6], we see that the SNR achieved by a median combination is approximately 20% less than the SNR of the average combination of the same images (even less for small sets of images [16]). In terms of SNR improvement, average combination is always better, so what can a median combination be useful for? The answer leads to the subject of robust estimation. For a distribution with a strong central tendency, the median is a robust estimator of the central value. This makes median combination an efficient method for image combination with implicit rejection of outliers, or pixels with too low or too high values due to spurious data. However, in our implementation we provide several pixel rejection algorithms that achieve similar outlier rejection efficiency and can be used with average combination without sacrificing so much signal.
My misunderstanding was about the limiting behaviour of the standard deviation of the median, which never approaches that of the mean to better than about 80% (hence the ~20% SNR penalty) - which only goes to show I didn't understand the maths properly, because I knew this once but had forgotten it.
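If anyone wants to check that figure for themselves, here is a quick Monte Carlo sketch (assuming plain Gaussian noise and no outliers - the numbers of subs and trials are arbitrary):

Code:
import numpy as np

rng = np.random.default_rng(1)
n_subs, n_trials = 32, 20_000

# n_trials independent "pixels", each stacked from n_subs unit-variance Gaussian subs
samples = rng.standard_normal((n_trials, n_subs))

sd_mean = samples.mean(axis=1).std()
sd_median = np.median(samples, axis=1).std()

print(sd_mean)               # ~0.18, i.e. 1/sqrt(32)
print(sd_median)             # ~0.22, about 1.25x larger
print(sd_mean / sd_median)   # ~0.8 - the median stack gives roughly 20% less SNR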

Cheers,
Cam
Reply With Quote
  #30  
Old 18-06-2014, 08:34 AM
RickS's Avatar
RickS (Rick)
PI cult recruiter

RickS is offline
 
Join Date: Apr 2010
Location: Brisbane
Posts: 10,584
Cam,

I would have paid more attention in those Uni stats lectures if I'd realised the info would be useful one day

Cheers,
Rick.
Reply With Quote
  #31  
Old 18-06-2014, 12:30 PM
LightningNZ's Avatar
LightningNZ (Cam)
Registered User

LightningNZ is offline
 
Join Date: Oct 2011
Location: Canberra
Posts: 951
Quote:
Originally Posted by RickS View Post
Cam,

I would have paid more attention in those Uni stats lectures if I'd realised the info would be useful one day

Cheers,
Rick.
Same here... but I got really good at playing pool in the Rec centre.
Reply With Quote
  #32  
Old 20-07-2014, 10:51 PM
Shiraz's Avatar
Shiraz (Ray)
Registered User

Shiraz is offline
 
Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
At Rick's suggestion, have finally completed and validated the model for predicting the effects of flat noise when dithering. The attached graphs show how much difference dithering makes - to the extent that one would have to have a very good reason not to use it.

However, these results are for a perfect world where the only noise is shot noise and the dithering does not allow any noise correlation between frames. Some real world test data, for an image sequence that was imperfectly dithered in one axis only, are also graphed. These probably indicate that real world results are sensitive to dither efficiency. In addition, I found during the validation process that my master calibration frames have some excess (read and dark?) noise that has got into the flats, even though the camera is inherently quiet and 20 darks and 40 bias frames were used in the calibration. So the real world results were somewhere between the "model" dither and non-dither predictions.

The second image is a startling example of how effective dithering is. This shows one of the model validation images resulting from stacking 15 evenly illuminated subs after calibration with a fairly noisy master flat. The central zones in the test subs were clone stamped by hand to slightly offset the data and thereby simulate dithering in the central region. The outer region was not dithered. After stacking, the noise is vastly reduced in the central zone, even though the cloned regions cannot be distinguished at all in the individual sub images.
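As a toy illustration of the same effect (my own simulation with made-up numbers, not the validation data): each sub carries the same fixed pattern, but once the subs are dithered and re-registered the pattern no longer lines up from sub to sub, so it averages down in the stack much like random noise.

Code:
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_subs = 20_000, 15
pattern = 0.02 * rng.standard_normal(n_pix)        # fixed pattern, 2% rms (assumed)

def stack(dither):
    subs = []
    for _ in range(n_subs):
        # each sub: uniform scene + FPN fixed to the sensor + random shot-like noise
        sub = 1.0 + pattern + 0.02 * rng.standard_normal(n_pix)
        # registration aligns the scene; with dithering this shifts the FPN between subs
        shift = int(rng.integers(0, 50)) if dither else 0
        subs.append(np.roll(sub, shift))
    return np.mean(subs, axis=0)

print("rms, no dither:", stack(False).std())   # ~0.021: the pattern survives the stack
print("rms, dithered :", stack(True).std())    # ~0.008: the shifted pattern averages down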

to sum up:
- Use as much flat data as you can - more is always better.
- Use dither if you possibly can.
- The rule of thumb that you should take one flat for every light will be fairly conservative if you dither, but it is probably still worthwhile in the real world where dithering may be imperfect and there can be residual read and dark noise embedded in the flats.
- If you have a CCD with low FPN and you do not need to correct for vignetting or dust, then consider not using flats at all - just dither.

Now to try to work out where that excess flat noise came from....
Attached Thumbnails: flatsupdated.jpg (36.0 KB, 41 views), ditherdemo.jpg (157.3 KB, 27 views)

Last edited by Shiraz; 21-07-2014 at 07:18 AM.
Reply With Quote
  #33  
Old 21-07-2014, 08:51 AM
RickS's Avatar
RickS (Rick)
PI cult recruiter

RickS is offline
 
Join Date: Apr 2010
Location: Brisbane
Posts: 10,584
Thanks for that, Ray. Very interesting.

I ran some comparisons the other night with varying numbers of flats and got inconsistent results. Will have to spend more time figuring out what was going on...

Cheers,
Rick.
Reply With Quote
  #34  
Old 21-07-2014, 09:16 AM
Shiraz's Avatar
Shiraz (Ray)
Registered User

Shiraz is offline
 
Join Date: Apr 2010
Location: ardrossan south australia
Posts: 4,918
interesting observation Rick - look forward to seeing what you get.

If of any interest, I found that it was quite difficult to get consistent enough data to verify the model - got the impression that everything (bias, darks, processing) has to be near enough to perfect to get consistent results. Since PI appears to use noise minimisation all along the way, I suspect that it may be difficult to get consistent results with that software no matter what the data is like - the software will be trying to help by reducing noise even when noise is what you actually want to measure.

The other point is that, if you use dithering, the flat noise in the final result varies as a function of the ratio of total flat signal to sky signal in a single sub rather than the total sky signal.
Reply With Quote
  #35  
Old 21-07-2014, 10:06 AM
SkyViking's Avatar
SkyViking (Rolf)
Registered User

SkyViking is offline
 
Join Date: Aug 2009
Location: Waitakere Ranges, New Zealand
Posts: 2,260
Great info there Ray, which confirms my own experiences with flats. Thanks for posting yet another thorough analysis
One question, how do you dither flats? With an evenly illuminated flat what is there to dither?
Reply With Quote
  #36  
Old 21-07-2014, 12:38 PM
LightningNZ's Avatar
LightningNZ (Cam)
Registered User

LightningNZ is offline
 
Join Date: Oct 2011
Location: Canberra
Posts: 951
Quote:
Originally Posted by SkyViking View Post
Great info there Ray, which confirms my own experiences with flats. Thanks for posting yet another thorough analysis
One question, how do you dither flats? With an evenly illuminated flat what is there to dither?
Depends how you make your flat. If you image a twilight sky then you may pick up some stars. If you image a t-shirt then there may be varying density of threads or whatever across it. If you image an LCD screen like me then there are pixel defects or backlight inconsistencies.

So I move my scope around in front of the screen to "dither". It has the effect of averaging out the inconsistencies.

Cheers,
Cam
Reply With Quote
  #37  
Old 21-07-2014, 12:53 PM
RickS's Avatar
RickS (Rick)
PI cult recruiter

RickS is offline
 
Join Date: Apr 2010
Location: Brisbane
Posts: 10,584
Quote:
Originally Posted by SkyViking View Post
Great info there Ray, which confirms my own experiences with flats. Thanks for posting yet another thorough analysis
One question, how do you dither flats? With an evenly illuminated flat what is there to dither?
Rolf, my original comment to Ray was that flats aren't the only way to reduce fixed pattern noise. Dithering lights and using rejection should also remove FPN, at least small scale stuff. That's what Ray has added to his model. We're not talking about dithering the flats. I don't know how you'd do that either
Reply With Quote
  #38  
Old 21-07-2014, 12:59 PM
Octane's Avatar
Octane (Humayun)
IIS Member #671

Octane is offline
 
Join Date: Dec 2005
Location: Canberra
Posts: 11,159
Really appreciate your efforts, Ray. Thank you!

H
Reply With Quote
  #39  
Old 21-07-2014, 01:17 PM
SkyViking's Avatar
SkyViking (Rolf)
Registered User

SkyViking is offline
 
Join Date: Aug 2009
Location: Waitakere Ranges, New Zealand
Posts: 2,260
Quote:
Originally Posted by LightningNZ View Post
Depends how you make your flat. If you image a twilight sky then you may pick up some stars. If you image a t-shirt then there may be varying density of threads or whatever across it. If you image an LCD screen like me then they are pixel defects or backlight inconsistencies.

So I move my scope around in front of the screen to "dither". It has the effect of averaging out the inconsistencies.

Cheers,
Cam
Yep that's normal dithering of the incoming light source. But since here we're dealing with (ideally) uniform incoming illumination, dithering (as in moving the telescope) should not be relevant. For sky flats then yes, it is relevant to dither as much as possible to remove stars, but that is another issue.

As I understand it, Ray simply manipulated the resulting flat frames by shifting the central portion of the recorded signal around using Photoshop. But this also shifts the noise, which would be the opposite of what happens during normal dithering. So just wondering how practical that is? Effectively it simply blurs the noise component of the integrated flat, so how about instead applying a Gaussian blur to the integrated master flat frame and thereby removing all noise contribution from it? If the blur is at a sufficiently small scale then any pattern noise would still be present, which is the really nasty noise component that only flats can remove properly.

I did some S/N analysis while integrating data for my new Antennae image and found that a stack of images calibrated with darks and flats resulted in a combined image with around 10-15% lower S/N. So not using calibration at all would have yielded a superior final image. My problem is that the KAF-8300 chip does exhibit some large scale horizontal and vertical pattern noise which I can only get rid of by using flats.
I'm thinking that ditching darks altogether and using a Gaussian blurred master flat frame should give a better result. I'm not worried about the dark noise as both that and hot pixels are dealt with very efficiently by statistical rejection, i.e. PixInsight Winsorized sigma clipping etc., so dark frames are really not necessary. I need to experiment more with this, but the above is my current thinking.
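In case it is useful to anyone, a minimal sketch of the blurred-master-flat idea (the file names and the blur sigma are hypothetical, and it assumes astropy and scipy are to hand). The trade-off is that smoothing also removes the flat's ability to correct pixel-level sensitivity differences and small dust motes.

Code:
import numpy as np
from astropy.io import fits
from scipy.ndimage import gaussian_filter

# hypothetical file name - substitute your own master flat
flat = fits.getdata("master_flat.fit").astype(float)

# smooth away pixel-scale noise in the master while keeping the large-scale
# structure (vignetting, horizontal/vertical banding); sigma is a guess to tune
smoothed = gaussian_filter(flat, sigma=3)
smoothed /= smoothed.mean()                 # keep the flat normalised to unity

fits.writeto("master_flat_smoothed.fit", smoothed.astype(np.float32), overwrite=True)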
Reply With Quote
  #40  
Old 21-07-2014, 02:04 PM
strongmanmike's Avatar
strongmanmike (Michael)
Highest Observatory in Oz

strongmanmike is offline
 
Join Date: May 2006
Location: Canberra
Posts: 17,142
Seems confirmed, my non-use of darks and flats (with the SXH694 at least) is quite justified after all... and doing so doesn't make me a pariah

Well researched Ray

Mike
Reply With Quote