View Full Version here: Flat noise reduction
Shiraz
14-06-2014, 05:13 PM
Hi
Just finished developing a model of the way in which flats introduce noise into broadband images. Also did a validation using some real data, to be reasonably sure that the model describes reality. This analysis has nothing at all to do with removal of vignetting or dust donuts - just noise reduction. If flats are good enough to fix noise they will take care of other stuff.
Real world results in the first figure show the way in which image noise varies with total flat signal - these four images were obtained from the same sequence of real subs (totaling just under 4 hours with dark sky), but with different total flat exposures. The data was processed four times, using flats whose total signal was 0.05x, 0.5x, 2.7x and 11x the total sky signal. The images show the effects of noise reduction as the total flat exposure goes up. Note that the noise shows up as lines since the imaging system had some drift - this makes any fixed pattern noise stand out well and shows how this form of noise affects bright parts of the image as well as the background (it really is a nasty form of noise). Clearly there is some advantage in having a reasonably long total flat exposure - the fourth image has the flat exposure recommended by the model and would need more light exposure to reduce noise further (and of course longer flat exposure).
The real world SNR results from the 4 images are compared with model results in the second figure. Model data is shown as a dashed line and the four measured results as diamonds. The model results are "worst case", so the agreement between model and real world seems convincing enough (at least to me) for the model to be used to derive some general guidance on how to expose flats. It shows that, if you are doing broadband imaging, flat induced noise will be negligible if you get at least 10x as many flat electrons as you get total sky electrons. This very simple rule will apply under all circumstances and can be used to replace ad-hoc guessing, rumor or one-off observations. Since the model is "worst case", you will probably be able to get by with fewer flats most of the time, but flats are easy to obtain, so it makes sense to use enough to ensure that you will not be troubled by flat noise in any circumstances.
If you want an even easier rule of thumb that will get you close enough, expose your light subs properly (eg about 2,000-3,000 ADU in the sky background) and then make sure that you take at least one flat of 20,000-30,000 ADU for each light sub. If you take sky flats with widely varying exposures, adjust the number of flats to maintain the total ADU over the set.
The graph also shows the measured SNR for no flats (a single dot on the y axis) - with ~4 hours of lights and a very low noise CCD (in this case an ICX694), there is not much point in using flats at all. This is definitely not the case for much longer total exposures, or for CCDs with a little more fixed pattern variability (eg some of the Kodak ones) - then flats are necessary.
Thanks for looking. Regards ray
Added: for anyone looking for the last skerrick of SNR, the 10x rule gets you within 5% of the SNR you would get with infinite flats - but you can always squeeze a little more SNR by using more flat data.
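(For anyone who wants to see where a figure like that 5% could come from, here is a guess at the underlying arithmetic - my assumption, not necessarily the actual model: treat the flat's shot noise as a multiplicative error on a sky-dominated signal, added in quadrature with the sky shot noise, so SNR relative to perfect flats falls as 1/sqrt(1 + Nsky/Nflat).)

import math

# Assumed relationship (a sketch, not necessarily the exact model):
# sky shot noise ~ sqrt(N_sky); flat-induced noise ~ N_sky / sqrt(N_flat);
# added in quadrature, SNR relative to "infinite flats" = 1/sqrt(1 + N_sky/N_flat).
for ratio in (0.05, 0.5, 2.7, 10, 11):
    snr_fraction = 1.0 / math.sqrt(1.0 + 1.0 / ratio)
    print(f"flat/sky = {ratio:>5}: SNR = {snr_fraction:.1%} of the perfect-flat limit")

At a ratio of 10 this gives about 95%, which lines up with the "within 5%" figure above.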
Octane
14-06-2014, 08:19 PM
Excellent work, Ray. This is greatly appreciated.
H
Shiraz
14-06-2014, 09:59 PM
Thanks H. It's fun to try to find a theoretical basis for some of the stuff we do - hope it is of some use. regards Ray
Octane
14-06-2014, 10:05 PM
I got my RoboFocus back from the US. It's all fixed. But, do you think there's been a break in the clouds and rain since it arrived? :)
I'm holed up inside going stir crazy; playing around with Skytools!
H
gregbradley
14-06-2014, 10:09 PM
Thanks for that Ray, very helpful. The numbers on those images - what do they represent?
Greg.
Octane
14-06-2014, 10:12 PM
The ADU of the flat vs the ADU in the signal of the light.
Hence, the 20,000-30,000 ADU value for the flat vs the 2,000-3,000 ADU for the light.
About 10x difference.
The 0.05 would represent 1,000-1,500 ADU flat, and, so on.
H
trent_julie
14-06-2014, 10:37 PM
Ray,
I got something out of this thanks for posting!
Trent
Shiraz
14-06-2014, 11:27 PM
Thanks very much for the feedback guys. - H, thanks for the explanation ....clouds, arrgh.
Greg, the model shows that, for the flat noise to be under control, the sum of the ADU in the flats has to be >10x the sum of the sky ADU in the lights.
The four images show what I got from a single test set of lights (about 4 hours of subs), after I calibrated and stacked them four times using four different sets of flats. The sets of flats had total ADUs of 0.05x, 0.5x, 2.7x and 11x the total of the sky ADU in the lights - the number on an image shows which set of flats was used. These real life results suggest that the model is not complete crap, since the fixed pattern noise is visible in the images where the flat/sky ratio is 0.05x, 0.5x and 2.7x, but has effectively gone in the image where the flats sum is 11x the sky sum - just as the model says it should. The graph just summarises these results in numerical SNR form, rather than imagery.
To put some numbers on it, if you choose to take 50 light subs and the sky in each is 2,000 ADU, you will have a total of 100,000 sky ADU. You will need 10x that much (or 1,000,000) in the flats, so if your flats each have an ADU of 20,000, you will need at least 50 of them to get to the required 1,000,000 ADU total.
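(That arithmetic is simple enough to put in a couple of lines - the numbers below are just the example from this post, wrapped in a small script for anyone who wants to plug in their own values.)

import math

def flats_needed(n_lights, sky_adu_per_light, flat_adu_per_flat, ratio=10):
    """Smallest number of flats whose total ADU is >= ratio x total sky ADU."""
    total_sky_adu = n_lights * sky_adu_per_light
    return math.ceil(ratio * total_sky_adu / flat_adu_per_flat)

# 50 lights with 2,000 ADU sky background, flats exposed to 20,000 ADU each
print(flats_needed(50, 2000, 20000))   # -> 50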
There is no fixed number for how many flats you should take - that will depend on how long you image for, how bright the sky is and how you expose the flats. But, provided you keep the total flat ADU to at least 10x the total sky ADU in the lights, the flat noise will always be negligible. Or, as a straightforward rule of thumb that will work well enough (for LRGB imaging), if you keep your lights around 2,000-3,000 ADU and your flats around 20,000-30,000 ADU, then you should:
******
Take at least one well exposed flat for every well exposed light sub that you take.
******
I know that I tend to add confusion when trying to explain things - I'd be grateful if anyone would point out inconsistencies or lack of clarity in what has been said. Although the model itself is slightly arcane, the guidance that it provides is remarkably straightforward - I hope that is coming across.
regards ray
PRejto
15-06-2014, 08:30 AM
Hi Ray,
Thanks so much for this work. I think it will really benefit my images if I follow this rule of 10X!!
Question. Some time ago someone on this forum suggested that I take flats at higher adu than the typical 20-30k adu. The claim was that fainter stuff would be visible. So, say if I aimed for 40,000 adu flats what does this do? And, in your opinion is there an advantage or disadvantage to such an approach?
Thanks!
Peter
Shiraz
15-06-2014, 09:15 AM
Thanks Peter - yes, I am going to have to go back over some past data where I mucked up the flat calibration :P.
As far as I can tell, the only two conditions for flats are that they are taken in the linear region of the CCD and that they sit well above the read noise. Depends on the CCD, but those conditions should pretty much always be met if the ADU is between 10,000 and 40,000. In terms of the final result, 40 flats of 10,000 ADU should be identical to 10 flats of 40,000 ADU. The only ways I can think of that this could break down are if you use stacking software that only has 16 bit fixed point internal data representation (I don't think anything modern is in that category), or you use median stacking for the flats (but why would you?). regards ray
PRejto
15-06-2014, 09:24 AM
Thanks Ray!
So, if I understand correctly (assuming 40K adu are still "linear") using 40,000 adu flats becomes a rule of ca 5X (or ca 1 flat/2 images)
Peter
Merlin66
15-06-2014, 09:47 AM
Ray,
Why not median combine?
It's only a scaling factor and doesn't change the SNR?
How do you measure your SNR?
I use average combine for my spectroscope lights and median for the darks...I haven't found flats very useful with the ATiK314 in this application.
Shiraz
15-06-2014, 09:50 AM
10x still applies. You just need half as many 40,000 ADU flats to get to 10x, so take one 40,000 ADU flat for every 2 subs (of 2,000 ADU).
gregbradley
15-06-2014, 10:18 AM
Thanks H and Ray for explaining what the numbers on the images meant.
Does this model assume a particular method of combining? Usual practice is a simple mean or average combine. I have started using sigma reject combine, but if I understand the statistical maths involved, that method really requires a substantial number of flats to be effective, as it tries to identify outliers that fall outside the statistical average or norm. You can't do that with 3 subs.
I sometimes find flats that are too bright are harsh and damage the lights.
Also I assume there is no bias or flat dark subtract here? I have found no flat dark and using a separate bias gets the best results.
Greg.
gregbradley
15-06-2014, 10:30 AM
I just checked a few subs I have. With my Microline 8300 the background seemed to be around 3300 (I simply moved the cursor around the image in CCDstack and the ADU is displayed in the bottom right corner).
For my Proline 16803 it was more like 2100.
So if I took 20 luminance subs like that with the Microline that is 20 x 3300=66,000. 10X rule gives me 10 x 66,000 = 660,000. If my flat subs are around 30,000 ADU then I need 660,000 divided by 30,000 =22 flats.
So an approximate rule there would be 1 flat for every light. This is all 1x1 binning.
For the Proline though it would need about a third fewer, so say 15 flats for every 20 lights.
I did approximately 12 flats in my last imaging run.
But if I increase my subexposure length to 30 minutes I'll have to measure the background ADU (not sure if it increases with exposure length). It may mean longer exposures require fewer flats?
Another argument for longer exposures so long as bright areas don't blow out?
Greg.
Shiraz
15-06-2014, 10:31 AM
Hi Ken.
As I understand it, median combine will pull out the single value that is the median of all of the inputs - it cannot have a fractional ADU value, so the scaling factor will be subject to quantisation noise.
For SNR, the basic question is "of what?". To fix a consistent reference for the modelling and measurements, I measured the average signal of a dim part of the galaxy to be 1/10 the sky background on one of the lights. Since the sky and the galaxy did not change much during the imaging run, I thereafter used 1/10 sky as the Signal, so the SNR values refer to the chosen part of the galaxy. I could choose some other reference point and all of the SNR results would be scaled, but the same picture would emerge when comparing model and test data. With a dim target, the noise is totally dominated by the sky, so I measured the RMS variability of a featureless part of the sky to find the noise. I turned off all of the smart bits while stacking in PI (to get an undoctored result) and used the pixel stats function of Nebulosity to do the measurements.
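(For anyone wanting to repeat that kind of measurement outside Nebulosity, something like the numpy sketch below would do the same job - the file name and region coordinates are placeholders, not the ones I actually used.)

import numpy as np
from astropy.io import fits

img = fits.getdata("stacked_result.fits").astype(float)   # placeholder file name

sky_patch = img[100:160, 200:260]      # featureless, star-free background region
background = sky_patch.mean()
noise = sky_patch.std()                # RMS variability of the background

signal = 0.1 * background              # fixed reference: 1/10 of the sky background
print(f"background={background:.1f}  noise={noise:.1f}  SNR={signal / noise:.2f}")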
Interesting observation on the 314. I would guess that the Sony chip in the Atik has very low fixed pattern variability, so flats will not help unless you expose for very long periods of time.
Regards Ray
Shiraz
15-06-2014, 10:43 AM
Hi Greg.
The assumption is average combine with floating point representation. It is assumed that outliers are taken care of using some non-linear technique that does not interfere significantly with the averaging process - sigma rejection would fit the bill.
This analysis does not include darks - it is assumed that dark calibration is perfect.
"So an approximate rule there would be 1 flat for every light. This is all 1x1 binning. For the Proline though it would need about a third fewer, so say 15 flats for every 20 lights."
why not stick with "1 flat for every light" - that will be close enough.
If you choose to deliberately overexpose the subs by increasing the exposure length, the sky background will increase in proportion to the time - you will still need the same number of flats to get to 10x the total signal, so there is no argument in favour of longer subs. However, the simple rule of thumb that you need one flat for every light will not apply if you deliberately overexpose - you will then need more than one flat for every light. The basic idea makes intuitive sense - if you expose for longer on the lights you need to expose for longer on the flats. It doesn't matter how you go about doing it - just how long the total sums of the exposures are.
"I sometimes find flats that are too bright are harsh and damage the lights" - I don't understand what this means Greg - what sort of damage do you see?
regards ray
gregbradley
15-06-2014, 12:49 PM
I'd have to look over past calibrations. Mostly with the CDK, which has been the touchy one to flat out in the past. The APOs are a piece of cake and very forgiving. I don't have the exact and precise steps I took at the time, so forget the comment as I may not be able to replicate it, but I would still say that too bright a flat could be damaging. After all, you are dividing the values into the light to even them out, so too high a number from the flat could divide too savagely?? I could be wrong here.
Greg.
Shiraz
15-06-2014, 02:13 PM
OK. I guess you might run into problems if you have to correct significant vignetting and the flats are near the non-linear region - guess that would depend on the software.
Screwdriverone
15-06-2014, 02:52 PM
Hi Ray,
Greg mentioned he uses the cursor in CCDStack to find the ADU value, although, as I use an Atik as well and am a bit of a noob still when it comes to taking flats with my light box, what do you suggest I use to determine the ADU value of the fits files? I think it works in Nebulosity when I debayer the fits file and go to the info page, but I am not confident this is correct?
Any tips on determining the correct ADU per flat would be of great help, as I find some flats (say 0.5 sec with my light box) give me a histogram in Nebulosity between say 18,000 and 30,000 so I guess that is somewhere in the 24,000 range for ADU?
Is there an easier way?
Thanks
Chris
Shiraz
15-06-2014, 04:26 PM
Hi Chris. there is no need for extreme precision. Flats in the 18,000-30,000 range should be fine if you take one for each light. I think that Nebulosity allows you to read out the ADU value under the cursor after you have taken an image, which should be fine for determining average flat exposure. The other thing that would work OK is to look at the real time histogram display (normally in the top right corner) and set the flat exposure so that the main peak is about the centre of the histogram.
From memory, your Atik has a Sony chip, so bear in mind that you might be able to get by without flats for short to medium exposures if your sky is OK. Suggest that you try processing one of your image sequences with and without flats to see if they are worth the effort for your system and the imaging you do - although of course, you will need flats if you must correct vignetting or donuts. Regards Ray
Merlin66
15-06-2014, 04:59 PM
Ray,
help me here....
Neglecting vignetting/ shadowing/ fringing (the usual target for flats...)
Why would dividing by what I see as an "evenly illuminated but random pixel distribution" image improve the SNR????
What is it actually doing to the signal?
For AP I can see the background "graininess" caused by the low SNR.....
but I can't see how dividing by basically a "constant" is going to lift the target signal above the background and effectively add to the SNR???
Shiraz
15-06-2014, 05:24 PM
Hi Ken - hope this makes sense.
Ignoring read and dark noise, there are 2 other major noise components - the shot noise from the sky and the fixed pattern variations in pixel sensitivity that modulate the sky and target signal and give you additional noise (fixed pattern noise, FPN). You can get rid of the FPN by using flats, which compensate for the variations in pixel sensitivity - but the flats have to be perfect to completely remove FPN. The analysis shows how much flat signal you need to get the flats close enough to perfect - and thereby to ensure that the noise introduced by the flats is submerged under the sky shot noise and not a significant component of the total noise.
Put another way, evenly illuminated flats are not constant at the pixel scale - they have embedded graininess due to the pixel-level fixed pattern variability in the sensor sensitivity. The lights have exactly the same graininess, so you can use the flats to compensate for the fixed pattern graininess in the lights.
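(A toy simulation of that statement, with made-up numbers - 1% pixel-to-pixel sensitivity variation on a sky-limited stack - shows how dividing by a well-exposed flat pulls the FPN back out:)

import numpy as np

rng = np.random.default_rng(1)
npix, sky_e = 100_000, 10_000                 # pixels, sky electrons per pixel (stack)
prnu = 1 + 0.01 * rng.standard_normal(npix)   # 1% fixed pattern sensitivity variation

light = rng.poisson(sky_e * prnu).astype(float)   # sky shot noise + FPN
flat_e = 10 * sky_e                               # flat exposed to the 10x rule
flat = rng.poisson(flat_e * prnu) / flat_e        # normalised master flat

print("shot noise only :", np.sqrt(sky_e))        # ~100
print("no flat         :", light.std())           # ~141: FPN dominates
print("flat applied    :", (light / flat).std())  # ~105: FPN essentially gone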
You have a Sony chip with low inherent FP variation, so this is all a bit academic for your system unless you do very long exposures or have bright sky. However, the majority of cameras out there have Kodak sensors and they need flats, since they have significantly higher FP variation.
LightningNZ
16-06-2014, 05:03 PM
Hi Ray, really excellent stuff you've done here regarding correct use of flats.
I just want to make some comments regarding the use of the median vs the mean.
If we have 10 subs and for a given pixel the ADU values are:
sub 1 = 13212
sub 2 = 15234
sub 3 = 12424
sub 4 = 14243
sub 5 = 14234
sub 6 = 14700
sub 7 = 14532
sub 8 = 0
sub 9 = 12430
sub 10 =0
The mean is 11100.9 and the median is 13723.0.
The median is the value that has an equal number of smaller and larger values flanking it. Because we have an even number of subs, the median value is actually (13212 + 14234) / 2 = 13723.
So, which one of these is more "accurate"? The median is in this case, because the pixel values of 0 are clearly rubbish. The mean is said in this case to be "biased". The median is said to be "robust" to outliers - that's why people use it.
Also note that if sub 1 had a value of 13213 then the mean would be 11101.0 and the median would be 13723.5. The median can be fractional.
Edit: I should add that the "sample mean" and the "sample median" (what you're calculating) are both estimates of the "population mean", which is the average you would get if you took an infinite number of subs. If that were the case the number of outliers you had would be irrelevant because they would be overwhelmed by true signal, in which case (for a symmetric noise distribution) your mean and median would be exactly equal.
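(The same numbers, in a form anyone can paste into numpy to play with:)

import numpy as np

subs = np.array([13212, 15234, 12424, 14243, 14234,
                 14700, 14532, 0, 12430, 0], dtype=float)

print(subs.mean())            # 11100.9 - dragged down by the two dropouts
print(np.median(subs))        # 13723.0 - robust to the outliers
print(subs[subs > 0].mean())  # 13876.125 - mean after rejecting the zeros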
Hope this is helpful,
Cam
RickS
16-06-2014, 06:28 PM
Cam: using a median combine instead of mean will give you a degree of implicit outlier rejection, but it comes at a cost of approximately 20% less improvement in SNR (for larger sets of images - results are worse for small sets). You'll almost certainly get a better overall result using an average combine with an explicit rejection algorithm.
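(That ~20% figure is easy to check with a quick simulation; for Gaussian noise the standard deviation of a median stack tends towards sqrt(pi/2) ≈ 1.25x that of a mean stack, i.e. roughly 20% less SNR gain. A throwaway check:)

import numpy as np

rng = np.random.default_rng(0)
subs = rng.normal(1000.0, 50.0, size=(32, 200_000))   # 32 noisy "subs", no outliers

sd_mean = subs.mean(axis=0).std()
sd_median = np.median(subs, axis=0).std()
print(sd_median / sd_mean)   # ~1.25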
Ray: I haven't had a chance to think about your model much but at a superficial level I was wondering whether dithering reduces FPN. Have you already considered this?
I only recently got back from a month away. When I catch up I will do some experiments with my large collection of narrowband flats and see if my empirical results fit your curve.
Cheers,
Rick.
LightningNZ
16-06-2014, 07:02 PM
Rick, do you have a link or some stats to back this up? 20% is a big amount. While I can see that mean with sigma rejection should be superior, I find the 20% value a little hard to fathom.
For instance in my example above, leaving out the bottom two values of 0 (obvious outliers) would result in a mean of 13876.125 - not much different at all from the median value.
-Cam
RickS
16-06-2014, 07:49 PM
Cam,
Using the median effectively ignores much of the data so I don't find it difficult to understand intuitively. One of the more complete analyses I've seen is in the PI doc: http://pixinsight.com/doc/tools/ImageIntegration/ImageIntegration.html
See the section on median combination.
Cheers,
Rick.
Shiraz
16-06-2014, 08:12 PM
Thanks for that Cam - and Rick for the response
Hi Rick - yes, dithering should decorrelate the FPN between subs, so it should behave as normal random noise - still to validate that bit of it, but if it works that way, it should make a large difference to the required number of flats.
The model is currently based on fully correlated FPN across the subs, so it is a worst case. I posted the 10x rule of thumb for 2 reasons:
1. a rule of thumb developed in this way should apply to any image gathering technique - it may be overkill for dithering, but that is not a disaster,
2. my test data was dithered in 1 axis, but was still not too far from the "worst case" result and the FPN was still visible in all but the 11x image - so there is presumably some partial correlation left? Not sure what this means yet, but it added weight to the need for a conservative rule of thumb.
Narrowband flats should be OK for testing, but the rule of thumb analysis is based on LRGB imaging where the sky is dominant.
Looking forward to seeing some more real world results - I have only tested it with one set of subs and 4 sets of flats to date, but I have also gone back and improved some earlier images by using more flats.
regards ray
LightningNZ
16-06-2014, 09:41 PM
Thanks for the link Rick, the explanation there is gold. If no one minds too much, I'll quote the last paragraph because it states the trade-off between the mean and median combine methods very clearly.
My misunderstanding was about the limit on the standard deviation of the median - for Gaussian noise it never gets closer than about 80% of the noise reduction you get from the mean (the median's SD ends up roughly 1.25x that of the mean) - which only goes to show I didn't understand the maths properly, because I knew this once but had forgotten it.
Cheers,
Cam
RickS
18-06-2014, 09:34 AM
Cam,
I would have paid more attention in those Uni stats lectures if I'd realised the info would be useful one day :lol:
Cheers,
Rick.
LightningNZ
18-06-2014, 01:30 PM
Same here... but I got really good at playing pool in the Rec centre. :lol:
Shiraz
20-07-2014, 11:51 PM
At Rick's suggestion, have finally completed and validated the model for predicting the effects of flat noise when dithering. The attached graphs show how much difference dithering makes - to the extent that one would have to have a very good reason not to use it.
However, these results are for a perfect world where the only noise is shot noise and the dithering does not allow any noise correlation between frames. Some real world test data, for an image sequence that was imperfectly dithered in one axis only, are also graphed. These probably indicate that real world results are sensitive to dither efficiency. In addition, I found during the validation process that my master calibration frames have some excess (read and dark?) noise that has got into the flats, even though the camera is inherently quiet and 20 darks and 40 bias frames were used in the calibration. So the real world results were somewhere between the "model" dither and non-dither predictions.
The second image is a startling example of how effective dithering is. This shows one of the model validation images resulting from stacking 15 evenly illuminated subs after calibration with a fairly noisy master flat. The central zones in the test subs were clone stamped by hand to slightly offset the data and thereby simulate dithering in the central region. The outer region was not dithered. After stacking, the noise is vastly reduced in the central zone, even though the cloned regions cannot be distinguished at all in the individual sub images.
to sum up:
- Use as much flat data as you can - more is always better.
- Use dither if you possibly can.
- The rule of thumb that you should take one flat for every light will be fairly conservative if you dither, but it is probably still worthwhile in the real world where dithering may be imperfect and there can be residual read and dark noise embedded in the flats.
- if you have a CCD with low FPN and you do not need to correct for vignetting or dust, then consider not using flats at all - just dither.
Now to try to work out where that excess flat noise came from....
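(A crude numerical illustration of why dithering helps so much - np.roll stands in for the dither offsets, the numbers are made up, and no flat is applied at all:)

import numpy as np

rng = np.random.default_rng(2)
npix, n_subs, sky_e = 50_000, 15, 2_000
prnu = 1 + 0.01 * rng.standard_normal(npix)       # fixed pattern, never calibrated out

static = np.mean([rng.poisson(sky_e * prnu)
                  for _ in range(n_subs)], axis=0)
dithered = np.mean([np.roll(rng.poisson(sky_e * prnu), rng.integers(npix))
                    for _ in range(n_subs)], axis=0)

print("no dither:", static.std())    # FPN is identical in every sub and survives the stack
print("dithered :", dithered.std())  # FPN is decorrelated and averages down like shot noise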
RickS
21-07-2014, 09:51 AM
Thanks for that, Ray. Very interesting.
I ran some comparisons the other night with varying numbers of flats and got inconsistent results. Will have to spend more time figuring out what was going on...
Cheers,
Rick.
Shiraz
21-07-2014, 10:16 AM
interesting observation Rick - look forward to seeing what you get.
If of any interest, I found that it was quite difficult to get consistent enough data to verify the model - got the impression that everything (bias, darks, processing) has to be near enough to perfect to get consistent results. Since PI appears to use noise minimisation all along the way, I suspect it may be difficult to get consistent results with that software no matter what the data is like - the software will be trying to help by reducing noise even when that noise is what you actually want to measure.
The other point is that, if you use dithering, the flat noise in the final result varies as a function of the ratio of total flat signal to sky signal in a single sub rather than the total sky signal.
SkyViking
21-07-2014, 11:06 AM
Great info there Ray, which confirms my own experiences with flats. Thanks for posting yet another thorough analysis :thumbsup:
One question, how do you dither flats? With an evenly illuminated flat what is there to dither?
LightningNZ
21-07-2014, 01:38 PM
Depends how you make your flat. If you image a twilight sky then you may pick up some stars. If you image a t-shirt then there may be varying density of threads or whatever across it. If you image an LCD screen like me then there are pixel defects or backlight inconsistencies.
So I move my scope around in front of the screen to "dither". It has the effect of averaging out the inconsistencies.
Cheers,
Cam
RickS
21-07-2014, 01:53 PM
Rolf, my original comment to Ray was that flats aren't the only way to reduce fixed pattern noise. Dithering lights and using rejection should also remove FPN, at least small scale stuff. That's what Ray has added to his model. We're not talking about dithering the flats. I don't know how you'd do that either :)
Octane
21-07-2014, 01:59 PM
Really appreciate your efforts, Ray. Thank you!
H
SkyViking
21-07-2014, 02:17 PM
Yep that's normal dithering of the incoming light source. But since here we're dealing with (ideally) a uniform incoming illumination, dithering (as in moving the telescope) should not be relevant. For sky flats then yes, it is relevant to dither as much as possible to remove stars, but that is another issue.
As I understand it, Ray simply manipulated the resulting flat frames by shifting the central portion of the recorded signal around using Photoshop. But this also shifts the noise, which would be the opposite of what happens during normal dithering. So just wondering how practical that is? Effectively it simply blurs the noise component of the integrated flat, so how about instead applying a Gaussian blur to the integrated master flat frame and thereby removing all noise contribution from it? If the blur is at a sufficiently small scale then any pattern noise would still be present, which is the really nasty noise component that only flats can remove properly.
I did some S/N analysis while integrating data for my new Antennae image and found that a stack of images calibrated with darks and flats resulted in a combined image with around 10-15% lower S/N. So not using calibration at all would have yielded a superior final image. My problem is that the KAF-8300 chip does exhibit some large scale horizontal and vertical pattern noise which I can only get rid of by using flats.
I'm thinking that ditching darks altogether and using a Gaussian blurred master flat frame should give a better result. I'm not worried about the dark noise as both that and hot pixels are dealt with very efficiently by statistical rejection, i.e. PixInsight Winsorized sigma clipping etc., so dark frames are really not necessary. I need to experiment more with this, but the above is my current thinking.
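(If anyone wants to try the blurred-flat idea, a bare-bones sketch would look something like the lines below - the file names and blur sigma are placeholders, and note that pixel-scale FPN correction is deliberately given up, so it really only makes sense together with dithering:)

import numpy as np
from astropy.io import fits
from scipy.ndimage import gaussian_filter

flat = fits.getdata("master_flat.fits").astype(float)   # placeholder file name

# A small blur suppresses pixel-scale noise in the flat while keeping
# vignetting, dust shadows and large scale pattern structure.
smooth = gaussian_filter(flat, sigma=2)
smooth /= smooth.mean()                                  # renormalise like a master flat

fits.writeto("master_flat_smoothed.fits", smooth, overwrite=True)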
strongmanmike
21-07-2014, 03:04 PM
Seems confirmed, my non use of darks and flats (with the SXH694 at least) is quite justified after all... and doing so doesn't make me a pariah :innocent: :lol: :)
Well researched Ray :)
Mike
Amaranthus
21-07-2014, 03:23 PM
Mike, why not just use Bad Pixel Mapping instead of Darks?
Shiraz
21-07-2014, 04:01 PM
thanks for the comments and discussion.
the demo image was formed when I wanted to isolate noise from signal structure in order to test the predictions of the model against what happens with real image data. I used 15 evenly illuminated lights (no structure) and then calibrated and stacked them as normal, with full flat/bias/dark calibration. When I wanted to investigate dithering, there was a need to move the noise pattern around in the lights so that, when they were stacked without alignment (nothing to align on), the noise would not be correlated from sub to sub - just like you get with dithering, except that here the frame stayed fixed and the noise was moved, rather than the noise staying fixed and the frame moving. It is not a practical method for real image processing, just something I had to use to simulate the effects of dither in featureless light frames.
Rolf, if you use dither, there is probably a lot of merit in removing the pixel scale noise in the flats with spatial filtering. I had been thinking along similar lines of a dual approach to flat fielding, with one heavily smoothed flat from each imaging night to remove vignetting and a one-off separate master flat, prepared with even illumination and a lot of flats, to deal with fixed pattern noise and applicable to all images. Be very interested to hear how you get on with smooth flats to remove vignetting and dither to remove FPN.
Also, if you are getting some additional noise from using full calibration, suggest that you can improve that with more flat, bias and dark data - the model suggests that in all cases, calibration is ultimately the best way to go. However, it also shows that you may need really high quality calibration data to get to the point where there is some extra SNR from calibration. In the end I guess that it comes down to which approach is more efficient and that will depend on the FPN in your camera.
I also suspect that an outstanding problem is that darks contain a subset of warm pixels that generate significant current. You can remove the fixed component of that current by subtraction, but the noise associated with the higher current will remain much more significant than that from normal pixels - ie you may need a lot more darks than conventional wisdom would suggest to get rid of all the dark noise. One possible solution - haven't tried it yet - may be to use aggressive and identical hot pixel replacement to get rid of the worst dark noise sources in lights, darks and flats before doing any other processing. EDIT: as Barry suggests.
Mike - yep, your technique is very powerful - and it is statistically likely that you are not a pariah.
the model is now at the stage where I can probably expand it to include dark and bias. Might be interesting to try, but will probably be quite a job due to the odd statistics of dark noise. Do you think it could be worth publishing the model? - it clearly is not of major importance, but the results are quite interesting and I haven't seen anything like it elsewhere. if so, any ideas where?
strongmanmike
21-07-2014, 04:47 PM
Nice to finally hear :prey: I have been told I am a heretic spreading bad habits :sadeyes: :lol:
I think you should keep going with your analysis, some excellent useful info is being quantified...stuff that lazy bums like me avoid doing :thumbsup:
Mike
strongmanmike
21-07-2014, 04:48 PM
Do you mean for the H694 or the 16803?
Mike
Amaranthus
21-07-2014, 05:08 PM
Either - there is arguably little to be gained over a BPM, and much to be introduced (noise wise) by using Darks. The advantage of a BPM is that it will only fix (interpolate from surroundings) the bad pixels you designate (which you have control over, anywhere from 0 to 100s) and will leave every other pixel clean and untouched.
You really only have to match exposure and take a few frames to get a decent BPM, and even this is not critical like with Darks. In theory you could take a suite of long exposures (e.g. 10+ min), create a (bias-subtracted) Master Dark from these, and then just tune your BPM to match your actual exposure (fewer pixels for shorter subs). Keep the same Master Dark for months then, and forget about temperature matching etc.
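(A rough sketch of the BPM idea for anyone curious - the threshold and the 3x3 median replacement are my choices purely for illustration, not a recipe:)

import numpy as np
from astropy.io import fits

dark = fits.getdata("master_dark_10min.fits").astype(float)   # placeholder file name
light = fits.getdata("light_001.fits").astype(float)          # placeholder file name

# Flag pixels whose dark current sits far above the typical level
med = np.median(dark)
mad = np.median(np.abs(dark - med))
bpm = dark > med + 10 * 1.4826 * mad        # the bad pixel map

# Replace each flagged pixel with the median of its 3x3 neighbourhood
fixed = light.copy()
for y, x in zip(*np.nonzero(bpm)):
    fixed[y, x] = np.median(light[max(0, y - 1):y + 2, max(0, x - 1):x + 2])

print(f"{bpm.sum()} pixels flagged and repaired")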
RickS
21-07-2014, 05:22 PM
Ray,
Have you seen the new PI Superbias process? It uses multiscale analysis to isolate large scale structures and generate a "noise free" master bias from a lower quality master. Perhaps Superflat is next :D
Cheers,
Rick.
Shiraz
21-07-2014, 05:34 PM
Yes, Marc pointed it out.
definitely seems to be related. I got the impression that the basis for the superbias approach is that bias will vary predictably (in a spatial sense) and that any abnormal pixel-to-pixel variation must be down to read noise - and hence removable. In the case of flats, the pixel-to-pixel variation can be FPN, flat read noise, shot noise or (darkshot+readdark+readbias) from the dark cal - this may be much less amenable to filtering. We live in interesting times though and it is good to see some formal optimisation of the processes we use.
Amaranthus
21-07-2014, 05:50 PM
Agree Ray, with caveats. If you use a light box and take really short-duration flats (I have a dimmer switch on mine and so can take nice 50% max ADU flats at a consistent 0.1 sec exposure), then bias subtract from those flats, it seems then well positioned to make a "super", no?
RickS
21-07-2014, 06:53 PM
That won't work with a lot of mechanical shutters, unfortunately.
Shiraz
24-07-2014, 01:31 PM
If very short subs are possible, that sounds like a good approach Barry - gets away from dark correction in making the flats, so lots of noise goes out the window too. Have you tried it? maybe also consider a median filter to remove any residual hot pixel data? Of course, would only work properly if you dither.
Amaranthus
24-07-2014, 01:48 PM
I don't have PI software, so can't try out their superbias process/algorithm (any idea how it works, exactly?). But I definitely only use bias subtraction in my ultra-short flats, since read noise will completely dominate over any (minimal) dark current noise that might arise in 0.1s.
Hot pixels effects should also be really minimal in 0.1s flat exposures, but one could remove them with a BPM and dithering if it was a concern.
RickS
24-07-2014, 02:06 PM
A couple of the more interesting posts on Superbias are here:
http://pixinsight.com/forum/index.php?topic=7286.0
http://pixinsight.com/forum/index.php?topic=7312.0
Shiraz
31-07-2014, 10:20 PM
just tested an alternative method for flat fielding that seems to get around the introduced noise problem. PixInsight has a background estimation algorithm (DBE) that generates a smooth surface representing the sky background. I extracted such a background from a single starfield sub and then applied it as the master flat in the standard calibration process for a complete batch of subs. Worked fine to clear up some minor vignetting and the resultant calibrated data is flat to better than 1% across the frame. And of course there is no dark noise, no read noise, no shot noise and only a little residual bias subtraction noise in this type of flat.
Would not work on dust bunnies, but can possibly provide high quality control of vignetting without introducing much flat noise at all. Of course it will not remove fixed pattern noise, but dithering can take care of that.
Be interested in opinions on this technique - I have only tried it on one dataset, but it worked OK on both the lum and RGB data.
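(DBE itself is a PixInsight process, so purely to show the shape of the idea, here is a stand-in using a low-order surface fit - not DBE's actual algorithm, the file names are placeholders and the star rejection is deliberately crude:)

import numpy as np
from astropy.io import fits

sub = fits.getdata("light_sub.fits").astype(float)   # placeholder; dark/bias already removed
h, w = sub.shape
yy, xx = np.mgrid[0:h, 0:w]
x, y = xx.ravel() / w, yy.ravel() / h

# Fit a smooth 2nd order surface to the background, ignoring the brighter pixels
A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
bg = sub.ravel() < np.percentile(sub, 70)            # crude star/nebula rejection
coeffs, *_ = np.linalg.lstsq(A[bg], sub.ravel()[bg], rcond=None)

synthetic_flat = (A @ coeffs).reshape(h, w)
synthetic_flat /= synthetic_flat.mean()              # normalise like a master flat

fits.writeto("calibrated_sub.fits", sub / synthetic_flat, overwrite=True)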
SpaceNoob
01-08-2014, 10:58 PM
The DBE tool works quite well, ABE too. I did not bother with flats when using the FSQ106 @ f5; using the CDK though, I first started to notice the impact of dust bunnies. DBE did reduce these slightly, but not enough to justify not using flats at longer focal lengths. I also found it useful for battling gradients from street lights and other additive sources via subtraction.
Merlin66
02-08-2014, 10:00 AM
The DBE tool is very popular with solar imagers to help remove the hot spot sometimes found in Ha optical systems
Shiraz
03-08-2014, 05:00 AM
Thanks Chris and Ken - appreciate the feedback.
I had used DBE for both subtraction and division on stacked data, but thought that it could generate synthetic flats from single frames, for use in pre-stack calibration, in the following circumstances:
- Where I wish to combine historical data sets in which the system configuration and target orientation varies between datasets and where flat data may be either minimal or non-existent - flat fielding can reduce the impact of abrupt background changes that cannot be handled post-stack and a synthetic flat may be the only way to do this,
- Where I have nebula images and the DBE process cannot correct the images after stacking - a single sky flat or even a dark sky image of a starfield may be used, along with dithering of the lights, to get close to the results expected from perfect flat fielding without the need for extensive flat data.
The primary disadvantage would be that the synthetic flat can only be valid if the background illumination is consistent over the field of view and of course it will not correct for dust bunnies. It will also not correct for fixed pattern noise. However, I thought that it may be a useful additional flat technique in circumstances where there is insufficient flat data to support a conventional approach. I tried it out by calibrating a recent dataset using only bias and a synthetic flat and it worked very effectively, reducing the gradient in vignetted lights to below 1% with only a single starfield image as the flat data source. dithering took care of dark and fixed pattern noise.
I'm late to the party, but thought worth saying thanks Ray (and all) for a great thread. Also bumping this for anyone else that missed first time around. Take home message (shoot as many flats as lights) rings true.
Shiraz
15-10-2015, 11:13 AM
thanks Rob - nice thought.
And thanks also for emphasising the primary message - use as many flats as lights, if you can. That message got a bit diluted in later discussion which included dithering (reduces the need for flats), but as a failsafe, equal numbers is a reliable approach.
Rod771
15-10-2015, 12:05 PM
Geez, I like the sound of this. Taking flats through the Hyperstar lens with a DSLR and LP filter just sucks.
Ray, was the star field sub used for the synthetic flat exposed for the same time as your lights? And am I right in saying, that you only used one DBE extraction image as master flat and this was effective on a large number of light subs?
Shiraz
15-10-2015, 02:13 PM
yes, it was one of the light subs. The DBE background was then applied just like a normal master flat would be (you must set downsample under model-image to 1 though). It gets rid of vignetting and even helps with the worst effects of large donuts (if you use enough sampling points). It does not do fixed pattern noise reduction, but dither can take care of most of that - it's not as good as a real flat, but still useful if you have problems getting flat data. You don't get a lot of signal in the synthetic flat, but, being a fitted surface, it is essentially noise free. If you do try it, it would be interesting to hear whether it works well enough - I guess there will be issues with a DSLR due to the Bayer filter, but you should be able to find a way round them.
Rod771
20-10-2015, 09:44 AM
Must be my Hyperstar flats curse. Or I just keep ballsing it up, Ray. I applied the synthetic flat (downsampled to 1) but it over corrected the vignetting by quite a lot. I tried both the image cal tool and BPP but got the same result.
At work atm, I can attach a reference later today.
Shiraz
20-10-2015, 05:21 PM
trying to remember what I did...I think that the following is the philosophy used.
Using DBE to extract the flat from a sub will give a flat that includes the bias and dark current - which have to be removed. With a cooled Sony CCD, it was good enough to just subtract a fixed bias value, but with a DSLR, you will probably need to subtract the master dark (which should include the bias) from the chosen sub before doing the DBE background extraction (could either do a dark-only calibration on the single sub that you want to use or just use pixelmath to do the subtraction). Then you will end up with a flat that has no bias or dark, so will not have "calibrate" checked for the flat when you do the final calibration of the subs.
IF the Master Dark includes the Bias THEN
sub - masterdark
DBE
ELSE
sub - masterdark - master bias
DBE
ENDIF
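(The same logic in a few lines of numpy, if that is easier to follow - the function name is made up, and the returned frame is what you would then run DBE on:)

import numpy as np

def prep_for_background_extraction(sub, master_dark, master_bias=None,
                                   dark_includes_bias=True):
    """Return the single sub with dark (and bias) removed, ready for DBE."""
    if dark_includes_bias:
        return sub - master_dark                # dark already contains the bias
    return sub - master_dark - master_bias      # remove dark and bias separately

The extracted background is then used as the master flat, with "calibrate" left unchecked as described above.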
Shiraz
23-10-2015, 12:10 AM
Hi again Rod. went back and found some data where I had used a synthetic flat. The summary image shows an original sub, the synthetic flat from DBE and the calibrated sub, all with similar stretch. Not perfect, but the method does the job OK if you don't have flats and need them - but of course it doesn't correct for pixel-scale sensitivity variations.
Rod771
23-10-2015, 09:44 AM
Hi Ray, Thanks for the support! :) From your images attached I can see it works well. I have been working on it, last night I took a new batch of darks which are a much closer match to the light subs. I'll start again from the calibration of the light sub that I'm using for the flat generation and go from there tonight.
One thing I did notice is that when I used the BPP script the lights were over corrected, but when I used the Image Cal tool on a single image the correction of vignetting was much better, although the image seemed to be a lot noisier?
I have 180 light subs which I want to drizzle, so I want to get this right before I put them all in PI. It takes a while on my PC to crunch through the subs.
Thanks again for your assistance :thumbsup:
Shiraz
23-10-2015, 11:03 AM
I think that BPP generally needs tuning for specific requirements - it is probably great when everything is working as expected, but I would be inclined to stick with imagecal while sorting out problems - it seems closer to the nuts and bolts.
A good set of darks will probably help - using this method, you cannot get good flat compensation without them, because the flat data includes dark current from a full length sub exposure, not a short exposure.
Rod771
23-10-2015, 08:35 PM
It worked, Ray!
Not quite as good as your image sample above but I'll take it and run. DBE should be able to clean up the rest post integration.
Thanks so much , we all benefit from your excellent posts. :thumbsup:
Cheers
Rod
Shiraz
23-10-2015, 10:12 PM
that's excellent Rod :thumbsup:.
did you do anything in particular to get it to work with the DSLR? Also, are the noise levels OK?
Rod771
23-10-2015, 10:50 PM
When using ImageCal to calibrate the light with the new master dark (which is a much better match - light was 27C, master dark ranged from 24C-29C, both 60 sec ISO 800) I typed "raw cfa" input hints in the Format Hints for both Input and Output. This ensures the data is loaded as pure unaltered raw and saved as a grayscale CFA image. This calibrated gray image was used to generate the gray synthetic flat. I then used both ImageCal and BPP to test the flat on the light and it worked on both this time? Noise was fine this time around, I think the new master dark helped a lot.
PI is now slogging through the 180 subs, here's hoping there are no surprises :prey: