To drizzle, or not to drizzle: that is the question.
Hi all,
Since it is cloudy tonight, I thought I'd experiment with the little data I have. It seems that drizzle integration works nicely even with 12 subs, as long as the signal in each sub is not too weak, but more subs is always better of course. I may drizzle all three channels for the current project, once I collect enough data.
According to NASA (JPL), 9 subs is statistically enough for drizzle to be effective.
Drizzling is good though; even if it doesn't bring out extra detail, it does a better job with noise sampling and rejection, plus it makes deconvolution more effective.
It doesn't make any improvement when I try it in PixInsight, nothing like your result. Maybe because I usually use 200-500 DSLR frames? Scaling my integrated file gives me the same result as drizzling to the same scale.
E.g. I usually integrate 3 or 5 frames to make a cleaner temporary integration, then crop this around my target and register/integrate/drizzle with this as my target. But I got no improvement I could notice between the two final integration images: no extra detail, no smaller stars appearing, no noise difference. I did notice some bright stars obtained vertical artifacts (not visible in the frames or anywhere else, not lens artifacts or star streaks, but looking like a 1-pixel line about 20 pixels long when viewed in difference mode), definitely a processing artifact. Admittedly it's been a while since I tested, since drizzle integration was added to PI.
Yes, I only use PixInsight for all data processing these days. Standard settings for x2 drizzle integration. Scaling does give similar results to drizzle, but drizzle is more sophisticated (and more CPU-intensive) than scaling, and because of this it can recover extra detail from dithered subs when optics and seeing are both decent, guiding is good and, of course, when one is undersampling.
Last night added more OIII data and now have 7 hours in total (3nm filter at f/4.5 and 1.6 arcsec pp). To my eye, drizzle has done a better job at pixel rejection than a few algorithms I tried in PI. I have not been dithering enough, only once every 3 or so subs.
Throughout the night, as the telescope was moving away from the CBD, mean ADU value dropped by 600 between the first and last sub. That's huge! Getting decent signal in OIII is a real challenge from where I live.
The image on the far right is a single 15-minute sub. The next one along is an integration of 28 15-minute subs without rejection, then with rejection, and the far left one is the drizzled data. Looks okay to me.
Yep, I drizzle everything now. At 530mm at f/5, you end up with crunchy stars otherwise.
It takes a while, especially with the number of sub-exposures I often take for my subjects, but the end result is well worth the time.
So if, say, my seeing conditions are very good,
I will get 2 arcsec FWHM.
The Nyquist sampling theorem says you need to sample at double that rate -
it follows that to sample 2 arcsec you need 1 arcsec per pixel or finer.
Therefore I am slightly undersampled at 1.11.
In theory I can gain something from using drizzle.
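For what it's worth, that arithmetic can be sketched in a few lines of Python (the function name and the 2x rule of thumb are just my own illustration):

```python
# Back-of-the-envelope sampling check, using the Nyquist 2x rule of thumb.

def required_scale(fwhm_arcsec: float, factor: float = 2.0) -> float:
    """Pixel scale (arcsec/px) needed to sample a star of the given FWHM."""
    return fwhm_arcsec / factor

fwhm = 2.0        # seeing-limited FWHM in arcsec
my_scale = 1.11   # actual image scale in arcsec/px

needed = required_scale(fwhm)      # 1.0 arcsec/px or finer
undersampled = my_scale > needed   # True -> drizzle may help
print(f'need <= {needed:.2f}"/px, have {my_scale:.2f}"/px, '
      f'undersampled: {undersampled}')
```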
I certainly noticed a difference - an improvement - using 3 x drizzle
on my last image.
I got that little bit more detail when zooming in on tiny structures.
I was always interested in whether Mike Sidonio could gain anything by using drizzle.
I checked and he's oversampled, so in theory he can't, but
I would like to see the results - if only I could twist his arm.
Allan, instead of having questions unanswered and getting lost in the maths, why don't you follow Slawomir's example and try processing some images with and without drizzle, and see if it works for your setup/combo? Who cares about definitions, it's the result that counts, no?
I have read somewhere that sampling at 1/3 of your usual FWHM is optimal for getting the most detail from data, so I believe your observations are spot on Allan. Drizzle x3 requires more subs/better data than x2, but if it works then why not use it? I like Troy's suggestion of directly comparing different methods visually and by measuring noise/SNR.
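A quick sketch of that FWHM/3 rule of thumb and the effective scale x2/x3 drizzle gives you (function names and numbers are my own illustration):

```python
# The "sample at FWHM/3" rule of thumb, and the output scale after drizzle.

def optimal_scale(fwhm_arcsec: float) -> float:
    """Target pixel scale under the FWHM/3 rule."""
    return fwhm_arcsec / 3.0

def drizzled_scale(native_scale: float, drizzle_factor: int) -> float:
    """Output pixel scale after drizzling by an integer factor."""
    return native_scale / drizzle_factor

fwhm = 2.0                    # typical FWHM in arcsec
native = 1.11                 # native scale in arcsec/px
target = optimal_scale(fwhm)  # ~0.67 arcsec/px
for k in (2, 3):
    print(f'x{k} drizzle -> {drizzled_scale(native, k):.2f}"/px '
          f'(target ~{target:.2f}"/px)')
```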
As for twisting Mike's arm - good luck with that!
Yes - I think the article I pointed to below is a rebuttal
of the 2x Nyquist sampling rule that we've all been using.
3x is considered better than 2x when it comes to sampling.
As for twisting Mike's arm - that was a joke -
I'm not a strongman.
I reckon there is more detail in the close-up 2nd crop using drizzle.
Too hard to compare apples and oranges (processed images, processed differently, different crops). Why not be a little more scientific about it: just process lum, apply identical processing to a drizzled and an undrizzled stack, and compare the same crop at 100%.
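If anyone wants to put numbers on it, here's a rough NumPy sketch (my own, not a PixInsight feature; assumes the two crops are loaded as same-sized grayscale arrays) that compares a signal patch against a background patch placed identically in each image:

```python
# Quantitative crop comparison: signal patch vs background patch.
import numpy as np

def patch_stats(img: np.ndarray, y: int, x: int, size: int = 50):
    """Mean and std of a size x size patch with top-left corner at (y, x)."""
    p = img[y:y + size, x:x + size]
    return p.mean(), p.std()

def snr(img: np.ndarray, sig_yx: tuple, bg_yx: tuple, size: int = 50) -> float:
    """(signal - background) divided by background noise."""
    sig_mean, _ = patch_stats(img, *sig_yx, size)
    bg_mean, bg_std = patch_stats(img, *bg_yx, size)
    return (sig_mean - bg_mean) / bg_std
```

Usage would be something like `snr(drizzled, (240, 400), (20, 20))` against `snr(plain, (120, 200), (10, 10))`, with the coordinates scaled by the drizzle factor so the patches cover the same sky.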
At approx 2 arcsec/px on my current setup, I then took the time to compare images more carefully in PI (based on some quality data from Astrofest) and realised I was almost always happy to be (quote) "trading off resolution against SNR".
What I find fascinating is that the background noise often also appears more pleasing, even though you know you theoretically have less SNR (in addition to smoother stars, of course). I believe that is shown in that last example.
(Dammit - Rick covered and described that better too: "Drizzle introduces correlated noise that isn't as visually intrusive.")
Nice illustration Rob, stars look nicer in drizzled data for sure.
I'm wondering if purposely putting together a system that reasonably under-samples and then drizzling the data is the ultimate solution for the amateur astrophotographer.
As I understand it, at the same f-ratio (and same QE + read noise), an undersampling system will go deeper in the same time than a well-sampled system (say 2"pp at f/6 vs 1"pp at f/6), so in the end one will have higher SNR and similar detail with undersampled and drizzled data compared to well-sampled data, given equal exposure time. Or am I missing something here?
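If I've got the photon budget right, the scaling behind that claim can be sketched like this (illustrative numbers only; sky-limited case, read noise ignored):

```python
# At a fixed f-ratio, signal per pixel scales with the sky area each pixel
# covers (~ scale^2), and shot noise goes as sqrt(signal), so per-pixel
# SNR scales linearly with the pixel scale.

def relative_pixel_snr(scale_arcsec_pp: float, ref_scale: float = 1.0) -> float:
    """Per-pixel SNR relative to a reference scale, sky-limited regime."""
    return scale_arcsec_pp / ref_scale

# 2"/px vs 1"/px at the same f-ratio and exposure time:
print(relative_pixel_snr(2.0))  # twice the per-pixel SNR
# Equivalently, the coarser system reaches the same per-pixel SNR
# in (1/2)**2 = 1/4 of the exposure time.
```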
Quote:
Originally Posted by Allan
Yes - I think the article I pointed to below is a rebuttal of the 2x Nyquist sampling rule that we've all been using. 3x is considered better than 2x when it comes to sampling.
There's nothing wrong with Nyquist sampling at 2x but the conventional use (sampling a single analog signal) isn't what we're doing.
Quote:
Originally Posted by Slawomir
I'm wondering if purposely putting together a system that reasonably under-samples and then drizzling the data is the ultimate solution for the amateur astrophotographer.
That's what we're doing with the C300 at SRO (image scale 1.26 arcsec/pixel with seeing often down to 1 arcsec) and what I had in mind when I matched the AP140EDF with 9um pixels (2.4 arcsec/pixel with typical poor SEQ seeing.)
I like this approach because it gives you better resolution than the image scale, a big FOV and images that you can print really big.
Quote:
Originally Posted by Slawomir
As I understand it, at the same f-ratio (and same QE + read noise), an undersampling system will go deeper in the same time than a well-sampled system (say 2"pp at f/6 vs 1"pp at f/6), so in the end one will have higher SNR and similar detail with undersampled and drizzled data compared to well-sampled data, given equal exposure time. Or am I missing something here?
Alas, it's not magic, Suavi. The increase in resolution does come at the expected cost in reduced real SNR. It's just that the apparent SNR is better because of the way that Drizzle correlates noise. It effectively does noise reduction. You can get the same effect by applying noise reduction to the data from a well-sampled system.
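A toy demo of that correlated-noise effect (my own NumPy sketch, using simple neighbour-averaging onto a finer grid as a crude stand-in for drizzle's drop overlap):

```python
# Interpolating noise onto a finer grid lowers the apparent per-pixel
# noise and correlates neighbouring pixels - without adding information.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=10_000)

# "Upsample" x2 by duplicating samples and averaging each with its neighbour.
up = np.repeat(noise, 2)
up = 0.5 * (up + np.roll(up, 1))

print(f"original sigma: {noise.std():.2f}")  # ~1.0
print(f"apparent sigma: {up.std():.2f}")     # smaller -> looks cleaner
lag1 = np.corrcoef(up[:-1], up[1:])[0, 1]
print(f"lag-1 correlation: {lag1:.2f}")      # neighbours now correlated
```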