Quote:
Originally Posted by David Fitz-Henr
Well, the web reference is useful to explain the main concept of drizzling for those who may not be familiar with how it works. I believe it (drizzling) was used on the Hubble deep field images to regain much of the resolution that is lost due to the undersampling by the WFPC2 camera.
As you imply though, our actual images won't replicate this exact outcome; I believe due mainly to random dithering, high frequency noise, the limited sub frames available, etc. I'm also not clear to what extent the SNR would suffer (per output pixel), since (as I understand it) drizzling is a process that "constructs" a higher resolution image from multiple (lower res) subframes, whereas with simple stacking of n subframes (using, say, the mean of the original pixels) the SNR is increased by sqrt(n). I assume that for sufficient sub frames there will also be some "stacking" of the drizzled output pixels, which will increase the SNR at the higher res pixel size?
So ... given that your system is undersampled it would be very interesting to compare upsized/stacked vs drizzled images from the same set of sub frames if you ever have the opportunity. I believe that drizzle is being incorporated into some of the popular software and may also be available standalone.
What I do is collect many images at 3.08" per pixel. I always use the RBI (residual bulk image) function on the PL16803 camera. This gives far less noise from the residual signal left behind by brighter stars when dithering.
By experiment I have found out how many images are needed for RGB and NB, beyond which there is not much more gain in either signal to noise or resolution enhancement. Of course, if you REALLY need to get a bit of extra S/N and resolution, more images are always better. If one collects data till the end of time the image will still not be perfect!
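To illustrate why the gains taper off, here is a rough numpy sketch of the square-root-of-N behaviour (the single frame SNR figure is made up for illustration, not taken from my data):

import numpy as np

# SNR of a mean stack of independent frames grows as sqrt(N),
# so each extra frame buys a smaller improvement than the last.
snr_single = 10.0                          # assumed SNR of one subframe (illustrative)
frame_counts = np.array([1, 5, 10, 20, 40, 80])
stacked_snr = snr_single * np.sqrt(frame_counts)

for n, s in zip(frame_counts, stacked_snr):
    print(f"{n:3d} frames -> SNR ~ {s:5.1f}")
# Doubling the frame count only ever gains a factor of sqrt(2), about 1.41.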
I correct the frames with darks and flats. I use at least thirty darks and flats to make the master darks and flats, and I even dark-correct the flat frames. This ensures that a minimum of noise is injected into the data by the correction process.
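In numpy terms the calibration step looks roughly like this (the file names and the use of astropy for FITS reading are just for illustration, not my actual software):

import numpy as np
from astropy.io import fits   # used only to read FITS files here

def make_master(paths):
    # Median-combining ~30 frames suppresses outliers and random noise.
    return np.median(np.stack([fits.getdata(p).astype(np.float32) for p in paths]), axis=0)

# Hypothetical file lists -- substitute your own.
master_dark      = make_master([f"dark_{i:02d}.fits" for i in range(30)])
master_flat_dark = make_master([f"flatdark_{i:02d}.fits" for i in range(30)])

# Dark-correct each flat before combining, then normalise the master flat.
flats = [fits.getdata(f"flat_{i:02d}.fits").astype(np.float32) - master_flat_dark
         for i in range(30)]
master_flat = np.median(np.stack(flats), axis=0)
master_flat /= np.mean(master_flat)

def calibrate(light_frame):
    # Standard dark subtraction followed by flat-field division.
    return (light_frame - master_dark) / master_flat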
I then upsize the images by a factor of 1.5. This now gives me about 2" per pixel.
Note this is the sampling across the width of a pixel, not corner to corner, which is root two times this figure. I could go into information or sampling theory here, but I won't.
By stacking these upsized frames I now get an image where both signal to noise and resolution are enhanced.
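Roughly, the upsize-and-stack step looks like this in numpy/scipy (a plain cubic resample and a mean combine; registration of the dithered frames is assumed to have been done already):

import numpy as np
from scipy.ndimage import zoom   # cubic resampler; any decent interpolator would do

UPSIZE = 1.5                      # 3.08"/px -> roughly 2"/px

def upsize(frame, factor=UPSIZE):
    # Resample a calibrated frame onto a grid 1.5x finer.
    return zoom(frame.astype(np.float32), factor, order=3)

def stack(frames):
    # Mean-combine the registered, upsized frames: SNR grows roughly as sqrt(N).
    return np.mean(np.stack([upsize(f) for f in frames]), axis=0)

# stacked = stack(calibrated_and_aligned_frames)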
Drizzling is quite a bit different to this, as they have far fewer data frames, so a more selective, mathematically rigorous resampling is required. In the case of Hubble they also have the added variable of sensor orientation.
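For comparison, here is a heavily simplified sketch of the drizzle idea (a "point" drizzle with pixfrac tending to zero, ignoring rotation and geometric distortion, so it only shows the core flux-dropping step):

import numpy as np

def point_drizzle(frames, offsets, scale=2):
    # Simplified point drizzle: each input pixel's value is dropped into the
    # single finer output pixel its dither-shifted centre lands in, and a
    # weight map records how many values each output pixel received.
    # Real drizzle spreads a shrunken pixel footprint over several output
    # pixels and handles rotation and distortion as well.
    ny, nx = frames[0].shape
    flux   = np.zeros((ny * scale, nx * scale))
    weight = np.zeros_like(flux)
    yy, xx = np.mgrid[0:ny, 0:nx]                  # input pixel centres
    for frame, (dy, dx) in zip(frames, offsets):   # dither offsets in input pixels
        oy = np.clip(((yy + 0.5 + dy) * scale).astype(int), 0, ny * scale - 1)
        ox = np.clip(((xx + 0.5 + dx) * scale).astype(int), 0, nx * scale - 1)
        np.add.at(flux, (oy, ox), frame)
        np.add.at(weight, (oy, ox), 1.0)
    return np.divide(flux, weight, out=np.zeros_like(flux), where=weight > 0)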
It is pointless stacking images without dithering, as any residual fixed-pattern noise is reinforced along with the signal.
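A quick toy example of why (synthetic one-dimensional frames with a fixed hot-pixel pattern plus random noise, purely illustrative numbers):

import numpy as np

rng = np.random.default_rng(0)
n_frames, size = 40, 256

pattern = np.zeros(size)
pattern[::17] = 5.0            # fixed-pattern defect, e.g. warm pixels

# Without dither the pattern hits the same pixels in every frame and survives
# the averaging untouched; with dither it moves around and averages away.
static   = np.mean([pattern + rng.normal(0, 1, size)
                    for _ in range(n_frames)], axis=0)
dithered = np.mean([np.roll(pattern, rng.integers(0, size)) + rng.normal(0, 1, size)
                    for _ in range(n_frames)], axis=0)

print(f"peak residual without dither: {static.max():.2f}")    # stays near 5
print(f"peak residual with dither:    {dithered.max():.2f}")  # much lower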
My method is just elegant brute force. What essentially is happening is that I collect lots of real signal while ensuring that the noise is suppressed.
I can then afford to increase resolution at the expense of more noise or, conversely, decrease resolution by binning the upsized image to get better signal to noise. I estimate this trade-off to be about a factor of four. It is most probably a bit less due to Poisson or shot noise. Dithering and many frames lower the inherent, unavoidable shot noise of the very weak signals we are attempting to image.
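The binning direction of that trade looks like this in numpy (a simple software 2x2 bin; the factor-of-four figure above is my own estimate and is not something this toy derives):

import numpy as np

def bin2x2(img):
    # Average 2x2 blocks: halves the linear resolution and, for uncorrelated
    # pixel noise, improves the SNR by roughly a factor of two.
    h, w = img.shape
    h, w = h - h % 2, w - w % 2                       # trim to even dimensions
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Quick check on pure noise: the standard deviation roughly halves.
noise = np.random.default_rng(1).normal(0.0, 1.0, (512, 512))
print(noise.std(), bin2x2(noise).std())               # ~1.0 vs ~0.5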
Finally, fast optics mean that you collect data far faster than any noise, no matter its source! The ratio of the square root of N photons to the N photons themselves goes down as N increases. This is Poisson or shot noise.
Sky glow and light pollution are the exceptions as these are both collected just as fast.
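As a back-of-the-envelope check on the shot-noise arithmetic (all counts are made-up numbers; the sky term shows why sky glow is the exception):

import numpy as np

def snr(object_photons, sky_photons, read_noise=10.0):
    # Photon counting: the signal over the quadrature sum of the noise terms.
    # Both the object and the sky obey Poisson statistics, so their noise
    # contributions are the square roots of their counts.
    return object_photons / np.sqrt(object_photons + sky_photons + read_noise**2)

# Doubling the speed (or exposure) doubles the object AND sky counts, so in a
# sky-limited image the SNR only improves by about sqrt(2) per doubling.
for scale in (1, 2, 4):
    print(scale, round(snr(1000 * scale, 5000 * scale), 1))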
Bert