The sensor samples the image at the pixel spacing along the sensor edges. At 45 degrees it samples at 1.414× the pixel spacing. Dithering randomly then gives you sampling along the diagonals at the pixel spacing as well. With more frames the image is sampled uniformly at the pixel spacing, and with stacking the resolution actually exceeds the pixel spacing. Note that the real attainable resolution is twice the pixel spacing, due to the Nyquist theorem; in reality it is a bit worse than this.
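A minimal numerical sketch of that sampling geometry, assuming a square pixel grid of pitch p (the frame count and random seed are illustrative):

```python
import numpy as np

p = 1.0                        # pixel pitch (arbitrary units)
edge_spacing = p               # sample spacing along sensor rows/columns
diag_spacing = np.sqrt(2) * p  # spacing of pixel centres along a 45-degree diagonal
print(edge_spacing, diag_spacing)  # 1.0  1.414...

# Random dithers put each frame at a different sub-pixel phase, so the
# combined sample positions (grid + offset, modulo p) tile the pixel
# uniformly as the number of frames grows.
rng = np.random.default_rng(0)               # illustrative seed
phases = rng.uniform(0.0, p, size=(1000, 2))  # sub-pixel phase of each frame
print(phases.mean(axis=0))                   # approaches (0.5, 0.5): uniform coverage
```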
In the case of the 300mm lens the stars are undersampled, so any enhancement without dithering will result in lovely square stars. Upsizing does not help, as the squares just get bigger.
The effect shown is more dramatic than reality, as the conditions were not identical. I have looked for non-dithered data to compare against. The only definitive test is to collect a set of exposures without dithering and a set with, on the same night, so all other conditions are the same.
Below is a crop of a single frame at native pixel size from the dithered set of corrected TIFFs. Note how the stars are square, or very blocky. The second image is what it looks like upsized 1.6× by cubic interpolation.
The third is twenty stacked, upsized, dithered frames. I think the improvement is obvious: the stars are far rounder in the stacked image, especially the small dim ones.
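A minimal sketch of the upsize-then-stack step described above, assuming the frames are already calibrated and registered (the function name, 1.6× factor, and use of scipy are illustrative, not Bert's actual tool chain):

```python
import numpy as np
from scipy import ndimage

def upsize_and_stack(frames, factor=1.6):
    """Cubic-interpolate each aligned frame up by `factor`, then average.

    `frames`: list of 2-D numpy arrays, dark/flat corrected and registered.
    On the finer grid the random dither offsets land at distinct sub-pixel
    positions, so averaging recovers detail finer than the native pixels.
    """
    upsized = [ndimage.zoom(f, factor, order=3) for f in frames]  # order=3: cubic spline
    return np.mean(upsized, axis=0)
```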
These frames were all screen captures, so the pixels in the images did not change from what was actually there.
I use Guidemaster, which can offset the guide star between exposures randomly by a specified number of guide-camera pixels. It then guides at the new position of the guide star.
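The idea of that random offset, sketched (this is not Guidemaster's code; the range parameter is illustrative):

```python
import random

def next_dither_offset(max_px=12.0):
    """Random guide-star offset (in guide-camera pixels) for the next
    exposure; the guider then locks onto the star at this shifted position,
    moving the whole field by many imaging-camera pixels."""
    return (random.uniform(-max_px, max_px),
            random.uniform(-max_px, max_px))
```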
Here is an animated GIF of 200% crops of JPEGs straight out of the camera, of the exact same area of the sensor (3.8MB).
Below is a stack of these ten crops. Notice the hot pixels are gone, even without correcting with darks. And the same stack with levels adjusted. Note this was done with cruddy 8-bit JPEGs.
By moving by many pixels it also smooths out any noise due to the sensor itself. It is pointless stacking the same noisy pixel on top of itself, as you just reinforce the noise, or the 'hole' produced by over-subtraction of noise by temperature-mismatched darks.
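A sketch of why the hot pixels vanish in the stack (a generic median combine, not the specific software used above):

```python
import numpy as np

def median_stack(aligned_frames):
    """Median-combine frames that were dithered by many pixels.

    After the frames are aligned on the stars, a hot pixel lands on a
    different sky position in every frame, so the median rejects it
    without any dark subtraction at all.
    """
    return np.median(np.stack(aligned_frames), axis=0)
```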
Bert,
This fantastic algorithm should be "patented" :-)
Actually, it may be worth talking to the IRIS and/or DSS developers... to incorporate this functionality into the software (or to develop a script) to automate the procedure.
Once more, fantastic !!
Bojan, it is too late to patent even if I wanted to, as it is now in the public domain. I would be interested to see how this works at shorter focal lengths for real wide fields. Too much to do and not enough clear skies. I am retired now and never want to go through the patent process again. Even with a patent, a big rich company can send you broke through eternal litigation. Unless you have deep pockets it is better not to patent; just keep quiet and die with the secret.
My ethos is to put back in as much as I can into the astronomical community as I have taken far more than I can ever give back.
That last set of images really shows what is happening without invoking any mathematics. I always knew the 300mm lens was better than my sensor. In hindsight the solution is now obvious; it was just a matter of getting the correct tools and skills.
Bert
Perhaps I should have put this word in parentheses... It's been corrected now.
I think my 100mm and 50mm Canon FD SSC lenses are better than they look at first glance as well.
Perhaps when I find some time I will try your method on images taken with them (and, first of all, I have to take them...).
Isn't this the same as "drizzling" or am I on the wrong planet?
H
Yes, mathematically it is. As a practical method it is better, since it also corrects for the defects of one-shot colour sensors, which are not cooled to -40°C as CCDs are and so perform poorly anywhere near room temperature.
There is really nothing new; it is just a method that does it all better.
Some people advocate sub-pixel dithering, for which the ever-present drift is sufficient. For the reasons already stated, it is better to dither by many pixels.
I should note again that I correct with flats and darks on the data in FITS files. I then also stretch in the FITS files. Only then do I interpolate to the TIFF files. This gives data that is far less corrupted or riddled with artefacts.
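A sketch of that ordering (the file names, the arcsinh stretch, and the use of astropy/Pillow are illustrative assumptions, not the actual tools named in the thread):

```python
import numpy as np
from astropy.io import fits
from scipy import ndimage
from PIL import Image

# 1. Calibrate and stretch while the data is still floating-point FITS.
light = fits.getdata("light_001.fits").astype(np.float64)   # hypothetical files
dark  = fits.getdata("master_dark.fits").astype(np.float64)
flat  = fits.getdata("master_flat.fits").astype(np.float64)

calibrated = (light - dark) / (flat / flat.mean())
stretched  = np.arcsinh(calibrated / 100.0)   # illustrative stretch

# 2. Only now interpolate up and write a 16-bit TIFF for stacking.
upsized = ndimage.zoom(stretched, 1.6, order=3)
scaled  = np.clip(upsized / upsized.max(), 0.0, 1.0)
Image.fromarray((scaled * 65535).astype(np.uint16)).save("light_001_up.tif")
```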
I have just posted an image of the Fox Fur (7.5 hours), so let me try this procedure and post the result.
So, to get it right: my images are calibrated and aligned (they were dithered during capture).
I will upscale each image 2× (I suspect I can use Photoshop, CCDSTACK or Maxim to do this) and then stack.
I can then make a direct comparison with the image I processed tonight, without the upscaling.
The resultant FWHM of stars in the sum-combined image (consisting of images upscaled before combination) was larger by 33%, and visibly obvious when blinking the images. (This is FSQ/STX data at 3.5 arcsec/pixel, so well undersampled.)
I used CCDSTACK and a quadratic B-spline algorithm to upscale, then used the same procedure to downscale the image at the end. I have heard that a B-spline algorithm smears light in stars, which could explain it.
Thoughts?
cheers
Martin
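One way to test whether the interpolation kernel itself broadens stars, independent of the stacking, is to round-trip a synthetic star through the same up/down scaling and compare widths. A crude illustrative check using scipy's cubic spline (not CCDSTACK's kernel):

```python
import numpy as np
from scipy import ndimage

# Synthetic star: 2-D Gaussian with sigma = 1 px (FWHM about 2.4 px).
y, x = np.mgrid[-16:17, -16:17]
star = np.exp(-(x**2 + y**2) / 2.0)

def crude_fwhm(img):
    """Count pixels above half maximum along the central row."""
    row = img[img.shape[0] // 2]
    return int((row >= row.max() / 2.0).sum())

# Up 2x, then back down 2x, with the kernel under test.
roundtrip = ndimage.zoom(ndimage.zoom(star, 2.0, order=3), 0.5, order=3)
print(crude_fwhm(star), crude_fwhm(roundtrip))  # any growth is kernel smearing
```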
Bert
I tried your technique by resizing with Nebulosity before stacking in DSS (no drizzle in the DSS stacking; second photo) and compared it to stacking and drizzle-resizing with DSS (first photo). There is definitely better retention of fine detail with your technique than with what I was previously using. Many thanks Bert, as I'm now a convert.
Allan
Allan, I would be interested in seeing 100% crops. What was the focal length and pixel size? The real test is whether other people find that it works for them, and that it is not just a figment of my constant cloud-addled mind.
I assume that this technique is mostly useful with undersampled images, just like "drizzle".
It makes little difference if you have well-sampled or oversampled images.
Bert
Don't know if this answers your question, but the focal length of the imaging system was 1600mm and the pixel size 5.4µm. The photos presented were 100% full frames.
Allan
I think Bert wanted to see 100%-size crops (so that every original pixel is visible) and not the 100% frame (which is originally 3888 × 2592 but scaled down significantly in your presentation).