You can get more resolution from your sensor if you dither between exposures and upsize the images before stacking.
In the case of my Canon 5DH, with a pixel size of 8.2 microns and a 300mm lens at f/3.5, the Airy disc is 4.5 microns in diameter at green wavelengths. So theoretically, if the lens were perfect (i.e. diffraction limited), the lens has more resolution than the sensor. The situation is actually worse than this because of the Bayer matrix and anti-aliasing filter.
For dim stars your optical system actually does better than the diffraction limit, as the Airy disc is not uniform but peaked in the centre.
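The numbers above can be checked with the standard Airy disc formula (first-minimum diameter d = 2.44 × wavelength × focal ratio); the green wavelength used here is an assumption, the other values come from the post.

```python
# Quick check of the Airy disc figure quoted above.
wavelength_um = 0.53   # green light, microns (assumed value)
f_ratio = 3.5          # 300mm lens at f/3.5
pixel_um = 8.2         # sensor pixel pitch, microns

airy_um = 2.44 * wavelength_um * f_ratio
print(round(airy_um, 1))   # 4.5 -> smaller than one 8.2 micron pixel
```

So even a perfect lens at f/3.5 projects detail finer than the pixel grid can record, which is the headroom the rest of the thread is about.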
I upsized by a factor of 1.6 before stacking each exposure stack.
I further enhanced the final HDR image with slight star reduction and Richardson-Lucy (RL) enhancement after enlarging further by a factor of 1.4. That is a total enlargement from the native pixel size of the sensor by a factor of 2.24. In sensor terms it is the equivalent of going from 12.7MP to 64MP!
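The enlargement arithmetic above, spelled out (pixel count scales as the square of the linear scale factor):

```python
# Total enlargement and equivalent sensor size, from the figures in the post.
native_mp = 12.7        # native sensor resolution, megapixels
scale = 1.6 * 1.4       # upsize before stacking, then again before RL
effective_mp = native_mp * scale ** 2   # pixel count scales as the square

print(round(scale, 2))       # 2.24
print(round(effective_mp))   # 64
```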
Here is the final image downsized to the native pixel dimensions: 4000x2655 pixels, 6MB.
Fantastic, just what the doctor ordered. I was just wondering what your high-res versions would look like, after going over your Centaurus pair again.
TYVM
That's very interesting Bert. I often upsample in PS before processing to great effect, but haven't tried it before stacking. The last image is indeed a huge improvement.
How do you handle stacking large numbers of huge upsampled DSLR images, though? Your RAM size would have to be obscene. Or do you stack with ImagesPlus, i.e. off disk?
Despite this being a counterintuitive process (i.e. "can't get detail that ain't there"), I found processing "recovers" detail that is buried and arguably retrieved by upsampling, as you have shown.
A most interesting process and I am not sure I understand all that is happening.
I must admit though that the image is amazing.
Can you explain in lay terms exactly what your process is? EasyHDR works by stacking a number of different exposure lengths to extract all of the light possible from the images, doesn't it?
I understand upsampling prior to processing in PS can help with specific things such as sharpening and reducing stars, because you have more pixels to play with, but I'm not sure how you get more detail out of the data. I mean, isn't the deconvolution algorithm what's giving you more detail in the nebs, working its way inwards, besides tightening the stars?
Well, that's a good question Marc. I've often wondered and have experimented much; it is counterintuitive, but I swear that, empirically anyway, upsampling works and "appears" to increase detail (other than just having more pixels to play with; or I'm kidding myself and that's the only effect, I'll admit that).
I understand how decon works: it mathematically reverses a known blur process, which doesn't have much to do with upsampling, granted.
Perhaps upsampling allows all other processing to proceed more accurately, and the "more pixels to play with" does actually increase detail, not just as a visual effect.
Anyway, the visual effect in the end is what we are after, so regardless, it works for me.
Actually, now I think about it, I always upscale my raws, because when I debayer, CCDStack interpolates each channel to the native resolution. Green looks best because I have two Gs in the Bayer matrix, and R & B are slightly less sharp.
And upsampling before stacking is a whole subject in itself. I imagine that would improve accuracy and improve noise reduction (with more pixels to play with). Bert's idea of dithering then upsampling makes sense.
In fact, and I'm not sure how to express this, if you could capture into a larger file size than the image being captured, then with dithering you are sampling at a much higher res, by "filling" the "gaps" in the larger file with repeated exposures. Does that make sense? A bit like a mosaic, only at the pixel level.
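The "filling the gaps" idea can be sketched in one dimension: two subs, each sampling every second fine-grid position because of the dither, jointly tile a grid twice as fine as either one. (Real pixels integrate light over their whole area, so recovery is never this exact; this toy only shows the sampling geometry.)

```python
import numpy as np

# Toy sketch: dithered coarse subs jointly sample a finer grid.
fine = np.array([0., 9., 1., 8., 2., 7., 3., 6.])   # hypothetical scene

sub_a = fine[0::2]   # sub captured at dither offset 0
sub_b = fine[1::2]   # sub captured half a coarse pixel later

combined = np.empty_like(fine)
combined[0::2] = sub_a   # deposit each sub at its dither offset
combined[1::2] = sub_b

print(np.array_equal(combined, fine))   # True -> the gaps are filled
```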
Well, actually you're right, because I always got better results doing noise rejection by upsampling the red in Ha rather than binning 2x2. And I always dither. I mean always. I make a point of it. It makes a hell of a difference for me.
I have 12GB of RAM with an Intel i7 920. ImagesPlus is now 64-bit. Would you believe I can open 40 FITS images from the 5DH and manipulate the lot.
I could go into a rave about the mathematics, but I can assure you it is all real.
Well, I found capturing bin 2x2 then oversampling gave the signal and resolution increase required, with less noise.
When you say "rather than bin 2x2", do you mean in-camera bin 2x2 or software bin 2x2? Software binning has no advantage noise-wise; in-camera binning reduces read noise.
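The read-noise distinction here can be put in numbers. A minimal sketch, with a hypothetical read-noise figure: on-chip binning sums the charge from four pixels and reads it out once, while software binning sums four separate readouts, so read noise adds in quadrature.

```python
import math

read_noise = 7.0   # e- per readout, hypothetical CCD figure

# In-camera 2x2 bin: charge from 4 pixels summed before readout,
# so the binned pixel carries ONE dose of read noise.
cam_bin_noise = read_noise

# Software 2x2 bin: 4 independent readouts summed afterwards,
# so read noise grows by sqrt(4) = 2.
soft_bin_noise = math.sqrt(4) * read_noise

print(cam_bin_noise, soft_bin_noise)   # 7.0 14.0
```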
OK, there are a few factors at work here. By dithering you immediately improve your resolution by a factor of one over root two. This is due to oversampling: more frames, better definition.
By upsizing (always use bicubic) before stacking, you will get better resolution merely because the arithmetic is better. Note that dithering when collecting is what makes this work.
The lens has far better resolution than the sensor, so we are only getting back something that already exists. You cannot make a lousy optic better.
I could go further, but you get the idea. With star reduction algorithms and RL enhancement we have made the blocky 12.7MP sensor perform like a 64MP sensor.
All of this is mathematically valid.
It is the oversampling that is the secret: not only does it increase signal to noise, but resolution as well.
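One concrete reading of "the arithmetic is better" is alignment: a half-pixel dither can only be rounded away when aligning at native scale, but it becomes an exact integer shift on a 2x-upsampled grid. A small sketch with hypothetical dither offsets:

```python
# Alignment residuals at native scale vs after a 2x upsize.
dithers = [0.0, 0.5, 1.0, 1.5]   # hypothetical dither offsets, native pixels
scale = 2                         # upsize factor before stacking

# Best possible integer-shift alignment at each scale:
native_err = [abs(d - round(d)) for d in dithers]
fine_err = [abs(d - round(d * scale) / scale) for d in dithers]

print(max(native_err))   # 0.5 -> up to half a pixel of misalignment
print(max(fine_err))     # 0.0 -> exact alignment on the upsampled grid
```

This is only one factor; it says nothing about the interpolation quality of bicubic itself, but it shows why dithered data stacks more accurately after upsizing.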
In-camera bin 2x2. I get very noisy pictures, probably because I collect all of it from the Gs & B when I do Ha. So I upsample the Ha (red) alone, then reject and stack.
Cool - I wasn't aware dithering was so important for resolution as well. I did it mainly for better data rejection and hot pixels when stacking.
Yes, I would like the math Bert, even though I will struggle with it.
Dithering, as I understand, is primarily to eliminate sensor defects.
Even if you upsample, stacking will just align the subs (cancelling the dither) and produce the same result as a sub from a no-fault sensor.
I see dithering increasing res if, say, a sub was captured at 4 times (or more) the image res into a fixed, non-dithered file, so that sub-pixels "filled up" a larger pixel space - which is not possible now (that I know of).
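The "sub-pixels filling up a larger pixel space" scheme described here can be sketched directly (it is similar in spirit to the drizzle technique): each dithered low-res sub is deposited onto a finer output grid at its offset, and a coverage map tracks which fine cells have been sampled. All names and sizes below are illustrative.

```python
import numpy as np

SCALE = 2                                 # fine grid is 2x the coarse grid
h, w = 4, 4                               # coarse sub dimensions
accum = np.zeros((h * SCALE, w * SCALE))  # accumulated signal on fine grid
count = np.zeros_like(accum)              # coverage per fine cell

def deposit(sub, dx, dy):
    """Deposit one coarse sub at its dither offset (fine-grid pixels)."""
    ys = np.arange(sub.shape[0]) * SCALE + dy
    xs = np.arange(sub.shape[1]) * SCALE + dx
    accum[np.ix_(ys, xs)] += sub
    count[np.ix_(ys, xs)] += 1

# Four subs dithered by half a coarse pixel in each axis fill every cell.
for dy in (0, 1):
    for dx in (0, 1):
        deposit(np.ones((h, w)), dx, dy)

print(int(count.min()), int(count.max()))   # 1 1 -> full coverage
```

With enough randomly dithered subs the coverage evens out and `accum / count` becomes the super-sampled image; the sketch only shows the grid-filling geometry.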
I will write a full mathematical explanation tomorrow, Fred.
The data for this image has been pushed.
Your interest is of course great, as you can improve your resolution, I estimate, by a factor of nearly two just by careful data acquisition and processing.
Fred your system is limited by seeing. I will have to think carefully how I can improve your already scary resolutions.
Hey Bert, nice info. I routinely upsample my raw planetary data before feeding it into Registax etc. I read somewhere that the sampling theorem lets you get something like 2x the native resolution of your sensor if you have a lot of frames and the target is moved a little in each one :-)
Upsampling before processing certainly helps me, good to see it here as well.
that's why I always hold on to old data
I've been doing the upsample to my DSI II stuff for a while now pre-stack, because it seems to work, but only on good nights with lots of subs, as has been mentioned.
I don't really understand the maths, but I know it works for the planetary guys, as Bird says, so why shouldn't it work for LX?
The DSI software has the dither function as an option and explains a bit about star overlap over a pixel grid and interpolation.
Here is an example of my results when I upsample 2x with a B-spline algorithm before stacking.
Even though the input raws are from a night of EXCELLENT seeing, it only works when there is some slight change between frames due to tracking and seeing in combination. I don't know if it is a combination of the focal length, sub length (5s in this case) and arcsec per pixel, but the result definitely improves fine detail.