View Full Version here: More deconvolution examples


iceman
05-05-2006, 08:41 AM
hmm still trying to figure out the best processing method.. I think it really comes down to seeing and the quality of the image in the first place.

As an example, here are 2 images from the same avi from last night (the same avi that the teaser in general chat came from).

Processed identically (converted to BMPs, split into R/G/B), except for the following differences:

Image on the left
- Harder wavelets on each channel after stacking in registax
- ME deconvolution on each channel (7/1.4) in AstraImage

Image on the right
- Medium wavelets on each channel after stacking in registax
- LR deconvolution on each channel (7/1.4) in AstraImage

Both gamma reduced to 0.7 in AstraImage after recombine.

That's the only processing.. normally I would do a bit extra in Photoshop afterwards, but for the purposes of this example I left it as is.

My preference is the right one; the one on the left looks overprocessed. But that's using identical iterations/curve radius. What I'm getting at is that because the data was already quite sharp, the hard wavelets in Registax did most of the processing the image could handle; the ME deconv just overprocessed it.

I need to go back and try different methods on the "hard wavelets" version, to process it to a point where it doesn't look overprocessed. Then we can see the difference between:
a) hard wavelets in registax and soft processing in AstraImage, versus
b) medium wavelets in registax and harder processing in AstraImage

On lesser quality data, I've done the same steps as above and preferred the hard-Registax version. So it's important to process based on the quality of your raw data.

Anyway.. just rambling :)

davidpretorius
05-05-2006, 08:45 AM
Have you tried a 2x1.1 ME on the hard wavelets?

davidpretorius
05-05-2006, 08:49 AM
I am coming to the same conclusion: there are basic steps that can be listed, but the "art" or the "DP" (Damian Peach) factor is knowing which process will produce the best finished product out of the myriad of processing techniques.

no not rambling at all!

allan gould
05-05-2006, 09:59 AM
Most excellent shots and thanks for sharing your techniques. That's how we all learn - by trying someone else's methods on our AVIs to see what works best for us. Many thanks.
Allan

Robert_T
05-05-2006, 03:51 PM
Thanks Mike, that pretty well confirms my thoughts too. I've found that when I have good seeing and sharp data, I get the best images straight from Registax with just a little touching up in a Photoshop-style program. Deconvolution doesn't seem to help much with these and sometimes removes detail. With poor to average seeing shots I seem to get the best detail with medium wavelets and medium-hard Astra Image deconvolution.

cheers,

janoskiss
05-05-2006, 04:21 PM
Much prefer the "soft" processed one, Mike.

I have no experience in planetary imaging, but I do know a few things about image processing.

For deconvolution to be effective you should apply it before any other usual processing operations. I don't know how Registax does its processing, but if all the "wavelet" functions operate only on the single image (after stacking), and if the stacking is just a weighted sum of realigned individual frames, then the deconvolution can be applied to the unprocessed stacked image. Otherwise it should be applied to each frame before stacking.

The "wavelets" should be done after deconvolution. If you need individual frames for wavelets (i.e., it does not simply operate on a single image) then you need to deconvolute every frame first. For best results the deconvolution algorithm should be told what the 'point spread function' of your instrument is. This can be deduced from aperture and central obstruction size: it is the Airy disk and diffraction rings you see when looking at stars at high powers.

davidpretorius
05-05-2006, 04:32 PM
Any easy formula, Steve? I would love to investigate further for my super dob. 1ponders won't be able to work it out cos it changes all the time for his ratty scope!

matt
05-05-2006, 04:40 PM
So, Steve, you're saying save the un-waveleted stacked image (perhaps as a 16-bit TIFF) and take that to AstraImage for RGB split, decon and recombine...

and then save that processed image and bring it back to Registax for wavelets afterwards?

Have I understood you? :)

davidpretorius
05-05-2006, 04:43 PM
hey it works, load a single tiff into registax and it automatically goes to wavelets!!!!

davidpretorius
05-05-2006, 04:46 PM
so we just need to work out how the formula works!!!

steve????????


i could google or ask steve


steve??????????

i could not be lazy, but i will and ask steve



steve?????????

matt
05-05-2006, 04:49 PM
Can someone please deconvolute and recombine Steve and bring him back to the forum!:lol:

davidpretorius
05-05-2006, 04:50 PM
steve???

janoskiss
05-05-2006, 05:00 PM
I do have to do a bit of work too every now & again, Davo. :lol:
Yes, if you can save the unprocessed stacked image and then deconvolute and do any further processing after that, it would be worth a go.

Formula?

janoskiss
05-05-2006, 05:04 PM
Ah, I think I get it. I suppose you are asking about a formula for the point spread function. The software should be able to make a good guess. Specifying it matters more if you have to deconvolute each frame individually, because individual frames contain little information compared with the stacked image, so "blind" deconvolution would be more error-prone.

davidpretorius
05-05-2006, 11:30 PM
This bit, Steve: how do we tell the software the PSF? I know the aperture and central obstruction etc., so what is the PSF for my scope?

Work, what's work???

janoskiss
06-05-2006, 10:35 AM
The PSF is the squared magnitude of the Fourier transform of the aperture: the square of the difference of two Bessel functions of the first kind, one for the mirror and another for the obstruction. It is wavelength dependent. (Theoretically it could also include things like mirror clips and spider vanes, so you could deconvolute the diffraction effects due to these as well. I'm sure this is done for images taken with professional telescopes, but it would be rather tedious for amateurs; you would probably need to write your own image processing software from scratch.)

I will get back to you on the exact form when I get a chance to work through it. Meanwhile try letting the software work backwards from the convoluted image to figure it out. If the DC algorithm is any good this should work just as well.
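
A rough numerical illustration of that idea, assuming the PSF is the squared magnitude of the Fourier transform of the pupil: build an annular aperture (mirror minus central obstruction) on a grid and FFT it. The grid size and radii below are arbitrary example values.

import numpy as np

n = 1024                                   # grid size
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r = np.hypot(x, y)

outer = 100                                # mirror radius in grid units
inner = 30                                 # 30% central obstruction
pupil = ((r <= outer) & (r > inner)).astype(float)
# Spider vanes, mirror clips, etc. could be modelled by zeroing extra pixels here.

# PSF = |FFT(pupil)|^2, shifted so the peak sits at the centre, normalised to 1.
psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil))))**2
psf /= psf.max()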

davidpretorius
06-05-2006, 05:26 PM
thanks steve

janoskiss
08-05-2006, 03:23 PM
Dunno how useful this is going to be for you, but the formula for the PSF is (within a constant normalisation factor):

((a * J1(2*pi*a*r/lambda) - b * J1(2*pi*b*r/lambda)) / r)^2

where
a = radius of the objective (half the aperture diameter)
b = radius of the central obstruction
pi = 3.141592
lambda = wavelength
r = angular coordinate in the image plane (in radians)
J1 = Bessel function of the first kind and order 1 (see the Wikipedia page on Bessel functions for more info)

Because of the wavelength spread of each of the RGB filters, the resulting PSF will be a sum (integral) of functions of the form above. Setting wavelength = peak or median of filter would be a good guess. Then the deconvolution algorithm could in principle refine the PSF further.
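
For what it's worth, the formula above can be evaluated directly with scipy's Bessel function J1. Here is a sketch with example numbers only (10" scope, 30% obstruction, green light), starting theta just above zero to sidestep the removable singularity at r = 0.

import numpy as np
from scipy.special import j1

def psf(theta, a, b, lam):
    # ((a*J1(2*pi*a*theta/lam) - b*J1(2*pi*b*theta/lam)) / theta)^2
    # theta = angle in radians, a/b = mirror/obstruction radii, lam = wavelength.
    amp = (a * j1(2 * np.pi * a * theta / lam) - b * j1(2 * np.pi * b * theta / lam)) / theta
    return amp**2

a, b, lam = 0.127, 0.127 * 0.3, 500e-9          # radii and wavelength in metres
theta = np.linspace(1e-8, 3e-5, 2000)           # radians (roughly 0 to 6 arc seconds)
profile = psf(theta, a, b, lam)
profile /= profile.max()                        # the formula is only defined up to a constant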

janoskiss
08-05-2006, 03:40 PM
Here is a plot of the PSF for a 10" scope at a wavelength of 500nm (green), unobstructed and with a 30% central obstruction.

davidpretorius
08-05-2006, 04:26 PM
thanks steve, much appreciated.

Ultimately, it would be good to get a radius out of this. Astra Image asks for the number of iterations and the radius of the point spread function.

This deconvolution business is very complicated, as the main variable is the seeing.

Spent about 2 hours reading on this last night.

I am fiddling with doing deconvolution first on the stacked image and the wavelets after, to see which is the best method.

Thanks again mate

janoskiss
08-05-2006, 05:32 PM
Depends on what the software means by "radius". Some possibilities (approx.):

radius of the inner disk = 0.61*lambda/a
1/2 standard deviation of PSF in the radial direction = 0.22*lambda/a
1/2 full width at half max (FWHM) = 0.26*lambda/a

where

lambda = wavelength (approx: red 650nm, green 510nm, blue 470nm)
a = aperture radius (as above)

These come out in radians if you use the same units for a and lambda (e.g., metres); multiply by 206265 to convert to arc seconds, then multiply by the number of pixels per arc second appropriate for your scope + camera to get the radius in pixels.
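
A hedged worked example of that conversion: the image scale (pixels per arc second) below is a made-up number, so substitute the one for your own scope + camera combination.

RAD_TO_ARCSEC = 206265.0

lam = 510e-9                 # green, metres
a = 0.127                    # aperture radius in metres (10" scope)
pixels_per_arcsec = 6.0      # hypothetical image scale

for name, coeff in [("Airy disk radius", 0.61),
                    ("1/2 std deviation", 0.22),
                    ("1/2 FWHM", 0.26)]:
    radius_arcsec = coeff * lam / a * RAD_TO_ARCSEC
    radius_pixels = radius_arcsec * pixels_per_arcsec
    print(f"{name}: {radius_arcsec:.2f} arcsec = {radius_pixels:.1f} pixels")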

davidpretorius
08-05-2006, 06:52 PM
thanks again steve, yes, more theoretical and experimental work to be done