Mike,

Maybe try thinking of it another way.
It's not completely scientifically correct, but it's close, and easy to understand!

The light from the distant source has become aberrated during its journey from source to CCD: by space, by the atmosphere in numerous ways, and by our optical system.
Sufficiently so that the light from any given infinitely small point has been spread out across the image we see.

So for any one pixel in your image, it contains light from the original source point, plus a little bit of light from every other point in the original source, in varying degrees, and some of the light that should have been in that pixel (representing the source detail) has likewise been spread out over the rest of the image.
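To make that concrete, here is a little Python sketch of that forward model (all the names and numbers are just illustrative, and a Gaussian stands in for the real PSF):

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=21, sigma=2.0):
    """Illustrative stand-in PSF: a normalised 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()   # sums to 1, so total flux is conserved

true_image = np.zeros((128, 128))
true_image[64, 64] = 1000.0          # one ideal point source (a "star")

psf = gaussian_psf()
# The camera never records true_image; it records the scene convolved
# with the PSF, plus noise: every pixel is a weighted mix of light from
# its neighbours, exactly as described above.
observed = fftconvolve(true_image, psf, mode="same")
observed += np.random.normal(0.0, 1.0, observed.shape)   # read noise
```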

What deconvolution attempts to do, with varying degrees of success, is put all the light back into the right places!

It doesn't matter whether the source is a point source (like a distant star or an infinitely distant galaxy) or an extended source (like a nebula or a nearer galaxy); the light is being aberrated similarly. It's just that it's so much easier to mathematically determine (and validate) the PSF on a point source of light. But all light is affected, and therefore all light sources can benefit from being corrected by deconvolution.
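That is also why, in practice, people often measure the PSF empirically from an isolated star in the frame. A crude sketch of the idea (the star position and box size are made up, and real tools do much more careful fitting):

```python
import numpy as np

def empirical_psf(image, star_xy, half=10):
    """Cut a small box around an isolated, unsaturated star and
    normalise it to sum to 1: a crude empirical PSF estimate."""
    x, y = star_xy
    cut = image[y - half : y + half + 1,
                x - half : x + half + 1].astype(float)
    cut -= np.median(cut)        # rough local background subtraction
    cut[cut < 0.0] = 0.0
    return cut / cut.sum()

# e.g. psf_est = empirical_psf(observed, star_xy=(64, 64))
```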

The result is significantly enhanced contrast and significantly enhanced detail, closing part of the gap between what we see before correction and what we would see if everything were perfect.

If the Point Spread Function for that image, that object, and that optical system combined could be determined perfectly, and if it applied equally across the whole image, we could do a pretty good job of recovering the original data.
But of course things aren't quite that simple.
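A quick way to see why: naive deconvolution is just division in the Fourier domain, and that division blows up wherever the blur destroyed a frequency but the noise did not. A sketch, reusing the arrays from the snippet above:

```python
import numpy as np

# Place the PSF kernel in a full-size array with its centre at (0, 0)
# so its FFT gives the optical transfer function (OTF).
pad = np.zeros_like(observed)
kh, kw = psf.shape
pad[:kh, :kw] = psf
pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
H = np.fft.fft2(pad)

# Naive inverse filter: divide the PSF back out. Wherever |H| is tiny
# (the high frequencies the blur wiped out), we divide noise by
# almost-zero, and amplified noise swamps the recovered detail.
naive = np.fft.ifft2(np.fft.fft2(observed) / H).real

# A small regularisation term (Wiener-style) tames the blow-up, at the
# cost of only partial recovery: already a compromise.
eps = 1e-2
wiener = np.fft.ifft2(np.fft.fft2(observed) * np.conj(H)
                      / (np.abs(H) ** 2 + eps)).real
```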

There are many ways the different methods vary:
1. how they try to determine what sort of PSF applies,
2. what they take the PSF to be,
3. how much simplification the algorithm has in it and what assumptions are used, and
4. the process for applying the effects of that PSF in reverse.
Then there are all the tweaks that each algorithm permits to try and fix up common problems, the number of iterations, etc. (a minimal iterative example is sketched below).
The level of mathematics employed in deconvolution is typically at the bleeding edge of maths theory.
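For concreteness, here is a minimal version of one classic iterative scheme, Richardson-Lucy. It is not the only choice, just a common and simple one, and the iteration count is exactly one of those tweakable knobs:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30):
    """Minimal Richardson-Lucy deconvolution (assumes Poisson-like
    noise). Each iteration nudges flux back toward where it came
    from, i.e. "puts the light back" a little at a time."""
    data = np.clip(observed, 0.0, None)   # RL needs non-negative data
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(data, data.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = data / np.maximum(blurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Too few iterations under-corrects; too many amplifies noise and
# produces the ringing mentioned below.
```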

The most successful algorithm that I have read about (from Eric) is MCS deconvolution; the initials MCS come from the authors' surnames (Magain, Courbin and Sohy).

Rather than trying to apply the perfect correction, MCS assumes it can never do so and only tries to correct by a lesser amount. In doing so it does not introduce as many artifacts and, for reasons best read in their papers, has to date produced the most scientifically accurate deconvolution: true quantitative scientific use of the data after deconvolution can be performed, as opposed to just getting pretty pictures that may help with spatial information and resolving detail.
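I will not try to reproduce their algorithm here, but the core idea can be sketched: instead of deconvolving all the way down to a perfect point, you deconvolve down to a chosen target PSF narrower than the observed one, so the correction stays well behaved. Below is a rough Fourier-domain illustration of that idea only; it is not the MCS algorithm, and the Gaussian widths are made-up numbers:

```python
import numpy as np

def gaussian_otf(shape, sigma):
    """Optical transfer function of a centred Gaussian PSF with
    spatial width sigma (in pixels)."""
    fy = np.fft.fftfreq(shape[0])
    fx = np.fft.fftfreq(shape[1])
    fxx, fyy = np.meshgrid(fx, fy)
    return np.exp(-2.0 * (np.pi * sigma) ** 2 * (fxx**2 + fyy**2))

def partial_deconvolution(observed, sigma_obs=2.0, sigma_target=1.0,
                          eps=1e-3):
    """Correct only down to a finite-width target PSF rather than a
    delta function: the ratio target/psf then stays bounded, which is
    the spirit of MCS, shown here via a crude Wiener-style division."""
    H_obs = gaussian_otf(observed.shape, sigma_obs)
    H_tgt = gaussian_otf(observed.shape, sigma_target)
    F = np.fft.fft2(observed)
    kernel = H_tgt * np.conj(H_obs) / (np.abs(H_obs) ** 2 + eps)
    return np.fft.ifft2(F * kernel).real
```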

Here is an example of MCS:
http://www.orca.ulg.ac.be/OrCA_main/...n/Deconv4.html
If you start Googling you will find lots of real-life examples that replicate that one.

As you say, there are many examples of bad deconvolution published where some aspect of the original data is improved, but often at the expense of other features, such as ringing.
That link shows some obvious problems.

Cheers

Rally