deconvolution - long post


Shiraz
06-03-2014, 08:58 PM
Thought it might be an appropriate time for some discussion on deconvolution. As a straw man to start off, the following is my understanding of the process and what it does. Sorry it's a bit long-winded. Regards Ray

Stars are effectively unresolved points of light with no size, but with varying intensity. The intervening atmosphere and limited optical resolution of a telescope combine to spread the points of light into balls of light with bright centres and wings of reducing brightness. The spread of the light to surrounding pixels is described by a point spread function (PSF), which generally has a bell shape similar to a 2D Gaussian function, but with more extended skirts. Brighter stars have proportionally brighter skirts, so we see them as being larger than the dimmer stars.

So, how to make an image closer to reality - ie a collection of point sources of varying brightness? The simplest way is to look for adjacent pixels with different brightness – and then increase the contrast in that region. The logic is that if there is a small brightness difference in the blurred image, there must have been a larger brightness difference in the original scene. This is the basis of all filter-based sharpening – some methods are much more complex and look for brightness variations over differing scales, but they all work by enhancing the contrast in regions of fine detail.
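The contrast-enhancement idea behind filter-based sharpening can be sketched as a generic unsharp mask (a toy illustration of the principle, not any particular package's implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, blur_sigma=2.0, amount=1.0):
    """Boost local contrast by adding back the difference between
    the image and a blurred copy of itself."""
    blurred = gaussian_filter(image, sigma=blur_sigma)
    return image + amount * (image - blurred)

# A soft-edged synthetic "star": the filter steepens its profile.
y, x = np.mgrid[-16:17, -16:17]
star = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
sharp = unsharp_mask(star, blur_sigma=2.0, amount=1.5)
```

Note that this only enhances brightness differences that are already there - unlike deconvolution, it carries no model of how the blurring actually happened.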

Deconvolution is somewhat smarter, since it attempts to do an inverse of the blurring process that damaged the image in the first place – if you know the result (the image) and what the point spread function looks like, it is (almost) possible to take out the blurring and substantially reconstruct the original scene. The atmospheric/optical blurring is a convolution process and the reverse process is called deconvolution.

The most successful deconvolution methods for astro images have been the relatively gentle Lucy-Richardson and van Cittert. For these to work, the PSF must be known. You could guess what it might be, but it is much better to measure it directly using unsaturated stars in your image - software such as PixInsight allows you to apply a measured PSF to the deconvolution. Apart from being the best way to get the true PSF, if there is any odd shape to the measured PSF (eg the stars are slightly trailed), deconvolution can compensate and not only tighten up the stars, but make them less distorted as well.
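As a sketch of what the Lucy-Richardson iteration actually does, here is a minimal NumPy implementation (my own toy version - real packages such as PixInsight add deringing and noise control on top of this core loop):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Minimal Richardson-Lucy deconvolution: iteratively adjust the
    estimated scene so that, when re-blurred by the PSF, it matches
    the observed image."""
    psf = psf / psf.sum()                    # PSF must integrate to 1
    psf_flipped = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_flipped, mode='same')
    return estimate

# Demo: two point "stars" blurred by a Gaussian PSF, then restored.
scene = np.zeros((64, 64))
scene[20, 20] = scene[40, 45] = 100.0
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
blurred = fftconvolve(scene, psf / psf.sum(), mode='same')
restored = richardson_lucy(blurred, psf, iterations=50)
```

Each iteration compares the observed data with the current estimate re-blurred through the PSF, so the measured PSF drives the whole correction - which is why an accurate PSF matters so much.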

Despite the advantages, deconvolution cannot get back all the way to the original scene for many reasons, including:
1. There may not actually be a unique solution if the scene is very complex and noisy (eg if large numbers of stars are close together)
2. Noise will be amplified
3. Sensor non-linearities, errors in the PSF and sampling problems will produce artefacts that are not part of the original scene
4. Saturated stars will not be properly sharpened
5. If the scene has continuum (eg nebula), there will be regions where a star has been tightened and the donut region left behind is undefined – no nebula information was recorded in this region and there is no longer any star data.

The end result of a deconvolution process will be tighter stars with much sharper edges – you will not be able to get right back to the original scene (with point stars), but you can get some way there. The criticisms levelled against deconvolution – that it produces hard looking stars and may “make up” data that isn’t really there – seem a bit harsh. Stars really are very hard edged objects (although we seem to insist on soft fuzzy ones by convention) – and a properly implemented deconvolution process cannot invent information from good data, but it can produce artefacts from noisy or non-linear data. My impression is that we now seem to be using deconvolution in a hybrid mode – able to accept it as a lightly applied sharpening tool, but drawing the line at embracing the full blast of a deconvolved image, complete with tiny saturated hard-edged stars and previously unavailable detail in extended objects. And yet that is exactly what the process is designed to achieve – and it is largely what Hubble images look like.

ref: http://www.astro.rug.nl/~peletier/signal/starck.pdf

strongmanmike
07-03-2014, 08:48 AM
He he, nice write-up Ray. I assume this post was instigated by some of my recent comments on the validity of deconvolution?:)

The key here, and the bit I am not sure of and am questioning, is how does the decon algorithm apply a PSF to the non-stellar detail when the PSF was measured and created from looking at a star..?

Mike

Shiraz
07-03-2014, 09:07 AM
yuup :).

the PSF applies to the whole image, not just the stars. The whole image is blurred by the atmosphere and the optics, so the whole image should be subjected to the reverse process of deconvolution.

An easy way to measure a system's point spread function is to pass light from a point source through the system - what comes out the other end is the PSF. In astro imaging, isolated non-saturated stars provide the point sources you need. Using the stars is just a convenient way to get the PSF for the imaging conditions, but once you have it, it applies to the whole image. Deconvolution is widely applied in microscopy, where there are no readily available point sources - there, the PSF can be determined using tiny fluorescent beads as artificial equivalents of the stars used in astronomy. Once the PSF is determined from such a source, it is applied to the normal images - same thing in astronomy.
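The star-cutout approach can be sketched in a few lines (a toy version of my own; tools like PixInsight's DynamicPSF instead fit analytic profiles to many stars at once):

```python
import numpy as np

def psf_from_star(image, star_y, star_x, radius=7):
    """Cut out an isolated, unsaturated star, remove the local
    background, and normalise - the result is an empirical PSF."""
    cut = image[star_y - radius:star_y + radius + 1,
                star_x - radius:star_x + radius + 1].astype(float)
    cut -= np.median(cut)           # crude local background estimate
    cut = np.clip(cut, 0.0, None)   # negative residuals are just noise
    return cut / cut.sum()          # a PSF must integrate to 1

# Toy frame: one Gaussian "star" on a flat sky background.
y, x = np.mgrid[0:64, 0:64]
frame = 50.0 + 1000.0 * np.exp(-((y - 32)**2 + (x - 32)**2) / (2 * 2.5**2))
psf = psf_from_star(frame, 32, 32)
```

In practice you would average cutouts of several stars to beat down the noise, and avoid anything near saturation, since a clipped profile gives a wrong PSF.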

I would be most interested in your view on where deconvolution fits in. It does have wide application for technical analysis, but when used as designed, it produces results that do not fit into the "pretty pictures" mould. Maybe there can be a place in our hobby for images that extend resolution at the expense of "niceness" in the rest of the image. I think that there is a lot that can be done to move amateur images into higher resolution regimes, but suspect that there might be a fair bit of resistance to such an approach. Might be fun to try to push the boundaries a bit though. Regards Ray

strongmanmike
07-03-2014, 09:14 AM
So if we have an irregular shaped knot in a galaxy arm say (particularly if it is saturated) what does the application of the PSF derived from a star do to this shape?

Mike

Shiraz
07-03-2014, 09:56 AM
Deconvolution will not change the shape of the knot (unless the system has star trailing - decon will partially correct for that), but it will enhance finer detail and increase fine noise. If anything is saturated, you get enhancement of the edges of the saturated region, but there is nothing that can be done to extract info that is not there. Ray

strongmanmike
07-03-2014, 10:02 AM
Assuming you have collected/created a valid PSF in the first place..? I am thinking that correct and judicious application is the key here. I have seen deconed galaxy data for example that just doesn't look right and the detail features look to have just been made more like point sources

Good discussion :)

Mike

Shiraz
07-03-2014, 10:53 AM
Spot on. If the PSF is not correct, the results will be wrong. Using the stars to get the PSF is the way to go - guaranteed to be the appropriate PSF if the stars are isolated and not saturated. If using deconvolution as a sharpening tool with "guessed" PSF, it is possible to really mess up an image.

Nothing you don't already know, but attached image shows effect of decon on a complex image - PSF was taken from stars. Looks to me that deconvolution is not making up detail - just peeling back a layer of blur when compared to the VLT image (of course it is nothing like as good as the VLT image, but its heading the right way). This is as far as the deconvolution could go on this image due to noise, but it may be possible to go even further with more signal (aaugh, not mega data!!) Regards Ray

rally
07-03-2014, 11:14 AM
Mike,

Maybe trying to think of it another way.
It's not completely scientifically correct, but almost - and it's easy to understand!

The light from the distant source has become aberrated during its journey from source to CCD - by space, by the atmosphere in numerous ways, and by our optical system.
Sufficiently so that the light from any given infinitely small point has been spread out across the image we see.

So that means that one pixel in your image is comprised of light from the original source point, plus a little bit of light from every other point in the original source, in varying degrees . . . and that some of the light that should have been in that pixel (representing the source detail) likewise got spread out over the rest of the image.

What deconvolution attempts to do with varying degrees of success is put all the light back into the right places !

It doesn't matter whether the source is a point source (like a distant star or an infinitely distant galaxy) or an extended source (like a nebula or a nearer galaxy) - the light is being aberrated similarly. It's just that it's so much easier to mathematically determine (and validate) the PSF on a point source of light - but all light is affected, and therefore all light sources can benefit from being corrected by deconvolution.

The result is significantly enhanced contrast and significantly enhanced detail between what we see before correction and what we would see if everything was perfect.

If the point spread function for that image, that object and that optical system combined could be determined perfectly, and if it applied equally across that image, we could do a pretty good job of recovering the original data.
But of course things aren't quite that simple.

There are many different methods, differing in: 1. how they try to determine what sort of PSF applies; 2. what the PSF actually is; 3. how much simplification the algorithm has in it and what assumptions are used; and 4. the process used to apply, in reverse, the effects of that PSF.
Then there are all the tweaks that each algorithm permits to try and fix up common problems, the number of iterations, etc.
The level of mathematics employed in deconvolution is typically at the bleeding edge of maths theory.

The most successful algorithm that I have read about (from Eric) is the MCS deconvolution - the initials MCS are after the authors.

Rather than try to apply the perfect correction, MCS assumes it can never do this and only tries to correct by a lesser amount. In doing so it does not introduce as many artifacts and, for reasons that are best read in their papers, has to date produced the most scientifically accurate deconvolution - such that true quantitative scientific use of the data can be performed after deconvolution, as opposed to just getting pretty pictures which may help with spatial information and resolving detail.

Here is an example of MCS
http://www.orca.ulg.ac.be/OrCA_main/Deconvolution/Deconv4.html
If you start Googling you will find lots of real life examples that replicate that example

As you say, there are many examples of bad deconvolution published where some aspect of the original data is improved, but often at the expense of other features - such as ringing.
That link shows some obvious problems.

Cheers

Rally

strongmanmike
07-03-2014, 02:00 PM
Cheers Ray and Rally. I was generally familiar with what decon did and how it worked, but (still) have my reservations about its use in the general amateur astronomical community, that was all.

Having said that, both of you have managed to improve my mental picture of how it should work, so cheers for that men :thumbsup:

Now I just need to find a good piece of decon software to use :shrug: I have never been happy with the results provided by AstroArt's application of it (so I very rarely use it), but I suspect I am not utilising it correctly either :question:

Thanks again guys

Mike

RickS
07-03-2014, 03:14 PM
Great summary, Ray. Just one thing I'd add: a truly accurate PSF would be a complex beast, varying over time and also across the imaging field. The best we can do is a rough approximation. That might be one reason for not pushing a decon too far.

Cheers,
Rick.

strongmanmike
07-03-2014, 03:22 PM
Hey Ray, what software do you use for your decon?

Mike

Shiraz
07-03-2014, 04:05 PM
PixInsight. The best algorithm I have come across, though, is something called RL2 in IRIS, which can sharpen without much in the way of artefacts. However, since I use PI for everything else, I normally use the van Cittert algorithm in that package, with local deringing support and dynamic PSF. regards Ray
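The van Cittert iteration mentioned above is, at its core, very simple. A bare NumPy sketch (my own, without the deringing and regularisation that packages like PixInsight add - in this raw form it is fragile on noisy data):

```python
import numpy as np
from scipy.signal import fftconvolve

def van_cittert(observed, psf, iterations=30, relaxation=0.5):
    """Bare van Cittert deconvolution: repeatedly add back a fraction
    of the residual between the observed image and the re-blurred
    current estimate."""
    psf = psf / psf.sum()
    estimate = observed.copy()
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode='same')
        estimate = estimate + relaxation * (observed - reblurred)
    return estimate

# Demo: a blurred point "star" gets noticeably tighter.
scene = np.zeros((41, 41))
scene[20, 20] = 100.0
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
blurred = fftconvolve(scene, psf / psf.sum(), mode='same')
restored = van_cittert(blurred, psf)
```

Each step nudges the estimate so that re-blurring it reproduces the data; without damping and deringing it diverges quickly once noise dominates, which is why the packaged implementations wrap it in regularisation.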

Shiraz
07-03-2014, 04:13 PM
thanks Rick. Good points. I guess that an advantage of using dynamic PSF in PI to extract the PSF from the image is that you get the PSF applicable to that combined image, so time variability should not be too big an issue? PSF must be averaged across the field though, so spatial variability will still be a major problem. In any case, I understand that even slight errors in the PSF can have huge effects if the deconvolution is pushed too far - never had high enough SNR to get into that region though :lol:. regards ray

strongmanmike
07-03-2014, 04:47 PM
Pixinsight huh?....after initial resistance, many seem to eventually succumb to the magnetic power of PixInsight :scared2: :thumbsup:...hmmm?

Cheers for that

Mike

RickS
07-03-2014, 06:17 PM
Come join us on the dark side, Mike <sinister cackle> :lol:

strongmanmike
07-03-2014, 06:36 PM
Maaaaaaate :eyepop: I am Mr don't change anything :scared: if it is working (except the deconvolution) :lol:

Better look into it groaaaan....

:P

gregbradley
07-03-2014, 07:21 PM
My understanding is decon works better on oversampled data - it gives the algorithm something to bite on.

I see that with Trius data from the CDK. It takes decon quite nicely without artifacts (so long as it's not overdone).

Like any sharpening tool it is easily overdone. But several layers of different decon strength is a very useful tool in bringing out galaxy data. I don't see a conflict in that - it's merely sharpening. Much like using unsharp mask (gasp - does anyone use that anymore??).

Greg.

ericwbenson
07-03-2014, 10:49 PM
Hi Mike,

I am still using CCDSharp from SBIG (c. 2002) - best bang for the buck, it's free. It's a one-trick-pony piece of software - it only does Lucy-Richardson. It has deringing and noise reduction (to reduce amplified background splotches) built in too (but you can't control that part). Overall I think it's still the best LR implementation out there, except there are no bells or whistles...
My Arp244 image from last year was improved quite a bit with CCDSharp.

But it is true that decon requires 'oversampling', or at least critical sampling, to work properly. What it is doing is sacrificing/trading SNR for spatial resolution, but there need to be enough pixels per PSF to put that increased resolution into - and of course enough SNR to barter with! Sometimes undersampled images can be upsampled (resized into a larger buffer) and then decon'ed, but I don't think it works as well as imaging at a higher pixel scale, since interpolation adds artefacts that can get amplified.

Best,
EB

strongmanmike
07-03-2014, 11:40 PM
Cheers Eric, yes I was aware that decon worked better with greater sampling. Wonder if CCDSharp would work with a Starlight Xpress camera...?

Mike

Shiraz
08-03-2014, 12:51 AM
Hi Greg. Well you could use it for merely sharpening, but then you would not be using most of its capabilities.

Deconvolution is a fairly general-purpose image restoration method. It certainly does help sharpen up an image with atmospheric blurring, but, used appropriately, it can also help correct for irregular star shapes, stray diffraction patterns (possibly), minor defocus, motion-induced blur (eg from wind), residual aberrations etc. There is a plug-in for Astroart that can be used to correct for coma, and the use of deconvolution to correct for spherical aberration in early Hubble data is well known.

An appropriate deconvolution algorithm should automatically give you close to the best possible enhancement of stars and galaxies. If it measures PSF it knows how the image has been affected and from that, tries to correct for what has actually gone on in the imaging process - without any guesswork from the user.

In addition, most of the iterative algorithms include noise reduction and deringing at each step, so you don't need to do much massaging of the enhanced image.

Typical deconvolution algorithms are much more comprehensive and potentially reliable than ad-hoc sharpening, even though they can be used in that way if desired. Regards ray

EDIT: just noticed that PI has an input to allow motion blur to be incorporated into a predefined PSF.

ericwbenson
08-03-2014, 01:42 AM
yep it works with FITS compatible files, as long as they are saved with integer data (as opposed to floating point which Maxim does after stacking, just make sure to stretch it back to 0-65535)

EB

LightningNZ
08-03-2014, 10:05 AM
Firstly I just want to say "thank you" to Ray for the great write-up and discussion. I've never had access to astronomy software to do any form of deconvolution but I think you've convinced me that I should plonk down some cash for PixInsight.

Secondly, if it hasn't been made clear already, I'd like to say that iterative deconvolution is a process of separating the observed image from a model of the blurring - in our case, the atmosphere. Of course the atmosphere is ever-changing and we take many images over long time periods, so the blurring effect that we see will vary slightly over the image frame - so _yes_, it will never be perfect. Even if it never changed, we would also come up against losses of precision in our computers, and this will cause some loss of accuracy.

By comparison, sharpening is a (generally) simple matrix operation which adjusts contrasts based upon their surroundings in a way that we decide. There is often a considerable loss of information from this process (as information from higher orders is concentrated in lower orders). Personally I think the only place for sharpening is as a final, mild step to make an image "contrasty" for its final medium - on screen or in print.

Cheers,
Cam

Shiraz
08-03-2014, 11:38 AM
Thanks Cam.
In addition to your comments, I guess that the other distinction is that deconvolution only works properly on data that is still linear, whereas ad-hoc sharpening can be applied to stretched data. One is a formal restoration process, the other is cosmetic. I sometimes wonder if this might explain the occasional dodgy result from deconvolution - it has been applied to stretched data.
regards ray

kinetic
08-03-2014, 01:45 PM
Great discussion Ray,

further to your comment above,
I wonder if there is any merit in applying the deconvolution to the individual calibrated subs rather than the raw stack result, pre-stretch?

Steve

LightningNZ
08-03-2014, 02:01 PM
Good point Ray, thanks for mentioning that.

Steve - as long as no stretching is done during the stacking process then all that's going on is to average out the noise and signal in the original images so it should be okay.

Maybe instead of thinking of deconvolution as a sharpening process they should be thinking of it as a calibration process? That is really what it is provided the original signal is respected.

Shiraz
08-03-2014, 05:12 PM
probably agree with Cam Steve - will have to give it some thought. what do you think?



makes a lot of sense Cam - it is clearly a calibration process in that it corrects the data for deficiencies in the imaging process.

regards ray

Bassnut
08-03-2014, 05:46 PM
I don't think that applies generally at all - why do you say that? As StarTools tracks stretching and noise during processing, it offers "mathematically correct"
(whatever that means) deconvolution before or after stretching. I've done this many times and it seems to make no difference whether the image is linear or stretched. I haven't directly A/B compared results BTW - that's just anecdotal fiddling around. I don't think CCDStack or PixInsight can do this - is that what you mean?

Shiraz
08-03-2014, 06:42 PM
Hi Fred

In stretched data, the PSF varies with brightness (eg bright stars have different FWHM than dim ones). As far as I can see, deconvolution will not work properly if you cannot define a consistent PSF.

According to Ivo on the StarTools forum: As of version 1.3, the difference between linear (for which screen stretching is needed) and non-linear data has been abstracted away; all operations now *appear* non-linear, but some are, in fact, performed on linear versions of the data in the background. StarTools accomplishes that by 'going back in time' while your data was still linear, applying the operation to that version of the data, then calculating how the result would have looked after all the steps you performed after you decided to apply the linear step; it's akin to time travel where you change the past in order to change the future.

What this means for you is that you don't have to worry about keeping your data linear for, for example, deconvolution - you can now apply mathematically correct deconvolution any time, for example after stretching, HDR optimisation, Wavelet Sharpening, etc.) - it will still work! StarTools will calculate the result for you as if you performed deconvolution before you applied stretching, HDR optimisation, Wavelet Sharpening, etc. Neat hey? It's ultimate freedom! And not to mention much more user friendly... :)

I guess that what this means is that StarTools does not actually do deconvolution on stretched data. It will not allow deconvolution on data that is imported in a stretched form. It protects you from making a mess of your data by cleverly referring back to the linear data if you wish to apply deconvolution after stretching linear data in StarTools. Other packages are not so protective or flexible and, as a general rule, deconvolution should not be applied to stretched data - it won't work properly.
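The underlying issue is easy to check numerically: convolution (and hence deconvolution) does not commute with a non-linear stretch, so a PSF measured or applied after stretching no longer describes the blur that actually happened while the data was linear. A toy demonstration (using an asinh stretch of my own choosing as a stand-in for any screen stretch):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stretch(img, strength=50.0):
    """A typical non-linear screen stretch (asinh)."""
    return np.arcsinh(strength * img)

# A point "star": compare the two orders of operations.
scene = np.zeros((33, 33))
scene[16, 16] = 1.0
blur_then_stretch = stretch(gaussian_filter(scene, sigma=2.0))
stretch_then_blur = gaussian_filter(stretch(scene), sigma=2.0)
# The two results differ substantially, which is why bright and dim
# stars show different FWHMs after stretching - there is no longer a
# single consistent PSF for deconvolution to invert.
```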

Bassnut
08-03-2014, 07:37 PM
Ok, thanks for that Ray (I should RTFM) :thumbsup:. Makes total sense.

Shiraz
08-03-2014, 08:57 PM
:thumbsup: ..was good idea to bring up StarTools - it's another good package that offers deconvolution.

ericwbenson
10-03-2014, 02:19 PM
Hi,
To clarify, deconvolution cannot be considered a calibration step. Calibration is the removal of known and quantifiable system artefacts. The atmospheric blurring that decon addresses is never known a priori, nor easily quantifiable. Decon is an inverse problem with no unique, and sometimes no obvious, solution.

Also, decon should not be applied to single subs (unless you have only one sub!). As I mentioned in a previous post above, decon is basically a trade-off of high SNR (because you have more than you need, and can always collect more signal with more time) for spatial resolution, which you cannot easily get more of even if you collect forever at that location.

EB

LightningNZ
10-03-2014, 03:18 PM
Two points here:
1) Noise may be quantified, but only by estimating a distribution. You cannot know the exact noise term in each pixel and remove it perfectly, leaving only a perfect image.

2) As I already mentioned in an earlier post, you cannot model right across the whole changing wavefront of the sky, so some approximation is always to be expected. This is no different from estimating the dark noise, bias noise, amplification noise or shot noise for any given pixel.



I've never seen this written before and I think it's not strictly true. Wikipedia states the following:

This says that deconvolution will always be limited by SNR, but it is NOT a trade-off. You imply, with your statement that "you can always improve SNR", that the possibilities of decon are endless and that you could somehow achieve better resolution than half the wavelength of the light you are looking at - you can't. You can restore to this point (and the question of what can be resolved varies by empirical measure - Dawes, Rayleigh, etc.), but you can't beat it, ever.

NB. There are tricks to doing so in microscopy - so-called Super-Resolution Microscopy - but these are actually methods of isolating what are effectively point sources and using them to reconstruct a whole image; they are not part of this discussion.

Shiraz
10-03-2014, 06:45 PM
Hi Eric.
Agree that the atmosphere cannot be known a-priori, but I don't think that you need this - surely you can just measure the stellar profile in the stacked image at processing time and use that as your PSF for deconvolution. regards Ray

ericwbenson
10-03-2014, 07:23 PM
This is true, and hence why decon is a "wicked" problem.


Not exactly sure what you're getting at. But remember calibration does not remove noise (dark, bias or shot - it actually adds some itself), it only removes unwanted signal (dark, bias, fixed pattern and vignetting) that is artificially added by the measurement system. Decon is not removing a measurement artefact - the smeared wavefront is the same no matter what telescope/camera is looking at it.
As a byproduct of decon, optical aberrations can be 'removed' (if your algorithm is good enough, e.g. TinyTim for the blurry Hubble), since the decon assumes the original PSF was a symmetric Gaussian/Moffat etc., and that's where it steers the solution. I've never had any success with that, however - bad data due to miscollimation or the like always sticks around to annoy...


Well, here's another way to look at it, notwithstanding wikipedia :) : the image contains information, decon reallocates that information. Decon is not a free lunch, the amount of spatial resolution gained is limited by (among other things) how much signal to noise contrast you can/want to give up.


Sorry I don't understand how you draw this conclusion.


Wait a sec... you can always improve the SNR of the raw data - just collect more data (granted, it's asymptotic, but not if you get a bigger scope...)
Where did I say the possibilities of decon are endless?? They are in fact quite the opposite: decon in most cases gives a marginal or no real improvement to the image, since it can easily create structures that aren't real by amplifying noise. Doing decon right is not easy at all - the best way is to have really good data to start with, and to be very careful as you wield that sword.
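The noise amplification described here is easy to reproduce. A toy sketch (bare van Cittert-style steps on simulated data of my own construction, not any particular package):

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)

# Simulated frame: one star blurred by a Gaussian PSF, plus read noise.
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()
scene = np.zeros((64, 64))
scene[32, 32] = 100.0
observed = fftconvolve(scene, psf, mode='same') + rng.normal(0.0, 0.05, (64, 64))

# Bare iterative deconvolution steps: the star tightens, but the
# noise in the empty background is amplified at the same time.
estimate = observed.copy()
for _ in range(30):
    estimate = estimate + 0.5 * (observed - fftconvolve(estimate, psf, mode='same'))

background_before = observed[:20, :20].std()
background_after = estimate[:20, :20].std()
```

The background standard deviation grows with each iteration even though no real structure is there - exactly the SNR-for-resolution trade being described.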

EB

ericwbenson
10-03-2014, 07:34 PM
If you measure it from your image, and it changes for every image, and you can't remove it with a reversible mathematical transformation, I can't see how it can be called calibration.
Those semantics aside, yes - the PSF of each star is the clue that allows deconvolution to enhance the non-point-source bits. It presumes (a very important clause here, because it's not always right!) that the maths that transforms the crappy PSF into the desired perfectly shaped/narrower Gaussian also applies to your nebula etc.

EB

Shiraz
10-03-2014, 07:51 PM
Thanks Eric. That's a key assumption of deconvolution that Mike alluded to earlier.
Under what circumstances will deconvolution apply to stars but not to nebulae etc in the same image? - apart from the obvious and manageable issue of non-linearity/saturation.

EDIT: I guess the basic question is: which algorithms employ constraints that work against extended structures? My understanding is that some of the radio astronomy algorithms do, but that the algorithms widely used in optical systems are just as capable of effective deconvolution on nebulae/galaxies as on stars. The widely used Richardson-Lucy and van Cittert, for example, are fine for extended objects and will work properly using measured star profiles directly as PSFs for deconvolution of nebulae and galaxies.

LightningNZ
10-03-2014, 09:39 PM
People get this strange idea that, because perfect Airy discs are used to model an ideal PSF for a star, nebula images will somehow get screwed up by deconvolution. They won't - provided there are at least a couple of stars in there to train the blind deconvolution algorithms. The correction model that effectively "sharpens up" the stars will also be applied to the nebula, bringing it back closer to what it would be without distorted air and optics in the way.