NGC 5189 deconvolution
Lester
30-06-2010, 05:38 PM
Hi all, I took this image a few months ago with a 14" LX200 at f/6.3, 19 x 4 minute exposures. Stacked in Deep Sky Stacker, processed in Photoshop, with deconvolution done in Astra Image.
The second image has had deconvolution applied in Iris by Ray. IMO the result of this process has added sharpness beyond what I could achieve with Astra Image.
Comments welcome. Are any others using Iris for processing images?
TrevorW
30-06-2010, 09:42 PM
IRIS has a deconvolution function aptly named CRISP IMAGE; I assume this is what Ray used.
It works well sometimes; much depends on the image data.
Shiraz
30-06-2010, 09:49 PM
I used the modified Richardson-Lucy deconvolution algorithm in IRIS and it clearly outperformed the standard algorithm on Lester's high-res planetary nebula images. I'm not sure what the basis for the modification is, but it produces much lower ringing and background granularity than the standard algorithm.
An interesting example of the algorithm differences. Do you know what's going on behind the curtains of the two algorithms? Are you selecting a star for the PSF, or is this done automatically or even simulated?
Like most image processing routines, the use of masks is key to controlling the effects. Masks would have greatly helped here, as the stars can become 'butchered' if not managed. Interesting target nonetheless, and as Trevor highlights, how far you can push the data with deconvolution greatly depends on its quality.
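For anyone who hasn't seen the algorithm written out, here is a minimal sketch of the standard Richardson-Lucy iteration in Python/NumPy. IRIS's modified version is undocumented, so this only illustrates the baseline algorithm; the PSF is assumed to be a small, background-subtracted cutout around an unsaturated star.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=10):
    """Standard Richardson-Lucy deconvolution (baseline, not IRIS's modified RL2).

    image : 2-D array of linear (unstretched) data, values >= 0
    psf   : 2-D array, e.g. a small cutout around an unsaturated star
    """
    psf = psf / psf.sum()                  # kernel must sum to 1
    psf_mirror = psf[::-1, ::-1]           # flipped PSF for the correction step
    estimate = np.full(image.shape, image.mean(), dtype=float)
    eps = 1e-12                            # guards against division by zero
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / (blurred + eps)    # how far the current estimate is off
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate
```

Ringing around bright stars and amplified background grain are exactly the failure modes this basic form tends to show, which is presumably what the RL2 modification tries to suppress.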
Lester
01-07-2010, 06:57 AM
Thanks for your comments Trevor, Ray and Jase. I don't fully understand the process in Iris and am grateful to Ray for offering to tweak my images in this program.
Lester
01-07-2010, 07:51 AM
Here are 2 more planetaries that Ray has tweaked in Iris for me. The first is NGC 3132 and the second IC 4406. In both cases more nebula detail is revealed, showing finer structure. I agree with your comments Jase that the stars can suffer. Perhaps this would be where the starless look may help.
Shiraz
01-07-2010, 11:21 AM
Hi
The kernel in the IRIS implementation is an unsaturated star, and the algorithm works well if the kernel is changed regularly (e.g. between passes). Of course the big problem arises with saturated stars, where the underlying stellar shape information has been entirely lost. But it does a really good job of tightening faint stars, tidying up minor tracking problems and extracting more detail from extended structures, so it can be worth doing anyway. The RL2 algorithm in IRIS seems to handle saturated stars much better than any other decon and does not do much damage to smooth surfaces. I would be very interested to know what is under the hood, but the manual does not say and I don't fancy doing any reverse engineering.
Jase, I have not yet used star removal, but if you get rid of the stars, I guess you infill from the surrounding regions. Deconvolution would then provide enhanced structure on the remaining extended objects, but you would not get any of the benefits on the stars - is that the case, or could you tidy them up before re-inserting?
regards ray
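To make the "kernel is an unsaturated star" idea concrete, here is one way such a kernel could be cut out and normalized in NumPy. This is purely illustrative, not the IRIS routine; the star position and box size are placeholders.

```python
import numpy as np

def extract_psf(image, x, y, half=10):
    """Cut a small box around an unsaturated star and turn it into a
    deconvolution kernel: remove the local background and normalize to unit sum."""
    box = image[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    box = np.clip(box - np.median(box), 0.0, None)   # crude local background removal
    return box / box.sum()
```

A saturated star would give a flat-topped cutout here, which is why the stellar shape information Ray mentions is unrecoverable from it.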
Shiraz
01-07-2010, 11:37 AM
Hi Lester
It just occurred to me that it might be worthwhile to re-post your original 4406 image alongside the intermediate and heavily processed ones, to show the advantages and disadvantages of the two levels of processing on this type of object.
regards Ray
TrevorW
01-07-2010, 11:40 AM
I tried the wavelet with a setting of 2, but it was very harsh.
The attached shows the crisp function with a setting of 0.65 on stars in the Jewel Box.
Ray
Star removal is a little over-rated IMO. The issue with removing stars is that it's near impossible to do so in a clean manner. Even using a continuum filter for narrowband data sets, there are always traces of where the stars were located, requiring the image to be upscaled and the traces manually cloned out. There are short-cut solutions on the net that others have posted here. Deconvolution will wreak havoc if these traces are not sufficiently cleaned up, especially those with significant tonal differences from the surrounding area.
So instead of creating all that extra work with little gain by getting rid of the stars, work with them. Deconvolution can be applied at different strengths. You'll find the gains in reducing star sizes slowly decrease with the number of deconvolution passes. The difference between, say, 8 passes and 20 passes is much smaller than you think. Try it. You need to settle on a pass value where you maintain the stellar profile of a tight star but don't butcher them by completely blowing out the center. This is a fine line, as you may not be able to reach the desired pass value depending on the quality of the data. No point sharpening noise when you're trying to get rid of it!
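One crude way to see that diminishing return, reusing the richardson_lucy and extract_psf sketches above (the star positions are placeholders and the width measure is only a rough moment-based estimate):

```python
import numpy as np

def star_width(image, x, y, half=10):
    """Rough star width (sigma in pixels) from second moments of a small cutout."""
    box = image[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    box = np.clip(box - np.median(box), 0.0, None)
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    total = box.sum()
    cx, cy = (xx * box).sum() / total, (yy * box).sum() / total
    var = (((xx - cx) ** 2 + (yy - cy) ** 2) * box).sum() / total
    return np.sqrt(var / 2.0)

psf = extract_psf(image, 412, 305)     # 'image' is the stacked, linear master frame
for n in (4, 8, 12, 20):
    print(n, star_width(richardson_lucy(image, psf, iterations=n), 198, 520))
    # the measured width shrinks quickly at first, then plateaus
```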
What about the extended object? You may find that the deconvolution applied simply to tighten up the stars was too weak to do your extended object justice. It may have good signal, so why not push it harder? In this case, load up your original master and increase the deconvolution passes to extract as much detail as possible without introducing artefacts. Don't focus on how the stars look, just the extended object.
So you now have two images: one where the stars look tight but not overdone (cooked too long), and the extended-object deconvolution where the stars look ridiculous but the extended object is in full glory, showing depth and detail. Time for some work in Photoshop.
As deconvolution (like most sharpening routines) alters the contrast between shadows, midtones and highlights, the extended object is likely to be brighter than in the master that used a lower pass value to tighten the stars. If so, you can layer the extended-object image in 'Lighten' blending mode to simply bring through the brighter highlights of the heavily deconvolved extended-object layer. 'Lighten' takes the larger of each pair of RGB values. Note the stars will also break through in this mode, given they'll be brighter, so you'll need to create a mask to block them from coming through. A 'Hide All' mask will do the trick; simply paint in the highlights from the extended-object layer. Be sure to blur the mask so the transition between layers is smooth. Experiment with the blending modes in Photoshop. 'Soft Light' is a favourite of mine, but you need to be aware of what the blend does, as some darken while others lighten. 'Soft Light' darkens, so the image being blended needs to be stretched. You can also perform a double layer (duplicating the layer) to increase the effect. Plenty of info online, but I recommend understanding them and knowing when best to use them.
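The same 'Lighten'-through-a-mask arithmetic, written out in NumPy rather than Photoshop layers (the array names are placeholders; this only shows the blend maths, not the full layer workflow):

```python
import numpy as np

def lighten_blend_with_mask(base, detail, mask):
    """Blend the heavily deconvolved 'detail' layer over the star-friendly 'base' layer.

    base, detail : float arrays of the same shape, values scaled to [0, 1]
    mask         : float array in [0, 1]; 0 hides the detail layer (a 'Hide All' mask),
                   1 lets it through fully - blur it for a smooth transition
    """
    lighten = np.maximum(base, detail)        # 'Lighten': the larger of each pair of values
    return base + mask * (lighten - base)     # the mask controls where the blend applies
```

A simple mask could be a blurred, thresholded copy of the nebula itself, so the blend only acts on the extended object and the stars in the base layer are left alone.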
Depending on how far you want to take it, you can also add a variety of extra layers with even greater levels of deconvolution and sharpening, all while still maintaining a good stellar profile. Processing is one of those activities where what you put in is what you get out.
Shiraz
01-07-2010, 04:22 PM
Hi Jase
Thanks very much for that clear explanation of your philosophy re deconvolution. It really is quite exciting what can be extracted from an image if the SNR is good to start with, and Lester's originals were pretty good. When dealing with the smaller objects, I guess there is not really any alternative for getting past atmospheric blurring.
Re technique, I generally only use one or two passes with a given kernel and then reselect another (previously dimmer) star to use as the next kernel, etc. I have found that this is more efficient than using multiple passes with one kernel, and it gives the ability to stop when the result is "right".
regards Ray
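For completeness, a sketch of that "one or two passes per kernel, then switch to a dimmer star" loop, reusing the richardson_lucy and extract_psf sketches above (the star coordinates and the variable image are placeholders):

```python
# 'image' is assumed to be the stacked, linear master frame (a 2-D array).
# Unsaturated reference stars, brightest first - coordinates are made up.
star_coords = [(412, 305), (198, 520), (640, 88)]

result = image.astype(float)
for x, y in star_coords:
    psf = extract_psf(result, x, y)                       # re-measure the PSF on the current result
    result = richardson_lucy(result, psf, iterations=2)   # only a pass or two per kernel
    # inspect the intermediate result here and stop as soon as it looks "right"
```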