Old 08-04-2015, 11:06 AM
Placidus (Mike and Trish)
G'day colleagues,

Because I write my own image-processing software, I should love to understand how to do this not at a "use this tool in this package" level, but at a mathematical, algorithmic level. I have no answer, but let me talk around the problem for a bit.

One reason for removing stars (and then replacing them) is to avoid ringing when deconvolving or sharpening. I tackle that (as does PixInsight) by using an anti-ringing step between each deconvolution step. The other reason is to control purple haloes when using the Hubble palette (R=SII, G=Ha, B=OIII), in order to gain an astrophysical understanding of what we are looking at. This is my main concern today.

The origin of the typical purple haloes is that a nebula usually emits, say, ten times more light in H-alpha than in OIII or SII. That makes the image overwhelmingly green. We then stretch the red and blue channels, either to bring out what little OIII and SII there is, for astrophysical reasons, or to make the image more colour-balanced, for aesthetic reasons. The result is that the stars end up bigger in red and blue than they are in green.

Our goal is to try to work out what the background would look like if the star wasn't there. The human eye is amazingly good at that, but we don't know how we do it.

One way to think about this is that it is the same as trying to guess the pattern of a tablecloth under a coffee cup. We mathematically model how the brightness changes with position in the background well outside the star, and then use that model to guess what is "under" the star. My own star-removing tool does this by fitting a general conic section z = ax + by + cxy + dx^2 + ey^2 + f to the brightness well outside the star, and then uses that model to interpolate under the star. The fit is done by multiple linear regression. Each input pixel is weighted by the (fuzzy) extent to which it is dark: bright pixels are either other small stars, or part of the halo or diffraction spikes of the main star, and we want to ignore a pixel to the extent that we think it is part of a star.
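To make the idea concrete, here is a minimal sketch of that weighted conic-surface fit in Python with NumPy. The function name and the shape of the weight map are my own inventions for illustration, not Mike's actual code; the maths (weighted multiple linear regression on the six conic terms) is as described above.

```python
import numpy as np

def fit_background(img, weight):
    """Fit z = a*x + b*y + c*x*y + d*x^2 + e*y^2 + f to a small patch,
    weighting each pixel by how much it looks like dark background,
    then evaluate the fitted surface at every pixel of the patch.

    img    : 2-D array containing the star and its surroundings
    weight : same shape; ~1 for dark background pixels, ~0 for pixels
             we believe belong to a star, halo, or diffraction spike
    """
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    x, y, z, w = xs.ravel(), ys.ravel(), img.ravel(), weight.ravel()

    # Design matrix: one column per term of the conic surface.
    A = np.column_stack([x, y, x * y, x * x, y * y, np.ones_like(x)])

    # Weighted least squares: scale rows by sqrt(weight) so that
    # zero-weight (starry) pixels drop out of the regression.
    sw = np.sqrt(w)
    coeffs, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)

    # The fitted surface is our guess at the tablecloth under the cup.
    return (A @ coeffs).reshape(img.shape)
```

The repaired patch is then, say, `np.where(weight > 0.5, img, fit_background(img, weight))`: keep real pixels where we trust them, and interpolate from the model where the star was.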

This approach works astonishingly well on a gently changing background, but fails utterly if, say, a shock front crosses through the star; that would need a much more sophisticated model. It also fails if two very bright stars sit quite close to each other. These special cases can be handled manually, for example in Photoshop, by cloning a bit of background that looks like what we think should be under the star. That's not very scientific - it is finger painting - but is perhaps justifiable artistically.

There is another approach, which I've fiddled with but not yet got working well. Instead of thinking of the star as an opaque coffee cup, where all information about what lies below has been lost and has to be guessed, think of the star as more like a spreading tea-stain. If we can mathematically model the stain, we can subtract the stain to reveal what is underneath. We don't need to know about shock fronts, for example: they will be revealed when we subtract the stain. This method will only work well for the outer edges of the stain (the halo). The centre of the stain (the burned-out star) is opaque, and all information about what is underneath is already lost.

One way to realise this "stain modelling" method is to assume that the star is radially symmetrical and has some expected profile, for example (prior to curves) a Gaussian profile with the brightest central region clipped to 65535.
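A sketch of that stain-subtraction idea, again in Python with NumPy. I'm assuming here that the star's centre, amplitude, and width have already been estimated somehow (in practice they would be fitted to the image); the function and its parameters are hypothetical names for illustration.

```python
import numpy as np

def subtract_star_halo(img, cx, cy, amplitude, sigma, clip=65535.0):
    """Subtract a radially symmetric Gaussian 'tea-stain' centred on
    (cx, cy), clipped at the saturation level.

    Only the unsaturated outer halo can meaningfully be removed; the
    clipped core carries no background information, so we flag those
    pixels rather than pretend to recover what is underneath them.
    """
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    r2 = (xs - cx) ** 2 + (ys - cy) ** 2

    # Expected profile: Gaussian, burned out (clipped) in the middle.
    model = np.minimum(amplitude * np.exp(-r2 / (2.0 * sigma ** 2)), clip)

    # Subtracting the stain reveals the unblemished cloth in the halo.
    cleaned = np.clip(img - model, 0.0, None)
    saturated = model >= clip      # core: information already lost
    return cleaned, saturated
```

Anything hiding in the halo - a shock front, faint nebulosity - survives the subtraction untouched, which is the attraction of this approach over interpolation.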

All possible approaches to removing stars or haloes, regardless of package, must ultimately use one or both of these two approaches:

(a) Look at the surrounding tablecloth and try to guess what is under the coffee cup.
(b) Look at the tea-stained cloth, and from what you know about tea-stains, try to work out what the unblemished cloth would look like.

I hope that when you tackle the problem using existing commercial tools, thinking in this way helps you understand why a given tool is not quite working as expected.

Best,
Mike