#19  03-08-2012, 09:22 PM
RobF (Rob)
Brisbane, Australia
Quote:
Originally Posted by irwjager
Hi,

Before I delve into the processing, let me just say that you really need to get to know your gear and how to get the most out of it during acquisition. If you're on a budget and work with sparse equipment, can't travel to a dark sky site, or the moon is out, then you need to know how this affects your data.
Don't have a field flattener? Expect coma. Using cheap glass? Expect chromatic aberration. Using a cheap mount? Expect eggy stars. Shooting under light pollution? Expect a background level with a particular color signature. Shooting with the moon out? Expect a background level with another signature. Shooting widefield? Expect stronger gradients. And so on.
You need to learn how to cope with whichever of these problems affects you. Do that in hardware first; if you can't, use software as a fallback.

Once you understand the strong points and weak points of your gear, your circumstances and even yourself (how much patience do you have? how much of a perfectionist are you?), then you can start provisioning for these weak points.
Next, try to understand your strong points as well. Use them when choosing an object, or use them to cover up your weak points. Bad tracking? Go widefield. Sensor too big? Use binning. Light pollution? Consider imaging in narrowband, etc.

Only when you're comfortable with how you acquire your data, and with what you typically get on a given night, can you really start to develop a processing regime. For us 'commoners', this workflow will be different for everyone, because everyone acquires different types of data: under different circumstances, with different gear, with different skills, and of different objects. The more serious you become about this hobby, the more consistent and flawless your data will be.

Processing, besides the usual steps that everyone goes through (signal stretching, etc.), will deal primarily with addressing your weak points, using your strong points and tackling object-specific challenges (such as high dynamic range issues).

So let me preface my description by saying that the steps I took were mostly specific to these data sets and may not apply to you. Let me also say that some steps, techniques and algorithms, while having a basis in advanced image processing, are frowned upon by others, for example the creators of PixInsight. This is because some techniques partially use data-driven best-estimate reconstructions of some pixels, instead of only using 1:1 transformations (curves, filters, histogram manipulations) of the data as it was recorded.

As far as Petr's M31 data set goes, it suffers from severe vignetting and light pollution. There are several tools you can use that will ameliorate this (GradientXTerminator for PS, ABE or DBE in PixInsight, Wipe in StarTools). This particular data set is severely affected, so much so that I had trouble with StarTools getting rid of all of it without adversely affecting the fainter parts of the galaxy.
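
For anyone wondering what these tools conceptually do: they model the smooth background (gradient plus vignetting) and remove it, leaving the real signal behind. A hand-rolled Python sketch of that general idea follows; this is not the actual GradientXTerminator, ABE/DBE or Wipe algorithm, and the grid size and polynomial order are arbitrary assumptions:

[code]
# Rough sketch of gradient/vignetting removal by background modelling.
# NOT the algorithm used by GradientXTerminator, ABE/DBE or Wipe --
# just the general "fit a smooth background and subtract it" idea.
import numpy as np

def remove_gradient(img, grid=16, order=2):
    """Fit a low-order 2D polynomial to background samples and subtract it."""
    h, w = img.shape
    xs, ys, zs = [], [], []
    for gy in range(grid):
        for gx in range(grid):
            # the median of each cell is a crude, star-resistant background sample
            cell = img[gy * h // grid:(gy + 1) * h // grid,
                       gx * w // grid:(gx + 1) * w // grid]
            xs.append((gx + 0.5) / grid)
            ys.append((gy + 0.5) / grid)
            zs.append(np.median(cell))
    xs, ys, zs = np.array(xs), np.array(ys), np.array(zs)
    # least-squares fit of terms x^i * y^j with i + j <= order
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1) if i + j <= order]
    A = np.column_stack([xs ** i * ys ** j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, zs, rcond=None)
    # evaluate the background model over the full frame and subtract it
    yy, xx = np.mgrid[0:h, 0:w]
    xxn, yyn = xx / w, yy / h
    model = sum(c * xxn ** i * yyn ** j for c, (i, j) in zip(coeffs, terms))
    return img - model + np.median(zs)  # keep the overall background level

# usage (hypothetical variable): flat = remove_gradient(stacked_luminance)
[/code]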

Some trivial stretching soon revealed that the outline of M31 was very hard to distinguish from the background.
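
For those new to the jargon, 'stretching' just means applying a non-linear mapping that lifts faint signal so you can see it. A toy asinh stretch in Python, with an arbitrary stretch factor, looks like this:

[code]
# Toy non-linear (asinh) stretch -- the kind of "trivial stretching" meant here.
import numpy as np

def asinh_stretch(img, factor=500.0):
    """Map linear data in [0..1] through asinh to lift faint signal."""
    img = np.clip(img, 0.0, 1.0)
    return np.arcsinh(img * factor) / np.arcsinh(factor)

# usage (hypothetical variable): preview = asinh_stretch(linear_stack / linear_stack.max())
[/code]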

To bring out M31, I used a new technique called 'large scale light diffraction remodelling' (as implemented in StarTools' Life module), where semi-automatically selected individual parts of an object are used to remodel the aggregate light diffraction of the whole object, restoring its outline and making it stand out against the background again, with the original (already visible) detail embedded inside it. The outline is an approximation but tends to correlate very well with what the real object looks like. It's just one example of how data-driven best-estimate replacement of pixels is used to bring out an object in your data that would otherwise be lost or inaccessible.
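
The exact implementation is StarTools-specific, but very loosely, the idea of building a smooth large-scale light model from selected parts of an object and blending it back could be caricatured as below. This is purely illustrative and not the Life module's algorithm; the sigma and blend values are made up:

[code]
# Purely illustrative caricature of "remodel the large-scale light of an object
# from selected bright parts" -- NOT the StarTools Life module algorithm.
import numpy as np
from scipy.ndimage import gaussian_filter

def remodel_outline(img, mask, sigma=50.0, blend=0.5):
    """Build a smooth large-scale light model from the masked (selected)
    parts of the object and blend it back under the existing detail."""
    selected = np.where(mask, img, 0.0)
    weight = gaussian_filter(mask.astype(float), sigma)
    model = gaussian_filter(selected, sigma) / np.maximum(weight, 1e-6)
    return np.maximum(img, blend * model)  # lift the faint outline, keep detail

# usage (hypothetical variables): out = remodel_outline(stretched, core_mask)
[/code]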

I used a special astronomy-specific version (coming in ST 1.3) of an algorithm similar to Photoshop's 'content aware fill' to remove the dust donuts while still retaining any stars that were dimmed by them. It's another example of a data-driven, best-estimate replacement.
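
The ST 1.3 algorithm is its own thing, but the general mask-and-fill idea can be tried with OpenCV's (much simpler) inpainting, which won't preserve dimmed stars the way described above. A sketch, with a made-up file name and donut position:

[code]
# Sketch of removing a dust donut by inpainting over a mask.
# OpenCV's inpaint() is far simpler than the ST 1.3 algorithm described above;
# it just fills the masked area from its surroundings.
import cv2
import numpy as np

img = cv2.imread('m31_stretched.png')              # 8-bit RGB for this demo
mask = np.zeros(img.shape[:2], dtype=np.uint8)
cv2.circle(mask, (812, 440), 60, 255, -1)          # made-up donut position/size
fixed = cv2.inpaint(img, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite('m31_no_donut.png', fixed)
[/code]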

I finished off with rounding the stars (another data-driven, best-estimate modification of the image).

The Sculptor galaxy data was much better. After a screen stretch (i.e. a stretch that doesn't modify the data but merely visualises it), I started off with some mild selective deconvolution, as the noise level was quite good (low). It managed to recover a fair amount of detail.
It didn't require such aggressive gradient removal and a small amount of stretching was enough to bring out the galaxy further.
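
If you want to experiment with this kind of mild deconvolution outside StarTools, Richardson-Lucy (e.g. in scikit-image) gives a comparable flavour. A sketch, where the Gaussian PSF and iteration count are guesses rather than measured values:

[code]
# Sketch of mild deconvolution with Richardson-Lucy (scikit-image),
# loosely analogous to the selective deconvolution step described above.
import numpy as np
from skimage import restoration

def gaussian_psf(size=15, sigma=2.0):
    """Simple Gaussian PSF; in practice, estimate it from a star in the frame."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

# 'lum' would be the luminance data scaled to [0..1]; 20 iterations is an
# arbitrary, mild setting (hypothetical variable):
# lum_sharp = restoration.richardson_lucy(lum, gaussian_psf(), 20)
[/code]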

As with the M31 image, I removed the dust donuts. Next I selected the whole galaxy in a mask and calibrated the white point to the full galaxy and nothing else. I bumped up the saturation a little, did some very mild noise reduction, rounded the stars and called it a day.
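
In generic terms, calibrating the white point against a masked region just means scaling each channel so the selected pixels come out neutral on average, and a saturation bump pushes each channel away from the per-pixel luminance. A bare-bones sketch of both, illustrative only and not the StarTools routines:

[code]
# Bare-bones white point calibration against a masked region, plus a mild
# saturation bump. Illustrative only -- not the StarTools colour routines.
import numpy as np

def calibrate_white(rgb, mask):
    """rgb: float array (H, W, 3) in [0..1]; mask: bool array (H, W) selecting the galaxy."""
    means = np.array([rgb[..., c][mask].mean() for c in range(3)])
    scale = means.mean() / means               # per-channel gain towards neutral grey
    return np.clip(rgb * scale, 0.0, 1.0)

def boost_saturation(rgb, amount=1.2):
    """Push each channel away from the per-pixel luminance by 'amount'."""
    lum = rgb.mean(axis=2, keepdims=True)
    return np.clip(lum + (rgb - lum) * amount, 0.0, 1.0)

# usage (hypothetical variables): balanced = boost_saturation(calibrate_white(img, galaxy_mask))
[/code]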

Cheers,

Top post Ivo, almost worthy of pinning somewhere. The reality is that many people don't want to spend an arm and a leg on top quality astro gear, and you can make do with a lot less than top notch as long as you understand and expect certain limitations. You can learn to understand and manage those limitations, and do things at data collection and in post-processing to increase your chance of achieving your goals. Not everyone is worried about APODs (nice though!), and the PMX, FSQ, 16803 rig can wait until we win lotto anyway.