M83 - Data available


codemonkey
19-07-2015, 11:53 AM
This is my second attempt (ever) at processing LRGB data. Man, it's harder than narrowband! Don't know how you guys do it so well. Would have liked more colour data, but the weather is the weather.

Data is:

R: 4x420
G: 6x420
B: 7x420
L: 20x420, plus a synthetic L created from the RGB.

The registered and calibrated RGB and "super L" data set is available here (https://www.dropbox.com/s/khs16y4b92lrir3/M83.zip?dl=0) if anyone wants to play with it.

Basic process was:


Decon L
Noise reduction on L using ATrousWaveletTransform
Multiple HistogramTransformation passes on L
Using ImageStatistics, get the median of the L, then use MaskedStretch on the R/G/B images with a target background matching the L's median (see the sketch after this list)
LRGBCombination using default settings
ColourCalibration
Couple of curves tweaks to the L "channel"
HDRWavelets to lightness with lightness mask
Couple of passes of LocalHistogramEqualization
Boost saturation using curves with a mask to protect the background
SCNR to get rid of the green stars.... green stars?
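
For the curious, here's roughly what that background-matching step amounts to outside PI, as a minimal numpy sketch: pick the midtones balance so the channel's median lands on the L's median, then apply the midtones transfer function. MaskedStretch itself is iterative and masked, so this is only the core idea; mtf and stretch_to_background are illustrative names, not PI functions.

    import numpy as np

    def mtf(m, x):
        # Midtones transfer function on [0, 1]; mtf(m, m) == 0.5.
        return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

    def stretch_to_background(img, target):
        # Solve mtf(m, median(img)) == target for m, then stretch,
        # so the image's background ends up at the target level.
        med = float(np.median(img))
        m = med * (1.0 - target) / (med * (1.0 - 2.0 * target) + target)
        return mtf(m, img)

    # Fake data standing in for the L and R masters.
    rng = np.random.default_rng(0)
    L = np.clip(rng.normal(0.08, 0.01, (256, 256)), 0, 1)
    R = np.clip(rng.normal(0.03, 0.01, (256, 256)), 0, 1)
    R_matched = stretch_to_background(R, float(np.median(L)))
    print(np.median(R_matched), np.median(L))  # medians now match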


All in all, not that happy with it yet so I'll probably redo it another 50 times :-)

Edit: Just tweaked the colour balance a bit more since it was too yellow.

AlexN
19-07-2015, 12:02 PM
I think it's great for the beginnings in LRGB imaging. I remember when I went from a QHY8 to an ST8300M and filters a few years back, I found the transition really difficult to grasp. It took a few months to get my groove going with it. Once you find what works for you, you start to see the benefits of the extra work in LRGB imaging.

RickS
19-07-2015, 12:18 PM
Lee,

The theory is that ColorCalibration should only be applied to linear data. Might be worth trying to see if it helps... Getting good LRGB colour is a challenge at the best of times :)

Cheers,
Rick

Atmos
19-07-2015, 03:38 PM
I believe it is best to get the colour the way you want it while it is linear. Don't ask me why it's better done linear than not; it's just what I've heard! I am at the same stage as you, codemonkey :-)

Geoff45
19-07-2015, 04:49 PM
Just downloaded it now Lee. Data looks pretty good. Will have a go at it tomorrow since I have no more outstanding data of my own.
Geoff

Edit: Just had a quick run through it. I think it has lots of potential. Will do it more carefully tomorrow.

codemonkey
19-07-2015, 06:44 PM
Cheers Alex :-)

I didn't really spend much time doing OSC with a DSLR before moving to a mono CCD. But it's been a long time with lots of equipment issues, so almost all of my (limited) stuff has been mono: I always get L and figure I'll get RGB later, but later never comes.

All good fun though, at the early point of a learning curve... best place to be!





Thanks Rick & Atmos! I did try to do colour calibration with the linear data but it went from having a strong yellow hue over everything to having a slightly more olive, but still mostly yellow, strong hue over everything... so I tried it with the non-linear data and that seemed to work better.

I'll have another crack at calibrating the colour when it's linear and see how I go... must be some setting I didn't have right.



Thanks Geoff! Having you say "lots of potential" is a pretty big compliment given the work I've seen of yours on Astrobin. Looking forward to seeing what you can do with it.

I was able to pull out some good detail from the original L data, probably the best I've managed with a broadband subject... but trying to do it with LRGB has left me scratching my head.

The colour is extremely muted after I combine and when I try to tweak the data it just washes it out more... and then when I up the saturation it makes the data look bad, and the colour look cartoonish. Lots to learn when it comes to processing colour... looking forward to the challenge :D

Here's the mono version I did with the original L a while ago. Adding in the synthetic L dropped the noise by roughly 1/3 so there's a decent amount of detail in there I think, if you're actually competent in processing.

AlexN
19-07-2015, 06:48 PM
Just remember. Luminance is everything. Get it bright, deep, sharp and detailed. Colour needn't be too extreme. My best image with my 8300 was about 10hrs luminance and 3x5min of each colour.

RickS
19-07-2015, 07:07 PM
Lee,

An LRGBCombination of R/G/B followed by BackgroundNeutralization and ColorCalibration using a preview of the galaxy as the white reference (Structure Detection disabled) gave quite good colour for me...

I'll have a quick play and see what I can come up with...

Cheers,
Rick.

RickS
19-07-2015, 07:43 PM
A quick process in PI...
- MultiscaleLinearTransform noise reduction (linear mask) all masters

- Combine RGB (LRGBCombine)
- BackgroundNeutralization
- ColorCalibration with galaxy as white ref
- Stretch
- ACDNR denoise heavy on Chrominance
- Apply star mask and blur chrominance (ATWT remove layers 1-4)
- SCNR
- Apply clipped luminance mask: Curves boost Saturation
- Curves Hue: cyan -> blue

- stretch L
- HDR mask (clipped Lum mask, remove stars): LHE 0.5, HDRMT 7 layers, slight ATWT sharpen (small bias increase layers 1-3)

- Combine L and RGB (LRGBCombine)
- Slight contrast curve
- ICCProfileTransformation -> sRGB

Cheers,
Rick.

codemonkey
19-07-2015, 08:36 PM
Ah, that's interesting. I was using the galaxy as a white reference, but with structure detection enabled. I was also using a preview for the background reference.

I noticed a subtle (to my eye anyway) gradient when viewing with STF and I extracted that using DBE... Interestingly enough, the colour balance was pretty damn good after that without even running colour calibration. Didn't expect that...



Looking good! Thanks for that Rick, and I appreciate the detailed write-up too. I'll be going over that and picking up some things which I'll no doubt include in my workflow going forward :-)

RickS
19-07-2015, 09:00 PM
Structure detection is appropriate if you're using star colour for the white reference but not for a galaxy where you want the whole object considered.



DBE can do a pretty effective background neutralization :)

rustigsmed
20-07-2015, 09:41 AM
looks good Lee!
following your journey with interest!

Geoff45
20-07-2015, 11:27 AM
OK Lee, here we go:
I didn't use your luminance because I didn't know how you'd done it. I generated a new synthetic L from your R, G and B and used that instead.
--Load R, G and B images. Do a LinearFit with G as the reference.
--ChannelCombination
--CloneStamp to remove stray red, green and blue pixels.
--DBE
--BackgroundNeutralization
--ColorCalibration with M83 as the white reference
--RGBWorkingSpace with the colour coefficients set to 1 and gamma set to 1. The reason for doing this is to extract a linear luminance image in which R, G and B all contribute equally.
--Image > Extract > Lightness (CIE L*). I used this as my synthetic luminance (a sketch of this extraction follows the list). Stretch with HT
--Clone and clip synthetic L for use as a mask for the RGB image

--Back to RGB image.
--Stretch with HT
--Boost saturation with Curves, using the clipped synthetic L to mask the background.
--Noise reduction with ACDNR, using the clipped synthetic L to mask the bright areas. Go strong on the chrominance noise.
--SCNR to remove green cast.

--LRGBCombination
--Masked saturation boost with Curves
--SCNR
--Convert to sRGB for web display.
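
As a side note on the luminance-extraction steps above: with the working-space coefficients all set to 1, relative luminance reduces to a plain channel average, and the CIE L* curve is then applied to that. A hedged numpy sketch of the arithmetic (cie_lightness is my own name; PI does this internally via RGBWorkingSpace and channel extraction):

    import numpy as np

    def cie_lightness(R, G, B):
        # With luminance coefficients all equal to 1, relative
        # luminance is just the channel average.
        Y = (R + G + B) / 3.0
        # Standard CIE L* curve, rescaled from [0, 100] to [0, 1].
        eps = (6.0 / 29.0) ** 3
        f = np.where(Y > eps, np.cbrt(Y),
                     Y / (3.0 * (6.0 / 29.0) ** 2) + 4.0 / 29.0)
        return (116.0 * f - 16.0) / 100.0

    # Synthetic channels standing in for the registered R, G, B masters.
    rng = np.random.default_rng(1)
    R, G, B = (np.clip(rng.normal(0.1, 0.02, (128, 128)), 0, 1)
               for _ in range(3))
    L_syn = np.clip(cie_lightness(R, G, B), 0, 1)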

The image would probably benefit from a deconvolution as well as a light noise reduction on the final image.
Geoff

codemonkey
20-07-2015, 07:44 PM
Nice one Geoff!

Most of the data I had was in that L; the RGB had limited time put into it, so you pulled out a pretty good image from the worst of the data.

In going over these processes, both yourself and Rick mentioned a clipped luminance mask... can either of you spell that out for me a bit?

I never knew about linear fit either, that's perfect... I used masked stretch in mine because I didn't know that existed.

One other thing I discovered is that the saturation slider on the LRGBCombination tool is backwards, and I've been able to get much better colour now using that tool rather than trying to adjust it after the fact with curves.

I've updated my original post with my newest processing revision... much better now I think, though it's still far from perfect.

RickS
20-07-2015, 08:43 PM
Lee,

A clipped luminance mask is used when you want to apply a process to the brighter parts of the image more strongly and not to the dim background at all. A couple of typical examples are when you are boosting saturation or sharpening. These are not things you want to do to the background!

To create a clipped Lum mask you need some luminance captured with an L filter or extracted from an RGB image (the CIE L* or CIE Y component will do, but do an RGBWorkingSpace first to set the luminance coefficients all to 1). If this data is linear then you'll need to stretch it. The final step is to black clip it with HistogramTransformation so that the background area is all black (use a dynamic preview to adjust the black point).

Apply this mask to your image and it will protect the background areas and apply the process more strongly to the bright areas.
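
Outside PI, the black clip is just a rescale-and-clamp; a minimal numpy sketch, assuming the luminance has already been stretched (clipped_lum_mask is an illustrative name, not a PI process):

    import numpy as np

    def clipped_lum_mask(lum, black_point):
        # Everything below black_point goes to 0 (fully protected);
        # the rest is rescaled into [0, 1].
        return np.clip((lum - black_point) / (1.0 - black_point), 0.0, 1.0)

    # Example: background near 0.12 is clipped out, brighter areas remain.
    rng = np.random.default_rng(2)
    lum = np.clip(rng.normal(0.12, 0.03, (128, 128)), 0, 1)
    mask = clipped_lum_mask(lum, black_point=0.2)
    print((mask == 0).mean())  # fraction of fully protected pixels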

LinearFit isn't a replacement for a stretch but it does allow you to scale an image to approximately match the brightness of another one. This is useful for matching narrowband masters and many other applications.
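
The core of LinearFit is an ordinary least-squares fit of one image against another; a rough sketch (PI's LinearFit also supports rejection limits, which this ignores):

    import numpy as np

    def linear_fit(image, reference):
        # Fit reference ~ a*image + b, then apply the fit so the
        # image's brightness scale roughly matches the reference.
        a, b = np.polyfit(image.ravel(), reference.ravel(), 1)
        return a * image + b

    # Example: a dimmer copy of the same scene gets scaled back up.
    rng = np.random.default_rng(3)
    ref = np.clip(rng.normal(0.2, 0.05, (128, 128)), 0, 1)
    img = 0.5 * ref + 0.01
    print(np.median(linear_fit(img, ref)), np.median(ref))  # ~equal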

And, yes, the Lightness and Saturation functions in LRGBCombination seem to work the wrong way around if you don't understand that they are MTF functions like the midtones balance in HistogramTransformation (moving them to the left is a boost, moving to the right is a reduction.)
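
To make the direction concrete, a tiny pure-Python sketch of the midtones transfer function (illustrative only):

    def mtf(m, x):
        # Midtones transfer function: mtf(m, m) == 0.5, so a midtones
        # balance below 0.5 brightens and one above 0.5 darkens.
        return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

    x = 0.25
    print(mtf(0.25, x))  # 0.5 -> slider moved left: a boost
    print(mtf(0.75, x))  # 0.1 -> slider moved right: a reduction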

Cheers,
Rick.

Geoff45
20-07-2015, 08:51 PM
Hi Lee
I misunderstood your data. I thought you only had RGB and that the L you posted was a synthetic L. Didn't realise you had a real L. So what I did only came from your RGB data. Will have another go tomorrow using your L.
Geoff

Rex
20-07-2015, 09:05 PM
This is a great thread Lee and your data looks awesome. Sitting back watching and learning heaps.

jase
20-07-2015, 10:46 PM
Damn you PI freaks! :lol: I look so incompetent with CCDStack and Photoshop. :D:P Layers and masks, and plenty of them.
Colour combine in CCDStack. Used two RGB versions, same colour balance for both but one with high saturation. Applied while the data was linear.
DDP stretched the Lum and colour masters. All exported as TIFFs into Photoshop. Conventional LRGB combine. While tempted to go into PI for some DBE, I opted to use Russ' GradientXTerminator plugin. Applied a blue photo filter as a mask to highlight the starburst regions. High-pass mask on luminance to tighten. Inverse mask for noise control; on reflection I should have gone harder, as the background is still a mess, but this was a quick and dirty process. Tempted to liquify some of the bloated stars to tighten but decided to save them from implosion.

I actually have an M83 data set from 16" ASA that I'm yet to process. Will look into it shortly.

codemonkey
21-07-2015, 07:36 AM
Aha, I see, thanks Rick! I was thinking it was a white-clipped rather than black-clipped mask and was wondering how you guys were creating them.



Yeah, I used MaskedStretch for essentially that purpose: because you can set a target background value, I used it to normalise the RGB even though I didn't really want to stretch it.



Eh, I dunno. Sliders have a convention where moving to the right increases value/effect, and the left decreases it. It'd be simple for them to abstract that detail from the user and keep the UI consistent with the rest of the world.



Ah cool. The L data provided is a bit over 2hrs of pure L, but with the RGB added in; I simply used L+R+G+B in PixelMath and rescaled the result. Maybe there are better ways of doing that.
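
In numpy terms, that super-L is just a sum rescaled back into range; a sketch with synthetic arrays standing in for the masters (a noise-weighted combine, e.g. via ImageIntegration, would arguably be more rigorous):

    import numpy as np

    rng = np.random.default_rng(4)
    L, R, G, B = (np.clip(rng.normal(0.1, 0.02, (128, 128)), 0, 1)
                  for _ in range(4))
    super_l = L + R + G + B
    # Rescale the sum back into [0, 1].
    super_l = (super_l - super_l.min()) / (super_l.max() - super_l.min())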



Thanks Rex! My data is a bit soft but it's about the best I've managed so far.



Nice one Jase, particularly for one of you incompetent Photoshop guys ;-) But hey, the right tool for the right job, and sometimes the right tool is the one you know how to use. Looking forward to seeing your M83 soon!

Geoff45
21-07-2015, 02:00 PM
Here's my next version Lee. As I said above, I misread your post and assumed you only had a synthetic luminance from your RGB data, so my previous attempt was based only on your RGB data. I used pretty much the same processing in this one. And of course one of the good things about PI is that you can save your work as a project, so when you reload the next day all the stuff you did before (images, history) is still there to be further processed. No lost work.
Differences were:
-- A deconvolution on your (real) luminance
--LRGBCombination with the real luminance rather than the synthetic one (which I still used as a mask)
--Some sharpening with ATrousWaveletTransform.
--Noise reduction with TVGDenoise
--Upped the saturation a bit with CurvesTransformation
--The odd Histogram tweak here and there.


All up this has been an interesting thread. I learned a couple of new tricks from PI Guru Rick's workflow above.
Geoff

RickS
21-07-2015, 02:42 PM
A nice colourful result, Geoff! It has been interesting to compare notes.

Cheers,
Rick.

jase
21-07-2015, 02:42 PM
Actually, a question on this data. What caused the FITS masters to have a mean background ADU value of 0.01? Is this a result of the way PI produces masters, or of the data being previously stretched?

Put another way, why has the pedestal value been blown away? Post calibration, what were the values of the subs? Were there any subs with a value below zero? At no point should this occur. If it does, you need to backtrack to find the root cause. This is the reason why all FITS apps add a pedestal value (typically 100 counts) to stop FITS data going negative.

RickS
21-07-2015, 03:14 PM
Jase,

A pedestal isn't needed after calibration has been done and the data has been converted to 32-bit floating point.

Unfortunately, the FITS spec doesn't specify a standard range for floating point data values. PI uses a range of 0..1. Other packages use different values. You can tell PI to use a different range when reading "foreign" FITS files.
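
For example, if a "foreign" float FITS stores values spanning 0..65535, the fix is a simple rescale on import; a hedged astropy sketch (the filename is hypothetical):

    import numpy as np
    from astropy.io import fits

    # Hypothetical foreign master with float values spanning 0..65535.
    data = fits.getdata("foreign_master.fits").astype(np.float64)
    data01 = data / 65535.0  # PI's floating-point convention is 0..1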

Cheers,
Rick.

jase
21-07-2015, 03:24 PM
Yep, thanks Rick. Even in IEEE 754 float the values still shouldn't be negative, so I was surprised to see the value so close to zero. Thanks for clarifying.

RickS
21-07-2015, 03:45 PM
Jase,

I suspect the reason for using a pedestal with integer FITS files is actually to prevent underflow from generating large positive values when using 16-bit unsigned data. That would be more of a problem than a small negative value.
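
A tiny numpy illustration of that wrap-around: subtracting a dark that exceeds the signal doesn't clamp at zero in unsigned 16-bit arithmetic, it wraps to a huge bogus value, which is exactly what a pedestal guards against.

    import numpy as np

    light = np.array([100], dtype=np.uint16)
    dark = np.array([150], dtype=np.uint16)
    print(light - dark)  # [65486], not -50: unsigned underflow wraps

    # With a pedestal added before subtraction, the result stays sane.
    pedestal = np.uint16(100)
    print(light + pedestal - dark)  # [50]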

But it's a long time since I wrangled bits for a living, so maybe others know better than me :)

Cheers,
Rick.

LewisM
21-07-2015, 05:25 PM
Had a crack at it thanks Lee.

Apart from combining the colours and lum in CCDStack, it was processed entirely in PS CS5.1, using 2 actions and a little levels and curves. Nothing else except a double 50% Lum layered in luminosity to bring it out a bit.

codemonkey
22-07-2015, 07:25 AM
Nice one, thanks Geoff! I'd never actually noticed the save project feature before. That's going to be really handy in future.



Great work Lewis, kept the noise down well and with a really simple approach. For those of us using PS, do you mind sharing what actions you used (assuming they're commercially or otherwise freely available)?

LewisM
22-07-2015, 11:36 AM
Sure.

Did the combine in CCDStack, with initial stretching done there also. Saved as 16-bit TIFFs, including the Lum, and moved it to PS.

In PS, I initially selected the stars (Focal Pointe action), then played a bit with contrast, colour balance and saturation. Then I ran a mean saturation enhancement action (will find out later what it is called), removed all the green, and did a small masked sharpen (the stars were already masked, so it was easy) - something like a 15% sharpen only. Enhanced the contrast a little again selectively, and boosted the blue some. I forgot to enhance the H-a reds, but you can still see them. Ran an EXTREMELY light Tonal Contrast to bring out the dust lanes some more.

Black clipped the Lum slightly, layered 2 copies on top, reduced opacity to 50% each, layered in Luminosity (makes it pop, and darkens the background somewhat). Ran a very light denoise with Imagenomic's noise reduction filter.

All in all, about 15 minutes' work.

I think most of what I did is covered by Annie's Actions, but I usually do it manually myself (more control over the result).