View Full Version here: : Carina - First Tri-colour
codemonkey
30-03-2016, 01:51 PM
A tiny integration (104mins total), but a special one for me all the same. This is the first tri-colour narrowband integration I've done. RGB = OHS.
No doubt I'll reprocess this 1,000 times as I try different palettes.
Full res on Astrobin (http://www.astrobin.com/243356).
glend
30-03-2016, 02:10 PM
Darn it Lee, you beat me to it. Looks wonderful. I've only shot the Ha and Sii so far but need more, much more. There is never enough signal.
Would you please share your processing techniques?
Slawomir
30-03-2016, 02:57 PM
The image looks very nice Lee.
I agree, the Carina Nebula is such a wonderful and dynamic DSO, and in spite of its brightness, composing a beautiful image of it certainly takes dedication and skill.
Looking forward to further revisions :thumbsup:
codemonkey
30-03-2016, 06:16 PM
Hah, sorry Glen. Sounds like you'll be gathering much more data than I bothered with anyway, so I may have beaten you to the punch, but you should get some cleaner data :-)
All processing was done in PI.
Pre-processing:
* Weight images according to FWHM (smaller number = more weight) and eccentricity (again, smaller = more weight)
* Calibrate using bias only
* Register all images against the best sub (an OIII with a FWHM of ~1.4")
* Integrate Ha, OIII and SII separately, using one of the Sigma clipping algorithms
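As a rough illustration of that weighting idea, here is a toy Python formula where smaller FWHM and smaller eccentricity yield a larger weight. The scale constants are my own illustrative choices, not SubframeSelector defaults:

```python
# Toy sub-frame weighting: sharper (smaller FWHM) and rounder (smaller
# eccentricity) subs get more weight. The scale constants are hypothetical
# normalisation choices, not anything built into PixInsight.
def sub_weight(fwhm_arcsec, eccentricity, fwhm_scale=2.0, ecc_scale=0.6):
    return 1.0 / (fwhm_arcsec / fwhm_scale + eccentricity / ecc_scale)

# A sharp, round sub outweighs a soft, elongated one:
w_good = sub_weight(1.4, 0.35)
w_poor = sub_weight(2.6, 0.55)
```

In practice the weight would be computed from SubframeSelector's measured FWHM and eccentricity for each sub and written into the FITS header, as described later in the thread.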
Luminance:
* Create a luminance from H/O/S using PixelMath: h*0.4 + o*0.4 + s*0.2
* Create range mask to protect big blown out stars
* Deconvolve the crap out of it, with the range mask applied, using ~100 iterations
* Stretch using multiple iterations of Curves & Histogram Transformation
* Apply mild dark structure enhance
* Apply mild local histogram equalization (radius 64)
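The PixelMath combine at the start of the luminance steps is just a per-pixel weighted sum. A minimal Python sketch, treating each channel as a flat list of pixel values and assuming the channels have already been linear-fitted to a common scale:

```python
# Toy per-pixel version of the PixelMath expression h*0.4 + o*0.4 + s*0.2.
# Each channel is a flat list of pixel values in [0, 1]; the channels are
# assumed to be linear-fitted to a common scale beforehand.
def synth_lum(h, o, s):
    return [0.4 * hv + 0.4 * ov + 0.2 * sv for hv, ov, sv in zip(h, o, s)]
```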
Colour:
* Linear fit O and S using H as a reference
* Use LRGB Combination to combine R = O, G = H, B = S
* Use curves on the different colour channels to better balance the colours visually
* Stretch using masked stretch (all default options)
* Create a star mask
* Using the star mask, run a couple of iterations of SCNR over the stars to neutralise their purplish colour
Combine lum and colour using LRGB Combination, dropping the saturation slider to increase saturation.
I think that pretty much sums it up. Sounds like a lot now that I've typed it out, but there's really not much to it.
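The linear fit at the start of the colour steps can be pictured as an ordinary least-squares fit of one channel against the reference. This 1-D toy (my own sketch, not PixInsight's actual LinearFit implementation) rescales a source channel onto the reference channel's scale:

```python
# Toy stand-in for a LinearFit-style channel match: solve
# reference ≈ a + b*source by least squares, then rescale the source
# channel so its intensity scale matches the reference (e.g. Ha).
def linear_fit(source, reference):
    n = len(source)
    sx, sy = sum(source), sum(reference)
    sxx = sum(v * v for v in source)
    sxy = sum(u * v for u, v in zip(source, reference))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return [a + b * v for v in source]
```

For example, fitting the source [0, 1, 2] to the reference [1, 3, 5] recovers a = 1, b = 2 and maps the source exactly onto the reference.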
Thanks Suavi! Since posting this I've uploaded another version to Astrobin, and I think I'm fairly happy with that one, so I might just let it sit for a while.
Atmos
30-03-2016, 07:42 PM
Looks quite clean for a pretty short integration time :) Nice seeing it in a close up :)
Great image, and thanks for sharing your workflow for NB processing. I hadn't thought to balance on FWHM :question:
codemonkey
30-03-2016, 08:04 PM
Cheers mate. That's that Sony sensor at work :-)
Thanks Rob, and no probs sharing the processing. I'm very new to NB processing as well, so if anyone has any suggestions I'm all ears.
PI by default weights on noise estimation, but when you shoot the same sub length with the same gear in the same conditions, the noise doesn't vary that much. What does vary a lot is the sharpness, due to poor guiding etc., so I use SubframeSelector to weight the subs and save that weight as a keyword in the FITS file; then, when doing the integration, you can tell it to use that weighting instead of the default noise estimation.
I updated the OP to include my most recent processing, and also threw in auto-stretched copies of HOS as well, for anyone wanting to see the difference between the different filters.
Slawomir
30-03-2016, 08:35 PM
Thank you Lee for sharing your workflow in PI. It is great that you are willing to share it :thumbsup:
I see you do a few steps differently, so I will look into those :)
Also, I hope you do not mind me adding a few optional steps that perhaps someone might find useful in pre-processing? :)
Pre-processing:
1. Calibrate using a superbias and flats (I found flats essential when significantly stretching the final image).
2. Apply the CosmeticCorrection tool with automatic detection of defects. It removes most hot and cold pixels.
3. Same as you do, weight subs according to eccentricity and FWHM. Identify the best few subs and also potential ones for deleting.
4. Visually inspect all subs using Blink tool and confirm which one is the best one as well as select any subs for culling.
5. Register all subs for each channel using the best one as a reference (I do each channel individually; not sure if that makes any difference. Later on I register the three integrated masters against each other).
6. Integrate without pixel rejection first (again, with the best sub as a reference) and keep that integration as a reference. Then try integrating with different rejection settings, keeping them as permissive as possible, until all defects are just eliminated. I usually go with Linear Fit clipping because of the number of subs I work with. Repeat for the other two channels.
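For anyone curious what the pixel-rejection step is doing under the hood, here is a crude single-pass sigma-clipping sketch in Python. Linear Fit clipping, mentioned above, is a different and more elaborate algorithm, and the kappa threshold here is illustrative rather than a PixInsight default:

```python
import statistics

# Crude single-pass sigma clipping for one pixel position across a stack
# of subs: reject values more than kappa standard deviations from the
# mean, then average what survives.
def sigma_clip_mean(values, kappa=2.5):
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)
    kept = [v for v in values if sd == 0 or abs(v - mu) <= kappa * sd]
    return statistics.mean(kept)
```

With nine subs clustered around 10.0 and one cosmic-ray-like value of 100, the outlier gets rejected and the clipped mean comes back to ~10.0; real implementations iterate this and clip high and low pixels with separate thresholds.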
Again, I hope I am not interfering with your thread Lee...:scared3:
Suavi
codemonkey
30-03-2016, 08:51 PM
I don't mind at all Suavi, thanks very much for sharing!
I would usually do flats myself, but I've been using my laptop display for flats and have recently changed field laptops for a smaller one with a poorer display; I'm not sure it's good enough for flats, so I'm thinking of putting together a flat panel using some LEDs I have lying around. Normally I would consider flats essential, but I was lucky enough to shoot a complex area which hides many nasties ;-)
Good tip on the superbias too. I think maybe you put me onto that a while ago. I use a superbias based on an integration of 100 frames and it cleans things up nicely.
I was never able to get cosmetic correction to fix enough of the issues I saw (probably just never learned to use it well enough), so I started dithering a while ago and never looked back. Having said that, the new mount is struggling a bit to get back on track after a dither, but that's probably just part of the learning curve of new equipment.
Good call on experimenting with the different integration algorithms as well. I basically just choose based off the number of subs these days, and like you, use linear if I have a decent number.
Thanks again for sharing Suavi!
PS: I'd never used dark structure enhance until you mentioned it in a thread recently. It's a handy little one-click to get some nice contrast going.
Ryderscope
30-03-2016, 10:43 PM
Not only is that a very good image(s) Lee, it is going to help me out with my approach to my Gum 39 image processing. Excellent and thanks.
codemonkey
31-03-2016, 07:49 AM
Thanks Rodney! Glad to hear you've found something useful in the workflow too :-)
Geoff45
31-03-2016, 01:55 PM
Pretty good Lee. The Eta Carinae neb is definitely one of those subjects that need NB.
Just a comment on the workflow: Why not just transfer the screen stretch to the histogram transformation and hit apply? Much quicker than curve, histogram, curve, histogram, curve,....
Geoff
codemonkey
31-03-2016, 06:35 PM
Thanks Geoff. Any suggestions for improvement?
As for the workflow thing, I guess I've never considered STF as a serious option for stretching. It's meant for previewing, right? Why not call it auto-stretch if it's meant for one-click stretching?
I actually did use STF for the sii / oiii / ha shots attached, but I'm not sure I could give up control and leave it all to STF for my "real image".
RickS
31-03-2016, 09:01 PM
Nice job, Lee!
A step you might find helpful for creating a synthetic luminance with PixelMath is to use LinearFit to match the Ha, Oiii and Sii before combining them. Otherwise they may be scaled quite differently. You can make up for this by tweaking the coefficients for the addition, but I like to start with a level playing field.
Another option for creating a luminance for NB is to use ImageIntegration to do a noise-weighted combination of the Ha, Oiii and Sii (no rejection.) If the Ha is much better quality than the Oiii and Sii you may find they are under-represented (check the weights listed in the process console) but it can work well sometimes.
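Rick's noise-weighted combination can be sketched as inverse-variance weighting: each master's weight is proportional to 1/sigma², so a much cleaner Ha dominates. This toy version is my own sketch, not ImageIntegration's actual noise estimator:

```python
# Toy inverse-variance combine: each channel (a flat list of pixels) is
# weighted by 1/sigma^2 of its estimated noise, then the weights are
# normalised to sum to 1. Noise sigmas are assumed known here; a real
# tool estimates them from the data.
def noise_weighted_combine(channels, noise_sigmas):
    weights = [1.0 / (s * s) for s in noise_sigmas]
    total = sum(weights)
    weights = [w / total for w in weights]
    npix = len(channels[0])
    return [sum(w * ch[i] for w, ch in zip(weights, channels))
            for i in range(npix)]
```

With noise sigmas of 1, 2 and 2, the Ha master would get 2/3 of the total weight and Oiii/Sii one sixth each, which illustrates the "under-represented" effect Rick describes when the Ha is much cleaner.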
Cheers,
Rick.
Atmos
31-03-2016, 09:10 PM
Would you do this with the three already stacked files or re-integrate all of them?
codemonkey
31-03-2016, 09:22 PM
Thanks Rick! I actually did a linear fit on them before creating the lum, but neglected to mention it in the steps above.
I didn't try integrating them with no rejection; I briefly tried to stack them all together using varying rejection algorithms & params, but they came out with artefacts to a varying extent (no surprises). Didn't bother to try stacking them without any rejection, but I probably should, for the sake of comparison if nothing else.
RickS
31-03-2016, 09:24 PM
Just using the existing stacks. Sorry, should have been more explicit.
codemonkey
31-03-2016, 09:29 PM
Not that you were responding to me, but I did misunderstand that as well. I assumed you meant integrate them all together, which is what I tried, with poor results, as I expected. ImageIntegration on the masters is interesting. Ha would definitely swamp the others in this case, especially SII, if using noise weighting.
I wonder how HDR integration would go for a lum? Be nice to have something that picked out the "most detail" during the integration process.
RickS
31-03-2016, 09:54 PM
You can tune the process a little by listing some of the masters more than once in the integration.
Not quite sure what you mean, Lee?
I have experimented with blending a heavily FWHM weighted integration with a SNR weighted integration, taking the bright bits from the former and the dim bits from the latter with reasonable success, but it was hard work.
codemonkey
31-03-2016, 10:01 PM
Ah, good point, didn't think of that.
HDR's not meant to do what I'm talking about, but I wondered what would happen if you did it anyway and whether it would work to achieve the same result.
Basically what I was getting at is that given three integrations, say H, O and S, in any region of the image, one of those integrations might show more "detail" than the others. It'd be nice to be able to do local weighting, rather than just global operators, so as to show the most detail in the resulting luminance.
I briefly just tried it; it seemed to have almost the opposite effect, I think, but it did reduce the number of apparent stars significantly, which I do find appealing for narrowband. The stars left over were flatter though.
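For what it's worth, the "local weighting" idea described above could look something like this 1-D toy: at each pixel, pick the channel whose neighbourhood has the highest local variance, as a crude stand-in for "most detail". This is entirely hypothetical, not an existing PixInsight process:

```python
# Toy 1-D "local detail" blend: for each pixel, choose the value from the
# channel whose surrounding window has the highest variance (a crude
# proxy for local detail). Channels are flat lists of equal length.
def _variance(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

def local_detail_blend(channels, radius=1):
    npix = len(channels[0])
    out = []
    for i in range(npix):
        lo, hi = max(0, i - radius), min(npix, i + radius + 1)
        best = max(channels, key=lambda ch: _variance(ch[lo:hi]))
        out.append(best[i])
    return out
```

A real 2-D version would need smooth blending between channels (e.g. variance-weighted averaging rather than a hard winner-takes-all pick) to avoid seams, which may be why the naive experiment flattened the stars.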
Slawomir
31-03-2016, 10:08 PM
What might possibly also work is to perform sensible noise reduction on O3 and S2 masters before integration :question:
RickS
31-03-2016, 10:20 PM
You'd need to watch the weighting carefully to check that the noise estimation didn't go too crazy but it would be worth a try.
strongmanmike
31-03-2016, 10:34 PM
That's a really good Keyhole region shot Lee. I don't mind the palette and although the decon police can recognise the decon in places :whistle:, it is not obvious and most of the detail looks very normal :thumbsup: Good that you can also make out Eta's Homunculus too, nice work :)
This area doesn't really need heaps of data, as this result clearly demonstrates :thumbsup:
Mike
Geoff45
01-04-2016, 10:52 AM
Hi Lee, you can tweak the STF if you don't like it, then apply your preferred stretch to the histogram, or you can just use the histogram to stretch without relying on the STF. The point is that it is not necessary to use Curves at any stage of the stretch. Histogram is all you need.
This is all explained here http://harrysastroshed.com/pixinsight/pixinsight%20video%20files/2013%20pix%20vids/histogram/histogram.mp4 in much greater detail.
Geoff
codemonkey
02-04-2016, 09:47 AM
Thanks Mike! lol. I'm not surprised the decon police caught me red-handed with this one. I did overbake it for sure.
Thanks Geoff! I don't see myself doing away with curves or little tweaks, but I may now use STF as a starting point. I got some Ha data on Gabriela last night and tried exactly that, and was reasonably happy with the results after comparing them to a manual initial stretch.