Hi, Andy,
Not talking about the tools, just the maths. The rough idea with LRGB is that you get roughly three times as many photons per hour through L as through a colour filter, and humans are much more sensitive to variations in luminance than to variations in colour, so it seems to make sense to shoot much more L than RGB, and to use the RGB to "colour" the more detailed L. To a first approximation, one parcels out the L to the different colour channels pro rata according to the RGB shots. In practice one takes into account that we perceive G as brighter than R, which is brighter than B, so we hand out more B and less G.
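If it helps to see the bookkeeping, here's a minimal numpy sketch of that pro-rata split, including the perceptual weighting. The Rec. 709 luma weights and all the names here are my own illustrative choices, not what any particular tool does:

```python
import numpy as np

# Assumed Rec. 709 luma weights: how strongly we perceive R, G, B.
W = np.array([0.2126, 0.7152, 0.0722])

def apportion_L(L, rgb, eps=1e-9):
    """Split a detailed L image amongst R, G, B pro rata to an RGB stack.

    L   : (H, W) luminance stack
    rgb : (H, W, 3) colour stack (assumed linear, background-subtracted)

    The channel ratios come from the RGB stack; the brightness comes
    from L. Normalising by the *weighted* sum (rather than R+G+B) is
    what hands out more B and less G: we see blue dimly, so it needs
    more signal to contribute the same perceived brightness.
    """
    y = rgb @ W                           # perceived luminance of RGB stack
    ratios = rgb / (y + eps)[..., None]   # weighted sum of ratios is 1
    return L[..., None] * ratios          # perceived luminance equals L
```

On a pixel where the RGB stack is pure grey this reduces to R = G = B = L, which is what you'd want on screen.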
But there's a catch: In the darkest areas of the image, there may be almost no colour information in your short RGB stack. So you don't know how to hand out the L amongst the channels. Result: Hideous colour noise.
So I use four tricks:
(0) Take enough RGB. This might include 2x2 binning to reduce noise.
(1) Wavelet-filter the RGB to remove colour noise, before even considering combining it with the L (a sketch follows this list).
(2) Hand out more B than R, and more R than G, to compensate for the human visual system's uneven sensitivity to the three channels.
(3) To the fuzzy extent that you have lots of RGB information at a particular pixel, hand the L out pro rata according to the RGB. To the fuzzy extent that you have negligible RGB data at a particular pixel (and really don't know what the colour is), just apportion the L to produce a neutral grey (see the second sketch below).
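For trick (1), here's a hedged per-channel sketch using PyWavelets. The db2 wavelet, the three levels, and the 3-sigma soft threshold are all illustrative guesses on my part; real tools expose these as knobs:

```python
import numpy as np
import pywt

def wavelet_denoise(channel, wavelet="db2", level=3, k=3.0):
    """Soft-threshold the wavelet detail coefficients of one colour channel.

    Noise sigma is estimated from the finest diagonal detail band with
    the usual robust MAD estimator; detail coefficients below k * sigma
    are shrunk towards zero, which kills small-scale colour noise while
    leaving large-scale colour structure alone.
    """
    coeffs = pywt.wavedec2(channel, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = k * sigma
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    # waverec2 can pad odd-sized images by a pixel; crop to channel.shape if needed.
    return pywt.waverec2(denoised, wavelet)
```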
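And for trick (3), a sketch of the fuzzy hand-out. The smoothstep weight and the lo/hi thresholds are illustrative assumptions; the point is just that the transition between "colour from RGB" and "neutral grey" should be gradual, not a hard mask:

```python
import numpy as np

W = np.array([0.2126, 0.7152, 0.0722])     # assumed Rec. 709 luma weights

def lrgb_combine(L, rgb, lo=0.01, hi=0.05, eps=1e-9):
    """Hand L out pro rata where RGB is trustworthy, as grey where it isn't.

    lo/hi: RGB luminance levels (in the image's units) below/above which
    the colour is treated as unknown/fully known. Illustrative guesses.
    """
    y = rgb @ W                             # perceived luminance of RGB stack
    # Fuzzy confidence in the colour: 0 below lo, 1 above hi, smooth between.
    t = np.clip((y - lo) / (hi - lo), 0.0, 1.0)
    w = (t * t * (3.0 - 2.0 * t))[..., None]
    ratios = rgb / (y + eps)[..., None]     # pro-rata shares, as before
    grey = np.ones(3)                       # R = G = B; weighted sum is also 1
    return L[..., None] * (w * ratios + (1.0 - w) * grey)
```

Either way the weighted sum of the output channels equals L, so the blend only changes where the colour comes from, never the brightness.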
No idea how PixInsight does it, but I hope the concepts help.
Best,
M