Why LRGB? Why not LPY?


codemonkey
11-03-2015, 11:00 PM
LPY being luminance, purple and yellow, from which you could extract LRGB in software.

The reason I suggest purple and yellow is that red + blue makes purple, and red + green makes yellow. The spectral response of camera sensors generally tends to peak in green, so I assume you'd be better off spending time capturing the weaker areas, which I believe to be red, followed by blue and then green as a general rule.

It seems to me that doing it this way would mean reduced capture time and improved SNR, since less read/bias/dark noise is introduced and more signal is captured. Seems like a win all around. So what am I missing? Why aren't we doing this?

Of course you could also do LRB/LRG/LGB, but you'd lose out on the extra signal you'd get when imaging two colours simultaneously. The plus side is that you could do that with existing filters.

LewisM
12-03-2015, 12:01 AM
Interesting, but I don't think so.

True that with LIGHT, red plus green results in yellow, but if the filter is a simple yellow filter, I think the colour becomes just a "colour" rather than a mix of photon wavelengths, so the result would be YELLOW light, which when combined with the purple would result in yakkiness :)

Have a look at RGB filters - to the eye they do not even look red, green or blue.

Besides, the RGB filters are broadband, with specific wavelengths that they either pass or block. Yellow and purple filters would have different wavelength block/pass characteristics from those of RGB (though I don't know exactly which, and given that yellow and purple are not primary photonic colours... well, I dunno :).

What I am trying to say is, no, I don't think it would work, or they would do it already :)

codemonkey
12-03-2015, 07:24 AM
What I mean by "yellow" is a filter that has a bandpass of say 480 - 700nm, and for "purple" I mean a filter that has a rejection band of say 480 - 600nm.

You can roughly simulate the process with existing data by:

1. Stack all RGB to create a synthetic L
2. Stack all RB to create a synthetic purple
3. Subtract the synthetic purple from the synthetic L

The output of the above operations should match your G stack. You can try it yourself using PixelMath in PixInsight if you have it.
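
If you'd rather try it outside PixInsight, here's a minimal numpy sketch of the same three steps (the random arrays are just stand-ins for registered, linear R, G and B stacks):

import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for registered, linear R, G and B stacks.
r = rng.random((512, 512))
g = rng.random((512, 512))
b = rng.random((512, 512))

l_syn = r + g + b       # step 1: synthetic luminance
p_syn = r + b           # step 2: synthetic purple (red + blue)
g_rec = l_syn - p_syn   # step 3: subtract purple from luminance

# With no noise involved, the recovered frame matches the G stack exactly.
assert np.allclose(g_rec, g)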

alpal
12-03-2015, 07:29 AM
Of course that would work, but I think there is a signal-to-noise advantage in using 3 filters for colour - in most cases RGB - just like the S/N advantage of using narrowband Ha.
Also:
I notice that my RGB stacks have a better FWHM value than the L stack if they are both binned 1x1.


cheers
Allan

troypiggo
12-03-2015, 07:30 AM
Wouldn't you need a third colour filter to triangulate the colour space coordinate?

codemonkey
12-03-2015, 08:28 AM
Thanks Allan :-)

Note that I'm not necessarily advocating hardware binning in the capture of any of these. Binning basically means you're (probably) clipping your highlights to reduce your read noise... consider that the Atik 314L+ (my specific one) has read noise in the area of 3.6e-, which is about 0.02% of my full well capacity - clearly insignificant even when tripled to match the difference in 2x2 binning. I'd rather keep the read noise in and blow out fewer stars.

As for the SNR improvement of LRGB over LPY, I'd be keen to have that explained a bit more thoroughly, because to my way of thinking you'd improve the SNR this way. I just don't see how LRGB can be better: you'd have more read/bias/dark noise and less signal, because each of R, G and B excludes both the others, whereas P and Y each exclude only one component, which directly translates to more signal.

You already have all three components in the luminance frames. Consider this:

R = 10
G = 20
B = 15
L = 45

vs

L = 45
P = 25
Y = 30

G = L - P = 20
B = L - Y = 15
R = L - (G+B) = 10
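
Put another way, recovering R, G and B from L, P and Y is just solving a small linear system - three independent measurements, three unknowns - so no third colour filter is needed to pin down the coordinate. A quick numpy sketch with the numbers above:

import numpy as np

# Rows are the (R, G, B) coefficients of each measured frame:
# L = R + G + B, P = R + B, Y = R + G.
A = np.array([[1, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
m = np.array([45, 25, 30])  # measured L, P, Y from the example

print(np.linalg.solve(A, m))  # -> [10. 20. 15.], i.e. R, G, B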

codemonkey
12-03-2015, 08:50 AM
Actually, after thinking about that a bit more, that's not very smart ;-)

Even if 3.6e- is 0.02% of my full well capacity, most of the signal I care about sits way below that capacity, so the read noise makes a more significant contribution than I first thought. Whether it's still actually significant I'm not sure; it needs more reading/experimentation/maths before I'd be happy to take a stance on which side of the trade-off I'd prefer to lean towards.
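
Rough numbers (the 18,000e- full well is just what the 0.02% figure implies, and the 100e- signal is a made-up faint target):

# Back-of-envelope: read noise matters relative to the recorded signal,
# not relative to full well.
read_noise = 3.6       # e-, Atik 314L+
full_well = 18_000     # e-, implied by 3.6e- being ~0.02% of it
faint_signal = 100     # e-, hypothetical faint nebulosity per sub

shot_noise = faint_signal ** 0.5     # Poisson: sqrt(100) = 10 e-
print(read_noise / full_well)        # 0.0002 - looks negligible
print(read_noise / shot_noise)       # 0.36   - clearly not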

Either way, using PY doesn't preclude the use of binning.

Another thing to consider is the choice of PY: is it actually better to get more green signal? The reason cameras peak in green is apparently that we humans are more sensitive to that area of the spectrum, so maybe it's more important to get cleaner signal there.

Yet another factor is: what are the major contributors to light pollution and sky glow? Would there be a better choice than PY to reduce the capture of unwanted signal, which screws us over twofold by recording signal we don't want and increasing our shot noise?

Shiraz
12-03-2015, 09:07 AM
But I think that when you do the subtractions, you include noise from two or more frames, rather than just one (as for each directly captured RGB channel) - so you may end up slightly worse off in the end (or at least no better off). E.g. if you had a large noise excursion in your L channel, you would end up with a large noise excursion added to all 3 RGB channels as well.
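
In numbers: variances add when frames are subtracted, so a synthetic channel carries the noise of every frame that built it. A minimal sketch with pure noise frames (Gaussian standing in for the combined noise sources):

import numpy as np

rng = np.random.default_rng(1)
l_noise = rng.normal(0, 1.0, 1_000_000)  # noise-only "L frame"
p_noise = rng.normal(0, 1.0, 1_000_000)  # noise-only "P frame"

print(np.var(l_noise))            # ~1.0: one frame's worth of noise
print(np.var(l_noise - p_noise))  # ~2.0: synthetic G = L - P carries both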

The other issue may be dynamic range. I seem to recall reading that the Kodak scheme of using clear L pixels in a modified Bayer matrix ran into trouble with the L pixels filling up long before the colour ones did, so the exposure had to be truncated before the colour data had got very far above the noise - nice L but poorer quality RGB. Same thing could possibly apply here.

and then again I could be completely wrong:)

codemonkey
12-03-2015, 07:03 PM
Interesting points! I've done some experimentation, which is probably not mathematically sound :p

I generated an L frame in PixInsight, from which I extracted R, G and B at varying weights (30%, 50% and 20% respectively).

Onto the L frame I added some Poisson noise with an amplitude of 0.6. Onto each of the R, G and B I added 0.2.

From the RGB I generated P and Y frames by simple summation. From the P and Y images I then reverse-engineered the RGB frames as I've proposed above, with results as attached.

At this point I basically have no idea how to evaluate the results. I assume that a higher variance means more noise. If it's as simple as that, then you'll see that the LPY version has more noise in R and G but less in B, with more overall (0.01e-03).

Even if I didn't stuff up something obvious, this isn't reflective of reality, because in reality the LPY would benefit from reduced read/bias/dark noise, whereas this experiment assumes that all noise is doubled when combining the RB and RG images.
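
For anyone who wants to poke at it, here's a rough numpy re-creation of the experiment. Gaussian noise stands in for PixInsight's Poisson generator and a random image stands in for the L frame, so treat the numbers as indicative only:

import numpy as np

rng = np.random.default_rng(2)
shape = (512, 512)

def noise(amp):
    # Additive Gaussian as a rough stand-in for the Poisson noise above.
    return rng.normal(0, amp, shape)

l_clean = rng.random(shape)  # stand-in for the generated L frame
r_c, g_c, b_c = 0.3 * l_clean, 0.5 * l_clean, 0.2 * l_clean

l_n = l_clean + noise(0.6)
r_n, g_n, b_n = r_c + noise(0.2), g_c + noise(0.2), b_c + noise(0.2)

p_n = r_n + b_n  # P and Y by simple summation, as described above
y_n = r_n + g_n

g_s = l_n - p_n  # reverse-engineered channels
b_s = l_n - y_n
r_s = l_n - (g_s + b_s)

for name, direct, synth, clean in (("R", r_n, r_s, r_c),
                                   ("G", g_n, g_s, g_c),
                                   ("B", b_n, b_s, b_c)):
    print(name, np.var(direct - clean), np.var(synth - clean))

With these amplitudes the synthetic channels come out noisier than the direct captures, since each subtraction sums the variances of the frames involved.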

codemonkey
12-03-2015, 07:54 PM
Two more things spring to mind: given the entire experiment revolves around noise, a sample size of 1 probably isn't the best idea ;-)

In addition, the above result also assumes that you don't take the extra time you saved by not capturing a G frame directly and use it to capture more subs and improve your SNR.

ericwbenson
12-03-2015, 10:03 PM
This reminds me of the discussion between RGB and CMY imaging about 15 years ago; at the time we added CMY and LCMY color combine modes to Maxim v2... but eventually RGB won out.

A good mathematical explanation of the pros/cons of CMY (aka subtractive filters) can be had here:
http://www.astrosurf.com/buil/us/cmy/cmy.htm

Mr. Buil's website is actually chock full of really good stuff, always a good read.

EB

codemonkey
13-03-2015, 09:09 AM
Thanks Eric, that's a great resource! Looks like there's potentially a small benefit to using CMY vs RGB in some cases but probably not enough to worry about.

On the other side of the discussion, I've done some more simulations on extracting synthetic channels from various components combined with L and came up with the following numbers:

Total variance across the final RGB channels for each variant:

LPY: 1.503E-02
LRGB: 1.501E-02
LRGsB: 1.501E-02
LRsGB: 1.500E-02
LRGBs: 1.503E-02

Where Rs / Gs / Bs are the synthetic variants generated by L-(X+Y).

The original RGB frames were generated from the same L frame, with weights of 0.3, 0.5 and 0.2 respectively. Each RGB frame had Poisson noise added with an amplitude of 0.2, and the L frame had an amplitude of 0.6 added.

The P and Y frames in these latest tests were generated from R+B and R+G, with Poisson noise added afterwards with an amplitude of 0.4.

The Poisson noise is meant to emulate shot noise, but I honestly don't know if I've selected appropriate numbers for an accurate representation. The "experiment" also disregards read/bias/dark noise, which in a real-world scenario would benefit the LPY and synthetic variants.

What's interesting there is that the variant with a synthetic R channel had less total variance than the LRGB, while the synthetic blue and the LPY tied for most variance.

Also of note is that the LPY and synthetic blue have a variance increase of 0.133% over the synthetic red, which suggests to me a trivial decrease in SNR that could well be offset by the real-world impact of bias/read/dark noise.

At the end of the day though, based on the above I don't think there's any benefit in going LPY over omitting one of R, G or B and generating it in software from the L and the remaining two components. I suspect there might be some benefit, however, in omitting one of the components and instead using that time to capture more L.