Why LRGB? Why not LPY?
By LPY I mean luminance, purple, yellow, from which you could extract LRGB in software.
The reason I suggest purple and yellow is that red + blue makes purple and red + green makes yellow. Since the spectral response of camera sensors generally peaks in green, I assume you'd be better off spending capture time on the weaker channels, which I believe are red first, then blue, then green as a general rule.
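Just to make the "extract LRGB in software" part concrete, here's a rough Python sketch assuming idealised filters where L ≈ R+G+B, P ≈ R+B and Y ≈ R+G; real filters would need a measured mixing matrix and proper calibration, but the arithmetic is the point:

```python
import numpy as np

# Rough sketch, assuming ideal passbands: L ~ R+G+B, P ~ R+B, Y ~ R+G.
# In practice you'd invert a measured 3x3 mixing matrix instead.
def extract_rgb(L, P, Y):
    R = P + Y - L   # (R+B) + (R+G) - (R+G+B) = R
    G = L - P       # (R+G+B) - (R+B)         = G
    B = L - Y       # (R+G+B) - (R+G)         = B
    return R, G, B

# Dummy 2D arrays standing in for stacked master frames
L = np.array([[3.0, 6.0], [9.0, 12.0]])
P = np.array([[2.0, 4.0], [6.0, 8.0]])
Y = np.array([[2.0, 4.0], [6.0, 8.0]])
print(extract_rgb(L, P, Y))
```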
It seems to me that doing it this way would reduce capture time and improve SNR, since less read/bias/dark noise is introduced and more signal is captured per exposure. Seems like a win all around. So what am I missing? Why aren't we doing this?
Of course you could also do LRB/LRG/LGB, but you'd lose out on the extra signal you'd get when imaging two colours simultaneously. The plus side is that you could do that with existing filters.
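To put rough numbers on the "extra signal" point, here's a back-of-envelope sketch with made-up values (the electron counts, read noise and sub counts are purely for illustration, and it only compares what lands on the sensor per channel, not what happens when you subtract channels afterwards):

```python
import math

# Shot noise is sqrt(signal); each sub adds its own read noise in quadrature.
def snr(total_signal_e, read_noise_e, n_subs):
    return total_signal_e / math.sqrt(total_signal_e + n_subs * read_noise_e**2)

# Hypothetical target: 100 e- per sub through a single-colour filter,
# roughly 200 e- per sub through a two-colour purple/yellow filter.
# Read noise 5 e-, 10 subs per filter.
print(snr(10 * 100, 5, 10))  # single-colour channel (e.g. R in LRGB)
print(snr(10 * 200, 5, 10))  # broadband channel (e.g. P in LPY)
```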