Ah ... I think I'm starting to understand. So the sensor is the same in both cases, including the combination of 4 "well values" which constitute a given RGB pixel. The key difference is that (without the filtering effect of the Bayer matrix) each of the 4 values is practically identical ... therefore a balanced, monochrome RGB pixel results.
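Just to check my mental model, here's a rough NumPy sketch of that 2x2 "superpixel" combination (all numbers are made up for illustration, and I'm assuming the simple average-the-two-greens approach rather than proper debayering interpolation):

```python
import numpy as np

# Hypothetical 2x2 block of raw well values behind an RGGB Bayer matrix:
#   R  G
#   G  B
bayer_block = np.array([[1200,  800],
                        [ 790,  400]])

r = bayer_block[0, 0]
g = (bayer_block[0, 1] + bayer_block[1, 0]) / 2  # average the two greens
b = bayer_block[1, 1]
print("Bayer pixel (R, G, B):", (r, g, b))  # colour information survives

# Same scene on a mono sensor: no filters, so all four wells read ~equal
mono_block = np.array([[1010, 1005],
                       [1008, 1002]])

r = mono_block[0, 0]
g = (mono_block[0, 1] + mono_block[1, 0]) / 2
b = mono_block[1, 1]
print("Mono pixel (R, G, B):", (r, g, b))  # R ~= G ~= B, so a neutral grey
```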
Thanks - this general explanation does make sense to me.
Are there other factors, then, that lead to the higher-quality images captured with mono cameras (in conjunction with LRGB filters)?
The detail often seems so much better, especially in images showing fine structure in dust and dark nebulae. Even the best planetary images (e.g. Jupiter) seem to be captured with mono cameras and RGB filters.
Mono cameras seem to involve additional expense (filters, filter wheels) and extra processing effort due to the multiple channels of data, but I do not understand why the achievable results are so much better. Is there a simple explanation ... or maybe a website which explains?