I was wondering, if luminance can be calculated from RGB, then why capture LRGB? I understand that it's desirable to capture the luminance, because of the high transmission of this filter, but why not just capture LRG, and calculate B from that?
It seems to me like if L = 0.33 R + 0.33 G + 0.33 B (or whatever the appropriate coefficients are), then B should be recoverable from L, R and G.
Let’s say you had a region that is red emission Nebula on one side and blue reflection Nebula on the other. The luminance doesn’t know which is red or blue but you’re only removing some of the red so some of that emission Nebula will be coming up in the synthetic blue where it shouldn’t be.
But L is made from R, G, and B. You subtract R and G from L, and you should get B.
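In code, what I'm imagining is something like this (a minimal sketch; the 1/3 weights are just placeholders for whatever the real coefficients turn out to be):

```python
def synthetic_blue(l, r, g, wr=1/3, wg=1/3, wb=1/3):
    # invert L = wr*R + wg*G + wb*B for B
    return (l - wr * r - wg * g) / wb

# a pixel with true R=0.3, G=0.5, B=0.2 gives L = (0.3 + 0.5 + 0.2) / 3
l = (0.3 + 0.5 + 0.2) / 3
print(synthetic_blue(l, 0.3, 0.5))  # ~0.2, recovering B exactly in this ideal case
```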
That's a good question fsr, and it sounds good in theory. However, the formula
L = (R+G+B)/3 would only apply for a perfect CCD with a constant QE across the whole colour spectrum.
Most CCDs have a QE of 60% at 550 nm (green), dropping to 45% at 650 nm (red) and 35% at 400 nm (blue/violet).
You would also need to take into account atmospheric extinction, where blue and green signals are lower than red when your telescope is pointing at lower elevations. At 50° elevation the blue signal is 5% less than red, and green is 3% less than red.
So I think what you say would work, but to get a good colour balance your formula to get blue may have to be revised slightly.
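To put rough numbers on that, here is a sketch using the QE and extinction figures above (illustrative values only, not calibrated for any real camera or site):

```python
# effective weight of each colour within the L frame
qe  = {'R': 0.45, 'G': 0.60, 'B': 0.35}   # QE at 650 / 550 / 400 nm
ext = {'R': 1.00, 'G': 0.97, 'B': 0.95}   # relative extinction at 50 deg elevation
w   = {c: qe[c] * ext[c] for c in 'RGB'}

def synthetic_blue(l_meas, r_meas, g_meas):
    # assuming the L frame simply sums what the colour frames record,
    # what is left after subtracting R and G is the measured blue;
    # dividing by the blue weight estimates the true blue flux
    return (l_meas - r_meas - g_meas) / w['B']

# e.g. a pixel with true fluxes R = G = B = 100 would measure roughly:
r, g, b = 45.0, 58.2, 33.25
print(synthetic_blue(r + g + b, r, g))    # ~100, the true blue flux
```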
There's a process in PI called B3E (Ballesteros Black Body Estimator) that can estimate the missing data from a single colour filter if you have the other two. It is based on the assumption that the signal is generated by black body radiation. I've seen it do a fairly good job with a missing R/G/B filter and also creating simulated Ha from RGB data (not sure how accurate this is, given that NB emission is not a black body process).
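For the curious, the principle can be sketched like this (this is only the black-body idea, not the actual B3E implementation, and the band wavelengths are nominal):

```python
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl, T):
    # spectral radiance of a black body at wavelength wl (metres)
    return (2 * h * c**2 / wl**5) / (math.exp(h * c / (wl * k * T)) - 1)

wl = {'R': 650e-9, 'G': 550e-9, 'B': 450e-9}

def estimate_blue(r, g, t_lo=1000.0, t_hi=50000.0):
    # bisect on T until the model R/G ratio matches the observed one
    target = r / g
    for _ in range(100):
        t = 0.5 * (t_lo + t_hi)
        ratio = planck(wl['R'], t) / planck(wl['G'], t)
        # hotter bodies are relatively bluer, so R/G falls as T rises
        if ratio > target:
            t_lo = t
        else:
            t_hi = t
    return g / planck(wl['G'], t) * planck(wl['B'], t)

# self-check: synthesise fluxes at 5800 K and see how well B is recovered
r0, g0, b0 = (planck(wl[c], 5800.0) for c in 'RGB')
print(estimate_blue(r0, g0) / b0)   # ~1.0
```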
I was thinking about this a while ago, though I was considering different bandpass filters in order to increase the SNR. I think the idea has merit but probably needs some experimentation, or alternatively someone a lot better at math than I am.
One thing to consider (as pointed out to me by Ray) is that if you use three images (say L, R and G) to generate a fourth synthetic image, your synthetic image is going to have noise contributions from all of the other frames. That said, the noise in the other frames should also be reduced because you were able to devote more time to capturing that data.
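To put that noise point in numbers (the sigmas below are made up):

```python
import math

# if B_syn = L - R - G, the per-pixel noise adds in quadrature
sigma_l, sigma_r, sigma_g = 5.0, 8.0, 8.0
sigma_b_syn = math.sqrt(sigma_l**2 + sigma_r**2 + sigma_g**2)
print(sigma_b_syn)   # ~12.4, noisier than any single contributing frame
```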
Which filters you use will also be impacted by the QE curve of your camera and the bandpass of the filters. For example, Astrodon series E have a big gap in between the end of the green filter and the start of the red. The L filter has no such gap. I'd be inclined to capture L, B and G and then generate a synthetic R.
You could use some existing data and try various combinations of equivalent total exposure time and see how they compare.... like spend more time on L at the expense of no R. It'd be worth trying with a variety of target types as well.
One other benefit to this approach would be freeing up a filter slot which you could then use for something interesting like a red continuum or IR pass filter.
Hi FSR,
what a good idea,
I did it just now using Photoshop
on some stacked data I had from a 3 x Drizzle crop of Eta Carinae.
I had to play around a bit with the colour balance
but in a few minutes I got them looking almost the same.
Of course that would be only the first step in the processing of the picture.
I show a comparison picture.
So - why do we take Blue?
Interesting. I believe he was suggesting a synthetic R? Did you try that? I'm curious to see.
Markus
Hi Markus,
I don't have time right now to try Red.
At least Photoshop has a subtraction option and
the slider can be moved to 33%.
That makes it fairly easy to do what FSR suggests.
Let’s say you had a region that is red emission Nebula on one side and blue reflection Nebula on the other. The luminance doesn’t know which is red or blue but you’re only removing some of the red so some of that emission Nebula will be coming up in the synthetic blue where it shouldn’t be.
Suggest you re-read Col's answer: he is on the money.
Your suggested method cannot resolve where the colours should be.
So, as I understand this, the luminance is just the addition of some fraction of each of the color channels, depending on the CCD sensitivity and the filters (for the sake of simplicity, let's say that every color contributes 1/3 of the luminance, so that L = (R + G + B) / 3). So, if you get a luminance capture of a white star with a full histogram, and then take a pic with any of the color filters, the brightness of the filtered star will only reach 1/3 of the histogram. Only 1/3 of the luminance data is from the blue channel, so it's not as good as a fully exposed blue-filtered frame. But since the luminance is added to enhance an underexposed RGB capture, I think the question is: is it better to use the underexposed blue-filtered frames, or to dedicate that time to capturing more luminance frames, and then get the blue by subtracting red and green from the luminance?
Obviously, if you had fully exposed R, G, and B channels, then it would make no sense to capture the luminance, as the luminance of the RGB image would be as good as it can be.
I proposed the blue channel because I think the sensitivity is probably lower for that channel. Not sure if that's the case.
Alpal: That's a very good test! I think the LRGB has more blue, but that's probably because you used 33% for all the colors, while the CCD's actual sensitivity to each color likely varies. It's very similar though, and probably just boosting the blue channel a little bit would make it identical.
Peter: don't forget that you have the luminance, and the real red and green channels. The luminance is made by the addition of the 3 channels, so by subtracting the red and green from the luminance, you should get the real blue.
For example, to make things simple let's suppose that the CCD saturates with a value of 100, and that the sensitivity is R=30%, G=50%, B=20%.
You capture LRGB, and a particular pixel has a value of L=100, R=30, G=50, B=20. So, let's say that you discard the blue channel, so that you don't know its value. But you have LRG, so you can do the math B = L - R - G = 100 - 30 - 50 = 20.
Suggest you re-read Col's answer: he is on the money.
Your suggested method cannot resolve where the colours should be.
I don't know about that Peter.
I think FSR is on the money.
You take the 100% Luminance layer in Photoshop and subtract 33% of the Red channel,
then subtract 33% of the Green channel,
flatten it - then you have a synthetic Blue channel.
You then combine your Red, Green & synthetic Blue channels to form an RGB picture, then
add the Luminance layer as a "Luminosity" blend in Photoshop.
I did it & posted the pic - it works.
Maybe you could try it with one of your pics?
Anyone can try it for themselves if they have LRGB data.
Edit - It may not work that well on a dim galaxy, as all the noise
left over could be turned to blue,
giving a blue cast to the whole background.
I did it on a bright target.
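If you'd rather try it outside Photoshop, the same recipe in numpy looks something like this, and it also shows that blue-cast effect on a signal-free background (noise levels invented):

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (512, 512)

# background-only frames: zero signal, pure noise
L = rng.normal(0.0, 10.0, shape)
R = rng.normal(0.0, 6.0, shape)
G = rng.normal(0.0, 6.0, shape)

B_syn = L - 0.33 * R - 0.33 * G   # the two 33% subtractions

# the synthetic blue background is noisier than the real colour frames
print(R.std(), B_syn.std())       # ~6 vs ~10.4
```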
The reason you can't expect accurate results from LRG, LRB or LGB imaging is demonstrated by the attached image showing the transmittance of a set of Astrodon LRGB filters. You'll notice that the L filter admits wavelengths that none of the colour filters admit and also that there is an overlap between the B and G filters, so there are wavelengths that would be "counted twice" if you use these two filters.
If you had a set of colour filters that didn't overlap and covered the transmittance of the L filter perfectly then you could do imaging as suggested, but it doesn't provide any real advantage. You still need to achieve the same amount of integration time if you want the same SNR.
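A toy integration makes the point; the passband edges below are invented, but they have the same character as the real filters (a B/G overlap and a G/R gap):

```python
import numpy as np

wl = np.linspace(400, 700, 3001)   # nm
step = wl[1] - wl[0]
flat = np.ones_like(wl)            # flat source spectrum

def band(lo, hi):
    return ((wl >= lo) & (wl <= hi)).astype(float)

L_f = band(400, 700)
B_f = band(400, 505)               # overlaps G between 490 and 505
G_f = band(490, 580)               # gap before R between 580 and 605
R_f = band(605, 700)

tot = lambda f: (flat * f).sum() * step
print(tot(L_f), tot(B_f) + tot(G_f) + tot(R_f))   # L != B + G + R
```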
From a SNR perspective
For some nebulae the flux rate of Blue (and even Green) is often quite low - especially compared to Red.
In order to get the blue signal significantly above the noise floor and get a good SNR, you may choose/need to expose for longer than, say, R or L.
If you are relying on the R or L signal, which will usually be strong with a good flux rate, then your blue may still be buried deeper in the noise.
In any case there is still the issue of SNR to consider, and generating something artificially is going to either obfuscate or abstract the problem, making it harder to resolve.
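A rough shot-noise calculation (with invented photon rates) shows the scale of the problem:

```python
import math

def snr(rate, sky_rate, t):
    # shot-noise-only SNR for a source of `rate` e-/s over sky of `sky_rate` e-/s
    return rate * t / math.sqrt((rate + sky_rate) * t)

t = 600.0                   # seconds
print(snr(5.0, 2.0, t))     # a strong red-ish channel: SNR ~46
print(snr(0.5, 2.0, t))     # a weak blue channel: SNR ~8
```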
As Rick's spectral response curves show, the real formula for the Astrodon Series 2 filters ends up looking more like this:
L = B + G - (B/G overlap) + (notch between G & R) + R + (extended red)
This is nothing like L = B + G + R!
What the CCD is actually recording is different again, since it has its own set of spectral response curves; these further distort what the filters might theoretically capture and how efficiently the CCD converts each spectral range into signal.
That would dramatically impact the modified L = R + G + B formula above,
making it very non-linear and with all sorts of offsets.
At the end of the day - after white balancing and fine tuning colour to "taste" you'll get enough signal to render an image.
Then it probably boils down to beauty being in the eye of the beholder.
I think you are always going to be better off with more data than less data, but of course the quest for more efficiency - ie potentially 25% less imaging time is attractive.
The simple maths shows it's not the same, and the more complicated maths (including CCD response, atmospheric extinction as stated above, SNR issues, scope response, etc.) shows it's significantly different.
Given the large notch between G & R (approaching 25% of the L signal) and the extended red (another 10%), a simple B = L - (R + G) could potentially be affected by up to 35% of L! That could be quite problematic.
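The arithmetic behind that worst case:

```python
# the G/R notch carries ~25% of the L signal and the extended red tail
# another ~10%; with B = L - (R + G) both land in the synthetic blue
notch, extended_red = 0.25, 0.10
print(notch + extended_red)   # 0.35 -> up to 35% of L wrongly assigned to B
```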
Let's say you have a strongly emitting Ha region with no reflection nebulosity; no need to name one, as there are many out there. Luminance and R channels will be picking up the Ha, whereas green and blue will have no data in those Ha regions except some weak OIII emissions. If you remove 33% of the R channel from the L, it will still leave strong Ha emissions, as you haven't removed all of the Ha. Removing 33% of the green won't reduce those Ha emissions, so your resulting synthetic blue channel will be a strong Ha and half-strength OIII emission channel.
If the L filter covered exactly the RGB emission lines and there was no overlap and the cutoffs were perfectly clear (straight lines) then you can scale the R&G channels in relation to L and then subtract all of the R&G channels from the L to leave a synthetic B.
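Putting the Ha example in numbers (all invented):

```python
# an emission region: L and R both record the Ha flux, G and B see none
ha = 90.0
L, R, G = ha, ha, 0.0

B_syn = L - 0.33 * R - 0.33 * G
print(B_syn)   # 60.3 -> most of the Ha survives into the synthetic "blue"
```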
......Peter: don't forget that you have the luminance, and the real red and green channels. The luminance is made by the addition of the 3 channels, so by subtracting the red and green from the luminance, you should get the real blue....
Regards
Nup. I think Helmholtz sorted this out a while back.
The L channel records all colours. All colours can be made from red, green and blue. But it does not follow that whatever is left after removing red and green is always blue.
e.g. All old farmers are called McDonald. McDonald is a farmer. But it does not follow that he is old.
The reason you can't expect accurate results from LRG, LRB or LGB imaging is demonstrated by the attached image showing the transmittance of a set of Astrodon LRGB filters. You'll notice that the L filter admits wavelengths that none of the colour filters admit and also that there is an overlap between the B and G filters, so there are wavelengths that would be "counted twice" if you use these two filters.
If you had a set of colour filters that didn't overlap and covered the transmittance of the L filter perfectly then you could do imaging as suggested, but it doesn't provide any real advantage. You still need to achieve the same amount of integration time if you want the same SNR.
Cheers,
Rick.
Hi Rick,
that graph tells all.
You've convinced me that it's not going to be as accurate with the correct colours.
My anti-light pollution Astronomik CLS CCD filter - in front of all other filters -
would not help matters either as shown.
That makes processing much more difficult as it cuts out the orange of sodium lights
& any orange colours in the target.