Why not just LRG?


fsr
10-07-2018, 10:30 PM
Hi,


I was wondering, if luminance can be calculated from RGB, then why capture LRGB? I understand that it's desirable to capture the luminance, because of the high transmission of this filter, but why not just capture LRG, and calculate B from that?


It seems to me like if L = 0.33 R + 0.33 G + 0.33 B (or whatever appropriate coefficients)


Then B = (L - 0.33 R - 0.33 G) / 0.33
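In code, the proposal might look like this minimal sketch (assuming calibrated, registered frames loaded as float arrays; the 1/3 coefficients are just placeholders):

```python
import numpy as np

def synthetic_blue(L, R, G, cr=0.33, cg=0.33, cb=0.33):
    """Estimate B from L, R and G, assuming L = cr*R + cg*G + cb*B."""
    B = (L - cr * R - cg * G) / cb
    return np.clip(B, 0.0, None)   # negative estimates are just noise
```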



Any reason why it's not done like that?


Regards

Atmos
10-07-2018, 10:38 PM
Let's say you had a region that is red emission nebula on one side and blue reflection nebula on the other. The luminance doesn't know which is red or blue, but you're only removing some of the red, so some of that emission nebula will come up in the synthetic blue where it shouldn't be.

fsr
10-07-2018, 11:54 PM
But L is made from R, G, and B. You subtract R and G from L, and you should get B.

billdan
11-07-2018, 02:11 AM
That's a good question fsr, and it sounds good in theory. However, the formula
L = (R + G + B) / 3 would only apply for a perfect CCD with a constant QE across the whole colour spectrum.

Most CCDs have a QE of around 60% at 550 nm (green), dropping to 45% at 650 nm (red) and 35% at 400 nm (blue/violet).

You would also need to take into account atmospheric extinction, where the blue and green signals are lower than red when your telescope is pointing at lower elevations. At 50º elevation the blue signal is 5% less than red, and green is 3% less than red.

So I think what you say would work, but to get a good colour balance your formula to get blue may have to be revised slightly.
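A sketch of that revision, folding the example QE and extinction figures above into the coefficients (the numbers are illustrative only, not measured values):

```python
import numpy as np

# Illustrative weights: QE from the figures above, atmospheric
# transmission relative to red at ~50 deg elevation.
qe  = {"R": 0.45, "G": 0.60, "B": 0.35}
atm = {"R": 1.00, "G": 0.97, "B": 0.95}
w = {k: qe[k] * atm[k] for k in qe}
s = sum(w.values())
cr, cg, cb = w["R"] / s, w["G"] / s, w["B"] / s   # normalised so they sum to 1

def synthetic_blue_weighted(L, R, G):
    """B estimate when L = cr*R + cg*G + cb*B with unequal weights."""
    return np.clip((L - cr * R - cg * G) / cb, 0.0, None)
```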

Cheers
Bill

RickS
11-07-2018, 07:57 AM
There's a process in PI called B3E (Ballesteros Black Body Estimator) that can estimate missing data from a single colour filter if you have the other two. It is based on the assumption that the signal is generated by black body radiation. I've seen it do a fairly good job with a missing R/G/B filter and also creating simulated Ha from RGB data (not sure how accurate this is given that NB emission is not a black body process).

A couple of PI forum threads about it:
http://pixinsight.com/forum/index.php?topic=4093.msg28541#msg28541
http://pixinsight.com/forum/index.php?topic=9691.0

The paper that B3E is based on: https://arxiv.org/pdf/1201.1809.pdf
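I'm not sure exactly how B3E implements it internally, but a toy per-pixel version of the same black-body idea might look like this (the band centres are rough assumptions, not Astrodon specs):

```python
import numpy as np
from scipy.optimize import brentq

H, C, K = 6.626e-34, 2.998e8, 1.381e-23       # Planck, speed of light, Boltzmann
LAM = {"R": 650e-9, "G": 530e-9, "B": 450e-9}  # assumed effective wavelengths (m)

def planck(lam, T):
    """Black-body spectral radiance at wavelength lam (m), temperature T (K)."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K * T))

def estimate_blue(flux_r, flux_g):
    """Fit a temperature to the G/R flux ratio, then predict the B flux."""
    f = lambda T: planck(LAM["G"], T) / planck(LAM["R"], T) - flux_g / flux_r
    T = brentq(f, 1000.0, 100000.0)   # assumes the ratio brackets a solution
    return flux_g * planck(LAM["B"], T) / planck(LAM["G"], T)
```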

Cheers,
Rick.

multiweb
11-07-2018, 08:01 AM
Sounds like a left handed 9 iron shot! :eyepop:

RickS
11-07-2018, 09:21 AM
Straight into the water :rofl:

codemonkey
11-07-2018, 08:27 PM
I was thinking about this a while ago, though I was considering different bandpass filters in order to increase the SNR. I think the idea has merit but probably needs some experimentation, or alternatively someone a lot better at math than I am.

One thing to consider (as pointed out to me by Ray) is that if you use three images (say L, R and G) to generate a fourth synthetic image, your synthetic image is going to have a noise contribution from all of the other frames. That said, the noise in the other frames should also be reduced, because you were able to devote more time to capturing that data.
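Ray's point can be put in numbers with standard error propagation; a sketch assuming the simple 1/3 model and independent noise in each master:

```python
import numpy as np

def synthetic_blue_noise(sig_L, sig_R, sig_G, cr=0.33, cg=0.33, cb=0.33):
    """Propagated sigma of B = (L - cr*R - cg*G) / cb, noise assumed independent."""
    return np.sqrt(sig_L**2 + (cr * sig_R)**2 + (cg * sig_G)**2) / cb

# With equal per-channel noise of 1.0, the synthetic B comes out at ~3.3,
# so the extra L time has to claw back a factor of ~3 in SNR first.
print(synthetic_blue_noise(1.0, 1.0, 1.0))
```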

Which filters you use will also be impacted by the QE curve of your camera and the bandpass of the filters. For example, Astrodon series E have a big gap in between the end of the green filter and the start of the red. The L filter has no such gap. I'd be inclined to capture L, B and G and then generate a synthetic R.

You could use some existing data and try various combinations of equivalent total exposure time and see how they compare.... like spend more time on L at the expense of no R. It'd be worth trying with a variety of target types as well.

One other benefit to this approach would be freeing up a filter slot which you could then use for something interesting like a red continuum or IR pass filter.

alpal
12-07-2018, 11:16 PM
Hi FSR,
what a good idea,
I did it just now using Photoshop
on some stacked data I had from a 3 x Drizzle crop of Eta Carinae.
I had to play a round a bit with the colour balance
but in a few minutes I got them looking almost the same.
Of course that would be only the first step in the processing of the picture.


I show a comparison picture.
So - why do we take Blue?



cheers
Allan

Stonius
12-07-2018, 11:27 PM
Interesting. I believe he was suggesting a synthetic R? Did you try that? I'm curious to see.

Markus

alpal
12-07-2018, 11:34 PM
Hi Markus,

I don't have time right now to try Red.
At least Photoshop has a subtraction option and
the slider can be moved to 33%.
That makes it fairly easy to do what FSR suggests.


cheers
Allan

Peter Ward
12-07-2018, 11:59 PM
Suggest you re-read Col's answer: he is on the money.

Your suggested method cannot resolve where the colours should be.

fsr
13-07-2018, 05:07 AM
Very interesting answers, thanks!

So, as I understand this, the luminance is just the addition of some fraction of each of the colour channels, depending on the CCD sensitivity and the filters (for the sake of simplicity, let's say that every colour contributes 1/3 of the luminance, so that L = (R + G + B) / 3). So, if you get a luminance capture of a white star with a full histogram and then take a pic with any of the colour filters, the brightness of the filtered star will only reach 1/3 of the histogram. Only 1/3 of the luminance data is from the blue channel, so it's not as good as a fully exposed blue filtered frame. But since the luminance is added to enhance an underexposed RGB capture, I think the question is: is it better to use the underexposed blue filtered frames, or to dedicate that time to capturing more luminance frames and then get the blue by subtracting red and green from the luminance?

Obviously, if you had the fully exposed R, G, and B channels, then it would make no sense to capture the luminance, as the luminance of the RGB image will be as good as it can be.

I proposed the blue channel because I think the sensitivity is probably lower for that channel. Not sure if that's the case.

Alpal: That's a very good test! I think the LRGB has more blue, but that's probably because you used 33% for all the colours, while the actual sensitivity of the CCD to each colour likely varies. It's very similar, however, and probably just boosting the blue channel a little will make it identical.

Peter: don't forget that you have the luminance, and the real red and green channels. The luminance is made by the addition of the 3 channels, so by subtracting the red and green from the luminance, you should get the real blue.

For example, to make things simple, let's suppose that the CCD saturates at a value of 100, and that the sensitivity is R=30%, G=50%, B=20%.
You capture LRGB, and a particular pixel has values L=100, R=30, G=50, B=20. Now say you discard the blue channel, so that you don't know its value. But you still have LRG, so you can do the math: B = L - R - G = 100 - 30 - 50 = 20.
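The same arithmetic as a quick check (values straight from the example above):

```python
L, R, G = 100, 30, 50   # recorded pixel values; sensitivities already baked in
B = L - R - G           # recover the discarded channel
print(B)                # 20, matching the blue measurement we threw away
```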




By the way, i found this link about LRGB capture, it looks interesting: http://www.robgendlerastropics.com/LRGB.html

Regards

alpal
13-07-2018, 06:39 AM
I don't know about that Peter.
I think FSR is on the money.
You take the 100% Luminance layer in Photoshop and subtract 33% of the Red channel,
then subtract 33% of the Green channel,
flatten it - then you have a synthetic Blue channel.
You then combine your Red, Green & synthetic Blue channels to form an RGB picture, then
add the Luminance layer as a "Luminosity" blend in Photoshop.
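For anyone without Photoshop, a rough numpy equivalent of those steps might be the sketch below (assuming registered frames scaled to [0, 1]; the "Luminosity" blend is approximated by rescaling each pixel's brightness to match L):

```python
import numpy as np

def lrgb_synthetic_blue(L, R, G):
    B = np.clip(L - 0.33 * R - 0.33 * G, 0.0, 1.0)        # synthetic blue layer
    rgb = np.stack([R, G, B], axis=-1)
    lum = rgb.mean(axis=-1, keepdims=True)                # simple luminance proxy
    blend = rgb * (L[..., None] / np.maximum(lum, 1e-6))  # luminosity blend
    return np.clip(blend, 0.0, 1.0)
```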

I did it & posted the pic - it works.
Maybe you could try it with one of your pics?

Anyone can try it for themselves if they have LRGB data.


Edit - It may not work that well on a dim galaxy as all the noise
left over could be turned to blue
giving a blue cast to the whole background.
I did it on a bright target.



cheers
Allan

RickS
13-07-2018, 08:18 AM
The reason you can't expect accurate results from LRG, LRB or LGB imaging is demonstrated by the attached image showing the transmittance of a set of Astrodon LRGB filters. You'll notice that the L filter admits wavelengths that none of the colour filters admit and also that there is an overlap between the B and G filters, so there are wavelengths that would be "counted twice" if you use these two filters.

If you had a set of colour filters that didn't overlap and covered the transmittance of the L filter perfectly then you could do imaging as suggested, but it doesn't provide any real advantage. You still need to achieve the same amount of integration time if you want the same SNR.

Cheers,
Rick.

rally
13-07-2018, 09:14 AM
From an SNR perspective:
For some nebulae the flux rate of blue (and even green) is often quite low - especially compared to red.

In order to get the blue signal significantly above the noise floor and get a good SNR you may choose/need to expose for longer than, say, R or L.

If you are relying on the R or L signal, which will usually be strong with a good flux rate, then your blue may still be buried deeper in the noise.

In any case there is still the issue of SNR to consider, and generating something artificially is going to either obfuscate or abstract the problem, making it harder to resolve.

As Rick's spectral response curves show, the real formula for the Astrodon Series 2 filters ends up looking more like this:

L = B + G - (B/G overlap) + (notch between G & R) + R + (extended red)

This is nothing like L = B + G + R!
What the CCD is actually recording is different again, since it has its own set of spectral response curves, and these further distort what the filters might theoretically capture and how efficiently the CCD converts each spectral range into signal.

That would dramatically impact the modified L = R + G + B formula above, making it very non-linear and with all sorts of offsets.

At the end of the day - after white balancing and fine tuning colour to "taste" - you'll get enough signal to render an image.
Then it probably boils down to beauty being in the eye of the beholder.

I think you are always going to be better off with more data than less data, but of course the quest for more efficiency - i.e. potentially 25% less imaging time - is attractive.
The simple maths shows it's not the same, and the more complicated maths (including CCD response, atmospheric response as noted above, SNR issues, scope response, etc.) shows it's significantly different.

Given the large notch between G & R (approaching 25% of the L signal) and the other ~10% of extended red, a simple B = L - (R + G) could be off by up to 35% of L, which could be quite problematic.

Atmos
13-07-2018, 09:23 AM
Let's say you have a strongly emitting Ha region with no reflection nebulosity; no need to name one as there are many out there. The luminance and R channels will be picking up the Ha, whereas green and blue will have no data in those regions except for some weak OIII emissions. If you remove 33% of the R channel from the L it will still leave some strong Ha emission, as you haven't removed all of the Ha. Removing 33% of the green won't reduce those Ha emissions either, so your resulting synthetic blue channel will be a strong Ha plus half-strength OIII channel.

If the L filter covered exactly the combined RGB passbands, there was no overlap, and the cutoffs were perfectly sharp (straight lines), then you could scale the R & G channels in relation to L and subtract all of the R & G from the L to leave a synthetic B.
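A toy pixel makes the Ha leakage concrete (every number here is invented):

```python
Ha = 90.0               # Ha flux, passed by both the L and R filters
L, R = Ha, Ha           # pure emission region: no blue/green continuum
G, B_true = 5.0, 0.0    # faint OIII in green, nothing at all in blue

B_synth = L - 0.33 * R - 0.33 * G   # naive synthetic blue
print(B_synth)          # ~58.7: strong "blue" that is really leftover Ha
```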

troypiggo
13-07-2018, 11:03 AM
There ain't no such thing as a free lunch ;)

Peter Ward
13-07-2018, 01:55 PM
Nup. I think Helmholtz sorted this a while back :)

The L channel records all colours. All colours can be made from red, green and blue. But it does not follow that all colours without red and green are always blue.

e.g. All old farmers are called McDonald. McDonald is a farmer. But it does not follow that he is old.

alpal
13-07-2018, 04:14 PM
Hi Rick,
That graph tells all.
You've convinced me that it's not going to be as accurate with the correct colours.
My anti-light-pollution Astronomik CLS CCD filter - in front of all other filters -
would not help matters either, as shown.
That makes processing much more difficult as it cuts out the orange of sodium lights
& any orange colours in the target.


cheers
Allan

Atmos
13-07-2018, 04:17 PM
CLS filters can be good for emission nebulae as most of the colour comes from the Ha and OIII emissions.

alpal
13-07-2018, 04:23 PM
It would still be great to get data at a really dark site & not have to use it though.

fsr
13-07-2018, 09:11 PM
I see, so the L filter has a larger spectrum than the RGB combination. That's problematic. It also means that the gaps in RGB will be rendered as grey in LRGB processing. It seems like they combined a light pollution filter into the RGB set, but the L filter doesn't have that gap. Curious.

But in the end, L is not a channel in an RGB image. R, G, and B are. L is used to enhance an image with weak RGB channels, because we perceive resolution in the luminance of an image, but not so much in the chrominance. But if the goal is to have the best colour accuracy, then it seems better to capture correctly exposed RGB channels, and then there's no reason to capture L, right? The luminance of the image will be good if the colour channels are good. Probably add Ha capture, if the red filter cuts this important line.

alpal
14-07-2018, 03:16 PM
I find that without adding Luminance to my pictures they lack "punch".
The colours look weak.

Also - maybe since there is a crossover between some colours on all of our so-called RGB filters,
perhaps we're kidding ourselves that we're viewing the correct colours?
The processing of pictures becomes subjective & open to
artistic licence, & a general consensus seems to build up over time
as to what a target is supposed to look like.
We then process to get similar colours.


cheers
Allan

billdan
14-07-2018, 05:32 PM
The only way to be certain that the images we take are accurate in terms of colour - i.e. the image we take = what we could see with the eye - would be to take some subs of a colour test card (like a colour rainbow) in low light at some distance away, stack and process, and then see if the colours on the screen match the original card that the eye sees.

alpal
14-07-2018, 07:08 PM
Actually you could just do it with an LRGB camera
attached to an ordinary DSLR camera lens outside during the day.
Would the result look strange in terms of colour?


Anyway - notice that a DSLR camera boosts green in its Bayer matrix
by having 2 green pixels for every red & blue pixel?

sil
23-07-2018, 03:00 PM
As an OSC shooter I have something to add from a different perspective, for experimental consideration. In photography, converting colour to black and white is not a linear process; there is filtering involved to accentuate or soften contrast. E.g. an orange filter can deepen the sky and highlight clouds for interest, or you can make redness and pimples disappear to enhance skin. The initial test comparison of a colour image versus one with an artificial channel is not a valid approach. Instead, I think you need to linearly convert both colour results back to grey, and then those can be compared. You could subtract or difference them to get a perfectly featureless grey image (if both colour images match) and measure the amount they differ, since you won't get a perfect match with an artificial process.

What you will get, though, is a repeatable process with a quantifiable number (no "I adjusted it and it looks about right"). Mathematics rules optics and colours, so definable adjustment numbers can be honed to get the final two grey images as closely matched as possible. That will allow people to say "this process works well to build a missing B channel, while this other process works better on a missing R".

It seems like something that can be pursued; just be careful how you obtain a grey or L from colour data. Using subs from an OSC you can split them to RGB, then throw away one channel at a time and work on technique to rebuild it; you then compare to the source RGB data, and it should match. A straight split-then-remerge back to RGB should give you a 100% match - do this first to test your processing method and software, as in the sketch below. You could also try different colour spaces for your source data, as information gets lost converting between them. You might discover that capturing in sRGB and rebuilding the missing channel never gets you better than a 90% match, while capturing in AdobeRGB takes you to 95%. So you could prove how much or how little capture settings impact the final result. So much to explore, even without knowing the maths.
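A sketch of that split-then-rebuild sanity check, assuming the OSC data is already a float RGB array. By construction it should score exactly 0.0 (the "100% match" pipeline test); a real experiment would swap in an independently captured L:

```python
import numpy as np

def rebuild_blue_and_score(rgb, cr=0.33, cg=0.33, cb=0.33):
    """Drop B, rebuild it from a synthetic L, and report the RMS error."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    L = cr * R + cg * G + cb * B          # synthetic luminance from the same data
    B_est = (L - cr * R - cg * G) / cb    # rebuild the "missing" channel
    return np.sqrt(np.mean((B_est - B) ** 2))   # 0.0 means the pipeline is sound
```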