ICEINSPACE
  #21  
Old 13-07-2018, 03:17 PM
Atmos (Colin)
Ultimate Noob
Join Date: Aug 2011
Location: Melbourne
Posts: 5,628
CLS filters can be good for emission nebulae, as most of their colour comes from the Ha and OIII emission lines.
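A rough way to sanity-check this is to test the key emission lines against a CLS-style filter's passbands. The window edges below are illustrative assumptions, not any manufacturer's published curve:

```python
# Sketch: which emission lines does a broadband light-pollution
# (CLS-style) filter pass? Passband edges are approximate,
# illustrative values only.
passbands_nm = [(450, 530), (640, 700)]  # assumed CLS-like windows

emission_lines_nm = {
    "OIII": 500.7,
    "H-beta": 486.1,
    "H-alpha": 656.3,
    "Na streetlight": 589.3,
}

def passed(wavelength_nm, bands):
    """True if the wavelength falls inside any passband window."""
    return any(lo <= wavelength_nm <= hi for lo, hi in bands)

for name, wl in emission_lines_nm.items():
    state = "passed" if passed(wl, passbands_nm) else "blocked"
    print(f"{name} ({wl} nm): {state}")
```

The nebula lines get through while the sodium streetlight line lands in the blocked gap, which is why the nebula's colour survives.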
  #22  
Old 13-07-2018, 03:23 PM
alpal
Registered User
Join Date: Jan 2012
Location: Melbourne
Posts: 2,712
Quote:
Originally Posted by Atmos View Post
CLS filters can be good for emission nebulae, as most of their colour comes from the Ha and OIII emission lines.



It would still be great to get data at a really dark site & not have to use it though.
  #23  
Old 13-07-2018, 08:11 PM
fsr
Registered User
Join Date: Nov 2015
Location: Hurlingham, Buenos Aires
Posts: 23
I see, so the L filter has a wider spectrum than the RGB combination. That's problematic. It also means that the gaps in RGB will be rendered as gray in LRGB processing. It seems like they combined a light-pollution filter into the RGB set, but the L filter doesn't have that gap. Curious.

But in the end, L is not a channel in an RGB image; R, G, and B are. L is used to enhance an image with weak RGB channels, because we perceive resolution in the luminance of an image, but not so much in the chrominance. But if the goal is the best colour accuracy, then it seems better to capture correctly exposed RGB channels, and then there's no reason to capture L, right? The luminance of the image will be good if the colour channels are good. Perhaps add Ha capture if the red filter cuts that important line.
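The luminance-replacement idea described here can be sketched in a few lines: keep the colour ratios from the RGB stack, but take the brightness from the deeper L stack. The Rec.709 luma weights are my illustrative choice; images are assumed to be float arrays in [0, 1]:

```python
import numpy as np

def lrgb_combine(L, rgb, eps=1e-6):
    """Rescale each RGB pixel so its luminance matches the L frame,
    preserving the colour ratios of the RGB stack."""
    luma = (0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1]
            + 0.0722 * rgb[..., 2])
    scale = L / np.maximum(luma, eps)  # per-pixel brightness correction
    return np.clip(rgb * scale[..., None], 0.0, 1.0)

# Toy example: a dim red patch from a weak RGB stack, lifted by a
# stronger luminance frame while keeping its red hue.
rgb = np.full((2, 2, 3), [0.2, 0.05, 0.05])
L = np.full((2, 2), 0.4)
out = lrgb_combine(L, rgb)
```

After combination the luma of `out` matches `L` (where no channel clips), while the red-to-green ratio of the original colour data is unchanged.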
  #24  
Old 14-07-2018, 02:16 PM
alpal
Registered User
Join Date: Jan 2012
Location: Melbourne
Posts: 2,712
Quote:
Originally Posted by fsr View Post
I see, so the L filter has a wider spectrum than the RGB combination. That's problematic. It also means that the gaps in RGB will be rendered as gray in LRGB processing. It seems like they combined a light-pollution filter into the RGB set, but the L filter doesn't have that gap. Curious.

But in the end, L is not a channel in an RGB image; R, G, and B are. L is used to enhance an image with weak RGB channels, because we perceive resolution in the luminance of an image, but not so much in the chrominance. But if the goal is the best colour accuracy, then it seems better to capture correctly exposed RGB channels, and then there's no reason to capture L, right? The luminance of the image will be good if the colour channels are good. Perhaps add Ha capture if the red filter cuts that important line.



I find that without adding luminance, my pictures lack "punch"; the colours look weak.

Also, since there is crossover between some colours on all of our so-called RGB filters, perhaps we're kidding ourselves that we're viewing the correct colours? The processing of pictures becomes subjective and open to artistic licence, and a general consensus seems to build up over time as to what a target is supposed to look like. We then process to get similar colours.


cheers
Allan
  #25  
Old 14-07-2018, 04:32 PM
billdan (Bill)
Registered User
Join Date: Mar 2012
Location: Narangba, SE QLD
Posts: 1,102
The only way to be certain that the images we take are accurate in terms of colour (i.e. that the image we take matches what we would see with the eye) would be this:

Take some subs of a colour test card (like a colour rainbow) in low light at some distance away, stack and process them, and then see if the colours on the screen match the original card as the eye sees it.
  #26  
Old 14-07-2018, 06:08 PM
alpal
Registered User
Join Date: Jan 2012
Location: Melbourne
Posts: 2,712
Quote:
Originally Posted by billdan View Post
The only way to be certain that the images we take are accurate in terms of colour (i.e. that the image we take matches what we would see with the eye) would be this:

Take some subs of a colour test card (like a colour rainbow) in low light at some distance away, stack and process them, and then see if the colours on the screen match the original card as the eye sees it.



Actually, you could just do it with a mono LRGB camera attached to an ordinary DSLR camera lens, outside during the day. Would the result look strange in terms of colour?

Anyway, notice that a DSLR camera boosts green in its Bayer matrix by having two green pixels for each red and each blue pixel?
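The 2:1 green weighting comes straight out of tiling the standard RGGB 2x2 cell across the sensor, which a toy sketch makes concrete:

```python
import numpy as np

# Sketch of the RGGB Bayer layout: tiling the 2x2 cell across a
# sensor yields twice as many green photosites as red or blue,
# roughly matching the eye's peak sensitivity in green.
pattern = np.array([["R", "G"],
                    ["G", "B"]])

sensor = np.tile(pattern, (4, 4))  # an 8x8 toy sensor
counts = {c: int((sensor == c).sum()) for c in "RGB"}
print(counts)  # green sites outnumber red and blue two to one
```

Demosaicing then interpolates the two missing colours at every photosite, so green detail is sampled at twice the density of red or blue.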
  #27  
Old 23-07-2018, 02:00 PM
sil
Registered User
Join Date: Jun 2012
Location: Canberra
Posts: 1,101
As an OSC shooter, I have something to add from a different perspective, for experimental consideration. In photography, converting colour to black and white is not a linear process; there is filtering involved to accentuate or soften contrast. For example, an orange filter can deepen the sky and highlight clouds for interest, or make redness and pimples disappear to flatter skin. So the initial test of comparing a colour image against one with an artificial channel is not a valid approach. Instead, I think you need to linearly convert both colour results back to grey, and then those can be compared. You could subtract (difference) them, which gives a perfectly featureless grey image if both colour images match, and measure the amount by which they differ, since you won't get a perfect match with an artificial process.

What you will get, though, is a repeatable process with a quantifiable number (no "I adjusted it and it looks about right"). Mathematics rules optics and colours, so definable adjustment numbers can be honed to get the final two grey images as closely matched as possible. That will let people say "this process works well to rebuild a missing B channel, while this other process works better on a missing R".

It seems like something that can be pursued; just be careful how you obtain a grey or L image from colour data. Using subs from an OSC, you can split them into RGB, then throw away one channel at a time and work on technique to rebuild it; you then compare against the source RGB data, which should match. First split, then remerge all channels back to RGB and compare; that should give you a 100% match, so do this first to test your processing method and software. You could also try different colour spaces for your source data, as information gets lost converting between them. You might discover that capturing in sRGB and rebuilding the missing channel never gets you better than a 90% match, while capturing in Adobe RGB takes you to 95%. So you could prove how much, or how little, capture settings impact final results. There is so much to explore, even without knowing the maths.
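The proposed comparison (convert both results to grey with the same linear weights, difference them, report one number) can be sketched as below. The equal-weight grey conversion is an assumption for illustration; any fixed linear weights work, as long as both images use the same ones:

```python
import numpy as np

def to_grey(rgb):
    """Linear, equal-weight grey conversion (illustrative choice)."""
    return rgb.mean(axis=-1)

def grey_mismatch(rgb_a, rgb_b):
    """RMS difference of the two grey renderings; 0.0 = identical,
    i.e. the difference image is perfectly featureless grey."""
    diff = to_grey(rgb_a) - to_grey(rgb_b)
    return float(np.sqrt(np.mean(diff ** 2)))

# Identical colour results difference to zero; a rebuilt channel
# that is slightly off produces a small but quantifiable number.
a = np.random.default_rng(0).random((16, 16, 3))
print(grey_mismatch(a, a))        # 0.0
print(grey_mismatch(a, a * 0.9))  # small but nonzero
```

This gives exactly the repeatable, quantifiable figure described above: rebuild a channel, compute the mismatch, and tune the rebuilding process to drive the number down.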
Powered by vBulletin Version 3.8.7 | Copyright ©2000 - 2018, Jelsoft Enterprises Ltd.