Old 25-06-2021, 11:25 PM
Ryderscope (Rodney)
Hi George,
There are three questions raised here, so I shall attempt to respond to each. My position is that I don't capture Luminance any more; instead, I create a synthetic luminance from the RGB data. I then process this as I would a 'real' luminance, with the objective of bringing out structure and fine detail prior to applying an LRGB combination.

Do you align and stack all subs?
If you choose to use a synthetic luminance, there are a number of ways to produce one. Some will do as you said: align and stack all subs. I don't do that; I simply take the integrated masters for each of R, G & B and integrate them with no pixel rejection applied. I use PixInsight for my processing and have used the SubframeSelector process to compare the noise levels and SNR weights of a synthetic luminance against a captured luminance, and the synthetic version usually comes out ahead.
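The combination step above (three masters, no rejection) can be sketched in a few lines of NumPy. This is my own illustration, not the actual PixInsight ImageIntegration internals; the function name and weights are hypothetical:

```python
import numpy as np

def synthetic_luminance(r, g, b, weights=(1.0, 1.0, 1.0)):
    """Combine R, G and B master frames into a synthetic luminance.

    A plain (optionally weighted) average with no pixel rejection,
    mirroring an integration of just the three integrated masters.
    """
    wr, wg, wb = weights
    return (wr * r + wg * g + wb * b) / (wr + wg + wb)

# Toy 2x2 "masters" standing in for the real integrated channels.
r = np.array([[0.2, 0.4], [0.6, 0.8]])
g = np.array([[0.3, 0.5], [0.7, 0.9]])
b = np.array([[0.1, 0.3], [0.5, 0.7]])

syn_l = synthetic_luminance(r, g, b)
print(syn_l)  # each pixel is the mean of the three channels
```

With real data you would load the three master FITS files into arrays first; rejection is unnecessary here because each master has already had outliers rejected during its own integration.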

Is a luminance filter better?
Technically the answer is maybe, but it depends on the bandpass of the luminance filter. If you look at the spectrum graph for a set of LRGB filters, you will typically see that the luminance filter covers a slightly wider band than the three RGB filters combined. You could therefore argue that the luminance data contains more information than the combined RGB filter set, and that a synthetic luminance would be lacking. I have not done any objective tests to quantify any losses, but from a subjective image quality assessment there does not seem to be any disadvantage. I can, however, report a definite improvement in SNR with the application of a synthetic luminance.
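The SNR improvement is what you would expect from averaging: if the noise in the three channels is uncorrelated, the average should gain roughly sqrt(3) (about 1.7x) over a single channel. A small simulation makes the point (these are made-up numbers for illustration, not measurements from my data):

```python
import numpy as np

rng = np.random.default_rng(42)

signal = 100.0   # constant "true" pixel value
sigma = 10.0     # per-channel noise standard deviation
n_pix = 200_000

# Three channels: same signal, independent noise realisations.
channels = signal + sigma * rng.standard_normal((3, n_pix))

single_snr = signal / channels[0].std()
stacked = channels.mean(axis=0)
stacked_snr = signal / stacked.std()

print(f"single channel SNR: {single_snr:.1f}")
print(f"synthetic L SNR:    {stacked_snr:.1f}")
print(f"gain: {stacked_snr / single_snr:.2f}  (sqrt(3) ~ 1.73)")
```

In practice the gain is smaller than the ideal because the channels share some correlated noise (sky background, read pattern), but the direction of the result matches what SubframeSelector reports.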

Pros and Cons of Synthetic Luminance?
Traditionally we captured lots of luminance data and only sufficient RGB data to add chrominance to the image. Our eyes perceive structure and detail through the luminance, so this is important. Typically the RGB was also captured binned (2x2), which I presume stemmed from the desire to save imaging time. This would be particularly so if one has to travel to a dark sky location, where imaging time is precious.

The alternative is to capture RGB unbinned and extract a synthetic luminance from the RGB. In my case I’m lucky in that I have my own observatory and don’t have to travel to collect my data.

I am not convinced that the supporting argument of saving imaging time by capturing luminance plus binned RGB holds up. My view is that if you apply the equivalent imaging time to capturing unbinned colour data, you can produce relatively noise-free colour channels and a synthetic luminance with a high SNR. FWIW, my objective for most targets with my QSI683/Tak TSA 120 combination is to grab 30 subs of 5 minutes for each of R, G and B.
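For what those numbers add up to, here is the quick arithmetic (the plan is exactly as stated above; no comparison figures are implied for any other scheme):

```python
# 30 x 5-minute subs in each of R, G and B, unbinned.
subs_per_filter = 30
sub_minutes = 5
filters = 3

total_minutes = subs_per_filter * sub_minutes * filters
print(f"RGB-only total: {total_minutes} min = {total_minutes / 60:.1f} h")
```

So the whole RGB-only plan comes to 7.5 hours of integration, all of which contributes to both the colour channels and the synthetic luminance.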

Of course, I don't pretend to have the definitive answer to this, and there are others who will no doubt want to contribute more.

CS,
Rodney