  #15  
Old 14-06-2019, 02:01 AM
luka
 
Join Date: Apr 2007
Location: Perth, Australia
Posts: 1,164
Tony, maybe I am over-complicating things, but there are two ways you can "extract" the red channel from the images.

Backtracking a bit first, the camera sensor has a Bayer matrix in front of it. See here for the explanation of the Bayer matrix. Basically in any block of 2x2 pixels on the camera sensor one pixel will pass "mainly red" light, one will pass "mainly blue" light and two will pass "mainly green" light. Note that "mainly red", "mainly green" and "mainly blue" have wide transmission ranges and partially overlap (see the two examples of the transmission curves a few paragraphs below).
Anyway, the RAW images from the camera contain intensity information for each pixel. To get to the colour information they need to be de-Bayered first, i.e. the information from the neighbouring pixels is interpolated to estimate the full colour at each pixel.
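To make the layout concrete, here is a minimal numpy sketch on a toy mosaic. It assumes an RGGB pattern (top-left pixel red) and made-up intensity values; real cameras use one of several patterns (RGGB, BGGR, GRBG, GBRG), so check yours.

```python
import numpy as np

# Toy 4x4 sensor readout, assuming an RGGB Bayer layout (an assumption --
# the actual pattern depends on the camera model).
mosaic = np.array([
    [100,  10, 100,  10],   # R  G  R  G
    [ 10,   5,  10,   5],   # G  B  G  B
    [100,  10, 100,  10],   # R  G  R  G
    [ 10,   5,  10,   5],   # G  B  G  B
], dtype=float)

# Each 2x2 block holds one R sample, two G samples and one B sample.
r  = mosaic[0::2, 0::2]      # "mainly red" pixels
g1 = mosaic[0::2, 1::2]      # first green sample per block
g2 = mosaic[1::2, 0::2]      # second green sample per block
b  = mosaic[1::2, 1::2]      # "mainly blue" pixels

# Mean raw intensity per channel; de-Bayering would interpolate the two
# missing colours at every pixel from these neighbouring samples.
print(r.mean(), (g1.mean() + g2.mean()) / 2, b.mean())   # 100.0 10.0 5.0
```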

Now, if you are imaging H alpha (656nm) most of the signal will end up in the red sensor pixels. The green and blue pixels will mostly contain noise (again, emphasis on "mostly"; have a look at the transmission curves in the last paragraphs below).

So you can process your images in two ways:
1. If you stack the images in DSS it will de-Bayer them for you. Basically, all channels (red, 2x green and blue) get mixed before you use PixInsight to extract the red channel. The green and blue channels may not contain any useful H-alpha signal, but they will add noise to the final image.

2. The other way would be to extract the red channel from the RAW files before doing any stacking. You would basically discard 3/4 of the pixels (green and blue) and your image resolution will be halved horizontally and vertically. But the image will be sharp: the larger resolution of the image from method 1 is not quite real, as it was artificially inflated by the de-Bayering calculations using green/blue pixels that carried little signal.
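Method 2 can be sketched in a couple of numpy lines. Again this assumes an RGGB layout and uses synthetic values (200 on the red photosites for the H-alpha signal, a noise-level 3 elsewhere); the point is just that keeping only the red photosites halves the image in each axis:

```python
import numpy as np

# Synthetic 8x8 RAW mosaic, assumed RGGB: red pixels carry a strong
# H-alpha signal, green/blue sit near the noise floor (made-up values).
mosaic = np.full((8, 8), 3.0)
mosaic[0::2, 0::2] = 200.0       # red photosites

# Keep only the red photosites: one sample per 2x2 block, so the result
# is half the width and half the height of the full mosaic.
red = mosaic[0::2, 0::2]

print(red.shape)                 # (4, 4) -- half of (8, 8) in each axis
```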

What is the better way? It depends on the Bayer matrix in front of your sensor (and your camera noise). For example, have a look at the transmission curves for different sensors here and here, in particular look around 650nm.
From the first link (sensor 1) red has about 90% transmission while blue and green have sub-5% transmissions. Basically, red pixels will see an almost 20x stronger H-alpha signal than blue/green.
From the second link (sensor 2) red has about 40% transmission while blue/green are about 3% / 8%. While the numbers are closer, the red will still have a 5-10x stronger H-alpha signal.
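The back-of-envelope arithmetic behind those ratios is just red transmission divided by green/blue transmission, using the approximate percentages quoted above:

```python
# Approximate transmissions at 656 nm, as read off the curves above.
sensor1 = {"red": 0.90, "green": 0.05, "blue": 0.05}
sensor2 = {"red": 0.40, "green": 0.08, "blue": 0.03}

# Signal advantage of the red pixels over the best of green/blue.
ratio1   = sensor1["red"] / max(sensor1["green"], sensor1["blue"])  # 18x
ratio2_g = sensor2["red"] / sensor2["green"]                        # 5x
ratio2_b = sensor2["red"] / sensor2["blue"]                         # ~13x
print(ratio1, ratio2_g, ratio2_b)
```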

So... I would be tempted to discard the blue/green channels. While sensor noise in modern cameras is low, using the blue/green pixels roughly quadruples the noise contribution while adding little real signal. The fainter features will be swamped by the extra noise and lost.
(Keep in mind that we are dealing with very small signals here)

Hope this all makes sense.

Maybe pick your best RAW image and separate its RGB channels, then check whether you can see any H-alpha signal in the blue/green channels.
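That check amounts to comparing per-channel statistics. Here it is mimicked on synthetic data (with a real file you would first load the RAW mosaic using a RAW-reading library, then slice it the same way); the numbers simulate a camera where H-alpha lands almost entirely on the red pixels:

```python
import numpy as np

# Synthetic RGGB mosaic of an H-alpha target: sky background plus read
# noise everywhere, strong signal only on the red photosites (made-up values).
rng = np.random.default_rng(0)
mosaic = rng.normal(5.0, 1.0, (16, 16))    # background + noise
mosaic[0::2, 0::2] += 150.0                # H-alpha on the red pixels

r = mosaic[0::2, 0::2]
g = np.concatenate([mosaic[0::2, 1::2].ravel(), mosaic[1::2, 0::2].ravel()])
b = mosaic[1::2, 1::2]

# If your camera behaves like this, the red mean dwarfs green/blue --
# and the green/blue channels contribute mostly noise.
print(round(r.mean()), round(g.mean()), round(b.mean()))
```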