
Advantages of RGB binned 2x2


telecasterguru
05-05-2010, 08:23 PM
I see that a lot of imagers are binning L at 1x1 and RGB at 2x2.

What are the advantages of binning the RGB at 2x2 while keeping the luminance at 1x1, and are the advantages the same for all DSOs?

Thanks

Frank

Bolts_Tweed
05-05-2010, 08:30 PM
Gday Frank

I have just started doing this so I am not experienced, but as I understand it, it mostly comes down to time. 2x2 binning is more sensitive than 1x1, so you can image fainter objects in the same time.

Software like DL allows compositing an LRGB image from 1x1-binned Lum and 2x2-binned RGB. As the RGB just forms a tone map, resolution isn't the highest priority. In fact I run a median filter over my RGB channels and then get all the sharp detail from the 1x1-binned Lum or Ha (or a combination of both) that I am using as the luminance.
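
In rough code terms, that smoothing step is nothing more exotic than something like this (a sketch only, using scipy's median_filter on hypothetical R, G, B channel arrays):

```python
from scipy.ndimage import median_filter

# Sketch only: smooth the lower-resolution, noisier colour channels with a
# small median filter; the fine detail comes from the 1x1 Lum/Ha frame.
# R, G, B are hypothetical 2D numpy arrays holding the calibrated channels.
def smooth_colour(R, G, B, size=3):
    return (median_filter(R, size=size),
            median_filter(G, size=size),
            median_filter(B, size=size))
```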

Others may be more eloquent, but this is what I gathered from research before trying it. I hope to show the results after the weekend with an LRGB of that NGC3576 I am hoping to take (the weekend looks good up here - yahoo).

Mark Bolton

sheeny
05-05-2010, 09:08 PM
I think Mark has pretty much got it.

2x2 binning increases sensitivity at the expense of resolution, but since the detail comes from the luminance image (1x1), it makes sense to bin the colour data 2x2 and get richer, smoother (less noisy) colour.
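
If it helps to see what 2x2 binning does to the numbers, here is a minimal sketch of software binning with numpy (on-camera binning happens before readout, but the arithmetic is the same; the frame here is made up):

```python
import numpy as np

def bin_2x2(img):
    """Sum each 2x2 block of a mono frame, halving the resolution."""
    h, w = img.shape
    h, w = h - h % 2, w - w % 2                  # trim an odd row/column if present
    blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3))               # ~4x the signal per output pixel

# Hypothetical noisy frame: binning trades resolution for signal.
frame = np.random.poisson(lam=100, size=(1000, 1000)).astype(float)
binned = bin_2x2(frame)
print(frame.mean(), binned.mean())               # binned mean is roughly 4x higher
```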

Al.

Tandum
05-05-2010, 10:13 PM
Be careful binning 2x2 on bright objects. Remember that 2x2 binning adds the contents of a group of four pixels into one readout pixel/buffer, so these 8300 cameras, with their relatively small well depth, will overflow because the buffer the pixels are combined into has the same well depth as a single imaging pixel.
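
A rough back-of-envelope check of that overflow, with illustrative numbers only (the exact register behaviour depends on the camera):

```python
# Assumed figures for illustration: ~25,500 e- full well on the KAF-8300, and
# (per the post above) a binning register no deeper than a single pixel.
full_well_e = 25_500
pixel_signal_e = 9_000                 # hypothetical per-pixel signal in a bright area

binned_sum = 4 * pixel_signal_e        # 2x2 binning sums a group of four pixels
if binned_sum > full_well_e:
    print(f"Clipped: {binned_sum} e- exceeds the {full_well_e} e- register")
```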

OzRob
06-05-2010, 02:10 PM
If my understanding is correct, our eyes are not good at seeing detail in colour, so the loss of resolution in the colour data is of little consequence. The detail comes from the luminance frame.



I don't understand how the cameras handle binning. However, I would have thought that binning would increase the effective well depth, as you have more pixels to fill. Please correct me if I am wrong. Also, on a 16-bit camera wouldn't the buffer have a maximum value of 65,536?

Tandum
06-05-2010, 02:40 PM
These 8300 sensors use normal pixels when reading the image out from the sensor to the image buffer for download. From what I've been told, they shift the image down the sensor and read it out from the bottom, so if you don't use a shutter you get a vertical white line down the image from bright stars as the image moves down. I'm also told that binning is done on the sensor, so if a group of four pixels totals more than the well depth of the pixel it's summed into, it will overflow. You can see this happening on a QHY9 when you expose to daylight in the higher binning modes: it overflows into the overscan area, rendering it useless for calibration. Luckily, we only really need to expose to daylight to set the gain.

The 8300 has a well depth of 25.5K. It's the gain setting that multiplies it up to 65K odd.
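
As a sketch of how that mapping works (illustrative numbers, assuming a straight electrons-to-ADU conversion):

```python
# With a ~25,500 e- full well and a 16-bit output range, a gain of roughly
# 0.39 e-/ADU puts the full well at about 65,535 ADU. Numbers are illustrative.
full_well_e = 25_500
adc_max = 2**16 - 1                     # 65,535 ADU on a 16-bit camera

gain_e_per_adu = full_well_e / adc_max  # ~0.39 e-/ADU
signal_e = 12_000                       # hypothetical signal in electrons
signal_adu = signal_e / gain_e_per_adu  # what ends up in the image file
print(round(gain_e_per_adu, 2), round(signal_adu))
```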

RobF
06-05-2010, 07:57 PM
Interesting, Robin. I've been reading threads on a few forums about whether or not to bin. I need to do more experiments comparing LRGB, RGB and various binning combinations. One argument goes that you can get just as good an image from unbinned RGB as from traditional LRGB with 2x2-binned RGB.

Wasn't aware of how 2x2 worked on the 8300 sensor, which is interesting with regards to QHY9.

Clouds/moon have stymied this project so far this year - hoping things dry up (and clear up) and lot more from now on though...

Tandum
06-05-2010, 08:41 PM
This is what happens when they send you a camera that doesn't work properly: you spend weeks finding out how it does work :) I've always used 2x2 for RGB; it's a lot quicker and you can devote longer to Lum/Ha.

Octane
06-05-2010, 08:42 PM
Rob,

Check out Tom Davis' work. He doesn't bin his colour frames and his images are breathtaking.

I'm undecided whether I will bin or not when I get around to giving the STL first light. My gut says I'll stick to 1x1, though.

H

telecasterguru
06-05-2010, 09:08 PM
The QHY9 uses a shutter so I don't know about the white lines in the data when downloading.

I have noticed that the blue channel is more bloated than the red or green channel when I process the QHY9 data, and I thought that binning might help with that, since a number of imagers bin RGB 2x2.

Will be having my first attempt at Ha tomorrow night so am prepared to try lots of things while in the learning process.

Frank

RobF
06-05-2010, 10:28 PM
You have to admire his work, but I must admit I haven't looked too closely at the details of his data capture and processing (other than to flap my jaw at the wondrous pics he produces :))

Are they normally LRGB all without binning, or RGB without binning Humayun? Will have to hunt some down and have a read.

Octane
06-05-2010, 10:54 PM
Rob,

All LRGB without binning. Oodles of luminance.

H

strongmanmike
06-05-2010, 11:06 PM
Yeah, I have never binned my colour data either. In fact, pretty weak/poor colour data can still produce a specky image... on the other hand, get poor luminance and it's all over, red rover.

Luminance is the king, so get that right and you are home :thumbsup:

Mike

sjastro
06-05-2010, 11:42 PM
Maintaining star colour is difficult when binning RGB.

I use a standard 60 minute exposure unbinned for each R, G and B image.

Regards

Steven

RobF
07-05-2010, 11:34 PM
Why is the lum so critical, guys? I mean, I understand you want a really tightly defined lum in Photoshop to bring out detail, overlaid with luminosity/hue for colour. Could that just as easily be a monochrome-treated red filter image, though? Why does it have to be a clear filter that gives the "lum" layer for processing? (leaving Ha and other complications out of the picture for the time being...)

Bassnut
08-05-2010, 07:18 AM
Because clear has all the frequencies (colours) in it. If you used red as lum, then you would not have blue or green in the final pic.

RobF
08-05-2010, 09:49 AM
Another question - perhaps same one worded differently. Why wouldn't R:G:B 4:4:4 hrs combined be just as good as say LRGB 6:2:2:2 hrs?

Is it to do with the difficulty of aligning the 3 colours to get a tight image with detail?

Phil Hart
08-05-2010, 10:19 AM
As Bassnut said, it's because colour filters only let ~1/3 of the spectrum through, while luminance captures the whole spectrum at once, so many more photons are being captured - much greater efficiency.

With RGB 4:4:4, you're only allowing 1/3 (probably less) of the photons through to the sensor the whole time you're imaging, whereas with LRGB 6:2:2:2 you've got six hours where the whole spectrum can come through. If you do the maths, you're capturing about twice as many photons.
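
Putting those numbers in a quick script (assuming each colour filter passes roughly a third of the broadband light and luminance passes essentially all of it):

```python
# Back-of-envelope comparison of two 12-hour plans. The 1/3 pass fraction for
# each colour filter is an assumption, not a measured figure.
def effective_hours(plan):
    """plan maps filter name -> (hours, fraction of the spectrum passed)."""
    return sum(hours * fraction for hours, fraction in plan.values())

rgb_only = {"R": (4, 1/3), "G": (4, 1/3), "B": (4, 1/3)}                 # 4:4:4
lrgb = {"L": (6, 1.0), "R": (2, 1/3), "G": (2, 1/3), "B": (2, 1/3)}      # 6:2:2:2

print(round(effective_hours(rgb_only), 1))   # ~4 "full-spectrum hours"
print(round(effective_hours(lrgb), 1))       # ~8, i.e. roughly twice the photons
```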

Bassnut
08-05-2010, 10:25 AM
Not to do with aligning. The "Lum" channel provides only the luminance, or brightness, while the RGB channels provide only the colour. The lum channel is mono, so it can originally be any colour before mono conversion, but it acts as a "filter" of sorts, "passing through" the colour brightness from the other channels according to the brightness of the colours present in the lum channel at capture. If, say, the lum channel were red converted to mono, it would only "pass" the brightness of the red captured data, and green and blue would be muted, or blocked.

As clear passes all colours, it tends to be brighter and gives the best S/N ratio, so its data is cleaner for a given exposure and allows more stretching than RGB. Therefore LRGB becomes more attractive, as the detail and low-noise stretching can all be done in the lum channel, with shorter, noisier and lower-resolution RGB providing only the colour data.

RGB alone would all require longer bin-1 captures to give the same detailed result as LRGB with L at bin 1 and short RGB bin-2 exposures.

In your example, L bin 1 and RGB bin 2 at 6:2:2:2 would give a far better result than RGB 4:4:4 all at bin 1, because the detail and S/N in 6 hours of lum would exceed the combined RGB of 4 hours each.

Hope that makes sense ;-)
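
To put the "lum provides the brightness, RGB provides only the colour" idea into rough code terms (a crude sketch with hypothetical array names, not how any particular package actually does the combine):

```python
import numpy as np

def lrgb_combine(L, R, G, B, eps=1e-6):
    """Keep the colour ratios from R, G, B but take the brightness from L."""
    rgb_lum = (R + G + B) / 3.0          # brightness implied by the colour data
    scale = L / (rgb_lum + eps)          # rescale so the brightness comes from L
    return np.stack([R * scale, G * scale, B * scale], axis=-1)

# Usage idea: with binned-then-upsampled RGB frames and a 1x1 L frame, all
# registered to the same pixel grid and normalised to 0..1:
# colour = lrgb_combine(L, R, G, B)
```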

RobF
08-05-2010, 08:13 PM
Thanks Phil and Fred. That makes perfect sense. Appreciate you sorting out some of my crazy apprentice notions :)
I think my biggest error was glossing over the fact that you're only capturing 1/3 (or less) of the spectrum with each filter, now that you've explained it.

jase
09-05-2010, 03:03 AM
1x1 for RGB gives you greater flexibility when processing a data set. 1x1 chrominance data can be used to generate a synthetic luminance, since the filters' passbands, when combined, closely equate to a luminance filter. It's not 100%, however. I often build a synthetic lum to match a Ha lum, to provide a better blend with the strong Ha data set. As alluded to, you can also simply use the red-filtered data as luminance if it was captured 1x1. You're simply processing the data set in a different way. The possibilities are endless.
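
A simple take on that synthetic luminance idea, as a sketch only (the weights and the Ha mixing ratio are arbitrary choices, not anyone's recipe):

```python
def synthetic_lum(R, G, B, weights=(1.0, 1.0, 1.0)):
    """Weighted average of unbinned R, G and B frames (numpy arrays) as a stand-in luminance."""
    wr, wg, wb = weights
    return (wr * R + wg * G + wb * B) / (wr + wg + wb)

# e.g. blending a synthetic lum with Ha to sit better with a strong Ha data set:
# syn_L = synthetic_lum(R, G, B)
# blended_L = 0.6 * Ha + 0.4 * syn_L     # hypothetical mixing ratio
```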

RobF
09-05-2010, 10:16 AM
We need to lock you in a room with a word processor, Jase, and get you to record what you know (even 50% would keep most of us busy for a long time, I suspect).

One thing I learned eventually when starting out with the DSLR: there's rarely a "right" way to do it. It helps to follow the general rules, but eventually you'll want to explore the effect of filters, ISO, exposure time, processing, etc.

Expecting the same thing with the mono CCD learning curve. It pays to follow the rules most of the time while starting out, but there's plenty to learn and explore on the capture and processing front. That's what I signed up for. :D

Thanks again Frank for kicking off such an educational discussion.