OK, I went back and had a listen to Adam's tutorial again. I'm now completely convinced that he is correct, and that a better outcome is possible if the RGB channels are separated, processed individually, and only recombined into colour at the end. Here is a brief summary:
Processes like data rejection, registration, and normalization take information from the luminance data and then interpolate from neighboring pixels. So consider these two situations:

1. A colour image with a hot red pixel. How can one accurately interpolate from a neighboring pixel when that pixel may in fact be a different colour?
2. The same hot red pixel, but on the red channel alone. Now one can be confident the interpolation is correct, because the neighbor is, by definition, red data.

The same argument can be made for registration, and the question really has to be asked about normalization too. What does it mean to normalize a colour image? The brightness values from image to image may very well differ, but the colour balance may differ as well. By keeping the data separate and "pure", these difficulties go away. A rough sketch of what per-channel rejection looks like is below.
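Just to illustrate the idea (this is not Adam's actual workflow, and the function and variable names are my own invention), here is a minimal Python sketch of hot-pixel rejection run on a single channel. Because only one channel is involved, every replacement value comes from genuinely same-colour neighbors:

```python
# Illustrative per-channel hot-pixel rejection (assumes numpy and scipy).
import numpy as np
from scipy.ndimage import median_filter

def reject_hot_pixels(channel: np.ndarray, kappa: float = 5.0) -> np.ndarray:
    """Replace pixels that deviate strongly from their local median.

    `channel` is a single mono frame (e.g. just the R data); `kappa` is the
    rejection threshold in units of a robust (MAD-based) sigma. Both names
    are hypothetical, not from any particular package.
    """
    local_median = median_filter(channel, size=3)
    residual = channel - local_median
    # Robust sigma estimate from the median absolute deviation
    sigma = 1.4826 * np.median(np.abs(residual))
    hot = np.abs(residual) > kappa * sigma
    cleaned = channel.copy()
    cleaned[hot] = local_median[hot]   # interpolate from same-colour neighbors
    return cleaned

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    r_frame = rng.normal(100.0, 5.0, (64, 64))
    r_frame[10, 10] = 4000.0           # simulate a hot pixel in the red channel
    r_clean = reject_hot_pixels(r_frame)
    print(r_clean[10, 10])             # now close to the local background
```

Each channel would be cleaned this way on its own, and the colour image assembled only afterwards.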
Peter