#44, 20-05-2014, 04:25 PM
irwjager (Ivo), Registered User
Join Date: Apr 2010; Location: Melbourne; Posts: 532
Quote:
Originally Posted by Octane View Post
Exporting TIFFs from Canon's Digital Photo Professional will give you, in your TIFF, exactly what the sensor captured.
And this is exactly the problem: unfortunately it is the very opposite of the truth, and it is the reason for all this tomfoolery with dcraw.

For AP we are interested in what the sensor captured, which is *definitely not* what DPP outputs. DPP just makes things 'pretty', but 'pretty' is the very opposite of what we need for AP!

1. For starters, Canon DSLRs (like many other cameras) have a Color Filter Array, meaning that the pixels recorded by the sensor are divvied up between the different colours. In the case of your Canon, every 4 native pixels are divided into 1 red, 2 green and 1 blue pixel. E.g. for a 16MP camera, you get 4MP of red data, 4MP of blue data and 8MP of green data.
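
To make the bookkeeping concrete, here is a minimal Python/numpy sketch of how a raw Bayer mosaic splits into per-colour samples. It assumes an RGGB layout (illustrative; the actual layout varies per camera model) and uses random numbers as a stand-in for a real raw frame:

Code:
import numpy as np

# Stand-in raw frame; a real one would come from dcraw or a raw library.
# Assumes an RGGB Bayer layout (illustrative; layouts vary per camera).
raw = np.random.randint(0, 2**14, size=(4000, 4000), dtype=np.uint16)

r  = raw[0::2, 0::2]   # 1 in every 4 pixels carries a red sample
g1 = raw[0::2, 1::2]   # 2 in every 4 pixels carry green samples
g2 = raw[1::2, 0::2]
b  = raw[1::2, 1::2]   # 1 in every 4 pixels carries a blue sample

print(r.size, g1.size + g2.size, b.size)   # 4M red, 8M green, 4M blue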

This is obviously not what you see in a full frame from DPP; every pixel has all three channels accounted for. That is thanks to an interpolation (debayering) algorithm that makes up the missing 3 out of 4 pixels for red and blue and the missing 2 out of 4 pixels for green. Yes, that's right: three quarters of your red and blue data and half of your green data is completely made up!

There are various clever ways of making up this data, and chances are that DPP uses a very clever (proprietary) algorithm indeed that reconstructs as much as possible. The trouble is that 'clever' is *bad* for AP. While this works great for terrestrial daytime images, which have plenty of signal and plenty of geometrical shapes to derive reconstruction clues from, frames shot for AP purposes have neither. The reconstruction algorithm then starts 'reconstructing' things that aren't there, misled by noise and the absence of geometrical shapes/patterns. More generally speaking, the noise that was contained in one pixel (for red and blue) is propagated into its neighbouring pixels. The result is artifacts (introduced 'detail') and noise grain larger than one pixel, both of which are very hard to get rid of by stacking or noise reduction algorithms, so these artifacts and noise grain become very conspicuous.

It is best, then, to use a 'dumb' interpolation algorithm (such as bilinear interpolation) that makes a 'recoverable' guess in case the guess turns out to be wrong (i.e. one that refrains from trying to introduce detail); see the sketch below.
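
For illustration, a minimal sketch of such a 'dumb' bilinear debayer in Python/numpy (again assuming an RGGB layout; this is not DPP's or any particular program's algorithm). Each missing sample is just the plain average of its nearest same-colour neighbours, so a wrong guess stays local to one pixel instead of turning into false 'detail':

Code:
import numpy as np
from scipy.ndimage import convolve

def bilinear_debayer(raw):
    # Scatter the RGGB samples into three sparse full-size planes.
    h, w = raw.shape
    r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
    r[0::2, 0::2] = raw[0::2, 0::2]
    g[0::2, 1::2] = raw[0::2, 1::2]
    g[1::2, 0::2] = raw[1::2, 0::2]
    b[1::2, 1::2] = raw[1::2, 1::2]
    # Bilinear kernels: at a sampled pixel they return the sample itself,
    # elsewhere the average of the nearest same-colour neighbours.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])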

Refer to the attached image: 1 is the original scene; 2 is what the sensor actually captured (a far cry from the DPP result; it has been stretched for easy comparison); 3 is what each individual pixel captured when assigned its correct colour; 4 is what the scene looks like once the missing data has been interpolated (notice the artifacts).

2. As you can see from the comparison Paul posted, the data has clearly been stretched by DPP (to make it fit for human consumption). This is *bad*, as many processing steps (for example deconvolution and colour calibration) need to be performed while the data is still in the linear domain.
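
A quick way to see why (hypothetical numbers, with a generic gamma curve standing in for whatever stretch DPP applies): deconvolution models the image as signal convolved with the PSF, a strictly linear relationship, and any non-linear stretch breaks it.

Code:
import numpy as np

stretch = lambda x: x ** (1 / 2.2)   # generic display gamma, for illustration

a, b = 100.0, 300.0                  # two linear contributions to one pixel
print(stretch(a + b))                # ~15.2: stretch of the summed signal
print(stretch(a) + stretch(b))       # ~21.5: sum of stretched signals; not equal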

3. The same comparison shows that the data has been colour balanced by DPP. This is *bad* because (as I posted on the ST forum) the noise in each channel has also been scaled by the white balance factors. It then becomes impossible for ST to track the noise effectively, as the noise level varies significantly between channels.
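
A hypothetical illustration of that effect (made-up white balance multipliers):

Code:
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 10.0, size=(3, 100_000))   # equal noise in R, G, B
wb = np.array([[2.1], [1.0], [1.5]])               # illustrative WB gains
print(noise.std(axis=1))                           # ~[10, 10, 10] before
print((wb * noise).std(axis=1))                    # ~[21, 10, 15] after: channel-dependent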

4. Since the data has been white balanced and stretched, it is no longer possible to recover true colour. Re-white-balancing is no longer possible, and while light pollution can still be subtracted, its (linear!) influence can no longer be rebalanced. This is compounded by the fact that the white balancing will have clipped some of the highlights: white balancing is a simple multiplication of the channels by different factors, so a clipped highlight may still carry data in some channels but not in others.
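
A made-up single-pixel example of the clipping problem:

Code:
import numpy as np

pixel = np.array([0.6, 0.9, 0.5])          # linear R, G, B, close to full scale
wb = np.array([2.0, 1.0, 1.5])             # illustrative WB multipliers
balanced = np.minimum(pixel * wb, 1.0)     # red wants to be 1.2 but clips at 1.0
print(balanced)                            # [1.0, 0.9, 0.75]
print(balanced / wb)                       # 'undoing' WB gives R = 0.5, not 0.6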

5. While I cannot be sure about DPP, I know from other manufacturers that a lot more processing is performed than just stretching and colour balancing; often noise reduction and sharpening are applied, wreaking havoc on faint signal that could otherwise be recovered by stacking, and destroying any hope of applying mathematically correct deconvolution. Sony's NEX line of cameras comes with software that is particularly bad in this respect.

Bottom line is: *please* keep your data as virgin as possible (i.e. as close to actual photon counts as possible). If you don't, you will not be able to make effective or correct use of your signal, particularly where colour management and deconvolution are concerned.
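
For reference (and this is the reason for the dcraw 'tomfoolery' mentioned above), an invocation along these lines gets you close to untouched data; flags per the dcraw man page: -4 for 16-bit linear output, -T for TIFF, -o 0 to keep the camera's raw colour space, -r 1 1 1 1 for unity white balance, -q 0 for plain bilinear demosaicing (the filename is just a placeholder):

Code:
dcraw -4 -T -o 0 -r 1 1 1 1 -q 0 IMG_0001.CR2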
Attached image: 225px-Colorful_spring_garden_Bayer.jpg (the four-panel Bayer/debayering comparison referred to above).
