#16, 16-01-2019, 04:48 PM
Camelopardalis (Dunk)

Just to alleviate an element of confusion: bit depth is often used in graphics and displays (monitors) to, coincidentally, define the number of shades the unit can process/display. To avoid that, data type is probably a better term, given that the files we capture/process can use several different data types, be they 8- or 16-bit integers or floating point numbers.
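
Just to illustrate (a quick sketch using numpy, nothing to do with any particular capture software), these are the data types you'll commonly meet and the values each can hold:

```python
# Sketch only: common data types in captured/processed files and their ranges
import numpy as np

for dt in (np.uint8, np.uint16, np.float32):
    # integer types report their range via iinfo, floats via finfo
    info = np.iinfo(dt) if np.issubdtype(dt, np.integer) else np.finfo(dt)
    print(dt.__name__, info.min, info.max)

# uint8   0  255
# uint16  0  65535
# float32 -3.4028235e+38  3.4028235e+38
```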

Maybe I'm missing something, but I don't see the point of summed stacking. You just end up with a larger number.

Typically, we would take the average (or median). Doing this over a collection of subs (even just 2!) will produce values that can't be represented exactly in the native data type. For example, the average of two values for a pixel, say 2019 and 2020, is 2019.5... if your source data type is 16-bit integer, then this number can't be stored by the same data type to appropriate precision, and needs to be promoted to something higher... typically, 32-bit is the next jump up from 16, as computers like powers of 2 (and it's a convenient "word size" for modern computers).
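
As a quick illustration (just a sketch with made-up pixel values, not anyone's actual stacking code), averaging two 16-bit subs forces exactly that promotion:

```python
# Sketch: two hypothetical single-pixel subs stored as unsigned 16-bit integers
import numpy as np

sub1 = np.array([2019], dtype=np.uint16)
sub2 = np.array([2020], dtype=np.uint16)

# np.mean promotes the result to floating point so the .5 survives
avg = np.mean(np.stack([sub1, sub2]), axis=0)
print(avg, avg.dtype)                 # [2019.5] float64

# forcing it back into uint16 throws that half-step of precision away
print(avg.astype(np.uint16))          # [2019]

# 32-bit float is the usual compromise for stacked images
print(avg.astype(np.float32).dtype)   # float32
```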

The final image part comes back to the capabilities of the display... most computer displays can show only 256 (8-bit) shades of each primary colour, resulting in 16,777,216 "colours". Modern graphics cards can store and output more, but higher bit depth displays remain specialist kit... and that's before we get on to colour gamut...
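
To put numbers on it (again just a sketch, with a made-up three-pixel image being squeezed down for display):

```python
# Sketch: 8 bits per channel gives 256 shades, and 256^3 colours overall
import numpy as np

shades = 2 ** 8
print(shades ** 3)                    # 16777216

# a hypothetical 16-bit mono image scaled linearly into 8 bits for display
img16 = np.array([[0, 32768, 65535]], dtype=np.uint16)
img8 = np.round(img16 / 65535 * 255).astype(np.uint8)
print(img8)                           # [[  0 128 255]]
```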