#11
sil (Steve), 14-01-2019, 12:29 PM
A regular 24-bit colour image is 8 bits per channel: 8 bits times three colour channels times the height times the width of the image (in pixels) gives you the raw file size, since bits are a measure of data size after all. 8 bits means 2^8 values, so a range of 256 values (0-255) per channel.
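To make that concrete, here's the arithmetic as a quick Python snippet (the 6000x4000 frame is just a made-up example, not any particular camera):

```python
# Raw (uncompressed) size of a 24-bit RGB image:
# width x height x channels x bits-per-channel, then bits -> bytes.
width, height = 6000, 4000        # hypothetical 24-megapixel frame
channels = 3                      # R, G, B
bits_per_channel = 8              # "24-bit colour" = 3 channels x 8 bits

raw_bits = width * height * channels * bits_per_channel
print(raw_bits / 8 / 1024 / 1024)  # about 68.7 MiB of raw pixel data
```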

Now say you have two images, and the first pixel in one channel is 250 in one image and 251 in the other. When you average you get 250.5, but values can only be whole numbers to fit a bit depth. So you'd need to increase the bit depth to, say, 9 bits, making the values 500 and 502 respectively out of a 0-511 range; now averaging gives 501, a whole number, and all good for a 9-bit depth.
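In code the same example looks like this (plain Python, nothing assumed beyond the numbers above):

```python
# Two 8-bit samples average to a fraction, which whole-number storage can't hold.
a8, b8 = 250, 251
print((a8 + b8) / 2)        # 250.5 -- not a whole number, doesn't fit 8-bit

# Double everything into a 9-bit range (0-511) and the average is whole again.
a9, b9 = a8 * 2, b8 * 2     # 500 and 502
print((a9 + b9) // 2)       # 501 -- exact, no rounding needed
```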

But the more complex the operations you want to do, the more fractional values you run into, so you keep increasing the bits per pixel if you want to retain precision and not start rounding/clipping values.

So yes, 64 bits per channel is possible (and what I always work with), but it's rarely needed, and TIFF supports 32-bit, maybe 64-bit. All this means is that the file format is just a container for the data: if the container can hold data to a depth of, say, 64 bits, you can put a regular 8-bit JPEG into a 16-bit TIFF, but it'll be mostly empty and still only retain values with 8 bits of range, now scaled up to 16-bit values. There's no new information there.
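A little sketch of that "empty container" point, assuming NumPy is available (the x257 step is just the usual way of rescaling 0-255 onto 0-65535):

```python
import numpy as np

# 8-bit values rescaled into a 16-bit container: bigger numbers,
# but the same count of distinct levels -- no new information.
pixels8 = np.array([0, 128, 255], dtype=np.uint8)
pixels16 = pixels8.astype(np.uint16) * 257        # 255 * 257 = 65535
print(pixels16)                                   # [    0 32896 65535]
print(len(np.unique(pixels8)), len(np.unique(pixels16)))  # same either way: 3 3
```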

Likewise your DSLR may save a 16-bit TIFF file while its sensor only has a 12-bit range, so the data only has 12 bits of range, just stored in a 16-bit container. Computers run on binary values, which is where 2^n comes from, and processors and memory fit powers of two better, which is why it's so common everywhere: 32-bit processors, 64-bit processors, and file formats with 8 bits of precision, 16 bits etc. You could use a 9-bit space, but moving it around in memory still takes up a whole 16-bit block, so why waste the effort? Just store the 9 bits in a 16-bit container for processing. That gives room for extra precision if a processing step results in a 10-bit or 15-bit value, and it saves time dealing with clipping and rounding, so operations don't result in messy approximations. It's also why devices can get away with smaller chips: they tend to use value spaces that are good enough, with some room for calculations before errors occur, and those errors are so minimal they don't impact the operation of the device.

Astrophotography is about signal-to-noise ratios, and increasing the bit depth gives you room to play with in processing: you stack lots of data to average the noise down into the depths, allowing faint signal to be "grabbed" and brought forward to make a pretty picture. Stacking doesn't in itself produce a vivid bright image; it's usually darker, as all the data is averaged and so tends downwards rather than upwards. You can do it in ways that do brighten it, but those also brighten the noise, which makes it harder to clean up.
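A toy stacking demo with NumPy (all numbers made up: a flat "true" signal of 100 with Gaussian noise of sigma 10):

```python
import numpy as np

# Average 64 noisy frames of the same scene: the signal stays where it is,
# the random noise shrinks by roughly sqrt(64) = 8.
rng = np.random.default_rng(0)
signal = 100.0
frames = signal + rng.normal(0, 10, size=(64, 100_000))  # 64 frames of noisy pixels

stack = frames.mean(axis=0)
print(round(frames[0].std(), 2))   # ~10.0 : noise in a single frame
print(round(stack.std(), 2))       # ~1.25 : noise averaged down by sqrt(N)
print(round(stack.mean(), 2))      # ~100.0: no brighter than one frame ever was
```

Note the averaged result isn't any brighter than a single frame; it's just far less noisy, which is exactly the room you need to stretch the faint stuff later.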

If you capture in 8-bit you can work in 8-bit, but operations will quickly start clipping data, so you should always try to work at the largest bit depth you can, as early as possible.
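Here's what that clipping looks like, again with NumPy (a x2 stretch on three made-up pixel values):

```python
import numpy as np

# A modest x2 stretch in an 8-bit working space saturates at 255:
# 200 and 240 both land on 255 and their difference is gone for good.
pixels = np.array([100, 200, 240], dtype=np.uint8)
in8bit = np.clip(pixels.astype(np.int32) * 2, 0, 255).astype(np.uint8)
print(in8bit)                      # [200 255 255]

# The same stretch in a float working space keeps every value distinct.
in_float = pixels.astype(np.float64) * 2
print(in_float)                    # [200. 400. 480.]
```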

For example, I work with a DSLR, so as I work in PixInsight with the raws I let PI save each frame as 16-bit, since I know my camera sensor is capturing between 8 and 16 bits (14-bit I think). This means all the data is kept and I haven't lost anything. When I start combining I usually step up to 32 or 64 bit, and after the whole preprocessing process using lights, darks, flats etc., then registering and integrating, I end up with my Integration Master. This is a single file containing ALL my data, stored with 64-bit precision, ready for processing.

Using higher bit depth file formats means larger file sizes and storage needs, and they are also slower to work with, even if they only hold 8 bits of data. Think of it more like reserved space.
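Same made-up 24-megapixel frame as before, priced out at different container depths:

```python
# Uncompressed sizes for one RGB frame at various per-channel bit depths.
width, height, channels = 6000, 4000, 3
for bits in (8, 16, 32, 64):
    mib = width * height * channels * bits / 8 / 1024 / 1024
    print(f"{bits:2d}-bit container: ~{mib:.0f} MiB")   # 69, 137, 275, 549 MiB
```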

Not all programs support all bit depths of all file formats, mainly because their working bit depth only allows for certain depths rather than everything possible. Plus not all programs implement the file format specifications in full; maybe just some basics, and maybe from an older version of the spec.

It's the great thing about having standards: everyone makes theirs unique. Sometimes this is intentional, so you are forced to work entirely with their software and not just for a part of the workflow. FITS files likewise are not universally shareable between astrophotography software.

Then there are reasons for working in RGB colourspace or LAB or CMYK, and bit depths play a role in not losing data as you shift colourspaces.
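One way to see it, assuming scikit-image is installed (the "integer LAB" step is a deliberately crude stand-in for a low-depth container):

```python
import numpy as np
from skimage import color

# Push all 256 grey levels through RGB -> LAB -> RGB.
greys = np.arange(256).reshape(1, 256, 1).repeat(3, axis=2) / 255.0
lab = color.rgb2lab(greys)

# Kept in float64 the round trip preserves every level...
exact = np.round(color.lab2rgb(lab) * 255).astype(np.uint8)
print(len(np.unique(exact)))       # 256

# ...but squeeze LAB into whole numbers mid-way (L only spans 0-100)
# and grey levels merge: data lost just by changing colourspace too coarsely.
coarse = np.round(color.lab2rgb(np.round(lab)) * 255).astype(np.uint8)
print(len(np.unique(coarse)))      # ~101
```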



So to answer Merlin's post, the answer is none of those options.
Your 16-bit images will become more than 16 bits deep and so need to be stored in a file format of maybe 32/48/64-bit depth.
Or, put another way: if you processed straight to a file it may be 16-bit if that's how you defined it, but you've probably lost signal and clipped values by doing that. The data in memory will end up with greater than 16 bits of signal depth; YOU define the container depth when you save to a file, and the software may lose signal if the precision needed is more than you chose.

Of course our screens are basically 8-bit only. There are some with greater bit depth, but we're bad at differentiating between two adjacent colours, which is why most people end up overprocessing their images in order to see a change in the image, not just in the numbers.

I constantly adjust my workflow in small ways as I run different targets through it. My DSLR is basically a constant for its settings, so the workflow should work on any set, as I try to use adjustments that don't require custom masking for the target. If I tweak to get a nebula looking good, I might find it's a little noisy for galaxies because the noise wasn't apparent in the nebulosity, so I tweak that step, and so on with each set I take. Over time my workflow becomes something I'm happy to use on every set, and the result is something I can then do artistic processing with, or mosaic with other results, or take measurements from, or whatever.

are we confused yet? I am.