ICEINSPACE
  #1  
Old 06-01-2019, 07:47 AM
Merlin66 (Ken)
Spectroscopy Wizard
Join Date: Oct 2005
Location: St Leonards, Vic
Posts: 6,628
16 or 32 bit????

If I have a series of 16 bit images and sum them together, do I get a 16 bit image, a 32 bit image, a 48 bit image, etc.?
I think I know the answer, looking for confirmation.
  #2  
Old 06-01-2019, 08:15 AM
kens (Ken)
Registered User

Join Date: Oct 2014
Location: Melbourne, Australia
Posts: 278
Depends on the tool you are using. But by default, adding any two 16 bit numbers results in a 16 bit number, with the possibility of an overflow.
ImageJ gives the option to produce 32 bit output. DSS gives the options of 32bit integer or 32bit "rational", i.e. floating point.
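A quick sketch of the overflow kens describes (NumPy is an assumption here; the same wrap-around applies to any fixed-width 16 bit sum):

```python
import numpy as np

# Two hypothetical 16-bit frames with bright pixels near saturation.
a = np.array([60000, 1000], dtype=np.uint16)
b = np.array([30000, 2000], dtype=np.uint16)

# Adding in uint16 wraps around on overflow: 60000 + 30000 = 90000,
# which exceeds 65535 and wraps to 90000 - 65536 = 24464.
wrapped = a + b

# Promoting to a wider type first keeps the true sum.
safe = a.astype(np.uint32) + b.astype(np.uint32)

print(int(wrapped[0]))  # 24464, not 90000
print(int(safe[0]))     # 90000
```

The result stays in the input type unless you promote it yourself, which is exactly why stacking tools offer 32 bit output.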
  #3  
Old 06-01-2019, 08:30 AM
DarkArts
Professional Farnarkler

Join Date: Dec 2014
Location: Canberra ... I could go on
Posts: 236
Pixel depth will be dictated by the file format and application. E.g. TIFF is largely a 16bpp format (the latest version supports 32bpp) but, for example, until recently GIMP couldn't save TIFF files with greater than 8bpp, so it truncated on save. Edit: that's X bpp per colour channel ... and there may be an alpha channel, so it can get very confusing.

If you have three 16bpp images in the same format and "sum" them, then whatever algorithm the application is applying in its "sum" function (which could be weird and wonderful), the final image is still going to be 16bpp ... unless you can also change to a deeper bpp format, and assuming your application holds that data all the way to save.

Keep in mind, however, that those extra bits are at the least significant end, so you're losing precision, not brightness or saturation, when truncating from, say, 16bpp to 8bpp.
  #4  
Old 06-01-2019, 08:53 AM
Merlin66 (Ken)
Spectroscopy Wizard
Join Date: Oct 2005
Location: St Leonards, Vic
Posts: 6,628
I found this write-up - we can go to 64bit images!!!
See https://www.satoripaint.com/Reviews/...ingArticle.htm

"A 32 bit image consists of three colour planes plus alpha plane - RGBa - each is 8 bit." Each having 256 levels of colour.

What's a RGBL image - when 16 bit cameras are used????
  #5  
Old 06-01-2019, 09:02 AM
DarkArts
Professional Farnarkler

Join Date: Dec 2014
Location: Canberra ... I could go on
Posts: 236
Quote:
Originally Posted by Merlin66
I found this write-up - we can go to 64bit images!!!
See https://www.satoripaint.com/Reviews/...ingArticle.htm

"A 32 bit image consists of three colour planes plus alpha plane - RGBa - each is 8 bit." Each having 256 levels of colour.

What's a RGBL image - when 16 bit cameras are used????
You should check the age of that article ...

Both Ken and I are talking about bits per colour channel per pixel. With an alpha channel as well, a "64bit image" as per the linked article is 16bpp per colour channel.
  #6  
Old 06-01-2019, 09:25 AM
DarkArts
Professional Farnarkler

Join Date: Dec 2014
Location: Canberra ... I could go on
Posts: 236
Quote:
Originally Posted by Merlin66
What's a RGBL image - when 16 bit cameras are used????
The bit depth coming out of the camera may not be fixed or neatly aligned with 8, 16 bits, etc.

Say you have 14bpp (per RGB colour channel, if it's colour) coming out of the camera; then that will most likely be saved in a 16bpp file format with zero-fill for the unused bits ... but I'm guessing a bit there, because I don't know what each image capture application is going to do. And then the image processing application may manipulate that to something else when opening the saved file.
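As a sketch of that packing (the left-shift convention is an assumption; some drivers zero-fill the high bits instead, leaving the values in the 0-16383 range):

```python
import numpy as np

# Hypothetical 14-bit sensor values (range 0..16383).
raw14 = np.array([0, 5000, 16383], dtype=np.uint16)

# One common convention: shift left by 2 so the 14 data bits sit at
# the top of the 16-bit container and the unused low bits are zero.
packed = raw14 << 2

# The range becomes 0..65532; the bottom 2 bits of each value stay zero.
print(list(packed))  # [0, 20000, 65532]
```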

Edit: deleted a short explanation of L, RGB, YCbCr etc. ... it's too complicated to cover in a forum post.

Last edited by DarkArts; 06-01-2019 at 09:37 AM.
  #7  
Old 06-01-2019, 09:29 AM
multiweb (Marc)
ze frogginator
Join Date: Oct 2007
Location: Hinchinbrook
Posts: 17,463
I've always saved the stack result as a 32bit float FIT file. Individual subs are 16bit FITS files.
  #8  
Old 06-01-2019, 09:38 AM
Merlin66 (Ken)
Spectroscopy Wizard
Join Date: Oct 2005
Location: St Leonards, Vic
Posts: 6,628
I was assuming separate 16bit images, say with narrowband filters...

If you save as a 32bit TIFF, what processing software reads and manipulates it??
(Just checked AA7: it says it can save 32bit FITS files, but the TIFF files are only 16 or 8bit.)
  #9  
Old 06-01-2019, 09:44 AM
multiweb (Marc)
ze frogginator
Join Date: Oct 2007
Location: Hinchinbrook
Posts: 17,463
CCD Stack saves 32bit FIT. Any software will read them. I'm not sure, but I think only PS would read 32bit TIFF files. And then a lot of the plugins wouldn't work and the editing would be very limited. In CS6 anyway. Don't know about CC.
  #10  
Old 06-01-2019, 05:43 PM
LewisM
Novichok test rabbit
Join Date: Aug 2012
Location: Somewhere in the cosmos...
Posts: 8,729
Not that I know squat about imaging, but in PixInsight I save 32 bit files as references, then save for final tweaking in PS6 as 16 bit TIFFs. CS6 will read 32 bit, but editing is problematic (many tools and plugins won't function/are greyed out). You can convert to 16 bit in PS6.
  #11  
Old 14-01-2019, 11:29 AM
sil (Steve)
Not even a speck of dust
Join Date: Jun 2012
Location: Canberra
Posts: 1,172
A regular 24bit colour image is 8bpp (bits per pixel) per colour channel: 8 bits, i.e. one byte, times three colour channels, times image height times width in pixels, gives you the raw file size (bits are a measure of data size, after all). 8 bits means 2^8 values, so a range of 256 values (0-255).

Now say you have two images, and the first pixel in one channel is 250 in one image and 251 in the other. When you average you get a value of 250.5, but values can only be whole numbers at a given bit depth. So you'd need to increase the bit depth to, say, 9bpp, making the values 500 and 502 respectively in a 0-511 range; averaging now results in 501, a whole number, all good at 9bit depth.
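The arithmetic in that example, as a minimal sketch:

```python
# sil's example pixel: 250 in one image, 251 in the other (8-bit, 0-255).
a, b = 250, 251

# Averaging at 8-bit depth: the 0.5 is truncated away.
avg_8bit = (a + b) // 2          # 250, the half-level is lost

# Scale both into a 9-bit range (0-511) first, then average.
avg_9bit = (a * 2 + b * 2) // 2  # 501, a whole number, nothing lost

print(avg_8bit, avg_9bit)  # 250 501
```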

But the more complex the operations you want to do, the more decimal places you run into, so you keep increasing the bits per pixel if you want to retain precision and not start rounding/clipping values.

So yes, 64 bits per pixel is possible (and what I always work with), but it's rarely needed. TIFF supports 32bpp, maybe 64bpp, but all this means is that the file format is just a container for the data: you can put a regular 8bpp JPG into a 16bpp TIFF, but it'll be mostly empty and still only retain 8 bits of range, now scaled to 16bit values. There's no new information there.

Likewise your DSLR may save a 16bit TIFF file while its sensor only has a 12bit range, so the data only has a 12bit range, just stored in a 16bit container. Computers run on binary values, which is where 2^n comes from, and processors and memory fit powers of two better, which is why it's so common everywhere: 32bit processors, 64bit processors, and file formats with 8 bits of precision, 16 bits, etc. You could use a 9bit value space, but moving it around in memory still takes up a whole 16bit block, so why waste the space? Just store the 9 bits in a 16bit container for processing: that gives room for extra precision if a processing step results in a 10bit or 15bit value, and saves time dealing with clipping and rounding, so operations don't result in messy approximations. Devices can get away with smaller chips because they tend to use value spaces that are good enough, with some room for calculations before errors occur, and those errors are so minimal they don't impact the operation of the device.

Astrophotography is about signal to noise ratios, and increasing the bit depth gives room to play with processing: stacking lots of data averages the noise down into the depths, allowing faint signal to be "grabbed" and brought forward to make a pretty picture. Stacking doesn't in itself produce a vivid bright image; usually it's darker, as all data is averaged and so tends downwards rather than upwards. You can do it in ways that do brighten it, but those also brighten the noise, which makes it harder to clean up.

If you capture in 8bit you can work in 8bit, but operations will quickly start clipping data, so you should always try to work at the largest bit depth you can, as early as possible.

For example, I work with a DSLR, so as I work in PixInsight with the raws I let PI save each frame as 16bit, since I know my camera sensor is capturing between 8 and 16 bits (14bit, I think). This means all data is kept and I haven't lost anything. When I start combining I usually step up to 32 or 64 bit, and after the whole preprocessing process using lights, darks, flats etc., then registering and integrating, I end up with my Integration Master. This is a single file containing ALL my data, stored with 64bit precision, ready for processing.

Using higher bit depth file formats means larger file sizes and storage needs; they are also slower to work with, even if they only have 8 bits of data in them. Think of it more like reserved space.

Not all programs support all bit depths of all file formats, mainly because their working bit depth only allows for certain depths rather than the full possible range. Plus not all programs implement the file format specifications in full; maybe just some basics, and maybe from an older version of the spec.

It's the great thing about having standards: everyone makes theirs unique. Sometimes this is intentional, so you are forced to work entirely with their software and not just for a part of the workflow. FITS files likewise are not universally shareable between astrophotography software.

Then there are reasons for working in RGB colourspace or LAB or CMYK, and bit depths play a role in not losing data as you shift colourspaces.



So to answer Merlin's post: the answer is none of those options.
Your 16bit images will become >16bit depth, and so need to be stored in a file format of maybe 32/48/64 bit depth.
Or, more obtusely: if you processed directly to a file it may be 16bit if that's how you defined it, but you've probably lost signal and clipped values by doing that. The data in memory will end up with greater than 16 bits of signal depth; YOU define the container depth when you save to a file, and the software may lose signal if the precision needs to be more than you chose.

Of course our screens are basically 8bit only. There are some with greater bit depth, but we're bad at differentiating between two adjacent colours, which is why most people end up overprocessing their images in order to see a change in the image, not just in the numbers.

I constantly adjust my workflow in small ways as I run different targets through it. My DSLR is basically a constant for its settings, so the workflow should work on any set, as I try to use adjustments that do not require custom masking for the target. So if I tweak to get nebulae looking good, I might find it's a little noisy for galaxies, as the noise wasn't apparent in the nebulosity; so I tweak that step, and so on with each set I take. Over time the workflow becomes something I'm happy to use on every set, and the result is something I can then do artistic processing with, or mosaic with other results, or take measurements from, or whatever.

are we confused yet? I am.
  #12  
Old 14-01-2019, 03:53 PM
Merlin66 (Ken)
Spectroscopy Wizard
Join Date: Oct 2005
Location: St Leonards, Vic
Posts: 6,628
Sil,
Appreciate the time and effort gone into your reply...
My "back-up" when considering well depth and image bit size has been the Hamamatsu article.
http://hamamatsu.magnet.fsu.edu/arti...amicrange.html

The image bit size, as you say is selected by the post-processing software not the camera.

This discussion came from elsewhere, where separate 8 bit (camera) images were stacked (summed): the claim was made that the large ADU count in the sum required a greater image bit size.
I then said I use 10 bit SER video frames and stack hundreds of them (solar imaging); does this mean I get the equivalent of a 98bit image??????
(I know when they are stacked (median combine) the result is still a 10 bit file.)
The answer given ... yes, of course!!
He would not be convinced otherwise.......
  #13  
Old 15-01-2019, 12:45 PM
sil (Steve)
Not even a speck of dust
Join Date: Jun 2012
Location: Canberra
Posts: 1,172
I see a lot of confusion from people whenever a "bit" is involved. Really I'm just trying to help others understand that it's a set quantity of digital size: whether you're talking computer displays, image files, mp3 music or modems, it doesn't matter, a bit is still a bit, and once you understand that eight of the buggers make a byte you can start working stuff out. Like a 100Mbps network card: the rating has a size component and a time component, so you can easily work out answers relating to both.

I'm not an expert; I badly wish I was, as I really love mathematics. Generally I always work at the highest bit depth I can; it's rarely noticeable on screen until the very end. Lots of steps in a workflow add up in the end to a real difference (hopefully an improvement). Only when I'm happy the processing is complete do I start to downgrade to a lower bit depth for my output needs. Usually a 16bit PNG and an 8bit JPG are all I need for uploading to astrobin, printing and easy sharing/viewing. I always retain my 64bit master integration file, plus the 64bit fully processed file too.

Just to confuse things further, those of us old enough to remember compact discs may recall seeing 1bit and 8bit error correction printed on CD players. Logically 8bit should be better, but the method of error correction used was a cascade process that would resample a single bit multiple times, and single bit correction was best since a single bit was all that could be stored: read once, it's either 0 or 1, so decide and store. 8bit would do it 8 times iteratively in a 1bit space, so it could reinforce an incorrect reading with each successive reading.

I'll shut up now, sorry
  #14  
Old 15-01-2019, 07:03 PM
kens (Ken)
Registered User

Join Date: Oct 2014
Location: Melbourne, Australia
Posts: 278
Quote:
Originally Posted by Merlin66
This discussion came from some discussions where separate 8 bit (camera) images were stacked (summed) - the claim was made that the large ADU count in the sum required greater image bit size usage.
I then said I use 10 bit SER Video frames and stack hundreds of them (solar imaging)- does this mean I get the equivalent of a 98bit image??????
(I know when they are stacked (median combine) the result is still a 10 bit file)
The answer given,.....yes ofcourse!!
He would not be convinces otherwise.......
Actually - if you stack, say, 1000 frames you only need another 10 bits (2^10=1024), or 20 bits in total, to store the sum. If you retain 10bit precision by averaging, then you'll be adding in some noise from rounding.
It gets more interesting above 32 bits, when floating point becomes a viable option. With 32bit floating point, the IEEE 754 standard provides 24 bits of precision and exponents from -126 to +127. Plenty good enough for AP. Or you could stick with 32bit integer and sum up to 65535 16 bit images with no loss of precision.
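kens's bit count can be checked directly (a sketch; the frame values are hypothetical worst-case pixels at the 10-bit maximum):

```python
import math
import numpy as np

n_frames = 1000
max_10bit = 2**10 - 1                     # 1023, brightest possible pixel

# Stacking 1000 frames needs ceil(log2(1000)) = 10 extra bits,
# so 10 + 10 = 20 bits in total to hold the worst-case sum.
extra_bits = math.ceil(math.log2(n_frames))
total_bits = 10 + extra_bits

worst_sum = n_frames * max_10bit          # 1,023,000
assert worst_sum < 2**total_bits          # 2^20 = 1,048,576, so it fits

# A 32-bit (or wider) integer accumulator holds this with headroom.
frames = np.full(n_frames, max_10bit, dtype=np.uint16)
total = frames.sum(dtype=np.uint64)
print(extra_bits, int(total))  # 10 1023000
```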
  #15  
Old 16-01-2019, 08:11 AM
Merlin66 (Ken)
Spectroscopy Wizard
Join Date: Oct 2005
Location: St Leonards, Vic
Posts: 6,628
Hmmm
My summary version:
1. Bit depth is used by camera suppliers to "define" both the well depth and (say a unity gain where 1e=1ADU) the number of "shades"/ levels which can be obtained.
2. Bit depth is used in processing to maximise the "usable" number of "shades"/levels.
3. Bit depth in the final image is used to present the "best" resulting image.

Example:
The camera:
An "8 bit" mono camera, at unity gain would have a well depth of 256bits and be capable of handling 256 shades of grey, each shade would be 1bit.

A "16 bit" mono camera, at unity gain would have a well depth of 65000bits and be capable of handling 65536 shades/levels of grey, each shade would be 1bit.

The Processing:
If we use a stack (say 1000) of say 8 bit images (as above) then when summed the total ADU count would be 256,000. Each ADU would be a shade level, giving 256,000 different levels in the image (!!!)
To manipulate each level available we would need to process at 18bit (2^18 = 262144)

If we used 16bit processing, then the 65536 limit would mean that 256,000/65536 =3.9=4bit per level. Inferring that the "finer detail" contained in the 4bit step would be compromised(?)

A side note: Stacking either by summing (or more commonly by averaging) always improves the SNR - all things being equal -See Howell's "Handbook of CCD Astronomy", p 71)

If we "average" stack the 8 bit camera image, the result is still an 8 bit image (1000 x 256)/1000 =256. BUT the SNR is improved due to the reduction in "variation", which is also smoothed.

The Final Image
If we save a 8bit camera file to an 8 bit image format then it will have 256 levels of grey.
If we save an 8bit camera file as a 16 bit image, it will still have 256 levels. Due to the unity gain and each level can't have less than 1e. This means that not all the possible 65536 levels are used.

If we save a 16bit camera file to an 8 bit format, the number of levels is restricted to 256 - each level in this case would have 65536/256 =256 electron. Again inferring that fine detail could be compromised..

As always, open to comment/ correction and ridicule.
  #16  
Old 16-01-2019, 03:48 PM
Camelopardalis (Dunk)
Drifting from the pole
Join Date: Feb 2013
Location: Brisbane
Posts: 4,401
Just to alleviate an element of confusion: bit depth is often used in graphics and displays (monitors) to, coincidentally, define the number of shades the unit can process/display. To avoid that, data type is probably a better term, given that the files we capture/process can use several different data types, be they 8- or 16-bit integers or floating point numbers.

Maybe I'm missing something, but I don't see the point of summed stacking. You just end up with a larger number.

Typically, we would take the average (or median). Doing this over a collection of subs (even just 2!) will result in values that don't fit in the native range. For example, the average of two values for a pixel, say, 2019 and 2020 results in 2019.5... if your source data type is 16-bit integer, then this number can't be represented by the same data type to appropriate precision, and needs to be promoted to something higher...typically, 32-bit is the next jump up from 16 as computers like powers of 2 (and it's a convenient "word size" for modern computers).

The final image part comes back to the capabilities of the display...most computers have the capability to display only 256 (8-bit) shades of each primary colour, resulting in 16,777,216 "colours". Modern graphics cards can store and display more, but higher bit depth displays remain specialist...and that's before we get on to colour gamut...
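That promotion step, sketched in NumPy (the pixel values are the hypothetical ones from the paragraph above):

```python
import numpy as np

# The same pixel in two subs, stored as 16-bit integers.
subs = np.array([2019, 2020], dtype=np.uint16)

# Averaging in the native integer type truncates the fraction.
int_mean = int(subs.sum(dtype=np.uint32)) // len(subs)   # 2019

# Promoting to 32-bit float keeps the half-level exactly.
float_mean = subs.astype(np.float32).mean()              # 2019.5

print(int_mean, float(float_mean))  # 2019 2019.5
```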
  #17  
Old 16-01-2019, 03:57 PM
Merlin66 (Ken)
Spectroscopy Wizard
Join Date: Oct 2005
Location: St Leonards, Vic
Posts: 6,628
Dunk,
Thanks.
Re summation stacking - this example was used to "show" me that when summed, the amount of "data" in the image was much, much more than when averaged, and hence "could be manipulated more" with higher bit processes...

If you average, say, an 8 bit file and process it as a 16 bit file, I can't see any significant loss of info.
Yes, possibly, if you process the 8 bit averaged file in an 8 bit process....
  #18  
Old 16-01-2019, 11:37 PM
kens (Ken)
Registered User

Join Date: Oct 2014
Location: Melbourne, Australia
Posts: 278
Quote:
Originally Posted by Merlin66
Hmmm
My summary version:
1. Bit depth is used by camera suppliers to "define" both the well depth and (say a unity gain where 1e=1ADU) the number of "shades"/ levels which can be obtained.
Actually they use electrons to define FWC. FWC is more relevant to dynamic range than contrast.
2. Bit depth is used in processing to maximise the "usable" number of "shades"/levels.
3. Bit depth in the final image is used to present the "best" resulting image.

Example:
The camera:
An "8 bit" mono camera, at unity gain would have a well depth of 256bits and be capable of handling 256 shades of grey, each shade would be 1bit.
A bit is a binary digit. You mean 256 units or ADU. It's not a good idea mixing units this way. The well depth is an effective well depth in this context because the ADC saturates even though the well is not full and thereby limits dynamic range. The ADC is rated by its bit depth - 8 bits in this case. Assuming appropriate use of gain to manage dynamic range, this affects contrast stretching or tonal gradients (if that's the right term).
A "16 bit" mono camera, at unity gain would have a well depth of 65000bits and be capable of handling 65536 shades/levels of grey, each shade would be 1bit.
No. The limit is more likely to be the sensor. My ASI1600 has a FWC of 20000 electrons and a 12bit (4096 level) ADC. At zero gain the 20k electrons are shoehorned into 4096 levels. At unity the ADC saturates at 4096 electrons when the well is only 20% full. 16bits here refers to either the ADC output or the image bit depth. My 12bit ADC output is multiplied by 16 to make it a 16 bit value for saving to file. Conversely, in 8bit mode the ADC outputs 10bits which are divided by 4 to convert to 8 bits for faster transmission and higher frame rate.
The Processing:
If we use a stack (say 1000) of say 8 bit images (as above) then when summed the total ADU count would be 256,000. Each ADU would be a shade level, giving 256,000 different levels in the image (!!!)
To manipulate each level available we would need to process at 18bit (2^18 = 262144)

If we used 16bit processing, then the 65536 limit would mean that 256,000/65536 =3.9=4bit per level. Inferring that the "finer detail" contained in the 4bit step would be compromised(?)
Another way to look at it is that you have introduced rounding, or quantization, noise.
A side note: Stacking either by summing (or more commonly by averaging) always improves the SNR - all things being equal -See Howell's "Handbook of CCD Astronomy", p 71)

If we "average" stack the 8 bit camera image, the result is still an 8 bit image (1000 x 256)/1000 =256. BUT the SNR is improved due to the reduction in "variation", which is also smoothed.
But you have added some quantization noise which reduces SNR. In effect you lose some of the smoothing benefit by rounding it off.
The Final Image
If we save a 8bit camera file to an 8 bit image format then it will have 256 levels of grey.
If we save an 8bit camera file as a 16 bit image, it will still have 256 levels. Due to the unity gain and each level can't have less than 1e. This means that not all the possible 65536 levels are used.
Due to read noise, all levels will be output by the ADC. Stacking will reduce the read and other noise. Averaging in stacking also causes all levels to be filled.
If we save a 16bit camera file to an 8 bit format, the number of levels is restricted to 256 - each level in this case would have 65536/256 =256 electron. Again inferring that fine detail could be compromised..
Or that you have added quantization noise.
As always, open to comment/ correction and ridicule.
Comments above. I've tried to explain in terms of noise. The two main things we are concerned with are noise and contrast. Quantization noise is quantified as 1 LSB / sqrt(12), where LSB = least significant bit, or 1 unit.
All noise, including quantization noise, is amplified when we stretch to get contrast. So more bits, especially during processing, are better.
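The 1 LSB / sqrt(12) figure can be verified empirically; a minimal sketch (the uniform test signal is an assumption, chosen because quantization error is uniform over half a level either way):

```python
import numpy as np

rng = np.random.default_rng(0)

# A million "true" signal values, then rounded to whole ADU counts.
signal = rng.uniform(0.0, 1000.0, size=1_000_000)
quantized = np.round(signal)

# Rounding error is uniform on [-0.5, 0.5) LSB, whose standard
# deviation is 1/sqrt(12) of an LSB, about 0.289.
measured = float(np.std(quantized - signal))
predicted = 1.0 / np.sqrt(12.0)

print(round(measured, 3), round(predicted, 3))
```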
  #19  
Old 31-01-2019, 11:10 AM
Astrofriend (Lars)
Registered User
Join Date: Aug 2017
Location: Stockholm, Sweden
Posts: 144
All my sub images are stored as .cr2 (14 bit), .fit (16 bit) or .tif (16 bit). My DSLR camera has a 14-bit ADC, so all three formats store them without loss. Only .cr2 stores all of Canon's Exif data. After calibration (dark, flat and more) I save them as 32-bit per pixel: either 32-bit floating point FITS or maybe 32-bit TIFF, it depends which software I use, but all of them handle the information without loss in normal cases, even after stacking. No reason not to save in 32-bit, memory is cheap today. Why save money and destroy the expensive data you paid so much, in both money and time, to get?

The 16-bits formats are integers and the 32-bits are floating points in this case.

The trick is then how to process back from 32-bit to a common 8-bit depth (each colour) and still see the weak details in a nebula, not clip the stars, and still get a nice looking image. You increase the weak signal and dampen the high levels; a gamma function does that, something like our eyes do. More advanced processing does different things in different areas. All this needs high dynamic range data as input, at least 32-bit. It doesn't matter if you do averaging or sum; it's the small details that differ. Sometimes the noise is heavy and masks the problem.

The software I normally use is AstroImageJ and Fitswork. Both are free and handle 32-bit floating point numbers. Long ago I used Matlab, which can work with 64-bit, but there's no need for the images I have today; maybe in future.

Update (now that it's not the middle of the night):
With a 16-bit CCD camera calibrated with masterdarks and masterflats, maybe you reach, in theory, the limit where you need to move from 32-bit floating point to 64-bit. Let's say we calibrate the sub image with an average masterdark that consists of 50 subs. That means we subtract with a resolution of just 1/50 of a bit. To keep that without truncating it, we need 5 or 6 extra bits just for this calibration step. The 32-bit floating point format only has 24 bits of precision; the others are used for the exponent (-/+127). We have now used about 21 of the 24 bits available. Then we can only stack about 10 images before we lose information. In reality we don't come close to this, because we have a lot of noise that masks the problem. I'm a little surprised it was so close with a 16-bit camera; I have a 14-bit camera and so get two extra bits to play with, so I can stack about 40 images without problems, in theory. Correct me if I did something wrong in my estimations. I will make a calculator later on my homepage to find how many bits are needed in theory.
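Lars's bit budget can be sketched as a small function (`bits_needed` is hypothetical, and the ceilings make it slightly more pessimistic than the in-text estimate):

```python
import math

def bits_needed(camera_bits, n_darks, n_stacked):
    # Dark subtraction at 1/n_darks of a bit needs ~log2(n_darks)
    # fractional bits; summing n_stacked subs needs ~log2(n_stacked)
    # more bits on top of the camera's own depth.
    dark_bits = math.ceil(math.log2(n_darks)) if n_darks > 1 else 0
    stack_bits = math.ceil(math.log2(n_stacked)) if n_stacked > 1 else 0
    return camera_bits + dark_bits + stack_bits

# 16-bit camera, 50-sub masterdark, 10 stacked subs: 16 + 6 + 4 = 26,
# just past the 24-bit mantissa of 32-bit floating point.
print(bits_needed(16, 50, 10))   # 26
print(bits_needed(14, 50, 40))   # 14 + 6 + 6 = 26
```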

This was for deep sky images; video files I don't have much experience with. But from home theatre I know 10-bit is far better than 8-bit when processing the video signal.

Here is a list of what software I use:
http://www.astrofriend.eu/astronomy/...nt.html#part05

Lars

Last edited by Astrofriend; 01-02-2019 at 07:54 AM.
  #20  
Old 01-02-2019, 10:37 AM
Astrofriend (Lars)
Registered User
Join Date: Aug 2017
Location: Stockholm, Sweden
Posts: 144
Now I have started to make a calculator that indicates the theoretical bit resolution that's needed.

Lots of work left, but you can already see the idea behind it.

At the moment it can take the bits from the camera and the number of sub darks used, or even zero darks.

Update:
Now most of the functions are in place. There is still a big risk of errors in the equations, but I correct them now and then.

You can try it here:
http://www.astrofriend.eu/astronomy/...esolution.html

Play with the figures!

Everything can be wrong now in the beginning!

/Lars

Last edited by Astrofriend; 02-02-2019 at 10:08 AM.