IceInSpace > Equipment > Astrophotography and Imaging Equipment and Discussions
#41 | 11-07-2010, 09:07 PM
multiweb (Marc), ze frogginator
Join Date: Oct 2007 | Location: Sydney | Posts: 22,060
Quote:
Originally Posted by irwjager View Post
The removal of the skyglow frees up dynamic range, which can be used to bin an image more until the whole of the dynamic range is used up. This would be fractional binning obviously and not a neat 2x2 - think 2.25x2.25 or thereabouts. The image would be smaller than the 2x2 hardware binned image with the light pollution still in.

You could do the same thing to the hardware binned image though - take out the light pollution and then software bin it again until the full dynamic range is used.

I'd be happy to demonstrate it!
Good stuff. I'll email you some raw fits.
#42 | 12-07-2010, 07:54 AM
irwjager (Ivo), Registered User
Join Date: Apr 2010 | Location: Melbourne | Posts: 532
Quote:
Originally Posted by multiweb View Post
Good stuff. I'll email you some raw fits.
Cool, thanks Marc! I'll record the intermediate images, so it's easier for people to see what's going on and reproduce (or criticize and shoot down ). I'll host them somewhere else, so we can scrutinize some high quality TIFFs instead of 200k blotchy JPEGs.

I'll also show how some debayering algorithms are better suited than others to dealing with noisy input.
#43 | 12-07-2010, 09:42 AM
Bassnut (Fred), Narrowfield rules!
Join Date: Nov 2006 | Location: Torquay | Posts: 5,064
Here's a thread on software binning on OSC - click on each post down the list: http://forums.dpreview.com/forums/re...ssage=29133466
#44 | 12-07-2010, 10:23 AM
irwjager (Ivo), Registered User
Join Date: Apr 2010 | Location: Melbourne | Posts: 532
Quote:
Originally Posted by Bassnut View Post
Here's a thread on software binning on OSC - click on each post down the list: http://forums.dpreview.com/forums/re...ssage=29133466
This thread is about the merits (or not) of software binning in order to decrease noise by means of averaging (not summing).

It's mostly useless for normal photography - there's a good few articles about it on the web. Averaging may look better in normal photography, but what you're really doing is applying a low pass filter. Any high frequency information is lost in the image (along with the noise - random noise is also high frequency).

However, astrophotography is a bit of a different beast, in that high frequency information may not be present, due to your CCD's resolving power being larger than seeing conditions require. You can tell if that's the case if your image looks 'soft' and is devoid of high frequency signal (except for noise). An image like this is a good candidate for binning by averaging.

It can be a boon for astrophotography under the right conditions, but don't expect it to do anything for the signal-to-noise ratio in your holiday snaps...
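For anyone who wants to try it, here is a minimal numpy sketch of 2x2 software binning by averaging (illustrative code of my own, not from any particular package):

```python
import numpy as np

def bin2x2_average(img):
    """Software-bin a 2D image 2x2 by averaging each block of four pixels."""
    h, w = img.shape
    img = img[:h - h % 2, :w - w % 2]          # trim odd rows/columns
    return img.reshape(img.shape[0] // 2, 2,
                       img.shape[1] // 2, 2).mean(axis=(1, 3))

# A flat frame with purely random noise: averaging four pixels
# cuts uncorrelated noise by a factor of ~2
rng = np.random.default_rng(42)
noisy = rng.normal(100.0, 10.0, size=(512, 512))
binned = bin2x2_average(noisy)
print(noisy.std(), binned.std())
```

Run this on a frame with real detail (rather than pure noise) and you'll see the low-pass effect described above: the noise drops, but so does any high-frequency signal.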
#45 | 12-07-2010, 10:59 AM
Bassnut (Fred), Narrowfield rules!
Join Date: Nov 2006 | Location: Torquay | Posts: 5,064
Yes, terrestrial would be different there, but the DL help file mentions this about averaging in binning.

Binning and Resizing

Sometimes it is useful to shrink images. The Binning command does this in the same manner as binning inside the camera - simply combine the adjacent pixels together into a single "super-pixel". Unlike a CCD camera, this function averages the values instead of summing them; however the effect is otherwise identical. (Binning inside the camera may reduce total read noise, though, if the binning is done "on chip").
Simple binning does not ensure that the result meets the Nyquist Sampling Criterion. This means that small point sources like stars can all but disappear. The correct way to resize an image is to first low-pass filter it, so that no spatial frequencies exceed one half the new sample interval. This prevents the addition of aliasing distortion into the image. The Half Size command includes such a Nyquist filter.
#46 | 12-07-2010, 02:36 PM
irwjager (Ivo), Registered User
Join Date: Apr 2010 | Location: Melbourne | Posts: 532
Quote:
Originally Posted by Bassnut View Post
Yes, terrestrial would be different there, but the DL help file mentions this about averaging in binning.

Binning and Resizing

Sometimes it is useful to shrink images. The Binning command does this in the same manner as binning inside the camera – simply combine the adjacent pixels together into a single ”super-pixel”. Unlike a CCD camera, this function averages the values instead of summing them; however the effect is otherwise identical. (Binning inside the camera may reduce total read noise, though, if the binning is done "on chip").
Simple binning does not ensure that the result meets the Nyquist Sampling Criterion. This means that small point sources like stars can all but disappear. The correct way to resize an image is to first low-pass filter it, so that no spatial frequencies exceed one half the new sample interval. This prevents the addition of aliasing distortion into the image. The Half Size command includes such a Nyquist filter.
That's a great find Fred!

To see how binning by averaging works and why you need an input that is a bit blurry (due to seeing conditions, or artificially introduced by a low pass filter) download this image (from Bart van der Wolf's excellent site on comparing downsampling methods).

View it at 100% and it should look like ever narrowing concentric rings. It should be a single ring in the center, with no other rings being visible.

Now software bin (simple bin) this image at 2x2 and look at the result (again at 100%). Due to the presence of high frequency signals in the original, the binning (averaging) has introduced artifacts (you will see multiple rings), however any random noise that would've been present would have been greatly reduced.

Undo everything until you have your original back.

Now perform a simple blur (anything with a 1-pixel radius will do) and perform a 2x2 again. You should now see a perfect binned copy of the original image without aliasing. Again, any random noise that would've been present would have been greatly reduced.

The blur we applied acted as a low-pass filter, to eliminate any high frequency signal from the original (analogous to the slightly blurry image you get when imaging with a CCD that resolves more than seeing conditions permit).

We just simulated the "perfect" situation whereby binning by average can be used to improve the signal-to-noise ratio. Any noise that would have been present in the image would have been averaged and greatly reduced.

This little experiment also shows why the signal-to-noise ratio does not improve by binning willy-nilly (e.g. without suitable source material); yes you reduce the random noise, but you pay for it by introducing artifacts (aliasing).

This experiment also provides a plausible reason for why most people perceive a downsampled (e.g. binned by averaging) image as 'better', regardless of the suitability of the source; the aliasing that was introduced is far less noticeable than the noise in the original image. The aliasing can be quite subtle, even pleasing to human eyes. However, it is still information that doesn't belong in the image and still counts towards noise. It's just "pretty" noise.

So to recap;

Got a soft/blurry image at a high resolution and want to get rid of some noise? Perfect! You don't have any detail to lose anyway and your image is sufficiently blurred to counter any noticeable aliasing. Simply bin it by averaging.

Got a crisp image, but don't mind losing some detail (and resolution) to get rid of some noise? Blur it, then bin it by averaging (or choose a binning algorithm that does both steps for you - like Fred found in Maxim DL).
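The whole experiment can be sketched in a few lines of numpy/scipy - a synthetic "zone plate" (concentric rings of ever-rising frequency) stands in for Bart van der Wolf's test image, and a gaussian blur plays the role of the low-pass filter (the sigma value is an illustrative choice):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic zone plate: ring frequency rises with radius,
# reaching the Nyquist limit at the image edge
n = 512
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
zone = 0.5 + 0.5 * np.cos(np.pi * (x**2 + y**2) / n)

def bin2x2(img):
    # Simple 2x2 software bin by averaging
    return img.reshape(img.shape[0] // 2, 2,
                       img.shape[1] // 2, 2).mean(axis=(1, 3))

# 1) Bin directly: frequencies above the new Nyquist limit alias
#    into spurious extra rings
aliased = bin2x2(zone)

# 2) Low-pass first (the "Nyquist filter"), then bin: no aliasing,
#    at the cost of fine detail
filtered = bin2x2(gaussian_filter(zone, sigma=1.0))
```

View `aliased` and `filtered` at 100% and you'll see exactly the difference described above.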

Last edited by irwjager; 12-07-2010 at 07:31 PM.
#47 | 12-07-2010, 04:23 PM
Bassnut (Fred), Narrowfield rules!
Join Date: Nov 2006 | Location: Torquay | Posts: 5,064
I admire your persistence Ivo, good work, you've got me thinking more now, I'll look into this further. Can't help thinking though that just stacking large numbers of (sharpish) subs will get the same result without losing resolution. I'm a bit like Marc, would like to see the results without the bother of doing it myself 1st up.

I haven't heard of anyone else bothering with software binning, I must say.

Thanks for your effort and time on this Ivo.
#48 | 12-07-2010, 07:27 PM
irwjager (Ivo), Registered User
Join Date: Apr 2010 | Location: Melbourne | Posts: 532
Quote:
Originally Posted by Bassnut View Post
I admire your persistence Ivo, good work, you've got me thinking more now, I'll look into this further. Can't help thinking though that just stacking large numbers of (sharpish) subs will get the same result without losing resolution. I'm a bit like Marc, would like to see the results without the bother of doing it myself 1st up.

I haven't heard of anyone else bothering with software binning, I must say.

Thanks for your effort and time on this Ivo.
Thanks Fred! You're right of course - more subs is definitely the way to go. And no matter how clever your processing, you can't beat more and better quality data!

This would make the whole discussion rather academic, were it not for my situation where my equipment is 'basic' (to put it mildly), light pollution is very high (inner Melbourne) and all I have at my disposal is cheap processing power, the oversized CCD of a hacked $90 consumer digital compact camera and my 'mad coding skillz' .

My limitations are by choice and some would (rightfully) say I am indeed mad. However, I absolutely love the challenges these limitations pose. The old adage "necessity is the mother of invention" rings very true in my case. Through writing software (and a basic understanding of optics), I have overcome the chromatic aberration of my cheap first eyepieces, overcome the light pollution in my backyard, put the cheap (but big) CCD in the camera to good use, now have a debayering routine that better suits my astrophotography needs, and can now auto-guide my scope via my $10 homebrew interface.

I love how cheap digital compact cameras & webcams, even cheaper CPU cycles and free Open Source Software have put astrophotography within reach of the ordinary sidewalk astronomer (i.e me). Software is absolutely essential in order to capitalize on the low cost of these commodities - it's what ties it all together and repurposes it for astrophotography.

With my huge, but lesser quality, uncooled CCD, I imagine my processing chain looks totally different from yours. I got resolution in spades, but am drowning in noise and fuzz. I *need* things like software binning, and have to use every trick I know to get rid of noise and recover precious signal.

It is my hope that more enthusiasts on a shoestring budget will follow suit, recognize the potential of off-the-shelf hardware and use my tools (or contribute their own). Hopefully you'll see quite a few more people bothering with software binning then!
#49 | 13-07-2010, 01:08 AM
ericwbenson (Eric), Registered User
Join Date: Sep 2009 | Location: Adelaide, Australia | Posts: 209
Adding my two cents here:
If you examine the CCD equation and rearrange it a bit, you can make some observations about binning and other stuff. First, the simplified equation for the SNR of a square object (easier math!) spanning at least a few pixels in width (so we are above critical sampling) recorded on a CCD:

SNR = S / sqrt(S + Nsky + Nd + npix² * nr²)

S: Signal [e-]
Nsky: sky signal [e-]
Nd: dark signal [e-]
nr: CCD read noise [e-/pixel]
npix: object digital width [pixel]

So we can expand the terms in the equation into other measurable/comparable factors using:

S = QE * t * A * w² * Cobj

QE: CCD average quantum efficiency [e-/photon]
t: time [sec]
A: telescope collection area [m²]
w: object angular width [arcsec]
Cobj: average photon rate from the object hitting the earth [photons/sec/m²/arcsec²]; this is related to the object brightness quoted in [magnitudes/arcsec²] via a conversion factor - since flux is actually measured in janskys, we'll skip that part for now! Suffice to say that a 22 mag star = 14 photons/sec for every square meter of telescope area.

Nsky = QE * t * A * w² * Csky

Csky: average photon rate from the sky background [photons/sec/m²/arcsec²], due to light pollution, moon, aurora and sky glow; again, this is related to what a sky quality meter reads in [magnitudes/arcsec²]

Nd = t * Cd * npix²
Cd: average dark signal per pixel from heat [e-/sec/pixel]. This is a function of CCD temperature and usually halves for every ~6 deg. C drop. For Kodak chips it is in the vicinity of 0.1 e-/pix/sec at -20C.

So the telescope and CCD parameters give us the image scale:
Image scale = 206 * p / f [arcsec /pixel]
p: CCD pixel size [µm]
f: telescope focal length [mm]

and now we can relate the object digital size to its angular size in the sky:
npix = w * f / (206 * p)

Introducing these factors into the original equation gives:

SNR = QE * t * A * Cobj * w² / sqrt[ (QE * t * A * w² * Cobj) + (QE * t * A * w² * Csky) + (t * Cd * npix²) + (npix² * nr²) ]

grouping terms and replacing npix

SNR = QE * t * A * Cobj * w² / sqrt[ (QE * t * A * w²) * (Cobj + Csky) + (t * (w*f/(206*p))²) * (Cd + nr²/t) ]

pulling the t and w out of the sqrt and cancelling a factor of w against the numerator:

SNR = QE * sqrt(t) * A * Cobj * w / sqrt[ QE * A * (Cobj + Csky) + (f/(206*p))² * (Cd + nr²/t) ]

So if you can ignore light pollution, dark current and read noise (a perfect camera in outer space), then telescope area, integration time, CCD QE and object brightness are all equally important, as is the angular size of the object.
Pixel size only matters if QE*A*(Csky+Cobj) is small compared to the dark current and read noise. Of course pixel size matters very much for system resolution!
Therefore when the object is faint under a dark sky, or when using a narrow band filter (where Cobj is not affected but Csky is greatly reduced), increasing the pixel size (i.e. binning) reduces the effect of read noise. N.B. the dark-current noise remains unchanged, since binning 2x2 makes Cd 4x bigger!

Let's look at some real numbers:
Csky (naked eye limiting mag ~ 6.0) ~ 21 mag/arcsec² = 18 photons/sec/m² in green filter
Cobj (e.g. a 12th mag. galaxy 30"x30" in size) ~ 19 mag/arcsec² = 82 photons/sec/m² in green filter
A (for a 10" scope) = pi * 0.254² / 4 = 0.05 m²
QE = 0.5
f = 2500 mm, p = 9 µm
Cd = 0.1 e-/pix/sec
nr = 15 e-
exposure time t = 600 sec

Inside the sqrt we have:
0.5 * 0.05 * (82 + 18) + (2500/(206*9))² * (0.1 + 225/600)
0.025 * 100 + 1.8 * (0.1 + 0.375)
2.5 + 0.855
So the read noise is more important than the dark noise, but the shot noise from the galaxy itself dominates all other noise sources. In light polluted skies (18 mag/sq" = 300 photons/sec/m²), the Csky term easily dominates above all else, making binning, pixel size, and to some extent cooling, sorta moot.
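The two terms under the sqrt are easy to reproduce in a few lines of Python (taking the object + sky photon rate as 82 + 18 = 100, matching the 0.025 * 100 step; small differences from the figures above are just rounding):

```python
import math

QE = 0.5                                  # quantum efficiency [e-/photon]
A = math.pi * 0.254**2 / 4                # 10" aperture area [m^2], ~0.05
Cobj, Csky = 82.0, 18.0                   # photon rates [photons/sec/m^2]
f, p = 2500.0, 9.0                        # focal length [mm], pixel size [um]
Cd, nr, t = 0.1, 15.0, 600.0              # dark rate, read noise, exposure

shot_term = QE * A * (Cobj + Csky)                     # object + sky shot noise
pixel_term = (f / (206 * p))**2 * (Cd + nr**2 / t)     # dark + read noise
print(round(shot_term, 2), round(pixel_term, 3))       # roughly the 2.5 + 0.855 above
```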

BTW Csky is easily measured if you know your CCD QE and gain [e-/ADU]. Take an exposure of a few minutes, record the pixel value in an empty area, subtract any software pedestal (MaxIm/CCDSoft add 100 ADU for technical reasons), multiply by the gain to get electrons, divide by QE, telescope area and exposure time and you get flux in photons/sec/m˛.
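As an illustrative sketch of that measurement (the helper name and example numbers are made up; the 100 ADU pedestal is the MaxIm/CCDSoft convention mentioned above):

```python
def sky_flux(mean_adu, gain, qe, area_m2, t_sec, pedestal=100.0):
    """Mean ADU of an empty patch of sky -> flux in photons/sec/m^2."""
    electrons = (mean_adu - pedestal) * gain   # strip pedestal, ADU -> e-
    photons = electrons / qe                   # e- -> detected photons
    return photons / (area_m2 * t_sec)         # per m^2 of aperture, per second

# Hypothetical numbers: 550 ADU sky, gain 1.0 e-/ADU, QE 0.5,
# a 10" scope (0.05 m^2), 600 s exposure
flux = sky_flux(550.0, 1.0, 0.5, 0.05, 600.0)
print(flux)
```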

EB
#50 | 13-07-2010, 09:01 AM
irwjager (Ivo), Registered User
Join Date: Apr 2010 | Location: Melbourne | Posts: 532
Quote:
Originally Posted by ericwbenson View Post
Adding my two cents here:
I'd rate that at a little more than two cents!

Awesome explanation (and application) of the math behind the whole concept.

If anyone wants to toy around with numbers of their own, there's a bunch of on-line calculators and spreadsheets available via this site (scroll down to about 3/4ths of the page).

One of them (a great on-line calculator by John Smith) is particularly useful for quickly demonstrating how varying your circumstances changes the proportions of the different noise sources in your signal, as well as at which point read-out noise becomes a serious issue (hint: it's exactly the sort of circumstances under which Bassnut's experience taught him to resort to hardware binning - dark site, broad area, faint object, narrow band).
#51 | 14-07-2010, 12:26 PM
irwjager (Ivo), Registered User
Join Date: Apr 2010 | Location: Melbourne | Posts: 532
Quote:
Originally Posted by multiweb View Post
Good stuff. I'll email you some raw fits.
I received Marc's images and did the following. (Please be aware the image files are quite big).

This is the image I got from Marc.

First I created a light pollution map. I used StarWipe, but you might want to use something else;
StarWipe --in=bin_test.tif --out=bin_test_lpmap.tiff --scale=5 --window=30 --mode=global --maponly
That resulted in this light pollution map.

Next I subtracted the light pollution map from the original image. Again, I used StarWipe for this as well, but importing the original image in PhotoShop, then importing the light pollution map in a separate layer and setting that layer to 'subtract' will do the same thing;
StarWipe --in=bin_test.tif --inmap=bin_test_lpmap.tiff --mode=global --out=bin_test_wiped.tiff --nonormalize
Now we're left with a dimmer version of the original, but with the light pollution removed. However, now I'm no longer using the full dynamic range I have at my disposal - I can still crank the brightness up without clipping my histogram to the right.

We should increase the signal in the image so that the image is bright again, but not so bright that my histogram starts clipping to the right.

At this point, I use (fractional) software binning (approx 1.41 x 1.41 in this case) to crank up the brightness again, instead of stretching the brightness levels of the pixels.
StarBright --in=bin_test_wiped.tiff --scale=71 --mode=cap
With this as the final result. (Notice though that the image is 71% of the original size in the X and Y directions.)

So what's the difference between stretching and binning?

If I stretch the brightness levels, I trade precision for signal.

If I bin the image, I trade resolution for signal.

Put another way:

If I stretch the brightness levels, gaps will start to appear in the histogram. Stretch the levels enough and you'll start to see banding, and noise will become more apparent.

If I bin the image, the histogram will stay intact and appear smooth. I can bin as much as I want without ever seeing any banding (though my image will get smaller and smaller and uselessly bright).

That's how (and why) I use software binning in a nutshell!
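StarWipe/StarBright aside, the same pipeline can be sketched generically in numpy/scipy (illustrative only - a flat synthetic light-pollution map, a gaussian low-pass before the fractional resample, and a multiply by the area ratio so the averaged pixels act like summed bins):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def wipe_and_bin(img, lp_map, factor=1.0 / 1.41):
    """Subtract a light-pollution map, then fractionally 'bin' ~1.41x1.41."""
    wiped = np.clip(img - lp_map, 0.0, None)             # remove skyglow
    smooth = gaussian_filter(wiped, sigma=0.5 / factor)  # low-pass (anti-alias)
    small = zoom(smooth, factor, order=1)                # resample to ~71% size
    return small / factor**2                             # average -> sum: ~2x brighter

rng = np.random.default_rng(0)
img = rng.normal(1000.0, 20.0, size=(400, 400))  # synthetic frame with skyglow
lp = np.full_like(img, 800.0)                    # flat light-pollution map
result = wipe_and_bin(img, lp)
print(result.shape, result.mean())
```

The division by factor² is what "cranks the brightness back up" without stretching individual pixel values.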
#52 | 14-07-2010, 07:41 PM
ericwbenson (Eric), Registered User
Join Date: Sep 2009 | Location: Adelaide, Australia | Posts: 209
Quote:
Originally Posted by irwjager View Post
So what's the difference between stretching and binning?

If I stretch the brightness levels, I trade off precision for signal

If I bin the image, I trade resolution for signal.
or
If I stretch the brightness levels, gaps will start to appear in the histogram. Stretch the levels enough and you'll start to see banding and noise will become more apparent.

If I bin the image, the histogram will stay intact and appear smooth. I can bin as much as I want without ever seeing any banding (though my image will get smaller and smaller and uselessly bright).

That's how (and why) I use software binning in a nutshell!
Wait a sec, it sounds as if you are working in only 8 bits? CCDs produce images with much more than 8 bits of dynamic range (10-14 is typical). And stacked images have even more (16 images in a stack could in theory add 4 more bits). Also the camera quantization levels are generally set to be smaller than the read noise, so the discrete digital levels are not a limiting factor. That's why MaxIm DL and Mira do their processing in 32-bit floating point: none of this banding occurs and there is no trade-off between signal range and precision. Although I have seen banding on the display image when I had a large, smooth, ultra-low-noise profile, like that produced by a giant elliptical galaxy I had tons of exposure time on.

You would only want to software bin to make the image a) smaller and/or b) smoother by giving up resolution.

Cheers,
EB
#53 | 14-07-2010, 08:35 PM
irwjager (Ivo), Registered User
Join Date: Apr 2010 | Location: Melbourne | Posts: 532
Quote:
Originally Posted by ericwbenson View Post
Wait a sec, it sounds as if you are working
in only 8bits?
Nope, 10-bit source/CCD, 64-bit integer processing and 16-bit TIFF output (as in the links).
Quote:
Originally Posted by ericwbenson View Post
CCDs produce images with much more than 8bits of dynamic range (10-14 is typical). And stacked images have even more (16 images in a stack could in theory get 4 more bits). Also the camera quantization levels are generally set to be smaller than the read noise. That way the discreet digital levels are not a limiting factor. That's why MaxIm DL and Mira do their processing in 32bit floating point, none of this banding occurs and there is no trade off in signal range-precision.
I couldn't disagree more. There *is* signal degradation and you *will* start to notice it once you manipulate the signal. See how you go when you multiply the signal, such as when making a high dynamic range composite from a high bit-depth image (i.e. you multiply the signal progressively and blend the non-overexposed parts with the original to bring out more detail in, let's say, 8-bit intervals).

I do agree that there's a certain color depth, beyond which it becomes hard to see any difference (your screen can only represent 8-bits each color channel). That doesn't mean however that the imprecision isn't still there, waiting to bite you in the a** once you start manipulating your data further (such as in the HDR scenario above).

Also, you can still get banding with 32-bit floating point numbers. Floating point numbers can't represent every real number, and rounding errors creep in quickly and grow the more you manipulate the data. Unless MaxIm and Mira use floats to store integers in - in which case you can store integers up to 2^24 exactly before things go wrong. But that would defeat the purpose - why not just use a 32-bit integer... Fidelity and accuracy are not a float's strong points once you start performing arithmetic on them; for example, one of the biggest sins you can commit as a programmer in the finance industry is to use floats...
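Both points are easy to demonstrate; in Python/numpy, single precision stops representing consecutive integers at exactly 2^24, and even trivial decimals are inexact:

```python
import numpy as np

# Single precision runs out of integer precision at 2**24
a = np.float32(2**24)
print(a + np.float32(1.0) == a)   # True: the +1 is rounded away

# And simple decimal fractions are inexact in binary floating point
print(0.1 + 0.2 == 0.3)           # False in IEEE-754 binary floats
```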