#1  01-03-2013, 10:04 AM
Shiraz (Ray)
An opinion on where the icx694 fits in

Hi

I have recently completed a detailed design exercise for a hi-res imaging system. One thing that became clear during that process was how interesting the Sony icx694 CCD is. There has been a lot of discussion here and elsewhere about this chip, so this is a summary of where I think it might fit in, based on the results of the modelling. I have also ordered a 694 camera and will be very interested to see how it performs in real life.

The icx694 is a capable 6 megapixel CCD, with an absolute QE of >0.75 peak, very low read and thermal noise, and a full 12 bits of dynamic range. It is a relatively small chip with small pixels, and its combination of characteristics requires somewhat different design assumptions from those used in the past.

First off, you cannot just bolt one onto the back of your existing scope and expect it to be more sensitive than your big-pixel Kodak CCD – it won't be. The small size (4.5 microns) of the Sony pixels means that, although the resolution may be better, the signal levels will not be – the combination of a given scope and the 694 will be roughly 1/3 as sensitive as the same scope with a typical 9 micron sensor. Expect to see posts along the lines of "the 694 is NOT very sensitive" when people do a simple camera comparison without any consideration of pixel scale…

If you really want to tap into the extra sensitivity of the chip, you need to use a scope with a shorter focal length so that you get an appropriate pixel scale – then the extra sensitivity and low noise will enable you to reduce imaging times significantly (by roughly half at Ha) with the same aperture and resolution. The conventional wisdom that hi-res imaging requires a long focal length scope does not apply, since the 694 only needs a medium focal length scope to obtain seeing-limited performance. For example, getting under 1 arcsec/pixel sampling with a K11002 requires 2m of focal length (e.g. a 10" f8) – the same sampling is available from the 694 at 1m focal length (e.g. a 10" f4). In fact, in Australian seeing, it probably does not make much sense to consider a scope of much more than about 1.2m focal length for any purpose with this chip – with a fast scope at this focal length, you will reach seeing-limited resolution in all but exceptional conditions.
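To make the arithmetic concrete, here is a rough Python sketch of the pixel-scale sums above (illustrative only, not the spreadsheet model; the pixel sizes and focal lengths are nominal example figures):

[CODE]
# Pixel scale ("/pixel) = 206.265 * pixel size (um) / focal length (mm)
def pixel_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

# 9 um Kodak-class pixels at 2 m vs the icx694's 4.54 um pixels at 1 m
print(pixel_scale(9.0, 2000))    # ~0.93 "/pixel
print(pixel_scale(4.54, 1000))   # ~0.94 "/pixel - the same sampling at half the focal length

# On the *same* scope, the photons landing on each pixel scale with pixel area,
# so the small pixel intercepts only about a quarter as many photons:
print((4.54 / 9.0) ** 2)         # ~0.25 (the ~1/3 figure above also credits the 694's higher QE)
[/CODE]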

So what will you get from the 694? In a nutshell, high resolution and high sensitivity with a moderate field of view from a short to medium focal length scope. This chip makes a lot of sense if you currently use a well-corrected fast Newtonian or APO for wide-field imaging. With a 694 camera, you can switch your wide-field scope over to hi-res imaging when the seeing is good. This chip is up to twice as sensitive to Ha as most of the alternatives and has lower noise, so use it with APOs (particularly the fast Petzvals) and fast Newtonians to produce hi-res narrowband images with very little noise. The new opportunities for most short-scope owners look to be significant and exciting.

The 694 may also help a bit with cost. Although 694 cameras are relatively expensive, you may save on the associated equipment - filters/wheels for example may be less costly and bulky at 1.25”. The chip does not need extensive cooling, a shutter or RBI flooding, so cameras can be lightweight, allowing lighter scopes/focusers to carry them. Since everything can be lightweight, it may be possible to use a less expensive mount. In addition, if you can use one scope for both wide-field and hi res imaging by just changing cameras, the cost savings may be even more significant.

In summary then, if you have already invested in a high res imaging system based on a longer fl scope and a camera with large pixels, don’t bother with the 694 – it is not suitable for your scope. Similarly, the 694 is not a good choice if you want to take panoramic images of wide swathes of sky. However, if you have a well corrected fast Newtonian or a small to medium APO and you want to image smaller nebulae and globulars, galaxies or planetary nebulae, then the 694 offers a new capability to resolve fine detail. It could also be suitable for solar and lunar imaging. The Sony icx285 and Kodak KAF8300 went some way down the short scope path, but the icx694 gives even higher resolution and excellent SNR performance. It is definitely worth a close look for a lot of amateur astronomers.

Thanks for reading. Regards ray

#2  01-03-2013, 10:59 AM
gregbradley
I think you have made an error there. QE determines sensitivity not pixel size. That's what QE means - what percentage of photons hitting the pixel result in a signal.

Pixel size determines resolution.

It will be more sensitive than the 9 micron cameras with 60% QE. I have both the 16803 and the 8300. They both have 60% QE. The 8300 is 5.4 microns, the 16803 is 9 microns. I don't see any difference in sensitivity. They seem much the same on several different scopes. The resolution is different, as is the FOV, and more importantly the smaller wells make it more prone to saturating on bright stars, losing colour and showing halos around them.

There may be a slight loss with smaller pixels due to the antiblooming channels around the pixel taking up a larger percentage of the surface area. Also, perhaps the QE of these sensors varies with the f-ratio of the scope, in that super-fast scopes give a steeper light cone less suited to a pixel receiving light.

The SBIG ST10XME has been the popular camera with the highest QE, and it has 6.6 micron pixels (and is non-antiblooming).

Greg.
#3  01-03-2013, 01:38 PM
Shiraz (Ray)
Quote:
Originally Posted by gregbradley View Post
I think you have made an error there. QE determines sensitivity not pixel size. That's what QE means - what percentage of photons hitting the pixel result in a signal. Pixel size determines resolution. ...
Thanks for the response Greg.

There is no error. The number of detected photons (the signal) depends on the quantum efficiency and on the number of photons actually entering the pixel - and that depends entirely on how big the pixel is: a big pixel intercepts more photons than a small one on a given scope. That's how binning works - it makes bigger effective pixels to increase the sensitivity. Sampling is maybe the single most important determinant of how sensitive a system is, and yet it often seems to be relegated to a minor afterthought.
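As a rough illustration in Python (the photon flux below is a made-up number and the QE values are nominal assumptions; only the ratio between the two cameras matters):

[CODE]
# Detected signal per pixel = QE * photon flux per unit area * pixel area * exposure time
flux = 2.0        # photons / um^2 / s at the focal plane (arbitrary example value)
exposure = 300.0  # seconds

def detected_electrons(qe, pixel_um):
    return qe * flux * pixel_um ** 2 * exposure

big   = detected_electrons(0.50, 9.0)   # typical large-pixel Kodak chip, nominal 50% QE
small = detected_electrons(0.75, 4.54)  # icx694, nominal 75% peak QE

print(small / big)  # ~0.38 - fewer photons per pixel on the same scope, despite the higher QE
# Binning 2x2, or moving to a shorter focal length, restores the per-pixel photon count.
[/CODE]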

Regards Ray

For confirmation, the following is a quote from the Apogee website at http://www.ccd.com/ccd113.html

Note that the assumed seeing is somewhat worse than what I have worked with, but the message is the same.

"Pixel Sensitivity

The larger the pixel, the more sensitive the camera will be for any given focal length. This is also a sampling issue. Under excellent seeing conditions, a camera with 24µ pixels on a telescope of 2000 mm focal length will produce images that are very close to being undersampled. For faint deepsky objects, however, these large pixels will outperform (in terms of sensitivity) a camera with 9µ pixels on the same telescope. This is because a camera with 9µ pixels being used on a telescope with 2000 mm focal length will produce images that are nearly oversampled. That is, there are too many pixels making up each star image. The result will be reduced sensitivity, but better resolution."

#4  01-03-2013, 03:05 PM
gregbradley
That would be true if there were large gaps between pixels, which I don't think there are.

Empirical evidence says it's the other way round. If your theory were correct I wouldn't see the same sensitivity between the Microline 8300 and the Proline 16803, and I do see the same. So the proof of the pudding is in the eating, so to speak.

I see the same sorts of theories on DPreview about DSLR cameras and megapixels. The Nikon D800E proved all that incorrect. It's about the most sensitive camera out there, with really low noise, small pixels and 57% QE. That's a Sony Exmor sensor.

The Fuji X-E1, Sony NEX-5 and 6, Nikon D7000 and Canon 60D/7D all show low noise with small pixels.

I don't see so many of those posts now, as the proof is in the many images showing low noise and high sensitivity. The theory was that small pixels give less sensitivity and higher noise. What that doesn't take into account is the constant improvement made to sensors by clever engineers looking for a boost in performance. Technology has moved on.

A lot of phone manufacturers are going for the newish Sony stacked backlit CMOS sensor. It's 13 MP with 1.2 micron pixels! Yet it's twice as bright as other sensors.

These Sony engineers are at the cutting edge. I read about constant improvements in sensor design: from clever colour filter arrays from Toshiba that have no light loss compared to the regular dye Bayer filter arrays, to a clever use by Aptina of a larger capacitor with a gate that increases well depth as the well fills. Sony Exmor has some clever analogue-to-digital circuitry in the columns, which they bought out another company to acquire, and it is a major reason why their Exmor sensors are better than anyone else's. Plus their backlit stacked sensor. Even Kodak/TrueSense Imaging's clear RGB filter array is looking a bit dated and superseded already.

I am not sure what new technology the Sony engineers have in that CCD, but clearly some of these advancements have made their way into it to get that sort of performance out of a small-pixel camera.

Mike Sidonio will be using his soon, so his results will be the final evidence and we will know for sure. Otherwise the rest is speculation.

Greg.
#5  01-03-2013, 04:09 PM
Peter Ward
Quote:
Originally Posted by Shiraz View Post
Thanks for the response Greg.

There is no error. The number of detected photons (the signal) depends on the quantum efficiency and the number of photons actually entering the pixel -
Yes and no.

QE is purely a measure of how many photons are detected vs the total number of photons falling onto a pixel.

This is not the same as system sensitivity...which you correctly point out should take into account pixel size.

Given the current Sydney weather, the "bucket" analogy works well here.

Put a small cup outside in the rain for a time, and it will fill with, say, 3 cm of water.

A big bucket next to it will also collect 3 cm of water, but when you empty the bucket you may have many cups' worth of water... literally buckets of signal compared to our small cup.

You could consider QE to be how much water the cup or bucket loses before you measure their contents.
#6  01-03-2013, 07:43 PM
PRejto (Peter)
Hi Ray,

Thanks for this interesting post. Like everywhere this is discussed, nobody can ever agree!! But I think Greg is right in the sense that, as these cameras are used more and more, certain facts will no doubt become clearer. I'm in no position to argue the technical side of things...

I'm wondering if you might share which camera you decided to buy (and perhaps why), and which scope you intend to test this with?

I wonder how it might perform with my TEC140 at f7 (0.95 arcsec/pix) compared with my KAF8300 (1.14 arcsec/pix)? Not a significant improvement in resolution, but would I see much improvement in sensitivity for RGB and Ha? Would it be worth the decreased FOV? Of course, as reported by several people, the possibility of not needing darks or deep cooling, and the lighter package, are all appealing selling points.

Thanks,

Peter
#7  01-03-2013, 08:22 PM
Shiraz (Ray)
Quote:
Originally Posted by gregbradley View Post
That would be true if there were large gaps between pixels which I don't think there is. Empirical evidence says its the other way round. If your theory were correct I wouldn't see the same sensitivity between Microline 8300 and Proline 16803 and I do see the same. ...
Thanks Greg. I looked up your chips - the reason you see the same ADU from the two is that the 8300 has about 3x the internal gain of the 16803. I guess that is deliberate, to make it easier to change cameras. However, when you see the same sort of signal levels on the two, the 8300 is getting there with about 1/3 as many photons. At that gain, you should see more noise from the 8300.
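To illustrate the point in Python (the gain figures below are assumed round numbers for the sketch, not the manufacturers' specs):

[CODE]
# Similar ADU does not mean similar photon counts - it depends on the camera gain (e-/ADU).
gain_8300  = 0.4   # e- per ADU (assumed)
gain_16803 = 1.3   # e- per ADU (assumed, roughly 3x the 8300 figure)

adu = 10000        # the same displayed signal level on both cameras
print(adu * gain_8300)    # ~4,000 electrons actually detected
print(adu * gain_16803)   # ~13,000 electrons actually detected
# Same ADU from about 1/3 of the photons on the 8300, so more shot noise at the same apparent level.
[/CODE]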

If you see similar ADU from your various scopes, I am guessing that they all have similar focal ratios. You could try a test and upset the pixel scale by putting a 2x Barlow in and comparing what signal you get with and without it.

On the issue of sensitive small pixels, I couldn't agree more. The new generation of chips, including the 694, has very sensitive, low-noise pixels, and that was what I was getting at in the thread. There is nothing inherently insensitive about small pixels - in fact there are good reasons why they can be sensitive and low noise at the same time. Where we disagree is on the effect of sampling - I contend that even high-performance pixels will be behind the eight ball if they cannot get many photons, and that is exactly what happens with oversampling. Getting the pixel scale right is a major sensitivity issue and it is often largely ignored.

Quote:
Originally Posted by Peter Ward View Post
Yes and no. QE is purely a measure of how many photons are detected vs the total number of photons falling onto a pixel. This is not the same as system sensitivity... which you correctly point out should take into account pixel size. ...
That's a very good analogy, thanks Peter



More generally, it may be worth a short explanation here. The main aspects of the model I use are adapted from a paper (possibly a lecture) published by the University of California Observatories at http://www.ucolick.org/~bolte/AY257/s_n.pdf and from http://learn.hamamatsu.com/articles/ccdsnr.html - the maths is quite straightforward and all aspects of the model are well validated elsewhere. Short of me making a mistake with the Excel formulae, it is going to be pretty reliable - it is basic electro-optical engineering and certainly not my "theory".

The whole idea of doing this was to use the same method that professional observatories use to design new gear - they model it to death until they understand exactly what it will do, way before any glass is ground. It turned out that a simple model could do this reasonably effectively for a small scope - all required parameters are readily available and few assumptions need be made.
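For anyone interested, the core of those references boils down to the standard CCD SNR equation; a minimal Python sketch (the numbers below are placeholders, not the inputs actually used in the model):

[CODE]
from math import sqrt

def snr(signal_e, sky_e, dark_e, read_noise_e, n_pixels, n_subs):
    """Standard CCD SNR estimate (per the UCO / Hamamatsu references):
    SNR = S / sqrt(S + npix * nsubs * (B + D + R^2)), with S summed over the stacked subs.
    signal_e, sky_e and dark_e are electrons per sub (sky and dark are per pixel)."""
    s = signal_e * n_subs
    noise_sq = s + n_pixels * n_subs * (sky_e + dark_e + read_noise_e ** 2)
    return s / sqrt(noise_sq)

# Placeholder example: a faint source spread over 9 pixels, 10 x 5-minute subs
print(snr(signal_e=500, sky_e=50, dark_e=2, read_noise_e=5, n_pixels=9, n_subs=10))
[/CODE]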

Once the model was working OK, it seemed that it could be a useful way to cut through some of the hype and misunderstanding that has built up around what the icx694 actually offers and where it is deficient - and that was the main thrust of this thread. It seems that it has not been a completely successful enterprise.

Regards ray

#8  01-03-2013, 08:51 PM
Peter Ward
Quote:
Originally Posted by Shiraz View Post
That's a very good analogy, thanks Peter

Regards ray
No problemo.

The bucket analogy goes further (at one stage CCDs were nearly called BBDs, literally Bucket Brigade Devices).

It also gives good insight into the dynamic range of various sensors.

Our two rain (a la photon) gauges, one made from a bucket, the other a cup, each hold x ml of water.

Let's say it was really raining and we captured 12 cm of water... our cup is overflowing (saturated...) so we really don't know how much rain fell... but our bucket still has a good deal of volume left before it fills, so we can still quantify the downpour.

Not only that, but our (big pixel) bucket holds a good deal more water, so the number of millilitres (photons) we can measure, from empty to full, is in the thousands, compared to our cup which can only hold a few hundred at best.

In short big pixels give more signal and higher dynamic range... the rub is: to get good (angular) resolution, you'll also need big optics.
#9  01-03-2013, 08:52 PM
Shiraz (Ray)
Quote:
Originally Posted by PRejto View Post
Hi Ray, ... I'm wondering if you might share which camera you decided to buy (and perhaps why), and which scope you intend to test this with? I wonder how it might perform with my TEC140 at f7 (.95 arcsec/pix) compared with my KAF8300 (1.14 arcsec/pix)? ...
Thanks Peter. I chose an SX camera because 1. it is available locally, and 2. it has a mechanism to adjust the squaring of the sensor - this is only an issue with fast scopes, and I intend to use the chip with either my ancient 8 inch f4 or a new 10 inch f4.

I will put your gear into the model and post results later - I am not able to do it tonight.

regards ray
#10  01-03-2013, 09:45 PM
RickS (Rick)
Quote:
Originally Posted by Shiraz View Post
More generally, it may be worth a short explanation here. The main aspects of the model I use are adapted from a paper (possibly a lecture) published by the University of California Observatories http://www.ucolick.org/~bolte/AY257/s_n.pdf
The maths is quite straightforward and all aspects of the model are well validated elsewhere - short of me making a mistake with the EXCEL formulae, it is going to be pretty reliable - it is basic electro-optical engineering and certainly not my "theory".
Thanks for the pointer to the lecture notes, Ray. Interesting stuff!

Cheers,
Rick.
#11  01-03-2013, 11:03 PM
Shiraz (Ray)
Quote:
Originally Posted by Peter Ward View Post
No problemo. The bucket analogy goes further... In short big pixels give more signal and higher dynamic range... the rub is: to get good (angular) resolution, you'll also need big optics.
C'mon Peter - you don't seriously expect me to let that lot go through to the keeper do you?

Your analogy ran out of steam in the first post because it conveniently does not include noise - which, as you well know, is the other half of the dynamic range measure. A small pixel will have a smaller well, but the reduced real estate also makes it possible to reduce the noise - small-pixel chips can have good dynamic range.

If we look at the dynamic ranges of a few chips from published well depth and read noise data we find:
K8300 = 2700:1
icx694 = 4000:1
K11002 = 4500:1
K16803 = 11000:1
The icx694 is right up there with the competition, despite its small pixels. The only one it does not compete with here is the 16803, but a camera with that chip can, by itself, cost more than an entire system built around an 8300 or a 694. There have been some beautiful images produced by 8300s and 11002s - on that basis, the 694 has ample dynamic range.
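Those ratios come straight from full well divided by read noise; a quick Python sketch using the commonly quoted figures (treat the well depths and read noises as approximate):

[CODE]
from math import log10

# (full well e-, read noise e-) - commonly quoted figures, approximate only
chips = {
    "KAF-8300":  (25500, 9.0),
    "ICX694":    (20000, 5.0),
    "KAI-11002": (60000, 13.0),
    "KAF-16803": (100000, 9.0),
}

for name, (well, read_noise) in chips.items():
    dr = well / read_noise
    print(f"{name}: {dr:.0f}:1 ({20 * log10(dr):.0f} dB)")
[/CODE]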

"In short big pixels give more signal and higher dynamic range". ???? It has been shown above that the dynamic range of small pixel chips need not be significantly lower than that of large chips. And you conveniently forgot the bit about "but only if you use the same scope" when referring to the "more signal" bit. There is nothing inherently more sensitive about big pixels - they just intercept more photons in a given geometry than small ones so you gain sensitivity and lose resolution. If the pixel scale is matched to give the same resolution, small pixels on a short focal length scope give just as much signal as large pixels on a long focal length scope. I am sure you know this - why confuse the issue?

So what you should have actually said in your throwaway sentence is: "In short, small pixels give the same signal and about the same dynamic range as similar bigger ones when both are matched to scopes of appropriate focal lengths and with the same aperture". And that excludes the fact that the 694 has higher QE than the competition.

Regards Ray

#12  01-03-2013, 11:04 PM
Shiraz (Ray)
Quote:
Originally Posted by RickS View Post
Thanks for the pointer to the lecture notes, Ray. Interesting stuff!

Cheers,
Rick.
Thanks Rick - that was the best overall summary I found, but the same ideas are presented in many other places, e.g. http://learn.hamamatsu.com/articles/ccdsnr.html.

Regards Ray

#13  01-03-2013, 11:15 PM
RickS (Rick)
Quote:
Originally Posted by Shiraz View Post
thanks Rick - that was the best summary I found, but the same ideas are presented in many other places. Regards Ray
Ray,

I have read similar stuff in HAIP, articles by Craig Stark, etc., but I pick up a new idea or two each time I look at a new source. I've seen a few different takes on the maths for sky-limited exposure times and I'm hoping that soon I'll have the theory down well enough to make a call on which model is most correct.

Cheers,
Rick.
#14  02-03-2013, 09:27 AM
rally
My take is that well depth is still an important factor, not to be ignored.

The choice of CCD for each person is very much dependent on so many things: budget; purpose (there is a huge spectrum of requirements there - scientific vs artistic, planetary vs DSO vs photometric, blooming or antiblooming, spectral response and extended sensitivity at particular bands, binning capabilities, download speeds...); the telescope being used and therefore image scale and all its considerations, such as average local seeing conditions and the degree of oversampling desired; well depth; dark noise; cooling; integrated guiding; filter wheels; adaptive optics capability; OAG options; light pollution; etc.

That is why there are books written on the subject and no one camera can ever suit all purposes or all people.

Thus it's a personal thing, because our needs are all a bit different.

Dynamic range is often a key factor in any decision (well depth in e- / read noise in e-, or 20 * log10(well depth / read noise) if you want it in dB).

But well depth is one of the main things that allows us to image for a very long period of time to capture the very, very faint nebulosity without over-saturating the rest of the image (ignoring stars).

Faint nebulosity has so few photons arriving that you either get a huge telescope or you expose for longer - since super-high QE is not generally affordable for mere mortals, true photon multiplication in silicon is likewise unavailable to us, and really big telescopes are not quite so practical or affordable.
Exposing for longer is the cheapest, most affordable and available option!

So with a long exposure and a low well depth it is harder to capture the full dynamic range of your target (in one exposure) than with a similar camera with greater well depth - despite both having similar dynamic range.
After all, this is really the quest - to capture the enormous dynamic range of our chosen subject, compress it by many orders of magnitude into an image that we look at (usually in an 8-bit format! on screen or printed), and manipulate it to display the qualities we seek - dark dust clouds and shadows, enhanced colour, non-linearly and selectively stretched features, etc. - to make it appealing and exaggerate these faint, interesting and beautiful features.

So to my mind there is some benefit in having a camera that has deeper wells than the Sony has (~20,000 e-), irrespective of the chip's dark noise or read noise and the dynamic range calculation.

But they are making astro cameras more affordable and I think we are seeing the effects of this across the board.

Rally
#15  02-03-2013, 10:19 AM
Peter Ward
Quote:
Originally Posted by Shiraz View Post
C'mon Peter - you don't seriously expect me to let that lot go through to the keeper do you?

Your analogy ran out of steam on the first post because it conveniently does not include noise - which, as you well know, is the other part of the dynamic range measure. ...

Regards Ray
True, but I was trying to keep it simple.

That said, you could take the analogy further by making our cup nice and clean while our bucket is a little grubby...

Larger-pixel sensors, e.g. 14-20 µm, do however have massive dynamic ranges compared to the 4-5 µm offerings... 90 dB or more.

I think there is a visible hallmark of tiny well sizes (small pixels): many of the images taken by these devices show nearly all stars saturated to white, as it is tricky to keep those tiny wells from overflowing.
#16  03-03-2013, 06:29 PM
Shiraz (Ray)
Quote:
Originally Posted by Peter Ward View Post
True, but I was trying to keep it simple. ...
Thanks Peter

"Larger-pixel sensors, e.g. 14-20 µm, do however have massive dynamic ranges compared to the 4-5 µm offerings... 90 dB or more."

Yes, but we are not discussing that class of chip here - bit of a red herring.

"I think there is a visible hallmark of tiny well sizes (small pixels): many of the images taken by these devices show nearly all stars saturated to white, as it is tricky to keep those tiny wells from overflowing."

Now I am confused. If you are saying that the icx694 will show nearly all saturated stars, then you must also be implying that the ST8300 cameras that you sell* will show the same - after all, the KAF8300 CCD (well depth 25,500 e-) has only a little more well depth than the icx694 (20,000 e-). There are enough fine images out there from 8300s to show that these CCDs do not necessarily have this problem, so there is no reason to expect that the icx694 will have it either.

There must be some other cameras out there that produce the saturated star images that you refer to? DSLRs, OSCs maybe?? I know the effect that you describe and suspect that image exposure strategies and processing techniques may play a significant role.

(*note - I have been told that you are an SBIG dealer, please correct if that is wrong)

Quote:
Originally Posted by rally View Post
My take is Well depth is still an important factor not to be ignored. ... So to my mind there is some benefit to having a camera that has deeper wells than the Sony has (<>20,000e), irrespective of the chips dark noise or read noise and dynamic range calculation. ...
Hi Rally, thanks for the post.

I agree that well depth is very important - I just do not think it can be considered in isolation. To explain why, consider that you have two systems with the same aperture - one matched to an icx694 and the other to a Kodak 11002.
For discussion, let's say that the 11002 runs into star saturation (full wells of 60,000 electrons) at a 20 minute exposure. If you take four 5 minute images with the 694 system and add them together, you will run into star saturation at 20,000 electrons (full well) in each image, or 80,000 electrons for the combined image. Thus you get more headroom than with the 11002. The read noise adds in quadrature, so in the combined image it will be 2x the single-image read noise, or 10 electrons. This is less than the single-image read noise of the 11002, which is 13 electrons. The signal level will be the same from either 4x5 minutes or 1x20 minutes (assuming the QE is the same, for simplicity). Thus, by stacking multiple frames, you get better headroom, lower noise and the same signal from the 694 when compared to the 11002. Note that this only works if the read noise is low, and that is a characteristic of the 694. I am not trying to say that the 11002 is no good, just that the 694 can be much better than it would seem to be from a consideration of well depth in isolation. It will be interesting to experiment with a variety of exposure strategies for the 694 - it really is a quite different beast.
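The arithmetic of that comparison, as a quick Python sketch (using the same assumed well depths and read noises as above):

[CODE]
from math import sqrt

# Same aperture and matched pixel scale, so the same photon rate per pixel in both systems.
well_694,   rn_694   = 20000, 5.0    # e-, e- per sub (assumed)
well_11002, rn_11002 = 60000, 13.0   # e-, e- (assumed)

n_subs = 4                           # four 5-minute subs vs one 20-minute exposure

headroom_stack  = well_694 * n_subs  # 80,000 e- before saturation in the stacked image
headroom_single = well_11002         # 60,000 e- in the single long exposure

stacked_read_noise = rn_694 * sqrt(n_subs)   # read noise adds in quadrature -> 10 e-

print(headroom_stack, headroom_single)       # 80000 60000
print(stacked_read_noise, rn_11002)          # 10.0 13.0
# Same total signal, more headroom and lower read noise for the stacked short subs -
# provided, as noted, that the per-sub read noise is low.
[/CODE]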

I agree with your comments on the varied bases for people's decisions on cameras. I have been uneasy acting as a sort of de facto advocate for the Sony chip for this very reason. However, I considered that there had been some unjustifiably negative information posted about the icx694 chip and I thought that someone needed to point out just how good its specs actually are - it would be a pity if anyone were dissuaded from buying one purely on the basis of dodgy technical comment, thereby missing out on a very capable device.

regards Ray

#17  03-03-2013, 07:42 PM
rally
Ray,

Well - yes and no.

For high-level signals that are well above the noise, this is mostly true.

Very low-level signals need a lot more time to accumulate any sort of signal that can get above all the other sources of noise.
The photon count is extremely low.

Stacking 4 subs with a weak signal and a lot of noise may get you some signal back, but it won't be as good as the longer exposure.
An extreme example might be one of Hubble's Ultra and eXtreme Deep Field exposures - no matter how much stacking you do of short subs, Hubble couldn't have got that detail without long exposures - and they still stacked them!
Not a perfect example, but good for the general idea.

So despite the fact that the read noise on the Sony might (and in some cases might not) be lower in the first place, you will not necessarily get useful data.

There are many objectives in imaging, but getting the most signal out of the faintest regions is certainly high on the list. Presently the images with the ooh-aah factor are making the most not so much of the light they are collecting but rather of the lack of light - the shadows and dark lanes.

But it can still be done with the Sony; it's just that the sensor's full-well dynamic range is still a few bits shorter, so you would need to split the imaging up.

It would be great to see a real-life comparison, or to have someone with the appropriate skills do the proper maths to explain it.
And it just might be the case that a good imager with good gear, good seeing and good processing skills is likely to produce results that are indistinguishable!

Rally
#18  03-03-2013, 08:04 PM
Peter Ward
Quote:
Originally Posted by Shiraz View Post
...Now I am confused. If you are saying that the icx694 will show nearly all saturated stars then you must also be implying that the ST8300 cameras that you sell* will show the same - after all, the KAF8300 CCDs (well depth 25,500) have only a little more well depth than the icx694 (20,000)...

regards Ray
I'd honestly say my peddling of KAF-based devices is a sideline... and I do try to keep any commercial biases out of my postings... that said..

I'm very much a user of astro-imaging gear, with KAF8300, KAF16803 and ICX694 sensors (plus a number of Canon DSLRs) all in my personal arsenal.

My KAF8300 does indeed saturate very quickly... but I'd not call 25% better performance a "little" improvement over the ICX694.

A 25% boost in QE, for example, is obvious when you look at the raw data.

I also think Sony's noise figures are a little rubbery... clever correlation filters that also scrub photon-liberated electrons can easily make noise look good at the expense of subtle signal.

Also, can you, or someone, point me to a Sony ICX694 data sheet with absolute QE?... I can only find 3rd-party specs... which I frankly don't trust.

The only method I've read about...apart from deep cooling...that keeps thermal electrons in the background is "Skipper" CCD technology...sadly not commercially available as far as I am aware.

I suppose what I am still saying, when it comes to pixels, is: bigger = better.

The downside is, bigger (matching optical systems) often cost a whole lot more....
#19  04-03-2013, 12:41 PM
clive milne
Ray,
you might find this interesting:

http://www.astrosurf.com/buil/isis/noise/result.htm

Christian Buil presents a noise and electronic gain evaluation of a sample of astronomical CCD cameras, including the KAF8300, KAF1603, KAF16803, KAF3200, KAF402, KAF11000, KAI2020, ICX694 and ICX285.
#20  04-03-2013, 05:27 PM
gregbradley
Hi Rally,

An interesting thread. I agree sampling is definitely an important factor, and matching pixel size to focal length is a well-known consideration. 1 arcsec per pixel is one approximation that AP recommends, and others use 0.66 arcsec per pixel as being closer to the Nyquist sampling criterion in practice (3x sampling is better than the minimum 2x).
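Putting rough numbers on those rules of thumb, as a Python sketch (the 2 arcsec seeing value is just an example):

[CODE]
# Focal length (mm) needed to hit a target image scale with a given pixel size:
# focal_length = 206.265 * pixel size (um) / image scale ("/pixel)
def focal_length_for_scale(pixel_um, scale_arcsec):
    return 206.265 * pixel_um / scale_arcsec

seeing_fwhm = 2.0                  # arcsec, example value
for factor in (2, 3):              # 2x (minimum) and 3x sampling of the seeing disc
    scale = seeing_fwhm / factor
    print(factor, round(focal_length_for_scale(4.54, scale)))  # ~936 mm and ~1405 mm for 4.54 um pixels
[/CODE]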

As far as gain goes in the 8300 chip, I don't know anything about that, so I'll take your word for it.

As far as small pixels having low well depth and thus being likely to show white stars - I have seen examples of better-looking stars with shorter exposures in fast scopes. With longer scopes it is probably less of an issue, but with too long a focal length the small pixels are no longer giving you proper sampling and, as you point out, you lose sensitivity. I have seen that with my own eyes and it is true.

As far as small pixels not having large dynamic range, I think that rests on some assumptions. It assumes the same read noise, and as the post linking Christian Buil's measurements of some of these common chips shows, the Sony chip is in totally another league, with lower read noise and higher QE. Dynamic range is a function of read noise (too much read noise and you lose differentiation between some levels of shadow or brightness).

As I thought, Sony is totally out in front with clear air from the other sensor manufacturers and is leaving them all in the dust. Canon is several generations behind Sony. Sony is also just going from strength to strength. Just last week they signed a cross-sharing of imaging patents with Aptina, who have a huge number of imaging patents. Aptina often make excellent sensors for Nikon cameras, for one.

So that means Sony will have access to Aptina's technology to actually increase well depth on any sensor of any pixel size. There is a video of the Aptina President describing the technology.

A few years of development at this rate and we'll all be using some sort of hi-tech CMOS sensor, unless TrueSense releases some new advanced chips. Probably a while yet, but I already see the signs.

My prediction is that this 694 chip will be very, very hard to beat, and we will see a lot of people swapping their 8300 cameras for one, or at least new buyers going over to the Sony dark side.


Greg.