  #7  
Old 01-03-2013, 09:22 PM
Shiraz (Ray)
Registered User

 
Join Date: Apr 2010
Location: Ardrossan, South Australia
Posts: 4,918
Quote:
Originally Posted by gregbradley View Post
That would be true if there were large gaps between pixels, which I don't think there are.

Empirical evidence says it's the other way round. If your theory were correct I wouldn't see the same sensitivity between the Microline 8300 and the Proline 16803, but I do see the same. The proof of the pudding is in the eating, so to speak.

I see the same sorts of theories on DPReview about DSLR cameras and megapixels. The Nikon D800E proved all that incorrect. It's about the most sensitive camera out there: really low noise, small pixels, 57% QE. That's a Sony Exmor sensor.

The Fuji X-E1, Sony NEX-5 and NEX-6, Nikon D7000, and Canon 60D/7D all show low noise with small pixels.

I don't see so many of those posts now, as the proof is in the many images showing low noise and high sensitivity. The theory was that small pixels give less sensitivity and higher noise. What that doesn't take into account is the constant improvement made to sensors by clever engineers looking for a boost in performance. Technology has moved on.

A lot of phone manufacturers are going for the newish Sony stacked back-illuminated CMOS sensor. It's 13 MP with 1.2 micron pixels, yet it's twice as bright as other sensors.

These Sony engineers are at the cutting edge. I read about constant improvements in sensor design: clever colour filter arrays from Toshiba that have no light loss compared to the regular dye Bayer filter arrays, and Aptina's clever use of a larger capacitor with a gate that increases well depth as the well fills. Sony's Exmor sensors have some clever analogue-to-digital circuitry in the columns (they bought out another company to acquire it), and that is a major reason why their Exmor sensors are better than anyone else's. Plus there's their stacked back-illuminated sensor. Even Kodak/TrueSense Imaging's clear RGB filter array is looking a bit dated and already superseded.

I am not sure what new technology the Sony engineers have in that CCD, but clearly some of these advancements have made their way into it to get that sort of performance out of a small-pixel camera.

Mike Sidonio will be using his soon so his results will be the final evidence of this and we will know for sure. Otherwise the rest is speculation.

Greg.
Thanks Greg. I looked up your chips - the reason you see the same ADU from the two is that the 8300 has about 3x the internal gain of the 16803. I guess that is deliberate, to make it easier to change cameras. However, when you see the same sort of signal levels on the two, the 8300 is getting there with about 1/3 as many photons. At that gain, you should see more noise from the 8300.
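To make the gain arithmetic concrete, here is a minimal Python sketch. The gain values are illustrative assumptions roughly in the ballpark for these chips, not spec-sheet figures - check your camera's documentation for the real numbers.

```python
# Same ADU reading does not mean the same number of detected photons:
# electrons = ADU * gain (in e-/ADU), so a camera with ~3x the internal
# gain (fewer e- per ADU) reaches the same ADU with ~1/3 the electrons.

def electrons_from_adu(adu, gain_e_per_adu):
    """Convert an ADU reading back to detected electrons."""
    return adu * gain_e_per_adu

# Illustrative gains in e-/ADU (assumed values, not spec-sheet figures)
GAIN_16803 = 1.3   # Proline 16803, assumed
GAIN_8300 = 0.4    # Microline 8300, assumed (~3x more internal gain)

adu = 10000
e_16803 = electrons_from_adu(adu, GAIN_16803)  # 13000 e-
e_8300 = electrons_from_adu(adu, GAIN_8300)    #  4000 e-
print(e_16803 / e_8300)  # 3.25: same ADU, roughly 1/3 the photons on the 8300
```

Since shot noise goes as the square root of the electron count, the camera that reaches a given ADU level with fewer electrons has the lower signal-to-noise at that level.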

If you see similar ADU from your various scopes, I am guessing that they all have similar focal ratios. You could try a test: upset the pixel scale by putting in a 2x Barlow and compare the signal you get with and without it.
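A quick sketch of what that test should show, in Python. The telescope and pixel figures are hypothetical examples; the physics is just that a 2x Barlow doubles the focal length, halving the pixel scale and spreading the same extended-object light over four times as many pixels.

```python
def pixel_scale_arcsec(pixel_um, focal_mm):
    """Plate scale in arcseconds per pixel: 206.265 * pixel / focal length."""
    return 206.265 * pixel_um / focal_mm

def relative_signal(barlow=1.0):
    """Per-pixel signal from an extended object scales as 1/barlow**2,
    since the same light is spread over barlow**2 as many pixels."""
    return 1.0 / barlow**2

# Hypothetical example: 700 mm focal length with 4.54 um (ICX694-sized) pixels
print(pixel_scale_arcsec(4.54, 700))    # ~1.34 arcsec/pixel without the Barlow
print(pixel_scale_arcsec(4.54, 1400))   # ~0.67 arcsec/pixel with a 2x Barlow
print(relative_signal(2.0))             # 0.25: expect ~1/4 the per-pixel signal
```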

On the issue of sensitive small pixels, I couldn't agree more. The new generation of chips, including the 694, has very sensitive, low-noise pixels, and that was what I was getting at in the thread. There is nothing inherently insensitive about small pixels - in fact there are good reasons why they can be sensitive and low noise at the same time. Where we disagree is on the effect of sampling: I contend that even high-performance pixels will be behind the eight ball if they cannot gather many photons, and that is exactly what happens with oversampling. Getting the pixel scale right is a major sensitivity issue, and it is often largely ignored.

Quote:
Originally Posted by Peter Ward View Post
Yes and no.

QE is purely a measure of how many photons are detected vs the total number of photons falling onto a pixel.

This is not the same as system sensitivity...which you correctly point out should take into account pixel size.

Given the current Sydney weather, the "bucket" analogy works well here.

Put a small cup outside in the rain for a time, and it will fill say with 3 cm of water.

A big bucket next to it will also collect 3 cm of water, but when you
empty the bucket you may have many cups full of water... literally buckets of signal compared to our small cup.

You could consider QE to be how much water the cup or bucket loses before you measure their contents.
That's a very good analogy; thanks Peter.
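In pixel terms, the cup and the bucket differ by collecting area: at the same photon flux (the same "rain rate"), the electrons a pixel collects scale with its area. A minimal sketch, with an illustrative flux value and the nominal pixel pitches of the two chips discussed above:

```python
def collected_electrons(flux_e_per_um2, pixel_um, qe=1.0):
    """Electrons collected by a square pixel: flux x pixel area x QE."""
    return flux_e_per_um2 * pixel_um**2 * qe

flux = 10.0  # e- per square micron over the exposure (illustrative)
cup = collected_electrons(flux, 4.54)    # small pixel (ICX694-sized "cup")
bucket = collected_electrons(flux, 9.0)  # big pixel (KAF-16803-sized "bucket")
print(bucket / cup)  # ~3.93: nearly 4x the water per bucket at the same rain rate
```

QE then acts as Peter describes: a multiplier on what each container actually retains, independent of its size.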



More generally, it may be worth a short explanation here. The main aspects of the model I use are adapted from a paper (possibly lecture notes) published by the University of California Observatories (http://www.ucolick.org/~bolte/AY257/s_n.pdf) and from Hamamatsu's article on CCD signal-to-noise ratio (http://learn.hamamatsu.com/articles/ccdsnr.html).
The maths is quite straightforward and all aspects of the model are well validated elsewhere. Short of my making a mistake with the Excel formulae, it is going to be pretty reliable - it is basic electro-optical engineering and certainly not my "theory".
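For readers who don't want to chase the links: models of this kind reduce to the standard per-pixel CCD signal-to-noise equation, combining shot noise from the target, sky and dark current with read noise. A minimal Python sketch, with purely illustrative rates (the numbers below are assumptions for the sake of example, not measurements of any particular camera):

```python
import math

def ccd_snr(signal_rate, sky_rate, dark_rate, read_noise, t, n_sub=1):
    """Per-pixel SNR for a stack of n_sub exposures of t seconds each.
    Rates are in e-/s/pixel; read_noise is in e- RMS. Standard CCD form:
        SNR = S / sqrt(S + B + D + n_sub * R^2)
    where S, B, D are total signal, sky and dark electrons over the stack."""
    S = signal_rate * t * n_sub
    B = sky_rate * t * n_sub
    D = dark_rate * t * n_sub
    return S / math.sqrt(S + B + D + n_sub * read_noise**2)

# Illustrative inputs: 5 e-/s signal, 2 e-/s sky, 0.003 e-/s dark,
# 6 e- read noise, stacking 12 subs of 300 s each.
print(round(ccd_snr(5.0, 2.0, 0.003, 6.0, 300, 12), 1))
```

Feeding in QE, pixel scale and aperture to derive the electron rates is exactly the "model it to death" exercise described below: every parameter is measurable before any glass is ground.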

The whole idea of doing this was to use the same method that professional observatories use to design new gear - they model it to death until they understand exactly what it will do, way before any glass is ground. It turned out that a simple model could do this reasonably effectively for a small scope - all required parameters are readily available and few assumptions need be made.

Once the model was working OK, it seemed that it could be a useful way to cut through some of the hype and misunderstanding that has built up around what the icx694 actually offers and where it is deficient - and that was the main thrust of this thread. It seems that it has not been a completely successful enterprise.

Regards, Ray

Last edited by Shiraz; 02-03-2013 at 12:28 AM.