#21 | 23-06-2015, 05:26 PM | Shiraz (Ray)
Hi Greg. That dpreview post is a fine example of what I was saying - the term "QE" is used, but it doesn't mean a whole lot. Even the author agrees, since in the post where he defines his methodology he says: "The calculated QE is denoted as relative QE, since there might be an multiplicative factor for all QEs. But my opinion is that the range 23 – 40% is quite true for CMOS sensors". I have trouble taking this seriously when the key ISO measures came from somewhere else (who knows what was measured, or how), the quoted QE figures are relative and subject to an unknown multiplicative factor, and ultimately it is all just the opinion of the author...

Anyway, I guess you don't see any use for the sensitivity estimation method - fair enough.

ref: http://www.dpreview.com/forums/thread/2750802

#22 | 23-06-2015, 06:38 PM | rmuhlack (Richard)
A further point is that the dpreview post uses *peak* QE, not average QE. Peak QE for a DSLR would clearly be higher than 10-15%, but what Ray's calculation shows is a mean sensitivity across the visible broadband spectrum.

A further complication when comparing CCD with DSLR is gain. Greg's post gives the example of a D800E at ISO6400 showing plenty of picture. The complicating factor here is that, due to gain, the full well capacity of a D800E pixel at ISO6400 is (according to sensorgen) only 833 electrons. Compare that with a CCD camera!! Another factor ignored in the comparison is noise. The D800E at ISO6400 will have plenty of signal, but what of the noise...? Ray's initial post assumes that the systems being compared are at equivalent SNR.

Perhaps I muddied the waters by bringing my own anecdotal experience into the mix. When I put my own spreadsheet together using Ray's formula, I can compare my Canon 1000D and Canon 450D against my newly acquired SBIG ST10XE. Referring to the SBIG data, and given my camera doesn't have microlenses, I'm using an average QE of 55% for the ST10XE (that is, taking an average QE across 450-650 nm in 25 nm increments, as per the SBIG graphs for my camera). Comparing these cameras based on the formula and assuming an average DSLR QE of 20%, the ST10 would have 4.05x the relative sensitivity of my old Canon 1000D and 4.88x the relative sensitivity of the Canon 450D.

This might muddy the waters further, but now for a practical test - compare the following images of NGC6164:
  1. Broadband image with Canon 450D @ ISO1600, FL = 650mm, f-ratio = 5, integration = 12.1 hours
  2. Narrowband 6nm Ha with ST10XE, FL = 1278mm, f-ratio = 6.4, integration = 4.7 hours

I think the comparison is 'chalk and cheese' (especially in light of the slower scope in (2)), and quite insightful given the discussion in this thread. Even reducing the mean QE of the DSLR in the formula to 10% still suggests that the ST10XE is (only) 9.77x more sensitive than the 450D used here. Considering that the ST10XE image is narrowband and taken with a slower telescope, perhaps 10% is still being generous...
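For anyone who wants to check the spreadsheet arithmetic, here is a minimal sketch of the camera-to-camera comparison using Ray's formula. The pixel sizes are nominal published figures and may differ slightly from the ones in my spreadsheet, so the multipliers come out a little different from the 4.05x/4.88x above; the f-ratio and optics-efficiency terms cancel when both cameras are assumed on the same scope:

```python
# Sketch of the relative-sensitivity comparison using Ray's formula,
# S = QE * t * (pixel/FNumber)^2. With both cameras on the same scope,
# t and FNumber cancel out of any ratio, leaving QE * pixel^2.
# Pixel sizes are nominal published values (assumptions, not spreadsheet data).

def camera_weight(qe: float, pixel_um: float) -> float:
    """QE times pixel area - the camera-only part of Ray's sensitivity formula."""
    return qe * pixel_um ** 2

st10xe = camera_weight(qe=0.55, pixel_um=6.8)   # ST10XE, 55% average QE
c450d  = camera_weight(qe=0.20, pixel_um=5.2)   # Canon 450D, assumed 20% QE
c1000d = camera_weight(qe=0.20, pixel_um=5.7)   # Canon 1000D, assumed 20% QE

print(f"ST10XE vs 450D:  ~{st10xe / c450d:.1f}x")   # ~4.7x
print(f"ST10XE vs 1000D: ~{st10xe / c1000d:.1f}x")  # ~3.9x
```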
[Attached images: NGC6164_drizzle_integration process 4sm.jpg; light-FILTER_Ha-BINNING_1_firstlight_process1_sm.jpg]
#23 | 23-06-2015, 07:05 PM | gregbradley
Yes good point Richard. Average QE would be a lower number.

Ray, I think your system is of use, but I am not sure the multiplication factors for how much faster one system is than another are spot on. Perhaps they are, and I simply haven't appreciated the differences as much as they really are.

Certainly the Honders at 305mm F3.8 with the 16803 is plenty sensitive, and with the Trius 694 even more so. I took an image of the Seagull Nebula with the Honders and the Trius and it showed the bubbly jellyfish part very clearly, with little noise, after only an hour of Ha. I was amazed, as I have imaged that area with the CDK17 and the Trius and it was still quite noisy after an hour's exposure - so there is a direct example. Of course the image scale was quite different, the Honders being more widefield, and there is the rub, I suppose: image scale.

So aperture and QE still remain the two most important factors in the speed of the system, and F ratio is more about the wideness of the field.

Greg.
#24 | 23-06-2015, 07:19 PM | Shiraz (Ray)
Quote:
Originally Posted by gregbradley
So aperture and QE still remain the two most important factors in the speed of the system, and F ratio is more about the wideness of the field.

Greg.
No, actually they are not. As indicated in the first post, FNumber and pixel size are by far the most important factors in sensitivity, because they are squared terms. QE and optics efficiency also come into it.

Focal length and total chip area determine the wideness of the field.

The CDK/694 was much less sensitive than the RHA/694 because the CDK/694 system is heavily oversampled, whereas the RHA/694 is properly sampled. For interest, you could use the formula to work out the relative sensitivities of the two systems - with the same camera on both, the QE and pixel terms cancel and only the f-numbers matter.
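As a rough worked example, assuming the CDK is operating at the f/6.3 figure used later in this thread and the RHA at f/3.8, the camera terms cancel and:

S(RHA/694) / S(CDK/694) = (6.3/3.8)^2 ≈ 2.7

so the formula puts the RHA/694 at roughly 2.7x the sensitivity of the CDK/694, before image scale is even considered.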

#25 | 23-06-2015, 07:24 PM | gregbradley
I think what I am hitting up against is Stan Moore's 'F ratio myth', where he basically concludes it's not F ratio but aperture that matters, and actually shows several CCD shots at different F ratios in which the star looks the same.

It's a similar argument with DSLR lenses, where an F ratio is equivalent on full frame and APS-C - ie an F2.8 lens is F2.8 on either system. It creates lots of arguing posts on both sides.

Greg.
#26 | 23-06-2015, 08:11 PM | Shiraz (Ray)
Quote:
Originally Posted by gregbradley
I think what I am hitting up against is Stan Moore's 'F ratio myth', where he basically concludes it's not F ratio but aperture that matters, and actually shows several CCD shots at different F ratios in which the star looks the same.

It's a similar argument with DSLR lenses, where an F ratio is equivalent on full frame and APS-C - ie an F2.8 lens is F2.8 on either system. It creates lots of arguing posts on both sides.

Greg.
I think that these issues are encapsulated in the equation. I could provide a critique of Stan's paper, but it wouldn't add anything. However, you might ask yourself why the image scale is almost the same in his two images, when one was taken at 2630mm focal length and the other at 820mm. You might also ask under what circumstances changing the f-stop on your DSLR lens from f2.8 to f11 would actually make no difference to the image quality.

An f2.8 lens remains so regardless of where it is. Full frame chips may have larger pixels, so there will be a sensitivity difference due to that.

#27 | 23-06-2015, 10:05 PM | Atmos (Colin)
This, I guess, is the main reason I prefer the equation that I use: although it (redundantly?) uses image scale factors, it gives a better indication of what you're actually getting.
It is all well and good to do some calculations on a few different optical systems and cameras to find out which is going to be the best for you, only to find that your resolution is far too undersampled.

I had a discussion with someone a couple of months ago who continually insisted that f/ratio is king and rules over all. On my 1.5 hour trip home, stuck in a traffic jam, I calculated that a 10" F/6.3 (my Meade with a focal reducer) could slaughter a 12" F/3 astrograph at nearly the same image scale. It all comes down to the cameras being used.
He was using the KAF-8300, which would give a scale of 1.24 arcsec. If I were to throw an Apogee F77 on the back of my Meade I would get 1.55 arcsec BUT twice the efficiency.

Of course, the FOV of these two systems is not comparable! In saying that, by putting the F77 on my 100 ED F/9 it would just barely outshine a 12" f/3 astrograph at a third of the image scale - a vast improvement on the 10" F/6.3.

What is the point of all this? It is one thing to say that a 50mm F/1.8 lens on an FLI-16803 is faster than the same camera on a CDK-17 - but which would you rather use? The 50mm may be ~14x faster, but the CDK slaughters it in the real world.
#28 | 23-06-2015, 10:46 PM | Shiraz (Ray)
I agree. In the first post I pointed out that there are many other equally important considerations, but perhaps I should reword it for emphasis.

The main motivation for this thread was to add another tool to the toolbox. There are websites out there that tell you how to get the right field of view, how to get the resolution you want, what design of scope might be best, what mount you will need, etc. But there was nowhere you could go (that I could find) that would tell you how long you would need to sit out under the stars with the equipment you chose. This tool doesn't quite do that, but it does allow you to see how your system compares with others in terms of sensitivity, and to make appropriate choices when changing something. That has to be better than flying blind and could help you improve your system cost-effectively - for example, you may have a choice between a $4k camera upgrade for 28% better QE, or a new scope for $3k with an FNumber of 5.6 rather than 7. This sort of tool will quickly tell you which option gives the best bang for your bucks. It can also be used for planning imaging sessions - for example, if you like an image on IIS and know the exposure times and equipment used, you can work out how sensitive your system is compared to that used for the original image and estimate how long you will need to image to get the same SNR result.
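To put rough numbers on that example (a sketch only - the prices and specs are just the hypothetical ones above):

```python
# Hypothetical upgrade comparison from the paragraph above. Sensitivity
# scales linearly with QE and with 1/FNumber^2, so the two options give:
qe_gain    = 1.28             # $4k camera: 28% better QE
scope_gain = (7 / 5.6) ** 2   # $3k scope: f/7 down to f/5.6

print(f"camera upgrade: {qe_gain:.2f}x faster")    # 1.28x
print(f"scope upgrade:  {scope_gain:.2f}x faster") # 1.56x
```

On raw sensitivity the cheaper scope wins here, before weighing up field of view, image scale and the rest.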

"The 50mm may be ~14x faster, but the CDK slaughters it in the real world" - not if you are taking an image of the Milky Way!

#29 | 24-06-2015, 09:20 AM | Shiraz (Ray)
Quote:
Originally Posted by rmuhlack
This might muddy the waters further, but now for a practical test - compare the following images of NGC6164:
  1. Broadband image with Canon 450D @ ISO1600, FL = 650mm, f-ratio = 5, integration = 12.1 hours
  2. Narrowband 6nm Ha with ST10XE, FL = 1278mm, f-ratio = 6.4, integration = 4.7 hours

I think the comparison is 'chalk and cheese' (especially in light of the slower scope in (2)), and quite insightful given the discussion in this thread. Even reducing the mean QE of the DSLR in the formula to 10% still suggests that the ST10XE is (only) 9.77x more sensitive than the 450D used here. Considering that the ST10XE image is narrowband and taken with a slower telescope, perhaps 10% is still being generous...
Thank you, Richard.

At the risk of muddying the waters even further, I found some luminance data of the same region taken with my 250 f/4 and ICX694 system. I tried a few different stacks until reasonably satisfied that the image SNR and depth were roughly comparable to your colour image. That required 36 minutes of data, which is about 1/20th of the time needed for the colour image. There is not much science here (how do you compare image quality by eye? what about sky brightness?), but if the 20x difference in imaging time is anywhere near right, then I would agree with you that 10% is over-generous for the average QE of the DSLR (at 10% QE, the model says the difference should be "only" 6x). Although it is a sidetrack to the thread, this is a real eye-opener - I hadn't fully appreciated the extent (in image terms) of the difference between DSLR and mono.

I assume that the DSLR had been full spectrum modded when this was taken.
[Attached image: richard.jpg]

#30 | 24-06-2015, 12:14 PM | gregbradley
What is the conclusion you would reach from this model about pixel size and sensitivity?

You mentioned it was a very important factor, as it's one of the terms that is squared.

One bit of empirical observation: on my CDK17 I once imaged M104 with both the Proline 16803 and then the Microline 8300.
Same night, same scope, same conditions. I was surprised to see the 8300 image lacked resolution and was dimmer, but what surprised me most was that it was blurrier. Seeing hadn't changed as far as I know.

Greg.
#31 | 24-06-2015, 01:24 PM | rustigsmed (Russell)
Just a quick one Ray - with DSLRs, is it really QE that is reduced by the Bayer matrix, or is it more accurately described as a loss of resolution?

cheers

rusty
#32 | 24-06-2015, 02:02 PM | Shiraz (Ray)
Quote:
Originally Posted by gregbradley
What is the conclusion you would reach from this model about pixel size and sensitivity?

You mentioned it was a very important factor, as it's one of the terms that is squared.

One bit of empirical observation: on my CDK17 I once imaged M104 with both the Proline 16803 and then the Microline 8300.
Same night, same scope, same conditions. I was surprised to see the 8300 image lacked resolution and was dimmer, but what surprised me most was that it was blurrier. Seeing hadn't changed as far as I know.

Greg.
Assuming you were imaging without a focal reducer, the figures for your two configurations are:
S(16803) = 0.53*0.75*9*9/6.3/6.3 = 0.81
S(8300) = 0.47*0.75*5.4*5.4/6.3/6.3 = 0.26

ie, the sensitivity of the 8300 system was about 1/3 that of the 16803, so it would have had much worse noise. The 8300 has about 2.5x? the gain of the 16803, so the ADU brightness on the screen would have been about 80% of that from the 16803.
The 8300 system was also heavily oversampled, resulting in poor SNR and excessive image scale, leading to a perception of a lack of sharpness - it would have looked c**p.

By itself, pixel size is not what matters - it becomes vitally important in a system context and must be considered along with FNumber, focal length, efficiency etc. when working out what a system will do. The equation encapsulates how pixel size interacts with the other parameters in determining sensitivity but, as Colin pointed out, you must also consider other aspects such as resolution and field of view.
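As a rough sketch of what that means for imaging time (assuming sky-limited exposures, so time to a given SNR scales as 1/S):

```python
# Rough integration-time comparison for the two CDK17 configurations above,
# assuming sky-limited exposures so that time-to-SNR scales as 1/S.
s_16803, s_8300 = 0.81, 0.26
print(f"8300 needs ~{s_16803 / s_8300:.1f}x the integration time")  # ~3.1x
```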

#33 | 24-06-2015, 02:32 PM | Shiraz (Ray)
Quote:
Originally Posted by rustigsmed
Just a quick one Ray - with DSLRs, is it really QE that is reduced by the Bayer matrix, or is it more accurately described as a loss of resolution?

cheers

rusty
Hi Rusty.

The job of the Bayer filter is to absorb about 2/3 of the photons that hit it. Since the majority of the photons never make it to the underlying detector array, the sensor as a whole must have a QE below about 0.4, even before taking into account whatever the detectors themselves do. This is a huge hit to QE, but it is necessary to encode the colour data.

The Bayer filter also mucks up the luminance data, but debayering can fairly effectively disentangle lum from colour (essentially by smart guesswork), although there is still some loss of resolution and fidelity in the process (but not a whole lot).
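As a rough sanity check, if the filter array passes only about 1/3 of the photons and the underlying detectors have a QE of somewhere around 60% (an assumption - actual figures vary by model), the sensor as a whole lands at about 0.6 x 1/3 ≈ 0.2, which lines up with the 20% average DSLR QE Richard used above.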
#34 | 24-06-2015, 02:39 PM | Atmos (Colin)
On the CDK17 you're looking at an image scale of 0.61 arcsec/pixel for the 16803 and 0.37 for the 8300. The 16803's scale is around the theoretical limit of what you're going to get from a ground-based telescope under near perfect conditions. Under great skies, that's where you want to be anyway.
As Ray pointed out, the 16803 is a better sensor: it has lower read noise, higher gain and, from memory, less dark current (in part due to having larger pixels).

Ultimately, this is why I go back to stating that I prefer to use image scale and aperture for calculating the efficiency of an optical system. The equation that Ray has stated is fantastic - it is quicker and just as accurate - but its one caveat is that you lose perspective. Pixel size and f/ratio do exactly the same job as image scale and aperture; the latter pair just keeps everything in perspective, in my opinion.

I am currently saving up to buy a research system. I started with the camera (16803), then calculated image scale and worked backwards to the focal length, and from there to the telescope. It may be backwards, but I have found it a good way to go.
#35 | 24-06-2015, 02:59 PM | rustigsmed (Russell)
Quote:
Originally Posted by Shiraz
Hi Rusty.

The job of the Bayer filter is to absorb about 2/3 of the photons that hit it. Since the majority of the photons never make it to the underlying detector array, the sensor as a whole must have a QE below about 0.4, even before taking into account whatever the detectors themselves do. This is a huge hit to QE, but it is necessary to encode the colour data.

The Bayer filter also mucks up the luminance data, but debayering can fairly effectively disentangle lum from colour (essentially by smart guesswork), although there is still some loss of resolution and fidelity in the process (but not a whole lot).
Yep, thanks Ray.

I understand that not all pixels are being utilised, which is why I was visualising it as a resolution issue rather than a QE issue.
I was looking at it as if there were really three separate 'mini mono sensors' in one (but stuck together), each with a filter in front, with the blue and red each being 25% of the 'total' sensor size and the green being 50%.
Thinking of it this way, only those specific 'coloured' photons would hit those pixels, just as a mono camera with an R, G or B filter (or NB) in front would receive them.
ie there is not much difference between a mono camera with a blue filter when a 'red photon' hits it and a DSLR, the difference being that the DSLR is 'really' made up of 3 really tiny sensors. Am I going crazy?
#36 | 24-06-2015, 03:03 PM | gregbradley
Thanks Ray and Colin. The KAF-16803 is in my opinion still the best sensor out there and likely to be for the foreseeable future.
It's got the FOV, the well depth, the good QE, the low noise, and the ability to crop, being so huge. The downside is the cost of the gear needed to use it. Not all scopes can handle a 52mm diagonal.

Ray, there have been developments in Bayer filters, with some camera companies reducing the colour density of the filters to gain more throughput. There are also some companies (Samsung is one) experimenting with clear-RGB or clear-RB filter arrays that gain more throughput again, like the Kodak Truesense designs.

Sony is also working on a sensor where the colour filter array moves across the pixels so each pixel gets RGB. That would boost performance as well - a big-time boost.

Additionally, there are sensors being developed by Panasonic and Fujifilm with an organic top layer that offers much greater dynamic range.

Sony is also working on a Foveon-style sensor where the depth to which light penetrates the silicon determines RGB. Foveon, though, has poor high-ISO performance.

Also, the latest Sony sensor in the A7RII is backside illuminated. With smaller sensors this represents around a 40% gain in sensitivity, as the surrounding circuitry sits under the photosites rather than around them. The pixels are closer to the surface and capture more of the incoming light (said to help small-pixel sensors more than large-pixel ones). So there is a lot of development going on in this area and we are seeing some of it hit the market.

Samsung's NX1 also has a backside illuminated APS-C sensor that is similar (though probably not as advanced as the Sony one, which is full frame).

I also wonder, with 4K video on these super sensors becoming more common, whether this would make these cameras superb for planetary work. You could image a planet in 4K video. The Sony RX100 IV is backside illuminated and capable of very high frame rates, and the A7RII goes further with full-frame downsampling in Super 35 format into 4K. Planetary imagers may be able to take advantage of this.

A Sony A7SII is very likely to have backside illumination added, and that would make it crazy good.

Greg.
#37 | 24-06-2015, 03:21 PM | gregbradley
Just curious Ray.

What are the results for these setups, all with the AP RHA 305 F3.8 (1159mm focal length)?

1. 16803 with 50% QE (I think it's more like 50%, looking at the graph again), 9 micron pixels and 9 electron read noise

2. Trius 694 with 70% QE, 4.54 micron pixels and 5 electron read noise

3. Trius 814 with, say, 65% QE, 3.69 micron pixels and 3 electron read noise

4. KAF-8300 camera with 5.4 micron pixels, say 45% QE, and 15 electron read noise

Greg.
#38 | 24-06-2015, 04:29 PM | Atmos (Colin)
The good thing about using the same telescope is that you only need to deal with the first half of the equation.
(((pixel*206.265)/focal length)^2)*QE

1) 1.6 arcsec @ 1.28
2) 0.8 arcsec @ 0.46
3) 0.65 arcsec @ 0.28
4) 0.96 arcsec @ 0.42
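For reference, a minimal sketch that reproduces the four numbers above (QE values are the ones Greg quoted, not datasheet figures, and the focal length is the RHA's 1159mm):

```python
# Image scale (arcsec/pixel) and Colin's sensitivity term, scale^2 * QE,
# for the four cameras on the AP RHA 305 F3.8 (1159mm focal length).
FL_MM = 1159.0

def scale_and_weight(pixel_um, qe):
    scale = pixel_um * 206.265 / FL_MM  # arcsec per pixel
    return scale, scale ** 2 * qe

for name, pixel, qe in [("16803", 9.0, 0.50), ("Trius 694", 4.54, 0.70),
                        ("Trius 814", 3.69, 0.65), ("KAF-8300", 5.4, 0.45)]:
    scale, weight = scale_and_weight(pixel, qe)
    print(f"{name:10s} {scale:.2f} arcsec @ {weight:.2f}")
```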

This is really to be expected - a general rule of thumb is that you are ALWAYS sacrificing exposure time (sensitivity) for resolution. Personally, I would go with #2 as the general-purpose choice from those numbers - unless you want to do wide field, in which case the 16803 is definitely the way to go.

This doesn't take read noise and the like into consideration, but that is generally not a big issue, as it can largely be taken out of the equation by adjusting sub-exposure times.
#39 | 24-06-2015, 06:12 PM | gregbradley
Thanks Colin.

Yes, Mike S has shown the Trius 694 with the 12" F3.8 is an excellent setup.

Roland Christen, though, uses a QSI 683wsg-8 when he tests the RHA scopes.

I am finding the 16803 (at 1.67 arcsec/pixel) gives excellent results from a minimum of around 4 hours, and preferably around the 6 hour mark; 6-8 hours is a good exposure target with this setup. It will be deep and rich.

The 814 didn't come up so well. I guess in good seeing, though, it would be hard to beat, as it gives 0.66 arcsec/pixel, which I thought was ideal under good conditions. Roland goes for about 1 arcsec/pixel based on average seeing of around 3 arcsec. I think my seeing at home is more like 2-3 arcsec, so 0.66 to 1 arcsec/pixel would be ideal with 3x sampling. The 8300 sensor gives 0.96 arcsec, hence he goes for that.

Greg.
#40 | 24-06-2015, 06:42 PM | Shiraz (Ray)
Quote:
Originally Posted by rustigsmed
Yep, thanks Ray.

ie there is not much difference between a mono camera with a blue filter when a 'red photon' hits it and a DSLR, the difference being that the DSLR is 'really' made up of 3 really tiny sensors. Am I going crazy?
The way I understand it, the DSLR has only 1 red pixel out of every 4 (ie 3/4 of a red frame is holes between the active pixels), whereas a mono chip behind a red filter has every pixel active. The deBayering process guesses what might be in the holes in the DSLR RGB frames, but it doesn't have as much information to go on as the mono does.

Of course, the mono camera also allows you to record a luminance image (RGB added together) using all available photons and pixels, which is where it gets its main advantage.