29-03-2013, 07:41 AM
rmuhlack (Richard)
Quote:
Originally Posted by Shiraz
Hi Richard

FWIW, my take on your original question:

The signal in a pixel from an extended object (e.g. nebula or galaxy) is:

S = B * A * pixangle * opticsefficiency * time * QE

where B is the surface brightness of the object, A is the aperture area, pixangle is the solid angle subtended by a pixel, opticsefficiency is the optical throughput, time is the exposure time, and QE is the sensor quantum efficiency.

If you keep pixangle the same for the two systems (by using different cameras) and assuming all else is equal, increasing the aperture area A will proportionally increase the pixel signal – nothing else changes much.

The resolution of the two systems will be largely determined by the atmospheric seeing. Ignoring tracking blur and charge diffusion, the total PSF for each system is the convolution of the atmospheric PSF and the optics PSF. Assuming Gaussian approximations apply to the PSFs and that seeing is specified in angular FWHM terms,

FWHMtotal = SQRT(FWHMseeing^2 + FWHMoptics^2)

Now, FWHMoptics = 1.03*lambda/aperturediameter (e.g. 0.58 arcsec for the 200mm aperture),

so for the 200mm system in 2 arcsec seeing, FWHMtotal = 2.08 arcsec
and for the 300mm system, FWHMtotal = 2.04 arcsec

i.e. there is not much difference in the FWHM of the combined PSFs – scope resolution is not a significant factor and both systems are basically seeing limited in 2 arcsec seeing. Since the pixels are 1x1 arcsec in each case, the sampling is also similar at about 2 pixels/FWHM (the ideal seems to be somewhere around 2.5-3, so the images will be slightly undersampled).

The answer would be entirely different if you kept the same camera. And of course this does not include any consideration of SNR or dynamic range.

regards Ray
Thanks Ray, that is extremely helpful. I will be plugging all those equations into my model.
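As a starting point for that model, a minimal Python sketch of those relations might look like the following (the brightness, optics efficiency, exposure time, QE and 550 nm wavelength below are placeholder assumptions; only the relative numbers between the two systems matter):

import math

ARCSEC_PER_RAD = 206265.0

def pixel_signal(B, aperture_area, pixangle, optics_efficiency, time, QE):
    # S = B*A*pixangle*opticsefficiency*time*QE (Ray's pixel-signal relation)
    return B * aperture_area * pixangle * optics_efficiency * time * QE

def fwhm_optics(aperture_diameter, wavelength=550e-9):
    # FWHMoptics = 1.03*lambda/aperturediameter, converted to arcsec
    return 1.03 * wavelength / aperture_diameter * ARCSEC_PER_RAD

def fwhm_total(fwhm_seeing, fwhm_opt):
    # Gaussian PSFs convolve so the FWHMs add in quadrature
    return math.sqrt(fwhm_seeing ** 2 + fwhm_opt ** 2)

seeing = 2.0            # arcsec FWHM
pixscale = 1.0          # arcsec/pixel, same pixel angle for both systems
pixangle = pixscale ** 2

for d in (0.200, 0.300):                      # 200 mm and 300 mm apertures (m)
    area = math.pi * (d / 2.0) ** 2           # aperture area A (m^2)
    # placeholder B, efficiency, time and QE -- only the ratio matters
    s = pixel_signal(B=1.0, aperture_area=area, pixangle=pixangle,
                     optics_efficiency=0.8, time=300.0, QE=0.5)
    total = fwhm_total(seeing, fwhm_optics(d))
    print(f"D = {d*1000:.0f} mm: relative signal = {s:.3g}, "
          f"FWHMtotal = {total:.2f} arcsec, "
          f"sampling = {total/pixscale:.1f} px/FWHM")

With 1x1 arcsec pixels this reproduces the 2.08 and 2.04 arcsec combined FWHM figures above, and shows the pixel signal scaling directly with aperture area.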