If we had two 130mm scopes, one at f/10 and one at f/4, focal length aside (and visual astronomy also aside), 30 seconds on the same target would yield very different results, right?
Let's say we even allowed for the focal length and cropped the wide f/4 image down to match the narrower f/10 field of view. Comparing just those two resulting images... the cropped f/4 image would have to have collected more photons in that specific cluster of pixels, right? OR are we saying that the number of photons per pixel would remain more or less constant, given the aperture is 130mm for both scopes, and therefore the cropped area of pixels would essentially have collected only a fraction of the total photons (whatever fraction the crop size corresponds to)?
So, for example, let's say the crop covers only 1/4 of the sensor. That would mean the cropped f/4 image could only contain 1/4 of the photons the f/4 scope collected across its whole frame, whereas the f/10 image used its entire sensor.
After all, there are only so many photons you can cram down a tube, and that's probably agnostic of the scope's f-number... so that total has to be governed by the aperture, and the only question is whether those collected photons come from a wide area of sky or a narrow area of sky (i.e. the focal length, as determined by the f-number).
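To sanity-check that bookkeeping I threw together a quick Python sketch of the two-scope scenario. Everything in it is a simplifying assumption on my part: a uniformly lit patch of sky, identical made-up sensors, an arbitrary sky photon rate, and no losses (obstruction, transmission, QE all ignored), so only the ratios mean anything.

```python
# Toy bookkeeping for the 130mm f/10 vs 130mm f/4 thought experiment.
# Assumed rule: photons arriving from a given patch of sky scale with
# aperture (collecting) area and exposure time only; the focal length
# just decides how much sky the sensor sees and how it is spread out.
import math

D_MM = 130.0        # aperture, same for both scopes
SENSOR_MM = 20.0    # square sensor side (made-up)
PIXEL_UM = 4.0      # pixel size (made-up)
EXPOSURE_S = 30.0
SKY_RATE = 1000.0   # photons / s / mm^2 of aperture / deg^2 of sky (arbitrary)

def frame_stats(f_ratio):
    focal_mm = D_MM * f_ratio
    fov_deg = math.degrees(2 * math.atan(SENSOR_MM / (2 * focal_mm)))  # per side
    sky_area_deg2 = fov_deg ** 2
    aperture_area_mm2 = math.pi * (D_MM / 2) ** 2
    photons = SKY_RATE * aperture_area_mm2 * sky_area_deg2 * EXPOSURE_S
    pixels = (SENSOR_MM * 1000 / PIXEL_UM) ** 2
    return sky_area_deg2, photons, pixels

f10_sky, f10_photons, f10_pixels = frame_stats(10)
f4_sky, f4_photons, f4_pixels = frame_stats(4)

# Crop the f/4 frame down to the same patch of sky the f/10 frame covers.
crop_fraction = f10_sky / f4_sky
crop_photons = f4_photons * crop_fraction
crop_pixels = f4_pixels * crop_fraction

print(f"f/10 full frame: {f10_photons:.2e} photons on {f10_pixels:.0f} px "
      f"= {f10_photons / f10_pixels:.1f} photons/px")
print(f"f/4 crop       : {crop_photons:.2e} photons on {crop_pixels:.0f} px "
      f"= {crop_photons / crop_pixels:.1f} photons/px")
print(f"crop is {crop_fraction:.3f} of the f/4 frame")
```

(For f/4 vs f/10 the matching crop actually works out to roughly 1/6.25 of the f/4 frame rather than 1/4; the 1/4 above was just a round number to keep the arithmetic simple.)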
So does this mean the 120mm will have to collect more photons per second than the 100mm (regardless of f-number)?
Edit: When I say that out loud... it seems obvious.
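For what it's worth, the ratio I'm assuming for the 120mm vs 100mm comparison works out like this (again taking photon rate to scale with aperture area and nothing else):

```python
# Same assumption as above: photons per second from a given patch of sky
# scale with aperture (collecting) area, independent of f-ratio.
ratio = (120 / 100) ** 2
print(f"120mm gathers about {ratio:.2f}x the photons per second of the 100mm")  # ~1.44x
```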