There is one aspect of the discussions on FWHMs achieved or measured during imaging sessions that I would like to throw out there for discussion. That is the relationship between:
the image scale of the system in use; and
what can be expected at a particular site; and
what can be, or is being, measured during a particular imaging session.
My conclusion is that it is not meaningful to cite a particular achievement in seeing as measured by FWHM unless one also cites the pixel scale of the optical system used to capture the data. For the same reason, it is not meaningful to conclude that a particular site is only capable of a particular seeing as measured by FWHM unless one takes into account the specifications of the optical system being used for the measurement. My reasoning is this...
Based on the Nyquist sampling theorem, in an ideal world we would design our imaging system's image scale to be 1/3 of the target FWHM we would like to be able to image. For example, if we expected a particular site to be able to achieve 1.5" seeing, we would design our system to have an image scale of 0.5". Does that mean we can only ever achieve an FWHM of 1.5" or greater with this system? No - of course not. On a good night we can do better than that, but based on the Nyquist theorem, once we get below 1/3 sampling the FWHM figure we are measuring becomes less accurate. Yes, we can definitely see the improvement in star quality in the sub if the seeing improves, so all good.
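The arithmetic above can be sketched in a few lines (the 206.265 constant converts µm and mm to arcseconds; the 4.54 µm / 900 mm combination at the end is just an illustrative pairing, not one of the systems in this thread):

```python
def image_scale(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcsec/pixel for a given pixel size and focal length."""
    return 206.265 * pixel_um / focal_mm

def target_scale(expected_fwhm: float, nyquist_factor: float = 3.0) -> float:
    """Image scale needed to sample an expected seeing FWHM at 1/nyquist_factor."""
    return expected_fwhm / nyquist_factor

print(target_scale(1.5))                  # 1.5" seeing -> 0.5 arcsec/pixel
print(round(image_scale(4.54, 900), 2))   # 4.54 um pixels at 900 mm -> 1.04
```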
In my case I have two systems:
a TSA120/QSI683 for an image scale of 1.64"; and
a 200mm Canon lens/ASI6200 for an image scale of 3.9".
According to the SubframeSelector tool in PixInsight, I have measured FWHM figures for the TSA120/QSI down to as low as 2.15". For the 200mm Canon lens/ASI camera combination I have measured FWHM figures down to as low as 6.85".
Clearly, the seeing at my location is better than that reported by the 200mm lens/ASI camera combination. Is it better than that reported by the TSA120/QSI camera? I have no way of knowing unless I set up a system with an image scale in the order of 0.5" and start imaging with that. Another way of stating this is that it would be incorrect for me to assume that the best seeing I can get is 2.15", as I am below the 1/3 sampling rate and somewhat undersampled, which will affect the accuracy of the measurement anyway.
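Applying the 1/3 rule to these two systems makes the point concrete (a sketch using the image scales and best measured values quoted above):

```python
# Smallest FWHM each system can sample at the 3-pixels-per-FWHM criterion,
# using the image scales and best measured values quoted in the post.
systems = {
    "TSA120/QSI683":       {"scale": 1.64, "best_measured": 2.15},
    "200mm Canon/ASI6200": {"scale": 3.9,  "best_measured": 6.85},
}

for name, s in systems.items():
    floor = 3.0 * s["scale"]              # FWHM spanned by at least 3 pixels
    flag = "undersampled" if s["best_measured"] < floor else "well sampled"
    print(f'{name}: reliable down to ~{floor:.2f}", '
          f'measured {s["best_measured"]}" ({flag})')
```

Both best-measured figures fall below their system's 3-pixel floor, which is exactly why neither number can be read as the site's true seeing.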
So again, my conclusion is that "seeing" conditions as reported using FWHM figures should always cite the image scale for the optical system in use.
Unless of course one is using a DIMM (Differential Image Motion Monitor) to measure seeing, and that should be stated too.
I would appreciate the input of others into this discussion.
My own experience matches yours; a finer image scale allows for a more accurate reading of the seeing. Imaging at 1.67” my best FWHM was around 2.8”.
At 1.15” my best was 1.7”.
At 0.4” my best has been 1.1 - 1.2”.
Dropping below 0.4”/pixel isn’t going to do any better for me, as I don’t believe I’m ever going to break that 1” barrier from my location. It’s why I’ve decided to have a setup now that is at 0.5”/pixel. Under near-perfect seeing for my location I’m not going below 2 pixels, but I’ll generally be around Nyquist.
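Expressed as pixels spanning the star, the three data points above show why only the finest setup approaches proper sampling (a quick sketch with the figures quoted):

```python
# Pixels spanning the best FWHM at each image scale reported above.
for scale, best in [(1.67, 2.8), (1.15, 1.7), (0.4, 1.1)]:
    print(f'{scale}"/px: best FWHM {best}" spans {best / scale:.2f} px')
```

Only the 0.4"/px rig gets close to ~3 px per FWHM; the other two sit well under, so their "best FWHM" is at least partly a sampling floor rather than a seeing measurement.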
Clear skies,
Rodney
This all sounds rather like the Heisenberg uncertainty principle really, huh?
Based on the Nyquist sampling theorem, in a ideal world we would design our imaging system to be 1/3 of the target FWHM we would like to be able to image.
So again, my conclusion is that "seeing" conditions as reported using FWHM figures should always cite the image scale for the optical system in use.
type: ICX694AL Exview CCD with ultra low dark current and vertical anti-blooming.
CCD full-resolution pixel data: pixel size: 4.54 µm x 4.54 µm; image format: 2750 x 2200 pixels
The Nyquist theorem is also known as the sampling theorem. It is the principle that, to accurately reproduce a pure sine wave, the measurement (or sample) rate must be at least twice its frequency.
So it's not really 1/3 but 1/2.
Therefore I think any measurements Mike makes on his FITS files
will be considered reasonable and able to be trusted -
such as here: https://pbase.com/strongmanmike2002/image/173195561
1.665 arc seconds seeing.
cheers
Allan
I'm certainly no expert, but 1/2 is what I had heard too..? So it was making me think that my average FWHM of 1.5-1.7" with an image scale of 0.83"/pix was about right..? But hey, I may be misunderstanding, or conflating two different concepts here ...somewhere
Mike
Whatever -
the fact is you still have a measuring stick by which you can compare different nights
and you can also measure your older FITS files -
maybe measure your old Wallaroo site: https://pbase.com/strongmanmike2002/image/173190596
and give us some values from there to confirm
what is obvious from your comparison image?
There is no right or wrong answer here. The higher the multiplier the better the sampling and the closer the image reflects what was received down the OTA. A higher multiplier will always give a better result but it is a law of diminishing returns. But I can tell you from the planetary work I have done that 2X is not enough and 3X is closer to the money. And the question of sampling applies to all forms of astro-imaging including deep sky, since it is just the translation of a photon stream to an image.
The following DropBox link to a planetary image shows the difference in resolution on a planetary image with sampling at a 2.2X multiplier compared with 3.3X. The difference is huge. https://www.dropbox.com/s/rcai6pwhkz...tions.jpg?dl=0
The 2X multiplier developed by Nyquist was for sound, which is a one-dimensional signal. A 2D array such as an image needs 3X or more, so that the fidelity of the resultant image is not discernibly different from what is potentially coming down the OTA.
Cheers, Niall
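The fragility of sampling at exactly 2x can be seen in one dimension (a sketch, assuming a noiseless pure sine): with an unlucky phase, 2 samples per cycle land on the zero crossings and the signal vanishes, while 3 samples per cycle always retain a usable fraction of the amplitude.

```python
import numpy as np

f = 1.0                        # signal frequency in Hz
t2 = np.arange(12) / (2 * f)   # exactly 2 samples per cycle, worst-case phase
t3 = np.arange(12) / (3 * f)   # 3 samples per cycle
sig2 = np.sin(2 * np.pi * f * t2)
sig3 = np.sin(2 * np.pi * f * t3)
print(np.max(np.abs(sig2)))    # ~0: every sample hits a zero crossing
print(np.max(np.abs(sig3)))    # ~0.87: most of the amplitude survives
```

This is the 1-D analogue of Niall's point: 2x is a theoretical minimum, and a practical measurement needs margin above it.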
Yes, I was only just thinking that a few moments ago. I will.
Given my identical equipment, there is no doubt comparisons of FWHM measurements between the two sites will be an accurate measure of any improvement. How it compares against others' measurements is, I guess, another story..?
Fantastic comments everyone and thanks for the contributions. Whilst we can (and probably will) continue to debate the merits of 1/2 or 1/3 sampling, the key point that I am raising is that when we are citing these figures we should always be stating the specs for the system from which we obtained the measurements. Otherwise we cannot be sure if it is the seeing that is limiting what we have measured or the limitations of the optical system.
You're right -
if Mike was using say a 2x coma corrector he would have
an arc second/pixel ratio of 0.42 asec/pix and a more reliable measurement
which could be compared to say CHART32 and other sites.
I think you are right, Rodney. The measurement of seeing via the FWHM is affected by the image scale/pixel resolution and thus the sampling. It only stands to reason that if your stars are spread over too few pixels, the measurement of the spread of the illumination is going to be compromised.
I was able to show this well with some work I did whilst imaging the Antennae Galaxies. I have an image scale of 0.5 arc secs per pixel. If you accept that a 3x multiplier is sufficient sampling to get a good estimate of the FWHM/ seeing, my system should give decent estimates when the seeing is 1.5 arc secs or greater.
I measured the FWHM using the PixInsight SubframeSelector algorithm for a range of images with FWHM values from 1.7 arc secs to 4.1 arc secs. I took these images and then used the Photoshop Filter > Pixelate > Mosaic function to increase the size of the pixels and therefore effectively downsample. This is a good method of simulating the effect of using a camera with a larger image scale. If you double the pixel size, the software takes 4 pixels and averages the result to create the equivalent of 1 larger pixel, exactly as a camera with larger pixels would do.
I then remeasured the FWHM values. In all cases the estimates of the FWHM went up as I downsampled, which proved that inadequate sampling gives inaccurate FWHM and therefore seeing estimates. See the graph, and here is a DropBox link to it if you can't see the attachment: https://www.dropbox.com/s/rkcvrf7rfl...eeing.JPG?dl=0
What I found interesting was the measurement when the seeing was poor at 4.1 arc secs. If a Nyquist multiplier of 3 is sufficient sampling to make a good estimate of the FWHM/seeing, then sampling of 4.1/3 ≈ 1.37 arc secs per pixel ought to be good enough. Therefore the estimates of that 4.1 arc sec seeing made when the sampling was 0.47 arc secs and 0.94 arc secs ought to give comparable numbers.....and they do!! The graph is quite flat between these two figures, but once the image scale gets up to 1.41 arc secs per pixel and higher, the FWHM figures go up.
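The mosaic experiment above can be reproduced numerically. The sketch below is not the PixInsight algorithm: it simply bins a noiseless 1-D Gaussian star of known FWHM into coarser pixels and measures the width from intensity-weighted second moments.

```python
import numpy as np

def measured_fwhm(true_fwhm: float, scale: float) -> float:
    """Simulate measuring a Gaussian star's FWHM (arcsec) at a given
    image scale (arcsec/pixel), using intensity-weighted second moments
    on the binned profile -- a stand-in for a real star fitter."""
    sigma = true_fwhm / 2.3548          # FWHM -> Gaussian sigma
    fine = 0.01                         # fine 0.01" grid over +-10"
    n = int(round(scale / fine))        # fine samples per simulated pixel
    x = np.arange(-10, 10, fine)
    prof = np.exp(-0.5 * (x / sigma) ** 2)
    m = (len(prof) // n) * n            # trim so the profile bins evenly
    pix = prof[:m].reshape(-1, n).sum(axis=1)      # binned pixel values
    centers = x[:m].reshape(-1, n).mean(axis=1)    # pixel center positions
    mu = np.average(centers, weights=pix)
    var = np.average((centers - mu) ** 2, weights=pix)
    return 2.3548 * np.sqrt(var)

for scale in (0.47, 0.94, 1.41, 1.88):
    print(f'{scale:4.2f} "/px -> measured {measured_fwhm(1.7, scale):.2f}"')
```

The measured width increases as the pixels grow relative to the star, in line with the trend in the graph described above.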
Comparing different systems / different software calculations / different star fields is in my opinion bad practice.
If the same system on the same target with the same software reported an FWHM of 2.0, and the next night reported 1.6, I would be happy to say that the seeing conditions on night 2 were likely better; beyond that I see no practical value.
Not really.
example - at CHART32 they report sub 1 arc second seeing conditions on a regular basis. http://www.chart32.de/index.php/recent
That's why they have such magnificent pictures.
In Australia we don't have such good seeing.
Mike has the highest observatory in Australia so everyone
wants to know if he can ever get sub arc second seeing there.
We hope to find out one day soon if the weather ever improves.
So far his measurement of 1.7 arc seconds is promising
from only his first light picture.
The difference seeing makes to the resolution in an image
This series of images shows the difference that the seeing, as measured by the FWHM, makes to the resolution in an image. These are single stretched Luminance subs for the Antennae Galaxies across a large range of seeing conditions. My image scale is 0.47 arc secs per pixel. This says to me that my sampling, using a Nyquist multiplier of 3 is good for ~ 1.5 arc secs seeing. The correlation between the seeing and the resolution as we see eyeballing the images is remarkable.
The theory says that, had the seeing been much better than 1.5 arc secs, say 1.0 arc secs, my sampling would limit the potential resolution benefit of the better seeing. Equally, as previously discussed, due to my image scale, I may not even be able to accurately measure such excellent seeing in terms of the FWHM. https://www.dropbox.com/s/jw6kcgqxqd...ated.jpeg?dl=0
I am hoping that Mike will get some great FWHM figures and I'm thinking about the future of his observatory site.
Could it encourage collaboration with other imagers who have even more advanced telescopes,
such as the Alluna Optics RC-16 with adaptive optics
that Peter Ward is using, to set up their equipment there?
Or maybe even larger telescopes?
Such telescopes have much longer back focus which allows for
adaptive optics to be installed.
If this El Niño wet weather event ends and we get some dry weather
there could be many nights of sub 1 arc second seeing.
cheers
Allan
We are in a La Niña now; it is an El Niño that we want
I'm already happy with what I have seen so far and if this continues to be reasonably consistent (as I suspect it will be) I'll be quite content .... but it would be pretty cool to see what sub arc sec seeing looks like on the screen in real time
Negative Indian Ocean Dipole near its end; La Niña to continue into summer
LA NIÑA
ENSO Forecast
La Niña continues in the tropical Pacific. Atmospheric and oceanic indicators of the El Niño–Southern Oscillation (ENSO) reflect a mature La Niña. Models indicate La Niña may start to ease in early 2023.
During summer, La Niña typically increases the chance of above average rainfall for northern and eastern Australia, and the chance of cooler days and nights for north-east Australia.
There is another factor involved in measuring the FWHM that nobody mentioned: the effect of the telescope used for the measurement.
Not only the quality of the optics and the thermal state of the OTA but even the size of the central obstruction will influence the measurement.
Take two instruments with perfect optics at thermal equilibrium, same aperture, but one with 50% central obstruction and have a look at the diffraction pattern of a star. The instrument with central obstruction will put more photons into the diffraction rings of the star therefore producing a larger spot.
I don't think the software that works out the FWHM takes into account the effect of the central obstruction.
Residual spherical aberration also has the effect of increasing the value of the FWHM reading.
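Stefan's point about the central obstruction can be illustrated with the standard diffraction formula for an annular aperture (a sketch; the obstruction ratios are illustrative, and x = 3.832 is the clear-aperture Airy radius):

```python
import numpy as np
from scipy.special import j1

def obstructed_psf(x: np.ndarray, eps: float) -> np.ndarray:
    """Diffraction intensity of a circular aperture with central obstruction
    ratio eps (0 = clear aperture); x is the dimensionless radius
    pi * D * theta / lambda."""
    if eps > 0:
        amp = (2 * j1(x) / x - eps**2 * 2 * j1(eps * x) / (eps * x)) / (1 - eps**2)
    else:
        amp = 2 * j1(x) / x
    return amp**2

# Fraction of the total energy inside the clear-aperture Airy radius:
# a bigger obstruction pushes more light out into the diffraction rings.
x = np.linspace(1e-6, 300.0, 300000)
for eps in (0.0, 0.33, 0.5):
    radial = obstructed_psf(x, eps) * x          # weight by annulus area
    frac = radial[x <= 3.832].sum() / radial.sum()
    print(f"obstruction {eps:.2f}: {100 * frac:.1f}% of light inside the Airy core")
```

As the obstruction grows, less of the star's light stays concentrated in the core and more lands in the rings, consistent with the larger spot Stefan describes.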
I am sure there are a lot of factors that will affect the FWHM measurement.
See the formula for Contrast Factor here: https://alpo-astronomy.org/jbeish/Newt_Sec_Mirror.pdf
NOTE: an unobstructed telescope yields a CF of 5.25 : 1
The accuracy of the mount too.
Adaptive optics will give a better FWHM.
There is no real way you can compare 2 different telescopes exactly
but a FWHM measurement with whatever software
is about as good as we're going to get.
Stefan has mentioned the elephant in the room IMHO.
Tight and deep sky star profiles need far more than good seeing. A smooth, stable and accurate mount is first and foremost. As for the optics, I have found that using better than "diffraction limited" optics helps buffer the effect of atmospheric turbulence.
Larger apertures apart from having higher resolution, also stabilise the image position at the focal plane (i.e. the airy disk might look like rubbish, but it stays still and is not dancing all over the place). Mechanical and thermal stability prevent loss of collimation and focus drift during the exposure (e.g. ZeroDur optics do not expand/contract, hence change focus, with temperature changes).
Taking uber-tight and deep images is a bit like Formula One racing: a few percent gained from the tyres, suspension, brakes, engine, aerodynamics etc. often adds up to a large overall gain. If the visibility is clear and the track is dry (i.e. good seeing), then all the better.