#17, 09-05-2016, 07:20 PM
codemonkey (Lee "Wormsy" Borsboom)
Quote:
Originally Posted by Shiraz
I was intending to use FWHM in the first instance. Even though there are many variables that affect the resolution, the one thing that you can count on is that adding one more will always make the situation worse and increase the FWHM. If a technique like using short subs can go the other way and reduce the FWHM, it is going in the right direction and the FWHM will show if the technique can work and what lengths of subs might help. Then it will be time to start a more detailed analysis.
So far I've not seen any correlation between sub length and better FWHM in individual subs across 36 s, 144 s and 480 s, but there's been too much at play for me to be sure of anything, in particular tweaks to the autofocus settings.

I think it's still possible, and I think the shorter you go the greater the probability of better results, but I'm just not seeing it yet.

Quote:
Originally Posted by Camelopardalis
Interesting stuff Lee, just shows what you can get away with when your sensor has low read noise.
True, true. Some very low noise CMOS sensors coming out for us to play with now and there's lots of noise on the overseas forums about very short exposures... could be some interesting times ahead.

Quote:
Originally Posted by gregbradley
FWHM may be a little misleading as I would expect a shorter exposure to always have lower FWHM if only because tracking errors, PE, seeing have less chance to do their damage to FWHM.

Also this test is more for the low noise Sony sensors. For Kodak sensors with higher read noise this may not work as well. There is also a sub exposure calculator on the CCDware site. As I recall from using it in the past the results were more like 7 minutes for many popular Kodak CCDs.

Also it goes against the advice of the very top imagers, who recommend long sub-exposures (again, it depends on the sensor). Sony sensors have pretty small wells, so with shorter subs you get the advantage there of not filling up the wells and not making the outer Airy disk halos really bright, which can also make stars look fatter.

This is a massive advantage of the 16803 chip with its 100K+ well depth. It's hard to overexpose a star unless it's one of the super bright ones.

Your example though is quite compelling and opens the door for a nice exposure from a mount with high PE. Not useful though for narrowband where the noise factor is more difficult to overcome.

Greg.
Cheers for your thoughts, Greg.

At this point I'm not convinced that I'll see much gain in FWHM at these sorts of exposure lengths, in individual frames. I think very short subs might be a different story though, due to mount tracking, seeing etc. as you say.

To be honest, I expected I might get worse eccentricity, at least in the individual subs. I'd wondered, though, whether stacking those potentially less ideal individual subs might result in a better image, due to stacking algorithms being a bit more intelligent about what's rejected or included in the final integration.

I'd argue that this test works as well for Sony sensors as it does for Kodak sensors. I'm not advocating arbitrarily short subs. My interest is in making the subs as short as I can while not significantly compromising the overall SNR for the same integration time. If I had a sensor with more read noise, I'd use longer subs. If I were targeting a fainter object, I'd use longer subs.
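To put a rough number on "significantly compromising": for a fixed total integration T split into N = T / t_sub subs, the stacked SNR is approximately S*T / sqrt(S*T + B*T + N*RN^2), where S is the object flux, B the sky flux and RN the read noise per sub (dark current ignored). The only term that grows as the subs get shorter is the N*RN^2 read-noise term, so once that's small next to the sky term B*T, going shorter costs essentially nothing. That's the whole argument in one line, and it's why this is so much easier with a low read-noise sensor under a bright sky than with a noisier chip or narrowband filters.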

The spreadsheet that I attached (not pretty, I know) lets you input actual sky flux, object flux, system read noise and an integration time, from which you can draw your own conclusions about what the sub duration should be. I had a look at the CCDWare one that you mentioned, and while it's a great tool for sure, I prefer to think about things in terms of how much SNR I'm getting vs the exposure length. That lets me make what I consider a more informed decision, rather than just taking as gospel the time some tool tells me I should expose for: give me the data, and let me make the decision.
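For anyone who'd rather not open the spreadsheet, the guts of it are only a few lines. This is just a rough Python sketch of the same calculation; the function name and the flux/read-noise numbers are placeholders, not values from my rig.

Code:
# Rough version of the spreadsheet calculation: stacked SNR for a fixed
# total integration time, as a function of sub length. All numeric values
# below are placeholders; plug in your own measured sky flux, object flux
# and camera read noise.

import math

def stacked_snr(obj_flux, sky_flux, read_noise, total_time, sub_length):
    """SNR of the final stack for a fixed total integration time.

    obj_flux, sky_flux : rates in e-/s (per pixel or per aperture)
    read_noise         : read noise in e- RMS, added once per sub
    total_time         : total integration time in seconds
    sub_length         : length of each sub in seconds
    """
    n_subs = total_time / sub_length
    signal = obj_flux * total_time
    shot_noise_sq = signal + sky_flux * total_time  # object + sky shot noise
    return signal / math.sqrt(shot_noise_sq + n_subs * read_noise ** 2)

# Example: 4 hours total, comparing a few sub lengths (placeholder fluxes)
obj, sky, rn, total = 0.5, 2.0, 1.7, 4 * 3600
for sub in (30, 60, 120, 240, 480):
    print(f"{sub:4d} s subs: stack SNR = {stacked_snr(obj, sky, rn, total, sub):.1f}")

Run it with your own numbers and you can see exactly where shorter subs start to cost you SNR for a given total integration.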

Out of curiosity, with the 16803 you say it's hard to overexpose a star unless it's one of the "super bright ones." Do you have any idea what magnitude qualifies for "super bright"?

One thing I do get irritated by in my images is huge stars (mag ~9 and brighter, at least). I've seen other images of the same targets where the same stars appear much tighter, and yet I know my images are sharp (FWHM usually 1.8 - 2.2").

I've wondered whether it's (a) my image scale, (b) my processing, (c) my well depth, or (d) microlenses on the sensor (not even sure if that's a factor). Maybe all of the above, I dunno, but it does bother me. I actually like diffraction spikes on large stars because they make them less of an eyesore, imo.