#14  Old 18-05-2016, 01:55 PM
codemonkey (Lee "Wormsy" Borsboom)
Join Date: Jul 2013
Location: Kilcoy, QLD
Posts: 2,058
Quote:
Originally Posted by Shiraz View Post
But, if I take 100 x 1second subs and a 10 second burst of bad seeing come through, I have 10 bad subs- wouldn't they then stuff up the final stack just as much as the same 10 seconds of bad seeing would stuff up a single 100 second sub? - I didn't discard any subs in this exercise.
Interesting thread, Ray, thanks for posting.

I'm inferring from the above post that you integrated the images. I assume that the total exposure for each integration was normalised? Sorry if that was already posted and I missed it.

In addition, if the images were integrated, there are potentially a lot of things happening here. For example, Lanczos-3 interpolation is the default in PI for registration, and it has a slight sharpening effect... if you integrate more subs, does that affect the overall FWHM?
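As an aside, the mild sharpening comes from the kernel's negative lobes. Here's a quick sketch of the textbook Lanczos-3 kernel (just the standard definition, not PI's actual implementation) that shows where those lobes sit:

```python
import numpy as np

def lanczos3(x):
    """Textbook Lanczos-3 kernel: sinc(x) * sinc(x/3) for |x| < 3, else 0."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 3, np.sinc(x) * np.sinc(x / 3), 0.0)

# The negative lobes (e.g. around |x| ~ 1.5) are what give the interpolated
# image its slight ringing/sharpening compared with, say, bilinear.
print(lanczos3(np.array([0.0, 0.5, 1.5, 2.5, 3.5])))
```

Bilinear or bicubic would behave differently, which is why the choice of registration interpolation could plausibly show up in a FWHM comparison.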

What rejection algorithm did you choose? I'd be particularly interested in comparing results with no rejection at all; as pure a mean as possible would be of most interest, I think.
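To illustrate why the rejection choice matters for this comparison, here's a toy sketch (made-up numbers, not anyone's real data) of a pure mean versus a simple iterative kappa-sigma rejection on a stack where a burst of bad seeing hits 10 of 100 subs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stack: 100 "subs" of a single pixel; 10 of them hit by bad seeing.
stack = rng.normal(100.0, 5.0, size=100)
stack[:10] += 40.0

def sigma_clip_mean(data, kappa=2.5, iters=3):
    """Iterative kappa-sigma rejection, then mean of the surviving values."""
    mask = np.ones(data.size, dtype=bool)
    for _ in range(iters):
        m, s = data[mask].mean(), data[mask].std()
        mask = np.abs(data - m) < kappa * s
    return data[mask].mean()

pure_mean = stack.mean()               # pulled upward by the bad subs
clipped_mean = sigma_clip_mean(stack)  # bad subs rejected
print(pure_mean, clipped_mean)
```

With rejection enabled, the many-short-subs stack can quietly discard the bad-seeing frames, whereas a single long sub has no equivalent escape hatch; a pure mean removes that asymmetry from the test.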

A note on FWHM: this is calculated very differently in different applications. Some, like PixInsight, do it "properly", calculating it by fitting the star profile to a statistical distribution model.

Other applications, like PHD, approximate it by calculating the intensity slope between adjacent pixels.

Even applications using statistics as the basis for their calculations will vary a lot depending on the distribution model (Gaussian vs. Moffat, for example). Some might even try multiple models and use a fitting algorithm to find the best fit.
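For reference, the two models don't even share parameters, so the analytic FWHM formulas differ. Assuming the standard parameterisations (sigma for the Gaussian, alpha/beta for the Moffat):

```python
import numpy as np

def gaussian_fwhm(sigma):
    # FWHM of a Gaussian profile exp(-r^2 / (2 sigma^2))
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

def moffat_fwhm(alpha, beta):
    # FWHM of a Moffat profile (1 + (r/alpha)^2)^(-beta)
    return 2.0 * alpha * np.sqrt(2.0 ** (1.0 / beta) - 1.0)

print(gaussian_fwhm(2.0))     # ~4.71
print(moffat_fwhm(3.0, 2.5))  # ~3.39
```

Same star, two different fits, two different numbers; comparing FWHM values across applications without knowing the model is comparing apples with oranges.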

Ray is correct that a true FWHM (on an unsaturated star) should be independent of brightness, assuming a linear sensor response, but I'd be skeptical of the assumption that the reported FWHM values are calculated correctly in the first place.
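A quick numerical sketch of that brightness-independence point, using a synthetic Gaussian star profile and a deliberately naive half-max width measurement (illustrative only):

```python
import numpy as np

def naive_fwhm(profile, dx=0.01):
    """Crude FWHM estimate: total width of the region above half the peak."""
    return np.count_nonzero(profile >= profile.max() / 2.0) * dx

x = np.arange(-10.0, 10.0, 0.01)
sigma = 1.5
psf = np.exp(-x**2 / (2.0 * sigma**2))   # synthetic, unsaturated star profile

# Scaling the profile (i.e. changing star brightness with a linear sensor)
# leaves the measured FWHM unchanged.
for amp in (1.0, 50.0, 2000.0):
    print(naive_fwhm(amp * psf))
```

Once the star saturates, the top of the profile flattens and this breaks down, which is why the unsaturated caveat matters.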