#24  25-04-2015, 06:55 PM
Shiraz (Ray)
Quote:
Originally Posted by codemonkey
I think the key here is that there are some trade-offs, and you need to balance them according to your situation.

For me it's a matter of trying to identify the "optimal exposure time": the sub length at which, for equal total integration time, the SNR is just as good as if you'd used longer exposures.

Once you've worked that out, don't go over it, because longer subs just put each exposure at a higher risk of being discarded due to events such as wind and tracking errors.
Agree - my philosophy as well.
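For anyone who wants to put rough numbers on that trade-off, here's a minimal sketch under a simple shot-noise-plus-read-noise model - the object rate, sky rate and read noise below are placeholder figures, not measurements from any particular setup:

Code:
import numpy as np

def stack_snr(total_s, sub_s, obj_rate, sky_rate, read_noise):
    """SNR of a mean stack: total_s seconds split into subs of sub_s.

    obj_rate and sky_rate are in e-/s/pixel; read_noise is e- RMS per sub.
    The shot-noise terms depend only on total time, but read noise is
    paid once per sub, so shorter subs accumulate more of it.
    """
    n_subs = total_s / sub_s
    signal = obj_rate * total_s
    variance = (obj_rate + sky_rate) * total_s + n_subs * read_noise ** 2
    return signal / np.sqrt(variance)

# Example: 10 h total on a faint target under moderate sky glow.
for sub in (30, 60, 120, 240, 600, 1800):
    snr = stack_snr(36000, sub, obj_rate=0.05, sky_rate=0.5, read_noise=5.0)
    print(f"{sub:5d} s subs -> SNR {snr:6.1f}")

In this model the "optimal" sub length is roughly where the per-sub sky noise swamps the read noise (sky_rate * sub_s well above read_noise squared); beyond that, longer subs buy almost no extra SNR but raise the cost of having to throw one away.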

Quote:
Originally Posted by rat156
Hi All,

Very interesting discussion. Some parts of Ray's hypothesis I would tend to agree with, some parts not.

I don't reject a sub unless it has an obvious fault, i.e. visibly poor star shapes. In most cases I get to use all my subs (ROR obs, PMX, off-axis guiding, etc.), since even small defects are rejected by the data processing. So I vary the exposure time depending on the brightness of the object. For NB imaging, 10-15 minute subs bring out the detail in the DSO; shorter subs generally don't collect enough signal in the dimmer regions of extended objects.

For LRGB, 5-minute subs are all I can do; LP from Melbourne kills the subs after that.

I'm also a fan of collecting lots of subs - not so I can reject them based on a FWHM measurement, but so the data-rejection algorithms are more accurate. So I tend to let the software decide which parts of each sub to retain and which to discard. After this, the combined image can be sharpened heavily without showing artefacts, which in essence removes the blur induced by seeing.

Cheers
Stuart
Very interesting post Stuart - thanks. I take it that you and Fred favour leaving all the data in to keep the SNR up and then deconvolving to recover detail - so you would have no interest in leaving out the lower-resolution subs. Both approaches work - I wonder which is best?
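For anyone who hasn't met the pixel-rejection stacking Stuart describes, here's a minimal sigma-clipping sketch - illustrative thresholds only, not the actual algorithm his software uses:

Code:
import numpy as np

def sigma_clip_stack(subs, sigma=2.5, iterations=3):
    """Mean stack with per-pixel outlier rejection across subs.

    subs has shape (n_subs, height, width); returns the clipped mean.
    Real stacking software normalises and weights the subs first -
    this sketch skips that.
    """
    data = np.asarray(subs, dtype=float)
    mask = np.ones(data.shape, dtype=bool)          # True = pixel kept
    for _ in range(iterations):
        stack = np.where(mask, data, np.nan)
        mean = np.nanmean(stack, axis=0)
        std = np.nanstd(stack, axis=0)
        mask &= np.abs(data - mean) <= sigma * std  # drop outlying pixels
    return np.nanmean(np.where(mask, data, np.nan), axis=0)

# Toy example: 20 noisy "subs", one with a bright satellite streak.
rng = np.random.default_rng(1)
subs = rng.normal(100.0, 5.0, size=(20, 64, 64))
subs[7, 30, :] += 500.0                             # streak in sub 7 only
result = sigma_clip_stack(subs)
print(result[30].mean())                            # streak largely gone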

Quote:
Originally Posted by gregbradley
Isn't this just the autoguiding log graphed? In that case it looks a lot like a PEC curve, so how do you extract the seeing from the PE?
I know PEMPro has a calculation to do this. I'm not sure what it would be, other than it's probably the outliers in the otherwise repeating pattern.

Greg.
Hi Greg. No, it's nothing at all to do with the autoguiding. These are plots of how the HFR star-shape parameter varies from sub to sub. I used "FITS image grader" to analyse the final subs - for example, each point in the lower graph on the second image shows the HFR value for one of the 183 individual subs in that multi-night sequence (i.e. these 183 points represent measurements from about 10 hours of imaging). I think that FITS image grader works somewhat like CCDInspector, analysing the shapes of multiple stars across a sub - however, it reports a single average star-shape parameter (HFR) for the sub. HFR is roughly FWHM/2, so each data point represents what the seeing was like over the 3-4 minutes when the sub was taken (at my image scale, an HFR of 1.2 pixels corresponds to a FWHM of about 2.2 arc seconds).
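To make that conversion explicit - a quick back-of-envelope, assuming a roughly Gaussian star profile (for which HFR is close to FWHM/2) and taking 0.92 arcsec/pixel as a stand-in for my image scale:

Code:
# For a roughly Gaussian star profile, HFR (half-flux radius) is close
# to half the FWHM, so:
#   FWHM_arcsec ~= 2 * HFR_pixels * image_scale
IMAGE_SCALE = 0.92  # arcsec/pixel - placeholder, not an exact figure

def hfr_to_fwhm_arcsec(hfr_px, scale=IMAGE_SCALE):
    return 2.0 * hfr_px * scale

print(hfr_to_fwhm_arcsec(1.2))  # ~2.2 arcsec, matching the numbers above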

Regards Ray
