View Full Version: Using short subs to improve resolution


Shiraz
23-04-2015, 11:39 AM
Hi

This idea arose in discussions with PRejto and was also recently raised in a post by Justin Tilbrook. The basic idea is that a form of lucky imaging can be applied to DSOs – you don’t need to image fast enough to freeze the seeing, but it can help if you image with short enough subs to enable you to deal with those occasions when seeing fluctuates over periods of minutes.

To illustrate with real data, the upper graph in the attached image shows the HFR for a sequence of 56 x 3 minute subs. (HFR is related to FWHM and these data correspond to FWHM of about 2-2.5 arc sec). The seeing varied fairly smoothly, but there were occasional bursts where it was worse. The lower data shows the same HFR data, but with each value being the average of a block of 7 shorter subs (ie this data represents 21 minute subs). Should I decide that I want to keep data with an HFR of say 1.2 pixels or less to maximise the resolution, I can keep 31 of the 56 short subs, but only 3 of the 8 longer subs (and one of those is marginal). Thus, I can keep > 40% more of the data if I use shorter subs – or alternatively I will get worse resolution if I choose to keep more of the longer subs. The basic problem with the longer subs is that they bundle good and bad data together and there is no way to weed out the bad without losing some of the good.
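[Editor's note: Ray's keep/reject comparison can be sketched in a few lines of code. This is a minimal illustration - the HFR values and the 1.2 px threshold below are invented for the example, not the actual session data.]

```python
# Sketch of the sub-selection idea: given per-sub HFR values (pixels),
# compare how much data survives a quality cut when grading individual
# short subs versus blocks of 7 (a proxy for one long sub per block).
# The HFR list is invented for illustration, not the actual data.

def keep_fraction(hfr_values, threshold):
    """Fraction of subs at or below the HFR threshold."""
    kept = [h for h in hfr_values if h <= threshold]
    return len(kept) / len(hfr_values)

def block_averages(hfr_values, block_size=7):
    """Average HFR over consecutive blocks - a proxy for long subs."""
    n = len(hfr_values) // block_size
    return [sum(hfr_values[i * block_size:(i + 1) * block_size]) / block_size
            for i in range(n)]

# 14 short subs: one block of steady seeing, one with a bad burst.
hfr = [1.1, 1.15, 1.1, 1.2, 1.1, 1.15, 1.1,   # good block
       1.1, 1.1, 1.6, 1.7, 1.1, 1.15, 1.1]    # burst of bad seeing

short_keep = keep_fraction(hfr, 1.2)                  # grade each short sub
long_keep = keep_fraction(block_averages(hfr), 1.2)   # grade 7-sub blocks

print(f"short subs kept: {short_keep:.0%}")   # 12/14 kept
print(f"long subs kept:  {long_keep:.0%}")    # the burst sinks a whole block
```

With these made-up numbers, grading individual subs keeps 12 of 14, while one bad burst drags an entire 7-sub block over the threshold, so half of the long-sub data is lost.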

Of course, this method is not for everyone. You will need a sensitive system with low read noise to keep noise under control, but if you have a camera with high QE and low read noise, it might be worth seeing if you can get better resolution by using short subs. Even if you don’t get huge increases in resolution, you might get significantly better SNR, since you could possibly end up keeping much more data. And of course, it doesn’t cost a cent.

Thanks for reading. Regards Ray

strongmanmike
23-04-2015, 11:57 AM
Once again I love the way you verify and quantify more or less why I have found what I have through trial and error, a great explanation Ray :thumbsup:

Mike

RickS
23-04-2015, 01:26 PM
Interesting data, Ray. On a couple of occasions, when the data justified it, I have done two integrations of my luminance - one weighted for best FWHM and one weighted for best SNR. These are blended so that the best-FWHM integration is used for the brightest parts of the image and the high-SNR integration is used for the dim parts. It should be possible to do something similar with short subs for the high-resolution data, even with a camera that doesn't have incredibly low noise (like my KAF-16803...). Food for thought :)

multiweb
23-04-2015, 01:29 PM
That's interesting to see the numbers, Ray. I too have had experience with shorter subs (3 min) and found that the FWHM of the final image was better, probably because you can be picky given the number of subs available. But read noise was an issue. I've never managed long uninterrupted subs. I don't have an obs and there's always a wind gust, a cloud or a plane that's bound to stuff one up. You want to make the most of your setup time. :)

codemonkey
23-04-2015, 05:18 PM
Great topic / work, Ray! I've seen some discussion elsewhere on identifying the shortest possible exposure that you can use while keeping the total integration time and resulting SNR the same and this is another great reason to do that. Short exposures for the win!

tilbrook@rbe.ne
23-04-2015, 05:40 PM
Thanks Ray!:)

Thanks for doing the numbers,:einstein: I'm glad to see there's some method in my madness!!:evil:

Cheers,

Justin.

gregbradley
23-04-2015, 05:49 PM
Nice work Ray. What does HFR stand for? (I looked it up and it wasn't listed.) Is it half full radius? What's its definition?

Another factor here is longer subs average out guide errors as well. So if your guiding isn't super fabulous then you can see the stars getting a bit fatter with longer subs compared to shorter subs.

Of course an assumption here is a bright object or a low read noise camera where the short subs allow the read noise to be stacked out.

Doing this with higher read noise cameras or with narrowband would stop you from getting very faint detail above the noise floor of the camera.

So like anything you need to know when this would be a good approach and when it wouldn't be.

I was surprised recently comparing 5 and 10 minute subs from my CDK17 and SX694. It was hard to notice much difference in terms of the brightness of the image. Shorter subs tended to have slightly rounder stars (guide scope - yuck, but a necessary evil with that setup).

So in conclusion I think it fair to say as a general rule:

1. Shorter subs are better with variable seeing, windy or partially cloudy conditions, or where your tracking is not 100%.

2. It's not good for maximum faint detail or for narrowband images.

3. If your tracking is top notch and your skies are clear and stable with no wind then longer exposures would be better.

How often do you think the seeing is variable like that? As a general trend I notice seeing often improves as the night progresses and possibly best at around 3am or so.

Greg.

Shiraz
23-04-2015, 06:54 PM
Thanks Mike - it's good to find agreement.


thanks for that input Rick. Much food for thought indeed - have to think about what weighting for FWHM actually means, but the idea of using different stacks for different parts of the image seems really useful.


Hi Marc. The ability to throw out subs that are spoiled for whatever reason seems to be very useful. But, you are right that read noise limits what you can do.

Hi Lee. It would be great if we could get hold of cameras with really low read noise - it would be nice to use 1 minute subs or less, and that should be possible in future.


Hi Justin. The other possibility is that maybe we are both bonkers?? Anyway, I have also been thinking about how you might optimise sub length for your camera - will PM you.


Hi Greg.

The seeing behaves like that quite often at this site, but ironically it is only possible to see fluctuations like this if you use short subs (if you use long subs you will never be aware of how much the seeing varies :lol:). I used to get major seeing improvements as the night went on, but since I put up a ROR obs, the overall seeing is much more consistent (and consistently lower) throughout a night.

You are right to point out that what I call "seeing" also incorporates tracking error - but the technique works just as well on tracking error.

HFR is the Half Flux Radius (half of the Half Flux Diameter as in http://www.cyanogen.com/help/maximdl/Half-Flux.htm). It is used in Nebulosity and Sequence Generator Pro - which I use all the time.
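[Editor's note: for readers unfamiliar with the measure, here is a rough sketch of what a Half Flux Radius calculation does. This is an illustration only - not how Nebulosity or Sequence Generator Pro actually implement it.]

```python
# Sketch of a Half Flux Radius measurement: find the radius of the
# circle (about the flux-weighted centroid) containing half the star's
# total background-subtracted flux.
import math

def half_flux_radius(pixels):
    """pixels: dict mapping (x, y) -> background-subtracted flux."""
    total = sum(pixels.values())
    # flux-weighted centroid
    cx = sum(x * f for (x, y), f in pixels.items()) / total
    cy = sum(y * f for (x, y), f in pixels.items()) / total
    # sort pixels by distance from the centroid, accumulate flux to 50%
    by_r = sorted((math.hypot(x - cx, y - cy), f)
                  for (x, y), f in pixels.items())
    acc = 0.0
    for r, f in by_r:
        acc += f
        if acc >= total / 2:
            return r
    return by_r[-1][0]

# Synthetic Gaussian star, sigma = 2 px; for a Gaussian the analytic
# half-light radius is 1.1774 * sigma, i.e. about 2.35 px here.
sigma = 2.0
star = {(x, y): math.exp(-((x - 10)**2 + (y - 10)**2) / (2 * sigma**2))
        for x in range(21) for y in range(21)}
print(f"HFR = {half_flux_radius(star):.2f} px")
```

For this synthetic star the measured HFR comes out near the analytic value of 1.1774 x sigma (a touch lower because of pixel quantisation).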

Totally agree that this method is not appropriate for NB, except maybe for the brightest objects. However, I think that it can be used for getting maximum detail from broadband images in good conditions - provided that the subs are long enough to cover the read noise, I think that a lot of short subs will generally do slightly better than a few long subs.

Regards Ray

RickS
23-04-2015, 07:24 PM
Ray,

The ultimate weighting tool is to leave the worst subs out of the integration, but PI also has a script (SubframeSelector) which allows you to calculate a weighting for each sub based on a collection of parameters including FWHM, Eccentricity and SNR. This script adds a weight value to the FITS header that is used during image integration.
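[Editor's note: Rick's description can be illustrated with a toy weighting function. The normalisation scheme and the 50/30/20 split below are invented examples, not SubframeSelector's actual expression - in PixInsight the user writes their own expression over the measured properties.]

```python
# Illustration of the kind of weighting SubframeSelector supports:
# normalise each metric across the run, then combine. The weights
# (50/30/20) and the sub values are arbitrary examples.

def normalize(value, lo, hi, invert=False):
    """Scale to [0, 1]; invert for metrics where smaller is better."""
    if hi == lo:
        return 1.0
    t = (value - lo) / (hi - lo)
    return 1.0 - t if invert else t

def sub_weight(fwhm, ecc, snr, fwhm_rng, ecc_rng, snr_rng):
    # smaller FWHM and eccentricity are better; larger SNR is better
    return (50 * normalize(fwhm, *fwhm_rng, invert=True)
            + 30 * normalize(ecc, *ecc_rng, invert=True)
            + 20 * normalize(snr, *snr_rng))

subs = [  # (fwhm_arcsec, eccentricity, snr_estimate) - invented values
    (2.0, 0.40, 30.0),
    (2.4, 0.45, 28.0),
    (3.1, 0.60, 25.0),
]
fwhm_rng = (min(s[0] for s in subs), max(s[0] for s in subs))
ecc_rng = (min(s[1] for s in subs), max(s[1] for s in subs))
snr_rng = (min(s[2] for s in subs), max(s[2] for s in subs))

for s in subs:
    print(s, "-> weight", round(sub_weight(*s, fwhm_rng, ecc_rng, snr_rng), 1))
```

The best sub scores 100, the worst 0, and the weight can then be written to the FITS header for use during integration, as Rick describes.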

Cheers,
Rick.

Shiraz
23-04-2015, 09:10 PM
Thanks Rick - looked it up following your first post - much appreciated :thumbsup:. I can understand weighting applied when adding noisy frames to a stack, but am still trying to understand what it means for resolution - do you understand it to mean that, if you only add a fraction of a dud frame, you can still get some SNR advantage from it without messing up the detail too much? I don't think that there is any resolution advantage in adding less well resolved data - seems to me that can only work to degrade resolution, even if weighted.

regards ray

RickS
23-04-2015, 09:45 PM
Ray,

I'd reject poor FWHM subs (or set weight to zero) for the high res integration. I didn't mean to imply that there would be some benefit from including these except in the high SNR integration.

Cheers,
Rick.

Shiraz
24-04-2015, 10:19 AM
thanks for the clarification Rick - looks like a really neat idea and I must try it. regards Ray

Paul Haese
24-04-2015, 10:35 AM
I agree that this won't suit everyone, though I can see its uses too. Recently, doing long subs on the Carina Nebula and incorporating short subs to capture the Homunculus is one example that I think works. You could also do this with other high dynamic range regions.

My general opinion, based on my own experience, is that shorter subs fail to overwhelm the noise, particularly with a sensor such as the KAF8300, although as you point out the right sensor in the right camera should help.

Overall though I think these theories are very contentious and the astro imaging community can be quite split on which is better. For me, HDR objects mean using shorter subs masked in to much longer subs. I would not advocate using short subs overall, but that is just my opinion.

Shiraz
24-04-2015, 10:47 AM
thanks Paul.

I appreciate that there are differing opinions and as I said, it is not going to work for cameras with low QE and high read noise.

However, the original post is not based on any theory or opinion at all - the data is real data from an imaging session and I just presented it to show what I had actually measured.

Regards Ray

Paul Haese
24-04-2015, 11:07 AM
Sorry - a poor choice of wording, Ray. Theory is often supported by evidence, which is why I said theory. You present evidence with your findings; nothing was intended to deny them.

I like the way you present your evidence. I don't always agree with your findings but it always makes me think more about the issues. And that is certainly good for any changes that might need to be made to my imaging techniques.

Shiraz
24-04-2015, 11:14 AM
Hi Greg
re your question on how seeing varies, I did a couple of analyses on multi-night data from a couple of other targets (3 to 4 minute subs) - the attached data is HFR (in pixels as before) and shows how much and how quickly the seeing varies at this site. Hope it is interesting - it seems that short term fluctuations of around 0.2 pixels or more (or about 0.4 arcsec FWHM) are common, which is a fair bit really. Regards ray

edit: none of this data is normalised to remove the effect of elevation on seeing.

Shiraz
24-04-2015, 11:25 AM
No need to apologise at all Paul :). Ideas need to be challenged and defended - the more the merrier.

PRejto
24-04-2015, 05:10 PM
Hi Ray,

A most interesting presentation that does back up some of what I measured when chasing my blue fringing issue a while back.

Sorry if this sounds dense! But, when you write:

"The lower data shows the same HFR data, but with each value being the average of a block of 7 shorter subs (ie this data represents 21 minute subs)"

I question whether taking an "average" of 7 subs actually represents a 21 min sub taken during the same interval. Wouldn't the 21 min sub be more heavily weighted towards whatever was the worst moment of seeing during the exposure? Taking an average would seem to make the sub better than it might be. I guess I'm thinking about an analogy towards guiding when I write this. It doesn't take much guide error, or very long, to destroy a sub that "averaging" wouldn't fix. I'm pretty sure you will adjust my thinking!!

Peter

Bassnut
24-04-2015, 06:14 PM
I'm right out of my expertise here, but doesn't RMS guiding count more than average or PP for extended objects on long exposures? OK, stars are destroyed by an outlier excursion, but in my experience short exposures (on extended objects) are more sensitive to bad guiding and seeing than long exposures!

Edit: I might expand on that. More data in a longer sub (and lower noise), even if it's not as sharp as a short sub, allows more application of decon and sharpening before noise becomes obvious and artifacts occur?

Shiraz
24-04-2015, 07:13 PM
Hi Peter and Fred.

I also had some misgivings about using an average of the HFR, so I tested it by taking results from a highly variable block of seeing and:
1. averaged the HFR of 7 individual subs
2. stacked the subs and measured the HFR of the result (this should be pretty much equivalent to a longer sub)
I got the same result for the two processes (<2% difference), so decided to use the much easier averaging in the final analysis.
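[Editor's note: Ray's sanity check can be reproduced on synthetic data. The sketch below assumes Gaussian star profiles with constant total flux and invented seeing widths; it only shows that the two numbers land close together, not his exact <2% figure.]

```python
# Check Ray's test on synthetic data: compare the mean of per-sub HFRs
# with the HFR of the stacked (averaged) subs. Gaussian PSFs with
# constant star flux are assumed; the sigma values are invented.
import math

STEP = 0.25                                      # sub-pixel sampling
COORDS = [i * STEP - 10.0 for i in range(81)]    # grid from -10 to +10 px

def star(sigma):
    """Flux-normalised Gaussian star sampled on the grid."""
    norm = 1.0 / (2 * math.pi * sigma * sigma)
    return [norm * math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            for y in COORDS for x in COORDS]

def hfr(img):
    """Radius about the centre enclosing half the total flux."""
    radii = [math.hypot(x, y) for y in COORDS for x in COORDS]
    total = sum(img)
    acc = 0.0
    for r, f in sorted(zip(radii, img)):
        acc += f
        if acc >= total / 2:
            return r
    return max(radii)

sigmas = [1.8, 2.0, 2.6, 3.0, 2.2, 1.9, 2.1]     # fluctuating "seeing"
subs = [star(s) for s in sigmas]

mean_hfr = sum(hfr(s) for s in subs) / len(subs)
stack = [sum(vals) / len(vals) for vals in zip(*subs)]
print(f"mean of per-sub HFRs: {mean_hfr:.3f} px")
print(f"HFR of stacked subs:  {hfr(stack):.3f} px")
```

On these synthetic subs the two numbers agree to within a few percent, consistent with Ray's observation that averaging the per-sub HFRs is a reasonable proxy for the HFR of a longer exposure.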

I agree that the actual effect of seeing on an image is almost certainly not fully indicated by HFR, but I wanted to show that this measure at least can be improved by shorter subs. I would be grateful if you have any ideas on how to better test this idea - I am at the limit for my pay scale:P.

regards Ray

edit: re deconvolution, although read noise will be lower for a few long subs, the fact that you can retain so much more good data with shorter subs (in this case 40% for the chosen rejection threshold) must compensate to some degree. FWIW, with my system, the total read noise is <10% of the total broadband noise with 3 minute subs so, by rejecting less data, I can actually get better SNR than I would with longer subs (broadband of course).
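[Editor's note: Ray's read-noise bookkeeping is easy to check with a toy noise budget. The camera and sky values below are assumptions for illustration, not his actual system.]

```python
# Back-of-envelope noise budget: shot noise from the sky background
# versus camera read noise. All values are illustrative assumptions.
import math

read_noise_e = 3.0     # e- RMS per read (assumed)
sky_rate_e = 2.0       # e-/pixel/second from sky background (assumed)
sub_length_s = 180     # a 3-minute sub

sky_e = sky_rate_e * sub_length_s                  # accumulated sky electrons
total_noise = math.sqrt(sky_e + read_noise_e**2)   # per-sub noise
frac = read_noise_e**2 / (sky_e + read_noise_e**2)
print(f"read noise is {frac:.1%} of the per-sub variance")

# For a fixed total time, splitting into more subs adds one read noise
# variance per sub - a small penalty when the sky dominates.
for n_subs, length in [(1, 3600), (20, 180), (60, 60)]:
    var = n_subs * (sky_rate_e * length + read_noise_e**2)
    print(f"{n_subs:>2} x {length:>4} s subs: total noise {math.sqrt(var):.1f} e-")
```

With these assumed values read noise is only a few percent of the per-sub variance, and splitting an hour into sixty 1-minute subs costs only a few percent in total noise - which is why a generous rejection budget can still come out ahead on SNR.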

codemonkey
24-04-2015, 08:02 PM
I think the key here is that there's some trade-offs and you need to balance those in accordance to your situation.

For me it's a matter of trying to identify "optimal exposure time" such that with an equal integration time, the SNR will be just as good as if you'd done longer exposures.

Once you can figure that out, don't go over it because you're just putting each exposure at a higher risk of being discarded due to events such as wind and tracking errors.
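[Editor's note: the "optimal exposure time" idea Lee describes is often framed as exposing just long enough that sky shot noise swamps read noise. A sketch with assumed numbers - the factor k, the read noise and the sky rate are all illustrative:]

```python
# Rule-of-thumb minimum sub length: expose until the sky background
# contributes k times the read noise variance, so splitting the total
# time into subs costs only a small SNR penalty. Values are assumed.
import math

def min_sub_length(read_noise_e, sky_rate_e, k=10.0):
    """Shortest sub (seconds) where sky variance >= k * read variance."""
    return k * read_noise_e**2 / sky_rate_e

def snr_ratio(sub_length, read_noise_e, sky_rate_e):
    """SNR vs a single ideal long exposure of the same total time."""
    sky_e = sky_rate_e * sub_length
    return math.sqrt(sky_e / (sky_e + read_noise_e**2))

rn, sky = 3.0, 2.0                  # e- RMS, e-/px/s (assumed)
t = min_sub_length(rn, sky)         # -> 45 s with these numbers
print(f"minimum sub length: {t:.0f} s")
print(f"SNR vs ideal single exposure: {snr_ratio(t, rn, sky):.1%}")
```

Below that length the read noise penalty grows quickly; above it each sub gains little SNR but risks losing more time to a wind gust or tracking excursion, which is exactly the trade-off described above.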

rat156
24-04-2015, 08:36 PM
Hi All,

Very interesting discussion. Some parts of Ray's hypothesis I would tend to agree with, some parts not.

I don't reject a sub unless it has an obvious fault, i.e. visibly poor star shapes. In most cases I get to use all my subs (ROR obs, PMX, off-axis guiding etc) - even small defects are rejected by the data processing. So, depending on the brightness of the object, I vary the exposure time. For NB imaging, 10-15 minute subs bring out the detail in the DSO; shorter subs generally don't allow the signal to be collected in the dimmer regions of extended objects.

For LRGB, 5 minute subs are all I can do, LP from Melbourne kills the subs after that.

I'm also a fan of collecting lots of subs, but not so you can reject them based on a FWHM measurement, but so the data rejection algorithms are more accurate. So I tend to let the software decide which parts of the sub to retain and which parts to discard. After this a combined image can be sharpened heavily without showing artefacts, which in essence is removing the blur induced by seeing.

Cheers
Stuart

gregbradley
25-04-2015, 02:39 PM
Isn't this just the autoguiding log graphed? In which case it looks a lot like a PEC curve. So how do you extract the seeing from the PE?
I know Pempro has a calculation to do this. I am not sure what it would be, other than it's probably the outliers in an otherwise repeating pattern.

Greg.

Shiraz
25-04-2015, 06:55 PM
agree - my philosophy as well


Very interesting post Stuart - thanks. I guess that you and Fred favour leaving data in to keep the SNR up and then deconvolving to recover detail - so you would have no interest in leaving out data that is lower in resolution. Both approaches work - I wonder which is best?


Hi Greg. No, it's nothing at all to do with the autoguiding. These are plots of how the HFR star shape parameter varies from sub to sub. I used "FITs image grader" to analyse the final subs - for example, each point in the lower graph of the second image shows the HFR value for one of the 183 individual subs in that multi-night sequence (ie these 183 points represent measurements from about 10 hours of imaging). I think that FITS image grader works somewhat similarly to CCD inspector, analysing the shapes of multiple stars across a sub - however, it reports a single average star shape parameter (HFR) for the sub. HFR is similar to FWHM/2, so each data point represents what the seeing was like over the 3-4 minutes when the sub was taken (at my image scale, an HFR of 1.2 pixels would be pretty close to a FWHM of about 2.2 arc seconds).
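[Editor's note: Ray's conversions can be written out explicitly. For a Gaussian PSF the HFR ~ FWHM/2 relation is exact (FWHM = 2.3548 sigma and the half-flux radius is 1.1774 sigma, so FWHM = 2 x HFR). The image scale below is back-derived from his example figures, not a quoted spec.]

```python
# HFR <-> FWHM <-> arcsec conversions. For a Gaussian PSF,
# FWHM = 2.3548 * sigma and half-flux radius = 1.1774 * sigma,
# so FWHM = 2 * HFR exactly. The image scale is inferred from
# Ray's example (HFR 1.2 px ~ FWHM 2.2 arcsec), not a quoted spec.

def fwhm_arcsec(hfr_px, scale_arcsec_per_px):
    """Convert a per-sub HFR in pixels to FWHM in arcseconds."""
    return 2.0 * hfr_px * scale_arcsec_per_px

scale = 2.2 / (2.0 * 1.2)    # ~0.92 "/px implied by Ray's figures
print(f"implied image scale: {scale:.2f} arcsec/px")
print(f"HFR 1.2 px -> FWHM {fwhm_arcsec(1.2, scale):.1f} arcsec")
print(f"0.2 px HFR change -> {fwhm_arcsec(0.2, scale):.2f} arcsec in FWHM")
```

At this scale the 0.2 px short-term HFR fluctuations Ray mentioned earlier correspond to roughly 0.37 arcsec of FWHM, consistent with his "about 0.4 arcsec" figure.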

Regards Ray

PRejto
26-04-2015, 09:21 AM
Ray,

Thanks for your explanation of why you chose averaging. Seems a decent experiment and conclusion*

I suppose the only definitive test would be to use two scopes and two identical cameras at the same time, one taking short subs, the other one long sub, and compare. Too hard.

Peter

* when you stacked the subs I assume you didn't do any data reject, right?

Shiraz
26-04-2015, 10:55 AM
Correct - basic average stack in Nebulosity with no smarts, although I did use hot pixel rejection in both cases, to ensure that bad pixels did not distort the HFR results. I thought about 2 scopes, but even then they would not look through the same air - and besides, as you say, too hard. However, I am reasonably satisfied that the conclusions are reliable enough for most purposes.
regards Ray

gregbradley
26-04-2015, 06:06 PM

I see. But still, how possible is it to remove the effects of autoguiding from an image so that what remains is just wind and seeing?

Add to that the fact you may get additional flexes at different angles as the night progresses to further complicate the data.

So even though an algorithm has been used to try to say "this is the seeing part", I doubt it can be done very accurately. Further evidence that this is the case is the need to do a huge TPoint model and then a huge CPU-intensive calculation to work out a supermodel, get rid of the outliers, and try different possible flexes in the system.

Even the tracking rate of the mount would not be correct at lower altitudes than near the zenith due to atmospheric effects (AP mounts enable different rates of tracking at different altitudes to compensate for this).

After having spent endless hours watching the guide errors on many many nights of different conditions I think it would be very hard to differentiate between simple PEC and seeing variability.

Planetary imagers probably know what the seeing is like the best as they fight it all the time. I wonder how often the seeing is quite stable and how often it isn't.

Regardless, though, whether it's seeing, PEC, a wind gust, balance of the mount, flex, cable drag, boundary layer on the mirror, mirror cooldown times, ground effects or polar alignment, I suppose the idea of shorter subs giving sharper images is still true. But I don't think it's scientific to say it's 100% seeing related.

I wonder if any mount even works the same all night, given there could be slight shifts in voltage from the power supply, efficiencies at one angle that don't hold at other angles, etc. You could go on and find other minor factors that influence the result. Hence TPoint supermodels.

You can't be sure of that with your data input. What percentage the data is affected by seeing would be an educated guess and probably varies by location a lot.

At my dark site I don't think seeing varies much at all. Especially 3 hours after dark onwards.

Balance would be a big one. Scopes are often top heavy and what is in balance horizontally can be badly out of balance at 75 degrees - the common imaging angle.

It would be interesting to see FWHM numbers from the really long subexposure strategy versus SNR versus stacked short subs for the same duration.

I predict it would come up with the same result as you have shown, and that it boils down to the read noise of the camera as the major factor determining exposure length at a dark sky site (the usual advice is to go longer on subs at a dark sky site using Kodak chips).

Greg.

Slawomir
26-04-2015, 07:35 PM
LOL

You have composed a rather long list of our adversaries Greg...

That's why, in an attempt to not go entirely mad while trying to solve and understand all issues, I expose for as long as stars in my subs are nice and round(ish)...:rofl:

Shiraz
26-04-2015, 09:25 PM

Hi Greg. What I did was use real measured data to show the consequences of throwing out subs in which the stars were fatter in shape than some chosen threshold. It doesn't matter at all how they got to be fat - seeing, guiding, flexure etc. all have basically the same effect, as you point out - and I have not tried to separate them out. I have used the term "seeing" to describe the combined result, but it was noted in an earlier post that the measured HFR includes more than just the atmospheric effects. What I found was that, if you decide to try for better resolution by throwing out bad subs, you can do so more efficiently using short subs rather than long ones. I was particularly surprised to find that the SNR can be better if short subs are used cf long ones - didn't anticipate that.

Re seeing stability in planetary imaging, I find that it varies over quite short timescales - it is rare to have seeing that stays stable over more than 10 seconds - it occasionally does, but not often. At my site, it is far more common to face waves of ordinary to bad seeing flowing through, with occasional bursts of a few seconds of better seeing every now and again. And of course at 50Hz framerate and 0.2 arcsec sampling, the primary things affecting the imaging are atmospheric seeing and scope PSF - guiding, flex etc are unimportant.

The measurements have nothing to do with read noise. The RN component of this data is well below 10% of the total noise - ie it is not significant.




Do you also take star size into account Slawomir, or just shape?

Regards Ray

Slawomir
27-04-2015, 06:24 AM
3nm filters tend to help big time in keeping star sizes in check. In fact, I feel that the quality of the filters used is often heavily underestimated in its impact on the overall sharpness and quality of astro images.

An example of an image resulting from collecting subs with a 3nm filter, low above the horizon (30-50 degrees) and humidity was relatively high (January in Brisbane), no subs were rejected: http://www.astrobin.com/full/148715/D/

gregbradley
27-04-2015, 01:00 PM
Thanks for posting Ray. It does shed some light on what may be the best imaging strategy on the night with a particular setup.

Greg.