IceInSpace > Equipment > Astrophotography and Imaging Equipment and Discussions

#1 - 03-12-2016, 09:45 AM - SamD (Sam), Brisbane SW
Stack SNR - Impact of resampling during alignment

Looking at Ray's imaging design spreadsheet I was reminded of some thoughts I had on stack SNR calculations, and wondered if this analysis sounds plausible.

There's no real practical benefit in this; it doesn't change the stack SNR you actually achieve. It just means that the stack SNR will often be better than the usual formula predicts.

The "normal" SNR calculation goes as follows:

Signal in one sub = 50e-
Total noise in one sub = 10e-
Hence, SNR for one sub = 5.0

But the SNR for a 4-sub stack is better: it's SQRT(4) = 2 times better, so = 10.0

In general:
Stack SNR = Individual sub SNR x SQRT(No of subs)

But, my thinking is that, in practice:
Stack SNR = Individual sub SNR x SQRT(No of subs) x 1.5
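In code, the two versions of the rule look like this (a toy helper sketched for illustration; `stack_snr` and `resample_factor` are names invented here, with 1.0 giving the textbook rule and 1.5 the proposed bonus):

```python
import math

def stack_snr(sub_signal_e, sub_noise_e, n_subs, resample_factor=1.0):
    """Stack SNR from per-sub signal and noise (in electrons).

    resample_factor=1.0 gives the textbook sqrt(N) rule; the post
    argues ~1.5 for bilinear-resampled, aligned stacks.
    """
    sub_snr = sub_signal_e / sub_noise_e
    return sub_snr * math.sqrt(n_subs) * resample_factor

textbook = stack_snr(50, 10, 4)        # 5.0 x 2 = 10.0
proposed = stack_snr(50, 10, 4, 1.5)   # 15.0
```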

This is where I get the extra 1.5 factor from...

When stacking, individual subs are aligned. For example, a sub might need to be moved by 2.5px in x and -4.5px in y to align it with the reference sub.

But, in order to do this alignment, the subs must be resampled (let's say by bilinear interpolation). In this example, the resampled pixel mapped to (100,100) in the reference sub would be computed as the mean of the original (102,95), (103,95), (102,96) and (103,96) pixels (equal weights, because both fractional offsets are exactly 0.5).

However, in taking the mean of these 4 pixels, standard error propagation says the noise (or error) in each resampled pixel is reduced from the original per-pixel noise: with equal weights of 1/4 on four pixels with uncorrelated noise, the noise is reduced by SQRT(4) = 2.

Of course, if the alignment offset was a whole number of pixels in x and y (and no rotation was necessary), resampling would not reduce the noise per pixel. In this case, the pixels, with all their original noise, would just be shifted to be directly on top of reference pixel positions.

On the other hand, if the offset was something else, like 0.25 px in x and 0.25 px in y, resampling would reduce the noise by a factor between 1 and 2. In fact, a bit of arithmetic shows it to be reduced by 1.6.

In practice, the fractional parts of the offsets between subs are random, e.g. 0.3 when the offset is 2.3 pixels. It can be shown that, for random offsets in both x and y, the average noise reduction introduced by resampling during stacking is about 1.5 (assuming bilinear interpolation).
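The 1.6 figure and the random-offset average can be checked with a few lines of Monte Carlo (a sketch assuming ideal bilinear weights and uncorrelated per-pixel noise; `bilinear_noise_factor` is a name invented here):

```python
import math
import random

def bilinear_noise_factor(a, b):
    """Std-dev reduction factor for bilinear resampling at fractional
    offset (a, b). The four weights are (1-a)(1-b), a(1-b), (1-a)b, ab,
    and uncorrelated noise scales by sqrt(sum of squared weights)."""
    return 1.0 / math.sqrt((a * a + (1 - a) ** 2) * (b * b + (1 - b) ** 2))

# The fixed offsets discussed in the post:
whole_px = bilinear_noise_factor(0.0, 0.0)    # 1.0 -- no reduction
half_px = bilinear_noise_factor(0.5, 0.5)     # 2.0 -- plain mean of 4 pixels
quarter = bilinear_noise_factor(0.25, 0.25)   # 1.6

# Average over uniformly random fractional offsets:
random.seed(1)
avg = sum(bilinear_noise_factor(random.random(), random.random())
          for _ in range(100_000)) / 100_000
```

Under these assumptions the random-offset average lands slightly above 1.5 (about 1.55), in the same ballpark as the figure quoted above.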

#2 - 03-12-2016, 10:44 AM - ericwbenson (Eric), Adelaide, Australia
Hmmm, not sure about this, it sounds like a free lunch.
I could take any of my subs and simply register them to some arbitrary half-way point between pixels and improve the subs' SNRs? Unless the resampling loses resolution it cannot gain SNR; otherwise it gains information for free.

Of course, resampling dithered subs destroys the correlated FPN, which reduces the noise and hence increases SNR; perhaps this is what you are seeing?

Regards,
EB

#3 - 03-12-2016, 11:01 AM - SamD (Sam), Brisbane SW
Quote:
Originally posted by ericwbenson:
Unless the resampling loses resolution it cannot gain SNR, otherwise it gains information for free.
However, I'm pretty sure that resampling does lose resolution, so you don't get anything for free!

For example, take a single pixel star on the original sub and resample at 0.5px in x and y. It would get smeared over surrounding pixels.

On the other hand, if the pixels in the original sub were of a uniform background, it's exactly this smearing effect that reduces the noise and improves SNR.

I think that this loss of resolution in stacking often explains why stack FWHM is usually greater than individual sub FWHM.

#4 - 03-12-2016, 11:22 AM - markas (Mark), Melbourne Australia
My understanding is as follows:

Stacking of calibrated, undithered subs (i.e. no registration, which is what you should have if your tracking is sub-pixel) does not improve S/N by sqrt(number of subs). The improvement is less, because additional noise is introduced by the calibration frames.

The reason many calibration subs should be used to make the calibration frames is to minimise the FPN of these frames. But with undithered subs, the FPN is identical in every frame, so stacking does not average it down: its contribution stays fixed while the random noise falls by sqrt(N).

However, if dithering is performed, the build-up does not occur, and the calibration noise contribution is closer to that from a single calibration frame. Dithering puts the image onto different pixels each sub; that is critical to its value in reducing FPN.
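A toy numpy sketch of that behaviour (synthetic 1D "images" with invented numbers, not real calibration data; dithering is crudely modelled as a random shift of the fixed pattern):

```python
import numpy as np

rng = np.random.default_rng(0)
n_subs, n_px = 64, 10_000
fpn = rng.normal(0.0, 5.0, n_px)   # fixed pattern noise: identical in every sub

def stack_noise(dither):
    """Std-dev of a mean stack of subs carrying FPN plus random noise."""
    acc = np.zeros(n_px)
    for _ in range(n_subs):
        pattern = np.roll(fpn, rng.integers(0, n_px)) if dither else fpn
        acc += pattern + rng.normal(0.0, 10.0, n_px)  # random read/shot noise
    return (acc / n_subs).std()

undithered = stack_noise(False)  # FPN floor survives: ~sqrt(5**2 + 10**2/64)
dithered = stack_noise(True)     # FPN decorrelated: ~sqrt((5**2 + 10**2)/64)
```

The undithered stack bottoms out near the FPN level (~5), while the dithered stack averages everything down by sqrt(N) (~1.4).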

Mark

#5 - 03-12-2016, 11:31 AM - RickS (Rick), Brisbane
In the case of Drizzle where something similar but more complex is happening, the noise in adjacent pixels is correlated. The result looks visually cleaner and noise estimates are lower, so Drizzle appears to do some noise reduction. From a science POV the noise is still there, of course.

This link has a good description but isn't working right now. I presume it's just a temporary DNS glitch:
http://www.stsci.edu/hst/HST_overvie...34.html#385457

Cheers,
Rick.

#6 - 03-12-2016, 11:49 AM - ericwbenson (Eric), Adelaide, Australia
Quote:
Originally posted by SamD:
However, I'm pretty sure that resampling does lose resolution, so you don't get anything for free!

For example, take a single pixel star on the original sub and resample at 0.5px in x and y. It would get smeared over surrounding pixels.

On the other hand, if the pixels in the original sub were of a uniform background, it's exactly this smearing effect that reduces the noise and improves SNR.

I think that this loss of resolution in stacking often explains why stack FWHM is usually greater than individual sub FWHM.
Well I generally don't see any loss in resolution from stacking resampled frames, nothing obvious at least.

But the question does make me think, is resampling a reversible process? Unlike binning which permanently loses resolution (unless you invoke different math such as deconvolution which sorta gets some resolution back).

If it is reversible then no information is lost and you could not see any improvement in SNR; but if the correlated noise in the adjoining pixels is "mixed" then it is not reversible and your hypothesis may be correct. However, I feel the improvement would be nowhere near 1.5x, and again would be due to FPN reduction. Uncorrelated noise would just redistribute itself over the four target pixels, leaving the whole thing where it started, and you could undo it; although the exact noise signature would not be restored, the average noise would stay the same.

Cheers,
EB

#7 - 03-12-2016, 03:39 PM - Shiraz (Ray), Ardrossan, South Australia
Thanks for bringing this up Sam.

just did a test.

stacked 11 subs with no alignment and then again with star alignment.

- single sub noise ~48e. stacking 11 such subs should give ~14.5e

- stacked the aligned subs and the noise was ~14e, ie the noise reduced as expected by the sqrt of the number of subs

however, with no alignment, the noise after stacking was ~20e and it varied a lot with position (your factor of 1.5?).

My take on it is that there is definitely a difference in SNR depending on alignment, but that the expected sqrt(n) relationship applies to the aligned stack. The unaligned stack is the odd one out: it has worse noise than expected due to the inherent persistence of FPN (it doesn't reduce with stacking unless it is decorrelated in some way).

You raise some very interesting issues - we use interpolation like this by default, without giving any thought to what it may mean to the images. For example, with heavy rejection of outliers, maybe it is possible to tighten up star profiles in the stacks. And what difference does the choice of interpolation algorithm make?

Last edited by Shiraz; 03-12-2016 at 04:57 PM.

#8 - 03-12-2016, 04:30 PM - Slawomir (Suavi), North Queensland
Very interesting topic - thank you Sam.

I also did a test and compared drizzle integration (scaling x2) with 'normal' integration.

As per Rick's response, the drizzled image had slightly lower noise in the low-signal/background areas. I compared the standard deviation in small areas of the two images and got about a 15-17% decrease in standard deviation. Total integration (3nm Ha) was over 12 hours.

There are a few screen shots for reference. Drizzled data is on the left.

The native image scale is 1.33"/pixel, while drizzle integration results in half of that (about 0.67"/pixel).

Suavi
Attached thumbnails: drizzle1.jpg, drizzle2.jpg, drizzle3.jpg

#9 - 03-12-2016, 05:07 PM - Shiraz (Ray), Ardrossan, South Australia
perhaps do a resample by 2.0 on the undrizzled image to compare it with the drizzled image at the same scale?

#10 - 03-12-2016, 05:57 PM - SamD (Sam), Brisbane SW
Quote:
Originally posted by Shiraz:
just did a test.
stacked 11 subs with no alignment and then again with star alignment.
Actually, I hadn't thought of doing this! Despite being pretty terrible to look at, non-aligned stacks are a good experimental control to compare against aligned stacks.

I picked a set of 50 luminance subs with plenty of light pollution and did a similar test. I found:

Non-aligned stacks: Noise = Original Noise / SQRT(N) x 1.15
Aligned stacks: Noise = Original Noise / SQRT(N) x 0.74

Hence, I didn't achieve the full SQRT(N) reduction in noise for non-aligned stacks (maybe FPN becomes a factor).

However, I did "beat" the normal SQRT(N) reduction in aligned stacks, by 1/0.74 = 1.35 - not quite the 1.5 SNR improvement I was looking for (again maybe FPN limits the noise reduction in larger stacks).

The important bit is that my noise in aligned stacks was lower than that in non-aligned stacks by 1.15 / 0.74 = 1.55, like Ray reports, and close to the 1.5 in the theory of resampling after alignment.

In my experience anyway, I'd therefore go with the formula:
Stack SNR = Sub SNR x SQRT(N) x 1.35

But it is complicated by the impact of FPN in stacks and also by the difficulty of consistently measuring noise in images, and hence measuring SNR.

In the attached spreadsheet, I measured the noise by programmatically dividing the image into rectangles and computing a kappa-sigma standard deviation in each (to stop stars from inflating the noise estimate). I then manually picked a few rectangles in starless areas to verify the calculated noise.
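A minimal sketch of that kind of measurement, assuming the description above (the function names and the tile size are invented here, and the rejection details may differ from the spreadsheet):

```python
import numpy as np

def kappa_sigma_std(tile, kappa=3.0, iters=5):
    """Std-dev of a tile after iteratively rejecting pixels more than
    kappa sigma from the median, so stars don't inflate the estimate."""
    vals = np.asarray(tile, dtype=float).ravel()
    for _ in range(iters):
        med, std = np.median(vals), vals.std()
        keep = np.abs(vals - med) < kappa * std
        if keep.all():
            break
        vals = vals[keep]
    return vals.std()

def image_noise(img, tile=64):
    """Median of the per-rectangle clipped std-devs over a grid."""
    h, w = img.shape
    stds = [kappa_sigma_std(img[y:y + tile, x:x + tile])
            for y in range(0, h - tile + 1, tile)
            for x in range(0, w - tile + 1, tile)]
    return float(np.median(stds))
```

On a synthetic flat frame with Gaussian noise the estimate stays close to the true sigma even after a bright "star" is painted in, which is the point of the clipping.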
Attached: Stack_NoiseReduction_2016.zip

#11 - 03-12-2016, 07:23 PM - SamD (Sam), Brisbane SW
Quote:
Originally posted by ericwbenson:
But the question does make me think, is resampling a reversible process?
Pretty sure it's not: shifting by half a pixel right (by linear interpolation), then back left again, is lossy even in 1D.
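That 1D experiment fits in a few lines (`shift_half` is a hypothetical helper; at a half-pixel offset, linear interpolation is just the mean of each neighbouring pair):

```python
def shift_half(xs):
    """Shift a 1D signal by half a pixel via linear interpolation:
    each output sample is the mean of two adjacent input samples."""
    return [(a + b) / 2 for a, b in zip(xs, xs[1:])]

signal = [0, 0, 10, 0, 0]    # a "one-pixel star"
once = shift_half(signal)    # [0.0, 5.0, 5.0, 0.0]
twice = shift_half(once)     # [2.5, 5.0, 2.5] -- peak flattened, not restored
```

Shifting back does not recover the original [0, 10, 0] interior, so the operation is indeed lossy.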
Attached: Resampling.zip

#12 - 03-12-2016, 10:15 PM - Shiraz (Ray), Ardrossan, South Australia
just tested another dataset with 16 very similar dithered, calibrated and aligned subs. the noise in a few subs chosen at random varied from 4.55 to 5.5 with a mean of about 5. A stack without any rejection produced a noise level of 1.24, which is almost exactly 1/4 of the average sub noise levels - as it should be for 16 subs. I am not seeing any gain at all over the expected sqrt(n).

ie, I get Stacknoise = Subnoise/sqrt(n)

A stack of the unaligned subs had noise about 25% higher than that from the aligned stack; I assume this indicates some residual FPN.

Last edited by Shiraz; 04-12-2016 at 08:15 AM.

#13 - 04-12-2016, 07:45 AM - RickS (Rick), Brisbane
Quote:
Originally posted by Shiraz:
just tested another dataset with 16 very similar calibrated subs. the noise in a few subs chosen at random varied from 4.55 to 5.5 with a mean of about 5. A stack without any rejection produced a noise level of 1.24, which is almost exactly 1/4 of the average sub noise levels - as it should be for 16 subs. I am not seeing any gain at all over the expected sqrt(n) due to alignment before stacking.

ie, I get Stacknoise = Subnoise/sqrt(n)

A stack of the unaligned subs had noise about 25% higher than that from the aligned stack; I assume this indicates some residual FPN.
That makes sense, Ray. Shot noise imposes an upper bound on SNR and the best you can do is minimise the effect of read noise, thermal current noise and FPN.

#14 - 04-12-2016, 10:14 AM - Shiraz (Ray), Ardrossan, South Australia
just did some more testing - seems I missed the point in my last posting.

Since the key operation is the alignment interpolation, I reasoned that it should apply equally well to a single sub, thereby getting around the FPN issue. So I took a single sub and aligned it to another dithered sub using a variety of interpolation algorithms in PI. Results for both noise and FWHM are:

original sub: noise = 5.53, FWHM = 3.73
bilinear interpolation: noise = 3.17, FWHM = 3.97
Lanczos3 interpolation: noise = 4.74, FWHM = 3.72
bicubic bspline: noise = 2.73, FWHM = 4.07
Mitchell: noise = 3.13, FWHM = 3.96
CatmulRom: noise = 3.65, FWHM = 3.84
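The ordering in this list can be sanity-checked with a back-of-envelope sketch (idealized kernels, uncorrelated noise, worst-case half-pixel shift assumed; the helper names are invented here):

```python
import math

def lanczos_weight(x, a=3):
    """Lanczos kernel L(x) = sinc(x) * sinc(x/a) with normalized sinc."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def noise_factor_1d(weights):
    """1D noise-reduction factor 1/sqrt(sum of squared normalized weights);
    for a separable kernel the 2D factor is this value squared."""
    s = sum(weights)
    return 1.0 / math.sqrt(sum((w / s) ** 2 for w in weights))

# Half-pixel shift: bilinear uses 2 equal taps, Lanczos3 uses 6 taps.
bilinear = noise_factor_1d([0.5, 0.5])  # ~1.414 in 1D, i.e. 2.0 in 2D
lanczos3 = noise_factor_1d([lanczos_weight(0.5 - k) for k in range(-2, 4)])
```

The Lanczos3 factor comes out around 1.13 in 1D (~1.27 in 2D), far less smoothing than bilinear, which is at least qualitatively consistent with the measurements above.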

conclusions:
- As Sam noted, there is a smoothing effect. It depends on the algorithm chosen (and presumably on the nature of the noise).
- The smoothing may be accompanied by an increase in FWHM, depending on interpolation algorithm.
would be nice to have results from other datasets if anyone can spare the time.

Thanks very much for the insight Sam.

regards Ray

Last edited by Shiraz; 04-12-2016 at 01:40 PM. Reason: get it right!

#15 - 04-12-2016, 10:48 AM - ericwbenson (Eric), Adelaide, Australia
Quote:
Originally posted by SamD:
Pretty sure it's not: shifting by half a pixel right (by linear interpolation), then back left again, is lossy even in 1D.
Hi Sam,
Looks like you are right: linear interpolation is mathematically the same as a moving average of width 2. Features get averaged out, and the effect is strongest for small PSFs. Hence oversampling (beyond Nyquist) is not always a bad thing. (Recall that 2 pixels per PSF is the critical Nyquist limit, which strictly applies to bandwidth aliasing in the frequency domain... I always wondered why it was taken so literally in 2D imaging applications.)

I redid your 1D experiment in Excel and graphed it for clarity. The attached chart shows the effect of two consecutive linear interpolations on 3 made-up data sets (FWHM = 4, 3, 2 pixels). The green points (first interpolation) all fall exactly on the piecewise-linear curve (blue line) between data points (open blue circles) and seemingly reproduce the correct signal. However, it is an illusion: shifting the points by half a pixel again (orange circles) falls off the blue line. Information about the peak structure was lost; how much depends on the size of the PSF.

So just by resampling our data we are performing a small (2x2) averaging filter. Fancier algorithms such as bicubic and Lanczos may preserve detail at smaller PSFs (a 1.5x1.5 effective kernel?) but they have the unavoidable side effect of generating ringing artefacts (which is effectively aliasing in the frequency domain; still no free lunch!).

Regards,
EB
Attached thumbnail: LinearInterpolation_Effect_on_FWHM.png