#1  18-01-2019, 07:40 PM
multiweb (Marc)
Large Integration Times

I'm interested in hearing how people stack a lot of subs, let's say hundreds or even more, from different imaging sessions, assuming the same scope/camera/filter was used. Do you stack them in batches, create multiple masters, then stack the masters to get the final image, or do you reload every single sub and stack the lot? Have you tried both and compared? Did you see any improvement one way or the other?

#2  18-01-2019, 08:47 PM
Slawomir (Suavi)
Interesting questions Marc.

My longest integration to date is this one: https://www.astrobin.com/364378/G/?nc=user

Although I did not think of creating separate masters from smaller batches, I experimented with calibration frames (all done in PI).

I found that for my camera and my current location, adding even a significant number of darks (40 or so) adds a bit of noise to the master, but only when integrating up to about 120-150 15-minute dithered subs.

For 200 15-minute dithered subs, there was a slight decrease in mostly background noise when calibrating with darks (in addition to using bias and flats). I think this is due to a fixed pattern that the camera introduces, which becomes measurable when stacking hundreds of subs. Visually I could not tell the difference, no matter how close and hard I looked...

Noise measurements were performed in PI using the Statistics process on various areas of the integrated images.
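
For anyone wanting to do that kind of check outside PI, a rough Python/numpy sketch follows. The file names and patch coordinates are hypothetical, and it assumes the integrations are saved as FITS with astropy available:

[CODE]
import numpy as np
from astropy.io import fits

def patch_stats(path, y, x, size=200):
    """Median and standard deviation of a square background patch."""
    data = fits.getdata(path).astype(np.float64)
    patch = data[y:y + size, x:x + size]
    return np.median(patch), np.std(patch)

# Hypothetical masters: one calibrated without darks, one with.
for label, path in [("bias+flats only", "master_no_darks.fits"),
                    ("bias+darks+flats", "master_with_darks.fits")]:
    med, sigma = patch_stats(path, y=100, x=100)
    print(f"{label}: median={med:.1f}  sigma={sigma:.2f}")
[/CODE]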

#3  18-01-2019, 09:38 PM
mynameiscd (Andy)
Really good question Marc,
I was wondering the same thing a while back, so I tried something.
I had a few hundred subs, so I put the whole lot into DSS and went to sleep.
There were 30, 60, and 120 sec subs and darks from different nights, and one lot even went through a meridian flip.
It wasn't really long (only a few hours total), but it seemed to work out fine. If I did masters of each and then stacked those, would it be better?
I also wonder what the pros do.
Cheers
Andy

#4  19-01-2019, 05:33 AM
RickS (Rick)
Hi Marc,

I regularly do stacks of hundreds of subs. Now that I'm doing short subs with CMOS cameras, I'm sure the need to deal with thousands will arise before too much longer.

A single large stack will always give the best result. It has the most information possible for the normalisation and rejection algorithms to chew on. It also offers the most consistent weighting of subs.

Of course, it may not be practical to do a single large stack and in my (limited) experience a stack of stacks (no rejection the second time) will also work well. The only caution I'd offer is that making individual stacks of data from a single night is not a good idea. Batching multiple nights will help to reduce the impact of differing sky conditions.
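
To make the two approaches concrete, here's a rough numpy sketch using synthetic data and a simple one-pass kappa-sigma rejection; PI's normalisation and sub weighting are deliberately left out:

[CODE]
import numpy as np

def sigma_clip_mean(stack, kappa=3.0):
    """Mean combine with a single pass of per-pixel kappa-sigma rejection."""
    mu = stack.mean(axis=0)
    sd = stack.std(axis=0)
    keep = np.abs(stack - mu) <= kappa * sd
    return (stack * keep).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)

# Synthetic stand-in for 200 registered, calibrated subs.
rng = np.random.default_rng(0)
subs = rng.normal(1000.0, 10.0, size=(200, 64, 64))

# 1) One big stack: rejection sees all 200 subs at once.
final_big = sigma_clip_mean(subs)

# 2) Stack of stacks: rejection within each batch of 50,
#    then a plain mean of the masters (no second rejection pass).
masters = np.stack([sigma_clip_mean(b) for b in np.array_split(subs, 4)])
final_batched = masters.mean(axis=0)
[/CODE]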

Cheers,
Rick.

#5  19-01-2019, 08:00 AM
gregbradley
If you use CCDStack you have no choice, as you'll hit memory errors very quickly.

I'd say the max I can stack in one go in CCDStack with 64 GB of RAM is about 40 subs.

Perhaps this is another advantage to PI.

I have regularly stacked batches. I haven't done side by side so can't comment on that except that I get good results stacking stacks.

I've wondered about this as well. I agree with Rick: the statistically based combine methods would work better with larger numbers of subs, defining the outliers more accurately for noise and data rejection techniques.

But I'm pretty sure the maths would flatten out fairly quickly, so the stats would be pretty solid once you had stacked, say, 20 subs. Further subs would not add too much to the rejection process, in my untested opinion. That is to say, if your data was already noisy, it's not going to make enough difference to be noticeable. If you had good data then you should get a nice, solid, clean final stacked image.
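
That hunch is easy to poke at with a quick synthetic test in Python; the sky level, noise, and outlier rate below are invented rather than measured from any camera:

[CODE]
import numpy as np

# How does kappa-sigma rejection behave as the stack grows?
# Per "pixel": Gaussian sky noise plus a 2% chance of a hot outlier.
rng = np.random.default_rng(1)
for n in (10, 20, 50, 100, 200):
    subs = rng.normal(1000.0, 10.0, size=(n, 20000))
    subs[rng.random(subs.shape) < 0.02] += 500.0   # cosmic-ray-like hits
    mu, sd = subs.mean(axis=0), subs.std(axis=0)
    keep = np.abs(subs - mu) <= 2.5 * sd
    combined = (subs * keep).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)
    print(f"n={n:3d}  residual bias = {combined.mean() - 1000.0:+.2f} ADU")
[/CODE]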

But when I upgraded my computer I did go for a lot of RAM so I could stack a lot more than I could before. With 6 GB of RAM I think I could typically only stack about ten 1x1-binned 32 MB files.

CCDStack uses too much memory and is limited in this regard. Sequator does not seem to be so limited, and neither is PixInsight (I just stacked 200 files of about 40 MB each for my LMC image).

I guess my point is I can't see this being the difference between a good image and a bad image, but I know we strive for the best results with our data. I just can't see it being that important except on the faintest of objects.

Greg.

#6  19-01-2019, 08:17 AM
Marke (Mark)
Quote:
Originally Posted by gregbradley View Post
If you use CCDStack you have no choice, as you'll hit memory errors very quickly. I'd say the max I can stack in one go in CCDStack with 64 GB of RAM is about 40 subs.
Sounds like something's not right there, Greg, as I regularly stack a couple of hundred at a time in CCDStack. Mine are 32 MB each and I have 32 GB of RAM; I watch it get used up to about 80%, but it never fails to load them all. Win10, 64-bit.

#7  19-01-2019, 08:39 AM
multiweb (Marc)
Thanks all for the feedback. PI would not be an issue as it can stack an industrial quantity of frames thanks to its batch-like processes. CCDStack needs to load them all into RAM before doing any manipulation on the stack.

Quote:
Originally Posted by RickS View Post

A single large stack will always give the best result. It has the most information possible for the normalisation and rejection algorithms to chew on. It also offers the most consistent weighting of subs.
Yes, my thoughts exactly. So, to mitigate the variability of quality across 10 nights, let's say I have 10 sets/sessions of ~50x10min: if I grade, register and take 5 calibrated subs from each set and make 10 new batches of 50 subs, my new batches will contain roughly the same range of "good and bad" frames. So when I create my masters from the batches they should have more or less similar SNR? Would that be a better, more uniform approach to this?
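
The dealing-out step itself would be trivial to script. A Python sketch of the idea, with placeholder file names, assuming the subs have already been graded and sorted best to worst:

[CODE]
def round_robin_batches(sorted_subs, n_batches):
    """Deal graded subs out so each batch spans the same quality range."""
    batches = [[] for _ in range(n_batches)]
    for i, sub in enumerate(sorted_subs):
        batches[i % n_batches].append(sub)
    return batches

# e.g. 500 calibrated, registered subs, already sorted by grade:
subs = [f"sub_{i:03d}.fit" for i in range(500)]   # placeholder names
batches = round_robin_batches(subs, n_batches=10)
[/CODE]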

Also you said you'd do a straight combine of the masters, without data rejection this time. Any reason? Would you reject too much overall?

#8  19-01-2019, 09:48 AM
RickS (Rick)
Quote:
Originally Posted by multiweb View Post
Yes, my thoughts exactly. So, to mitigate the variability of quality across 10 nights, let's say I have 10 sets/sessions of ~50x10min: if I grade, register and take 5 calibrated subs from each set and make 10 new batches of 50 subs, my new batches will contain roughly the same range of "good and bad" frames. So when I create my masters from the batches they should have more or less similar SNR? Would that be a better, more uniform approach to this?
I think a mix and match approach is a good idea.

Quote:
Originally Posted by multiweb View Post
Also you said you'd do a straight combine of the masters, without data rejection this time. Any reason? Would you reject too much overall?
So long as you're not doing small batches, the rejection in each batch should be sufficient, especially since you're integrating a lot of data and the contribution from each sub is small. A second round of rejection would reduce SNR without any benefit.
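
The final combine can then be as simple as a straight mean of the masters. A minimal sketch with synthetic stand-ins for ten batch masters; the inverse-variance weighting here is an optional extra, not essential:

[CODE]
import numpy as np

rng = np.random.default_rng(2)
masters = rng.normal(1000.0, 3.0, size=(10, 64, 64))  # stand-in batch masters

# Straight combine: optional inverse-variance weights, no second rejection pass.
w = 1.0 / masters.var(axis=(1, 2))
w /= w.sum()
final = np.tensordot(w, masters, axes=1)   # weighted mean over the masters
[/CODE]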

Cheers,
Rick.