Large Integration Times
multiweb
18-01-2019, 07:40 PM
I'm interested in hearing how people stack a lot of subs, let's say hundreds or even more, from different imaging sessions, assuming the same scope/camera/filter was used. Do you stack them in batches, create multiple masters, then stack the masters to get the final image, or do you reload every single sub and stack the lot? Have you tried both and compared? Did you see any improvement one way or the other? :question:
Slawomir
18-01-2019, 08:47 PM
Interesting questions Marc.
My longest integration to date is this one: https://www.astrobin.com/364378/G/?nc=user
Although I did not think of creating separate masters of smaller batches, I experimented with calibration frames (all done in PI).
I found that for my camera and my current location, adding even a significant number of darks (40 or so) adds a bit of noise to the master, but only when integrating up to about 120-150 15-minute dithered subs.
For 200 15-minute dithered subs, there was a slight decrease in mostly background noise when calibrating with darks (in addition to using bias and flats). I think this is due to fixed pattern noise that the camera introduces, and this fixed pattern becomes measurable when stacking hundreds of subs. Visually I could not tell the difference, no matter how close and hard I looked...
Noise measurements were performed in PI using the Statistics process on various areas of the integrated images.
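For reference, a minimal numpy/astropy sketch of the kind of measurement Slawomir describes, comparing background noise between two integrated masters; the file names and patch coordinates below are hypothetical:

```python
# Rough stand-in for PI's Statistics process on a background region.
import numpy as np
from astropy.io import fits

def region_noise(path, x, y, size=100):
    """Median and std-dev of a background patch in an integrated image."""
    data = fits.getdata(path).astype(np.float64)
    patch = data[y:y + size, x:x + size]
    return np.median(patch), np.std(patch)

for name in ("master_no_darks.fit", "master_with_darks.fit"):  # hypothetical files
    med, sigma = region_noise(name, x=50, y=50)
    print(f"{name}: median={med:.2f}, sigma={sigma:.2f}")
```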
mynameiscd
18-01-2019, 09:38 PM
Really good question Marc,
I was wondering the same thing a while back, so I tried something.
I had a few hundred subs, so I put the whole lot into DSS and went to sleep.
There were 30, 60, and 120 sec subs and darks from different nights, and one lot even went through a meridian flip.
It wasn't really long (only a few hours total), but it seemed to work out fine. If I had made masters of each and then stacked those, would it have been better?
I also wonder what the pros do.
Cheers
Andy
RickS
19-01-2019, 05:33 AM
Hi Marc,
I regularly do stacks of hundreds of subs. Now that I'm doing short subs with CMOS cameras I'm sure the need to deal with thousands is going to arise before too much longer :)
A single large stack will always give the best result. It has the most information possible for the normalisation and rejection algorithms to chew on. It also offers the most consistent weighting of subs.
Of course, it may not be practical to do a single large stack and in my (limited) experience a stack of stacks (no rejection the second time) will also work well. The only caution I'd offer is that making individual stacks of data from a single night is not a good idea. Batching multiple nights will help to reduce the impact of differing sky conditions.
Cheers,
Rick.
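A toy numpy sketch of the comparison Rick describes, using plain sigma clipping on synthetic frames; real integration tools also normalise and weight subs, which this deliberately skips:

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(100.0, 10.0, size=(200, 64, 64))  # 200 fake subs

def sigma_clip_mean(stack, kappa=3.0):
    """Mean along axis 0 after rejecting pixels > kappa sigma from the median."""
    med = np.median(stack, axis=0)
    sig = np.std(stack, axis=0)
    mask = np.abs(stack - med) <= kappa * sig
    return np.sum(stack * mask, axis=0) / np.maximum(mask.sum(axis=0), 1)

# One big stack: rejection sees all 200 frames at once.
single = sigma_clip_mean(frames)

# Stack of stacks: reject within each batch of 50, then plain-average
# the four masters (no second round of rejection, per Rick's advice).
masters = np.stack([sigma_clip_mean(frames[i:i + 50]) for i in range(0, 200, 50)])
batched = masters.mean(axis=0)

print("single-stack noise:   ", single.std())
print("stack-of-stacks noise:", batched.std())
```

On clean synthetic data the two come out nearly identical; the single stack's advantage shows up when outliers (satellites, hot pixels, cloud) are rare enough that a small batch can't characterise them well.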
gregbradley
19-01-2019, 08:00 AM
If you use CCDStack you have no choice as you'll hit memory errors very quickly.
I'd say the max I can stack in one go in CCDStack with 64GB of RAM is about 40 subs.
Perhaps this is another advantage to PI.
I have regularly stacked batches. I haven't done a side-by-side comparison so I can't comment on that, except that I get good results stacking stacks.
I had also wondered about this. I agree with Rick. The statistically based combine methods would work better with larger numbers of subs, defining the outliers more accurately for noise rejection and also for data rejection techniques.
But I am pretty sure the maths would flatten out fairly quickly, so the stats would be pretty solid once you had already stacked, say, 20 subs. Further subs would not add much to the rejection process, in my untested opinion. That is to say, if your data was already noisy, it's not going to make enough difference to be noticeable. If you had good data then you should get a nice solid clean final stacked image.
But when I upgraded my computer I did go for a lot of RAM so I could stack a lot more than I could before. With 6GB of RAM I think I could typically only stack about 10 1x1 binned 32MB files.
CCDStack uses too much memory and is limited in this regard. Sequator does not seem to be so limited, and neither is PixInsight (I just stacked 200 files of about 40MB each for my LMC image).
I guess my point is I can't see this being the difference between a good image and a bad image but I know we strive for the best results with our data. I just can't see it being that important except on the faintest of objects.
Greg.
Marke
19-01-2019, 08:17 AM
Sounds like something's not right there Greg, as I regularly stack a couple of hundred at a time in CCDStack. Mine are 32MB each, I have 32GB of RAM, and I watch it get used up to about 80%, but it never fails to load them all. Win10 64-bit.
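For a back-of-envelope sense of the memory involved, a small sketch; it assumes each frame gets expanded to 32-bit floats in memory, which is an assumption, not CCDStack's documented behaviour:

```python
# Rough RAM estimate for holding N frames at once, assuming 32-bit floats.
def stack_ram_gb(n_frames, width, height, bytes_per_pixel=4):
    return n_frames * width * height * bytes_per_pixel / 1024**3

# e.g. 200 frames from a ~16-megapixel camera (32 MB each at 16 bits/pixel):
print(f"{stack_ram_gb(200, 4096, 4096):.1f} GB")  # ~12.5 GB
```

By that estimate, 200 of Marke's 32MB frames would need roughly 12.5GB as floats, which fits comfortably in 32GB; hitting errors at 40 subs suggests something else is consuming memory.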
multiweb
19-01-2019, 08:39 AM
Thanks all for the feedback. PI would not be an issue as it can stack an industrial quantity of frames thanks to its batch-like processes. CCDStack needs to load them all into RAM before doing any manipulation on the stack.
Yes, my thoughts exactly. So to mitigate the variability in quality across 10 nights, let's say I have 10 sets/sessions of ~50x10min: if I grade and register the calibrated subs, then take 5 from each set to make 10 new batches of 50 subs, my new batches will each contain roughly the same range of "good and bad" frames. So when I create my masters from the batches they should have more or less similar SNR? Would that be a better, more uniform approach? :question:
Also you said you'd do a straight combine of the masters, without data rejection this time. Any reason? Would you reject too much overall?
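A small sketch of the shuffle Marc proposes, spreading each session's subs evenly across the new batches so every batch samples every night; the session and file names are hypothetical:

```python
def interleave_sessions(sessions, n_batches):
    """sessions: list of lists of sub filenames, one list per night."""
    batches = [[] for _ in range(n_batches)]
    for subs in sessions:
        for i, sub in enumerate(subs):
            batches[i % n_batches].append(sub)  # round-robin across batches
    return batches

# 10 nights of 50 subs each -> 10 batches of 50, with 5 subs per night in each.
sessions = [[f"night{n:02d}_sub{s:02d}.fit" for s in range(50)] for n in range(10)]
batches = interleave_sessions(sessions, n_batches=10)
print(len(batches[0]), batches[0][:3])
```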
RickS
19-01-2019, 09:48 AM
I think a mix and match approach is a good idea.
So long as you're not doing small batches, the rejection in each batch should be sufficient, especially since you're integrating a lot of data and the contribution from each sub is small. A second round of rejection is going to reduce SNR without any benefit.
Cheers,
Rick.
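A sketch of that final combine under Rick's advice, assuming the batch masters are already loaded as numpy arrays: a plain average (optionally weighted by the number of subs in each batch), with no second round of rejection:

```python
import numpy as np

def combine_masters(masters, weights=None):
    """masters: array-like of shape (n_masters, H, W); weights: subs per master."""
    return np.average(np.asarray(masters), axis=0, weights=weights)

# e.g. four masters built from 50 subs each (m1..m4 are hypothetical arrays):
# final = combine_masters([m1, m2, m3, m4], weights=[50, 50, 50, 50])
```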