  #8  
19-01-2019, 09:48 AM
RickS (Rick)

 
Join Date: Apr 2010
Location: Brisbane
Posts: 10,584
Quote:
Originally Posted by multiweb
Yes, my thoughts exactly. So, to mitigate the variability of quality across 10 nights, let's say I have 10 sets/sessions of ~50x10min subs. If I grade and register them, then take 5 calibrated subs from each set to make 10 new batches of 50 subs, each new batch will contain roughly the same range of "good and bad" frames. So when I create my masters from the batches they should have more or less the same SNR? Would that be a better, more uniform approach to this?
I think a mix-and-match approach is a good idea.
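For what it's worth, the mix-and-match idea multiweb describes can be sketched in a few lines of Python. This is just an illustration of the batching scheme, not anything from PixInsight; the function name `mix_and_match` and the night/sub labels are made up for the example.

```python
import random

def mix_and_match(sessions, n_batches):
    """Deal subs from each session round-robin into n_batches new batches,
    so every batch gets an even share of every night's good and bad frames.
    sessions: list of lists of sub-frame identifiers (one list per night)."""
    batches = [[] for _ in range(n_batches)]
    for subs in sessions:
        shuffled = subs[:]         # copy so the caller's list isn't mutated
        random.shuffle(shuffled)   # randomise which subs land in which batch
        for i, sub in enumerate(shuffled):
            batches[i % n_batches].append(sub)
    return batches

# Example: 10 nights of 50 subs -> 10 batches of 50, with 5 subs per night
# in every batch, matching the scheme described above.
sessions = [[f"night{n:02d}_sub{s:02d}" for s in range(50)] for n in range(10)]
batches = mix_and_match(sessions, 10)
print(len(batches), len(batches[0]))  # 10 50
```

Because each night contributes equally to every batch, the batches end up with near-identical quality distributions, which is exactly the "more uniform" property being asked about.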

Quote:
Originally Posted by multiweb
Also you said you'd do a straight combine of the masters, without data rejection this time. Any reason? Would you reject too much overall?
So long as you're not doing small batches, the rejection in each batch should be sufficient, especially since you're integrating a lot of data and the contribution of each sub is small. A second round of rejection would just reduce SNR without any benefit.
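A quick back-of-the-envelope sketch of why a second rejection pass costs SNR: for N averaged subs the stack SNR scales roughly as sqrt(N), so discarding a fraction of already-clean data shaves SNR for no gain. The frame count and the 2% rejection fraction below are made-up numbers for illustration, not anything Rick stated.

```python
import math

def snr_gain(n_subs):
    """Relative SNR of an average of n_subs equal-quality subs (~ sqrt(N))."""
    return math.sqrt(n_subs)

n = 500    # e.g. 10 batches of 50 ten-minute subs (hypothetical)
f = 0.02   # hypothetical fraction clipped by a second rejection pass

loss = 1 - snr_gain(n * (1 - f)) / snr_gain(n)
print(f"SNR loss from a second rejection pass: {loss:.1%}")  # ~1.0%
```

The loss is small here, but the point stands: once the per-batch rejection has removed the outliers, a second pass only throws away good signal.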

Cheers,
Rick.