May I chime in here to either provide further clarity or confuse the hell out of everyone.
Through normalisation in CCDStack, differing exposure durations are weighted appropriately, allowing you to combine them all if you so desire with little to no ill effect on the data rejection algorithm. I regularly do this when creating a synthetic luminance from LRGB subs.
Clearly, whether you want to go down this path depends on what you're trying to achieve in the image; e.g. controlling the target's dynamic range effectively is best suited to layering, as many have previously advised.
I'm not sure how dark frames got brought into this equation, but I'll add that you can use one dark frame for all light frames with differing exposures, provided you also have a bias frame. The bias frame is subtracted from the dark frame, which turns it into a thermal frame that can be scaled to match the thermal signal in any light frame with an exposure shorter than the dark frame's duration. However, there is a general rule here: the dark frame should be at least 2-3x the exposure length of the longest light frame you expect to calibrate, to improve the scaling accuracy. So if all you intend to capture are 300s, 600s and 900s light frame subs, the master dark frame should be at least 1800s (30 min, i.e. 2 x 900s). The cool thing here is that if you've got an 1800s master dark, there's no need to scale a thermal frame if you decide to take some 1800s light frame subs - so in many ways you improve the versatility of the calibration library.
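To make the scaling step concrete, here's a minimal NumPy sketch of the arithmetic described above. This is not CCDStack's actual code; the function name, the 500 ADU bias level and the 0.02 ADU/s dark current are made-up illustration values. The idea is just that dark current accumulates roughly linearly with time, so once the bias is removed the thermal frame can be scaled by the exposure ratio.

```python
import numpy as np

def scale_dark(master_dark, master_bias, t_dark, t_light):
    """Scale a long master dark to match a shorter light exposure.

    Subtracting the bias leaves only the thermal signal, which grows
    roughly linearly with exposure time, so it can be scaled by the
    exposure ratio and the bias added back.
    """
    thermal = master_dark - master_bias            # thermal signal only
    return master_bias + thermal * (t_light / t_dark)

# Toy frames: 500 ADU bias offset, 0.02 ADU/s dark current (hypothetical).
bias = np.full((2, 2), 500.0)
dark_1800 = bias + 0.02 * 1800                     # 1800s master dark

# Scale it down to calibrate a 600s light frame.
scaled_600 = scale_dark(dark_1800, bias, t_dark=1800, t_light=600)
# Thermal part is now 0.02 * 600 = 12 ADU above the bias level.
```

Note that only the thermal component scales; the bias is a fixed readout offset and must never be multiplied by the exposure ratio, which is exactly why the bias frame is needed in the first place.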