Motivated by a friend who recently asked whether imaging with the Moon up was worthwhile, I dug out a back-of-the-envelope calculation I made a while ago so that I could give a better answer than ... err ... maybe not.
Thought it might be interesting to share, as the result is surprisingly neat and simple.
Suppose we have a bunch of N1 images with sky brightness B1 (i.e. the brightness of the background above that of a dark frame - measured in whatever units you like). And we have a bunch of N2 images with sky brightness B2. Assume B2 > B1 for argument's sake, i.e. the second set of images was taken in worse conditions than the first. The question is ... will adding the N2 images to a stack of the N1 images make the final image worse? Should we ditch them and just stack the N1 images?
It turns out (via a simple calculation that simplifies down surprisingly well) that using the new images will make the final image better if
B2/B1 < 2 + N2/N1
(assuming sky limited exposures, same exposure time, same seeing, same transparency, no fancy stacking algorithms, etc)
In other words, if your new sky conditions (say with the Moon up, or at a suburban site instead of the dark sky site where you took the original set of N1 images) have less than twice the sky brightness, they are always going to help reduce noise. And if we have a lot of them (N2 is large), they can be even brighter and still be of benefit. But at least this gives you a quantitative way to make a decision. (For example, the sky brightness at my dark sky site is about 5x lower than at my suburban home, i.e. B2/B1 is about 5 for the home frames. Hence I'd need more than 3 times as many images from home in the stack as from the dark sky site to make it worth using the home images.)
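(For anyone who wants to check the algebra: in the sky-limited case the stack SNR goes roughly as (number of frames) / sqrt(sum of the per-frame sky brightnesses); require the combined stack to beat the N1-only stack and the inequality above drops out after a bit of rearranging.) Here's a rough Python sketch of that comparison, using my own toy numbers - sky-limited frames, equal exposures, a nominal target signal of 1 unit per frame, plain stacking:

```python
import numpy as np

def stack_snr(frame_skies, signal_per_frame=1.0):
    """SNR of a plain stack: target signal adds linearly with frame count,
    sky noise adds in quadrature (sky-limited, so variance ~ sky brightness)."""
    frame_skies = np.asarray(frame_skies, dtype=float)
    return frame_skies.size * signal_per_frame / np.sqrt(frame_skies.sum())

# Worked example from above: home sky ~5x brighter than the dark site (B2/B1 = 5).
n1, b1, b2 = 10, 1.0, 5.0
for n2 in (20, 30, 40):
    snr_dark_only = stack_snr([b1] * n1)
    snr_combined = stack_snr([b1] * n1 + [b2] * n2)
    criterion = b2 / b1 < 2 + n2 / n1          # B2/B1 < 2 + N2/N1
    print(f"N2={n2}: dark-only SNR={snr_dark_only:.3f}, "
          f"combined SNR={snr_combined:.3f}, rule says add={criterion}")
```

With N1 = 10 dark-site frames, 20 home frames make the stack worse, 30 is the break-even point, and 40 tips it in favour of stacking them - exactly what the rule says.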
Another thing we can look at is how quickly the benefit decreases. We consider the case where we have a lot of images with sky brightness B1 and we add just one new image with brightness B2. What is its utility in reducing noise in the updated stack? Normalising so that if B2=B1 it has utility 1, we find its utility is roughly
2 - B2/B1.
So its utility decreases linearly with brightness - and it drops to zero (i.e. the frame is useless) when B2 = 2*B1, consistent with the criterion above when N1 is large.
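A similar toy check for the single-frame utility (same assumptions as before, plus a large existing stack so the approximation holds):

```python
import numpy as np

def stack_snr(n_frames, total_sky, signal_per_frame=1.0):
    """Same sky-limited SNR as before, written in terms of total accumulated sky."""
    return n_frames * signal_per_frame / np.sqrt(total_sky)

def marginal_utility(n1, b1, b2):
    """SNR gain from adding one frame of sky brightness b2 to n1 frames of b1,
    normalised so that adding a frame with b2 = b1 scores exactly 1."""
    base = stack_snr(n1, n1 * b1)
    gain_b2 = stack_snr(n1 + 1, n1 * b1 + b2) - base
    gain_b1 = stack_snr(n1 + 1, n1 * b1 + b1) - base
    return gain_b2 / gain_b1

n1, b1 = 200, 1.0                              # a large existing stack
for ratio in (1.0, 1.5, 2.0, 2.5):
    exact = marginal_utility(n1, b1, ratio * b1)
    print(f"B2/B1={ratio}: exact utility={exact:.3f}, approx 2 - B2/B1 = {2 - ratio:.3f}")
```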
This can provide a quantitative guide to when it's worth imaging at all. Roughly ... if the sky brightness is more than double your baseline sky brightness, have a beer/watch Netflix, unless you want to commit to taking a LOT of images in those conditions (i.e. make N2 large). Even if it is only close-ish to that threshold ... I'd start to ponder firing up Netflix and visiting the fridge once B2 is more than about 1.5 * B1, as then my images are less than half as effective as the better images at reducing noise.
Obviously there are a lot of assumptions here, and the complexity of real-world cases means this is no more than an educated guideline at best. But at least it gives a quantitative answer to a question that is otherwise just met with a gut-feel response. And it is easy to monitor, as e.g. SGP can report the median image ADU as you take images. Just subtract the median of your dark frames from this to get a measure of the sky brightness of that image.
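If you want to do that check outside of SGP, something along these lines works - a minimal sketch assuming your lights and darks are FITS files and astropy is installed; the file names are just placeholders:

```python
import numpy as np
from astropy.io import fits   # assumes astropy is installed

# Per-frame sky brightness estimate: median of the light frame minus the median
# of a matching dark (same exposure/temperature). File names are placeholders.
dark_median = np.median(fits.getdata("master_dark.fits"))

for name in ("light_001.fits", "light_002.fits"):
    sky = np.median(fits.getdata(name)) - dark_median
    print(f"{name}: sky brightness ~ {sky:.0f} ADU")
```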