Quote:
Originally Posted by LewisM
Well, I usually don't need or take darks with the SXVR-M25C (and the manual suggests they're not important either, and can DEGRADE the result), so I don't even have a library of them.
I'm curious what they mean by "degrade" the result? If you're shooting a bright diffuse object with only say 1 or 2 subs, then sure - I can understand that dark subtracting hot pixels might leave you with a few unsightly black spots.
However, if you're stacking multiple shots and dithering between them, then dark subtraction is very useful for removing the fixed-pattern thermal signal and hot pixels before the subs are combined.
If you're applying flats, then dark and dark-flat subtraction is essential - otherwise the flat division scales the uncorrected bias/thermal pedestal along with the signal and artificially boosts your read/thermal noise.
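To make that concrete, here's a rough Python/numpy sketch of the usual calibration order. The frame names and the simple median normalisation are just placeholders for whatever your stacking software actually does:

import numpy as np

# Hypothetical frames already loaded as float arrays (e.g. from FITS):
#   light, dark       - matched exposure/temperature
#   flat, dark_flat   - matched to each other
def calibrate(light, dark, flat, dark_flat):
    light_d = light - dark                    # remove thermal signal + bias from the light
    flat_d = flat - dark_flat                 # and from the flat
    flat_norm = flat_d / np.median(flat_d)    # normalise so division keeps the overall level
    # Skip the two subtractions and the bias/thermal pedestal gets divided by the
    # flat as well - i.e. scaled up wherever the flat is dim (corners, dust shadows).
    return light_d / flat_norm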
Quote:
I'll continue to probably NOT use darks and use kappa-sigma median, but just wondering what/why this occurs in Entropy/maximum - obviously because of the high tonal contrast with these settings?
I had a quick read of the academic papers that DSS refers to for entropy-weighted stacking. Here's a quick go at an intuitive explanation:
One way to think of entropy is "information content". A completely black image (RGB = 0 0 0) contains very little information (low entropy) because you can reproduce it with the instructions "every pixel is black". On the other hand, a well-exposed photo contains a lot of information (high entropy) because to reproduce it you basically need to know the RGB value of each individual pixel - it takes far more instructions.
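If you want to see that in numbers, here's a quick sketch of Shannon entropy over a pixel histogram (assuming numpy and 8-bit data - not necessarily the exact measure the DSS papers use, but the same idea):

import numpy as np

def shannon_entropy(img, bins=256):
    counts, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = counts[counts > 0] / counts.sum()     # probability of each pixel value
    return -np.sum(p * np.log2(p))            # entropy in bits

flat_black = np.zeros((100, 100))                   # "every pixel is black"
detailed = np.random.randint(0, 256, (100, 100))    # lots of variation

print(shannon_entropy(flat_black))   # 0 bits - no information needed
print(shannon_entropy(detailed))     # ~8 bits - close to the 8-bit maximum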
Think of the M42 core taken with long exposures: it will probably be blown out to white (low entropy) but the rest of the nebula will be nicely detailed (high entropy). However, with short exposures the core will be highly detailed (high entropy) but the rest of the nebula will be faint or black (low entropy).
If you were to stack together an M42 image from both long and short exposures, you'd want the core to take the detail from the short exposures, and the surrounding nebulosity to take the detail from the long exposures. That is, you maximise overall image detail by selecting each part of the image from the sub with the most entropy. This is basically what entropy-weighted stacking does.
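Very roughly - and this is only my guess at the mechanics, not DSS's actual code - the stacking step looks something like this: estimate an entropy for each small region of each sub, then use it as the pixel weight:

import numpy as np

def local_entropy(img, tile=32, bins=64):
    # Per-tile entropy broadcast back to full resolution - a crude stand-in
    # for however DSS estimates entropy per region.
    h, w = img.shape
    ent = np.zeros_like(img, dtype=float)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = img[y:y+tile, x:x+tile]
            counts, _ = np.histogram(patch, bins=bins)
            p = counts[counts > 0] / counts.sum()
            ent[y:y+tile, x:x+tile] = -np.sum(p * np.log2(p))
    return ent

def entropy_weighted_stack(subs):
    # subs: list of registered frames, all the same shape. Each output pixel is
    # an entropy-weighted average, so the most detailed sub dominates each region.
    weights = np.array([local_entropy(s) for s in subs]) + 1e-6   # avoid /0 in flat areas
    stack = np.array(subs, dtype=float)
    return np.sum(stack * weights, axis=0) / np.sum(weights, axis=0)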
In your case - think of the entropy in a relatively dark, uniform area of sky compared to a similar patch of sky with a few hot pixels thrown in. The hot pixels basically make the image more detailed (hence higher entropy) and so entropy-weighted stacking will preferentially use the hot pixels... does that make sense?
For each set of equal length exposures, kappa-sigma (or similar) stacking of calibrated subs should give you the "best" results. If you had multiple sets of varying length subs (e.g. short/long for M42), then you could stack those calibrated/stacked subs with entropy-weighting to maximise the amount of detail in your final output.
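For completeness, a bare-bones kappa-sigma clipped mean looks something like the below. DSS's "median kappa-sigma" variant replaces rejected values with the median rather than just dropping them, so treat this purely as an illustration of the rejection idea:

import numpy as np

def kappa_sigma_stack(subs, kappa=2.5, iterations=3):
    # subs: registered, calibrated frames of equal exposure
    data = np.array(subs, dtype=float)
    keep = np.ones(data.shape, dtype=bool)
    for _ in range(iterations):
        n = np.sum(keep, axis=0)
        mean = np.sum(data * keep, axis=0) / n
        sigma = np.sqrt(np.sum(((data - mean) ** 2) * keep, axis=0) / n)
        # Reject anything further than kappa sigma from the running mean:
        # hot pixels, cosmic ray hits, satellite trails, etc.
        keep = np.abs(data - mean) <= kappa * sigma + 1e-9
    return np.sum(data * keep, axis=0) / np.sum(keep, axis=0)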