I don't understand the quest for red that most imagers have. The results
from Da's and modded DSLRs are, it would seem, false representations
of DSOs, because the red overwhelms the blues and greens that are present,
whilst unmodded ones are also false because they show little red.
A good example is Traveller's post of M20: 35 mins exposure time, and only the faintest trace of blue. Ordinary DSLRs, it would seem, are
insufficiently sensitive to red, but modded ones are at the other extreme.
Is it not possible to produce a camera with a greater range of colour
sensitivity?
raymo
This was with a CCD - a ONE SHOT COLOUR (same principle as a DSLR with its Bayer matrix) - 1 hr total.
No filters - just showing what the sensor can see. An H-a filter will show more red. You can use a clip-in H-a filter with a DSLR - just extract the red channel and merge it into the RGB (there are a few methods to do this).
You can use Ha in just the luminance channel (synthetic or straight), to get the 'body' of the structure without overly biasing the colour. You can also control how much (if any) Ha you blend with the Red chrominance channel.
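The two blends described above (Ha into the red chrominance channel, and Ha into a synthetic luminance) can be sketched in NumPy. This is only an illustrative outline, assuming the Ha and RGB frames are already registered float arrays in [0, 1]; the function names and blend weights are my own, not a standard recipe:

```python
import numpy as np

def blend_ha_into_red(rgb, ha, red_weight=0.5):
    """Blend a narrowband H-alpha frame into the red channel of an RGB image.

    rgb: float array of shape (H, W, 3), values in [0, 1]
    ha:  float array of shape (H, W), registered to the RGB frame
    red_weight: fraction of the new red channel taken from the Ha data
    """
    out = rgb.copy()
    out[..., 0] = (1.0 - red_weight) * rgb[..., 0] + red_weight * ha
    return np.clip(out, 0.0, 1.0)

def blend_ha_into_luminance(rgb, ha, lum_weight=0.5):
    """Use Ha to strengthen a synthetic luminance without shifting the colour balance."""
    # Synthetic luminance from RGB (Rec. 709 weights), blended with the Ha frame
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    new_lum = (1.0 - lum_weight) * lum + lum_weight * ha
    # Rescale all three channels by the same luminance ratio, so chrominance is preserved
    scale = new_lum / np.maximum(lum, 1e-6)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)
```

Because the luminance blend scales R, G and B by the same factor, it adds the "body" of the Ha structure without pushing the hue toward red; how much Ha (if any) goes into the red channel is then a separate choice.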
I would have thought it was because most of these nebulae emit most of their light from the red end of the spectrum; that's why the images of them are quite 'red heavy'.
Forgive my oldie's lack of digital knowledge, and thanks for the explanation. Just one question remains. If CCDs can show these colours,
why do so many imagers post images where the DSOs are just about entirely red and white? Is it personal preference, or lack of processing knowledge?
raymo
I would have thought it was because most of these nebulae emit most of their light from the red end of the spectrum; that's why the images of them are quite 'red heavy'.
Almost. Emission nebulae (including M20) are composed mostly of ionised hydrogen, hence the characteristic red appearance. They are usually associated with hot young stars.
Planetary nebulae (such as the Ring Nebula) are the ionised outer gas "shells" of dying stars, and so contain relatively more heavy elements, and therefore show different "colours".
As for my image of M20, it was taken in strong moonlight (last Sat night) within 10km of Melbourne CBD. I had to stretch the image in order to pull out some of the details. I will have another go at processing it and see if I can tease out more blue without losing the red.
Bo
I tend to agree Ray, Ha (reds) and OIII (green/blue) are a delicate balance with either type of camera. However, I found some nebulae VERY difficult with a standard unmodded camera, like the Cat's Paw NGC6334 and the Lobster NGC6357. Even the Veil Nebula is difficult to get reds from with a standard camera.
In skilled hands, a modified camera will always win. That extra red sensitivity is a massive advantage when it's needed. Removing the built-in filter also gains about one extra f-stop in speed as well as better Ha. Want more blue in the modded camera? Just put a blue filter in the optical path. It's that easy. It's not so easy to remove the built-in filter from an unmodded camera when red is needed! Although... adding an LP filter to an unmodded camera helps to "bring out the reds" by suppressing other parts of the spectrum, at the cost of speed.
For the perfect balance, Lewis has a point, use a mono camera with RGB filters.
Also keep in mind there is no "perfect balance" even with a mono CCD. The sensor varies in sensitivity across the entire visible spectrum, the transmission factor of your filters is a complex function of wavelength, atmospheric extinction varies depending on altitude and atmospheric conditions, not to mention light pollution gradients and sky glow.
I have tried using calculated extinction factors, G2V stars and other stellar spectral data (eXcalibrator) and these techniques don't seem to give more accurate results than simple methods. Just lining up the R, G & B peaks in a histogram works pretty well for most images.
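The "lining up the R, G & B peaks" method mentioned above can be sketched as follows. This is an illustrative NumPy outline under my own assumptions (the image is a float RGB array in [0, 1], the histogram peak of each channel marks the sky background, and red and blue are simply shifted to match green); the function name is hypothetical:

```python
import numpy as np

def align_channel_peaks(rgb, bins=256):
    """Neutralise the sky background by aligning the R, G and B histogram peaks.

    Finds the most common (background) level in each channel, then offsets the
    red and blue channels so their peaks land on the green peak.
    rgb: float array of shape (H, W, 3), values in [0, 1].
    """
    peaks = []
    for c in range(3):
        hist, edges = np.histogram(rgb[..., c], bins=bins, range=(0.0, 1.0))
        centres = 0.5 * (edges[:-1] + edges[1:])
        peaks.append(centres[np.argmax(hist)])  # bin centre of the tallest bin
    out = rgb.copy()
    for c in (0, 2):  # shift R and B so their peaks match the G peak
        out[..., c] += peaks[1] - peaks[c]
    return np.clip(out, 0.0, 1.0)
```

A simple offset like this only neutralises the background; a fuller balance would also scale the channels, but as noted above, even the simple version works pretty well for most images.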
Thanks again everyone. Regarding Cam's choice of what is "really there", and what the human eye tends to see: it seems logical to me to portray
a DSO as we would see it if it were near enough for us to see its colours.
Regarding "really there", we might as well photograph flowers in UV because that's how bees see them. We wouldn't normally think of doing that, would we? We capture them in visible light because that's the way
we experience them in life.
Just my opinion.
raymo
The hydrogen-alpha emission wavelength is at the far red end of the VISIBLE spectrum. As hydrogen is the most abundant element in the universe, I think it is reasonable to take steps to maximise sensitivity to that wavelength. For DSLRs that means a filter change to allow that light to reach the camera sensor. The challenge is simply to ensure that the colour balance is correct, as Rick has previously mentioned.
Hmm, this is a very interesting conversation topic and one that I struggle to get right. I have attached two histograms; I would like opinions as to which is closer to true colour. I see this as a learning opportunity and would very much like to hear from all of you, including some of the more experienced people, so we can all get a better idea of what is classed as colour balanced when we process our photos.