G'day Steve
If you check out this article about Planetary Imaging in the "Articles and Projects" section of the site, it details how Astra Image, another image-processing program, fits into working your images: http://www.iceinspace.com.au/index.p...63,306,0,0,1,0 . Well worth going over – a fantastic tutorial.
Thanks beren for the information, I will check the link out and start my learning curve.
Joe, I will have a look at work and see if I can find some appropriate references and post them back here. Atmospheric effects are the bane of most terrestrial remote sensing researchers, and while I've got a pretty good handle on it, most of the modern techniques use atmospheric models that measure atmospheric attenuation in specific parts of the electromagnetic spectrum that lie outside of human vision (and thus outside what RGB cameras record), and these measurements are used to reduce its effect across all wavelengths. Unfortunately, we don't have these wavelengths to use, so I will have a think about some earlier but very simple and useful techniques that may be of some use for RGB imagery and find some appropriate references – albeit earth-observation based. One approach, for instance, builds on the fact that blue wavelengths scatter in the atmosphere to roughly the 4th power more than red (or something like that), so the overall 'brightness' of each image band could be adjusted relative to this value. Also, the angle between the declination of the celestial object and the zenith of your position is highly correlated with the amount of atmospheric influence; this could also be adjusted for. In the end this may all be academic, as the brightness adjustments that everyone seems to be applying to their imagery are kind of doing the same thing, but applying objective corrections might (might!) be more realistic. I am starting to dribble on, so I will go and try to find some references for you.
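For what it's worth, the zenith-angle effect mentioned above can be roughly quantified: in the simple plane-parallel approximation, the atmospheric path length (airmass) grows as sec(z) with zenith angle z, which is reasonable below about 60 degrees. A minimal Python sketch (the function name is just illustrative, not from any particular package):

```python
import math

def relative_airmass(zenith_angle_deg):
    """Plane-parallel approximation: the atmospheric path length
    grows roughly as sec(z) with zenith angle z.
    Good to a few percent below ~60 degrees; it breaks down badly
    near the horizon, where more refined formulas are needed."""
    return 1.0 / math.cos(math.radians(zenith_angle_deg))
```

So an object at 60 degrees from the zenith is imaged through roughly twice as much atmosphere as one at the zenith, which is why low targets suffer so much more.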
The effects you describe are certainly noticeable while doing high-resolution planetary imaging, such as the Jupiter image in this thread.
Monochrome RGB imaging certainly reveals that the blue channel suffers the worst from atmospheric dispersion and turbulence ("seeing"). We also notice the focal point can change slightly between the red, green and blue wavelengths – more so in bad seeing, and more so when the object is lower on the horizon.
With RGB imaging, the settings can be changed during capture, so things like brightness, gain, exposure can be adjusted to suit the conditions, rather than having to worry about it too much in post-processing.
The post-processing routine usually involves aligning all the individual frames; ranking them in order of "sharpest" to blurriest (based on edge-detection type algorithms), and then "stacking" the sharpest frames to increase the signal to noise ratio.
The better the seeing, the better (more accurate) the alignment and ranking, and the better the seeing, the more frames can be stacked together to give you more signal, smoothing out the image.
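As a rough illustration of the align-rank-stack idea (not the actual algorithm Registax or similar tools use – they are far more sophisticated), here is a minimal Python/NumPy sketch, with a crude gradient-based sharpness score standing in for a real edge-detection metric:

```python
import numpy as np

def rank_and_stack(frames, keep_fraction=0.25):
    """Rank already-aligned frames by a simple sharpness metric and
    average the sharpest ones to boost signal-to-noise.

    frames: list of 2-D numpy arrays, assumed pre-aligned.
    keep_fraction: fraction of the sharpest frames to stack.
    """
    def sharpness(img):
        # Sum of squared gradient magnitudes: a crude stand-in for
        # the edge-detection scores real stacking software uses.
        gy, gx = np.gradient(img.astype(float))
        return np.sum(gx**2 + gy**2)

    ranked = sorted(frames, key=sharpness, reverse=True)
    n_keep = max(1, int(len(ranked) * keep_fraction))
    # Averaging n frames knocks random noise down by roughly sqrt(n).
    return np.mean(ranked[:n_keep], axis=0)
```

The function name and the gradient metric are my own assumptions for the sketch; the point is just the shape of the pipeline: score every frame, sort, keep the best, average.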
More post-processing is then done using sharpening algorithms such as wavelets and typical blur/sharpen filters in photoshop.
Of course the individual colour channels can then still have adjustments applied including levels, colour balance etc.
Firstly, sorry for such a long posting – I won’t do it again, promise.
Mike, thanks for the overview of the processing procedure in your last post – this has been a big help. I should make it clear that I am under no illusion that I can waltz in and provide any real improvements to your already excellent results. You guys really know your stuff and I have a lot to learn. I doubt now that the more conventional techniques used in remote sensing for atmospheric correction are transferable to planetary RS, as they are not designed to cope with multiple frames of the same wavelength band.
These techniques I refer to are either absolute models (that use non-visible wavelengths to measure atmospheric absorption) or relative models (that use ground-based target spectral signatures to correct the effect of the atmosphere ‘relative’ to image derived signatures).
There is one method, though, that I alluded to in my last post that may be usable on DSOs, and it is not dissimilar to the correction technique you use at the pre-processing stage (i.e. altering the gain of the sensor for each wavelength). It is called Dark Object haze reduction (sometimes called the histogram method of haze reduction) and is described by Pat Chavez (1988) – I can send the paper via PM if anyone is interested.
Basically, it requires the analyst to look at dark areas of an image where there should be no reflectance (earth: deep clear water or shadowed areas; Jupiter image: the dark space to the side of the disk). In theory these pixels should be zero in value, but of course they often have values significantly greater than 0 due to atmospheric scattering and residual sensor noise. These minimum values (different for each band due to different amounts of scattering) can be used as a Starting Haze Value (SHV) – the amount that should be subtracted from each image band respectively to reduce haze. However, Chavez describes how only one SHV is typically measured, from a single wavelength band (e.g. the blue band, as it scatters light the most), and is then used along with a simple scattering model to predict the SHV in each of the other bands. For example:
Simple scattering model:
Very Clear = λ^-4
Clear = λ^-2
Moderate = λ^-1
Hazy = λ^-0.7
Very Hazy = λ^-0.5
So given the approx. centre wavelength for RGB with a Very Clear sky, then:
B = 0.485 µm: 0.485^-4 = 18.07
G = 0.560 µm: 0.560^-4 = 10.17
R = 0.660 µm: 0.660^-4 = 5.27
Now calculate a multiplication factor to predict haze:
18.07 / 18.07 = 1
10.17 / 18.07 = 0.56
5.27 / 18.07 = 0.29
So finally, if the SHV in the blue band is say 40 (for 8 bit data) then the predicted SHV for all other bands would be:
Band 1 (blue): 1 x 40 = 40
Band 2 (green): 0.56 x 40 = 22.4
Band 3 (red): 0.29 x 40 = 11.6
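The arithmetic above can be reproduced in a few lines of Python (the small differences from the 22.4 and 11.6 in the worked example come from rounding the factors to 0.56 and 0.29 before multiplying):

```python
# Predict per-band SHVs from a single measured blue-band SHV using
# the "Very Clear" lambda^-4 scattering model (wavelengths in microns).
wavelengths = {"B": 0.485, "G": 0.560, "R": 0.660}

# Relative scattering strength of each band.
scatter = {band: wl ** -4 for band, wl in wavelengths.items()}
# scatter["B"] ~ 18.07, scatter["G"] ~ 10.17, scatter["R"] ~ 5.27

# Multiplication factors relative to the blue band.
factors = {band: s / scatter["B"] for band, s in scatter.items()}

shv_blue = 40  # measured SHV in the blue band (8-bit data)
predicted = {band: f * shv_blue for band, f in factors.items()}
# Unrounded: B = 40, G ~ 22.5, R ~ 11.7
```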
Notably, these SHVs should be determined from the raw radiance data (measured in physical units – watts) and not the quantized 8, 10… or 32-bit image data. Most RS data, like that from Landsat's ETM+ sensor, is supplied as 8-bit data. All users are supplied with the transfer gain and offset values for each wavelength band – each band will record a slightly different gain, i.e. the range of the incoming signal (radiance) relative to the range of the signal output (the image quantization level), which affects contrast. The offset is the value recorded when there is no energy present, which affects brightness and is equivalent to the SHV of each band.
So after this very long-winded explanation: you guys are already altering the contrast and brightness during both the pre- and post-processing stages – i.e. you are basically doing the same thing. The only difference is that in RS the corrections might be applied a little more objectively rather than subjectively (primarily for consistency between image dates for change-detection work). Clearly, the objective approach does not make for better visual reconstruction.
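For anyone who wants to experiment, here is a minimal NumPy sketch of the dark-object idea described above: measure an SHV per band from a dark-sky patch, then subtract it. The function names and the use of a low percentile (rather than the strict minimum) are my own assumptions for robustness, not part of Chavez's method:

```python
import numpy as np

def starting_haze_values(bands, dark_region):
    """Estimate a Starting Haze Value (SHV) per band from a region
    that should contain no signal (e.g. the dark sky beside the disk).

    bands: dict mapping band name -> 2-D numpy array
    dark_region: (row_slice, col_slice) covering the dark area
    """
    shv = {}
    for name, band in bands.items():
        patch = band[dark_region]
        # A low percentile is more robust than the absolute minimum,
        # which a single dead pixel can drag down.
        shv[name] = float(np.percentile(patch, 1))
    return shv

def subtract_haze(band, shv):
    """Subtract the band's SHV, clipping at zero so dark-sky pixels
    don't go negative."""
    return np.clip(band.astype(float) - shv, 0, None)
```

You would measure the SHV only in the blue band, predict the green and red SHVs with the scattering model, and then call subtract_haze on each channel before combining.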
I am going to stop over theorizing now and learn more about what you guys do and just start having a go – Bird’s July 25 Jupiter image is telling me just that. Thanks for listening.
Well...this is my first ever attempt at RGB imaging/processing, certainly with good data.
Didn't really do much with it other than a little Registax wavelets processing before taking all three tiffs into PS7 and combining the frames....a little levels and curves, some colour adjustment, some unsharp mask.
That's about it...and it shows.
Possibly a little too green/blue...but I'm happy with it for a first crack.
I'm stoked that I'm finally starting to get the hang of how to use PS...and how to create colour images from composite RGB frames.
I have no idea how Mike gets his final images looking sooooo good.
It's a little on the "pale" side - really needs a boost in saturation and contrast (to my eyes), using the layers->add new saturation layer, and layers->curves.
Grab the middle of the curve and drag it down/right, see how it changes things.
Thanks, Mike.
Was thinking exactly the same thing re: the image looking a little 'pale'. Will follow those two bits of advice.
Out of curiosity, why wouldn't you just 'boost' the saturation of the final image in PS, rather than create a completely new saturation layer?
My attempt at the data given. I'm not adept at working with greyscale but hopefully did this some justice.
I just wish we had the seeing conditions in Melbourne that you had back then at your locale.
Here's my attempt.
Steps:
wavelet sharpening in Registax 6
combine to get colour in Registax 6
Adjust colour balance and saturation in GIMP
Adjust brightness/contrast in Registax 6
Convert to compressed JPEG in GIMP
I have had very limited experience with processing planetary data, mainly because my data is nowhere near as good as this, but here is my attempt. I didn't do any wavelets in Registax – I started to, but saw that it was degrading the image quality too much, so I did all my processing in PS. Not as good as yours, Mike, but I am happy with how it turned out.