JUPITER reprocessed by splitting R,G,B and processing separately - worth it?
Hi All, noticed a few (Mike and DP) have been splitting avi's into separate colour avis and processing them separately in Registax before recombining for the final image. I've resisted this because I'm basically lazy, but decided (in the DP spirit of investigation) to give it a go on one of my recent Jupiter avi's (1/10s at 5fps for 100 sec). The attached shows the original, processed as a combined RGB avi, at upper left, with the lower and upper right showing the separately processed and recombined colour channel image - the latter resampled up to see how well it stood amplification. Stacked between 350 and 400 frames for each colour channel.
I had a little trouble re-aligning the separate colour channels well for the combined image and I'm not fully happy with the colour, but on the whole it does seem to show finer detail than the original... hmm maybe there's something in this?
Looks real nice with (like U said) finer details springing forth. I've got enough on my plate without attempting this procedure ...I'll leave it to the experts & continue to do it the 'old fashioned way'...
Hey Rob, there's definitely finer detail in that new image, and it's standing up well to resampling.
Re-aligning the colour channels after recombining in AstraImage is probably the most difficult part, but "generally" the red channel needs to move -1 or -2, and the blue channel +1/+2. The poor AstraImage preview window doesn't help either. I usually zoom in x2 to try and get it right, with a bit of trial and error each way.
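The nudging Mike describes (red -1/-2, blue +1/+2) is just an integer translation of the whole channel before recombining. Here's a minimal sketch of the idea in plain Python - a toy 2D list stands in for a real channel image, and the function name is my own, not anything from AstraImage:

```python
def shift_channel(channel, dx, dy, fill=0):
    """Translate a 2D channel by (dx, dy) whole pixels, padding with `fill`.

    Positive dx moves the image right; positive dy moves it down.
    This mimics the small +/-1 or +/-2 pixel nudges used to line up
    R, G and B after processing them separately (integer shifts only).
    """
    h = len(channel)
    w = len(channel[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx          # source pixel for (y, x)
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = channel[sy][sx]
    return out

# Toy 3x3 red channel, nudged 1 pixel left (dx = -1):
red = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
print(shift_channel(red, -1, 0))
```

A real tool would offer sub-pixel shifts too, but for dispersion of a pixel or two this whole-pixel version is the essence of it.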
Your image has quite a red bias in the new one - this can be easily overcome by using the weight when recombining (try 1.1 on the red channel), or by using the colour balance sliders afterwards.
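Weighting a channel during recombine is just a per-channel multiplier with clamping. A toy sketch, assuming flat lists of 0-255 pixel values for brevity (the function and its parameters are hypothetical, not AstraImage's actual API):

```python
def recombine(r, g, b, wr=1.0, wg=1.0, wb=1.0):
    """Recombine three single-channel pixel lists into RGB tuples,
    applying per-channel weights (e.g. wr=1.1 to adjust the red
    channel), clamped to the usual 0-255 range."""
    def clamp(v):
        return max(0, min(255, int(round(v))))
    return [(clamp(rv * wr), clamp(gv * wg), clamp(bv * wb))
            for rv, gv, bv in zip(r, g, b)]

print(recombine([100, 250], [100, 100], [100, 100], wr=1.1))
```

The colour-balance sliders mentioned above do much the same thing after the fact, just interactively.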
Separate R/G/B processing takes longer, but I'm convinced it produces a much better result. DP and I reckon about 5-10% better, sometimes more, depending on the image, the seeing, and the care in processing.
Mike, I'll try doing the colour align on a magnified image - didn't think of that. Do you crop in Registax before alignment? I didn't, and found I had to move green almost 16 in one direction.
Letting registax only "see" one colour at a time should give you a much better alignment, especially if you're feeding it colour data that has a couple of pixels worth of dispersion error between the channels...
There are more advantages too - you can judge the quality and sharpness of each colour channel separately and choose how to process each one, so that you end up with three channels of equal sharpness and detail.
rob, ppm centre works well here. if you convert all avis to bmps and then centre them then registax can't move em.
if you split the movies and then process each colour separately, then you more than likely end up with a different reference frame for each, which, because the tracking is not perfect, could be in a different place.
hence into astra image, you have to move it. i magnify at least 3 times and then align each colour.
depends on how good it is looking to start with. if the data looks great, go the whole hog, ie split, ppmcentre, registax, astra image etc
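The centring trick DP describes can be sketched as: find the brightness centroid of each frame and shift the frame so the centroid lands on the geometric centre - then every frame has the planet pinned in the same spot before Registax sees it. A toy stdlib-only version (the real ppmcentre also crops and quality-sorts, and this is my own minimal reconstruction, not its actual code):

```python
def centroid(frame):
    """Brightness-weighted centroid (y, x) of a 2D greyscale frame."""
    total = ty = tx = 0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            total += v
            ty += y * v
            tx += x * v
    return ty / total, tx / total

def centre_frame(frame):
    """Shift the frame so its brightness centroid sits at the centre.
    Whole-pixel shifts with zero padding - a toy version of the
    'centre every frame so registax can't move em' idea."""
    h, w = len(frame), len(frame[0])
    cy, cx = centroid(frame)
    dy, dx = round((h - 1) / 2 - cy), round((w - 1) / 2 - cx)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = frame[sy][sx]
    return out

# A bright blob sitting off-centre in a 5x5 frame:
frame = [[0] * 5 for _ in range(5)]
frame[0][0] = 100
print(centre_frame(frame)[2][2])   # blob moved to the centre
```

Run over a whole directory of bmps, this gives each colour's Registax run the same starting geometry, which is why the later re-alignment shifts stay small.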
As DP said, I use ppmcentre to centre, crop (and sort) before throwing into registax.
So my routine is:
- Capture avi
- Use virtual dub to save as bmp's
- Use ppmcentre to centre, crop and sort
- Use netpbm tools to split into r/g/b bmp's.
- Registax each r/g/b
- etc
I have cygwin unix tools on my windows XP laptop, and I have a small shell script which runs the ppmcentre and netpbm tools commands. I just pass as an argument a "directory name" which holds the bmp files, and it does the rest.
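The split step in that routine amounts to peeling each frame into three greyscale images, one per channel (netpbm's ppmtorgb3 does the file-level equivalent on PPMs). An in-memory toy sketch of the same operation, stdlib only, with made-up data rather than real bmp frames:

```python
def split_rgb(frame):
    """Split a 2D frame of (r, g, b) tuples into three greyscale
    channel frames - the in-memory equivalent of splitting an RGB
    frame into separate r/g/b images so each channel gets its own
    Registax run."""
    r = [[px[0] for px in row] for row in frame]
    g = [[px[1] for px in row] for row in frame]
    b = [[px[2] for px in row] for row in frame]
    return r, g, b

frame = [[(10, 20, 30), (40, 50, 60)]]
r, g, b = split_rgb(frame)
print(r, g, b)   # [[10, 40]] [[20, 50]] [[30, 60]]
```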
Sorry Robert, I've been snowed under with work and imaging...maybe when this Jupiter season is past I'll have some time to finish it.
I tend to use the same settings all the time with ppmcentre, so it's not too much hassle...one of us can help you get the right settings and then you just re-use them.
I could be out of line here but do you planetary guys take into account the size of the Airy Disk. At f32 it is 22 micron and at f64 43 micron for green light even with perfect optics. It is even worse for red.
Just wondering.
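Bert's figures line up with the radius to the first minimum of the Airy pattern, r = 1.22 λN (λ the wavelength, N the focal ratio) - the full diameter is twice that. For green light at 550 nm:

```python
# Airy disk radius to first minimum: r = 1.22 * wavelength * focal_ratio
wavelength_um = 0.55                 # green light, 550 nm
for f_ratio in (32, 64):
    r = 1.22 * wavelength_um * f_ratio
    print(f"f/{f_ratio}: {r:.1f} micron radius")
```

This gives about 21.5 micron at f/32 and 43 micron at f/64, matching the quoted numbers, and longer (redder) wavelengths scale the figure up proportionally.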
Bert
Hi Bert, my feeling on this is that there's no problem with oversampling the image as long as there is sufficient light for the camera to keep the signal to noise at a reasonable level.
Shorter exposures, and larger sampling, make it easier to recover the final image in the presence of unpredictable turbulence and added noise from the camera etc.
Also, the airy disk for a star is maybe not quite the same thing as the combination of airy disks from a planet - in the latter case you have many overlapping airy disks, and so there will be transitions of colour and light/dark between these disks that are smaller than the disks themselves. Sampling at a larger size than the disk allows you to detect these edges more clearly.
If you think about what you see on a planetary image live in the eyepiece, you don't see a collection of circular airy disks - if you did then the planet might look like a collection of sharply defined circular spots. Instead you see the sum of all the disks from an infinite number of points across the image, and so there must be detectable features that are smaller than the individual disks. Not resolved features, but just detected features.
On Jupiter, say, this may allow you to detect the presence of a tiny dark spot corresponding to a storm in the polar region that would not otherwise be seen.
Also, remember that we're aligning and stacking somewhere between a few hundred and a few thousand frames, and that every frame is slightly different due to atmospheric effects and tracking - the image falls onto a slightly different place on the ccd grid in each frame, the actual resolved details change from one frame to the next, images are distorted or larger/smaller from one frame to the next, and all of this impacts on the effective size of the airy disk in each frame.
This is advantageous for us, as it means that we can see more detail than otherwise would be possible if all the frames were identical. To take best advantage of this dynamic environment requires some amount of oversampling on the raw data and then careful selection and processing of some subset of the raw frames later on.
I've felt for a while that the current crop of software packages used for post-processing are not getting the most out of the raw data, as they all use very simple algorithms. My gut feeling is that there is probably another 50% or more of resolving power possible just from improvements in image processing.
In this image, taken with a C11, it's quite evident that detail much smaller than the airy disk size is visible. In particular, if you look at the white storm in the upper right of the image, to the right of the (now red) oval BA: this white storm is approximately 0.3 arc seconds across (smaller than the airy disk size for a C11), and yet it is clearly resolved, including a distinct dark edge. Looking around the image shows many smaller features that are also partially or fully resolved!
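For scale (assuming a C11's roughly 280 mm clear aperture and green light), the Rayleigh limit θ = 1.22 λ/D works out to about half an arc second - noticeably larger than that 0.3″ storm:

```python
import math

aperture_m = 0.280                   # C11 clear aperture, approx.
wavelength_m = 550e-9                # green light
theta_rad = 1.22 * wavelength_m / aperture_m
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"Rayleigh limit: {theta_arcsec:.2f} arcsec")
```

So a clearly resolved 0.3″ feature really is below the classical two-point-source limit for the aperture, which is the point being made here.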
It was obvious to me that the resolution you are all getting is better than the Rayleigh criterion for your optics, but as you explained, that is for two identical point sources. I think what is happening is that the Airy disk is actually the central peak of a Bessel function, and you guys are detecting peaks which are far smaller than the so-called Airy disk (diameter to first minimum). Hence the importance of the recording level - too high and resolution suffers. I was just wondering why the images you guys are getting were so good. I think I can see why now.
Thanks
This also highlights something else - we have to be even more critical with the collimation, cooling, etc of the scopes, because there is the potential to see detail way smaller than the airy disk, or rayleigh criterion. It would be an error to use the airy disk size as a criterion to judge whether you have the scope "close enough" in setup; it may be better to use 1/4 this size or something, as I think that is closer to the true resolving power on extended objects like planets.
Bird/Bert, I won't pretend to understand all this...but I guess what you're saying/concluding is that planets are different to stars, and the resolution possible for planetary features may be different (better?) than that possible for resolution (separation) of stellar point objects?
I think the resolution is the same, but stars are featureless points of light, so in a sense it doesn't matter how large/small the airy disk is for a star - there's nothing else to see.
A planet has lots and lots of detail, and so we can see the interaction between all these airy disks, giving detail down to scales much smaller than the disks themselves.
cheers,
Let's just nod our heads, Rob, and say "hmmmm...yes" a few times, whilst putting fist to chin. At least we can pretend to know what it is they are talking about!!!