I wouldn't compare it against anything, to be honest. The more I look at it, the more problems I can see. Silly things like applying colour noise reduction after the GradientXterminator process caught me out - rather critical when you have "balance background colour" checked in the plugin. As I indicated, I didn't spend much time on it. It's great to see other imagers having a shot at such data. It's actually pretty good to work with, as the luminance lends itself well to being stretched quite hard. I hope to see more results from others, and of course I hope the competition goes well. Having the same data offers a level playing field (just like stock car racing) - it comes down to skill instead of the bantering of "my equipment is better than yours". Who cares, I say - it's what you can do with the data that counts.
It's good to see what others do. Although I'm not entering the competition, I actually learnt more tonight by playing with this data and using your final result as a reference. It showed me that even with the same source data, the end result reveals the way each user processes their own data - it has a "personal touch" regardless of how it was acquired. My processing is always a bit blurry and soft. I realised I couldn't get close to the sharpness and detail you got out of it, and since you used the same source files, that made me rethink a lot of things I took for granted in my processing flow. Good exercise.
Thanks Marc (BTW, I liked your processing as well)..... you could spend more time with this data set. Selective contrast enhancement, deconvolution, further gradient reduction.... the list goes on.
The problem I had was with the data... when you see double diffraction spikes you know you have either a close binary,
or less than perfect focus..... which is the *one* thing we can all get right.
Am I missing something? I think this data is crap compared to what I am used to. It is noisy and full of artefacts. Sorry.
The fact that Jase could tease out the image he did from what, to me, is poor data is a revelation. I am not used to
playing with this type of pathetic data. It is still crap if you have to use elaborate methods to get a final image. You are
all kidding yourselves.
Bert
Hummm... decisions, decisions. Noise from thermal electrons, cosmic rays
and overflowing signal. Or no noise, due to signal attenuation and low dynamic range, plus square stars from undersampling due to Bayer binning.
Thanks for this little challenge Monte. Sheesh! You try and do people a service and you get stabbed in the face for it! Nice!
I've never combined channels before, so please excuse my effort. It was done entirely within CS3 - from manual alignment and rotation of each set of subs using marker stars and guides, to creating smart objects for each channel set, to finally adding the luminance. Eeks! There has to be an easier way, but I'm comfy with PS so I stick with it. I'll have another go later to pull some more luminance out of it somehow, and do a better job on inter-sub alignment. I've also managed to introduce quite a bit of noise, which is a little undesirable. Thanks though!
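For the curious, here's roughly what that final combine step boils down to outside Photoshop - a minimal Python/numpy sketch, assuming the subs are already registered and each channel is stacked and normalised to [0, 1] (the manual alignment is the painful part this skips). The lrgb_combine function and the random demo data are hypothetical, just for illustration:

import numpy as np

def lrgb_combine(l, r, g, b):
    # Naive LRGB combine: keep the colour ratios from R, G and B, but
    # rescale each pixel so its brightness comes from the luminance
    # frame. All inputs: pre-aligned 2-D float arrays in [0, 1].
    rgb = np.stack([r, g, b], axis=-1)
    brightness = np.maximum(rgb.mean(axis=-1, keepdims=True), 1e-6)
    return np.clip(rgb * (l[..., None] / brightness), 0.0, 1.0)

# Tiny synthetic demo in place of real subs.
rng = np.random.default_rng(0)
l, r, g, b = (rng.random((64, 64)) for _ in range(4))
combined = lrgb_combine(l, r, g, b)
print(combined.shape, float(combined.max()))

A dedicated stacking package would of course handle registration and normalisation for you; this is only the arithmetic at the end of the chain.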
Nice wide field Bert, but it looks flat to me, and the stars are simply not well sampled or resolved. I had to scale a section of your data by around 800% to get the following roll-over demonstration:
"It is still crap if you have to use elaborate methods to get a final image"
....apart from precipitating my previous post, also made me realize there was indeed merit in an image processing comp at the SPSS (well done Monte and crew!).
The SPSS sample data notwithstanding...... data can be excellent, yet plagued by a rough exterior that needs to be gently peeled away to reveal the true beauty beneath.
Hubble CCD datasets are a good example.
The RAW frames are (almost) tragic - noise and cosmic rays abound. Fortunately, the seeing, focus and tracking are literally diffraction limited.... and any noise can be removed completely once its nature is well understood (even spherical error).
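For anyone curious, here's a minimal sketch of one standard way that kind of well-understood noise gets removed - sigma-clip rejection across registered frames, which kills cosmic ray hits because each one lands on only a single sub. This is a generic Python/numpy illustration, not the actual Hubble pipeline:

import numpy as np

def sigma_clip_stack(subs, kappa=5.0, iters=3):
    # Mean-combine registered frames, iteratively rejecting pixels more
    # than kappa sigma from the per-pixel median. subs: (n_frames, h, w).
    data = np.asarray(subs, dtype=float)
    mask = np.ones(data.shape, dtype=bool)
    for _ in range(iters):
        masked = np.where(mask, data, np.nan)
        med = np.nanmedian(masked, axis=0)
        # Robust sigma estimate from the median absolute deviation, so a
        # single huge cosmic ray can't inflate its own rejection threshold.
        sigma = 1.4826 * np.nanmedian(np.abs(masked - med), axis=0)
        mask = np.abs(data - med) <= kappa * np.maximum(sigma, 1e-12)
    return np.nanmean(np.where(mask, data, np.nan), axis=0)

# Demo: five flat-ish frames, one with a fake cosmic ray strike.
frames = 100.0 + np.random.default_rng(1).normal(0, 1, (5, 8, 8))
frames[2, 4, 4] += 5000.0                    # hit on frame 2 only
print(round(float(sigma_clip_stack(frames)[4, 4]), 1))  # back near 100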
To avoid all this noisy grief, why didn't NASA simply fly a CMOS device?
Because they wanted maximum QE, resolution and dynamic range.
Elaborate methods are indeed used for Hubble Heritage images, but last time I checked it was a Fairchild back-illuminated CCD in low Earth orbit, in preference to a Canon CMOS.
Can you show me that for the whole image? Thought not. You are amazing, Peter.
Bert
Bert, the point was: despite the albeit *very* wide field, your data is undersampled. Even blind Freddy can see the stars are blocky. I simply highlighted this fact via the roll-over.
No point me putting my spiel on this, might as well quote it.
"Sampling refers to how many pixels are used to produce details. A CCD image is made up of tiny square-shaped pixels. Each pixel has a brightness value that is assigned a shade of gray color by the display routine. Since the pixels are square, the edges of features in the image will have a stair-step appearance. The more pixels and shades of gray that are used, the smoother the edges will be.
Images that have blocky or square stars suffer from undersampling. That is, there aren't enough pixels being used for each star's image. The number of pixels that make up a star's image is determined by the relationship between the telescope focal length, the physical size of the pixels (usually given in microns, or millionths of a meter), and the size of the star's image (usually given in arcseconds).
...
Unfortunately, we don't have as much control over the size of the star image, which will vary, depending mostly on the seeing conditions of the observing site. Mountaintop observatories often have 1 arcsecond (or better) seeing, whereas typical backyard observing sites at low elevations in towns or cities might have 3 to 5 arcsecond seeing.
A good rule of thumb to avoid undersampling is to divide your seeing in half and choose a pixel size that provides that amount of sky coverage. For example, if your seeing conditions are generally 4 arcseconds, you should achieve a sky coverage of 2 arcseconds per pixel. If your seeing conditions are often 1 arcsecond, you'll want a pixel size that yields 0.5 arcseconds per pixel. The following formula can be used to determine sky coverage per pixel with any given pixel size and focal length:
Sampling in arcseconds = (206.265 / focal length in mm) × (pixel size in microns)"
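To make the quoted rule of thumb concrete, here's a trivial Python version of that formula plus the seeing/2 check. The focal lengths and pixel sizes below are made-up examples, not anyone's actual rig:

def image_scale(focal_length_mm, pixel_size_um):
    # Sky coverage per pixel in arcseconds, per the formula above.
    return 206.265 / focal_length_mm * pixel_size_um

def undersampled(focal_length_mm, pixel_size_um, seeing_arcsec):
    # Rule of thumb from the quote: aim for seeing/2 arcsec per pixel;
    # anything coarser than that will start to show blocky stars.
    return image_scale(focal_length_mm, pixel_size_um) > seeing_arcsec / 2

print(image_scale(600, 5.4))         # ~1.86 "/px: fine for 4" seeing
print(undersampled(600, 5.4, 4.0))   # False
print(image_scale(200, 7.8))         # ~8.04 "/px
print(undersampled(200, 7.8, 4.0))   # True -> square stars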
Thanks Mike - and to all those concerned that the data is not perfect: you're telling me something I already know. I chose it for exactly that reason, because it is a challenge.
It's not called the Image Processing Cakewalk. The fact that these are not easy to work with will show who really knows their stuff and who knows only how to stack the pretty colours.
By the way Bert, nice Carina shot, what was it taken with?