jase
07-08-2007, 02:54 PM
I'm pleased to present for your viewing pleasure, the Cat's Paw Nebula (NGC6334) (http://www.cosmicphotos.com/gallery/image.php?fld_image_id=112&fld_album_id=11).
…An exhibit of data scaling possibilities.
The Cat's Paw Nebula (NGC6334), as it is aptly named, is located in the constellation Scorpius. It is an emission nebula with a profusion of ionized hydrogen atoms that produce a vibrant red colour. The nebula lies along the Milky Way's galactic plane and as such is obscured by dust. The dust scatters and absorbs blue and green wavelengths more strongly than red (extinction), a phenomenon known as “interstellar reddening”, and it is the reason why many of the young stars around and within the nebula do not appear blue. The Cat’s Paw Nebula is approximately 5,500 light-years distant.
Before closing the formalities, I'd like to take the opportunity to thank Brad Moore who assisted with GRAS008 robotic operations.
About this image:
Total exposure time is 3.5 hours (60min lum, 50min per RGB channel) and it is an LRGB composite. The luminance data was acquired with a 12.5" RC and the RGB with the FSQ (taken previously - see the wide field image (http://www.cosmicphotos.com/gallery/image.php?fld_image_id=108&fld_album_id=11) or IIS post (http://www.iceinspace.com.au/forum/showthread.php?t=22399)). Yes, you heard right - the data was collected with two different telescopes and thus at vastly different focal lengths: the 12.5” RC operating at 2846mm F/9.15 and the FSQ at 530mm F/5. The RC has over five times the focal length! A challenge? Oh yeah!
So how did I integrate the data despite these differences? I used Registar to perform a processing task known as "data upscaling/downscaling" (I would have thought of a fancier/craftier name had I invented it). As the RGB data acquired with the FSQ was very good (50min per channel: 5x10min subs), it was capable of being scaled. I seriously didn't think it would scale to over five times its original size without introducing noticeable noise, but it did. Had I attempted this with lower-grade/weak RGB data, it simply wouldn't have worked as well. There are downsides to the method, however. The stars in the upscaled RGB data bloat badly. It took me well over three hours to manage this problem - trying different techniques, masks etc. It’s not perfect, that’s for sure: some stars lost colour, while others became heavily saturated in the process. If the RGB data had been collected with a longer focal length instrument (say 1500mm), the bloat would have been reduced.
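For anyone who wants to picture what the resampling step boils down to outside of Registar, here is a rough Python (NumPy/SciPy) sketch. It is only an illustration: the function name and arrays are mine, the scale factor assumes both cameras have similar pixel sizes, and Registar of course handles the real registration, rotation and distortion correction.

```python
# Minimal sketch of the RGB upscaling step. The real warping/registration was
# done in Registar; this only illustrates resampling by the focal-length ratio.
import numpy as np
from scipy.ndimage import zoom

FL_RC  = 2846.0   # 12.5" RC focal length (mm)
FL_FSQ = 530.0    # FSQ focal length (mm)
scale  = FL_RC / FL_FSQ   # ~5.4x, assuming similar pixel sizes on both cameras

def upscale_channel(channel: np.ndarray, factor: float) -> np.ndarray:
    """Resample a single 2D colour channel by `factor` using cubic interpolation."""
    return zoom(channel, factor, order=3)

# r, g, b would be 2D arrays from the FSQ data (hypothetical names):
# r_up, g_up, b_up = (upscale_channel(c, scale) for c in (r, g, b))
```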
The next thing to consider is resolution. Upscaling data has the potential to produce a lower resolution output, as the data is skewed, stretched and twisted to align with the larger luminance frame. I processed this image with no intention of using the RGB data for resolution; it was integrated purely for chrominance (colour) purposes. As the human eye resolves detail better in shades of grey than in colour, upscaling the RGB data made perfect sense. 100% of the resolution came from the data collected with the 12.5" RC (used to produce the luminance frame), so there was no resolution loss. This is not a perfect substitute for collecting RGB data on the RC, but as you can see it works reasonably well and delivers a pleasing result.
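To make the luminance/chrominance split concrete, here is a small sketch of the LRGB idea using scikit-image's Lab conversion as a stand-in for Photoshop's Colour blend mode. Again, just an illustration under my own assumptions (registered, equal-sized arrays normalised to [0, 1]); it is not the exact workflow I used.

```python
# LRGB idea in miniature: detail (lightness) from the RC luminance,
# chrominance from the upscaled FSQ RGB.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def lrgb_combine(lum: np.ndarray, rgb: np.ndarray) -> np.ndarray:
    """Replace the lightness of `rgb` (H x W x 3) with the high-res `lum` (H x W)."""
    lab = rgb2lab(rgb)
    lab[..., 0] = lum * 100.0   # Lab lightness runs 0..100
    return np.clip(lab2rgb(lab), 0.0, 1.0)
```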
Rule of thumb (pretty obvious, but worth stating): don’t upscale luminance data, only RGB/colour data. You can of course downscale the luminance data. In fact, my original intention when acquiring the RC data was to downscale the luminance and add greater depth to my wide field vista of NGC6357 and NGC6334. I have done this on an offline image and found it really packs a punch - downsizing the luminance brings out incredible detail in the wide field view. I may release that image in due time. I also wanted to collect Ha data with the RC, but I had difficulty finding a guide star for the specific frame I wanted.
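The reverse operation is the same trick in miniature. A purely illustrative sketch (the offsets, blend weight and function name are made up; Registar does the real alignment):

```python
# Shrink the RC luminance to the wide-field plate scale and blend it in.
import numpy as np
from scipy.ndimage import zoom

def blend_downscaled(wide_lum: np.ndarray, rc_lum: np.ndarray,
                     factor: float, y0: int, x0: int, weight: float = 0.6) -> np.ndarray:
    """Downscale `rc_lum` by `factor` and blend it into `wide_lum` at (y0, x0)."""
    small = zoom(rc_lum, 1.0 / factor, order=3)   # RC detail at the wide-field scale
    out = wide_lum.copy()
    h, w = small.shape
    out[y0:y0 + h, x0:x0 + w] = (1 - weight) * out[y0:y0 + h, x0:x0 + w] + weight * small
    return out
```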
A few words on the processing of this image:
Where does one start... I'm not going to elaborate on my dealings with Registar (if you want specific info, PM me offline). It’s a powerful tool - enough said. After registration of the RGB data, I loaded it into PS and began stretching - levels, then the shadow/highlight tool. I only stretched the data to about 25% of its final capacity for two reasons: firstly, I wanted to clean the image before stretching it further, and secondly, stretching the data too far makes it difficult to blend the luminance in later. The image certainly needed some cleaning after being upscaled - numerous colour artifacts and some rainbow-coloured star bloat. After cleaning, I duplicated this layer and then trashed the original background layer. This allowed me to set the blend mode to Colour; under normal circumstances the background layer's blend mode can't be changed, but duplicating and deleting the layer gets around this.

The luminance data went through two iterations of LR deconvolution in CCDSharp, then into PS, where I stretched it using levels and curves to around 50% of capacity. I then cleaned this layer as some hot pixels were present - nothing major compared to the upscaled RGB (nightmare). From this point I systematically stretched both layers, making sure I did not over-saturate and lose too much colour in the process. Once I reached what I felt was the right level of luminance detail, I pushed the colour data one more time with Shadow/Highlights. The image was flattened and final levels adjusted, with a high-pass layer mask used to bring out the finer details from the luminance. Seasoned to taste.

The downside of data scaling is image size. The PSD document I was working on, with all layers, ended up 1.2GB in size. That certainly tested my computer's processing power and memory.
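For the curious, here is a loose numerical analogue of two of the Photoshop steps above (a levels-style stretch and a high-pass detail boost), again in Python. The black/white points, gamma, radius and amount are placeholder values, not the settings I actually used.

```python
# Rough analogues of a levels stretch and a high-pass sharpen on the luminance.
import numpy as np
from scipy.ndimage import gaussian_filter

def levels_stretch(img: np.ndarray, black=0.02, white=0.80, gamma=0.6) -> np.ndarray:
    """Clip to [black, white], rescale to [0, 1], then apply a gamma stretch."""
    out = np.clip((img - black) / (white - black), 0.0, 1.0)
    return out ** gamma

def high_pass_sharpen(lum: np.ndarray, radius=4.0, amount=0.5) -> np.ndarray:
    """Add back a scaled high-frequency residual (image minus blurred image)."""
    detail = lum - gaussian_filter(lum, sigma=radius)
    return np.clip(lum + amount * detail, 0.0, 1.0)
```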
Well, if you’ve read through this far I hope you’ve gleaned some info.
Enjoy! All comments welcome.:)