Here's another object from the ongoing project to find planetary nebula halos. Abell 42 is already one of the faintest Abell planetary nebulae and of course the halo is even fainter. I wanted to get more data to flesh the image out a bit but the weather has been uncooperative lately to say the least.
Anyway here it is. Not what you would call spectacular.
When you start hitting some of the smaller planetary nebulae without having a space telescope they do naturally start to become less "spectacular". It is a nice shot though, your 12" is going pretty deep to say the least. Nicely done Steve.
Well, it certainly looks like a halo there, Steve. Is that a confirmation then?
Mike
Well, Sakib seems convinced. I was just having another look to make sure I didn't overdo my "halo brightening" processing. Possibly I did, but there's certainly something there.
Quote:
Originally Posted by Atmos
When you start hitting some of the smaller planetary nebulae without having a space telescope they do naturally start to become less "spectacular". It is a nice shot though, your 12" is going pretty deep to say the least. Nicely done Steve.
Thanks Colin. It's actually a 14.5" but I imagine the 12" would produce similar results.
Man, you do go after some tough ones! Very cool object and so tiny... well done!
Thanks Louie. It's not that small and I have gone smaller, but generally I don't image anything smaller than about 40". This was a request, otherwise I probably wouldn't have bothered.
Thanks Paul. It's artificially brightened a bit. Almost invisible on the unmodified data.
What do you mean by artificially brightened?
For the faint halos I have revealed, I create several versions where I have stretched the begeezus out of the faint data to varying degrees, and then I (very) carefully blend just the faint stuff revealed back into a more normally processed version. I then use targeted noise reduction on the layered-in, heavily stretched data to make it a bit smoother... this is not artificial but more like enhanced. I think as long as the stretching done before the layering is applied globally to the raw data and not "artificially" targeted by, say, lassoing an arbitrary area, it is a valid representation, particularly for scientific purposes.
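In rough numpy terms the idea looks something like the sketch below. It is only an illustration, not my actual workflow step for step: the asinh stretch, the gaussian blur standing in for the targeted noise reduction, the mask recipe and the numbers are all placeholders. The point is simply that every operation is applied to the whole frame, nothing is lassoed by hand.

import numpy as np
from scipy.ndimage import gaussian_filter

def asinh_stretch(img, strength):
    # Global (whole-frame) stretch; nothing is selected by hand.
    img = img - img.min()
    img = img / (img.max() + 1e-12)
    return np.arcsinh(strength * img) / np.arcsinh(strength)

def blend_halo(linear, normal_strength=30, deep_strength=3000, smooth_sigma=2.0):
    normal = asinh_stretch(linear, normal_strength)   # the "normally processed" version
    deep = asinh_stretch(linear, deep_strength)       # stretched hard to reveal the halo
    deep = gaussian_filter(deep, smooth_sigma)        # stand-in for targeted noise reduction
    # Blend weight derived globally from the deep version itself: faint regions
    # take more of the deep stretch, bright cores keep the normal rendition.
    mask = np.clip((deep - np.median(deep)) / (deep.max() - np.median(deep) + 1e-12), 0, 1)
    return normal * (1 - mask) + deep * mask

# usage (file name is hypothetical):
# linear = np.load("abell42_stack.npy")
# result = blend_halo(linear)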
Mike
Point taken. Artificial was a poor choice of words and enhanced would be more appropriate. I use global stretching too, followed by Jay Gabany's layered contrast stretching, which seems to work okay, but you have to be careful with it.
Steve, I've just moved up to 2000 mm from a 380 mm focal length... 2.5' is tiny to me, and probably to most of us, mate.
In general, it must be very difficult to know for sure that it's a halo, and not an effect of dodgy flats (doesn't look it in your case), or of processing. As a very retired researcher, I'd be very nervous about any processing that did anything at all that was regionally selective - handled different parts of the image differently.
I could imagine that a formal way of doing this might be to move the beastie to a completely different part of the chip for each sub, register, and then use analysis of variance to see whether say the outer half of the halo region is statistically significantly brighter than the background. You'd need to use robust statistical methods such as a rank transform (most robust) or winsorization (more informative) to ignore stars.
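Something along these lines, say (only a sketch: the centre, radii and file name are placeholders, and a two-sample rank test stands in for a full analysis of variance across dithered subs):

import numpy as np
from scipy.stats import mannwhitneyu
from scipy.stats.mstats import winsorize

def annulus_mask(shape, centre, r_in, r_out):
    yy, xx = np.indices(shape)
    r = np.hypot(yy - centre[0], xx - centre[1])
    return (r >= r_in) & (r < r_out)

img = np.load("abell42_registered_stack.npy")          # hypothetical stacked frame
centre = (512, 512)                                     # assumed nebula centre in pixels
halo = img[annulus_mask(img.shape, centre, 60, 90)]     # suspected outer half of the halo
sky = img[annulus_mask(img.shape, centre, 150, 200)]    # background annulus well outside

# Rank-based test (most robust to stars): is the halo annulus brighter than the sky?
u_stat, p = mannwhitneyu(halo, sky, alternative="greater")

# Winsorized means (more informative): clip the brightest 5 per cent (stars) before averaging.
halo_w = winsorize(halo, limits=(0, 0.05)).mean()
sky_w = winsorize(sky, limits=(0, 0.05)).mean()
print(f"p = {p:.3g}, winsorized means: halo {halo_w:.4f}, sky {sky_w:.4f}")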
Another vague concern is glare. One could perhaps analyze the glare pattern around a bright star and control for this using radial distance as a covariate. (No, I'm not volunteering. It's been too long).
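Again, only a sketch of the sort of thing I mean, with the star position and radii made up: fit the radial falloff around the bright star and see whether the halo region sits above that trend.

import numpy as np

img = np.load("abell42_registered_stack.npy")   # hypothetical stacked frame
star = (300, 700)                               # assumed bright-star position (pixels)
yy, xx = np.indices(img.shape)
r = np.hypot(yy - star[0], xx - star[1])

# Median brightness in thin annuli around the star, skipping the saturated core.
radii = np.arange(20, 400, 10)
profile = np.array([np.median(img[(r >= a) & (r < a + 10)]) for a in radii])

# Power-law fit in log-log space as a crude glare model.
good = profile > 0
coeff = np.polyfit(np.log(radii[good] + 5), np.log(profile[good]), 1)
model = np.exp(np.polyval(coeff, np.log(r + 5)))

residual = img - model   # what is left after removing the radial glare trend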
Having said all that, it sure looks real from the foot of the bed. And it looks very beautiful too. Well done.
Best,
Mike
Thanks Mike. You make very good points. My halo processing could no doubt be improved. It sounds as if Mike Sidonio has a better procedure. Basically I stretch the heck out of it and convince myself there is something there. Once I do that I use the more specific stretching procedures, such as Jay Gabany's layered contrast stretching, to bring out more detail. I also reprocess the image in subtly different ways and it's encouraging that they all come up with basically the same result. Sakib Rasool, who is the coordinator of this exercise, thinks I have this one pretty well right because it matches what he sees with extreme stretches of the DSS images. In general, though, I think my skies are too bright for the really faint stuff.