View Full Version here: Vela SNR in 3nm NII and OIII blended with RGB
avandonk
20-01-2013, 12:44 PM
Compared to facing the threat of bushfires, 40C days and 30C nights are really only a minor annoyance. When temperature-controlled optics are hotter than their set point, trying to collect any data is a waste of time.
Things did cool down and I managed to get a decent RGB data set of a bit of the Vela Supernova Remnant. I have added the NB data to this RGB.
It is not final, as I am sure I have not yet got it as good as the information in the data allows. My processing skills may/will improve with practice.
Large full sensor resolution image 10MB
http://d1355990.i49.quadrahosting.com.au/2013_01/VSNR_RGB+NB_N.jpg
The RH200 actually has better resolution in good seeing than the 9 micron pixels of the PL16803 camera's sensor can record. With dithering, multiple exposures, and stacking upsized frames, this can be recovered. Deep well depth is far more important than chasing smaller pixels.
All the data was stacked at 1.5x the native pixel resolution of the PL16803. This 6000x6000 pixel image shows far better resolution than any single native-size image. 20MB
http://d1355990.i49.quadrahosting.com.au/2013_01/VSNR_RGB+NB_1.jpg
Bert
Keep them coming please Bert. Really enjoying following your RH200 adventure!
Larryp
20-01-2013, 01:38 PM
Magnificent, Bert!
LightningNZ
20-01-2013, 10:00 PM
As always that's awesome work Bert. Would you mind sharing the components of your processing software stack? Do you use DSS or something fancier?
Cheers,
Cam
strongmanmike
20-01-2013, 10:05 PM
I agree with Rob, great to see these wide deep shots of Vela with that magnificent scope of yours :thumbsup::thumbsup: 30degC+ nights aren't much fun, even with a Proline :doh: :thumbsup:
Mike
venus
21-01-2013, 08:56 AM
"Magnificent" is correct I've never seen the veil this good...great work!
Ross G
21-01-2013, 09:23 AM
Great photo Bert.
Again, such amazing detail.
I agree with Rob as to how interesting and informative it has been following your adventure with the new equipment.
Thanks.
Ross.
avandonk
22-01-2013, 11:56 AM
I will keep them coming. There is no point in only producing images to be kept hidden.
Bert
avandonk
22-01-2013, 12:15 PM
I use ImagesPlus for the initial correction with darks and flats. I stack with either RegiStar or DSS or both and compare, depending on the number of frames. All this is done unstretched.
To get the individual stacked RGB or NB frames into register, I use RegiStar to excise and register the common areas.
I stretch faint stuff differently to bright stuff. The main thing is to not lose real data at either the dim or bright end of the histogram.
The stretched image is balanced carefully for colour in PS. This is important, as I use EasyHDR to tone map the data to the final sixteen-bit TIFF, which is then used in PS to produce the eight-bit image or JPG you see here.
Most of this I worked out myself. I prefer to fly the plane myself rather than use an autopilot.
It is quite easy to enhance a small field of view to show maximum detail. With widefields, the dynamic range is always far greater than that of any one small area of detail.
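For the stretching step, if you wanted to do the equivalent in code rather than by eye in PS, the idea is roughly this (just a sketch in Python/numpy, not what I actually run):

```python
# Rough idea only: an asinh-type stretch lifts the dim end hard while
# compressing, rather than clipping, the bright end of the histogram.
import numpy as np

def asinh_stretch(img, soft=0.01):
    """img: linear data scaled to 0..1. A smaller 'soft' value stretches
    faint detail harder; highlights roll off instead of clipping to white."""
    return np.arcsinh(img / soft) / np.arcsinh(1.0 / soft)
```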
Bert
avandonk
22-01-2013, 12:23 PM
When the whole imaging train is still at 32C while ambient is down to 29C and only very slowly falling, even a Proline can only get to -25C. Even then there was no point, as focus was very far out and still changing.
Yep ye canna beat wide and deep shots with very good resolution!
Bert
avandonk
22-01-2013, 12:26 PM
There were three main reasons I bought this system. The SMC, the LMC and the Vela SNR. I could spend the rest of my life on only these three objects and still not have it perfect! For now I will settle for as good as it gets.
Bert
avandonk
22-01-2013, 01:10 PM
For interest this is the RGB data 10MB
http://d1355990.i49.quadrahosting.com.au/2013_01/VSNR_RGB_N.jpg
As you can see that dim stuff takes a lot of getting!
Bert
avandonk
26-01-2013, 10:16 PM
The real take-home message is that after a lot of RGB and NB images I did not have to worry about coloured haloes due to spectral aberrations or filter deficiencies.
The Astrodon LRGB filters do not need any focus adjustment. The NB filters do need a bit, and at F3 it is critical, as the critical focus zone is so tight.
Only more data will marginally improve this image.
Bert
h0ughy
26-01-2013, 10:42 PM
like what you got - a real eyeopener
avandonk
29-01-2013, 01:20 PM
Thanks h0ughy, I hope there are many more to come.
Bert
Peter Ward
29-01-2013, 10:17 PM
Bert, it's been great seeing your journey with fast wide field optics and CCD acreage but....
... I'd suggest you don't push the data so hard... to the point it looks like Tech Pan 2415?
Martin (Pugh) does this sublimely... crafty bugger... but I have no idea how he does it.
Anyway, I digress... Smooth the data a little: shadows and highlights should show a gradual transition and structure rather than a stark contrast.
Just my 2 cents worth :thumbsup:
Look for a review in Sky & Telescope from Dennis Di Cicco about his experiences with the RH200. His Proline 16803 images look pretty good.
Bert, I'm scratching my head on this one. I don't see how you can achieve better resolution by dithering and simply upscaling your images. It is indeed a bigger image, and dithering eliminates continuous pattern noise as well as other artifacts, but an improvement in arcsecond-per-pixel resolution is news to me. Can you explain what you have done in detail?
It would seem like a technique that anyone could apply to their images.
j
Poita
30-01-2013, 10:24 AM
I'm assuming he is using a super-resolution technique utilising sub-pixel shifts across multiple images.
We do it for feature film reconstruction when negatives are lost. I imagine you could do the same for astro images.
I'd like to know which software though.
http://users.soe.ucsc.edu/~milanfar/publications/journal/SRfinal.pdf
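In Python terms the shift-and-add flavour from that paper looks roughly like this (a sketch only, not the pipeline we actually use for film work; the offsets would come from star registration):

```python
# Minimal shift-and-add super-resolution sketch: each low-res frame is dropped
# onto a finer grid at its measured sub-pixel offset, then the grid is averaged.
import numpy as np

def shift_and_add(frames, offsets, factor=2):
    """frames: list of HxW arrays; offsets: per-frame (dy, dx) in low-res pixels."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    hits = np.zeros_like(acc)
    yy, xx = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, offsets):
        # place each low-res sample at its sub-pixel position on the fine grid
        yi = np.clip(np.round((yy + dy) * factor).astype(int), 0, h * factor - 1)
        xi = np.clip(np.round((xx + dx) * factor).astype(int), 0, w * factor - 1)
        np.add.at(acc, (yi, xi), frame)
        np.add.at(hits, (yi, xi), 1)
    return acc / np.maximum(hits, 1)   # average; unvisited cells stay zero
```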
Peter Ward
30-01-2013, 10:52 AM
Ditto. I re-sized & layered the two examples in Photoshop.
Switching between the layers, the images look identical: re-scaling data simply doesn't improve resolution.
I think that is a process called "drizzling". Not to be confused with dithering. :shrug:
avandonk
30-01-2013, 08:04 PM
It is simple, folks: just look up the Sampling Theorem and the Nyquist Theorem. Shannon's theorem also helps.
This only works mathematically if your optic has inherently better resolution than your sensor. It has nothing to do with fractional pixel movements but with redundancy in sampling. It is immaterial how big the dither steps are as long as they are random.
Your resolution is not your pixel size but actually twice that at best. At forty-five degrees to your pixel grid it is actually worse by a factor of nearly root two.
So, to make it simple: with nine micron pixels at an image scale of 3.1 seconds of arc per pixel, a single image has at best 6.2 seconds of arc resolution. With dithering and upsizing this can come back to about 4.4 seconds of arc at best. To display this you need at least 2.2 seconds of arc per pixel! If you do the simple calculation, a 6000x6000 pixel image from my system is a tad over 2 seconds of arc per pixel.
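Putting those figures in one place (just the arithmetic above, nothing more):

```python
# Back-of-the-envelope check of the numbers in this post (pure arithmetic;
# 9 micron pixels and 600mm focal length are the PL16803 and RH200 specs).
import math

pixel_um, focal_mm = 9.0, 600.0
scale = 206.265 * pixel_um / focal_mm      # image scale: ~3.1 arcsec per pixel
nyquist = 2 * scale                         # single-frame resolution: ~6.2 arcsec
dithered = nyquist / math.sqrt(2)           # recovered by dither + upsizing: ~4.4 arcsec
display = scale / 1.5                       # scale after the 1.5x upsize: ~2.1 arcsec/pixel
print(scale, nyquist, dithered, display)
```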
Trying for more resolution with smaller pixels, say 6 micron, will at best give you about the same without dithering. Meanwhile you have given up well depth and dynamic range.
It is all about balance of the variables.
Another factor is that resolution is limited by diffraction and is purely dependent on the F ratio of the optic. It is not solely dependent on focal length and/or aperture.
By the way, trying to see a difference in 'resolution' between eight-bit JPGs is a futile exercise.
You have to compare the original 200MB 16-bit TIFFs for that.
Bert
avandonk
30-01-2013, 08:56 PM
I am just a beginner Peter! Of course I want to show the bling! Subtlety will come with age. At least that is what my mother told me fifty years ago.
I am getting results that are not quite what I am used to.
Bert
Peter Ward
30-01-2013, 09:04 PM
OK I'll bite.
Nyquist states you can *re*-construct an analog waveform by taking small discrete steps to approximate the original smooth wave, with your step size roughly half the wavelength of the wave.
This is not the same as mathematically doubling your pixel size and saying 6 arc seconds is all you can resolve.
Two point sources, in perfect seeing 6 arc seconds apart will fill, not one, but two pixels.
But by simple inspection of the image you can confidently state, with a lone illuminated pixel, that nothing lies beyond its 3 arc-sec patch of sky.
avandonk
30-01-2013, 09:43 PM
It is not about biting anyone or being bitten. I only went up by a factor of root two. You are also missing the point, Peter: it is about sampling. This gives a finer mesh than nine micron pixels.
Do you really want me to invoke my old mate Fourier? I can give you a treatise on all of this. It would be a waste of time as anyone who has studied advanced mathematics would be totally familiar with all of this.
Hand waving statements about the state of a single pixel is not only simplistic but totally meaningless.
Bert
Peter Ward
30-01-2013, 10:50 PM
Bert... I did some math at Uni in my previous life...Are you seriously suggesting a 9 micron pixel @ 600mm can't resolve down to 3 arc sec?
Of course it can.
And, sure, in a vacuum (à la Hubble) drizzling can and does work.
The point I think you may be missing is: you can't (well, let's say it would be very difficult) model/predict the seeing/turbulence from shot to shot for sub-pixel sampling shifts to contribute any significant improvement to the resolution.
Sure, with thousands of frames and some serious super-computing power... maybe... but I'd suggest seeing would still swamp any pixel-shift terms.
Hence the two images you have posted look identical in every respect except scale... which is why I'm puzzled as to why you'd bother re-scaling the data.
avandonk
30-01-2013, 11:31 PM
Peter it was not about the stacked data sets. It was about the resolution of individual frames versus the resolution of the upsized stacked frames.
Bert
I don't see any resolution improvement with this technique, just a larger image. However, it has been written that star alignment can be improved in certain software packages by upsizing the individual sub-exposures by 2X prior to the combine, then downsizing to the native resolution after the combine.
j
Hey, there was just a knock at the door. My super 3nm Ha filter from Astrodon has just arrived and it's clear tonight!!! They'll be wishing their mothers never met their fathers when I get RH300 results with this.
j
Poita
31-01-2013, 11:14 AM
Bert, are you talking about introducing dithering to counter the effects of amplitude quantisation, to get a finer than camera pixel level detail? (I think I'm on the wrong track here)
I'm trying to understand exactly what the process is.
cventer
31-01-2013, 11:27 AM
My BS detector has gone off the scale on some of the information in this thread.
Lots of fancy mathematical terms being thrown about without any actual information being delivered.
I'd love to hear in plain English the process you undertake to deliver the outcome you are describing, i.e. what steps in which software package.
I certainly have used the technique of upsizing my final luminance 2x and then running deconvolution on this 2x-sized file. This works well on undersampled data and gives nice star shapes, as opposed to the diamond shapes you can get running deconvolution on smaller stars. You then downsize again after this and combine with the RGB data.
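In code terms the luminance trick is roughly this (a sketch with Python/scikit-image, not the package I actually use; the PSF would be measured from a star and sampled at the 2x scale):

```python
from skimage.transform import rescale, resize
from skimage.restoration import richardson_lucy

def deconvolve_upsized(lum, psf, iters=30):
    """lum: linear luminance scaled to 0..1; psf: estimated PSF on the 2x grid."""
    big = rescale(lum, 2, order=3)            # upsize the luminance 2x
    big = richardson_lucy(big, psf, iters)    # deconvolve at 2x: rounder small stars
    return resize(big, lum.shape, order=3)    # downsize back to native resolution
```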
Poita
31-01-2013, 11:45 AM
Chris, I get the concept of upscaling images before processing them, you get a finer resolution for the processing to work within. It makes perfect sense and gives noticeably better results.
I'm also familiar with adding dither to an analogue signal to get better than A/D resolution out of it (i.e. improving on the effects of quantisation), but I am missing something with what Bert is doing here.
The bit I am missing is what the actual process is. Is the image being dithered and then processed in some way and then returned to its original resolution?
Poita
31-01-2013, 01:41 PM
I just re-read the thread, Bert is upscaling and dithering before stacking. That makes perfect sense.
cventer
31-01-2013, 02:15 PM
How do you dither after image is captured ? Dithering is done between frames while capturing.
Poita
31-01-2013, 02:42 PM
I assume he is dithering by slight movement between captures, then upscaling the images, then stacking and processing, in that order.
Not capturing the image and then injecting noise.
Poita
31-01-2013, 02:46 PM
Found an older post.
http://www.iceinspace.com.au/forum/showthread.php?t=59528
cventer
31-01-2013, 03:28 PM
OK, I see the technique now. Not sure I am convinced, but I will try it myself as I always dither by several pixels between sub-frames.
Only danger with this technique is you would need to reject hot pixels before you upsize as this will blur the hot pixels into looking like stars.
strongmanmike
31-01-2013, 04:36 PM
:rofl:
RickS
31-01-2013, 05:37 PM
http://en.wikipedia.org/wiki/Drizzle_%28image_processing%29
http://www-int.stsci.edu/~fruchter/dither/drizzle.html
avandonk
01-02-2013, 01:44 PM
It is very simple folks. After collecting frames with dither.
1. Correct for darks and flats.
2. Use the filter thingy in ImagesPlus to get rid of column defects.
3. Upsize by a factor of 1.5. This is close enough to a factor of root two.
4. Stack these upsized images.
You will then find as RickS's post has shown that your resolution is far better than any individual frame.
This only works if your optic has better resolution than what your sensor can resolve.
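In something like Python/numpy the upsize-and-stack part (steps 3 and 4) would look roughly like this; it is only a sketch, since I actually do it in ImagesPlus and RegiStar:

```python
import numpy as np
from scipy import ndimage

def upsize_and_stack(frames, offsets, scale=1.5):
    """frames: dark/flat-corrected subs as 2D arrays; offsets: per-frame
    (dy, dx) dither shifts in native pixels, measured by star registration."""
    stacked = None
    for frame, (dy, dx) in zip(frames, offsets):
        # Step 3: upsize each sub so registration can place it on a grid
        # finer than the native 9 micron pixels.
        big = ndimage.zoom(frame, scale, order=3)
        # Align on the upsized grid; the random dither offsets land at
        # different fractional positions of a native pixel.
        big = ndimage.shift(big, (dy * scale, dx * scale), order=3)
        stacked = big if stacked is None else stacked + big
    # Step 4: average the stack. Noise drops as usual, and detail finer than
    # one native pixel survives because every sub sampled the sky slightly
    # differently.
    return stacked / len(frames)
```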
There are only two rules in our Universe!
1. There is no free lunch.
2. If something sounds too good to be true, it is not.
This is not magic or sleight of hand. It is mathematically rigorous interpretation of the data. You all accept noise reduction by stacking multiple images.
Why is the concept of resolution enhancement so foreign?
Bert
troypiggo
01-02-2013, 02:41 PM
Please don't take this the wrong way, but what's the point of the extra resolution if the image is noisy, clipped, and white balance off when viewed at full scale?
avandonk
01-02-2013, 03:18 PM
That is a good question.
Can you find an image this deep anywhere? I just do not know what colour to make it! I have no one to follow as I am so far in front! I must admit when I zoom in the colour is crap. As far as resolution is concerned I may go to presenting my images in very blocky low resolution jpg's that hide a myriad of faults. Then the assembled multitudes can go wow to a mediocre image that panders to their equally mediocre taste.
I just do not care what anyone thinks.
I did not take it the wrong way as I knew exactly what sort of sentiment your comment engendered.
So please do not take this the wrong way when I say you are lacking the knowledge to even pass judgment.
That is some really dim stuff and is difficult to record let alone display with any sort of relation to your run of the mill bright targets. So cosmetically it will not look at all like the classic deep sky images.
Bert
troypiggo
01-02-2013, 04:05 PM
Interesting response. You start off by acknowledging that it's a good question, then spend five paragraphs not answering it directly and descend into making assumptions about my intelligence. Clearly you have taken it the wrong way.
There's a difference between "deep" and "over-stretched". Mr Ward hinted at this earlier in the thread.
I agree. It's all subjective and personal taste. Some don't like green in their astro-images, and others do.
This must've either been written tongue-in-cheek, or you're serious.
If serious, you could prove it by leading from the front, which sometimes means being a man and listening to constructive criticism instead of throwing it back in people's faces.
I seem to recall reading here somewhere that you're a (retired?) lecturer? Did you react the same way to your students' questions? ie It's your way or the highway? I would have thought a man of academic background would appreciate different views and being challenged.
No need for the sarcasm. I'm sure that most images taken with the sort of gear you're wielding would look the same at 1:1.
Again, I suppose it's personal taste. Some post pictures that can be seen whole on one screen and look pleasing to the eye. While others prefer to delve deep and appreciate the noise up close.
Me neither. That's why I dared to ask you an honest question instead of blindly saying "wonderful image."
You're entitled to your opinion. And I agree that I don't have the skill, knowledge, and experience of others here. Most here. I'm humble like that.
But where you're wrong is that I'm also entitled to my opinion.
And you're also confusing knowledge about these topics with intelligence.
I'll have to check the actual numbers, but there's a certain percentage of grumpy old men that can't take criticism or even a question without over-reacting and berating. I wish they would stop proving it, but they just can't help themselves.
Poita
01-02-2013, 06:05 PM
Troy, the technique does deliver extra resolution *if* you are undersampling.
i.e. if the optics are delivering more resolution than the camera can resolve.
It really doesn't matter if one likes Bert's image or not, for whatever reasons; the important thing to me is whether the technique works.
The answer is yes, and the technique is widely utilised in a myriad of signal processing applications.
The math is well known, and the theory is sound.
Anyone can try it themselves, and verify whether the technique produces usable extra resolution.
So to anyone wondering, give it a go. It works for me and resolves extra fine detail. Not jaw dropping but easy to see. Allan has good results and images to show it in the other thread I referenced.
I'm glad Bert shared the technique.
Peter.M
01-02-2013, 06:15 PM
Oversampling is when the camera samples at a higher resolution than the seeing/optics can deliver; undersampling is when the optics deliver more resolution than the camera can resolve.
Poita
01-02-2013, 06:34 PM
Brain-fart moment, I just came back from my 3hr math exam and am still a bit addled. Thanks for the catch, I've fixed it now.
-P
gregbradley
01-02-2013, 07:07 PM
You certainly got a lot of detail there. Not sure what the total exposure time of this image is. More data could smooth it out more and reveal even more. The RH200 is proving itself to be a real alternative to APO's perhaps more so than other compound scopes. There's some smart people around.
Greg.
Peter Ward
01-02-2013, 07:49 PM
A physicist has some washing that needs doing, and sees a sign in a shop window down the road: "Laundry done here".
He hauls the washing into the shop. The shop owner (by coincidence named Albert) is a Mathematician. The Physicist asks when he can have his washing done.
"Oh no!" "We just make signs! "
The point of my tale?
While the math works, I'd suggest it ignores the elephant in the room... atmospheric turbulence.
Turbulence spreading light beyond a pixel's location is why this works. With perfect seeing it would fail, and you'd need to physically move the sensor (i.e. drizzle).
Improving the S/N ratio by stacking images (assuming random noise) and getting multiple samples of a stellar location (effectively averaging its location to a precision less than a single pixel) makes the difference....
... but I still fail to see how the application of a constant (i.e. make pixel X bigger by 1.5) improves anything in the floating-point calculations that follow.
I suspect a 1.5 pixel Gaussian spread/blur applied prior to stacking would end up looking the same.
troypiggo
01-02-2013, 08:06 PM
I never once questioned the technique or theory. And my original question wasn't meant to imply that I did or didn't like the image. I was simply trying to understand why go to these lengths of getting extra resolution when there's more fundamental things (mainly noise and clipping as I agree that white balance is subjective and personal taste) that could be corrected in the image at normal resolution. Get that right first, then start worrying about resolution.
Again, just trying to understand the "why."
I shouldn't be surprised. I asked Bert here in another thread a while ago about why he was choosing to use his NII filter instead of Ha while he was still testing and trying to get flat fields. My reasoning for the question was innocent, just trying to understand.
His response was that many object like the Helix nebula emit strong NII, and that Ha filters wider than 3nm also pick up the NII spectrum. Correct statement, and I already knew that. But it didn't answer my question, and he wasn't imaging Helix, he was imaging Eta Carinae nebula or something that was strong Ha not NII.
I'm wondering if he was using Ha instead of NII he'd be getting stronger signal/noise and he wouldn't have to stretch so much.
Again. I'm no expert. That's why I'm asking the questions. To understand, not antagonise.
Peter.M
01-02-2013, 09:00 PM
I wonder, on a purely scientific level, if the NII band does indeed block the Ha light at F3. The two lines are so close together that it would not be impossible for the spectral shift in such a fast beam to move the bandpass enough to incorporate part of the Ha emission.
Originally Posted by Avandonk
About fifty percent of humanity is below average intelligence. I wish they would stop proving it!
I laughed pretty hard reading this.
Bert has more than the average man's arms.
avandonk
05-02-2013, 01:06 PM
Peter, the images are randomly dithered, so all detail is sampled on a finer mesh than the pixel size of the sensor. It would be useful to rotate the sensor as well, if only to eliminate the factor of nearly root two along the diagonals of the pixels.
I just cannot see why you find this so difficult to understand.
If the seeing is perfect then this will give a resolution of about four seconds of arc rather than a bit over six seconds of arc. The best a single image with this optic and camera can do is 3.1 seconds of arc per pixel which is really a resolution of 6.2 seconds of arc.
All I am claiming is that at best my resolution is four seconds of arc for all practical purposes.
It is even worse than this if the seeing is around 2 seconds of arc. Long exposures and mount tracking errors of even 1 second of arc rms will add to this blurring.
All this means that it is not just the undersampling of the sensor that is the limit of the final resolution. By using this method real practical gains can be made in resolution. After all the best that can be done is limited by the seeing and guiding errors.
The PMX rms guiding errors are less than one second of arc. So on a night of typical seeing say two seconds of arc my system can image to a resolution limited by seeing and guiding rather than inherent sensor resolution.
My choice of 600mm focal length was quite calculated as I am not really limited by seeing, as even when working at best performance of the system the seeing has to be worse than three seconds of arc to affect my imaging resolution.
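Putting rough numbers on that (a simple quadrature estimate, which is only a rule of thumb, not a proper PSF model):

```python
import math

seeing, guiding = 2.0, 1.0                      # arcsec: typical night, PMX rms error
sky_blur = math.sqrt(seeing**2 + guiding**2)    # ~2.2 arcsec from atmosphere + mount
# Compare with the sampling limits quoted earlier: ~6.2 arcsec for a single
# native frame and ~4.4 arcsec after dithering and upsizing. The sensor
# sampling, not the sky, is still the larger term, so the seeing has to get
# noticeably worse before it dominates the result.
print(sky_blur)
```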
Where this method of resolution enhancement is really useful is when binning 2x to get four times the sensitivity and well depth. Rather than doubling in size, your sensor 'pixels' then effectively only go up by a factor of root two.
Bert
avandonk
05-02-2013, 01:10 PM
Troy I am truly sorry for seeming to be impatient or abrupt. I must read genuine questions more carefully before responding.
Most emission objects are about 50:50 NII:HA. NII emission needs close hot blue stars. I am hoping that by making images in HA and NII I can find some differences.
Leakage due to angular spectral shifts is a very valid question. This can only be resolved by taking many images with the object/s in question at both the centre of the field and the edge, in both NII and HA.
Bert
troypiggo
06-02-2013, 07:19 PM
Thanks for the reply, Bert. I wasn't aware that it was 50/50 for emission objects. I've always read about the strong Ha, but NII is seldom mentioned other than planetary nebulae in the places I've seen.
avandonk
07-02-2013, 04:37 PM
Troy what people have been calling HA for many years is really HA and NII lumped together. If their filter was 5nm or greater in bandpass they would be recording both.
Below is a spectral graph of their relative emissions, I assume for a 50:50 mixture of N and H. Both these spectral lines come from monatomic atoms, not from molecules as H2 and N2 exist on Earth. It is a bit more complicated than just two emission lines.
It is only in the vast emptiness between stars that atoms exist without forming molecules. A lone proton really only interacts with the odd passing electron, only to have a photon from a nearby star ruin the relationship much later. Each time the electron falls back onto the proton, a new photon is emitted. I am sure you can see that a naked proton is more likely to interact with an electron than with another proton. To form even an H2 molecule you would need two protons and two electrons in the same place through multiple random collisions, which very rarely happens in deep space.
All the other atomic species, such as O, N, S and the rest, will hang on to their electron(s) and then start the same process as the naked proton or Hydrogen atom once the incoming photons have the energy to ionise them. It takes far more energy from the incident photon to ionise a larger atom to NII or OIII than to ionise the puny single proton that is the Hydrogen atom. The reason the wavelengths and quanta are nearly the same is that the already accumulated electrons shield the larger nucleus of an O or N, so the outer electron has a similar quantised energy difference.
For dust and gas to form into stars and a planetary system it all has to get very cold first, so that gravity can overcome the escape velocity of all the components, be they atoms, molecules or tiny bits of dust.
What fascinates me is that this is where we all came from! Uncountable atoms have interacted for billions of years so we can argue!!
I have not come very far from the ape at the Zoo that throws its own excrement at people he/I do not agree with!
For more see the FAQ at Astrodon.
Final note: when contemplating an image of a Supernova Remnant, you are looking at an image of your real ancient ancestors!
Bert
troypiggo
08-02-2013, 08:35 AM
I know about the spectral lines and filters. I looked hard at this when I bought my Astrodons ;)
But that's got nothing to do with what the objects are actually emitting. What I'm asking is how you arrived at the objects emitting 50/50 Ha and NII, as opposed to what the filters can pass. From my reading it's more (some?) planetary nebulae than emission nebulae like Eta Carinae that emit NII strongly. It makes sense to me that star-forming regions would be stronger in H than N for the very reasons you mention.
I mean, just because you're wearing a baseball glove doesn't mean someone is throwing baseballs at you.
avandonk
08-02-2013, 10:25 AM
I have done some images in NII and HA and have generally found the relative intensities to be about the same. Just do a search on avandonk and NII and OIII.
Here is one NII to green channel and HA to red.
http://d1355990.i49.quadrahosting.com.au/2012_11/HH_HA_NII_.jpg
There are more.
Bert