IceInSpace > Beginners Start Here > Beginners Astrophotography

  #1  
Old 01-08-2012, 09:38 AM
originaltrilogy (Petr)
Registered User

Join Date: Oct 2011
Location: Bathurst, NSW
Posts: 116
Help with processing please

I have taken photos with my QHY8 and Nebulosity and stacked them.

When I try to process them I get a strange gradient.

Can anyone process one of these? I cannot get a good image from them.

A friend has shared them on Dropbox for me: a .fit file and a 16-bit .tiff file.
Same data in different formats.

Could someone experienced try them for me, to see whether the data is in there or whether I am capturing wrong.

Thank you

Petr

https://www.dropbox.com/s/g4y3i4t971ekzhl/andro-comp.tif

https://www.dropbox.com/s/hrce6gwmfm...ndro-stack.fit
  #2  
Old 01-08-2012, 03:46 PM
A23649 (Nathan)
I've told you once.
Join Date: Sep 2010
Location: Canberra, Australia
Posts: 133
I have had trouble processing them because of dust on the sensor. Flat frames will correct for this, and for the circular illumination gradients. Flats do not correct an image's colour; you will need to do that in later processing. I hope this helps, as the image has a ton of promise.

Cheers,
Nathan

P.S. Try taking the flat frames in the evening twilight, when the sky is whitish rather than bright blue. Point the telescope at that area with the dust cap off, and choose an exposure so the histogram peak from the image sits about 1/3 of the way across, brightness-wise. This might be 0.2 to 2 seconds depending on the camera/CCD. Repeat for each filter if it's a mono camera, or just once for a one-shot colour camera, and try to take at least half as many flats as your light frames. Just add these when you are stacking your images and go from there.
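The correction itself is just per-pixel division by a normalised master flat. A minimal numpy sketch on synthetic data (illustrative only, not Nebulosity's actual pipeline; dark subtraction is omitted for brevity):

```python
import numpy as np

# Synthetic stand-in data: a uniform 1000-ADU sky seen through an optical
# system that darkens the corners by up to 30% (vignetting).
yy, xx = np.mgrid[0:100, 0:100].astype(float)
falloff = (((xx - 50) ** 2 + (yy - 50) ** 2) / 50 ** 2).clip(0, 1)
vignette = 1.0 - 0.3 * falloff

light = 1000.0 * vignette                 # what the camera records
flats = [1000.0 * vignette] * 20          # flats see the same vignetting

# Master flat: average the flat frames, then normalise to unit mean
master_flat = np.mean(flats, axis=0)
master_flat /= master_flat.mean()

calibrated = light / master_flat          # vignetting divides out
```

Because the flats see the same vignetting as the lights, the division leaves an evenly illuminated frame; the same division flattens dust donuts, since they show up in the flats too.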
  #3  
Old 01-08-2012, 05:39 PM
Garbz (Chris)
Registered User
Join Date: May 2012
Location: Brisbane
Posts: 644
There's data in there. I've given it a go, but I'm an absolute novice at this. The background had quite a green cast, and after correction and stretching I found what appears to be light bleeding into the edge of the frame. That could have been my rush job at correcting for the background vignetting, though.

The data is there but the dust is unfortunately right in the area of interest. Here's my very newbie (and very quick) attempt at processing. Also I rushed the deconvolution step so some stars are a bit doughnutted.
Attached: andro-stack.jpg
  #4  
Old 01-08-2012, 07:15 PM
originaltrilogy (Petr)
Registered User
Join Date: Oct 2011
Location: Bathurst, NSW
Posts: 116
Help with processing, Hyperstar images of Sculptor.

Thank you for the help.
I do know about flats, but left them out on purpose to see if people think I am acquiring the images okay. I mean, is there something in the image that I am doing wrong?
The camera is a QHY8 OSC. I get a big green gradient in my pictures, but the moon was nearly full. It is a new camera for me, so I am experimenting.
Mostly no one posts their unprocessed image along with their processed one, so I am not sure whether my problem is the capture or the processing.

Chris, that looks better than I can do.

For information, I used a big black dew shield, so I think no light was getting in that should not be getting in.

I checked the corrector plate and there is no moisture or fog on it.
I will clean the camera too and try again when the moon is gone.

Thank you for the help.

Last edited by originaltrilogy; 08-08-2012 at 10:11 AM.
  #5  
Old 01-08-2012, 07:15 PM
originaltrilogy (Petr)
Registered User
Join Date: Oct 2011
Location: Bathurst, NSW
Posts: 116
Can you tell if I should make shorter or longer subs?
  #6  
Old 01-08-2012, 07:45 PM
RobF (Rob)
Mostly harmless...
Join Date: Jul 2008
Location: Brisbane, Australia
Posts: 5,716
Here it is with a bit more "lippy" on, Petr.

How much data is there here? Chasing galaxies is certainly "courageous" from city skies, with the moon about and small amounts of data.
The much brighter background sky means your overall signal-to-noise is much lower than from clear, dark skies. The altitude of the object is important too, of course.
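Rob's point can be put in numbers with the standard per-pixel CCD signal-to-noise equation. A rough sketch (the electron counts below are made up purely for illustration):

```python
import math

def snr_per_sub(object_e, sky_e, read_noise_e=8.0):
    """SNR of a single sub: shot noise from object and sky photons,
    plus camera read noise (dark current ignored for simplicity)."""
    return object_e / math.sqrt(object_e + sky_e + read_noise_e ** 2)

faint_arm_e = 50.0                                   # electrons from a faint galaxy arm
dark_sky = snr_per_sub(faint_arm_e, sky_e=100.0)     # rural, moonless sky
moon_sky = snr_per_sub(faint_arm_e, sky_e=2000.0)    # bright moonlit/urban sky
```

With the same object signal, the moonlit sub here comes out at under a third of the dark-sky SNR, and since SNR grows only with the square root of total exposure, you would need roughly ten times the integration time to catch up.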

http://www.youtube.com/watch?v=ik8JT2S-kBE
Attached: M31.jpg
  #7  
Old 01-08-2012, 08:16 PM
originaltrilogy (Petr)
Registered User
Join Date: Oct 2011
Location: Bathurst, NSW
Posts: 116
Oh wow! How did you do that?

There is just over 4 hours of data. The object is just above the horizon, though.

Here is something from higher up: the Sculptor galaxy.

https://www.dropbox.com/s/eyo1zchnmmk0snc/sculptor.fit
https://www.dropbox.com/s/i7fol4vefl...culptortif.tif
  #8  
Old 01-08-2012, 09:14 PM
RobF (Rob)
Mostly harmless...
Join Date: Jul 2008
Location: Brisbane, Australia
Posts: 5,716
PixInsight is particularly good at background gradient correction and at stretching detail out of noisy data. Plus I get practice processing my own questionable data at times. Processing is an adventure in itself: colour calibration, etc.

It would certainly help here to have a flat-corrected stacked image, for the donuts and vignetting.
  #9  
Old 02-08-2012, 09:38 AM
Poita (Peter)
Registered User
Join Date: Jun 2011
Location: NSW Country
Posts: 3,586
Quote:
Originally Posted by originaltrilogy View Post
Oh wow! How did you do that?

There is just over 4 hours of data. Image is just above horizon though.

Here is something from higher up. The sculptor.

https://www.dropbox.com/s/eyo1zchnmmk0snc/sculptor.fit
https://www.dropbox.com/s/i7fol4vefl...culptortif.tif
I had a quick go just with photoshop levels and posted it in your other thread.
  #10  
Old 02-08-2012, 11:31 AM
irwjager (Ivo)
Registered User
Join Date: Apr 2010
Location: Melbourne
Posts: 532
Hi Petr,

As has been said here before, you definitely stand to gain a lot by imaging during new moon, using flats, and perhaps framing better, so you capture the full object in the frame with space left around the edges. Unfortunately most of the M31 signal is drowned out by the moon glow or light pollution, but I've attached my attempt at recovering some of the data and fixing up the dust donuts.

EDIT: Added a quick attempt at the Sculptor galaxy as well.

Cheers,
Attached: andro-stack_start_ST5.jpg
Attached: sculptor.jpg

Last edited by irwjager; 02-08-2012 at 12:15 PM.
  #11  
Old 02-08-2012, 12:42 PM
Poita (Peter)
Registered User
Join Date: Jun 2011
Location: NSW Country
Posts: 3,586
Holy Moly Ivo, that is great work, I think I'll quietly delete my quickie from his other thread
  #12  
Old 02-08-2012, 01:01 PM
alistairsam
Registered User
Join Date: Nov 2009
Location: Box Hill North, Vic
Posts: 1,837
Quote:
Originally Posted by irwjager View Post
Hi Petr,

As said here before, you definitely stand to gain a lot by imaging during new moon, using flats and perhaps better framing, so you capture the full object in the frame with space left around the edges. Unfortunately most of the M31 signal is drowned out by the moon glow or light pollution, but I've attached my attempt at recovering some of the data and fixing up the dust donuts.

EDIT: Added a quick attempt at the sculptor galaxy as well.

Cheers,
Ivo,

That is really cool. Could you briefly explain what you did? It will really help.
I have a QHY8 and am new to CCD processing, and the concepts of linear data, stretching and filters are not that straightforward.
  #13  
Old 02-08-2012, 01:02 PM
irwjager (Ivo)
Registered User
Join Date: Apr 2010
Location: Melbourne
Posts: 532
Quote:
Originally Posted by Poita View Post
Holy Moly Ivo, that is great work, I think I'll quietly delete my quickie from his other thread
Hehehe
I didn't think yours was too bad at all, Peter. Plus, there is no one 'right' way to process an image - as long as you feel the tools at your disposal have done exactly what you had in mind, your interpretation is as valid as anyone else's.
  #14  
Old 02-08-2012, 02:44 PM
irwjager (Ivo)
Registered User
Join Date: Apr 2010
Location: Melbourne
Posts: 532
Quote:
Originally Posted by alistairsam View Post
Ivo,

That is really cool, could you briefly explain what you did as it'll really help.
I have a QHY8 and am new to CCD processing and this concept of linear and stretching and the filters is not that straight forward.
Hi,

Before I delve into the processing, let me just say that you really need to get to know your gear and how to get the most out of it during acquisition. If you're on a budget and work with sparse equipment, can't travel to a dark sky site, or the moon is out, then you need to know how this affects your data.
Don't have a field flattener? Expect coma. Using cheap glass? Expect chromatic aberration. Using a cheap mount? Expect eggy stars. Shooting under light pollution? Expect a background level with a particular colour signature. Shooting with the moon out? Expect a background level with another signature. Shooting widefield? Expect stronger gradients. And so on.
You need to learn how to cope with any of these problems that may affect you. Do that in hardware first; if you can't, then use software as a fallback.

Once you understand the strong points and weak points of your gear, your circumstances and even yourself (how much patience do you have? how much of a perfectionist are you?), then you can start provisioning for these weak points.
Next, try to understand your strong points as well. Use them when picking an object, or use them to cover up your weak points. Bad tracking? Go widefield. Sensor too big? Use binning. Light pollution? Consider imaging in narrowband, etc.
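The binning trick mentioned above, for instance, is nothing exotic in software: each output pixel is just the sum of an n x n block of input pixels, trading resolution for per-pixel signal. A rough numpy sketch (the function name is my own):

```python
import numpy as np

def bin2x2(img):
    """Software 2x2 binning: sum each 2x2 block of pixels into one
    output pixel, trading resolution for per-pixel signal."""
    h, w = img.shape
    h, w = h - h % 2, w - w % 2                      # trim odd edges
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
binned = bin2x2(img)     # shape (2, 2); each output pixel sums 4 inputs
```

Summing four pixels quadruples the signal while the combined shot noise only doubles, so per-pixel SNR roughly doubles; hardware binning on a CCD does a little better still, because read noise is paid only once per binned block.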

Only when you feel comfortable with how you acquire your data, and with what you acquire on a typical night, can you really start to develop a processing regime. For us 'commoners', this workflow will be different for everyone, because everyone acquires different types of data: under different circumstances, with different gear, with different skills, of different objects. The more serious you become about this hobby, the more consistent and flawless your data will be.

Processing, besides the usual steps that everyone goes through (signal stretching, etc.), will deal primarily with addressing your weak points, using your strong points and tackling object-specific challenges (such as high dynamic range issues).

So let me preface my description by saying that the steps I took were mostly specific to these data sets and may not apply to you. Let me also say that some steps, techniques and algorithms, while having a basis in advanced image processing, are frowned upon by others, for example the creators of PixInsight. This is because some techniques partially use data-driven, best-estimate reconstructions of some pixels, instead of only using 1:1 transformations (curves, filters, histogram manipulations) of the data as it was recorded.

As far as Petr's M31 data set goes, it suffers from severe vignetting and light pollution. There are several tools you can use that will ameliorate this (GradientXTerminator for PS, ABE or DBE in PixInsight, Wipe in StarTools). This particular data set is severely affected, so much so that I had trouble with StarTools getting rid of all of it without adversely affecting the fainter parts of the galaxy.

Some trivial stretching soon revealed that the outline of M31 was very hard to distinguish from the background.

To bring out M31, I used a new technique called 'large scale light diffraction remodelling' (as implemented in StarTools' Life module), where semi-automatically selected individual parts of an object are used to remodel the aggregate light diffraction of the whole object, restoring its outline and making it stand out against the background again, with the original (already visible) detail embedded inside it. The outline is an approximation, but it tends to correlate very well with what the real object looks like. It's just one example of how data-driven, best-estimate replacement of pixels can bring out an object in your data that would otherwise be lost or inaccessible.

I used a special astronomy-specific version (coming in StarTools 1.3) of an algorithm similar to Photoshop's 'content-aware fill' to remove the dust donuts while retaining any stars that were dimmed by them. It's another example of a data-driven, best-estimate replacement.

I finished off with rounding the stars (another data-driven, best-estimate modification of the image).

The Sculptor galaxy data was much better. After a screen stretch (i.e. a stretch that doesn't modify the data but merely visualises it), I started off with some mild selective deconvolution, as the noise level was quite good (low). It managed to recover a fair amount of detail.
The data didn't require such aggressive gradient removal, and a small amount of stretching was enough to bring out the galaxy further.

As with the M31 image, I removed the dust donuts. Next I selected the whole galaxy in a mask and calibrated the white point to the full galaxy and nothing else. I bumped up the saturation a little, did some very mild noise reduction, rounded the stars and called it a day.
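For readers wondering what a screen stretch actually computes: viewers like PixInsight's STF typically push a chosen midtone level to mid-grey with a midtone transfer function, applied to the display only, while the linear data stays untouched. A small sketch (the sample pixel values are arbitrary):

```python
import numpy as np

def mtf(x, m):
    """Midtone transfer function, the core of a typical screen stretch:
    maps the chosen midtone level m to 0.5 while pinning 0 and 1."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# Normalised linear data: faint signal sits just above a dark background
data = np.array([0.0, 0.01, 0.05, 1.0])
preview = mtf(data, m=0.01)    # display-only; `data` itself is untouched
```

With the midtone set at 0.01, a pixel at that level lands at 0.5 on screen and faint detail just above the background becomes visible, while the black and white points stay pinned at 0 and 1.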

Cheers,
  #15  
Old 02-08-2012, 03:01 PM
silv (Annette)
Registered User
Join Date: Apr 2012
Location: Germany 54°N
Posts: 1,110
Ivo, this is a very helpful description of how to deal, software-wise, with this or that flaw in these two particular images. Thank you so much!
  #16  
Old 02-08-2012, 04:34 PM
alistairsam
Registered User
Join Date: Nov 2009
Location: Box Hill North, Vic
Posts: 1,837
Thanks Ivo,
Understood all that you said about hardware and knowing one's hardware limitations and strengths.
Thanks for the explanation of the processing as well.
I may not understand what each software technique actually does in the back end, but I will have to try it out.
  #17  
Old 03-08-2012, 03:46 AM
Poita (Peter)
Registered User
Join Date: Jun 2011
Location: NSW Country
Posts: 3,586
Thanks so much Ivo for taking the time to explain all that, it really really helps.

Taking the Sculptor galaxy image, what could have been done to improve it at the acquisition end, other than the obvious need for flats?

Longer or shorter subs, more data?

I'd wager he is using a Hyperstar setup; I got very similar ring patterns when first using mine.
  #18  
Old 03-08-2012, 11:36 AM
irwjager (Ivo)
Registered User
Join Date: Apr 2010
Location: Melbourne
Posts: 532
Quote:
Originally Posted by Poita View Post
Taking the Sculptor galaxy image, what could have been done to improve it at the acquisition end, other than the obvious need for flats?

Longer or shorter subs, more data?
It's hard to say... We don't know the rest of Petr's setup, where he was, at what time, or the exposure times and gain he used. Plus, I wouldn't rate myself a hardware expert. I guess flocking (if applicable) could help with any stray moonlight being reflected in the tube?
It appears the data is not blown out anywhere, which is good. It also appears there are enough frames, as the signal looks quite workable, even in the face of the LP/gradients.
  #19  
Old 03-08-2012, 08:22 PM
RobF (Rob)
Mostly harmless...
Join Date: Jul 2008
Location: Brisbane, Australia
Posts: 5,716
Quote:
Originally Posted by irwjager View Post
<snip>

Top post Ivo, almost worthy of pinning somewhere. The reality is that many people don't want to spend an arm and a leg on top-quality astro gear, and you can make do with a lot less than top-notch as long as you understand and expect certain limitations. You can learn to understand and manage those limitations, and do things at data collection and post-processing to increase the chance of achieving your goals. Not everyone is worried about APODs (nice though!), and the PMX, FSQ, 16803 rig can wait until we win lotto anyway
  #20  
Old 04-08-2012, 01:33 PM
irwjager (Ivo)
Registered User
Join Date: Apr 2010
Location: Melbourne
Posts: 532
Quote:
Originally Posted by RobF View Post
The reality is that many people don't want to spend an arm and a leg on top-quality astro gear, and you can make do with a lot less than top-notch as long as you understand and expect certain limitations. You can learn to understand and manage those limitations, and do things at data collection and post-processing to increase the chance of achieving your goals. Not everyone is worried about APODs (nice though!), and the PMX, FSQ, 16803 rig can wait until we win lotto anyway
Too true Rob. Sometimes we just need to take a step back and realise how incredible our 'beginners' equipment is when compared to that of a professional 30-40 years ago.
No one would argue that the results people got back then were any less valid just because they had simple equipment by today's standards. Yet somehow we talk ourselves into some sort of inferiority complex when imaging with modern but 'simple' equipment which, in the right hands, would easily blow away anything you'd have been able to produce 30-40 years ago.
I have very little time for people (I won't mention any names or vendors) that say you should just give up on this hobby if you can't afford a coma corrector, a CCD, a proper mount, this filter or that filter.
Just enjoy the ride and capture some fresh photons of amazing things only a fraction of people on earth have captured photons of before. Whether you use a webcam or a CCD, it's easier and more affordable than it has ever been!

Cheers,