IceInSpace > Equipment > Software and Computers
#1 · gregbradley · 13-10-2015, 04:43 PM

What's your processing workflow for LRGB images?

I'm interested in hearing what others are using as a processing workflow to make LRGB images from Mono CCDs.

Greg.
#2 · LewisM · 13-10-2015, 05:30 PM

I am a rank amateur compared to you, Greg, but here's what I USED to do:

CCDStack:
1. Open each filter as a set.
2. Apply dark/bias/flat calibration.
3. Register, usually using auto, then applying Lanczos/sinc 36.
4. Do a mean or median combine - see which one has a better balance of noise reduction and detail.

After each filter set is done, I load up the mean- or median-combined master for each filter, re-register, apply normalisation (I think I usually used the lum file as the base), then run rejection and interpolation of rejected pixels, and use the LRGB combine. Then I usually do a background neutralisation, and fiddle a LITTLE with the image tools (DDP and the others) to stretch it a LITTLE. Save as a TIFF, and into PS CS6 or 5.1.

In PS, I usually ran vibrance and saturation first, followed by either a small stretch or my detail-extraction routine, then fiddled somewhat with light noise reduction (maybe a small median blur) and some contrast enhancement with overlay and screen masks, and voila: Lewis's Cruddy Low Data Integration Images.
#3 · gregbradley · 13-10-2015, 08:38 PM

Thanks Lewis.

I do similar.

A few points. Firstly, data rejection should be done after registration and on each filter set, not on the masters. Data rejection requires a certain number of subs (the more the merrier) for the statistical maths used to identify what is noise and what is most likely signal.

Mean versus median. Rick has said mean gives the best signal-to-noise ratio, so I checked. In some instances it gave a tiny bit more, but often it was the same. When it was better the difference was negligible, and mean does not remove noise as well as median. Mean is another word for average. Median is the midpoint in a set of numbers: in 3, 5, 7 the median is 5, and the average is also 5. But in some sets the median and mean will differ.
A set of data that contains some high values that are noise may come out better with median than with mean/average. Say the values are 3, 5, 100: the mean is 36 but the median is 5. So the noise does not skew the image as much, and odd values get dropped out, because 100 is a long way from 5 and shows up as an outlier (a value outside the usual range of the set and therefore most likely noise).

If you watch the subs when using mean you can see the stack get more solid, but some noise artifacts, satellites etc. may not disappear. Use median instead and, as long as you have enough subs to make a meaningful statistical sample, those artifacts will vanish.
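Greg's satellite-trail point is easy to demonstrate with a few lines of numpy (synthetic numbers, purely illustrative):

```python
import numpy as np

# Ten simulated subs: a flat 100 ADU field with Gaussian read noise,
# plus a bright satellite trail across one row of one sub.
rng = np.random.default_rng(0)
subs = 100 + rng.normal(0, 5, size=(10, 64, 64))
subs[3, 30, :] += 5000

mean_stack = subs.mean(axis=0)
median_stack = np.median(subs, axis=0)

# Mean: the trail survives, diluted to roughly +500 ADU on that row.
# Median: the trail vanishes, because ~5100 is an outlier against ~100.
print(mean_stack[30].mean(), median_stack[30].mean())
```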

I am not sure how maximum and minimum combine work, but I have not found them to be very useful.

As far as noise reduction goes, I think it's common practice in digital imaging to do it early in the process. I don't always, but I have started to. It makes sense, as a lot of the stretching and enhancing steps will boost the noise as well.

With my Trius 694 I check each new set to see which calibration works best. Sometimes flats (with no bias subtracted), darks and biases; sometimes bias-subtracted flats and darks. Sometimes bias only is best (it gets rid of the whitish left-hand edge of Trius 694 images). I found recently that some bias subtraction was showing the fixed-pattern noise of the sensor - a fine grid pattern. I started using sigma reject for my biases and flats, which helps. But CCDStack has memory issues, so I can't open all my luminance files even for an 8-hour total LRGB image. It's limited to about 15 Proline 16803 files (32.4 MB each) or about 30-40 Trius files (11.4 MB each) before I get memory problems. I might start using PixInsight soon for this, as Rick has pointed out it has no problem with that. It also has drizzle, which CCDStack does not.

Before I combine the subs for a particular filter I normalise them. I drag the box so it includes a dark area and a bright area in the same box. You normalise to help identify the outliers, so the noise stands out. Normalising means scaling each sub so their value ranges are similar. I then run data rejection: hot pixel, interpolate, then cold pixel, interpolate. I have seen others use the other data-rejection methods, but usually I have not found them that useful. Once data rejection is done I median combine to make a master. When registering I now use Lanczos 256 to resample the data, as bicubic sampling can slightly blur the image (very minor, though). Lanczos takes ages, though.
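The dark-box/bright-box normalisation Greg describes amounts to solving for a scale and offset per sub. A sketch of the idea; the function and box names are mine, not CCDStack's:

```python
import numpy as np

def normalise_to_reference(sub, ref, dark_box, bright_box):
    """Scale and offset `sub` so it matches `ref` in two sampled regions.

    dark_box / bright_box are (row_slice, col_slice) tuples standing in
    for the dark and bright areas inside the dragged box."""
    sd, sb = np.median(sub[dark_box]), np.median(sub[bright_box])
    rd, rb = np.median(ref[dark_box]), np.median(ref[bright_box])
    scale = (rb - rd) / (sb - sd)   # match the bright-to-dark range
    offset = rd - scale * sd        # then pin the background level
    return scale * sub + offset
```

Once all subs sit on the same scale, a deviant value at a given pixel really is an outlier rather than a difference in sky level or transparency, which is what makes the rejection step meaningful.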


1. Open the subs for one filter in CCDStack. Flick through them and erase duds: clouds, errors, bad tracking etc.
2. Calibrate them. (I usually experiment to see what works best for the Trius; for the Proline it's darks, flats with no bias subtracted - don't ask me why, it just works better for my CDK - and bias.)
3. Normalise using a dark and a bright area inside the dragged box.
4. Data rejection: hot pixels then interpolate, then cold pixels then interpolate (this gets rid of spots; if you skip it you may get coloured "smarties" noise in your background later in processing).
5. Register the images, resampling with Lanczos 256.
6. Do a median combine and save as a 32-bit floating-point FITS. I may run deconvolution (25 iterations, positive constraint) on the luminance, or on a fat-starred RGB master, to make the stars similarly sized.
7. Do the same for the rest of the filters.
8. Register them all using the luminance as the base image, with Lanczos 256 resampling, and save them, overwriting the earlier files as they are now aligned with each other.
9. Do an LRGB colour create.
10. Save as a 16-bit TIFF once I am happy with it.
11. I sometimes save the masters as 16-bit scaled TIFFs so they can be opened in Photoshop without any stretching. But if you do this in CCDStack you need to lighten the background and watch the histogram, as it will black-clip your images by default.
12. Open in Photoshop and do the stretch/colour processing/noise reduction etc.
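Steps 3-6 above can be sketched in a few lines. This is a generic robust-rejection combine, not CCDStack's actual code; the MAD-based threshold is my stand-in for its hot/cold pixel criteria:

```python
import numpy as np

def reject_and_combine(subs, k=3.0):
    """Flag per-pixel outliers (hot = too high, cold = too low) against
    the stack median, then median-combine the surviving values."""
    med = np.median(subs, axis=0)
    mad = np.median(np.abs(subs - med), axis=0)
    # 1.4826 * MAD estimates sigma for Gaussian noise; the epsilon
    # avoids a zero threshold on perfectly flat synthetic data.
    bad = np.abs(subs - med) > k * (1.4826 * mad + 1e-9)
    return np.nanmedian(np.where(bad, np.nan, subs), axis=0)
```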

Greg.

Last edited by gregbradley; 13-10-2015 at 08:51 PM.
#4 · Somnium (Aidan) · 13-10-2015, 09:53 PM

Quote: Originally Posted by gregbradley
Mean versus Median. Rick has said Mean gives best Signal to noise ratio so I checked. In some instances it gave a tiny bit more but often it was the same. ...
Some stacking software, such as RegiStar, has a median/mean stacking function, meaning it rejects outlier data using the median and then uses the mean to average out the background noise component. Best of both worlds. Not sure if CCDStack has this option.
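Aidan's "reject with the median, then mean the rest" idea is straightforward to sketch (generic numpy, not RegiStar's algorithm; the 3-sigma threshold and noise figure are arbitrary examples):

```python
import numpy as np

def median_reject_then_mean(subs, sigma=5.0, k=3.0):
    """Reject values more than k*sigma from the per-pixel median, then
    mean-combine the survivors: median's outlier immunity plus the
    mean's slightly better SNR. `sigma` is an assumed, known per-sub
    noise level, just for this sketch."""
    med = np.median(subs, axis=0)
    keep = np.abs(subs - med) <= k * sigma
    masked = np.where(keep, subs, np.nan)   # rejected values don't vote
    return np.nanmean(masked, axis=0)
```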
#5 · gregbradley · 13-10-2015, 09:57 PM

Perhaps that's the best of both worlds.

Greg.
#6 · RickS (Rick) · 13-10-2015, 10:08 PM

The difference in SNR between a median and mean combination is provable with some relatively simple mathematics.

A mean combination with an appropriate rejection algorithm will generally give best results (good rejection and high SNR). There are some quite sophisticated algorithms, some of which even use the median value to detect and remove outliers, e.g. Winsorized Sigma Clipping.
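For the curious, the flavour of Winsorized Sigma Clipping can be sketched for a single pixel's stack of values. This is a simplified reconstruction, not PixInsight's actual implementation, which differs in details of the Winsorization procedure and its correction factor:

```python
import numpy as np

def winsorized_sigma_clip_mean(values, k=3.0, iters=10):
    """Reject outliers using a sigma estimated from a Winsorized copy of
    the data (extremes pulled in to the clip bounds), so a single hot
    pixel can't inflate the estimate, then mean the survivors."""
    v = np.asarray(values, dtype=float)
    for _ in range(iters):
        med = np.median(v)
        sigma = 1.4826 * np.median(np.abs(v - med))   # robust seed (MAD)
        w = np.clip(v, med - k * sigma, med + k * sigma)
        sigma = 1.134 * np.std(w)   # re-estimate from the Winsorized data
        keep = np.abs(v - med) <= k * sigma
        if keep.all():
            break
        v = v[keep]
    return v.mean()
```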

Cheers,
Rick.
#7 · gregbradley · 13-10-2015, 10:40 PM

Quote: Originally Posted by RickS
A mean combination with an appropriate rejection algorithm will generally give best results (good rejection and high SNR). ...
I did several stacks of images with the data window open in CCDStack, where it reports SNR and other statistics. Using both mean and median I either got the same SNR or, when it differed, mean was a very small amount better, hence my comment about it being negligible. Median, however, did a far better job of getting rid of satellite trails, cosmic ray hits etc. than mean, which tended to leave satellite trails in.

This is using CCDStack, though; PI may have other options that work better and give a different result.
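Greg's measurement matches theory: for pure Gaussian noise the median of N subs is about sqrt(pi/2) ≈ 1.25x noisier than the mean, i.e. roughly 80% of the SNR - a small price next to the artifact rejection it buys. A quick Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subs, n_trials = 16, 20000

# Zero-signal Gaussian noise: compare the residual scatter of the two
# combine methods over many simulated pixel stacks.
noise = rng.normal(0.0, 1.0, size=(n_trials, n_subs))
scatter_mean = noise.mean(axis=1).std()
scatter_median = np.median(noise, axis=1).std()

print(scatter_median / scatter_mean)   # ~1.2 for 16 subs
```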

Greg.
#8 · Paul Haese · 14-10-2015, 10:03 AM

My workflow changes a bit (except for calibration). I don't really do processing the same way each time in PS, simply because each data set has different characteristics. Anyway, here is roughly what I do:

Calibrate data set (one filter set)
Register
Normalise
Data reject
Combine (either sum or median). To date I have not found mean helpful, but I might take a look at it with a current data set.

Once I have a data set calibrated I register it with my base image, which will also be a luminance set. Sometimes I create a luminance set from all the data, including NB and broadband. It is unorthodox, but I find it creates an interesting detail layer set.

Once registered, I create a base image.

I then take it into PS and work the data further. I rarely use noise correction, due to the sheer volume of data I gather. If I do use noise reduction it is via inverted reveal masking, and it is very slight. I had to do this with the Helix image I posted despite the amount of data. I hope subsequent data runs on that object will remove the need.

I use a lot of masking techniques and nothing as a set regime. Each image has its own workflow and controls. Some of those masks are for contrast, sharpening, star saturation, object saturation and detail. One thing I do is apply each mask separately and then flatten. I don't like to stack masks one on top of the other. I see each image as an individual creation, and if I get it wrong I just go back to before that step to remove the mask.

Greg, with regard to not being able to open more frames, you need to look at the memory setting in CCDStack. I have mine set to 8 GB and it opens 60-odd hours of data without any issue. Mind you, I have 16 GB of memory on my machine; I could give CCDStack up to 11 GB of it. Ring me if you need further explanation.
#9 · gregbradley · 14-10-2015, 03:48 PM

Thanks Paul.

What data rejection are you doing, and how are you doing it? I was reading some posts by Stan Moore, and he says you don't need to interpolate after hot/cold pixel rejection, as the stacking process will get rid of the rejected pixels. So you run the hot/cold pixel rejection but then just use one of the mean/median etc. stacking methods.

I experimented with sum but it does not get rid of artifacts.

I'll check that setting in CCDStack. My earlier laptop had 16 GB of RAM.

I may get the updated Adam Block complete CCDStack tutorial, as it's a bit of a world in itself.

Are you using bias-subtracted flats, or subtracting the bias when applying non-bias-subtracted flats? I don't know why it would make a difference, but it seems to with my CDK. I am now taking larger numbers of flats than I used to: 12-16.

Greg.
#10 · Paul Haese · 14-10-2015, 06:02 PM

Hi Greg,

I am using Poisson Sigma reject at 2%. That usually picks up everything that needs rejecting. I do data rejection prior to the combine.

Sometimes summed data still has some artefacts left over, but if I up the rejection to 3% it gets rid of them.
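CCDStack doesn't publish exactly what "Poisson sigma reject at 2%" computes, but the idea of Poisson-scaled rejection can be sketched: photon noise grows as the square root of the signal, so the rejection threshold should too. Everything below (the quantile trick, the gain handling) is my guess at a workalike, not CCDStack's algorithm:

```python
import numpy as np

def poisson_sigma_reject(subs, reject_frac=0.02, gain=1.0):
    """Flag values whose deviation from the per-pixel median is large
    relative to the expected Poisson noise sqrt(signal/gain), choosing
    the cut so that roughly `reject_frac` of all values are rejected.
    Returns a keep-mask with the same shape as `subs`."""
    med = np.median(subs, axis=0)
    expected = np.sqrt(np.maximum(med, 1.0) / gain)   # Poisson sigma
    dev = np.abs(subs - med) / expected
    cut = np.quantile(dev, 1.0 - reject_frac)
    return dev <= cut
```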

When I create my flat frames I subtract the bias first and that works best for me. My flats are usually 30 subs.

Look under Edit/Settings/Large Stacks and then up the VM MB from the base amount to whatever you want to use for the memory setting.
#11 · gregbradley · 14-10-2015, 06:05 PM

Thanks Paul.

Greg.
#12 · Somnium (Aidan) · 15-10-2015, 10:32 PM

Do you guys do anything to really dial the stars in to a pinpoint? Or do they come out well after stacking subs and compiling the LRGB image?
#13 · rogerg (Roger) · 16-10-2015, 12:07 AM

Take it with a pinch of salt, considering how good and frequent my LRGB image processing is ... but ...

Overall:
  1. Reduce all files in MaximDL
  2. Process each filter (see below)
  3. CCDStack: register frames then colour combine
  4. PixInsight: DBE
  5. Photoshop: tons of processing to fix it up

For each filter:
  1. CCDStack: de-bloom
  2. CCDStack: align
  3. CCDStack: normalise
  4. CCDStack: sigma reject
  5. CCDStack: mean combine

I'm trying to get a handle on deconvolution to fit it into the workflow, to get rid of star halos.

Feel free to point out what I'm doing wrong
#14 · gregbradley · 16-10-2015, 08:35 AM

Quote: Originally Posted by rogerg
... I'm trying to get a handle on deconvolution to fit it into the workflow, to get rid of star halos.

Sounds good Roger.

I read a post by Stan Moore and did an experiment last night. If you do data rejection and then interpolate the pixels to replace the rejected values, it slightly blurs the data. It's cleaner, but it's also slightly blurred.

If you do data rejection and don't interpolate, and then stack using, say, median, the data is a bit sharper. Not much, but you can see it slightly.

Something to know: how you do data rejection is important. Interpolating the rejected data I think must have come from a tutorial (Adam Block?).
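Stan Moore's point (that the combine itself can absorb rejected pixels) looks like this in numpy terms; a minimal sketch, assuming rejection has already produced a boolean mask:

```python
import numpy as np

def combine_dropping_rejects(subs, reject_mask):
    """Rejected values simply don't vote in the median: no neighbour
    interpolation, so no smoothing of the surviving data."""
    return np.nanmedian(np.where(reject_mask, np.nan, subs), axis=0)
```

The interpolation route replaces a rejected pixel with a blend of its neighbours before combining, which is where the slight blur Greg noticed comes from; dropping the value instead leaves the remaining subs' sharpness untouched.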

I think using Lanczos for the resampling after aligning also makes a slight difference compared to bicubic sampling. Again, sharper.

Which data-rejection method gives the best final result is a little unclear to me. I settled on hot and cold pixel rejection only for a long time. Paul uses Poisson set to 2 or 3%. Sigma reject sounds good as well. I don't know that one method is totally superior to another. I'm open to comments about results from different rejection methods. Stan can be a bit vague about it and suggests experimentation.

Basically most of my subs are pretty clean, as my cameras are very good, but there can be some spot noise, shot noise, satellites etc. that get cleaned up by it. Even some streaky subs can be used, and the streaks get cleaned up.

As far as decon for halos goes: that will make them worse. Decon tends to hammer any halo area around a bright star.

I know Mike hates decon, but perhaps that is a poor implementation in the software he is using; I have seen some shockers. CCDStack's decon is very good and pretty artefact-free up to a point. 25 iterations with positive constraint and auto star selection is pretty nice, with no damage. But I do agree with him that a little goes a long way. You can do multiscale sharpening by running decon at various strengths and layering the results in Photoshop with decreasing opacities as a detail-enhancement tool. Marcus has done that to good effect, and I have used it a few times as well and liked what I got.
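CCDStack's deconvolution internals aren't public, but "25 iterations, positive constraint" is the shape of textbook Richardson-Lucy, whose multiplicative update keeps the estimate positive by construction. A minimal sketch, not CCDStack's code:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=25):
    """Textbook Richardson-Lucy deconvolution. The multiplicative
    update preserves positivity and approximately conserves flux."""
    image = np.maximum(image, 1e-12)
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    est = np.full_like(image, image.mean())   # flat starting estimate
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same")
        est = est * fftconvolve(image / np.maximum(conv, 1e-12),
                                psf_mirror, mode="same")
    return est
```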

It depends on the data. As Paul has pointed out, no two images are exactly the same, so your processing flow needs to be a little flexible when you get to the Photoshop/PI part. Some subs take decon really well and others don't. Oversampling helps decon, I have read many times.

Halos are best handled physically, with better filters or baffling, or in Photoshop. I developed a method to handle halos in Photoshop. It works, but as with any processing, clean data is always infinitely preferable to a Photoshop fix-up.

Greg.
#15 · gregbradley · 16-10-2015, 08:39 AM

Quote: Originally Posted by Somnium
do you guys do anything to really dial in the stars to a pinpoint? or do they come out well after stacking subs and compiling the LRGB image.

Pinpoint stars are more an equipment-plus-seeing issue. Better tracking, sharper focusing, refocusing with temperature shifts, and using scopes that don't shift focus so much with temperature change all help. Accurate tracking and using an AO add up to sharper stars.

I read one post claiming a faster scope can make sharper stars. Not oversampling can also lead to sharper stars.

Cameras with small wells can make overexposed stars fatter, especially if the stars are not protected with masks during stretching and colour-enhancement processing.

Star reduction techniques do exist, though. StarTools has one. I think morphological transformation in PixInsight is another. Selecting bright stars and then reducing them with curves is another. Avoid the minimum filter; that is a star and image wrecker.

The liquify tool in PS, set to pucker and applied a couple of times to the bright stars, reduces the large ones.

There is also the unscientific technique of clone stamping out a troublesome star.
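Of the techniques above, the morphological route is the easiest to show. A toy sketch in the spirit of PixInsight's morphological transformation; `amount` is an invented blend parameter, and a real tool would protect non-stellar detail with a star mask first:

```python
import numpy as np
from scipy.ndimage import grey_erosion

def reduce_stars(image, size=3, amount=0.5):
    """Blend the image with a grayscale-eroded copy: small bright peaks
    (stars) shrink toward the local background, while smooth large-scale
    structure is barely affected."""
    eroded = grey_erosion(image, size=(size, size))
    return (1 - amount) * image + amount * eroded
```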

Greg.
#16 · Somnium (Aidan) · 16-10-2015, 09:07 AM

Quote: Originally Posted by gregbradley
Pinpoint stars are more an equipment-plus-seeing issue. ...
Thanks for the response, Greg. Very helpful.
#17 · rogerg (Roger) · 16-10-2015, 10:16 AM

Quote: Originally Posted by gregbradley
As far as decon for halos- that will make them worse. Decon tends to hammer any halo area around a bright star.
Perhaps a misunderstanding of what kind of halo I'm talking about, Greg. The halos I get are introduced by one set of subs (such as green) being less sharp than another, often due to a slightly different focus point or viewing conditions. That results in a colour halo, because the stars are fuzzier in one filter than the others. So I'm led to believe decon will tighten up the stars in that channel, hence removing the colour halo.

Roger.
#18 · RickS (Rick) · 16-10-2015, 12:19 PM

Quote: Originally Posted by gregbradley
Which data rejection method and the final results are a little unclear to me. I settled on hot and cold pixel rejection only for a long time. Paul uses Poisson set to 2 or 3. Sigma Reject sounds good as well. I don't know that one method over another is totally superior. Open to comments about results from different data reject methods. Stan can be a bit vague about it and suggests experimentation.
In PI you get an estimate of the SNR when you stack and can tweak the rejection algorithm and parameters to get an optimal result.

The best choice of algorithm depends on the data and number of subs. For a small number of subs, Percentile Clipping is good. When you get to larger numbers of subs, Winsorized Sigma Clipping and Linear Fit usually produce the best results.
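Percentile Clipping, as I read the PixInsight documentation, rejects values by their fractional deviation from the median, which needs no sigma estimate and so behaves well on very small stacks. A hedged one-pixel sketch; the parameter values are arbitrary examples:

```python
import numpy as np

def percentile_clip_mean(values, p_low=0.2, p_high=0.1):
    """Reject values more than p_low below or p_high above the median,
    measured as a fraction of the median, then mean the survivors."""
    v = np.asarray(values, dtype=float)
    med = np.median(v)
    keep = (v >= med * (1 - p_low)) & (v <= med * (1 + p_high))
    return v[keep].mean()
```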

BTW, I will post a summary of my workflow when I get a few minutes to write it up.

Cheers,
Rick.
#19 · Octane (Humayun) · 16-10-2015, 12:22 PM

Just link to your PDF, Rick!

H
#20 · gregbradley · 16-10-2015, 12:30 PM

Quote: Originally Posted by RickS
... BTW, I will post a summary of my workflow when I get a few minutes to write it up.

That would be good, Rick, as I suspect PI is the better tool for this aspect of image processing.

Greg.