#21  25-04-2018, 10:29 AM
gregbradley
Registered User
Join Date: Feb 2006
Location: Sydney
Posts: 17,871
Quote:
Originally Posted by multiweb
I used StarTools a lot for noise reduction and sharpening if needed, although as I go I tend to do less and less NR in my processing.

Yes, the less NR the better, and it's no substitute for long exposure time and dithering.

But we often operate in less than ideal circumstances, so it's good to have a good tool on hand when needed.

Lightroom has a nice noise reduction slider, at least for digital camera images.

The Noise Ninja plug-in stopped getting support a few years back, so it's only a 32-bit plug-in, but it's still in PhotoNinja. I just tried it again in that program and it works really well.

I also tried Topaz a bit more and played with the sliders, and yes, it does seem a lot better than I initially thought.

I also have Franzis Pro noise reduction software I hadn't really been using. It seems to be useful.

Nik's free Dfine filter seems good, but it pops up a repetitive message that won't stop, so it's perhaps a little buggy with the latest versions of Photoshop.

Capture One may be good as well.

Greg.
#22  25-04-2018, 10:33 AM
gregbradley
This also opens up the question of what point in the workflow is best for noise reduction.

I have tended to do it late in the workflow, but it makes sense to do it earlier so the noise doesn't get stretched and boosted.

My workflow has traditionally used CCDStack 2 and Photoshop. I have a few plug-ins and Carboni actions in Photoshop that are useful, but basically it's mostly Photoshop's basic tools that I use:

CCDStack 2:
1. Calibrate (often trial and error to make sure I get the best calibration). I dark-subtract with a bias, and flat-field with a master flat that had no bias subtracted when I made it; instead I tick the bias box in CCDStack 2's flat menu. It would seem to do the same thing, but it sometimes doesn't, and with a scope that is tricky to flat-field I find this works best.
Part of this step is to check each sub and see if it's usable. I reject cloud-damaged subs (they bloat the final combine) and subs with bad, eggy stars. If I have enough data I also reject subs with enlarged stars and bad FWHM; sometimes I don't get that luxury of excess data.
2. Register (align).
3. Normalise bright and dim areas.
4. Data-reject hot and cold pixels and interpolate the result. I don't use any of the other data rejection tools, as I have never seen them improve an image, but others may see a result.
5. Combine. I use Median as it gets rid of satellite trails etc. Perhaps Average gets slightly better results in PI, but I don't see any gain in CCDStack 2.
6. I may do some gentle decon on the luminance at this point, or on an R, G, or B master that has larger stars, to get them more or less the same size between the RGB masters. As Mike has pointed out, decon is a touchy tool, so I usually only use it lightly, but I have also done multi-layered decon as per Ken Crawford to get extra sharpness out of galaxy shots. Not usually, though.
7. Save as a master file in 32-bit floating point. I re-register the masters using the luminance as the base image.
8. Do the above for all LRGB or Ha, OIII, and SII files. Sometimes I save the resulting master as a scaled TIFF; if I do, I always open the histogram and make sure the background is boosted slightly, as the auto button in CCDStack 2 tends to clip the black point.
9. Do a colour combine in CCDStack 2 using the LRGB. Sometimes I find I need to normalise the RGB if I get a whacky result (it seems like a lottery sometimes). Save the resulting colour image as a 16-bit TIFF ready for Photoshop.
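The CCDStack steps above (calibrate, normalise, data-reject, median combine) can be sketched in NumPy. To be clear, this is only an illustration of the same pipeline, not CCDStack 2's actual algorithms; the MAD-based rejection threshold and all parameter values are my own stand-ins:

```python
import numpy as np

def calibrate(light, master_dark, master_flat):
    """Step 1: dark-subtract, then divide by a flat normalised to unit mean."""
    flat = master_flat / master_flat.mean()
    return (light - master_dark) / flat

def normalise(subs):
    """Step 3: offset each sub so its background (median) matches the first."""
    ref_bg = np.median(subs[0])
    return np.stack([s - np.median(s) + ref_bg for s in subs])

def reject_hot_cold(stack, sigma=5.0):
    """Step 4: flag pixels far from the per-pixel median (robust MAD scatter)
    and 'interpolate' by replacing them with that median."""
    med = np.median(stack, axis=0)
    mad = np.median(np.abs(stack - med), axis=0)
    bad = np.abs(stack - med) > sigma * (1.4826 * mad + 1e-9)
    return np.where(bad, med, stack)

def median_combine(stack):
    """Step 5: median combine, which also suppresses satellite trails."""
    return np.median(stack, axis=0)
```

The order matters: normalising before rejection means the per-pixel statistics compare like with like, so a satellite trail in one sub stands out cleanly against the other subs.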

Photoshop:

10. Stretch the image using curves and levels. Leave some room to the left of the histogram so no black-point clipping occurs.
11. Do whatever processing is needed to tweak the colours etc., with selective high-pass filtering to sharpen bright areas. Here I would probably do some noise reduction, or a little later once colour and saturation are to taste and after any Ha blend. For Ha blends I use the Don Goldman screen technique, except I don't use Screen but a Soft Light blend instead; Screen mode is too harsh.
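Since the Screen versus Soft Light choice comes up here, the two blend modes can be written out for pixel values in [0, 1]. These are the W3C compositing formulas commonly cited for Photoshop-style blending, used here as a stand-in for Photoshop's exact implementation:

```python
import numpy as np

def screen(base, blend):
    """Screen: always brightens, and pushes bright areas hardest toward white."""
    return 1.0 - (1.0 - base) * (1.0 - blend)

def soft_light(base, blend):
    """Soft Light (W3C compositing definition): much gentler than Screen,
    and neutral where the blend layer is mid-grey (0.5)."""
    d = np.where(base <= 0.25,
                 ((16.0 * base - 12.0) * base + 4.0) * base,
                 np.sqrt(base))
    return np.where(blend <= 0.5,
                    base - (1.0 - 2.0 * blend) * base * (1.0 - base),
                    base + (2.0 * blend - 1.0) * (d - base))
```

For a bright Ha region blended over an already-bright base pixel, Screen drives the result toward pure white while Soft Light only nudges it, which matches the "Screen mode is too harsh" observation.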

Greg.

Last edited by gregbradley; 25-04-2018 at 10:54 AM.
#23  25-04-2018, 10:35 AM
LewisM
Novichok test rabbit
Join Date: Aug 2012
Location: Somewhere in the cosmos...
Posts: 10,388
Early. I do it post-integration.
#24  25-04-2018, 10:46 AM
RickS (Rick)
PI cult recruiter
Join Date: Apr 2010
Location: Brisbane
Posts: 10,584
Quote:
Originally Posted by gregbradley
This also opens up the question of what point in the workflow is best for noise reduction.
I like to do a light noise reduction on the linear data and then a little more later in the process if required, but typically targeted carefully with a mask.
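The masked approach can be sketched: build an inverted-luminance mask so the NR bites hardest in the dim, noisy background and leaves bright stars and nebulosity alone. The box blur below is just a placeholder for whatever NR algorithm you actually use, and all parameter values are illustrative:

```python
import numpy as np

def box_blur(img):
    """3x3 box blur via an edge-padded neighbourhood average
    (placeholder for a real noise reduction algorithm)."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def masked_noise_reduction(img, strength=1.0):
    """Blend a smoothed copy back in, weighted by inverted luminance:
    full NR in the shadows, almost none in the highlights."""
    smoothed = box_blur(img)
    lum = (img - img.min()) / (img.max() - img.min() + 1e-9)
    mask = np.clip((1.0 - lum) * strength, 0.0, 1.0)
    return img * (1.0 - mask) + smoothed * mask
```

In practice you would also blur the mask itself so the transition between protected and smoothed regions is invisible.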
#25  25-04-2018, 10:57 AM
gregbradley
Quote:
Originally Posted by LewisM
Early. I do it post-integration.
Yes, I think it makes the most sense to do some early on, so the noise doesn't go through the later processing, which tends to boost the noise as well as the signal and cause a breakdown.

How do you do it straight after combining? At that stage it's merely a master image in 32-bit FITS. Do you reduce to a 16-bit TIFF and use a noise reduction tool in Photoshop, or do you do something else? I do data rejection, which is a kind of noise reduction, but mostly I find it gets rid of the colour specks that appear near the end of your processing if you don't do this step.
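The "boosted noise" point can be put in numbers. A toy NumPy example (all values illustrative, not anyone's actual data): a power-law stretch expands the faint end, so the background scatter in display units comes out several times larger than it was in the linear data, which is the argument for doing NR before the stretch.

```python
import numpy as np

# Faint linear background: mean 0.01, noise 0.002 (illustrative numbers).
rng = np.random.default_rng(0)
background = rng.normal(0.01, 0.002, 100_000)

def stretch(x, gamma=0.25):
    """Simple power-law stretch: lifts faint signal hard."""
    return np.clip(x, 0.0, 1.0) ** gamma

stretched = stretch(background)
# The stretch multiplies the faint-end scatter by roughly the local slope
# of the curve (several-fold here), so any noise left in the linear data
# dominates the displayed background.
```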

Quote:
Originally Posted by RickS
I like to do a light noise reduction on the linear data and then a little more later in the process if required, but typically targeted carefully with a mask.
That sounds very smart. I think I will try that.
#26  25-04-2018, 03:05 PM
multiweb (Marc)
ze frogginator
Join Date: Oct 2007
Location: Sydney
Posts: 22,060
Quote:
Originally Posted by RickS
I like to do a light noise reduction on the linear data and then a little more later in the process if required, but typically targeted carefully with a mask.
That's the way StarTools does it as well. The NR is the last step, but it is applied to the linear data prior to stretching and deconvolution/sharpening.