02-11-2009, 11:27 AM
jase (Jason)
Indeed. Clipping masks for SII, Ha, OIII, R, G, B. The broadband data simply adds extra punch in certain areas. He also discussed building a synthetic luminance by combining SII, Ha and OIII, with Ha at a lower weighting so it does not dominate. If you've used CCDStack before, you'll know that you can weight subs after normalisation. You would normally let the normalisation routine handle this, but it may not deliver the desired result. Typically the normalisation reference sub should be one of average seeing, not exceptional seeing. Ken also mentions the PS plug-in Focus Magic. This is essentially an alternative deconvolution method that you would use after CCDStack's PC or MaxEnt deconvolution. Like any form of deconvolution, it needs high-quality data. The general trend is to take 20+ hours of data, which is not difficult to achieve with a robotic setup.
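The synthetic luminance idea above can be sketched in a few lines. This is a minimal illustration, not Ken's actual workflow: the frames are toy arrays, and the 0.5 Ha weight is an assumed value chosen just to show Ha being down-weighted so it doesn't dominate the blend.

```python
import numpy as np

def synthetic_luminance(sii, ha, oiii, ha_weight=0.5):
    """Weighted average of registered, normalised narrowband frames.

    Ha is given a lower weight (ha_weight, assumed 0.5 here) so its
    typically much stronger signal does not dominate the luminance.
    """
    weights = np.array([1.0, ha_weight, 1.0])
    stack = np.stack([sii, ha, oiii])          # shape (3, H, W)
    # weighted mean along the frame axis
    return np.tensordot(weights, stack, axes=1) / weights.sum()

# Toy frames: Ha signal made deliberately stronger than SII/OIII.
sii = np.full((4, 4), 100.0)
ha = np.full((4, 4), 400.0)
oiii = np.full((4, 4), 120.0)
lum = synthetic_luminance(sii, ha, oiii, ha_weight=0.5)
# (100 + 0.5*400 + 120) / 2.5 = 168.0 in every pixel
```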

I've got pages of notes, but most will be covered in the presentations when released.

Tony Hallas gave quite an interesting presentation on noise. In summary, he showed an asymptotic curve indicating that going beyond about 16 subs does not reduce the inherent noise much further, and that there is little gain in taking more than 8 dark or bias frames. So in short, mega data doesn't equate to a noiseless image; there comes a point of diminishing returns.
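The diminishing-returns behaviour follows from the statistics of averaging: random noise in a mean of N subs falls roughly as 1/sqrt(N), so each doubling of subs buys less. A quick sketch with simulated Gaussian noise (the sigma value and frame sizes are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def stacked_noise(n_subs, sigma=10.0, n_pixels=100_000):
    """Average n_subs frames of pure Gaussian noise and measure
    the residual standard deviation of the stack."""
    subs = rng.normal(0.0, sigma, size=(n_subs, n_pixels))
    return subs.mean(axis=0).std()

for n in (1, 4, 16, 64):
    # residual noise ~ sigma / sqrt(n): 10 -> 5 -> 2.5 -> 1.25
    print(f"{n:3d} subs: noise ~ {stacked_noise(n):.2f}")
```

Going from 1 to 16 subs cuts the noise by 4x, but going from 16 to 64 only buys another 2x for four times the exposure time, which is the asymptote Hallas described.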