Old 16-08-2011, 01:20 PM
jase (Jason)
Processing 101 w/normalisation

I originally produced this for Lightbuckets members who were just starting out with RGB filtered imaging. I thought I'd post it here as well, given I didn't want such information to disappear into a black hole, never to be seen again.

Other than the overall workflow (which is a couple of years old now, I should add), the focus of attention is on normalising data. This is a very important step in a workflow, but I've found the term is often used loosely, so there can be confusion over its definition. Normalisation is a processing activity that equalises data for two purposes:

Normalisation for data rejection
Normalisation for colour combine

Normalising for data rejection typically occurs after you have registered the subs and hot/dead pixels have been removed. Normalising the data to the same level (equalisation) across the sub frame set is vital in determining which data is an outlier, i.e. which data does not conform and should be treated as noise or another form of erroneous data and be rejected. If the sub frames are not equalised prior to combining, the data rejection algorithm will fail to determine the "correct" or normal value of a given pixel. Depending on the tool you use, normalisation may be automatic. In MaximDL there is a small check box in the combine dialog asking if you want to normalise the subs before combining the data. In CCDStack, normalisation is more of a manual task, where you select in the image data what constitutes a highlight (bright area) and a background (dark area). Once normalised, the average pixel values amongst the subs should be relatively close to each other.
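To make the idea concrete, here is a minimal sketch in Python/numpy of what a tool does behind the scenes. It is not MaximDL's or CCDStack's actual algorithm: I assume a simple multiplicative scale to a reference sub's median background (real tools often fit both offset and gain, or use user-selected highlight/background regions), followed by a basic sigma-clip rejection combine.

```python
import numpy as np

def normalise_subs(subs):
    """Scale each registered sub so its median level matches the first
    (reference) sub. The median is used as a robust background estimate;
    a single multiplicative scale per sub is an assumption for simplicity."""
    ref_bg = np.median(subs[0])
    return [sub * (ref_bg / np.median(sub)) for sub in subs]

def sigma_clip_combine(subs, sigma=2.5):
    """Mean-combine the normalised subs, rejecting per-pixel values that
    deviate from the stack mean by more than `sigma` standard deviations."""
    stack = np.array(subs)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    mask = np.abs(stack - mean) <= sigma * std
    counts = np.maximum(mask.sum(axis=0), 1)  # avoid divide-by-zero
    return (stack * mask).sum(axis=0) / counts
```

Without the normalisation step, a sub shot through thin cloud (or with a brighter sky) sits at a different level across its whole frame, and the rejection mask would flag its legitimate pixels as outliers instead of genuine defects like cosmic ray hits.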

Normalisation for colour combine is performed on the R, G, B master frames, that is, the sub frames per colour channel that have already been combined to produce a low-noise master frame. Normalising the R, G, B master frames equalises the background colour between the frames so that the correct colour weights are applied. For example, you may have blue filtered data that was collected as the moon was rising, so its background is brighter than that of the R and G filtered data. Without normalisation of the background, the resulting colour combined image would carry a strong blue cast. Normalisation simply brings the pixel values across the three filter master frames (R, G, B) to similar values (again, equalisation).
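The moonlit-blue-channel example can be sketched as follows. This is my own illustrative version, not any particular tool's routine: I assume the background is estimated as each master's median (a stand-in for hand-selecting a dark region, as you would in CCDStack), and an additive offset is removed, since unequal sky glow is a pedestal added to the signal rather than a gain difference.

```python
import numpy as np

def normalise_backgrounds(r, g, b):
    """Subtract a per-channel offset so all three master frames share the
    same background level (taken as the lowest of the three medians).
    The median-as-background estimate is an assumption for this sketch."""
    target = min(np.median(c) for c in (r, g, b))
    return tuple(c - (np.median(c) - target) for c in (r, g, b))
```

After this step, whatever colour weights you apply act on the object signal alone, instead of also amplifying the brighter sky in one channel.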

The attached processing 101 flow chart highlights where normalisation is performed. The first is for data rejection, i.e. prior to or during execution of the combine algorithm. The second is the colour combine normalisation, which occurs only on the RGB master frames.

The workflow listed is dated, but it is proven to work; I produced several images using it in the past. Of course, should you have any questions, please PM me or start a new thread for discussion.

Attached Thumbnails
Processing101.jpg