Well Doug, where does one start (it's difficult not to write a novel on this topic - perhaps it's time I start making money and producing my own tutorial DVDs)... it can be daunting going from one-shot colour to a mono camera with filters. Once you've been there, however, it's difficult to go back.
Have you worked out the G2V filter weights? Sure, manufacturers will indicate you can use 1:1:1 colour weights, but it's rarely the case that this is correct. It's usually close, however, so it depends on how accurate or particular you want to get with your imaging.
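If it helps to see the idea in numbers, here's a minimal Python sketch of applying colour weights to the masters once you've measured them from a G2V star test. The weight values and file names are just placeholders - substitute whatever your own test gives you.

```python
import numpy as np
from astropy.io import fits

# Hypothetical G2V weights from a G2V star test - substitute your own;
# 1:1:1 is rarely exactly right, but it's usually close.
weights = {"R": 1.00, "G": 0.85, "B": 1.12}

for band, w in weights.items():
    data = fits.getdata(f"master_{band}.fit").astype(np.float32)
    fits.writeto(f"master_{band}_weighted.fit", data * w, overwrite=True)
```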
I'd say you need to go longer on your RGB data. 30-second subs ain't that great, even for binned 2x2 data. While many focus on luminance, it's easy to forget the importance of quality RGB. If time is an issue, sure, bin the data, but this can come at a cost. For wide-field work where the arcsec/pixel is high, go 1x1 all the way - there is little value in binning.
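As a rough guide for the binning decision, here's a quick sketch of the usual image-scale formula (206.265 x pixel size in microns / focal length in mm). The camera and scope numbers are examples only - plug in your own.

```python
# Image scale in arcsec/pixel: 206.265 * pixel_size_um / focal_length_mm.
# Example numbers only - use your own camera and scope figures.
pixel_size_um = 5.4      # unbinned pixel size in microns
focal_length_mm = 530.0  # telescope focal length in mm

for binning in (1, 2):
    scale = 206.265 * pixel_size_um * binning / focal_length_mm
    print(f"{binning}x{binning}: {scale:.2f} arcsec/pixel")
```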
With regards to the combine process, there are a few ways to do it. Obviously treat the subs through each filter like any other sub. Closely observe each one's quality against others in the stack. Discard ones impacted by seeing or significant guiding problems. Aeroplane trails etc. are fine to leave; they'll be rejected as outlier pixels by the combine algorithm. Create master R, G, B frames - I usually create the luminance master in the process as well. Once you've got the luminance, register/align the R, G, B masters to it. At this point, if the R, G, B data is binned, it will be upscaled to match the luminance resolution; unbinned RGB is simply registered. I don't register the RGB subs together and then register them again to the luminance. Double registration, regardless of the form, should be avoided at all times. The more times data is registered, the more resolution can be lost to the resampling. In the case of RGB it's not a huge issue, but for luminance - take care.
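For what it's worth, this is the kind of outlier-rejected combine I mean, sketched in Python with astropy. The file pattern, sigma value and output name are placeholders; it assumes the subs are already calibrated and registered.

```python
import glob
import numpy as np
from astropy.io import fits
from astropy.stats import sigma_clip

# Hypothetical file pattern: calibrated, registered red-filter subs.
subs = np.stack([fits.getdata(f).astype(np.float32)
                 for f in sorted(glob.glob("red_sub_*.fit"))])

# Reject outlier pixels (aeroplane trails, hot pixels) across the stack.
clipped = sigma_clip(subs, sigma=3, axis=0)
stacked = np.ma.mean(clipped, axis=0)

# Pixels rejected in every sub are rare; fall back to the plain mean there.
master_r = np.where(np.ma.getmaskarray(stacked), subs.mean(axis=0),
                    np.ma.getdata(stacked))

fits.writeto("master_R.fit", master_r.astype(np.float32), overwrite=True)
```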
With the R, G, B masters aligned, they need to be normalised - a process that equalises the background across the images. If not done, the colour weights you apply will not balance the images accurately. Very rarely will the background be equal across the R, G, B masters, mostly due to differing conditions, e.g. the western sky may have a higher reading of sky glow, or the moon may be more illuminated. MaximDL and other tools do this well. However, if you want more control, do it yourself manually. Why would you do it manually? Because automated normalisation tools don't always understand the data you're dealing with - gradients are a classic example of this.
Anyway, using MaximDL, measure the average ADU in the same location across the R, G, B masters. This location should be devoid of nebulosity; if a gradient is present, measure the darkest point. Take note of the values between the masters. The goal is to make them equal, around a value of 50 ADU. You can use another value, say 100, but the point is they need to be equal. An example: R is 954 ADU, G is 1282 ADU, B is 1183 ADU (green tends to be high if light pollution exists). To get to 50 ADU, using MaximDL's pixel math function, you subtract from all pixels in each image: 904 from R, 1232 from G and 1133 from B. Once done, go back over the R, G, B masters and measure the area again - it should now be as close as possible to 50 ADU across the images. They are now normalised and ready for the RGB combine. You don't need to perform this for every image you create, but if you're having problems balancing the colours it's worth the effort. Analysing your data is key to success, as you'll begin to understand how far it can be pushed.
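If you'd rather script that normalisation step than do it by hand with pixel math, here's a rough Python equivalent. The file names, target ADU and background box coordinates are all placeholders for your own data.

```python
import numpy as np
from astropy.io import fits

TARGET_ADU = 50
# Hypothetical box over a patch free of nebulosity (the darkest point if a
# gradient is present) - measure the same region in every master.
y0, y1, x0, x1 = 100, 150, 200, 250

for band in ("R", "G", "B"):
    data = fits.getdata(f"master_{band}.fit").astype(np.float32)
    background = data[y0:y1, x0:x1].mean()   # e.g. R 954, G 1282, B 1183
    data -= background - TARGET_ADU          # e.g. subtract 904, 1232, 1133
    fits.writeto(f"master_{band}_norm.fit", data, overwrite=True)
    print(f"{band}: background {background:.0f} ADU, "
          f"subtracted {background - TARGET_ADU:.0f}")
```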
With the RGB master created, you can commence using it as a base layer in Photoshop, or as a colour, soft-light etc. blend in more advanced colour processing techniques to obtain the richness you desire. It may help to start off with straight RGB images and not add any luminance in the first instance (at least until you get the colour weights under control, as luminance can alter this). Later you can start getting "crafty" and create a synthetic luminance from the R, G, B masters to boost the original luminance signal, or you can try RRGB blends using a colour channel as the luminance. I've blended blue-filtered data into the luminance before to improve contrast in strong reflection nebulae. Many possibilities await; you simply need to experiment to see what works.
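As one example of the "crafty" end of things, here's a rough sketch of building a synthetic luminance from the R, G, B masters and blending it into the original L. The equal channel weights and the 50/50 blend are just starting points to experiment with, and the file names are placeholders.

```python
import numpy as np
from astropy.io import fits

# Assumes the normalised masters are already registered to the luminance.
r, g, b = (fits.getdata(f"master_{band}_norm.fit").astype(np.float32)
           for band in ("R", "G", "B"))
lum = fits.getdata("master_L.fit").astype(np.float32)

synthetic_l = (r + g + b) / 3.0            # simple average; weight to taste
boosted_l = 0.5 * lum + 0.5 * synthetic_l  # 50/50 blend - experiment here

fits.writeto("master_L_boosted.fit", boosted_l, overwrite=True)
```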
I feel certain many others can offer their LRGB tricks and tips. Good first attempt, I reckon. Keep at it mate. The deeper down the hole you go, the more you find there is to learn - it's a constant state of evolution.
Cheers