No further ideas Bert - I think you’ve done this one to death.

If you’re not a narrowband purist or trying to present scientific data, then you could try subtly blending conventional RGB broadband data with narrowband data. I’ve seen this done before with mixed results; it can improve image aesthetics if done right.
You’re correct, the Ha to Hb emission line ratio is about 3:1. Keep in mind this is a theoretical value; the observed ratio can vary. A blend of ~30% Ha data into OIII is quite common for narrowband work.
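If you work outside Photoshop, the blend itself is easy to script. Here’s a minimal sketch, assuming the Ha and OIII masters are already registered and saved as FITS; the file names, the 30% weight and the numpy/astropy usage are just illustrative, not a prescription:

```python
import numpy as np
from astropy.io import fits

# Illustrative file names - substitute your own registered masters.
ha = fits.getdata("ha_master.fits").astype(np.float64)
oiii = fits.getdata("oiii_master.fits").astype(np.float64)

# Blend ~30% Ha into the OIII channel (the weight is a starting point, not a rule).
blend_weight = 0.30
oiii_blend = (1.0 - blend_weight) * oiii + blend_weight * ha

fits.writeto("oiii_ha_blend.fits", oiii_blend, overwrite=True)
```

Tweak the weight to taste; too much Ha in the blue/green channels washes out the OIII signal.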
“Since most of the Hα emission we detect arises from hydrogen recombination, atomic physics is the only thing that dictates the ratio of Hα to Hβ emission from most ionized interstellar gases. Although it is a slight function of temperature, near 10,000 K, the ratio is about 3:1 in favor of Hα. However, interstellar dust absorbs more blue light than red so that ratios greater than this are typical in observations. Observed ratios of Hα/Hβ should be an interesting probe of dust in front of and within the ionized gas. Our current plan is to cover at least +/- 30 degrees about the Galactic plane along with several brighter ionized regions, such as the entire Orion-Eridanus complex. The observations began in December, 1999 but may take much of 2000 to complete since we have doubled the Hα exposure time for Hβ observations.” – reference
http://www.astro.wisc.edu/wham/science.html
Indeed, atmospheric extinction is stronger at blue wavelengths (OIII) than at red, which is why it’s important to plan an imaging session if you want good results. I typically wait for the object to reach its highest point in the sky before collecting blue data to minimise extinction. Interesting reading -
http://cfa-www.harvard.edu/icq/ICQExtinct.html. It’s not hard to calculate once you’ve done it a few times (see the sketch below). A stepped imaging approach works well, where you schedule each filter based on the object’s location in the sky (as it approaches the zenith – RRGGBB<object at zenith>BBGGRR). The same applies to narrowband work.
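As a rough illustration of the kind of calculation the ICQ page covers, this sketch estimates the light lost to the atmosphere from the object’s altitude, assuming the simple plane-parallel approximation (airmass ≈ sec z) and nominal per-airmass extinction coefficients; the coefficients and altitudes are placeholders, your site will differ:

```python
import math

def airmass(altitude_deg):
    """Plane-parallel approximation: airmass = sec(zenith distance).
    Reasonable above ~20 degrees altitude; use a refined formula lower down."""
    zenith_distance = math.radians(90.0 - altitude_deg)
    return 1.0 / math.cos(zenith_distance)

def extinction_mag(altitude_deg, k_mag_per_airmass):
    """Light lost to the atmosphere, in magnitudes, at a given altitude."""
    return k_mag_per_airmass * airmass(altitude_deg)

# Placeholder coefficients - blue (OIII) suffers more than red (Ha).
k_blue, k_red = 0.30, 0.10   # mag per airmass, site-dependent

for alt in (30, 50, 70, 90):
    print(f"alt {alt:2d} deg: OIII loses {extinction_mag(alt, k_blue):.2f} mag, "
          f"Ha loses {extinction_mag(alt, k_red):.2f} mag")
```

Running the numbers like this makes it obvious why the blue/OIII subs belong near the zenith and the red/Ha subs can be taken lower down.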
What you also need to consider is normalisation of each channel before the colour combine. Failing to equalise the average sky background brightness across the data will result in unequal weightings across the available dynamic range. If you only use PS you’ll need to calculate this manually by taking samples of the sky. Once you have the right weightings, don’t stretch individual channels; stretch them all simultaneously to keep the balance. Keep in mind that getting the nebulosity colour right doesn’t mean the stars will also have the right colour. Sometimes you’ll need to split the two, process them differently, then layer them to get the desired result.
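Outside PS the normalisation can be scripted as well. A minimal sketch, assuming three registered linear channel masters and using the median of a star-free background patch as the sky estimate; the file names, patch coordinates and the multiplicative scaling are all assumptions for illustration (an additive offset is an equally valid approach):

```python
import numpy as np
from astropy.io import fits

# Placeholder file names for three registered, linear channel masters.
channels = {name: fits.getdata(f"{name}_master.fits").astype(np.float64)
            for name in ("red", "green", "blue")}

# Placeholder star-free background patch (rows 50-150, columns 50-150).
patch = (slice(50, 150), slice(50, 150))

# Estimate the sky level of each channel from the patch median.
sky = {name: np.median(data[patch]) for name, data in channels.items()}

# Scale each channel so its sky background matches the reference (here: green).
ref = sky["green"]
normalised = {name: data * (ref / sky[name]) for name, data in channels.items()}

for name, data in normalised.items():
    fits.writeto(f"{name}_normalised.fits", data, overwrite=True)
```

The same idea carries over to narrowband channels before you map them to RGB.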