Hi Carl,
Good first attempt!
Deconvolution is your friend, especially during bad seeing and longer exposure times or when averaging your frames.
Though useful for bringing out scale-dependent detail in otherwise unchallenged images, wavelet-based image manipulation is not really the go-to tool for correcting bad seeing conditions or focus-related issues.
The linked wavelets routine in Registax, as far as I can tell, does allow you to emulate a chain of sharpening operations that approaches a simple deconvolution operation. However, you'll get better results with less fiddling by using a 'real' deconvolution plug-in or a dedicated app.
Bad focus and/or bad seeing smear (convolve) your 'sharp' image across the image plane, but the way they do so differs. Deconvolution (un-smearing) can help in cases that aren't too bad and where the input image isn't too noisy.
Bad seeing scatters photons in a random pattern that follows a Gaussian distribution, so deconvolution with a 2D Gaussian kernel can fix this type of 'blur'. The worse the seeing, the bigger the pattern.
Incidentally, this is also what Registax' "Linked Wavelets" mode effectively does: deconvolution with a Gaussian kernel by sharpening, re-blurring, sharpening, re-blurring, and so on. This has nothing to do with wavelets anymore (it's single-scale); in essence it's a not-so-user-friendly (you have to specify each iteration manually) reblurred Van Cittert deconvolution routine that lets you (incorrectly) apply deconvolution to stretched data. Don't get me wrong; 'Linked Wavelets' in Registax can improve your image somewhat when used in this capacity, and it's better than nothing, but know that the results will almost always be sub-optimal.
An out-of-focus image, on the other hand, is the convolution of the real/sharp image with the shape of your scope's aperture (an evenly lit circle for a refractor, a circle with a black spot and some spider vanes in the case of a Newtonian, etc.). Deconvolution by the shape of your scope's aperture should therefore yield a sharp image again.
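As a rough illustration of what such an aperture-shaped kernel looks like, here's a geometric-optics sketch in NumPy (ignoring diffraction and spider vanes; the sizes and radii are arbitrary, and `aperture_psf` is my own naming):

```python
import numpy as np

def aperture_psf(size, outer_radius, inner_radius=0):
    """Geometric defocus kernel: a scaled image of the aperture itself.
    inner_radius > 0 models a central obstruction (e.g. a Newtonian's
    secondary mirror). Diffraction and spider vanes are ignored."""
    c = size // 2
    y, x = np.ogrid[:size, :size]
    r = np.hypot(x - c, y - c)
    psf = ((r <= outer_radius) & (r >= inner_radius)).astype(float)
    return psf / psf.sum()   # normalise so the blur preserves total flux

refractor = aperture_psf(33, outer_radius=8)                  # evenly lit disc
newtonian = aperture_psf(33, outer_radius=8, inner_radius=3)  # disc with hole
```

The further out of focus you are, the larger the kernel's radius becomes, while its shape stays that of the aperture.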
Remember to use deconvolution on a non-stretched (i.e. non-gamma-corrected) input image for best results - only stretch and/or gamma correct your image afterwards. Decon only makes sense on linear data (i.e. data as it was captured by the sensor, which is a simple photon counter).
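You can see why stretched data breaks the model with a tiny experiment: convolution (the blur) is a linear operation, but a gamma stretch is not, so the two don't commute - a blur model fitted to stretched data no longer matches what actually happened to the photons. The gamma value and toy signal below are arbitrary:

```python
import numpy as np

def gamma_stretch(img, gamma=2.2):
    """Simple non-linear stretch, as applied for display."""
    return np.clip(img, 0.0, None) ** (1.0 / gamma)

signal = np.tile([0.0, 1.0], 50)   # toy 1D 'image'
kernel = np.ones(5) / 5.0          # toy blur (a linear operation)
blur = lambda v: np.convolve(v, kernel, mode="same")

# Blurring then stretching differs from stretching then blurring:
mismatch = np.abs(gamma_stretch(blur(signal)) -
                  blur(gamma_stretch(signal))).max()
```

The mismatch is zero only for a linear 'stretch', which is exactly why decon belongs before any gamma correction.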
Hope this helps (and isn't too confusing)!
Cheers,
Ivo
EDIT: I should also say that if you'd like to use deconvolution (and you should!), it should be one of the first things you do to your image. This is for the simple reason that the more you modify your image, the harder it becomes to model the 'blur' that your image contains. The harder it is to model the blur, the less effective deconvolution will be when attempting to reverse that blur.