The "eyepiece" image seems to have a square grid superimposed on it.
I think you can get a lot better dynamic range from a "live" image at the eyepiece than you can on a computer monitor, so some contrast enhancement is needed to compensate. With just 256 shades (8-bit) per colour channel this would be difficult, as the bulk of the information is contained in far fewer shades than that. Do you work with 16-bit-per-channel data when you do the stacking and processing?
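To illustrate the point about bit depth: here's a minimal numpy sketch (the signal range and values are made up for the example) showing why stretching a faint, narrow band of levels to full contrast falls apart at 8 bits but survives at 16:

```python
import numpy as np

# Simulate faint detail whose useful signal occupies only ~2% of full scale.
# At 8 bits that band spans just a handful of shades; at 16 bits, hundreds.
signal = np.linspace(0.30, 0.32, 1000)  # normalized intensities (made-up range)

img8 = np.round(signal * 255).astype(np.uint8)      # 8-bit capture
img16 = np.round(signal * 65535).astype(np.uint16)  # 16-bit capture

def stretch(img, lo, hi):
    """Linear contrast stretch mapping [lo, hi] to the full 0..1 display range."""
    img = img.astype(np.float64)
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

out8 = stretch(img8, img8.min(), img8.max())
out16 = stretch(img16, img16.min(), img16.max())

# Count how many distinct output levels survive the stretch.
# The 8-bit version collapses to a few steps (visible banding/posterization);
# the 16-bit version keeps smooth gradation.
print("8-bit distinct levels: ", len(np.unique(out8)))
print("16-bit distinct levels:", len(np.unique(out16)))
```

The 8-bit image ends up with only a handful of distinct levels after the stretch, which is the banding you'd see on a monitor; the 16-bit data keeps hundreds, which is why doing the stacking and stretching at 16 bits (or higher) before the final 8-bit conversion matters.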