No point in me putting my own spiel on this; might as well quote it.
"Sampling refers to how many pixels are used to produce details. A CCD image is made up of tiny square-shaped pixels. Each pixel has a brightness value that is assigned a shade of gray color by the display routine. Since the pixels are square, the edges of features in the image will have a stair-step appearance. The more pixels and shades of gray that are used, the smoother the edges will be.
Images that have blocky or square stars suffer from undersampling. That is, there aren't enough pixels being used for each star's image. The number of pixels that make up a star's image is determined by the relationship between the telescope focal length, the physical size of the pixels (usually given in microns, or millionths of a meter), and the size of the star's image (usually given in arcseconds).
...
Unfortunately, we don't have as much control over the size of the star image, which will vary, depending mostly on the seeing conditions of the observing site. Mountaintop observatories often have 1 arcsecond (or better) seeing, whereas typical backyard observing sites at low elevations in towns or cities might have 3 to 5 arcsecond seeing.
A good rule of thumb to avoid undersampling is to divide your seeing in half and choose a pixel size that provides that amount of sky coverage. For example, if your seeing conditions are generally 4 arcseconds, you should achieve a sky coverage of 2 arcseconds per pixel. If your seeing conditions are often 1 arcsecond, you'll want a pixel size that yields 0.5 arcseconds per pixel. The following formula can be used to determine sky coverage per pixel with any given pixel size and focal length:
Sampling in arcseconds = (206.265 / (focal length in mm) )* (pixel size in microns)"
Quoted from
http://www.ccd.com/ccd113.html
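To put numbers on that formula, here's a quick sketch in Python. The function names are mine, not from the article; the first is the article's formula verbatim, the second just inverts it using the "half your seeing" rule of thumb.

```python
def sampling_arcsec_per_pixel(focal_length_mm: float, pixel_size_um: float) -> float:
    """Sky coverage per pixel: (206.265 / focal length in mm) * (pixel size in microns)."""
    return 206.265 / focal_length_mm * pixel_size_um


def max_pixel_size_um(seeing_arcsec: float, focal_length_mm: float) -> float:
    """Largest pixel (in microns) that still samples at half the seeing,
    per the rule of thumb above (target coverage = seeing / 2)."""
    target = seeing_arcsec / 2.0
    return target * focal_length_mm / 206.265


# Example: a 2000 mm focal length scope with 9-micron pixels
print(round(sampling_arcsec_per_pixel(2000, 9), 3))   # 0.928 arcsec/pixel
print(round(max_pixel_size_um(4.0, 2000), 1))         # 19.4 microns for 4" seeing
```

So on that (hypothetical) 2000 mm scope, 9-micron pixels give about 0.93"/pixel, which would be oversampled for typical 4-arcsecond backyard seeing; you could bin pixels or use a focal reducer to get closer to the 2"/pixel target.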
Cheers