Hmm... I'm not sure about the maths provided so far on the pointing accuracy improvement going from 8k -> 10k. Pointing error/resolution is measured by the straight-line distance between where you wanted to point and where the scope actually ended up pointing. The previous calculations measure something slightly different (the reduction in the area, or amount of sky, that any given coordinate represents).
Sorry for the slight tangent... I had a few minutes to spare on a slow afternoon.
As engineers, we usually specify accuracy in the same units as the quantity being measured - e.g. the length of a machined part might be specified as 200 mm +/- 0.01 mm.
In this case, the 8k encoder is being used to measure 360° of sky, so the pointing resolution is 360°/8000 = 0.045° per step. If we ignore every error source other than encoder resolution for now, the accuracy of pointing at any given coordinate is +/- 0.0225°. This is the worst-case scenario for one axis, since you can never be more than half a step away from the target (given that we're ignoring all other errors).
To extend the +/- 0.0225° pointing error to two axes (RA/DEC or Alt/Az), the worst-case error is the hypotenuse of the right isosceles triangle with sides of 0.0225°, i.e. +/- SQRT(2 * 0.0225°^2) = +/- 0.0318°. To see why, draw a square grid on a piece of paper, pick any coordinate point, draw lines representing +/- 0.0225° of error away from that point along both axes, and you'll see where the maximum error occurs. It's less complicated than it sounds - very clear once you draw it.
Similarly, we can run the same calculations for the 10k encoder, which gives a single-axis error of +/- 0.0180° and a two-axis error of +/- 0.0255°.
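If anyone wants to plug in other encoder counts, here's a minimal Python sketch of the same arithmetic (nothing telescope-specific - the function name and layout are just my own, and it only models the encoder-resolution error discussed above):

```python
import math

def worst_case_error(counts):
    """Worst-case pointing error (degrees) due to encoder resolution alone."""
    one_axis = 360 / counts / 2            # can never be more than half a step off
    two_axis = one_axis * math.sqrt(2)     # diagonal of the half-step square
    return one_axis, two_axis

for counts in (8000, 10000):
    one_axis, two_axis = worst_case_error(counts)
    print(f"{counts} counts: +/- {one_axis:.4f} deg (one axis), "
          f"+/- {two_axis:.4f} deg (two axes)")
```

It reproduces the +/- 0.0225°/0.0318° and +/- 0.0180°/0.0255° figures above.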
To compare the 10k encoder to the 8k encoder, simply divide the 10k error by the 8k error. For one axis, that's 0.0180° / 0.0225° * 100% = 80%. For two axes, it's 0.0255° / 0.0318° * 100% = 80% - the same ratio, because the SQRT(2) factor cancels out.
Therefore... despite increasing the encoder resolution by 25% going from 8k to 10k, the overall pointing error is only reduced by 20%.
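Another way to see why the 25% and 20% figures don't match: the worst-case error per axis is 360°/(2N), so it scales as 1/N, and the SQRT(2) two-axis factor cancels out of any ratio. A short sketch of that relationship (again just my own scratch code):

```python
old_counts, new_counts = 8000, 10000
resolution_gain = new_counts / old_counts - 1    # 0.25 -> 25% more encoder counts
error_reduction = 1 - old_counts / new_counts    # 0.20 -> 20% less pointing error
print(f"{resolution_gain:.0%} more resolution -> {error_reduction:.0%} less error")
```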
Intuitively, it makes sense that the overall reduction in pointing error has to be less than or equal to the improvement in encoder resolution. Imagine you upgrade the encoders on both the altitude and azimuth axes, then make a move that keeps the azimuth axis parked and only re-points the altitude axis. The accuracy of that move depends entirely on the altitude encoder - it can't suddenly get better just because the azimuth encoder was upgraded too, since the two axes are independent. Would be nice if it did, though.