Quote:
Originally Posted by DarkArts
The medical science and maths is pretty straightforward.
Unfortunately, when it comes to the human visual system, there is
absolutely nothing straightforward about the medical science and maths.
Quote:
A very good human eye (6/6m or 20/20ft visual acuity - suitable to be a fighter pilot without corrective lenses) can resolve approximately 1 arcmin. From that, the maths gives - for an optimistic 100 inch screen size - the following maximum resolution viewing distances:
2K (FHD/regular BluRay) - 3.96 metres
4K (UHD/4K BluRay) - 1.98 m
8K (still called UHD, apparently) - 0.99 m
Sounds absolutely plausible, but unfortunately it fails to explain why
people can, in practice, see the difference.
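To be clear about what I'm conceding here: the arithmetic in the quote
itself checks out. Here is a quick sketch of it in Python (the 16:9
aspect ratio and the small-angle approximation are my assumptions; the
rest is straight from the quote):

Code:
import math

# Distance at which one pixel subtends 1 arc-minute on a 100-inch
# 16:9 screen (16:9 and the small-angle approximation are assumptions).
ARCMIN_RAD = math.radians(1 / 60)            # 1 arcmin in radians
diag_m = 100 * 0.0254                        # 100-inch diagonal in metres
width_m = diag_m * 16 / math.hypot(16, 9)    # screen width for 16:9

for name, h_pixels in [("2K (1920x1080)", 1920),
                       ("4K (3840x2160)", 3840),
                       ("8K (7680x4320)", 7680)]:
    pitch = width_m / h_pixels               # pixel pitch in metres
    print(f"{name}: {pitch / ARCMIN_RAD:.2f} m")

That prints roughly 3.96 m, 1.98 m and 0.99 m, matching the quote. The
problem isn't the arithmetic, it's the model of the eye behind it.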
A common mistake some make is that they know a little about how
digital cameras work and assume the human visual system is analogous.
For example, they assume that the three cone cell types, because of their
different spectral sensitivities, must work in a way analogous to the RGB
sensors of a camera. Yet how we perceive color doesn't work in that
simplistic way at all, and there is a set of beautiful and startling
experiments one can do that throw that model completely out the door.
Likewise, some know a little about visual acuity and the established one
arc-minute minimum angle of resolution for acute vision - the figure you
can determine from an eye chart test or by trying to split a double star
naked eye. They then might assume the eye must work like a CMOS sensor
array, throw in a little about the Rayleigh criterion and diffraction
limits, chuck in the Nyquist sampling criterion, do the numbers and
convince themselves there is no way they are going to see a difference
between two different resolution TV images at some distance, because the
theory says the light from two point sources will overlap on
neighbouring photodetectors in the eye. QED.
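For the record, the diffraction-limit half of that argument is easy to
reproduce, and it is why the argument looks so tidy (the 3 mm pupil and
550 nm wavelength are typical daylight figures I've assumed):

Code:
import math

# Rayleigh criterion: theta = 1.22 * lambda / D for aperture diameter D.
# Assumed values: 3 mm pupil and 550 nm light (typical daylight figures).
wavelength = 550e-9   # metres
pupil = 3e-3          # metres

theta_rad = 1.22 * wavelength / pupil
theta_arcmin = math.degrees(theta_rad) * 60
print(f"Diffraction limit: {theta_arcmin:.2f} arcmin")   # ~0.77 arcmin

That lands right around the one arc-minute figure, which is exactly why
the CMOS-sensor model of the eye seems so convincing.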
Fortunately for us humans, when viewing complex scenes rather than just
a couple of dots such as stars, we can do way, way better than that.
In fact, in some visual tasks our effective acuity is as good as 5
arc-seconds. That means an observer can reliably resolve features about
0.025 mm apart at a distance of 1 metre. That's like being able to pick
out a twenty-cent coin viewed from about 1.2 km away.
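The back-of-envelope check, if you want it (the 28.65 mm coin diameter
assumes an Australian twenty-cent piece):

Code:
import math

# What does an angle of 5 arc-seconds subtend?
theta = math.radians(5 / 3600)       # 5 arcsec in radians

print(f"At 1 m: {theta * 1000:.3f} mm")          # ~0.024 mm

coin = 28.65e-3                      # Australian 20c diameter (assumed)
print(f"Coin distance: {coin / theta:.0f} m")    # ~1180 m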
We do this through a mechanism termed hyperacuity. The first experiments
demonstrating it go back to the late 1890s.
For certain visual tasks, like judging whether two line segments are
aligned, it kicks in and we can detect a misalignment with a precision
some ten times better than ordinary visual acuity.
We don't entirely know how it works - there are several theories - but,
as with color processing, what we do know is that it is a function of
both the brain and the eye, not the eye alone.
The fovea plays a big part, and microsaccades - those little jerky
involuntary eye movements we make while processing a scene - appear to
contribute to our ability to see important details finer than simple
diffraction-limited visual acuity can explain.
In fact, part of the trick appears to be that a line or edge triggers
more than one photoreceptor cell at a time, and the brain can
interpolate a position from the pattern of responses across them.
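You can get a feel for how that might work with a toy model - my own
illustration, not the actual neural mechanism. Blur a line across a
coarse row of detectors and estimate its position from the centroid of
their responses; the recovered position is far finer than the detector
spacing:

Code:
import numpy as np

# Toy model of sub-detector localisation (an illustration, not the
# actual neural mechanism): optical blur spreads a line across several
# coarse detectors, and a centroid recovers its position.
rng = np.random.default_rng(0)

centers = np.arange(20.0)                # detector positions, pitch = 1
true_pos = 9.37                          # line sits between detectors

responses = np.exp(-0.5 * ((centers - true_pos) / 1.5) ** 2)   # blur
responses += rng.normal(0, 0.005, responses.size)    # a little noise

estimate = (centers * responses).sum() / responses.sum()
print(f"true {true_pos:.2f}, estimated {estimate:.2f}")
# The error is a small fraction of the detector pitch.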
The human visual system does a lot of tricks to get the most out of its
hardware. Prior to writing this response I went for a walk. As I came
around the corner of my house to my driveway, I spotted a small snake
that my path would have crossed in about 5 metres had I not screeched
to a halt. In seconds my eye and brain had processed it. I could see
its tiny black eyes as it raised its head toward me, I could see the
flitting of its tongue, and I could see it was a green tree snake and
not the similarly sized venomous whip snake I had seen only a couple of
days earlier.
Looking back at what I saw, I know exactly where I was standing, but
anything that was not the snake is something of a blur.
We don't look at the world like one big camera taking it all in at once.
Our eyes flit around and, with the neural network in our brain, we
process the important things - the snake on the path, a person's face,
or whatever we deem important when processing the images from a TV
screen. We do it in the time domain. Except when we are at the
optometrist or splitting double stars, we don't rely on raw visual
acuity alone. We use hyperacuity for many visual tasks as well.
Electrical engineers are well aware of the human capability for
hyperacuity, and papers modelling it appear in the professional
journals. Designers of high-end video cameras, computer graphics
engineers and engineers who design laser printers are particularly
aware of it. They appreciate that hyperacuity lets the brain extract
extra visual information from high-resolution imagery - things like
smooth curvature detection, which helps the brain process faces and
makes them look more realistic.
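One concrete place this bites is edge rendering. With grayscale
anti-aliasing you can place an edge at positions far finer than the
pixel grid, and hyperacuity is what lets a viewer actually pick that
up. A toy illustration of the idea (my own sketch, not any particular
printer's algorithm):

Code:
import numpy as np

# Coverage-based anti-aliasing: pixel intensity equals the fraction of
# the pixel covered by the shape, so an edge can sit at a fractional
# row position. (An illustration, not a specific device's algorithm.)
def edge_column(edge_pos, n_pixels=5):
    tops = np.arange(n_pixels)
    return np.clip(edge_pos - tops, 0.0, 1.0)   # per-pixel coverage

for pos in (2.0, 2.25, 2.5, 2.75):
    print(pos, edge_column(pos))
# The intensity ramp encodes the edge position to a fraction of a pixel.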
We also see the world in high dynamic range. We would be in trouble
if we didn't. But that's another topic, and one that in part blends into
how we process color in an area of the brain at the back of our heads.