The night H posted the link (thanks again H), I stayed up late reading and
browsing through the PhD dissertation by Ren Ng.
What a beautiful piece of work. It certainly looks as if it has the potential to
be a multi-billion dollar idea.
Readers with electrical engineering or computer science/engineering backgrounds
will be familiar with the approach, common in those disciplines, of transforming
a particular class of problem into some other "space", performing the required
manipulations there, and then transforming back into the target "space".
Working in the complex plane or via Fourier transforms are a couple of good
examples. In fact, a significant part of the formal study in those disciplines
is spent learning the necessary wizardry.
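To make that pattern concrete, here is a small sketch in Python with NumPy, my own illustration rather than anything from the dissertation, of the classic example: blurring an image by hopping into Fourier space, multiplying instead of convolving, and hopping back.

```python
import numpy as np


def blur_via_fourier(image, kernel):
    """Convolve an image with a kernel by transforming both into Fourier
    space, multiplying there, and transforming back (circular convolution)."""
    # Pad the kernel out to the image size so the two spectra line up.
    padded = np.zeros_like(image, dtype=np.float64)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    # Convolution in image space becomes pointwise multiplication here.
    spectrum = np.fft.fft2(image) * np.fft.fft2(padded)
    # Transform back and discard the tiny imaginary residue.
    return np.real(np.fft.ifft2(spectrum))
```

The same shape of computation (transform, manipulate, transform back) is what the dissertation applies to refocusing.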
The idea of going from the 4D "Light Field" to its 4D "Light Field Fourier
Spectrum", then to 2D "Fourier Slices", and back out through 2D inverse Fourier
transforms to refocused photographs, as one embodiment of the Light Field
camera's refocusing approach, is a beautiful example of this "through the
looking glass" magic.
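For the curious, here is a rough sketch, again in Python/NumPy and again my own construction rather than code from the dissertation, of the spatial-domain operation that the Fourier slice route accelerates: refocusing a 4D light field L(u, v, s, t) by shifting each sub-aperture image in proportion to its position in the aperture and summing. The array layout and the alpha refocus parameter are assumptions made purely for illustration.

```python
import numpy as np


def refocus_shift_and_add(light_field, alpha):
    """Refocus a 4D light field by shift-and-add: the spatial-domain
    counterpart of slicing the 4D Fourier spectrum and inverting the slice.

    light_field : array of shape (U, V, S, T), directional axes (u, v)
                  followed by spatial axes (s, t); a hypothetical layout.
    alpha       : ratio of the new focal-plane depth to the original one.
    """
    U, V, S, T = light_field.shape
    refocused = np.zeros((S, T))
    # Centre the aperture coordinates so (u, v) = (0, 0) is the chief ray.
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
    for u in range(U):
        for v in range(V):
            # Each sub-aperture image is translated in proportion to its
            # position in the aperture and the chosen refocus depth.
            du = int(round((u - u0) * (1.0 - 1.0 / alpha)))
            dv = int(round((v - v0) * (1.0 - 1.0 / alpha)))
            refocused += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return refocused / (U * V)
```

The beauty of the Fourier slice theorem is that, after a single 4D transform of the light field, each refocused photograph drops out as a 2D slice of that spectrum followed by a 2D inverse transform, which is far cheaper than repeating the sum above for every focal depth.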
What I found particularly profound was the further convergence of concepts
such as ray tracing - something we normally do just inside a computer -
with digital photography.
Ray tracing has long been used both in the generation of computer imagery and
in the design of optics, particularly lenses.
Its first use in computer graphics is attributed to Arthur Appel in 1968. In the
pursuit of photo-realism in computer-generated images, the algorithms were
improved upon over the subsequent decades. Concurrently, beginning with the first
commercial embodiments in the early 1980s, digital cameras went through multiple
generations of improvements. Then came the introduction of the Light Field
camera concept, which marries the abstraction of ray tracing algorithms with
digital cameras for photographing real-world scenes.
Likewise, and particularly in today's motion picture world, high dynamic range
imaging (HDRI), with formats such as Industrial Light and Magic's OpenEXR, is
used because working in that "space" makes it easier for movie makers to marry
real-life footage with computer-generated imagery. The production of all the
Harry Potter movies, for example, has employed this magic.
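As a toy illustration of why that "space" helps, and my own sketch rather than anything tied to OpenEXR's actual API, compositing in linear floating-point radiance means a computer-generated element can be blended over a photographed plate without the clipping and gamma headaches of 8-bit imagery:

```python
import numpy as np


def composite_over(cg_rgb, cg_alpha, plate_rgb):
    """Straight "over" composite in linear HDR space.

    cg_rgb    : (H, W, 3) float array of linear radiance for the CG element;
                values above 1.0 (bright highlights) are perfectly legal here.
    cg_alpha  : (H, W, 1) float coverage of the CG element.
    plate_rgb : (H, W, 3) float linear radiance of the photographed plate.
    """
    return cg_rgb * cg_alpha + plate_rgb * (1.0 - cg_alpha)
```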
And when one considers that HDR and radiosity techniques are the mainstay of many
modern ray tracing systems, one starts to think about their marriage with
the Light Field ray tracing concepts.
One thing is certain: the "Photoshop"-like image programs of the future are
going to have far more controls than today's, which deal with relatively simple
color spaces.
For example, during post-processing, might one be able to manipulate the warmth
of the light that arrived from just one particular direction? Such questions
have undoubtedly already occurred to the Light Field developers.
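One can at least imagine what such a control might look like under the hood. As a purely hypothetical sketch, reusing the light-field layout assumed above with an added colour axis, warming the rays that entered through one region of the aperture is just a per-channel gain applied to a directional window:

```python
import numpy as np


def warm_directional_rays(light_field, u_window, v_window, gain=(1.1, 1.0, 0.9)):
    """Illustrative only: warm the light arriving from one part of the
    aperture by scaling the RGB channels of just those (u, v) samples.

    light_field        : array of shape (U, V, S, T, 3), directional axes
                         first, then spatial axes, then colour.
    u_window, v_window : slices selecting the directional window to adjust.
    gain               : per-channel multipliers; boosting red and trimming
                         blue gives a warmer cast.
    """
    out = light_field.astype(np.float64, copy=True)
    out[u_window, v_window] *= np.asarray(gain)
    return out
```

Rays from other directions are left untouched, which is exactly the kind of control a conventional 2D photograph cannot offer.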
Being situated in the heart of Silicon Valley is a major advantage for Lytro as
a company, as that part of the world has more experience with the necessary
fabrication techniques than anywhere else. Since they will be trading pixels for
the ability to do this post-processing magic, one would speculate that the
initial offerings will fall short in resolution compared to today's commodity
digital cameras. But the future certainly looks exciting, and this could well
end up being one of those applications that further increases the world's demand
for disk space and network bandwidth.