The term "technological singularity" first appeared in a 1993 article by the computer
scientist and science fiction writer Vernor Vinge, while he was still at San Diego State University.
The article was entitled "The Coming Technological Singularity: How to Survive in the
Post-Human Era".
A copy of that article appears here -
http://www-rohan.sdsu.edu/faculty/vi...ngularity.html
Though a machine with artificial intelligence whose first task is to design a machine
even smarter than itself, and so on ad infinitum, may one day be built, the reality
is that we are nowhere close to achieving this, by any stretch of the imagination.
Those who have studied Computer Engineering or Computer Science
would probably agree that the most disappointing and slowest advances in
the field are the disciplines of artificial intelligence and machine learning.
Predicate calculus, decision trees, genetic algorithms, expert systems, Bayesian
networks, manifold learning, autoencoders and so on were each promised, decade
after decade, by many in the field to be the "next big thing" in
computing, but with rare exceptions they have failed to deliver.
We still type away on keyboards because we struggle even with the basics,
such as reliable speech recognition.
These issues of AI failing to deliver were addressed in the 1996 book
"HAL's Legacy: 2001's Computer as Dream and Reality".
Some of the problems that, back in the mid-1960s when 2001 was made, we thought
would be hard, such as playing chess, turned out to be comparatively easy.
At some point during our childhoods, most of us learn that you can always reliably win
or draw at a game of noughts and crosses, because for any state of the game there
is a precise set of moves that leads to a favourable outcome. Chess is no different;
the move tree is just much, much deeper. In 1997, IBM's Deep Blue beat Kasparov.
(In game theory, strictly determined contests like chess and noughts and crosses
aren't really regarded as interesting "games", but that is another matter.)
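The "precise set of moves" idea can be made concrete with a short sketch: an exhaustive minimax search over the noughts and crosses game tree. Nothing here is from Vinge or the book; the board encoding and function names are just illustrative choices. The same search applied to chess is what Deep Blue did, only with a vastly deeper tree and heavy pruning.

```python
# Exhaustive minimax over the noughts-and-crosses game tree.
# The board is a 9-character string of 'X', 'O' or ' '; X moves first.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line of three is complete, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position for X under perfect play:
    +1 = X wins, 0 = draw, -1 = O wins."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full, no winner: a draw
    other = 'O' if player == 'X' else 'X'
    scores = [minimax(board[:i] + player + board[i + 1:], other)
              for i in moves]
    # X picks the best score for X; O picks the worst for X.
    return max(scores) if player == 'X' else min(scores)

# From the empty board, perfect play by both sides forces a draw.
print(minimax(' ' * 9, 'X'))  # 0
```

Searching the whole tree from the empty board visits a few hundred thousand positions, which a modern machine does in moments; the point of the chess analogy is that the same complete search is hopeless there, since the tree has more positions than atoms in the observable universe.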
But other problems, such as machine vision and speech recognition, have proven
much more difficult. Machines still struggle to recognise arbitrary objects
in real-world settings, and speech recognition systems still struggle to correctly
interpret many natural language phrases.
Remember the scene in 2001 in which HAL lip-reads the astronauts' plan to disconnect
him, a feat that requires both machine vision and speech recognition.
It wasn't really anyone's fault that by 1996 we had not managed to build
a machine even remotely as clever as HAL, as the problems turned out to be much
harder than researchers in the field predicted. But it turns out that some of the
people who were making these predictions back in the '60s are exactly the same
people making the same claims today.
Consider for a moment the 1993 article by Vinge, where he states -
Quote:
Originally Posted by Vinge
I argue in this paper that we are on the edge
of change comparable to the rise of human life on Earth. The precise
cause of this change is the imminent creation by technology of
entities with greater than human intelligence. There are several means
by which science may achieve this breakthrough (and this is another
reason for having confidence that the event will occur):
o The development of computers that are "awake" and
superhumanly intelligent. (To date, most controversy in the
area of AI relates to whether we can create human equivalence
in a machine. But if the answer is "yes, we can", then there
is little doubt that beings more intelligent can be constructed
shortly thereafter.)
o Large computer networks (and their associated users) may "wake
up" as a superhumanly intelligent entity.
o Computer/human interfaces may become so intimate that users
may reasonably be considered superhumanly intelligent.
o Biological science may find ways to improve upon the natural
human intellect.
Vinge goes on to state -
Quote:
(Just so I'm not guilty of a relative-time ambiguity, let me be more specific:
I'll be surprised if this event occurs before 2005 or after 2030.)
So given it is now 2012, we have 18 years left to fulfil Vinge's prediction.
Now, a good starting point for building a conscious machine would be to
understand how the human brain works. The reality is that we don't.
Consider the state of the art of neurology in understanding brain function.
The most important tool has arguably been the microprobe: in other words, a little
needle for poking around. We have been probing and probing for decades, combining
that with a growing understanding of molecular biology and, more recently, with
imaging technology.
But what makes you conscious? We currently don't have a clue. How
do we organise our memories and index them in an associative way?
Nobody really knows.
Like most chimpanzees, who don't seem to have the capacity to
figure out that the reflection they are looking at in the mirror is their own,
perhaps how the brain functions, how consciousness arises and how memory is
organised eludes us simply because we aren't smart enough to see it.
But perhaps one day there will be a breakthrough. Perhaps it will come
from a single individual with the mind of a Newton, a Mozart or an Einstein.
Someone who studies brain function, "gets it" and can then explain it
to the rest of us primates.
But until that time, rest assured that the scenario Vinge might suggest, in which
you come into the room one morning and discover that your Toshiba notebook has
joined with the Internet and suddenly become self-conscious, is not going to happen.
No need to keep a broom handy to hit it with.
The Institute of Electrical and Electronics Engineers (IEEE) devoted a 2008
special issue of its Spectrum magazine entirely to the topic of the
technological singularity.
Those articles and other resources including videos and podcasts are available here -
http://spectrum.ieee.org/static/singularity
It even includes a PDF wallchart of "who's who" in the debate -
http://spectrum.ieee.org/images/jun0.../swho_full.pdf
in what is described as "a guide to the singularity true believers, atheists,
and agnostics".
There are videos and podcasts by Vinge and articles such as "Reverse Engineering
the Brain". (Even the storage requirements of trying to image a fruit fly's brain
are staggering.)
The most impressive demonstration of machine learning to date has clearly
been IBM's Watson.
There is a wonderful documentary on YouTube here (Part 1 of 4) -
http://www.youtube.com/watch?v=5Gpaf6NaUEw
An episode of Watson playing Jeopardy! is here (Day 1) -
http://www.youtube.com/watch?v=qpKoIfTukrA
But as for reaching the singularity any time soon? Keep banging the rocks together guys.