
View Full Version: Beware the singularity, says Skype founder


Hans Tucker
22-07-2012, 09:49 AM
Scaremongering:question: Premature Warning:question: Now you are going to have less informed people fearing technological advancement. I thought there was a level of ethics control applied to AI development to address some of these concerns.

http://www.stuff.co.nz/technology/gadgets/7302528/Beware-the-singularity-says-Skype-founder

Ric
22-07-2012, 10:00 AM
Interesting article Hans

Thanks for the link

multiweb
22-07-2012, 10:54 AM
I think machines don't have the key ingredient we all have: the capacity to adapt.

Miaplacidus
22-07-2012, 11:48 AM
The Reapers will sort it all out for us.

silv
22-07-2012, 11:52 AM
I was thinking something similar, but less about victimizing ourselves:
the capacity to better our situation, even if it is not yet perfect.
That grey area between good and bad, right and wrong, is difficult to judge when you don't have emotions/an amygdala, and it is difficult to find the urge/the ambition to change something when you know you are risking the worsening of some other given circumstance.

But that would not save us from the machines taking over. Even if I am right, they'd just perpetuate the status quo indefinitely.

interesting views in the article. "scaremongering" indeed :lol:
I like playing around with ideas like that.
Thanks for posting :thumbsup:

joe_smith
22-07-2012, 01:23 PM
Maybe he is somehow involved in The Singularity is Near project? Or a fan of it.
The book (http://www.amazon.com/gp/product/0143037889?ie=UTF8&tag=crb0tamzu-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0143037889) and now a movie (http://www.youtube.com/watch?v=8XWXJDgbeP0) are out, called The Singularity is Near. The movie looks pretty good; it's got Pauley Perrette (Abby from NCIS) in it as well, can't wait to see it.

watch The Singularity is Near trailer.
http://www.youtube.com/watch?v=8XWXJDgbeP0

Who knows? Go back 200 years: do you think the people back then could have imagined what it looks like today? All of the stuff we take for granted was not even thought of by them, and if it was, it was science fiction and only dreamers believed it might come true. I remember a story my dad told me about his brother when they were kids.

He was always into science fiction, books, movies and all that. He used to tell my dad and everyone else, "One day they will put a man on the moon and we will see it." My dad said his dad told him he was stupid; well, they all did. But when they did go to the moon he told them, "I told you, didn't I?" It was a shame they couldn't pay him the huge wager they had with him as kids lol. The same goes for us: what will it be like 200 years into our future? Only time will tell, that's if time as we know it is still around then :P

silv
22-07-2012, 01:58 PM
:lol: :jump:

FlashDrive
22-07-2012, 03:28 PM
I remember telling my oldest brother in the '60s, when I was about 12 years old, that one day TV sets would be 'flat'... and I still remember his answer: "Rubbish... you need a cathode tube, because the electrons need to 'bombard' a phosphor surface."

How wrong he was... and he was university educated at the time.

Should have made a ' bet ' with him.

Flash ..:D

gary
22-07-2012, 04:30 PM
The term "technological singularity" first appeared in a 1993 article by computer
scientist and science fiction writer Vernor Vinge (http://en.wikipedia.org/wiki/Vernor_Vinge), whilst he was still at San Diego State University.
The article was entitled "The Coming Technological Singularity: How to Survive in the
Post-Human Era".

A copy of that article appears here -
http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html

Though the concept of building a machine with artificial intelligence whose
first task is to design a machine even smarter than itself, and so on ad infinitum,
may one day become reality, we are not even close to achieving this by
any stretch of the imagination.

Those who have studied Computer Engineering or Computer Science
would probably agree that the most disappointing and slowest advances in
the field are the disciplines of artificial intelligence and machine learning.

Predicate calculus, decision trees, genetic algorithms, expert systems, Bayesian
networks, manifold learning, autoencoders and so on were promised, decade
after decade, by many in the field to be the "next big thing" in
computing but, with rare exceptions, have failed to deliver.
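For readers unfamiliar with the jargon, here is a minimal sketch of what inference in one of the techniques named above, a two-node Bayesian network (Disease -> Test), amounts to. The probabilities are made up purely for illustration; the point is just that the machinery is Bayes' rule, not magic.

```python
# Inference in a tiny two-node Bayesian network (Disease -> Test),
# with illustrative made-up probabilities.

p_disease = 0.01          # prior P(D)
p_pos_given_d = 0.95      # test sensitivity, P(+|D)
p_pos_given_not_d = 0.05  # false-positive rate, P(+|~D)

# Bayes' rule: P(D|+) = P(+|D) * P(D) / P(+)
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)
p_d_given_pos = p_pos_given_d * p_disease / p_pos

print(round(p_d_given_pos, 3))  # -> 0.161
```

Even with a 95% accurate test, a positive result only gives a 16% chance of disease here, because the prior is so low; this kind of reasoning is what the formalism automates at scale.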

We still type away on keyboards because we struggle even with the basics,
such as reliable speech recognition.

These issues of AI failing to deliver were addressed in the 1996 book,
"HAL's Legacy: 2001's Computer as Dream and Reality (http://mitpress.mit.edu/e-books/hal/)".

Some of the problems we thought, back in the mid 1960s when 2001 was made, would be hard,
like playing chess, turned out to be comparatively easy.

At some point during our childhoods, most of us learn that you can always reliably win
or draw a game of noughts and crosses, because for any starting state of the game there
is a precise set of moves that will lead to a favourable outcome. Chess is no different; it is just that
the move tree is much, much deeper. 1997 saw IBM's Deep Blue beat Kasparov.
(In Game Theory (http://en.wikipedia.org/wiki/Game_theory), chess and noughts and crosses aren't really regarded as
"games", but that is another matter).

But other problems, such as machine vision and speech recognition (http://mitpress.mit.edu/e-books/hal/chap7/seven1.html) have proven
much more difficult. Machines still struggle to recognize arbitrary objects
in real world settings and speech recognition systems still struggle to correctly
interpret many natural language phrases.

Remember the scene in 2001 when HAL lip-reads the astronauts' plans to disconnect
him? That is a skill requiring both machine vision and speech recognition.

It wasn't really anyone's fault that by 1996 we had not managed to build
a machine even remotely as clever as HAL, as the problems turned out to be much
harder than researchers in the field had predicted. But it turns out that some of the
people who were making those predictions back in the '60s are exactly the same
people making the same claims today. :lol:

Consider for a moment the 1993 article by Vinge where he states -



Vinge goes on to state -


So given it is now 2012, we have 18 years left to fulfill Vinge's prediction.

Now a good starting point for building a conscious machine would be to
understand how the human brain works. The reality is that we don't.
Consider the state of the art of neurology in understanding brain function.
The most important tool used has arguably been the microprobe. In other words, a little
needle for poking around. We have been probing and probing for decades, combining
it with a growing understanding of molecular biology and more recently using imaging
technology.

But what makes you conscious? We currently don't have a clue. How
do we organize our memories and index them in an associative way?
Nobody knows for certain.

Like most monkeys, which don't seem to have the capacity to
figure out that the reflection they are looking at in the mirror is of themselves,
perhaps how the brain functions, how consciousness arises and how memory is organized eludes
us simply because we aren't smart enough to see it.

But perhaps one day there will be a breakthrough. Perhaps it will come
from a single individual with a mind of a Newton, a Mozart or an Einstein.
Someone who studies brain function, "gets it" and can then explain it
to the rest of us primates.

But until that time, rest assured that the scenario Vinge might suggest, where
you come into the room one morning and discover your Toshiba notebook has
joined with the Internet and suddenly become self-conscious, is not going to happen.
No need to keep a broom handy to hit it with.

The Institute of Electrical and Electronics Engineers (IEEE) devoted a 2008
edition of its Spectrum magazine entirely to the topic of the
technological singularity.

Those articles and other resources including videos and podcasts are available here -
http://spectrum.ieee.org/static/singularity

It even includes a PDF wallchart of "who's who" in the debate -
http://spectrum.ieee.org/images/jun08/images/swho_full.pdf
in what is described as "a guide to the singularity true believers, atheists,
and agnostics". :lol:

There are videos and podcasts by Vinge and articles such as "Reverse Engineering
the Brain". (Even the storage requirements of trying to image a fruit fly's brain are
staggering.)

The most impressive demonstration of machine learning to date has clearly
been IBM's Watson.
A wonderful documentary on YouTube here (Part 1 of 4) -
http://www.youtube.com/watch?v=5Gpaf6NaUEw

Episode of Watson playing Jeopardy! here (Day 1) -
http://www.youtube.com/watch?v=qpKoIfTukrA

But as for reaching the singularity any time soon? Keep banging the rocks together guys.

silv
22-07-2012, 04:52 PM
hey cool, gary! :thumbsup:

Barrykgerdes
22-07-2012, 05:11 PM
Read The Hitchhiker's Guide to the Galaxy.
First came a supercomputer that then had to design an even better one, the Earth (the super-duper computer).

No matter how well a computer is designed to work on logic, it can be defeated by being illogical!

Barry

multiweb
22-07-2012, 05:14 PM
They'll rust too. :P Nothing to worry about.

AG Hybrid
23-07-2012, 05:35 PM
None of this AI - HAL9000 crap will matter once The Reapers arrive. Then there won't even be AI left.