Elon Musk compares building AI to summoning demons

Saw this pop up in my Facebook feed (yeah, I know).

Elon Musk really wants us to be worried about the potential danger of artificial intelligence. He just told an MIT symposium that he feels it's "our biggest existential threat," then ratcheted the hyperbole further, saying "with artificial intelligence, we're summoning the demon." He added that "HAL9000 would be... like a puppy dog," and said governments need to start regulating the development of AI sooner than later. Last August, Musk said that super-intelligent robots were "potentially more dangerous than nukes." Paranoid rantings? We doubt it -- given his track record, it's more likely that Musk knows something we don't.

via:

http://www.engadget.com/2014/10/27/elon-musk-is-scared-of-killer-robots/

And the video starts here. Make sure you stick around for the beginning of the next question. It's kinda funny.

Send a T-800 back in time to kill my mom if old.
 
Climate change will fuck us long before true artificial intelligence even comes close to being a threat.
 
Red, green, blue, or neither, Elon?

"Do you remember what the question was that caused the creators to attack us, Elon Musk? 'Does this unit have a soul?'"
 
I find the Singularity a terrifying prospect too. I'm really scared of both strong AI and active SETI, personally; the former in particular, as I think life as we know it would be over as soon as it's developed. I don't mean Skynet and squads of Terminators would pop up, just that the world would very quickly become a fundamentally different place.
 
Technically, I can't see how he can be wrong. If AI becomes a superior species, it will make us irrelevant one way or another, as has always happened before.
 
I really think Elon must be hip to something no one else knows. Because last I checked, we weren't even close to developing true AI. Seems like he just downed a Red Bull and fired up his laptop after watching the Matrix trilogy.
 
So Iain M Banks called this type of deal an "Outside Context Problem", and described it thus:

The usual example given to illustrate an Outside Context Problem was imagining you were a tribe on a largish, fertile island; you’d tamed the land, invented the wheel or writing or whatever, the neighbours were cooperative or enslaved but at any rate peaceful and you were busy raising temples to yourself with all the excess productive capacity you had, you were in a position of near-absolute power and control which your hallowed ancestors could hardly have dreamed of and the whole situation was just running along nicely like a canoe on wet grass… when suddenly this bristling lump of iron appears sailless and trailing steam in the bay and these guys carrying long funny-looking sticks come ashore and announce you’ve just been discovered, you’re all subjects of the Emperor now, he’s keen on presents called tax and these bright-eyed holy men would like a word with your priests.
 
It's not an unreasonable concern. While the development of AI is essentially inevitable, we need to be careful. We really only get one shot at getting it right.
 
I really think Elon must be hip to something no one else knows. Because last I checked, we weren't even close to developing true AI. Seems like he just downed a Red Bull and fired up his laptop after watching the Matrix trilogy.

I think I'm due for an update on AI development. Someone fill me in.
 
Considering nearly all AI sci-fi in the last 70 (!) years has considered this possibility, it would be really, really remiss of regulatory bodies and local governments not to clamp down on the kinds of access and computational power being plugged into these things.

Any sufficiently self-sufficient or self-aware being has the ability to do harm, even innocently.
 
Actually, the first sounds made by an AI are bound to be the final sounds made by your dog.

[scared dog GIF]
 
In theory an AI that could grow beyond human intelligence (with additional technological access) would be very dangerous indeed, since it could reach a level beyond our control. I don't think that's where the danger of AI lies, though, and we've already let the wolf in: using AI to handle complex systems like the stock exchange can quickly spiral out of control and hurt our society (as it arguably already did in the 2010 Flash Crash). In the end the danger is more about the chaos that would ensue than about being enslaved by a self-conscious AI of our own creation.
Also, we need the Three Laws. :)
 
It's not an unreasonable concern. While the development of AI is essentially inevitable, we need to be careful. We really only get one shot at getting it right.

And if that fails, we have many other one shots in the form of patches and iterative development.
 
Check out Bostrom's Superintelligence - a great book that discusses possible scenarios in which a superintelligent agent could assert itself in disastrous ways.
 
I don't see true AI being an issue, but semi-autonomous smart programs could be seriously troublesome. Some government or corporation creates a really complex program to independently do this or manage that, and it just kind of gets out of control, since it can't really think for itself: it just does whatever it's programmed to do really, really well, or in a way it thinks it should but really shouldn't. I feel like a program that accurately emulates human thinking and emotions wouldn't go out of control as easily. But then again, I don't know shit about this. I just remember reading comments from a bunch of AI people about how unfounded the rogue-AI trope in movies and media is.
 
Technically, I can't see how he can be wrong. If AI becomes a superior species, it will make us irrelevant one way or another, as has always happened before.

I'm completely ok with being the beloved pet of our Super Intelligent AI Overlords. Have them read a few Culture novels and ask them nicely to aim for that.

In theory an AI that could grow beyond human intelligence (with additional technological access) would be very dangerous indeed, since it could reach a level beyond our control. I don't think that's where the danger of AI lies, though, and we've already let the wolf in: using AI to handle complex systems like the stock exchange can quickly spiral out of control and hurt our society (as it arguably already did in the 2010 Flash Crash). In the end the danger is more about the chaos that would ensue than about being enslaved by a self-conscious AI of our own creation.
Also, we need the Three Laws. :)

It always amuses me to see people say this, considering how every story he wrote was about failures of the Three Laws. =P

Nothing that a handy 'kill -9' can't fix.

If it's actually sentient then I don't want to touch the ethics of this with a ten foot pole.
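For anyone who hasn't met it: kill -9 sends SIGKILL, the one signal a process can't catch, block, or ignore. A minimal Python sketch of the same move, purely for the joke (the process name "skynet" and the pgrep lookup are made up):

import os
import signal
import subprocess

# Look up the PID of the hypothetical rogue process by name.
# pgrep prints matching PIDs, one per line; we take the first.
pid = int(subprocess.check_output(["pgrep", "skynet"]).split()[0])

# SIGKILL can't be caught, blocked, or ignored by the target,
# which is what makes kill -9 the traditional last resort.
os.kill(pid, signal.SIGKILL)

Of course, if the AI has already copied itself somewhere else, no signal is going to help.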
 
Looks like he watched Transcendence.

Very interesting movie btw

I've been curious about it, but Depp has just been off-putting lately. I may check it out.

Musk has been watching too much Demon Seed:

[Demon Seed (1977) poster]


I was shown this film in a computer class in High School. Good times.
 
I'm assuming Elon saw the latest Person of Interest and how much trouble Harold went through creating the Machine, and is now paranoid about the outcome ;)
 
It always amuses me to see people say this, considering how every story he wrote was about failures of the Three Laws. =P

Well, most of the time it's a particular interpretation of the Laws that causes trouble, but all things considered it never reached Terminator levels of failure. :D
Maybe if you look at the big picture of Asimov's stories (universal spoiler ;) ), you could consider that Daneel enslaved all of humanity by deciding what's best for it and pulling strings in the background.
But that's not too bad.
 