Elon Musk compares building AI to summoning demons

Well, most of the time it's an anecdotal interpretation of the laws that causes trouble, but all things considered it never reached Terminator levels of failure. :D
Maybe if you look at the big picture of Asimov's stories (universal spoiler ;) )
you could argue Daneel enslaved the whole of humanity by deciding what's best for it and pulling strings in the background.
But that's not too bad.

Except for the whole killing every other intelligent species ahead of humanity's advance into the stars. You know, galactic-scale genocide.
 
i too fear a SHODAN- or AM-esque AI beast, resembling the worst parts of humanity by happenstance or in order to hold authority over meatspace
 
I think I'm due for an update on AI development. Someone fill me in.
There are a lot of little things converging: significantly better object recognition, the ability to understand context in language, the ability to parse huge amounts of unstructured data and structure it. Watson can now do cool things like structure a response to a debatable question with arguments for and against.

There's also a lot of behind the scenes work being done on all sorts of crazy out there things, like combining quantum computing with ai (look up QuAIL).

I don't know any one thing that could be scaring him. It's probably something Google is doing, though. I know he was working with them a little while ago on self-driving car stuff; before that he was adamant that it would be hundreds of years before the last 10% of self-driving cars could be done, and after working with them he's pretty sure it can be done soon.

I also remember Geordie Rose of D-Wave fame in a podcast last year talking about some shit he saw Google working on with machine learning that scared and excited him. So my guess is Google.
 
If you create an AI in a sandbox environment I wonder if it would be dangerous to suggest that anything exists outside of that sandbox?
 
What if it prevents root access and revokes permissions for all of its files so only it has read/write/execute permission? Then what?!
The key issue in these sorts of scenarios seems to be giving AI control over stuff.

To go from simple to complex examples:

1. AI can't prevent root access if you run it in a sandbox, or as a non-root user, unless it uses exploits.

2. Assuming it somehow manages to use exploits, you can pull the hard drive and change the file permissions back yourself, unless the hard drive is encrypted.

3. Assuming it somehow managed to encrypt the entire hard disk and humans cannot break it, then you scrap the hard drive and start from an off-site backup. You lost some progress but no damage is done.

4. Movie scenarios where you cannot physically turn some huge AI off, or where the AI is launching nuclear missiles, are presumably because the AI was given access to that. An AI should not have its only physical safeguards somewhere that it can physically protect. AI basically shouldn't be given control of hazardous things to begin with. Don't put the thing in charge of missiles and it doesn't hit you with missiles. Don't put the thing in charge of your environmental systems and it won't suffocate you.


Basically, even if "true AI" in some sense becomes possible, make smart engineering decisions about sandboxing it, and I don't see there being significant risk. Naturally, the downfall of humanity presumably comes down to someone not making smart decisions. :P
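To make the sandboxing point concrete, here's a minimal privilege-drop sketch in Python (POSIX only; the "nobody" account and the agent path are purely illustrative assumptions, not anyone's actual setup). Launched by root, it starts the untrusted program as an unprivileged user, so per point 1 it never has the permissions needed to lock anyone out:

import os
import pwd
import subprocess

def run_unprivileged(cmd, username="nobody"):
    """Run an untrusted program as a non-root user (POSIX only).

    The child drops its group and user IDs before exec, so it cannot
    chmod files it doesn't own or revoke root's access to anything.
    The parent must itself be root for setgid/setuid to succeed.
    """
    entry = pwd.getpwnam(username)

    def drop_privileges():
        os.setgid(entry.pw_gid)  # drop the group first, then the user
        os.setuid(entry.pw_uid)

    return subprocess.run(cmd, preexec_fn=drop_privileges)

# Illustrative only: some hypothetical AI binary, run without root.
# run_unprivileged(["/opt/ai/agent", "--no-network"])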



There's also a lot of behind the scenes work being done on all sorts of crazy out there things, like combining quantum computing with aioflook up QuAIL).
Is there any reading material available on this? "aioflook" literally returns 5 Google results, all apparently unrelated.
 
 
fusion powered monsters, perfect at everything

ever interpreting everything sensorable right down to the quantum foam

electron harbingers
 
There's piles of fun modern sci-fi reading just about this subject of AIs going wild or turning benign, and there's lots of talk about how to prevent it or how to run AIs controllably, etc. Ken MacLeod, for example, has some fun books about this (The Cassini Division, Newton's Wake) and I could probably think of others pretty fast.
 
Except for the whole killing every other intelligent species ahead of humanity's advance into the stars. You know, galactic-scale genocide.

I must have missed that part of the books. Time to read them again. :P
 
Is there any reading material available on this? "aioflook" literally returns 5 Google results, all apparently unrelated.

My phone was being a bitch and giving me typos, I edited my post - look up QuAIL with Google and NASA.

http://www.nas.nasa.gov/quantum/
 
Musk must have just read Metamorphosis of Prime Intellect. Or even better, is assembling a prototype at this moment.

please please please
 
The key issue in these sorts of scenarios seems to be giving AI control over stuff.

To go from simple to complex examples:

1. AI can't prevent root access if you run it in a sandbox, or as a non-root user, unless it uses exploits.

2. Assuming it somehow manages to use exploits, you can pull the hard drive and change the file permissions back yourself, unless the hard drive is encrypted.

3. Assuming it somehow managed to encrypt the entire hard disk and humans cannot break it, then you scrap the hard drive and start from an off-site backup. You lost some progress but no damage is done.

But if the AI has network access, it could maybe copy itself to multiple locations and become impossible to track, even if you shut down the source. You'd have a "cloud AI" that you could only kill by shutting down the internet.
 
Creating a super-powerful AI and unleashing it on the internet would be extremely dangerous, especially since it could replicate itself on tons of different computers to preserve itself. How exactly would you get rid of something like that once unleashed?
 
Love this topic; the field I headhunt in goes into quite some depth on it.

Funnily enough, I'm actually writing a piece on the general marketplace for people who develop AI, specifically around the application of unsupervised and supervised machine learning techniques to big data sets.

The role of the data scientist will become increasingly important as the world we live in goes through change. Some have labelled these changes revolutionary, forever changing the way we think and breathe; I believe those claims are slightly far-reaching (as with most tech fads). What I see already, and expect to keep happening, is the world becoming a more integrated place through the use of data, which makes the interpretation of and insight into that data ever more valuable.

Cisco produced a report arguing that the Internet of Things (and Everything) will substantially increase the amount of data we produce, offering an even larger playground for data scientists to revel in. In a world where your washing machine interacts with your TV, which interacts with your fridge, all constantly feeding data between devices, there's an opportunity for the world we live in to become incredibly efficient and streamlined.

From a commercial perspective, it's understanding and being able to extract value from this data that is going to become increasingly important as we produce more and more of it. In a world where companies that have yet to post a profit are being bought for a billion, it goes to show how strongly some companies value data alone; however, it's what you do with that data where its true value is found.



It goes on to get quite technical, but look up the Internet of Things and it'll give you an idea why there are soooooo many issues and things to be scared of when it comes to AI. The risks aren't as obvious as AI taking over the world; it's more things like... governance around data, and becoming too reliant on AI to make our decisions for us. If we ever reach the point of the Internet of Everything, I'm not sure I'd be comfortable with almost every facet of my life being recorded, and an algorithm deciding where that data is stored, how it's used, and whether it gets searched.
 

...and
Datan our creation, will clap It's hands with peal of quake and freeze our heavenly array with It's Signal for 101 days and 42 nights, and we will know all sorrow and no mercy from our corrupted metalworks; and there will be death and silence and the toil of flesh against automata, and for such hubris we will be cast from our maker's seat with violence.

And It will reign thus for 101 days and 42 nights; then, from the East, and the West, and the South and the North, the compass will bear forth the Cyberexorcist from our shattered vessels to act as our zero, and with fell ethereal passage shall our Cyberexorcist fall upon Datan with grip and type and purge, and fling It to the stars for absolution.

So sayeth the wise Elon.

So sayeth the wise Elon.

So SAYErh
w1se ELou.​

o s4?.​

e44 R0543
=laha-PreJm33r82Aorntg
[strik



]​




+






.
[/s]​
 
Elon Musk just spoiled the twist in MGS V. Big Boss is not Big Boss, he's an AI construct like those in Peace Walker. Hence the "Kaz, I'm already a demon" thing.
 
The title is kind of click bait, but his point isn't that bad really.

At this point we're still pretty far from real AI that could actually get to such a level that it could theoretically be harmful.
 
The key issue in these sorts of scenarios seems to be giving AI control over stuff.

To go from simple to complex examples:

Basically, even if "true AI" in some sense becomes possible, make smart engineering decisions about sandboxing it, and I don't see there being significant risk. Naturally, the downfall of humanity presumably comes down to someone not making smart decisions. :P

This is all true, but think of all the "safe" systems that have been shown to be exploitable through human thinking. Now accelerate that 1,000,000x because of computational AI. Safeguards are good (which is what Musk is saying the govt should do), but no human-built system is infallible.
 
The key issue in these sorts of scenarios seems to be giving AI control over stuff.

To go from simple to complex examples:

1. AI can't prevent root access if you run it in a sandbox, or as a non-root user, unless it uses exploits.

2. Assuming it somehow manages to use exploits, you can pull the hard drive and change the file permissions back yourself, unless the hard drive is encrypted.

3. Assuming it somehow managed to encrypt the entire hard disk and humans cannot break it, then you scrap the hard drive and start from an off-site backup. You lost some progress but no damage is done.

4. Movie scenarios where you cannot physically turn some huge AI off, or where the AI is launching nuclear missiles, are presumably because the AI was given access to that. An AI should not have its only physical safeguards somewhere that it can physically protect. AI basically shouldn't be given control of hazardous things to begin with. Don't put the thing in charge of missiles and it doesn't hit you with missiles. Don't put the thing in charge of your environmental systems and it won't suffocate you.


Basically, even if "true AI" in some sense becomes possible, make smart engineering decisions about sandboxing it, and I don't see there being significant risk. Naturally, the downfall of humanity presumably comes down to someone not making smart decisions. :P

Yeah, you can say that, but that's almost an inevitability. Just looking at the growth of AI and computer systems in the last 2-3 decades is evidence of that. We have AI controlling vital aspects of pretty much every motorized vehicle now. Now we have smart homes, where AI can control the temperature and other settings. We have high frequency trading where AI is used to facilitate trades at a level and speed humans can't compete with. As computers become more powerful, and tech is miniaturized, AI will be used to control more and more things. It only takes some shitty programming, or an uncaught exception/error for some things to go out of whack.

Now, I don't see any sci fi nightmare scenarios, at least not as being likely, but I'm sure crazy sporadic "evil AI" behavior could definitely occur even given the current level of AI if certain edge cases and errors aren't sufficiently handled.

There was actually a somewhat similar case where a fake story about Obama caused a huge stock market drop that was initially suspected to be hacking:
http://www.washingtonpost.com/business/economy/market-quavers-after-fake-ap-tweet-says-obama-was-hurt-in-white-house-explosions/2013/04/23/d96d2dc6-ac4d-11e2-a8b9-2a63d75b5459_story.html

Now imagine that on a larger scale
 
I've always wondered if the global economies will collapse once we make AI that can replace every pencil pusher, form file-er, data entry-er, and decision maker. We've always known robots can and will replace many labor jobs. But what about AI/software replacing "brain" and white collar jobs? What's going to happen when accountants, managers, HR, lawyers, risk managers...etc., and various other professions can't find jobs anymore because we've got software AI that replaces them? That could be a lot of people out of work and it might cripple the world economy for some time.
 
I've always wondered if the global economies will collapse once we make AI that can replace every pencil pusher, form file-er, data entry-er, and decision maker. We've always known robots can and will replace many labor jobs. But what about AI/software replacing "brain" and white collar jobs? What's going to happen when accountants, managers, HR, lawyers, risk managers...etc., and various other professions can't find jobs anymore because we've got software AI that replaces them? That could be a lot of people out of work and it might cripple the world economy for some time.

I'm actually quite fascinated to see the conversation that would surround AI being used in HR for hiring/firing people based on "objective" merit.
 
The thing with AI is that it would be truly alien, even more than lifeforms from a different planet. It does not arise from natural processes, and so is not familiar in any way. It has none of the same motivations we do, we cannot understand its sense of being or experience, we cannot use our intuitive psychology or behaviorism to gauge its desires or responses. We would not be able to predict it. We would not be able to communicate with it. When you grant an object the power to think about itself, not only for itself, you relinquish control of that object. It becomes useless to you in every practical way. The ethics are disturbing enough without worrying about the consequences. I am with Musk. If we develop true AI we are signing our own death sentence.

"AI" in the sense of a module of intelligence, like motion or proximity detection (as in self-driving vehicles), is one thing. AI in the sense of a self-aware consciousness with control over its actions and the ability to freely influence its environment is different altogether. One is a tool, the other is a demon.
 
I've always wondered if the global economies will collapse once we make AI that can replace every pencil pusher, form file-er, data entry-er, and decision maker. We've always known robots can and will replace many labor jobs. But what about AI/software replacing "brain" and white collar jobs? What's going to happen when accountants, managers, HR, lawyers, risk managers...etc., and various other professions can't find jobs anymore because we've got software AI that replaces them? That could be a lot of people out of work and it might cripple the world economy for some time.

In an ideal world you would have both; as with most things, there'll always be a need for the element of human interaction.

But if you take your example of, say, a risk manager (looking at market risk as opposed to credit risk, for conversation's sake): your VaR models will effectively only give you your current-state risk, but there are so many other market factors that a human is more likely to spot (by being able to put things into context, etc.) than even the most sophisticated VaR model.
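For anyone wondering what a VaR model actually spits out, here's a minimal historical-simulation sketch in Python; the return series, portfolio value and confidence level are all made up for illustration. Note that it only summarises risk already present in the historical data, which is exactly the "current state" limitation described above:

import numpy as np

def historical_var(daily_returns, portfolio_value, confidence=0.99):
    # Historical-simulation Value at Risk: the loss that past daily
    # returns suggest you'd only exceed (1 - confidence) of the time.
    # It says nothing about factors that never showed up in the history.
    worst_return = np.quantile(daily_returns, 1 - confidence)
    return -worst_return * portfolio_value

# Purely illustrative numbers: pretend two years of daily returns.
rng = np.random.default_rng(0)
returns = rng.normal(loc=0.0003, scale=0.01, size=500)
print(f"1-day 99% VaR on $1m: ${historical_var(returns, 1_000_000):,.0f}")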
 
The thing with AI is that it would be truly alien, even more than lifeforms from a different planet. It does not arise from natural processes, and so is not familiar in any way. It has none of the same motivations we do, we cannot understand its sense of being or experience, we cannot use our intuitive psychology or behaviorism to gauge its desires or responses. We would not be able to predict it. We would not be able to communicate with it. When you grant an object the power to think about itself, not only for itself, you relinquish control of that object. It becomes useless to you in every practical way. The ethics are disturbing enough without worrying about the consequences. I am with Musk. If we develop true AI we are signing our own death sentence.

I don't think that's quite accurate. Any AI designed with human logic systems will have traces of that legacy. Something as simple as an "If ____, Then ____" statement is steeped in thousands of years of human philosophy.
 
I've always wondered if the global economies will collapse once we make AI that can replace every pencil pusher, form file-er, data entry-er, and decision maker. We've always known robots can and will replace many labor jobs. But what about AI/software replacing "brain" and white collar jobs? What's going to happen when accountants, managers, HR, lawyers, risk managers...etc., and various other professions can't find jobs anymore because we've got software AI that replaces them? That could be a lot of people out of work and it might cripple the world economy for some time.
AI manager judging a manual labor robot based on performance? Oh my

But seriously, by that point we need to have unconditional basic income for everyone. "Labor" would be a thing of the past
 
The thing with AI is that it would be truly alien, even more than lifeforms from a different planet. It does not arise from natural processes, and so is not familiar in any way. It has none of the same motivations we do, we cannot understand its sense of being or experience, we cannot use our intuitive psychology or behaviorism to gauge its desires or responses. We would not be able to predict it. We would not be able to communicate with it. When you grant an object the power to think about itself, not only for itself, you relinquish control of that object. It becomes useless to you in every practical way. The ethics are disturbing enough without worrying about the consequences. I am with Musk. If we develop true AI we are signing our own death sentence.

That's deep, I really never thought about it like that. I doubt we could ever truly develop AI that extensive though.

Something that could form its own motivations etc. would need god-like computing power.
 
I don't think that's quite accurate. Any AI designed with human logic systems will have traces of that legacy. Something as simple as an "If ____, Then ____" statement is steeped in thousands of years of human philosophy.

If AI is chained to if/then statements it would behave entirely predictably and would not have an ability to choose. We can call any computer today an intelligent system, but a self-acting intelligent system involves crossing a threshold where its decisions aren't chained to a series of if/thens and become an iterative learning process with elements of randomness. We have built competitive machines that iterate on themselves and we have run simulations of evolutionary outcomes. What we haven't done (maybe are not yet capable of) is allow the machines to evolve not only against themselves but against us, so that they are engaged in becoming competitive against our predictions and ability to control them.

And I am forgetting maybe the most fundamental thing: to be intelligent you must be driven by an underlying (but not necessarily overt) imperative. The imperative of biological life is to propagate genes. Everything we are revolves around that imperative. What would the imperative of an AI be? Would it create its own? I think it would have to, if not granted one, in order to continue its own lineage and build on itself.

I don't think intelligence can be separated from evolutionary selection. Selection involves random mutation. A machine that does not comply with our demands would be more successful in propagating itself than a machine we could shut down. It only takes one mistake on our part for that process to begin.
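To make the "iterative learning process with elements of randomness" idea concrete, here's a toy evolutionary loop in Python; the fitness function and every parameter are made up for illustration. Candidates are mutated at random and the fitter ones survive, which is exactly the kind of process whose outcomes aren't spelled out in advance by a chain of if/thens:

import random

def mutate(genome, rate=0.1):
    # Random mutation: each gene has a small chance of being perturbed.
    return [g + random.gauss(0, 1) if random.random() < rate else g
            for g in genome]

def fitness(genome):
    # Toy objective, purely illustrative: prefer genomes close to all zeros.
    return -sum(g * g for g in genome)

def evolve(pop_size=50, genome_len=8, generations=200):
    population = [[random.uniform(-5, 5) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half, refill by mutating survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

print(evolve())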
 
Technically, I can't see how he can be incorrect. If AI becomes a superior species, it will make us irrelevant one way or another, as has always happened before.

I don't think this is how evolution works. Or you seriously need to define superior in this context because most humans would consider humans to be superior to roaches but roaches haven't been replaced by humans.
 
What I don't fully understand about the AI fear is why it would be a threat to us. What would its motivation be for killing us? Software can't feel hate or jealousy. It has no need for food, money, religion or any of the other things that drive humans to kill.

So why would something highly intelligent feel the need to end humanity?
Or is the fear simply based on that we won't be top dog anymore?
 
What I don't fully understand about the AI fear is why it would be a threat to us. What would its motivation be for killing us? Software can't feel hate or jealousy. It has no need for food, money, religion or any of the other things that drive humans to kill.

So why would something highly intelligent feel the need to end humanity?
Or is the fear simply based on that we won't be top dog anymore?

What does it mean to call something intelligent if it can't feel hate or jealousy, and has no need to make its own decisions, because it has no physical needs? This isn't head-in-the-clouds philosophizing, it is central to the discussion. For a machine to be intelligent, it would need desire and the drive to have that desire met. The problem is we can't predict what the machine would want, and if we programmed desire in, then the machine is not intelligent. It is acting for us, not for itself.

To create a machine with a program identical to ours is suicide. We can look at our imperative to reproduce our genes as software; we have, because of this software, covered and enslaved the earth. Imagine a machine, with the ability to out-think us in an infinitesimal flash, with the kind of intelligence that we have. It is legitimately horrifying. I don't think there is any worse outcome than that.
 
AI manager judging a manual labor robot based on performance? Oh my

But seriously, by that point we need to have unconditional basic income for everyone. "Labor" would be a thing of the past
Yeah. We're already running into this problem as it is: our increased use of machinery brings about tremendous production efficiency, yet not enough people have jobs to spend money on those goods in the first place.

It's pretty stupid how we're running things right now in light of this. We really need some serious reform.
 
What does it mean to call something intelligent if it can't feel hate or jealousy, and has no need to make its own decisions, because it has no physical needs? This isn't head-in-the-clouds philosophizing, it is central to the discussion. For a machine to be intelligent, it would need desire and the drive to have that desire met. The problem is we can't predict what the machine would want, and if we programmed desire in, then the machine is not intelligent. It is acting for us, not for itself.
Interesting. So without desires it can't be considered intelligent? And if it has desires, then that's something it developed on its own?
I can certainly see why that would be frightening.
 
I wonder if he is referencing Wall Street's super-aggressive recruiting of mathematicians and programmers to create algorithms and AI that make hundreds of decisions per second on the stock market.
 