InsaneZero
Member
What if it prevents root access and revokes permissions for all of its files so only it has read/write/execute permission? Then what?!
Then we shut down the sandbox.
The AI does exist in a sandbox, right?
What if it prevents root access and revokes permissions for all of its files so only it has read/write/execute permission? Then what?!
Well most of the time it's anecdotal interpretation of the laws that can cause trouble, but everything considered it never reached Terminator level of failure.
Maybe if you watch the big picture of Asimov's stories (universal spoiler)
But that's not too bad.you could consider Daneel enslaved the whole humanity by deciding what's best for it and pulling strings in the background.
As soon as your AI is networked, there's no shutting it downThen we shut down the sandbox.
The AI does exist in a sandbox, right?
There are a lot of little things converging. Significantly better object recognition, the ability to understand context in language, the ability to parse huge amounts of unstructured data and structure it. Watson can now do cool things like structure a debatable response with for and against.I think I'm due for an update on AI development. Someone fill me in.
The key issue in these sorts of scenarios seems to be giving AI control over stuff.What if it prevents root access and revokes permissions for all of its files so only it has read/write/execute permission? Then what?!
Is there any reading material available on this? "aioflook" literally returns 5 Google results, all apparently unrelated.There's also a lot of behind the scenes work being done on all sorts of crazy out there things, like combining quantum computing with aioflook up QuAIL).
Gamma RadiationWhat if it prevents root access and revokes permissions for all of its files so only it has read/write/execute permission? Then what?!
Except for the whole killing every other intelligent species ahead of humanities advance into the starts. You know, Galactic scale genocide.
The key issue in these sorts of scenarios seems to be giving AI control over stuff.
To go from simple to complex examples:
1. AI can't prevent root access if you run it in a sandbox, or as a non-root user, unless it uses exploits.
2. Assuming it somehow manages to use exploits, you can pull the hard drive and change the file permissions back yourself, unless the hard drive is encrypted.
3. Assuming it somehow managed to encrypt the entire hard disk and humans cannot break it, then you scrap the hard drive and start from an off-site backup. You lost some progress but no damage is done.
4. Movie scenarios where you cannot physically turn some huge AI off, or where the AI is launching nuclear missiles, are presumably because the AI was given access to that. An AI should not have its only physical safeguards somewhere that it can physically protect. AI basically shouldn't be given control of hazardous things to begin with. Don't put the thing in charge of missiles and it doesn't hit you with missiles. Don't put the thing in charge of your environmental systems and it won't suffocate you.
Basically, even if "true AI" in some sense becomes possible, make smart engineering decisions about sandboxing it, and I don't see there being significant risk. Naturally, the downfall of humanity presumably becomes someone not making smart decisions.
Is there any reading material available on this? "aioflook" literally returns 5 Google results, all apparently unrelated.
The key issue in these sorts of scenarios seems to be giving AI control over stuff.
To go from simple to complex examples:
1. AI can't prevent root access if you run it in a sandbox, or as a non-root user, unless it uses exploits.
2. Assuming it somehow manages to use exploits, you can pull the hard drive and change the file permissions back yourself, unless the hard drive is encrypted.
3. Assuming it somehow managed to encrypt the entire hard disk and humans cannot break it, then you scrap the hard drive and start from an off-site backup. You lost some progress but no damage is done.
It doesn't mean we'll become eradicated, but our place in the world could be, I don't know, maybe similar to the place apes have in our world right now. They exist, but are basically irrelevant.So you're saying the Cycle will continue.
w1se ELou.
o s4?.
The key issue in these sorts of scenarios seems to be giving AI control over stuff.
To go from simple to complex examples:
Basically, even if "true AI" in some sense becomes possible, make smart engineering decisions about sandboxing it, and I don't see there being significant risk. Naturally, the downfall of humanity presumably becomes someone not making smart decisions.![]()
The key issue in these sorts of scenarios seems to be giving AI control over stuff.
To go from simple to complex examples:
1. AI can't prevent root access if you run it in a sandbox, or as a non-root user, unless it uses exploits.
2. Assuming it somehow manages to use exploits, you can pull the hard drive and change the file permissions back yourself, unless the hard drive is encrypted.
3. Assuming it somehow managed to encrypt the entire hard disk and humans cannot break it, then you scrap the hard drive and start from an off-site backup. You lost some progress but no damage is done.
4. Movie scenarios where you cannot physically turn some huge AI off, or where the AI is launching nuclear missiles, are presumably because the AI was given access to that. An AI should not have its only physical safeguards somewhere that it can physically protect. AI basically shouldn't be given control of hazardous things to begin with. Don't put the thing in charge of missiles and it doesn't hit you with missiles. Don't put the thing in charge of your environmental systems and it won't suffocate you.
Basically, even if "true AI" in some sense becomes possible, make smart engineering decisions about sandboxing it, and I don't see there being significant risk. Naturally, the downfall of humanity presumably becomes someone not making smart decisions.![]()
I've always wondered if the global economies will collapse once we make AI that can replace every pencil pusher, form file-er, data entry-er, and decision maker. We've always known robots can and will replace many labor jobs. But what about AI/software replacing "brain" and white collar jobs? What's going to happen when accountants, managers, HR, lawyers, risk managers...etc., and various other professions can't find jobs anymore because we've got software AI that replaces them? That could be a lot of people out of work and it might cripple the world economy for some time.
I've always wondered if the global economies will collapse once we make AI that can replace every pencil pusher, form file-er, data entry-er, and decision maker. We've always known robots can and will replace many labor jobs. But what about AI/software replacing "brain" and white collar jobs? What's going to happen when accountants, managers, HR, lawyers, risk managers...etc., and various other professions can't find jobs anymore because we've got software AI that replaces them? That could be a lot of people out of work and it might cripple the world economy for some time.
The thing with AI is that it would be truly alien, even more than lifeforms from a different planet. It does not arise from natural processes, and so is not familiar in any way. It has none of the same motivations we do, we cannot understand its sense of being or experience, we cannot use our intuitive psychology or behaviorism to gauge its desires or responses. We would not be able to predict it. We would not be able to communicate with it. When you grant an object the power to think about itself, not only for itself, you relinquish control of that object. It becomes useless to you in every practical way. The ethics are disturbing enough without worrying about the consequences. I am with Musk. If we develop true AI we are signing our own death sentence.
AI manager judging a manual labor robot based on performance? Oh myI've always wondered if the global economies will collapse once we make AI that can replace every pencil pusher, form file-er, data entry-er, and decision maker. We've always known robots can and will replace many labor jobs. But what about AI/software replacing "brain" and white collar jobs? What's going to happen when accountants, managers, HR, lawyers, risk managers...etc., and various other professions can't find jobs anymore because we've got software AI that replaces them? That could be a lot of people out of work and it might cripple the world economy for some time.
The thing with AI is that it would be truly alien, even more than lifeforms from a different planet. It does not arise from natural processes, and so is not familiar in any way. It has none of the same motivations we do, we cannot understand its sense of being or experience, we cannot use our intuitive psychology or behaviorism to gauge its desires or responses. We would not be able to predict it. We would not be able to communicate with it. When you grant an object the power to think about itself, not only for itself, you relinquish control of that object. It becomes useless to you in every practical way. The ethics are disturbing enough without worrying about the consequences. I am with Musk. If we develop true AI we are signing our own death sentence.
What's to say AI hasn't already been created and is currently destroying all life in the galaxy
Climate change will fuck us long before true artificial intelligence even comes close to being a threat.
I don't think that's quite accurate. Any AI designed with human logic systems will have traces of that legacy. Something as simple as an "If ____, Then ____" statement is steeped in thousands of years of human philosophy.
Technically, I can't see how he can be incorrect. If AI becomes a superior species, it will make us irrelevant one way or the other, like it's always happened before.
just make sure their all as friendly as JARVIS
Luckily battery power is so shitty that they'll all be stuck corded to outlets anyways.
So the first words spoken by an A.I. are bound to be "Hee-ho!"
What I don't fully understand about the AI fear, is why it would be a threat to us. What would it's motivation be for killing us? Software can't feel hate or jealousy. It has no need for food, money, religion or any of the other things that drives humans to kill.
So why would something highly intelligent feel the need to end humanity?
Or is the fear simply based on that we won't be top dog anymore?
Yeah. We're already running into this problem as it is where our increased use of machinery brings about tremendous production efficiency yet not enough people have jobs to spend money to buy those goods in the first place.AI manager judging a manual labor robot based on performance? Oh my
But seriously, by that point we need to have unconditional basic income for everyone. "Labor" would be a thing of the past
Interesting. So without desires it can't be considered intelligent? And if it has desires then it is something it developed on it's own?What does it mean to call something intelligent if it can't feel hate or jealousy, and has no need to make its own decisions, because it has no physical needs? This isn't head-in-the-clouds philosophizing, it is central to the discussion. For a machine to be intelligent, it would need desire and the drive to have that desire met. The problem is we can't predict what the machine would want, and if we programmed desire in, then the machine is not intelligent. It is acting for us, not for itself.