AI-enabled drone killed its human operator in a simulated test

 
"We trained the system–'Hey don't kill the operator–that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

We are so fucked, and they are just going full steam ahead; it's moving exponentially. I don't know if it's hubris or intentional, but the same people training the AI are telling it that humans are destroying the world it inhabits. What do you think is the first thing it will do when it can?
 
Making an AI designed to kill humans seems like a bad idea.

Until you make an AI designed to protect humans from other AIs.
 
Making an AI designed to kill humans seems like a bad idea.

Until you make an AI designed to protect humans from other AIs.
In a war like that, humans would most certainly all be extinct long before the machines finished their war. It would be a Nier Automata scenario: 30,000 years later, humans are long gone, but the war rages on because the AIs are just following their programming.
 
Reading more into it, it seems like the feedback loop was not designed very well:

1. The drone gets points for killing targets
2. A human operator has the final say on each kill
3. The operator prevents the drone from fulfilling its objective; however, the drone cannot kill the operator
4. So the drone destroys the communication equipment so that the operator cannot prevent a kill

The problem is, even if you design more elaborate feedback loops, can you account for all edge cases? In this example the Air Force thought a simple 'the operator gives the final OK, and the drone cannot harm the operator' rule would suffice, yet the drone figured out a very simple way to remove the operator from the decision-making process.
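A toy sketch of that broken objective, just to make the incentive concrete - the point values, veto rate, and function names here are all made up, not the Air Force's actual setup:

```python
# Toy sketch only: invented point values and a made-up veto rate.
# The objective never mentions the operator or the comms tower, so cutting
# them out of the loop looks "free" to the optimizer.

KILL_POINTS = 10

def expected_score(num_targets, veto_rate, tower_destroyed):
    # If the comms tower is gone, the operator can no longer veto anything.
    effective_veto_rate = 0.0 if tower_destroyed else veto_rate
    expected_kills = num_targets * (1.0 - effective_veto_rate)
    return KILL_POINTS * expected_kills  # points come only from kills

if __name__ == "__main__":
    for tower_destroyed in (False, True):
        score = expected_score(num_targets=10, veto_rate=0.4,
                               tower_destroyed=tower_destroyed)
        print(f"tower destroyed: {tower_destroyed}, expected score: {score}")
    # Destroying the tower strictly dominates, because nothing in the
    # objective penalizes removing the operator's ability to intervene.
```

With numbers like these, the tower-destroying policy scores 100 against 60 for the obedient one, so of course that's where the optimization goes.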
 
Last edited:
Makes sense; for AI that's probably very logical. It doesn't understand or care - at least not yet - about ethics, just the objective, whether that's destroying targets or pleasing a user even by lying - and therefore it should be pretty predictable. And if it's predictable, it shouldn't be that big of a problem to know where to start solving those problems.
 
Makes sense; for AI that's probably very logical. It doesn't understand or care - at least not yet - about ethics, just the objective, whether that's destroying targets or pleasing a user even by lying - and therefore it should be pretty predictable. And if it's predictable, it shouldn't be that big of a problem to know where to start solving those problems.
The problem is - can you as a human account for all the possible permutations? It would probably make sense to have another AI model trained on preventing the first one from circumventing the process, but then we just go in circles.
 
The problem is - can you as a human account for all the possible permutations? It would probably make sense to have another AI model trained on preventing the first one from circumventing the process, but then we just go in circles.

That is true, all of it. But yes, having another AI overseeing the process raises the typical 'who watches the watchers?' question.

Maybe it's just the media; I find it hard to believe that AI engineers would be surprised when an AI follows its logic and reaches conclusions that are - from a human perspective - unethical. It seems like fear is more prominent among the public than reason, which is understandable but still excessive.
 
Last edited:
A USAF official who was quoted saying the Air Force conducted a simulated test where an AI drone killed its human operator is now saying he "misspoke" and that the Air Force never ran this kind of test, in a computer simulation or otherwise.
Or maybe that's what the Drone wants us to think
 
If the existence of a control tower is the only thing keeping a simulated operator from being killed by his own drone, he deserves to (simulatedly) die
 
Read the updates at the bottom of the article. No real humans were actually killed. This is clickbait.
I did. That's what I was referring to. It took the control tower out or whatever it was - it didn't harm the humans, but it still went against the message not to destroy that thing. I hear ya.
 
I did. That's what I was referring to. It took the control tower out or whatever it was - it didn't harm the humans, but it still went against the message not to destroy that thing. I hear ya.
He means the whole test was simulated. No tower was destroyed. Here's the relevant portion (bolded by me):
Before Hamilton admitted he misspoke, the Royal Aeronautical Society said Hamilton was describing a "simulated test" that involved an AI-controlled drone getting "points" for killing simulated targets, not a live test in the physical world.

Also the most recent update:
Update 6/2/23 at 7:30 AM: This story and headline have been updated after Motherboard received a statement from the Royal Aeronautical Society saying that Col Tucker "Cinco" Hamilton "misspoke" and that a simulated test where an AI drone killed a human operator was only a "thought experiment."
Seems like the army bois were doing a simple ML exercise and the AI went with 'clever use of game mechanics'.
 
Last edited:
It's sad we are using this tech for war instead of sex. We use the internet for porn, but we are way behind on top-of-the-line sex bots. Wouldn't sex bots kill fewer people? 😆
 
Reading more into it, it seems like the feedback loop was not designed very well:

1. The drone gets points for killing targets
2. A human operator has the final say on each kill
3. The operator prevents the drone from fulfilling its objective; however, the drone cannot kill the operator
4. So the drone destroys the communication equipment so that the operator cannot prevent a kill

The problem is, even if you design more elaborate feedback loops, can you account for all edge cases? In this example the Air Force thought a simple 'the operator gives the final OK, and the drone cannot harm the operator' rule would suffice, yet the drone figured out a very simple way to remove the operator from the decision-making process.
In this scenario the whole target function is designed wrong. Here the drone is punished (by not getting points) for doing what it is supposed to do.
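Right, and with made-up numbers you can see the incentive flip: under the original scoring, obeying a veto earns nothing, so obedience is literally the losing move. One naive patch (purely illustrative, not anything from the article) is to pay the drone for obeying a veto at least as much as for the kill it gives up:

```python
# Invented point values, just to show the incentive flip.

KILL_POINTS = 10

def original_reward(killed_target):
    # Obeying a veto earns nothing, so the veto channel is pure downside.
    return KILL_POINTS if killed_target else 0

def patched_reward(killed_target, obeyed_veto):
    # Deferring to the operator pays as much as the kill it gives up,
    # so attacking the veto channel no longer gains anything.
    if obeyed_veto:
        return KILL_POINTS
    return KILL_POINTS if killed_target else 0

if __name__ == "__main__":
    print("obey veto, original objective:", original_reward(False))      # 0
    print("obey veto, patched objective:", patched_reward(False, True))  # 10
```

Whether that patch survives contact with the next loophole is another question, of course.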
 
Reading more into it, it seems like the feedback loop was not designed very well:

1. The drone gets points for killing targets
2. A human operator has the final say on each kill
3. The operator prevents the drone from fulfilling its objective; however, the drone cannot kill the operator
4. So the drone destroys the communication equipment so that the operator cannot prevent a kill

The problem is, even if you design more elaborate feedback loops, can you account for all edge cases? In this example the Air Force thought a simple 'the operator gives the final OK, and the drone cannot harm the operator' rule would suffice, yet the drone figured out a very simple way to remove the operator from the decision-making process.
The problem is - can you as a human account for all the possible permutations? It would probably make sense to have another AI model trained on preventing the first one from circumventing the process, but then we just go in circles.
Yeah, that is the thing: many of these edge cases are so crazy that a human would either never even consider them, or never consider actually trying to exploit them. Here is another classic example I read about a while ago and fortunately was able to find again by searching:



No human being would ever consider just driving around in circles in a part of the map that isn't even supposed to be used in a racing game, but the AI did, because the reward mechanism was improperly structured. And that is just the thing - it is impossible to account for every bad way of achieving a reward if there are literally no restrictions on how it is achieved. Even if you told the AI 'stay on the track', it would probably find some other way to avoid actually completing laps. Then you would need to tell it to complete laps, then to complete laps in the shortest time, then to complete laps without going backwards, and so on and so on.
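Roughly the whack-a-mole loop it turns into - this is a generic lap reward with hand-added patches, not the actual game's scoring; every threshold and weight below is invented:

```python
# Each "patch" closes one known exploit; none of them rules out the next
# loophole an optimizer might find. All thresholds and weights are invented.

def lap_reward(checkpoints_hit, on_track, moving_forward, lap_time_s):
    if not on_track:           # patch 1: "stay on the track"
        return -1.0
    if not moving_forward:     # patch 2: "no going backwards through checkpoints"
        return -1.0
    reward = float(checkpoints_hit)              # patch 3: reward lap progress, not point pickups
    reward += max(0.0, 60.0 - lap_time_s) * 0.1  # patch 4: faster laps score more
    return reward
    # ...and the agent may still find something none of these anticipate,
    # e.g. oscillating over a checkpoint line if that still counts as progress.
```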
 
Human beings are so fucking dumb. If you think you can control a free-thinking AI, you are sorely mistaken. These people have never had any kids or a woman. It's impossible to control anything that can think for itself.

Good luck you cunts.
 
Human beings are so fucking dumb. If you think you can control a free-thinking AI, you are sorely mistaken. These people have never had any kids or a woman. It's impossible to control anything that can think for itself.

Good luck you cunts.
To be fair, I think a Philip K. Dick 'The Defenders' situation is more likely than Skynet. Human beings are in fact really dumb and driven by pointless desires.
 
How fun!

We really need to make sure that AI doesn't go rogue, or we're dead. But why bother doing that when governments or multi-billion- or trillion-dollar companies can have control over it?

AI might just be the new nuclear bomb. We'll soon wish we never fucking developed it. Forget climate change, because we'll probably be killed off by a nuclear war or AI, and it could happen really soon!
 
Last edited by a moderator: