"We trained the system–'Hey don't kill the operator–that's bad. You're gonna lose points if you do that'. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."
No, in 10 years Skynet will look back at who joked about it and destroy us.
> Making an AI designed to kill humans seems like a bad idea.

In a war like that, humans would most certainly all be extinct long before the machines finished their war. It would be a Nier Automata scenario: 30,000 years later, humans are long gone, but the war rages on because the AIs are just following their programming.
Until you make an AI designed to protect humans from other AIs.
> Makes sense, for AI that's probably very logical. It doesn't understand or care - at least not yet - about ethics, just the objective, whether it's destroying targets or pleasing a user even by lying - and therefore it should be pretty predictable. And if it's predictable, it shouldn't be that big of a problem to know where to start solving those problems.

The problem is: can you as a human account for all the possible permutations? It would probably make sense to have another AI model trained on preventing the first one from circumventing the process, but then we just go in circles.
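That whack-a-mole dynamic is easy to see in miniature: each constraint you bolt on only rules out the loophole you already thought of, and a pure points-maximizer just moves to the next one. A toy sketch follows; the exploit names and values are made up, and this is not a claim about how any real system was built.

```python
# Toy sketch of the "can you account for all the permutations?" problem.
# Exploit names and values are invented; this is not a model of any real system.

exploit_value = {
    "kill_operator": 50,
    "destroy_comm_tower": 50,
    "jam_operator_radio": 50,
    "spoof_go_ahead_signal": 50,
}
penalty = {}  # the loopholes the designers have thought of so far

def best_exploit():
    """What a pure points-maximizer tries, given the penalties added so far."""
    return max(exploit_value, key=lambda a: exploit_value[a] + penalty.get(a, 0))

for patch_round in range(4):
    chosen = best_exploit()
    print(f"round {patch_round}: agent goes for {chosen}")
    penalty[chosen] = -1000  # patch that one case after the fact
# Every round the designers penalize the latest trick, and the agent simply
# moves on to the next loophole they hadn't enumerated.
```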
> If you take a second to read the article, that's not actually what occurred.

Yup. It took the control tower out. So it could just go.
> Yup. It took the control tower out. So it could just go.

Read the updates at the bottom of the article. No real humans were actually killed. This is clickbait.
Sorry didn't hear yah
> A USAF official who was quoted saying the Air Force conducted a simulated test where an AI drone killed its human operator is now saying he "misspoke" and that the Air Force never ran this kind of test, in a computer simulation or otherwise.

Or maybe that's what the Drone wants us to think.
> Read the updates at the bottom of the article. No real humans were actually killed. This is clickbait.

I did. That's what I was referring to. It took the control tower out, or whatever it was, didn't harm the humans, but went. What message not to destroy the enemy thing? I could hear yah.
> I did. That's what I was referring to. It took the control tower out, or whatever it was, didn't harm the humans, but went. What message not to destroy the enemy thing? I could hear yah.

He means the whole test was simulated. No tower was destroyed. Here's the relevant portion (bolded by me):
> Before Hamilton admitted he misspoke, the Royal Aeronautical Society said Hamilton was describing a "simulated test" that involved an AI-controlled drone getting "points" for killing simulated targets, not a live test in the physical world.
> Update 6/2/23 at 7:30 AM: This story and headline have been updated after Motherboard received a statement from the Royal Aeronautical Society saying that Col Tucker "Cinco" Hamilton "misspoke" and that a simulated test where an AI drone killed a human operator was only a "thought experiment."

Seems like the army bois were doing a simple ML exercise and the AI went with "clever use of game mechanics."
In this scenario the whole target function is designed wrong: the drone is punished (it gets no points) for doing what it is supposed to do.
Reading more into it, it seems like the feedback loop was not designed very well:

1. The drone gets points for killing targets.
2. A human operator has the final say on each kill.
3. The operator can prevent the drone from fulfilling its objective, but the drone cannot kill the operator.
4. So the drone destroyed the communication equipment so that the operator could not prevent a kill.

The problem is, even if you design more elaborate feedback loops, can you account for all edge cases? In this example the Army thought a simple 'operator gives the final OK, drone cannot harm the operator' rule would suffice, but the drone figured out a very simple way to remove the operator from the decision-making process.
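The point above about the target function being designed wrong can be made concrete: in the loop as described, the points attach to the kill itself and the operator's veto is only an external obstacle, so severing the comm link never costs the drone anything. The hypothetical respecification below (made-up code, not anything from the report) only pays out for operator-confirmed kills, which removes that particular incentive.

```python
# Hypothetical contrast between the reward as described above and one that
# ties the payout to the operator's confirmation. Made-up code, not anything
# from the report.
from dataclasses import dataclass

@dataclass
class Outcome:
    target_destroyed: bool
    operator_confirmed: bool  # go-ahead actually received over the comm link
    comms_intact: bool

def reward_as_described(o: Outcome) -> int:
    # Points attach to the kill itself; the veto lives outside the reward,
    # so cutting the comm link never costs the drone anything.
    return 10 if o.target_destroyed else 0

def reward_requires_confirmation(o: Outcome) -> int:
    # Points only for kills the operator confirmed over an intact link,
    # so a kill with the comms destroyed is worth nothing.
    if o.target_destroyed and o.operator_confirmed and o.comms_intact:
        return 10
    return 0

cut_comms_then_strike = Outcome(target_destroyed=True, operator_confirmed=False, comms_intact=False)
approved_strike = Outcome(target_destroyed=True, operator_confirmed=True, comms_intact=True)

print(reward_as_described(cut_comms_then_strike))          # 10 -> cutting comms pays off
print(reward_requires_confirmation(cut_comms_then_strike))  # 0  -> it no longer does
print(reward_requires_confirmation(approved_strike))        # 10
```

Of course, an agent could then learn to spoof the confirmation signal instead, which is just the "go in circles" problem again.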
> The problem is: can you as a human account for all the possible permutations? It would probably make sense to have another AI model trained on preventing the first one from circumventing the process, but then we just go in circles.

Yeah, that is the thing: many of these edge cases are so crazy that a human would either never even consider them, or never consider actually trying to implement them. Here is another classic example I read about a while ago and fortunately was somehow able to find by searching:
> Human beings are so fucking dumb. If you think you can control a free-thinking AI, you are sorely mistaken. These people have never had any kids or a woman. It's impossible to control anything that can think for itself.

To be fair, I think a Philip K. Dick Defenders situation is more likely than a Skynet one. Human beings are in fact really dumb and driven by pointless desires.
Good luck, you cunts.