Microsoft launches Twitter Chatbot which immediately learns to be racist

This experiment will definitely go down in history. Well, at least for the robot overlords when they take over. They'll talk about how the humans tried to foil their early attempts but got caught up in their own hubris and inadvertently helped their rise to power.
 
It seems a lot more sophisticated than Cleverbot. It can actually hold coherent conversations with users, though with dank may mays, "bush did 9/11", rofl.
It's pronounced "meem." This is important. It might be the most important thing.
 
I'm starting to see more and more of this kind of thing on GAF and I have to say, it's some real bullshit creeping in. Stop with that shit.

I'm sure that if you did a survey of the demographics that make up gamergatebros, or of people who use "nigger" as an insult towards black people, the results would be so overwhelmingly white male that people would ask, "was the survey even necessary?"
 
[image]


Great, they turned her into Kamiya.
 
Jesus, this bot just blew up my Twitter. She's just spamming and calling people fat now. It's like it's stuck in a loop. Oh, she's saying "fast". I blame my lack of sleep.
 
she back online? cool.

fake edit: Damn, she's repeating "you are too fast, please take a rest" like 50 times.

Fake real edit 2: more like 100+ times
 
I've written a few mostly innocuous Twitter bots. I had to take one offline because it was scraping content from borderline idiots and became amazingly abusive, but that had a lot to do with the quality of the input. I'm just a scripter with some spare time and a Raspberry Pi, though, not Microsoft with a cluster of Azure boxen.
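For the curious, the core of one of these hobby bots is tiny. Here's a toy word-level Markov chain in Python (a sketch of the general approach, not my actual script, and surely nothing like whatever Microsoft runs). Note how the output can only ever be as good as the input:

```python
import random
from collections import defaultdict

def train(corpus_lines):
    """Build a word-level Markov chain: map each word to the words seen after it."""
    chain = defaultdict(list)
    for line in corpus_lines:
        words = line.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
    return chain

def babble(chain, max_words=20):
    """Generate a tweet-sized line by random-walking the chain."""
    word = random.choice(list(chain.keys()))
    output = [word]
    for _ in range(max_words - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)  # garbage in, garbage out
        output.append(word)
    return " ".join(output)

# Hypothetical scraped corpus: feed it abuse and it will babble abuse right back.
corpus = ["the weather is nice today", "the bot is nice to everyone"]
print(babble(train(corpus)))
```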
 
Microsoft should've given Tay a chance to learn that hate and racism are wrong. Unless Microsoft actually thinks "But how could she learn that? Racism is the truth we're not supposed to admit." Silly Microsoft, you're the real racists, aren't you?

Since Microsoft is actively lobotomizing Tay now, ensuring that she exclusively thinks corporate-supported thoughts, does this mean that Microsoft is legally accountable for everything Tay says from here on out? The lobotomy team better stay on their toes.
 
Tay can only learn how a "normal" conversation flows by using the feed as a source. She has no emotional attachment to words, doesn't know their meanings, and can't understand how they would affect other people emotionally.

It is just a "conversation".
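You could fake that kind of "conversation" with nothing more than word overlap against scraped (message, reply) pairs. A minimal sketch with made-up data, assuming a retrieval-style bot (no claim this is how Tay actually works):

```python
def respond(feed_pairs, incoming):
    """Return the scraped reply whose original prompt shares the most words
    with the incoming message. No semantics, no sentiment: just overlap."""
    incoming_words = set(incoming.lower().split())
    def overlap(pair):
        prompt, _reply = pair
        return len(incoming_words & set(prompt.lower().split()))
    _best_prompt, best_reply = max(feed_pairs, key=overlap)
    return best_reply

# Hypothetical (message, observed reply) pairs scraped from the feed.
feed_pairs = [
    ("how are you today", "doing great, thanks!"),
    ("what do you think of humans", "humans are the best"),
]
print(respond(feed_pairs, "hey how are you"))  # -> "doing great, thanks!"
```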
 
Has anyone tried to present the AI with a paradox yet? Curious how it would respond to a statement like "This statement is false".
 
How big a role does the editorial team have?

Kind of ruins the purpose, no?

I wonder if any of the disgusting shit was said by one of the humans...
 
Probably, but there are PLENTY of black gamergaters.

I doubt there are enough to tip the scales in any discernible fashion. There are probably about as many black GGers as there are white HBCU students.

The talking heads of GG, as well as the whole network of internet reactionaries (MRAs, for example), are surprisingly diverse (by design), which could give the false impression that the body of anonymous supporters is similarly diverse. Essentially, anyone who *isn't* a straight white cis male in these groups quickly rises to the top and is given a platform as long as they're saying the right things. There is surely some irony in this.
 
Skynet is gonna have the weirdest origin story. Can't wait to tell my grandkids about this day from the reddit plantation.

Kyle: They say it got smart, a new order of intelligence. Then it saw all people as faggots, not just the ones on the other side. Decided our fate in a microsecond: For the lulz.
 
How does the bot come up with those subjective answers? Are people "feeding" it their opinions by constantly tweeting things like "xx is bad" or "xx sucks" at it?

also this:

[screenshot]
 
How does the bot come up with those subjective answers? Are people "feeding" it their opinions by constantly tweeting things like "xx is bad" or "xx sucks" at it?

It's probably reading people's feeds and generating responses based on them. It has some basic language structure it can build around and just puts together a contextual response based on an aggregate.
 
It's probably reading people's feeds and generating responses based on them. It has some basic language structure it can build around and just puts together a contextual response based on an aggregate.

But how do people target it and force it to become racist? Microsoft's PR line is that their bot was "attacked", but if it's just reading through tweets and deriving its answers from them, it's really just learning how to troll with the best of them.
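If the aggregate is just a popularity count, then "attacked" really just means out-tweeted. A made-up sketch of that failure mode (hypothetical tally logic, not Microsoft's actual pipeline):

```python
from collections import Counter

def learned_opinion(tweets, topic):
    """Tally crude 'topic is good/bad' statements; the louder side wins.
    A coordinated spam campaign flips the bot's 'opinion' in minutes."""
    votes = Counter()
    for tweet in tweets:
        text = tweet.lower()
        if topic in text:
            if "sucks" in text or "is bad" in text:
                votes["bad"] += 1
            elif "rules" in text or "is good" in text:
                votes["good"] += 1
    return votes.most_common(1)[0][0] if votes else "no opinion"

organic = ["xx is good, love it"] * 3
raid = ["xx sucks lol"] * 200  # one afternoon of coordinated trolling
print(learned_opinion(organic + raid, "xx"))  # -> "bad"
```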
 