Morrigan Stark
Warning: this is a really long article, but very in-depth and very interesting. It shows how moderation has evolved over the last decade or so with the explosion of social media and user-generated content websites, and how much power (even political power) it has to shape free speech.
http://www.theverge.com/2016/4/13/1...outube-facebook-reddit-censorship-free-speech
Some snippets, but the whole piece is well worth reading.
Moderate me if old.
Mora-Blanco sat next to Misty Ewing-Davis, who, having been on the job a few months, counted as an old hand. On the table before them was a single piece of paper, folded in half to show a bullet-point list of instructions: Remove videos of animal abuse. Remove videos showing blood. Remove visible nudity. Remove pornography. Mora-Blanco recalls her teammates were a "mish-mash" of men and women; gay and straight; slightly tipped toward white, but also Indian, African-American, and Filipino. Most of them were friends, friends of friends, or family. They talked and made jokes, trying to make sense of the rules. "You have to find humor," she remembers. "Otherwise it’s just painful."
Videos arrived on their screens in a never-ending queue. After watching a couple of seconds of each, SQUAD members clicked one of four buttons in the upper right-hand corner of their screens: "Approve" — let the video stand; "Racy" — mark the video as 18-plus; "Reject" — remove the video without penalty; "Strike" — remove the video with a penalty to the account. Click, click, click. But that day Mora-Blanco came across something that stopped her in her tracks.
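Purely as illustration, that four-button flow amounts to a tiny decision model. Here is a minimal sketch in Python; all of the names and the penalty mechanics below are my own assumptions, since the article describes only the buttons and their effects:

```python
# Hypothetical sketch of the four-action review queue described above.
# Class names, fields, and strike handling are assumptions, not YouTube's code.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    APPROVE = "approve"  # let the video stand
    RACY = "racy"        # mark the video as 18-plus
    REJECT = "reject"    # remove the video, no penalty to the account
    STRIKE = "strike"    # remove the video, with a penalty to the account


@dataclass
class Video:
    video_id: str
    visible: bool = True
    age_restricted: bool = False


@dataclass
class Account:
    account_id: str
    strikes: int = 0


def review(video: Video, uploader: Account, action: Action) -> None:
    """Apply a moderator's decision to the next video in the queue."""
    if action is Action.APPROVE:
        return  # the video stands as-is
    if action is Action.RACY:
        video.age_restricted = True
    elif action is Action.REJECT:
        video.visible = False
    elif action is Action.STRIKE:
        video.visible = False
        uploader.strikes += 1  # strikes accumulate against the account
```

The notable distinction in the tool, as described, is that "Reject" affects only the video, while "Strike" also counts against the uploader's account, which is what makes it the severe option.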
"Oh, God," she said.
Mora-Blanco won’t describe what she saw that morning. For everyone’s sake, she says, she won’t conjure the staggeringly violent images which, she recalls, involved a toddler and a dimly lit hotel room.
[...]
Okay. This is what you’re doing, Mora-Blanco remembers thinking as they paced up and down the street. You’re going to be seeing bad stuff.
Almost a decade later, the video and the child in it still haunt her. "In the back of my head, of all the images, I still see that one," she said when we spoke recently. "I really didn’t have a job description to review or a full understanding of what I’d be doing."
[...]
Mora-Blanco is one of more than a dozen current and former employees and contractors of major internet platforms, from YouTube to Facebook, who spoke to us candidly about the dawn of content moderation. Many of these individuals are going public with their experiences for the first time. Their stories reveal how the boundaries of free speech were drawn during a period of explosive growth for a high-stakes public domain, one that did not exist for most of human history. As law professor Jeffrey Rosen first said of Facebook many years ago, these platforms have "more power in determining who can speak and who can be heard around the globe than any Supreme Court justice, any king or any president."
[...]
In the summer of 2009, Iranian protesters poured into the streets, disputing the presidential victory of Mahmoud Ahmadinejad. Dubbed the Green Movement, it was one of the most significant political events in the country’s post-Revolutionary history. Mora-Blanco, soon to become a senior content specialist, and her team — now dubbed Policy and more than two dozen strong — monitored the many protest clips being uploaded to YouTube.
On June 20th, the team was confronted with a video depicting the death of a young woman named Neda Agha-Soltan. The 26-year-old had been struck in the chest by a single bullet, reportedly fired by pro-government forces, during the demonstrations, and a shaky cell-phone video captured her horrific last moments: in it, blood pours from her eyes, pooling beneath her.
Within hours of the video’s upload, it became a focal point for Mora-Blanco and her team. As she recalls, the guidelines they’d developed offered no clear directives on what constituted newsworthiness or, in essence, ethical journalism involving graphic content and the depiction of death. But she knew the video had political significance, and she was aware that her team’s decision would help determine its reach.
Mora-Blanco and her colleagues ultimately agreed to keep the video up. It was fueling important conversations about free speech and human rights on a global scale and was quickly turning into a viral symbol of the movement. It had tremendous political power.
[...]
A prevailing narrative, as one story in The Atlantic put it, is that the current system of content moderation is "broken." For users who’ve been harmed by online content, it is difficult to argue that "broken" isn’t exactly the right word. But something must be whole before it can fall apart. Interviews with dozens of industry experts and insiders over 18 months revealed that moderation practices with global ramifications have been marginalized within major firms, undercapitalized, or even ignored. To an alarming degree, the early seat-of-the-pants approach to moderation policy persists today, hidden by an industry that largely refuses to participate in substantive public conversations or respond in detail to media inquiries.
In an October 2014 Wired story, Adrian Chen documented the work of front-line moderators operating in modern-day sweatshops. In Manila, Chen witnessed a secret "army of workers employed to soak up the worst of humanity in order to protect the rest of us." Journalists and researchers have compared their work to garbage collection, but the work they perform is critical to preserving any sense of decency and safety online, and literally saves lives — often those of children. For front-line moderators, these jobs can be crippling.
[...]
In the earliest "information wants to be free" days of the internet, objectives were lofty. Online access was supposed to unleash positive and creative human potential, not provide a venue for sadists, child molesters, rapists, or racial supremacists. Yet this radically free internet quickly became a terrifying home to heinous content and the users who posted and consumed it.
[...]
Brian Pontarelli, CEO of the moderation software company Inversoft, makes a similar observation. Many companies, he told us, will not invest in robust moderation until not doing so costs them money. "They sort of look at that as like, that’s hard, and it’s going to cost me a lot of money, and it’s going to require a lot of work, and I don’t really care unless it causes me to lose money," he said. "Until that point, they can say to themselves that it’s not hurting their revenue, people are still spending money with us, so why should we be doing it?"
[...]
Despite the site’s size and influence — attracting some 4 to 5 million page views a day — Reddit has a full-time staff of only around 75 people, leaving Redditors to largely police themselves, following a "reddiquette" post that outlines what constitutes acceptable behavior. Leaving users almost entirely to their own devices has translated into years of high-profile catastrophes involving virtually every form of objectionable content — including entire toxic subreddits such as /r/jailbait, /r/creepshots, /r/teen_girls, /r/fatpeoplehate, /r/coontown, /r/niggerjailbait, /r/picsofdeadjailbait, and a whole category for anti-black Reddits called the "Chimpire," which flourished on the platform.
After the results of a user survey were published in March 2015, the company announced, "we are seeing our open policies stifling free expression; people avoid participating for fear of their personal and family safety."
[...]
The sharp contrast between Facebook, with its robust and long-standing Safety Advisory Board, and Reddit, with its skeletal staff and dark pools of offensive content, offers a vivid illustration of how content moderation has evolved in isolated ways within individual corporate enclaves. That fragmentation means content banned on one platform can simply pop up on another, and that trolling can be coordinated so that harassment which appears minor on any single platform is amplified by appearing simultaneously across several.
[...]
A writer who goes by Erica Munnings, and who asked that we not use her real name for fear of retaliation, found herself on the receiving end of one such attack, which she describes as a "high-consequence game of whack-a-mole across multiple social media platforms for days and weeks." After she wrote a feminist article that drew a conservative backlash, a five-day "Twitter-flogging" ensued. From there, the attacks moved to Facebook, YouTube, Reddit, and 4chan. Self-appointed task forces of Reddit and 4chan users published her address and flooded her professional organization with emails demanding that her professional license be rescinded. She shut down comments on her YouTube videos. She logged off Twitter. On Facebook, the harassment was debilitating. To separate her personal and professional lives, she had set up a separate Facebook page for her business. But user controls on such pages are thin, and her attackers found their way in.
"Policies like this open the floodgates of internet hate and tied my hands behind my back. There was no way I could report each and every attack across multiple social media platforms because they came at me so fast and in such high volume. But also, it became clear to me that when I did report, no one responded, so there really was no incentive to keep reporting. That became yet another costly time-sink on top of deleting comments, blocking people, and screen-grabbing everything for my own protection. Because no one would help me, I felt I had no choice but to wait it out, which cost me business, and income."