Facebook reveals its censorship guidelines for the first time — 27 pages of them

Facebook's updated community standards forbid making credible threats, promoting terrorism and selling firearms.
(Richard Drew / Associated Press)

Among the most challenging issues for Facebook Inc. is its role as the policeman of free expression for its 2 billion users.

Now the social network is opening up about how it decides which posts to take down, and why. On Tuesday the company for the first time published the 27 pages of guidelines, known as Community Standards, that it gives to its workforce of thousands of human censors. The document covers dozens of topics including hate speech, violent imagery, misrepresentation, terrorist propaganda and disinformation. Facebook also said it would give users the opportunity to appeal its decisions.

The move adds a new degree of transparency to a process that users, the public and advocates have criticized as arbitrary and opaque. The newly released guidelines offer suggestions on various topics, including how to determine the difference between humor, sarcasm and hate speech. They explain that images of female nipples are generally prohibited, but exceptions are made for images that promote breastfeeding or address breast cancer.

“We want people to know our standards, and we want to give people clarity,” Monika Bickert, Facebook’s head of global policy management, said in an interview. She added that she hoped publishing the guidelines would spark dialogue. “We are trying to strike the line between safety and giving people the ability to really express themselves.”

The company’s censors, called content moderators, have been chastised by civil rights groups for mistakenly removing posts by minorities who had shared stories of being the victims of racial slurs. Moderators have struggled to tell the difference between someone posting a slur as an attack and someone using the slur to tell the story of their own victimization.

In another instance, moderators removed an iconic Vietnam War photo of a child fleeing a napalm attack, claiming the girl’s nudity violated its policies. (The photo was restored after protests from news organizations.) Moderators have deleted posts from activists and journalists in Myanmar and in disputed areas such as the Palestinian territories and Kashmir, and have banned the pro-Trump activists Diamond and Silk as “unsafe to the community.”

The release of the guidelines is part of a wave of transparency that Facebook hopes will quell its many critics. It has also moved to make political advertising more transparent and has streamlined its privacy controls after coming under fire for its lax approach to protecting consumer data.

The company is being investigated by the Federal Trade Commission over the misuse of data by Cambridge Analytica, a consultancy linked to President Trump. Facebook Chief Executive Mark Zuckerberg recently testified before Congress about the issue. Bickert said discussions about sharing the guidelines started last fall and were not related to the Cambridge controversy.

Facebook’s content policies, which began in earnest in 2005, addressed nudity and Holocaust denial in the early years. They have ballooned from a single page in 2008 to 27 pages today.

As Facebook has come to reach nearly a third of the world's population, Bickert's team has expanded significantly, and it is expected to grow even more in the coming year. A far-flung team of 7,500 reviewers in places such as Austin, Texas; Dublin, Ireland; and the Philippines assesses posts 24 hours a day, seven days a week, in more than 40 languages. Moderators are sometimes temporary contract workers without much cultural familiarity with the content they are judging, and they must make complex decisions in applying Facebook's rules.

Bickert’s content review team also employs high-level experts including a human rights lawyer, a rape counselor, a counter-terrorism expert from West Point and a researcher with a doctorate who has expertise in European extremist organizations.

Activists and users have been particularly frustrated by the absence of an appeals process when their posts are taken down. (Facebook users are allowed to appeal the shutdown of an entire account, but not of individual posts.) People have likened this predicament to being put into “Facebook jail” without being given a reason they were locked up.

Zahra Billoo, executive director of the Council on American-Islamic Relations’ office for the San Francisco Bay Area, said adding an appeals process and opening up guidelines would be a “positive development” but said the social network still has a ways to go if it wants to stay a relevant and safe space.

Billoo said that at least a dozen pages representing white supremacists are still up on the platform, even though the policies forbid hate speech and Zuckerberg testified before Congress this month that Facebook does not allow hate groups.

“An ongoing question many of the Muslim community have been asking is how to get Facebook to be better at protecting users from hate speech and not to be hijacked by white supremacists, right-wing activists, Republicans or the Russians as a means of organizing against Muslim, LGBT and undocumented individuals,” she said.

Billoo herself was censored by Facebook two weeks after Trump’s election, when she posted an image of a handwritten letter mailed to a San Jose mosque and quoted from it: “He’s going to do to you Muslims what Hitler did to the Jews.”

Bickert’s team has been working for years to develop a software system that can classify the reasons a post was taken down so that users could receive clearer information — and so Facebook could track how many hate speech posts were put up in a given year, for example, or whether certain groups are having their posts taken down more frequently than others.

People whose posts are taken down receive a generic message that says that they have violated Facebook’s community standards. After Tuesday’s announcement, people will be told whether their posts violated guidelines on nudity, hate speech and graphic violence. A Facebook executive said the teams were working on building more tools. “We do want to provide more details and information for why content has been removed,” said Ellen Silver, Facebook’s vice president of community operations. “We have more work to do there and we are committed to making those improvements.”
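
Facebook has not described how that classification layer is built. As a purely illustrative sketch, assuming each removal is tagged with one of the categories named in Tuesday's announcement (the category values and message text below are hypothetical), the more specific notice might be generated along these lines:

```python
from enum import Enum


class RemovalReason(Enum):
    # Hypothetical categories mirroring those named in the announcement
    NUDITY = "adult nudity"
    HATE_SPEECH = "hate speech"
    GRAPHIC_VIOLENCE = "graphic violence"
    OTHER = "our Community Standards"


def removal_notice(reason: RemovalReason) -> str:
    """Build a user-facing takedown message.

    Tagging each removal with a specific reason is what allows the notice
    to name the policy that was applied rather than a generic warning.
    """
    return (
        f"Your post was removed because it violates our policy on {reason.value}. "
        "You can request a review of this decision."
    )


print(removal_notice(RemovalReason.HATE_SPEECH))
```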

Facebook’s content moderation is still very much driven by humans, but the company also uses technology to assist in its work. It uses software to identify duplicate reports, a time-saving technique for reviewers that helps them avoid reviewing the same piece of content over and over because it was flagged by many people at once. Software also can identify the language of a post and some of the themes, helping the post get to the reviewer with the most expertise.

The company's systems can recognize images and videos that have been posted before, but they cannot judge content they have never seen. For example, if a terrorist organization reposts a beheading video that Facebook already took down, Facebook's systems will notice it almost immediately, Silver said, but they cannot identify a new beheading video on their own. The majority of items flagged by the community get reviewed within 24 hours, she said.
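
Silver did not say how that matching works. Systems of this kind typically rely on perceptual fingerprints that survive re-encoding; the sketch below substitutes an exact SHA-256 digest simply to stay self-contained, and everything in it is assumption rather than Facebook's actual implementation:

```python
import hashlib

known_removed = set()  # fingerprints of content already taken down


def fingerprint(media: bytes) -> str:
    # A real system would use a perceptual hash; SHA-256 only matches exact copies.
    return hashlib.sha256(media).hexdigest()


def check_upload(media: bytes) -> str:
    """Block re-uploads of known removed content; route everything else to humans.

    Nothing here judges what the media actually shows, which is why newly
    filmed footage of the same kind would slip past this check.
    """
    if fingerprint(media) in known_removed:
        return "block"
    return "send_to_human_review"


known_removed.add(fingerprint(b"previously removed video bytes"))
print(check_upload(b"previously removed video bytes"))  # block
print(check_upload(b"never-seen video bytes"))          # send_to_human_review
```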

Every two weeks, the employees and senior executives who make decisions about the most challenging content issues meet to debate the pros and cons of potential policies. Teams that present are required to bring research laying out each side, a list of possible solutions and a recommendation, and to list the organizations outside Facebook they consulted.

Bickert and Silver acknowledged that Facebook will continue to make errors in judgment. “The scale that we operate at,” said Silver, “even if we’re at 99% accuracy, that’s still a lot of mistakes.”
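
Facebook has not disclosed how many moderation decisions its reviewers make, but a back-of-envelope calculation with a purely hypothetical volume shows why Silver's point holds:

```python
# Illustrative only: Facebook has not published its daily review volume.
daily_decisions = 1_000_000   # hypothetical number of moderation calls per day
accuracy = 0.99

mistakes = daily_decisions * (1 - accuracy)
print(f"{mistakes:,.0f} wrong calls per day at {accuracy:.0%} accuracy")
# -> 10,000 wrong calls per day at 99% accuracy
```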


UPDATES:

7:25 a.m.: This article was updated throughout with additional details and comments.

This article was originally published at 4:10 a.m.
