
Facebook says 1 in 1,000 content views includes hate speech

SAN FRANCISCO: Facebook Inc on Thursday revealed for the first time figures on the prevalence of hate speech on its platform, saying that out of every 10,000 content views in the third quarter, 10 to 11 included hate speech.

The world’s largest social media company, which is under scrutiny over how it polices abuses, particularly around November’s US presidential election, released the estimate in its quarterly content moderation report.

Facebook said it took action on 22.1 million pieces of hate speech content in the third quarter, about 95% of which were pre-emptively identified, compared to 22.5 million the previous quarter.

The company defines “taking action” as removing content, covering it with a warning, disabling accounts, or escalating it to outside agencies.

This summer, civil rights groups organized a massive advertising boycott in an effort to pressure Facebook to act against hate speech.

The company agreed to disclose the hate speech metric, calculated by examining a representative sample of content seen on Facebook, and to submit itself to an independent audit of its enforcement record.

In a call with reporters, Guy Rosen, Facebook’s head of safety and integrity, said the audit would be completed “over the course of 2021”.

The Anti-Defamation League, one of the groups behind the boycott, said Facebook’s new metric still lacked sufficient context to fully assess its performance.

“We still don’t know from this report exactly how many pieces of content users report to Facebook, whether or not action was taken,” said Anti-Defamation League spokesman Todd Gutnick.

That data matters, he said, because “there are many forms of hate speech that are not removed, even after they are reported.”

Competitors Twitter and Alphabet Inc’s YouTube do not disclose comparable prevalence metrics.

Facebook’s Rosen also said that from March 1 to the November 3 elections, the company removed more than 265,000 pieces of content from Facebook and Instagram in the US for violating voter interference policies.

In October, Facebook said it was updating its hate speech policy to ban content that denies or distorts the Holocaust, reflecting a shift in the public comments CEO Mark Zuckerberg had made about what should be allowed.

Facebook said it took action on 19.2 million pieces of violent and graphic content in the third quarter, up from 15 million in the second. On Instagram, it took action on 4.1 million pieces of violent and graphic content.

Earlier this week, Congress questioned Zuckerberg and Twitter Inc CEO Jack Dorsey about their companies’ content moderation practices, from Republican allegations of political bias to decisions about violent speech.

Last week, Reuters reported that Zuckerberg told an all-staff meeting that former Trump White House adviser Steve Bannon had not violated enough of the company’s policies to justify suspension when he urged the beheading of two US officials.

The company has also come under fire in recent months for allowing large Facebook groups that share false election claims and violent rhetoric to gain traction.

Facebook said its rates for finding rule-breaking content before users reported it were up in most areas, owing to improvements in its AI tools and the expansion of its detection technologies to more languages.

In a blog post, Facebook said the Covid-19 pandemic continued to disrupt its content-review workforce, although some enforcement metrics were returning to pre-pandemic levels.

An open letter from more than 200 Facebook content moderators posted on Wednesday accused the company of forcing these workers to return to the office and “unnecessarily risking” their lives during the pandemic.

“Facilities meet or exceed guidelines for a safe work space,” said Rosen of Facebook.
