Facebook has deleted millions of spam posts and fake accounts

Jo Lloyd
May 16, 2018

The social network released its first content moderation report today.

The social media company targeted accounts that produced inappropriate content across several categories, including graphic violence, terrorist propaganda, and hate speech.

Facebook said it took down 583 million fake accounts in the first three months of the year, usually within minutes of their creation. Even so, the company estimated that three to four percent of the accounts that remained active during the period were fake.

In a blog post Tuesday about the newly released report, Facebook's vice president of product management, Guy Rosen, said nearly all of the 837 million spam posts the company took down in the first quarter of 2018 were caught before anyone had reported them.

Facebook took down or applied warning labels to 3.4 million pieces of violent content in the three months to March - a 183 percent increase from the final quarter of 2017.

In the United Kingdom, Facebook this week again resisted a request from British lawmakers to testify as part of their investigation into Cambridge Analytica, a political consultancy that improperly accessed personal information on 87 million of the social network's users.

The report also indicates Facebook is having trouble detecting hate speech, only becoming aware of most of it when users report the problem. That stands in contrast to spam, where the company says its artificial intelligence systems flagged nearly all of the 837 million posts it removed before any user reported them.

Zuckerberg noted that there is still room for improvement with Facebook's AI tools - notably in flagging hate speech. Hate speech is hard to flag using AI because it "often requires detailed scrutiny by our trained reviewers to understand context and decide whether the material violates standards", according to the report.

"AI still needs to get better before we can use it to effectively remove more linguistically nuanced issues like hate speech in different languages, but we're working on it", said Zuckerberg to CNet.

The company found 2.5 million posts containing hate speech in the quarter, a 56 percent increase over the last quarter of 2017.

Facebook has faced fierce criticism from governments and rights groups for failing to do enough to stem hate speech and to prevent the service from being used to promote terrorism, stir sectarian conflict and broadcast acts including murder and suicide. Of the hate speech it removed, only 38 percent had been detected through Facebook's own efforts - the rest was flagged by users. "This is the same data we use to measure our progress internally - and you can now see it to judge our progress for yourselves", Rosen wrote.

All told, Facebook took action on almost 1.6 billion pieces of content during the six months ending in March, a tiny fraction of all the activity on its social network, according to the company.

While AI is getting more effective at flagging content, Facebook's human reviewers still have to finish the job.
