Facebook Deletes 583 Million Fake Accounts In Effort To Clean Up Network

Entrance to Facebook's Menlo Park office

The company was first to spot more than 85 percent of the graphically violent content it took action on, and nearly 96 percent of the nudity and sexual content.

The report also said Facebook "disabled" about 583 million fake accounts in Q1 - "most of which were disabled within minutes of registration". It attributed the increase in takedowns to "improvements in our ability to find violating content using photo-detection technology, which detects both old content and newly posted content".

The company took down 837 million pieces of spam in Q1 2018, almost all of which was flagged before any users reported it.

Facebook said in a written report that of every 10,000 pieces of content viewed in the first quarter, an estimated 22 to 27 contained graphic violence, up from an estimated 16 to 19 in the final quarter of last year.

In other words, Facebook says 0.22% to 0.27% of content views in the period were of material that violated its standards on graphic violence.

These releases come in the wake of the Cambridge Analytica scandal, which has left the company battling to restore its reputation with users and developers - though employees have said the decision to release the Community Standards was not driven by recent events.

Guy Rosen, Facebook's vice president of product management, said the company had substantially increased its efforts over the past 18 months to flag and remove inappropriate content.

Facebook has released its Community Standards Enforcement Report which details the actions the firm has taken against content that's not allowed on its platform such as graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts.

The data also illustrates where Facebook's AI moderation systems are effectively identifying and taking down problematic content - and the areas where it still struggles to identify problems.

While Facebook uses what it calls "detection technology" to root out offending posts and profiles, the software has difficulty detecting hate speech.

While such AI systems are promising, Facebook notes, it will take years before they can reliably remove all objectionable content. During the first quarter, the company found and flagged just 38% of the hate speech it acted on before users reported it, by far the lowest rate among the six content types.

"For graphic violence, we took down or applied warning labels to about 3.5 million pieces of violent content in Q1 2018 - 86% of which was identified by our technology before it was reported to Facebook", it said.

The company estimated that around 3% to 4% of the active Facebook accounts on the site during this time period - roughly 66 million to 88 million out of 2.19 billion - were fake. In Q1, it disabled 583 million fake accounts, down 16% from 694 million a quarter earlier.

"My top priorities this year are keeping people safe and developing new ways for our community to participate in governance and holding us accountable", wrote Facebook CEO Mark Zuckerberg in a post, adding: "We have a lot more work to do". It says it found and flagged almost 100% of spam content in both Q1 and Q4.

The social network estimates that it found and flagged 85% of the graphically violent content it acted on before users saw and reported it - a higher rate than in previous quarters, which it attributes to advances in its detection technology.
