Facebook, Aiming for Transparency, Details Removal of Posts and Fake Accounts

According to the numbers, which cover the six-month period from October 2017 to March 2018, Facebook's automated systems quickly remove millions of pieces of spam, pornography, graphic violence and fake accounts, but hate speech still requires extensive manual review to identify.

Guy Rosen, Facebook's vice president of product management, said that the company's detection systems are still in development for some types of content and that publishing the figures is part of being "accountable to the community".

The report, released Tuesday, detailed how much content Facebook has removed for violating its community standards.

The number of posts on Facebook showing graphic violence rose in the first three months of the year from a quarter earlier, possibly driven by the war in Syria, the social network said on Tuesday, in its first public release of such data.

"It may take a human to understand and accurately interpret nuances like... self-referential comments or sarcasm", the report said, noting that Facebook aims to "protect and respect both expression and personal safety".

It also explains some of the reasons, usually external events or advances in the technology used to detect objectionable content, for large swings in the number of violations found between Q4 and Q1. Last week, Rosen and Alex Schultz, the company's vice president of growth, walked reporters through exactly how the company measures violations and how it intends to deal with them.

Using new artificial-intelligence-based technology, Facebook can find and moderate content more rapidly and effectively than human reviewers, at least when it comes to detecting fake accounts or spam. "The rate at which we can do this is high for some violations, meaning we find and flag most content before users do," the report said.

Nevertheless, the company took down nearly twice as much content in both categories during this year's first quarter as it did in Q4. The problem is perhaps most salient in non-English-speaking countries.

"We took down or applied warning labels to about three and a half million pieces of violent content in Q1 2018, 86 per cent of which was identified by our technology before it was reported to Facebook". The company credited better detection, even as it said computer programs have trouble understanding context and tone of language. In total, it took action on 21 million pieces of content in Q4, similar to Q4.

The report comes amid increasing criticism of how Facebook controls the content it shows to users, though the company was careful to highlight that its new methods are evolving and aren't set in stone, CNET's Parker reports.

Terrorist propaganda (ISIS, Al Qaeda, and affiliates): Facebook says it took action on 1.9 million pieces of such content and found and flagged 99.5% of it before anyone reported it. "By comparison, we removed two and a half million pieces of hate speech in Q1 2018, 38 percent of which was flagged by our technology".

Facebook said that for every 10,000 content views, an average of 22 to 27 contained graphic violence, up from 16 to 19 in the previous quarter, a rise it attributed to an increasing volume of graphic content being shared on Facebook.

The social network says that when action is taken on flagged content, it does not necessarily mean the content has been taken down. The company's internal "detection technology" has been effective at taking down spam and fake accounts, removing them by the hundreds of millions during Q1.