Facebook removed, or placed a warning label on, 1.9 million pieces of extremist content related to ISIS or al-Qaeda in the first three months of 2018, the Reuters news agency reported today.
For the first time ever, the world’s largest social network published its internal definition of “terrorism.”
Monika Bickert, Vice President of Global Policy Management, and Brian Fishman, Global Head of Counterterrorism Policy, laid out the definition in a post on Facebook’s corporate blog:
“Any non-governmental organization that engages in premeditated acts of violence against persons or property to intimidate a civilian population, government, or international organization in order to achieve a political, religious, or ideological aim.”
Facebook’s definition of terrorism is, Fishman and Bickert wrote, “agnostic to the ideology or political goals of a group.” That means it covers everything from white supremacists to militant environmental groups, they noted. The counter-terrorism policy does not apply to governments, however, because of the “general academic and legal consensus that nation-states may legitimately use violence under certain circumstances.”
According to Reuters, the disclosure is a direct result of pressure the European Union has put on Facebook and other tech companies over extremist content. European regulators are pushing the companies to remove such material more rapidly or face legislation.
In December 2017, the European Commission warned tech firms to remove extremist content or be regulated, The Guardian reported. Google, Twitter, and Facebook were told that the European Union would introduce regulation unless the firms managed to self-regulate effectively, and that if it was not satisfied with their progress, the EU would come forward with regulation in 2018. The Commission also wants tech companies to make greater use of automatic detection technologies.
Dimitris Avramopoulos, the EU home affairs commissioner, said:
“It is feasible to reduce the time it takes to remove content to a few hours. There is a lot of room for improvement, for this cooperation to produce even better results, starting with the reporting from the companies, which must become more regular and more transparent.”
Facebook appears to have made considerable progress. According to the company, the vast majority of the 1.9 million pieces of extremist content were removed outright; a small portion, content shared for informational purposes, received a warning label instead.
Mark Zuckerberg’s social network uses automated software, such as image-matching AI, to detect extremist content. In the first three months of 2018, the median time from detection to takedown was less than one minute, the social network said.
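Image matching of this kind is commonly built on perceptual hashing, which fingerprints what an image looks like rather than its exact bytes, so that resized or re-encoded copies still match. The sketch below is a minimal illustration of the general technique, assuming the Pillow library is installed; the function names, threshold, and file paths are hypothetical and are not Facebook’s actual system.

```python
# A minimal perceptual "average hash" sketch; illustrative only.
from PIL import Image

def average_hash(path, size=8):
    """Shrink the image to an 8x8 grayscale grid and hash it bit by bit."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the overall average.
    return sum(1 << i for i, p in enumerate(pixels) if p > avg)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance means visually similar images."""
    return bin(h1 ^ h2).count("1")

# Two images "match" if their hashes differ in only a few bits.
if hamming_distance(average_hash("upload.jpg"), average_hash("known_bad.jpg")) <= 5:
    print("Flag for review / removal")
```

Because the hash reflects visual structure rather than raw bytes, small edits to an image change only a few bits, which is what lets a system like the one described above catch copies automatically.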
As The Guardian noted, Facebook is, like Google, Twitter, and Microsoft, part of a group of technology companies pooling resources to combat extremist content. Called the Global Internet Forum to Counter Terrorism, the group works with Europol to fill a shared database with digital fingerprints of terrorist images and videos. Image-recognition software then compares new uploads against this database and removes matches that fall under the definition of terrorism.
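As a simplified illustration of how a lookup against such a shared database might work at upload time, the sketch below screens an upload against a set of known fingerprints. The database contents and the exact-match SHA-256 check are hypothetical simplifications; real systems of the kind described above rely on perceptual hashes so that re-encoded or cropped copies still match.

```python
import hashlib

# Hypothetical stand-in for the shared industry database; in practice this
# would be a large, jointly maintained service, not an in-memory set.
KNOWN_TERRORIST_HASHES = {
    # Placeholder entry: SHA-256 of empty bytes, used for the demo below.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def screen_upload(data: bytes) -> str:
    """Return 'block' if the upload exactly matches a known item."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_TERRORIST_HASHES:
        return "block"   # exact match against a known image or video
    return "allow"       # unknown content proceeds to other checks

print(screen_upload(b""))  # matches the placeholder entry -> "block"
```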