Social networks are “refusing to act responsibly” in the fight against terror, a UK committee alleges.

A wide-ranging report from the UK’s Home Affairs Committee argues that terrorists’ use of the Internet to radicalize recruits and organize terrorist attacks “is one of the greatest threats that countries including the UK face.” Social networks are “consciously failing” to address this issue, the report says, something those firms pushed back on today.

Government operations, the committee’s report notes, need more funding and equipment to “reflect the urgency and importance of its vital function in trying to protect the public from fanatics and criminals,” while tech companies should step up efforts to remove offending content and coordinate with law enforcement.

The Metropolitan Police’s Counter Terrorism Internet Referral Unit (CTIRU), founded in February 2010, has removed more than 120,000 pieces of terrorist-related content since its inception. It now gets about 1,000 removal requests per week, about 100 of which relate to Syria.

Going forward, CTIRU needs to be “upgraded into a high-tech, state-of-the-art round-the-clock central Operational Hub which locates the perils early, moves quickly to block them and is able to instantly share the sensitive information with other security agencies,” the committee says.

Part of that should include agencies and tech companies that are “co-located” within CTIRU so that officials can respond quickly to takedown notices and other offending content. “It is odd that when taking down dangerous and illicit material the CTIRU needs to waste time trying to establish contact with organizations outside the unit,” the report says.

“This will enable greater cooperation, better information-sharing and more effective monitoring of and action against online extremist propaganda.”

Social Networks ‘Refusing to Act Responsibly’
The committee had some harsh words for Facebook, YouTube, and Twitter.

“Networks like Facebook, Twitter and YouTube…must accept that the hundreds of millions in revenues generated from billions of people using their products needs to be accompanied by a greater sense of responsibility and ownership for the impact that extremist material on their sites is having.

“It is therefore alarming that these companies have teams of only a few hundred employees to monitor networks of billions of accounts and that Twitter does not even proactively report extremist content to law enforcement agencies,” the report continues. “These companies are hiding behind their supranational legal status to pass the parcel of responsibility and refusing to act responsibly in case they damage their brands. If they continue to fail to tackle this issue and allow their platforms to become the ‘Wild West’ of the Internet, then it will erode their reputation as responsible operators.”

All three companies have teams that deal with objectionable content. Twitter has “more than a hundred” staff, but Facebook and Google declined to provide a number to the committee, the report says. Facebook and Google notify law enforcement agencies about terrorist material that’s a threat to life; Twitter does not because “Twitter is public, that content is available, so often it has been seen already,” the report says.

A Twitter spokesperson pointed PCMag to a Friday blog post that announced the suspension of 235,000 accounts since February “for violating our policies related to promotion of terrorism.” In February, it said it had suspended more than 125,000 accounts since the middle of 2015.

“Daily suspensions are up over 80 percent since last year, with spikes in suspensions immediately following terrorist attacks. Our response time for suspending reported accounts, the amount of time these accounts are on Twitter, and the number of followers they accumulate have all decreased dramatically,” Twitter says in that blog post. “We have also made progress in disrupting the ability of those suspended to immediately return to the platform. We have expanded the teams that review reports around the clock, along with their tools and language capabilities. We also collaborate with other social platforms, sharing information and best practices for identifying terrorist content.

“There is no one ‘magic algorithm’ for identifying terrorist content on the Internet,” Twitter continues. “But we continue to utilize other forms of technology, like proprietary spam-fighting tools, to supplement reports from our users and help identify repeat account abuse.”

Facebook did not immediately respond to a request for comment. But in February, the Wall Street Journal reported that the social network had “assembled a team focused on terrorist content and is helping promote ‘counter speech,’ or posts that aim to discredit militant groups like Islamic State.”

“We take our role in combatting the spread of extremist material very seriously,” a YouTube spokesperson told PCMag. “We remove content that incites violence, terminate accounts run by terrorist organisations and respond to legal requests to remove content that breaks UK law. We’ll continue to work with Government and law enforcement authorities to explore what more can be done to tackle radicalization.”

Google removed over 14 million videos globally in 2014 relating to all types of abusive behavior, the report says. The committee recommends that Facebook and Twitter adopt YouTube’s “trusted flagger system,” which lets approved users report troubling content, triggering a review by YouTube staff.

Tech firms should also publish quarterly statistics that highlight “how many sites and accounts they have taken down and for what reason,” the committee says. All three companies already produce transparency reports that detail takedown requests and government requests for user information.

“In short, what cannot appear legally in the print or broadcast media, namely inciting hatred and terrorism, should not be allowed to appear on social media,” the committee argues.

Finally, Internet companies should “address the lack of Arabic-speaking staff, and staff with Urdu, Kashmiri and Punjabi language skills,” the report says.

The committee does not address the use of encrypted messaging apps like Telegram and WhatsApp, which are currently popular ways for terrorist groups like ISIS to communicate and coordinate.