Tech giants including YouTube, Facebook and Google are now using automated systems to remove extremist content from the Internet, according to news agency Reuters. The move marks a notable change in policy: until now, these companies have largely relied on a manual process in which users flag terrorist content and human reviewers then delete it.
Content detection systems have long been used to find copyright-protected material online, and the same technology is now driving a clampdown on extremist videos and other content, such as footage of hostages being killed. It remains unclear, however, exactly what content is classed as extremist, and whether that includes inflammatory rhetoric or clips encouraging violence.
Reuters’ sources have indicated that the tech companies involved are unwilling to discuss the new initiative, for fear that terrorists could find ways around the content-blocking systems they employ. The systems generally work by comparing “hashes,” unique digital identifiers assigned to uploaded videos, against a list of banned content.
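The hash-matching approach described above can be sketched in a few lines. This is a deliberately simplified illustration: it uses exact SHA-256 digests, whereas real content-matching systems use perceptual or robust hashes that survive re-encoding and cropping, and the function names and sample data here are hypothetical.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Return a unique digital identifier (hash) for a piece of content."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical list of hashes for content already identified as banned.
banned_hashes = {content_hash(b"known extremist clip")}

def is_banned(upload: bytes) -> bool:
    """Compare an upload's hash against the banned-content list."""
    return content_hash(upload) in banned_hashes

print(is_banned(b"known extremist clip"))  # True: hash matches the list
print(is_banned(b"harmless video"))        # False: no match
```

Because an exact hash changes completely if even one byte of the file differs, production systems rely on fuzzier fingerprinting, which is also why the companies reportedly worry that publicizing details could help uploaders evade detection.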
Twitter, Facebook and Google reportedly held an in-depth discussion back in April about how to combat the rise of extremist posts and videos on social media and content-sharing sites. The call was prompted by Twitter’s decision earlier in the year to ban 125,000 users, a crackdown that saw one teenager receive an 11-year prison sentence for running an account championing the terror group ISIS.
While none of the companies involved will divulge official details about the fight against terrorism online, Facebook’s head of global policy management, Monika Bickert, has acknowledged that they are working together. She added that the social media giant was “exploring with others in industry ways [they] can collaboratively work to remove content that violates [their] policies against terrorism.”
There also appears to be growing political pressure to address the issue, with US President Barack Obama recently suggesting that online extremism could be linked to terrorist attacks in the West. A recent study also proposed an alternative to the current systems: an algorithm that attempts to predict potential attacks by analyzing spikes in online traffic.