The Complexity of Eradicating IS Propaganda Online
As news broke on 28 November of Somali-born refugee Abdul Razak Ali Artan's attack at Ohio State University, so followed the recurring narrative of the sad new normal. There was the attack method: in this case a vehicle and a knife, two approaches the Islamic State (IS) had instructed its followers to use in the preceding weeks via video and magazine guides disseminated on social media.
Then there was the Facebook post: praising the killed jihadi recruiter Anwar al-Awlaki and demanding that the US make peace with IS. It was thus no surprise that IS's 'Amaq News Agency, calling him one of its "soldiers", claimed that his attack was made in response to its "calls to target citizens of international coalition countries".
Acknowledging how well IS exploits social media has become a staple of small-talk about terrorism — not much more thoughtful in itself than a comment on the weather. Yet, three years since its rise, IS still thrives on social media, and its recruitment propaganda can be found as easily as ever. It's almost as if people have accepted rampant terror propaganda and attack incitements on social media as an unchangeable reality.
However, on 5 December, Facebook signalled a new approach when it announced a partnership with Microsoft, Twitter and YouTube involving a shared database, intended to "help curb" the spread of terrorist propaganda on social media. Under the new plan, these social media platforms will begin sharing hashes — or digital "fingerprints" — of terrorist images and videos. The hashes will be kept in a central database, to which each participating company can contribute and from which each can pull, for efficient identification, review and removal of material from their services.
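The companies have not published technical details, but the basic mechanics of such a shared hash database can be sketched in a few lines. Everything below is an illustrative assumption, not a description of the real system: the function names are invented, and a plain SHA-256 digest stands in for whatever "fingerprint" the companies actually use.

```python
import hashlib

# Illustrative sketch only: the real system is said to use perceptual
# fingerprints of images and videos, and its interfaces are not public.
shared_database = set()  # hashes contributed by all participating companies

def fingerprint(content: bytes) -> str:
    """Stand-in for a content fingerprint; here a simple SHA-256 digest."""
    return hashlib.sha256(content).hexdigest()

def contribute(content: bytes) -> None:
    """One company flags a piece of terrorist imagery and shares its hash."""
    shared_database.add(fingerprint(content))

def is_flagged(content: bytes) -> bool:
    """Another company checks an upload against the shared database."""
    return fingerprint(content) in shared_database

contribute(b"previously identified propaganda video")
print(is_flagged(b"previously identified propaganda video"))  # True
print(is_flagged(b"re-encoded copy with different bytes"))    # False
```

Note the built-in limitation the last line exposes: an exact cryptographic hash misses even a trivially re-encoded copy of the same video, which is why deployed systems rely on perceptual fingerprints rather than byte-level digests.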
This encouraging announcement suggested that after years of misguided efforts, social media companies were ready to start a more collaborative and better-aimed fight against what has fuelled attacks like Artan's.
First and foremost, this announcement shows that these companies are acknowledging the seriousness of the problem. For years, some social media companies have apathetically characterised rampant terror propaganda and incitement on their platforms as inevitable byproducts of free speech. Others, meanwhile, relied for too long on endless rounds of account suspensions. Facebook's announcement is thus an acknowledgement not just that something needs to be done, but that an entirely new approach is needed.
Just as encouraging is the project's collaborative element, which is long overdue among social media companies. When a group like IS and its supporters spread propaganda, they do not go to just one platform. Rather, they spread the exact same material, translated into different languages, across every platform where they think there might be an audience.
See the same IS release, updated on 8 October with English subtitles, being distributed across Twitter (left) and Facebook (right), among other platforms.
However, don't expect this project to be a silver bullet. Incitements to lone-wolf attacks will not vanish from Facebook, Twitter or YouTube, nor will calls for attacks by IS, al-Qaeda and other groups.
First of all, Facebook's announcement fails to specify the fundamental goal of the project. The statement says the project will focus on removing "violent terrorist imagery or terrorist recruitment videos", which sounds agreeable at surface level but raises questions: is the database limited to violent content, or does it aim to remove terrorist recruitment and incitement from these platforms altogether? Kicking IS off these platforms and ridding them of violent imagery may at times be compatible tasks, but they are largely not the same thing.
IS propaganda is more than a series of beheading videos. It is a complex series of appeals aimed at exploiting prospective recruits' needs, fears, desires, alienation and frustrations. The vast majority of IS material is not violent at all. It includes pictures of daily life under the "Caliphate": restaurants, weather, agriculture and kids in playgrounds. It includes infographics of charity ("zakat") distribution and heartbreaking images and videos of homes ruined by airstrikes:
Above is a picture from an 8 December video by 'Amaq showing the results of coalition airstrikes on Mosul.
Some releases can seem as mundane as one from IS's Tripoli Province featuring a pizza shop operating in its territory. Below is the IS photo report of the pizza restaurant, intended to project the group as running a functioning state and economy.
Pictures of pizza might seem strange in the context of terrorism, but this content is nonetheless jihadist propaganda: produced by IS to promote the group and recruit for it. So, will this material also be included in the aforementioned database?
Other images, such as communiqués and official statements, are not photographs at all but formatted text templates. The daily "Bayan" news bulletins contain no images whatsoever.
Even reporting from 'Amaq contains videos and images of clashes not much different from those of mainstream media outlets. This past Tuesday, 'Amaq released a video report featuring John Cantlie commenting on a recent airstrike near Mosul. The video contained no violent imagery, but it is a powerful propaganda and recruitment tool.
With the full range of IS releases in mind, the new shared hash database, despite its more positive points, looks like another attempted shortcut in fighting terrorism online. You cannot remove only certain images from IS and leave everything else intact. And IS produces this material at an overwhelming rate: the aforementioned 'Amaq News Agency alone issued over 170 reports in just the last week.
Also potentially troubling is that this database uses the same type of technology deployed to detect child pornography online. That software works for removing known child pornography images, but jihadist propaganda is not in itself visually identifiable. It is a completely different beast, and relying on this solution to fight it would be like using an antibiotic to treat a viral infection, or a bandage to heal cancer.
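For concreteness, the kind of matching this technology performs can be approximated with a toy perceptual "average hash". This is a deliberate simplification: PhotoDNA and its kin are far more robust, and the 8x8 grids of grayscale values below are stand-ins for real image preprocessing. The sketch illustrates the core limitation: such hashes match near-duplicates of already-known images, but say nothing about whether a never-before-seen image is propaganda.

```python
# Toy "average hash" (aHash), assuming images have already been downscaled
# to an 8x8 grid of grayscale values (0-255). Hypothetical helper names.

def average_hash(pixels: list[int]) -> int:
    """Each bit records whether a pixel is brighter than the image's mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; a small distance means a near-duplicate."""
    return bin(a ^ b).count("1")

known = [10 * (i % 17) for i in range(64)]   # a previously flagged image
reencoded = [p + 3 for p in known]           # same image, slightly brightened
unrelated = [255 - p for p in known]         # a brand-new, unseen image

h_known = average_hash(known)
print(hamming(h_known, average_hash(reencoded)))  # 0 bits differ: a match
print(hamming(h_known, average_hash(unrelated)))  # 64 bits differ: no match
```

The hash catches the brightened re-encoding of the known image, but a new image it has never seen simply fails to match; nothing in the arithmetic "understands" what either image depicts.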
The first step in "cleaning" social media of terrorist propaganda must be to define the parameters of the goal. After all the evidence of social media platforms' role in terrorist recruitment, I would have expected a statement with heavier emphasis on IS propaganda in the larger sense, not just on violent imagery. The second and most vital step is finally to take the time to understand the very elements that have fuelled recruitment and terrorism in recent years.
These tech giants' project could very well spark a new approach among social media companies in taking on terrorism — one that embraces collaboration, constant adaptation, creativity and a fundamental understanding of how terrorist groups disseminate media online. But while it demonstrates a positive step forward in removing terrorist propaganda from social media, it is still only just that: a step.
Here's to being cautiously optimistic.
(This article was originally posted on International Business Times.)