Then a few years later, along comes ISIS and the pro-ISIS support online.
We had this idea in 2014, when ISIS first appeared on the horizon, to start tracking what we saw. Implementing it was hard, because Facebook was really good at shutting down this type of activity. But we found there’s one social media entity that’s really popular in eastern Europe and has 350 million users worldwide, called VKontakte. It’s exactly like Facebook, and it has this interesting feature that Facebook also has, which is groups. Pro-ISIS supporters would form themselves into these groups and exchange information about weaponry, financing, recruitment and events.
I had Ph.D. students and post-docs who were Russian speakers, Arabic speakers, and from political science — and we tracked pro-ISIS groups over time. We found that, exactly like fish in the sea, ISIS supporters slowly build up into groups; then the groups get shut down by the moderators and the members scatter. And people don’t disappear; they just go off and form other groups. So not only did we find exactly the mechanism that we proposed in our model, but when we looked at the size distribution of the online pro-ISIS groups, we found it was a power law and its exponent came out to be 2.5. That was a Science paper a year ago.
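To make that mechanism concrete, here is a minimal simulation sketch (my own illustration, not the authors’ published code): agents merge into groups when two of them meet, and whole groups are occasionally shut down and scatter back into individuals. Coalescence–fragmentation models of this kind are known to give a group-size distribution close to a power law with exponent 2.5. All parameter values below are illustrative assumptions.

```python
import random
from collections import Counter

# Coalescence-fragmentation sketch: groups grow by merging, and whole
# groups are occasionally shut down, scattering members into singletons.
N = 5000            # number of agents (illustrative)
P_FRAGMENT = 0.05   # chance that a step is a shutdown rather than a merger
STEPS = 300_000

group_of = list(range(N))             # group id of each agent
members = {g: {g} for g in range(N)}  # group id -> set of member agents

for _ in range(STEPS):
    a = random.randrange(N)   # picking agents uniformly means bigger
    ga = group_of[a]          # groups are selected proportionally more often
    if random.random() < P_FRAGMENT:
        # The group is shut down: every member becomes a singleton again.
        for m in members.pop(ga):
            group_of[m] = m
            members[m] = {m}
    else:
        b = random.randrange(N)
        gb = group_of[b]
        if ga != gb:
            # Merge the two groups, absorbing the smaller into the larger.
            if len(members[ga]) < len(members[gb]):
                ga, gb = gb, ga
            absorbed = members.pop(gb)
            for m in absorbed:
                group_of[m] = ga
            members[ga] |= absorbed

size_counts = Counter(len(s) for s in members.values())
for size in sorted(size_counts):
    print(size, size_counts[size])
```

Plotting the printed counts against group size on log-log axes should give a roughly straight line with slope near -2.5, the exponent quoted above.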
So once you have a model that explains where the power law comes from, how does that help? What does it actually tell you about combating terrorism?
Most of the approaches to dismantling the online support — recruiting and financing, et cetera — are at the individual level. They always seem to want to find the bad guy, the needle in the haystack, the ringleader. What our work shows is that this is not the way to go. It’s like the fish: Imagine I want to stop schools of fish from forming. You try to catch one fish; will it stop the grouping? No, of course it won’t. Fish No. 3 becomes No. 2, No. 2 becomes No. 1, and in fact there may not even be any hierarchy; there’s just a collection of objects. So you need this systems-level approach or you’ll never understand this behavior.
Security agents are very good at finding who’s actually buying explosives, who’s just about to do something. But what about when the people themselves don’t necessarily know where they’re heading? If you can understand how people move through these groups, then you’re going to get a sense of who is developing momentum toward at least having the intent and the capability. It certainly seems that this dynamic systems view is better than just watchlists based on immigration status.
In one recent paper, you analyzed individuals and groups on VKontakte that were banned for promoting violence; what did the research suggest?
It turns out that when people get banned, it’s mostly because they’ve been members of certain types of groups. But it’s not true that the more groups I join, the more likely I am to be banned. We found that the people most likely to get banned are those who join one pro-ISIS group. Join two, and your probability of being banned is less. So might it be that by joining two, I sort of confuse my message to myself? Then the chance of being banned after joining three groups is less than for two groups, et cetera.
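As a rough sketch of the kind of measurement behind that finding (the records and field names below are hypothetical placeholders; only the form of the calculation follows the interview), one can estimate the probability of being banned conditioned on the number of pro-ISIS groups a user joined:

```python
from collections import defaultdict

# Toy placeholder records: (number of pro-ISIS groups joined, whether the
# account was later banned). Real data would come from tracking VKontakte
# accounts over time; these values are purely illustrative.
users = [
    (1, True), (1, True), (1, False),
    (2, True), (2, False), (2, False),
    (3, False), (3, False), (3, True),
]

tallies = defaultdict(lambda: [0, 0])  # groups joined -> [banned, total]
for groups_joined, was_banned in users:
    tallies[groups_joined][0] += int(was_banned)
    tallies[groups_joined][1] += 1

for k in sorted(tallies):
    banned, total = tallies[k]
    print(f"joined {k} group(s): P(banned) = {banned / total:.2f} (n={total})")
```

The finding described above is that this conditional probability is highest for people who join one group and falls as the number of groups joined increases.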
We also find that the people on the way to becoming banned tend to go for the small groups, the ones that are focused not on the news but on something more to do with the spiritual or ideological side. It doesn’t seem to be the case that people go along, and then there’s a piece of news that bothers them and they go out and do something; it really is this ratcheting up in ideology. And the people who develop more quickly do that in a more predictable way. The ones who take longer to cook, as it were, tend to fluctuate around more. Which is interesting, because that means there’s probably more opportunity to persuade them away, for instance by getting into one of the groups they seem to be heading toward and softening the message to deflect the person away. Now, that’s not my business; I do the science. But there are interesting possibilities that we hope might be looked into.
Can you check that your model actually identifies the terrorists, rather than just ISIS sympathizers?
There’s a whole bunch of people who are members of these online groups who don’t end up doing anything. But there are many whom we identified who are known from media reports to have eventually been killed in combat. It’s an awful thing to be talking about, but I think it’s an important thing to be doing, because all of this is open-source information. We could sit down in a Starbucks, open up group pages on VKontakte, and we’d see everything, because these groups keep themselves open to try and attract recruits and new people.
Do the intelligence agencies take note of your findings?
We’ve given a lot of talks and I’m very impressed by how much interest U.S. agencies showed in this work. The unfortunate thing is, it’s basic science that we’re still trying to work out at the same time that we’re addressing the problem. So we don’t have daily interaction with those agencies. They may be doing something in private; I’ve seen our work mentioned in a lot of reports that are in the public domain.
Does your research also apply to the rise of white supremacist groups in the U.S.?
Yes, what we are doing is very relevant, since the alt-right groups live, recruit and coordinate (and hence evolve) online. And from what we can already see, they evolve and coordinate pretty much exactly as the pro-ISIS groups did, but Facebook has so far been slower to shut them down. So the question is: What was the activity of the online groups before Charlottesville? And if we look at their evolution from now on (as we did for pro-ISIS groups), can we foresee growth toward an outburst like Charlottesville happening elsewhere in the U.S.?