Arms races happen when two sides of a conflict escalate in a series of ever-changing moves intended to outwit the opponent. In biology, a classic example comes from cheetahs and gazelles. Over time, these species have evolved for speed, each responding to the other’s adaptations. A host of weirder examples come from the biology of sex, where males and females evolve bizarre adaptations to control reproduction, ranging from sperm plugs in bats to corkscrew penises and vaginas filled with deceptive dead-ends in ducks.
One hallmark of an arms race is that, in the end, the participants are often just where they started. Sometimes, the cheetah catches its prey and sometimes the gazelle escapes. Neither wins the race because, as one gets better, so does its opponent. And, along the way, each side expends a great deal of effort. Still, at any point, the only thing that makes sense is to keep escalating.
Arms races happen in the human world too. The term arms race, of course, comes from warring countries that literally amass ever-more sophisticated and powerful weapons. But some human arms races are more subtle.
The philosopher Bennett Holman has argued that the interactions between pharmaceutical companies and the regulatory bodies that seek to determine if drugs are safe and effective constitute an arms race. Pharmaceutical companies deploy an ever-evolving set of tactics to influence medical knowledge. As regulators identify these tactics and seek to neutralise them, pharmaceutical companies find new ways to shape research.
We might call this an informational arms race. One side attempts to mislead the public over a key issue – the safety of a drug, whether climate change is real, or whether vaccines are dangerous, for example. At the same time, the other side works to combat this misinformation campaign.
Note that such a campaign would often be called a disinformation campaign, because it purposefully intends to mislead. I use the term misinformation in this article because it is often tricky to figure out whether something is misinformation or disinformation, and because disinformation often ends up being shared by true believers with no political motives. This, of course, is precisely the sort of interaction that social-media companies such as Twitter have found themselves engaged in.
As detailed in the Mueller report – though widely known before its release – in the lead-up to the 2016 presidential election in the United States, the Russian government, via a group called the Internet Research Agency, engaged in large-scale efforts to influence voters and to polarise the US public. In the wake of this campaign, social-media sites and research groups have scrambled to protect the US public from misinformation on social media.
Twitter, for example, has employed algorithms aimed at identifying bots and shutting down shady accounts. By its own reckoning, it has recently been ridding the platform of one million such accounts per day. But when Twitter gets smarter, so do the bots. A recent report noted a new bot network on Twitter specially designed to outwit detection algorithms. Another new trend: pernicious actors hijacking real accounts.
What is important to recognise about such a situation is that whatever tactics are working now won’t work for long. The other side will adapt. In particular, we cannot expect to be able to put a set of detection algorithms in place and be done with it. Whatever efforts social-media sites make to root out pernicious actors will regularly become obsolete.
The same is true for our individual attempts to identify and avoid misinformation. Since the 2016 US election, fake news has been widely discussed and analysed, and many social-media users have become savvier about identifying sites mimicking traditional news sources. But the same users might not be as savvy about, say, sleek conspiracy-theory videos going viral on YouTube, or about deepfakes – expertly altered images and videos.
What makes this problem particularly thorny is that internet media changes at dizzying speed. When radio was first invented, it too was used to spread misinformation. But regulators quickly adapted, managing, for the most part, to subdue such attempts. Today, even as Facebook fights Russian meddling, WhatsApp has become host to rampant misinformation in India, leading to the deaths of 31 people in rumour-fuelled mob attacks over two years.
Participating in an informational arms race is exhausting, but sometimes there are no good alternatives. Public misinformation has serious consequences. For this reason, we should be devoting the same level of resources to fighting misinformation that interest groups are devoting to producing it. All social-media sites need dedicated teams of researchers whose full-time jobs are to hunt down and combat new kinds of misinformation attempts.
Likewise, the US government needs to take social-media misinformation seriously as a threat to public health and to democracy. This means devoting significant government resources to combatting it, especially since the character of an informational arms race means that there are no easy patches. It is beyond alarming that the current administration is taking a see-no-evil approach to online misinformation – ignoring pleas from former Homeland Security Secretary Kirstjen Nielsen to pay attention to increasingly sophisticated Russian efforts to sway US politics. The European Union has done much better with its East StratCom taskforce, created in 2015.
The arms-race character of online misinformation means that we must also think of creative ways for broader social-media users to get involved in efforts to protect public belief. Twitter, for instance, has added a bot-reporting function. This leverages the full range of abilities that humans can use to detect potential bots – abilities that can adapt as the opposition does.
Could we implement prizes or prestigious contests for independent research teams that identify new attempts at social-media misinformation? Or for teams that come up with the best new ideas for fighting such attempts? Instead of growing victory gardens, the patriots of today can hunt down bots. We shouldn’t expect to win the informational arms race, but if we want to protect our democracy, we have to keep fighting new threats as they emerge.
Cailin O’Connor is an associate professor of logic and philosophy of science and a member of the Institute for Mathematical Behavioral Science at the University of California Irvine. She is the co-author of The Misinformation Age with James Owen Weatherall and her most recent book, The Origins of Unfairness, is forthcoming in 2019.
This article first appeared on Aeon.