David L. Sloss
Tyrants on Twitter: Protecting Democracies from Information Warfare
Stanford University Press
352 pages, 6 1/2 x 9 1/4 inches
ISBN 9781503628441
Tyrants on Twitter makes three main points. The first is that for roughly the last fifteen years, the world has been experiencing a period of democratic decay and creeping authoritarianism. This means that the number of democracies is declining while the number of authoritarian states is increasing. It also means that the quality of democracy in existing democratic states is deteriorating, while autocrats are exercising greater control over their populations than they did in the past. This is not a novel point; many others have made it, but it is the starting point for the analysis in this book.
The second big point of the book – and this is a more novel point – is that Chinese and Russian information warfare is contributing to both democratic decay and creeping authoritarianism. I do a deep dive, looking separately at China and Russia. One former government official said that “Russia is a hurricane and China is climate change.” That phrase nicely captures the differences between them. Russian information warfare is primarily negative and destructive; Russia is trying to undermine existing democracies. Chinese information warfare, in contrast, is aimed at bringing about a gradual change in the liberal international order to make that system align better with China’s illiberal values. The key similarity is that both Russia and China are accelerating the processes of democratic decay and creeping authoritarianism. Moreover, they are using information technology generally, and social media in particular, to do that.
Then, third, the book presents a detailed proposal for transnational regulatory cooperation among liberal democracies. If liberal democracies want to resist creeping authoritarianism and enhance democracy where it already exists, we need better defenses against information warfare. This is something that liberal democracies need to work on cooperatively. It’s not just a US problem, a French problem, or an Australian problem – it affects all of us. To respond effectively, we need a cooperative approach that involves what some legal scholars call “regulatory harmonization” among liberal democracies. So that’s the basic pitch of the book – the nutshell version.
Let me focus on social media – particularly Russian interference in the 2016 election in the United States – as an example. By way of background, the book documents the fact that Russia has interfered in democratic elections over the last decade or so in about twenty different countries, all of them members of NATO, the European Union, or both, so this is a pretty widespread phenomenon.
Russian interference in the US election in 2016 made extensive use of social media and, in particular, of fake accounts on social media. In the book, I distinguish among six or seven different types of fake accounts – breaking those down and analyzing the different types is probably a novel contribution. But without going into those details, what you essentially get is Russian agents impersonating US citizens: setting up accounts on Facebook, Twitter, Instagram, and so on, pretending to be US citizens, and engaging other US citizens in conversations about the upcoming 2016 election.
Interestingly, Yevgeny Prigozhin, who’s been in the news recently as the head of the Wagner group, was at that time the head of an entity called the Internet Research Agency, which was the primary Russian organization involved in setting up these fake accounts on social media for the purpose of conducting electoral interference. Some of these Russian accounts got huge numbers of followers and were very influential – at least if you measure influence in terms of the number of likes and followers, and that sort of thing. Russian fake accounts reached a very broad audience, and for the most part the audience thought they were engaging with Americans. There is evidence that these agents were engaged with Donald Trump, Jr. He was happy to engage with these folks on social media, and he thought he was talking to Republican voters in the United States, when he was really talking to Russian agents.
There’s a plausible argument that Russian intervention was sufficiently influential to sway enough votes in the three key battleground states – Michigan, Pennsylvania, and Wisconsin – to swing the election in favor of Donald Trump. There’s no question they were trying to swing the election in favor of Donald Trump; that’s well documented in the Mueller Report. There is a lot of debate about how successful they were. Ultimately, that’s an unanswerable question. We don’t know. But it’s at least plausible that Russian agents may have been sufficiently influential to swing enough votes in those three states to make a difference in the electoral outcome. That, in itself, is pretty scary.
China is more in the camp of strengthening and promoting authoritarianism, rather than undermining democracy. The main countries where China is really undermining democracy are Taiwan, where they intervene very extensively, not surprisingly, and also Australia. Social media is one piece of a broader Chinese effort to subvert democracy in those places. But in some ways what’s of greater concern is that China has effectively used information technology to create a surveillance state within China that is reminiscent of George Orwell’s 1984. Moreover, Chinese companies have exported that technology to countries all over the world. Dictators around the world are happy to scoop up that technology because it enables them to strengthen dictatorial control in their own countries. There’s some very good empirical social-science evidence showing that dictators who import this kind of technology, or make greater use of advanced information technology, strengthen and lengthen their dictatorial control. The technology helps them remain in power longer because it enables them to strengthen their control over their populations.
One of the key arguments in the book is that information technology has created an uneven playing field. Not everybody is comfortable with the idea of a global struggle between democracy and authoritarianism. But I don’t think it’s wrong to look at the world that way. There are other ways you can look at it, but I think it’s reasonable to argue that the world is currently in the middle of a global struggle between democracy and authoritarianism. In that struggle, information technology tilts the playing field in favor of autocrats and against democrats. Therefore, the main goal of my proposed regulatory solution is to level the playing field. The proposal is not designed to end autocracy around the world. I don’t think that’s a reasonable goal. I think autocracy is going to be with us for a long time. But I do want to try to level the playing field in the political competition between democracies and autocracies. That is the central goal of my policy proposal.
The key part of the proposal is to ban Chinese and Russian state agents from US social media platforms. In order to do that, I propose a registration system in which all social media users who want to maintain public accounts (as opposed to private accounts) must register by nationality. Now, that’s the piece of the proposal that I’ve probably gotten the most push-back on because a lot of people think that a social media registration system will create a situation in which “Big Brother is watching you.” I don’t actually think that’s true. I don’t want Big Brother watching me any more than anybody else does. While my proposed registration system doesn’t strengthen protections for user privacy, it doesn’t weaken protections for user privacy, either. It leaves user privacy where it is. I’m actually very sympathetic to arguments for strengthening protections for user privacy. That’s not really the goal of my proposal. I think there are other ways to do that. I’m trying to deal with the threat of Chinese and Russian information warfare. I’m trying to deal with it in a way that doesn’t weaken protection for user privacy.
So I propose this registration system for everybody. When you open up a social media account, you have a choice, initially – do I want a private account or a public account? A private account would be limited in the size of the audience I can reach, and if I’m comfortable with that limit, I set up a private account, and I don’t have to register my nationality. But if I want a public account so that I can reach a larger audience, I have to register my nationality. In that case, the national government would confirm that yes, in fact, there is a US citizen named David Sloss who matches that basic demographic information. Then I get the green light to go ahead and set up my public account. After that, if I want to appear on social media under a pseudonym, I can appear under a pseudonym so that the government can’t track what I’m doing on social media. The system is designed so that after the government confirms that David Sloss is a US citizen, the government has no further access to my account. And if I use a pseudonym, the government has no way to connect me to my pseudonym, so I can still speak anonymously on social media. That protects my privacy. But that check by the government is necessary to make sure that Russian agents and Chinese agents aren’t registering as US citizens, or Canadian citizens, or German citizens.
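The flow described above – verify nationality once, then let the user operate an unlinkable pseudonym – can be sketched in code. This is purely a hypothetical illustration, not anything from the book: the names (`issue_token`, `Platform`) are invented, and a real system would use cryptographic blind signatures so that the token issuer cannot be the token verifier; here a shared key stands in for that machinery just to keep the toy runnable.

```python
import hmac
import hashlib
import secrets

# Held by the national government. In a real design the platform would verify
# a blind signature against a public key instead of sharing this secret.
GOV_KEY = secrets.token_bytes(32)

def issue_token(real_name: str, nationality: str) -> tuple[str, str]:
    """Government side: after confirming that real_name is a citizen, issue a
    one-time token attesting ONLY the nationality. The token -> person mapping
    is deliberately not stored, so the government cannot later connect a
    pseudonym back to a real identity."""
    nonce = secrets.token_hex(16)
    sig = hmac.new(GOV_KEY, f"{nonce}:{nationality}".encode(),
                   hashlib.sha256).hexdigest()
    return nonce, sig

class Platform:
    """Platform side: accepts a valid, unused token and creates a public
    account under a pseudonym, storing only the attested nationality."""
    def __init__(self):
        self.used_tokens = set()
        self.accounts = {}  # pseudonym -> attested nationality

    def register_public(self, pseudonym: str, nonce: str, sig: str,
                        nationality: str) -> bool:
        expected = hmac.new(GOV_KEY, f"{nonce}:{nationality}".encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected) or nonce in self.used_tokens:
            return False  # forged, tampered, or reused token is rejected
        self.used_tokens.add(nonce)
        self.accounts[pseudonym] = nationality
        return True
```

The point of the sketch is the separation of roles: the government sees the real name but never the pseudonym; the platform sees the pseudonym but never the real name.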
The reason for the proposed registration system is to fix problems in the current system. The system we have now is one that people have described as a Whack-a-Troll system. Russian or Chinese agents get on a platform – let’s say Twitter – and put out a bunch of posts. At some point, Twitter decides that this is a Chinese state agent, they’re doing bad stuff, we don’t like them, so we’re going to kick them off. But then the Russian or Chinese agent sets up another account under a different alias and you go through the same thing all over again. In general, it is much cheaper, easier, and faster for those foreign agents to set up fake accounts than it is for Twitter and Facebook and YouTube to discover who they are and then block those accounts. Without the kind of pre-screening that is built into my proposed registration system, Chinese and Russian state agents can overwhelm the ability of the social media companies to play defense. A registration system basically equalizes the playing field by allowing the companies to block these accounts before they are set up. Without pre-screening and registration, foreign agents can easily set up five new accounts for every one account that the companies block.
Anyone who is in the US as a legal permanent resident can simply register as a US national. Somebody who’s here on a student visa, for example, a Chinese student here on a student visa, could either avoid the registration requirement by setting up a private account, or they could register as a Chinese national. But if that Chinese student can show that he or she is in the US on a student visa, then presumptively, they are not a state agent, and so they are not subject to the ban, because the only Chinese nationals subject to the ban are Chinese state agents.
A lot of people say that what we need is better media literacy training for ordinary social media users. And I think that’s a good idea. But I think that’s insufficient. The foreign agents are becoming more and more sophisticated in their ability to deceive us by passing themselves off as ordinary Americans, or French citizens, or British citizens, or whatever identity they assume. They’ve gotten very good at this. And for the ordinary social media user, it’s very hard to tell whether the person you’re interacting with on social media is who they claim to be.
The social media companies have better tools than ordinary users, so they’ve gotten better at identifying and blocking fake accounts created by foreign agents. But, as I said, they’re still playing this game of Whack-a-Troll where for every fake account that is blocked, the bad guys can come up with five more. So, media literacy training for ordinary social media users can help, particularly at identifying the most obvious kinds of disinformation and misinformation. But I think it’s at best a partial solution and doesn’t really go to the heart of the problem.
Social media really started to take off around the middle of the first decade of the twenty-first century. The earliest evidence of widespread Russian use of social media for disinformation starts in about 2013 and 2014. It’s not at the very beginning of the rise of social media; it’s a little bit later. And China really gets into the game somewhat after that – with the exception of Taiwan, because China has been actively intervening in Taiwan consistently. It took a while for Russian agents to become more sophisticated.
Today, one of the things that we see going on is that domestic political actors are using some of the same techniques to manipulate elections domestically. For example, in the election in Brazil that elected Bolsonaro as president, there were lots and lots of fake accounts on social media promoting his candidacy. Again, it’s very hard to say how much of an effect that had, but we know that there were lots of accounts, we know that they were reaching lots of people, and we know that they were generally supporting Bolsonaro and his candidacy. And that’s happening now all over the world. Scholars at Oxford University have documented that this is happening, to one degree or another, in probably seventy countries around the world. So this is becoming a pretty commonplace phenomenon these days.
Under my proposal, in order for the regulation to be effective, I would require social media users to create one master account on, for example, Facebook. Under that master account, the user could operate several different subsidiary accounts on Facebook. But all those subsidiary accounts would be linked to one master account, because registration of the master account is where the government confirms yes, David Sloss is a real person, not a fictitious person. If we allow a single person to create multiple Facebook accounts, or multiple Twitter accounts, it becomes much too easy for Russian and Chinese agents to circumvent the ban by creating what I call “duplicate” accounts – a second or third Facebook account created in the name of a real US person who may or may not be on social media.
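The master/subsidiary structure described above can be made concrete with a small sketch. This is a hypothetical illustration of the data model, not anything specified in the book: the class and identifier names (`Registry`, `person_id`) are invented for the example. The two invariants it enforces are the ones the proposal turns on: one master account per verified person, but any number of subsidiary accounts under that master.

```python
class MasterAccount:
    """One per verified real person; nationality is confirmed at registration."""
    def __init__(self, nationality: str):
        self.nationality = nationality
        self.subsidiaries: list[str] = []  # handles linked to this master

    def add_subsidiary(self, handle: str) -> None:
        # Multiple subsidiary accounts are allowed, addressing the
        # First Amendment concern about limiting people to one account.
        self.subsidiaries.append(handle)

class Registry:
    """Platform-side registry keyed by a verified person identifier."""
    def __init__(self):
        self.masters: dict[str, MasterAccount] = {}

    def create_master(self, person_id: str, nationality: str):
        if person_id in self.masters:
            # A second master for the same person is refused, which is what
            # blocks the "duplicate account" circumvention route.
            return None
        master = MasterAccount(nationality)
        self.masters[person_id] = master
        return master
```

The design choice to hang all subsidiaries off a single verified master is what lets the system permit multiple handles without reopening the door to duplicate registrations in a real person’s name.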
I think it would create real First Amendment problems if the government enacted a law saying that an individual US citizen is permitted to have only one Facebook account. There are lots of legitimate reasons why people might want to have more than one Facebook account. My proposal to have master accounts and subsidiary accounts addresses the First Amendment objection by allowing multiple subsidiary accounts. At the same time, the requirement for a single master account addresses the problem of circumvention by foreign agents.
I will just say I’m not a big fan of the Supreme Court’s First Amendment jurisprudence. Constitutional law scholars agree that one of the core goals of the First Amendment is to strengthen our democracy. The Supreme Court has tended to interpret the First Amendment in a way that I think weakens and undermines our democracy. So, to be perfectly honest, in writing the First Amendment chapter I struggled with whether to present a critique of the Supreme Court’s First Amendment doctrine, or whether to argue that my proposal actually passes constitutional muster under existing First Amendment doctrine. I ended up doing more the latter. In taking that approach, my argument takes advantage of a lot of ambiguity in existing First Amendment doctrine, and construes that ambiguity in a way that gives Congress greater leeway, rather than narrower leeway, to regulate social media for the purpose of protecting democracy.
As I said earlier, my proposal involves cooperation among liberal democracies to regulate social media. That would take the form of an international agreement that is implemented by domestic legislation in different countries. Let’s assume Congress enacts legislation to implement that international agreement. Do I think that the current Supreme Court would say that the legislation violates the First Amendment? Unfortunately, I think they probably would say that the legislation is unconstitutional. I think the Court looks worse today in that respect than it did even a few years ago, when I finished writing the manuscript. The composition of the Court has changed since then in ways that make it more likely, rather than less likely, that the Court would invalidate my proposal under the First Amendment. But I still stand by the argument that, under a correct interpretation of the First Amendment, the proposed legislation is constitutional because it promotes one of the core goals of the First Amendment, which is to make sure that we maintain a healthy democracy in this country.
I’ll just add that the Court tends to adopt a very libertarian approach to the First Amendment, which views all government regulation with skepticism. I don’t think that’s the right way to frame the issue. I think we need government regulation of social media in order to protect our democracy, and to help ensure that we have a healthy information ecosystem not only in the United States, but in other liberal democracies as well.
Let me just say a little bit about my background. I’m a law professor. I’ve been a law professor for almost 25 years. But before I was a law professor, I spent nine years working in the Federal Government. During my time in the Federal Government, I was working primarily on US-Soviet arms control negotiations, but also negotiations of other multilateral arms control agreements. So, I actually have experience in designing and creating multilateral agreements that bring together representatives from 30 or 40 countries to work out agreement on an international system of technology regulation. When I was in government, I was dealing with very different kinds of technology than what we’re talking about here. Nevertheless, anytime you try to get agreement among 30 or 40 countries to regulate some complex technology, it’s not easy.
One thing has changed significantly since I was in government. In those days, when the US asserted a leadership role, we could bring along a lot of our allies with us, and they would tend to follow our lead. I think you need some country in a leadership role to make this happen internationally. It doesn’t just evolve organically. You need a country out there that takes the lead, advocates for the proposal, and pushes it through.
If you look at what’s happening with information technology, generally, and social media, particularly, Europeans have been a lot more active than the US in regulating this technology. The US has been largely paralyzed by partisan disagreements in Congress, whereas Europe has been much more aggressive. So it’s an open question. Can the European Union really play the role of leading a multilateral negotiation that brings in the US, Japan, Canada, Australia, and the UK – a group of countries that are outside the EU – in designing this kind of system? I think the EU is probably better placed to do this than the United States is today. I tried early in the Biden Administration to get the Biden Administration to take this on, and I didn’t have a whole lot of success. Still, if you compare my proposal to other kinds of multilateral agreements that deal with complex technical issues, on a technical level, my proposal is no more complicated than a whole range of other international agreements. But you need some country that can assert leadership, and it’s not entirely clear who that is right now. I wish I could say the United States is well placed to do it, but I have my doubts about that. So it may require more European leadership.
There are a few parts of the book I would highlight for readers. The first is the preface. The preface is only five pages long. I have to say a bit about the drafting of this book to put the preface in context. I actually submitted the manuscript to the publisher in November 2020. The publisher then sent it out for external review. While it was out for external review, January 6 happened. And so the comments I got back from the external reviewers tended to raise questions about why I was so concerned about Chinese and Russian information warfare. “Look at what is going on domestically,” said the reviewers. “Isn’t that a bigger problem?” I think that is a fair criticism. January 6 was a huge problem, and there’s no evidence that it was instigated by foreign agents. It was a largely domestic development. So the preface addresses the question of how we need to think differently about domestic disinformation versus foreign disinformation. A lot of that hinges on the First Amendment, because the First Amendment gives Congress much greater leeway to target Chinese and Russian agents than it does to target domestic sources of disinformation. If you look at it through a First Amendment lens, I do think that domestic disinformation is a big problem, but I think it requires a different kind of solution. I think it doesn’t violate the First Amendment to ban Chinese and Russian agents from social media. You have a much bigger problem if you try to ban the Proud Boys from social media. That really does create significant First Amendment issues and probably would violate the First Amendment.
Second, there are a couple of figures on pages 9 to 10 that present graphically the story about democratic decay and creeping authoritarianism. Those figures provide a useful illustration for readers to see what’s going on in that respect.
Third, at the very beginning of chapter 6, the book has a two-page summary of my regulatory proposal. That’s at pages 145 to 146. So those two pages, I think, give readers a nice snapshot of what I’m proposing. After that, chapters 6 and 7 present much more detail about my proposal for transnational regulation.
If you had asked me a year ago what the chances are that Congress will enact legislation along the lines I propose, I would have said very slim. Now, I actually think there’s an opening. I was just reading something recently, I believe in the New York Times, about how we’re starting to see bipartisan cooperation, or at least glimmers of bipartisan cooperation, to deal with artificial intelligence. I don’t claim to be an expert on artificial intelligence; I don’t have great technical expertise in that area. Based on the little bit I do know, artificial intelligence seems to offer both real benefits and real dangers. On the danger side, what it essentially does is magnify the kinds of harms that we have seen from social media, particularly the harm to democracy. So if Congress really wants to deal with the threat to democracy posed by artificial intelligence, I think the best way to do that is to deal with how artificial intelligence interacts with social media. This may actually provide an opening for people in Congress to reach across the aisle and come up with bipartisan cooperation that addresses the harm to democracy from social media, a harm that artificial intelligence essentially magnifies. That’s at least a possible optimistic note on how we might move forward.
One thing I’ll just add is that I want listeners to know that this book is available on audio as well as in print. Those who like to listen to books rather than read them can get it on Audible. The publisher hired a professional actor to read it, who I am told is very good. I’m excited that this is actually the first book I’ve done that’s available as an audiobook.