Opinion: What I Learned in Twitter Purgatory
One morning last week, around 7:30, I woke up, rolled over in bed, and reached for my phone to check my email. With one eye open, I quickly scrolled through listserv messages, faculty notices, and some junk, and then I saw something that made me sit up, open both eyes, and smile: a message with the subject line “Your Twitter account has been suspended.”
For most people, having their account suspended from a social-media platform is not at all funny. Being cut off from Facebook, Twitter, or YouTube can cause real professional, logistical, and psychological harms: losing touch with business contacts, losing access to third-party sites, becoming unreachable by friends. In a time of quarantine and pandemic, when people are isolated from friends and loved ones for months at a time, access to online communities and conversations is more vital than ever.
But for better or worse, I’m not most users. For the past five years I have researched and written about the rules that govern online speech and how they are enforced. Though most of my research has focused on Facebook—I spent the last year embedded at the company with the team charged with building out a new external-appeals system for just this kind of problem—many of Twitter’s rules and policy rationales are similar.
The tweet that had landed me in social-media jail was one I’d sent the night before. I have been co-hosting a daily live-streamed show with Atlantic contributing writer Benjamin Wittes since the beginning of the pandemic, and our guest last Tuesday night had been the writer Molly Jong-Fast. In the midst of the show, she had had a sotto voce conversation with her spouse, who had reached in on camera to try to take away a plate. Jong-Fast had jokingly responded, “If you take that I will kill you,” before turning back to the camera and saying, with a smile, “Working at home is a delightful adventure.” Although her response was funny in the moment, my quotation of it in a tweet didn’t exactly capture the humor.
But I wasn’t suspended from Twitter for telling a bad joke. Because I’d included the words “I will kill you,” I was banned for violating Twitter’s rules against incitement to violence. In the simple notification I received from the company, I could see a lot that many users probably couldn’t. Content that violates the rules of a social-media platform gets flagged in one of two ways: by other users reporting it or through algorithmic identification. After the recent shootings in Kenosha, Wisconsin, social-media companies came under intense pressure to take down posts and accounts that threatened or called for violence. In such a moment, Twitter can cast a wide net for potential infractions by using algorithms that also generate a large number of false positives—say, by flagging lots of postings that include the word “kill.”
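To see why such a net sweeps in jokes, consider a minimal sketch of keyword-based flagging. This is a hypothetical illustration, not Twitter’s actual system, which is far more sophisticated and not public; the word list and function name here are invented for the example:

```python
# Toy keyword filter (hypothetical; Twitter's real classifiers are
# far more sophisticated, and their details are not public).
VIOLENT_KEYWORDS = {"kill", "shoot", "bomb"}  # illustrative watchlist

def flag_for_review(tweet: str) -> bool:
    """Flag a tweet for human review if it contains any watchlisted word."""
    words = {word.strip(".,!?\"'").lower() for word in tweet.split()}
    return not VIOLENT_KEYWORDS.isdisjoint(words)

# A quoted joke and a genuine threat look identical to this filter:
print(flag_for_review('"If you take that I will kill you," she joked.'))  # True
print(flag_for_review("I will kill you."))                                # True
print(flag_for_review("Working at home is a delightful adventure."))      # False
```

A filter like this sees only words, not intent, which is why the next step in the pipeline matters so much.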
Once a posting is flagged as potentially problematic speech, a human likely still has to review it—but that human is probably a contract worker at a call center–type facility outside the United States. That’s not necessarily a problem in itself, but what counts as humor, satire, or political protest depends heavily on cultural context—context a reviewer half a world away may not share. In addition, many such centers have been shut down because of the pandemic, and the number of humans reviewing content has been dramatically reduced, making accurate content moderation even harder.
So an overly literal and aggressive AI likely flagged my tweet; a human likely reviewed it, did not get the humor (understandably), and removed it. In normal times, that would have meant only my tweet coming down, but these are not normal times, so my entire account was suspended. Right now, the public-relations cost to Twitter of keeping up an account that might be spreading violent propaganda is very high—and the downside to the company of suspending the average user is very low.
But I am in a better position than most users to potentially make trouble for the company. Not only do I study speech rules for a living, but I also talk to a lot of journalists about content moderation. Because of my professional focus, I also have an unusually high level of access to people at the company. Which is presumably why, less than an hour after suspending my account, the company reinstated it.
Despite all the shininess of new technology, getting suspended accounts or deleted posts restored so often works the good old-fashioned way: via influence, power, and connections. It’s an unfair system, one that favors elites and disfavors the average user who doesn’t know someone at a technology company, or a government official, or someone with 100,000 followers. And because I disagree with that kind of special treatment, I resolved to wait out my appeal—a process that can take up to two weeks—even though I had connections.
But a screengrab of Twitter’s suspension email, which I had sent to a few friends, quickly made its way to someone inside the company. Another friend tweeted out the image. Many users found humor in a “leading expert” on content moderation being banned for violating content rules, and retweeted it. Twenty minutes later, my account was restored.
That’s what it’s like to be on the sweet side of the curve and have a bunch of privilege. That’s also not how speech regulation should happen. By removing old gatekeepers and making self-publication possible, the internet has been a tremendous democratizing force, expanding both the freedom to speak and the potential audience. But not everything has changed. Many features of old-school cronyism and power imbalances have simply been reproduced online, and some problems—bullying, harassment, abuse, hate speech—have been made much worse.
If society is going to try to correct for the bad, we have to simultaneously try to build out the good. Doing this is not easy. Many people offer simplistic, reactionary opinions about Big Tech: “take down more news” or “create panels to judge truth” or “regulate tech companies” or “break tech companies up.” But these angry talking points don’t speak to how any of these policies would work, or how they would solve any problems in the long term. “I will kill you” is a threat, and it is a joke. Censorship and authoritarian surveillance are a real threat, and so are conspiracy theories, misinformation, and cult formation. That you cannot attempt to solve the problems on one side of the spectrum without worsening the issues on the other is exactly what makes online-speech regulation a hard problem, and one deserving of more deliberation than you can fit in 280 characters.
Professor Kate Klonick joined the Law School faculty in 2018. She teaches Property, Internet Law, and a seminar on Information Privacy at St. John’s and is a fellow at the Information Society Project at Yale Law School. This piece originally appeared in The Atlantic on September 8, 2020.