The tech journalist Casey Newton recently described the most common question asked by Birdwatch users so far as “Is this insult fair?” That seems accurate to me. I did find some more consequential questions raised, such as “Who is actually part of Antifa?” and “Should we take the COVID-19 vaccine?” and “Does pineapple belong on pizza?” (My favorite note, if you made me choose, was added to a tweet from The Economist about the growing trend of insect-based food products. “We’re not going to eat insects,” a Birdwatcher countered.) But overwhelmingly, as I scrolled through the reports, the objections I found were personal.
In theory, the Antis—or other, more nefarious groups with single-minded concerns—will not be able to wage sustained war on their nemeses through the Birdwatch system. Their notes aren’t likely to produce the “diverse consensus” necessary for upranking, because other people are not likely to care about them. But that’s not going to stop them from trying. When I messaged a 20-year-old One Direction fan who had tweeted, “larries we coming for you” in response to the Birdwatch announcement, she told me that she wasn’t actually part of the pilot program, but in the future, she would definitely add notes to tweets from Larries. (She asked to be anonymous, because she doesn’t want her real name associated with her stan account.)
“Larries are spreading misinformation,” she said. She was happy to hear that other Antis had already started using Birdwatch to prosecute their case, and said that if I was able to get in touch with any of them, I should “send them a big hug.”
When Birdwatch was first announced, it was met with an understandable knee-jerk reaction. Oh, good idea! Let’s put Twitter users in charge of deciding what is “reality.” Even seen in the best light, it represents an untested iteration of the existing system for labeling misinformation, which itself hasn’t been rigorously evaluated. Savvas Zannettou, a researcher at the Max Planck Institute for Informatics who has studied the impact of Twitter’s warning labels, cautions that Birdwatch could easily go wrong. “I will have to see how it works in practice,” he told me, but “I’m pretty confident that people will abuse and troll the system.” He mentioned the possibility of 4chan brigades; I saw several suggestions that warring stan armies will be the biggest abusers.
Still, Zannettou considered the idea a good one, as long as it could be executed in a way that minimizes manipulation. Other online spaces exist where crowdsourced definitions of fact and reality have produced reliable results. Wikipedia has gotten better and better over time. WikiHow, its founder told me in 2019, is a project that “doesn’t work in theory; it only works in practice.” These sites function because the people who contribute to them and edit them identify themselves as members of a community working toward a common goal. They also function because of hierarchies: Some experienced, committed editors have powers that others don’t. On Wikipedia these are called admins, a role added in the site’s first year to address vandalism by trolls and “editing wars” between egotistical contributors. In a 2010 study, researchers found that the majority of admins considered editing Wikipedia “rewarding” or “very rewarding,” and that 73 percent had been doing so for more than three years.