This first appeared in Seattle’s Post Alley on December 9th, 2021

On its face, Jack Dorsey’s resignation as CEO of Twitter last week was just another rearrangement of the Silicon Valley furniture, albeit an outsized one, given Dorsey’s iconic stature. At a different level though, the move opened a new chapter in the debate about social media platforms, regulation, the future of the internet, and ultimately how we define and allow free speech.

As the Jack Dorsey era of Twitter Inc. came to a close, a ten-year Twitter veteran, Parag Agrawal, formerly its Chief Technical Officer, took the helm. Agrawal’s appointment portends a faster-moving company. But it may also signal an even more algorithmically filtered, boosted and suppressed conversation in the years to come.

How so?

Imagine you had an algorithm that could instantly calculate the health or danger of any given tweet or online conversation. For instance, it might look in on a substantive, respectful debate among career astrophysicists and assign a positive score of 4,852,325 and climbing, but evaluate a hateful, racist tirade at -2,439,492, trending lower and lower.
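In crude, purely hypothetical form, such a scorer might look something like the sketch below. The keyword lists, weights and function names are invented for illustration and bear no relation to anything Twitter actually runs.

```python
# Purely hypothetical sketch of a conversation "health" scorer.
# Keyword lists and weights are invented for illustration only.
CONSTRUCTIVE_SIGNALS = {"evidence": 5, "source": 4, "data": 4, "respectfully": 3}
TOXIC_SIGNALS = {"idiot": -8, "hate": -10, "traitor": -7}

def score_tweet(text: str) -> int:
    """Return a signed 'health' score for a single tweet."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(CONSTRUCTIVE_SIGNALS.get(w, 0) + TOXIC_SIGNALS.get(w, 0) for w in words)

def score_conversation(tweets: list[str]) -> int:
    """A conversation's running score: it climbs with healthy replies and falls with toxic ones."""
    return sum(score_tweet(t) for t in tweets)

# A respectful, sourced exchange trends positive; a hateful pile-on trends negative.
print(score_conversation(["Respectfully, the data and source you cite support this",
                          "Here is more evidence from the archive"]))
print(score_conversation(["You idiot", "I hate everything you stand for"]))
```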

Wouldn’t such an algorithm be something you could use to make Twitter conversations healthier? Couldn’t it also be used to block bad actors and boost desired, good-faith discussion, thereby reducing harm and promoting peace? The man who once championed this tantalizing, risky idea within Twitter is its new Chief Executive Officer, Parag Agrawal.

So what, you say. But therein lies a fierce philosophical battle over how free speech should be defined, measured, protected and even suppressed online.

Twitter has become the dominant tool for the world’s real-time news consumers and purveyors, which, in a sense, is all of us. It has been especially useful for journalists, particularly for understanding and reporting on breaking news events, and it has become highly influential in politics and social change. As a result, the CEO of the Internet’s real-time public square is far more than a tech executive. Ultimately, he and the people he appoints, pays, and evaluates cast the deciding vote on consequential questions: What news and which political and scientific discussions are we allowed to consume? What are we allowed to know is happening right now? Who determines what can be expressed, and what cannot? What gets boosted, and who gets booted?

Dorsey stepped down via an email to his team on November 29, 2021, posting it to, where else, his Twitter account. He wrote that being “founder-led” can be a trap for a company, and that he wanted Twitter to grow beyond him. He expressed “bone deep” trust and confidence in his successor and friend, Agrawal.

But there were likely other reasons behind his decision too. Dorsey has been public about wanting to spend more time promoting Bitcoin; he’s called being a Bitcoin missionary his “dream job.” There’s also the small matter that his other day job has been running Block, Inc. (formerly known as Square), the ubiquitous small-business payments processor, which is now worth more than $80 billion and employs more than 5,000 people. Adding to the incentive: Dorsey owns a much bigger personal stake in Block than he does in Twitter.

A final, less discussed contributor might be simmering investor dissatisfaction. Twitter’s stock price is languishing in the mid-$40s, the same trading range it occupied eight years ago:

[Chart: Twitter (TWTR) stock price; snapshot taken December 9th, 2021]

And its user numbers, while growing, have not shown the rapid growth many investors expect; Twitter’s gains look small next to those of Facebook, Instagram and TikTok.

Activist investors Elliott Management and its ally Silver Lake Partners own significant stakes in Twitter, and they have pushed for new leadership and faster innovation. According to Axios, when Elliott Management resigned its board seat in 2020, it demanded and got two things in return: new management and a plan to increase the pace of innovation. Also looming large are regulatory moves, debates over user safety and privacy, and controversy over moderation.

Agrawal has impressive technical chops. He earned a BS in Computer Science and Engineering from the prestigious Indian Institute of Technology (IIT), then a PhD in Computer Science from Stanford University in 2012. He worked brief stints at Microsoft Research, AT&T Labs and Yahoo before joining Twitter in 2011, rising through the ranks over ten years. He has led the company’s machine learning efforts, and he has been intimately involved in a research project called “BlueSky,” a decentralized, peer-to-peer social network protocol.

Agrawal has moved quickly, shaking up Twitter’s leadership team. Head of design and research Dantley Davis is stepping down; the scuttlebutt is that Davis’s blunt, caustic management style rubbed too many employees the wrong way. Head of engineering Michael Montano is also departing by year’s end. Agrawal’s lines of authority are now more streamlined, and he has expressed a desire to impose more “operational rigor.”

“We want to be able to move quick and make decisions, and [Agrawal] is leading the way with that,” said Laura Yagerman, Twitter’s new head of corporate communications. Agrawal’s swift changes in key leadership positions suggest that Dorsey didn’t leave entirely of his own volition.

While Dr. Agrawal brings deep technical experience to the role of CEO, most outside observers are focused intently on his viewpoints regarding free speech and censorship.

Every day, voters, world leaders, journalists and health officials turn to Twitter to exchange ideas. As I write this, the public square is pondering the dangers (or potentially nascent optimistic signs) of a new COVID variant. Foreign Policy Twitter is abuzz about troops massing on Ukraine’s border and China’s activities in both Hong Kong and the South China Sea. Law Enforcement Twitter is asking the public for crowdsourced detective work on the latest tragic homicides.

What they all have in common is this: these stories often come to the world’s attention via Twitter. Twitter decides which types of speech are off-limits on its platform. It says who gets booted, and what gets boosted. In other words, it has a big role in defining the collective Overton window of online conversation. Ultimately, Twitter’s moderation policies, algorithms and (for now at least) human editorial team decide what can and cannot be said, what gets amplified, and what gets shut down.

Further, our world increasingly conflates the concepts of internet “consensus” and truth. So how do we go about deciding what information is true, and what is gaslighting? Which sources will Twitter deem “credible” and which untrustworthy? What labels will get slapped on your tweets?

The CEO of Twitter has an enormously powerful role in determining what does and doesn’t come to the public’s attention, what catches fire and what does not, and who is anointed with credibility. Agrawal knows this intimately; it’s been a big part of his work for the past several years. Twitter’s servers process nearly one billion tweets every day, and usage has swelled to nearly 220 million daily active users, with few signs of slowing.

More important, perhaps, is the highly influential nature of these users. Seth Godin called such influencers “sneezers of the Idea Virus.” Watch any cable TV news channel for more than fifteen minutes and you’re likely to encounter someone talking about what someone else has tweeted. Indeed, a very high number of politicians, journalists, celebrities, and government and policy officials use Twitter regularly to spread, consume or evaluate information. Twitter’s moderation policies can quickly fan an ember, or snuff it out.

During Dorsey’s tenure, Twitter came under withering fire for too hastily suppressing and blocking views. It also came under fire for the opposite reason: not acting fast enough to block and remove misinformation (for instance, “Gamergate,” and later “QAnon” and the communications surrounding January 6th).

Most recently, concern over Twitter’s moderation, blocking, amplification and suppression policies has been fiercest among civil libertarians, the right, and the center-right. Among the examples:

  • In October 2020, just weeks before the presidential election, Twitter blocked the New York Post for weeks over its explosive scoop on Hunter Biden’s laptop. Twitter first said the ban was because the materials were hacked, though to this day there is no definitive proof they were obtained that way; subsequent reporting by Politico this year independently confirmed the authenticity of several of those emails. Dorsey later apologized for the blocking, calling it a “total mistake,” though he wouldn’t say who made it.
  • Twitter locked the account of the sitting White House Press Secretary for sharing that Biden laptop story.
  • In October 2020, Twitter suspended Mark Morgan, then the acting Commissioner of U.S. Customs and Border Protection, for tweeting favorably about the border wall.
  • Twitter temporarily banned and then permanently suspended Donald Trump, then the sitting President of the United States, citing his repeated violations of its terms of service, specifically its Glorification of Violence policy. Yet Twitter does not ban organizations like the Taliban, nor does it suspend world leaders who threaten the existence of the nation of Israel; it generally removes only individual tweets.

Even before several of these incidents, in a 2018 interview at NYU, Dorsey admitted that Twitter’s conservative employees often don’t feel comfortable expressing their opinions. He also conceded both that Twitter is often gamed by bad-faith actors and that he’s not sure Twitter will ever build a perfect antidote to that manipulation. In 2020, a massive hack exposed the fact that Twitter has administrative banning and suppression tools which, among other things, allow its employees to prevent certain topics from trending, and which likely also keep certain users or specific tweets from showing up in “trending” sections or searches.

As Twitter’s influence rose, these decisions caused consternation among some lawmakers, and Dorsey was pressed to testify at multiple Congressional hearings, where he was asked about these instances and more: https://www.youtube.com/embed/dCb9ABk-BVk?version=3&rel=1&showsearch=0&showinfo=1&iv_load_policy=1&fs=1&hl=en-US&autohide=2&wmode=transparent

One big issue is “bots” (short for “robots”): automated programs that use Twitter’s platform and act as users. They amplify certain memes by posting content to Twitter, liking and retweeting particular content, and replying affirmatively to things they are programmed to agree with (and negatively to things they are not). They are a great example of how Twitter, in its “wild west” early era, often let its platform be manipulated.
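To make the mechanics concrete, here is a minimal, purely illustrative sketch of that kind of automation, assuming Twitter API v2 credentials and the third-party tweepy library. The credentials, search query and reply text are invented placeholders; the point is simply how little code this sort of amplification requires.

```python
# Illustrative sketch of bot-style amplification using the third-party tweepy
# library against Twitter's v2 API. All credentials and strings are placeholders.
import tweepy

client = tweepy.Client(
    consumer_key="YOUR_KEY",
    consumer_secret="YOUR_SECRET",
    access_token="YOUR_TOKEN",
    access_token_secret="YOUR_TOKEN_SECRET",
)

# Find recent tweets containing a phrase the bot is programmed to amplify...
results = client.search_recent_tweets(query='"some meme phrase"', max_results=10)

# ...then like and retweet each one, and post an affirming reply.
for tweet in results.data or []:
    client.like(tweet.id)
    client.retweet(tweet.id)
    client.create_tweet(text="Exactly right!", in_reply_to_tweet_id=tweet.id)
```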

Twitter’s initial response was slow; one has to remember that bots help inflate usage numbers, which in turn can create a feeling of traction (and ad-ready eyeballs). But bots are often designed with malicious intent, to skew the public’s perception of what’s catching fire or to lend credibility to false stories. Since 2016, Twitter has gotten more aggressive about cleaning out bots, and in 2018 it greatly restricted use of its application programming interface (API). Earlier this year, after years of hedging, Twitter finally took aggressive action to de-platform the conspiracy fringe group QAnon, suspending 70,000 related accounts after the January 6th riot at the United States Capitol. Dorsey regretted that this ban came “too late.”

The justification for these interventions often centers around harm. Or perhaps more accurately, it centers around what Twitter’s human and algorithmic decisionmakers judge in the snapshot moment to be “harmful.”

What’s Agrawal’s attitude toward free speech? While some civil libertarians and commentators on the political right initially cheered Dorsey’s departure, that enthusiasm quickly cooled. That’s because Agrawal has in the past signaled very clearly that he believes Twitter’s censorship policy should not be about free speech, but about reducing harm and even improving peace. You can get an idea of Agrawal’s philosophy from his extended remarks to MIT Technology Review in November 2018:

“[Twitter’s] role is not to be bound by the First Amendment, but our role is to serve a healthy public conversation and our moves are reflective of things that we believe lead to a healthier public conversation. The kinds of things that we do about this is, focus less on thinking about free speech, but thinking about how the times have changed. One of the changes today that we see is speech is easy on the internet. Most people can speak. Where our role is particularly emphasized is who can be heard. The scarce commodity today is attention. There’s a lot of content out there. A lot of tweets out there, not all of it gets attention, some subset of it gets attention. And so increasingly our role is moving towards how we recommend content and that sort of, is, is, a struggle that we’re working through in terms of how we make sure these recommendation systems that we’re building, how we direct people’s attention is leading to a healthy public conversation that is most participatory.” (Emphasis added.)

In 2010, he tweeted:

The double negative is a bit cryptic, but one interpretation is that book banning might be not only acceptable but even desirable if it increases societal peace. This sentiment is most definitely not aligned with those who believe, as I generally do, that the best antidote to speech with which you disagree is more and better speech.

As one wag put it, “I’ll be happy to ban all forms of hate speech, as long as you let me define what it is. Deal?”

More of Agrawal’s outlook can be discerned from that same November 2018 MIT Technology Review interview. From roughly 2015 to 2018, he and the rest of the technical team at Twitter put great effort into determining whether the health of any given public conversation could be scored algorithmically. Thus far, that effort appears to have yielded disappointment. Yet Agrawal seems undaunted in the quest.

Agrawal’s holy grail, the algorithmic scoring of the “health” or potential “harm” of a public conversation, isn’t yet fully possible. Thus Twitter employs humans to curate discussion and to block, ban, suppress and promote (through sorting) certain expressions over others. Given that human editors are expensive, Twitter focuses them on a few subjects; Agrawal specifically names pandemic response and election integrity as two areas he deems most appropriate for such intervention. Yet keep in mind that he also clearly believes automated algorithmic “scoring” of healthy conversation is both possible and desirable.

Our approach to it isn’t to try to identify or flag all potential misinformation. But our approach is rooted in trying to avoid specific harm that misleading information can cause.

Dr. Parag Agrawal, Twitter’s new CEO, MIT Technology Review, November 2018

While controlling discussion to promote peace might seem an unalloyed good, it’s not at all clear that a harm-reducing, peace-promoting Internet public square is necessarily a truth-promoting one. For one thing, truth doesn’t care about its impact, and it isn’t always revealed right away. Our understanding and interpretation of facts change over time; increasingly often, things we once “knew” to be certain are revealed to be quite different. Would such an algorithm optimize for the wrong things, leaving us less informed in the process? These and other conundra confront Twitter’s new CEO, who took office last week.

In a way, Agrawal’s appointment as Twitter CEO can be seen as an important waypoint in the Internet’s transformation from techno-libertarianism to a much more progressive worldview, one willing to use a heavier hand. An anti-establishment, civil-libertarian outlook used to typify many Internet and tech leaders. Yet quite steadily over the past decade, a progressive worldview has grown dominant. One side values free speech and civil liberties as paramount; the other believes societal peace, equity, and least “harm” trump other goals, and for some, if free speech needs to be sacrificed to achieve them, so be it. Throughout his tenure, Dorsey himself showed elements of each philosophy.

Agrawal may be a technocrat progressive. In 2017, he donated to the ACLU to support its suit against President Trump. He has also likened religion to a pyramid scheme.

Yet it would be highly inaccurate to characterize Agrawal as a censorship extremist. He advocates more open access to Twitter’s archives through application programming interfaces (APIs) and more third-party analysis of what’s discussed on the platform.

One hopeful sign is that Agrawal has already experienced his own “my old tweets were taken greatly out of context” moment, immediately after being named Twitter’s new CEO. Critics on the right seized on a tweet of his from October 26th, 2010, suggesting it somehow showed him equating white people with racists.

But as he quickly explained, “I was quoting Asif Mandvi from The Daily Show,” noting that his intent was precisely the opposite. Agrawal was joking about the harm of stereotypes. He was of course not making a factual statement, but using sarcasm to make a larger point.

As someone who tends to side with civil libertarians on free speech, I hope he remembers that it was his ability to respond, to clarify with more speech, that conveyed his true meaning far more clearly than the original 280 characters could. Wasn’t it better for him that he could quickly dispel the controversy and continue to engage, rather than be banned because some lower-level employee determined that his first tweet caused harm under at least one subjective interpretation?

Perhaps the central conundrum is that content moderation can never be made perfectly “fair” or least-harm-imposing. No algorithm or human will make the correct decision at every moment. Thus guidelines must exist that define an optimal content moderation policy, and for that, the platform’s leader has to decide what such a policy should optimize. Truth? Liberty? Fairness? Viewpoint Diversity? Peace?

Back to the thought exercise that started this piece. Would everyone score the “harm” of a given conversation the same way, or judge the credibility and intent of the speakers identically? Obviously not. Algorithms, especially machine learning algorithms, are tremendously powerful, but they can also give an illusion of objective authority. In reality, they are only as good as their training and evaluation data sets, and those sets encode explicit goals. Each metric chosen for optimization (Peace, Truth, Viewpoint Diversity, etc.) would yield a very different algorithm, and the result would be very different content moderation, amplification and suppression policies.
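As a toy illustration of that point (and emphatically not a depiction of Twitter’s actual systems), consider training the same simple text classifier twice on the same invented tweets, with labels generated under two different hypothetical objectives; the resulting models can disagree about the very same new tweet.

```python
# Toy illustration only: invented tweets, invented labels under two hypothetical
# moderation objectives. Nothing here reflects Twitter's real data or models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "This paper's methodology is flawed and should be retracted",
    "You people are what's wrong with this country",
    "Unverified thread: the new vaccine may be dangerous, share before it's deleted",
    "Grateful for the nurses working double shifts this week",
]

labels_civility = [0, 1, 0, 0]  # objective: flag only hostile tone ("peace")
labels_harm     = [0, 1, 1, 0]  # objective: also flag unverified health claims ("harm")

def train(labels):
    # Same features, same model family; only the labeling objective differs.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(tweets, labels)
    return model

civility_model = train(labels_civility)
harm_model = train(labels_harm)

new_tweet = ["Unverified report says the vaccine may be dangerous"]
# Because each classifier was fit to a different notion of "bad content",
# their verdicts on the same tweet can differ.
print("civility objective:", civility_model.predict(new_tweet))
print("harm objective:   ", harm_model.predict(new_tweet))
```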

Agrawal’s views will likely evolve, but for the moment he appears to prioritize what he considers “healthy conversation” and the avoidance of “harm.” How he actually defines health and harm will matter a great deal, for it will determine what we come to know as true, and from whom we hear.

Jack Dorsey’s resignation letter concludes with this statement: “My one wish is for Twitter Inc. to be the most transparent company in the world.”

That would be most welcome. But they have a very long way to go indeed. Godspeed, Dr. Agrawal.
