#TwitterFiles: The Complete List

An index of all the Twitter Files threads, including summaries.

Twitter Files: The Complete List

The Twitter Files are a set of Twitter threads based on internal Twitter Inc. documents that were made public starting in December 2022. Here’s a complete list as of this writing, with direct links to both the Twitter threads and relevant hashtags. I offer up my own subjective summary of some key points, but I urge you to visit the threads and form your own opinion. I’ll attempt to keep this list up to date as new ones come in.

Nomenclature: I refer to the pre-Musk era at the company as Twitter 1.0.

Subject / Author(s)
1. Twitter and the Hunter Biden Laptop Story, Dec 2 2022

Summary: Twitter blocked the New York Post from sharing a bombshell October 2020 story about the contents of Hunter Biden’s laptop, just prior to the 2020 US presidential election. It also suppressed people re-sharing the story, including the White House Press Secretary. Twitter attempted to justify this under its “hacked materials” policy, even though there was considerable debate about whether that policy legitimately applied.

1a. Twitter Files Supplemental

Summary: The Twitter Files 1 thread was delayed by the surprising revelation that then-employee Jim Baker, former FBI General Counsel and at the time Twitter Deputy General Counsel, had been reviewing all materials before handing them to the journalists Musk invited to Twitter HQ. (Musk let Baker go.) Bari Weiss uncovered the Baker story.

Discussion: (#TwitterFiles)

MT
2. Twitter’s Secret Blacklists: Shadow Banning and “Visibility Filtering” of users, Dec 8, 2022

Summary: Was Twitter 1.0 “shadow-banning?” Twitter executives Jack Dorsey and Vijaya Gadde have frequently claimed that Twitter does not shadow-ban, but multiple tools exist within Twitter to limit the tweet distribution and visibility of a given account. “Do Not Amplify” settings exist, as do several settings around propagation of tweets to others.

Discussion: (#TwitterFiles2)
BW
3. The Removal of Donald Trump Part One: Oct 2020-Jan 6 2021, Dec 9, 2022

Summary: On January 8th, 2021, Twitter summarily banned the 45th President of the United States from its platform. What led up to that decision, and what were some of the internal conversations surrounding it? Part 1 of 3.

Discussion: (#TwitterFiles3)
MT
4. United States Capitol Attack January 6th 2021, Dec 10 2022

Summary: The ban of Donald Trump from Twitter stemmed directly from the January 6th 2021 attack on the United States Capitol by supporters/protestors/rioters. The stunning event led Twitter executives to finally make the call they had long discussed. Part 2 of 3

Discussion: (#TwitterFiles4)
MS
5. The Removal of Trump from Twitter, January 8th 2021: Dec 12, 2022

Summary: Trump was banned from Twitter on January 8th, 2021. Though Twitter 1.0 was always adjusting discussion rules on the platform, it’s notable that on January 7th, Twitter staff adjusted several key rules to allow for and justify the banning of the then-President. Part 3 of 3

Discussion: (#TwitterFiles5)
BW
6. FBI & Hunter Biden Laptop, Dec 16, 2022

Summary: The FBI attempted to discredit factual information about Hunter Biden’s foreign business activities both after and even before the NY Post revealed the contents of his laptop. Why would the FBI be doing this? And what channels existed between the FBI and Twitter 1.0?

Discussion: (#TwitterFiles6)
MS
7. Twitter, The FBI Subsidiary, Dec 19, 2022

Summary: Twitter’s contact with the FBI was constant and pervasive, both social and professional. A surprising number of communications from the FBI included requests to take action on election misinformation, even involving joke tweets from low-follower and satirical accounts. The FBI asked Twitter to look at certain accounts, suggesting that they “may potentially constitute violations of Terms of Service.”

Discussion: (#TwitterFiles7)
MT
8. How Twitter Quietly Aided the Pentagon’s Covert Online PsyOp Campaign, Dec 20, 2022

Summary: While Twitter 1.0 made public assurances that it would detect and thwart government-based manipulation, behind the scenes it gave approval and special protection to a branch of the US military conducting psychological influence operations in certain instances.

Discussion: (#TwitterFiles8)
LF
9. Twitter and “Other Government Agencies”, Dec 24 2022

Summary: The FBI responds to Twitter Files 7, vigorously disputing some of the framing and reporting. Taibbi responds to FBI communication and press releases, and further shares internal documents related to FBI and “other government agency” correspondence.

Discussion: (#TwitterFiles9)
MT
10. How Twitter Rigged the COVID Debate, Dec 26, 2022

Summary: David Zweig illustrates how Twitter 1.0 reduced the visibility of true but perhaps inconvenient COVID information, and discredited doctors and other experts who disagreed.

Discussion: (#TwitterFiles10)
DZ
11 and 12.

How Twitter Let the Intelligence Community In, Jan 3, 2023

Summary: Twitter 1.0 responds to governmental inquiry regarding some Russian-linked accounts, attempting to keep the governmental and press focus on rival Facebook.

Twitter and the FBI “Belly Button”, Jan 3 2023

Summary: Twitter 1.0 works diligently to resist acting on State Department moderation requests. In the end, it allowed the State Department to reach it via the FBI, which FBI agent Chan calls “the belly button” of the United States government.

Discussion: (#TwitterFiles11)
MT
13. Twitter and Suppression of COVID Vaccine Debate, Jan 9 2023

Summary: Scott Gottlieb, a Pfizer board member, used his influence to suppress debate on COVID vaccines, including from a former head of the FDA. Twitter 1.0 frets about the damage the effectiveness of natural immunity might do to vaccine uptake, and slaps a label on a key tweet from former FDA commissioner Brett Giroir touting the strength of natural immunity.

Discussion: (#TwitterFiles13)
AB
14. The Russiagate Lies One: The Fake Tale of Russian Bots and the #ReleaseTheMemo Hashtag, Jan 12 2023

Summary: On January 18th 2018, Republican Congressman Devin Nunes submitted a classified memo to the House Intelligence Committee listing abuses at the FBI in obtaining surveillance approval of Trump-connected figures. His memo also called into question the veracity and reliability of the Steele “Dossier.” #ReleaseTheMemo started trending, but Democrats attempted to discredit this by saying it was all being amplified by Russian bots and trolls, referencing Hamilton 68, a dashboard powered by the Twitter API (see Twitter Files #15, next in the series). Though Nunes’ assertions would eventually be largely verified in a report by the Justice Department, a significant PR campaign was launched to discredit the memo, labeling it a “joke.” This Twitter Files thread discusses Democrats’ desire to paint the #ReleaseTheMemo hashtag as being of Russian origin/amplification, and Twitter’s compliance with those requests. The important bit: Twitter executives knew the Hamilton 68 Dashboard was fraudulent from about 2017 onward, yet did nothing to discredit it in the media, allowing this DNC-message-benefitting sham to continue.

Discussion: (#TwitterFiles14)
MT
15. Move Over, Jayson Blair: Twitter Files Expose Next Great Media Fraud (Hamilton 68 Dashboard), Jan 27 2023

Summary: This thread delves into the Hamilton 68 dashboard referenced in Twitter Files 14 above. Twitter knew as early as October 2017 that it was simply pulling tweets from a curated list of about 650 accounts, and also knew that very few of those accounts were actually Russian. Twitter knew that the media and Democratic officials were citing the Hamilton 68 Dashboard as somehow credible. Though Twitter executive Yoel Roth tried several times to raise internal concern about the integrity of this tool, he was overruled within Twitter, and Twitter 1.0 never directly discredited the tool or explained how it worked.

Discussion: (#TwitterFiles15)
MT
Complete List of “Twitter Files” Threads

Authors

MT: Matt Taibbi, Racket News: @mtaibbi

MS: Michael Shellenberger, Michael Shellenberger on Substack: @shellenbergermd

BW: Bari Weiss, The Free Press, @bariweiss

LF: Lee Fang, The Intercept, @lhfang

AB: Alex Berenson, Alex Berenson on Substack, @alexberenson

DZ: David Zweig, The New Yorker, New York Times, Wired, @davidzweig

Meta-Story: Behind the Scenes, In the Authors’ Words

Our Reporting at Twitter, Bari Weiss, The Free Press, December 15 2022

Interview with Matt Taibbi, by Russell Brand

Wait, Twitter Knew The “Russian Bot” Narrative Was Fake… For Five Years?

In the most explosive Twitter Files yet, Matt Taibbi uncovers the agitprop-laundering fraud engineered by a neoliberal think-tank.

It’s been a little more than three months since Elon Musk burst into the Twitter headquarters in San Francisco, bathroom sink in tow, wryly captioning his tweeted photo “Let That Sink In.” In the time since (has it really only been fourteen weeks?), Musk has slashed staff and made many internal changes. In a type of “Sunshine Committee” initiative, he’s invited a team of independent journalists to Twitter’s HQ to rifle through internal communications. Musk is letting them uncover what they may. His only proviso is that these journalists must first publish what they discover about the Twitter 1.0 era… on Twitter itself.

And thus, the #TwitterFiles were born. We’re now up to thread Number 15, one of the most interesting ones yet.

In #TwitterFiles 15, published on January 27th 2023, journalist Matt Taibbi documents how an ostensibly bipartisan Washington DC political organization leveraged Twitter to disseminate a mysterious dashboard purporting to reveal the big online narratives that “Russian bots” were amplifying. The dashboard was called “Hamilton 68”; its name stems from Federalist Paper 68, a treatise warning against foreign influence in elections, authored by Alexander Hamilton in 1788. Hamilton supplied the name, and a thin veneer of high tech and well-credentialed advisors supplied gravitas.

The organization behind this media tool has one of those “Who Can Possibly Be Against This?” institute names: The Alliance for Securing Democracy (ASD). Its Advisory Board includes ex-FBI and Homeland Security staffers (Michael Chertoff, Mike Rogers), Obama Administration and DNC officials (Michael McFaul, John Podesta, Nicole Wong), academics, European officials, and formerly conservative pundits (Bill Kristol). Taken as a whole, the ASD is comprised largely of officials affiliated with the Democratic party and the nation’s security apparatus. The Hamilton 68 Dashboard project was led by former FBI counterintelligence official and current MSNBC contributor Clint Watts.

From 2017 up until about one week ago, the Hamilton 68 Dashboard was highly regarded, and cited by numerous mainstream media outlets, from The Washington Post to MSNBC to Politifact to Mother Jones to Business Insider to Fast Company. It was the genesis for countless news stories from 2017 through 2022. Maybe you read Politico’s The Russian Bots are Coming. Or the Washington Post’s Russia-linked accounts are tweeting their support of embattled Fox News host Laura Ingraham.

Or maybe you watched one of CNN’s many stories on the growing threat of Russian bots, such as this one:

Or maybe you watched this piece on The PBS News Hour, warning about how “Russians” are amplifying hashtags like #ReleaseTheMemo:

Or maybe you caught MSNBC’s Stephanie Ruhle casually asserting that “Russians are amplifying this hashtag”, an assertion which came from Hamilton 68 output:

No matter where we heard it, millions of us heard it. And read it. “The Russians are amplifying these terms on Twitter!”

Before we go further, let’s put one thing to bed: Is Russian bot activity, at least to some extent, real? Yes, it is.

Clearly, disinformation efforts have been underway since the dawn of communications, through journalism, radio, television, the Cold War, computer networking, and then, greatly accelerated during the era of social media. Foreign troll and bot activity has been documented first-hand. For that matter, we in the US are no doubt sending and amplifying messages their way too.

But the fraudulent “Hamilton 68” project by ASD deceptively leveraged public desire for bipartisan monitoring, with only the thinnest of high-tech patinas for partisan political gain.

How so? Here’s the shocker: The only thing behind the vaunted “Hamilton 68” Dashboard was… a list. No, not some algorithmically curated list looking at, say, the IP addresses of tweeters. Nor was it a list of known Russian agents, nor even frequent robotic re-tweeters of Kremlin agitprop.

No, the list was simply a bunch of accounts on Twitter that Hamilton 68 staffers hand-picked, and then summarily declared to be Russian bots or Russian-affiliated. The Hamilton 68 Dashboard was simply a list of these 648 Twitter accounts, right-leaning accounts for sure, but in no way provably Russian “bots.” While a handful of Russian accounts were sprinkled into the 648, Russian accounts didn’t even represent the majority. The majority were merely conservative-leaning US, UK or Canadian citizens. You could just as easily have curated your own 648-account list. Had Hamilton 68 staffers selected a list of teens, their rigorous “analysis” would have implied “The Russians are amplifying the #TidePodChallenge on Twitter.”

Get that? Quite a racket. Assemble a heavyweight panel of credentialed experts. Build a list of accounts that tend to favor the messages of your political opponents. Label it “Russian Disinformation,” and add a veneer of high-tech and state-apparatus gravitas. Critically, keep the methodology secret. Then, feed this “advanced dashboard” to the media, and boom — endless “news” pieces about — wouldn’t you know it? — Russian bots preferring GOP-aligned messaging. Opposition research PR has never been so easy.
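The “racket” described above amounts to remarkably little code. Here’s a minimal, purely illustrative sketch in Python — the account names and tweets are invented, and this is not ASD’s actual implementation (whose methodology was kept secret); it just shows that tallying hashtags over a hand-picked list requires no bot detection or attribution at all:

```python
from collections import Counter

# Hypothetical stand-in for Hamilton 68's hand-picked list of ~648 accounts.
# In the real dashboard, inclusion on this list was a subjective human choice.
CURATED_ACCOUNTS = {"patriot_dave", "maga_mom_1776", "actual_russian_troll"}

def trending_hashtags(tweets, accounts):
    """Tally hashtags in tweets authored by the curated accounts only.

    Note what is absent: no IP analysis, no bot scoring, no attribution.
    Whatever the hand-picked accounts tweet is what gets reported.
    """
    counts = Counter()
    for author, text in tweets:
        if author in accounts:
            counts.update(word for word in text.split() if word.startswith("#"))
    return counts.most_common()

sample_tweets = [
    ("patriot_dave", "Everyone needs to read this #ReleaseTheMemo"),
    ("maga_mom_1776", "Share widely! #ReleaseTheMemo #FireMcMaster"),
    ("random_bystander", "#ReleaseTheMemo"),  # ignored: not on the list
]

print(trending_hashtags(sample_tweets, CURATED_ACCOUNTS))
# [('#ReleaseTheMemo', 2), ('#FireMcMaster', 1)]
```

By construction, the output simply reflects the list’s composition: pick mostly right-leaning accounts, and the “Russian bots” will appear to amplify right-leaning hashtags.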

According to the Wayback Machine, ASD trumpeted the Hamilton 68 Dashboard thusly:

These accounts were selected for their relationship to Russian-sponsored influence and disinformation campaigns, and not because of any domestic political content.

We have monitored these datasets for months in order to verify their relevance to Russian disinformation programs targeting the United States.

…this will provide a resource for journalists to appropriately identify Russian-sponsored information campaigns.

ASD Website, Hamilton 68 Dashboard, 2017-2022 (now updated)

What the ASD primed the media to run with as “Russian disinformation” was nothing more than the thoughts of a group of largely pro-Trump accounts on Twitter, hand-picked by ASD, a neoliberal think-tank. There was no algorithm, no science, nothing behind it other than subjective judgment.

Worse, Twitter knew about Hamilton 68’s utter lack of legitimacy for five years, and never bothered to directly expose the sham or cut off Hamilton 68’s access to its API. In 2017, Twitter executive Yoel Roth reverse-engineered what the Hamilton 68 Dashboard was doing by looking at its Twitter Application Programming Interface (API) calls. He pulled back the curtain, and learned that it was nothing more than a curated list of 648 accounts. On October 3, 2017, Roth wrote “It’s so weird and self-selecting, and they’re unwilling to be transparent and defend their selection. I think we need to just call out this bullshit for what it is.” Three months later, he wrote that “the Hamilton dashboard falsely accuses a bunch of right-leaning accounts of being Russian bots.”

On October 3 2017, Roth writes to his colleagues:

The selection of accounts is… bizarre, and seemingly quite arbitrary. They appear to strongly preference pro-Trump accounts, which they use to assert that Russia is expressing a preference for Trump even though there’s not good evidence that any of the accounts they selected are or are not actually Russian.

Yoel Roth to colleagues, internal email, October 3 2017

And later, Roth writes “Real people need to know they’ve been unilaterally labeled Russian stooges without evidence or recourse.”

Russian bots were blamed for hyping the #ParklandShooting hashtag, #FireMcMaster, #SchumerShutdown, #WalkAway, #ReleaseTheMemo and more. If you remember any of those episodes, you can probably recall that somewhere in your media diet, someone suggested that the Russians were amplifying them. It was all based upon this phony list.

Taibbi shared a sample of just some of the stories this dashboard ultimately fed:


Ironically-named “Politifact” used it as the basis for several stories, including this one. Note that Hamilton 68 is cited as a source:

https://www.politifact.com/article/2017/dec/21/russia-social-media-2016-false-inflammatory/

Like the piece above, basically none of these publications seem to be correcting their stories, or explaining clearly to their readers that the Hamilton 68 Dashboard upon which they generated oodles of pieces was essentially a sham.

By October 2017, Twitter executive Yoel Roth noticed that a lot of media stories were springing from this disinformation, and internally urged that Twitter make this clear. Yet Twitter executives demurred. Taibbi puts it this way: “Twitter didn’t have the guts to call out Hamilton 68 publicly, but did try to speak to reporters off the record. ‘Reporters are chafing,’ said Twitter communications executive Emily Horne. ‘It’s like shouting into a void.'”

Emily Horne, a Twitter communications VP who was among those putting the damper on exposing the sham, would soon become Biden White House and NSC spokesperson.

Yoel Roth comes across as sincere and heroic in his efforts to raise alarm bells from within Twitter in 2017 and early 2018. But Twitter executives like Emily Horne, as well as (presumably) legal, policy and trust lead Vijaya Gadde, shut him down.

As a result, “journalists” in publications ranging from The Washington Post to Politifact to the New York Times continued to amplify the fake alarm that the ASD dashboard generated. Fake news begat fake news, until the White House found it imperative to appoint a new “Disinformation Czar,” the memorable (and meme-able) Nina Jankowicz.

Yoel Roth is a fascinating, complex character. By 2020 he had become Twitter’s head of Trust and Safety, a role in which he made a lot of questionable decisions, including deplatforming people re-sharing the Hunter Biden laptop story, among them the White House Press Secretary. Roth was even involved in Twitter’s decision to permanently boot the president of the United States. Musk at first considered Roth trustworthy (though with different political viewpoints), but by late November 2022, Roth had left the company. Roth’s character arc would be a very interesting one to profile for the inevitable “Inside Twitter” documentary.

If you’re looking for the news outlets which were earnestly duped and actually want to be honest and forthcoming with their readers, check to see if they’re reporting on the Hamilton 68 scandal. Are they explaining to their readers the times they relied upon this now-discredited dashboard? Thus far, it’s not encouraging. Neither CNN nor The Washington Post has any mention of “Hamilton 68” this year so far.

A parting thought: Whether you like Musk or not, we wouldn’t have known about any of this successful effort to deceive the American public had Musk not purchased Twitter and let journalists look behind the curtain. Had Musk not shelled out $44 billion, we very likely would still be watching and reading breathless stories amplifying how “the Russians are coming, and they sure do like these GOP hashtags” on Twitter. These claims would be based on a lie. Twitter leadership would know, and ASD’s advisory board would presumably know. And no one would say a word about it. Let that sink in.

Read Taibbi’s full thread on Twitter here, complete with screenshots and source material: The Hamilton 68 Scandal.

Speculation about Musk’s Intentions with Twitter Goes Into Overdrive

Musk decides not to join Twitter’s board, which likely portends poison pill adoption by Twitter, and potentially a move toward a greater stake by Musk.

It’s been a busy weekend for Twitter’s CEO and board. Sunday afternoon, Twitter’s chief executive Parag Agrawal announced that Elon Musk would not be joining its board of directors. That morning, Musk had rejected the offer of a board seat, which came in exchange for capping his stake at 14.9% and no doubt other legal restrictions around what he could and could not say publicly about Twitter.

A battle is shaping up for the Internet’s public square. Here’s a very brief timeline:

March 9, 2020

Silver Lake Partners takes $1 billion stake in Twitter

Private equity fund and activist investor Silver Lake Partners scoops up a significant chunk of shares and takes a board seat.

Nov 29, 2021

Jack Dorsey Steps Down

Co-founder of Twitter steps down as CEO. Long-time Twitter veteran and CTO Parag Agrawal takes helm.

Mar 25, 2022

Musk tweets poll

After years of critique / mockery of Twitter policies, Musk posts a poll: “Free speech is essential to a functioning democracy. Do you believe Twitter rigorously adheres to this principle?” He followed this immediately with “The consequences of this poll will be important. Please vote accordingly.” 70% of those responding say “No.”

April 4, 2022

Musk becomes Twitter’s Biggest Shareholder

Over the ensuing days, Musk takes a 9.2% stake in Twitter, becoming its largest shareholder. The Washington Post notes that he may have run afoul of SEC reporting rules, which require public disclosure within 10 days of crossing the 5% ownership threshold. (It seems likely he will pay a fine for this.)

April 5, 2022

Twitter Announces Musk Joins Board

Twitter CEO Parag Agrawal announces that Musk has been offered a board seat, with terms stating that Musk must keep his stake capped at 14.9%.

April 8-10 2022

Musk Muses Big Changes

Musk tweets several new ideas for the social media company, some controversial. One of the most intriguing: Musk suggests that anyone who pays the $3 per month Twitter Blue subscription fee should get a checkmark. A follow-up tweet clarified that it would be different from the blue badge, but would still signify that the account is authentic. When challenged in a comment to consider lower prices in other countries, Musk agreed, and suggested a proportional rate tied to local affordability and currency. (The tweet has since been deleted.)

April 10, 2022, morning

Musk Rejects Board Offer

April 10, 2022, Afternoon

Agrawal tells his team “Don’t Get Distracted”

April 11, 2022

Musk Amends SEC Filing

“From time to time, the Reporting Person [Elon Musk] may engage in discussions with the Board and/or members of the Issuer’s management team concerning, including, without limitation, potential business combinations and strategic alternatives, the business, operations, capital structure, governance, management, strategy of the Issuer and other matters concerning the Issuer.”

Reading the Tea Leaves

What’s really going on?

As Kara Swisher of the New York Times has noted, there’s much to be gleaned from translating some corporate speak:

There are a number of hidden codes:

fiduciary –> “Elon, don’t mock, or speak ill of the company publicly, you have obligations if you’re on the board which hem you in,”

background check –> “we always reserved the right to reject you based on potential SEC investigation and other things, so this is kind of mutual anyway,”

formal acceptance –> “you must agree in writing not to take 14.9% stake, and be liable if you tweet something defamatory,”

there will be distractions ahead –> “this ain’t over”

What happens next?

Musk is mercurial. He could decide he has better things to do and sell his stake.

But it seems much more likely to me that he will continue to increase his stake. If his goal were to make marginal improvements to Twitter, he would have been inclined to stick to their announced agreement and take the board seat.

He initially filed an SEC form saying he was planning to be a passive investor in the company, but amended it today (April 11th, 2022) to indicate he may be more active, and plans to keep criticizing the platform and demanding change: “From time to time, [Musk] may engage in discussions with the Board and/or members of the Issuer’s management team concerning, including, without limitation, potential business combinations and strategic alternatives, the business, operations, capital structure, governance, management, strategy of the Issuer and other matters concerning the Issuer.”

The billionaire has been vocal about some of the changes he’d like Twitter to make. Over the weekend, he tossed out the idea that users who opt into the premium plan ($3/mo), Twitter Blue, should be given automatic verification and see no ads. This one step, devaluing “blue checkmarks,” would be a sea change in how Twitter is used today. He noted that Twitter’s top accounts are highly inactive, asking “Is Twitter dying?” He mused about turning Twitter’s San Francisco HQ into a homeless shelter, which invited a retort from Amazon’s Jeff Bezos. As Geekwire reported in May 2020, Amazon has already done so in part, quite successfully, in partnership with Mary’s Place.

Twitter’s board may very well adopt a poison-pill defense. But this isn’t a slam dunk; it needs majority board approval. Take a look at the existing composition of Twitter’s board; it’s no longer founders and insiders. Remember, this board said goodbye to Jack Dorsey, and rumor has it this was in part due to sluggish stock-price results and activist shareholder discontent. Twitter’s eleven-member board consists of two “insiders” (Agrawal and Dorsey), an activist value-driven investor (Silver Lake Partners), and eight relatively independent members with Silicon Valley and/or finance experience. Poison pill adoption often depresses a stock’s value, and some board members might not be persuaded to do this. Several of Twitter’s board members are from the Silicon Valley braintrust and are unlikely to want to go head to head against Musk, and some may very well fully agree with him.

Musk is nothing if not bold. He has risked substantial sums and bet boldly on multiple ventures in the past. He has stated that “free speech is essential to a functioning democracy,” and has both internal and external incentives not to be seen as bested here.

My guess: he’s unlikely to just sit on the sidelines, as Twitter’s biggest but minority shareholder. He could well make a run for the company, though he may prudently wait for the next recession to do so.

And what happens to Twitter’s employee base, and its policies, during this tumultuous time? It may cause some employees to see the writing on the wall and depart. Or it might cause some to double down on a heavy hand. It could be a very interesting few months indeed.

Does Elon Musk like to play it safe? Or lose? What does his track record suggest?

Voice of America 2.0? We Need a Smarter Digital Media Strategy in Wartime.

Food for thought: Given the massive reach of YouTube, Netflix, Apple App Store, Amazon Prime and more, is cutting off access their highest and best use in time of crisis with an adversary?

Lots of consumers lobbied Google, Amazon, Netflix and more to turn off digital media delivery to citizens in Russia. In a cavalcade of announcements, the leadership of Netflix, YouTube Premium, Amazon Prime Video and more have responded, and either shut off entirely or severely restricted their operations in Russia. Right now, it’s basically an all-or-nothing approach, with varying degrees of turning off switches for monetization.

But is outright shutdown of thriving media delivery pipelines like Netflix, YouTube, Apple App Store etc., to consumers in adversary nations the most effective thing we can do with these platforms of reach and scale? Or might it be smarter to encourage or even compel those companies to instead intersperse federally approved Public Service Announcement (PSA) messaging into their programming to certain geo-targeted locations during certain times?

When I was a radio DJ in college in the ’80s, we had a quota of PSA’s per day, set by the station’s program director. The PSA’s had to come from a set of scripts that sat beside the turntables. We read these PSA’s over the air. We could choose which one to read, but we needed to deliver at least one per three-hour show.

PSA’s generally aim to raise awareness or change behavior. Some of the most common in the United States concern emergency preparedness or personal health. Contrary to popular belief, the Federal Communications Commission does not mandate PSA’s. They are instead voluntarily offered up by the station. The incentive comes from being a good citizen, but also from resume-building: the FCC factors in whether a company is acting in “the public’s interest” when renewing or expanding licenses.

Here in America, PSA’s are quite prevalent. In 2014, the Centers for Disease Control and Prevention commissioned a study of the delivery volume for just one of its campaigns. The donated airtime on television alone generated up to 251 million impressions for each PSA. And that’s just one campaign, one client, and one media channel: television. If you’ve ever heard about disaster preparedness, getting your blood pressure tested, or getting a COVID booster, chances are it came from a PSA.

Seems to me something analogous to this approach might be beneficial in the digital media age.

Imagine if we had a live, open-source media repository of wartime PSA’s, ready to go but always being updated and expanded. These would be video clips, audio segments, and short written text. They wouldn’t have to be too heavy-handed. They wouldn’t even have to reference the ongoing conflict. They could simply reference things we value, like freedom or self-determination, or the essential benefits of principles like women’s rights, or what have you. There’s no reason this repository shouldn’t be publicly available for scrutiny. These would be, of course, messages we’d like to go out to the world, subject to Executive Branch approval, Congressional oversight, and the subjective judgment and commentary of every citizen and journalist. It’d be open source.

An appointed government agency would act as manager of this repository, approving or denying contributions to it, overseen by Congress.

Digital media companies like Netflix, Apple and Google have their own brands to protect, of course. So it should be up to them to determine which messages “fit” with their brand, and which do not. They can and should even be contributors to the repository. They could create their own and “pull-request” them into the media repository. Apple, for instance, might make a killer ten-part series about the values that make it great. Many of these corporations are phenomenal at consumer marketing, story-telling and message delivery.
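To make the “pull request” idea concrete, here is a purely hypothetical sketch of what one repository entry’s metadata and a minimal review check might look like. Every field name and value below is invented for illustration; no real schema or agency process exists:

```python
# Hypothetical manifest for one PSA entry in the proposed open repository.
# All field names are invented for illustration, not a real schema.
psa_entry = {
    "id": "psa-0042",
    "title": "On Self-Determination",
    "media_type": "video",             # "video", "audio", or "text"
    "duration_seconds": 30,
    "languages": ["en", "ru"],
    "approved_by": "oversight-board",  # placeholder for the managing agency
    "skippable_after_seconds": 10,     # the "Skip after n seconds" idea above
}

REQUIRED_FIELDS = {"id", "title", "media_type", "languages", "approved_by"}
ALLOWED_MEDIA_TYPES = {"video", "audio", "text"}

def is_valid_entry(entry):
    """Reject a contribution ("pull request") missing required metadata."""
    return (REQUIRED_FIELDS <= entry.keys()
            and entry["media_type"] in ALLOWED_MEDIA_TYPES)

print(is_valid_entry(psa_entry))  # True
```

A contribution from a company or citizen would be reviewed against checks like this (plus, of course, human editorial approval) before being merged into the public repository.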

This could be a live, transparent media repository with a variety of messages approved by the US government, from which corporations would be encouraged to choose. Perhaps they’d show the human suffering or destruction on the ground that’s censored from view. Or perhaps they’d report on how the front is truly going. Or explain, as diplomats are doing today, how we have no beef with citizens, but do strongly disagree with the current administration, and why.

Digital media corporations could, at their choosing, pick from this repository and intersperse these messages. What might encourage them to do so could be a variety of things. It could be entirely voluntary, based on what their own corporate leadership believes is the right amount of good citizenry. A light touch seems wise, especially given the scale of the audience: corporations could even be allowed to present a “Skip” button after n seconds (generally 10), as they do with other ad-driven content, or even a setting to turn the messages off, as long as the default setting is “On.” Even if the vast majority skip through them, the messages would still likely reach millions, plenty to encourage conversation.

Perhaps corporations could get a tax break for delivering these messages. A much more aggressive approach (which I do not favor) would be taxation or fees levied by the US federal government on corporations failing to meet delivery thresholds. Or perhaps an agency could outright pay for delivery of these messages. With such a novel program, prone to launch risks and product-market-fit issues, it’d probably be best to make participation entirely voluntary on the corporations’ part initially, but corporations should be encouraged to report their PSA delivery, and how they’ve acted in the public’s interest.

Surely, hostile authoritarian regimes would block platforms which do this beyond a certain threshold. Putin’s regime has already clamped down very hard on what people can and cannot say. But isn’t it better for adversarial governments to be the ones to block entertainment and productivity services from their own populace than for US- and European-based corporations to be the ones who can be blamed as the bad guys?

Writing in Politico in 2022, Jack Shafer argues that Putin is unlikely to win the propaganda war long-term. I fully agree. He’s already off to a very bad start in that regard. It seems we have an opportunity to leverage our cultural power and reach, rather than just treat it like a commodity akin to oil or wheat, whose cutoff is seen as punishment. We can be more nuanced in our strategy.

Even with substantial efforts to block and jam its signals, Voice of America reaches an estimated weekly global audience of more than 311,000,000 people. A “leave the service open, but insert messages on occasion” approach also makes it far less likely that these corporations lose global consumers for good. It seems like there’s mostly upside to giving it a try.

This is just a nascent idea at this stage. Obviously such an idea would have substantial second and third-order effects, and those would need to be carefully considered. And I admit, I haven’t thought carefully through the ramifications of this long-term. But I think there’s a kernel of a better strategy here, rather than just turning off the spigots outright.

War Comes to the Small Screen

Russia’s invasion of Ukraine feels like an altogether new phase of social media in warfare. Maybe it’s the verbs which adorn those buttons: Like. Share. Donate. Block. They invite us in, and whisper: “Decide.”

Russia’s invasion of Ukraine has marked a turning point for the use of social media in war.

To be sure, this is far from the first conflict in which social media has played a key role. The “Arab Spring” of 2010–2011 likely gets that distinction, when hundreds of thousands of citizens in Arabic-speaking nations networked their common cause on Facebook and elsewhere and rose up in democratic protests against their governments. Later that decade, social media played a key role in conveying the gripping stories of the more conventional conflicts in Syria and Afghanistan. And in our own battle against Islamic extremism in the 2010s, social media featured prominently in recruitment, terror, propaganda, and victory.

From the Arab Spring of 2010 to today, social media’s membership has soared, from “just” tens of millions of people to nearly 5 of the 8 billion people on earth. Today’s pervasive use of social media in the Ukraine conflict feels much bigger in scope, and there’s something new. The stakes — war in Europe and the potential for World War III — are higher. But what also seems new is that this conflict already seems far more participatory, involving broad segments of society.

We see the besieged and attackers. We see soldiers, citizens, political and corporate leaders, journalists, corporate brands, celebrities and governments, who all have something to say. We see partisans in the fight flock to “user generated content” platforms, from Github to Yelp to Google Maps. Every social product of any size now needs a wartime strategy.

The Washington Post catalogued many examples of Ukrainians using social media to tell remarkable stories, from the everyday citizen moving an anti-personnel mine to a safe location to an elderly gentleman kneeling before a Russian tank. But Twitter, Facebook and Instagram are no longer just storytelling apps, and the events shaking eastern Europe are not read-only. In many ways, they are calls for our interaction and engagement. We are not yet, thank God, at world war, but in a profound way, all five billion of us on social media are being invited in.

There are buttons with verbs in social media. Share. Donate. Like. Retweet. Protest. Organize. Support. Report. Mute. Reject. Block. Social media enables all of this from afar. These verbs also whisper to us, ever so quietly: Decide.

And decide we must, because to simply scroll on feels heartless. In the twentieth century, the abominable concept of “Total War” declared civilians and associated resources legitimate targets. In the era of social media, we citizens are not just potential collateral damage; the door opens to our becoming collateral participants. Do we walk through it?

Governments have shut down airspace, but not cyberspace. If you are so inclined, you can engage directly with Russian citizens in many corners of the Internet – sites like Duolingo and InterPals still offer the ability to chat with Russian speakers. It boggles my mind that Russia can be firing missiles into Ukraine, and we in the West can be taking unprecedented, aggressive actions which risk cratering its economy, yet we can still engage with the citizenry whenever we’d like.

Unlike even twenty years ago, social media now gives us the means to actually “participate”, at some level, from across the globe, not just to register our support, but to do something related to its outcome.

Wartime communication comes in many forms. There are secret tactical and strategic communications among combatants and allies. There’s propaganda, meant to promote a particular cause or point of view. There are morale-boosting missives and stories from the front to the population. There are psychological operations (“psyops”) waged against the enemy. There’s high-level diplomacy. There’s logistics and production planning. And there’s journalism and documentation for posterity.

Today, social media touches all of these forms, and profoundly changes many of them. That’s because social media has many attributes other media does not: it is global, instantaneous, emotional, participatory, and many-to-many.

We’ve already witnessed a few groundbreaking examples of how these attributes have transformed wartime communication.

Social Media Is Global.

Do you want to contribute to the defense of Ukraine without donning a uniform? There are a variety of non-governmental organizations to which you can donate. But brand new for 2022, the official Ukrainian government Twitter account (@ukraine) has a Bitcoin donation link pinned to its profile:

Yes, that’s right. With a few clicks, you can instantly donate money directly to the government of Ukraine. So long, allied war bonds, or even waiting for your own government to send more aid. Supporting a war effort is now as easy as adding an extra 20% tip at Starbucks.

Or do you want to interact with your adversaries more directly, to try to better understand or inform them, cyber-harass them, or attempt to boot them from a given platform? Hacktivist group Anonymous is encouraging people to write reviews of Russian-based businesses and restaurants to convey messages to the people of Russia, to try to get around state-media control.

Are your desires more juvenile? A TikTok video encourages you to go to Google Maps and re-label Russia’s official embassies as “public toilets.” UPDATE: Google has placed restrictions on this activity:


And pro-Russian activists are currently brigading one of the most popular open source code repositories on GitHub: Facebook’s open source React framework. They’re posting pro-Russia messages.

The point: this is a war involving not just combatants in Ukraine, but those of us in the crowd. Every social product of any scale now needs a wartime strategy.

Social Media is Emotional, Ubiquitous and Instantaneous.

Ukraine’s citizens and leaders are sharing heartbreaking videos directly to us on Instagram and Twitter. They’re telling the stories of heroes and victims, crying out for our help.

These direct video pleas are a far cry from how many of us digested international conflicts decades ago: they’re not just an international interest segment tacked onto a nightly newscast. The pleas are integrated into our daily lives as we scroll through our feeds. These are the compelling stories of 43 million Ukrainians, many of whom speak English. They want and need us involved.

We can also watch things unfold as never before. Heard about the 40-mile long Russian convoy lumbering toward Kyiv? We can follow along via a street-level view via Google Maps. Want to watch what’s happening live, via dozens of webcams? There’s a website for that.

Humans are a storytelling species. Wartime communication used to rely heavily upon correspondents, filmmakers, military journalists, radio personalities and famed directors to get battlefield news to an audience. Now, they are relegated to editorial and summary roles. If you want the very latest information, you rush to Twitter; the nightly newscast operates at a glacial pace by comparison. Even television journalists now spend a great deal of time highlighting what’s being reported on social media.

In World War II, the process of getting video footage to the home front took months. Hollywood director John Ford traveled to Midway Island in early June 1942 with two cameramen. Two days later, on June 4, 1942, they filmed the first wave of Japanese Zeros as they strafed the island. After the battle, Ford sent the film back to the States, where it was developed and hastily edited into a theatrical documentary with voiceover. The result: film of a battle shown in record time to a home-front audience, a mere three and a half months after the first bullets of the Battle of Midway were fired.

Today, storytelling is not only instantaneous, it’s also much more intimate and direct. There’s usually no director. Anyone with a cellphone can tell their own story, often more compellingly, though fraught with the risk of forgery.

In the attention economy, the scarcest resource is our consideration, that which we pay heed to. Nuance takes longer. So quick, shocking, humorous or heartbreaking memes are often what break through.

We’re getting selfie videos directly from the President of Ukraine, via the small screens in our pockets. They’re available everywhere, not just when we’re ready for them. We can be in line at the grocery store checkout and hear the breaks in his voice through our AirPods. At any moment, we can be witness to his steadfast bravery; it’s integrated into our day.

Zelensky is marshaling this breakthrough power capably and creatively. A week ago, he broadcast a powerful speech directly to the Russian people, circumventing journalist intermediaries. With more than 114 million Russians on the Internet, plus the many who were willing to translate, caption and redistribute, the whole world received his message within hours.

To a global audience that has Zoomed its way through the past two years, being able to see the human side of a leader in a war-torn nation speak out through our small screens feels at once both entirely natural, but also surreal. It is often unedited and raw. It is profoundly new.

Zelensky is extremely well-suited to this role. President Volodymyr Zelensky is a former entertainer, voice actor and comedian. He was the voice of the Ukrainian version of Paddington Bear. He’s charismatic, his cause is clearly just, and he knows how to speak to the camera. His use of Twitter (where he has 4.3 million followers) and Instagram (where he has 13.7 million followers) has been masterful. We see Zelensky making human, passionate pleas, often arm in arm with compatriots. His warmth and humanity come through clearly to millions.

Opposing him, we see Vladimir Putin, a vestige of the nightly-broadcast, state-television world. He gazes sternly from one end of his 20-foot-long, gold-accented table, bunkered deep in the Urals. He’s formal, rigid, isolated and distant. His mannerisms and demeanor might have been well-suited to the fixed-format communications of the 1980s, where projecting power and formality spoke volumes. But now he seems anachronistic. He leads a superpower, but gets an F for 2020s-era social media presence to billions of people who value authenticity, warmth and story.

Photos: Putin keeps his distance during meetings

With every communication, the people of Ukraine are saying “we are here, on our land, in our homes, and an invader is trying to take it from us brutally.” Their message cc:’s the world. Messages go out to allies and foes alike. Citizens and leaders of Russia and Belarus are watching.

Russia, meanwhile, is going through its own transformation of media consumption. State television’s former dominance of news is slipping, and the information divide is highly age-weighted. Older citizens are much more likely to still pay attention to state television, while the young are much more likely to use the Internet and social media.

The outcome? Note the average age in this photograph from Wednesday’s anti-war protests in St. Petersburg, Russia:

It’s Many-to-Many

Over the weekend, Ukraine’s Minister of Digital Transformation Mykhailo Fedorov reached out directly to Elon Musk to request Starlink (satellite-delivered Internet) terminals from SpaceX, so that his government — and presumably military and resistance groups — would be able to communicate in the likely event of widespread communications outages.

Fedorov wrote, “While you try to colonize Mars – Russia try to occupy Ukraine!” on February 26th. Within hours, from halfway across the planet, Elon Musk responded: “Starlink service is now active in Ukraine. More terminals en route.”

And then, as if ordered up via Amazon, a planeload of Starlink terminals arrived on the other side of the world two days later. On Monday, a grateful Fedorov tweeted:

Ukraine has thousands of celebrities, corporate leaders and heads of state at its disposal. In short, while Russia has military might, Ukraine has the attention and willing participation of the biggest stars of the attention economy.

Today, all 4.8 billion of us on social media can be both broadcasters and receivers. Social media can help a single leader rally a nation, much like broadcast TV. But what’s new is that, for the first time, senior government officials can directly and publicly call out to key resource-owners in civilian life for critical supplies and see them instantly delivered, even from across the globe.

At this writing, Ukraine may be headed for a long insurgency. As they need resources, officials and guerrilla leaders won’t need someone to track down the phone number of some official; they can simply make these requests publicly. That’s not only much faster; the public pressure also has the added advantage of securing a near-instant affirmative.


Perils and Risks

We’ve already seen social media being used for “astro-turfing,” disinformation, and forgeries in this war. One of the major shortcomings of social media is that consensus can masquerade as truth. And it is likely to get far worse, since deep fake technology makes forgeries much more convincing. Given that this may well become a protracted occupation and insurgency, expect far more psychological operations via social media as Russia attempts to convince the public of the righteousness of its cause.

Early Days

All of this is playing out less than two decades since social media as we know it began. Facebook was founded just eighteen years ago, and Twitter sixteen. What’s ahead is even more acceleration and interconnection — and security risks, forgeries and more. It makes me wonder about how this technology might have shaped prior wars. The colonists had no way to reach the King of England or France or powerful potential benefactors during our own Revolution. Would history have turned out differently if they did?

Even contemporary revolutions in wartime communication seem quaint by comparison. Many of us remember the moment thirty-one years ago when CNN’s Peter Arnett stood atop buildings in Baghdad and broadcast the first live television coverage of the United States’ opening salvo in Operation Desert Storm, ushering in a new era of 24×7 cable news. We watched in real time as the bombs dropped, and saw a major invasion take place via our television sets. But we couldn’t influence its outcome; we were purely bystanders. Broadcasters could infer our engagement, but they couldn’t discern it story by story. And but for taxes and care packages, we certainly couldn’t join in to the degree we can today.

The opportunities and perils that social media presents during the Russia-Ukraine conflict feel like an even greater leap than the one that thrust 24×7 cable news to prominence. This isn’t the very first conflict of the social media age, but it is altogether new: pervasive, at massive scale, and participatory.

Farewell, Facebook

The time has come to de-prioritize Facebook in my life. Here are some of the steps I’m taking if you too are considering it.

Happy New Year 2022!

I’ve decided to deprioritize Facebook in my life. I made this decision back in autumn, but decided to stick it out to be able to engage with people up to and through Seattle’s recent elections.

Engaging on Facebook has taken up more time than I care to admit over the past several years. I joined Facebook in 2007, three years after its founding. During that year, I invited a lot of friends to it. Over the ensuing years, I’ve made 5,387 posts and uploaded over 2 gigabytes of photos and video.

From about 2014 onward, I’ve used Facebook as a journal of sorts. I’ve posted vacation photos and family updates. But unlike many people who wisely stay away from politics and controversy, I’ve also shared news items, articles and predictions which interest me, and on more than one occasion they’ve run against the grain of the very deep-blue political sentiment of the moment among family and friends. I am a huge advocate of breaking one’s own filter bubble, and I have felt that too many Americans have succumbed to an ever-narrower range of news sources.

I’ve really enjoyed hearing from friends and family on controversial issues, learning from perspectives which aren’t always my own. Put another way, areas of universal agreement are far less interesting to me. Since I always kept Facebook friends to true friends in real life (a cardinal rule throughout), these interactions have nearly always been incredibly respectful and polite. Out of more than 400 Facebook friends, I’ve only had to unfriend one, a friend and former colleague. That came when, early in the pandemic, I called the lab-leak hypothesis by far the most credible to me.

As early as late January 2020, before even the first American had died of COVID, I thought Occam’s Razor had something to say:

My friend took instant and strong offense, and considered this a racist viewpoint. Remember when polite society equated the lab-leak hypothesis with racism? I found that odd then and still do today. If anything, the “wet market” and “bat soup” explanations, which were among the original ones floated, seemed the far more culturally insensitive hypotheses. Mistakes happen all the time, even hugely consequential ones. I could easily envision a well-intentioned, expert and normally highly careful researcher being inadvertently responsible for a very rare accident or an unthinking moment of carelessness. Look at Chernobyl, Three Mile Island, or the Exxon Valdez; these disasters were not intentional. Moreover, it was vital to understand how they happened.

Today, most Americans believe a lab leak to be the most likely cause. It’s at least acceptable to discuss in polite company, even in places like New York Magazine. I’ve repeatedly stated that if it was indeed a lab leak, I don’t think it was intentional. But I’ve lost a friend over it. He kept jumping in with snide and insulting comments, going ad hominem without ever bothering to engage with the substantial and growing body of circumstantial evidence, much of which veteran New York Times science journalist Donald McNeil eventually chronicled in a must-read piece, How I Learned to Stop Worrying And Love the Lab-Leak Theory.

Look, I’m independent. That brings incredible luxury. It means I don’t have to check my tribe’s opinion before voicing my own. And I don’t accept the fashionable rhetorical trick that just because one reprehensible person holds a given view, that anyone else holding such a view must buy into the panoply of their ideas. Adolf Hitler loved dogs, after all; this doesn’t mean dog owners must defend Mein Kampf. No, what matters are the facts and evidence, and the logic and merits of the argument. As a data-guy, evidence is essential to how I think.

Beyond the lab-leak hypothesis, I have held several at-the-time controversial or heterodox opinions over the past several years. I was posting about the high likelihood of a coming pandemic wave to my Facebook friends as early as the first week of February 2020, before the first reports of US infections. I’ve been skeptical about the efficacy of cloth mask mandates since trying, and failing, to find a correlation between mask mandates and changes in spread. I’ve felt we were not doing enough to separate incidental positives from worrisome ones, when few were discussing the idea that PCR tests might be over-sensitive depending upon the number of cycle thresholds run. I’ve been harshly critical of the harms of prolonged school closures. I’ve predicted significant inflation from unbridled easy monetary policy, and predicted inflation’s likely durability when we were repeatedly told it was “transitory.” These are just a few examples of discussions which first emerged on my own Facebook threads, and then sometimes headed to my blog. I’m far from infallible. But I think history is very much on my side with respect to each of these once-highly-controversial but now generally accepted viewpoints. They certainly weren’t always what people wanted to hear at the time, at least not sprinkled amidst family updates and vacation and pet photos.

Though discussions like this are intriguing, I don’t think using Facebook in this way is necessarily the best pastime to be my healthiest in 2022 and beyond.

A big concern, too, is that Facebook (and absolutely, Twitter and YouTube) are narrowing the range of acceptable conversation through what I consider to be highly undesirable censorship. They’re also harvesting data from us and manipulating what’s shown to us in ways that amplify misinformation and can cause emotional harm.

But putting aside for a moment the very important data-mining, censorship and manipulation concerns, there’s also the matter of using the right tool for the job. I really should be updating my blog more. To be sure, I got to the point where Facebook became a bit of a “here’s a controversial issue I’m thinking about” journal, to which I would then, over the ensuing weeks or months, add evidence and articles supporting my predictions and views. One of my personal goals has been to write more. But more than once, my wife gently asked me, “Um, why are you down in your office, replying to yourself on Facebook?”

Thinking about leaving Facebook too? Be sure to get a backup of everything you’ve posted. Go into Facebook’s account settings and request a download of your Facebook data. A day or so later, you’ll see a set of files you can download, in either HTML or JSON form. (Personally, I recommend JSON form, if you ever plan to export/import them into another tool in the future.)
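One nice thing about the JSON export is that it’s easy to post-process later. Here’s a minimal Python sketch of what that might look like. A caveat: the file layout (e.g. `posts/your_posts_1.json`) and field names here are assumptions based on recent export formats, which Facebook changes from time to time, and the Latin-1/UTF-8 round-trip is a workaround for a well-known encoding quirk in the exporter:

```python
import json

def extract_posts(raw_posts):
    """Flatten post entries from a Facebook JSON export into simple
    timestamp/text records. Field names ("data", "post", "timestamp")
    follow the your_posts_1.json layout and may vary by export version."""
    posts = []
    for entry in raw_posts:
        text = ""
        for block in entry.get("data", []):
            if "post" in block:
                # Facebook's exporter has historically emitted UTF-8 text
                # mis-encoded as Latin-1; this round-trip repairs accented
                # characters and emoji in older exports.
                text = block["post"].encode("latin-1", "ignore").decode("utf-8", "ignore")
        posts.append({"timestamp": entry.get("timestamp"), "text": text})
    return posts

# A tiny stand-in for json.load(open("posts/your_posts_1.json")):
sample = json.loads('[{"timestamp": 1640995200, "data": [{"post": "Happy New Year 2022!"}]}]')
print(extract_posts(sample)[0]["text"])  # Happy New Year 2022!
```

From records like these, importing into a blog engine or a plain-text journal is a short step.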

While full deactivation and deletion of my account altogether is very tempting, I’ve still got a couple startup-specific reasons to not deactivate my Facebook account entirely. And I know that I have a few services out there with my Facebook login, so I want to be sure to leave it live for a few months as I change those.

So, here are the steps I’ve taken to introduce lots of friction into booting-up-Facebook:

  • I’ve deleted the Facebook and Messenger Apps from my phone
  • I’ve installed the excellent UnDistracted Chrome plugin to all the desktop and laptop browsers I use. Since I generally use Edge and Chrome, luckily this extension works on all the browsers where I spend 95% of my time.
  • I’ve signed out of Facebook on all browsers
  • When I sign into services requiring “Log In with Facebook”, I’m taking a moment to change the login method.

My friend Marcelo Calbucci has done a nice blog post on a 12-Step Program To Eliminate Facebook in Your Life if this is of interest to you.