Circling Back to a “Don’t Discount the Lab Leak” Post of January 2020

In January 2020, I posted my view that the outbreak of what was to be called COVID quite possibly came by way of a lab accident. It generated over 100 comments and ended at least one friendship. Though I stopped posting to Facebook in late 2021, I returned yesterday to circle back on that thread. Here's what I wrote:

Hi friends,

Just a brief return to Facebook. I feel compelled to share some thoughts, now that our federal government leans ever closer to “lab origin,” which some of you may recall is an issue I’ve talked at length about here in the past.

Dusting off Facebook’s search feature (hey, nice new icons and web refresh, FB!), I see that on January 26th, 2020, weeks before the first American was definitively known to have died of the virus, I posted my strong suspicion (linked below) that the outbreak of what was to be called COVID was likely due to a lab accident.

It’s interesting to review some of the discussion which took place then, and in subsequent posts on the topic.

As we sit here today (March 1, 2023), three years later, the lab-leak hypothesis isn’t some crackpot idea. It’s a majority-held American viewpoint, now publicly endorsed by the FBI (moderate confidence), the Department of Energy (low confidence), and more than 70% of Americans. A shrinking minority (~25%) still thinks “natural spillover” was the genesis of the greatest health crisis in our lifetime.

Three years ago, I did not post my own view lightly or cavalierly. The president at the time was calling the virus the “Wuhan virus” and creating an atmosphere of xenophobia. The overall public temperature cast it as an offensive stance to take. I certainly knew it wasn’t something to be flip about.

In fact, my own confidence was actually stronger than I wrote at the time. And it wasn’t a casual observation.

Editorial note: Even in January 2020, there was ample evidence that WIV was doing research on bat-borne coronaviruses, that the facility was rather new, that officials had previously expressed safety concerns about it, that lab leaks elsewhere had happened with alarming frequency, and that a major debate had raged in the scientific community from 2011-2014 about "gain of function" research. Dr. Anthony Fauci was adamantly on the "pro" side, even writing "it's a risk worth taking." This was all knowable in January 2020, and I had read all these pieces and more by that time.

Further, as an applied math major, I was pretty familiar with basic Bayesian reasoning, and the chance that a lab studying bat-borne viruses and an outbreak of a novel coronavirus whose closest cousin was a bat-borne virus were independent events seemed extremely low. Not just geographic coincidence, but temporal coincidence, species coincidence and more. In shorthand, assume the lab and the outbreak are unrelated (i.e., natural spillover). Of all the cities, why Wuhan? Of all the years, why 2019? Of all the species, why bats? The odds of all these coinciding while the bat coronavirus lab in central Wuhan remained unrelated in any way are astronomical. Then, add in just how well optimized the virus seemed for human replication right out of the gate; that is unusual, and if it had come from a host animal, wouldn’t it have started in some more remote village(s) first? Then, add in all the adverse inference you can and should draw from the deletion of databases, the refusal to allow inspectors in despite every possible incentive to prove natural zoonosis, the wiping of the lab(s) clean, etc. All told, these are not equal-probability hypotheses; they skew very strongly in one direction.
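The Bayesian shorthand above can be sketched in a few lines. To be clear, every number below is a made-up placeholder for illustration, not an estimate from this post; the point is only the mechanics of compounding several "coincidences" into posterior odds.

```python
# Illustrative sequential Bayesian update. All priors and Bayes factors
# are hypothetical placeholders chosen only to show the arithmetic.

def posterior_lab(prior_lab: float, bayes_factor: float) -> float:
    """Posterior P(lab | evidence), given a prior P(lab) and a Bayes
    factor P(evidence | lab) / P(evidence | spillover)."""
    prior_spill = 1.0 - prior_lab
    odds = (prior_lab / prior_spill) * bayes_factor
    return odds / (1.0 + odds)

# Start agnostic: 50/50 between lab accident and natural spillover.
p = 0.5

# Treat each "coincidence" (location, timing, species) as evidence
# assumed more probable under the lab hypothesis. The ratios are invented.
for bayes_factor in [10.0, 3.0, 3.0]:
    p = posterior_lab(p, bayes_factor)

print(f"Posterior probability of lab origin: {p:.3f}")
```

Note the modeling assumption baked in here: each piece of evidence is treated as conditionally independent, which is exactly the sort of simplification a back-of-envelope argument makes and a rigorous analysis would have to defend.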

I worried a bit about Facebook booting me off the platform, and about offending friends and family (though it shouldn’t offend!), so I walked up to the edge of it and simply cautioned people against throwing the idea in the tin-foil-hat bin.

But I believed it then, and I believe it today. There is basically no concrete evidence supporting the “natural spillover” origin theory, and an enormous amount of evidence — circumstantial and otherwise — pointing in the direction of a lab accident. And if a lab accident is the origin, that means our government failed us, and that everything we lived through these past three years didn’t have to happen.

Over the past three years, I’ve watched as people who shared my view on this were vilified, ostracized, deplatformed from social media, misrepresented, distorted, and more.

Sure, staying silent was an option open to me. Why didn’t I?

Well, the best analogy I can think of is an earworm. Some of you may get an “earworm” when you hear a song that sticks with you. For me, for about 3 years, I’ve occasionally thought to myself:


It has been overwhelming at times — millions of people’s lives ended prematurely. More than 60 times the number of people killed by the first atomic bomb used in war. Sons and daughters unable to say goodbye to their loved ones in person. Suffocation on ventilators. Learning loss. Addiction. Trillions of dollars of capital vaporized. Mandatory masking. Mandatory vaccination. Political tribalism. Friendships destroyed. Businesses and dreams destroyed. And so much more.

And I’ve watched as institutions we should trust — academia, the news media, the CDC, politicians and more — have drifted so far afield in their roles. Some responded well to the crisis — particularly the health agencies of Western Europe. Too many others, including our own, did poorly. Too many politicians took “Never let a crisis go to waste” and opted for the corollary, “Preserve the crisis.”

So many Americans assumed that one’s public stance on COVID origin MUST imply one’s stance about politics, or even one’s inherent “goodness.” I’ve never believed that. These issues are, or should be, entirely separate. But for some reason, people have allowed these issues to be fused.

To so many people, it’s better to be wrong for the right reasons than right for the wrong reasons. I’m sorry, I reject that bargain.

As 2020, 2021 and 2022 progressed, I continued to respectfully share my viewpoint, which ran against most progressives’ viewpoint on it. And many (most?) of my friends are progressive, or at least were.

But nevertheless, I felt compelled to stick to what I thought then, and still think today, was grounded in the more significant evidence. I also thought, and still think, that the magnitude of this issue — the greatest health, education, lifestyle and economic disruption in our lifetime — made it important to talk about and to process.

But these sentiments were, and I think still are, a mismatch with social networking. They’re certainly not ideal for Facebook, interleaved with friendly catch-up notes, as my wife increasingly (and rightly) pointed out to me in private. I departed Facebook at the end of 2021 (a good decision, and one I’ll soon resume).

But I wanted to return to FB momentarily to say that I appreciate that all of you — my many friends and family members — did NOT do what a lot of other people did to those who felt that lab origin was the most likely source of the greatest health crisis of our lifetime. You did NOT take my contrarian viewpoint on this as any kind of statement about me, or who I am. You did not (for the most part, at least) assume that this implied which team jersey I was sporting, or even if I owned one at all. You may have believed in natural spillover. No doubt some of you may even still believe that today, and that’s OK. (If you are at all interested, I’d be happy to patiently walk through the copious evidence suggesting otherwise, but I’ll leave that for a face to face chat if you’d like.)

A few points:

  1. Forgiveness is powerful. As we move closer and closer to a consensus of lab origin, there will be many people who want to move on to the retribution phase. But all the evidence suggests that it was accidental, and that we in the US are culpable here too, as this research likely would not have happened were it not for our misguided, impossibly tragic proactive enablement of it.
  2. Please try not to let politics or tribal affiliation keep you from what you think to be true. Break out of your tribe — there are many forces pushing people to choose team A or B. Neither owns the truth. The media has become ever more interested in AFFIRMING, not INFORMING. It’s about engagement now, and nothing engages more than affirmation and outrage. It’s up to you to be your own news editor. Do you have enough respectful dissent in your information diet?
  3. Accidents happen, even catastrophic ones. Never once have I ever implied, nor do I believe, this catastrophe was intentional. I think the research was well-intentioned, but safety precautions lax. SARS, for instance, has leaked from labs multiple times, and from a lab-safety standpoint, this is no different. Chernobyl, Deepwater Horizon, Exxon Valdez and Three Mile Island were all unintentional. I could EASILY envision myself as an earnest medical researcher in a lab in China, unknowingly infected, visiting a market on my commute home. There will be a time to review the legacies of Anthony Fauci, Francis Collins and others, who likely were quite proximal in funding and later obfuscating the research which went on.
  4. What do we do with the knowledge that it’s likely of lab origin? It’s absolutely gob-smacking to basically know, with high probability, that none of this needed to happen. None of it. The deaths, the trillions of dollars of capital vaporized. The Zooms. The masking. The vax mandates. The inflation. The arguments and fissures in our very social fabric.
  5. Do you realize we are STILL funding EcoHealth Alliance with our tax dollars? (And many of the scientists enlisted to debunk the lab-leak hypothesis have been granted millions of dollars from NIH. And Fauci’s personally designated successor now heads NIAID. And. And. And.)

But we can take sensible action.

For one, if there’s ever been a role for Congressional oversight, the premature death of millions certainly calls for it. Second, on a practical level, maybe let’s not locate BSL facilities in major metropolitan areas. There are in fact hundreds of these labs around the world, and we need to consider the existential risk they pose. Third, let’s determine how this research was funded and approved despite the clear presidential moratorium in effect at the time (2014-2017). The evidence strongly suggests that the moratorium led federal advocates to use a third-party packager to continue this research abroad, in what turned out to be much more loosely supervised settings. We need limits to ensure this newly unlocked technology (ACE2 mice, etc.) stays in responsible hands.

We clearly need to overhaul the institutions that failed us (NIH and CDC in particular; it’s also been tremendously disappointing to see medical organizations and even medical schools captured by ideologues). The role of public health should be to help navigate the path of least overall harm. It failed to do so.

We need Congressional oversight, and it will continue to be political. But let’s try to stick to the science and probabilities about it.

Anyway, it’s been a crazy three years.

Some of you may find issues like Climate Change existential and all-consuming, because you are good people, and you care. For me, to be quite frank, I think this issue poses a much greater existential risk over the next 250 years if we do nothing. It mattered to know how Chernobyl and Deepwater Horizon happened, and how and why planes crash. We have the NTSB for a reason.

I appreciate that nearly all of you are still my friends & family. You may not think Facebook is the right forum for this, and, well, you are right. But I did want to circle back to you and close the loop on this thread, since it’s now not just in the Overton Window, it is a majority-held American view.

The essence of learning is to be able to update one’s own prior assumptions as new evidence comes in. We should not let political tribalism prevent us from doing so.

As we have seen time and time again with COVID — whether it’s relative risk for the young vs. old, the cost/benefit of remote schooling, the strength of natural immunity, how much vax mandates work or don’t, whether mandates or informed consent are superior — the stakes are pretty enormous, and what we are told may not be precisely what is true.

To see just how insane it’s all become, try this counterfactual: Imagine if Trump, a germaphobe, had forced a full nationwide lockdown in 2020, remote schooling, mask mandate, mandated shots, etc. as harshly as he could, and was -adamant- it was natural spillover (“bat soup” has always been more racist to me, than well-intentioned lab worker gets infected, as any of us might.) What would the Democrat position be today?

I am quite well, luckily, and though the above might sound like the rantings of a madman, I’m fine and happy. The past three years have taken a toll on everyone, but far lighter than it could on me. I’m extremely optimistic for what’s ahead.

Love to you all. Stay well,


#TwitterFiles: The Complete List

An index of all the Twitter Files threads, including summaries.

The Twitter Files are a set of Twitter threads based on internal Twitter Inc. documents that were made public starting in December 2022. Here’s a complete list as of this writing. I offer my own subjective summary of each, but I urge you to visit each thread and form your own opinion. I’ll attempt to keep this index up-to-date as new threads come in.

At the end of this index, you’ll see some “meta” reporting, including Congressional testimony, interviews with authors and more.

Nomenclature: I refer to the pre-Musk era at the company as Twitter 1.0. That runs from the founding of Twitter through late October, 2022.

1. Twitter and the Hunter Biden Laptop Story, Dec 2 2022

Summary: Twitter blocked the New York Post from sharing a bombshell October 2020 story about the contents of Hunter Biden’s laptop, just prior to the 2020 US presidential election. It also suppressed people re-sharing the story, including the White House Press Secretary. Twitter attempted to justify this under its “hacked materials” policy, even though there was considerable debate about whether that policy legitimately applied.

1a. Twitter Files Supplemental

Summary: The Twitter Files 1 thread was delayed by the surprising revelation that then-employee Jim Baker, former FBI General Counsel and at the time Twitter’s Deputy General Counsel, had been reviewing all materials before handing them to the journalists Musk invited to Twitter HQ. (Musk let Baker go.) Bari Weiss uncovered the Baker story.

Discussion: (#TwitterFiles)

2. Twitter’s Secret Blacklists: Shadow Banning and “Visibility Filtering” of users, Dec 8, 2022

Summary: Was Twitter 1.0 “shadow-banning”? Twitter executives Jack Dorsey and Vijaya Gadde frequently claimed that Twitter did not shadow-ban, yet multiple tools existed within Twitter to limit the distribution and visibility of a given account’s tweets. “Do Not Amplify” settings exist, as do several settings limiting the propagation of tweets to others.

Discussion: (#TwitterFiles2)

3. The Removal of Donald Trump Part One: Oct 2020-Jan 6 2021, Dec 9, 2022

Summary: On January 7th 2021, Twitter summarily banned the 45th President of the United States from its platform. What led up to their decision, and what were some of the internal conversations surrounding it? Part 1 of 3.

Discussion: (#TwitterFiles3)

4. United States Capitol Attack January 6th 2021, Dec 10 2022

Summary: The ban of Donald Trump from Twitter stemmed directly from the January 6th 2021 attack on the United States Capitol by supporters/protestors/rioters. The stunning event led Twitter executives to finally make the call they had long discussed. Part 2 of 3

Discussion: (#TwitterFiles4)

5. The Removal of Trump from Twitter, January 8th 2021: Dec 12, 2022

Summary: Trump was banned from Twitter on January 8th, 2021. Though Twitter 1.0 was always adjusting discussion rules on the platform, it’s notable that on January 7th, Twitter staff adjusted several key rules to allow for and justify the banning of the then-President. Part 3 of 3

Discussion: (#TwitterFiles5)

6. FBI & Hunter Biden Laptop, Dec 16, 2022

Summary: The FBI attempted to discredit factual information about Hunter Biden’s foreign business activities both after and even before the NY Post revealed the contents of his laptop. Why would the FBI be doing this? And what channels existed between the FBI and Twitter 1.0?

Discussion: (#TwitterFiles6)

7. Twitter, The FBI Subsidiary, Dec 19, 2022

Summary: Twitter’s contact with the FBI was constant, pervasive, and both social and professional. A surprising number of communications from the FBI included requests to take action on election misinformation, even involving joke tweets from low-follower and satirical accounts. The FBI asked Twitter to look at certain accounts, suggesting that they “may potentially constitute violations of Terms of Service.”

Discussion: (#TwitterFiles7)

8. How Twitter Quietly Aided the Pentagon’s Covert Online PsyOp Campaign, Dec 20, 2022

Summary: While they made public assurances suggesting they would detect and thwart government-based manipulation, behind the scenes Twitter 1.0 gave approval and special protection to a branch of the US military related to psychological influence operations in certain instances.

Discussion: (#TwitterFiles8)

9. Twitter and “Other Government Agencies”, Dec 24 2022

Summary: The FBI responds to Twitter Files 7, vigorously disputing some of the framing and reporting. Taibbi responds to FBI communication and press releases, and further shares internal documents related to FBI and “other government agency” correspondence.

Discussion: (#TwitterFiles9)

10. How Twitter Rigged the COVID Debate, Dec 26, 2022

Summary: David Zweig illustrates how Twitter 1.0 reduced the visibility of true but perhaps inconvenient COVID information, and discredited doctors and other experts who disagreed.

Discussion: (#TwitterFiles10)

11. How Twitter Let the Intelligence Community In, Jan 3, 2023

Summary: Twitter 1.0 responds to governmental inquiry regarding some Russian-linked accounts, attempting to keep the governmental and press focus on rival Facebook.

12. Twitter and the FBI “Belly Button”, Jan 3 2023

Summary: Twitter 1.0 works diligently to resist acting on State Department moderation requests. In the end, it allowed the State Department to reach them via the FBI, which FBI agent Chan calls “the belly button” of the United States government.

Discussion: (#TwitterFiles11)

13. Twitter and Suppression of COVID Vaccine Debate, Jan 9 2023

Summary: Scott Gottlieb, a Pfizer board member, used his influence to suppress debate on COVID vaccines, including from the former head of the FDA. Twitter 1.0 frets about the damage the effectiveness of natural immunity might do to vaccine uptake, and slaps a label on a key tweet from former FDA commissioner Brett Giroir touting the strength of natural immunity.

Discussion: (#TwitterFiles13)

14. The Russiagate Lies One: The Fake Tale of Russian Bots and the #ReleaseTheMemo Hashtag, Jan 12 2023

Summary: On January 18th 2018, Republican Congressman Devin Nunes submitted a classified memo to the House Intelligence Committee listing abuses at the FBI in obtaining surveillance approval of Trump-connected figures. His memo also called into question the veracity and reliability of the Steele “Dossier.” #ReleaseTheMemo started trending, but Democrats attempted to discredit it by claiming it was being amplified by Russian bots and trolls, citing Hamilton 68, a dashboard powered by the Twitter API (see Twitter Files #15, next in the series). Though Nunes’ assertions would eventually be largely verified in a report by the Justice Department, a significant PR campaign was launched to discredit the memo, labeling it a “joke.” This thread discusses Democrats’ push to paint the #ReleaseTheMemo hashtag as being of Russian origin or amplification, and Twitter’s compliance with those requests. Note the heavy reliance on the Hamilton 68 Dashboard in many of these discussions. The important bit: Twitter executives knew the dashboard was fraudulent from about 2017 onward, yet did nothing to discredit it in the media, allowing this DNC-message-benefiting sham to continue.

Discussion: (#TwitterFiles14)

15. Move Over, Jayson Blair: Twitter Files Expose Next Great Media Fraud (Hamilton 68 Dashboard), Jan 27 2023

Summary: This thread delves into the Hamilton 68 dashboard referenced in Twitter Files 14 above. Twitter knew as early as October 2017 that the dashboard was simply pulling tweets from a curated list of about 650 accounts, and that very few of those accounts were actually Russian. It also knew that the media and Democrat officials were citing the dashboard as somehow credible. Though Twitter executive Yoel Roth tried several times to raise internal concern about the integrity of the tool, he was overruled within Twitter, and Twitter 1.0 never directly discredited the tool or explained how it worked.

Discussion: (#TwitterFiles15)

16. Comic Interlude: A Media Experiment

Summary: Matt Taibbi notes how little mainstream media coverage TwitterFiles revelations receive when they are damaging to Democrats, even as outlets published numerous stories on Trump’s request to get Chrissy Teigen removed from the platform. New revelations concern Maine Senator Angus King (D) calling for the suspension of a slew of accounts for spurious reasons, and a request from Representative Adam Schiff (D)’s staff to stop “any and all search results” related to certain keywords. Taibbi notes how the mainstream media has utterly ignored the Schiff requests and what that says about the First Amendment risks presented by government-big-tech cooperation.

Discussion: (#TwitterFiles16)

17. New Knowledge, the Global Engagement Center, and State-Sponsored Blacklists

Summary: Taibbi reports on an effort by “DFRLab,” an entity funded by the “Global Engagement Center” (GEC), a shadowy part of the US federal government, to deplatform a long list of people. The GEC/DFRLab sought to deplatform 40,000+ accounts under the guise that they were “paid employees or possibly volunteers” of India’s Bharatiya Janata Party (BJP), but the list included lots of everyday Americans. Taibbi characterizes these requests as “State Sponsored Blacklists,” and from the data shared, it’s rather hard to challenge that provocative label. GEC denies it uses US tax dollars to try to get US citizens deplatformed, but the list clearly included Americans. Taibbi explores the requests in detail and some of the internal discussion that resulted, and lets the reader ponder what these requests suggest about government stances toward free speech, and the “weaponization” of the word “disinformation” for political aims. (For me, I continue to ask — would we know any of this had Elon Musk not purchased Twitter?)

Discussion: (#TwitterFiles17)

18. Statement to Congress

Summary: On March 9 2023, Matt Taibbi and Michael Shellenberger testified before Congress about the network of third parties the federal government has been involved in paying, which in turn were serving up blacklist requests to Twitter.

Michael Shellenberger details it in this 68-page testimony to Congress.

Discussion: (#TwitterFiles18)

19. The Great COVID-19 Lie Machine

Summary: The Stanford Virality Project (VP) flagged and pushed to censor threads and accounts writing “true but inconvenient to the narrative” stories about COVID-19, such as the strength of natural immunity, the fact that vaccination does not stop the spread, or the existence of actual adverse vaccination side-effects. The project appeared to have full support from within the US government. Taibbi documents how the narrative became more important than what the facts were saying at the time, and how the Stanford Virality Project seemed more interested in narrative enforcement and speech suppression than in the principles of the First Amendment.

Discussion: (#TwitterFiles19)

Complete List of “Twitter Files” Threads


MT: Matt Taibbi, Racket News: @mtaibbi

MS: Michael Shellenberger, Michael Shellenberger on Substack: @shellenbergermd

BW: Bari Weiss, The Free Press, @bariweiss

LF: Lee Fang, The Intercept, @lhfang

AB: Alex Berenson, Alex Berenson on Substack, @alexberenson

DZ: David Zweig, The New Yorker, New York Times, Wired, @davidzweig

Congressional Hearings

March 9, 2023: Primary subject – federal involvement in censorship
February 8, 2023: Primary subject – former employee testimony, Hunter Biden laptop

Meta-Story: Behind the Scenes, In the Authors’ Words

Our Reporting at Twitter, Bari Weiss, The Free Press, December 15 2022

Interview with Matt Taibbi, Russell Brand:

Wait, Twitter Knew The “Russian Bot” Narrative Was Fake… For Five Years?

In the most explosive Twitter Files yet, Matt Taibbi uncovers the agitprop-laundering fraud engineered by a neoliberal think-tank.

It’s been a little more than three months since Elon Musk burst into the Twitter headquarters in San Francisco, bathroom sink in tow, wryly captioning his tweeted photo “Let That Sink In.” In the time since (has it really only been fourteen weeks?), Musk has slashed staff and made many internal changes. In a type of “Sunshine Committee” initiative, he’s invited a team of independent journalists to Twitter’s HQ to rifle through internal communications. Musk is letting them uncover what they may. His only proviso is that these journalists must first publish what they discover about the Twitter 1.0 era… on Twitter itself.

And thus, the #TwitterFiles were born. We’re now up to thread Number 15, one of the most interesting ones yet.

In #TwitterFiles 15 published on January 27th 2023, journalist Matt Taibbi documents how an ostensibly bipartisan Washington DC political organization leveraged Twitter to disseminate a mysterious dashboard purporting to reveal the big online narratives that “Russian bots” were amplifying. The dashboard was called “Hamilton 68”, and its name stems from Federalist Paper 68, a treatise warning against foreign influence in elections authored by Alexander Hamilton in 1788. Alexander Hamilton supplied the name, and a thin veneer of high-tech and well-credentialed advisors supplied gravitas.

The organization behind this media tool has one of those “Who Can Possibly Be Against This?” institute names: The Alliance for Securing Democracy (ASD). Its advisory board includes ex-FBI and Homeland Security staffers (Michael Chertoff, Mike Rogers), Obama Administration and DNC officials (Michael McFaul, John Podesta, Nicole Wong), academics, European officials and formerly conservative pundits (Bill Kristol). Taken as a whole, the ASD is composed largely of officials affiliated with the Democratic Party and this nation’s security apparatus. The Hamilton 68 Dashboard project was led by former FBI counterintelligence official and current MSNBC contributor Clint Watts.

From 2017 up until about one week ago, the Hamilton 68 Dashboard was highly regarded, and cited by numerous mainstream media outlets, from The Washington Post to MSNBC to Politifact to Mother Jones to Business Insider to Fast Company. It was the genesis for countless news stories from 2017 through 2022. Maybe you read Politico’s The Russian Bots are Coming. Or the Washington Post’s Russia-linked accounts are tweeting their support of embattled Fox News host Laura Ingraham.

Or maybe you watched one of CNN’s many stories on the growing threat of Russian bots, such as this one:

Or maybe you watched this piece on The PBS News Hour, warning about how “Russians” are amplifying hashtags like #ReleaseTheMemo:

Or maybe you caught MSNBC’s Stephanie Ruhle casually asserting that “Russians are amplifying this hashtag”, an assertion which came from Hamilton 68 output:

No matter where we heard it, millions of us heard it. And read it. “The Russians are amplifying these terms on Twitter!”

Before we go further, let’s put one thing to bed: Is Russian bot activity, at least to some extent, real? Yes, it is.

Clearly, disinformation efforts have been underway since the dawn of communication, through journalism, radio, television, the Cold War and computer networking, and they greatly accelerated in the era of social media. Foreign troll and bot activity has been documented first-hand. For that matter, we in the US are no doubt sending and amplifying messages their way too.

But the fraudulent “Hamilton 68” project by ASD deceptively leveraged public desire for bipartisan monitoring, with only the thinnest of high-tech patinas for partisan political gain.

How so? Here’s the shocker: The only thing behind the vaunted “Hamilton 68” Dashboard was… a list. No, not some algorithmically curated list looking at, say, the IP addresses of tweeters. Nor was it a list of known Russian agents, nor even frequent robotic re-tweeters of Kremlin agitprop.

No, the list was simply a bunch of Twitter accounts that Hamilton 68 staffers hand-picked and then summarily declared to be Russian bots or Russian-affiliated. The Hamilton 68 Dashboard was simply a list of these 648 accounts: right-leaning, for sure, but in no way provably Russian “bots.” A handful of Russian accounts were sprinkled into the 648, but they weren’t even the majority; most of the accounts belonged to merely conservative-leaning US, UK or Canadian citizens. You could just as easily have curated your own 648-account list yourself. Had Hamilton 68 staffers selected a list of teens, their rigorous “analysis” would have implied “The Russians are amplifying the #TidePodChallenge on Twitter.”

Get that? Quite a racket. Assemble a heavyweight panel of credentialed experts. Build a list of accounts that tend to favor the messages of your political opponents. Label it “Russian Disinformation,” and add a veneer of high-tech and state-apparatus gravitas. Critically, keep the methodology secret. Then feed this “advanced dashboard” to the media, and boom — endless “news” pieces about — wouldn’t you know it? — Russian bots preferring GOP-aligned messaging. Opposition-research PR has never been so easy.
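Mechanically, a "dashboard" like the one described above reduces to counting hashtags across a fixed, hand-picked account list. Here is a minimal sketch of that mechanism; the account names and tweets are invented stand-ins, and no real API is involved:

```python
from collections import Counter

# Hypothetical stand-in data. In the real dashboard the tweets came from
# the Twitter API; here they are hard-coded purely for illustration.
watch_list = {"account_a", "account_b", "account_c"}  # hand-picked accounts

tweets = [
    ("account_a", "#ReleaseTheMemo now"),
    ("account_b", "my thoughts on #ReleaseTheMemo"),
    ("account_c", "the weather is nice today"),
    ("account_d", "#ReleaseTheMemo"),  # not on the curated list: ignored
]

def trending_terms(tweets, watch_list):
    """Count hashtags, but only from the curated list of accounts.
    Whatever the list skews toward is what the 'dashboard' reports."""
    counts = Counter()
    for author, text in tweets:
        if author in watch_list:
            counts.update(w for w in text.split() if w.startswith("#"))
    return counts

top = trending_terms(tweets, watch_list).most_common(1)
print(top)
```

The takeaway: the output reflects nothing but the curation of `watch_list`. Swap in a different hand-picked list and the "trending Russian narrative" changes accordingly, which is exactly the methodological hollowness Roth identified.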

According to the Wayback Machine, ASD had been trumpeting the Hamilton 68 Dashboard thusly:

These accounts were selected for their relationship to Russian-sponsored influence and disinformation campaigns, and not because of any domestic political content.

We have monitored these datasets for months in order to verify their relevance to Russian disinformation programs targeting the United States.

…this will provide a resource for journalists to appropriately identify Russian-sponsored information campaigns.

ASD Website, Hamilton 68 Dashboard, 2017-2022 (now updated)

What the ASD primed the media to run with as “Russian disinformation” was nothing more than the thoughts of a group of largely pro-Trump Twitter accounts, hand-picked by a neoliberal think-tank. There was no algorithm, no science, nothing behind it other than subjective judgment.

Worse, Twitter knew about Hamilton 68’s utter lack of legitimacy for five years, and never bothered to directly expose the sham or cut off Hamilton 68’s access to its API. In 2017, Twitter executive Yoel Roth reverse-engineered what the Hamilton 68 Dashboard was doing by looking at its Twitter Application Programming Interface (API) calls. He pulled back the curtain and learned that it was nothing more than a curated list of 648 accounts. On October 3, 2017, Roth wrote: “It’s so weird and self-selecting, and they’re unwilling to be transparent and defend their selection. I think we need to just call out this bullshit for what it is.” Three months later, he wrote that “the Hamilton dashboard falsely accuses a bunch of right-leaning accounts of being Russian bots.”

On October 3 2017, Roth writes to his colleagues:

The selection of accounts is… bizarre, and seemingly quite arbitrary. They appear to strongly preference pro-Trump accounts, which they use to assert that Russia is expressing a preference for Trump even though there’s not good evidence that any of the accounts they selected are or are not actually Russian.

Yoel Roth to colleagues, internal email, October 3 2017

And later, Roth writes “Real people need to know they’ve been unilaterally labeled Russian stooges without evidence or recourse.”

Russian bots were blamed for hyping the #ParklandShooting hashtag, #FireMcMaster, #SchumerShutdown, #WalkAway, #ReleaseTheMemo and more. If you remember any of those episodes, you can probably recall that somewhere in your media diet, someone nudged that the Russians were amplifying this. It was all based upon this phony list.

Taibbi shared a sample of just some of the stories this dashboard ultimately fed:


The ironically named “Politifact” used it as the basis for several stories, including this one. Note that Hamilton 68 is cited as a source:


As with the piece above, basically none of these publications is correcting its stories, or explaining clearly to readers that the Hamilton 68 Dashboard, upon which they generated oodles of pieces, was essentially a sham.

By October 2017, Twitter executive Yoel Roth noticed that a lot of media stories were springing from this disinformation, and internally urged that Twitter make the truth clear. Yet Twitter executives demurred. Taibbi puts it this way: “Twitter didn’t have the guts to call out Hamilton 68 publicly, but did try to speak to reporters off the record. ‘Reporters are chafing,’ said Twitter communications executive Emily Horne. ‘It’s like shouting into a void.’”

Emily Horne, a Twitter communications VP who was among those putting the damper on exposing the sham, would soon become Biden White House and NSC spokesperson.

Yoel Roth comes across as sincere and heroic in his efforts to raise alarm bells from within Twitter in 2017 and early 2018. But Twitter executives like Emily Horne and, presumably, chief legal officer Vijaya Gadde shut him down.

As a result, “journalists” in publications ranging from the Washington Post to Politifact to the New York Times continued to amplify the fake alarm the ASD dashboard generated. Fake news begat fake news, until the White House found it imperative to appoint a new “Disinformation Czar,” a role filled by the memorable (and meme-able) Nina Jankowicz.

Yoel Roth is a fascinating, complex character. By 2020, he had become Twitter’s head of Trust and Safety, a team that made a lot of questionable decisions about deplatforming people who re-shared the Hunter Biden laptop story, including the Press Secretary of the United States. Roth was even involved in Twitter’s decision to permanently boot the president of the United States. Musk at first considered Roth trustworthy (though with different political viewpoints), but by late November 2022, Roth had departed. Roth’s character arc would be a very interesting one to profile for the inevitable “Inside Twitter” documentary.

If you’re looking for the news outlets which were earnestly duped and actually want to be honest and forthcoming with their readers, check to see if they’re reporting on the Hamilton 68 scandal. Are they explaining to their readers the times they relied upon this now-discredited dashboard? Thus far, it’s not encouraging. Neither CNN nor the Washington Post has mentioned “Hamilton 68” so far this year.

A parting thought: whether you like Musk or not, we wouldn’t have known about any of this successful effort to deceive the American public had Musk not purchased Twitter and let journalists look behind the curtain. Had Musk not shelled out $44 billion, we very likely would still be watching and reading breathless stories amplifying how “the Russians are coming, and they sure do like these GOP hashtags” on Twitter. Those claims would be based on a lie. Twitter leadership would know; ASD’s advisory board would presumably know. And no one would say a word about it. Let that sink in.

Read Taibbi’s full thread on Twitter here, complete with screenshots and source material: The Hamilton 68 Scandal.

Introducing Local Headlines, Updated Every 15 Minutes

I’ve built Seattlebrief, a news and commentary aggregator for Seattle: Pacific Northwest headlines, news and views, updated every 15 minutes. It’s now live in beta.

Its purpose is to let you quickly get the pulse of what Seattleites are writing and talking about. It rolls up headlines and commentary from more than twenty local sources from across the political spectrum.

It indexes headlines from places like Crosscut, The Urbanist, Geekwire, The Stranger, Post Alley, Publicola, City Council press releases, Mayor’s office press releases, Q13FOX, KUOW, KOMO, KING5, the Seattle Times, and more. It also indexes podcasts and videocasts from the Pacific Northwest, at least those focused on civic, community and business issues in Seattle.

Seattle isn’t a monoculture. It’s a vibrant mix of many different people, many communities, neighborhoods, coalitions and voices. But there are also a lot of forces nudging people into filtered silos. I wanted to build a site which breaks away from that.

Day to day, I regularly hit a variety of news feeds and listen to a lot of different news sources. I wanted to make that much easier for myself and everyone in the city. Seattlebrief is a grab-a-cup-of-coffee site, designed very intentionally for browsing, not search. Click on a story you’re interested in, and the article opens in a side window. It favors newsfeeds covering civic and municipal issues over sports, weather and traffic.

I’ll consider it a success if it saves you time catching up on local stories, or introduces you to more voices and perspectives in this great city.

How it works

There are so many interesting and important voices out there, from dedicated news organizations like The Seattle Times to more informal ones like neighborhood blogs. I wanted a quick way to get the pulse of what’s happening. Seattlebrief pulls from the RSS feeds of more than twenty local sites, from all sides of the political spectrum: news sites, neighborhood blogs, municipal government announcements, and activist organizations. The list will no doubt change over time.

Many blog sites and news organizations support Really Simple Syndication (RSS) to publish their latest articles for syndication elsewhere. For instance, you can find Post Alley’s RSS feed here. RSS is used to power Google News and podcast announcements, among other things.

RSS is a snippet of structured data that tells aggregation sites: “here are the recent stories,” usually including a photo thumbnail, author information, and a description. Seattlebrief reads these self-declared RSS feeds, currently from over 20 sources in and around Seattle, and regularly checks what’s new. Another job then fetches each page and “enriches” each article with the social sharing metadata used to mark up the page for, say, sharing on Facebook or Twitter.

Think of it as a robot that simply goes out to a list of sites and fetches the “social sharing” info for each of them, then puts them in chronological order (by way of publication date) for you. The list of sites Seattlebrief uses will no doubt change over time.
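For the curious, the two steps described above (read the feed, then enrich each article with its social-sharing metadata) can be sketched in a few lines of Python using only the standard library. This is purely illustrative and not Seattlebrief’s actual code; the feed and page contents below are made up.

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

def parse_rss(rss_xml: str):
    """Pull (title, link, pubDate) out of an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [
        {
            "title": item.findtext("title", ""),
            "link": item.findtext("link", ""),
            "pub_date": item.findtext("pubDate", ""),
        }
        for item in root.iter("item")
    ]

class OpenGraphParser(HTMLParser):
    """Collect the <meta property="og:..."> tags a page declares for social sharing."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if d.get("property", "").startswith("og:"):
                self.og[d["property"]] = d.get("content", "")

def enrich(article_html: str):
    """The 'enrichment' step: extract social-sharing metadata from a fetched page."""
    parser = OpenGraphParser()
    parser.feed(article_html)
    return parser.og

# Made-up sample data, standing in for a fetched feed and article page.
sample_rss = """<rss version="2.0"><channel>
<item><title>Council passes budget</title>
<link>https://example.org/budget</link>
<pubDate>Tue, 01 Mar 2022 10:00:00 GMT</pubDate></item>
</channel></rss>"""

sample_page = ('<html><head><meta property="og:title" content="Council passes budget">'
               '<meta property="og:image" content="https://example.org/thumb.jpg">'
               '</head><body></body></html>')

items = parse_rss(sample_rss)
metadata = enrich(sample_page)
```

In a real aggregator, `parse_rss` would be fed the bytes fetched from each source’s feed URL and `enrich` would be fed each article page; sorting items by their `pubDate` (for example via `email.utils.parsedate_to_datetime`) yields the chronological river of headlines.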


Over at Post Alley, where I sometimes contribute, there was a writers’ room discussion about the Washington Post’s popular “Morning Mix” series. Morning Mix highlights interesting/viral stories around the web.

Sparked by that idea, I wanted to build a way to let me browse through the week’s Seattle-area headlines and commentary more easily. So I built Seattlebrief.

I’d welcome pointers to any key sources I’ve missed. Right now, they must have an RSS feed. And regrettably, some important, thoughtful voices like KUOW decommissioned their RSS feeds long ago. I’m exploring what might be possible there.

Drop me a note.

I’d love it if you checked out Seattlebrief and let me know your thoughts.

War Comes to the Small Screen

Russia’s invasion of Ukraine feels like an altogether new phase of social media in warfare. Maybe it’s the verbs which adorn those buttons: Like. Share. Donate. Block. They invite us in, and whisper: “Decide.”

Russia’s invasion of Ukraine has marked a turning point for the use of social media in war.

To be sure, this is far from the first conflict in which social media has played a key role. The “Arab Spring” of 2010-2011 likely gets that distinction, when hundreds of thousands of citizens in Arabic-speaking nations networked their common cause on Facebook and elsewhere and rose up in democratic protests against their governments. Later that decade, social media played a key role conveying the gripping stories of the more conventional conflicts in Syria and Afghanistan. And in our own battle against Islamic extremism in the 2010s, social media featured prominently in recruitment, terror, propaganda, and victory.

From the Arab Spring of 2010 to today, social media’s membership has soared, from “just” tens of millions of people to nearly 5 of the 8 billion people on earth. Today’s pervasive use of social media in the Ukraine conflict feels much bigger in scope, and there’s something new. The stakes — war in Europe and the potential for World War III — are higher. But what also seems new is that this conflict already seems far more participatory, involving broad segments of society.

We see the besieged and attackers. We see soldiers, citizens, political and corporate leaders, journalists, corporate brands, celebrities and governments, who all have something to say. We see partisans in the fight flock to “user generated content” platforms, from Github to Yelp to Google Maps. Every social product of any size now needs a wartime strategy.

The Washington Post catalogued many examples of Ukrainians using social media to tell remarkable stories, from the everyday citizen moving a land mine to a safe location to an elderly gentleman kneeling before a Russian tank. But Twitter, Facebook and Instagram are no longer just storytelling apps, and the events shaking eastern Europe are not read-only. In many ways, they are calls for our interaction and engagement. We are not yet, thank God, at World War, but in a profound way, all five billion of us on social media are being invited in.

There are buttons with verbs in social media. Share. Donate. Like. Retweet. Protest. Organize. Support. Report. Mute. Reject. Block. Social media enables all of this from afar. These verbs also whisper to us, ever so quietly: Decide.

And decide we must, because to simply scroll on feels heartless. In the twentieth century, the abominable concept of “Total War” declared civilians and associated resources legitimate targets. In the era of social media, we citizens are not just potential collateral damage; the door is open to our becoming collateral participants. Do we walk through it?

Governments have shut down airspace, but not cyberspace. If you are so inclined, you can engage directly with Russian citizens in many corners of the Internet; sites like Duolingo and InterPals still offer the ability to chat with Russian speakers. It boggles my mind that Russia can be firing missiles into Ukraine, and we in the West can be taking unprecedented, aggressive actions which risk cratering its economy, yet we can still engage with its citizenry whenever we’d like.

Unlike even twenty years ago, social media now gives us the means to actually “participate”, at some level, from across the globe, not just to register our support, but to do something related to its outcome.

Wartime communication comes in many forms. There are secret tactical and strategic communications among combatants and allies. There’s propaganda, meant to promote a particular cause or point of view. There are morale-boosting missives and stories from the front to the population. There are psychological operations (“psyops”) waged against the enemy. There’s high-level diplomacy. There’s logistics and production planning. And there’s journalism and documentation for posterity.

Today, social media touches all of these forms, and profoundly changes many of them. That’s because social media has many attributes other media does not: it is global, instantaneous, emotional, participatory, and many-to-many.

We’ve already witnessed a few groundbreaking examples of how these attributes have transformed wartime communication.

Social Media Is Global.

Do you want to contribute to the defense of Ukraine without donning a uniform? There are a variety of non-governmental organizations to which you can donate. But brand new for 2022, the official Ukrainian government Twitter account (@ukraine) has a Bitcoin donation link pinned to its profile:

Yes, that’s right. With a few clicks, you can instantly donate money directly to the government of Ukraine. So long, allied war bonds, or even waiting for your own government to send more aid. Supporting a war effort is now as easy as adding an extra 20% tip at Starbucks.

Or do you want to interact with your adversaries more directly, to try to better understand or inform them, cyber-harass them, or attempt to boot them from a given platform? Hacktivist group Anonymous is encouraging people to write reviews of Russian-based businesses and restaurants to convey messages to the people of Russia, to try to get around state-media control.

Are your desires more juvenile? A TikTok video encourages you to go to Google Maps and re-label Russia’s official embassies as “public toilets.” UPDATE: Google has placed restrictions on this activity:


And pro-Russian activists are currently brigading one of the most popular open source code repositories on GitHub: Facebook’s open source React framework. They’re posting pro-Russia messages.

The point: this is a war involving not just combatants in Ukraine, but those of us in the crowd. Every social product of any scale now needs a wartime strategy.

Social Media is Emotional, Ubiquitous and Instantaneous.

Ukraine’s citizens and leaders are sharing heartbreaking videos directly to us on Instagram and Twitter. They’re telling the stories of heroes and victims, crying out for our help.

These direct video pleas are a far cry from how many of us digested international conflicts decades ago: they’re not just an international interest segment tacked onto a nightly newscast. The pleas are integrated into our daily lives as we scroll through our feeds. These are the compelling stories of 43 million Ukrainians, many of whom speak English. They want and need us involved.

We can also watch things unfold as never before. Heard about the 40-mile-long Russian convoy lumbering toward Kyiv? We can follow along via a street-level view on Google Maps. Want to watch what’s happening live, via dozens of webcams? There’s a website for that.

Humans are a story-telling species. Wartime communication used to rely heavily upon correspondents, filmmakers, military journalists, radio personalities and famed directors to get battlefield news to an audience. Now they are relegated to editorial and summary roles. If you want the very latest information, you rush to Twitter; the nightly newscast operates at a glacial pace by comparison. Even television journalists now spend a great deal of time highlighting what’s being reported on social media.

In World War II, the process of getting video footage to the home front took months. Hollywood director John Ford traveled to Midway Island in early June 1942 with two cameramen. Two days later, on June 4, 1942, they filmed the first wave of Japanese Zeros as they strafed the island. After the battle, Ford sent the film back to the States, where it was developed and hastily edited into a theatrical documentary with voiceover. The result: footage of a battle shown to a home-front audience in record time, a mere three and a half months after the first shots of the Battle of Midway were fired.

Today, not only is storytelling instantaneous, it’s also much more intimate and direct. There’s usually no director. Anyone with a cellphone can tell their own story, and often doing so is more compelling, but fraught with risk of forgery.

In the attention economy, the scarcest resource is our consideration, that which we pay heed to. Nuance takes longer. So quick, shocking, humorous or heartbreaking memes are often what break through.

We’re getting selfie videos directly from the President of Ukraine, via the small screen in our pockets. It’s available everywhere, not just when we’re ready for it. We can be in line at the grocery store checkout, and hear the breaks in his voice through our AirPods. At any time, and at any moment, we can be witness to his steadfast bravery; it’s integrated into our day.

Zelensky is marshaling this breakthrough power capably and creatively. A week ago, he broadcast a powerful speech directly to the Russian people, circumventing journalist intermediaries. With more than 114 million Russians on the Internet, plus the many who were willing to translate, caption and redistribute, the whole world received his message within hours.

To a global audience that has Zoomed its way through the past two years, seeing the human side of a leader in a war-torn nation speak out through our small screens feels at once entirely natural and surreal. It is often unedited and raw. It is profoundly new.

Zelensky is extremely well-suited to this role. President Volodymyr Zelensky is a former entertainer, voice actor and comedian; he was the voice of the Ukrainian version of Paddington Bear. He’s charismatic, his cause is clearly just, and he knows how to speak to the camera. His use of Twitter (where he has 4.3 million followers) and Instagram (where he has 13.7 million followers) has been masterful. We see Zelensky making human, passionate pleas, often arm in arm with compatriots. His warmth and humanity come through clearly to millions.

Opposing him, we see Vladimir Putin, a vestige of the nightly-broadcast, state-television world. He gazes sternly from one end of his 20-foot-long, gold-accented table, bunkered deep in the Urals. He’s formal, rigid, isolated and distant. His mannerisms and demeanor might have been well-suited to the fixed-format communications of the 1980s, when projecting power and formality spoke volumes. But now he seems anachronistic. He leads a superpower, but gets an F for 2020s-era social media presence before billions of people who value authenticity, warmth and story.

Photos: Putin keeps his distance during meetings

With every communication, the people of Ukraine are saying “we are here, on our land, in our homes, and an invader is trying to take it from us brutally.” Their message cc:’s the world. Messages go out to allies and foes alike. Citizens and leaders of Russia and Belarus are watching.

Russia, meanwhile, is going through its own transformation of media consumption. State television’s former dominance of news is slipping, and the information divide is highly age-weighted: older citizens are much more likely to still pay attention to state television, while the young are much more likely to use the Internet and social media.

The outcome? Note the average age in this photograph from Wednesday’s anti-war protests in St. Petersburg, Russia:

It’s Many-to-Many

Over the weekend, Ukraine’s minister of digital transformation, Mykhailo Fedorov, reached out directly to Elon Musk to request Starlink (satellite-delivered Internet) terminals from SpaceX, so that his government — and presumably military and resistance groups — would be able to communicate in the likely event of widespread communications outages.

Fedorov wrote, “While you try to colonize Mars – Russia try to occupy Ukraine!” on February 26th. Within hours, from halfway across the planet, Elon Musk responded: “Starlink service is now active in Ukraine. More terminals en route.”

And then, as if ordered up via Amazon, a planeload of Starlink terminals arrived on the other side of the world two days later. On Monday, a grateful Fedorov tweeted:

Ukraine has thousands of celebrities, corporate leaders and heads of state at its disposal. In short, while Russia has military might, Ukraine has the attention and willing participation of the biggest stars of the attention economy.

Today, all 4.8 billion of us on social media can be both broadcaster and receiver. Social media can help a single leader rally a nation, much like broadcast TV. But what’s new is that this is the first time senior government officials have been able to directly and publicly call on key resource-owners in civilian life for critical supplies, and see them instantly delivered, even from across the globe.

At this writing, Ukraine may be headed for a long insurgency. As they need resources, officials and guerrilla leaders won’t need someone to find the phone number of some official; they can simply make these requests publicly. It’s not only much faster; it has the added advantage of securing a near-instant, public affirmative.


Perils and Risks

We’ve already seen social media being used for “astro-turfing,” disinformation, and forgeries in this war. One of the major shortcomings of social media is that consensus can masquerade as truth. And it is likely to get far worse, since deep fake technology makes forgeries much more convincing. Given that this may well become a protracted occupation and insurgency, expect far more psychological operations via social media as Russia attempts to convince the public of the righteousness of its cause.

Early Days

All of this is playing out less than two decades since social media as we know it began. Facebook was founded just eighteen years ago, and Twitter sixteen. What’s ahead is even more acceleration and interconnection — and security risks, forgeries and more. It makes me wonder about how this technology might have shaped prior wars. The colonists had no way to reach the King of England or France or powerful potential benefactors during our own Revolution. Would history have turned out differently if they did?

Even contemporary revolutions in wartime communication seem quaint by comparison. Many of us remember the moment thirty-one years ago when CNN’s Peter Arnett stood atop a Baghdad building and broadcast the first live television coverage of the United States’ opening salvo in Operation Desert Storm, ushering in a new era of 24×7 cable news. We watched in real time as the bombs dropped, and saw a major invasion take place via our television sets. But we couldn’t influence its outcome; we were fully bystanders. Broadcasters could infer our engagement, but they couldn’t discern it story by story. And but for taxes and care packages, we certainly couldn’t join in to the degree we can today.

The opportunities and perils that social media presents during the Russia-Ukraine conflict feel like an even greater leap than that which thrust 24×7 cable news to prominence. This isn’t the very first conflict of the social media age, but is altogether new: it is pervasive, at massive scale, and participatory.

It’s Not All Bad News: Four Monumental Advancements in Tech in 2021

Have the past few years of tech news gotten you down? Here are four recent advancements in tech you may have missed.

[cover image by Nicolas Bouvier]

Theranos has fallen apart, and with it, a lofty dream of “never having to say goodbye too soon.” Its founder is now looking at prison. Remote schooling is now seen as largely a bust. Instagram and Facebook are causing depression in teens. Purveyors of junk “science” and misinformation have let us down through a global pandemic, and tech has in some ways amplified the ability to misinform. Social media and cybercrime risk undermining the very foundations of government. Questions linger about the wisdom and safety of leading-edge gain of function research, as well as its potential, accidental role in the greatest world health crisis in our lifetime.

Is the tech news starting to get you down? Here are four reasons for hope.

1. Renewables have made staggering gains over the past decade, and usage is accelerating

Amidst all the climate doom, we need to recognize that we are, in fact, dramatically changing our ways. According to the New York Times, renewable energy sources now account for nearly 21 percent of the electricity the United States uses, up from about 10 percent in 2010. Notably, this trend has continued through both Democratic and Republican administrations. That’s astonishing progress in just a decade.

The US Energy Information Administration (EIA) tracks the energy consumption by source. As you can see, the areas of greatest growth are all renewables, and America’s reliance upon coal has plummeted over the past decade:

Source: U.S. renewable energy consumption surpasses coal for the first time in over 130 years – Today in Energy – U.S. Energy Information Administration (EIA)

Solar power in particular is poised for near-exponential growth. Even though manufacturing and supply-chain disruptions have held this growth back, the world still added nearly 290 gigawatts of renewable capacity during 2021. A single gigawatt is enough to power roughly 700,000 homes, or about ten million light bulbs. The EIA expects solar to account for nearly half of all new US electricity-generating capacity in 2022, and the IEA notes renewable electricity generation accelerating almost 60% beyond its average rate over the past five years. That’s great news.

Read more: Oil Companies Are Collapsing Due to Coronavirus, but Wind and Solar Energy Keep Growing – The New York Times

2. Up in the Sky! The James Webb Telescope Is Set to Reveal the Heavens

On Christmas Day 2021, NASA blasted $10 billion worth of the largest and most complex observatory ever built into space.

The James Webb Space Telescope is a large, infrared-based instrument, more than 100 times as powerful as the Hubble Space Telescope. It will allow scientists to peer deeper into the history of our universe than ever before. Scientists hope it will resolve many unknowns in our record of the universe, in particular what happened in the first 400 million years after the Big Bang. There’s also the tantalizing possibility it might help identify distant worlds on which alien life is feasible.

What’s more, it’s a much-needed sign of multinational cooperation. We should applaud the international partnership that made it possible; it’s a collaboration between NASA, the European Space Agency, and the Canadian Space Agency.

Webb has unfolded and will now travel 1 million miles, then calibrate its instruments. At this writing, all systems look nominal. By late March, researchers hope it will be capable of sending back its first images.

Webb’s largest feature is a tennis-court-sized, ultralight sunshield, which reduces heat from the Sun more than a million-fold while blocking enough light for Webb’s observational instruments to peer deep into dark space. But there are many other innovations that make this possible. Watch this video and be awestruck by the ingenuity of humankind:

3. SpaceX’s Starlink Has Gone Live, a Dramatic Leap for Rural Connectivity

Speaking of space, but sticking much closer to terrestrial needs, there is now a network of low-orbit satellites that’s capable of delivering broadband Internet connectivity to pretty much anyone, anywhere on the planet.

“So what?”, you say. After all, there have been past vendors of satellite connectivity. And if you’re a city or suburban dweller, you probably have had cable or fiber optic access for quite some time. First, try to remember the days of dialup. There are still huge swaths of the country and of course the world that don’t have broadband.

This is a map of America’s broadband problem – The Verge

Yes, DISH Network and others exist. But Starlink’s satellite network is sixty times closer to Earth, which means it provides a very fast, low-latency connection, fast enough for video and audio streaming. That is, you can make phone calls, do Zoom calls and host live video where it was never possible before. For millions around the world, and for more of the planet than is currently reachable via wired broadband, it’ll be like upgrading from dialup speeds to broadband.

The opportunities for rural connectivity, for forestry, desert and tundra research and data relay, and even for live underwater oceanographic feeds have advanced many-fold. There will soon be no place on the planet where it’s not possible to get broadband connectivity, and that’s critical for a lot of researchers, first responders, farmers, healthcare facilities and remote communities.

Starlink’s goal is to sell high-speed internet connections to anyone on the planet. After years of development and an $885 million FCC subsidy awarded in 2020, Starlink now has a fleet of nearly 2,000 satellites overhead.

There is a downside, however. Starlink’s large network of low-earth-orbit satellites is more visible from Earth than traditional, higher-altitude satellites, and it has earned justified criticism and concern from environmentalists and astronomers.

SpaceX is aware of this and has piloted many mitigation steps to make the satellites near-invisible, such as sun shades which fold down to block light bounce-back, and orienting the satellite’s main “fin” directly toward the sun. The company claims its pilot tests are quite successful, and that its satellites are now near-invisible to the naked eye. I file this under “wait and see,” but broadband-possible-everywhere is a major achievement with tremendous leverage for good.

4. Artificial Intelligence Cracks A Nagging, Vital Molecular Question: AlphaFold

It has been hailed as “the most important achievement in Artificial Intelligence ever.” No, they’re not talking about the terrific Netflix recommendations you’re getting during this pandemic. They’re giving this lofty praise to AlphaFold 2, a highly accurate model for protein folding. It’s not only the most important achievement in AI ever; it’s arguably the most important computational biology achievement ever.

Proteins are essential in just about all the important functions in our body: they help us digest our food, build our muscle, encode our genetic signals and more. Viruses are a small collection of genetic code (either DNA or RNA), surrounded by a protein coat.

In short, proteins are at the core of our biology. And we can influence proteins with nutritional intake, exercise, pharmaceuticals, enzymes and more. But a key question: how do proteins physically take shape at the molecular level, and how are they most likely to interact with biological entities like enzymes or pharmaceuticals, or other proteins? If we had a computer model that could predict how proteins might take shape, or fold, in such interactions, it could revolutionize the process for developing new therapeutics and diagnosing maladies.

The number of protein types is much larger than you may think. Depending upon cell type, there are between 20,000 and 100,000 different protein types in each human cell.

For nearly 50 years, advances in medicine and pharmaceutical research have been hampered by a key question: “How do proteins fold up?” In 2007, one scientist described this question as “one of the most important yet unsolved issues of modern science.”

Sure, we have computers. From about 1970 through 2010, we tried bottom-up modeling and brute-force approaches. But understanding how proteins fold and unfold is fantastically difficult, because there are so many possibilities. Researchers have estimated that many proteins have on the order of 10^300 possible configurations that would satisfy the constraints. “To frame that figure more vividly, it would take longer than the age of the universe for a protein to fold into every configuration available to it, even if it attempted millions of configurations per second,” writes Rob Toews in Forbes.

Scientists have observed for some time that the way proteins fold is not random, but they couldn’t decipher any sensible pattern or model it accurately. For decades, researchers armed with computers attempted to model the underlying physics of proteins and amino acids to build some kind of predictive model. But the truth is that after decades of such work, it fell short in reliability.

And then along came machine learning, and in particular, deep learning. Machine learning is a computational technique where, rather than hand-writing procedural rules in a sort of “if this, then that” bottom-up way, you take well-scrubbed, well-labeled data sets of inputs and outputs, and use mathematical techniques to “train” a computer to decipher, or learn, the underlying patterns. It’s rather like teaching a dog new tricks: give an input (“sit”) and, when the dog complies, an output (“cookie”), over and over, until the dog learns to associate the input with the output.
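That “learn from labeled examples instead of hand-writing the rule” idea can be shown in a few lines. This toy sketch (my own illustration, nothing to do with AlphaFold’s actual architecture) recovers the hidden rule y = 2x + 1 purely from example input/output pairs:

```python
# A minimal sketch of supervised learning: rather than hand-coding
# the rule y = 2x + 1, we let the model discover it from labeled pairs.
examples = [(x, 2 * x + 1) for x in range(-5, 6)]  # labeled (input, output) data

w, b = 0.0, 0.0           # the model starts knowing nothing
lr = 0.01                 # learning rate: how big each "nudge" is
for _ in range(2000):     # repeat "over and over", like the dog-training analogy
    for x, y in examples:
        err = (w * x + b) - y   # how wrong was the prediction?
        w -= lr * err * x       # nudge the parameters to reduce the error
        b -= lr * err

print(round(w, 2), round(b, 2))  # learned values approach 2 and 1
```

No rule was ever written down; the pattern was extracted from the data, which is the core trick machine learning brings to problems too complex to hand-code.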

Deep learning takes this training process one step further, breaking the simple input-to-output mapping into many layers. This is akin to the way multiple layers of neurons in the brain process raw input (e.g., auditory neurons sensing varying wave pressure, other groups of neurons summing that up into phonetic chunks, still other groups associating the sound with getting a cookie or not), collectively leading to “learning.”
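The layering idea can be sketched concretely. In this illustrative snippet (weights are arbitrary made-up numbers, not trained values), each “dense layer” computes weighted sums of its inputs, applies a nonlinearity, and hands the result to the next layer, just like the chained neuron groups described above:

```python
import math

def layer(inputs, weights, biases):
    """One dense layer: weighted sums of the inputs, then a nonlinearity."""
    return [
        math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

raw_input = [0.5, -1.0, 0.25]                      # raw sensory signal
hidden = layer(raw_input,                          # layer 1: low-level features
               [[0.2, -0.4, 0.1],
                [0.7, 0.3, -0.5]], [0.0, 0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])       # layer 2: final judgment
print(output)
```

Stack dozens of such layers and train all the weights at once, and you get the “deep” in deep learning: early layers pick up simple features, later layers compose them into abstractions.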

The training data for AlphaFold and AlphaFold 2 came mainly from the Worldwide Protein Data Bank, a massive archive of all known protein structures. The work comes from DeepMind, the Google-owned lab behind the AlphaGo project.

There’s a competition — the Critical Assessment of Protein Structure Prediction (CASP), held every other year. Here’s Forbes’ Toews:

AlphaFold’s performance at last year’s CASP was historic, far eclipsing any other method to solve the protein folding problem that humans have ever attempted. On average, DeepMind’s AI system successfully predicted proteins’ three-dimensional shapes to within the width of about one atom. The CASP organizers themselves declared that the protein folding problem had been solved.

Having a computational model for how proteins fold is sure to enable much more rapid advancement in understanding how our biology works. Perhaps fewer tests on animals will be necessary, our understanding of viruses will be more accurate, and effective therapeutics will be more rapidly discovered. It’s an incredibly leveraged discovery for humankind.

Take some time out of your day and watch this terrific overview video about AlphaFold 1.0. Then, note that the advancements even since then have been significant, so much so that the competition hosts now call the problem “solved”:


Musk v. Holmes: “Fake It Till You Make It” Only Sometimes Works

The conviction of Elizabeth Holmes should cause any founder to take stock of their claims: hopes, exaggeration, or fraud?

“First they ignore you. Then they laugh at you. Then they fight you. And then, a jury convicts you on four counts of defrauding investors.”

probably overheard in Palo Alto cafe today

We’re only four days into 2022, and already the denizens of Silicon Valley have received a powerful message about the risks and rewards of making wildly exaggerated product claims. I’m referring of course to Tesla’s blowout report of near doubling (87% increase) in electric vehicle deliveries for 2021, which vastly surpassed analyst estimates. The resultant one-day pop in net worth of more than $31 billion was not a bad outcome for Time Magazine’s newest Person of the Year, Elon Musk.

Oh, you mean that other story?

Well, yes. On the day prior, Elizabeth Holmes was found guilty on three counts of committing wire fraud and one count of conspiracy to commit wire fraud. The Stanford dropout turned multi-billion-dollar entrepreneur turned felon is starting off 2022 on a very different note.

photo from Portland Press Herald: Elizabeth Holmes leaves federal court

Now, we don’t know exactly where Holmes’ jurors drew the line on when ambition and audacity turned to fraud. Was it when she knowingly slapped the logos of Pfizer and GlaxoSmithKline on product endorsement statements without their authorization? Was it when she repeatedly misstated or implied that Theranos devices were already in operational use by the military, even flying in medevacs? Or was it when she neglected to mention that the vast majority of patient bloodwork tests were processed on modified third-party machines, not Theranos-built machines? We don’t know.

But we do know that twelve jurors who heard the evidence unanimously found that what Holmes did constituted fraud. Specifically, they agreed it was three counts of wire fraud (that is, financial fraud using electronic means) and one count of conspiracy to commit wire fraud.

Lance Wade, one of Holmes’ attorneys, argued: “Failure is not a crime. Trying your hardest and coming up short is not a crime.” He’s right. Fraud is not merely overpromising. It requires intentional misrepresentation.

Like most tech entrepreneurs, Elon Musk knows about overpromising. Expressing hopes about the future, or even making very bold claims of what is to come — these are not just allowed, but expected of a good CEO.

But making false, consequential claims about the present or the past, even if one doesn’t personally profit from it, can land you in jail. Musk is no stranger to bold statements. Let’s look at one specific claim.

In May of 2019, he promised that Tesla would have “a million robotaxis on the road by 2020.”

There’s a pretty good argument he must have known a million robotaxis were not likely to be on the road by the end of 2020. After all, this wasn’t a long-range prediction: he made the claim in May 2019, only about a year and a half before the marker he set would come due.

Musk is still 1,000,000 robotaxis short of this vision.

Machine learning (and in particular “deep learning”) advanced quite rapidly from 2015 through 2020, so his bold claims about full-self-driving were made during a period of disruptive advancements. (So too were some of Holmes’ claims: miniaturization and healthcare tech also experienced some leaps of advancement during the first couple decades of this millennium.)

Musk’s repeated claims and teases of “full self-driving by 2020” seemed implausible to most machine-learning researchers (including this occasional ML tinkerer), given the many edge cases and conditions, let alone the practicalities of getting past governmental review and building a million cars.

At the same time, we allow — even encourage and applaud — these kinds of claims and bold promises. Chances are, you agree with me that this big thinking is admirable, desirable and even refreshing, when so much of our society seems to be focused on what we cannot or should not do. If you’ve ever lined up at an Apple store for the latest release, you’re also someone who is drawn to the promise of technology and willing to overlook some of its realities.

Full self-driving is achievable in our lifetime, but he was clearly incorrect about the date and scale. Was Musk’s “million robotaxis by 2020” claim — one of many bold claims he’s made — fraudulent? It certainly benefited the company in the short term. Arguably, it fueled the Tesla “hype cycle” that boosted the stock price, attracted employees, and added to investor and buyer confidence in the nascent electric vehicle sector, all of which returned great value to Tesla. (Disclosure: I’m a stockholder and happy vehicle owner, having purchased my first EV in 2015.)

But no, to me this is not a fraudulent claim, because it was reasonable to believe that revolutionary advancements in training AI models might just enable it, and he wasn’t claiming it was available today, nor lying intentionally about it. Sure, the probability of it coming to fruition in the timeframe he promised was extremely slim (far less than 1%), but it was possible to imagine: 2017, 2018 and 2019 all saw major leaps in machine-learning algorithms and techniques, like Generative Adversarial Networks.

At the time of Musk’s 2019 claims, Tesla’s full-self-driving models were delivering pretty promising rates of improvement with every iteration. But full self driving hasn’t happened. Not yet. And stating even known-to-be-impossible goals about the future, by itself, isn’t fraud.

If you asked most people in 2015 — even scientists in the field — whether we were more likely to witness (a) full self-driving robotaxis on America’s highways or (b) cheaper-than-NASA reusable orbital rockets landing vertically with precision accuracy, most informed scientists and reasonable people would have opted for (a). And yet, the Michael Bay film became real:

Setting ambitious goals is not only legal, it’s essential for visionary leadership. Setting a “Big, Hairy Audacious Goal” (BHAG) was in fact one of the key markers of enduring companies, as Jim Collins chronicled in his business bestseller Built to Last. All across Big Tech, you’ll find people putting BHAGs on PowerPoint decks and chatting them up at conferences and strategy offsites.

In my view, the jury was correct to hand down Holmes’ verdict, which will almost certainly be appealed. But it should cause those of us in tech to ponder, again, the sometimes-ambiguous zone between acceptable hype (what the FTC calls “product puffery”) and outright fraud.

Holmes and Theranos’ biggest failure of duty was to the many patients who were relying upon accurate test results to direct their plans of care. Yes, investors lost money — nearly $1 billion of it. But product errors and known shortcomings, knowingly covered up, could have killed people, and even as it is, might have shortened some patients’ lives.

The prosecution failed to prove the essential patient-harm portion of United States v. Elizabeth A. Holmes, losing all counts related to fraud against patients. Most legal analysts attribute this failure to the narrow lines of inquiry the judge would allow. Complicating matters further, the crown-jewel database of patient tests at this multi-billion-dollar startup somehow became totally inaccessible, which sure has a powerful “dog ate my homework” feel.

For the past decade, “minimum viable product” has been a key development paradigm of software delivery. Get the smallest acceptable version of your product out there. Ship it, even knowingly incomplete, and iterate rapidly, based upon market feedback. But one lesson from Holmes’ downfall is that even though software is eating the world, not every business should be run like a video-game company.

If you’re in healthcare, I’d argue that “minimum viable” must also be compliant with the Hippocratic Oath: “First, do no harm.” Put another way, Mark Zuckerberg’s “move fast and break things” ethos works well in some areas of tech targeting the low-risk, luxurious tippy-top of Maslow’s Hierarchy of Needs (think video games, video streaming services, maybe even social media), but it’s a terrible model if you’re Boeing, Bechtel or Theranos. Broadly speaking, the lower on Maslow’s Hierarchy your new startup focuses, the higher your duty of care and quality assurance.

I don’t fault Musk for overpromising. In fact, I admire him in part because he is such a bold thinker in an era of cynicism, of “we can’t because of the other guy,” of a dare-I-say depressing determinism.

Send rockets up into space, and land them vertically on remote-controlled floating platforms in the ocean? You bet. Reshape America’s transportation system with a better way? Very much in progress. One of the reasons he’s Time’s Person of the Year is that he eschews small thinking and gets big things done. But yes, he also makes big claims that, for now at least, are still in the future.

As Holmes spends the next several years between jail and fighting her appeal, we in high-tech should spend some time thinking about the distinction between fraud, product puffery and overpromising. The line sometimes feels blurry, but what is ambition for tomorrow versus what is reality today should be made clear to investors, customers, employees and the press.

Got any PowerPoints you want to update?