How to Add NPR or CBC’s Headlines Back Onto Your Twitter Timeline

Do you miss NPR’s news on your Twitter feed? Simply follow @NPRbrief on Twitter to get the headlines.

You may have heard that National Public Radio (NPR) decided to stop posting to Twitter last week, after the Musk-owned social media company labeled the NPR account first as “State-affiliated media,” then, within 48 hours, as “Government-funded.” The Canadian Broadcasting Corporation (CBC) made a similar announcement on April 17, 2023.

If you’re interested in re-adding either NPR or CBC’s news headlines to your news feed, I’ve just released bots which tweet out new NPR and CBC stories: @NPRbrief and @CBCbrief respectively.

Both bots are up and running; each simply watches for new stories and tweets them out.

NPRbrief grabs news from NPR’s website across all eight top-level categories: National, World, Science, Health, Climate, Business, Race, and Politics. It updates every 15 minutes, tweeting a maximum of 100 stories per day. It’s fully automated and is hosted on Microsoft Azure.
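For the technically curious, the core logic is simple. Here’s a minimal Python sketch of the dedupe-and-cap loop described above (this is my reconstruction for illustration, not the actual bot’s source; the function and variable names are invented):

```python
# Conceptual sketch of the posting logic: each 15-minute cycle, take the
# story IDs fetched from the site, drop anything already tweeted, and
# respect the 100-story daily cap.

DAILY_CAP = 100

def pick_new_stories(fetched_ids, already_tweeted, tweeted_today):
    """Return the story IDs to tweet this cycle, in fetched order."""
    budget = max(0, DAILY_CAP - tweeted_today)
    fresh = [sid for sid in fetched_ids if sid not in already_tweeted]
    return fresh[:budget]
```

Each selected story would then be formatted into a tweet and added to the already-tweeted set so it is never posted twice.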


Though I’m personally rather critical of NPR’s recent swing leftward, I think it’s extremely important to follow news sources through multiple political lenses. It’s about our only shot right now at knowing what is true. You don’t have to believe the stories coming from any given news source, but it’s useful to see them, and not have them filtered from your life.

I wrote about this in Internet Consensus is not Truth.

As some ideological outlets decide to “depart” Twitter in protest, they only accelerate the filter-bubbling of our lives, and I think it’s useful to push back against that “taking our marbles and going elsewhere” move. There are many forces at work to wall people into silos. When outlets think they can just “take their content elsewhere,” that seems to cut against a key goal of the Internet’s public square.


Just click the “Follow” buttons above. It’s free. Please consider sharing this with NPR and/or CBC-loving friends.

Circling Back to “Don’t Discount Lab Leak Hypothesis”, Part II

Part II of “Circling Back to Lab Leak Hypothesis”

I’ve moved on from Facebook, but wanted to archive for posterity a few long-form posts I made there. This is Part II, a follow-on to Circling Back to a “Don’t Discount the Lab Leak” Post of January 2020 (Part I).

The essence of learning is to be able to update our prior assumptions as new evidence comes in. I got a few things wrong here, and absorbed what I could of the information coming out at the time. In particular, you’ll see my initially credulous assumption that the highly flawed “Proximal Origin” paper had to have some credibility. In truth, that paper — the most cited scientific paper of that year — was fraudulent; this became evident to me a few weeks later.

April 15, 2020

Full text below.

I know that nuance is still in hibernation in America, but I’m going to ask us once again to try to hold multiple competing thoughts in our mind at the same time:

1) It’s important to know how this whole global pandemic started. In terms of deaths and pauses to our lives and economic toll, it’s the greatest global calamity of most of our lifetimes.

Let’s try to keep it that way. To do so, we need to learn how to prevent future such outbreaks, and in particular whether there are lessons to be learned here.

2) Just as Chernobyl and Three Mile Island warranted (actually, demanded) forensic investigation, so too does the COVID-19 outbreak.

3) Just as with Chernobyl and Three Mile Island, there should not be any assumption of malicious intent. But that is not the same as saying we should ignore the question of how it began, nor shut down those who wish to respectfully, scientifically, forensically pursue that important question. As with a post-crash NTSB investigation, if we do not know the cause, we have no chance of minimizing such events in the future. And as with Chernobyl and Three Mile Island, let’s hold to account any governments and officials who were involved in a coverup or who knowingly put people in additional danger, assuming they exist (and it already appears they do).

4) It IS racist to “blame”, say, Chinese Americans, or the billion-plus Chinese citizens who are clearly NOT associated with this in any way. Even if incontrovertible proof were found showing an accidental lab leak scenario, they had NOTHING to do with it. Nothing.

And it IS racist and dangerous to expand criticism or action in any way to various Asian communities. We should NOT do that, and protections and kindness should be foremost. In fact extra kindness is called for.

But that also doesn’t mean avoiding forensic, respectful inquiry.

I am an American-Canadian citizen. I had nothing to do with Three Mile Island. It would be wrong to blame me for Three Mile Island. I would find that very unfair. Yet I definitely am glad we pursued the investigation aggressively as to its origin. Doing so made every nuclear power plant safer.

Nearly every single Chinese citizen (and all Chinese Americans) had nothing to do with this outbreak — AND many, many have been working hard to control it. Chinese heroes have died trying to spread the word; we should know and celebrate them.

5) We should do our best to respect, protect and reach out with kindness to our Chinese American and Asian American friends, because unfortunately, we live in an era where terrible people may wish to exploit this, and already are. But this does not mean that we should stifle forensic investigation.

6) “Engineered in a lab” and “accidentally leaked from a lab” are two entirely different concepts. One does not imply or require the other, and disproving one does not disprove the other. It is entirely plausible that there was an industrial accident in an academic research effort studying a discovered, naturally created, naturally mutated virus, with NO malicious intent whatsoever, perhaps through multiple steps in a chain of possession (e.g., academic researchers injecting it into an animal, a discarded animal specimen not properly secured, the specimen then picked up and re-sold, consumed, or otherwise handled by some unrelated, unknowing person, with the virus jumping zoonotically), that spiraled into a huge global catastrophe.

In fact, I am among those who believe this is one of the most plausible and probable scenarios. It meets Occam’s Razor, it is not contravened by any known evidence, and it is supported by many pieces of circumstantial evidence that we do know: lab proximity; prior warnings from the scientific community about lab safety at these labs; the fact that the lab 300 yards away just happened to be researching bat coronaviruses sharing 90%+ of their DNA with COVID-19; job postings for bat virus researchers at that very same lab, verified on the Internet in November 2019; the fact that the bats carrying those closest viral relatives live hundreds of miles away and are not naturally found in the city of Wuhan; the fact that the wet market in question never sold pangolins or bats; the fact that the leading bat researcher publicly stated she wondered to herself whether the virus came from her lab; the fact that the Chinese government initially said the virus emerged from the sea via seafood (and the DNA shows it did not); internal urgent governmental directives discussing lab safety in early January; the destruction of samples and suppression of information; and several more circumstantial pieces of evidence besides. It’s starting to sound very plausible.

In no way do I think the virus was engineered, nor have I ever felt that way. And DNA evidence certainly points away from it. And no, I also do not think there was any malicious intent by any individual involved — just like Chernobyl and Three Mile Island, where no malicious intent caused the accident.

When humans are involved, accidents happen, even major ones with huge global ramifications. Did it happen here? I think so, and have felt it plausible since January, but don’t know for sure.

It’s not only a very valid but a very IMPORTANT question to pursue.

An accident is just one plausible scenario, and I could certainly be wrong. But we do no favors to science, nor to any hope of preventing future such calamities, by ruling it out, decrying it as racist, or, to a lesser degree, employing the cheap “what are you really saying?” innuendo at this stage. The totality of the evidence I’m aware of, though yes, circumstantial, fairly strongly suggests this might have accidentally leaked from a lab. No evidence that I’m aware of currently contravenes that likelihood. Perhaps it was an accident, a calamitous accident of truly epic proportions. And if so, lab safety MUST be transparently investigated and tightened.


7) All of the above can (and should) happen regardless of one’s views about who should be elected president in 2020.

June 25, 2021

From an opinion piece in the NYT today.

Amazing that three of the signers of the famous Lancet letter declaring near-certainty of natural origin are now not only walking it back but reversing their stance.

Perhaps more important, just a few months ago, Facebook was banning these very statements/stances, and Twitter was slapping them with all kinds of warnings, in large part because they took the Lancet letter speculation as the “definitive” view.

You or I would have run the risk of a permanent ban from social media for saying what these three virologists are now stating on record, such as: “a lab leak is more likely than spontaneous natural origin.”

Note that very little brand new information has come to the fore (at least publicly) since then.

All that’s really changed is that it’s now deemed an acceptable view. And these authors were part of the reason why social media teams determined it WASN’T an acceptable view just months ago.

I hope we can pause a moment and think about that.

January 5, 2021

“It has been a full year, 80 million people have been infected, and, surprisingly, no public investigation has taken place. We still know very little about the origins of this disease.”

Huh. New York Magazine, of all publications, is running this piece today under their banner. This piece goes far further than I would (or do) in speculation about origin.

But is it OK to talk about this now, when just months ago, the possibility that this might have been an accidental lab leak (not even engineered, and certainly not intentional) was treated as an atrocious thing to say?

Yes, it’s a sensitive topic. And we could certainly continue to sweep it under the rug. But there are major benefits to determining the specific origins of the greatest world health and economic crisis in our lifetime.

Just as the NTSB exists to make flight safer for all, so too must this inquiry consider matters of fault-finding (e.g., pilot error, mechanical failure, manufacturing defect). The process isn’t undertaken with a goal of prosecution or retribution, but to make things safer in the future.

We need to give ourselves the permission to discuss it. It’s not a conspiracy theory, it’s just a theory. And we are not only incurious about it, we actively shout down those who want to respectfully consider the possibility. That’s not enlightened. That’s stupid. I applaud publications like New York Magazine for finally starting to allow the discussion to take place within their brand. I wonder though if we would have been even better served to have such an attitude when the evidence was more fresh.

Regardless, I still think the first time we will likely actually talk about this in depth is when the inevitable Hollywood movie comes out in a few years. Only then, when someone else in pop culture has opened the door, can we have the honest and full conversation that’s needed.

Circling Back to a “Don’t Discount the Lab Leak” Post of January 2020

In January 2020, I posted my view that the outbreak of what was to be called COVID quite possibly came by way of lab accident. It generated over 100 comments and ended at least one friendship. I popped by Facebook yesterday to circle back on that thread.

I stopped posting to FB in late 2021 but returned briefly to circle back to an old post.

Hi friends,

Just a brief return to Facebook. I feel compelled to share some thoughts now that our federal government leans ever closer to “lab origin,” which some of you may recall is an issue I’ve talked at length about here in the past.

Dusting off Facebook’s search feature (hey, nice new icons and web refresh, FB!), I see that on January 26th, 2020, weeks before the first American was definitively known to have died of the virus, I posted my strong suspicion, linked below, that the outbreak of what was to be called COVID was likely due to a lab accident.

It’s interesting to review some of the discussion which took place then, and in subsequent posts on the topic.

As we sit here today (March 1, 2023), three years later, the lab-leak hypothesis isn’t some crackpot idea. It’s a majority-held American viewpoint, now publicly endorsed by the FBI (moderate confidence), by the Department of Energy (low confidence), and by more than 70% of Americans. A minority of Americans (~25%, and shrinking) think “natural spillover” was the genesis of the greatest health crisis in our lifetime.

Three years ago, I did not post my own view lightly, nor cavalierly. The president at the time was out calling the virus the “Wuhan virus” and creating an atmosphere of xenophobia. The overall public temperature suggested it was an offensive stance to take. I certainly knew it was not something to be flip about.

In fact, my own confidence was actually stronger than I wrote at the time. And it wasn’t a casual observation.

Editorial note: Even in January 2020, there was ample evidence that WIV was doing research on bat-borne coronaviruses. There was evidence the facility was rather new, that officials had previously expressed safety concerns about it, and that lab leaks elsewhere had happened with alarming frequency. 

It was clear a major debate had raged in the scientific community about the risks and potential benefits of "gain of function" research from 2011-2014. Dr. Anthony Fauci was adamantly and quite publicly on the "pro" side, even writing in the Washington Post that generating potentially harmful viruses is "a risk worth taking." 

This was all knowable in January 2020. I had read all these pieces and more by that time. 

Further, as an applied math major, I was pretty familiar with basic Bayesian reasoning. The chances that a lab studying bat-borne viruses and an outbreak of a novel coronavirus whose closest cousin was a bat-borne virus were independent seemed extremely low. There isn't just geographic coincidence, as Jon Stewart was to humorously note more than a year later, but temporal coincidence. There's also species coincidence and genetic coincidence.

Bayesian math isn't that complicated. It's common sense. In shorthand, assume the lab and outbreak are entirely unrelated (i.e., assume natural spillover). So, of all the cities in the world, why did the outbreak happen in Wuhan? And of all the years that humans have encountered coronaviruses, why 2019? Is it just a coincidence that a "natural" outbreak occurred near a lab about which officials had expressed safety concerns just a few months prior? Is it a coincidence that the outbreak occurred just a few years after the research was known to have been initiated? OK. Then, of all the species, why bats? Of all the SARS coronaviruses, why is this the only one with a furin-cleavage site? Is it just a coincidence that this matches a proposal put forth involving the lab in 2018? Etc.

The odds of all of these "independent coincidences" happening and the bat coronavirus lab in central Wuhan NOT being related in any way are infinitesimal. 

Next, add in just how well-optimized it seemed for human replication right out of the gate. That is unusual. And if it were from a host animal, wouldn’t this highly contagious outbreak have started in some more remote village(s) first?

Then, add in all the adverse inference you can and should draw from China's and the NIH's actions. China did not allow independent inspectors in when it had every possible incentive to prove natural zoonosis. It proceeded to wipe the lab(s) clean without independent monitors to collect evidence. Then add in recent Congressional testimony from former CDC director Redfield that in the fall of 2019 (a) the virus database was taken offline, (b) a new HVAC system was bid out, and (c) lab oversight was changed from civilian to military.

If you believe natural spillover more likely, you're saying that all these things -- all of them -- are coincidental. And since these events would normally be independent, you've got to multiply the probabilities of each of these "coincidences" together, since they all occurred. 

All told, natural spillover versus research-related outbreak are not equal-probability hypotheses; the math skews very strongly in one direction.
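Here’s a toy Python illustration of that multiply-the-coincidences arithmetic. Every probability below is an invented placeholder chosen only to show the structure of the argument, not a real estimate:

```python
import math

# Toy illustration of the "multiply the coincidences" argument.
# Every probability here is an invented placeholder, NOT a real estimate.
coincidences = {
    "outbreak city happens to host a bat-coronavirus lab": 0.01,
    "outbreak year follows the start of that research":    0.10,
    "closest known viral relatives are bat-borne":         0.20,
    "unique furin-cleavage site matches a 2018 proposal":  0.05,
}

# Under the "natural spillover, lab unrelated" assumption, all of these
# must co-occur by chance, so their probabilities multiply:
p_all_by_chance = math.prod(coincidences.values())
print(f"Joint probability if truly independent: {p_all_by_chance:.0e}")
```

The point is structural rather than numeric: whatever plausible values you plug in, requiring many independent coincidences to co-occur drives the joint probability toward zero.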

I worried a bit about Facebook booting me off the platform, and about offending friends/family (though it shouldn’t offend!), so I walked up to the edge of it, and simply cautioned people against throwing the idea in the tin-foil-hat bin.

But I believed it then, and I believe it today. There is basically no concrete evidence supporting the “natural spillover” origin theory, and an enormous amount of evidence — circumstantial and otherwise — pointing in the direction of lab accident. And if lab accident is the origin, that means our government failed us, that all this that we lived through these past three years, didn’t have to happen.

Over the past three years, I’ve watched as people who shared my view on this were vilified, ostracized, deplatformed from social media, misrepresented, distorted, and more.

Sure, staying silent was an option open to me. Why didn’t I?

Well, the best analogy I can think of is an earworm. Some of you may get an “earworm” when you hear a song that sticks with you. For me, for about 3 years, I’ve occasionally thought to myself:


It has been overwhelming at times — millions of people’s lives ended prematurely. More than 60x the number of people killed in the first nuclear explosion. Sons and daughters not being able to say goodbye to their loved ones in person. Suffocation on ventilators. Learning loss. Addiction. Trillions of dollars of capital vaporized. Mandatory masking. Mandatory vaccination. Political tribalism. Friendships destroyed. Businesses and dreams destroyed. And so much more.

And I’ve watched as institutions we should trust — academia, the news media, the CDC, politicians and more — have drifted so far afield in their roles. Some responded well to the crisis — particularly the health agencies of Western Europe. Too many others, including our own, did poorly. Too many politicians took “Never let a crisis go to waste” and opted for the corollary, “Preserve the crisis.”

So many Americans assumed that one’s public stance on COVID origin MUST imply one’s stance about politics, or even one’s inherent “goodness.” I’ve never believed that. These issues are, or should be, entirely separate. But for some reason, people have allowed these issues to be fused.

To so many people, it’s better to be wrong for the right reasons than right for the wrong reasons. I’m sorry, I reject that bargain.

As 2020, 2021 and 2022 progressed, I continued to respectfully share my viewpoint, which ran against most progressives’ viewpoint on it. And many (most?) of my friends are progressive, or at least were.

But nevertheless, I felt compelled to stick to what I thought then and still think today was grounded in more significant evidence. I also thought, and still think, the magnitude of this issue — the greatest health, education, lifestyle and economic disruption in our lifetime — was important to talk about, and to process some of those ideas.

But these sentiments were, and I think still are, a mismatch with social networking. And they’re certainly not ideal for Facebook, interleaved with friendly catch-up notes, as my wife rightly pointed out privately to me with increasing frequency. I departed Facebook at the end of 2021 (it’s been a good decision, and one I’ll soon resume).

But I wanted to return to FB momentarily to say that I appreciate that all of you — my many friends and family members — did NOT do what a lot of other people did to those who felt that lab origin was the most likely source of the greatest health crisis of our lifetime. You did NOT take my contrarian viewpoint on this as any kind of statement about me, or who I am. You did not (for the most part, at least) assume that this implied which team jersey I was sporting, or even if I owned one at all. You may have believed in natural spillover. No doubt some of you may even still believe that today, and that’s OK. (If you are at all interested, I’d be happy to patiently walk through the copious evidence suggesting otherwise, but I’ll leave that for a face to face chat if you’d like.)

A few points:

  1. Forgiveness is powerful. As we move closer and closer to a consensus of lab origin, there will be many people who want to move on to the retribution phase. But all the evidence suggests that it was accidental, and that we in the US are culpable here too, as this research likely would not have happened were it not for our misguided, impossibly tragic proactive enablement of it.
  2. Please try not to let politics or tribal affiliation keep you from what you think to be true. Break out of your tribe — there are many forces pushing people to choose team A or B. Neither owns the truth. The media has become ever more interested in AFFIRMING, not INFORMING. It’s about engagement now, and nothing engages more than affirmation and outrage. It’s up to you to be your own news editor. Do you have enough respectful dissent in your information diet?
  3. Accidents happen, even catastrophic ones. Never once have I ever implied, nor do I believe, this catastrophe was intentional. I think the research was well-intentioned, but safety precautions lax. SARS, for instance, has leaked from labs multiple times, and from a lab-safety standpoint, this is no different. Chernobyl, Deepwater Horizon, Exxon Valdez and Three Mile Island were all unintentional. I could EASILY envision myself as an earnest medical researcher in a lab in China, unknowingly infected, visiting a market on my commute home. There will be a time to review the legacies of Anthony Fauci, Francis Collins and others, who likely were quite proximal in funding and later obfuscating the research which went on.
  4. What do we do with the knowledge that it’s likely of lab origin? It’s absolutely gob-smacking to basically know, with high probability, that none of this needed to happen. None of it. The deaths, the trillions of dollars of capital vaporized. The Zooms. The masking. The vax mandates. The inflation. The arguments and fissures in our very social fabric.
  5. Do you realize we are STILL funding EcoHealth Alliance with our tax dollars? (And many of the scientists enlisted to debunk the lab-leak hypothesis have been granted millions of dollars by the NIH. And Fauci’s personally designated successor now heads NIAID. And. and. and.)

But we can take sensible action.

For one, if there’s ever been a role for Congressional oversight, the premature death of millions certainly calls for it. Second, on a practical level, maybe let’s not locate BSL facilities in major metropolitan areas. There are in fact hundreds of these labs around the world, and we need to consider their existential risk. Third, let’s determine how this research was funded and approved despite a clear presidential moratorium in effect at the time (2014-2017). The evidence strongly suggests the moratorium caused federal advocates to look to a third-party packager to continue this research abroad, in what turned out to be much more loosely supervised settings. We need limits to ensure this newly unlocked technology (ACE2 mice, etc.) stays in responsible hands.

We clearly need to overhaul institutions that failed us (NIH, CDC in particular, but also, it’s been tremendously disappointing to see medical organizations and even medical schools being captured by ideologues.) The role of public health should be to help navigate the path of least overall harm. It failed to do so.

We need Congressional oversight, and it will continue to be political. But let’s try to stick to the science and probabilities about it.

Anyway, it’s been a crazy three years.

Some of you may find issues like Climate Change existential and all-consuming, because you are good people, and you care. For me, to be quite frank, I think this issue has a much greater probability of being an existential risk in the next 250 years if we do nothing. It mattered to know how Chernobyl and Deepwater Horizon happened, and how and why planes crash. We have the NTSB for a reason.

I appreciate that nearly all of you are still my friends & family. You may not think Facebook is the right forum for this, and, well, you are right. But I did want to circle back to you and close the loop on this thread, since it’s now not just in the Overton Window, it is a majority-held American view.

The essence of learning is to be able to update one’s own prior assumptions as new evidence comes in. We should not let political tribalism prevent us from doing so.

As we have seen time and time again with COVID — whether it’s relative risk for the young vs. old, the cost/benefit of remote schooling, the strength of natural immunity, how much vax mandates work or don’t, whether mandates or informed consent are superior — the stakes are pretty enormous, and what we are told may not be precisely what is true.

To see just how insane it’s all become, try this counterfactual: Imagine if Trump, a germaphobe, had forced a full nationwide lockdown in 2020, remote schooling, mask mandate, mandated shots, etc. as harshly as he could, and was -adamant- it was natural spillover (“bat soup” has always been more racist to me, than well-intentioned lab worker gets infected, as any of us might.) What would the Democrat position be today?

I am quite well, luckily, and though the above might sound like the rantings of a madman, I’m fine and happy. The past three years have taken a toll on everyone, though a far lighter one on me than they might have. I’m extremely optimistic for what’s ahead.

Love to you all. Stay well,


Uploading Images on Paste in Typora

Do you use Wordpress? Consider using Typora and the free image uploader “upgit” to make embedding images automatic.

I’m a big fan of using a very simple user interface for writing. It helps clear the mind.

I generally draft articles using Typora, and have increasingly turned to markdown format for my articles. When I’m ready to publish, I copy and paste them into my WordPress-powered blog. Since I work on both Mac and Windows, I like to keep my draft articles on a personal OneDrive, and then I can edit them on either a laptop or desktop when I want to.

With this system, adding text, headings, formatting and tables works really well, but images are one thing that doesn’t quite work out-of-the-box. That’s because as you write, Typora saves a pasted image locally to your hard disk, so when you paste your draft into WordPress, the links will of course still point to local files, breaking those images when posted. You can manually go through and upload them all, but that’s a hassle when it comes to publish time.
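Concretely, a pasted image in the local draft looks something like the first link below, and needs to become something like the second before WordPress can render it (the path and URL here are purely illustrative):

```markdown
![screenshot](typora-user-images/image-001.png)

![screenshot](https://raw.githubusercontent.com/your-username/media/master/image-001.png)
```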

Luckily, Typora has a great feature which lets you customize what happens when an image is pasted into the editor. We’ll use this feature to tell it to call a utility called “upgit” to upload the image to a service provider.

Oh, and I’ve chosen Github to host these images — because they let you put a lot of stuff up there for free — but upgit supports a bunch of other image-hosts as well.

Upgit to the Rescue

Go to pluveto/upgit on Github to download and install the version appropriate for your operating system. There’s a configuration file you need to create in the folder you put it in.

I have a Mac and a Windows machine.

On my Mac, I put upgit into the /Applications folder. On Windows, I have a single folder I use for utilities, called “/bin”. I created a folder, “/bin/upgit” and placed the upgit executable in that folder.

If you’re a Mac user, you’ll need to rename your download to simply “upgit”, and then do the following to make it executable:
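For example, assuming you downloaded and renamed the binary in ~/Downloads and want it in /Applications (adjust both paths to match your setup):

```shell
# Move the renamed binary into place and mark it executable
mv ~/Downloads/upgit /Applications/upgit
chmod +x /Applications/upgit
# If macOS Gatekeeper blocks the first run, clear the quarantine flag:
xattr -d com.apple.quarantine /Applications/upgit
```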

You’re almost done, but you also need a config.toml file (the repo gives an example), which at a minimum needs your Github username, a Personal Access Token, and the name of the repository to which to save images. I created a public repo called “media” to hold the media.
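Mine looks roughly like the following; every value here is a placeholder, and the field names should be checked against the sample config in the upgit repo, which is authoritative:

```toml
default_uploader = "github"

[uploaders.github]
username = "your-github-username"
repo     = "media"                    # the public repo that will hold images
branch   = "master"
pat      = "ghp_xxxxxxxxxxxxxxxxxxxx" # your GitHub Personal Access Token
```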

Once you’ve got upgit configured to upload images from your local directory, all you need to do is tell Typora to call out to “upgit” when an image is pasted into the file. To do this, go to Typora > Settings:

Go to the “Images” panel, set “When Insert Images” to “Upload image,” choose “Custom Command” as the uploader, and enter the full path to your upgit executable as the custom command.

Voila! Now, when you paste an image from the clipboard into Typora, there’s a very short delay while it automatically (1) uploads the image to your Github repo and (2) changes the link reference to that remote image! Fantastic. Simple, and (with the exception of the affordable Typora price tag), free.

#TwitterFiles: The Complete List

An index of all the Twitter Files threads, including summaries.

Twitter Files: The Complete List

The Twitter Files are a set of Twitter threads based on internal Twitter Inc. documents that were made public starting in December 2022. Here’s a complete list as of this writing. I offer my own subjective summary of each, but I urge you to visit each thread and form your own opinion. I’ll attempt to keep this index up to date as new installments come in.

At the end of this index, you’ll see some “meta” reporting, including Congressional testimony, interviews with authors and more.

Nomenclature: I refer to the pre-Musk era at the company as Twitter 1.0. That runs from the founding of Twitter through late October 2022.

1. Twitter and the Hunter Biden Laptop Story, Dec 2, 2022

Summary: Twitter blocked the New York Post from sharing a bombshell October 2020 story about the contents of Hunter Biden’s laptop, just prior to the 2020 US presidential election. It also suppressed people re-sharing the story, including the White House Press Secretary. Twitter attempted to justify this under its “hacked materials” policy, even though there was considerable debate about whether that policy legitimately applied.

1a. Twitter Files Supplemental

Summary: The Twitter Files 1 thread was delayed by the surprising revelation that then-employee Jim Baker, former FBI General Counsel and at the time Twitter’s Deputy General Counsel, had been reviewing all materials before handing them to the journalists Musk invited to Twitter HQ. (Musk let Baker go.) Bari Weiss uncovered the Baker story.

Discussion: (#TwitterFiles)

2. Twitter’s Secret Blacklists: Shadow Banning and “Visibility Filtering” of users, Dec 8, 2022

Summary: Was Twitter 1.0 “shadow-banning?” Twitter executives Jack Dorsey and Vijaya Gadde have frequently claimed that Twitter does not shadow-ban, but multiple tools exist within Twitter to limit the tweet distribution and visibility of a given account. “Do Not Amplify” settings exist, as do several settings around propagation of tweets to others.

Discussion: (#TwitterFiles2)

3. The Removal of Donald Trump Part One: Oct 2020-Jan 6 2021, Dec 9, 2022

Summary: On January 8th, 2021, Twitter summarily banned the 45th President of the United States from its platform. What led up to that decision, and what were some of the internal conversations surrounding it? Part 1 of 3.

Discussion: (#TwitterFiles3)
4. United States Capitol Attack January 6th 2021, Dec 10 2022

Summary: The ban of Donald Trump from Twitter stemmed directly from the January 6th 2021 attack on the United States Capitol by supporters/protestors/rioters. The stunning event led Twitter executives to finally make the call they had long discussed. Part 2 of 3

Discussion: (#TwitterFiles4)
5. The Removal of Trump from Twitter, January 8th 2021: Dec 12, 2022

Summary: Trump was banned from Twitter on January 8th, 2021. Though Twitter 1.0 was always adjusting discussion rules on the platform, it’s notable that on January 7th, Twitter staff adjusted several key rules to allow for and justify the banning of the then-President. Part 3 of 3

Discussion: (#TwitterFiles5)
6. FBI & Hunter Biden Laptop, Dec 16, 2022

Summary: The FBI attempted to discredit factual information about Hunter Biden’s foreign business activities both after and even before the NY Post revealed the contents of his laptop. Why would the FBI be doing this? And what channels existed between the FBI and Twitter 1.0?

Discussion: (#TwitterFiles6)
7. Twitter, The FBI Subsidiary, Dec 19, 2022

Summary: Twitter’s contact with the FBI was constant, pervasive, and both social and professional. A surprising number of communications from the FBI included requests to take action on election misinformation, even involving joke tweets from low-follower and satirical accounts. The FBI asked Twitter to look at certain accounts, suggesting that they “may potentially constitute violations of Terms of Service.”

Discussion: (#TwitterFiles7)
8. How Twitter Quietly Aided the Pentagon’s Covert Online PsyOp Campaign, Dec 20, 2022

Summary: While they made public assurances suggesting they would detect and thwart government-based manipulation, behind the scenes Twitter 1.0 gave approval and special protection to a branch of the US military related to psychological influence operations in certain instances.

Discussion: (#TwitterFiles8)
9. Twitter and “Other Government Agencies”, Dec 24 2022

Summary: The FBI responds to Twitter Files 7, vigorously disputing some of the framing and reporting. Taibbi responds to FBI communication and press releases, and further shares internal documents related to FBI and “other government agency” correspondence.

Discussion: (#TwitterFiles9)
10. How Twitter Rigged the COVID Debate, Dec 26, 2022

Summary: David Zweig illustrates how Twitter 1.0 reduced the visibility of true but perhaps inconvenient COVID information, and discredited doctors and other experts who disagreed.

Discussion: (#TwitterFiles10)
11. How Twitter Let the Intelligence Community In, Jan 3, 2023

Summary: Twitter 1.0 responds to governmental inquiry regarding some Russian-linked accounts, attempting to keep the governmental and press focus on rival Facebook.

Discussion: (#TwitterFiles11)

12. Twitter and the FBI “Belly Button”, Jan 3 2023

Summary: Twitter 1.0 works diligently to resist acting on State Department moderation requests. In the end, it allowed the State Department to reach it via the FBI, which FBI agent Elvis Chan calls “the belly button” of the United States government.

Discussion: (#TwitterFiles12)
13. Twitter and Suppression of COVID Vaccine Debate, Jan 9 2023

Summary: Scott Gottlieb, a Pfizer board member, used his influence to suppress debate on COVID vaccines, including from the head of the FDA. Twitter 1.0 frets that evidence of natural immunity’s effectiveness might damage vaccine uptake, and slaps a label on a key tweet from former FDA commissioner Brett Giroir touting the strength of natural immunity.

Discussion: (#TwitterFiles13)
14. The Russiagate Lies One: The Fake Tale of Russian Bots and the #ReleaseTheMemo Hashtag, Jan 12 2023

Summary: On January 18th 2018, Republican Congressman Devin Nunes submitted a classified memo to the House Intelligence Committee listing abuses at the FBI in obtaining surveillance approval of Trump-connected figures. His memo also called into question the veracity and reliability of the Steele “Dossier.” #ReleaseTheMemo started trending, and Democrats attempted to discredit it by claiming it was being amplified by Russian bots and trolls, citing Hamilton 68, a dashboard powered by the Twitter API (the subject of Twitter Files #15, next in the series). Though Nunes’ assertions would eventually be basically fully verified in a report by the Justice Department, a significant PR campaign was launched to discredit the memo, labeling it a “joke.” This TwitterFiles thread documents Democrats’ requests to brand the #ReleaseTheMemo hashtag as being of Russian origin/amplification, and Twitter’s compliance with those requests. The important bit: Twitter executives knew from about 2017 onward that Hamilton 68 was fraudulent, yet did nothing to discredit it in the media, allowing this DNC-message-benefitting sham to continue.

Discussion: (#TwitterFiles14)
15. Move Over, Jayson Blair: Twitter Files Expose Next Great Media Fraud (Hamilton 68 Dashboard), Jan 27 2023

Summary: This thread delves into the Hamilton 68 dashboard referenced in TwitterFiles 14 above. Twitter knew as early as October 2017 that it was simply pulling tweets from a curated list of about 650 accounts, and also knew that very few of those accounts were actually Russian. They knew that the media and Democratic officials were citing the Hamilton 68 Dashboard as somehow credible. Though Twitter executive Yoel Roth tried several times to raise internal concern about the integrity of this tool, he was overruled within Twitter, and Twitter 1.0 never directly discredited the tool or explained how it worked.

Discussion: (#TwitterFiles15)
16. Comic Interlude: A Media Experiment

Summary: Matt Taibbi notes how little mainstream media coverage there is of TwitterFiles revelations when they are damaging to the Democrats, even though those same outlets published numerous stories on Trump’s request to get Chrissy Teigen removed from the platform. New revelations are shown about Maine Senator Angus King (I) calling for suspension of a slew of accounts for spurious reasons, and a request from Representative Adam Schiff (D)’s staff to stop “any and all search results” related to certain keywords. Taibbi notes how the mainstream media has utterly ignored the Schiff requests and what that says about the First Amendment risks presented by government-big-tech cooperation.

Discussion: (#TwitterFiles16)
17. New Knowledge, the Global Engagement Center, and State-Sponsored Blacklists

Summary: Taibbi reports on an effort by “DFRLab,” an entity funded by the “Global Engagement Center” (GEC), a shadowy part of the US federal government, to deplatform a large group of people. The GEC/DFRLab submitted a list of 40,000+ accounts it wanted deplatformed under the guise that they were “paid employees or possibly volunteers” of India’s Bharatiya Janata Party (BJP), but the list included lots of everyday Americans. Taibbi characterizes these requests as “State Sponsored Blacklists,” and from the data shared, it’s rather hard to challenge that provocative label. GEC denies it uses US tax dollars to try to get US citizens deplatformed, but the list clearly included Americans. Taibbi explores the requests in detail, along with some internal discussion which resulted, and lets the reader ponder what these requests suggest about government stances toward free speech, and the “weaponization” of the word “disinformation” for political aims. (For me, I continue to ask: would we know any of this had Elon Musk not purchased Twitter?)

Discussion: (#TwitterFiles17)
18. Statement to Congress

Summary: On March 9 2023, Matt Taibbi and Michael Shellenberger testified before Congress about the network of third parties the federal government has been involved in paying, which in turn were serving up blacklist requests to Twitter.

Michael Shellenberger details it in this 68-page testimony to Congress.

Discussion: (#TwitterFiles18)
19. The Great COVID-19 Lie Machine

Summary: The Stanford Virality Project (VP) flagged and pushed to censor several threads and accounts writing “true but inconvenient to the narrative” stories surrounding COVID-19, such as the strength of natural immunity, the fact that vaccination does not stop the spread, and the existence of actual adverse vaccine side-effects. It appeared to have full support from within the US government. Taibbi documents how the narrative became more important than what the facts were saying at the time, and how the Stanford Virality Project seemed more interested in narrative enforcement and speech suppression than in the principles of the First Amendment.

Discussion: (#TwitterFiles19)
Complete List of “Twitter Files” Threads


MT: Matt Taibbi, Racket News: @mtaibbi

MS: Michael Shellenberger, Michael Shellenberger on Substack: @shellenbergermd

BW: Bari Weiss, The Free Press, @bariweiss

LF: Lee Fang, The Intercept, @lhfang

AB: Alex Berenson, Alex Berenson on Substack, @alexberenson

DZ: David Zweig, The New Yorker, New York Times, Wired, @davidzweig

Congressional Hearings

March 9, 2023: Primary subject – federal involvement in censorship
February 8, 2023: Primary subject – former employee testimony, Hunter Biden laptop

Meta-Story: Behind the Scenes, In the Authors’ Words

Our Reporting at Twitter, Bari Weiss, The Free Press, December 15 2022

Interview with Matt Taibbi, Russell Brand:

Wait, Twitter Knew The “Russian Bot” Narrative Was Fake… For Five Years?

In the most explosive Twitter Files yet, Matt Taibbi uncovers the agitprop-laundering fraud engineered by a neoliberal think-tank.

It’s been a little more than three months since Elon Musk burst into the Twitter headquarters in San Francisco, bathroom sink in tow, wryly captioning his tweeted photo “Let That Sink In.” In the time since (has it really only been fourteen weeks?), Musk has slashed staff and made many internal changes. In a type of “Sunshine Committee” initiative, he’s invited a team of independent journalists to Twitter’s HQ to rifle through internal communications. Musk is letting them uncover what they may. His only proviso is that these journalists must first publish what they discover about the Twitter 1.0 era… on Twitter itself.

And thus, the #TwitterFiles were born. We’re now up to thread Number 15, one of the most interesting ones yet.

In #TwitterFiles 15 published on January 27th 2023, journalist Matt Taibbi documents how an ostensibly bipartisan Washington DC political organization leveraged Twitter to disseminate a mysterious dashboard purporting to reveal the big online narratives that “Russian bots” were amplifying. The dashboard was called “Hamilton 68”, and its name stems from Federalist Paper 68, a treatise warning against foreign influence in elections authored by Alexander Hamilton in 1788. Alexander Hamilton supplied the name, and a thin veneer of high-tech and well-credentialed advisors supplied gravitas.

The organization behind this media tool has one of those “Who Can Possibly Be Against This?” institute names: The Alliance for Securing Democracy (ASD). Its Advisory Board includes ex-FBI and Homeland Security staffers (Michael Chertoff, Mike Rogers), Obama Administration and DNC officials (Michael McFaul, John Podesta, Nicole Wong), academics, European officials, and formerly conservative pundits (Bill Kristol). Taken as a whole, the ASD is composed largely of officials affiliated with the Democratic party and this nation’s security apparatus. The Hamilton 68 Dashboard project was led by former FBI counterintelligence official and current MSNBC contributor Clint Watts.

From 2017 up until about one week ago, the Hamilton 68 Dashboard was highly regarded, and cited by numerous mainstream media outlets, from The Washington Post to MSNBC to Politifact to Mother Jones to Business Insider to Fast Company. It was the genesis for countless news stories from 2017 through 2022. Maybe you read Politico’s The Russian Bots are Coming. Or the Washington Post’s Russia-linked accounts are tweeting their support of embattled Fox News host Laura Ingraham.

Or maybe you watched one of CNN’s many stories on the growing threat of Russian bots, such as this one:

Or maybe you watched this piece on The PBS News Hour, warning about how “Russians” are amplifying hashtags like #ReleaseTheMemo:

Or maybe you caught MSNBC’s Stephanie Ruhle casually asserting that “Russians are amplifying this hashtag”, an assertion which came from Hamilton 68 output:

No matter where we heard it, millions of us heard it. And read it. “The Russians are amplifying these terms on Twitter!”

Before we go further, let’s put one thing to bed: Is Russian bot activity, at least to some extent, real? Yes, it is.

Clearly, disinformation efforts have been underway since the dawn of communications, through journalism, radio, television, the Cold War, computer networking, and then, greatly accelerated during the era of social media. Foreign troll and bot activity has been documented first-hand. For that matter, we in the US are no doubt sending and amplifying messages their way too.

But the fraudulent “Hamilton 68” project by ASD deceptively leveraged public desire for bipartisan monitoring, with only the thinnest of high-tech patinas for partisan political gain.

How so? Here’s the shocker: The only thing behind the vaunted “Hamilton 68” Dashboard was… a list. No, not some algorithmically curated list looking at, say, the IP addresses of tweeters. Nor was it a list of known Russian agents, nor even frequent robotic re-tweeters of Kremlin agitprop.

No, the list was simply a bunch of accounts on Twitter that Hamilton 68 staffers hand-picked, and then summarily declared to be Russian bots or Russian-affiliated. The Hamilton 68 Dashboard was simply a list of these 648 Twitter accounts: right-leaning accounts for sure, but in no way provably Russian “bots.” While there were a handful of Russian accounts sprinkled into the 648, Russian accounts didn’t even represent the majority. The majority of accounts were merely conservative-leaning US, UK or Canadian citizens. You could just as easily have curated your own 648-account list yourself. Had Hamilton 68 staffers selected a list of teens, their rigorous “analysis” would have implied “The Russians are amplifying the #TidePodChallenge on Twitter.”

Get that? Quite a racket. Assemble a heavyweight panel of credentialed experts. Build a list of accounts that tend to favor the messages of your political opponents. Label it “Russian Disinformation,” and add a veneer of high-tech and state-apparatus gravitas. Critically, keep the methodology secret. Then feed this “advanced dashboard” to the media, and boom: endless “news” pieces about (wouldn’t you know it?) Russian bots preferring GOP-aligned messaging. Opposition research PR has never been so easy.

According to the Wayback Machine, ASD has been trumpeting the Hamilton 68 Dashboard thusly:

These accounts were selected for their relationship to Russian-sponsored influence and disinformation campaigns, and not because of any domestic political content.

We have monitored these datasets for months in order to verify their relevance to Russian disinformation programs targeting the United States.

…this will provide a resource for journalists to appropriately identify Russian-sponsored information campaigns.

ASD Website, Hamilton 68 Dashboard, 2017-2022 (now updated)

What the ASD primed the media to run with as “Russian disinformation” were nothing more than the thoughts of a group of largely pro-Trump accounts on Twitter, hand-picked by them, a neoliberal think-tank. There was no algorithm, no science, nothing behind it other than subjective judgement.

Worse, Twitter knew about Hamilton 68’s utter lack of legitimacy for five years, and never bothered to directly expose the sham or cut off Hamilton 68’s access to its API. In 2017, Twitter executive Yoel Roth reverse-engineered what the Hamilton 68 Dashboard was doing by looking at its Twitter Application Programming Interface (API) calls. He pulled back the curtain, and learned that it was nothing more than a curated list of 648 accounts. On October 3, 2017, Roth wrote “It’s so weird and self-selecting, and they’re unwilling to be transparent and defend their selection. I think we need to just call out this bullshit for what it is.” Three months later, he wrote that “the Hamilton dashboard falsely accuses a bunch of right-leaning accounts of being Russian bots.”

On October 3 2017, Roth writes to his colleagues:

The selection of accounts is… bizarre, and seemingly quite arbitrary. They appear to strongly preference pro-Trump accounts, which they use to assert that Russia is expressing a preference for Trump even though there’s not good evidence that any of the accounts they selected are or are not actually Russian.

Yoel Roth to colleagues, internal email, October 3 2017

And later, Roth writes “Real people need to know they’ve been unilaterally labeled Russian stooges without evidence or recourse.”

Russian bots were blamed for hyping the #ParklandShooting hashtag, #FireMcMaster, #SchumerShutdown, #WalkAway, #ReleaseTheMemo and more. If you remember any of those episodes, you can probably recall that somewhere in your media diet, someone nudged you that the Russians were amplifying this. It was all based upon this phony list.

Taibbi shared a sample of just some of the stories this dashboard ultimately fed:


Ironically-named “Politifact” used it as the basis for several stories, including this one. Note that Hamilton 68 is cited as a source:


Like the piece above, basically none of these publications seem to be correcting their stories, or explaining clearly to their readers that the Hamilton 68 Dashboard upon which they generated oodles of pieces was essentially a sham.

By October 2017, Twitter executive Yoel Roth noticed that a lot of media stories were springing off this disinformation, and internally urged that Twitter make this clear. Yet Twitter executives demurred. Taibbi puts it this way: “Twitter didn’t have the guts to call out Hamilton 68 publicly, but did try to speak to reporters off the record. ‘Reporters are chafing,’ said Twitter communications executive Emily Horne. ‘It’s like shouting into a void.'”

Emily Horne, a Twitter communications VP who was among those putting the damper on exposing the sham, would soon become Biden White House and NSC spokesperson.

Yoel Roth comes across as sincere and heroic in his efforts to raise alarm bells from within Twitter in 2017 and early 2018. But Twitter executives like Emily Horne, and presumably chief legal officer Vijaya Gadde, shut him down.

As a result, “journalists” in publications ranging from The Washington Post to Politifact to the New York Times continued to amplify the fake alarm that the ASD dashboard generated. Fake news begat fake news, until we even got to the point that the White House found it imperative to appoint a new “Disinformation Czar,” the memorable (and meme-able) Nina Jankowicz.

Yoel Roth is a fascinating, complex character. He eventually became Twitter’s head of Trust and Safety, a group which by 2020 made a lot of questionable deplatforming decisions, including against people re-sharing the Hunter Biden laptop story, among them the White House Press Secretary. Roth was even involved in Twitter’s decision to permanently boot the President of the United States. Musk at first considered Roth trustworthy (though with different political viewpoints), but by late November 2022, Roth was gone. Roth’s character arc would be a very interesting one to profile for the inevitable “Inside Twitter” documentary.

If you’re looking for the news outlets which were earnestly duped and actually want to be forthcoming with their readers, check to see if they’re reporting on the Hamilton 68 scandal. Are they explaining to their readers the times they relied upon this now-discredited dashboard? Thus far, it’s not encouraging. Neither CNN nor The Washington Post has any mentions of “Hamilton 68” this year so far.

A parting thought: Whether you like Musk or not, we wouldn’t have known about any of this successful effort to deceive the American public had Musk not purchased Twitter and let journalists look behind the curtain. Had Musk not shelled out $44 billion, we very likely would still be watching and reading breathless stories amplifying how “the Russians are coming, and they sure do like these GOP hashtags” on Twitter. These claims would be based on a lie. Twitter leadership would know, and ASD’s advisory board would presumably know. And no one would say a word about it. Let that sink in.

Read Taibbi’s full thread on Twitter here, complete with screenshots and source material: The Hamilton 68 Scandal.

Working with Environment Variables (Tech Note)

Here’s a quick cheatsheet on setting and reading environment variables across common OS’s and languages.

When developing software, it’s good practice to put anything you don’t wish to be public, as well as anything that’s “production-environment-dependent,” into environment variables. These stay with the local machine. This is especially important if you ever publish your code to public repositories like GitHub or Docker Hub.

Good candidates for environment variables are things like database connections, paths to files, etc. Hosting platforms like Azure and AWS also let you easily set the value of variables on production and testing instances.
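As a sketch of that practice, here’s one way to read such values in Python. The variable names are hypothetical, and the fail-fast helper is just one common pattern, not part of any particular framework:

```python
import os
from typing import Optional

def require_env(name: str, default: Optional[str] = None) -> str:
    """Read a configuration value from the environment, failing fast
    with a clear message when a required variable is missing."""
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Hypothetical usage: keep the connection string out of source control.
# db_connection = require_env("DATABASE_CONNECTION_STRING")
```

Failing fast at startup beats discovering a missing secret deep inside a request handler.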

I switch back and forth between Windows, OSX and even Linux during development. So I wanted a quick cheatsheet on how to do this.

Writing Variables

Mac OSX (zsh)

The default shell for OSX is now zsh, not bash. If you’re still using bash, consider switching, and consider using the great utility “Oh My Zsh.”

Typing env at the prompt will print out your current environment variables.

To save new environment variables:

export BASEBALL_TEAM="Seattle Mariners"

To save these permanently, you’ll want to save these values to:

~/.zshenv

So, in terminal you’d bring up the editor:

nano ~/.zshenv

and add export BASEBALL_TEAM="Seattle Mariners" to this file. Be sure to start a new terminal instance for this to take effect, because ~/.zshenv is only read when a new shell instance is created.

bash shell (Linux, older Macs, and even Windows for some users)

export BASEBALL_TEAM="Seattle Mariners"
echo $BASEBALL_TEAM
Seattle Mariners
env
< prints all environment variables >
# permanent setting
sudo nano ~/.bashrc
# place export BASEBALL_TEAM="Seattle Mariners" in this file
# restart a new bash shell


Windows

  • Right-click the Windows icon and select System
  • In the Settings window, under Related Settings, click Advanced System Settings
  • On the Advanced tab, click Environment Variables
  • Click New to create a new variable, then click OK


Dockerfile

ENV [variable-name]=[default-value]
ENV BASEBALL_TEAM="Seattle Mariners"

Reading Variables


Python

import os

best_baseball_team = os.environ.get("BASEBALL_TEAM", "")

Typescript / Javascript / Node

const team: string = process.env.BASEBALL_TEAM ?? ''


C#

var bestBaseballTeam = Environment.GetEnvironmentVariable("BASEBALL_TEAM");

Enchanted Rose 2.0 with Falling Petals

Revisiting the Enchanted Rose project four years later, this time with an open-source build based on Raspberry Pi.

Got a performance of Beauty and the Beast coming up? For about $200, you can build this Enchanted Rose with Falling Petals.

Dropping The Petals

There are a few approaches to making the petals drop. You’ve got a microcomputer (a Raspberry Pi) which can drive an output pin HIGH or LOW. How do you translate that into petals physically dropping on command?

In version 1.0, I used servo motors to pull fishing line. That fishing line was attached to a spring-loaded magnet in the bud of the rose. When the fishing line was pulled, the magnet would retract into the bulb and drop the petal. This worked very well, but it’s a delicate mechanism and wouldn’t ship well, and I wanted the 2.0 design to work for shipment.

After horsing around with solenoids (which didn’t work very well because they were (a) too bulky and (b) not strong enough), the design I ultimately settled upon was to use individual, cheap 6V air pumps to push air through a small opening and blow out a toothpick, to which a petal was attached. I found that I could “overclock” the air pumps at 12V (higher than their rating) to push out slightly more air. As long as the rose petals remained relatively light, bingo, it works. Here’s an in-progress video:

Just some notes to the person who bought this prop, but it gives you an idea of the functionality of the final 2.0 design. It also shows how the cloche can interfere with the falling petals if not set properly. TIP: Find as wide a cloche as you can.

Here’s an earlier video showing mock petals which were much lighter — tissue paper:

Design Goals

  • Drop 4 petals on cue, one by one
  • Turn on and off stem lights
  • Turn on and off accent lights, set color
  • Easy to reset between productions
  • Wireless — allow to be set on a pedestal on stage
  • Control via mobile web browser
  • Durable enough to ship
  • Do NOT require that it connect to a local internet
  • Easy to use interface that an elementary school stage crew could use
  • Easy on-site setup. Ideally, just turn it on and it works.
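The “control via mobile web browser” and “no local internet” goals can be met with a tiny web server on the Pi itself. Here’s a minimal sketch using only Python’s standard library; the URL routes, port, and names are my assumptions for illustration, not the prop’s actual interface:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_command(path: str):
    """Map a URL path from the phone's browser to a prop action.
    Returns an (action, argument) tuple, or None if unrecognized."""
    parts = [p for p in path.split("/") if p]
    if parts == ["stem", "on"]:
        return ("stem_lights", True)
    if parts == ["stem", "off"]:
        return ("stem_lights", False)
    if len(parts) == 2 and parts[0] == "drop" and parts[1] in {"1", "2", "3", "4"}:
        return ("drop_petal", int(parts[1]))
    return None

class PropHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cmd = parse_command(self.path)
        self.send_response(200 if cmd else 404)
        self.end_headers()
        self.wfile.write(str(cmd).encode())
        # a real build would call into the GPIO code here

# To serve to a phone browsing to the Pi's address:
# HTTPServer(("0.0.0.0", 80), PropHandler).serve_forever()
```

Because the Pi presents its own wifi network, the stage crew only needs to join it and load a URL; no venue internet required.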

Parts List

  • 2 half breadboards
  • Raspberry Pi: I used a Raspberry Pi Zero 2W, but any Raspberry Pi which has onboard wifi will work, such as a 3B+, 4, Zero W, etc. Do not choose a Pico; that’s a different product line. You’ll also need a MicroSD card to store the operating system.
  • Jumper wires
  • Cloche: This was the widest and tallest one I was able to find for easy ordering. But I really did prefer the one I was able to find for the 1.0 build 4 years ago. This one is just barely big enough. Shop around; you may have better luck finding a wider one in a local arts and crafts store. You really want one that is fairly wide, because the petals need to fall freely and not get hung up against the side.
  • Stem “fairy” lights
  • Neopixel lights
  • 11 AA batteries
  • One 8-AA battery pack
  • 4 pumps and tubing (you’ll need 2x of these three)
  • Enchanted Rose with silk petals: look for lightweight petals. The standard one on Amazon is too heavy; I went to Etsy to shop for mine.
  • Toothpicks: you are looking for ones which slide in and out of the aluminum tubing very easily, but which also block the most amount of air (to build up pressure; you need some force on these to pop the petal)
  • Aluminum metal tubing (small): The purpose of these aluminum tubes is to eliminate a lot of friction that existed between the toothpick and the pneumatic tube. In other words, toothpicks and these tubes sort of act like a blowdart. By using aluminum tubing as a gasket within the pneumatic tubing, a lot of the friction is reduced.
  • Heat shrink
  • 5 Transistors: TIP102
  • Soldering iron, solder and flux
  • 4 Diodes: 1N4004 or MUR340
  • 3.3V-to-5V level converter: 74AHCT125
  • Coat hanger
  • Cigar box, wooden box or other case for the components below the cloche
  • Wire clippers
  • Drill and bits
  • Hot glue gun and hot glue
  • Multimeter (helpful but not required)
  • Pipe insulation foam for vibration-dampening, to just fit around the pumps

Before You Begin

You’ll want to get your Raspberry Pi set up with Raspbian OS on a micro-SD card. Make sure you have SSH access to your RPi. You might also want to set this up to present its own SSID/wifi network. I’ve got a blog post on that here.

If you’re new to Raspberry Pi, here’s one video to get you started in setting up the operating system and SSH:

I also loved using Visual Studio Code’s “SSH” remote features. Setting this up can be a tiny bit tricky, but stick with it. Once you get this working, it’s incredibly productive. It allows you to use your desktop (Mac, Windows or Linux) to work with files and directories on your RPi.

NeoPixels will also require ROOT-level permissions on your Raspberry Pi to install some drivers. Now, there are all kinds of security folks telling you that using root on your Raspberry Pi is a bad practice for stuff like this. But since this is a simple stage prop, I had no problem with the idea of just using my root user and a password for the entire build, and it saved a lot of file permission headaches to just be “root” and be done with it:

sudo passwd root

Set a password for the root user, reboot your Pi, and connect in as root from then on. While it may not be “best practice” for security, trust me, you’ll save a lot of headaches with installing libraries etc.


There’s both a hardware circuitry part and a software part. Let’s get some of the basic circuitry working first.

For ease of explanation, the hardware components can be thought of as three different “zones,” all powered by a single Raspberry Pi. These zones don’t need to talk to one another, except that all ground wires should be connected together and also to the RPi’s GND pin.

There are two zones powered by 4.5V: The Fairy lights and the Neopixel lights. There is one zone powered by 12V: the pump zone.

(DO NOT power the Neopixel lights with the 8 AA batteries (12V) used to power the pumps, as you will permanently damage the Neopixels!)

Zone 1: Control the stem lights (4.5V)

Let’s start here, because it’s the easiest circuit. These fairy lights are powered by 3 AA batteries (4.5V). You want a GPIO pin on the Raspberry Pi to toggle a transistor, allowing current to flow to power these lights.

Cut the fairy light wire, and strip the ends. It’s a bad idea to power the LEDs directly from the Raspberry Pi, as altogether the current draw might be more than the Pi can handle. So you’ll want to use a standard transistor (like a TIP102) to switch the current.

Now, run a wire from the GPIO pin of choice — I chose pin 25 (BCM) — to the “base” pin of the transistor. When this is toggled HIGH, it will close the circuit of the LED fairy lights and thus turn them on or off programmatically.
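In software, that toggle is only a few lines. Here’s a hedged sketch using the RPi.GPIO library, with the import guarded so the logic can be read off-device; the function name is mine:

```python
# RPi.GPIO ships with Raspberry Pi OS but doesn't exist on a desktop,
# so guard the import to keep this file inspectable anywhere.
try:
    import RPi.GPIO as GPIO
except ImportError:
    GPIO = None

STEM_LIGHT_PIN = 25  # BCM numbering, matching the wiring above

def set_stem_lights(on: bool) -> None:
    """Pull the transistor's base HIGH to light the fairy lights, LOW to darken."""
    if GPIO is None:
        raise RuntimeError("Not running on a Raspberry Pi")
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(STEM_LIGHT_PIN, GPIO.OUT)
    GPIO.output(STEM_LIGHT_PIN, GPIO.HIGH if on else GPIO.LOW)
```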


Zone 2: Control the 4 pumps (12V)

The main mechanism to drop these petals is to blow air through tubes which have loose petals. I chose to cut some aluminum tubing to reduce the diameter of airflow and also to reduce the friction with the wooden toothpicks:

A close-up of one of the petals. You can hot-glue a toothpick into red silk. Placing these petals into the metal tubes allows the pump to then “pop” them out when they turn on.

For the electronics, you need a diode for each of the 4 pump circuits, for the reasons described in this video (where this video says “solenoid”, substitute in the “pump” in your mind — the same principle applies):

In my Enchanted Rose prop, I built a breakout board with a bunch of these MOSFET transistors and diodes. I overbuilt this board, making it with 6 different circuits when it only really needed 4. It’s messy looking, but pretty simple. Each of these circuits has a MOSFET transistor and a diode. Take care to get the direction of the diode correct — it needs to be pointing toward the positive load. (A diode is rather like a backflow valve in plumbing — it allows current to flow in only one direction, toward the “ring” on the diode.) The yellow wires in this photo below come from the Raspberry Pi’s GPIO pins; when they are HIGH, they switch the transistor on, allowing current to flow from collector to emitter.

Basically, you want four circuits that let the GPIO pins on the RPi switch the pump motors on and off; each pump motor needs a circuit like the one shown below. Since the RPi cannot output 9V itself, each circuit uses a transistor to switch a separately supplied current. (Note that in production, I “overclocked” the pumps by boosting the power supply to 8 AA batteries, about 12V; the motors can handle this higher voltage in short bursts, even though they’re rated for 6-9V.)

Wiring diagram for pump motors

When you build six such circuits on a PCB and don’t spend too much time cleaning up the build, it looks like this:

I chose Broadcom pins 21, 26, 6 and 5 to control the air pumps, as the python code shows.
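The control logic itself is simple: to fire a petal, pulse one pump’s pin HIGH for a fraction of a second, then bring it LOW again. Here’s a minimal, hardware-independent sketch of that idea; the `write` callable and the 0.4-second burst are illustrative assumptions (on the Pi, `write` would be backed by something like RPi.GPIO.output):

```python
import time

PUMP_PINS = [21, 26, 6, 5]  # BCM pins wired to the four pump transistors

def drop_petal(write, pin, burst=0.4):
    """Pulse one pump: drive its transistor base HIGH for `burst` seconds,
    long enough for the air to pop a petal out of its tube, then shut off."""
    write(pin, True)
    time.sleep(burst)
    write(pin, False)

# Example: record the pin transitions instead of touching real hardware.
log = []
drop_petal(lambda pin, level: log.append((pin, level)), PUMP_PINS[0], burst=0.01)
# log is now [(21, True), (21, False)]
```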

I cut pipe-insulation foam for these motors, which helps reduce the vibration noise a bit.

Battery pack; there are pumps inside these foam cylinders; the foam helps dampen some sound

Since these motors generate voltage spikes when powering up and down, it’s very important to protect your RPi with a diode, as follows (the directionality matters: pay attention to the line on the diode and the positive lead from the motor):

Zone 3: Control the Neopixel accent lights (4.5V)


The python code to control the Neopixel needs a set of libraries called “CircuitPython” to work. You’ll need to follow the NeoPixel Adafruit guide to installing CircuitPython.

I strongly recommend you read through the NeoPixel to Raspberry Pi Wiring Guide on Adafruit, which is likely to be more up-to-date than anything I’m providing here. I used a 74AHCT125 level-shifter chip to map the Raspberry Pi’s 3.3V output up to proper 5V logic levels. Note that the power cord shown in the diagram below simply feeds out to the 4.5V battery pack. The ground rail needs to be connected to the RPi’s ground rail; all the grounds in this project connect together and back to the RPi’s ground. You can and should run both the LED fairy lights and the Neopixels off the same battery pack.
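For reference, typical NeoPixel code with Adafruit’s CircuitPython libraries looks like the sketch below. The pin (D18) and pixel count are illustrative assumptions, so check the wiring guide for your setup; the try/except simply keeps the sketch from crashing on machines without the libraries:

```python
# Hedged sketch of NeoPixel control via Adafruit's CircuitPython libraries
# (package: adafruit-circuitpython-neopixel). Pin D18 and the count of 30
# pixels are illustrative assumptions, not the prop's actual values.
try:
    import board
    import neopixel

    pixels = neopixel.NeoPixel(board.D18, 30, brightness=0.4, auto_write=False)
    pixels.fill((255, 0, 0))  # set every pixel to red...
    pixels.show()             # ...and push the colors out to the strip
    have_neopixel = True
except (ImportError, NotImplementedError):
    have_neopixel = False  # not on a Pi, or CircuitPython not installed
```

Remember that on the Pi itself, NeoPixel code generally has to run as root (see the root-user setup later in this post).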


Then, check out the code that I put up on Github.


I’ve published two Github repos that should really accelerate your work here.

First, let’s run through what you need, because it’s helpful to see this at a high level. If you want to go the easiest possible route, have your RPi broadcast its own wifi network, and configure the Raspberry Pi so that it is reachable at a known, fixed address on that network; that’s what the software is built for.

You will need the following software components set up:

  1. Software to tell the GPIO pins what to do and render it as an “API” which can be called. That’s the purpose of this Enchanted Rose Flask API Repository on Github.
  2. Website software to deliver a nice user interface, so that people don’t need to call an API directly to change the lights or drop the petals. See this Next JS Front-End Repository on Github. Run it on an Apache web server on the Raspberry Pi: set up Apache, and make sure it’s working by visiting the IP address of your Raspberry Pi from a web browser. Next, take the “out” folder from the repository above and place it in your Raspberry Pi’s /var/www/html folder. You should then have a web front-end that calls locally to the device’s Flask port (hardcoded in the source).
  3. Set up the web server and the API, and make sure they start properly at Raspberry Pi boot-up.
  4. Make your Raspberry Pi its own local wifi network, so you don’t have to rely upon tapping into the router or a network wherever your production happens.

Configuring Raspberry Pi for “Ad Hoc Network”

Now, how do we get a mobile phone to reach the Raspberry Pi? Of course, if we know the networking details at the production venue, we could in theory just have the RPi join the wifi network of the theatre, and then somehow go to the theatre’s router settings and look for its address. But that’s a pain and doesn’t scale well.

So I was excited to hit upon the idea of having the prop create its own “Ad Hoc Network” (i.e., its own wifi SSID that you can join). Not only does this eliminate the need for the stage manager at each playhouse to find a way to connect the prop to the local network, but it removes any requirement to ship a display or keyboard with the device for configuration purposes.

All they need to do is plug in the device and wait for boot-up; it then presents its own wifi network (“EnchantRose”). They simply enter the password, bring up a web browser on their phone, and visit a pre-determined IP address which I’ve configured.

It took a while to figure out how to configure the Raspberry Pi to do this, since the instructions have changed seemingly with every release of Raspbian, the RPi operating system. But I figured it out by adapting existing instructions, and blogged about it here. The steps in that blog post provide everything you need to get your Raspberry Pi broadcasting its own network. To ensure the Enchanted Rose repository code works, be sure to set up your network per the instructions given; you want the RPi to be able to reference itself at a fixed address on its own network.

What’s most exciting is that this pattern has many applications in the world of “Internet of Things”: run Apache2 on your Raspberry Pi to host a website, and make that website visible via the RPi’s own password-secured wifi network.

Current Status

The new version 2.0 prop is fully functional. I’ve sold it and shipped it to one of the many production managers who have inquired. As this post continues to get indexed by Google, I get hit up for these builds; at this writing it’s over a dozen such requests, from FL, CA, HI, UK, NJ, CO and more.

UPDATE, late January 2023: This 2.0 prop is sold to a production in Hawaii, and has been shipped off.

I’ve decided for now that the logistics and expense of shipping these out to productions is just too complicated, but I’d love to hear from you if you’ve built one of these. If you’ve got specific questions on assembly or the software, drop me a note. On with the show!

Turn a Raspberry Pi into a Web Server with Its Own Wifi Network (Tech Note)

The Raspberry Pi microcomputer is great for building “Internet of Things” devices. This technical post describes the steps to get your Raspberry Pi to broadcast its own local network and bridge to an Ethernet network if available. There are other guides showing how to do this on the Internet, but they are often woefully out of date, leading people down real rabbit holes.

The Raspberry Pi (RPi) microcomputer is great for building cheap “Internet of Things” devices. In this tech note, I outline how to set up a Raspberry Pi device so that it broadcasts its own local wifi network, so you can join it from a mobile phone. From there, you could launch a web browser and point it at the RPi’s website, which in turn controls the board’s I/O devices. This means you could control a Raspberry Pi device from, say, a mobile phone, without any larger Internet connection.

The steps below also essentially turn the RPi into a “wifi access point,” which basically can take a hardline Internet network (LAN) and present a wifi front-end (WLAN.)

Turn your RPi into a wifi access point, or also its own ad-hoc network

So how do you get a Raspberry Pi to create its own wifi network? There are several “How To’s” on the web, but nearly all of them I followed are out of date. You need to install a few services, set up some configuration to define your network, and then reboot the Pi once everything is all set.

One great use-case is to build a standalone device which can be controlled via a web interface from, say, a mobile device. RPi’s can easily run web servers like Apache, so once you set up an RPi to broadcast its own private wifi network, you can easily interact with the Raspberry Pi’s devices and sensors via your website.

You’ve probably seen this kind of behavior if you’ve set up a wifi printer, smart switch, or wifi security camera. These devices often have modes where they broadcast their own local wifi network; you use a browser or a configuration app to “join” it and run a short setup process, and the device then uses what you’ve entered to restart and join your home network.

Enchanted Rose Prop 2.0

I had a use-case for this “ad hoc networking” with a stage prop I’m building.

A few years ago, I built an Enchanted Rose prop for my daughter’s school production of “Beauty and the Beast.” It let the stage manager drop petals and turn on lights on cue. It was based on Arduino and Bluetooth, and I blogged the instructions in a two-part series. As a result, thanks to Google, every couple of months or so I get an inquiry from a production manager somewhere in the world who wants to know how to build such a prop. Invariably, they follow those instructions and then hit a dead-end (which I’ve now noted in the posts). The problem is, the version 1.0 device was based upon an Arduino microcomputer (fine) with a Bluetooth add-on board which is now discontinued (not fine). Worse, its controller was a proprietary app written in a many-years-out-of-date dialect of Swift, which had to be installed straight from Xcode onto a device, as I opted not to publish it in the App Store. Apple has made many changes to Swift since then. The app as originally written no longer compiles, and Apple itself makes it very difficult to distribute “one off” apps to people. (You can’t just post the app download somewhere or email a link; Apple requires that the developer add each person to an ad-hoc distribution list.)

So I began to think about how to rebuild it in a more open-source way.

Motivation for Ad-Hoc Network

At the heart of the version 2.0 of this prop is a Raspberry Pi Zero 2W, which runs its own little Apache web server to control lights and the falling petals. Ideally, a stage manager would simply need a mobile phone or iPad or some kind of web browser on a device equipped with wifi to communicate with the prop.

I’ve heard interest in this prop from stage managers in the UK, California, Colorado, Texas and more, and I wanted to re-build this prop in a more robust, even shippable way. So on the mechanical front, instead of using delicate springs and fishing line, it now uses pumps and air to push the petals off the rose. I’ve got the motors and circuits all working fine. Before moving on to the final aesthetics, I’m now working through matters related to the deployment setting(s) this prop will run in.

There’s a high school in California putting on this play in March. I don’t know what the network environment is at that school, nor should I particularly care. I certainly don’t want to request and configure wifi passwords for the prop to “join” before shipping the device out. Rather than equip the prop with some kind of user interface for joining the local wifi network (a needlessly wasteful screen, keyboard, and set of instructions), it’s far better for it to broadcast its own tiny network and be controlled from backstage on its own. You’ve probably seen this type of “ad hoc” network when setting up, say, a printer, security camera, smart speaker, or smart switch.

But wow, getting ad-hoc networking up and going in Raspberry Pi is complicated!

THAR BE DRAGONS. It has taken a full day to get this up and running. Not only is Linux itself pretty arcane, but a much bigger problem is that so many of the instructions on ad-hoc networking for Raspberry Pis are wildly out of date. The Raspberry Pi Foundation changes these methods seemingly with every release of the OS.

So, to save my future self (and anyone Googling who lands here) many headaches… After much tinkering, the following instructions successfully configured a Raspberry Pi Zero 2W device to create and broadcast its own wifi network with a password. It allows clients (such as mobile browsers) to join it, and visit its control webpage hosted by the device on its Apache server. In short, the below just plain worked for me. Many, many other “how-to’s” on the web did not. A day wasted; I don’t want to do that again.

After you complete the below, you should be able to use a wifi device and “join” the network that the Raspberry Pi broadcasts. I successfully did so from both a Mac desktop and an iPhone. I could also then visit the apache page hosted on the Pi from these devices. This opens up a world of possibilities for “build and deploy anywhere” devices.

The notes which follow are adapted from: documentation/access-point-bridged.adoc at develop · raspberrypi/documentation.

There are a lot of steps here, but once you’re finished configuring your RPi, it should have its own private wifi network, and yet still be connectable via Ethernet. This is called an “Access Point” configuration.

Prerequisite: Use Raspbian “Buster,” a prior version of Raspbian OS

Make sure you use Raspbian “Buster” for the instructions. “Buster” is not, in fact, the most current version of the OS.

For the instructions below to work, you must use the older “Buster” version of Raspbian. Let me repeat, because some of you will be very tempted to use the latest version of Raspbian OS, and then will wonder why the network isn’t showing up. The ad-hoc networking steps described below have only been tested to work on “Buster.” In fact, I tried and failed to get them to work on the newer “Bullseye” version of Raspbian OS. Perhaps I did something wrong; perhaps by now it’s all resolved. But I tried twice, and only “Buster” worked perfectly.

The Raspberry Pi Foundation is great, but it’s pretty frustrating that they’re constantly tweaking the network setup code and drivers. There are so many web pages which refer to old versions and drivers. (One clue you’re looking at an outdated how-to: if you see mention of “/etc/network” or the “interfaces” file. That method is discontinued, and you’ll see no mention of it below.)

If you are dedicating your Raspberry Pi to its own ad-hoc network, I highly recommend you start over with it, back up whatever files you have on the SD card, and then use the Raspbian OS imager to create a bootable card with a fresh copy of Raspbian OS “Buster” on it. So, grab an archived copy of Buster. Unzip it. Then, launch the Raspberry Pi Imager, and click the GEAR icon to:

  1. ENABLE SSH with a password, and
  2. Set it up with your home wifi’s network name (SSID) and password.

After your card is written, you should be able to pop it out and put it in your RPi, and turn on the device. After a few moments, you should be able to ssh into it (Mac) or use PuTTY (Windows) to connect. To find the device’s IP address, go into your wifi’s router and look for a recently joined Raspberry Pi device. (Generally, the IP address will be the same from session to session.)

For Mac OSX, this looks something like:

ssh pi@192.168.x.y

where x and y are numbers. Every Internet device connected to your wifi is assigned a different address; this will vary from device to device. The default password for “pi” is “raspberry”.

Later in these setup instructions, we’re going to have the Pi create its own wifi network with its own subnet, and connect on the backend with Ethernet. This is called a “Wifi Access Point” configuration, in which the RPi basically has two IP addresses: one over the wire, and one wireless.

1. Set up “root” user

Instead of the default user “pi”, some operations will need root access. For instance, the NeoPixel libraries need Raspberry Pi “root” permission to run. So it’s best to set up a root user password first thing:

sudo passwd root

Then, log out of ssh and log back in with ssh root@<ip address>

From here on, you’ll want to sign in as root.

Enable remote login for the “root” user

  1. sudo nano /etc/ssh/sshd_config
  2. Find this line: PermitRootLogin without-password
  3. Change to: PermitRootLogin yes
  4. Close and save file.
  5. Reboot, or restart the sshd service with: /etc/init.d/ssh restart

2. Get web server running

Install Apache2 Web Server with one line of code:

sudo apt install apache2 -y

This will create a folder at /var/www/html, from which the Apache web server serves files. The install will also ensure that Apache starts at boot time.

3. Set up Ad Hoc Networking

OK this is the crucial part. HAVE PATIENCE, and follow this closely. (Full notes are at this Github location.)

Setting up a Routed Wireless Access Point

We will want to configure our Raspberry Pi as a Wireless Access Point, so that it broadcasts a wifi ssid, and so mobile devices can connect to it. Ideally, the RPi would also remain connectable to an Ethernet network for development purposes, to install new libraries, etc.

So a “Routed Wireless Access Point” is perfect for our needs.


You can find out what OS version you’re running with the following command:

cat /etc/os-release

A Raspberry Pi within an Ethernet network can be used as a wireless access point, creating a secondary network. The resulting new wireless network (called “Enchanted Rose” in my case) is entirely managed by the Raspberry Pi.

A routed wireless access point can be created using the inbuilt wireless features of the Raspberry Pi 4, Raspberry Pi 3 or Raspberry Pi Zero W, or by using a suitable USB wireless dongle that supports access point mode. It is possible that some USB dongles may need slight changes to their settings. If you are having trouble with a USB wireless dongle, please check the forums.

This documentation was tested on a Raspberry Pi Zero 2 W running a fresh installation of Raspberry Pi OS Buster.

Before you Begin

  1. Ensure you have root access to your Raspberry Pi. The network setup will be modified as part of the installation: local access, with screen and keyboard connected to your Raspberry Pi, is recommended.
  2. Connect your Raspberry Pi to the Ethernet network and boot the Raspberry Pi OS.
  3. Ensure the Raspberry Pi OS on your Raspberry Pi is up-to-date and reboot if packages were installed in the process.
  4. Take note of the IP configuration of the Ethernet network the Raspberry Pi is connected to:
    • In this document, we assume the IP network is configured for the Ethernet LAN, and the Raspberry Pi is going to manage the IP network for wireless clients.
    • Please select another IP network for wireless, e.g., if the network is already in use by your Ethernet LAN.
  5. Have a wireless client (laptop, smartphone, …) ready to test your new access point.

Install Access Point and Management Software

In order to work as an access point, the Raspberry Pi needs to have the hostapd access point software package installed:

sudo apt install hostapd

Enable the wireless access point service and set it to start when your Raspberry Pi boots:

sudo systemctl unmask hostapd
sudo systemctl enable hostapd

In order to provide network management services (DNS, DHCP) to wireless clients, the Raspberry Pi needs to have the dnsmasq software package installed:

sudo apt install dnsmasq

Finally, install netfilter-persistent and its plugin iptables-persistent. This utility helps by saving firewall rules and restoring them when the Raspberry Pi boots:

sudo DEBIAN_FRONTEND=noninteractive apt install -y netfilter-persistent iptables-persistent

Software installation is complete. We will configure the software packages later on.

Set up the Network Router

The Raspberry Pi will run and manage a standalone wireless network. It will also route between the wireless and Ethernet networks, providing internet access to wireless clients. If you prefer, you can choose to skip the routing by skipping the section “Enable routing and IP masquerading” below, and run the wireless network in complete isolation.

Define the Wireless Interface IP Configuration

The Raspberry Pi runs a DHCP server for the wireless network; this requires static IP configuration for the wireless interface (wlan0) in the Raspberry Pi. The Raspberry Pi also acts as the router on the wireless network, and as is customary, we will give it the first IP address in the network:

To configure the static IP address, edit the configuration file for dhcpcd with:

sudo nano /etc/dhcpcd.conf

Go to the end of the file and add the following:

interface wlan0
    static ip_address=
    nohook wpa_supplicant

Enable Routing and IP Masquerading

This section configures the Raspberry Pi to let wireless clients access computers on the main (Ethernet) network, and from there the internet.

NOTE: If you wish to block wireless clients from accessing the Ethernet network and the internet, skip this section.

To enable routing, i.e. to allow traffic to flow from one network to the other in the Raspberry Pi, create a file using the following command, with the contents below:

sudo nano /etc/sysctl.d/routed-ap.conf

File contents:

# Enable IPv4 routing
net.ipv4.ip_forward=1

Enabling routing will allow hosts from the network to reach the LAN and the main router towards the internet. In order to allow traffic between clients on this foreign wireless network and the internet without changing the configuration of the main router, the Raspberry Pi can substitute the IP address of wireless clients with its own IP address on the LAN using a “masquerade” firewall rule.

  • The main router will see all outgoing traffic from wireless clients as coming from the Raspberry Pi, allowing communication with the internet.
  • The Raspberry Pi will receive all incoming traffic, substitute the IP addresses back, and forward traffic to the original wireless client.

This process is configured by adding a single firewall rule in the Raspberry Pi:

sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Now save the current firewall rules for IPv4 (including the rule above) and IPv6 to be loaded at boot by the netfilter-persistent service:

sudo netfilter-persistent save

Filtering rules are saved to the directory /etc/iptables/. If in the future you change the configuration of your firewall, make sure to save the configuration before rebooting.

Configure the DHCP and DNS services for the wireless network

The DHCP and DNS services are provided by dnsmasq. The default configuration file serves as a template for all possible configuration options, whereas we only need a few. It is easier to start from an empty file.

Rename the default configuration file and edit a new one:

sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig
sudo nano /etc/dnsmasq.conf

Add the following to the file and save it:

interface=wlan0 # Listening interface
dhcp-range=,,,24h
                # Pool of IP addresses served via DHCP
domain=wlan     # Local wireless DNS domain
address=/gw.wlan/
                # Alias for this router

The Raspberry Pi will deliver IP addresses between and, with a lease time of 24 hours, to wireless DHCP clients. You should be able to reach the Raspberry Pi under the name gw.wlan from wireless clients.

NOTE: There are three IP address blocks set aside for private networks: a Class A block from to, a Class B block from to, and, probably the most frequently used, a Class C block from to

There are many more options for dnsmasq; see the default configuration file (/etc/dnsmasq.conf) or the online documentation for details.

Ensure Wireless Operation

Note, I successfully skipped this subsection

Countries around the world regulate the use of telecommunication radio frequency bands to ensure interference-free operation. The Linux OS helps users comply with these rules by allowing applications to be configured with a two-letter “WiFi country code”, e.g. US for a computer used in the United States.

In the Raspberry Pi OS, 5 GHz wireless networking is disabled until a WiFi country code has been configured by the user, usually as part of the initial installation process (see wireless configuration pages in this section for details.)

To ensure WiFi radio is not blocked on your Raspberry Pi, execute the following command:

sudo rfkill unblock wlan

This setting will be automatically restored at boot time. We will define an appropriate country code in the access point software configuration, next.

Configure the AP Software

Create the hostapd configuration file, located at /etc/hostapd/hostapd.conf, to add the various parameters for your new wireless network.

sudo nano /etc/hostapd/hostapd.conf

Add the information below to the configuration file. This configuration assumes we are using channel 7, with a network name of EnchantedRose and a password of AardvarkBadgerHedgehog. Note that the name and password should not have quotes around them, and the passphrase should be between 8 and 64 characters in length, or else hostapd will fail to start.

country_code=US
interface=wlan0
ssid=EnchantedRose
hw_mode=g
channel=7
macaddr_acl=0
auth_algs=1
ignore_broadcast_ssid=0
wpa=2
wpa_passphrase=AardvarkBadgerHedgehog
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP

Note the line country_code=US: it configures the computer to use the correct wireless frequencies in the United States. Adapt this line and specify the two-letter ISO code of your country. See Wikipedia for a list of two-letter ISO 3166-1 country codes.

To use the 5 GHz band, you can change the operations mode from hw_mode=g to hw_mode=a. Possible values for hw_mode are:

  • a = IEEE 802.11a (5 GHz) (Raspberry Pi 3B+ onwards)
  • b = IEEE 802.11b (2.4 GHz)
  • g = IEEE 802.11g (2.4 GHz)

Note that when changing the hw_mode, you may need to also change the channel – see Wikipedia for a list of allowed combinations.

Troubleshooting hostapd: If for some reason your access point is not coming up, try running hostapd manually from the command line: sudo hostapd /etc/hostapd/hostapd.conf

You’ll likely get some kind of error message back.

Running the new Wireless AP

Now restart your Raspberry Pi and verify that the wireless access point becomes automatically available.

Once your Raspberry Pi has restarted, search for wireless networks with your wireless client. The network SSID you specified in file /etc/hostapd/hostapd.conf should now be present, and it should be accessible with the specified password.

If SSH is enabled on the Raspberry Pi, it should be possible to connect to it from your wireless client as follows, assuming the pi account is present: ssh pi@

If your wireless client has access to your Raspberry Pi (and the internet, if you set up routing), congratulations on setting up your new access point!

If you encounter difficulties, contact the forums for assistance. Please refer to this page in your message.

Once this is done, the network should be “EnchantedRose”, and the main web address should be

Sifting Through the FTX Rubble

The sudden collapse of the world’s second biggest cryptocurrency exchange in November 2022 shocked the crypto world and left more than a million creditors hanging, with the 50 biggest being owed a staggering $3.1 billion.

Update, December 12 2022: Bankman-Fried Arrested

2022 has been quite a year for the co-founder of crypto exchange FTX, Sam Bankman-Fried.

In February, Bankman-Fried and 99 million other Americans watched Curb Your Enthusiasm and Seinfeld comedian Larry David stump for FTX during the Super Bowl. During the spring and summer, Bankman-Fried deployed approximately $5 billion in a series of buyouts: crypto exchange Liquid Global in February. The video game Storybook Brawl in March. Canadian crypto exchange Bitvo in June. Crypto exchange Blockfolio in August. Alameda even deployed about $11 million into a tiny rural bank here in Washington State, with aims to help it bootstrap a crypto bank on American soil.

By August 2022, SBF was being hailed by Bloomberg and CNBC’s Jim Cramer as “the JP Morgan of this generation,” a reference to when JP Morgan helped stabilize America’s economy during the panics of 1893 and 1907.

The NFL’s Tom Brady, supermodel Gisele Bündchen, and Shark Tank’s Kevin O’Leary were all singing his praises. The FTX brand was everywhere. It was emblazoned on the Miami Heat’s enormous arena, after FTX secured 19-year naming rights in 2021. Major League Baseball umpires even wore an FTX patch (two patches, actually) on their uniforms all season long.

A Fortune Magazine piece likened SBF to value investor Warren Buffett, a comparison that Buffett, a famous crypto-disbeliever, would no doubt dispute.

And Bankman-Fried himself was popular for another reason: social change. He was an evangelist for the philosophy of “effective altruism,” which posits that the most effective way to do good is to spend one’s productive years amassing a huge sum of wealth, and then give as much of it away as possible. The cargo-shorts-wearing, Toyota Corolla-driving Bankman-Fried played the part well.

Video blogger Nas Daily flew to the Bahamas to hail him as the “World’s Most Generous Billionaire”:

Bankman-Fried wasn’t about to wait until his retirement years to start spreading the millions around. He doled out $42 million to Democrats during the 2022 midterms, making him the party’s second-largest donor.

Sam Bankman-Fried. Image via Inside Bitcoins

Entering into the fourth quarter of 2022, SBF was riding high. He was worth more than $10 billion on paper, and the exchange he created was valued at more than $32 billion. FTX had over 5 million active users, and on average, its daily trading volume in 2021 exceeded $12.5 billion. According to Bankless, it was on track to reach $1.1 billion in revenue for 2022.

It all collapsed in less than one week in November 2022, shocking the crypto world and leaving more than one million creditors reeling. According to bankruptcy filings, the 50 biggest creditors alone are owed a staggering $3.1 billion.

As Bankman-Fried put it to the crowd of movers and shakers gathered in NYC at November’s NYT “Dealbook” conference, he has “had a bad month.” FTX and its sister company Alameda Research declared bankruptcy on November 11, 2022, and SBF is at serious risk of federal prosecution that could send him behind bars for a very long time.

Now worth $0, Bankman-Fried is going before any audience he can find, ignoring his attorneys’ advice, because, well, he wants us to know that he is sorry. That he “fucked up.” That he didn’t pay nearly enough attention to proper accounting or risk management. But even though he messed up, he will tell any audience that will listen, “I want to work to make this right,” and “I didn’t ever try to commit fraud.”

Bankman-Fried’s implicit message at the moment has been, more or less, that he did not possess mens rea (a “guilty mind”.) To him, he didn’t knowingly co-mingle customer funds with those of his own hedge fund. He didn’t intentionally mislead investors about where their money was going. He didn’t deliberately cause more than $30 billion of paper wealth (and more than $3 billion of actual creditor dollars) to evaporate.

Whether Bankman-Fried committed fraud in one of the decade’s biggest corporate collapses so far should be the subject of fierce federal investigation. And while that may well be occurring, there aren’t many visible signs that the feds are on this collapse with the fervor they had for, say, Bernie Madoff or Enron. Bankman-Fried was politely invited to testify before Rep. Maxine Waters’ House Financial Services Committee, and at first demurred.

One can hope this is underway, but it’s been a month, and not much word from federal lawmakers yet.

Tomorrow Sam Bankman-Fried will appear before the House Financial Services Committee. Joining him will be John J. Ray III, whom FTX’s board appointed as CEO to oversee the post-bankruptcy process.

Cynics speculate that the questioning might be fairly light-handed from Representative Maxine Waters (D-CA). Waters appeared with him in photos just a few months ago, and appeared to blow kisses his way at their last appearance in Washington DC.

Happier times: Bankman-Fried and Rep. Maxine Waters in Washington DC (Twitter)

As mentioned earlier, Bankman-Fried was the second largest donor to the Democratic National Committee for the 2022 midterms, at more than $40 million donated. And another senior FTX executive, co-founder Ryan Salame, donated $24 million to the Republicans. They may have bought themselves a bit more time.

Ray’s prepared remarks to the Committee are brutal: “Never in my career have I seen such an utter failure of corporate controls at every level of an organization, from the lack of financial statements to a complete failure of any internal controls or governance whatsoever.”

UPDATE: Sam Bankman-Fried has been arrested in the Bahamas after US Prosecutors filed charges.

FTX and Alameda Research

The world of cryptocurrency is awash in buzzwords which can complicate understanding.

So here is the collapse in its simplest terms. The allegation is that a handful of FTX executives knowingly co-mingled billions of dollars of FTX end-customer funds with those of its own closely held hedge fund, a sister company called Alameda Research. Compounding matters, Alameda made a staggeringly bad set of leveraged bets with those funds, and the losses were greatly magnified by the crypto crash in the spring of 2022.

Through a series of transactions between FTX and Alameda, Alameda amassed a gigantic position in FTX’s own token (called “FTT”), a cryptocurrency whose price was highly correlated with FTX’s own market value. (In the non-crypto world, you might liken FTT to shares of FTX’s own stock, since it moved in near-lockstep with FTX’s perceived value.) This “worked” for a short while, as FTX’s private-market gains and apparent momentum appeared to convey some value in FTT.

But FTT was highly illiquid: only a small amount of it traded each day. That made FTT far riskier than its price chart suggested at the time, because in such a thin market, a single big seller dumping the token could send its price down rapidly.

On November 2nd 2022, journalists at crypto trade publication Coindesk published a blockbuster piece: Divisions in Sam Bankman-Fried’s Crypto Empire Blur on His Trading Titan Alameda’s Balance Sheet. Somehow, Coindesk had come across internal documents of Alameda and FTX which detailed Alameda holdings, and what these documents revealed sent shockwaves through the crypto market.

Coindesk reporters noted that of the nominal $14.6 billion that Alameda had amassed on its balance sheet, more than half of it was in FTT/FTX-related currency.

Why is that bad? Not only is a concentrated position in one asset a large risk factor for any hedge fund, but the asset Alameda owned in gobs and gobs was also highly correlated to FTX’s own company value. While FTT traded between $25 and $52 throughout most of 2022, it had pretty low trading volume; not many people wanted to buy it up. A massive unloading of the token would therefore send its value plummeting. And if that happened, Alameda would get “margin called” by the lenders on its substantial loans and have to liquidate assets to pay them off.
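The margin-call mechanism above can be sketched with a few lines of arithmetic. This is a toy illustration with invented numbers and a made-up maintenance ratio, not Alameda’s actual loan terms or holdings:

```python
# Illustrative sketch of a margin call. All figures are hypothetical.

def is_margin_called(collateral_tokens, token_price, loan_usd,
                     maintenance_ratio=1.5):
    """A lender demands repayment when the collateral's market value
    falls below maintenance_ratio times the loan size."""
    collateral_value = collateral_tokens * token_price
    return collateral_value < maintenance_ratio * loan_usd

# Say a fund posts 100M tokens at $40 against a $2B loan:
print(is_margin_called(100e6, 40.0, 2e9))   # $4.0B >= $3.0B -> False
# A dump into an illiquid market knocks the token down to $20:
print(is_margin_called(100e6, 20.0, 2e9))   # $2.0B <  $3.0B -> True
```

The point is that when the collateral is illiquid and correlated with the borrower’s own fortunes, the same event that craters the token price also makes the loan unpayable.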

Any sizable drop in value of FTT would put enormous financial pressure on FTX’s solvency.

The Coindesk report revealed the extent to which FTX and Alameda were intertwined. It caught the attention of Changpeng Zhao (CZ), who owned a very large position of FTT. On November 6th, CZ tweeted “As part of Binance’s exit from FTX equity last year, Binance received roughly $2.1 billion USD equivalent in cash (BUSD and FTT). Due to recent revelations that have came to light, we have decided to liquidate any remaining FTT on our books.”

He signaled his intent to sell, and proceeded to dump a large volume of FTT into a fairly illiquid market, far more than Alameda or FTX could attempt to buy back.

FTX traders and FTT holders alike noticed this, triggering a “run on the bank”: customers, fearing FTX’s insolvency, demanded their money back. FTX was able to process the first billion dollars or so of redemption requests, but the downward spiral accelerated, taking down the whole house of cards within a 72-hour period.

By November 11th 2022, FTX was filing for bankruptcy protection.

The entire company lasted just five years. Alameda Research was Bankman-Fried’s first venture; he founded it in November 2017. It began with a fairly simple (and legal) business model: arbitraging Bitcoin between US and Asian markets. Bankman-Fried had noticed that Bitcoin generally traded more cheaply in the US than in, say, South Korea. So for a while, Alameda made a tidy profit automatically buying Bitcoin on the cheap and reselling those same coins on Asian markets.
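The arbitrage described above is just a price gap minus transaction costs. Here is a minimal sketch, with invented prices and a hypothetical fee rate (real arbitrage also contends with transfer delays, capital controls, and exchange withdrawal limits):

```python
# Toy sketch of cross-market Bitcoin arbitrage. Prices and fees are
# invented for illustration only.

def arbitrage_profit(us_price, asia_price, btc_amount, fee_rate=0.001):
    """Buy BTC on a US exchange, sell the same coins on an Asian
    exchange, paying a proportional fee on each leg."""
    cost = us_price * btc_amount * (1 + fee_rate)
    proceeds = asia_price * btc_amount * (1 - fee_rate)
    return proceeds - cost

# A hypothetical 5% premium: BTC at $10,000 in the US, $10,500 in Korea.
print(arbitrage_profit(10_000, 10_500, 10))  # ~ $4,795 on the round trip
```

The trade is profitable whenever the premium exceeds the combined fees, which is why it could be run automatically and repeatedly while the gap persisted.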

By May 2019, Bankman-Fried’s ambitions grew larger, and he decided to create a crypto-trading exchange. He hired Caroline Ellison, a former colleague from his brief time at the Wall Street quant firm Jane Street, as CEO to run Alameda Research.

He also convinced Changpeng Zhao (CZ), founder and CEO of Binance, the world’s largest crypto exchange, to invest in his new venture. SBF had come to know CZ through Alameda’s arbitrage trades.

What’s a Crypto Exchange? A centrally-controlled crypto exchange like FTX is a place where end-users can buy and sell cryptocurrency, trading between “fiat currencies” (like the US dollar or the British pound) and various crypto coins. If you were an FTX customer, you’d wire funds into your account, then trade those funds with other buyers and sellers of cryptocurrency. FTX would earn trading fees.

That’s how an exchange is supposed to work.
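In code terms, an exchange’s books are supposed to keep customer balances strictly separate from the exchange’s own money, with the exchange touching only its fees. This is an entirely hypothetical sketch of that principle, not FTX’s actual system:

```python
# Minimal sketch of segregated custody: customer deposits sit in
# per-customer ledger entries the exchange never lends out or bets with.
# Hypothetical illustration only.

class ExchangeLedger:
    def __init__(self):
        self.balances = {}       # customer -> USD balance (customer property)
        self.fee_revenue = 0.0   # the exchange's own money

    def deposit(self, customer, usd):
        self.balances[customer] = self.balances.get(customer, 0.0) + usd

    def trade(self, customer, notional_usd, fee_rate=0.001):
        # The exchange earns only the fee; the rest remains the customer's.
        fee = notional_usd * fee_rate
        self.balances[customer] -= fee
        self.fee_revenue += fee

    def customer_liabilities(self):
        # Solvency invariant: assets on hand must always cover this total.
        return sum(self.balances.values())

ledger = ExchangeLedger()
ledger.deposit("alice", 100.0)
ledger.trade("alice", 100.0)
print(ledger.balances["alice"])        # 99.9
print(ledger.customer_liabilities())   # 99.9
```

The co-mingling allegation is, in effect, that this invariant was broken: customer balances were displayed as intact while the underlying funds were deployed elsewhere.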

But it appears that FTX and Alameda co-mingled customer funds, a fact confirmed by current CEO Ray in his prepared remarks to Congress. For the end customer, their balance might display as, say, $100 of US currency or $100 worth of some crypto coin (minus trading fees), but the big allegation here is that Alameda was taking some or all of those funds and betting them elsewhere at various times.

Making Alameda’s own Jenga tower shakier, Alameda appears to have had significant exposure to Terra/Luna, whose “stablecoin” utterly collapsed between May 7 and 12, 2022. There’s speculation on Crypto Twitter that, needing to cover those losses somehow, Alameda became quite tempted to dip into customer funds.

It appears as though Bankman-Fried’s two business entities blurred several lines, treating customer funds as their own to bet with. Bankman-Fried often points to a second customer agreement allowing for margin trading between accounts, but it’s quite unclear what fraction of customers opted into this type of agreement. End-user deposits which many customers reasonably thought were isolated were instead deployed for risky bets unrelated to what end-users wanted to do with their own funds.

This would not only violate FTX’s Terms of Service with customers; it would also be a pretty clear violation of traditional securities and exchange laws. As FTX’s own terms of service state:

  • “You control the Digital Assets held in your Account,” says Section 8.2 of the terms.
  • “Title to your Digital Assets shall at all times remain with you and shall not transfer to FTX Trading.”
  • “None of the Digital Assets in your Account are the property of, or shall or may be loaned to, FTX Trading; FTX Trading does not represent or treat Digital Assets in User’s Accounts as belonging to FTX Trading.”

So, Bankman-Fried’s life will be pretty interesting in 2023, just not in the same way that 2022 was. He remains ensconced in the Bahamas, not yet indicted, on the virtual press tour of all press tours. He’s spoken with the New York Times, Good Morning America, George Stephanopoulos, and numerous Twitter Spaces and podcasts. Anywhere there’s a microphone, he’s out telling his story.

Tomorrow, he’ll be telling his story (or taking the Fifth) before the House Financial Services Committee. And we’re sure to hear that whatever he did, he didn’t mean it.