Engines of Wow: Part II: Deep Learning and The Diffusion Revolution, 2014-present

A revolutionary insight in 2015, plus AI work on natural language, unleashed a new wave of generative AI models.

In Part I of this series on AI-generated art, we introduced how deep learning systems can "learn" from a well-labeled dataset; that is, how algorithmic tools can learn patterns from data to reliably predict or label things. Now on their way to being "solved" through better and better refinements, these predictive engines are magical power tools with intriguing applications in pretty much every field.

Here, we’re focused on media generation, specifically images, but it bears a note that many of the same basic techniques described below can apply to songwriting, video, text (e.g., customer service chatbots, poetry and story-creation), financial trading strategies, personal counseling and advice, text summarization, computer coding and more.

Generative AI in Art: GANs, VAEs and Diffusion Models

From Part I of this series, we know at a high level how we can use deep-learning neural networks to predict things or add meaning to data (e.g., translate text, or recognize what's in a photo). But we can also use deep learning techniques to generate new things. This type of neural network system, often composed of multiple neural networks, is called a Generative Model. Rather than just interpreting things passively or searching through existing data, AI engines can now generate highly relevant and engaging new media.

How? The three most common types of Generative Models in AI are Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs) and Diffusion Models. Sometimes these techniques are combined. They aren't the only approaches, but they are currently the most popular. Today's star products in art-generating AI are Midjourney by Midjourney.com (Diffusion-based), DALL-E by OpenAI (VAE-based), and Stable Diffusion (Diffusion-based) by Stability AI. It's important to understand that each of these algorithmic techniques was conceived within just the past decade or so.

My goal is to describe these three methods at a cocktail-party chat level. The intuitions behind them are incredibly clever ways of thinking about the problem. There are lots of resources on the Internet which go much further into each methodology, listed at the end of each section.

Generative Adversarial Networks

The first strand of generative-AI models, Generative Adversarial Networks (GANs), have been very fruitful for single-domain image generation. For instance, visit thispersondoesnotexist.com. Refresh the page a few times.

Each time, you’ll see highly* convincing images like this, but never the same one twice:

As the domain name suggests, these people do not exist. This is the computer creating a convincing image, using a Generative Adversarial Network (GAN) trained to construct a human-like photograph.

*Note that for the adult male, it only rendered half his glasses. This GAN doesn't really understand the concept of "glasses," just which pixels tend to appear next to one another.

Generative Adversarial Networks were introduced in a 2014 paper by Ian Goodfellow et al. That was just eight years ago! The basic idea is that you have two deep-learning neural networks: a Generator and a Discriminator. You can think of them as a Counterfeiter and a Detective, respectively. One deep learning model, serving as the "Discriminator" (Detective), learns to distinguish between genuine articles and counterfeits, and penalizes the Generator for producing implausible results. Meanwhile, the Generator model learns to "generate" plausible data; when a fake "fools" the Discriminator, it becomes negative training data for the Discriminator. They play a zero-sum game against each other (thus "adversarial") thousands and thousands of times, and with each adjustment to the Generator's and Discriminator's weights, the Generator gets better and better at constructing something to fool the Discriminator, and the Discriminator gets better and better at detecting fakes.

The whole system looks like this:

Generative Adversarial Network, source: Google
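
For readers who want to see the adversarial game in code, here's a minimal sketch in PyTorch. To be clear, this is my own toy illustration, not the code behind thispersondoesnotexist.com or any commercial product; the network sizes and hyperparameters are arbitrary assumptions chosen for brevity.

```python
# A toy GAN training loop in PyTorch (illustrative only; sizes and hyperparameters
# are arbitrary assumptions, not any product's actual configuration).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28   # assume tiny flattened grayscale images

generator = nn.Sequential(            # the "Counterfeiter"
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh())

discriminator = nn.Sequential(        # the "Detective"
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):          # real_images: (batch, image_dim)
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the Detective: learn to tell genuine images from counterfeits.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels) +
              loss_fn(discriminator(fakes), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the Counterfeiter: it is rewarded when its fakes are scored "real."
    fakes = generator(torch.randn(batch, latent_dim))
    g_loss = loss_fn(discriminator(fakes), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Each call to train_step is one round of the zero-sum game; repeated over many batches, the Generator's fakes get harder to detect and the Discriminator gets sharper.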

GANs have delivered pretty spectacular results, but in fairly narrow domains. For instance, GANs have been pretty good at mimicking artistic styles (called "Neural Style Transfer") and colorizing black-and-white images.

GANs are cool and a major area of generative AI research.

More reading on GANs:

Variational Autoencoders (VAE)

An encoder can be thought of as a compressor of data, and a decoder as a decompressor, something which does the opposite. You've probably compressed an image down to a smaller size without losing recognizability. It turns out you can use AI models to compress an image. Data scientists call this reducing its dimensionality.

What if you built two neural network models, an Encoder and a Decoder? It might look like this, going from x, the original image, to x’, the “compressed and then decompressed” image:

Variational Autoencoder, high-level diagram. Images go in on the left, and come out on the right. If you train the networks to minimize the difference between output and input, you get a compression algorithm of sorts. What's left in red is a lower-dimension representation of the image.

So conceptually, you could train an Encoder neural network to “compress” images into vectors, and then a Decoder neural network to “decompress” the image back into something close to the original.

Then, you could consider the red "latent space" in the middle as essentially the Rosetta Stone for what a given image means. Run that algorithm over many labeled images, encoding each with the text of its label, and you would end up with condensed encodings of how to render various images. If you did this across many, many images and subjects, these numerous red vectors would overlap in n-dimensional space, and could be sampled, mixed, and then run through the decoder to generate images.

With some mathematical tricks (specifically, forcing the latent variables in red to conform to a normal distribution), you can build a system which can generate images that never existed before, but which have some very similar properties to the dataset which was used to train the encoder.
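
Here's what that looks like as a minimal sketch in PyTorch: an Encoder that squeezes an image down to a small latent vector, a Decoder that reconstructs it, and the "trick" (the reparameterization step plus a KL penalty) that keeps the latent space close to a normal distribution so we can sample from it later. Again, this is a toy illustration with arbitrary sizes, not any product's actual code.

```python
# A toy Variational Autoencoder in PyTorch. Sizes are arbitrary assumptions;
# pixel values are assumed to be scaled to [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

image_dim, latent_dim = 28 * 28, 16

encoder = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU())
to_mu, to_logvar = nn.Linear(256, latent_dim), nn.Linear(256, latent_dim)
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                        nn.Linear(256, image_dim), nn.Sigmoid())

def vae_loss(x):
    h = encoder(x)
    mu, logvar = to_mu(h), to_logvar(h)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)      # sample a latent vector
    x_recon = decoder(z)
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")  # how far off is the rebuild?
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) # keep z close to Normal(0, 1)
    return recon + kl

# Once trained, generating a brand-new image is just decoding a random latent vector:
#   new_image = decoder(torch.randn(1, latent_dim))
```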

More reading on VAEs:

2015: “Diffusion” Arrives

Is there another method entirely? What else could you do with a deep learning system which can “learn” how to predict things?

In March 2015, a revolutionary paper came out from researchers Sohl-Dickstein, Weiss, Maheswaranathan and Ganguli. It was inspired by the physics of non-equilibrium systems: for instance, dropping a drop of food coloring into a glass of water. Imagine you saw a film of that process of "destruction," and could stop it frame by frame. Could you build a neural network to reliably predict what the reverse might look like?

Let's think about a massive training set of animal images. Imagine you take an image in your training dataset and create multiple copies of it, each time systematically adding graphic "noise." Step by step, more noise is added to your image (x), via what mathematicians call a Markov chain (incremental steps). At each step you apply a normally-distributed distortion: Gaussian noise.

In a forward direction, from left to right, it might look something like this. At each step from left to right, you’re going from data (the image) to pure noise:

Adding noise to an image, left to right. Credit: image from “AI Summer”: How diffusion models work: the math from scratch | AI Summer (theaisummer.com)

But here's the magical insight behind Diffusion models. Once you've done this, what if you trained a deep learning model to try to predict frames in the reverse direction? Could you predict a "de-noised" image x(t) from its noisier version, x(t+1)? Could you read each step backward, from right to left, and try to predict the best way to remove noise at each step?

This was the insight in the 2015 paper, albeit with much more mathematics behind it. It turns out you can train a deep learning system to learn how to "undo" noise in an image, with pretty good results. For instance, if you input the pure-noise image from the last step, x(T), and train a deep learning network to output the previous step, x(T-1), and do this over and over again with many images, you can "train" a deep learning network to subtract noise in an image, all the way back to an original image.
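
Here's a toy sketch of that idea in PyTorch. Note that it uses the parameterization popularized by later work (predicting the noise that was added, rather than predicting x(T-1) directly), because it's the easiest to show compactly; the noise schedule and the network are arbitrary stand-ins, not Midjourney's or Stable Diffusion's actual code.

```python
# Toy diffusion sketch: a forward "destruction" process that adds Gaussian noise,
# and a training step that teaches a network to predict the noise so it can later
# be subtracted back out, step by step. All sizes and schedules are assumptions.
import torch
import torch.nn as nn

T = 1000                                   # number of noising steps
betas = torch.linspace(1e-4, 0.02, T)      # how much noise to add at each step
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def add_noise(x0, t):
    """Forward process: jump straight to step t of the Markov chain."""
    noise = torch.randn_like(x0)
    a = alphas_cumprod[t].sqrt().view(-1, 1)
    b = (1 - alphas_cumprod[t]).sqrt().view(-1, 1)
    return a * x0 + b * noise, noise

denoiser = nn.Sequential(nn.Linear(28 * 28 + 1, 256), nn.ReLU(),
                         nn.Linear(256, 28 * 28))     # predicts the added noise
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

def train_step(x0):                         # x0: (batch, 28*28) clean images
    t = torch.randint(0, T, (x0.size(0),))
    noisy, noise = add_noise(x0, t)
    t_input = (t.float() / T).view(-1, 1)   # tell the network "how noisy" the input is
    pred_noise = denoiser(torch.cat([noisy, t_input], dim=1))
    loss = ((pred_noise - noise) ** 2).mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss
```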

Do this enough times, with enough terrier images, say. And then, ask your trained model to divine a “terrier” from random noise. Gradually, step by step, it removes noise from an image to synthesize a “terrier”, like this:

Screen captured video of using the Midjourney chatroom (on Discord) to generate: “terrier, looking up, cute, white background”

Images generated from the current Midjourney model:

“terrier looking up, cute, white background” entered into Midjourney. Unretouched, first-pass output with v3 model.

Wow! Just slap “No One Hates a Terrier” on any of these images above, print 100 t-shirts, and sell it on Amazon. Profit! I’ll touch on some of the legal and ethical controversies and ramifications in the final post in this series.

Training the Text Prompts: Embeddings

How did Midjourney know to produce a “terrier”, and not some other object or scene or animal?

This relied upon another major parallel track in deep learning: natural language processing. In particular, word “embeddings” can be used to get from keywords to meanings. And during the image model training, these embeddings were applied by Midjourney to enhance each noisy-image with meaning.

An "embedding" is a mapping of a chunk of text into a vector of continuous numbers; think of a word as a list of numbers. A textual variable could be a word, a node in a graph, or a relation between nodes in a graph. By ingesting massive amounts of text, you can train a deep learning network to understand relationships between words and entities, and numerically capture how closely associated some words and phrases are with others. These vectors can be used to cluster the meaning and sentiment of an expression in mathematical terms a computer can appear to understand. For instance, embedding models are now able to interpret semantics and relationships between words, like "royalty + woman – man = queen."
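
You can play with this yourself using the gensim library and a set of publicly available pre-trained GloVe word vectors. A quick sketch only: the exact scores and neighbors will vary with the model you load.

```python
# A hands-on look at word embeddings, using gensim's downloader and pre-trained
# GloVe vectors (illustrative; results depend on the vectors you choose).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")   # downloads 100-dimensional pre-trained vectors

print(vectors["terrier"][:5])                   # a word really is just a list of numbers
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# The top result is typically "queen", the classic example of embedding arithmetic.
```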

An example on Google Colab took a vocabulary of 50,000 words in a collection of movie reviews, and learned over 100 different attributes from words used with them, based on their adjacency to one another:


Source: Movie Sentiment Word Embeddings

So, if you simultaneously injected into the “de-noising” diffusion-based learning process the information that this is about a “dog, looking up, on white background, terrier, smiling, cute,” you can get a deep learning network to “learn” how to go from random noise (x(T)) to a very faint outline of a terrier (x(T-1)), to even less faint (x(T-2)) and so on, all the way back to x(0). If you do this over thousands of images, and thousands of keyword embeddings, you end up with a neural network that can construct an image from some keywords.
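
Continuing the toy diffusion sketch from above, conditioning on text is conceptually a small change: the prompt's embedding vector is simply fed to the denoising network alongside the noisy image and the timestep. Real systems use far richer text encoders and attention mechanisms; this sketch only shows the shape of the idea, with arbitrary sizes.

```python
# Toy text-conditioned denoiser: the prompt embedding is concatenated with the
# noisy image and timestep, so noise gets removed "in the direction of" the prompt.
import torch
import torch.nn as nn

image_dim, embed_dim = 28 * 28, 64

conditioned_denoiser = nn.Sequential(
    nn.Linear(image_dim + 1 + embed_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim))

def predict_noise(noisy_image, t_normalized, text_embedding):
    # noisy_image: (batch, image_dim); t_normalized: (batch, 1); text_embedding: (batch, embed_dim)
    return conditioned_denoiser(
        torch.cat([noisy_image, t_normalized, text_embedding], dim=1))
```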

Incidentally, researchers have found that about T=1000 steps is all you need in this process, but millions of input images and enormous amounts of computing power are needed to learn how to "undo" noise at high resolution.

Let’s step back a moment to note that this revelation about Diffusion Models was only really put forward in 2015, and improved upon in 2018 and 2020. So we are just at the very beginning of understanding what might be possible here.

In 2021, Dhariwal and Nichol convincingly demonstrated that diffusion models can achieve image quality superior to the existing state-of-the-art GAN models.

Up next, Part III: Ramifications and Questions

That’s it for now. In the final Part III of Engines of Wow, we’ll explore some of the ramifications, controversies and make some predictions about where this goes next.

Engines of Wow: AI Art Comes of Age

Advancements in AI-generated art test our understanding of human creativity and laws around derivative art.

While most of us were focused on Ukraine, the midterm elections, or simply returning to normal as best we can, Artificial Intelligence (AI) took a gigantic leap forward in 2022. Seemingly all of a sudden, computers are now eerily capable of human-level creativity. Natural language agents like GPT-3 are able to carry on an intelligent conversation. GitHub CoPilot is able to write major blocks of software code. And new AI-assisted art engines with names like Midjourney, DALL-E and Stable Diffusion delight our eyes, but threaten to disrupt entire creative professions. They raise important questions about artistic ownership, derivative work and compensation.

In this three-part blog series, I'm going to dive into the brave new world of AI-generated art. How did we get here? How do these engines work? What are some of the ramifications?

This series is divided into three parts:

  • Part I: The Artists in the Machine, 1950-2015+
  • Part II: Deep Learning and The Diffusion Revolution, 2014-present
  • Part III: Ramifications and Questions

[featured image above: “God of Storm Clouds” created by Midjourney AI algorithm]

But first, why should we care? What kind of output are we talking about?

Let’s try one of the big players, the Midjourney algorithm. Midjourney lets you play around in their sandbox for free for about 25 queries. You can register for free at Midjourney.com; they’ll invite you to a Discord chat server. After reading a “Getting Started” agreement and accepting some terms, you can type in a prompt. You might go with: “/imagine portrait of a cute leopard, beautiful happy, Gryffindor outfit, super detailed, hyper realism.”

Wait about 60 seconds, choose one of the four samples generated for you, click the “upscale” button for a bigger image, and voila:

image created by the Midjourney image generation engine, version 4.0. Full prompt used to create it was “portrait of a cute leopard, Beautiful happy, Gryffindor Outfit, white background, biomechanical intricate details, super detailed, hyper realism, heavenly, unreal engine, rtx, magical lighting, HD 8k, 4k”

The Leopard of Gryffindor was created without any human retouching. This is final Midjourney output. The algorithm took the text prompt, and then did all the work.

I look at this image, and I think: Stunning.

Looking at it, I get the kind of “this changes everything” feeling, like the first time I browsed the world-wide web, spoke to Siri or Alexa, rode in an electric vehicle, or did a live video chat with friends across the country for pennies. It’s the kind of revolutionary step-function that causes you to think “this will cause a huge wave of changes and opportunities,” though it’s not even clear what they all are.

Are artists, graphic designers and illustrators doomed? Will these engines ultimately help artists or hurt them? How will the creative ecosystem change when it becomes nearly free to go from idea to visual image?

Once mainly focused on just processing existing images, computers are now extremely capable of generating brand new things. Before diving into a high-level overview of these new generative AI art algorithms, let me emphasize a few things: First, no artist has ever created exactly the above image before, nor will it likely be generated again. That is, Midjourney and its competitors (notably DALL-E and Stable Diffusion) aren't search engines: they are media creation engines.

In fact, if you typed this same exact prompt into Midjourney again, you'd get an entirely different image, yet one which is also likely to deliver on the prompt fairly well.

There is an old joke within Computer Science circles that “Artificial Intelligence is what we call things that aren’t working yet.” That’s now sounding quaint. AI is all around us, making better and better recommendations, completing sentences, curating our media feeds, “optimizing” the prices of what we buy, helping us with driving assistance on the road, defending our computer networks and detecting spam.

Part I: The Artists in the Machine, 1950-2015+

How did this revolutionary achievement come about? Two ways, just as bankruptcy came about for Mike Campbell in Hemingway’s The Sun Also Rises: First gradually. Then suddenly.

Computer scientists have spent more than fifty years trying to perfect art generation algorithms. These five decades can be roughly divided into two distinct eras, each with entirely different approaches: “Procedural” and “Deep Learning.” And, as we’ll see in Part II, the Deep-Learning era had three parallel but critical deep learning efforts which all converged to make it the clear winner: Natural Language, Image Classifiers, and Diffusion Models.

But first, let’s rewind the videotape. How did we get here?

Procedural Era: 1970’s-1990’s

If you asked most computer users how they'd generate computer art, the naive approach would be to encode various "rules of painting" into software programs, via the very "if this then that" kind of logic that computers excel at. And that's precisely how it began.

In 1973, American computer scientist Harold Cohen, resident at Stanford University's Artificial Intelligence Laboratory (SAIL), created AARON, the first computer program dedicated to generating art. Cohen was both an accomplished, talented artist and a computer scientist. He thought it would be intriguing to try to "teach" a computer how to draw and paint.

His thinking was to encode various “rules about drawing” into software components and then have them work together to compose a complete piece of art. Cohen relied upon his skill as an exceptional artist, and coded his own “style” into his software.

AARON was an artificial intelligence program first written in the C programming language (a low level language compiled for speed), and later LISP (a language designed for symbolic manipulation.) AARON knew about various rules of drawing, such as how to “draw a wavy blue line, intersecting with a black line.” Later, constructs were added to combine these primitives together to “draw an adult human face, smiling.” By 1995, Cohen added rules for painting color within the drawn lines.

Though there were aspects of AARON which were artificially intelligent, by and large computer scientists call his approach procedural. Do this, then that. Pick up a brush, pick an ink color, and draw from point A to B. Construct an image from its components. Join the lines. And you know what? After a few decades of work, Cohen created some really nice pieces, worthy of hanging on a wall. You can see some of them at the Computer History Museum in Mountain View, California.
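
To make "procedural" concrete, here's a toy sketch using Python's built-in turtle module. This is emphatically not AARON's code (which was written in C and later Lisp); it just shows the flavor of hand-written rules like "draw a wavy blue line, intersecting with a black line."

```python
# A toy illustration of procedural, rule-based drawing with Python's turtle module.
import random
import turtle

pen = turtle.Turtle()
pen.speed(0)

def wavy_line(color, steps):
    """Rule: pick up a brush, pick an ink color, draw a wavy line from A to B."""
    pen.pencolor(color)
    pen.pensize(3)
    for _ in range(steps):
        pen.forward(10)
        pen.left(random.choice([-20, 20]))   # add a little waviness

wavy_line("blue", 40)          # draw a wavy blue line...
pen.penup(); pen.home(); pen.pendown()
wavy_line("black", 40)         # ...intersecting with a black line

turtle.done()
```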

In 1980, AARON was able to generate this:

Detail from an untitled AARON drawing, ca. 1980, via Computer History Museum

By 1995, Cohen had encoded rules of color, and AARON was generating images like this:

The first color image created by AARON, 1995. via Computer Museum, Boston, MA

Just a few years ago, other attempts at AI-generated art were flat-looking and derivative, like this image from 2019:


Twenty-seven years after AARON's first AI-generated color painting, algorithms like Midjourney would be quickly rendering photorealistic images from text prompts. But to accomplish it, the primary method is completely different.

Deep Learning Era (1986-Present)

Algorithms which can create photorealistic images-on-demand are the culmination of multiple parallel academic research threads in learning systems dating back several decades.

We’ll get to the generative models which are key to this new wave of “engines of wow” in the next post, but first, it’s helpful to understand a bit about their central component: neural networks.

Since about 2000, you have probably noticed everyday computer services making massive leaps in predictive capabilities; that’s because of neural networks. Turn on Netflix or YouTube, and these services will serve up ever-better recommendations for you. Or, literally speak to Siri, and she will largely understand what you’re saying. Tap on your iPhone’s keyboard, and it’ll automatically suggest which letters or words might follow.

Each of these systems relies upon trained prediction models built by neural networks. And to envision them, computer scientists and mathematicians had to radically shift their thinking away from the procedural approach. A branch of them did so first in the 1950's and 60's, and then again in a machine-learning renaissance which began in earnest in the mid-1980's.

The key insight: these researchers speculated that instead of procedural coding, perhaps something akin to “intelligence” could be fashioned from general purpose software models, which would algorithmically “learn” patterns from a massive body of well-labeled training data. This is the field of “machine learning,” specifically supervised machine learning, because it’s using accurately pre-labeled data to train a system. That is, rather than “Computer, do this step first, then this step, then that step”, it became “Computer: learn patterns from this well-labeled training dataset; don’t expect me to tell you step-by-step which sequence of operations to do.”

The first big step began in 1958. Frank Rosenblatt, a researcher at Cornell University, created a simplistic precursor to neural networks, the “Perceptron,” basically a one-layer network consisting of visual sensor inputs and software outputs. The Perceptron system was fed a series of punchcards. After 50 trials, the computer “taught” itself to distinguish those cards which were marked on the left from cards marked on the right. The computer which ran this program was a five-ton IBM 704, the size of a room. By today’s standards, it was an extremely simple task, but it worked.

A single-layer perceptron is the basic component of a neural network. A perceptron consists of input values, weights and a bias, a weighted sum and activation function:

Frank Rosenblatt and the Perceptron system, 1958

Rosenblatt described it as the “first machine capable of having an original idea.” But the Perceptron was extremely simplistic; it merely added up the optical signals it detected to “perceive” dark marks on one side of the punchcard versus the other.
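
For the curious, the whole idea fits in a few lines of Python. This is a toy re-creation of the spirit of Rosenblatt's experiment (made-up two-pixel "punchcards," not his actual hardware or data): a weighted sum, a threshold, and weights nudged after each mistake.

```python
# A minimal perceptron: weighted sum + threshold, weights adjusted after mistakes.
import random

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(inputs):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum
    return 1 if total > 0 else 0                                # step-function activation

# training data: (pixels, label) -- label 1 means "mark on the left" (made-up)
examples = [([1, 0], 1), ([0, 1], 0)]

for _ in range(50):                      # Rosenblatt's system needed about 50 trials
    inputs, target = random.choice(examples)
    error = target - predict(inputs)
    weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
    bias += learning_rate * error

print(predict([1, 0]), predict([0, 1]))  # should print: 1 0
```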

In 1969, MIT's Marvin Minsky, whose father was an eye surgeon, wrote convincingly that neural networks needed multiple layers (like the optical neuron fabric in our eyes) to really do complex things. But his book Perceptrons, though well-respected in hindsight, got little traction at the time. That's partially because during the intervening decades, the computing power required to "learn" more complex things via multi-layer networks was out of reach. But time marched on, and over the next three decades, computing power, storage, languages and networks all improved dramatically.

From the 1950’s through the early 1980’s, many researchers doubted that computing power would be sufficient for intelligent learning systems via a neural network style approach. Skeptics also wondered if models could ever get to a level of specificity to be worthwhile. Early experiments often “overfit” the training data and simply output the input data. Some would get stuck on local maxima or minima from a training set. There were reasons to doubt this would work.

And then, in 1986, Carnegie Mellon Professor Geoffrey Hinton, whom many consider the "Godfather of Deep Learning" (go Tartans!), demonstrated that neural networks could learn to predict shapes and words by statistically "learning" from a large, labeled dataset. Hinton's revolutionary 1986 breakthrough was the concept of "backpropagation." This adds multiple layers to the model (hidden layers), and iterates through the network, using the output of a mathematical loss function to adjust weights and minimize the distance from the expected output.

This is rather like the golfer who adjusts each successive golf swing, having observed how far off their last shots were. Eventually, with enough adjustments, they calculate the optimal way to hit the ball to minimize its resting distance from the hole. (This is where terms like “loss function” and “gradient descent” come in.)
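
The golfer analogy reduces to a few lines of code: compute the loss, compute its gradient with respect to a weight, and adjust the weight a little in the opposite direction. This toy example uses a single made-up weight and target, not a real network.

```python
# Gradient descent on a single weight: each "swing" is adjusted in proportion
# to how far off the last one was.
weight, target, learning_rate = 0.0, 3.0, 0.1

for step in range(25):
    prediction = weight * 2.0                   # stand-in "model": output = weight * input
    loss = (prediction - target) ** 2           # "loss": squared distance from the hole
    gradient = 2 * (prediction - target) * 2.0  # d(loss)/d(weight)
    weight -= learning_rate * gradient          # adjust the next swing

print(weight)  # converges toward 1.5, since 1.5 * 2.0 == 3.0
```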

In 1986-87, around the time of the 1986 Hinton-Rumelhart-Williams paper on backpropagation, the whole AI field was in flux between these procedural and learning approaches, and I was earning a Master's in Computer Science at Stanford, concentrating in "Symbolic and Heuristic Computation." I had classes which dove into AARON-style symbolic, procedural AI, and a few classes touching on neural networks and learning systems. (My master's thesis was on getting a neural network to "learn" how to win the Tower of Hanoi game, which requires apparent backtracking to win.)

In essence, you can think of a neural network as a fabric of software-represented units (neurons) waiting to soak up patterns in data. The methodology to train them is: “here is some input data and the output I expect, learn it. Here’s some more input and its expected output, adjust your weights and assumptions. Got it? Keep updating your priors. OK, let’s keep doing that.” Like a dog learning what “sit” means (do this, get a treat / don’t do this, don’t get a treat), neural networks are able to “learn” over iterations, by adjusting the software model’s weights and thresholds.

Do this enough times, and what you end up with is a trained model that’s able to “recognize” patterns in the input data, outputting predictions, or labels, or anything you’d like classified.

A neural network, and in particular the special type of multi-layered network called a deep learning system, is “trained” on a very large, well-labeled dataset (i.e., with inputs and correct labels.) The training process uses Hinton’s “backpropagation” idea to adjust the weights of the various neuron thresholds in the statistical model, getting closer and closer to “learning” the underlying pattern in the data.

For much more detail on Deep Learning and the mathematics involved, see this excellent overview:

Deep Learning Revolutionizes AI Art

We'll rely heavily upon this background of neural networks and deep learning for Part II: The Diffusion Revolution. The AI art revolution uses deep learning networks to interpret natural language (text to meaning), classify images, and "learn" how to synthetically build an image from random noise.

Gallery

Before leaving, here are a few more images created from text prompts on Midjourney:

You get the idea. We'll check in on how deep learning enabled new kinds of generative approaches to AI art, called Generative Adversarial Networks, Variational Autoencoders and Diffusion, in Part II: Engines of Wow: Deep Learning and The Diffusion Revolution, 2014-present.

I’m Winding Down HipHip.app

After much thought, I’ve decided to wind down the video celebration app I created, HipHip.app.


All servers will be going offline shortly.

Fun Project, Lots of Learning

I started HipHip as a “give back” project during COVID. I noticed that several people were lamenting online that they were going to miss big milestones in-person: celebrations, graduations, birthdays, anniversaries, and memorials. I had been learning a bunch about user-video upload and creation, and I wanted to put those skills to use.

I built HipHip.app, a celebration video creator. I didn’t actually know at the time that there were such services — and it turns out, it’s a pretty crowded marketplace!

While HipHip delivered hundreds of great videos for people in its roughly two years on the market, it struggled to be anything more than a hobby/lifestyle project. It began under the unique circumstances of lockdown, helping people celebrate. That purpose was well served!

Now that the lockdown/remote phase of COVID is over, the economics of the business showed that it’s unlikely to turn into a self-sustaining business any time soon. There are some category leaders that have really strong search engine presence which is pretty expensive to dislodge.

I want to turn my energies to other projects, and free up time and budget for other things. COVID lockdown is over, and a lot of people want a respite from recording and Zoom-like interactions, including me.

It was a terrific, educational project. It kept me busy, learning, and productive. HipHip delivered hundreds of celebration videos for people around the world.

I’ve learned a ton about programmatic video creation, technology stacks like Next.js, Azure and React, and likely will apply these learnings to new projects, or perhaps share them with others via e-learning courses.

Among the videos people created were graduation videos, videos to celebrate new babies, engagements, birthdays, anniversaries and the attainment of US citizenship.

In the end, the CPU processing and storage required for online video creation meant that it could not be done for free forever, and after testing a few price points, there seems to be only so much willingness to pay in a crowded market.

Thanks everyone for your great feedback and ideas!

Financial Systems Come for Your Free Expression: Don’t let them.

I’ve been a PayPal customer for more than a decade, but closed my account last week. 2022 has shown glimpses of what a social credit system might look like in America. Decentralized, yet singular in ideology.

My local bagel store, barbershop and dry cleaner now only accept cashless transactions. Ubiquitous touchscreen displays and tap-to-pay checkouts now happily whisk customers through the line. Cashless transactions have been a boon for customer, employee (more tips!) and retailer alike. Mostly, I love it.

But with it has come unprecedented information flow about who we are, and increased temptation for platform providers to start monitoring who can be in their club and who cannot.

While there isn’t yet any grand centralized design of a social credit system along the lines of what the Chinese Communist Party operates, I cannot help but worry that we are assembling the ideal toolset for ideological enforcement, monitoring and control should someone, some day, wish to network it all together.

Does that sound alarmist? Consider the overall trajectory of these recent stories:

October, 2022: PayPal Attempts to Fine Customers for What it Deems “Harmful” Ideas

On October 7th 2022, PayPal published amendments to its Acceptable Use Policy (AUP), which would have granted the payment provider legal authority to seize $2,500 from customers’ bank accounts for every single violation of what it deemed the spreading of “harmful” or “objectionable” information.

The determination of whether something is “harmful”, misleading or “objectionable” would come at PayPal’s sole discretion. These changes were set to go into effect on November 3rd, 2022, but were quietly retracted. PayPal only explained their policy reversal via emails to a few news outlets on October 8th; remarkably, you still cannot find any commentary about this episode on their Twitter feed.

What is harmful misinformation? Well, that’s subjective. We might all agree that businesses that explicitly promote murder shouldn’t be on the platform. But then it starts to get trickier. What if you believe the path of least overall harm was to reopen schools sooner? Or let vaccination be an informed choice, and not a mandate? Is being pro-choice or pro-life more “harmful?” Depends on who is answering the question.

Anything deemed harmful or objectionable by PayPal would be subject to such a fine.

Let’s review several recent statements which were authoritatively deemed “harmful misinformation”:

  • “Prolonged school closure is a mistake. Learning loss will happen, suicide rates might increase. We need to reopen schools urgently.” (Misinformation in 2020, True today.)
  • “Vaccination does not in fact significantly slow spread of COVID-19 to others.” (Misinformation in 2020, True today.)
  • “Naturally-acquired immunity is as strong as immunity acquired through vaccination, if not stronger.” (Misinformation in 2020, True today.)
  • “COVID-19, the most significant public health crisis in our lifetime, might well have emerged from a lab accident.” (Misinformation in 2020, officially declared at least equally plausible by the US government today.)
  • “Hunter Biden’s laptop contained clear and troubling signs of influence peddling.” (Declared misinformation in 2020, yet now verified by New York Times, Washington Post and others.)
  • “For younger males, the risk of myocarditis from vaccination may actually exceed the hospitalization risk of COVID itself.” (Declared misinformation in 2020, yet backed by empirical evidence today.)

Within a brief span of just thirty months, each of these statements has gone from “misinformation” or “harmful-information” as vehemently declared by authorities and name-brand “fact checkers” to now-majority American and empirically-validated viewpoints.

Further, who is paying attention to these stealth changes to terms and conditions? It wasn’t the New York Times, The Verge, nor the Washington Post that brought this major policy change of PayPal’s to America’s attention. It came from the right, who have become the most vocal critics of a creeping state-corporate symbiosis which they call the “Blue Stack.” The Blue Stack includes progressive technocrats, corporate media, and ostensibly independent big tech firms which work to enforce an ideology that inevitably tilts leftward.

The blue stack presents America’s elite with something they’ve always craved but has been out of reach in a liberal democracy: the power to swiftly crush ideological opponents by silencing them and destroying their livelihoods. Typically, American cultural, business, and communication systems have been too decentralized and too diffuse to allow one ideological faction to express power in that way. American elites, unlike their Chinese counterparts, have never had the ability to imprison people for wrong-think or derank undesirables in a social credit system.

Zaid Jilani, The Blue Stack Strikes Back, Tablet

Were it not for Ben Zeisloft, writer for the right-wing website Daily Wire, the public would likely not have known about PayPal’s major policy shift. But once Zeisloft’s piece hit (New PayPal Policy Lets Company Pull $2,500 From Users’ Accounts If They Promote ‘Misinformation’ | The Daily Wire), it caught fire on social media. And PayPal was forced into crisis-response mode, as the unwelcome press and cancellations started pouring in.

This wasn't misinformation. PayPal's new policy stated precisely what Zeisloft had identified. #CancelPayPal quickly started trending on Twitter, TikTok and Instagram. The proposed AUP changes are now gone from PayPal's website, but here's what it said on the web archive for October 27, 2022:

And here's what it said after the quiet retraction, as of November 11, 2022:

The company's former president, David Marcus, blasted PayPal on Twitter on Saturday: "It's hard for me to openly criticize a company I used to love and gave so much to. But @PayPal's new AUP goes against everything I believe in. A private company now gets to decide to take your money if you say something they disagree with."

PayPal handled this PR crisis very poorly. While they walked the policy back, they did so only via private emails to publications like Snopes, which dutifully penned "No, PayPal Isn't Planning to Fine Users $2.5K for Posting Misinfo." Snopes fails to clearly state that yes, PayPal indeed had planned exactly that.

And PayPal executives have yet to clearly explain to customers how this AUP change even arose. It all gives one the impression that the only “error” with this policy rollout is that someone skeptical noticed it. Their main Twitter handle, @paypal, was and still is silent on the rollout and stealthy walk-back. They reached out one-on-one with a few media organizations to state that it was an error, but they didn’t apprise the public. They didn’t explain how such an “error” could make it onto their corporate website.

October, 2022: JP Morgan Chase Summarily Closes Kanye West’s Accounts

I've never been a fan of Kanye "Ye" West's music, erratic persona, nor many of his MAGA-political views. And his recent, clearly antisemitic statements deserve condemnation. I think they're abhorrent.

Yet I’m also unsettled by JP Morgan Chase, Inc. summarily indicating they are closing his bank accounts based on his recent speech.

On the plus side, they’ve given him thirty days’ notice.

I admit I'm conflicted on this. I'm fine with this specific decision, but not with what it says about the age we are now in. It is, in effect, a major bank saying you need to not only be a customer in good standing, but also not stray from what its executives think are the norms of good speech. Are they saying it's not just the bank's services that set their brand, it's the collective words and deeds of its customers?

Something feels new here.

Have corporate banking giants been arbiters of what we can and cannot say in our private lives? Do we want them to be? They’re private companies, after all, but who expected investment banks of all entities to be the enforcers of what they perceive to be social acceptability?

It feels absurd for bankers, of all people, to be America’s moral compass. Do you consider bankers to be America’s new home for ethicists, who will be able to determine what is and is not societally righteous?

February, 2022: GoFundMe Deplatforms “My Body, My Choice” Truckers

GoFundMe is the #1 marketplace and payment processor for fundraisers. As you may recall, the Canadian truckers who objected to that nation’s vaccine mandate headed en masse to Ottawa to protest the government’s mandate via what they termed a “Freedom Convoy.”

After the convoy raised over $10 million through GoFundMe from people around Canada and the rest of the world, executives at GoFundMe unilaterally decided on February 4th, 2022 to lock the truckers' fundraising account. Further, in their initial statement, GoFundMe signaled they would distribute those funds to charities of their own choosing. "Given how this situation has evolved, no further funds will be distributed directly to the organizers of Freedom Convoy," GoFundMe wrote about the decision. "We will work with the organizers to send all remaining funds to trusted and established charities verified by GoFundMe."

After massive outcry, GoFundMe provided an update and said that they would instead refund donations. Many noticed their initial action and found it indicative of who they are. #BoycottGoFundMe made the rounds on social media for weeks.

Critics are right to point out that GoFundMe has hosted numerous fundraisers for Antifa, CHOP/CHAZ and other protest groups — even those around whom violence has occasionally happened — without cancelation or threats of unilateral fund-seizure. You can see just a few of them by searching GoFundMe's site.

[Editorial note: I have stated my own views on vaccination: I'm in favor of it personally, especially for most older demographics, but believe it to be a personal choice. Since it is now clear that vaccination does not measurably nor durably reduce spread (one such study here; others corroborate), I think vaccination should be an informed choice. I am firmly opposed to COVID vax mandates.]

PayPal, GoFundMe and JP Morgan Chase are each private companies, and have every legal right to set their own terms and conditions of use. But look also what’s happening at the governmental level.

August, 2022: Massive Increase to IRS budget, Considering Lowering Reporting Threshold to $600

The Bank Secrecy Act of 1970 started requiring banks to report deposits of $10,000 or more (in 1970 dollars). Together with adjustments made by the Patriot Act in 2001, banks need to report to state and local authorities all deposits or withdrawals of $10,000 or more. (Even multiple transactions broken up into smaller pieces are tracked; this is known as "structuring," and it is itself illegal.)

More recently, in 2021, Treasury Secretary Janet Yellen and others started advocating for lowering the threshold to $600. This hasn’t yet been adopted, but it’s being strongly advocated. With inflation, $600 is the new $500, so essentially most critical expenditures, like rent, furniture, car purchases, healthcare, travel and more are on the radar.

We often hear about “87,000 IRS Agents” authorized by the so-called Inflation Reduction Act, but really what the Act includes is a massive $79 billion budgetary expansion of the Internal Revenue Service. The IRS has every incentive and desire to start wiring in end-to-end tracking of cashless transactions.

Should the US want to pursue a social credit system along the lines of the Chinese state, all that will really be needed is the "right" lawmakers to authorize it.

Republican Rep. Jefferson Van Drew has introduced HR 5475, known as the Banking Privacy Act, to stop the Biden Administration’s proposal, which has been referred to the House Financial Services Committee. Should the Republicans win control of the House, it’s possible this will be taken-up.

Of course, as the old saw goes, if you’re not doing anything illegal, you have nothing to worry about. After seeing the way COVID was and is handled, and the creeping power of what writer Zaid Jilani calls the Blue Stack, do you still have that confidence?

Themes

Fast-forward the videotape, and it is plain that without new regulation, we are fast headed toward ideological groupthink enforced by the financial world, which is of course also susceptible to the whims of government leaders. Consolidation and a pandemic-accelerated move to a cashless society are making a social credit system much easier to snap in some day.

True, these actions are mostly the work of free enterprise. Companies aren’t state-controlled in America the way they are in China, and they are free to devise their own legal terms and conditions. We consumers are free to opt in or opt out. No one has to use PayPal, GoFundMe, JP Morgan Chase, Twitter or Facebook for that matter. I’m not aware of any of these activities being illegal.

But we need to pause a moment and recognize how a cashless society with higher concentrations of information flow and money are extremely tempting components for regulators and authoritarians on America’s political flanks. It’s a far cry from the local savings account that was largely agnostic to your speech.

Increasingly, outside groups, state and local governments and employees from within are pressuring banks, big technology companies and other corporations to take manifestly political/ideological stances, and expel people for wrongthink. Our massive migration toward a cashless society makes this easier.

That's all fine, you might say: "I support Canada's vax mandate for truckers, I think Kanye West is reprehensible and JP Morgan has every right to de-bank him from their system, and I think the IRS's investment in tracking every $600 will inure to much greater revenue to US coffers."

But recognize that these monitoring tools and platforms themselves are entirely agnostic tools; their application merely depends upon who is in power. And that can change at any election.

So it’s an important exercise to take a moment to imagine the power which you may now applaud in the hands of your worst ideological foe. Are you still comfortable with how this is all trending?

For me, though I love technology and the convenience it offers, these trends to include speech and behavior in whether people can participate in a platform start to fill me with unease, especially when they are phrased in such a subjective way. After more than two decades as a customer, I closed my PayPal account last week. If you’d like to close your account, you can do it in a few clicks as I explain on Twitter.

And while I’m still fine with debit and credit cards, I’m beginning to suspect this sense of ease might not last forever. It only takes VISA or Mastercard to say “we will fine our customers for harmful misinformation.” For these and other reasons, this long-time advocate of technology is now becoming reacquainted with check-writing. I’m not ready to switch back to paper just yet, but it can’t hurt to re-learn how it worked in the 70’s.

Medicine Should Be About Care, Not Self-Righteousness

On the unwillingness of University of Michigan medical school students to hear views that might conflict with their own.

I am not, nor ever have been, a medical professional. I am also among the 61% of Americans who do not consider themselves "pro-life."

But there was something profoundly unsettling about the walk-out of incoming med school students at University of Michigan’s medical school convocation last week:

Dr. Collier is one of the most popular professors at University of Michigan Medical School. That’s why she was selected, by a combination of students and faculty, to be the keynote speaker for the “white coat” ceremony, in which incoming med school students get their symbolic professional jacket.

Why the walkout? It’s because Dr. Collier also happens to be among the 39% of Americans who define themselves to be “pro-life.” OK. That’s a rational, quite common viewpoint on a complex issue.

Now, she didn't even speak to abortion at all. Her keynote address was far more general, and inspiring: that physicians should do everything possible to keep from becoming machines. They should not perceive themselves as "task-completers," but rather as physicians, humans who care. That they should be grateful. Appreciate a team.

She chose not to delve into abortion as a topic at all. But what if she had? What if she had decided to mention (gasp) her own perspective of a pro-life medical professional? Is that so appalling that it must be shunned? Is there no learning that’s possible by hearing that viewpoint out?

As it happened, dozens of students didn’t hear that message, because they preemptively walked out, before her address. Call me old-fashioned, but I believe that medical care should begin with empathy, and empathy begins with listening. We can, and must, tolerate and listen to perspectives other than our own.

More than any generation I can remember, far too many of the young adults we are raising seem uninterested in hearing out viewpoints other than their own. They even think it's noble to shut out those views.

39% of Americans — more than one out of every three — declare that they are “pro-life.”

Recession: What’s in a Name?

The White House kicks off its effort to change the most commonly accepted criterion of recession: two or more successive quarters of negative GDP growth.

In a clear sign that the Administration is anxious about Thursday’s Gross Domestic Product (GDP) report from the Commerce Department, the White House is making an all-out push this week around what constitutes a “recession.”

While it’s true there is no single steadfast criterion for a recession, by far the most common indicator used for recession has been two or more consecutive quarters of negative growth in Gross Domestic Product (GDP.) That’s the accepted shorthand criterion that MBA students like me were taught in our macroeconomics classes. You’ll find it in many economic textbooks, investment dictionaries, and nightly news segments.

And there’s the rub, because the US GDP declined in the first quarter by 1.6%, and signs are pointing to a decline again in the second quarter:

Source: US Bureau of Economic Analysis

No politician wants to go into midterms in a recession.

Beyond purely political motivations, naming the beast can plausibly make it worse. That’s because when consumers feel anxious about future employment or wage prospects, they may postpone durable goods orders and cut down spending to the bare essentials. Ditto for corporate boards: as soon as they decide a contraction is here, many will rationally postpone investment and cut back hiring. Chief Executives of Bank of America and Goldman Sachs have each decided that a recession is likely here.

To get out ahead of Thursday's report, the White House released How Do Economists Determine Whether the Economy is In a Recession? on the official White House blog. They note that the official recession scorekeeper is the National Bureau of Economic Research (NBER), which defines it as "a significant decline in economic activity that is spread across the economy and lasts more than a few months." But there appears to be no statutory language that officially makes NBER the umpire. It's a subjective term which has a long-accepted criterion, and the White House is arguing for a softer definition.

Clearly, Democrats do not wish to head into the midterm elections in an official “recession.” To that end, Treasury Secretary Janet Yellen was dispatched to Sunday news shows and downplayed the risk of recession, arguing that consumer spending is growing, that the economy has added an estimated 400,000 jobs, and still has a relatively low unemployment rate of just 3.6%.

Congress? They use the 2-Quarter Definition.

Interestingly, a definition of recession actually does exist in statutory code passed by Congress, in the Gramm-Rudman-Hollings Act of 1985. It adopts the two-consecutive quarter definition:

Source: 2 USC 904, via @PhilWMagness

Warning Signs

Regardless of what we call it, and when, there are several economic warning signs.

Inflation, of course, is the metric that everyone feels every time they visit the grocery store. Inflation rose 9.1% year over year in June, which is the highest year over year increase in 40 years. Seattle-area prices are on an even bigger tear, jumping 10.1% since last year.

To try to get a handle on inflation, the Federal Reserve has been steadily hiking interest rates, which tends to slow demand. The slowing of demand typically relieves upward price pressure. But the Fed wants to hit the brakes without slowing it so much that it turns the economy into full-blown recession.

This is extremely challenging, because soaring inflation isn’t solely due to Fed actions, but also to factors out of their control, like disrupted supply chains which have caused scarcity of some key goods, the war in Ukraine, energy and particularly refinery constraints and capacity reductions, the trillions poured into the economy through the American Rescue Plan, and more. Inflation, in short, has many fathers.

The Federal Open Market Committee continues to raise interest rates, and the Federal Reserve is tightening access to money, signaling that still more is likely ahead. This does tend to put the brakes on growth, as money itself becomes more expensive to borrow or acquire.

The White House is leaning on “strong labor market” as its main justification as to why we’re not really in a recession.

Press Secretary Karine Jean-Pierre tackled this question today, again emphasizing the “strength” of the labor market:

But specifically on the hiring front, it’s a very interesting and mixed picture. The topline number of employment looks good at first glance. But while labor shortages exist in many essential roles (often lower-wage), the precise opposite appears to be true at the higher end of the wage spectrum.

Seattle presently has critical staff shortages in essential workers, particularly in public safety (police, firefighters, healthcare workers), and also ferry workers, teachers and more. But growth in high-wage, high-tech knowledge-worker hiring is seeing a significant slowdown. In the past three months, tech firms like Microsoft and Google have announced slower hiring. Layoffs at startups appear to be growing. Apple, Meta, Uber and Amazon have also joined the list. Venture funding of early-stage startups (Series A and B rounds) was down about 22% year over year in the second quarter, according to Pitchbook.

Yellen and Jean-Pierre remind us that the nation’s overall unemployment rate is low, as can be seen in this chart from the St. Louis Federal Reserve:

Source: St. Louis Federal Reserve

But the “low unemployment rate” masks a lot. The picture is far more complex, and less rosy than such a chart might initially suggest. That’s because overall unemployment measures the percentage of people in the workforce or seeking to be in the workforce who are unemployed, but the labor force participation rate (i.e., the percentage of all Americans who are in the workforce) must also be considered to get a true picture of America’s current workforce trends.
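
To see why, consider a hypothetical example with made-up round numbers: suppose a country has 200 million working-age adults, 160 million of them in the labor force (an 80% participation rate), and 6 million of those unemployed, for an unemployment rate of 6/160, or about 3.8%. If 2 million of the unemployed give up looking for work entirely, they drop out of the labor force altogether: the unemployment rate falls to 4/158, about 2.5%, even though not a single additional person found a job, while the participation rate slips from 80% to 79%.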

Looking at the labor force participation rate, notice that the pandemic brought a "Great Resignation," and many Americans — particularly those in essential jobs (e.g., healthcare hospital workers, firefighters, teachers, warehouse workers, retail) — still haven't rejoined the workforce:

Source: St. Louis Federal Reserve

The noticeable decline in workforce participation rate suggests that people have been living off of household savings. And indeed, that appears to be the case:

These charts tell a story: many opted out of the workforce and have been living off of savings and/or asset appreciation. But these savings are usually tied to assets (stocks, bonds, home valuation etc.) which are likely to deflate as the economy contracts. One of the key risks that doesn’t get enough attention is that it may be very challenging for millions of Americans to try to re-enter the workforce during an economic downturn once they hit the end of their savings.

Is the “let’s talk about the definition of recession” simply a good-faith effort to introduce nuance back into the American political conversation, and curtail an even worse downturn by not naming it? Or is it a cynical attempt to sweep a major political liability under the rug by redefining a long-accepted word? I’ll let you decide.

I’ll be sticking to the colloquial definition of recession — two or more consecutive quarters of negative GDP. But we should recognize that this is likely to be a recession with very unique characteristics, and won’t be easily mapped to recent ones.

Internet of Things: How to Remotely Monitor Propane Tank Level via Raspberry Pi

For less than $100, a little coding and a small circuit, you can remotely monitor a propane tank from anywhere.

One great use-case for the Internet of Things is to remotely monitor gauges and levels. In addition to spot-checks, wiring them up also allows you to auto-log values over time, so you can get a nice graph of usage, economize where you can, and project future usage.

If your property is served by liquid propane gas (LPG), there might be occasions where you want to be able to check the tank’s propane level from your phone. Ideally, you might even want it to notify you when the tank is, say, at 25% or lower, so you can schedule a refill. And wouldn’t it be useful to see which seasons, use-cases or appliances tend to consume the most propane?

There are commercial vendors for remote propane monitoring, but they’re usually tied to some kind of subscription service, some of them come bundled with a mandatory cellphone plan, and often tied to the fuel vendor themselves. I wasn’t interested in paying, say, $5 per month just to be able to check a value.

This made a perfect use-case for a simple microcontroller project.

WARNING: Fiddling with electricity, even very low 1-3V such as what we’re talking about here, around combustible material is a very bad idea. Only do this if you know what you are doing. Be sure all your connections are fully sealed and weatherproofed. Maker assumes all risk.

Ready to accept that risk? OK, here’s how I built it:

First, Some Background: How do Propane Gauges work?

At first, I thought that these gauges somehow have a physical pass-through into the tank, but they’re much more clever than that. They make use of magnets to keep the integrity of the tank fully sealed.

There’s a very good video here from DPS Telecom (a vendor of remote monitoring solutions for businesses and homes) which explains how such fuel gauges work:

So to proceed, you’ll need a gauge for your tank which is “Remote Ready.” This has been standardized in America, and you’ll want what’s called an “R3D compatible” gauge. They can be had for about $9-15.

This simply means that the gauge both reads and conveys a magnetic value. As the needle moves, a magnet also moves, which can be read by a Hall Effect Sensor placed just over the needle (under the black plastic.) Luckily, this standardization means that these gauges will almost certainly work on your existing tank, simply by removing two screws, removing the existing gauge, and screwing the new one in. Or, if your existing gauge says it’s “remote ready,” as many do, you’re already set.

But pre-made “reader” cables are hard to find and rarely sold separately, and I was shocked to see several vendors asking $150+ for them! You don’t need a commercial reader cable; you can make your own with a Hall Effect sensor chip, which you can find for about $1.

Building Our Remote Gauge

OK, now that we know how these gauges work, how can we get the Raspberry Pi to “read” one? We need to translate that magnetic signal into a voltage, which is the Hall Effect sensor’s job. And since the RPi doesn’t have a native analog-to-digital converter built in (as the Arduino does), you’ll also need an Analog-to-Digital Converter (ADC) chip, so that the RPi can use the signal in digital form. I chose the MCP3008 for this project, which you can find on Amazon for about $9.
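To make that concrete, here’s a minimal sketch of reading the sensor’s voltage with Adafruit’s CircuitPython MCP3xxx library (a common way to drive the MCP3008 from a Pi). This isn’t the repository code; the chip-select pin and channel are assumptions you’d match to your own wiring:

```python
# Minimal sketch: spot-read the Hall Effect sensor's voltage on MCP3008 channel 0.
# Assumes the adafruit-circuitpython-mcp3xxx library and SPI wiring per the diagram below;
# the chip-select pin (D5 here) is an assumption, so use whichever GPIO you wired.
import board
import busio
import digitalio
import adafruit_mcp3xxx.mcp3008 as MCP
from adafruit_mcp3xxx.analog_in import AnalogIn

spi = busio.SPI(clock=board.SCK, MISO=board.MISO, MOSI=board.MOSI)  # hardware SPI bus
cs = digitalio.DigitalInOut(board.D5)                               # chip-select pin
mcp = MCP.MCP3008(spi, cs)
channel = AnalogIn(mcp, MCP.P0)  # Hall Effect sensor output on channel 0

print("Raw ADC value:", channel.value)
print("Voltage: {:.3f} V".format(channel.voltage))
```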

Parts List

  • Raspberry Pi (I used an RPi4)
  • MCP3008 Analog-to-Digital Converter (ADC) chip
  • 49E Hall Effect Sensor Chip
  • Half-size breadboard
  • One 10 kΩ resistor
  • An R3D-compatible propane gauge
  • Wires for connecting breadboard
  • 3-wire outdoor shielded cable
  • Small magnet for testing and setting values in the gauge during prototyping
  • Glue/epoxy to set the Hall Effect sensor in place
  • Outdoor Enclosure plus cable glands

The key piece of circuitry is the Hall Effect sensor. I chose the 49E, which is super cheap and accurate enough for the purpose:

The 49E Hall Effect sensor

Here’s a demonstration of what I’ve built:

Yes, that’s Scotch tape on the prototype gauge. I’ll likely be epoxying it into place after tests are complete.

Experienced makers will no doubt notice that the Raspberry Pi 4, a full-fledged Linux computer, is extreme overkill for this application. But I’ll be using that same Pi to handle several other home automation tasks; this is just one of them.

Also note that the MCP3008 can convert up to 8 analog voltage inputs; this project uses just one of them. I’ll likely use this same setup to remotely monitor things like humidity, temperature, soil moisture, etc. over time.

Wiring Diagram

Figure 1. Wiring Diagram

Steps

  1. Get Raspberry Pi configured with Raspbian
  2. Configure it for headless development via SSH. I used Visual Studio Code and its ssh remote development feature.
  3. Install prerequisites: CircuitPython and MCP3008 library
  4. Install pip and flask.
  5. Get sample code from Github. I’ve put simple Flask code (python) here: stevemurch/whidbey-raspberry-pi (github.com)
  6. Make circuit: Use the MCP3008 to take an analog input and convert it into a digital signal. See wiring diagram above. [Addendum from diagram: I put a 10k Ohm “pull down” resistor connecting the analog input data pin (MCP3008 channel 0) with ground. This is optional, but it tended to steady the values a bit more.]
  7. Solder Hall Effect Sensor 49E chip to 3 wires. Check the 49E datasheet and connect a wire for VCC, Ground and Voltage Out.
  8. Crucial step: Position the Hall Effect Sensor Chip exactly in the small channel, with the smaller side of the trapezoid facing DOWN (i.e., toward the gauge). With my 49E chip, this means the wires, from top to bottom, are VCC, GROUND and SIGNAL.
  9. Using the Raspberry Pi, take measurements at key values. These will vary considerably based upon how long your wires are and where precisely you have the Hall Effect Sensor placed. My own values are shown in Table 1 below, and there’s a short calibration loop sketched just after the table.
  10. Install the apache2 server and the NextJS/React app that displays a pretty gauge.
  11. Make sure the simple Flask application (which is the API the NextJS front-end calls) starts at RPi boot time. I’ve used gunicorn for this, launched from an entry in /etc/rc.local so it runs at startup.
Table 1. The values I got, once I secured the Hall Effect Sensor (49E) in place
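To capture those key values, a tiny read-and-print loop is all you need. Here’s a sketch under the same assumptions as the snippet above (Adafruit’s CircuitPython MCP3xxx library, chip select on D5, sensor on channel 0): let it run while you hold a test magnet, or set the gauge needle to known positions, and write the voltages down.

```python
# Calibration helper sketch: print the sensor voltage once per second so you can
# record readings at known gauge positions (these become the Table 1 values).
# Same assumptions as the earlier snippet: MCP3008 on hardware SPI, chip select on D5.
import time
import board
import busio
import digitalio
import adafruit_mcp3xxx.mcp3008 as MCP
from adafruit_mcp3xxx.analog_in import AnalogIn

spi = busio.SPI(clock=board.SCK, MISO=board.MISO, MOSI=board.MOSI)
mcp = MCP.MCP3008(spi, digitalio.DigitalInOut(board.D5))
channel = AnalogIn(mcp, MCP.P0)

while True:
    print("{:.3f} V".format(channel.voltage))
    time.sleep(1)
```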

Code

I’m putting some finishing touches on this code and will provide a link to the GitHub repository shortly. There are three main parts:

gauges.py: A library to spot-read a voltage and spit back out the gauge value
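At its heart, that conversion is just interpolating a measured voltage against your calibration points. Here’s a simplified sketch of what a gauges.py-style helper can look like; the calibration numbers below are placeholders, so substitute the values from Table 1:

```python
# Simplified sketch of a gauges.py-style helper (not the repository's actual code).
# CALIBRATION maps measured sensor voltages to known gauge readings; these numbers
# are placeholders, so substitute the values you recorded in Table 1.
CALIBRATION = [
    (1.30, 0),    # volts when empty (placeholder)
    (1.70, 50),   # volts at half full (placeholder)
    (2.08, 100),  # volts when full (placeholder)
]

def voltage_to_percent(volts):
    """Piecewise-linear interpolation of a sensor voltage into a 0-100 percent-full value."""
    points = sorted(CALIBRATION)
    if volts <= points[0][0]:
        return float(points[0][1])
    if volts >= points[-1][0]:
        return float(points[-1][1])
    for (v0, p0), (v1, p1) in zip(points, points[1:]):
        if v0 <= volts <= v1:
            return p0 + (p1 - p0) * (volts - v0) / (v1 - v0)

def read_gauge(channel):
    """Spot-read the ADC channel and return the tank's percent full, rounded."""
    return round(voltage_to_percent(channel.voltage), 1)
```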

server.py: the Flask API
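The API can be a single route that returns the latest reading as JSON. Again, this is a hedged sketch rather than the repository’s exact code; the route name and the imports from the gauges module are assumptions:

```python
# Minimal sketch of a server.py-style Flask API (not the repository's actual code).
# Assumes gauges.py exposes read_gauge() and the MCP3008 channel object shown earlier.
from flask import Flask, jsonify

from gauges import channel, read_gauge  # hypothetical imports from the gauges module

app = Flask(__name__)

@app.route("/api/propane")  # the route name is an assumption
def propane_level():
    # Spot-read the sensor and return JSON for the NextJS front-end to render.
    return jsonify({"percent_full": read_gauge(channel)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```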

frontend: a simple NextJS app which reads the API and presents a nice looking gauge to the user

Learnings

  • The trickiest part is actually affixing the Hall Sensor to the gauge. Good results are extremely sensitive to precisely where this chip is mounted, and there’s no easy way to lock it in place. I chose a hot glue gun method, but I might revise that to some kind of epoxy.
  • The 49E Hall Sensor has a very narrow sensitivity range: the highest value it ever reported was around 2.08V and the lowest around 1.3V. But that’s enough fidelity to translate it into a percent-full value between 0 and 100. Here, it’s good to remember not to let the perfect be the enemy of the good. I don’t actually need single-digit precision; in this use-case, accuracy to within about 5% is more than adequate.
  • I’m still determining whether or not the RPi is going to live outside. My preference is that it not live outside, and that only the 3-wire strand goes out to the gauge from the RPi. If it does have to live outside, it will be powered by Power Over Ethernet (POE) to keep cables to a minimum. I’ve purchased a POE splitter which will live in the external enclosure if so. This approach will allow just one cable to serve both power and data (ethernet.) But for weather-proofing reasons, I’d rather have the RPi mounted inside (in, say, a garage), and have a long-ish low voltage wire going out to the gauge. I’ve experimented with cable lengths and thus far can confirm that about 15 feet still works very well, even at the low voltages we are talking about. I don’t know if the voltage drop will be too significant beyond that length.

Conclusion

I’ll be adding a simple email notification for when the tank level drops below a certain threshold, and I’ll likely connect the sensor values to something like Azure IoT services to log the levels over time. That will provide nice usage graphs, and even help show if there’s a leak somewhere that needs to be addressed. I hope this technote is a useful starting point for your next RPi project!

Introducing Seattlebrief.com: Local Headlines, Updated Every 15 Minutes

I’ve built a news and commentary aggregator for Seattle. Pacific Northwest headlines, news and views, updated every 15 minutes.

Seattlebrief.com is now live in beta.

Its purpose is to let you quickly get the pulse of what Seattleites are writing and talking about. It rolls up headlines and commentary from more than twenty local sources from across the political spectrum.

It indexes headlines from places like Crosscut, The Urbanist, Geekwire, The Stranger, Post Alley, Publicola, MyNorthwest.com, City Council press releases, Mayor’s office press releases, Q13FOX, KUOW, KOMO, KING5, Seattle Times, and more. It’s also indexing podcasts and videocasts from the Pacific Northwest, at least those focused on civic, community and business issues in Seattle.

Seattle isn’t a monoculture. It’s a vibrant mix of many different people, many communities, neighborhoods, coalitions and voices. But there are also a lot of forces nudging people into filtered silos. I wanted to build a site which breaks away from that.

Day to day, I regularly hit a variety of news feeds and listen to a lot of different news sources. I wanted to make that much easier for myself and everyone in the city.

Seattlebrief.com is a grab-a-cup-of-coffee site. It is very intentionally designed for browsing, not search. Click on the story you’re interested in, and the article will open up in a side window. It focuses on newsfeeds that cover civic and municipal issues rather than sports, weather and traffic.

I’ll consider Seattlebrief.com a success if it saves you time catching up on local stories, or introduces you to more voices and perspectives in this great city.

How it works

There are so many interesting and important voices out there, from dedicated news organizations like The Seattle Times to more informal ones like neighborhood blogs. I wanted a quick way to get the pulse of what’s happening. Seattlebrief pulls from the RSS feeds of more than twenty local sites, from all sides of the political spectrum: news sites, neighborhood blogs, municipal government announcements, and activist organizations. The list will no doubt change over time.

Many blog sites and news organizations support Really Simple Syndication (RSS) to publish their latest articles for syndication elsewhere. For instance, you can find Post Alley’s RSS feed here. RSS is used to power Google News and podcast announcements, among other things.

RSS is a machine-readable feed that tells aggregation sites, “here are the recent stories,” usually including a photo thumbnail, author information, and a description. Seattlebrief reads these self-declared RSS feeds, currently from over 20 sources in and around Seattle, and regularly checks what’s new. Another job then fetches each page and “enriches” these articles with the social sharing metadata that is used to mark up a page for, say, sharing on Facebook or Twitter.

Think of it as a robot that simply visits a list of sites, fetches the “social sharing” info for each story, and puts the stories in chronological order by publication date for you.
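To make that pattern concrete, here’s an illustrative sketch of the fetch-and-enrich loop using the feedparser and BeautifulSoup libraries. It is not Seattlebrief’s actual code, and the feed URL is a placeholder:

```python
# Illustrative sketch of the fetch-and-enrich pattern described above (not Seattlebrief's
# actual code). Requires: pip install feedparser requests beautifulsoup4
import feedparser
import requests
from bs4 import BeautifulSoup

FEED_URLS = ["https://example.com/feed/"]  # placeholder; the real list has 20+ local sources

def fetch_and_enrich(feed_urls):
    articles = []
    for feed_url in feed_urls:
        feed = feedparser.parse(feed_url)  # step 1: read the site's self-declared RSS feed
        for entry in feed.entries:
            page = requests.get(entry.link, timeout=10)
            soup = BeautifulSoup(page.text, "html.parser")
            # step 2: "enrich" each article with the social-sharing (Open Graph) metadata
            og_image = soup.find("meta", property="og:image")
            og_desc = soup.find("meta", property="og:description")
            articles.append({
                "title": entry.get("title"),
                "link": entry.link,
                "published": entry.get("published", ""),
                "image": og_image["content"] if og_image else None,
                "description": og_desc["content"] if og_desc else entry.get("summary"),
            })
    # newest first; production code would parse dates rather than sort the raw strings
    return sorted(articles, key=lambda a: a["published"], reverse=True)
```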

Origin

Over at Post Alley, where I sometimes contribute, there was a writers’ room discussion about the Washington Post’s popular “Morning Mix” series. Morning Mix highlights interesting/viral stories around the web.

Sparked by that idea, I wanted to build a way to let me browse through the week’s Seattle-area headlines and commentary more easily. So I built Seattlebrief.

I’d welcome suggestions for any key sources I’ve missed. Right now, they must have an RSS feed. And regrettably, some important, thoughtful voices like KUOW decommissioned their RSS feeds long ago. I’m exploring what might be possible there.

Drop me a note.

I’d love it if you checked out Seattlebrief.com, and let me know your thoughts.

Washingtonians, How’s your State of Emergency Going?

Governor Jay Inslee of Washington was the first to declare a State of Emergency, and he may be the last one to rescind it. “Never let a crisis go to waste” has a corollary, and that is: “Preserve the crisis.”

It’s April 12th, 2022. Governor Jay Inslee has had State of Emergency powers for 772 days. He issued his emergency proclamation on February 29th, 2020, more than two years ago. He was the first governor in the nation to do so, making Washington State’s COVID emergency longer than any other.

Come Saturday, we will again be completely alone in the Pacific Northwest in our heightened condition. Oregon rescinded its emergency powers declaration on April 1st, 2022. Idaho’s Governor Brad Little is ending their State of Emergency this Friday. Governor Dunleavy ended Alaska’s State of Emergency a year ago. Montana ended its State of Emergency last June.

We are significant outliers not just in the Pacific Northwest, but when compared with the nation as a whole. According to the National Academy for State Health Policy, only Washington State and West Virginia remain in an indefinite State of Emergency. Thirty-seven states either had no state of emergency or those declarations have already expired (lightest green.) Eleven more are set to expire later this month (slightly darker green.) One is set to expire in May (Illinois), and one in June (California.) In West Virginia, the Governor signaled in March that he’d end the State of Emergency soon but hasn’t done so yet. Local West Virginia news suggests that the potential loss of federal health insurance money has likely driven him to delay.

Data from National Academy for State Health Policy, updated to reflect April 12th 2022

The question which matters most is: are we in an emergency?

No matter how you evaluate it, the resounding answer is no. Statewide, just 2.04 people per 100,000 are hospitalized with COVID, not necessarily due to COVID. In our entire state of 7,710,000 people, just 157 of us are hospitalized for any reason with a COVID-positive test, not necessarily because of COVID. Remember: most hospitals routinely test all patients upon admission in an abundance of caution. Therefore, patients hospitalized due to, say, highway accidents who then also test positive for mild, asymptomatic COVID will be counted in that 157 tally.

Even erroneously counting every single one of these hospitalizations as being caused by COVID, that’s a current hospitalization rate of 0.002% of our population. Are we in any imminent danger of “overwhelming” hospitals? No.

If this is an emergency, then everything is.

Inslee’s emergency declaration included eight “WHEREAS” justification statements. None of them seem timely or relevant at the moment. Better still, in terms of vaccination, we seem among the most prepared states for future waves, be they minor or major. According to the Washington State Department of Health dashboard, fully 81.5% of Washington’s population over 5 years old have received at least one vaccination dose, and 74% are fully vaccinated. Vaccination will be ongoing and at people’s discretion, with many starting to get their second booster shot. Thankfully, that groundwork for optional self-protection for those most at risk has been laid.

How’s hospitalization trending? I tried to get an accurate hospitalization trend chart from the State of Washington Department of Health dashboard, but was greeted by this:

In what kind of Emergency do we shut down reporting of detailed hospitalization trends?

If you look at deaths with COVID (again, not necessarily due to COVID), the folks at 91-DIVOC have a helpful chart. Does this say “emergency” to you?

Deaths with COVID-19, not necessarily due to COVID-19, out of 7.71 million Washingtonians

Even if you look just at case rates, there’s no need for alarm. Omicron is a milder variant, which results in lower severity of outcome:

COVID case rate, Washington State, per the State Department of Health

Why is this continuing?

The Governor said the quiet part out loud in his response to KOMO’s Keith Eldridge on Monday: “We want to make sure that federal money keeps coming, so it’s important to keep this in place right now.”

I’m sorry, Governor, but how is that not fraud? I know, that’s a pretty bold accusation, but let’s open the dictionary. Fraud is defined as “wrongful deception intended to result in financial gain.” This is clearly deception. There’s no COVID emergency currently in our state, and there hasn’t been for months.

Or, we can put aside the dictionary, and simply look to legalese. How does Washington State Law define an “emergency?”

The Revised Code of Washington (RCW) 38.52.010 states:

“Emergency or disaster” as used in all sections of this chapter except RCW 38.52.430 means an event or set of circumstances which: (i) Demands immediate action to preserve public health, protect life, protect public property, or to provide relief to any stricken community overtaken by such occurrences; or (ii) reaches such a dimension or degree of destructiveness as to warrant the governor proclaiming a state of emergency pursuant to RCW 43.06.010.

Do these conditions still exist?

Every Washingtonian knows that the conditions since 2020 have dramatically changed. We are no longer in lockdown. We are no longer required to wear masks. Kids have been back in school in every district in our state for months. County cases and hospitalizations are low and have been for months.

Bit by bit, the various mandates which were imposed during the State of Emergency are being rescinded. Inslee’s decided that driver’s license renewal and learner permit extensions can now expire (proclamation April 1st, 2022). He’s decided a mask mandate isn’t necessary; that ended March 12th, 2022. Even Seattle’s City Council let the long-lasting eviction moratorium imposed during COVID finally expire.

Adding to the absurdity, both Inslee and his Lt. Governor went on vacation recently. I have no problem with them taking time off, but how can one possibly take a vacation during a state of emergency?

Former White House Chief of Staff Rahm Emanuel had a famous quip: “Never let a crisis go to waste.” For Inslee, this appears to have a corollary: preserve the crisis.

Speculation about Musk’s Intentions with Twitter Goes Into Overdrive

Musk decides not to join Twitter’s board, which likely portends poison pill adoption by Twitter, and potentially a move toward a greater stake by Musk.

It’s been a busy weekend for Twitter’s CEO and board. Sunday afternoon, Twitter’s chief executive Parag Agrawal announced that Elon Musk would not be joining its board of directors. That morning, Musk had rejected the board’s offer of a seat, which would have required capping his stake at 14.9% and, no doubt, accepting other legal restrictions on what he can and cannot say publicly about Twitter.

A battle is shaping up for the Internet’s public square. Here’s a very brief timeline:

March 9, 2020

Silver Lake Partners takes $1 billion stake in Twitter

Private equity fund and activist investor Silver Lake Partners scoops up a significant chunk of shares and takes a board seat.

Nov 29, 2021

Jack Dorsey Steps Down

The co-founder of Twitter steps down as CEO. Long-time Twitter veteran and CTO Parag Agrawal takes the helm.

Mar 25, 2022

Musk tweets poll

After years of critique / mockery of Twitter policies, Musk posts a poll: “Free speech is essential to a functioning democracy. Do you believe Twitter rigorously adheres to this principle?” He followed this immediately with “The consequences of this poll will be important. Please vote accordingly.” 70% of those responding say “No.”

April 4, 2022

Musk becomes Twitter’s Biggest Shareholder

Over the ensuing days, Musk takes a 9.2% stake in Twitter, becoming its largest shareholder. The Washington Post notes that he may have run afoul of SEC reporting rules, which require public disclosure within 10 days of crossing the 5% ownership threshold. (It seems likely he will pay a fine for this.)

April 5, 2022

Twitter Announces Musk Joins Board

Twitter CEO Parag Agrawal announces that Musk has been offered a board seat, with terms that require Musk to keep his stake capped at 14.9%.

April 8-10 2022

Musk Muses Big Changes

Musk tweets several new ideas for the social media company, some controversial. One of the most intriguing: Musk suggests that anyone who pays the $3 per month Twitter Blue subscription fee should get a checkmark. A follow-up tweet clarified that it would be different from the blue badge, but would still signify that the account is authentic. When challenged in a comment to consider lower prices in other countries, Musk agreed, and suggested a proportional rate tied to local affordability and currency. (The tweet has since been deleted.)

April 10, 2022, morning

Musk Rejects Board Offer

April 10, 2022, Afternoon

Agrawal tells his team “Don’t Get Distracted”

April 11, 2022

Musk Amends SEC Filing

“From time to time, the Reporting Person [Elon Musk] may engage in discussions with the Board and/or members of the Issuer’s management team concerning, including, without limitation, potential business combinations and strategic alternatives, the business, operations, capital structure, governance, management, strategy of the Issuer and other matters concerning the Issuer.”

Reading the Tea Leaves

What’s really going on?

As Kara Swisher of the New York Times has noted, there’s much to be gleaned from translating some corporate speak:

There are a number of hidden codes:

fiduciary –> “Elon, don’t mock, or speak ill of the company publicly, you have obligations if you’re on the board which hem you in,”

background check –> “we always reserved the right to reject you based on potential SEC investigation and other things, so this is kind of mutual anyway,”

formal acceptance –> “you must agree in writing not to take more than a 14.9% stake, and to be liable if you tweet something defamatory,”

there will be distractions ahead –> “this ain’t over”

What happens next?

Musk is mercurial. He could decide he has better things to do and sell his stake.

But it seems much more likely to me that he will continue to increase his stake. If his goal were to make marginal improvements to Twitter, he would have been inclined to stick to their announced agreement and take the board seat.

He initially filed an SEC form saying he was planning to be a passive investor in the company, but amended it today (April 11th, 2022) to indicate he may be more active, and that he plans to keep criticizing the platform and demanding change; the amended language about “potential business combinations and strategic alternatives” is quoted in the timeline above.

The billionaire has been vocal about some of the changes he’d like Twitter to make. Over the weekend, he tossed out the idea that users who opt into the premium plan ($3/mo), Twitter Blue, should be given automatic verification and see no ads. This one step, devaluing the “blue checkmark,” would be a sea change in how Twitter is used today. He noted that Twitter’s top accounts are highly inactive, asking “Is Twitter dying?” He mused about turning Twitter’s San Francisco HQ into a homeless shelter, which invited a retort from Amazon’s Jeff Bezos. As Geekwire reported in May 2020, Amazon has already done so in part, quite successfully, in partnership with Mary’s Place.

Twitter’s board may very well adopt a poison-pill defense. But this isn’t a slam dunk; it needs majority board approval. Take a look at the existing composition of Twitter’s board; it’s no longer founders and insiders. Remember, this board said goodbye to Jack Dorsey, and rumor has it this was in part due to a sluggish stock price and activist shareholder discontent. Twitter’s eleven-member board consists of two “insiders” (Agrawal and Dorsey), an activist value-driven investor (Silver Lake Partners), and eight relatively independent board members with Silicon Valley and/or finance experience. Poison-pill adoption often depresses the value of a stock, and some board members might not be persuaded to do this. Several of Twitter’s board members are from the Silicon Valley braintrust and are unlikely to want to go head to head against Musk; some may very well fully agree with him.

Musk is nothing if not bold. He has risked substantial sums and bet boldly on multiple ventures in the past. He has stated that “free speech is essential to a functioning democracy,” and he has both internal and external incentives not to be seen as bested here.

My guess: he’s unlikely to just sit on the sidelines, as Twitter’s biggest but minority shareholder. He could well make a run for the company, though he may prudently wait for the next recession to do so.

And what happens to Twitter’s employee base, and its policies, during this tumultuous time? It may cause some employees to see the writing on the wall and depart. Or it might cause others to double down on a heavy hand. It could be a very interesting few months indeed.

Does Elon Musk like to play it safe? Or lose? What does his track record suggest?