We Need to Talk About The Crisis in Young American Men

There’s a conversation we’re not having. How are the young men doing in your life?

My wife and I are parents of two sons and a daughter, who are now in their teens and twenties. As we come out of COVID isolation and reconnect face-to-face with other parents, I can’t help but notice a theme. A surprisingly high number of parents of boys describe young men who are — for lack of a better term — struggling to chart their next step. Yet at the same time, most parents of daughters do not generally echo this concern.

There’s something going on with young American males in their “life’s launchpad” years, and few Americans wish to speak about it.

These young men attended the same schools as their sisters. They have loving, intelligent, driven and attentive parents. We’ve known many of these kids from a young age. And now, twenty years on, so many of the boys are opting out. They’re searching. Some are still in an extended gap year. Three have dropped out of college. Those who aren’t in college don’t yet have full-time jobs. They’re not apprenticing. In many cases these boys are living at home, or in apartments funded by mom and dad. In short, as they now hit their twenties — what Dr. Meg Jay calls the “Defining Decade” — they haven’t found their trail.

The daughters, by and large, are on a more determined path. They’re engaged at college. They’ve landed summer jobs. They have outlets and direction.

This is a very small sample size, and it could just be a unique snapshot. So try this. Know a parent whose kids were born around 2000-2005, who are now high school grads? Ask those with adolescent boys for an honest assessment of how their son’s friends and peers are doing. Have they found purpose? Are they on the taxiway ready for liftoff? Then ask parents of adolescent daughters how their kids’ friends are doing. Odds are, you’ll get a very different picture.

Zooming out, the data from Pew, the Census Bureau, the CDC and more tell a pretty consistent story: America’s young men are in a crisis, and it is worsening.

Boys’ high school graduation rates are 6% lower than girls’, and in some states 15% lower. Boys are twice as likely to have a substance use disorder. Boys are more than twice as likely to be diagnosed with attention-deficit/hyperactivity disorder, according to the CDC, which likely says something both about boys and about our tendency to intervene, medicalize and label otherwise normal-spectrum behavior a “disorder.” Boys are five times as likely as girls to spend time in juvenile detention. And while females are more likely to exhibit suicidal ideation, American males are 3.5 times more likely than American females to actually commit suicide.

Across the country, college women now constitute 60% of the student body, and are also now clearly outpacing men in graduation:

Chart: Women in the U.S. are outpacing men in college graduation. Source: Pew Research, “Why the gap between women and men in college graduation?”

Pew’s survey reveals that a significant driver is personal choice: “Roughly a third (34%) of men without a bachelor’s degree say a major reason they didn’t complete college is that they just didn’t want to. Only one-in-four women say the same.” Younger respondents are also far less likely (33%) than older Americans (45%) to say that their college experience was “extremely useful” in helping them develop skills and knowledge they could use in the workplace.

Male labor force participation rates have been in steady decline since the 1980s, while women’s have been on the rise. In 1960, 93% of men aged 25-34 were in the labor force; by 2021, that figure had fallen to 68%.

Not only are the snapshot statistics worrisome, but nearly every one is trending in the wrong direction. The consequences for America are grave. NYU Professor Scott Galloway puts it starkly: America is producing too many of “the most dangerous person in the world: a broken and alone man.”

This isn’t just bad for men. 78% of American females report that a steady job is very important to them in selecting a spouse or partner. And fully half report that their mate must have as much or more education than they do.

And yet, to sound the alarm — or worse, attempt to diagnose or work through whether any of this has cultural, sociological or pedagogical roots — brings out the pitchforks and vitriol.

Andrew Yang wrote a piece about this in the Washington Post in February, “The Boys Are Not Alright.” The most upvoted comment, from user IrisClover, is typical of the rejoinder: “I love the chorus of dudes proclaiming that male failure to thrive (i.e., to be superior) is all about an unfair system or current social conditions. Rampant violence against women, poverty, unfair pay, females doing nearly all unpaid labor, unequal representation for centuries…oops, they didn’t notice that.” Whew. Well, that conversation went well, didn’t it?

Far too many have conflated discussing the clear crisis in young men with pushback against the many wonderful advances for women, or a refusal to celebrate all that’s been achieved.

But it should be possible to discuss the worrying trends without setting off bad-faith conversation about “what this really means.” Perhaps it means that the boys are in crisis, and we need to diagnose why. Perhaps it means that after three decades of focusing on female empowerment, we also need to ask, do men still have enough heroes, role models, and realistic goals they’re motivated to achieve? Perhaps an “oh, they’ll be fine” response isn’t quite hitting the mark.

We need to bring this conversation to the fore, and engage in it maturely, eggshells and all. It affects every single one of us. Somehow, through a combination of culture, parenting, schooling and more, we are exacerbating very bad trendlines for the future. Perhaps we should actually discuss it cogently, with data, and brainstorm some ways out of it.

After two decades of cultural emphasis on female empowerment, we need to add back to the conversation why and how tens of millions of young American males are opting out, standing on the sidelines, or otherwise falling through the cracks.

New HHS Reporting Guidelines Risk Stoking Greater Worry About Pediatric COVID

The Department of Health and Human Services is dropping the requirement that hospitals report daily COVID deaths, yet adding a raft of pediatric metrics that will make the pediatric problem look much larger.

On January 6th, 2022, the Department of Health and Human Services (HHS) made significant changes to the guidelines by which hospitals report COVID-related statistics. New fields were added, and several fields were dropped. HHS mandates compliance with these changes by February 2nd. But these changes don’t seem benign to me — they risk misinterpretation and overstatement of the level of worry we should have about serious COVID in America’s youngest. And two full years into the pandemic, HHS missed yet another easy, obvious opportunity to help us better distinguish between worrisome and non-worrisome cases.

HHS has required that hospitals report key indicator data for a long time. They make adjustments to what is required from time to time. That, in and of itself, is not new. What is new is that they are now requiring several additional fields related to pediatric patient statistics that in aggregate risk the creation of alarming new headlines in a month or two, specifically about pediatric COVID. At the same time, HHS is also dropping the requirement that America’s hospitals report daily COVID-19 deaths. Yes, you read that right; it’s a very surprising directive; we’ll get to that in a moment.

The net result of these changes is that we will likely soon see media reports along the lines of “Pediatric COVID Hospitalizations Rise Alarmingly” and “Pediatric ICU Beds Near Capacity,” when the underlying truth is that worrisome COVID among our youngest has been, and remains, extremely low — at a level comparable to our worry about seasonal influenza.

What new data does HHS now want? The new fields are highlighted in blue in the screenshot below [source: COVID-19 Guidance for Hospital Reporting and FAQs (hhs.gov)].

Screenshot: COVID-19 Guidance for Hospital Reporting and FAQs (hhs.gov)

Needless to say, the downstream impact of overstating the severity of pediatric cases can be very significant. It “bolsters” the case for unnecessarily prolonged mandates, school and college restrictions, elimination of activities, mandated boosters, masking, and yanking yet more childhood away from those who have always been least at risk of severe outcomes. Kids have inarguably already paid an enormous price in childhood experience and mental health in the name of COVID mitigation, and it’s important that we get the most accurate picture of risks to them to chart the path of least overall harm. This reporting change, and the downstream misinterpretations it invites, are quite likely to prolong that harm.

The risk to 0-18 year-olds of severe outcome due to COVID is exceedingly low. At this writing, of the more than 850,000 Americans who have died with or from COVID, the number of 0-18 year-olds who have died with or from COVID stands at fewer than 900. Kids are at greater risk of dying from suicide and vehicular accidents than COVID. They’re not at zero risk, but their infection fatality rate due to COVID is below 0.003%. Every day, when we drive our kids on the highway, we’ve decided that the risk/reward tradeoff is worth it, and that while highway danger is present, it’s not so alarming as to keep everyone home forever.

Overall, risk of fatality due to COVID is lowest for our youngest. Luckily, we as a society now generally understand that. But we still greatly overestimate just how age-weighted COVID is.

Chart: Number of COVID-19 deaths in the US as of January 12, 2022, by age (Statista.com). Keep in mind that this is death WITH COVID, not necessarily DUE TO COVID.

The New York Times visualized CDC mortality risk data this way, back in May of 2021:

But now, because hospitals are required to report pediatric ICU beds and the number of pediatric patients in the ICU with COVID, the data will tend to make for easy, alarming click-bait headlines.

How so? Well, let’s say you’re operating a medium-sized hospital. More than a year ago, your hospital wisely instituted a policy during this infectious pandemic: all admitted patients are to be tested for COVID, regardless of whether they show symptoms.

That’s precisely the policy my own local hospital has instituted (see the purple highlighted text); I encourage you to take a moment to check your own:

Screenshot: UW Hospital tests all admitted patients.

That’s a wise policy; I have no problem with that.

But here’s how that policy, plus the new metrics above, adds fog. Let’s say you have 5 ICU beds dedicated to your pediatric wing. A recent icy weekend has caused four of those beds to be occupied: two by kids with serious broken bones, one by the victim of a grave car accident, and one by a child with appendicitis. Note that zero of them presented to the hospital because of COVID, but upon mandatory routine testing, it was found that all four are asymptomatically COVID-positive, because there’s an outbreak in your region at the moment.

Your accurate reporting to HHS under these new guidelines will be that 80% of your pediatric ICU beds are taken up by COVID patients. Now sum that same scenario across thousands of hospitals around the country, and render it on a nice-looking dashboard. Journalists will read the dashboard and rush to their media feeds with semi-accurate but extremely misleading headlines: “80% of Pediatric Beds Taken Up By COVID Patients,” or “Alarming Rise in Serious COVID Cases in Young Adults.” Animated infographics will show pediatric COVID on a sharp rise, not least because hospitals weren’t required to break out this data before. Scary infographics will course through social media, and be trotted out by the largest teachers unions and other “let’s close the schools” advocates.
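
To make the arithmetic concrete, here is a minimal sketch in Python of the hypothetical above. All numbers are invented to match the scenario; the point is that the new fields capture occupancy and positivity, but not the reason for admission:

```python
# Hypothetical hospital from the scenario above -- every number is illustrative.
pediatric_icu_beds = 5

# Why each child was actually admitted, plus their mandatory-screening result:
admissions = [
    {"reason": "broken bones", "covid_positive": True},   # incidental positive
    {"reason": "broken bones", "covid_positive": True},   # incidental positive
    {"reason": "car accident", "covid_positive": True},   # incidental positive
    {"reason": "appendicitis", "covid_positive": True},   # incidental positive
    # fifth bed empty
]

# What the new HHS fields capture: beds, occupancy, and COVID-positivity,
# with no field for WHY the patient presented.
covid_positive_in_icu = sum(1 for a in admissions if a["covid_positive"])
reported_share = covid_positive_in_icu / pediatric_icu_beds

# What a "because of COVID" checkbox would capture instead:
admitted_for_covid = sum(1 for a in admissions if a["reason"] == "covid")
causal_share = admitted_for_covid / pediatric_icu_beds

print(f"Reported: {reported_share:.0%} of pediatric ICU beds 'taken up by COVID'")  # 80%
print(f"Reality:  {causal_share:.0%} of beds occupied because of COVID")            # 0%
```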

Most maddeningly, HHS is entirely passing up an easy opportunity to actually differentiate between worrisome cases and non-worrisome cases. It’s as simple as this: Did the patient present to the hospital because of COVID-like symptoms, or not? Doctor, nurse or even patient — check a box. Then, allow the slicing and dicing based upon that key variable. Yet here we are, two years into this pandemic, and HHS is not even attempting to ask hospitals to start reporting whether patients are in the hospital because of COVID, or whether they incidentally tested positive.

It’s not like this is some kind of secret revelation, unknown to officials. People like me have been pointing out the “with vs. from” distinction for more than a year. And finally, even Dr. Fauci admitted in late December 2021 that these two conditions, incidental vs. causal, are quite different, explaining it as though it were a new concept:

It’s not just what HHS is adding, it’s what reporting requirements they are dropping. I want to draw your attention to these five in particular:

HHS is removing the requirement that hospitals report the prior day’s deaths with COVID-19. Daily deaths will still flow through to the CDC via a separate process, but removing daily death counts will make it much harder to separate worrisome cases from non-worrisome cases, because we won’t be able to see infection-fatality rates for same-hospital data. We also won’t be able to know, for instance, that zero patients are on ventilators — another now-dropped indicator of severity.

Note too that HHS is dropping the requirement that hospitals disclose anything about staffing shortages — both the quantity and the explanatory fields. It cannot go unnoticed that a substantial cause of many staffing shortages (NPR) at hospitals and elsewhere has been vaccine mandates imposed on staff. More here (Business Insider) and here (WSJ).

So, why these changes? As always, I must ask — who benefits by obscuring this picture? I have some hypotheses. But I’ll leave that exercise for now, to the reader.

Inflation Shoots to 40-year High

This is new for a whole lot of Americans. The last time inflation was this high, you hadn’t ever heard of the term “e-mail.”

The Department of Labor announced this morning that the Consumer Price Index for All Urban Consumers (CPI-U) increased 6.8% before seasonal adjustment for the 12-month period ending in November 2021. That’s a little more than twice the annual average of the past 15 years. This condition is new for about a hundred million working Americans. Consider that the last time America’s annual inflation was this high, you hadn’t ever heard the term “e-mail.” Millions of American households are under pressure, and therefore so are Democrats just as 2022 midterm campaigning is set to begin.

The CPI-U is an index that attempts to estimate the cost of a basket of goods and services approximating the spending of metropolitan-area consumers, who collectively represent about 92% of all Americans.
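
As a toy illustration of how a basket index works (grossly simplified: the BLS uses hundreds of item categories and survey-derived expenditure weights), year-over-year inflation is essentially a weighted average of price changes. The weights and price changes below are invented for illustration only:

```python
# Toy CPI-style basket: (expenditure weight, year-over-year price change).
# Both columns are invented for illustration; they are not BLS figures.
basket = {
    "shelter":         (0.33, 0.038),
    "food":            (0.14, 0.061),
    "energy":          (0.07, 0.300),
    "transportation":  (0.16, 0.120),
    "everything else": (0.30, 0.035),
}

# Weights must sum to 1 for a weighted average to make sense.
assert abs(sum(w for w, _ in basket.values()) - 1.0) < 1e-9

inflation = sum(weight * change for weight, change in basket.values())
print(f"Weighted year-over-year inflation for this basket: {inflation:.1%}")  # ~7.3%
```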

On an unadjusted basis, the line items that soared most dramatically over the period were fossil-fuel energy (both gasoline and natural gas), vehicles (particularly used cars and trucks), and food (especially meat).

Making spring, summer and fall headlines, gasoline prices have shot up 58.1% over the last twelve months, the largest increase since 1980.

On the positive front, increases in rent and shelter were relatively modest, and medical care commodities and services thankfully rose very slowly.

Washington State and the West

The Department of Labor provides a Western regional breakout, which includes Alaska, Arizona, California, Guam, Hawaii, Idaho, Nevada, Oregon and Washington. Our region’s inflation was just under the national average; prices are up 6.5% from one year ago. Gasoline prices were the West’s single biggest contributor to escalating prices, and all items less food and energy rose 4.8% over the year. Three of the biggest drivers of inflation in the Western US were meat (+15.4%), natural gas (+21.6%) and transportation (+19.7%).

At the heart of several of these line items are carbon energy prices. Taken as a whole, energy prices rose 30.0% over the past 12 months, the largest 12-month increase since September 2005. Gasoline prices rose 49.6% in the West, and now sit at their highest level since September 2014.

Chart 1. Over-the-year percent change in CPI-U, West Region, November 2018-November 2021

American goods aren’t just more expensive for Americans, but for the rest of the world, too. In late November, the Bureau of Labor Statistics reported that prices for U.S. exports rose 18% for the twelve months ending October 2021, the largest increase since it began publishing the index in 1984. Such price hikes may reduce consumption of American goods in the future, as these record-high spikes will surely give consuming nations a stronger incentive to produce and buy their own goods where they can.

This is new territory for a lot of Americans. True, we’re not yet in an era of double-digit increases like the late 1970s, but roughly 7% is closer to double-digit inflation than it is to the recent 2%-3% range. You have to go back nearly four decades to see these rates, which means that most working Americans have never seen prices rise this fast, and that brings with it huge political ramifications.

Chart via the Wall Street Journal, December 10, 2021

Even if you received a 5% pay raise this year, your daily budget lost ground (5% – 6.8% = -1.8% change in real earning power), assuming the basket of goods and services you buy is roughly the one that underpins the Consumer Price Index.
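
The subtraction in parentheses is the quick back-of-envelope version; the exact calculation divides nominal wage growth by price growth. A quick sketch:

```python
raise_pct = 0.05   # your nominal pay raise
inflation = 0.068  # CPI-U, 12 months ending November 2021

# Back-of-envelope: subtract inflation from the raise.
approx_real_change = raise_pct - inflation                 # -1.8%

# Exact: deflate the new nominal wage by the new price level.
exact_real_change = (1 + raise_pct) / (1 + inflation) - 1  # about -1.7%

print(f"Approximate change in real earning power: {approx_real_change:+.1%}")
print(f"Exact change in real earning power:       {exact_real_change:+.1%}")
```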

Inflation can be thought of as a particularly pernicious form of regressive taxation, hitting low-income households hardest of all. High-income consumers who could waltz through a Whole Foods unbothered about prices in 2019 won’t feel much impact from nearly 7% inflation. But if you’re watching every penny, this hits very, very hard.

Inflation is therefore very often political poison to the party in charge. NPR has the receipts. After Republicans lost the White House and got wiped out in Congress during the mid-’70s Watergate era, they were reduced to barely one-third of the seats in both houses of Congress. Despite this, President Carter, saddled with a combination of the bad luck of new-found OPEC cartel muscle and some poor messaging and decision-making that exacerbated inflation, saw inflation rise from 6.5% in 1977 to nearly 15% during his re-election run of 1980. With prices soaring, the Fed sharply hiked interest rates at the worst time for him, bringing on a major recession just in time for the election. And Reagan won in a landslide.

Today, OPEC is less powerful, and not the primary cause of our woes. What’s behind these sharp increases? They’re more likely the result of a perfect storm:

  • Massive injections of federal spending in two waves (first during the Trump administration and again at the start of the Biden administration)
  • Monetary easing by the Federal Reserve, lowering interest rates and making borrowing cheaper
  • Executive office decision-making regarding energy, such as halting gas pipelines, and placing limits on exploration, which reduced the pace of American energy production
  • Supply-chain woes, exacerbated by a variety of factors (high demand, clogged ports, lack of truck chassis, new regulations and limits placed on offloading)
  • Increased pricing power by large corporations, as small businesses struggle relatively more during the pandemic
  • Increased consumer demand

The Fed has two primary legislated mandates: to promote price stability and full employment. When the pandemic hit, correctly predicting major disruption to economic activity, the Federal Reserve opened the floodgates.

How? The Fed’s primary tool is adjusting interest rates. The Fed sets both the rate at which banks can borrow amongst themselves (the “fed funds rate”) and the rate at which banks can borrow directly from the central bank of the United States (the “discount rate”). By setting these two interest rates, the Fed can make money “cheaper” or “dearer” to borrow. A rock-bottom interest rate allows corporate and individual borrowers to invest and acquire on the cheap. A higher effective interest rate tends to put the brakes on borrowing and overheated economic activity.

When the pandemic hit in the first quarter of 2020, the Fed dropped the short term interest rate to near zero. The rationale was that businesses and individuals would need capital to stay afloat, pay workers, and make it through hibernation.

Why doesn’t the Fed always keep interest rates this low? Three of the big negative consequences to doing so are:

  • Low rates hurt fixed-income investors looking for low-risk investments, particularly seniors. (When was the last time you bought a Certificate of Deposit? CapitalOne offers a 5-year CD at 1% per year. Any takers?)
  • They can overheat an economy, leading to inflation.
  • They can encourage over-leverage and lead to bubbles which pop.

Warren Buffett once spoke to my section at business school, and a classmate of mine asked him about the leveraged buyout (LBO) craze of the early 90’s. Buffett responded: “Debt can be good, and debt can be bad. I like to think about debt akin to the following proposition: I’ll pay you $20,000 to mount a sharp dagger on your car’s steering wheel. If you think you’ll only be driving on smooth roads, you’ll take that bet. But if you don’t…”

By the way, LBOs still exist; they’re simply called “private equity” now. Private equity firms amass capital on the cheap, buy out companies, ramp up debt, and hope to pay for their investment several times over through better operations and higher exit multiples.

And even through the pandemic, these deals continue to motor along: “Private Equity is Smashing Records with Multi-Billion M&A Deals“, Bloomberg, September 2021.

Value of Private Equity Deals in the United States from 2011 to 2020
Source: Statista

Meanwhile, Congress authorizes spending, and it has passed several measures during the pandemic, from the first stimulus during the Trump administration to the bipartisan infrastructure deal signed into law on November 15th, 2021.

Where to from here?

It compounds problems for Dems; high inflation is political poison for the party in office. Not only is a weakened household budget highly unpopular (duh!), but high inflation will accentuate a major philosophical divide between the party’s two major factions. Progressives and moderate Democrats have very different economic films playing in their minds about how economies work or do not work. The progressive wing favors a path of more government largesse for struggling families — yet more spending on top of what’s already promised. They will also argue vehemently for continuing eviction bans and imposing rent control in many cities. At the same time, moderates like Senators Joe Manchin (WV) and Kyrsten Sinema (AZ) will interpret it as a clear signal that additional multi-trillion-dollar federal spending is highly unwise, especially when we haven’t even fully spent the prior stimulus funds. Meanwhile, high gas and grocery prices are catnip for a sniping GOP; the attack ads write themselves with every fill-up.

The pressure is on the Fed to raise rates, which will make borrowing more expensive and can burst stock-market bubbles. At a minimum, it will likely put the brakes on large-capital projects like factory-building, private equity M&A, home-buying and construction. In non-pandemic times, the Federal Reserve’s typical reaction to the prospect of inflation would have been to raise interest rates by now. Hiking interest rates makes money “more expensive” to borrow, thus tapping the brakes on the economy. But thus far, the Fed has done nothing of the kind. Interest rates will rise soon, however — in September 2021, the Fed signaled it could hike rates “six or seven times by the end of 2024,” and it’s expected to speed up the end of bond-buying (the “tapering” of its quantitative easing).

Passage of “Build Back Better” is now more at risk than ever. A solid argument can be mustered that we haven’t yet spent all the money from the two existing stimuli, and that the last thing we need is yet more borrowing and federal largesse. When your constituents are feeling price hikes everywhere, it’ll be easy for the opposition to make the case that trillions more in spending shouldn’t happen. Inflation also erodes the political capital of the president, who repeatedly promised he’d “shut down the virus, not the economy.”

Momentum will grow for a return to normalcy. High inflation during a pandemic places greater pressure on governors and the executive branch to justify the economic impacts of intervention. But it’s fraught with conflict, because we don’t have a shared vision of how best to respond. While Reagan made hay of the “stagflation” of the Carter era to win in a landslide in 1980, we should recall that the enemies then were mostly outside our borders: OPEC and the Iran hostage crisis were unifying for Americans. Today, we face inflation with wildly varying strategies and nothing close to national consensus or unity. The GOP is fairly unified that a highly interventionist stance toward COVID should end, but within the Democratic Party, there are wildly varying views on how best to respond.

This is likely to make a divided society even more divided. Unlike the 70’s, when OPEC and Iran presented us with easy, unifying foes, people are arriving at different diagnoses of how we got here, and which dragon needs slaying.

Today, major institutions like governors, Congress, teachers unions, and the press are viewing the pandemic through very different lenses. It is harder than ever to reach a national consensus on how best to respond. We were told for months that inflation was transitory, but are now told that it doesn’t really matter what you call it. We are even told that inflation is somehow good. Senator Elizabeth Warren casts inflation primarily as an oligopoly problem (looking at you, Big Chicken) and clamors for trillions more in federal subsidy to Americans. Other leaders in her own party think a strategic pause in stimulus spending is in order. And the Republicans, out of power, have it easy: they need only hawk t-shirts festooned with a fuel pump and “Let’s Go, Brandon.”

I think it’s time to recognize the connection between our responses to the pandemic and the economic impact on millions of Americans. Public policymaking is about resource allocation and minimizing harm. We have been intensely focused on one type of harm, but not broad-eyed enough about the others. Let’s face it: COVID-zero is not going to happen. The virus and its variants are now endemic. I’m not saying we need to throw caution to the wind — sensible measures are still in order — but we need a much better sense of how elevating fear, continually teasing more remote schooling, and imposing more labor-pool-restricting mandates have major impacts on our society.

We now — mercifully, miraculously — have vaccines and antivirals, and we can and should protect the vulnerable. But society itself should seek to return to normalcy.

Putting aside vaccination, which clearly helps reduce the worst outcomes (hospitalization and mortality), it’s actually rather difficult to muster a case that the interventions we’ve taken have been incredibly effective and linked to reductions in per-capita hospitalization or mortality, when you compare intervention regimes across the states. Democrats, in particular, seem very unwilling to let that reality sink in, and if they don’t change course soon, they will likely see what happened in Virginia last month repeated on a national scale come November 2022.

Public policy is about resource allocation and tradeoffs. Doing more of A often means you get less of B. In a pandemic, resource allocation is ideally driven by a framework (yes, a very subjective one) for reducing overall expected harm. At the beginning of the pandemic, we had a massively wide “beta” on what the expected harm might be. Two years on, with vaccines and antivirals and knowledge of the relatively weak efficacy of non-pharmaceutical interventions, we should include in the calculus the other harms we are imposing.

In the state of Florida, we have Governor Ron DeSantis pushing back on vaccine and mask mandates. Yet in New York, we have Governor Hochul issuing statewide edicts that halt all elective surgery. Those of us who believe that we are reaching — or have already reached — the endemic stage of COVID favor a quicker return to normalcy, or at least clear off-ramps for when we will be able to do so. Businesses and schools plan on long lead times. It’s important that the metrics be stated, and there’s no excuse for them not to be. Yet the public and media don’t even seem to be demanding off-ramp metrics.

Thus far, we don’t even have the off-ramps, and there’s a huge psychological impact to that. If you’re a parent of a K-12 student in a blue state, you know intimately what I’m talking about. It’s time for us to get clear on when we can come out of the crouch. Some of us believe that the non-pharmaceutical interventions (NPIs) we’ve imposed, which include remote schooling, lockdowns, supply chain headaches, workforce reductions, forced masking for K-12’ers, concomitant mental health crises, lack of childcare, and more, are now imposing far more collective harm than we can justify. There is little clear connection between these interventions and reduced hospitalization or death. What little benefit the evidence does show might have justified them in a period of so much fear, but we are now a bit smarter; we should update our priors, and think more broadly about total societal harm and ways to reduce it.

The Democrats’ best hope is that some leader emerges to make such a case before the November fate is sealed. But they will have a heavy lift. They’ll have to bring most of the party, and the media, to that entirely new mindset, or at least remind them that a return to normalcy is a big part of what they actually ran on last year, before a surprise sweep of the White House and both houses of Congress via the Georgia Senate runoffs filled their eyes with visions of sugarplums.

Children: Vectors of COVID Spread?

A new study from the UK suggests that children are not, in fact, significant vectors of COVID risk.

How much would you say that having one or more young, unvaccinated children (0-11 year olds) in the household increases the household’s risk for COVID?

Well, luckily we have some empirical data on that.

A massive BMJ study of 12 million people in the UK found that having a child aged 0-11 in the household increases the household’s COVID risk by a whopping 0.01%-0.05%. The numbers are similar for those living with 12-18 year-olds.

Not only were increases in COVID very small, but this did not translate into materially increased risk of COVID-19 mortality.

Can we perhaps stop treating children primarily as vectors of spread? When will their social, educational and emotional needs matter enough to be paramount again?

Are We Overstating COVID Positives?

Evidence is growing that we may be overstating COVID positives, lumping in those whose immune system has already defeated the virus.

Here’s what’s been on my mind for the past several weeks: Are our COVID tests too sensitive? Have they been too sensitive all along? Are we overcounting positives, lumping in defeated COVID fragments which are not dangerous with those full viruses of infectious patients? Are we throwing away or deliberately obscuring some of the most useful signal data we have when we simply report “positive” or “negative?”

I don’t think we are doing enough to distinguish between “worrisome” positives and “less concerning” positives. I think in several years’ time, we will look back on this period and say we made a major mistake in the way we lumped them all together, when it was possible not to do so.

There is considerable evidence that in the “Positive” case counts, we are also counting those who pose no infectious spread risk and in fact are successfully processing or have already processed the virus, which helps toward herd immunity. From a public policy standpoint, given that so many of our “reopen” plans hinge upon “case counts must be below X%”, it is vital that we do as much as we possibly can to separate “worrisome” positives from “non-worrisome” positives.

And, though less urgent: Are our tests and thresholds even consistent between nations? Between public health departments within the US? Between college campuses? We should seek better standardization, so that counts can be better compared.

Let’s Dive Into “Positives”

When we look at “positive test cases”, we first need to understand what that number means.

First, does every positive case represent someone who is contagious or poses substantial risk of spread?

No. And evidence is growing that in many cases, perhaps even the majority, a positive test does not mean the person is infectious. That’s because “positive” counts also include those whose immune system is functioning well, who have low viral load, and even those who carry inactive viral fragments and have “beaten” the virus, with or without their knowledge. Positives will, of course, also catch infectious people, as well as those who have low viral loads because they are in the beginning stages of infection.

It’s remarkable we are able to see into the molecular level and do this. PCR technology is incredible.

But each of these positives represents a very different circumstance, and I would argue they deserve different public health responses. The ambiguity in a “positive” arises from the technology we use, which detects fragments or markers of the virus, not necessarily an entire active virus. It will pick up fragments of a defeated virus, even extremely rare and tiny ones, if something called the “Cycle Threshold” is high enough.

Some scientists are now suggesting ways to better distinguish between these very different types of positives (explained below).

But importantly, for what must be various public policy reasons plus simple inertia, we appear unwilling to embrace or even seriously explore these ideas with the urgency I think they deserve.

At a minimum, I feel it’s vital that we know and report hospitalization counts along with positives, but we also need to know something called the Cycle Threshold of viral detection, as well as both the aggregate and individual trends in that number.

Impact

Given the importance we place on positive test counts now that we’ve moved from “flattening the curve so hospitals aren’t overloaded” to today’s I-don’t-really-know-what-but-sort-of “no positive test growth must be observed,” I’m surprised the issue of what constitutes a “positive” isn’t getting much more attention. Positive test rates now drive everything from school and campus closures to mandatory individual quarantine and isolation to business and retail closures, cancellations and more.

We label a large number of positive test cases an “outbreak.” But in reality, a cluster of positives might not represent an “outbreak” at all; it might in fact represent a cluster of, say, young people who have each successfully defeated the virus. A cluster of positives is reason for caution, for dynamic preventative action until more is known (and it generally can be known within days or weeks), and for deeper investigation.

Shouldn’t we get this number right, and work hard to disambiguate between worrisome and not-yet-worrisome cases? Can adjustments be made to our testing protocols to better distinguish those at the early end of infection from those at the latter tail? I think yes; see “Recommendations” below.

What percentage of us actually knows what “positive” specifically means?

Shouldn’t we be reporting some kind of broken-out number that clearly distinguishes those with a viral load strong enough to be concerning from those with a minuscule, possibly partial, viral load? Shouldn’t we be establishing new protocols for those who come in with high viral-load signals versus low viral-load signals (e.g., re-testing the latter soon thereafter to detect directionality: whether they’re worsening, stable or improving)?

By boiling down our reporting into a simple binary, we are losing a lot of valuable information, and causing a lot of unnecessary harm and restriction in our public response.

Background To Know

The main test technology we use in the US, polymerase chain reaction (PCR), is an exceedingly powerful technology. It works by duplicating the genetic material in a sample to the point where a specific genetic sequence is detected. Samples are put in a machine. Reagents are added, and multiple passes, or “cycles” are run, doubling the molecules each time. Ct is the “cycle threshold”, the number of amplification cycles needed to detect a particular genetic strand.

Note that PCR machines aren’t generally trying to detect the entire, active virus, but rather fragments. In the case of COVID-19, it’s key RNA markers. They may or may not be parts of a complete viral protein.

If the virus is detected after relatively few cycles, a high viral load is likely present in the sample. The more cycles (higher Ct) the test runs, the more likely it is to detect a small or even minuscule viral load, such as fragments of the virus that are not viable or contagious.

The cycle threshold (referred to as the Ct value) is the number of amplification cycles required for a fluorescent signal to cross a certain threshold. This allows very small samples of RNA to be amplified and detected. Each pass represents an approximate doubling of molecules in the sample – so there is a huge difference even between Ct 30 and 31.

A very important thing to know is that a person who has RECOVERED, with or without their knowledge, will still have a small amount of viral debris and fragments in their body. From what I’ve read, evidence is emerging that 30 cycles (Ct = 30) might not tend to detect those tinier, lower-viral-load fragments of the already-recovered or recovering, but Ct ≥ ~40 probably would, because amplification at the high end is enormous.

A Harvard University epidemiologist recently suggested that positives produced with more than 30 cycles are unlikely to represent infectious patients, and some studies (linked below) are confirming that conclusion. But this is controversial, sitting at the intersection of longstanding public policy practice, agency and manufacturer recommendations, and more. A leader of Minnesota’s public health lab said there is “no convincing proof” and that her lab is confident in the federally required cycle threshold (Ct) of 38 for its COVID-19 test. Of course, the difference between 38 and 30 cycles is eight doublings — an enormous degree of sensitivity difference.

The WHO’s guidelines appear to be 45 cycles (Ct = 45!).

The FDA appears to recommend Ct values of 40, as of late July 2020 (pages 34-35).
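
Because each cycle roughly doubles the genetic material, differences in Ct compound exponentially. A quick sketch of the fold differences implied by the thresholds above (assuming ideal doubling, which real assays only approximate):

```python
def fold_difference(ct_low: int, ct_high: int) -> int:
    """Approximate sensitivity ratio between two cycle thresholds,
    assuming each PCR cycle doubles the genetic material in the sample."""
    return 2 ** (ct_high - ct_low)

print(fold_difference(30, 31))  # 2x      -- even a single cycle matters
print(fold_difference(30, 38))  # 256x    -- the federally required 38 vs. the proposed 30
print(fold_difference(30, 40))  # 1024x   -- the FDA's ~40
print(fold_difference(30, 45))  # 32768x  -- the WHO's 45
```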

Worse, different manufacturers have varying recommendations on the “right” number of cycles at which to run their machines. A growing number of researchers recommend a Ct value closer to 30, arguing that fragments picked up beyond 30 cycles tend NOT to come from a contagious patient (studies linked below), but rather from a person carrying too few fragments to be a public health concern. Is it time to change the guidelines for these tests?

The FDA apparently has regulations or guidelines which prevent the Ct value from being reported by labs: only “positive” or “negative” may be reported. Why are we deliberately throwing away this potentially useful information? I am still trying to find a rational reason why we deliberately map a fairly big continuum onto a simple 1 or 0. That’s like taking a beautiful photograph and downsampling and grey-scaling it to a 16x16px black-and-white icon: you can never get that information back once it’s thrown away.

Related Evidence and Readings

Predicting Infectious SARS-CoV-2 from Diagnostic Samples: In this study, scientists were unable to demonstrate any viral growth in “positive” samples with Ct > 24, but could do so at 24 and below. “SARS-CoV-2 Vero cell infectivity was only observed for RT-PCR Ct < 24 and STT < 8 days. Infectivity of patients with Ct >24 and duration of symptoms >8 days may be low.” As a layman, I read that and say: scientists tried hundreds of times and were unable to foster any reproduction of the virus from samples with Ct above 24. Probabilistically, that would seem to suggest we really ought to pay attention to low-Ct positives, and maybe less so to positives over some buffer beyond that level.

To Interpret the SARS-CoV-2 Test, Consider the Cycle Threshold Value: “…[H]igh sensitivity for viral RNA can be helpful for initial diagnosis. However, reporting as a binary positive or negative result removes useful information that could inform clinical decision making. Following complete resolution of symptoms, people can have prolonged positive SARS-CoV-2 RT-PCR test results, potentially for weeks, as Xiao et al report. At these late time points, the Ct value is often very high, representing the presence of very low copies of viral RNA [5–8]. In these cases, where viral RNA copies in the sample may be fewer than 100, results are reported to the clinician simply as positive. This leaves the clinician with little choice but to interpret the results no differently than for a sample from someone who is floridly positive and where RNA copies routinely reach 100 million or more.”

Strong Inverse Correlation Between SARS-CoV-2 Infectivity And Cycle Threshold Value: “Correlation between successful isolation of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in cell culture and cycle threshold (Ct) value of quantitative reverse transcription polymerase chain reaction (RT-PCR) targeting E gene suggests that patients with coronavirus disease 2019 (COVID-19) with Ct above 33 to 34 are not contagious and can be discharged from hospital care or strict confinement, according to a brief report published in the European Journal of Clinical Microbiology & Infectious Diseases.”

Are you infectious if you have a positive PCR test result for COVID-19? Oxford University. Noted in this meta-review is the following chart, which shows how the probability of infectiousness is greater (red bars) when the cycle threshold is lower (blue line). It also shows that the shorter the time from symptom onset to test, the more infectious the patient tends to be:

Shedding of infectious virus in hospitalized patients with coronavirus disease-2019 (COVID-19): duration and key determinants medRxiv 2020.06.08.20125310.

Comparison of viral levels in individuals with or without symptoms at time of COVID-19 testing among 32,480 residents and staff of nursing homes and assisted living facilities in Massachusetts

In the above study of 32,480 subjects in Massachusetts nursing homes (residents and staff), it was observed that those with Cts less than or equal to 23 harbored 99.9% of the cumulative viral load, and those with Cts less than or equal to 26 harbored 99.99% of the population cumulative viral load.

Rethinking Covid-19 Test Sensitivity — A Strategy for Containment, New England Journal of Medicine (just published yesterday), focuses on the logistical and time delays involved in PCR testing given its high sensitivity, arguing that it would be more effective to use cheaper (but less sensitive) technology which can be run on-site. This article is peripherally related to the main issue of this post, but provides added caution about overemphasis on PCR and the general pursuit of maximum sensitivity.

Backgrounder on PCR and Cycle Thresholds from Public Health England

Very good backgrounder. But note their summary at the end — is it excessively cautious given the cost of lockdown?

  • Ct is a semi-quantitative indicator of the concentration of viral genetic material in a patient sample.
  • From a laboratory perspective, Ct values should only be reported and applied for clinical interpretation and action where the linearity, limit of detection and standard quantification curves are assured.
  • Ct values are not directly comparable between assays and may not be reported by some RT-PCR platforms in use. Interpreting single positive Ct values for staging infectious course, prognosis, infectivity or as an indicator of recovery must be done with context about the clinical history.
  • Low Ct values (high viral load) more likely indicate acute disease and high infectivity.
  • High Ct values (low viral load) can be attributed to several clinical scenarios whereby the risk of infectivity may be reduced, but interpretation requires clinical context.
  • Serial Ct values have greater utility for interpretation but are generally only undertaken in hospital settings for the purpose of clinical management rather than infection control.

Understanding Cycle Threshold (Ct) in SARS-CoV-2 RT-PCR

CDC Guidelines

As of August 4, 2020, the CDC is displaying this Q&A with respect to cycle threshold values, which seems to be contradicted by several of the studies above:

Source: CDC website

Other Parts of the World

  • Spain also seems to have changed its guidelines this week, recommending that Ct values stay in the 30-35 range when reporting positives
  • India has now declared that COVID-19 test reports must state the cycle threshold value

There are many more such studies and meta-reviews emerging, and I’ll update this list as I discover them.

Twitter Threads On the Subject

In the United States, one of the key scientists raising these questions is Michael Mina, an epidemiologist, immunologist and physician at Harvard’s School of Public Health and Medical School. His thread below is well worth reading. Click through on the link, and read each of the 17 posts in the thread:

Media Coverage

Given what’s at stake — millions of people in lockdown and quarantine, introduction of enormous anxiety for those who test “positive”, many of whom might not have major cause for alarm — this issue has gotten fairly light coverage. It has been drowned out by a host of other stories, from the national election to wildfires to other urgent crises. But a few reports have broken through:

Here’s a good interview with Dr. Michael Mina, Assistant Professor of Epidemiology at Harvard and director of molecular virology, about the Ct issue and why institutional reporting practices have been slow to adapt:

Cases vs. Hospitalizations

A trend I’ve been noticing in many US dashboards is that hospitalizations used to follow cases after about 14 days or so, but often no longer do.

For instance, my son is a Northwestern University student, and so I follow the dashboard of Suburban Cook County IL closely. Let’s take a look at case counts vs. hospitalizations since this crisis began:

What’s going on here? Why did hospitalizations follow cases in April and May, yet appear virtually unaffected by positive cases starting in July? There are several possible explanations, which I’ll delve into in a future post, and it’s unlikely that there is just one contributor. But one contributing factor is very possibly that a “positive” also counts those who have successfully processed the virus and/or pose considerably less risk to public health. And yet our public policy essentially treats all positives the same.
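
If you want to check this on your own county’s dashboard, one simple approach is to compute the correlation between daily cases and daily hospitalizations at a range of lags, period by period, and see where it peaks. A minimal sketch (assuming numpy is available; the series below are synthetic, with a 14-day lag baked in, and any dashboard’s CSV export would slot in instead):

```python
import numpy as np

def lagged_correlation(cases: np.ndarray, hosp: np.ndarray, lag: int) -> float:
    """Correlation between daily cases and hospitalizations `lag` days later."""
    if lag == 0:
        return float(np.corrcoef(cases, hosp)[0, 1])
    return float(np.corrcoef(cases[:-lag], hosp[lag:])[0, 1])

# Synthetic stand-in data: hospitalizations are 5% of cases, 14 days later, plus noise.
rng = np.random.default_rng(42)
cases = rng.poisson(lam=200, size=200).astype(float)
hosp = np.zeros_like(cases)
hosp[14:] = 0.05 * cases[:-14] + rng.normal(0, 1, size=len(cases) - 14)

best_lag = max(range(28), key=lambda lag: lagged_correlation(cases, hosp, lag))
print(f"Strongest case-to-hospitalization correlation at lag: {best_lag} days")
```

Run separately on the spring and summer stretches of a real dashboard, the peak lag and the strength of the peak correlation become a compact summary of how much the relationship has weakened.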

Key Questions

Are high positive test counts always a sign of the problem worsening within a community?

It doesn’t seem that this conclusion naturally follows. High counts could also indicate clusters of “herd immunity / passed virus” forming, since a “positive” at high Ct can also indicate someone who is well past processing the virus. We should ask this question particularly among student populations on campuses, since that age cohort has extremely low mortality from COVID.

Could overly sensitive “positives” be a major contributing factor in why we’ve not seen the expected rise in hospitalizations and mortality following positive counts? Back in March, spikes in positives led to strong spikes in hospitalizations and deaths. In late September, hospitalizations and mortality no longer follow as closely, at anywhere near the same magnitude as earlier in the year.

Could the way we report positives be improved to better differentiate “worrisome” positives from “less worrisome” positives?

Given the fact that people whose immune system has functioned well and fought off the virus will still test “positive”, should we still be using the term “outbreak” when we observe high positive test counts, or should we look more toward either low-Ct values or hospitalizations for such a descriptive term?

Most important, perhaps, what steps can or should we take to better distinguish between positives-likely-to-be-infectious and positives-likely-not-infectious in our public health reporting and public health response?

Recommendations

I am not alleging any kind of conspiracy or malicious intent to hide the truth. I am continuing to mask up, particularly in confined spaces like retail. And I am not suggesting that COVID and its associated risks are not serious — it is, and they are.

But I am also reminding us that we are learning about this virus every day, and we shouldn’t let our initial policies ossify just because they were established under emergency conditions. Public health officials are doing their best and have crafted responses. But some of those responses need to be adjusted, some perhaps dramatically. Certainly we could use more dynamism in the policies being implemented — I’m not sensing a lot of movement or dynamic revision. That’s understandable — institutions and agencies don’t turn on a dime — but I don’t think it’s adequate, given the high costs being imposed.

Specifically, from what I’ve read on this subject, I don’t quite understand why we aren’t now altering our public health response in the following ways:

  1. Move beyond a simple binary “positive/negative.” Sure, continue to report positives and negatives, but also report Ct values on every test, and aggregate them into a histogram-style chart, along with trends in Ct over time: medians, modes, and standard deviations. Set a benchmark for what constitutes a “high” viral load. (Several scientists now seem to coalesce around the number 30 or so.) How many subjects were detected with “high” viral loads? “Low” viral loads? What is the trend in that signal? Get a better picture of the severity and infectiousness of the virus. (A sketch of what this could look like follows this list.)
  2. Always report hospitalizations whenever “positives” are reported. Hospitalization counts are very important disambiguating metrics. Every dashboard needs to report hospitalizations and mortality alongside positives. Yet I’m not seeing hospitalization counts on many community and college campus dashboards. Showing only “positive test cases” creates alarm where there might not need to be any, because the count also includes people whose immune systems are functioning well, who have defeated the virus, and who are not at risk of infectious shedding.
  3. Shift from a “one and done” testing model to situational immediate follow-up testing for directionality, particularly among those with “low” viral load. That is, if you test positive but with Ct over some potentially less-concerning threshold (e.g., 34, or whatever health officials determine), go ahead and isolate, but also test again in 18-24 hours to disambiguate those who are early in their infection from those who are beyond it. If the Ct of detection in the second or third test stays high (or climbs higher), consider loosening quarantine protocols, per the research above. If that number drops, quarantine and isolate, as the infection appears to be progressing.
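
Here is a minimal sketch of what the aggregation in recommendation #1 could look like. The Ct values and the Ct = 30 benchmark are illustrative assumptions, not real data:

```python
import statistics
from collections import Counter

# Hypothetical Ct values from one day's positive tests -- illustrative only.
ct_values = [18, 21, 22, 24, 28, 31, 33, 35, 36, 37, 38, 38, 39, 40, 41]

HIGH_VIRAL_LOAD_CT = 30  # the benchmark several scientists seem to coalesce around

high_load = [ct for ct in ct_values if ct <= HIGH_VIRAL_LOAD_CT]
low_load = [ct for ct in ct_values if ct > HIGH_VIRAL_LOAD_CT]

print(f"Positives: {len(ct_values)}")
print(f"  High viral load (Ct <= {HIGH_VIRAL_LOAD_CT}): {len(high_load)}")
print(f"  Low viral load  (Ct >  {HIGH_VIRAL_LOAD_CT}): {len(low_load)}")
print(f"  Median Ct: {statistics.median(ct_values)}")
print(f"  Std dev:   {statistics.stdev(ct_values):.1f}")

# Crude text histogram, binned in 5-cycle buckets.
bins = Counter(5 * (ct // 5) for ct in ct_values)
for start in sorted(bins):
    print(f"  Ct {start}-{start + 4}: {'#' * bins[start]}")
```

Tracking the median and the histogram day over day would tell a community whether its positives are skewing toward high viral loads (worrisome) or toward barely-detectable fragments (much less so), information a bare positive count throws away.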

I think in a few years, we may look back on this time and say we made a major mistake in overcounting positives, and in treating any high number of positives as an “outbreak.” It is vital that we at least take a moment to examine whether we should change the way we report COVID positives.

DISCLAIMER

I am not a physician. I’m not a scientist. Nor do I have any medical training. I’m simply Very Online, and I read a lot. Like a lot of us, I build, I plan, and I invest. I generally find I need a mental model of the next 0-24 months in order to build, plan and invest successfully, and I work pretty hard at trying to have a good guess about that near-to-mid-term horizon.

On The Accuracy of Self-Reported Data

In the urgent debate around Seattle’s homelessness crisis, many articles (such as this otherwise great one in Crosscut) cite the statistic that 35% of those who are homeless in the Seattle region have some level of substance abuse. It’s often a very central part of the framing, especially by those who wish to portray substance abuse as a relatively small contributor to the problem. Among other things, presenting that statistic at face value implies that 65% of homeless individuals don’t have any kind of substance abuse issue, and so perhaps addiction is a less significant factor in causing, following, accompanying or perpetuating homelessness.

Now, 35% is of course much higher than the national average for the housed and unhoused combined, so even that level, just over a third, should be alarming and worthy of investment in treatment services and treatment-on-demand, and even of thoughtful consideration of increased requirements on those who choose not to enter treatment, since not getting help is a danger to themselves and, in some cases, to others.

But here, I’m concerned with the presentation of that statistic itself and central reliance upon it without any context.

Too often left unmentioned is this key asterisk: The 35% statistic is based entirely on self-reporting.

35%’s Origin

The 35% figure comes from the very worthwhile annual “Point in Time” report, gathered by volunteers, which King County calls the Count Us In Report. Here is the key paragraph summarizing the oft-cited statistic:

Approximately 70% of Count Us In Survey respondents reported living with at least one health condition. The most frequently reported health conditions were psychiatric or emotional conditions (44%), post-traumatic stress disorder (37%), and drug or alcohol abuse (35%). Twenty-seven percent (27%) of respondents reported chronic health problems and 26% reported a physical disability. Over half (53%) of survey respondents indicated that they were living with at least one health condition that was disabling, i.e. preventing them from holding employment, living in stable housing, or taking care of themselves.

King County Count Us In Point-In-Time Report, 2018

Survey’s Great. Interpretation? Often, Not So Much.

Please understand what I’m saying, and do not misrepresent it: The Point In Time Survey does not in any way misrepresent or hide information about what this statistic represents. The report itself characterizes it accurately.

But other articles, online discussions and political polemics which cite this statistic and accept it as ground truth often do. Note, for instance, the framing in that Crosscut article, at least as of this writing. Its lead sentence reads: “Contrary to what some may assume, most people living homeless do not have a substance use disorder (SUD): it’s about 35%, according to a recent local survey.”

No, that’s not what the Count Us In Report says.

It is not accurate to say that 35% of homeless individuals have substance abuse issues; it is accurate that 35% say they do.

I am in no way critical of the considerable effort to gather and report this data. I’m very supportive of it, and I applaud the many volunteers who give their time to do it. It provides extremely useful snapshots in time for things like total counts, vehicular counts, age, gender, regional comparisons, trends and more. That kind of data — i.e., pure counts of things which can be clearly observed and independently verified — is pretty reliable.

And what people self-report is also useful in its own right. I’m a fan of collecting it.

The problem I have is when essays and analyses and endless online debates blindly rely upon the “35% of homeless individuals in Seattle have some form of substance addiction” figure without stating — or in some cases, even seemingly knowing — what that figure represents.

35% of homeless individuals surveyed report that they have drug or alcohol abuse as a health issue.

What the 35% means is that, when asked one night by a volunteer stranger whether they have a drug or alcohol issue, 35% of those who are homeless respond “Yes, I do.”

We can and should ask: what is the likely accuracy of that number? Would that tend to undercount, accurately count, or overcount reality? Statistically speaking, does it tend to generate a lot of false negatives, little error, or false positives?

Intuitively, it seems highly likely that this is an undercount. After all, what is the incentive for someone who is not addicted to answer “Yes, I am”? Conversely, for those who are addicted — to opioids, meth, alcohol — denial about addiction is such a truism that it has become a cliche, at least about alcoholics in particular. To assume that 35% represents reality is to assume that denial, when it comes to admitting substance abuse to strangers, is nonexistent.

The City’s Own Lawsuit Against Purdue Pharma

Even the City itself has data which is very hard to square with a 35% addiction rate figure.

In the City’s own case against Purdue Pharma, it says: “Seattle’s Navigation Team (…) estimates that 80% of the homeless individuals they encounter in challenging encampments have substance abuse disorders.”

Seattle v. Purdue, 1:17-md-2804-DAP, p. 8, ¶ 18

Now, this is only for those “challenging encampments” encountered by the Navigation Team, and it doesn’t count those homeless individuals living in private or publicly funded shelter. Note too that the lawsuit is focused on opioid abuse, not the broader alcohol and meth (the fastest-growing) addiction issues. But it is very hard to square the 80% substance-abuse figure in this subsegment with an overall 35% rate, unless one assumes that the other segments have a dramatically lower-than-US-average level of substance abuse, essentially 0% of any kind, which seems unlikely.

Page 32 of the Count Us In Report shows 41% of the homeless population living unsheltered in tents, encampments, or on the streets, and another 16% living in vehicles, with both segments growing rapidly.
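To make that concrete, here is a quick back-of-the-envelope check in Python. The arithmetic is my own, not the report’s; I’m applying the Navigation Team’s 80% estimate to the whole unsheltered segment as an assumption:

```python
# If 41% of the homeless population is unsheltered, and that segment has an
# ~80% substance-abuse rate (the Navigation Team's estimate, assumed here to
# apply to the full unsheltered segment), what rate would the remaining 59%
# need for the overall average to land at 35%?
unsheltered_share, unsheltered_rate = 0.41, 0.80
overall_rate = 0.35

remaining_share = 1 - unsheltered_share
implied_rate = (overall_rate - unsheltered_share * unsheltered_rate) / remaining_share
print(f"Implied rate for everyone else: {implied_rate:.1%}")  # ~3.7%
```

Under those assumptions, everyone else (sheltered individuals, those in vehicles, and so on) would need a substance abuse rate of roughly 3.7%, far below even the general-population average. That is the implausibility the lawsuit figure exposes.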

Further, the Seattle Navigation Team reports that in cleanups of camps, about 80% of them contain needles and other physical evidence of substance addiction. And that counts only the needles left behind, not those taken or disposed of prior to the final closure.

And the Seattle Is Dying piece, while anecdotal and not at all scientific, sure made it seem like those interviewed with direct knowledge — the reporters, the first responders, and at least one individual who has spent years in encampments or reporting on them — put the level much higher than 35%, most saying “100% or close to 100%.”

How can the self-reported number be so low, and the datapoints above so high?

Do Studies Measure Accuracy of Self-Reporting? Yes!

Surely, this problem has been studied before. What’s the accuracy rate of self-reporting when it comes to substance abuse? Are there studies which ask people and then, say, do lab tests to verify truthfulness?

Initially, I ran across several studies showing an 89%+ accuracy rate overall, and was quite surprised by them. That doesn’t match my intuition. When a stranger asks you about activity that is potentially illegal, might make you ineligible for services, might lead to incarceration, or at a minimum carries some stigma, would you really answer honestly ~90% of the time? Seems odd. Can that really be true?

False Negatives are Very High for Those Not Seeking Treatment

But then I read The Impact of Non-Concordant Self-Report of Substance Use in Clinical Trials Research, which really made total sense to me, and resolved the basic question. There are two ways this ~89% accuracy estimate is an overestimate in situations like the Point In Time overnight counts.

Essentially, the super-high 89%+ overall accuracy rates generally come from studies of either (a) people who have decided to seek treatment (e.g., they’re already in the lab and know or suspect a test is about to happen), or (b) blended samples of the overall population, which contains an overwhelming number of non-users (90.4% of Americans haven’t used hard drugs in the past month, according to NIH).

That is, for the individuals in the former group of studies (those who are seeking treatment), there’s a very strong incentive and desire to be accurate — and the knowledge that they’ll likely be lab-tested anyway. And for the latter group, the overwhelming number of accurately-answering subjects in the population (i.e., those who have no incentive whatsoever to create false positives) swamps the weighted average and pulls the measured accuracy artificially higher.
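To see the blending effect in action, here is a small illustration. The specific accuracy and denial numbers are my own assumptions, chosen only to mirror the dynamic described above:

```python
# How a mostly-non-using population inflates overall "accuracy":
# assume ~90% of subjects are non-users who answer honestly, while
# actual users deny half the time (both numbers are illustrative).
non_user_share, user_share = 0.90, 0.10
non_user_accuracy = 1.00   # no incentive to falsely claim addiction
user_accuracy = 0.50       # half of actual users deny when asked

overall = non_user_share * non_user_accuracy + user_share * user_accuracy
print(f"Overall accuracy: {overall:.0%}")  # 95% -- looks impressive...
# ...even though only half of the actual users answered honestly.
```

A 95% headline accuracy, with a 50% false-negative rate among the very group we care about. That is exactly how a blended-population study can report ~90% accuracy while telling us almost nothing about honesty among those with substance abuse issues.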

Toward a Better Estimate

I’m no expert, but it seems much more likely that 35% — the self-reported rate — is the floor of the range, and an implausible one at that. In other words, the notion that 35% represents reality is highly unlikely. Accepting that statistic as reality essentially requires believing that 100% of respondents answer a question like “Do you have a substance abuse health problem?” honestly (unless you believe that those who aren’t addicted decide, in large numbers, to claim that they are), and no study suggests that they do.

There are almost zero pressures pushing the true number below what is self-reported, and significant evidence that the false-negative rate can be at least 30-50%+. The true range is also very sensitive to your view of the reporting error: since the true rate is roughly the reported rate divided by (1 minus the false-negative rate), an error rate of 50% on the sample — again, intuitively entirely due to false negatives — yields a true substance abuse rate in the surveyed population of 70%, not 35%.

To me, if you gross up the self-reported 35% estimate by a more reasonable factor given the likelihood of false negatives, you end up with a true addiction range of roughly 45% (quite conservative) to 70%+, which is more plausible and much more compatible with the City’s own lawsuit.
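Here is that gross-up arithmetic as a sketch; the denial rates below are assumptions for illustration, not measured values:

```python
# Grossing up a self-reported rate for assumed false negatives:
#   true_rate = reported_rate / (1 - false_negative_rate)
# where the false-negative rate is the share of actual users who deny.
reported = 0.35

for denial_rate in (0.22, 0.35, 0.50):  # assumed, for illustration
    true_rate = reported / (1 - denial_rate)
    print(f"Denial rate {denial_rate:.0%} -> true rate ~{true_rate:.0%}")
# 22% -> ~45%, 35% -> ~54%, 50% -> ~70%
```

Note how even a fairly modest 22% denial rate already pushes the true rate to ~45%, and a 50% denial rate lands at the 70% figure above.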

One acquaintance, whose (multiple) loved ones have suffered directly from addiction, tells me that service providers and doctors have generally said self-reporting is usually off by a factor of 2 to 3x. (Relying solely upon this anecdotal feedback, even 70% would be low.)

At a minimum, when people cite the 35% statistic, I think we should encourage an asterisk noting that this is self-reported data — what people say about themselves.

Update, October 8, 2019:

The Los Angeles Times published its own analysis of Los Angeles, comparing the counts to self-reported data, and pretty much fully agreeing with the estimates above: https://www.latimes.com/california/story/2019-10-07/homeless-population-mental-illness-disability

References

“The Impact of Non-Concordant Self-Report of Substance Use in Clinical Trials Research” (NIHMS 763177)

King County “Count Us In” Point-in-Time Report, 2018 (FINALDRAFT-COUNTUSIN2018REPORT-5.25.18)

“Seattle is Dying”: KOMO News Special

Last night, an hour-long program aired without commercial interruption in Seattle on the addiction crisis and homelessness. It’s an important watch. I found it devastating, riveting and motivating, all at once.

There is already much being made over the fact that (a) it comes from KOMO News, a station now owned by Sinclair Broadcasting, a large conglomerate with an unquestionably conservative lens. And that (b) there is direct footage within this broadcast of people — who, I should add, are in public, not private places — who are suffering, some in mental breakdown and in crisis. And (c) the piece leads off with the recent System Failure Report, an important report about repeat offenders (read it!) — which, as SCC Insight carefully noted, has some important caveats to consider in its interpretation.

Before I go further, it must not be left to a footnote that the main journalist, Eric Johnson, has been a part of the Seattle journalism community for more than two decades, has deep ties to the Puget Sound community, and had been profiling Seattle life with high-quality work for decades before the change in station ownership. If you’d like to thank him for bravely breaking out of the “it’s all about Amazon” media narrative and tackling a difficult, controversial issue which will no doubt bring him a lot of ideological heat, please drop by his Twitter feed.

Issues (a) through (c) above are each legitimate concerns, and they are worth deeper sidebar discussions. They should absolutely add some nuance to the viewing, and people should discuss them.

But they should not be allowed to overshadow the central message, because none of them invalidate the central, monumental, devastating theme of what is being reported. And what’s being reported is that we have an addiction crisis in Seattle, that the homelessness crisis is primarily but not solely driven by addiction, and our existing policy regime is broken.

Very broken. People need help from our leaders, and that includes those suffering, our first responders, residents and business communities. This is not primarily a “tech displacement” story, nor is it primarily a “lack of compassion” story, nor is it primarily a “police need to do a better job” story. And it is absolutely not a “let’s do more of what we’re doing” story.

If you’re more focused on the filming of individuals already in public who are suffering than the systemic problem our policymakers exacerbate, stop and ask yourself what you are fighting for.

KOMO News Special: Seattle is Dying

Is our policy approach working? No? OK, we agree on that. Do we want far fewer people addicted and homeless? Yes? OK, we agree on that too.

So let’s say we snap our fingers and there are suddenly thousands of permanent, publicly-operated individual housing units with addiction treatment services. That’s great, and I support that. But before going further, answer: What then? Would you be willing (as I am) to mandate that those who are addicted go through such treatment, or go to jail? Do you believe that without mandated treatment, those profiled will suddenly get well? Do you believe that data about their unique needs and touch-points with the city are important to measure, to know more about whom we are treating, what they need, and how they’re progressing? Do you believe that a substantial number of crimes could be prevented if we did warrant checks among the population at various touchpoints with taxpayer-paid services? Do you believe the citizens who fund these services have any right to demand the most basic of database checks? Do you believe the unhoused are actually most at risk for violence and crime, and that we have an obligation to do what we can to limit it in the shelters or encampments we do tacitly allow? Do you believe that public safety will magically improve with free housing, without a change in how we handle addiction and repeat offenders?

In other words, would you be willing to implement a Providence, RI-style approach? Would you be willing to at least try it on a small scale? Ask yourself that, and state your answer publicly, and the ideological divide between approaches might begin to be bridged.

Which is more important — fewer people addicted and suffering, or proving the “correctness” of your own (or my own) political ideology, power desires, or what-have-you that we don’t want challenged? Does the story above suggest that simply building more housing and low-barrier tiny villages, without any “asks” of those residing within them, will magically solve the problem? Has it, where it’s been tried? (That was basically the Licton Springs model. How well did it work?)

Or does it appear that the central problem is addiction and our revolving-door, permissive approach, and a City Council that prefers not to touch an entrenched ideological third rail, one which actually benefits politically by leaving it alone? What do first responders say? And why do they feel they need to be anonymous to be honest?

When our police feel they must be anonymous or retired in order to be honest, what does that suggest?

I don’t think most Seattleites would agree that the current policy approach is working, and I would ask what you are truly fighting for. We need the City Council to start holding hearings on this in an honest, transparent way. We need to begin exploring much more proactive approaches, such as the ones used in Providence Rhode Island and Snohomish County and Boston and elsewhere — those regions have seen dramatic improvements in the very measures where we’ve seen failures.

Some of us think it’s time to change the broken policy agenda that we’ve been pursuing, and time to take a new approach.

Seattle City Council, can we please stop spending time, money and focus on passing symbolic resolutions about national issues, and get to work? Can we please stop dishonestly portraying homelessness as just (or even primarily) a tech-affordability displacement issue? It’s been more than three years since we declared a state of emergency on homelessness. Our approach isn’t working.

Seattleites, you can help push for better solutions. There are many ways to help. Write the Seattle City Council and demand better action. Contact the City Attorney and ask some good questions. Send your thoughts, even this blog post if you care to, to the Mayor. Push back when you hear people championing the shibboleth that it’s just about needing even more money without a change of policies. Consider joining one of multiple volunteer organizations. One of the ones I’ve found suits me best is the diverse group of citizens seeking evidence-based solutions at SPEAK Out Seattle.

At a minimum, resolve to improve your information diet. Don’t just read The Stranger’s take on it. Follow SOS at https://twitter.com/sos_seattle, or better yet, join us at SPEAK OUT Seattle on Facebook. Read the System Failure Report. Share this video, even this post if you care to, with neighbors and coworkers. And please consider attending the free City Council candidate forums in March through June in your neighborhood (see @SOS_Seattle on Twitter), and ask your district candidates what their take is on the addiction crisis in Seattle. And of course, there are numerous organizations that directly help today’s urgent need and those struggling with poverty and addiction, from King County United Way to Community Lunch to food banks like the amazing University Food Bank to Habitat for Humanity Seattle – King County to clothing donors and more. Most important, for the mid and long-term, we have to right this ship.

Major citywide elections are coming up in August (primary) and November (general), where 7 of 9 City Council spots are up for vote. This provides hope, for the first time in a long while, to change our direction. I’ll have more to say about the City Council candidates I feel have the best shot at changing our policies for the better in a future post; right now, I’m only starting to eliminate a few from further consideration. If interested, follow me on Twitter.

Disclosure: I donate time and money to several of the nonprofit organizations mentioned above, but I do not speak for any of them.

“Green New Deal”: Math Check

Raising the marginal rate on $10 million or more in income to 70% might optimistically help “pay for” a boost of about 1.6% to overall federal spending.

The Washington Post’s Jeff Stein estimates that raising the marginal tax rate to between 60 and 70 percent on incomes above $10 million might raise as much as $720 billion over a decade, or $72 billion per year. Some 16,000 households meet that criterion — fewer than 0.05% of all US households. Collectively, their taxable income was about $405 billion in 2016, on which they paid $121 billion in taxes.

Current overall federal spending: $4.47 trillion per year. So raising this marginal rate might help “pay for” a boost of about 1.6% to overall federal spending.

Stein adds, “The real number [of projected new revenue from such a marginal tax hike] is probably smaller than that, because wealthy Americans would probably find ways around paying this much-higher tax.”

One chart that should be of interest — inflation-adjusted per-capita federal spending. It looks like this:

A chart of inflation-adjusted per-capita federal spending

I’m all for discussing policy ideas and investments, but it’d be great to start with numbers that foot.

A migration from a 17% renewables economy to a 100% renewables economy, with Medicare for All and “free” education, requires much higher taxes on all brackets. (One George Mason University study estimated the ten-year cost of Medicare for All at $32.6 trillion, roughly 45x the $720 billion this marginal tax hike would optimistically raise over the same decade.) And, for what it’s worth, it requires science that, at least for now, we don’t have.
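For anyone who wants to check the arithmetic, here it is, using only the figures cited above:

```python
# Checking the two comparisons above against the cited figures.
new_revenue_per_year = 72e9          # optimistic estimate, per Stein
federal_spending_per_year = 4.47e12  # current annual federal spending

boost = new_revenue_per_year / federal_spending_per_year
print(f"Share of current federal spending: {boost:.1%}")  # ~1.6%

m4a_cost_decade = 32.6e12   # George Mason ten-year estimate
new_revenue_decade = 720e9  # the tax hike's optimistic ten-year total
ratio = m4a_cost_decade / new_revenue_decade
print(f"M4A cost vs. a decade of new revenue: {ratio:.0f}x")  # ~45x
```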


Survivorship Bias

In WWII, researcher Abraham Wald was assigned the task of figuring out where to place more reinforcing armor on bombers. Since every extra pound meant reduced range and agility, optimizing these decisions was crucial. So he and his team looked at a ton of data from returning bombers, noting the bullet hole placement.

They came up with numerous diagrams that looked like this:

A diagram of a bomber, with red marks showing bullet holes clustered on the fuselage and wingtips of returning aircraft

Most of his team members observed “Wow! Look at all those bullet holes in the center of the fuselage and on the wing tips! The armor clearly ought to go there, because those are the areas that are most marked-up in red!”

But Wald realized that they were only looking at the bombers which SURVIVED, and he correctly argued that the marked-up areas were precisely the damage areas that were most survivable, while hits to the areas NOT marked by bullet holes were the fatal ones; those planes never made it back to be counted. In so doing, he helped us understand “survivorship bias” — that is, if we only sample from the successful outcomes, we never see the crucial factors that caused failure, which in many cases are the most important factors of all.

Such survivorship bias can lead to conclusions and strategies which are precisely the opposite of optimal, so pay attention to the datapoints that you may have already artificially and incorrectly eliminated. 
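A tiny simulation makes the trap concrete; every number here is invented purely for illustration:

```python
import random

# Each plane takes one hit in a random section. Engine hits are usually
# fatal; fuselage hits are usually survivable (probabilities invented).
SURVIVAL_PROB = {"engine": 0.15, "fuselage": 0.85}  # P(survive | hit there)
random.seed(0)

survivor_hits = {"engine": 0, "fuselage": 0}
for _ in range(10_000):
    section = random.choice(list(SURVIVAL_PROB))
    if random.random() < SURVIVAL_PROB[section]:
        survivor_hits[section] += 1

# Looking only at returning planes, fuselage damage dominates -- the naive
# conclusion is "armor the fuselage." Exactly backwards: engine hits are
# rare among survivors precisely because they downed the plane.
print(survivor_hits)
```

Even though hits land evenly across both sections, the survivors’ records show mostly fuselage damage, and a naive analyst armors exactly the wrong spot.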

DeOldify: Auto-color B&W Photos with Machine Learning

I’ve just discovered an incredibly cool project on GitHub: DeOldify, which uses deep learning to automatically colorize old black & white photos. It’s not perfect, but what it’s able to do is pretty amazing, and improving rapidly.

In addition to ninja-level coding, author Jason Antic (@citnaj on Twitter) does a terrific job writing up how the algorithm works in the README file.

Essentially, his code uses a deep learning technique called a Generative Adversarial Network (GAN). GANs consist of two components: a “Generator” and a “Discriminator.” In brief, the Generator (its own neural network) attempts to synthesize convincing fakes, and the Discriminator attempts to figure out whether a submitted instance is real or fake. In this way, the Generator can be considered a “Counterfeiter” trying to fool the Discriminator, and the Discriminator can be thought of as “The Police” trying to catch the counterfeiter passing off a fake.

A zero-sum game is then played over and over, thousands of times, with each party trying to maximize its winnings within some constraints.

Over time, the counterfeiter gets better and better at counterfeiting, until it can stand on its own and truly generate something that is pretty close to good output. To me, this is somewhat analogous to how children learn to better tell the truth when they experiment with fanciful lies in pre-adolescence. (Some, sadly, never learn the lesson fully.)
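To make the loop concrete, here’s a minimal sketch of GAN training in PyTorch. This is not DeOldify’s actual code (its generator and training harness are far more sophisticated); it’s just the bare counterfeiter-vs-police dynamic, with the dimensions chosen arbitrarily:

```python
# A minimal GAN training loop -- illustrative only, not DeOldify's code.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g., flattened 28x28 images (assumption)

# The "counterfeiter": maps random noise to a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())

# The "police": outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Train the police: real samples are labeled 1, fakes 0.
    fake = G(torch.randn(n, latent_dim)).detach()  # freeze G this step
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the counterfeiter: try to make the police say "real" (1).
    fake = G(torch.randn(n, latent_dim))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Each call to train_step first lets the police study labeled real-versus-fake evidence, then lets the counterfeiter adjust against the freshly updated police, which is exactly the alternating game described above.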

Back to DeOldify. The input/output is pretty impressive, with several examples provided at the link above.

Examples

I went through the Google Colab (Jupyter Notebook) powered harness developed by Matt Robinson, and input a few photos. Keep in mind that none of this involved any Photoshop work on my end:

Titanic Survivors
WWI Soldiers
Winston Churchill
Churchill reviews troops

Not perfect, but remarkable nonetheless!

GANs are perhaps the most interesting thing coming out of the work in Machine Learning these days. 

I plan to spend some time with this project and see how well it does on some family photos from decades past, and perhaps build a web front-end to make it easy to try out. (It’s computationally expensive, so perhaps that front-end will need to rely upon donations.)

UPDATE: Jason Antic continues to make incredible progress on the model. The new images are much better than those above. Give it a whirl at https://deoldify.ai/!