The Body Count of Self-Driving Cars is Dangerously Low

On March 18, 2018, self-driving cars claimed their first victim. Elaine Herzberg was crossing the street, and an autonomous car ran her over, killing her. The car’s human minder was distracted by another marvelous piece of technology (she was watching a video on her phone), but still: without self-driving cars, Elaine would still be alive.

As it turns out, a whistleblower had suggested grounding that fleet of cars just days before, which could have averted the tragedy.

What can self-driving car advocates say to this?

We can say: what took so long?

Technology is risky, and technologies involving moving tons of metal around places where humans walk are riskier still. But we shouldn’t privilege the status quo: in the US, there are 30,000 motor vehicle fatalities per year, mostly caused by distracted drivers. Evolution didn’t optimize humans for making split-second decisions at seventy miles per hour, so we’re likely close to a local minimum in traffic fatalities; we can lower them further, but only by accepting tradeoffs (segregating car traffic from foot traffic, reducing speed limits).

You can think of it as an efficient frontier where we use regulations to trade off between convenience and casualties. For now, we’ve decided that 30k is about the lowest we’re willing to go.

Of course, efficient frontiers aren’t static. When technology changes, they shift. Better driving technology could either get us faster commutes for the same death toll, fewer deaths for the same commute time, or some happy compromise between the two.

I’m rooting for compromise.

But to get there, we have to think seriously about risks. This forces us to make some hard decisions, but we’re not the first. When Winston Churchill was the British Secretary of the Navy, he was a strong advocate for oil over coal. Referring to the Germans, he once wrote “They have killed 15 men in experiments with oil engines and we have not killed one! And a damned fool of an English politician told me the other day that he thinks this is creditable to us!”

More recently, the US space program had its share of tragic deaths early on. And what’s happened since is that, through tens of billions of dollars of R&D and decades of research, NASA has discovered a surefire method for safe space travel: staying put. Perhaps you feel that this is all for the best, that space travel isn’t a worthy use of our finite resources. In that case, I hope that when we spot the asteroid, I or my descendants have time to tell you or yours that we told you so.

The Calculus

To decide on an appropriate tradeoff between deaths now and deaths later, we need some measure of the exchange rate between present and future lives. This is hard to calculate: on the one hand, the world’s population is expanding; there are more future people than present people, so perhaps any one of them matters less. On the other hand, rich countries spend a lot on healthcare, and the richer they are, the more of their incremental income they spend this way. So it’s entirely possible that the subjective value of a human life is a superlinear function of income.

(If you don’t think the economy will grow in the future, why are you bothering to read an essay about the future? You should be having more fun—since the only coherent options are prosperity or apocalypse, a lack of long-term optimism is just unthinking nihilism. Go see if there’s anything good on TV.)

We can probably settle on the theory that future lives are worth at least as much as present lives, perhaps much more. But to be conservative, we can stick with a 1:1 tradeoff: anything we can do that has an expected cost of roughly one human life due to a poorly-configured self-driving car, but that is likely to accelerate the arrival of self-driving cars by 18 minutes, is a win.
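
Where does 18 minutes come from? It’s just the year divided by the annual death toll; a two-line sanity check, using the essay’s round 30,000-deaths figure, confirms it:

```python
# Sanity check of the 18-minute figure, using the essay's round
# assumption of 30,000 US motor-vehicle deaths per year.
DEATHS_PER_YEAR = 30_000
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

minutes_per_life = MINUTES_PER_YEAR / DEATHS_PER_YEAR
print(f"One statistical life = {minutes_per_life:.1f} minutes of status quo")
# prints "One statistical life = 17.5 minutes of status quo"
```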

This sounds like one of those creepy bloodless calculations economists are fond of making, but don’t worry: it’s plenty bloody! Tens of thousands of lives per year are at stake!

The question of whether to accept deaths from self-driving cars, or to hold researchers to a zero-casualties standard, is only easy to answer if you massively privilege the status quo. But by the standards of the future, the present is rife with needless death. Refuse to make hard decisions now, and your descendants will despise you forever.

Stalin probably never said “A single death is a tragedy. A million is a statistic.” Think of it: by delaying self-driving cars, you’re espousing a view so bad that one of the most evil humans in history wasn’t actually willing to say it. (“Maybe he was thinking it, too,” you say. But Stalin probably said what was on his mind: his management strategy was to get all of Russia’s senior leadership blind drunk every single night to see who would admit something unfortunate, and presumably Uncle Joe let some things slip then, too. We probably have a good idea of the most evil things he ever explicitly thought.)

We should view self-driving cars as akin to the Manhattan Project, but with more certainty of long-term benefits. The Manhattan Project claimed some lives directly (Louis Slotin, for example), and of course dropping the bomb killed tens of thousands of civilians; since it diverted 0.25% of GDP during wartime, the loss of men and materiel probably also led indirectly to Allied servicemen’s deaths. But in the end, it was a massive saver of lives. Not because of its impact on the war itself—the Russian invasion probably played a larger role. But by pushing forward the development of nuclear technology, the Manhattan Project eliminated the possibility of direct conventional war between superpowers, and also gave us a low-emissions source of energy that crowded out some coal, which now kills a mere 800,000 people per year.

(This calculation is a little tricky as well, since one effect of nuclear weapons is that they raised the odds of a world-ending war from zero to something higher. War deaths follow a power law distribution, and nuclear weapons lower the alpha (fattening the tail) by some hard-to-estimate increment. A nuclear exchange seems unlikely now. Do our bombs even work any more? Is there anyone still alive who knows how they operate?)
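
To make that parenthetical concrete: in a Pareto-tail model, P(deaths > x) = (x / x_min)^(-alpha), and small changes in the exponent swing the odds of the very largest wars enormously. The x_min and alpha values below are illustrative, not fitted to real war data:

```python
# Tail sensitivity of a power law: P(deaths > x) = (x / x_min) ** -alpha.
# x_min and the alpha values are illustrative assumptions, not estimates
# fitted to actual conflict data.
def tail_prob(x: float, alpha: float, x_min: float = 10_000.0) -> float:
    """Survival function of a Pareto tail starting at x_min."""
    return (x / x_min) ** -alpha

for alpha in (1.35, 1.50):
    # Probability of a billion-death war, under each exponent.
    print(f"alpha={alpha}: P(>1e9 deaths) = {tail_prob(1e9, alpha):.2e}")
```

A shift of 0.15 in alpha changes the billion-death probability by roughly a factor of five, which is why the "hard-to-estimate increment" matters so much.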

Concluding Thoughts

It’s a bit extreme to say that self-driving car research is acceptable if it starts killing as many people as conventional cars. On the other hand, it sounds extreme to me to say that we should accept the preventable deaths of millions of people because we’ve always killed lots of innocent people that way.

I can say this, because I don’t work in the self-driving car industry. (Having said it, I never will.)

But we can compromise. Let’s set an acceptable death threshold for self-driving car research: as a baseline, R&D can kill 1% of the people that not having self-driving cars kills. As self-driving cars rise from a minuscule fraction of total miles driven to a single-digit share to the vast majority (expect that last transition to happen startlingly fast: double-digit months, not double-digit years), we can raise their casualty budget as we lower the acceptable death toll from the status quo.
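
As a sketch of how that sliding budget might be computed (the 10% relative-risk cap and the linear ramp are my illustrative assumptions, not part of the essay’s proposal):

```python
# Sliding casualty budget: a fixed 1%-of-baseline R&D floor, plus a
# deployment allowance proportional to the miles AVs actually cover.
# All parameters here are illustrative assumptions.
BASELINE_DEATHS = 30_000  # assumed status-quo annual US traffic deaths

def casualty_budget(av_share: float,
                    rd_fraction: float = 0.01,
                    relative_risk_cap: float = 0.10) -> float:
    """Annual AV deaths tolerated at a given share of miles driven."""
    rd_floor = rd_fraction * BASELINE_DEATHS  # 300/year at 1%
    deployed = relative_risk_cap * av_share * BASELINE_DEATHS
    return rd_floor + deployed

for share in (0.0, 0.05, 0.5, 0.95):
    print(f"{share:>4.0%} of miles -> budget of {casualty_budget(share):,.0f}/yr")
```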

Eventually, if the world doesn’t end, I expect commutes to happen in autonomous vehicles; manually-steered cars will be fun for hobbyists and essential for revolutionaries, but will otherwise be a nonissue. The only question to me is: how many people have to die before we get there?


The Great Mid-Century Sort

The twentieth century sure was different. Mostly in ways we’d like to forget. But one way we shouldn’t forget is that, at least for American workers, it was a fabulous time. Real GDP per capita grew from about $6,300 to about $46,500. The US economy was growing fast before the twentieth century, too, but economists who look at late 19th century growth can explain most of it through capital spending and population growth. In other words, 19th century growth is better explained by the nearly 200,000 miles of railroad we built—and the millions of people we imported to build them—than by the invention of the steam locomotive.

Nineteenth-century economists could model the economy pretty simply, or could have, if they’d had the tools: workers times productivity equals output, where productivity is a function of capital. In the 1950s, Solow and Swan developed a more robust model, where economic growth came from population growth, capital, and “Total Factor Productivity,” where TFP is an extremely high-class way to describe the growth that the basic model can’t explain. Like a lot of economics, this sounds either stupid or tautological as a statement of fact, but economic models are about illustrating the bounds of our ignorance: of those three variables, TFP is the one to explain.
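
In code, that bookkeeping is one line: TFP is whatever portion of output capital and labor can’t explain. The capital share and the input figures below are made-up illustrations, not historical data:

```python
# TFP as the Solow residual: solve Y = A * K^alpha * L^(1-alpha) for A.
# The capital share (alpha) and all input figures are illustrative.
ALPHA = 0.3  # a conventional assumption for capital's share of income

def tfp(output: float, capital: float, labor: float, alpha: float = ALPHA) -> float:
    return output / (capital ** alpha * labor ** (1 - alpha))

# Two made-up periods: output doubles while both inputs grow only 50%,
# so the unexplained residual (TFP) must have grown by 2 / 1.5.
a_then = tfp(output=100.0, capital=300.0, labor=100.0)
a_now = tfp(output=200.0, capital=450.0, labor=150.0)
print(f"TFP growth factor: {a_now / a_then:.2f}")  # prints "TFP growth factor: 1.33"
```

Because both inputs grow by the same 50%, the residual ratio collapses to 2/1.5 regardless of alpha, which is exactly the sense in which TFP measures what the inputs can’t explain.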

There are a couple versions of what TFP is. Economists usually think it’s some combination of:

  1. More education: in 1900, around 51% of people ages 5-19 were enrolled in school. In the 90s, the number topped out above 90%.
  2. More human capital and trust: if you know your employees and business partners won’t slack off or try to rob you, you can save money on supervisors and Pinkertons.
  3. Better technology—a million current dollars worth of factory equipment or trains goes a lot further than it did a century ago.

Each of these is unsatisfying in its own way. Take schooling: there’s ample evidence that schooling is more about demonstrating that you have traits employers desire than about creating those traits. While some school is useful, we’ve reached the point where everyone smart enough for high school has a shot at spending at least a few years(’ worth of tuition) at college. If education caused higher TFP growth, we’d see productivity growth accelerating throughout the twentieth century. Instead, we see a decline after the 1970s.

Human capital and trust are surely part of the equation. It’s hard to build a big enterprise in a country where bigness makes you a target for shakedowns. Look at Russia: the Mark Zuckerberg of Russia doesn’t live in Russia’s Silicon Valley; he lives in Dubai. He doesn’t own much of the Facebook of Russia, either; a friend of a friend of Putin’s bought it from him.

Trust is the better theory, because we’ve seen serious declines in trust and social cohesion in the last generation. The timing works, too: the US was pretty monocultural (relatively) in the first half of the century, had a surge of disorder in the 60s and 70s, and has evolved into a tense Cold War-style standoff today. But if that were the dominant explanation, you’d see homogeneous countries like Japan put up excellent numbers. Instead, Japanese TFP grew rapidly until 1970, and has been down and then flat since. Weird!

Technology is the best explanation, but “technology” is not one monolithic explanation. It’s thousands and thousands of tiny innovations, all of which slowly nudge productivity growth upwards. And, again, we have to explain the curve of TFP: higher growth from the 30s through the 70s, slower growth ever since. We can explain the slowdown in a couple ways: maybe the world gradually got more regulated, or we slowly picked the low-hanging fruit. But what of the acceleration? If low-hanging fruit is the issue, there’s always more of it in the past.

What gives?

The Top 1% of the Middle 90%

I disagree with the alt-right on a whole lot, but give a broken clock credit where credit is due: I blame the Jews. Specifically, I believe that the best explanation for the growth in TFP, and the subsequent slowdown—and a few other phenomena I’ll get to—is the fact that from the 1930s through the 1950s, elite schools dropped their aggressively discriminatory admissions policies, and switched to admitting students based on objective measures of talent. Another way of saying this is that they started admitting Jews, and another way of saying that is that they stopped discriminating against a group whose IQ is about a standard deviation above the mean. Malcolm Gladwell, in another broken-clock moment, has a great piece about this:

In 1905, Harvard College adopted the College Entrance Examination Board tests as the principal basis for admission, which meant that virtually any academically gifted high-school senior who could afford a private college had a straightforward shot at attending. By 1908, the freshman class was seven per cent Jewish, nine per cent Catholic, and forty-five per cent from public schools, an astonishing transformation for a school that historically had been the preserve of the New England boarding-school complex known in the admissions world as St. Grottlesex.

The difficult part, however, was coming up with a way of keeping Jews out, because as a group they were academically superior to everyone else. Lowell’s first idea–a quota limiting Jews to fifteen per cent of the student body–was roundly criticized. Lowell tried restricting the number of scholarships given to Jewish students, and made an effort to bring in students from public schools in the West, where there were fewer Jews. Neither strategy worked. Finally, Lowell–and his counterparts at Yale and Princeton–realized that if a definition of merit based on academic prowess was leading to the wrong kind of student, the solution was to change the definition of merit.

The admissions office at Harvard became much more interested in the details of an applicant’s personal life. Lowell told his admissions officers to elicit information about the “character” of candidates from “persons who know the applicants well,” and so the letter of reference became mandatory. Harvard started asking applicants to provide a photograph. Candidates had to write personal essays, demonstrating their aptitude for leadership, and list their extracurricular activities. “Starting in the fall of 1922,” Karabel writes, “applicants were required to answer questions on ‘Race and Color,’ ‘Religious Preference,’ ‘Maiden Name of Mother,’ ‘Birthplace of Father,’ and ‘What change, if any, has been made since birth in your own name or that of your father? (Explain fully.)’”

All very socially acceptable in 1922. Not so socially acceptable post-1945. James Conant, Lowell’s successor, slowly eased restrictions, and eventually the quota dissolved. Now, Harvard admits Jews, Blacks, Hispanics, a carefully-pruned subset of qualified Asians, Catholics, Atheists, Marxists, Republicans who turn their time as a Republican at Harvard into their entire identity, and pretty much anyone else with extraordinarily high standardized test scores and GPAs. They admit the smartest and hardest-working 18-year-olds in the country, basically.

This didn’t magically change the number of well-educated people. Harvard undergraduate enrollment was about 1,100 people in the early 1940s and about 1,200 in 1975. What it did, though, was give talented people with poor pedigrees much more access to high-status, high-impact jobs where they could quickly contribute. (Where were all the talented Jewish students going back when Harvard wouldn’t have them? Fun trivia question: CCNY had eight Nobel Prize-winning alumni, all of whom graduated between 1933 and 1954.)
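
The one-standard-deviation claim above does quiet work at the selective margin: with a high cutoff, a modest mean shift produces large overrepresentation. The sketch below is pure normal-distribution arithmetic; the distributional claim itself is the essay’s assumption, not something the code establishes:

```python
# How a one-standard-deviation mean difference plays out at a highly
# selective cutoff. Pure normal-distribution arithmetic; the
# distributions themselves are the essay's assumption.
from math import erf, sqrt

def share_above(cutoff_sd: float, group_mean_sd: float = 0.0) -> float:
    """P(X > cutoff) for X ~ Normal(group_mean_sd, 1), in SD units."""
    z = cutoff_sd - group_mean_sd
    return 0.5 * (1 - erf(z / sqrt(2)))

base = share_above(2.0)                        # population clearing a +2 SD bar
shifted = share_above(2.0, group_mean_sd=1.0)  # group with a +1 SD mean
print(f"{base:.2%} vs {shifted:.2%} ({shifted / base:.0f}x overrepresentation)")
# prints "2.28% vs 15.87% (7x overrepresentation)"
```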

History gives us frustratingly small sample sizes. N is usually around 1. But we do have another test of what happens when a country integrates a high-IQ minority, then changes its mind: in early twentieth-century Germany, German Jews were highly integrated into German society. How highly-integrated? During the First World War, Jewish investment banks in the US got boycotted by clients—because it was assumed that they were loyal to the Kaiser. Even into the 1930s, there were Jewish Germans who supported Hitler, on the grounds that he was the only person who could defeat the communists, and anyway the Jew-baiting was just a way to appeal to Johann Q. Öffentlichkeit, not a serious policy proposal.

19th century Germany had a stellar academic reputation (I don’t have good information on how much of this was Jews, though). Since the Second World War, they’ve done… okay. But I tend to like Steve Sailer’s theory about Rammstein’s Amerika, that it’s a wistful song about how Germany, not America, should be the center of the scholarly and technological universe.

Midcentury Meritocracy

Of course, scrappy outsiders didn’t gatecrash the WASP party instantly. There was a long transition period, during which some fields bifurcated into mostly-WASP and mostly-not companies.

In clothing, for example, gentiles owned the big factories (to run a factory with a lot of expensive capital equipment, it helps to inherit money and inherit a close relationship with a banker). But the low-margin, asset-light middlemen were disproportionately Jewish. In advertising, WASPs dominated brand ads—the kind you buy to get some vague halo of prestige—while direct-response firms who actually got hired and fired based on the sales they could produce were more Jewish and Catholic. In finance, Morgan Stanley didn’t hire a single Jewish employee until 1963(!), even though J.P. Morgan himself had been chummy with one of the Salomons of Salomon Bros. Once again, the WASPs ruled the part of the business that was driven by connections (underwriting large bond issues from blue-chip companies), while Jewish firms excelled in market-making, smaller company underwriting, and other subfields where there’s skin in the game. As Michael Lewis puts it in Liar’s Poker,

In 1979 a good guess at who would revolutionize finance in the coming decade would have been made as follows: Search the unfashionable corner of Wall Street; eliminate everyone who appears to have just emerged from a Brooks Brothers catalog, everyone who belongs or claims to belong to exclusive clubs, and everyone who comes from a good WASP family. (Among the leftovers would have been not only Milken and Ranieri but Joseph Perella and Bruce Wasserstein of First Boston, the leaders in corporate take-overs and, coincidentally, the other two men who helped Ronald Perelman chase Salomon Brothers.)

This, by the way, is a great heuristic for sanity-checking racist hiring, or nepotistic hiring (same thing, smaller family). The people being discriminated against will sort themselves into hard-to-fake, easy-to-quantify fields like engineering or sales. Nepotistic hires will cluster in fields that are impossible to judge (“strategy”), or where success comes from making few mistakes rather than from contributing something new and excellent (HR, PR).

In the arts, the picture is blurrier. Non-WASP writers were plenty successful at both prestigious and popular writing, so the same dynamic didn’t play out in precisely the same way. Perhaps we can blame Playboy: by packaging naked breasts together with fine writing, Hugh Hefner managed to publish what was, by orders of magnitude, the world’s highest-circulation literary magazine, thus blurring the boundaries between mass market and hoity-toity:

By 1967, the magazine had gold-plated contributors like Vladimir Nabokov, James Baldwin and Ray Bradbury; a booming circulation of 4 million; and, to a remarkable degree, mainstream acceptance.

Lapping It

There are two puzzles in late twentieth century American economics: why did productivity growth slow down, and why did intergenerational income mobility slow, too? The mid-century meritocratic sort, coupled with heritability of intelligence, conscientiousness, openness, and other success-maximizing traits, explains both. In 1960, a smart kid from a poor family could get into Princeton on test scores alone, and go on to have a good career. By 2018, nearly all the smart kids are the children of people who got accurately sorted early. Go down the Forbes 400, and try to find someone born after 1960 who grew up poor. I’ll tell you what you’ll find: second-generation immigrants. The only person I can think of who was born after the Sort, grew up poor, and got rich is Marc Andreessen. And even then, it’s hard to tell from his background the extent to which his parents were poor, as opposed to practicing a neurotically extreme variant of Protestant asceticism.

Compare recent Physics Nobelists to a few generations ago.

  • Donna Strickland, 2018: daughter of an electrical engineer and an English teacher.
  • Arthur Ashkin, 2018: son of Russian immigrants; dad ran a dental laboratory (and his older brother worked on the Manhattan Project).
  • Barry Barish, 2017: unclear what his dad did; his mom had a college scholarship but wasn’t allowed to attend.
  • Kip Thorne, 2017: his dad was a chemist, and his mom was the first woman to earn a PhD in Economics at her alma mater.
  • Rainer Weiss, 2017: son of a “physician, neurologist, and psychoanalyst,” per Wikipedia, and an actress.

Let’s try a half-century earlier:

  • Richard Feynman, 1965: dad was a sales manager.
  • Julian Schwinger, 1965: dad was a garment manufacturer.
  • Shin’ichirō Tomonaga, 1965: dad was a philosopher (I didn’t say zero sorting, just less).
  • Charles Townes, 1964: son of a lawyer.
  • Alexander Prokhorov, 1964: Wikipedia says his parents were “revolutionaries.”

You see some elite children of elite parents here and there in the past, but plenty of upward mobility. In the more recent crop, everyone is either the child of recent immigrants or the child of upper middle-class and above parents.

If you start measuring things from the 1960s, you see a collapse in social mobility and a coincident drop in productivity growth. But go back another century, and you see a coherent story: we used to give people authority based on who their parents were, but in the early twentieth century we made a concerted effort to find all the smart people and give them prestigious credentials and good jobs. Now, almost all the smart people were discovered a generation or two before they were born; if we want productivity growth and intergenerational mobility to re-accelerate, we’ll have to find some new force to make it happen.

When it comes to productivity growth, I’m worried, and I believe it’s important to reverse the trend. But intergenerational inequality is another matter: it shifted, once, and then it settled. To make the Fifties happen again, you need another massive untapped reservoir of smart people: somehow, you have to find the smart people Harvard has overlooked in its century-long quest to admit the thousand people most likely to earn a gazillion dollars and donate some of it to Harvard.

Good luck with that.

Team Trump, Trumpism, and Trump

The establishment in this country is stuck in early November, 2016. Donald Trump is a bad dream, but the bad dream is almost over: soon, DC will once again be ruled by a member of the Bush and/or Clinton dynasties, and all will be right in the Beltway once more.

For those of us who have woken up, understanding the Trump phenomenon is the most important problem in politics. There are three basic models you can use to explain Trump’s victory: Team Trump, Trumpism, and Trump. The left and the establishment right (but I repeat myself) suffer from a halo/horns effect: if they don’t like Donald’s style, they won’t like his policies; if they don’t like his policies, they don’t think much of his allies either.

This is, I’m sorry to say, pretty stupid. I get it: you don’t like Donald Trump. But, Trump won, and there’s a sort of conservation principle in play, where every time you get further evidence that he is, say, a poor public speaker, that’s further evidence that either his policies were more appealing than you thought, or his team was better than it looked.

Establishmentarians try to cheat by using Russian hacking as a deus ex machina, as if Russia hacked white Americans’ serotonin production to make them start killing themselves well before they elected Trump—as if Russia hacked Nielsen for years to make it appear that Trump was a natural on television with a large and devoted following.

But I always say: don’t get mad; get inside your enemy’s OODA loop and then get even. The Russian conspiracy theory is in fact a subset of my framework; Putin is just a key member of Team Trump.

Team Trump

The day after election day was a very bad day for some poor apparatchik, who had to tell Vladimir Putin that, despite lots of hard work, he had colossally fucked up, and Donald Trump had actually won. We will never know how hard Russia tried, or what they wanted, but let’s consider two possible outcomes in 2016:

  1. The extremely loose-cannon Donald Trump becomes President, and does god knows what over the next four years. Maybe he’s Vlad’s best buddy; maybe he randomly decides that Russia should start paying tribute to Mongolia again. Who knows?
  2. Donald Trump wins roughly 268 electoral votes, and parlays his enormous media presence into a nonstop campaign against Hillary Clinton’s legitimacy, such that the biggest accomplishment of her entire presidency is that @realDonaldTrump has an even bigger follower lead over @POTUS.

More on this in a later post, but: it’s abundantly clear that Russia’s geopolitical ambitions are defensive with respect to America, not offensive. Anything they can do to keep America weak is worth doing, and the two best things you can do to keep America weak, circa 2016, are putting Hillary Clinton in charge and giving Trump reason to think he deserves to be in charge instead.

Team Trump is more than just Vladimir Putin and a squad of spirited bot writers and for-profit trolls. He also had Michael Cohen, the most divorce lawyer-looking guy ever to practice law; Paul Manafort, who even Washington DC lobbyists thought was an amoral scumbag; Roger Stone, a dirty trickster who bragged about playing dirty tricks (and, incidentally, once ran an ad in a swingers magazine, using his real email address); and on down the line. If DC is a giant high school, Trump did 100% of his recruiting from the Rejects Table in the cafeteria.

And yet, anyone who has kept in touch with their high school friends has probably noticed that the rejects are, if not more successful than average, at least higher variance.

Every establishment goes through a life cycle where the founders are high-variance, and the followers are lower and lower-variance. The more important the institution, the more it prizes stability over skill. But even stability is something smart flakes can effectively fake. So we end up with this paradox where DC is full of boring people, and of talented people who apply their talents to appearing boring.

If you’re optimizing for following a well-trodden path to the Presidential nomination, you should absolutely hire boring grinds whose resumes consist entirely of institutions with great name recognition. But if you want to beat the odds and you know you’re likely to lose, you might as well go for broke: Roger Stone might have a dozen ways to ruin your campaign and one way to save it, but that’s better than hiring Chad Van Der Beltway, who is absolutely guaranteed to take you from 3% in the polls all the way up to 3.5%.

This is not some brilliant contrarian strategy on Trump’s part. This is Trump and everyone else merely responding to incentives. When Trump started running, he wasn’t especially prepared, so all the establishment talent was already working for the establishment. In the early days of his campaign, Trump still looked like a sideshow, which meant that mocking Trump was a way for Republicans to cheaply virtue-signal. Trump, the tell-all memoirists all tell us, values loyalty. He’s reluctant to hire someone who’s dissed him in the past or seems likely to in the future. (As Scaramucci demonstrates, this is a good instinct.) So in the first three months or so of the campaign, every Republican operative took themselves out of the running unless either a) they were too busy doing real work to care about elections, or b) they were on board with his views.

What is Trumpism?

Academics invent fake political views, but every genuine political movement is the result of some great leader just doing whatever makes sense to them, after which a bunch of smart political scientists start figuring out what the pattern is. Xi Jinping Thought is not something Xi Jinping thought up. American Affairs, The Journal of American Greatness, and a few other outlets have tried to articulate a coherent version of Trumpism, which usually goes like this:

  1. Long block quotes from Strauss, Schmitt, Burnham, or somebody like that.
  2. ???
  3. MAGA

I believe the core of Trumpism can be summed up in two lightly-contradictory precepts. First: America has overpromised in some broad, off-the-balance-sheet sense. And second: we’d be a great country again if we acted like just another country, instead of trying to rule the world.

If you look at a lot of American viewpoints from a non-American perspective, they’re attempts to universalize stuff that’s really particular to us. We act like “freedom of religion” is some kind of universal value, but in lots of the world it’s simply not—many branches of Islam, for example, don’t recognize the legitimacy of the state except in the sense that it helps to enforce Islamic law. The current Pope doesn’t emphasize this much, but “separation of church and state” has been specifically condemned by the church; it’s the heresy of Americanism.

“Free Trade” is another one of these pseudo-universalist terms that is actually a very American term. I recently read a fascinating anecdote in Steve Coll’s Private Empire about the US State Department trying to persuade the Chinese that there’s no such thing as a strategic interest in oil, since oil is so cheap to transport. As long as you have a port, you can ship oil to your country for about a dollar a barrel, so it’s ridiculous to get your own supply. Of course, the Chinese are listening to this, and what they’re hearing is “You don’t need your own supply of oil unless you ever plan to risk pissing off the US Navy,” which in Mandarin translates directly to “You need your own supply of oil.”

American foreign policy reminds me of many other times Americans have flubbed the localization of their products. Facebook allegedly thought West Africa was full of bisexuals, because everyone said they were “Interested In” both men and women—it being out of the question that you’d have, much less talk about, any sexual orientation other than straight. Users just thought “Interested in” was a broad question with some oddly specific answers. Trump knows what sells, and where; he didn’t try marketing Trump Steaks in India.

There’s a theory about marketing that says you mostly market by identifying and exploiting people’s doubts about themselves. Trump’s a master of that, because he identified some very reasonable doubts: America can barely be all things to all Americans, much less to another six and a half billion people we don’t understand all that well.


What do we make of Trump, the person? If hiring losers and weirdos were the secret to success, we’d have a lot more successes. If practicing the politics of strategic unambition (only trying to achieve realistic goals, instead of fantasizing nonstop about the impossible) were enough, then every honest person who’s ever taught or attended public school would have a shot.

Even if these conditions are both necessary for a Trump presidency, they’re not sufficient. You need Trump, the guy.

Trump’s appeal is a huge validation of Schmitt’s Concept of the Political. Here’s a man who is indeed defined by his enemies.

How does Trump define his enemies? The intellectual wing of Trumpism tries to identify coherent principles behind the fights DJT picks, but this is just a way for people with high verbal IQs to show off, like when people try to find justifications in their religious texts for whatever the whim of the moment is (“Upon this rock I will build my church” is obviously a reference to the importance of guitars at mass.)

What’s actually going on is that Trump has an insane talent for bullshit detection, and he’s in a target-rich environment. Time after time, Trump picks a fight, I assume it’s stupid, and then I realize that—hey, now that you mention it, that guy’s right.

Take the Universal Postal Union, for example. At first this sounded like Trump randomly trying to find a way to fuck over Jeff Bezos, a task that appears to occupy more of his waking hours than I, personally, would want a President to devote to it. It turns out, though, that the postal treaty really was stupid and should have been fixed long ago. It made sense at a time when poor countries weren’t converging with rich countries, when it was in effect a subsidy for personal letters and little gifts. But convergence is happening, mostly due to the direct and indirect effects of China’s resurgence, and in that world it makes absolutely zero sense that shipping something to San Antonio from Shenzhen costs less than shipping it from Dallas.

When America was fat and happy, we could afford not to question our assumptions, even as they got more expensive by the year. But now that’s catching up with us. Trump can just pick one shibboleth at a time—the Post Office is good! Immigration is great! Free trade creates wealth for everyone!—and he can assume that it’s gone too far and needs to be reined in.

I still maintain the semi-autistic libertarian instincts of my youth, so I worry a bit when Trump goes after something like free trade. Doesn’t he know Economics 101? Given that he went to Wharton, yeah, he probably does. He probably knows that the Econ 101 models that support free trade make some assumptions about the fungibility of capital that aren’t quite valid, viz. in a Ricardian world, having 1/10th of a semiconductor industry is 1/10th as profitable as having 100% of one. But in market after market, China has found that they can master the labor-intensive end of the supply chain and work their way up.

That’s particularly profitable when you run the Beijing IP Two-Step. It works like this: you find an American company with some technology it would be nice to have. You tell them you will help them market their technology to a billion Chinese consumers, but unfortunately there’s a bit of red tape, and the fastest way to get to market is to form some sort of joint venture where your American IP gets transferred to a Chinese entity.

A few months later, by some miracle, Chinese knockoffs of this gadget are available wherever electronics are sold, at a low, low price that reflects the absence of any R&D costs to recoup. How do you say “shrug emoji” in Cantonese?

China’s GDP doesn’t really grow at 6.5% per year, but this is one of the ways they get close: by stealing our stuff in what is ultimately a mad-libs version of a Nigerian email scam.

Donald Trump probably doesn’t know this, but he certainly senses that something like it is true, just through the hyper-political skill of figuring out where his opponents seem to have the most doubt. Everyone at Cato knows what free trade is doing to American workers; they know we messed up on the cost side of the cost-benefit calculus, and we’ve got the overdose deaths to prove it.

Every time you’re confused by a Trump success, which should be often, apply this pattern and see if it has some explanatory power. It really does work, across all sorts of cases, even situations like North Korea.

Foggy Bottom experts: “North Korea will always have the upper hand, since their leader is more willing than ours to risk nuclear war.”

DJT: “Hold my Diet Coke.”

There’s a yin and yang to Trump’s skill at sensing weakness: he’s also hard for other political players to read, because he’s immune to shame. Normally, when a politician does something naughty, like avoiding taxes, calling someone fat, defending a guy who tried to seduce multiple teenage girls, having sex with a Playboy model while his (ex-model) wife is caring for his newborn son, etc., he is expected to apologize.

Donald Trump… does not meet expectations, here.

We’ve apparently forgotten what it’s like to deal with someone who just isn’t going to back down. Maybe establishment types in DC are just too soft and coddled, and more of them should spend a summer doing construction or something. There is this culture of admitting your moral failings and denying your intellectual failings, which Trump reverses; he’s perfectly willing to tacitly concede his mistakes by doing a complete 180 on a policy decision, but he’s utterly intransigent about the playmate stuff: he didn’t do it, and if he did, aren’t you jealous?

A Unified Theory of Trump

In history, our sample size is 1 at best. Sometimes it’s actually zero, because we’ve gotten the story so completely wrong we’re basically addressing fiction. Given this tiny sample size and the highly contingent nature of history, any sufficiently satisfactory theory is going to overfit.

What I’m doing in this post is not so much positing a theory of Trump as sketching out a range of possibilities. Ultimately, Trump won. The establishment is going to deal with that. They can deal with it by whining, begging their shrinks to up the dose, resolving to be Even More Ready For Her next time (assuming she’s not Ready For Hearse by then), experimenting with novel electoral strategies like calling your opponents racists, etc. Or they can sack up and figure out what Trump got right and they got wrong.

There’s a risk, though, o ye of the Beltway. As it turns out, the most coherent theory of why Trump won is that, in many ways and on many issues, Trump is actually right.

“Mr. President, Do You Know Who I Am?”

So. Some guy wrote the most scathing Glassdoor review of all time, and the New York Times ran it. Great job, guys. When the newspaper exfiltrates gossip from a double agent working in close proximity to the lawfully-elected President of the US, it really puts a damper on all those “deep state” conspiracy theories.

It’s sure to be a DC parlor game to guess the identity of the writer, at least for the next few months before he resigns to a life of book deals, speeches, and board seats.

“It used the word ‘lodestar’, so I’m sure it’s Pence.”

“No, it said ‘first principles,’ so I’m sure it’s McMaster.”

“Well,” *extremely bluecheck voice* “I’m sure the writer is a hero.”

This article is the distilled essence of Beltway Conservatism:

Don’t get me wrong. There are bright spots that the near-ceaseless negative coverage of the administration fails to capture: effective deregulation, historic tax reform, a more robust military and more…But these successes have come despite—not because of—the president’s leadership style…

The good stuff is all the writer’s doing; the bad stuff is all somebody else’s fault.

I will grant the author’s premise, here: the Trump administration has been a shocking success so far. We’ve started to fix our foolishly backwards corporate tax system. We’ve backed down from free trade orthodoxy. We’ve reversed the steady increase in the average cost to get new prescription drugs approved, something I didn’t think would happen until somebody went McVeigh on the FDA. And we’ve taken the first tentative steps towards having a real border again, which is to say that we’ve taken the first steps towards being a real country again.

The problem, apparently, is the style, not the substance. Sure, Trump is the most effective conservative President since at least Reagan, but does he need to be so tacky about it? Did he really need to cut in line, to skip the boring couple decades of political grunt work and go straight to the top?

Of course he did. I think all Trump fans can acknowledge that as a day-to-day operator, The Donald leaves some things to be desired. But a great leader isn’t defined by his flaws; he’s defined by the areas in which he excels. And Trump has excelled in two important areas.

First, he spent half of his career cultivating the skill that a modern democracy demands of the head of state, but usually doesn’t let him cultivate until he’s close to high office—Trump knows how to play himself on television.

Second, crucially, Trump has identified that something is going wrong in America. You can’t quite put your finger on it, although in his “American carnage” inaugural he came close. But it’s there. America has taken on too many responsibilities abroad, and neglected too many at home. The DC assembly line produces politicians who will make gentle course corrections, but that’s not what we need when we’re headed in the wrong direction.

All this is naturally outrageous to the establishment. They’ve spent their entire careers slowly climbing up the ranks, and this blustery guy from Queens decides to get revenge for a comedy roast by becoming leader of the free world. It’s even worse for establishment Republicans, because the guy they want to rage against is also the guy doing what they’ve merely talked about.

Some day, probably soon, the author of the NYT op-ed will be outed, by himself or somebody else.

The name will flash on the Fox News chyron.

And Donald Trump, the President of the United States, will look at it and say “…who?”

Follow @matthews_bd on Twitter, for more op-eds too hot for the Times.

The Truth About Theranos

Theranos, Through the Looking Glass

NEW YORK (AP) — Theranos, Inc., raised $5.4 billion in the largest IPO of 2022, and the largest healthcare IPO in history. Their founder, Elizabeth Holmes, 38, is now the world’s wealthiest self-made woman.

Theranos’s path to IPO has not been without setbacks. In 2020, the firm settled allegations that it had misled investors about the progress of its devices in 2014—the devices in question would not be fully functional until the next year. Theranos has declined to disclose the terms of the settlement, citing a confidentiality agreement.

In 2018, Holmes faced what her attorney, David Boies, referred to as “the reductio ad absurdum of the Me Too movement,” when a senior Theranos executive, Ramesh “Sunny” Balwani, accused Holmes of coercing him into a sexual relationship in exchange for keeping his job. Following weeks of back-and-forth allegations, including accusations of PR “dirty tricks” on both sides, Balwani dropped the case.

Shortly before his death last year, Theranos advisory board member Henry Kissinger penned a thank-you letter to Holmes upon resigning from the board. “Theranos,” he wrote, “is a shining example of what one person can accomplish when she truly believes in herself.”

At what point did Theranos become a fraud? It was definitely a fraud when employees were faking lab data and sending customers bogus test results. Was it a fraud during the prototype stage, when Elizabeth Holmes showed investors mockups and strongly implied that they were working products? When Theranos bought third-party testing equipment and claimed it was running tests on its own hardware? Was it a fraud from the day it started?

There’s not a clean answer. It’s like asking when someone became an alcoholic, or when a marriage fell apart. At some point, you can look back and say it happened, but at the time it’s such a slow slide as to be imperceptible.

What Theranos Did Wrong: The Reality Distortion Field

You might argue that Theranos became a fraud the first time Elizabeth Holmes lied, but this misunderstands the nature of animal spirits and human accomplishment. We’re lying to ourselves all the time, and the founders of great endeavors are preternaturally talented at telling themselves bigger lies. Founding a medical device company out of your Stanford dorm room is absurd, ridiculous, patently bizarre, like some Harvard sophomore deciding to rewire human interaction during winter break, or a moped hobbyist farm boy making his name synonymous with self-propelled vehicles.

The lifecycle of a startup goes something like this: you start out exaggerating your potential by a factor of 100:1 or more. This attracts naive investors, and similarly delusional employees. Over time, as you get more successful, the exaggerations shrink. By the time you’re at risk of being sued for securities fraud, your exaggerations are down to the level of white lies and omissions; nothing really actionable.

Basically, a successful business is a reputational check-kiting scheme that funds a legitimate project that, if all goes well, just barely pays back the losses from the original scam. The magic is that it’s a postmodern scam, where the person who is most deceived is the founder, and where the other victims of deception make out like bandits.

You can view the concept of equity, as opposed to debt, as a legal/social acknowledgement that this happens, and that it’s fair only insofar as the people who originally got conned get to take home a lot of money. Much the same way that marriage is an admission that humans are irredeemably horny, or the way fine cuisine admits that we’re gluttons and ought to force ourselves to slow down and appreciate what we’re eating.

All these institutions are bridles we put on vices to direct them in the right direction. But vices sometimes drag you astray. In the case of Elizabeth Holmes, she messed up to an uncommon degree, but in a quite common direction: Holmes was the world’s biggest wantrepreneur.

What’s a wantrepreneur? If you’ve been to conferences or meetups for work, you’ll recognize the type. A wantrepreneur has speaking gigs but doesn’t talk to clients. A wantrepreneur writes blog posts about business based on airport books about business, which they read on a plane, but not on a business trip. A wantrepreneur prints business cards before building a prototype. And every wantrepreneur worships Steve Jobs.

It’s unclear when Holmes started identifying with Jobs. While she took it to a kooky extreme, she’s just an unusual case of a phenomenon for which we can blame lazy journalists. The way it works is that if journalists associate you with some trait, whether it’s “knows a lot about topic X” or “is a snappy dresser” or “Will defend Donald Trump no matter what,” they learn to contact you when they want someone with that trait. As a consequence, whatever is the most distinctive part of your public persona eventually swallows the rest. Did Holmes invent her Jobs persona, or did the media do it for her? Both: like Amy Winehouse, the media turned her flaws into a marketing hook, and she exaggerated them in turn.

Some Jobs fans copy the idea of wearing a single outfit every day. Holmes copied the exact outfit Jobs wore. And I don’t think I’ve ever heard of creepy celebrity emulation going quite this far before:

The Audi had no license plates—another nod to Steve Jobs, who used to lease a new Mercedes every six months to avoid having plates.

Many people study Jobs’ later Apple career and try to emulate Apple’s management structure. Holmes seemed to just randomly copy-paste.

Elizabeth scheduled the meetings on Wednesdays after learning that Apple’s creative meetings with the agency had always been that day of the week. She told [an employee of Chiat Day, the ad agency she hired because Jobs once hired them] she admired the simplicity of Apple’s brand message and wanted to emulate it.

A wantrepreneur is already on the edge of self-parody, and Holmes was a parody of that.

The Jobs emulation is just a microcosm of what Theranos got wrong: the slavish imitation of the surface-level traits of startups.

The tragedy is that what Theranos got right was the deep truth that we need more hyper-ambitious startups, and that as a rule any truly ambitious startup faces such long odds that its CEO needs to be somewhat crazy to think it’s even worth attempting.

Cults, Religions, Companies

Here’s the single most important line in Bad Blood:

The ability to perform so many tests on just a drop or two of blood was something of a Holy Grail in the field of microfluidics. Thousands of researchers around the world in universities and industry had been pursuing this goal for more than two decades, ever since the Swiss scientist Andreas Manz had shown that the microfabrication techniques developed by the computer chip industry could be repurposed to make small channels that moved tiny volumes of fluids.

But it had remained beyond reach for a few basic reasons. The main one was that different classes of blood tests required vastly different methods. Once you’d used your micro blood sample to perform an immunoassay, there usually wasn’t enough blood left for the completely different set of lab techniques a general chemistry or hematology assay required. Another was that, while microfluidic chips could handle very small volumes, no one had yet figured out how to avoid losing some of the sample during its transfer to the chip. Losing a little bit of the blood sample didn’t matter much when it was large, but it became a big problem when it was tiny. To hear Elizabeth and Sunny tell it, Theranos had solved these and other difficulties—challenges that had bedeviled an entire branch of bioengineering research.

How far we have fallen. Back in the middle of the twentieth century, the idea that a company would exist to commercialize some technology previously thought impossible was just, well, what it meant to be a technology company. One year, the President says we’re going to the moon. A year later, a contractor building some impossibly tiny component to impossibly strict specifications is in business. A few years after that: liftoff, and IPO.

Now, the fact that Theranos was trying to do something amazingly difficult is itself damning. Stick to something reasonable, like a photo-based social network!

When you look at the technical limitations Theranos faced, you see a problem that requires a near-miraculous solution. And the role of the miracle, and of the inspired founder, ought not to be discounted in these matters.

Let’s compare Theranos to another startup, also an insanely ambitious project that required a solution to a longstanding (albeit obscure) technical problem: Bitcoin. Both Bitcoin and Theranos:

  1. Tackled a massive economic opportunity (blood testing and payments/savings);
  2. Had to solve serious technical problems to do so (running multiple blood tests on a small volume of blood; keeping a global ledger synchronized across every node while scaling up transactions);
  3. Had to solve the social problem of getting people hyped up about #1 long enough to actually solve #2; and
  4. Had ambitious but under-credentialed founders.

The jury’s out on Bitcoin, although it’s had most of the order-of-magnitude increases in value it needs to be worth as much as all the world’s currencies combined. However, we can agree that Bitcoin hasn’t failed in the way that Theranos has.

What’s notable here is that in the early days of Bitcoin, it had exactly the same tension that Theranos did: for people to be excited about working on the project, it had to sound much more plausible than it really was. In the early days of Bitcoin, skeptics abounded—“even if you can solve that, in six months you’ll be stuck solving this”—and the skeptics were fundamentally right. Bitcoin looked like a hard problem, and turned out to be a harder problem than it appeared to be. Had the early contributors known how much effort it would take, they probably wouldn’t have attempted it. Only through Satoshi’s ability to delude people did he create an institution that made those delusions real.

You can compare early Bitcoin/Theranos to a cult; Bitcoin is trying to evolve into more of a religion, while Theranos turned out to be Scientology. Smug atheists are right to point out that a religion is just a cult that’s been successful long enough to be boring, but they’re wrong to think this is a mortal blow—it just means that cults are a useful feature of the natural world for anyone who wants to spread a belief system.

This is why people who try to frame Bitcoin in terms of ponzi/pyramid schemes go astray (here’s a well thought-out example). Currencies, companies, social norms, religions—these all exist in an indeterminate state, where they were obviously scams in retrospect if they collapse, but were obviously going to succeed all along if they don’t. Why is a dollar worth a dollar? Because the recipient can be confident that the next recipient will also think it’s worth a dollar. Why isn’t the local Lutheran church going to make you drink poisoned Flavor Aid next weekend? Because they’ve lasted long enough to work through all the craziness. Why do they exist in the first place, though? Because Martin Luther was 100% willing to drink the Flavor Aid.

All this could lead to nihilistic PoMo cynicism. Not only is there no such thing as truth, even arbitrary principles have an expiration date! But that misunderstands the nature of the cult/religion or startup/success transition. There are certain traits that are like booster rockets; necessary to get off the ground, but jettisoned in order to reach a stable orbit.

Post-Mortem of an Empty Grave

Journalists have tried to turn Theranos into an indictment of startup culture, but this only works on an unsophisticated audience. Theranos raised money from DFJ early on, but a large proportion of their investors were far from the usual Silicon Valley suspects—Oracle’s founder put in some money, but so did Rupert Murdoch and the CEO of Wells Fargo. Less a VC Who’s Who and more of a VC Uh, Who?

Some writers try to somehow tie Elizabeth Holmes into the sexism in tech story, but it’s inconvenient that the biggest startup blowup in history was organized by a woman.

So what they’re left with is the argument that Theranos sold an impossible dream and wasted vast sums of money, to which I say: great. Let’s have more of that. Let’s have way, way, way more. American productivity growth has been in decline since the 70s. Longer life expectancies will increase our dependency ratio; we’re behind other countries, but we’re going to catch up fast. All of this means that we need fundamental advances in technology, and we need them yesterday.

When I got to the part of Bad Blood that talked about what a huge breakthrough Theranos would have been, if it hadn’t been a fraud, it reminded me of another big project that sent a couple billion dollars down the hatch on a bold technical venture:

In an enterprise such as the building of the atomic bomb the difference between ideas, hopes, suggestions and theoretical calculations, and solid numbers based on measurement, is paramount. All the committees, the politicking and the plans would have come to naught if a few unpredictable nuclear cross sections had been different from what they are by a factor of two.

  • Emilio Segrè, quoted in Richard Rhodes’ The Making of the Atomic Bomb

Imagine how much boldness it takes to run the numbers and realize that there’s some chance you’ll expend titanic effort, waste vast sums of money, and find out that physical reality simply doesn’t allow you to build the gadget you wanted to build. Now imagine knowing this risk and proceeding anyway. There’s an adverse selection problem in two directions: if you look at a potential business and see no competition, you might ask “How hard can it be?” Try it, and you’ll find that you’re in for a lot of competition, and that the fundamental problem you’re solving is a social problem, not a technology problem. Consider the opposite approach: when you think about a business and say “There’s no way that’s possible, at least not in this universe”—investigate. Your only competitors will be lunatics, so you’re guaranteed a monopoly if you survive.

A few people know the stakes, and are putting money where their mouths are. Peter Thiel has written about the need for bigger advances outside of software. Marc Andreessen, who likes to say that “software is eating the world,” has also put money to work in drones, robotics, and spaceflight. Josh Wolfe runs a fund that’s all-in on this. (He’s less famous than the other two names, for now.)

But we need more. Much more. Few startups are so ambitious that their failure would constitute front-page news; in general, if a startup dies it dies from fraud or excess overhead—the corporate equivalents of death from cirrhosis and diabetes. Be a hero! Find a business plan that could actually get you killed.

The exact failure of Theranos could have been averted earlier. Maybe they could have called it quits after the Series B, gotten acqui-hired by a bigger medical device company, eventually tried something else. But Theranos-sized failures, Theranos-looking failures—those we need. We live in a world where when you say “technology company” you think guys using MacBooks, not guys using soldering irons, centrifuges, fissile materials, or anything potentially deadly. Tech is increasingly a bits-only world where the biggest risks can be resolved by rolling back to a previous version, or restoring a database from backup.

An absence of boondoggles bespeaks an absence of boldness. Let a thousand Holmeses bloom!

They promised us flying cars, but we got 280 characters. Follow me: @matthews_bd.

The Prince and The Suit

The Prince

Let’s set the stage. The year is 1513. Niccolò Machiavelli, once a promising young bureaucrat, has been imprisoned, tortured, and exiled after his patron lost a war. Stuck on the farm, he spends his time reading Livy and writing lively letters to friends, often on the topics of ancient history, recent Italian history, and the art of governance. Some of those letters later evolve into The Prince.

Imagine: underemployed for stupid political reasons, constantly reading old books, writing long diatribes on the importance of brutally effective absolutist government—Machiavelli was the world’s first neoreactionary blogger.

You can see why The Prince is in print five centuries later. It’s a quick read, and it indulges in your basest political instincts. “Men ought either to be well treated or crushed because they can revenge themselves of lighter injuries, but of more serious ones they cannot.” Pow! “Destruction caused by [mercenaries] is put off only as long as the attack lasts. In peace one is robbed by them, and in war by the enemy.” Hard to find good help these days. “If everything is considered carefully, it will be found that something which looks like virtue, if followed would be his ruin; while something else, which looks wrong, may bring him security and wealth.” I, too, find my own flaws charming.

Dictators and wannabe dictators eat it up.

Of course, it’s possible that Machiavelli is joking. Or at least, it’s possible to read The Prince on at least three levels.

  1. A completely earnest book about how leaders ought to behave
  2. A parody of people who earnestly rationalize the way awful leaders behave
  3. Something more subtle, like an argument that monarchies are too unstable to be trusted and only republics can thrive. (Machiavelli likes to mention how hard Republics are to govern, but in a fairly admiring way.)

It could even be some layered combination of the three—maybe Machiavelli hopes a smart leader will read between the lines and see that he really is as cynical as he’s pretending he’s only pretending to be. Sort of like Mike Judge’s Idiocracy; the joke is on the person who thinks the author is 100% kidding.

Machiavelli is worth reading, not for any one interpretation, but because his context is oddly close to our own. At the time that he was writing, Northern Italy was one of the richest places in the world, albeit one that suffered from persistently dysfunctional politics. The economic ground was shifting beneath their feet, though, as seafaring traders found cheaper trade routes to India and China. Like us, Machiavelli is setting down the rules even as the economic and technological fundamentals underpinning those rules are undergoing unprecedented changes.

Also, from one of his letters, there’s this:

When evening comes, I go back home, and go to my study. On the threshold, I take off my work clothes, covered in mud and filth, and I put on the clothes an ambassador would wear. Decently dressed, I enter the ancient courts of rulers who have long since died. There, I am warmly welcomed, and I feed on the only food I find nourishing and was born to savour. I am not ashamed to talk to them and ask them to explain their actions and they, out of kindness, answer me. Four hours go by without my feeling any anxiety. I forget every worry. I am no longer afraid of poverty or frightened of death. I live entirely through them.

(Emphasis added.)

The Suit

I reread The Prince in parallel with Michael Anton’s The Suit. Anton first came to my attention through The Flight 93 Election, an essay that took my vague fondness for Trump and put it into forceful words. Before that essay, I thought of myself as a Trump opponent opponent, but that’s what shifted me to proponent.

The irony that a Straussian wrote the best straightforwardly conservative defense of Trump is not lost on me, or on anyone else.

The Suit is a book with a conceit. It’s a parody of The Prince, not in the loose sense that it applies Machiavellian dicta to male attire, but in a fractal sense: same structure, same tone, lots of examples paraphrased (Machiavelli’s chapter 12, “How Many Soldiers There Are, And Concerning Mercenaries” becomes “How Many Silhouettes There Are, And Concerning Designer Suits”—as Machiavelli condemns mercenaries, so does Anton condemn designer suits).

Even down to the level of individual sentences, the parody continues: in a chapter that digresses into a history of late twentieth century Presidential attire, we have:

The first George Bush dressed formally, and—he thought—not at all dandified. Yet the people recognized in all those repp ties, sack suits, and linen hankies the quality known as preppyness. Because this reminded them of his patrician heritage, it fixed in their mind the impression that he did not understand or care about them, and so he was ruined. His son, learning from this error, dresses with the crisp formality of a CEO, but without a hint of preppyness, so that if he is ruined, it will not be because of his clothes.

All this talk of presidents’ outfits being limited by their perceived patrician upbringing maps to Machiavelli’s discussion of the paradox of Roman emperors, that the worse the emperor the longer he ruled, since bad emperors launch long wars, which keeps the troops too well-paid and occupied to bother with coups.

It’s a level of parodic depth only reached by Weird Al Yankovic, and I’ve never seen it done at book length.

The Suit closes with an Exhortation to Seize Dress and Save It From the Vulgarians (Machiavelli, naturally, ends The Prince with a chapter about saving Italy from the barbarians). And it’s a good point. As it turns out, the suit/dress shirt/pocket square/tie combo is, if not perfect, at least a local maximum—the result of centuries of evolutionary struggle between the desire to look conventional and the willingness of conventional-looking rich people to pay their tailor any sum necessary to make looking good feel comfortable.

I’m not in a good position to judge the advice. I have a tendency to dress at the lowest end of whatever’s socially acceptable. But… why? Cheap clothes look cheap, even if they’re comfortable. Nice clothes look great, and are often more comfortable (especially if you’ve been hitting the squat rack; you’re either stuck with Fat Guy Jeans or, basically, yoga pants made out of denim).

So, I’ve decided to step my wardrobe up a bit. It’s only honest—in a country that nearly elected Hillary and could have elected Bernie, we conservatives are the adults in the room and ought to dress accordingly.

It’s time to suit up, gentlemen. Let’s dress to oppress.

Follow @matthews_bd on Twitter, if it suits you.

AIDS: A Black Swan

Later, everybody agreed the baths should have been closed sooner; they agreed health education should have been more direct and more timely. And everybody also agreed blood banks should have tested blood sooner, and that a search for the AIDS virus should have been started sooner, and that scientists should have laid aside their petty intrigues. Everybody subsequently agreed that the news media should have offered better coverage of the epidemic much earlier, and that the federal government should have done much, much more. By the time everyone agreed to all this, however, it was too late. Instead people died. Tens of thousands of them.

  • Randy Shilts, And the Band Played On

Why do faggots have to fuck so fucking much?!

  • Larry Kramer, Faggots

How elegant! How rare! How gay! To think, one doesn’t have to pay.

  • Edward Gorey, The Evil Garden

AIDS has killed roughly 40 million people. Less than the 1918 flu, more than any other modern disease. Retrospectives on AIDS tend to agree that most of these deaths were preventable, but they differ over assigning blame. The two schools of thought are: you shouldn’t have done that, and you should have done more. They fall neatly into a right/left spectrum.

On the right, we talk about transmission vectors: ranked by risk of infection per exposure, blood transfusion and childbirth are at the top, but just below them are needle-sharing and unprotected anal sex. So Larry Kramer is right: there’s an easy solution here. Progressives argue that the disease was under-funded relative to the number of people infected, and that this was because Reagan didn’t like gays, or heroin addicts, or Haitians. You can compare AIDS to Legionnaires’ disease, whose first known outbreak happened to be among veterans. Note that the veterans were not, as far as we know, shooting up or having sex with each other. Legionnaires’ disease was addressed quite promptly by government authorities.

Both sides have a point, but there’s a more interesting argument here: AIDS spread because it fit into social and cultural gaps in America’s medical system; a counterfactual version of AIDS that didn’t have those traits wouldn’t have been so deadly. The gaps:

  1. Because of its long incubation period, the number of people known to have AIDS was always well below the number of people who had it and were spreading it. Funding per person looked higher than it was.
  2. The populations infected with AIDS were politically unsympathetic, particularly to the Republicans who controlled the executive branch during the 1980s.
  3. Anonymous, promiscuous sex was the Third Rail of gay politics.


The incubation period was the first thing that made AIDS hard to track. If you didn’t know AIDS existed, a young guy coming down with a weird cancer would just be a piece of trivia. Five of them, still trivia. Fifty, and you might start to notice patterns. Now, we know what fifty AIDS patients meant: it meant thousands of people who had AIDS and didn’t know it.

Some of the first people diagnosed with AIDS had sexual partner counts that look like typos. Gaëtan Dugas, a flight attendant, estimated 250 partners per year. With a 1% per-act transmission rate and a ten-year incubation period in which he’s asymptomatic but still sexually active, this implies that Dugas could have infected 25 other people. And those people were, of course, not celibate after Dugas.
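The arithmetic behind that estimate is worth spelling out. A minimal sketch, using the essay’s own rough figures (these are estimates from the text, not epidemiological data):

```python
# The essay's rough Dugas arithmetic, spelled out.
# All figures are the essay's estimates, not measured data.
partners_per_year = 250
years_infectious = 10    # asymptomatic but still sexually active
p_transmit = 0.01        # ~1% transmission risk per partner

# Expected number of secondary infections over the incubation period.
expected_infections = partners_per_year * years_infectious * p_transmit
print(expected_infections)  # 25.0
```

The point of the multiplication is that a long incubation period doesn’t just delay detection; it multiplies every other risk factor by a factor of ten.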

An illuminating and slightly counterintuitive piece of math is the Friendship Paradox, which holds that your friends, on average, have more friends than you. This is intuitive if you work backwards: someone is more likely to be your friend if they have lots of friends. Actually proving it is harder, but doable. The upshot is that for any social network, whether it’s friends of friends of friends or partners of partners of partners, the nodes linked to any one node tend to have more links than average. However promiscuous you were if you lived on Castro Street in 1979, at least some of the people you hooked up with were even more promiscuous than you.
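You can check the paradox numerically without proving anything. A toy sketch, using a random graph as a stand-in for a sexual network (the graph size and edge probability here are arbitrary choices, not data from the essay):

```python
# Toy demonstration of the Friendship Paradox on a random graph.
# Parameters are illustrative, not drawn from any real network.
import random

random.seed(42)
n, p = 500, 0.02  # 500 people; any pair is linked with 2% probability

# Build an undirected Erdos-Renyi random graph as adjacency sets.
adj = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            adj[i].add(j)
            adj[j].add(i)

# Mean degree: the average person's number of partners.
mean_degree = sum(len(adj[i]) for i in adj) / n

# Mean degree of a random *neighbor*: sample every (node, neighbor) pair,
# which oversamples high-degree nodes in proportion to their degree.
neighbor_degrees = [len(adj[j]) for i in adj for j in adj[i]]
mean_neighbor_degree = sum(neighbor_degrees) / len(neighbor_degrees)

print(mean_degree, mean_neighbor_degree)
# The neighbor average comes out higher: popular nodes appear in
# many neighbor lists, so they dominate the second average.
```

The same oversampling is exactly what makes a sexually transmitted disease spread faster than naive averages suggest: the infection, like the survey, reaches people in proportion to how connected they are.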

This explains what is otherwise mysterious about AIDS: how did the disease spread if the risk from any given sex act maxes out at 3%? For the disease to spread at all, the average person with AIDS needs more than 1/3%, or over 33, partners. The answer is that 33 partners was a busy month for some of these sexual Olympic athletes. One great data point from And the Band Played On: in the early 1980s, the CDC undertook to study what lifestyle features led gay men to get AIDS. To do this, they identified men with AIDS and several control groups. One control group was the patients’ straight friends; another was gay friends of the patients with whom the patients hadn’t had sex. And that second control group was basically empty. At the time, and in that place, gay men just didn’t meet each other without having sex.
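The threshold calculation is simple enough to write down. A minimal sketch of the essay’s reasoning (the 3% figure is the essay’s upper-end estimate; the condition is the standard requirement that each infected person infect at least one other):

```python
# Back-of-the-envelope epidemic threshold from the essay's numbers.
p_transmit = 0.03  # upper-end per-partner transmission risk (3%)

# For the epidemic to sustain itself, each infected person must on
# average infect at least one other: partners * p_transmit >= 1.
partners_needed = 1 / p_transmit
print(round(partners_needed, 1))  # 33.3
```

In epidemiological terms this is just the requirement that the basic reproduction number exceed one; the essay’s point is that only a heavily skewed partner distribution could clear a bar that high.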

All these dynamics are obvious now, but it would have been a lucky guess to know them in the early 1980s. Whatever else you can say about early AIDS researchers, you can forgive them for underestimating the size of the epidemic. The numbers—sex acts per year, partners per person—are, indeed, unbelievably high. In fact, if AIDS didn’t exist, I personally wouldn’t believe them.

This, by the way, is not something unique to gay men. Instead, relatively low sexual partner counts are something unique to societies with straight women. Heterosexual athletes and rock stars rack up impressive numbers of partners—and typical pornography viewing patterns show that, given the opportunity, straight men will compulsively switch “partners” at a pace limited only by their broadband connection.


Why didn’t Reagan do anything about AIDS? Because he didn’t particularly care. They weren’t his base, and their fans weren’t his base, either.

Why didn’t Democrats do anything? Because it would hand a cudgel to Reagan.

Normally, governments like to intervene on behalf of small groups that are suffering a lot. It plays well on election day. When those groups have partisan loyalties, though, it doesn’t pay at all: it wins zero votes, but gives the opposition something to rail against. (A cynic might view the last sixty years of Democratic policy with respect to black Americans as proof of this dynamic.)

Politics II: Revenge of the Poppers Pushers

But what about the gay lobby? Every special interest has an interest group, and those groups exist solely to lobby in favor of things that matter to the group but don’t matter much to anybody else.

This is where it gets interesting. Think of what it would mean to support a gay organization circa 1981. It would mean dirty looks from the neighbors, trouble at work, lost friendships, or worse. Of course, if you were gay, you had less to lose if you supported a gay-rights group. But you probably also had less, period.

There was only one group of people who were unafraid to identify as gay, and who had the means and willingness to financially assist gay rights. And they were the people who ran bathhouses. For younger readers, a “bathhouse” is basically an orgy. You show up, take off your clothes, and find someone to hook up with. Eye contact equals consent. Bathhouses were profitable; if you can set up a toll booth between men and sexual release, you can generally make a fortune. They sold memberships, booze, and poppers, and probably tacitly allowed the sale of harder stuff. There wasn’t a ton of competition, since straight people didn’t want to run them and bankers were not exactly thrilled to be involved. Imagine combining the sex business and the drug business, and doing it with a near-monopoly.

The effect of this was that among openly gay political activists, there was one and only one taboo: you couldn’t talk about the dangers of promiscuous unprotected sex, or poof goes your funding.

It was a political perfect storm: the indifferent right, the prudent left, and the co-opted special interests.

Why It Happened

In retrospect, the story of every disaster is a story of all the missed opportunities to avert it. Any of a dozen background checks or alert airport security guards could have averted 9/11. An early whistleblower could have stopped Enron. The Deepwater Horizon missed a few safety checks, but three separate emergency systems had to fail for the explosion to happen as it did. The Maginot Line was a flawless solution to almost exactly the right problem. In the modern world, where our sample sizes are huge and our checklists are lengthy, disaster is never about one thing going wrong; it’s about everything going wrong at once.

But another way to look at it is that a disaster fits the deficiencies of a system the way a key fits a lock. It wasn’t so much that precautions unluckily failed; it’s that, of all the possible AIDS-like epidemics, the one that broke through was the one whose shape exactly matched the crack it slipped through.

This is not a happy thought, but it’s an illuminating one. The bigger a problem is, the keener we are to look for someone to blame. Events like AIDS are impersonal and inevitable, though. Our institutions and culture were immunocompromised before that first inexplicable case of Kaposi’s sarcoma.

Follow @matthews_bd on Twitter for more of that sweet, sweet edginess.