
Inadequate Equilibria: A Summarized Remix

In late 2017, Inadequate Equilibria spread like wildfire across my Twitter feed. Having worked through it, I’d now easily rank this book as one of the most impactful I’ve ever read on the way I see the world and think about my career. It is quite a bit of work though — my first read-through left me mostly baffled, but scattered through the confusion were fragments of ideas that I found tremendously insightful and that hinted at a very different way of interpreting the world than I was used to. That held my interest long enough that I reread the book several times, each time resolving the intent and ideas to a finer degree, like a developing Polaroid. Nevertheless, with each reading, I felt that the abstruse writing made the book hard to understand and share with other people. That’s really unfortunate. So, as a way to pay that forward, I’ve decided to remix it — these are my notes, presented in complete sentences, using friendlier terms and assuming fewer conceptual prerequisites so that the book’s ideas reach a wider audience. I’m sure I’ll lose some technical precision around the edges, but hopefully I’ve got it mostly right.

The core thesis of Inadequate Equilibria is a bold perspective for interpreting the world: there are identifiable situations where the underlying systems don’t deserve your blind trust in the world’s competence at doing something. The book presents a collection of archetypal situations that illustrate different kinds of systemic failure; this type of failure is couched in a roughly quantifiable concept named “adequacy”. Within such systems, it’s possible for you to do better than the best option that’s broadly available, and you shouldn’t anxiously second-guess yourself when you identify such an opportunity. But this isn’t always the case: in some cases the world is highly competent; in other cases you may be able to do better only on a dimension that no one actually cares about; in still other cases there simply isn’t an exercisable mechanism for doing better.

The three economics concepts that underlie the book:

  • Efficiency: The extent to which no one can reliably predict a system’s future behavior well enough to beat it. The efficiency of a given system is different for different people — for example, the short-term US stock market is completely efficient for most people, in the sense that most people can’t reliably predict short-term behavior well enough to consistently make a profit; on the other hand, the short-term US stock market is not particularly efficient for a small population of traders and hedge fund managers who can consistently predict future behavior and make a profit. (Aside: Arguably, this is also true of the long-term US stock market, as economist Paul Samuelson noted when he quipped that “U.S. stock prices had predicted nine of the last five American recessions”).
  • Inexploitability: A generalization of efficiency to all systems that have “no free energy”. For example, predictability in a system like the US stock market would mean obvious profits which represent “free energy”. In an efficient system, behavior therefore can’t be predictable by definition. On the other hand, a system can be inefficient (i.e. many people may be able to predict behavior) but remain inexploitable (i.e. there’s no free energy for some reason). For example, it’s not possible to profit from a clearly-overpriced housing market that you’re not already bought into (such a market is considered inexploitable) because no mechanism exists for shorting it or otherwise betting that prices will decline, whereas it is possible to profit from (“exploit”) a clearly-underpriced housing market by buying houses in that market.
  • Adequacy: How effectively a system is converting one resource into another (e.g. medical researchers converting dollars into additional years-of-quality-life) and how that conversion rate compares to other possible rates that the system could attain if the people in it behaved differently. An inadequate system is one whose actual conversion rate is far from the best possible rate. In such systems, there are “obvious” things that are not getting done (see the sketch just after this list).
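
To make the adequacy idea concrete, here’s a minimal sketch in Python. The numbers and the QALY scenario are invented for illustration (they’re not from the book); the point is just that adequacy compares the conversion rate a system actually achieves against the rate it could plausibly attain.

```python
# Hypothetical illustration of "adequacy" as a conversion-rate gap.
# All figures are invented for the example; they are not from the book.

def adequacy_gap(actual_output, attainable_output):
    """Ratio of what a system could produce to what it actually produces
    from the same pool of resources. 1.0 means fully adequate; larger
    values mean more "obvious" value is being left on the table."""
    return attainable_output / actual_output

# Example: a made-up funding system turning dollars into
# quality-adjusted life years (QALYs).
budget_dollars = 100_000_000
actual_qalys = 2_000        # what the system produces today
attainable_qalys = 10_000   # what it could produce if incentives were aligned

print(f"actual rate:     {budget_dollars / actual_qalys:,.0f} $/QALY")
print(f"attainable rate: {budget_dollars / attainable_qalys:,.0f} $/QALY")
print(f"adequacy gap:    {adequacy_gap(actual_qalys, attainable_qalys):.1f}x")
```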

Despite everyone already doing the best they can[1], the sub-optimal output of an inadequate system results from limits imposed by misaligned incentives, some kind of systemic blockage, and/or the need for large-scale coordination (where everyone in the system would need to change their behavior at once). These limits are often insurmountable by motivated individuals within the system, even when many or most of the people in the system would prefer to perform at a higher conversion rate; such systems are thus inexploitable.

In some cases, you might be able to perform better than inadequate systems at a personal or local scale, where you can exercise a level of competence that isn’t constrained by large-scale incentives or competed away. In other words, you might credibly be able to do better than the world at large. More on this later.

Systemic brokenness

Very inadequate systems tend to appear like massive overlooked opportunities, situations where seemingly-obvious things aren’t happening or resource conversion is happening at a shockingly inefficient rate. These systems can be broadly characterized by three root causes:

First, there are situations where decision-makers are not beneficiaries (i.e. decision-makers have misaligned incentives). For example:

  • In the scientific community, researchers and grant-makers are not incentivized to confirm and replicate existing studies — prestige accrues to those who make new discoveries, even though society would greatly benefit from the confirmation and refinement of existing studies. In adequacy terms, the conversion rate from research dollars or researcher hours to durable experimental findings is low.
  • In the 1990s, the Japanese central bank was constraining the growth of its economy (on the order of trillions of dollars) by limiting the money supply (in contrast, Japan has experienced significant growth since that policy was relaxed in the early 2010s). As it happened, the compensation of Japanese central bankers was not directly tied to the performance of the broader economy; at the same time, keeping the money supply tight is conventionally seen as a virtuous course of action for central bankers. Therefore, Japanese central bankers were not properly incentivized to do the ostensibly obvious thing.

These situations are often inexploitable because they’re bottlenecked on a central decision-maker.

Second, there are situations of asymmetric information, where decision-makers can’t reliably learn the information they need to make the right decisions even though that information exists. For example:

  • The lemons problem generalizes to the “indignation market” of people pointing out problems — for most problems that you might come across, you aren’t able to say “this problem is really important” and make a trustworthy request for resources or action. Other people can’t reliably determine whether you’ve actually identified a real problem worth solving or are merely complaining about a personal inconvenience. There’s no free energy in the indignation market.
  • Sticky traditions and cultural institutions persist in part because participants believe that other participants believe in the system, even though each individual participant may be ambivalent or would actually prefer not to participate. Startups go to zero if they can’t raise money from investors before becoming self-sustaining, so most startup investors fund companies they think other investors will want to fund. In the spirit of the Gell-Mann Amnesia Effect, purported experts and “influencers” (in a very broad sense of the word) remain influential even if everyone knows they’re not perfect. Cultural concepts feel taboo until they break through the Overton window and suddenly become acceptable discourse, masking the underlying shifts in individual people’s thoughts (which may or may not reach a tipping point until someone tests the Overton window).

These situations are often inexploitable because there’s no free energy in their respective information markets.

Third, there are systems that are simultaneously broken in multiple places that would require coordinated action to move to a better state. For example:

  • Many career paths are stuck in a signaling equilibrium in the form of institutions that grant prestige. Top universities have historically produced high-performing people, and as a result employers strongly prefer candidates who have graduated from top universities. Despite the inordinate costs in money and years-of-life, this demand incentivizes aspiring high-performers to go to top universities, resulting in systemic stickiness (technically an “inferior Nash equilibrium”; see the toy payoff sketch below). Similarly, researchers want to be published in a handful of prestigious journals, allowing those journals to enforce procedures and reporting standards that may be suboptimal. In either case, deviators from the system appear unconventional and face a degree of suspicion and ostracism at existing institutions: “the whole system is a stable equilibrium that nobody can unilaterally defy except at cost to themselves.” The entire system is so locked in place that institutions can extract significant rents from participants — for example, Elsevier owns many prestigious journals and charges handsomely for access to them, but individual researchers continue accessing and publishing in those journals because they don’t want to switch to other journals with less prestige.
  • US presidential elections converge on two candidates from the leading parties because the US voting system essentially throws away votes for non-frontrunner candidates, nudging voters to pick the frontrunner they dislike the least when they would in fact prefer someone else. Changing the election system would require a massive coordinated effort across the country, and it would be exceedingly difficult to convince a large portion of the population that your idea is better than any of the myriad other proposals to reform elections out there.

These situations are often inexploitable because there’s no free energy in explanations — people are used to a haze of plausible-sounding arguments in favor of a wide variety of proposals, and it’s unclear that your proposal is any better than any other. Furthermore, there are many people who have invested a lot of time, money, and effort into the traditional shibboleths and would be loath to see someone else go through a less-painful process.
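
The “inferior Nash equilibrium” mentioned above has a simple game-theoretic shape. Here’s a toy Python sketch with invented payoffs (not from the book): two players each choose to stick with an “old” institution or switch to a “new” one. Both sticking and both switching are stable, but the lone deviator always loses, so the worse equilibrium can persist indefinitely.

```python
# Toy coordination game illustrating an inferior Nash equilibrium.
# Payoff numbers are invented for illustration.
from itertools import product

ACTIONS = ["old", "new"]

# PAYOFFS[(my_action, other_action)] = my payoff
PAYOFFS = {
    ("old", "old"): 2,  # status quo: mediocre but safe
    ("new", "new"): 3,  # coordinated switch: better for everyone
    ("old", "new"): 2,  # the other player bears the switching cost alone
    ("new", "old"): 0,  # lone deviator is penalized (lost prestige, ostracism)
}

def is_nash(a, b):
    """True if neither player gains by unilaterally changing their action."""
    a_ok = all(PAYOFFS[(a, b)] >= PAYOFFS[(alt, b)] for alt in ACTIONS)
    b_ok = all(PAYOFFS[(b, a)] >= PAYOFFS[(alt, a)] for alt in ACTIONS)
    return a_ok and b_ok

for a, b in product(ACTIONS, repeat=2):
    marker = "  <- Nash equilibrium" if is_nash(a, b) else ""
    print(f"({a}, {b}): payoffs ({PAYOFFS[(a, b)]}, {PAYOFFS[(b, a)]}){marker}")

# Both ("old", "old") and ("new", "new") are equilibria, but ("old", "old")
# is strictly worse for both players, and no one can escape it alone.
```

Under these made-up payoffs, everyone switching at once is better for everyone, yet no individual has an incentive to move first; that’s the coordination trap behind prestige-granting institutions and first-past-the-post elections alike.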

It’s possible to find massive systemic brokenness that simply isn’t being meaningfully addressed by anyone — Inadequate Equilibria uses the example of improperly-formulated baby feed leading to permanent injury and premature death[2]. The people who are most likely to be working on these problems tend to embody a combination of characteristics:

  • A desire to maximize altruism/personal impact
  • Precise-enough reasoning to clearly notice and define a problem, or ability to do independent research in cases where convincing studies haven’t already been done
  • Executive ability to start and run a project
  • Actual interest in a particular problem

For any given problem, it’s exceedingly rare to find someone with all of these characteristics, and so it’s entirely possible that no one at all is working on a particular large-scale problem.

Maybe you can do better?

Could you ever do better than all of human civilization on some particular problem? Could you be reasonably confident that you know something experts don’t, or that you’ve discovered a novel solution to a big problem, or that you can “do better” than the best the rest of the world has to offer?

For most people, saying yes would be difficult. The notion that you, one random person out of seven-billion-plus, would have the best solution the world has ever seen, or an insight that the world’s experts haven’t realized, seems unlikely, even comical.

But maybe it’s not so far-fetched, given that it’s quite plausible no one is working on some specific problem that you’ve come across. For example, the field of carbon capture — ways to delay or reverse climate change by removing carbon dioxide from the atmosphere[3] — consists of only a few hundred credible contributors globally. Out of those, many are researchers; the number of people who can understand and evaluate commercial capabilities (What viable technologies already exist? How much carbon could they remove, and at what price?) could likely fit in an office conference room.

Furthermore, because systems are often dumber than the people in them (as a result of misaligned incentives, information asymmetry, and various forms of equilibrium lock-in), and many big problems are coordination problems (where the hard part is convincing other people you’re right), you may be able to perform more adequately or discover a remarkably effective solution just for yourself (without also having to convince other people that you’re onto something). For example, the best system that society has come up with for connecting with friends and acquaintances is probably Facebook, but you may have your own ideas for improving Facebook. You may be able to build out those ideas, and they may in fact be better than Facebook for you. If that particular idea solves your need, then you’ve done better than all of Facebook! However, it’s unlikely that you’ll be able to convince everyone else to switch to your product.

Note that doing better than human civilization doesn’t necessarily require original contributions; it’s often much easier to identify the correct expert in a pre-existing dispute between experts — whether as a way to hone the mental heuristics that help you make better decisions in your day-to-day life, to bet on an outcome (if it’s an exploitable situation), or to guide a specific decision for something you’re working on.

How can you do better?

Doing better than inadequate systems — finding the right expert, figuring out the most adequate solution for yourself, or convincing the world you’re legitimately attempting to do something better — requires a set of mental capabilities that can be developed through self-awareness and practice.

At a high level, it looks like this:

  1. Realize that systems that appear mysterious at first can be understood by looking at incentives and the gating factors that lead to systemic brokenness. These incentives and gating factors come from both systemic failures (described above) and sociological forces (described below). You can build a mental model of which failure modes appear to be at play in a particular system and hypothesize about how they lead to the observed behavior or performance.
  2. Notice the mental heuristics you use to identify exploitable situations, and tweak their accuracy given your mental models until they don’t always indicate “you can’t possibly do better” (leading to too many false negatives) or “I’ve figured out the secret” (leading to too many false positives).
  3. Constantly fine-tune both your models and your heuristics against reality.

Mysteries require models. When faced with a complex system that you want to understand, it’s sometimes not clear which (if any) existing analogue you can point to — you’ll often be better off coming up with a theory for why things are the way they are or building a model (even if it’s not a great model), rather than trying to shoe-horn the system into a known analogy or relying on a purely empirical approach. Creating a model a priori (ahead of gathering evidence) requires a degree of agency to understand which explanations are plausible. A degree of self-awareness also allows you to test and refine theories — consider how “forced” your explanation is, avoid cherry-picking data points, figure out further consequences of each explanation, and remain alert to alternatives — to avoid genuinely bad theories. This is in contrast to a blindly empirical approach based on experimentation and feedback, which falls down in unusual or extraordinary situations because iterating forward only yields incremental answers that build on where you already are.

Accurately modeling the world requires you to accurately determine who and what to believe. Some experts are frequently right, and it’s prudent to listen to them when you would otherwise disagree; other experts exhibit high variance in their performance, so you should check their work each time. You can fine-tune your modeling by determining how reliable some experts’ work output is relative to your own and being fair about where specifically your reasoning breaks down compared to where other people’s reasoning breaks down. The goal of this meta-reasoning is to avoid predictable mistakes in a known direction (e.g. if you’re frequently wrong, especially in the same way, relative to someone else). Doing this well requires you to update hard on each observation (especially your first, rather than dismissing it as “just one data point”), to be open-minded about your abilities[4], and to get good at saying oops[5].
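
One hedged way to make that meta-reasoning concrete: keep score. The Python sketch below uses invented data and Brier scores (the mean squared error of probabilistic forecasts; lower is better) to compare your track record on a set of yes/no questions against an expert’s, which tells you how hard to defer when you disagree. Nothing here is from the book; it’s just one way to turn “how often am I right relative to them?” into a number.

```python
# Illustrative only: compare your forecasting track record to an expert's.
# All predictions and outcomes below are made up.

def brier(predictions, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes;
    lower scores indicate better-calibrated, more accurate forecasts."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(outcomes)

# Probabilities you and the expert assigned to a series of yes/no questions,
# plus what actually happened (1 = yes, 0 = no).
my_preds     = [0.9, 0.3, 0.8, 0.6, 0.2]
expert_preds = [0.7, 0.1, 0.9, 0.8, 0.1]
outcomes     = [1,   0,   1,   1,   0]

mine, theirs = brier(my_preds, outcomes), brier(expert_preds, outcomes)
print(f"my Brier score:     {mine:.3f}")
print(f"expert Brier score: {theirs:.3f}")

if theirs < mine:
    print("On this record, defer harder to the expert where you disagree.")
else:
    print("On this record, your own models are holding up; keep checking their work.")
```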

Occasionally you will also want to reflect on and debug how your introspection-of-how-you-think breaks down (a meta-meta-reasoning process).

Sociological impediments

In some contexts, status regulation — a phenomenon where people try to preserve everyone’s relative social standing — makes it hard to build and share models. For best results, status and competence should be perceived as separate dimensions, but they often aren’t. Many people modestly assume the world is fully adequate, and in such a world, people who outperform must be “smarter” or “better”. If you have a model and claim to “have an idea” about a complex topic, people often interpret that as an assertion of high status — because you think that you can do better, you’re implying that you are better. In reality though, systems end up dumber than the people in them, and as a result it’s quite possible for an individual (within the system or not) to deliver better results than the system itself.

If you keep trying to do better than existing systems, eventually someone will ask you for your hero license[6] — “where did you go to school?”; “what do you do?”; “how old are you?” — to check that you already have enough status to reach higher and are thus “allowed” to do what you intend to do. In hero licensing, the amount of tolerable reach is proportional to the status you already have[7]. This obviously discourages many people from pursuing big projects, but once someone has started, it also entrenches their existing approach and makes it harder for them to change their mind if they’re actually wrong, because they’ve already had to fight for their hero license.

Status regulation is an extrinsic constraint on individual performance. Analogously, a sense of underconfidence (especially, but not always, relative to demonstrated ability) is an intrinsic constraint. Underconfidence and overconfidence are theoretically symmetrical ideas, but the former is rarely discussed while the latter is vilified. Conversations tend toward forced agreement, rushing the intellectual process rather than lingering in a state of disagreement while you and your interlocutors figure out what’s actually true; the temptation is to call an early halt to risky lines of inquiry and to not claim to know too much, for fear of seeming overconfident. At the individual level, many ideas are dismissed out of anxious underconfidence, a self-inflicted “I don’t think I can do that”, even when it would be cheap to simply try and test your abilities. Anxious underconfidence manifests as a fear of failure that is frequently disproportionate to the consequences of trying something and failing, such that anything you might not be able to do is automatically crossed off the list, or not even considered as an option in the first place. Projected outwards in interactions with other people, this leads to reactions along the lines of “I can’t do that, and you can’t either” — a way to put down other people’s ambitions and simultaneously save face by framing an endeavor as an impossibility (which is socially acceptable) rather than a failure (which is not).

If you’re reading this (or the original book), you’re probably more susceptible to underconfidence than to overconfidence, and you can likely do better in life by noticing and navigating around status regulation and anxious underconfidence. Learn to disregard credentials and to stop defining people by what they’ve already accomplished. Independently form theories and do the research to figure out whether they hold. Regularly try things outside of your comfort zone; if you can’t remember any time in the last six months when you failed, you aren’t trying to do difficult-enough things[8].

You’re likely to find that you can follow arguments and meta-reasoning clues clearly enough to pick the right side in an existing dispute in a significant number of cases — enough to bet on outcomes, either literally or by investing time and energy in pursuit of an objective. Sometimes, in following these pursuits, you’ll be able to build a better outcome at least for yourself (and maybe a handful of close friends). And, if you’d like, a combination of careful reasoning, correctly identifying exploitable inadequacies, trying hard things, and years of effort may, very rarely, lead you to change the world.

You can read the whole book from its official website.

Thanks to Jade Tumazi and Anirudh Pai for reading my drafts and providing feedback.

Glossary

A few of the terms used in the book that were unfamiliar to me:

Object-level: Directly pertaining to the real world (as opposed to the meta-level).

Epistemology: A way of thinking about how you know what you know.

Inside view: A forecast based on what you specifically know about the current situation, extrapolating current trends.

Outside view: A forecast based on comparisons to other known, similar situations, ignoring the specific details of the current situation.

Footnotes


  1. In many cases, if you (think you) can do better along some dimension, that dimension turns out to not be one that existing participants care about. It’s worth checking for signs of inexploitability, and whether the system remains inexploitable even given your ability to do better along that dimension.

  2. See https://equilibriabook.com/molochs-toolbox/#i

  3. It’s a thing! Companies like Stripe, Shopify, Microsoft, and Delta Airlines are making serious commitments in the space.

  4. In particular, you’ll want to avoid blanket generalizations about what you can, can’t, and won’t do. For example, if you’ve tried something and succeeded or failed at it before, you’ll have to be honest with yourself about whether you or the world has changed enough that your previous experience may no longer apply.

  5. Generalized, this is part of improving your decision-making process.

  6. A friend commented that this is reminiscent of Venkatesh Rao’s Be Slightly Evil.

  7. Why does this happen? Watching people “cheat” the system by skipping the customary shibboleths and getting away with it provokes a degree of disdain from the people who have gone through them. If you’ve gotten an advanced degree or are an expert in some field, and come across someone who claims to know a lot about that field because they read a lot about it on Wikipedia, how would you initially react?

  8. See also this generalization of this idea.