Notes on Fasting

The hardest part of not eating for 120 hours and 11 minutes was the boredom. From the afternoon of May 17th until a late lunch on May 22nd, it was as if I found myself living in a boundless void. Time stretched out forever. I’m not sure if it’s a psychological thing — not having meals to break up the day or look forward to — or if it’s a physiological response, a dilation of time designed to give the primal senses a better chance of finding the next meal. It might be a combination of both.

[Image: My official fasting timer — Zero]

Along with feeling like I almost had more time than I knew what to do with, I found it very difficult to stay focused on the work I was trying to get done. My attention was ravenously hunting for any sort of new stimulus. Curiously, I rarely thought about food or felt hungry.

Why do I do this to myself?

Rationale

Everything written here is personal observation and conjecture, not medical advice.

Assuming sleep, hydration, and electrolytes are dialed in (more on this below), fasts under 7 or so days are relatively low-risk; I’m much more interested in the upsides.

The absence of food triggers apoptosis (controlled cell death) and autophagy (the recycling of damaged cellular components). Both processes cull cells, organelles, and proteins that have been damaged or weakened by time; the accumulation of that damage is thought to contribute to aging. More pertinently, there are indications that these processes disproportionately purge cancerous and pre-cancerous cells, which are more voracious for energy (glucose in particular) than healthy cells.

There is also research suggesting that fasting “induce[s] immune system regeneration” by activating dormant stem cells.

With the right preparation ahead of time (more on this below), fasts of several days are also effective at dropping body fat. To be fair, most of the observed weight difference is water weight, which is regained shortly after breaking the fast, but beyond the third day or so (for me), I observed a sustained drop in body fat and waist circumference.

Finally, I think it’s a good life skill to practice hard things from time to time, if for no other reason than sheer curiosity.

Electrolytes

I didn’t realize how important electrolytes were the first time I did a multi-day fast (which I decided to embark on after outlasting two full turns of the table across from me at a Vegas lunch buffet). 40 hours in, I was laid out in bed with the rapid onset of extreme nausea, although I didn’t know the cause at the time. I managed to order some food, and ended that experiment.

In subsequent attempts, I made sure to supplement sodium (via sea salt) and potassium (as potassium chloride). I added magnesium (as glycinate) for the first time on this attempt. Work out how much of each supplement delivers your target elemental amount (the label lists the elemental content per serving), weigh it out, and sip it with water throughout the day.

Side note: magnesium glycinate in water tastes like rotten eggs with an aftertaste of rotten seafood a few seconds after swallowing. Drink up!

Before starting, I discussed with my doctor how much of each I should be consuming. The recommended daily amounts are up to 2,300mg of sodium, about 4,700mg of potassium, and roughly 400–500mg of magnesium. I intended to get the full amount of each per day, but found it incredibly difficult to get through it all. A few days in, I started noticing lethargy and a lack of focus whenever I hadn’t consumed electrolytes in a while, and a reliable bump in energy levels once I did. I stopped measuring and simply relied on that feedback loop to cue my next salt hit.
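To make those targets concrete, here’s a rough sketch of the weighing math; the elemental fractions are approximations (the supplement label is the real source of truth), and the targets are simply the numbers above, not a recommendation.

```ruby
# Rough dosing math. Approximate elemental fractions by weight: sodium is
# ~39% of sodium chloride, potassium ~52% of potassium chloride, and
# magnesium ~14% of magnesium glycinate. Check your supplement's label.
targets_mg = { sodium: 2300, potassium: 4700, magnesium: 450 }
fractions  = { sodium: 0.39, potassium: 0.52, magnesium: 0.14 }

targets_mg.each do |mineral, target_mg|
  powder_g = target_mg / fractions[mineral] / 1000.0
  puts format("%-10s ~%.1f g of powder per day", mineral, powder_g)
end
# sodium     ~5.9 g of powder per day
# potassium  ~9.0 g of powder per day
# magnesium  ~3.2 g of powder per day
```

Seeing the gram amounts laid out makes it obvious why getting through a full day’s worth is a slog.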

Ketosis

Entering ketosis is the key to getting through extended fasts. If a fast begins while the body is in its default glycolytic state (running on glucose), the gluconeogenesis that follows will break down muscle tissue to make glucose. Ketosis shifts most of the body’s energy needs away from glucose and onto ketone bodies produced from fat stores.

To minimize muscle wasting, I entered mild ketosis in the days leading up to my fast, primarily via ketogenic foods, overnight intermittent fasting, and a small amount of exogenous ketones in the morning.

[Image: The last meal (Souvla pork salad) …]
[Image: … and “Sherpa Coffee”]

Since ketogenesis begins when liver glycogen is depleted (the liver can store about 100g of glycogen), I walked and biked for a few hours during the first 24 hours. Between moderate exercise and my ~1900-calorie BMR, I was able to enter ketosis before the end of the first full day.

Results

I weighed 172.0 pounds on the morning of May 17th and 162.2 pounds on the morning of May 22nd (a difference of about 4.5kg). Assuming a total of ~500g of stored glycogen between my liver and muscles, most of which would have been depleted, plus a 4:1 ratio of water to glycogen and a 1:1 ratio of water to fat tissue, I’d expect about 2.5–3 pounds of fat loss. Net of the corresponding water, I’d expect my weight to settle back around 166–167 pounds, which is in line with what I’ve measured in the week since I broke my fast.
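As a back-of-the-envelope sketch of that arithmetic (using the assumptions above; the fat estimate is quite sensitive to how much glycogen was actually depleted):

```ruby
# Back-of-the-envelope version of the estimate above. All inputs are the
# rough assumptions from the post; with ~400g of glycogen depleted instead
# of the full 500g, the fat estimate rises toward the 2.5-3 lb range.
LB_PER_KG = 2.2046

total_loss_lb    = 172.0 - 162.2                                # ~9.8 lb over the fast
glycogen_kg      = 0.5                                          # ~500g stored glycogen, mostly depleted
glycogen_water   = glycogen_kg * 4                              # 4:1 water-to-glycogen
glycogen_loss_lb = (glycogen_kg + glycogen_water) * LB_PER_KG   # ~5.5 lb

remaining_lb = total_loss_lb - glycogen_loss_lb                 # ~4.3 lb of fat plus its water
fat_loss_lb  = remaining_lb / 2                                 # 1:1 water-to-fat

rebound_lb = 162.2 + glycogen_water * LB_PER_KG                 # glycogen water returns after refeeding
puts format("fat: ~%.1f lb, expected rebound weight: ~%.1f lb", fat_loss_lb, rebound_lb)
# fat: ~2.1 lb, expected rebound weight: ~166.6 lb
```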

While I didn’t measure body composition or muscle mass, I matched my pre-fast numbers (or was within normal variance) on bench press, squat, and other functional strength exercises when I went back to the gym four and five days after the fast.

Unexpectedly, this adventure also renewed my curiosity about optimizing my health, nutrition, and exercise. After tweaking my routines in early January, I’d fallen into a rut of behaviors that weren’t particularly beneficial or producing results. Since the fast, I’ve been redesigning my meals, exercise plan, and daily schedule, and have already noticed some improvements … more on this in a future post.

Photo via QuoteFancy. Quote from Hermann Hesse’s Siddhartha.

Thoughts on Personalizing Software

I’ve been obsessed with productivity software for most of my life. I’ve sampled dozens of tools, from simple personal apps (like Apple Notes) to full-featured products complete with the kitchen sink (like Phabricator and JIRA). None of them ever felt comfortable — many were too simplistic, some were too prescriptive, and some were overwhelmingly customizable (although not necessarily in the ways I would’ve liked).

The fundamental challenge is that software products are more or less centrally designed, and even a well-designed product that covers many use cases is exactly what it is: no more, no less. Without the ability to add specific functionality, change certain flows, or remove unnecessary complexity, most software ends up prescriptive and less than ideal for most specific use cases.

For example, Google Calendar is a complex and extensively designed product, but at some scale, any product will overlook or choose not to include certain use cases that matter to a subset of users. The end result is a product that’s not “just right” for anyone.

But does it have to be this way? Does software have to be built toward the one-size-fits-all paradigm that real-world products are forced into by the intersection of economics and the laws of physics?

To be fair, there are examples of customizable and extensible software. Some apps ship with extensive options. The proliferation of APIs provides a foundation for countless products built on the same underlying data (Nylas is a great example of an easy-to-use API over email and calendars, which normally depend on arcane protocols). And, developer experience aside, Salesforce Apex is a first step towards a world of personalized software.

[Image: Customization options for iTerm 2]

I’d love to see a world of software that allows any user to change the way certain workflows work, add custom inputs, and automate steps that should happen after a particular trigger.

Technically, we’re starting to see the building blocks that would make this possible — serverless execution is getting faster, and GraphQL makes it easier to discover and understand APIs.

Practically, I think this would mean a shift in the way we think about software product design along two dimensions:

  • How extensive is the base product? At the minimalist extreme, the product is essentially an API to the underlying data, with minimal functionality out-of-the-box. At the kitchen-sink extreme, the product is likely complex, with built-in solutions for a lot of use cases, but may be harder to customize.
  • How much customizability gets exposed? It’s easy to go down the rabbit hole of maximizing optionality, ending up with a product that lacks cohesion and becomes incredibly difficult to maintain or change.

Beyond that, there’s also the question of how customizations would be implemented. Perhaps writing software will eventually become a form of basic literacy, and everyone will be able to plug personal code into a product at its customization points. Or perhaps a product figures out an effective visual programming environment for its domain, and customization becomes possible without having to “write code”.
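As a toy illustration of what a customization point could look like, here’s a sketch of a product exposing named triggers that a user’s personal code can hook into; the names and API are invented for illustration, not taken from any real product.

```ruby
# Hypothetical sketch of a product exposing customization points: named
# triggers that users can attach personal code to. Everything here is
# invented for illustration.
class CustomizationPoints
  def initialize
    @hooks = Hash.new { |hash, trigger| hash[trigger] = [] }
  end

  # A user registers personal code to run after a named trigger.
  def after(trigger, &personal_code)
    @hooks[trigger] << personal_code
  end

  # The product fires the trigger as part of its normal workflow.
  def fire(trigger, payload)
    @hooks[trigger].each { |hook| hook.call(payload) }
  end
end

# Example: automate a step after a calendar event is created.
points = CustomizationPoints.new
points.after(:event_created) do |event|
  puts "Blocking 15 minutes of prep time before #{event[:title]}"
end
points.fire(:event_created, title: "Quarterly review")
```

The hard design questions are which triggers a product exposes and how much of its internal data each hook gets to see.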

The obstacles between prescriptive software and personalized software echo fundamental debates in software engineering and UX design. But personalized software could unlock a massive amount of value in knowledge work if every worker could leverage software’s scale and computation capabilities in a way that matches the way they work.

Photo by rawpixel on Unsplash

Work On Good Ideas

Despite what some people might want to believe, there is such a thing as a bad idea. In the context of building valuable ventures, bad ideas are typically bad in a few ways:

  • they’re trivial to implement — ideas that are easy rarely become valuable. Peter Thiel presents this argument as “Competition Is for Losers”.
  • they’re wrong, either because they’re built on a misunderstanding of reality or on faulty logic. For example, most recent ideas to use blockchain are based on a grossly oversimplified view of its benefits with no appreciation of its drawbacks. The early-internet mantra of making up losses on volume (apocryphal though it might’ve been) is an example of faulty logic. In either case, humbly remember that you don’t know everything and you could, in fact, be wrong.
  • they’re a marginal improvement in the best case. Even if successfully implemented, these ideas result in incremental improvements (measured in single- or low-double-digit percentages), often for a small audience (tech-literate knowledge workers and people with lots of disposable income are common examples). This is probably the most insidious and pervasive type of bad idea in the modern tech industry, especially because it’s often easy to hypothesize an ideal-although-unlikely world where the idea actually changes how something fundamentally works.

Last week, my team at work launched Disrupt Tech, a blog and community aimed at unsticking our industry from the allure of bad ideas. We see too much capital and talent wasted on incremental problems (like optimizing ad click-throughs) and also-ran products, while there exist big, real problems to be solved. While Trustwork is attempting to solve some of these problems, there are many more in the world, and we want to see more people working on them.

Disrupt Tech’s north star and manifesto is below:

Technology was supposed to make everyone more productive, unleash creativity, and bring abundance to the world. But at some point, we lost our way. As an industry, we’ve gotten caught up in our clichés and euphemisms and forgotten what it means to create real value.

The modern technology era began in the late 1960s, when popular culture collectively envisioned a future of space travel and valuable computation for everyone, and companies like Fairchild, Intel, and their descendants kicked off the race to build that future. The subsequent decades ushered in a Cambrian explosion in hardware and software, unlocking world-changing innovations in communication, manufacturing, biology, and countless other fields.

The 90s saw the rise of the Internet, the foundation for an “information superhighway”, a nascent ideal that would’ve enriched the lives of every citizen. Instead, people decided it was more important to build warehouses to deliver pet food.

Since then, the internet has brought massive opportunity to more than 3 billion people, and fundamentally changed the lives of many of the 2 billion who carry access in their pockets. Yet we’ve seen massive amounts of capital and human talent directed towards incremental improvements, made-up problems, and tools for other techies. Our best talent isn’t working on the real tech problems that could unlock massive value for most of those people — there is much more we can do with the internet than A/B test ad clickthroughs or build glorified chat apps.

We think our industry has gotten stuck in a rut. But we don’t think it has to be this way. Every day we talk to incredible workers who’ve figured out ways to work effectively and live abundantly. We see examples of human ingenuity, as well as people who want an outlet to express that creativity and build products for the rest of the world. We believe that technology can be a life-changing tool for billions of people, and, in the hands of great people, we’ll figure out how to continue advancing humanity. And that is why we’re on a mission to disrupt tech.

Cover photo by freddie marriage on Unsplash

Tech Stacks are Overrated

Having interviewed dozens of junior and intermediate engineers, I’ve noticed that the questions candidates ask implicitly say as much about them as the rest of the interview. One question that comes up occasionally is some variation of “what tech stack are you using?” List some of the myriad JavaScript libraries du jour and I get a murmur of approval; mention something mature and I’m met with silence or a disappointed “oh”. In fact, many candidates outright say that they want to be working with the latest or “bleeding edge” technologies.

I get it: the “right” technologies are cool and shiny and have undeniable appeal. You feel invigorated using them. Personally, I’m excited about Elm, Crystal, and GraphQL.

But focusing on tech stacks and hunting for the coolest technology during job interviews isn’t very useful, and it distracts from more valuable questions. Companies build software to reduce costs or capture value. Customers don’t care what tech stack a company uses, as long as they can get things done. A company using Node and bleeding-edge ES2018 doesn’t get to charge a coolness premium over a competitor using Rails; the shinier tech, by itself, doesn’t automatically create more value.

It can, however, increase costs. Mature technologies have a well-worn path to success: there’s documentation or a blog post for everything you’d want to do, a Stack Overflow question for any issue you might run into, and a patch for every bug that might’ve existed in a v1. None of that is guaranteed with the new and shiny, where any one of a dozen configuration options or plugins could break everything if you breathe the wrong way. There’s no clear path to a maintainable codebase, which undermines your ability to consistently create value in the long term.

Rather than ask “what’s your tech stack?”, a more interesting question is why that tech stack makes sense for what’s being built. As an interview candidate, listen for clear reasoning — that’s a stronger signal of a company that’s likely to be successful (and one where you can learn) than one that picked its technology based on what was cool when it started. To a good interviewer, that’s also a more impressive question.

It’s even more valuable to go beyond the technology and focus on the product and problem. What problem is the company trying to solve? How are they thinking about the problem, and what is their proposed solution? What kinds of problems do you want to solve? What kinds of products do you want to build? As an interviewer, I’m looking for alignment between what we’re building and the problems and products you’re passionate about. As an interviewee, determine alignment around these questions first — and then you can ask about the technology, and whether that’s a reasonable choice for the problem. It’ll make for a much more interesting conversation for everyone.

If this makes sense to you, and you want to use technology to create value, I’m hiring a few engineers to empower every worker on Earth.

Photo by Jaz King on Unsplash

Service objects as test fixtures

In our Rails codebase, we often have tests that begin with many lines of setup code — declaring relevant variables, creating and updating models — to set up the database so we actually test what we intend.

[Snippet 1: Most of this test’s body is setup]

For background: we have Projects, which can have multiple Bids (each of which is associated with a different user — in other words, users can submit a bid to a project). The project’s creator can “accept” a bid by offering the bidder a Contract.
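The snippet itself is a screenshot, but the shape of the problem looks roughly like this (the model attributes, statuses, and assertion are hypothetical stand-ins, not our actual schema):

```ruby
# Hypothetical reconstruction of a test with heavy manual setup; all names,
# attributes, and statuses here are illustrative.
RSpec.describe "project contracts" do
  it "lists the bidder's offered contract" do
    creator  = User.create!(email: "creator@example.com")
    bidder   = User.create!(email: "bidder@example.com")
    project  = Project.create!(creator: creator, title: "Fix the fence")
    bid      = Bid.create!(project: project, user: bidder, status: "pending")
    contract = Contract.create!(project: project, bid: bid, status: "offered")
    bid.update!(status: "offered")

    # The one line we actually care about
    expect(bidder.contracts).to include(contract)
  end
end
```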

Tests with lots of manual model setup cause at least three problems:

  1. Tests become harder to read, because the setup code isn’t logically important to the test. You end up having to skim through a lot of code clutter to get to the important test code.
  2. Tests have to know exactly how models fit together, and if that changes, all the corresponding tests have to change as well. In the example above, you’d have to know that bid models and contract models are both associated with a project (they’re not valid unless you specify a related project), and that the statuses of both the bid and the contract have to be exactly as specified (otherwise you’ll end up with an invalid state error). That’s easy enough if you just wrote the underlying code, but impossible to keep in mind for someone new to the code (which could be you, a few weeks later).
  3. Sometimes you have user-facing concepts that are implemented as a “derived” state of data models. For example, in addition to the concepts above, our UI has the concept of “direct offers”, which is implemented as a particular combination of bid state and contract state. This leads to a logical disconnect when we want to test some aspect of that functionality, but the setup code doesn’t say anything about a “direct offer”.

For unrelated reasons, we started moving business logic into service objects, especially when side effects need to happen in certain cases. The thinking behind this is a subject for a different post, but the end result is that (for example) we can create a bid for a user on a project by simply calling BidOnProject.new(user, project).create, which takes care of creating the Bid instance, updating statuses, setting prices, and creating and sending notifications. Creating a direct offer is similarly simple: BidOnProject.new(user, project).create(direct_offer: true).
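For a sense of what such a service object might contain, here’s a hypothetical sketch; the status names, pricing logic, and mailer are illustrative, not our actual implementation.

```ruby
# Hypothetical sketch of a service object like BidOnProject; the status
# names, pricing logic, and mailer are illustrative only.
class BidOnProject
  def initialize(user, project)
    @user = user
    @project = project
  end

  def create(direct_offer: false)
    bid = Bid.create!(project: @project, user: @user, status: "pending",
                      price: @project.suggested_price)

    if direct_offer
      # A "direct offer" is a derived, user-facing concept: a particular
      # combination of bid state and contract state.
      Contract.create!(project: @project, bid: bid, status: "offered")
      bid.update!(status: "offered")
    end

    BidMailer.new_bid(bid).deliver_later # hypothetical mailer
    bid
  end
end
```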

Lately, I’ve found myself using these service objects to set up test state as well, with code that roughly looks like this:

[Snippet 2: Simplified setup via a service object]
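That snippet is also a screenshot; in spirit, the simplified setup looks something like this (again with hypothetical names):

```ruby
# Hypothetical sketch of the simplified setup; helper and association names
# (e.g. direct_offers) are illustrative.
it "shows a direct offer to the bidder" do
  creator = User.create!(email: "creator@example.com")
  bidder  = User.create!(email: "bidder@example.com")
  project = Project.create!(creator: creator, title: "Fix the fence")

  BidOnProject.new(bidder, project).create(direct_offer: true)

  expect(bidder.direct_offers).not_to be_empty
end
```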

Much less setup, much more readable test code. Since this runs the same code as the “normal” app, tests don’t have to know the details of how models and state fit together, and changes to those details only need to happen in one place. Using service objects is better than using custom factories for the same reason — why duplicate the business logic? Finally, to the extent that you have service objects for user-facing concepts, tests become more coherent and clear, which ultimately makes them more reliable and valuable.