From Einstein to Bezos: What Science Can Teach Us About Creating and Disrupting Growth Models

One of the things that has struck me in my transition from neuroscience labs to tech is how deeply tech borrows from the way the world of science operates (including heavy adoption of scientific terms and techniques), yet how shallow its understanding remains of what science can really teach us about doing these things well. Even at ResearchGate, the largest platform at the intersection of tech and science, our professional interest in science rarely extends past specific experimentation techniques. I believe this misses a great deal of insight; there is much more that we in tech can learn from how science has worked for hundreds of years to build knowledge and drive human progress. 

Many of the tough, important things that tech claims to be “inventing” are actually problems that science has been grappling with for generations. The kinds of scientists and systems we need during different phases of growth, crisis, disruption, and return to growth can be learned not only from Steve Jobs’ biography, but also from Einstein’s and Newton’s. In this article, I’ll focus narrowly on the crisis that happens as an entrenched model is pushed to the breaking point, but by doing so I hope to also provide a wider-angle lens for how to interpret great scientists as sources of inspiration for modern innovation.

Knowledge/paradigm fit vs product/market fit

Steven Dupree, currently Head of Marketing at Amava, gave a usefully simple and intuitive description of the growth process in tech as “the scientific method applied to a company’s key metrics.” For the most part, the specific brand of scientific method employed by tech companies is rapid iterative experimentation designed to optimize existing connections and loops. Thomas Kuhn, in his canonical book “The Structure of Scientific Revolutions,” calls this “normal science,” where existing paradigms or models are strengthened and further interconnected.
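To ground the analogy, here is a minimal sketch of that “normal science” loop: a hypothesis about a signup-flow change, checked with a two-proportion z-test. The flow names and numbers are hypothetical, and real experimentation tooling would add power analysis and multiple-testing corrections.

```python
from math import sqrt, erfc

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B experiment.
    conv_*: number of conversions; n_*: number of visitors."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error under H0
    z = (p_b - p_a) / se
    # two-sided p-value from the normal survival function
    return erfc(abs(z) / sqrt(2))

# Hypothesis: the new signup flow (B) lifts conversion vs. control (A).
p = ab_test_p_value(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"p-value: {p:.4f}")  # significant or not, the result feeds the next hypothesis
```

Either outcome fuels the next, more precise hypothesis — which is exactly the iterative strengthening Kuhn describes.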

As with science, it’s fair to assume that most of product growth does and should operate in this way. Nonetheless, at a certain point, the existing model or paradigm defines its own local maximum and enters a period of crisis characterized by decelerating or even stalled growth. At this point, the science of optimization is no longer sufficient, and we need to look toward intuitive thinkers, generalists like Einstein and Bezos, as examples of how to get out of the rut by shifting our fundamental models and strategies. 

Laying out Kuhn’s different phases in layman’s terms makes it easy to see where analogies exist to business*. 

[Image: table mapping Kuhn’s phases of science to their business analogies]

A few points are worth mentioning before diving into specifics on any particular phase:

  1. You may be exceptionally smart and have knowledge/experience in all of these phases, but it’s quite certain you aren’t wired to be good at all of them, because natural skill in one is often natural weakness in another. Developing that self-awareness (or, as Eugene Wei calls it, knowing “your own invisible asymptotes”) is key.

  2. The entity that goes through this cycle can be, and often is, an entire company, but it may also be a single product or, in some cases, one major feature. This also means that at a larger company, you might have one area in crisis and another in “normal science.” 

  3. In product growth, we most often experience deceleration when our first successful acquisition channel becomes saturated, and we begin searching for ‘the next viable channel’. While any deceleration can feel like a crisis, constantly testing new channels and optimizing existing ones is essentially ‘normal science’ and doesn’t meet the definition of ‘crisis’ within this mental model. 

Is it time for a crisis?

The content out there on how to run “normal growth” is vast, familiar, and fairly scientific already, so I’m going to focus on the “crisis” and “paradigm shift” phases. It’s in these challenges (where instinct is traditionally assumed to be the only option) that creative science can unearth new tools for growth.

So what can science tell us about when a particular model is headed for its “growth horizon”? Here are a few key signals: 

  • Decreasing success rate of experiments: People wrongly imagine science as mixing various things together and just seeing what happens, but like A/B testing a new signup flow, doing it properly requires having a hypothesis about what effect you will see and why. If you’re correct, you strengthen your existing views; if you aren’t, you change them. Either one fuels your next hypothesis, making it more precise and over time increasing the efficiency and impact of your work. At the growth horizon, more knowledge doesn’t yield significant wins because the room for optimization is too small.

Whether you’re working on a topical area like acquisition or you’re the head of growth for the whole company, it makes sense to keep an eye (even roughly) on what percentage of your hypotheses lead to successful experiments, defined as a significant result at or above the level you expected. I talked about building this into your experimentation tooling in a post years ago; for many companies, simply looking at OKR grading over time would be a decent high-level proxy. It’s important to take experiment velocity and complexity into account, but generally speaking, if wins feel harder to come by, it may be worth thinking about the health of the model as a whole.

  • Increasing failure to align results across teams or initiatives: Complex products can hide growth horizons via model whack-a-mole: what appears to be progress is simply value shifting from one place to another. This happens especially when teams are competing for an essentially zero-sum resource, such as a person’s time, disposable income, or even attention. We have plenty of technical solutions to combat this, and as managers one of our main responsibilities is the proactive work of setting up OKRs and metric trees/north stars so that the whole organization pushes in parallel toward the same destination. Doing this same work retrospectively, however, is even more revealing and far less common. We humans are unfailingly biased toward evaluating future possibilities based on the current state, rather than on how well we predicted the current state, and I’ve seen many companies hold off-sites for quarterly or yearly planning across the org but very few do the same for quarterly or yearly retrospectives, despite how much richer past data is than future plans. Look at the wins and losses you actually had and try to fit them into a cohesive narrative of the increased value you’re delivering to users or customers. Can you attribute the wins in certain areas to losses in others, or vice versa? Did your overall strategic hypothesis hold up? These questions can be invaluable for determining whether much of your work is shifting value around in a model that is becoming a fairly dry sponge.
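The first signal above, a declining experiment win rate, needs very little tooling to monitor. A minimal sketch, assuming you log each experiment’s outcome chronologically (the outcome data here is purely illustrative):

```python
from collections import deque

def rolling_win_rate(outcomes, window=20):
    """Given chronological experiment outcomes (True = significant win at or
    above the expected effect), yield the win rate over a trailing window."""
    recent = deque(maxlen=window)
    for outcome in outcomes:
        recent.append(outcome)
        yield sum(recent) / len(recent)

# Chronological outcomes pulled from a (hypothetical) experiment log.
outcomes = [True, True, False, True, True, False, True, False,
            False, True, False, False, True, False, False, False]
rates = list(rolling_win_rate(outcomes, window=8))
# A sustained slide in the trailing win rate is one signal that the
# current growth model is nearing its horizon.
print(f"early: {rates[7]:.2f}, late: {rates[-1]:.2f}")  # early: 0.62, late: 0.25
```

Pairing this trend with the retrospective alignment exercise above gives you a quantitative and a qualitative read on the same question: is the model drying out?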

The Crisis Spotting Gene

The only progress I can see is progress in organization.

-Albert Einstein

Simply put, if you are in the weeds focused on a narrow bit of optimization, it’s unlikely that you will recognize the moment of crisis. Only by surveying the entire landscape and uniting knowledge gathered from a variety of perspectives can you see the full picture. It’s critically important to understand that this is basically the opposite of the process of normal science itself, where you follow increasingly specific insights down a particular path (depth vs. breadth is the right framing here). For this reason, Kuhn posits (and I agree) that most scientists who excel at “normal science” will not be the ones to identify a crisis. Take Einstein, who was never an experimentalist interested in “normal science”; his career was defined by trying to unite previously disconnected theories into a unified understanding of the physical laws of the universe. Unified-model thinkers like Einstein, who crave the process of finding common threads and organizing different fields of knowledge, will naturally identify weaknesses in the current paradigms. They definitely aren’t who you want optimizing your landing page conversions, though.

Especially at larger companies, this can cause problems. Let’s say the head of your product org cut her teeth as a highly efficient leader of normal growth within narrower scopes: she led an impressively efficient team (or a few teams) through the process of rapid iteration against the existing growth model. She A/B/Z tested subject lines and took an ax to every main flow in the product. Unless you have a true unicorn on your hands, it’s unlikely that she is the kind of person who will naturally sniff out the first signs of crisis and the need for a new model. She wants to have her face deep in the book, not organize the library.

Most product leaders at mature companies are somewhere in the middle: the line between model optimizer and model innovator is far from black and white. Rapid A/B testing of minute components of a funnel can be a highly specific engineering exercise, less dependent on abstract knowledge to drive hypotheses; model creation or innovation as I’m describing it here is closer to theoretical science. But most of the time, the priorities of the company and the focus of its leaders sit squarely between the two. 

Having product leaders who sit between engineers and theoretical scientists means they can lead and support a wide range of more specialized practitioners. It also means they’re not the most natural fit for crisis spotter or paradigm creator, so setting up metrics and processes can help compensate where natural tendencies fall short.

New Model, Anyone?

If trying and failing to organize knowledge into a model creates the crisis, then trying and succeeding to do so resolves it. I would highly doubt anyone could come up with a rubric or framework for how to do this across the board given how complex and particular to each business and discipline it is (see the limitless literature on losing product/market fit as markets or technology change), but I do think science and experience can suggest a few ways to set the people responsible for this effort up for success.

Make sure they have a strong voice and connection to resources/freedom needed for experimentation. The Einstein analogy again fits well here, because it’s likely that a non-insignificant part of his success came from his ability to draw attention to his work. Through some luck (especially in timing) and quite a bit of charm, Einstein’s fame far outpaced that of his peers, some of whom were likely equally intelligent. This meant he could write theoretical papers and suggest experiments that might prove his theories. For example, his General Theory of Relativity predicted a phenomenon involving the bending of light by the sun’s gravitational field, which was confirmed by a huge effort on the part of Sir Arthur Eddington, whose expeditions traveled to Brazil and an island off the coast of Africa to observe a solar eclipse and measure the positions of stars. Had Einstein not been able to inspire others, his early theories would have remained exactly that, and he would not have gained the renown that allowed him to continue to attract more resources to his ideas.

Keeping engineering and other precious resources away from unproven projects is increasingly common in tech, but it is often not accompanied by a way for those projects to gain Einstein’s magnetic pull over time. A company I work with recently split off a “product discovery” team consisting purely of PMs, UX designers, and analysts. Their charge was to validate the potential of an entirely new business model for the company. Before too long, the inability to run experiments beyond user interviews and surveys created what might be called “idea bloat”: too many ideas and theories floating around without the full suite of experimentation tools to test their potential fit. A model-innovating project needs the personalities and communication tools to marshal resources and pull engineers into these uncertain areas once the reward is worth the risk.

Find model thinkers, and make sure they have the right reporting and mentoring connections. Let’s say your Director/VP of Product is more of the engineering “normal science” type, which works because you’re mostly undergoing “normal” growth and his aggressive iteration moves metrics forward quickly. You also have a long-term growth project that would disrupt the current model, and so you find a generalist model-thinker to run that effort. Good so far! But if that generalist does her job correctly, it is going to appear to the tightly organized Director/VP that things lack structure. Here, a quote from the biography of Vannevar Bush (the man largely responsible for the creation of the NSF) helps describe how “innovative” science happened at the Office of Scientific Research and Development (OSRD) during WWII:

The success of OSRD is explained not so much by administration as by a kind of fruitful chaos. The organism grew so big so fast, under a steady rain of government funds, that top administrators could not hope to reach down into every laboratory to instruct an investigator to halt this line of research or take up that. Thus when a researcher had a hunch that penicillin would be useful in fatal subacute bacterial endocarditis, and his committee turned his hunch down, he just went ahead and proved that in large doses over enough time it cured many cases. (…) [The OSRD] was set up to release the energies of the young, and this it did in great measure.

The OSRD was so successful, in fact, that the government realized that funding basic science with no established industrial application was key to the long-term success of the country. Before then, very little academic science was funded by the government; now nearly all of it is.

The need for some controlled “fruitful chaos” and hunch-driven decision making means that in these projects, the person tasked with validating a new model should likely be connected closely with entrepreneurial types (often the CEO), even reporting to them directly. The creation of the Kindle is an excellent example: Bezos spun off a siloed team tasked with disrupting Amazon’s core business at the time, online sales of physical books. To get the project up and running, they were “sequestered away” from the rest of Amazon and gave progress updates directly to Bezos. Amazon later used the same model to launch its early experiments in physical retail, which now include Whole Foods, Amazon Go, and a lot more on the horizon.

I would also recommend setting these people up with advisors who are highly trained “model thinkers” who constantly need to imagine and pass quick judgment on new potential models, such as venture capitalists and management consultants (I also think these backgrounds make great hires for disruptive growth for that reason). And finally, be aware that one of the challenges is that the leaders of a company are often the ones who created the model in need of disruption in the first place. As in science, this can cause conflict even for the best and brightest – late in his life, Einstein was famously skeptical of, and even hostile to, the theories of quantum mechanics (now largely accepted as true) proposed by Niels Bohr and others, which challenged some of the laws he helped create.

Shift, Scale, Rinse, Repeat

This process never ends primarily for one reason –  your market doesn’t sit still. It is always moving.  These days markets are moving/changing at an accelerating pace.  As your market moves, your product needs to move with it making product/market fit a pulse that you need to constantly keep your thumb on. Additionally, in the effort to maintain growth, companies tend to expand their target audiences to new segments causing the need to step through this process again.

-Brian Balfour  https://brianbalfour.com/essays/product-market-fit

I’m lucky to have been deeply involved with Reforge during my time in SF and to continue counting on Brian for advice; his writing on model thinking and the endless cycle of product/channel/model/market fit explains this concept better than I can hope to do here. My hope is simply to extend the connection to science past where it has traditionally been seen as valuable. Models, experiments, and p-values are obviously scientific, but so are the ways we pitch for resources, build knowledge structures, and manage a network of teams or labs with different specialties, all working together to produce a whole that is greater than the sum of its parts.

This isn’t to say that science does these things perfectly; in fact, the richness of its history often comes from the learnings of failure and missed opportunity. My hope in writing this is simply to suggest that science can offer a window into a system that has been iterating on many of the topics at the core of innovation for hundreds of years. The cycle of growth, crisis, and paradigm shifts is one of those, but new opportunities to find inspiration in science stretch back as far and wide as the possibilities of tech open ahead.

As a final resource, I’ll offer a few of my favorite science-focused books below that I would recommend to any product leader.

  • Einstein: His Life and Universe by Walter Isaacson

  • Isaac Newton by James Gleick 

  • The Structure of Scientific Revolutions by Thomas Kuhn

  • The Model Thinker: What You Need to Know to Make Data Work for You by Scott E. Page

  • Endless Frontier by Gregg Zachary

  • Thinking, Fast and Slow by Daniel Kahneman


*It’s worth noting that some have criticized Kuhn for his desire to package up the complexity of science into neat rules, and I’m sure I will (fairly) suffer the same critique. My intention is to help create a simple mental model that can help tech draw insight from science, but of course I understand just how unique every company feels (and is!), especially when you’re in it.

from Blog – Reforge http://www.reforge.com/blog/2020/1/28/from-einstein-to-bezos-what-science-can-teach-us-about-creating-and-disrupting-growth-models