This book covers a lot about transforming a traditional enterprise into a newer, better-suited, Lean-based company. It ranges from finance to portfolio management, going through adoption, technical practices, and a lot more.
The book is long. And tiring. But full of very valid useful bits of knowledge. I’ve covered Part I in a previous post.

This post is about Part II, which is about how to explore the opportunities that exist around us. The authors point out very early on that we tend to jump into solutions (and fall in love with them) before exploring the problem space. To make matters worse, once we start spending to build a solution, the sunk cost fallacy makes us grip even more tightly to the solution we have started to build, regardless of the results we are getting. To avoid this, the key pieces of advice are:
  1. Define the measurable business outcome to be achieved
  2. Build the smallest possible prototype capable of demonstrating measurable progress towards that outcome
  3. Demonstrate that the proposed solution actually provides value to the audience it is designed for

The first one helps us shape a goal that remains beneficial for the product even if the players (a.k.a. the product's developers) follow unconventional paths to reach it. The second allows us to spend as little as possible to try out ideas that may bring in appropriate revenue. Finally, the third one requires us to ensure that we are, indeed, being rewarded by our audience for solving a real problem they needed solved.

In order to achieve those three steps, we want to ensure that, instead of listing requirements, we focus on the hypotheses we are putting forward. Something like "If we provide a high-fidelity visual representation of the product we are offering, then we will observe a higher percentage of customers adding those products to their cart" instead of "Display high-quality pictures on the product page". As a consequence of the hypothesis approach, rather than jumping straight into implementing high-quality pictures, one should build a version (usually very simple, narrow and small) of the "if" piece and observe whether the "then" piece actually holds. So one would run an experiment such as the following:
You would pick a single product (or a handful), take high-quality pictures, and show them to some customers while others would see the old, lower-quality pictures. If the hypothesis didn't hold, you would have avoided the trouble of putting in place the processes and operations needed to get high-quality pictures for all products, only to discover afterwards that they don't yield any quantifiable result.
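As a rough illustration of how such an experiment could be evaluated (all numbers and function names here are hypothetical, not from the book), one could compare the cart-add rates of the two groups with a simple two-proportion z-test:

```python
import math

def add_to_cart_rate(adds, visitors):
    """Conversion rate: fraction of visitors who added the product to their cart."""
    return adds / visitors

def two_proportion_z(adds_a, n_a, adds_b, n_b):
    """Two-proportion z-test: is the difference between the groups' rates
    larger than we would expect from chance alone?"""
    p_a = adds_a / n_a
    p_b = adds_b / n_b
    pooled = (adds_a + adds_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value: probability of seeing at least this lift by chance.
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Hypothetical results: group A saw the old pictures, group B the new ones.
z, p = two_proportion_z(adds_a=120, n_a=2000, adds_b=165, n_b=2000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

With made-up numbers like these, a small p-value would suggest the "then" part of the hypothesis holds; a large one would let you kill the idea before operationalizing it.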

A few other highlights of that section:
  • "The job of an experiment is to gather observations that quantitatively reduce uncertainty." Note that the higher the level of uncertainty, the less information is needed to reduce it. Conversely, the less uncertainty we have about a specific thing, the more information we need to reduce it even further. This hints that we should search for a balance: do all the easy/low-fidelity/quick experiments that reduce the bulk of the uncertainty, but without investing too much to reduce it further.
  • Measurement: a quantitatively expressed reduction of uncertainty based on one or more observations. This means a measurement is a probability distribution that models how likely each result is to happen.
  • The fundamental question is not whether we can build a solution but rather whether we should. And the answer should come, first, from a shared understanding of what problem is being addressed and how we might tackle it. A useful tool is the Business Model Canvas. There are other canvases to test product/market fit, such as the Lean Canvas, the Opportunity Canvas and the Value Proposition Canvas.
  • A Minimum Viable Product, as defined by Marty Cagan, is "the smallest possible product that has three critical characteristics: people choose to use it or buy it; people can figure out how to use it; and we can deliver it when we need it with the resources available — also known as valuable, usable and feasible", to which the authors add "delightful". This is not Eric Ries' definition, which Cagan rebrands as an MVP test; Cagan's describes something that already has commercial value (as opposed to Ries' definition, which serves to show potential value).
  • There are many different types of MVPs. A few are:
    • Paper: Throwaway hand-sketched drawings to illustrate a user experience or design.
    • Interactive prototype: Clickable, interactive mockup of the product or design
    • Concierge: A personal service instead of a product, which manually guides the customer through a process using the same proposed steps to solve the customer problem in the digital product.
    • Wizard of Oz: A real, working product; behind the scenes, however, all product functions are carried out manually, unknown to the person using the product
    • Micro-niche: Reduce all product features to the bare minimum, socialize and drive paid-for traffic to the product to find out if customers are interested or willing to pay for it
    • Working software: Fully functioning working product to address a customer problem, instrumented to measure customer behavior and interactions
  • Focus on The One Metric That Matters (OMTM). It should be close enough to the hypothesis that it answers whether the assumptions hold. It provides a ground for focused discussion, and it should be timely so that it can be quickly acted upon.
  • Defining what we measure defines how we will behave. Vanity metrics, as described by Ries, do not offer guidance on what action to take and are, therefore, useless. Good metrics change the way you behave. A couple of examples: "Number of visits" vs. "Funnel metrics or cohort analysis"; "Number of downloads" vs. "User activations". For service-oriented businesses, use cohort analysis on Acquisition, Activation, Retention, Revenue and Referral (AARRR, the pirate metrics).
  • In addition to Ries' three growth engines (Viral, Paid, Sticky), enterprises should also consider two others: Expand, in which the initial business model grows by category, geographically or into adjacencies; and Platform, in which, upon establishing a successful product, the company develops other products that integrate with the first one, transforming the whole into a platform.
  • When migrating between investment horizons, there are disruptions. These should happen very consciously and with some enablers. From explore to exploit, there are five:
    • Market: The early adopters must be part of a larger group, and we must understand how to reach that group
    • Monetization model: How is the investment going to be recovered? This tends to be hard to change later but is critical to success.
    • Customer adoption: While getting customers matters, it is important not to sacrifice your vision and product to attract users quickly.
    • Don’t “big bang" releases: Test and learn, use alpha and beta launches frequently and surely.
    • Team engagement: Collaboration needs to keep flowing between the innovation and operations teams.

And you can now keep going with Part III.