Do your experiments fail even before they are run?

(Many thanks to Scott W. Ambler for his tweet which inspired this article: “The experiment failed successfully!”)

One of the many traits associated with business agility is the frequent running of experiments. Experimentation might mean assessing the market viability of a new business offering, determining whether a proposed solution will overcome a technical uncertainty, or understanding whether a specific practice such as pair programming can improve delivery outcomes.

Most of us studied the scientific method during our formative years, but as a refresher, it can be distilled into the following four steps:

  1. We define a hypothesis
  2. We design an experiment to test that hypothesis
  3. We run the experiment in a controlled manner
  4. We study the resulting data and determine how to proceed based on that data
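To make these steps concrete, here is a minimal sketch in Python of how a delivery team might test a hypothesis such as “pair programming reduces average cycle time”. The cycle-time figures and the 0.05 significance threshold are hypothetical, chosen purely for illustration; a real experiment would need a proper design and sample size. The point is to show where the hypothesis, the experiment, and the decision rule each fit.

```python
import random
import statistics

# Step 1: define a hypothesis -- pair programming reduces mean cycle time.
# Step 2: design an experiment -- compare cycle times (in days) of stories
# delivered by paired vs. solo developers over the same period.
# NOTE: the figures below are made up purely for illustration.
paired = [3.1, 2.8, 3.5, 2.9, 3.0, 2.6, 3.2, 2.7]
solo = [3.9, 3.4, 4.1, 3.6, 3.8, 3.3, 4.0, 3.5]

observed_diff = statistics.mean(solo) - statistics.mean(paired)

# Step 3: run the analysis in a controlled manner -- a permutation test:
# if pairing made no difference, randomly reshuffling the labels should
# often produce a difference at least as large as the one we observed.
random.seed(42)  # fixed seed so the sketch is reproducible
pooled = paired + solo
n_paired = len(paired)
trials = 10_000
at_least_as_extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[n_paired:]) - statistics.mean(pooled[:n_paired])
    if diff >= observed_diff:
        at_least_as_extreme += 1

p_value = at_least_as_extreme / trials

# Step 4: study the data and decide how to proceed. The 0.05 threshold is
# a convention we agree on up front, not a law of nature.
print(f"Observed difference: {observed_diff:.2f} days, p = {p_value:.4f}")
if p_value < 0.05:
    print("Data supports the hypothesis; consider wider adoption.")
else:
    print("Data does not support the hypothesis; back to the drawing board.")
```

A permutation test is used here because it makes few distributional assumptions; what matters most is that steps 1 and 2 are written down, and the decision rule agreed, before step 3 is run.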

Applying this process in a business context requires certain factors that we tend to take for granted in a scientific setting. If any one of them is absent, we may be building a house of cards that will collapse under scrutiny.

So what factors do we need to confirm are present?

  • An openness to receiving data which refutes our hypothesis. The individuals running the experiment might be content knowing they need to go back to the drawing board, but will the stakeholders supporting them be equally open-minded? This requires a sufficient level of psychological safety within the group conducting and sponsoring the experiment, as well as a lack of ego tied to the hypothesis itself.
  • The willingness to take the time required to define a good hypothesis. If we don’t spend the effort to define the problem and create a shared understanding within the team of what we are trying to prove or disprove, we may not know how to proceed when we reach the final step of the scientific method.
  • An appetite for designing a minimally sufficient experiment. This is the Achilles’ heel of many MVPs. The hypothesis about market receptiveness is well understood, but stakeholders are unwilling to trim product scope and solution approach to the minimum required to run the experiment. They might still gather valid data, but the cost of delay in learning something valuable is much greater.
  • The integrity to construct an experiment which might disprove our hypothesis. Stacking the odds in favor of a desired outcome will yield positive data, but it won’t help in the long run. For example, suppose a leadership team wants to assess whether an adaptive approach will deliver a higher degree of success than a predictive one. If they staff the pilot team with the “best of the best”, the results will likely be better than on prior projects, but that is a result of the people more than the process.

John Lasseter – “With science, there is this culture of experimentation, and most of the time, those experiments fail.”
