In 2000, Southwest Airlines Flight 1455 was carrying 142 passengers and crew from Las Vegas to Burbank airport in California. The pilot was a man named Howard Peterson. During landing, the plane approached the runway at too steep an angle and was travelling too quickly to be able to stop. Instead of pulling up gently at the gate, the plane smashed through the blast fence intended to slow down runaway aircraft and came to rest in the middle of a busy street in Burbank, pinning a car that was driving along the highway.
Thankfully, everyone survived.
When interviewed about the incident, Peterson explained that he had been concerned about another plane lined up to land ahead of Flight 1455, which his aircraft was rapidly catching up to. This is what led to the steep, fast approach. Both he and his first officer knew they were not lined up for a safe landing, and both knew the correct procedure: notify the tower of their issues, abort the landing, circle the airport and try again. Despite knowing all this, neither did anything about it at the time.
As the plane came to rest in the middle of the street, Peterson was captured on the cockpit voice recorder saying “well, there goes my career”.
Unfortunately for him, his prediction came true. Later that year, he and his first officer were fired by Southwest Airlines, which determined that the crash was due to pilot negligence. This was despite both pilots being extremely experienced, with tens of thousands of hours of flight time logged between them.
Although the consequences of failure in developing software products and features are thankfully much lower, I still see many of the same patterns show up. Seemingly experienced, well-trained teams and product leaders build products that skid dramatically to a halt, having provided no value to users or to the company. This often happens despite the information needed to detect and remedy the situation being available before it’s too late. I’ve seen many teams and product leaders damage their careers as a result of these failures.
In later years, NASA’s Ames Research Center studied a large number of airline crashes with circumstances similar to this one, where the cause of the accident was initially attributed to ‘human error’. In most of these cases the response had been to beef up training and/or disciplinary measures. NASA’s study instead attributed 75% of these accidents to ‘plan continuation bias’, defined as: “The unconscious cognitive bias to continue with the original plan in spite of changing conditions”.
In other words, once you’ve invested in creating a plan, you’re more likely to follow that plan blindly, despite information and signals telling you to do otherwise. When creating software products and features, we often succumb to the same thing. The more time and effort we’ve invested in a plan or a backlog, the less likely we are to veer away from it, even when it no longer serves us. This is often referred to as ‘get-there-itis’.
The bad news is that ‘get-there-itis’ gets worse the more stressed we are and the closer we are to our destination!
Recognising this bias means accepting that the root cause of this and many other crashes is not simply human error; they should be treated as system errors. The systems we create and work in should make it easy for us to succeed. Instead, they all too often make it difficult not to fail. We need to correct the systems in which we develop products for our customers.
Your likelihood of getting good outcomes from your product development is guided not only by the quality of your feedback loops, but also by the quality of the decisions you make as a result.
In the airline industry, one of the responses to ‘get-there-itis’ has been an approach called Crew Resource Management (CRM). The intention of CRM is to encourage dissenting voices and challenge in situations where mistakes have high consequences. Crew members are invited to question decisions (regardless of where they sit in the hierarchy) and to critically analyse the plan using all of the information available to them. The goal is to improve the quality of decision making and make it easier to succeed.
Why not make a similar systemic improvement to make it easier for our teams and products to succeed? Which brings us to Sprint Reviews. I’ve long had an issue with Sprint Reviews: they are often empty demonstrations of what’s been built, delivered to a roomful of stakeholders who are too far from the problem to have any useful input. I’ve seen too many PowerPoint-driven meetings, too many polite rounds of applause, too many teams no better off than they were before. After all, what’s the point of these sessions if nothing ever changes as a result?
Instead, use these meetings as true navigation aids that help overcome our plan continuation bias. If the pilot in the story above had had some impartial observers to challenge his decision to continue with his plan, would he still have made the same mistakes?
The next time you’re holding a sprint review, I’d recommend trying the following structure:
🖥 Demonstrate what you’ve built and where the product is at
This is where most teams stop. They demo what they've built, ask half-heartedly for feedback and then wrap up.
🎯 Share your product goals
Share:
What does success look like?
What are you trying to achieve?
What metrics or behaviours are you trying to change?
📊 Share the relevant insights and metrics you’ve gathered
Share:
What have you learned?
What information do you have that lets you know you’re on your way to achieving your goals?
What are you measuring to show that you’re building the right thing?
🗺 Share the plan
Given what you know, what are you going to do next? What’s the next important problem to solve or step to take?
🧐 Invite dissent
Generate constructive feedback and challenge from people who aren’t as invested in the plan and the backlog you’ve created.
Let people know that their role in the session is to help you navigate by asking difficult questions and challenging your assumptions and plans. They don’t have the same biases as you do, so take advantage of that! Task them with poking holes and helping you keep your aircraft from ending up in the middle of a busy highway.
------------------------
https://www.latimes.com/archives/la-xpm-2001-jul-11-me-20945-story.html
https://humansystems.arc.nasa.gov/flightcognition/article2.htm
https://en.wikipedia.org/wiki/Crew_resource_management