Micro-scale vs. Macro-scale Innovation Evaluations
Imagine your company has 20 innovation projects, each with an independent 50% chance of success. If a project fails, the company loses $500,000; if it succeeds, the company makes $1.5m. Now ask your managers whether they would take on such a project. What answer will you get?
Behavioral economist Richard Thaler posed exactly this question to a group of executives. Of the 23 managers asked, only 3 volunteered to take the risk.
When Thaler asked the CEO how many projects he’d like to see done, he of course responded: “All of them!”
This poses a problem. The CEO wants all projects done, but the managers are not willing to take them on. The CEO's perspective is completely rational and understandable. If half of the 20 projects fail, the company loses $5m, but the other half succeed and earn the company $15m. Viewed together as a macro-scale investment portfolio, the 20 projects yield an expected profit of $10m. That is a highly attractive investment.
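The portfolio arithmetic can be checked with a short sketch, using the hypothetical figures from the example above:

```python
import random

# Payoffs from the example: each project independently succeeds
# with probability 0.5, earning $1.5m, or fails, losing $0.5m.
P_SUCCESS = 0.5
GAIN = 1.5    # profit on success, in $ millions
LOSS = -0.5   # loss on failure, in $ millions
N_PROJECTS = 20

# Expected profit of one project and of the whole portfolio.
ev_project = P_SUCCESS * GAIN + (1 - P_SUCCESS) * LOSS
ev_portfolio = N_PROJECTS * ev_project
print(f"Expected profit per project: ${ev_project}m")    # $0.5m
print(f"Expected portfolio profit:   ${ev_portfolio}m")  # $10.0m

# How often does the whole portfolio actually lose money?
# Portfolio profit = 1.5*s - 0.5*(20 - s) = 2*s - 10, which is
# negative only if fewer than 5 of the 20 projects succeed.
random.seed(42)
trials = 100_000
losses = 0
for _ in range(trials):
    successes = sum(random.random() < P_SUCCESS for _ in range(N_PROJECTS))
    if GAIN * successes + LOSS * (N_PROJECTS - successes) < 0:
        losses += 1
print(f"Portfolio loses money in roughly {losses / trials:.1%} of runs")
```

Analytically, the portfolio loses money only when fewer than 5 of the 20 projects succeed, an event with binomial probability of about 0.6% — which is why the macro-scale view looks so attractive to the CEO, even though each individual project fails half the time.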
That’s not how the managers see it. A manager evaluates at the micro-scale: the individual project. If she succeeds, the best she can expect is a small bonus and a pat on the back. If she fails, she may not only forgo the bonus but even face the threat of being laid off. The possible gains are too small, while the risk is too high.
Micro-scale vs. Macro-scale Evaluations
Corporate incentive systems and evaluation processes suffer from multiple problems. One is that such an innovation portfolio is quickly broken down into measuring the performance of individual projects without considering the macro-scale context. The 50% chance that any given project will fail is ignored, and reasons are sought for why this particular project failed. If a project fails, it must be the manager’s fault. If a project succeeds, the same diligence is not applied: the debriefing never examines why the project succeeded, or rather, why it did not fail.
NASA’s space shuttle launches are a good illustration. Twenty-four successful launches did not require NASA to answer, or even ask, the question “Why did the shuttle launch *not* fail?” Only when the 25th launch – Challenger – failed did such questions have to be asked and answered, and the culprits identified and punished.
The other problem with such evaluation processes is hindsight bias. Decisions are made ex ante, with the incomplete information available during a project; ex post, with the more comprehensive information available after the project ends, the same decisions may look outright stupid. Hindsight bias leads us to believe that the outcome should have been known at the time, which of course it could not have been.
In behavioral science this is called the principal-agent problem. The CEO (principal) tends to blame the managers (agents) for making decisions that fail to maximize profit for the firm and instead serve their own interests, such as keeping their jobs. But here the fault clearly lies with the CEO, who has failed to create an environment in which managers and employees feel they can take risks without being punished when a risk doesn’t pay off. Richard Thaler calls this the dumb principal problem.
Managers and employees should be rewarded for decisions that were value-maximizing ex ante, given the information available at the time, even when they turn out to fail ex post. Only then will managers take on all 20 projects and maximize the company’s profit.