The greatest comeback ever and the limits of decision models
McKinsey’s piece about the benefits and limits of decision models makes a crucial distinction: some outcomes can be influenced by leadership and some cannot. Big data may help make accurate predictions, guide knotty optimization choices, or help avoid common biases, but it doesn’t control events. Models can predict the rainfall and days of sunshine on a given farm in central Iowa but can’t change the weather.
Executives, however, are not concerned only with predicting things they cannot influence. Their primary duty—as the word execution implies—is to get things done. The task of leadership is to mobilize people to achieve a desired end. For that, leaders need to inspire their followers to reach demanding goals, perhaps even to do more than they have done before or believe is possible.
One fantastic example of this is the story of the 2013 America’s Cup, in which “the richest and possibly most prohibitively favored team in the history” of the sport found itself down 8-1 in the first-to-9 competition. Skipper Jimmy Spithill was able to look past the data and rally Oracle Team USA through a combination of leadership and intuition:
In Against the Wind – One of the Greatest Comebacks in Sports History, the Wall Street Journal introduces us to the optimistic conclusions of Oracle’s very expensive modeling software:
The 11 sailors were a collection of international superstars. The engineers who designed the yacht and the programmers who built the software used to plot strategy had no peer. Oracle’s computer simulations suggested the AC72—which cost at least $10 million to build—wasn’t just the better boat in the final, it was the fastest sailboat ever to compete for the Cup, capable of 48 knots, or about 55 mph.
After falling behind repeatedly on the upwind leg of the race, the crew picked up on a hint – but the model still reassured:
Both Oracle and New Zealand had been foiling downwind. [Explanation of ‘foiling’ in video, below. – ed] But New Zealand’s boat was getting partially up on its foils on the upwind leg, too.
Oracle had experimented with upwind foiling five weeks before the race … Nearly every time they tried, Oracle’s hulls would fall off the foils and the bows would nose-dive into the water… There was little time to experiment with the new technique, and Mr. Ozanne’s software indicated Oracle would easily outsail New Zealand upwind even without foiling.
And yet… Team USA lost ground on every tack of the upwind leg. It was then that Mr. Spithill chose to demote the model and do a little observing and intuiting:
Sailing upwind involves a trade-off between speed and distance—the tighter the angle to the wind, the shorter the total travel distance but the slower the boat moves. Mr. Ozanne’s computer program had given a target: Sail into the wind at a relatively tight angle of about 42 degrees, which would produce the optimal mix of speed and travel distance.
Looking at the video, Mr. Spithill could see that the Kiwis had come to a different conclusion. They were sailing at much wider angles to the wind—about 50 degrees, on average. They were covering more water but reaching higher speeds—more than enough to offset the greater distance traveled. Foiling appeared to be the key. Oracle’s computers hadn’t anticipated such speeds.
Mr. Spithill didn’t relish losing the Cup to a team who could say, rightfully, that their win represented a triumph for the craft of sailing. With his team’s prospects getting dimmer by the hour, Mr. Spithill decided it was time to stop obeying the computers and start thinking like sailors.
The next morning, a scheduled off day, Oracle’s sailors made upwind foiling the focus of their practice. Rather than sailing 42 degrees off the wind, what the team called their “high and slow” mode, Mr. Slingsby suggested trying 55 degrees, which he called “low and fast.” When the boat got moving fast enough to get up on its foils, the crew made another discovery. It was able to tack more quickly—13 mph rather than 10 mph.
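The trade-off the two crews were weighing is what sailors call velocity made good (VMG): only the component of boat speed pointed toward the upwind mark actually closes distance on it. A minimal sketch with invented speeds (the article doesn’t give the actual figures) shows why a wider, faster “low and fast” angle can beat a tighter, slower one:

```python
import math

def upwind_vmg(boat_speed_knots, angle_to_wind_deg):
    """Velocity made good upwind: the component of boat speed
    that actually closes distance on an upwind mark."""
    return boat_speed_knots * math.cos(math.radians(angle_to_wind_deg))

# Hypothetical speeds for illustration only (not the race data):
# a non-foiling boat pinching at 42 degrees vs. a foiling boat
# footing off at 50 degrees.
high_and_slow = upwind_vmg(20, 42)   # tight angle, slower boat
low_and_fast = upwind_vmg(30, 50)    # wider angle, foiling speeds

print(f"high and slow VMG: {high_and_slow:.1f} kt")
print(f"low and fast VMG:  {low_and_fast:.1f} kt")
```

With these made-up numbers, the extra boat speed more than offsets the longer path, which is exactly the pattern Mr. Spithill saw in the video of the Kiwis.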
So what about that expensive model?
Back at the Oracle base, Mr. Ozanne said he had found the flaw in the computer model. To get going fast enough upwind to get on the foils, the yacht initially had to sail at an angle that would force it to cover more water—something the computer wasn’t programmed to allow. When Mr. Ozanne input the wider angles into the software, the computer had recalculated the speed and showed the boat could sail faster that way, confirming what the sailors had found.
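The flaw Mr. Ozanne describes is a classic one: a hard constraint baked into an optimizer hides the better solution from the search entirely. A toy illustration, with an invented polar curve and an invented 48-degree foiling threshold (nothing here is the actual Oracle model):

```python
import math

def boat_speed(angle_deg):
    """Toy polar: speed grows with angle off the wind, then jumps
    discontinuously once the hulls lift onto the foils. The numbers
    and the 48-degree threshold are made up for illustration."""
    if angle_deg >= 48:
        return 34.0                                  # foiling
    return 28.0 * math.sin(math.radians(angle_deg))  # displacement mode

def vmg(angle_deg):
    """Upwind velocity made good at a given angle to the wind."""
    return boat_speed(angle_deg) * math.cos(math.radians(angle_deg))

angles = range(35, 61)

# "Buggy" model: a hard cap on the sailing angle excludes the wider
# angles entirely, so the optimizer never sees the foiling regime.
capped = max((a for a in angles if a <= 45), key=vmg)

# Corrected model: the full range is searched, and the wider,
# foiling angle wins despite covering more water.
full = max(angles, key=vmg)

print(f"capped optimum: {capped} deg, VMG {vmg(capped):.1f} kt")
print(f"full optimum:   {full} deg, VMG {vmg(full):.1f} kt")
```

The point of the sketch is the mechanism, not the numbers: the capped search and the full search disagree, and removing the constraint is what lets the software “confirm what the sailors had found.”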
Oracle Team USA rattled off 7 convincing wins in a row, looking every bit the prohibitive favorite, but still faced high drama at the start and finish of the decisive 17th race. With the contest tied at 8 wins apiece, Spithill and his crew had to survive two last scares:
About 45 minutes before the start, Mr. Spithill heard a loud bang. A critical piece of the sail—a part attaching some of the flaps to the wing—had sheared off. The wing wouldn’t curve properly without it.
Two powerboats sped over and the maintenance guys climbed up onto the wing and started shooting hot glue everywhere. They finished the job about five minutes before the boat had to enter the starting area. Mr. Spithill and his tactician looked at each other and laughed.
As Oracle approached the finish line, Mr. Spithill glanced at one of his teammates, Kyle Langford, who was working in front of him. Mr. Langford, a 24-year-old last-minute addition to the crew, was in charge of adjusting the angle of the 13-story sail with a thick rope he held in his hands. There was nothing high-tech about this job, but it was absolutely crucial. If Mr. Langford dropped the rope, the yacht would quickly lose momentum and possibly capsize.
About three minutes from the finish line, the rope slipped out of Mr. Langford’s hands. He lunged and caught a piece of it with his left hand—just barely—and held on. Mr. Spithill laughed and said, “Nice catch, mate.”
No conversation on this topic would be complete without at least a quick reference to perhaps the most popular or ubiquitous example of the Data vs. Intuition debate: baseball’s sabermetrics, a.k.a. Moneyball. Returning to the McKinsey Quarterly article:
The notion that players could be evaluated by statistical models was not universally accepted. Players, in particular, insisted that performance couldn’t be reduced to figures. Statistics don’t capture the intangibles of the game, they argued, or grasp the subtle qualities that make players great. Of all the critics, none was more outspoken than Joe Morgan, a star player from the 1960s and 1970s. “I don’t think that statistics are what the game is about,” Morgan insisted. “I played the game. I know what happens out there… Players win games. Not theories.”
Proponents of statistical analysis dismissed Joe Morgan as unwilling to accept the truth, but in fact he wasn’t entirely wrong. Models are useful in predicting things we cannot control, but for players—on the field and in the midst of a game—the reality is different. Players don’t predict performance; they have to achieve it. For that purpose, impartial and dispassionate analysis is insufficient.