March Madness and the availability heuristic

March 18, 2015

(Editor’s note:  This is a slightly modified re-print of a popular piece we published in April 2013.  Our readers enjoy the subject of how to improve their decision making skills, especially when sports can provide the context.)

Decision making and cognitive biases are common themes here at NVSE.  We’ve written about good board decisions, how the popularity of the Mona Lisa is based on circumstance rather than inherent artistic qualities, how the design of the decision-making process affects the decision, and how managers can undermine their decision making by over-relying on common sense, rationalizing instead of being rational, or making unconscious choices.

The availability heuristic refers to placing too much emphasis on data that is quick and easy to gather.  However, in this particular instance – clutch performance during March Madness – it just might be a reliable bellwether rather than a problematic bias.

In “Method to the Madness,” Peter Keating explains “why NBA GMs should go mad for the breakout stars of March.”

NBA teams scout hundreds of players across the country, tracking their every move for months on end, and put dozens of prospects through extensive workouts. Yet when it comes to draft night, clubs routinely rely on the same measure the rest of the country uses: NBA GMs, it turns out, favor players who had surprising success in the postseason. And the even bigger shocker? They’re right to do so.

Economists Casey Ichniowski of Columbia and Anne Preston of Haverford studied March Madness because they wanted to investigate whether employers often overweigh recent and vivid information when making decisions. Earlier research had shown that when we make judgments, we rely on data that’s accessible — the quickest and easiest stuff to gather — even when we know it’s important to be objective. Social scientists call this the “availability heuristic,” and it explains why Americans wrongly believe tornadoes kill more people than asthma: A spectacular catastrophe is easier to recall, so we overestimate its likelihood…

On average, a player who scores four points per game above expectations on a team that wins one more game than projected in the tournament will boost his draft position by 4.7 slots, according to Ichniowski and Preston. Now, here’s the thing: Players who get March Madness bumps deserve them. Ichniowski and Preston also examined what happened to players after their draft days… In every case, the group that got draft boosts from the NCAA tournament played better than those who didn’t. If anything, teams undervalue March Madness as a predictor of future success and stardom.

I usually repeat “sample size, sample size, sample size” about as often as and in the same tone that Jan Brady wailed “Marcia, Marcia, Marcia,” so I was shocked by these results. For most players, March Madness lasts only a game or two, yet it sends a signal powerful enough to last entire careers.

“I’m thinking of showing my sports class a clip of Michael Jordan beating the Cavaliers and asking if you could have ever predicted this, so that maybe you take MJ at No. 1 instead of No. 3,” Ichniowski says. “Then I’d like to show his NCAA shot [winning the national championship for North Carolina] and move to the question of how much to weight March Madness performance.” The answer: At least as much as NBA GMs do now. The NCAA tournament, with its pressure-packed contests featuring the best college players in the country in front of gigantic audiences, is truly a meaningful simulation of NBA conditions.

UPDATE (6/14/15):

Same sport, different draft; same struggle, different bias:  a story about Kristaps Porzingis, a 7’1″ 19-year-old playing in Liga ACB, perhaps the second-best basketball league in the world.  He’s “the type of prospect that has historically torn coaching staffs and front offices apart” as they try to assess his NBA bona fides before the draft.

All draft picks are crapshoots, but some feel like crappier shots than others. It’s uncouth to plainly say, “I have a bad feeling about this guy,” so we do our best to justify our vague inklings. The stronger our distaste, the stronger our effort. So of course it’s the foreigner with the spindly frame and the funny name who has people [grasping for answers]. … What is the draft if not complete pseudoscience?  …

He’s like a young Robin trying on Batman’s utility belt — the tools are there, and they’re incredible. They just don’t fit yet, and you can’t be too sure that they ever will. His issues on defense are the same most players his age experience. He bites on pump fakes, he gets caught ball-watching, and he can be a step slow recovering to his man. But there is a chance that, five years down the line, he’ll be doing things that only a handful of NBA big men can do at a high level.

Maybe all of that hokey pseudoscience will prove prescient. Drafting isn’t an art, and it isn’t a science, but if you squint hard enough, it can look like a happy medium. It’s all just waves of confirmation bias on both ends of the spectrum posing as data points, right? It can tell you anything you want it to if you wait long enough. But it can’t, at the very moment, tell you the fate of Kristaps Porzingis. And so, like any other year, we’ll go on trying to find some illuminating detail that will solve the puzzle once and for all, blissfully ignorant to the fact that there’s only one person with the final pieces.

As with the NFL draft, pre-draft metrics have only some predictive power.  The data can’t predict a player’s ceiling, and can’t account for the kind of system a player will enter, the talent he’ll have around him, the luck he’ll have with injuries, or the intangibles he possesses.

If you’re looking for a bellwether of NBA success, look to the NCAA tournament.  Its pressure-packed contests, featuring the best college players in the country in front of gigantic audiences, turn out to be a meaningful simulation of NBA conditions.  Even though it’s a very small sample size – for most players just a game or two – the data show that players who move up the draft board as a result of their performance in March Madness deserve it.

The crucial distinction to remember on this topic is that Big Data has limits.  While it may help make accurate predictions, guide knotty optimization choices, or help avoid common biases, it doesn’t control events.  Models can predict the rainfall and days of sunshine on a given farm in central Iowa but can’t change the weather.  A top draft pick may or may not develop based on the system, surrounding talent, etc.

In our experience the best results often come from a combination of deliberation and intuition.  Too much data can lead to analysis paralysis, common sense can be a shockingly unreliable guide, and those who rely on intuition alone tend to overestimate its effectiveness.

© 2023 Ballast Point Ventures. All rights reserved.