Category Archives: The Art & Science of Investing
“All happy companies are different because they found something unique that gives them a vision and a monopoly of sorts; all unhappy companies are alike because they’ve failed to escape the essential sameness of competition.”
So says Peter Thiel – business subversive, founder of PayPal, first outside investor in Facebook, one of Silicon Valley’s leading investors, thinkers, and, since finding himself portrayed in the movie The Social Network, celebrities.
In the interview clip below, Mr. Thiel also says that we have “a very powerful but very narrow cone of progress around the world of bits, not so much in the world of atoms.”
The entire Uncommon Knowledge interview – which discusses competition in business, the value of monopolies, and the battle between humans and computers – can be found here.
Explosions of Creativity is a review of Peter Thiel's Zero to One – Notes on Startups, or How to Build the Future, a book based on "careful" notes taken by a student during a course on innovation Thiel taught at Stanford in 2012.
One suspects that the course was more a seminar bull session than a rigorous academic analysis (not that there’s anything wrong with that!) and it does not escape the genre, set forth in the subtitle, of “Notes.” The result is a loose collection of aphorisms and bits of wisdom, not a sustained inquiry. Nor does the book probe deeply into Thiel’s own experience. There are occasional references to PayPal, but the bloody details of entrepreneuring in one of the most cutthroat eras of business history are omitted…
To Thiel, the only valuable ideas are those that most other people disagree with, and the initial point for successful entrepreneurs must be: “What valuable company is nobody building?” He thinks the dot-com crash taught the wrong lessons: It convinced Silicon Valley to eschew grand visions, avoid plans in favor of opportunistic flexibility, focus on improving on existing products already offered by competitors, and avoid products that need intensive sales efforts.
All of these ideas are wrong. A great startup must have a vision and a plan, it must avoid competition, and it should recognize that if a better mousetrap falls in a forest and no one knows about it, it might as well not exist.
To have a shot at success, a startup must have good answers to seven questions:
- Engineering — can you create a breakthrough, not just incremental improvements? He uses the figure that technical improvements must be ten times as good as incumbents' to succeed.
- Timing — is now the right time?
- Monopoly — are you starting with a big share of a small market?
- People — do you have the right team?
- Distribution — can you deliver the product?
- Durability — is your market position defensible over time?
- The secret — have you identified a unique opportunity that others do not see?
The goal is market power, usually based on combinations of technical superiority, network effects, scale economies, and branding.
These are not earth-shaking insights, but it is useful to be reminded of them, because they are regularly ignored. Thiel notes the problem with the wave of green tech that swept over Silicon Valley in the Aughts: The companies lacked good answers not just to one or two of these questions; they had bad answers for all seven.
That title may sound a little like early stage investing, but it comes from a description of October baseball. The Super-Rotation Rivalry explores the data behind the decision-making of two teams with very recent playoff history: the Detroit Tigers and Oakland A’s. (In both 2012 and 2013, Detroit eliminated the A’s because their ace – Justin Verlander – dominated in the final deciding game of the series.)
Both teams acquired aces at the trade deadline based on the theory that Moneyball may deliver results over a 162-game season, when the math has time to work, but in a short playoff series a team can be undone by a dominant pitcher or by cluster luck.
At the time it appeared as though those teams were likely to meet in the playoffs for a 3rd year in a row, so there was an element of game theory layered on top of the data analysis. However, the A's swooned and landed in a single-elimination Wild Card playoff tonight in Kansas City. Up 3 games on August 7, they went 18-30 to lose the division to the Angels by 10 games.
So we'll have to wait to see whether the game theory plays out between the Tigers and A's, but we will gather one more datum on the Moneyball-in-a-short-series argument. (In this case, a very short series…)
Can stockpiling aces reduce playoff unpredictability? It turns out the theory is hard to prove:
It’s possible there’s something to the “pitching wins pennants” hypothesis, but if so, it’s hard to see it in the stats. In 2012, Colin Wyers — then the director of research at Baseball Prospectus, now a “mathematical modeler” for the Astros — and I looked for evidence that teams with strong no. 1 starters outperformed expectations in the playoffs. We identified the ace of each playoff team from 1995 to 2011, rated each one using a normalized measure of ace-hood, and then checked for any correlation between the strength of each ace and the difference between his team’s regular-season and postseason winning percentages. There wasn’t one, which suggests that once you know a team’s regular-season record, knowing how good its best pitcher is doesn’t add any predictive power. Nor could Colin find any evidence of an effect after rerunning the analysis using the entirety of a team’s playoff rotation instead of its ace alone…
So why doesn’t the quality of a team’s top three starters or its ace register as significant? For one thing, the differences between teams are compressed in the playoffs, relative to the regular season: Teams with terrible staffs don’t make it to October, so the gulf between the best- and worst-pitching playoff teams isn’t as stark as we’re used to seeing during the season’s first six months. Perhaps more importantly, there’s more than one way to win baseball games, and even under an expanded playoff format, teams don’t get to October without doing something well. A team with an inferior pitching staff often makes up for its weakness on the mound by being better on offense.
If there’s no clear evidence that pitching acquires extra significance in the postseason, why is the belief that it does so persistent? It might be because it’s so hard not to notice the extent to which scoring is suppressed in the playoffs. There’s no question that playoff games tend to produce fewer crooked numbers: Last season, teams scored an average of 4.17 runs per game during the regular season, but in the postseason, their output declined to 3.78 runs per game, a 9.4 percent reduction. That figure fluctuates from year to year — in 2012, teams scored 19.2 percent fewer runs per game in the playoffs — but the direction of the difference is usually the same: down. During the 1995-2013 wild-card era, the gap has been exactly one run per game (half a run per team), or 10.6 percent.
Weather explains some of that effect; playoff games can be cold, and the lower the temperature, the less far the ball flies. Defense also plays a part, since playoff teams tend to be better than average at converting balls into outs. The bulk of the decline in scoring, however, stems from the difference in the postseason pitcher pool. … The pitchers on a given team’s postseason pitching staff are generally about half a run better than the same team’s full regular-season staff, and teams generally score about half a run less per game in the playoffs. The postseason scoring mystery is solved: It’s not that hitters lose their mojo once the calendar flips to October, it’s that they face superior opponents.
So in a sense, pitching is better during the playoffs, in that a team’s worst arms generally aren’t invited.
As it turns out, there are a few other hard-to-prove baseball theories that may be false:
Because October baseball subjects fans to a disquieting combination of disproportionate importance and exceptional unpredictability, it’s a fertile breeding ground for suspect narratives that attempt to explain small-sample postseason success or failure. Over the next few months, you might hear, for instance, that teams that “back into the playoffs” after a September slump are at a disadvantage against teams that end the regular season on a high note. Not so. You might be told that teams that rely on the home run can’t score in the playoffs, when small ball rules. In fact, the opposite is the case. Surely momentum matters? Uh–uh. And we all know that there’s no substitute for postseason experience — except for a lack of postseason experience, which works just as well.
In honor of the 75th anniversary of the release of The Wizard of Oz, we offer three thoughts about a movie whose plot was once humorously summarized as: “Transported to a surreal landscape, a young girl kills the first person she meets and then teams up with three strangers to kill again.”
1. Predicting technological trends is not for the weak at heart – and that’s before one tries to protect the IP and find a way to profit from it. The road to failure is paved with innovations that couldn’t quite achieve a sustainable business model. The evolution of color film is an excellent example.
The Wizard of Oz is often erroneously thought to be the first color film. Not so. The first true color still image was produced in 1861 (based on the same RGB principle in use today), and the first instance of color recorded in film was in 1910. Technicolor was invented in 1917 but it wasn’t until the introduction of their three-color camera in 1934 that the first viable full-color system came to the movies.
Instead of using a single piece of film, the three-color camera used bulky optics to split the image so that it could be recorded simultaneously on three strips of film. This meant that Technicolor had to be shot with a special camera that weighed several hundred pounds. It also required much more light than black and white cameras.
The lights on the set of Oz were so bright that Dorothy’s blue and pink (!) dress appears blue and white.
2. Success often hinges on external dependencies within the business ecosystem. Despite good reviews and 6 Academy Award nominations, the film took roughly a decade to turn a profit due to the astronomical budget ($2.7 million) and the low ticket price ($0.25). It was re-released in 1949 & 1955, but it was a new technology – broadcast television – that took a marginally profitable film and turned it into an institution and source of countless pop culture references. The initial broadcast in 1956 drew 45 million viewers.
3. The Dark Side of the Rainbow (a.k.a. The Dark Side of Oz or The Wizard of Floyd) might be the most entertaining example of chemically-enhanced confirmation bias we've come across.
At some point in the ’90s, word went around that Pink Floyd’s 1973 album “Dark Side of the Moon” synced up with the movie in eerie ways, producing moments where the film and the album appear to correspond with each other. E.g.,
- “The Great Gig in the Sky” meshes well with the tornado.
- The scarecrow dances during the track “Brain Damage.”
- The heartbeat at the album’s close coincides with Dorothy listening to the Tin Man’s torso.
- The old Side 1 of the album ends just as the sepia-colored portion of the movie does. Some also claim the iconic dispersive prism on the album's cover reflects the movie's transition from black-and-white Kansas to Technicolor Oz.
~ ~ ~
N.B. – Our research for this piece turned up a few additional charming bits of film history:
- Although it lost the Best Picture Oscar to Gone With the Wind, it won for Best Original Score and Best Original Song ("Over the Rainbow"). The studio had come within an eyelash of cutting that song from the movie because the scene "dragged."
- The 1939 film was the 4th time L. Frank Baum’s story was adapted to the screen:
- The first was a 13-minute silent version entitled The Wonderful Wizard of Oz released in 1910.
- In 1925, a young Oliver Hardy – later of the Laurel & Hardy comedy duo – played the Tin Woodsman in another silent version.
- A nine-minute animated version was released in 1933. Though produced in color, the short was released in black-and-white because the production did not have the proper license from Technicolor.
- Casting notes:
- 20th Century Fox had wanted Shirley Temple to play Dorothy, but her singing chops posed a problem. Fox ended up losing the film rights to rival MGM and a young contract player at the studio named Judy Garland got the role.
- Actor Buddy Ebsen was initially cast as the Tin Woodsman and completed some scenes, but had to bow out due to an allergic reaction to the aluminum powder in his makeup.
- Margaret Hamilton, who portrayed the (old) Wicked Witch of the West, was only 36 at the time.
- Bert Lahr's Cowardly Lion costume was made from actual lion fur and weighed nearly 100 pounds.
- Dorothy’s dog Toto was paid $125 per week while the actors playing the residents of Munchkinland only received a reported $50 a week.
- The movie had two directors: Victor Fleming handling the Technicolor scenes set in Oz, and King Vidor overseeing the bookend black-and-white sequences set in Kansas.
Today is Erwin Schrödinger’s (he of the famous half-dead cat) 127th birthday. We found this terrific excerpt from his 1933 Nobel Prize address:
If I am to have an interest in something, others must also have one. My word is seldom the first, but often the second, and may be inspired by a desire to contradict or to correct, but the consequent extension may turn out to be more important than the correction, which served only as a connection.
Many things about our company turned out differently than we had expected… The Hayekian knowledge problem is not a mere abstraction. Our innovations that have driven the greatest economic value uniformly arose from iterative collaboration between ourselves and our customers to find new solutions to hard problems.
Success is often achieved in incremental, adaptive fashion – with failure counted on to make a brief cameo at some point along the way. We love the collaborative imagery of a “correction” being a “connection” to the “extension” of an idea. Perhaps the great scientist ought to have received a Nobel for nerd poetry to go along with the one in physics.
Here is an explanation of his “thought experiment” that does, and doesn’t, kill a cat:
For more on the topic of iterative collaboration, please see:
We’ve written frequently on the subject of cognitive biases and how to design decision making processes to account for them. A good process will entail astute management of the social, political and emotional aspects of decision making and address or at least understand the underlying biases of the participants.
We recently came across this piece in the archives at HBS Working Knowledge which introduces research on “fundamental attribution bias” (a.k.a. snap judgments), and how resistant that bias is to cures. Apparently it is so deeply rooted in our decision making processes that even highly trained people, warned explicitly of its dangers, remain susceptible.
People make snap judgments all the time. That woman in the sharp business suit must be intelligent and successful; the driver who just cut me off is a rude jerk.
These instant assessments, when we attribute a person’s behavior to innate characteristics rather than external circumstances, happen so frequently that psychologists have a name for them: “fundamental attribution errors.” Unable to know every aspect of a stranger’s back-story, yet still needing to make a primal designation between friend and foe, we watch for surface cues: expensive pants—friend; aggressive driving—foe.
The research looks at highly trained professionals – college admissions officers and hiring managers – and finds “how difficult it was to counteract the fundamental attribution error, and, particularly, how strongly its effects could be seen in these records.”
The first study asked professional university admissions officers to evaluate nine fictional applicants, whose high schools were reportedly uniform in quality and selectivity. Only one major point of variance existed between the schools: grading standards, which ranged from lenient to harsh. Predictably, students from "lenient" schools had higher GPAs than students from "harsh" schools—and, just as predictably, those fictional applicants got accepted at much higher rates than their peers. "We see that admissions officers tend to pick a candidate who performed well on easy tasks rather than a candidate who performed less well at difficult tasks," says Gino, noting that even seasoned professionals discount information about the candidate's situation, attributing behavior to innate ability.
Similar results can be seen for the second study, in which the researchers asked business executives to evaluate twelve fictional candidates for promotion. In this scenario, certain candidates had performed well at an easier job (managing a relatively calm airport), while others had performed less well at a harder job (managing an unruly airport).
As with the admissions officers, the executives consistently favored employees whose performance had benefited from the easier situation—which, while fortuitous for those lucky employees, can be disastrous on a company-wide scale. When executives promote employees based primarily on their performance in a specific environment, a drop in that employee’s success can be expected once they begin working under different conditions, Gino explains…
“We thought that experts might not be as likely to engage in this type of error, and we also thought that in situations where we were very, very clear about [varying external circumstances], that there would be less susceptibility to the bias,” she says. “Instead, we found that expertise doesn’t help, and having the information right in front of your eyes is not as helpful.”
The researchers do not yet have recommendations to offer as it relates to hiring, but we might have one in The Library at St. Pete: Who: The A Method for Hiring by Geoff Smart and Randy Street. The book outlines a hiring process that reduces the risk of making a bad hire – the costs of which can be great.
Back then we cited a joint study conducted by the NVCA and Dow Jones which outlined several factors that contribute to a good partnership for long-term growth, and highlighted two data points that we found insightful and mildly humorous:
Do you respect me or my money?
- 54% of VCs cite mentoring the CEO as a critical value-add; only 27% of CEOs see the value.
- 64% of CEOs see the ability to complete follow-on financings as a top value-add, and 34% say the same of facilitating exits; the corresponding VC numbers were 48% and 22%.
The money will always be important. After all, entrepreneurs should pick a financial partner who can provide additional capital as needed as their companies grow. But the best (sadly, not all) venture partners provide much more than money – valuable contacts, “been there, done that” experience when facing tough business issues and a sympathetic sounding board for entrepreneurs working under great pressure.
As was the case with another contributor at a different publication, the author of the Entrepreneur piece is either subconsciously thinking mostly about early-stage venture financing or is perhaps painting with too broad a brush. But he still makes a few valuable points:
Ultimately, Gray’s [author of the 1992 book Men are from Mars, Women are from Venus – ed] advice for better relationships applies: If founders and capital providers invest the time to understand their objectives deeply, they will have a productive relationship. The key is to find activities where they can make the other party better off.
Or, if you prefer, as we once put it in The fate of control (also from 2009):
It’s more about chemistry than control. How you react during the inevitable challenges of building a business together will define the relationship. Over time you learn to play to each other’s strengths and make the concessions and adjustments that a given situation demands.
This article on valuation from the Houston Business Journal is written from the point of view of middle-market investment banking, but it’s also relevant to term sheet negotiations between entrepreneur and venture capitalist. Higher EBITDA doesn’t automatically lead to higher multiples (and higher valuations).
The reality is that valuations are much more complex and are primarily a function of the underlying fundamentals of a business. These fundamentals might include growth opportunities, recurring revenues, customer and product diversity, entry barriers, proprietary products and high levels of free cash flow. Our experience tells us that different buyers can have widely divergent views of value based on their relative assessments of these underlying fundamentals…
It is important for private business owners to understand valuation drivers and to develop the financial and operating data that will enable buyers to properly assess the underlying fundamentals of their business. More clarity for a buyer leads to a higher level of confidence and a more attractive valuation for the seller.
It also leads to a higher level of confidence in the relationship. The early conversations about valuation (and control) begin to shape the personal chemistry crucial to a successful long-term partnership. Clarity and transparency, which make it easier for everyone involved to observe how decisions are being made, are much more important to hopeful-future-teammates than either side trying to squeeze maximum value out of a single transaction.
If a good tone is set early and maintained consistently, over time everyone on the team worries less about who’s in control and more about how to create the best scoring opportunity.
Here is the long-overdue “VIth” installment of our Vintage Future series, in which we take a tongue-in-cheek look back at the predictions of past generations of investors and futurists.
In our line of work it’s good to guard against the hubris inherent in projecting conventional wisdom too far out into the future, and to remind ourselves that today’s trend can be tomorrow’s punchline.
Predicting technology trends is not for the weak at heart – and that’s before one tries to protect the IP and find a way to profit from it.
These are among the reasons we affectionately call the really early stage of investing adventure capital, and consider ourselves a “growth accelerator” for established, rapidly growing businesses with strong management teams. We prefer to focus our efforts on assessing competitive and execution risk rather than product or business model risk, and we want to see tangible evidence of the unique value offered by a company’s product or service.
N.B. – previously featured in Vintage Future:
- Nine Technologies That Will Change Your Future
- Innovative Products from the Past that Never Were
- Ten Worst Internet Ideas
- William Shatner narrating MicroWorld 1980
- Crazy Patents
- The Chef of the Future
Ballast Point Ventures is pleased to announce a growth equity investment in PowerDMS, a cloud-based document management software company whose platform organizes policies and procedures online, allowing companies to distribute crucial documents collaboratively, message employees and capture signatures. Proceeds of the investment will be used to augment the company's sales and marketing team and enhance its technology platform by offering new features to its customer base, which includes organizations in law enforcement, public safety, healthcare, and retail.
Founded in 2001 by CEO Josh Brown, the robust software platform provides practical tools necessary to organize and manage crucial documents and industry standards, thereby helping organizations maintain compliance with constantly evolving industry accreditation protocols. Created as a software-as-a-service (SaaS) model, PowerDMS combines attributes of Governance and Risk Compliance (GRC) and Enterprise Content Management (ECM) into its software platform.
ESPN the Magazine asks, of the NFL Combine’s influence on the Draft, “How do you weigh a week of drills against three or four years of a player’s work?”
If you look at certain combine stats, they explain on average 20 percent of how well players perform during their first three pro seasons. That's probably a weaker relationship than most team executives would want, but it ain't zero… Phillips found that different measures matter for different positions. For instance, 40-yard dash time – sometimes derided by analysts who argue that players don't actually have to run 40-yard dashes in games – is the only skill that's significant for all positions. A player's weight is important for offensive linemen, defensive linemen and linebackers, while scores in the three-cone drill (which measures agility) matter for running backs and defensive backs. Other metrics are narrower in their predictive value… definitive answers haven't emerged yet from the fledgling research.
The data don’t predict a player’s ceiling, the “perfect storm awesomeness of Adrian Peterson or Patrick Willis.”
The raw data simply don’t know what kind of system a player will enter, or talent he’ll have around him, or luck he’ll have with injuries, or intangibles he possesses. But (the) stats do a pretty good job of separating the potential stars from likely busts… So looking at extreme cases from the class of 2014, Jadeveon Clowney’s 40 time was 0.3 of a second faster than any of the five best defensive linemen drafted in the past eight years, and his Phillips stats are better than 99 percent of players at his position. Among less famous prospects, keep a draft-day eye on Brandin Cooks, a receiver from Oregon State, whose blazing speed helped him achieve the third-best blend of stats among all wideouts since 2006… [picked Rd 1, #20 by the Saints] watch Minnesota’s Ra’Shede Hageman too [picked Rd 2, #5, #37 overall by the Falcons]. Just 11 defensive linemen over 300 pounds in Phillips’s database have shown better speed than Hageman did at the combine, and eight of them have gone on to successful NFL careers.
On the flip side, Ha Ha Clinton-Dix could go in the top 10, but he was below median in every key component of Phillips’s statistics for defensive backs at this year’s combine [picked Rd 1, #21 by the Packers].
We recently made a crucial distinction in another post on the topic of data and decision-making, entitled The greatest comeback ever and the limits of decision models: some outcomes can be influenced and some cannot. Big data may help make accurate predictions or guide knotty optimization choices or help avoid common biases, but it doesn’t control events. Models can predict the rainfall and days of sunshine on a given farm in central Iowa but can’t change the weather. A top draft pick may or may not develop based on the system, surrounding talent, etc.
In our experience the best results often come from a combination of deliberation and intuition. Too much data can lead to analysis paralysis, common sense can be a shockingly unreliable guide, and those who rely on intuition alone tend to overestimate its effectiveness. (They recall the times it served them well and forget the times it didn’t.)
In the wake of the last financial crisis, BoE Director of Financial Stability Andrew Haldane deployed an analogy about a Frisbee-catching dog to explain how complex (and sometimes frivolous) attempts at regulation push the limits of data modeling or even the nature of knowledge itself. The dog can catch the Frisbee despite the complex physics involved because the dog keeps it simple: run at a speed so that the angle of gaze to the Frisbee remains roughly constant.
So while old-fashioned intuition is not out of date, it's also unwise to rely only on one's instincts to decide when to rely on one's instincts. The dog's doing just fine, but if the problem involves more than a Frisbee, he might want to crunch some numbers too.
Specifically about this year's draft: we're never quite sure what to make of the draft – it's so over-hyped. Clowney seemed like the obvious first pick and Manziel is high risk, so that worked out as expected. A pretty efficient market overall, given the information available to the teams…