Regulation of complex adaptive systems

November 21, 2014

“Chess in the Dark”

Why do regulatory measures implemented in the wake of one crisis inevitably fail to prevent the next one?

Kevin D. Williamson writes that regulation of complex adaptive systems (such as financial markets) presents a challenge that is seldom appreciated or understood:

Every regulatory regime is explicitly or implicitly based on a model of how a particular system functions, but, for any system of meaningful complexity or sophistication, it is virtually impossible to develop a regulatory model that accounts for the effects of the regulatory regime on the system being regulated. (This is sometimes analogized to Kurt Gödel’s incompleteness theorems; whether that is an appropriate analogy I leave to the mathematicians.) The complex structured finance of the sort associated with mortgage derivatives and the like did not develop ex nihilo — it developed as a response to regulation and to political attempts to steer markets. We attempt to regulate markets as they exist, failing to account — and probably unable to account — for how regulation will change the behavior of the markets.

Rather than admit that regulation is having unintended effects, Professor Taub retreats into moralizing, denouncing the banks’ behavior as “accounting tricks” and “gaming the system.” But these are not tricks or loopholes or games — they are the laws of the land and the products of regulators. If we could for a moment set aside the cheap homiletics, we could meditate on the fact that our current regulations are having certain effects, some of which are other than what was intended, and that other regulatory innovations also will have effects other than those intended, and that our power to regulate is limited by our inability to predict or account for how markets and institutions will react to that regulation.

Andrew Haldane (currently the Chief Economist at the Bank of England) similarly argued that regulations become less effective as they become more complex, and likened the problem to playing Frisbee with a dog. Despite the complexity of the physics involved, catching a Frisbee can be mastered by an average dog because the dog keeps it simple:

The answer, as in many other areas of complex decision-making, is simple. Or rather, it is to keep it simple. For studies have shown that the Frisbee-catching dog follows the simplest of rules of thumb: run at a speed so that the angle of gaze to the Frisbee remains roughly constant. Humans follow an identical rule of thumb.

Catching a crisis, like catching a Frisbee, is difficult. Doing so requires the regulator to weigh a complex array of financial and psychological factors, among them innovation and risk appetite. Were an economist to write down crisis-catching as an optimal control problem, they would probably have to ask a physicist for help.

Yet despite this complexity, efforts to catch the crisis Frisbee have continued to escalate. Casual empiricism reveals an ever-growing number of regulators, some with a Doctorate in physics. Ever-larger litters have not, however, obviously improved watchdogs’ Frisbee-catching abilities. No regulator had the foresight to predict the financial crisis, although some have since exhibited supernatural powers of hindsight.

So what is the secret of the watchdogs’ failure? The answer is simple. Or rather, it is complexity. For what this paper explores is why the type of complex regulation developed over recent decades might not just be costly and cumbersome but sub-optimal for crisis control. In financial regulation, less may be more.

Haldane warned that “fundamental limitations of the human mind” thwart increasingly complex (and sometimes frivolous) attempts at regulation.
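As an aside, the dog’s rule of thumb is concrete enough to simulate. In the toy sketch below (all parameters are hypothetical, and the frisbee is treated as a simple projectile with no lift or drag), the dog locks the gaze angle it sees once the frisbee begins to descend, then runs, up to a top speed, at whatever pace keeps that angle constant:

```python
import math

def chase(vx=8.0, vy=12.0, dog_x=25.0, dog_vmax=13.0, dt=0.005, g=9.81):
    """Gaze-heuristic sketch: a frisbee is thrown from the origin toward a
    dog standing downfield at dog_x meters.  Once the frisbee starts
    descending, the dog fixes the gaze angle it sees at that moment and runs
    (no faster than dog_vmax) so as to hold that angle constant.
    Returns the dog-to-frisbee separation when the frisbee lands."""
    fx, fy = 0.0, 1.0              # frisbee released 1 m above the ground
    theta_star = None              # gaze angle, locked at the top of the arc
    while fy > 0.0:
        fx += vx * dt              # ballistic arc; lift and drag ignored
        vy -= g * dt
        fy += vy * dt
        if vy < 0.0 and theta_star is None:
            theta_star = math.atan2(max(fy, 0.0), dog_x - fx)
        if theta_star is not None:
            # Position that keeps the gaze angle equal to theta_star ...
            target = fx + max(fy, 0.0) / math.tan(theta_star)
            # ... approached no faster than the dog can actually run.
            step = max(-dog_vmax * dt, min(dog_vmax * dt, target - dog_x))
            dog_x += step
    return abs(dog_x - fx)         # separation at the moment of landing
```

With these made-up numbers the frisbee lands almost five meters short of where the dog started, yet the single angle-holding rule brings the dog to within about half a meter of the landing point, with no trajectory prediction anywhere in the loop; a dog that never moves (`dog_vmax=0`) misses by the full gap.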

In one of our earliest posts in 2009 we asked the same question, from the viewpoint of a board of directors:

The end of every boom-bust cycle during my lifetime has included a fin de siècle scandal:  insider trading punctuated the ’87 crash, accounting irregularities (think Enron and Worldcom) helped pop the tech bubble of the ’90s, and our most recent bust was characterized by lax governance at Fannie & Freddie and more than a few banks.

We all understand the business cycle, and we all understand human nature… but what about all those good governance measures that get implemented in the wake of each meltdown?  Why do they inevitably fail to prevent the *next* crisis?

Presumably, those companies and regulatory bodies have boards composed of accomplished and highly intelligent members, with personal wealth at stake. Weren’t they paying attention to, and paying consultants to implement, best practices in good governance? Ethics codes, audit and compensation committees, independent directors, regular meetings, well-constructed board packages…

It’s conceivable that a board member here or there could be corrupt or asleep – but entire boards? Across multiple companies and regulatory agencies? Unlikely. It’s more likely that they were following the then-current best practices for strong and effective board oversight.

There is more to strong board performance than best practices. The critical factor is a ‘robust social system’ in which members’ informal modi operandi ensure that all the well-designed board processes function properly. Good boards combine tension and mutual esteem.

This is especially true when dealing with complex and detailed regulations, which increase the likelihood a board will mistake process for purpose and inadvertently tiptoe close to where a crisis can be triggered.



© 2017 Ballast Point Ventures. All rights reserved.