How Technology Might Get Out of Control

Humanity has a method for trying to prevent new technologies from getting out of hand: explore the possible negative consequences, involving all parties affected, and come to some agreement on ways to mitigate them. New research, though, suggests that the accelerating pace of change could soon render this approach ineffective.

People use laws, social norms and international agreements to reap the benefits of technology while minimizing undesirable consequences such as environmental damage. In seeking such rules of behavior, we often take inspiration from what game theorists call a Nash equilibrium, named after the mathematician and economist John Nash. In game theory, a Nash equilibrium is a set of strategies, one for each player, that forms a stable fixed point: once the players have found it, no one has an incentive to depart unilaterally from their current strategy.
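To make the definition concrete, here is a minimal sketch in Python (the game and its payoff numbers are invented for illustration): it enumerates every pure-strategy profile of a small two-player game and keeps only those from which neither player can gain by switching actions on their own.

    # Minimal sketch: find the pure-strategy Nash equilibria of a small
    # two-player game. The payoff numbers are made up for illustration.
    import numpy as np

    # payoffs_1[i, j] is player 1's payoff when player 1 plays action i
    # and player 2 plays action j; payoffs_2[i, j] is player 2's payoff.
    payoffs_1 = np.array([[3, 0],
                          [5, 1]])
    payoffs_2 = np.array([[3, 5],
                          [0, 1]])

    def is_nash(i, j):
        """True if neither player gains by unilaterally changing action."""
        best_for_1 = payoffs_1[i, j] >= payoffs_1[:, j].max()
        best_for_2 = payoffs_2[i, j] >= payoffs_2[i, :].max()
        return best_for_1 and best_for_2

    equilibria = [(i, j) for i in range(2) for j in range(2) if is_nash(i, j)]
    print(equilibria)  # [(1, 1)]: the lone stable point of this game

In this particular game, which has the structure of a prisoner's dilemma, the single equilibrium is not the outcome that is best for both players, only the one from which neither can profitably depart alone.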

To reach such an equilibrium, the players need to understand the consequences of their own and others’ potential actions. During the Cold War, for example, peace among nuclear powers depended on the understanding that any attack would ensure everyone’s destruction. Similarly, negotiations over everything from local regulations to international law can be seen as a gradual exploration of all possible moves, aimed at finding a stable framework of rules that is acceptable to everyone and gives no one an incentive to cheat, because doing so would leave them worse off.

But what if technology becomes so complex and starts evolving so rapidly that humans can’t imagine the consequences of some new action? This is the question that a pair of scientists — Dimitri Kusnezov of the National Nuclear Security Administration and Wendell Jones, recently retired from Sandia National Labs — explore in a recent paper. Their unsettling conclusion: The concept of strategic equilibrium as an organizing principle may be nearly obsolete.

Kusnezov and Jones derive insight from recent mathematical studies of games with many players and many possible choices of action. One basic finding is that such games divide sharply into two types, stable and unstable. Below a certain level of complexity, the Nash equilibrium is useful in describing the likely outcomes. Beyond that lies a chaotic zone where players never manage to find stable and reliable strategies, but cope only by perpetually shifting their behaviors in a highly irregular way. What happens is essentially random and unpredictable.
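To get a feel for that divide, consider a rough sketch in Python (a toy exercise, not the model analyzed in those studies): it generates random two-player games of increasing size and lets the players take turns best-responding to each other’s latest move. In the smallest games, play usually settles onto a fixed point quickly; as the number of possible actions grows, it tends instead to keep cycling without ever settling down.

    # Toy sketch: how often does alternating best-response play settle
    # into a fixed point in a randomly generated two-player game?
    import numpy as np

    rng = np.random.default_rng(0)

    def settles(n_actions, steps=500):
        """Run alternating best-response dynamics in a random game and
        return True if play reaches a profile nobody wants to leave."""
        A = rng.normal(size=(n_actions, n_actions))  # player 1's payoffs
        B = rng.normal(size=(n_actions, n_actions))  # player 2's payoffs
        i = j = 0
        for _ in range(steps):
            new_i = int(A[:, j].argmax())      # player 1 answers action j
            new_j = int(B[new_i, :].argmax())  # player 2 answers action new_i
            if (new_i, new_j) == (i, j):       # nobody moves: a pure equilibrium
                return True
            i, j = new_i, new_j
        return False                           # still wandering after many rounds

    for n in (2, 5, 20, 100):
        rate = np.mean([settles(n) for _ in range(200)])
        print(f"{n:3d} actions: play settled in {rate:.0%} of random games")

The numbers are only suggestive, but the pattern echoes the finding described above: as the game grows more complicated, stable resting points become harder to reach and behavior looks increasingly erratic.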

The authors argue that emerging technologies — especially computing, software and biotechnology such as gene editing — are much more likely to fall into the unstable category. In these areas, disruptions are becoming bigger and more frequent as costs fall and sharing platforms enable open innovation. Hence, such technologies will evolve faster than regulatory frameworks — at least as traditionally conceived — can respond.

What can we do? Kusnezov and Jones don’t have an easy answer. One clear implication is that it’s probably a mistake to copy techniques used for the more slowly evolving and less widely available technologies of the past. This is often the default approach, as illustrated by proposals to regulate gene-editing techniques. Such efforts are probably doomed in a world where technologies develop through the parallel efforts of a global population with diverse aims and interests. Perhaps future regulation will itself have to rely on emerging technologies, as some researchers are already exploring in finance.

We may be approaching a profound moment in history, when the guiding idea of strategic equilibrium on which we’ve relied for 75 years will run up against its limits. If so, regulation will become an entirely different game.

(Bloomberg)

Economists Are Cheating Their Profession


Many economists genuinely want to make their field more scientific — grounded in empirical evidence rather than in theory or, worse, ideology. Yet a recent article by four prominent academics demonstrates the extent to which ideology remains a problem.

My Bloomberg View colleague Justin Fox has highlighted the motivated reasoning in the article, penned by a team of conservative economists including R. Glenn Hubbard of Columbia Business School and John Taylor of the Hoover Institution at Stanford University. They argue that the current economic stagnation has nothing to do with a hangover from the financial crisis, and that policies such as lower taxes and cuts in social spending would markedly boost growth. They say this follows from objective analysis of data on past crises and recoveries.

As Fox notes, the analysis actually rests on a conveniently biased selection of data. It includes among past financial crises several moderate downturns that most economists don’t think of as crises, and rather bizarrely counts the grinding decade of the Great Depression as a “rapid recovery” from the recession of 1929.

Worse, the article projects a completely unjustified sense of certainty. “Economic theory and historical experience,” it boldly asserts, “indicate economic policies are the primary cause of both the productivity slowdown and the poorly performing labor market.” This willfully misrepresents current thinking. Economists hold diverse views on the roots of the recent malaise, and remain divided and uncertain about the fundamental causes of growth.

The authors have every right to express their views and opinions in forceful terms. But when professional economists write as experts and claim theory as a basis for their views, they also have a duty to present that theory — and other economists’ thoughts about it — honestly. Their failure to do so is “unprofessional,” as University of California at Berkeley economist Brad DeLong rightly put it. It doesn’t reflect the honest, evidence-based approach that most economists aim for.

The question, then, is what, if anything, the profession will do about it. Does it have standards? If so, can it enforce them?

Just like regulators, economists can be captured by powerful corporations and individuals, as University of Chicago economist Luigi Zingales has argued. Conservatives in particular have been successful in subverting research for their own ends, especially through the creation of think tanks and by funding economists adept at disguising ideological arguments in objective academic language. Concerted efforts date back at least to the 1980s. In her recent — and controversial — book “Democracy in Chains,” historian Nancy MacLean offers billionaire industrialist Charles Koch’s backing of libertarian economist James Buchanan as an example.

How can the profession combat such capture? Zingales has suggested public shaming, following the example of media efforts such as the film “Inside Job,” which exposed a number of prominent academics for pushing the benefits of modern finance while hiding considerable income from major Wall Street firms. Among the economists scrutinized was Columbia’s Hubbard.

Shaming seems appropriate. After all, public trust is a resource from which all economists benefit. If they want to preserve it, they should draw guidance from Nobel Prize winner Elinor Ostrom. She showed that successful management of such resources typically requires an effective means to maintain group standards and values — for example, by punishing and deterring self-serving behavior among individuals within the group.

Economists who present their opinions as fact, or who misrepresent the consensus, are cheating at the expense of the entire profession. They shouldn’t get away with it.

(Bloomberg)