A Dangerous Master

Wendell Wallach

On December 6, 1999, after a successful landing, a Global Hawk unmanned aerial vehicle (UAV) unexpectedly accelerated its taxiing speed to 178 mph and, at a curve in the runway, veered off the paved surface. Its nose gear collapsed in the adjacent desert, causing $5.3 million in damage. At the time of the accident, the operators piloting the UAV from the safety of their control station had no idea why it had occurred. An Air Force investigation attributed the acceleration to software problems compounded by “a breakdown in supervision.” A spokesperson for Northrop Grumman, the UAV’s manufacturer, placed the blame for the excessive speed squarely on the operators. There was certainly some delay in the operators’ recognition of the UAV’s unanticipated behavior. Even after they realized something was wrong, they did not understand which of its programmed subroutines the aircraft was following. The compensating instructions they did provide were either not received or not accommodated by the aircraft. That the operators failed to understand what the semi-autonomous vehicle was trying to do and provided ineffective instructions makes sense. But any suggestion that the aircraft could understand what it was doing, or what the pilots were trying to do, implies that the UAV had some form of consciousness. That was, of course, not the case. The vehicle was merely operating according to subroutines in its software programming.

The Global Hawk UAV is only one of countless new technologies that rely upon complex computer systems to function. Most of the smart systems currently deployed and under development are not fully autonomous, but neither are they operated solely by humans. The UAV, like a piloted aircraft or a robotic arm that a surgeon can use to perform a delicate operation, is a sophisticated, partially intelligent device that functions as part of a team. This type of team exhibits a complex choreography between its human and nonhuman actors and draws aspects of its intelligence from both. When operating correctly, the UAV performs some tasks independently and other tasks as a seamless extension of commands initiated by human operators. In complex systems whose successful performance depends upon coordinating computerized and human decisions, the task of adapting to the unexpected is presumed to lie with the human members of the team. Failures such as the Global Hawk veering off the runway are initially judged to be the result of human error, and a commonly proposed solution is to build more autonomy into the smart system. However, this does not solve the problem.

Anticipating the actions of a smart system becomes more and more challenging for a human operator as the system and the environments in which it operates become more complex. Expecting operators to understand how a sophisticated computer thinks and anticipate its actions so as to coordinate the activities of the team actually increases their responsibility. Designing computers and mechanical components that will be sufficiently adaptive and independent of human input is a longer-term goal of engineers. However, many questions exist as to whether this goal is fully achievable, and, therefore, it is also unclear whether human actors can be eliminated from the operation of a large share of complex systems. In the meantime, the need to adapt when unforeseen situations arise will continue to reside with the human members of the team.

Complex systems are by their very nature unpredictable and prone to all sorts of mishaps when met with unanticipated events. Even very well-designed complex systems periodically act in ways that cannot be anticipated. Events that have a low probability of occurring are commonly overlooked and not planned for, but they do happen. Designing complex systems that coordinate the activities of humans and machines is a difficult engineering problem. Just as tricky is the task of ensuring that the complex system is resilient enough to recover if anything goes wrong. The behavior of a computerized system is brittle—consider a Windows computer that locks up when a small amount of information is out of place—and seldom capable of adapting to new, unanticipated circumstances. Unanticipated events, from a purely mechanical failure to a computer glitch to human error to a sudden storm, can disrupt the relatively smooth functioning of a truly complex system such as an airport. I’ve concluded that no one understands complex systems well enough to fully control them. As MIT professor and best-selling author Sherry Turkle once said to me in reference to managing complex systems, “Perhaps we humans just aren’t good at this.” Can we acquire the necessary understanding? To some degree, yes, but there appear to be inherent limitations on how well complex adaptive systems can be controlled.

This is not a new problem. Debate has gone on for more than half a century over whether governments and energy companies can adequately manage nuclear power plants. When a danger is known, extra care and attention are generally directed at managing the risks. Even so, many, though not all, nuclear accidents have been caused by inadequate care, stupidity, or both. A few were the result of unexpected or unaddressed events. The meltdown of the nuclear reactors at Fukushima, Japan, was caused by a huge once-in-a-thousand-year tsunami. The basic meaning of the word complex is easy to grasp. But why complexity functions as a game-changer is more difficult to understand. This article will serve as an introduction to complexity, the kinds of complex systems already evident, others under development, and the challenge of recognizing which forms of complexity pose dangers. In the process of clarifying the difficulties in monitoring and managing complex systems, the related c-word, chaos, will also be elucidated. For our purposes, the inspection of complexity and chaos is central to appreciating challenges, difficulties, and dangers that might otherwise be overlooked. Complexity need not be treated as a roadblock, but it does warrant our attention if we are to safely navigate the adoption and regulation of new technologies. There are already fields of scientific research in place for studying how complexity and chaos work.

In the years following World War II, the interdisciplinary study of systems and how the components within those systems behave became increasingly important and eventually evolved into a field called “systems theory.” Self-regulating systems can be natural, such as the human body; social, such as a government or a culture; or technological, such as cars or synthetic organisms. In everyday speech, words such as complexity and chaos are used loosely. In systems theory, the words are used in a more precise way in order to lay a foundation for two related fields, complexity science and chaos science. But are these growing areas of scientific research helpful in wrestling with unpredictability and in limiting the likelihood of unanticipated catastrophes? Weather patterns are complex, as is the behavior of economic markets. And yet modest success has been achieved in predicting the behavior of both. The science of complex systems has made some progress in modeling the patterns that might emerge out of an interdependent set of diverse inputs. Much of this progress relies upon computer simulations.

Unanticipated factors, however, make predicting the actions of markets and weather a combination of probability and luck. The same holds true for predicting the behavior of many technologies we create. This is deeply troubling, because contemporary societies are increasingly reliant on truly complex technological systems such as computer networks and energy grids. To appreciate the breadth of the problem, it is important to note that technological systems are not merely composed of their technological components. In our UAV example, the complex system includes the people who built, maintain, improve, operate, and interact with the technology. Collectively, the technological components, people, institutions, environments, values, and social practices that support the operation of a system constitute what has been called a “sociotechnical system.” Water purification facilities, chemical plants, hospitals, and governments are all sociotechnical systems. In other words, a specific technology—whether a computer, a drug, or a synthetic organism—is a component within a larger sociotechnical system. What is at stake is whether the overall system, not just the technological component, functions relatively well.

My focus is on the technological components or processes, but the problems often arise out of the interaction of those components with other elements of the sociotechnical system. Can complex systems ever be adequately understood or sufficiently tamed? Engineers strive to make the tools they develop predictable so they can be used safely. But if the tools we develop are unpredictable and therefore at times unsafe, it is foolhardy for us to depend upon them to manage critical infrastructure. There is an imperative to invest the resources needed (time, brainpower, and money) to develop new strategies to control complex technologies and to understand the limits of those strategies. If planners and engineers are unable to limit harms then we, as a society, should turn away from a reliance on increasingly complex technological systems. The examples included here have been selected to help clarify which aspects of complex adaptive systems can be better understood and therefore better managed, and which cannot.

Complex and Chaotic Systems

The individual components of a complex system can be the nodes in a computer network, neurons in the brain, or buyers and sellers in a market. When many components are responding to each other’s actions, there is no surefire way to predict in advance the overall behavior of the system. To understand a complex system’s behavior, it is necessary to observe how it unfolds one step at a time. Consider a game of chess. The possible actions within each move are limited, but it is nevertheless difficult to predict what will happen five, ten, or twenty moves ahead. Deep Blue, the IBM computer that beat the World Chess Champion Garry Kasparov in 1997, required tremendous computing power to calculate possible sequences of moves. Even then, it could only predict with decreasing probability the fifth or tenth moves in a series that created new branches at each step. Each move made by Kasparov would eliminate some branches while opening up additional possibilities. A mathematical formula yields its result in a single calculation, but the algorithms that lie at the heart of computer technology define a process that must unfold step-by-step. A pattern can emerge over time out of a sequence of events, but that pattern may tell us very little about what to anticipate for a later sequence. Like the Global Hawk UAV, sophisticated future robots will act according to algorithmic processes, and their behavior when confronted with totally new inputs will be hard, if not impossible, to predict in advance. The damage that semi-autonomous robots could cause while working in a warehouse, shipping out Amazon orders, might be judged an acceptable risk given the rewards. But the unfolding actions of a robot that serves as a weapons platform could unexpectedly lead to the loss of many lives. To quote an old adage, “Baking a cake is easy. Building a car is complicated. Raising a child is complex. . . .”
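The branching that the chess example describes can be made concrete with a toy calculation. The figure of roughly 35 legal moves per position is a commonly cited average for chess; everything else below is a back-of-the-envelope sketch, not an account of how Deep Blue actually searched.

```python
# Toy illustration of game-tree growth: with roughly b legal moves per
# position, the number of distinct move sequences after d half-moves is b**d.
# A branching factor of 35 is a commonly cited rough average for chess.

def sequences(branching_factor: int, depth: int) -> int:
    """Number of possible move sequences to a given depth."""
    return branching_factor ** depth

for depth in (1, 5, 10):
    print(f"depth {depth:2d}: {sequences(35, depth):,} sequences")
```

Five moves ahead already yields more than fifty million sequences, and ten moves ahead more than two quadrillion, which is why even Deep Blue could assign only decreasing probabilities to distant branches.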

Modeling Complexity

In the 1980s, systems theory gave birth to the scientific study of complex adaptive systems, which examines how individual components adapt to each other’s behavior and, in turn, alter the system’s structure and activity. This transition in the study of complex systems was enabled by the availability of more powerful computers. Powerful computers facilitate the creation of simulations that model complex activities. Systems theorists hoped that the scientific study of complex adaptive systems might offer a tool to understand, and, in turn, tame, the complex systems we increasingly rely upon. Much of the study of adaptive systems focuses on physical and biological processes. Even a tiny biological system such as a living cell can be truly complex. The medical illustrator David Bolinsky created a celebrated animation of the molecular dance of life within an individual cell. The animation is wondrous to behold, a rich universe in which fantasy-like structures interact in dynamic ways. An enormous number of molecular micro-machines within a cell change their state from moment to moment. Bolinsky’s animated simulation was created, together with the Harvard Molecular and Cell Biology Department, as a teaching tool to help students envision the complex inner life of a cell. However, for all of the intricacy the animation displays, Bolinsky told me that so much happens within a cell that his team was able to illustrate only 10 to 15 percent of the molecular structures and their activity. Simulations of evolution were an early focus in the study of complex adaptive systems. Exploring evolutionary processes in artificial environments offers a number of benefits. In the biological world, countless generations are required to evolve successful species with interesting features. Within a computer simulation, going from one generation to its progeny can take mere seconds.

A new field called “artificial life” (Alife) emerged from the study of evolutionary processes. Alife researchers populated simulated environments with virtual organisms. The idea was to explore how these artificial organisms would change and adapt in response to the actions of other entities in the population or in response to changes in the environment. Think of this as the computer game SimLife (made by Maxis, the company that created The Sims and SimCity) on steroids. It was hoped that the organisms in the simulated world would evolve to become complex virtual creatures, and, in turn, shed light on the robustness of biological life. Unfortunately, the evolution of virtual organisms plateaued. Artificial entities within computerized environments did not mutate to levels of sufficient complexity. Thomas Ray, a biologist who developed a highly regarded software program (Tierra) for studying artificial evolution, admits “evolution in the digital medium remains a process with a very limited record of accomplishments.” It is unclear why Alife simulations have been so disappointing. The failures suggest a fundamental limitation in modeling biological systems within computer simulations, at least through the approaches that have been tried to date. Nevertheless, Alife researchers continue to explore new approaches with varying degrees of success.
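For readers who have not seen digital evolution in action, a minimal sketch follows. This is not Tierra or any actual Alife system, just a hypothetical toy in which organisms are bit strings and fitness simply counts the ones. Simple landscapes like this one are optimized quickly; it is the open-ended, biological-style complexity Ray describes that has proven elusive.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# A minimal sketch of digital evolution: each generation keeps the fitter
# half of the population and refills it with mutated copies of survivors.
GENOME_LEN, POP_SIZE, MUTATION_RATE = 32, 20, 0.02

def fitness(genome):
    return sum(genome)  # toy fitness: count the 1 bits

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(fitness(g) for g in population)
print(f"best fitness after 100 generations: {best}/{GENOME_LEN}")
```

Within a hundred generations the population climbs close to the maximum fitness, illustrating why simulated evolution looked so promising, and why a fitness function this trivial says nothing about evolving genuinely complex virtual creatures.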

Scientific understanding depends upon maps and models that are designed to capture salient features of physical, biological, or social contexts while ignoring apparently extraneous details. A map lays out features that have already been discovered. There can be gaps in a map that represent the unexplored and the unknown. Dynamic models make the study of relationships and law-bound activity possible by focusing on those features that seem important while disregarding the rest. For a theoretical model to become a working hypothesis, it must reveal insights or predictions that are then confirmed by experiments in the real world. If a prediction proves wrong, the model is either incorrect or incomplete. Models fail for a variety of reasons. Often a theoretical model is too simple to capture all the important factors that influence a system. Before the advent of computers, scientists were limited to working with relatively simple conceptual models. Computer simulations offer an opportunity to observe how a more complex system unfolds step-by-step and have become an important tool for studying the activity within all chaotic and complex systems, from the molecular to the cosmological. With each new generation of faster computers and better programming tools, simulations will be able to model increasingly complex processes. Even excellent models can prove to be inadequate if they fail to incorporate an essential feature. Chaos and complexity theory tells us that very small influences, which may have been left out of the model, can have a significant effect.
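The force of that last point, that tiny omitted influences matter, can be seen in the logistic map, a textbook one-line model from chaos science. In the sketch below, two simulated trajectories begin a millionth apart and soon bear no resemblance to each other.

```python
# The logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4.0 is a standard
# minimal example of chaos: two trajectories that start a hair apart
# diverge until they are effectively unrelated.

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)   # differs only in the sixth decimal place

for n in (0, 10, 30, 50):
    print(f"step {n:2d}: difference = {abs(a[n] - b[n]):.6f}")
```

A model of this system that dropped the sixth decimal place as an "extraneous detail" would be useless after a few dozen steps, which is exactly the predicament facing models of weather, markets, and complex machines.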

On its opening day, the Millennium Bridge, an architectural wonder and acclaimed suspension footbridge spanning the River Thames in London, began to sway as thousands of pedestrians walked across. The bridge immediately acquired the distinctly English nickname “The Wibbly Wobbly.” Many of those walking on the bridge were disturbed by the swaying, as was the engineering firm that designed the bridge. The possibility that the bridge would sway had not shown up in computerized models, nor was it evident in wind tunnel simulations. The firm soon realized that their models had not incorporated the way the walkers would slowly begin to synchronize their stride in response to subtle vibrations in the suspended walkway. This synchronized stride set the bridge swaying. In response to the sway, more and more walkers joined in the synchronized pace, which, in turn, accentuated the swing of the walkway. Unpredictability is the enemy of engineers, whose job is to design reliable tools, machines, buildings, and bridges. Conquering uncertainty is integral to safety. Mechanical systems are naturally prone to move from orderly to chaotic behavior. For example, friction can cause a vehicle’s motor to vibrate, and this vibration, in turn, can loosen or damage other components. Detecting and eliminating chaotic behavior is key to ensuring the optimal performance of complicated and complex systems. In the case of the Millennium Bridge, once the problem had been recognized, the engineers determined that installing dampers could effectively stop the swaying.
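The synchronization that caught the bridge’s designers off guard is a well-studied phenomenon. The sketch below uses the Kuramoto model, a standard idealization of coupled oscillators and an assumption here rather than the engineering analysis actually performed on the bridge, to show how walkers with slightly different strides stay incoherent under weak coupling but fall into lockstep once the feedback through the deck is strong enough.

```python
import cmath
import math
import random

random.seed(1)  # fixed seed for a reproducible toy run

# Kuramoto-style sketch (a toy model, not the real bridge analysis): each
# walker has a phase and a slightly different natural stride frequency;
# the coupling constant K stands in for feedback through the swaying deck.
N, DT, STEPS = 100, 0.05, 2000
freqs = [1.0 + random.gauss(0, 0.05) for _ in range(N)]

def order_parameter(phases):
    """|mean of e^{i*theta}|: 0 = incoherent strides, 1 = perfect lockstep."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

def simulate(K):
    phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]
    for _ in range(STEPS):
        mean = sum(cmath.exp(1j * p) for p in phases) / N
        r, psi = abs(mean), cmath.phase(mean)
        phases = [p + DT * (w + K * r * math.sin(psi - p))
                  for p, w in zip(phases, freqs)]
    return order_parameter(phases)

print(f"weak coupling  : r = {simulate(0.02):.2f}")  # strides stay incoherent
print(f"strong coupling: r = {simulate(1.0):.2f}")   # strides lock together
```

The point of the toy model is the threshold effect: below a critical coupling strength nothing visible happens, so a model that underestimates the feedback even slightly predicts a stable bridge.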

The Flash Crash of May 6, 2010, also garnered considerable attention and led to various reforms meant to halt markets once trading anomalies are detected. Yet even after intense study, not enough was learned nor were adequate reforms enacted to tame the uncertainty of markets dominated by high-frequency computerized trading. The moral of the story: even using highly refined models, the study of complex adaptive systems has not mastered, nor kept up with, the vastly complex technologies that corporations and governments continually build. Because our fledgling understanding of complex adaptive systems falls short, we need opportunities for human operators to step in. But in order to give them the freedom to fulfill this responsibility, we need to give them time.

Wrestling with Uncertainty

I was working in my office on August 14, 2003, when the electricity went off for just a few moments and then came back on. A power surge in Ohio had caused the largest blackout in the history of the United States and Canada. States from Michigan to Massachusetts were affected, as well as the Canadian Province of Ontario. It would take two days, and for some customers, two weeks, before power was restored. The surge began when an overheated electrical transmission line outside of Cleveland sagged into a tree. That small incident cascaded into a chain of computer-initiated shutdowns throughout eight states and into Canada. Software bugs compounded the blackout, as did decisions by human operators. In southern New England, we were spared the blackout because technicians evidently overrode automated shutdown procedures and disconnected our provider from the multistate electrical grid. A few extra moments were sufficient to save those of us living in Connecticut from the inconveniences and financial burden of a blackout.

In a world where networks of tightly connected computers are increasingly the norm, extra moments can be hard to come by. A little time can be bought by decoupling the critical units in a complex system. A modular design, which permits more independent action by individual units, can enhance safety by reducing the tight interaction of components. The backbone of the Internet is a good example of a modular system. A failure at a distribution center will not bring the Internet down, as it is designed to constantly seek new routes for transporting data among the available nodes in the network. Charles Perrow and other experts recommend decoupling and modularity as ways to limit the effects of unanticipated events. Unfortunately, in many industries the prevailing trend leads to tighter coupling of subunits. Under pressure to increase profits, business leaders gobble up or merge with competitors, eliminate redundant units, and streamline procedures. Multinational corporations grow larger, become more centralized, and tightly integrate their individual units for greater efficiency. The business leaders who make these decisions are seldom cognizant of the dangers they invite. They are just doing their jobs. The politicians who thwart regulation of large conglomerates likewise fail to understand that they invite even greater disruptions to the economy each time a periodic downturn or unanticipated catastrophe occurs.
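The rerouting property described above can be illustrated with a toy graph search. The five-node topology below is hypothetical, and breadth-first search stands in for real routing protocols, but the principle is the same: redundant links let traffic flow around a failed node, while a node with no redundant alternative remains a single point of failure.

```python
from collections import deque

# Hypothetical network: each node maps to its directly connected neighbors.
# A and E are connected through redundant paths via B and C, but every
# path must pass through D.
NETWORK = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def find_route(start, goal, failed=frozenset()):
    """Return a shortest path avoiding failed nodes, or None if unreachable."""
    if start in failed or goal in failed:
        return None
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in NETWORK[path[-1]]:
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_route("A", "E"))                # ['A', 'B', 'D', 'E']
print(find_route("A", "E", failed={"B"}))  # reroutes: ['A', 'C', 'D', 'E']
print(find_route("A", "E", failed={"D"}))  # None: D is a single point of failure
```

The same search that routes around the failed node B returns nothing when D fails, which is the structural danger created each time a merger eliminates a "redundant" unit.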

Decoupling the financial markets from the high-frequency trading that accentuates fluctuations in financial markets could still be realized through a few modest reforms. For example, a transaction fee could be charged by the stock exchanges for each short-term trade. Or a tax might be introduced on the profits from all stocks, currencies, or commodities that are held by the purchasing firm for less than five minutes. Tendering all trades on the minute or even on the second would eliminate advantages to firms that acquire information or tender orders a fraction of a second faster than their competitors. Each of these reforms would significantly reduce the volume of high-frequency trading. Unsurprisingly, the firms that benefit from high-frequency trading fought vigorously to defeat any measure that would interfere with their practices, and only a few reforms have been instituted since the Flash Crash. They argued that the liquidity from short-term trading provides a service. Evidently, short-term efficiencies are much more important than long-term safety. The profits of those best able to game the system have taken precedence over the integrity and stability of markets. It is not surprising that those who benefit from a system will fight reforms. Nor can we necessarily expect even reform-minded politicians to fully understand whether the imperfect measures they consider will address perceived problems.

Experts have considerable power to influence decisions regarding the management of complex systems. Considering the intricacies, however, experts can also muddy the water and sometimes do so in support of the status quo or to further ideologically inspired reforms. Whom can we task with deciding how complex systems should be managed and when reforms should be instituted?

Another recommendation for limiting disastrous consequences proposes that chemical factories, nuclear reactors, and other potentially dangerous facilities be built in remote locations. That recommendation is also seldom heeded. Placing a chemical plant near a large population center can lower the cost of labor, transportation, and energy. Locating nuclear reactors nearer to the demand lessens the amount of infrastructure that must be constructed to transport energy to its consumers. Backup systems and redundancies are useful for addressing common component failures. Automated shutdown procedures can protect equipment in a power grid from most surges. Safety certainly enters into the design of critical systems, but consideration is seldom given to the low-probability event that could be truly catastrophic. For example, Japan experiences a tsunami on average every seven years, so the planners at the Tokyo Electric Power Company made sure to build the reactors at the Fukushima Daiichi plant more than eighteen feet above average sea level. Their worst-case scenario, however, ignored the Jogan tsunami of 869 CE, which produced waves similar to those of the 2011 tsunami that flooded the reactors. At its peak, the tsunami exceeded fifteen meters, five meters above the height of the sea wall that they had built.

British Petroleum (BP) and Transocean, the parties responsible for the 2010 oil spill in the Gulf of Mexico, made the basic calculation that planning for disastrous events was too expensive and took the chance that outliers would not occur. The resulting environmental damage, economic loss, and cost of the cleanup were tremendous. By September 2013, BP had spent $42 billion on cleanup, claims, and fines and began fighting additional fees that might be as large as $18 billion. BP argued that many of the claims it was being asked to pay were not caused by the oil spill. A year later (September 2014), U.S. District Court Judge Carl Barbier ruled that BP acted with gross negligence and must pay all claims. In the scheme of things, BP was unlucky. Other companies that took similar risks have fared better. In the years following the Transocean explosion, many deepwater oil-drilling companies failed to implement costly, voluntary safety measures. They continue to fight legislation that would make such procedures mandatory, and with the help of political allies, they have so far succeeded.

It is unclear, however, whether the cumulative economic benefits to oil companies and to the larger society from not planning for low-probability disasters outweigh the economic and environmental costs when periodic catastrophes do occur. Perhaps with a deeper understanding of complexity, engineers and policy planners will discover new ways to tame the beast. But for the time being, decoupling, modularity, locating dangerous facilities in remote locations, slowing transactions that reward a few but whose benefit to society is negligible, risk assessment, and better testing procedures are the best methods available for limiting disasters, or at least defusing the harm they cause. To believe such measures will be taken, however, remains an act of faith. The more likely outcome is an increasing reliance on ever-more complex technologies and a cavalier disregard by a significant proportion of companies to adopt costly safety procedures.

The dangers of relying upon complex systems will continue to be underestimated. Disasters will occur at ever-decreasing intervals. Nassim Taleb proposes that all efforts to conquer uncertainty merely turn gray swans (vaguely recognized problems) into white swans.1 The very logic of a complex world is such that black swans will always exist. After a disaster, a few black swans turn gray. This represents a tiny decrease in black swans. But increasingly complex technologies will spawn additional growth in the black-swan population. This inability to eliminate black swans should not, however, be taken as an argument against safety measures to reduce destructive events.

This article is adapted with permission from Wendell Wallach, A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control (New York: Basic Books, 2015).


1. Low-probability events may be called “black swans” because if you see a swan that is black, you are likely to be surprised. Nassim Taleb, a Lebanese-American statistician and best-selling author, has championed the black swan theory as a way of highlighting why people are blind to the inevitability of low-probability events and why these events, when they occur, are likely to have a broad impact. White swans are common events. Wallach uses the term gray swans to denote swans that were black—that is, unlikely events—but that appear to become more common with increasingly complex systems. — David Koepsell

Wendell Wallach

Wendell Wallach is known internationally as an expert on the ethical and governance concerns posed by emerging technologies, particularly artificial intelligence and neuroscience. He is a consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics; a senior advisor to The Hastings Center; a fellow at the Center for Law, Science & Innovation at the Sandra Day O’Connor College of Law (Arizona State University); and a fellow at the Institute for Ethics & Emerging Technologies. At Yale, Wallach has chaired the Center’s working research group on Technology and Ethics for the past ten years and is a member of other research groups on animal ethics, end-of-life issues, and neuroethics. In addition to A Dangerous Master, Wallach coauthored (with Colin Allen) Moral Machines: Teaching Robots Right from Wrong. He is series editor for the forthcoming eight-volume Library of Essays on the Ethics of Emerging Technologies. He has also published dozens of articles in professional journals.

