

An Excerpt from The Age of Prediction

Igor Tulchinsky, Christopher E. Mason

August 24, 2023


As prediction grows more robust, it also alters the nature of the accompanying risk, setting up unintended and unexpected consequences.

The Age of Prediction is about two powerful, and symbiotic, trends: the rapid development and use of artificial intelligence and big data to enhance prediction, as well as the often paradoxical effects of these better predictions on our understanding of risk and the ways we live.

With greater predictability also comes a degree of mystery, and the authors ask how narrower risks might affect markets, insurance, or risk tolerance generally. Can we ever reduce risk to zero? Should we even try? This book lays an intriguing groundwork for answering these fundamental questions and maps out the latest tools and technologies that power these projections into the future, sometimes using novel, cross-disciplinary tools to map out cancer growth, people’s medical risks, and stock dynamics. 

The excerpt below is from Chapter Two: "The Complexity of Prediction."

◊◊◊◊◊

The Dynamics of Risk 

A prediction is a statement about the future. Risk is the probability that the statement is wrong. Absolute certainty, if such a thing exists, is 100 percent; absolute uncertainty is 0 percent. The probabilities of one future over another, whether it’s a car weaving through city traffic, a medical diagnosis based on a genomic analysis, or the chances of one political candidate defeating another, fall somewhere in between. Certainty and uncertainty shift within that 0 to 100 percent range. Most of the mechanisms that drive predictive technologies involve calculations, based on data, that try to define the line between the two or to maximize certainty over uncertainty. The language of prediction is probability. What are the odds of a hurricane taking one path or another? What is the probability of getting struck by lightning on a golf course or of an asteroid hitting Earth? Risk is associated with failure. In finance, that failure is the trading strategy that goes wrong; in medicine, it’s the therapy that kills; in government, it’s the policy that fails.

How can we reconcile these two linked but very different concepts, prediction and risk? How can we be living in the Age of Prediction and still feel that risks—uncertainties—are piling up around us? Nearly all these predictive technologies involve people, and that makes the problem far more complex. Humans are not machines or only physical systems. They change their minds. They calculate based on incomplete information. They act, and then react, and then react to the reaction. They live in complex social groups. They “follow their gut.” They aren’t, despite the hopes of generations of economists, rigidly rational and self-interested, but they’re not essentially irrational, either. The world around them never stops changing. Human consciousness is plastic, flexible, and diverse. Human behavior, in short, is difficult to predict.

The Age of Prediction is not an age of perfect certainty. Risk will not evaporate; it may move, change shape, evolve like a virus. For all the deep thinking about risk and management of risk, the relation between prediction and risk remains something of a mystery. Like so many aspects of our world, the relationship between these two forces is ceaselessly changing, and, in our experience, one of the key factors driving this mutability is the ability to accurately predict the effects of prediction itself.

What do we mean by that? People tend to behave differently when they are no longer fearful of the consequences. This can be either good or bad or both. It’s subjective and often paradoxical. Removing a risk may be liberating because it eliminates a source of anxiety and even in some cases harm or near-certain death. We no longer worry about smallpox, tuberculosis, and polio, for instance, because vaccines have minimized their threat. We trust airplanes, anesthesiology, most prescription drugs, and smartphones despite most users lacking an understanding of how they work. It’s hard to find the downsides of such advances beyond some occasional side effects.

But even as some risks can be predicted, reduced, and perhaps eliminated, others continue to emerge, like a game of whack-a-mole. From a risk perspective, the chances of dying from heart disease, cancer, and neurological disease have risen, while the threat of infectious diseases, which traditionally preyed on the very young, has receded. Overall, longer life expectancy is a good thing, a wonderful trade-off. Yet, even that advance has its complexities: the high cost of caring for older people, the economic burden of an aging society, the moral dilemmas of old age that currently affect the most advanced economies.

Many people hope that future innovations will solve these issues, but it is worth noting that optimism creates conditions that nurture and encourage risk, as the economist Hyman Minsky noted in the context of financial markets. Reducing risk can shrink incentives to behave in risk-minimizing ways—to watch your health, to care for your home and car, to behave, in that old-fashioned word, prudently. Reducing risk tends to increase the sense that you can safely take on more risk. In the markets, risk is correlated with reward.

Traders move from securities that are declining in risk and reward to those increasing in risk and reward, but traders are not perfectly rational, which can lead to poor decision-making. There is even a backlash to some predictive technologies that require intimate or private data about individuals. Some of the rise of COVID-19 antivaccination sentiment was driven by a resistance to being told what to do, a protest in defense of personal autonomy or freedom, despite very apparent risks to the antivaxxers and others. Prediction can seem to be coercive or oppressive, narrowing current behavior and future options. 

Thus, predictive technologies can and do spawn unintended consequences. In fact, there may always be limits to prediction in human affairs; 100 percent certainty may be forever out of reach. We may grow confident that some targeted and quantifiable predictions are overwhelmingly likely to occur, but we may not be as certain about what the secondary and tertiary effects will be and whether they can suddenly mushroom into a threat even greater than the initial problem.

The fact that prediction and risk are inextricably linked explains why the optimism that often characterizes the Age of Prediction is also shadowed by anxiety, fear, and active opposition to what accurately predicting the future will entail. At its most fundamental level, the enhanced ability to predict creates change, which can be destabilizing and anxiety provoking. What will the change mean? How will we adjust to it? Will it release unforeseen risks, like the demons in Pandora’s box? In the great debates in seventeenth-century France over whether to engage in smallpox inoculations for children—using not a vaccine but the virus itself—a common objection was that the inoculations were against God’s will, even if they worked.

As we enter the Age of Prediction, the deepest fear among some people is that we will develop techniques to approach absolute predictability or certainty and thus render aspects of life completely determined, as if people were machines and humanity were assuming the omniscient role many once ascribed to God. A large literature speaks to this fear. Such a possibility would have bizarre effects. It would erase reward from the markets, destroying their function, and it would rob individuals of free will or at least of the perception that they can act freely. However, such an eventuality remains far off, theoretical, and perhaps unrealistic. Some quantity of risk will undoubtedly remain. As a result, we may be fated always to wonder if the very potency of our predictive tools means we are missing some new risk out there, quietly gestating in the dark.  


Excerpted from The Age of Prediction: Algorithms, AI, and the Shifting Shadows of Risk, published by MIT Press. Copyright © 2023 by Igor Tulchinsky and Christopher E. Mason

About the Authors

Igor Tulchinsky is founder, chairman, and CEO of WorldQuant, a quantitative investment firm based in Old Greenwich, Connecticut. He is the author of Finding Alphas: A Quantitative Approach to Building Trading Strategies and The UnRules: Man, Machines and the Quest to Master Markets.



Christopher E. Mason is Professor of Genomics, Physiology, and Biophysics at Weill Cornell Medicine and the Director of the WorldQuant Initiative for Quantitative Prediction.

