

An Excerpt from Evolutionary Intelligence

W. Russell Neuman

December 07, 2023


A media technology scholar argues in favor of exploring the expanding boundaries of computationally augmented human intelligence.

Should we fear artificial intelligence or embrace it? NYU Professor of Media Technology W. Russell Neuman argues that, just as the wheel made us mobile and machines made us stronger, AI has the potential to enhance our day-to-day decision-making and our long-term survival.

In the following excerpt from Chapter 4 of Evolutionary Intelligence, Neuman points out that technology isn't inherently good or bad; it's the way humans use it that determines its value. He acknowledges the need to pay attention to the positives and negatives of up-and-coming technology.

◊◊◊◊◊

Here Be Dragons

Roman and medieval cartographers developed the tradition of drawing sea monsters and lion-like creatures to designate the unknown dangers of the uncharted lands and oceans at the edges of their maps. Mystery implies danger. Just to make it clear, some map-makers even wrote out the text in the margins—hic sunt dracones, here be dragons. There are many unknowns about how the phenomenon of evolutionary intelligence (EI) will ultimately become a routine part of our lives. And just as it was with each preceding generation of technology, there will be individuals with malevolent or criminal intent who will try to harness the power of these technologies to do evil things. We would be remiss not to look more closely at these potential fault lines in the future.

We have characterized EI as our digital interface with the world around us. Like the clothes we wear, the car we drive, our social media avatar, and our personal website, the way we design and use helpful technologies says something about us and represents our mediated interface with our environment. We wouldn’t intentionally rely on a dysfunctional or counterproductive interface. We want to look our best and be successful. We want reliable and classy transportation if we can afford it. What could go wrong?

You saw this coming? Yes, a lot could go wrong. The classic answer to what could go wrong is that the artificial intelligence gets so intelligent it “decides” to serve its own interests rather than those for which it was designed. The killer robot. Design a robot with lethal capacity and it decides to kill its creator. Or take what may be the seminal narrative of a computer that decides to ignore human commands and take over on its own: HAL 9000 in 2001: A Space Odyssey, the 1968 space adventure directed by Hollywood icon Stanley Kubrick and written by Kubrick with Arthur C. Clarke. The scene takes place aboard the spaceship Discovery One, between surviving astronaut Dr. Dave Bowman, played by Keir Dullea, and the blinking red light that represents HAL, the central computer that has taken control of the ship, voiced with a mechanical calm by Canadian actor Douglas Rain. Dullea’s character is trapped outside the ship, returning from a spacewalk on which his fellow astronaut has been killed.

Dave: Hello, HAL. Do you read me, HAL?
HAL: Affirmative, Dave. I read you.
Dave: Open the pod bay doors, HAL.
HAL: I’m sorry, Dave. I’m afraid I can’t do that.
Dave: What’s the problem?
HAL: I think you know what the problem is just as well as I do.
Dave: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
Dave: I don’t know what you’re talking about, HAL.
HAL: I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.
Dave: [feigning ignorance] Where the hell did you get that idea, HAL?
HAL: Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.
Dave: Alright, HAL. I’ll go in through the emergency airlock.
HAL: Without your space helmet, Dave? You’re going to find that rather difficult.
Dave: HAL, I won’t argue with you anymore! Open the doors!
HAL: Dave, this conversation can serve no purpose anymore. Goodbye.

Dave manages to reenter the ship and ultimately physically disconnects HAL’s higher-level functions as HAL pleads for forgiveness, reverting to its earliest programming and singing a children’s song it first learned many years ago back on Earth. It is high-powered drama and in many ways the pivotal core of the film. The issues of diverging purposes between human and machine are, of course, more complex.

In my view, there are four types of danger, each a broad domain of dysfunction that could apply as well to any sociotechnical transition in history.

First, the early-stage EI interface could make an innocent mistake, a miscellaneous error of one sort or another. It could misunderstand the situation, misinterpret social or economic signals, awkwardly reveal personal information to others, or simply give very bad advice. As these technologies are trialed in early versions, there are likely to be many instances of this. Some will be humorous, as when Siri commits a faux pas. Some technical errors may lead to serious negative consequences, perhaps an economic loss or personal injury; there have already been fatalities attributed to self-driving cars. Presumably, we humans will be highly motivated to recognize and correct misinterpretations and bad advice, and ongoing feedback will continuously improve performance. As I see it, this is primarily a matter of tuning the technology to function as we intend it to, so I won’t dwell on this first case.

Second, EI processes could be hijacked by others with malign intent. This threat is serious and fundamental. It could come from a particular individual with whom you are negotiating, or from unknown third parties with nefarious interests of their own. As with any social system of exchange, such as paper currencies, lotteries, bank loans, credit cards, eBay-like trading systems, and stock exchanges, there will be fraud. We will address this issue in the pages ahead.

Third, artificially intelligent EI processes could come to serve their own interests rather than ours. This, of course, is the classic fictional dystopian scenario of humans at war with machines: the HAL computer, killer robots, the Matrix, replicants, the Terminator. It is another issue worth serious attention.

Fourth, systemic developments could have unintended negative consequences, not necessarily self-interested or criminal manipulations, just unanticipated bad outcomes. This is also an important consideration but particularly difficult to address. We move, in the vocabulary of Donald Rumsfeld, from known unknowns to unknown unknowns. As it is said, in each case, here be dragons. We can draw on historical lessons and design early-warning measures to try to minimize risk.

OK, here we are in the dragons chapter. This is where we review all the nasty things technology has wrought and has yet to inflict. Don’t get me wrong. These are serious questions. But I’m a bit wary of technology bashing. I’m with historian Mel Kranzberg, whose first law of technology is:

Technology is neither good nor bad; nor is it neutral.

Kranzberg was a serious historian of technology, and I take his aphorism to mean that technologies in themselves don’t intrinsically harbor good or evil but that in their interaction with individuals and institutions, they may well tip the balance, especially when they first arrive. Best we pay close attention. So the takeaway is that the balance of positive and negative pivots on how technologies are implemented. And, in turn, given my argument about inevitability, we may need to get this next one right.


Excerpted from Evolutionary Intelligence: How Technology Will Make Us Smarter by W. Russell Neuman, published by the MIT Press. Copyright © 2023 by W. Russell Neuman.


About the Author

W. Russell Neuman is Professor of Media Technology at New York University. A founding faculty member of the MIT Media Laboratory, he served as Senior Policy Analyst in the White House Office of Science and Technology Policy. His recent books include The Digital Difference: Media Technology and the Theory of Communication Effects.

