


An Excerpt from The Mind's Mirror

Daniela Rus, Gregory Mone

August 08, 2024


An exciting introduction to the true potential of AI from the director of MIT’s Computer Science and Artificial Intelligence Laboratory.

Imagine a technology capable of discovering new drugs in days instead of years, helping scientists map distant galaxies and decode the language of whales, and aiding the rest of us in mundane daily tasks, from drafting email responses to preparing dinner. Now consider that this same technology poses risks to our jobs and society as a whole. Artificial intelligence is no longer science fiction; it is upending our world today.

As advances in AI spark fear and confusion, The Mind’s Mirror reminds us that in spite of the very real and pressing challenges, AI is a force with enormous potential to improve human life. Computer scientist and AI researcher Daniela Rus, along with science writer Gregory Mone, offers an expert perspective as a leader in the field who has witnessed many technological hype cycles. Rus and Mone illustrate the ways in which AI can help us become more productive, knowledgeable, creative, insightful, and even empathetic, along with the many risks associated with misuse.

The Mind’s Mirror shows readers how AI works and explores what we, as individuals and as a society, must do to mitigate dangerous outcomes and ensure a positive impact for as many people as possible. The result is an accessible and lively exploration of the underlying technology and its limitations and possibilities—a book that illuminates our possible futures in the hopes of forging the best path forward.

The following excerpt comes from Chapter 14, "Societal Challenges," and addresses the concern of "Overreliance."

 

◊◊◊◊◊

 

As a mother and a teacher, I am tremendously concerned with how AI technologies could negatively impact young people. The rise of AI-driven platforms and applications could compound some social-media-driven problems. Young people are already struggling with self-image. What happens when their contemporaries resort to AI-based image manipulation tools in their posts? What sort of impact will it have on a young person’s feeling of self-worth when they discover that the “friend” they’ve been interacting with online is a cleverly constructed AI? Overreliance on AI could diminish human interaction, leading to changes in how we relate to one another, or even to the creation of a society in which we are overly dependent on automated solutions.

This isn’t just a problem for young people. Recently I was riding in a city taxi when a major public celebration led to several roads being barricaded. My driver was relying on a popular navigation application, which failed to account for the blocked streets. Yet the driver continued to rely on the app even as it repeatedly suggested the route we’d just discovered was blocked off. Finally, after we’d circled the same area three times, I suggested he pick up his head and take the longer route around the blockade. I don’t mean to disparage this particular driver; we are all guilty of overreliance on our intelligent tools. And while this is not a problem on the scale of ethics or bias, it is worth considering deeply. The use of calculators means most people cannot do math in their heads and still our species has continued to advance, but what happens as we turn more mental skills over to AI? What if we cannot find our way around the world without a decent cellular signal, or write a cogent note because we have become too reliant on ChatGPT or its offspring? What if we depend so much on AI as a knowledge acquisition tool that we fail to develop our facility for learning? 

We are going to need to find a balance. I believe these tools can be enormously valuable. Yet I am also convinced that humans should be able to write essays, calculate sums, and navigate cities without the help of technology. Similarly, we should be capable of holding large and diverse stores of knowledge and information in the immensely powerful biological computers we call our brains. The Liquid Networks project offers a good example of why this last piece is so important. A few years ago I was at a conference, and Radu Grosu and I arranged to go for a run together. On a Sunday morning between events, we jogged through a local park and discussed what we were working on. At the time my students and I were deep into a self-driving car project, and I voiced my frustrations with the fact that the AI models everyone was using to control these vehicles were so enormous and complex. Yes, they often produced wonderful results, but their performance wasn’t guaranteed and you couldn’t explain what had gone wrong if they made a bad decision. At some point we transitioned from talking about artificial brains to natural ones, and we began marveling at the brain of C. elegans, the worm that has informed much of the foundational knowledge of neuroscience. The brilliance of this worm’s brain lies in its simplicity. The human brain is packed with 86 billion neurons and many, many more connections between them. Yet C. elegans gets by on only 302 neurons. The worm lives a relatively good life, too. It’s able to find food, reproduce, and hide from the rain.

As Radu and I continued our run, we began to hold these two opposing concepts in our minds, juxtaposing the complexity of standard AI models with the simplicity of the worm’s brain, and soon our students started working together on a new model for AI and its application to robot navigation. The core concepts were developed in part because we held all this varied information in our minds and we could ideate on the subject of neural networks and brains without having to consult any external knowledge sources. 

 

Excerpted from The Mind's Mirror: Risk and Reward in the Age of AI. Copyright 2024 by Daniela Rus, Gregory Mone. Used with permission of the publisher, W.W. Norton & Company. All rights reserved.

 

About the Authors

Daniela Rus is a pioneering roboticist and a professor of electrical engineering and computer science at MIT, where she is the director of the Computer Science and Artificial Intelligence Laboratory.



Gregory Mone, a former editor at Popular Science, adapted Neil deGrasse Tyson's Astrophysics for People in a Hurry and Daniel James Brown's The Boys in the Boat for young readers.

