An Excerpt from Taming Silicon Valley
September 19, 2024
Leading AI expert Gary Marcus simplifies the concept of artificial intelligence, highlights its potential and dangers, and emphasizes the need for accountability from policymakers and tech companies.
Will artificial intelligence help humanity or harm it? AI expert Gary Marcus believes in its potential to revolutionize science, medicine, and technology and create a better quality of life for all humanity. Yet, as things currently stand, the development and use of AI have been concentrated within Big Tech companies such as Meta and OpenAI.
In his new book, Taming Silicon Valley, Marcus contends that Big Tech has been playing both the public and the government, pushing forward a flawed product that is already having sweeping—and often damaging—effects on society. However, there’s still time to hold leaders accountable and safeguard our democracy, our society, and our future.
In this excerpt from the book’s introduction, Marcus outlines the four major issues facing artificial intelligence today.
◊◊◊◊◊
The 4 Biggest Problems with AI Today
I have been involved in AI in one way or another since I was a little kid. I first learned to code when I was eight years old. When I was fifteen I wrote a Latin-English translator in the programming language LOGO, on a Commodore 64. I parlayed that project into early admission to college the next year, skipping the last two years of high school. My PhD thesis was on the epic and still unsolved question of how children learn language, with an eye on a kind of AI system known as the neural network (ancestor to today’s Generative AI), preparing me well to understand the current breed of AI.
In my forties, inspired by the success of DeepMind, I started an AI and machine learning company called Geometric Intelligence, which I eventually sold to Uber. AI has been good to me. I want it to be good for everybody.
I wrote this book as someone who loves AI, and someone who desperately wants it to succeed—but also as someone who has become disillusioned and deeply concerned about where things are going. Money and power have derailed AI from its original mission. None of us went into AI wanting to sell ads or generate fake news.
I am not anti-technology. I don’t think we should stop building AI. But we can’t go on as we are. Right now, we are building the wrong kind of AI, an AI—and an AI industrial complex—that we can’t trust. My fondest hope is that we can find a better path.
You might know me as the person who dared to challenge OpenAI’s CEO Sam Altman when we testified together in the US Senate. We were sworn in together, on May 18, 2023, both promising to tell the truth. I am here to tell you the truth about how Big Tech has come to exploit you more and more. And to tell you how AI is increasingly putting almost everything we hold dear—from privacy to democracy to our very safety—at risk, in the short term, medium term, and long term. And to give you my best thoughts as to what we can do about it.
Fundamentally, I see four problems.
- The particular form of AI technology that everybody is focusing on right now—Generative AI—is deeply flawed. Generative AI systems have proven themselves again and again to be indifferent to the difference between truth and bullshit. Generative models are, borrowing a phrase from the military, “frequently wrong, and never in doubt.” The Star Trek computer could be counted on to give sound answers to sensible questions; Generative AI is a crapshoot. Worse, it is right often enough to lull us into complacency, even as mistakes invariably slip through; hardly anyone treats it with the skepticism it deserves. Something with the reliability of the Star Trek computer could be world-changing. What we have now is a mess, seductive but unreliable. And too few people are willing to admit that dirty truth.
- The companies that are building AI right now talk a good game about “Responsible AI,” but their words do not match their actions. The AI that they are building is not in fact nearly responsible enough. If the companies are left unchecked, it is unlikely that it ever will be.
- At the same time, Generative AI is wildly overhyped relative to the realities of what it has delivered or can deliver. The companies building AI keep asking for indulgences—such as exemptions from copyright law—on the grounds that they will someday, somehow save society, despite the fact that their tangible contributions so far have been limited. All too often, the media buys into Silicon Valley’s Messiah myth. Entrepreneurs exaggerate because it is easier to raise money if they do so; hardly anyone is ever held accountable for broken promises. As with cryptocurrencies, vague promises of future benefits, perhaps never to be delivered, should not distract citizens and policymakers from the reality of current harms.
- We are headed toward a kind of AI oligarchy, with way too much power, a frightening echo of what has happened with social media. In the United States (and in many other places, with the notable exception of Europe), the big tech companies call most of the shots, and governments have done too little to rein those companies in. The vast majority of Americans want serious regulation around AI, and trust in AI is dropping, but so far Congress has not risen to the occasion.
As I told the US Senate Judiciary Subcommittee on AI Oversight in May 2023, all of this has culminated in a “perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation and inherent unreliability.”
In this short book I will try to unpack all of that, and to say what we—as individuals, and as a society—can and should insist on.