
The AI Arms Race and Why We Need to Come Together Now

In September 1992, when I was 15 years old, President George H.W. Bush visited my hometown of Wixom, Michigan, as part of his Spirit of America reelection campaign tour. It was a huge moment for our small town. My mother was good friends with the police chief and scored special access for our family of seven to stand up front. From the rear of a train shrouded in red, white, and blue, President Bush stood with First Lady Barbara and made his case for reelection to a few thousand members of my community. To great applause, he touted his Strategic Arms Reduction Treaty (START) with the USSR, which helped cement an end to the Cold War: “I’m proud of America’s leading role in ridding the fear of nuclear war from these young people here today. We’ve done it. We’ve changed America.”

President Bush had good reason to be proud. He had served as President Ronald Reagan’s vice president during the 1980s, when the United States and the Soviet Union were embroiled in a dangerous and unpredictable decades-long battle for nuclear superiority. The nuclear arms race traces back to 1945, when the Manhattan Project, under Los Alamos Laboratory Director J. Robert Oppenheimer, yielded the first atomic bomb. President Truman, then in his first term, ordered the detonation of two atomic bombs over Hiroshima and Nagasaki, instantly killing over 100,000 Japanese people, with thousands more later succumbing to related injuries and illness. The world witnessed the staggering power of a new class of weapons. By 1949, the Soviet Union had conducted its first successful nuclear test, setting off an arms race of increasingly powerful weapons. Just three years later, the United States tested the first thermonuclear hydrogen bomb, generating an explosion hundreds of times more powerful than the one over Hiroshima.

It wasn’t until 1957 that the International Atomic Energy Agency was created to address nuclear safety, security, and technology transfer. But by then it was too late. After the creation of the IAEA, the nuclear arms race only accelerated. In 1962, the world was brought to the brink of nuclear war during the Cuban Missile Crisis. Through the 1980s, the USA and USSR continued to accumulate thousands more nuclear weapons. Tensions around the world remained high, especially following the 1986 Chernobyl nuclear power plant explosion that narrowly avoided global catastrophe. So, on that fall day, President Bush had earned his bow and rousing ovation from my hometown crowd for obtaining Soviet cooperation in reducing nuclear stockpiles.

Now a generation removed from that whistlestop speech, I’m reminded of the challenges presented by nuclear weapons as we discover more about the new technology that leading experts say is the most powerful and dangerous humanity has ever created: artificial intelligence. CEOs of the AI companies leading the charge, like OpenAI’s Sam Altman and Microsoft’s former CEO Bill Gates, openly acknowledge the dangers posed by the technology: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Over 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium because AI poses “profound risks to society and humanity.” And just last month, Dr. Geoffrey Hinton, known as the “Godfather of AI,” resigned from Google, stating, “The alarm bell I’m ringing has to do with the existential threat of them taking control… I used to think it was a long way off, but now I think it’s serious and fairly close.” OpenAI and Microsoft admit they do not fully understand the technology at the center of the arms race they’ve ignited. They’ve released it into the world anyway, and it’s rapidly entangling itself with every aspect of our lives.

Experts agree that unless safeguards are implemented, AI poses immediate risks to civilization, beginning with the loss of privacy, the breakdown of the content-based verification systems our institutions depend on, and the erosion of trust through the proliferation of scams, lies, and deepfakes that sow civil unrest. Many predict widespread job loss as AI supplants both blue- and white-collar workers by misappropriating their skillsets. And perhaps most troubling, experts fear generative AI may be used in autonomous weapons systems that change the incentive structure for starting wars, and in the creation of an artificial general intelligence that works against humanity’s interests.

For months, thought leaders like Center for Humane Technology cofounders Aza Raskin and Tristan Harris have been spreading the word about the AI Dilemma to anyone who will listen. And while people of good faith are united on the need for commonsense guardrails to govern access to and development of this transformative technology, none have been put in place. Last month, Vice President Kamala Harris convened an AI summit at the White House, but the Executive Branch has yet to issue any regulations. U.S. Senators have introduced two bipartisan AI bills, but neither addresses the safety features needed to prevent catastrophe. But all of that is to be expected—simply put, the Executive and Legislative Branches were not designed to move as fast as this technology. Which is why the courts play such a critical role in applying existing law to such unprecedented circumstances.

Meanwhile, CEOs of the leading AI companies like Altman and Alphabet’s Sundar Pichai have been publicly inviting regulation. The open letters signed by thought leaders across industry, government, and non-profits read like a desperate cry for help. They can all see the writing on the wall and yet, nobody seems to be doing anything about it. It’s like the people inside the Manhattan Project pleading for someone to save the day before an AI atom bomb is detonated.

What gets lost in all of this is how we got here. Generative AI didn’t just appear out of thin air. OpenAI began as an open-source non-profit precisely because the founders believed that what they were setting out to create would be so powerful and so profitable, that it shouldn’t be driven by corporate returns. But that all seems to have changed. Microsoft made a multibillion-dollar investment in OpenAI, which turned its powerful technology upon the world and set off a global AI arms race. To build the most transformative technology the world has ever known, an almost inconceivable amount of data was captured. The vast majority of this information was scraped without permission from the personal data of essentially everyone who has ever used the internet, including children of all ages. Everything, everywhere, from everyone, all at once.

With these unimaginably large troves of stolen information, tech companies like OpenAI created chatbots like ChatGPT, which quickly became the fastest platform to reach 100 million users and fueled a valuation of the young company at tens of billions of dollars. Each new user and dollar earned represents another victim financially damaged by the ongoing commercial misappropriation of their personal information. These companies collected our entire digital footprint, including comments and conversations we had online yesterday or 15 years ago, all of which we communicated to unique communities, for specific purposes, targeting specific audiences. They capture every interaction with their chatbots—whether on their own websites or others that use their technology. By consolidating all this obscure data into one place to “train” the AI, they now have enough information about many of us to create our digital clones, including the ability to replicate our voice and likeness, predict and manipulate our next move, and misappropriate our skillsets in a way that could bring about our own obsolescence.

We hear the AI experts sounding the sirens on this powerful, yet dangerous and unpredictable technology that puts humanity at imminent risk on par with nuclear war. We agree that until OpenAI, Microsoft, and other tech companies ushering in the Age of AI can demonstrate the safety of their products in the form of effective privacy and property protections, a temporary pause on commercial development is necessary. We cannot afford to pay the cost of negative outcomes with AI like we’ve done with social media, or like we did with nuclear. As a society, the price we would all pay is far too steep.

I realize that all of this can sound alarmist or hyperbolic. But, when I think about that first atom bomb in 1945, I’m reminded of the hundreds of thousands of lives lost. I think about the twelve years that passed before we created the IAEA. I reflect on the Cold War that brought human civilization to the brink of self-destruction several times over four decades before our retreat from the nuclear arms race. And I think about the experts who tried to warn us. I can’t help but wonder how it all might have been different if we’d listened and taken steps to protect people from such a dangerous technology.

This is our chance to learn from our mistakes, to see the opportunity and risks clearly, to balance the hunger for innovation with the need for individual protections, privacy, and security. This is our chance to come together on AI.

To learn more, speak up, and take action, go to TogetherOn.AI.

Ryan J. Clarkson
Managing Partner
Clarkson Law Firm, P.C.