J. Robert Oppenheimer, who had led the project that culminated in the test, contemplated that morning the possibility that this destructive power might somehow contribute to an enduring peace. He recalled the hope of Alfred Nobel, the Swedish industrialist and philanthropist who invented dynamite, that his creation would end wars.
After seeing how dynamite had been used in making bombs, Nobel confided to a friend his belief that more capable weapons, not less capable ones, would be the best guarantors of peace. He wrote, “The only thing that will ever prevent nations from beginning war is terror.”
Our temptation might be to recoil from this sort of grim calculus, to retreat into hope that a peaceable instinct in our species would prevail if only those with weapons would lay them down. It has been nearly 80 years since the first atomic test in New Mexico, however, and nuclear weapons have been used in war only twice, at Hiroshima and Nagasaki. For many, the bomb’s power and horror have grown distant and faint, almost abstract.
The record of humanity’s management of the weapon, imperfect and, indeed, dozens of times nearly catastrophic, has been remarkable. Nearly eight decades of some version of peace have prevailed in the world without a great-power military conflict. At least three generations, billions of people and their children and grandchildren, have never known a world war. John Lewis Gaddis, a professor of military and naval history at Yale, has described the lack of major conflict in the postwar era as the “long peace.”
The atomic age and the Cold War essentially cemented for decades a calculus among the great powers that made true escalation, as opposed to skirmishes and tests of strength at the margins of regional conflicts, exceedingly unattractive and potentially costly. The psychologist Steven Pinker has argued that a broader “decline of violence may be the most significant and least appreciated development in the history of our species.”
It would be unreasonable to assign all or even most of the credit for this to a single weapon. Any number of other developments since the end of World War II, including the proliferation of democratic forms of government across the planet and a level of interconnected economic activity that once was unthinkable, are part of the story.
The great-power calculus that has helped prevent another world war might also change quickly. But the supremacy of U.S. military power has undoubtedly helped guard the peace, fragile as it might be. A commitment to maintaining such supremacy, however, has become increasingly unfashionable in the West. And deterrence, as a doctrine, is at risk of losing its moral appeal.
The atomic age could soon be coming to a close. This is the software century; wars of the future will be driven by artificial intelligence, whose development is proceeding far faster than that of conventional weapons. The F-35 fighter jet was conceived in the mid-1990s, and the airplane, the flagship attack aircraft of American and allied forces, is scheduled to be in service for 64 more years, through 2088. The U.S. government expects to spend more than $2 trillion on the program. But as retired Gen. Mark A. Milley, former chairman of the Joint Chiefs of Staff, recently asked, “Do we really think a manned aircraft is going to be winning the skies in 2088?”
In the 20th century, software was built to meet the needs of hardware, from flight controls to missile avionics. But with the rise of artificial intelligence and the use of large language models to make targeting recommendations on the battlefield, the relationship is shifting. Now software is at the helm, with hardware — the drones in Ukraine and elsewhere — increasingly serving as the means by which the recommendations of AI are carried out.
And for a nation that holds itself to a higher moral standard than its adversaries when it comes to the use of force, technical parity with an enemy is insufficient. A weapons system in the hands of an ethical society, one rightly wary of its use, will act as an effective deterrent only if it is far more powerful than the capabilities of an opponent that would not hesitate to kill the innocent.
The trouble is that the young Americans who are most capable of building AI systems are often also the most ambivalent about working for the military. In Silicon Valley, engineers have turned their backs, unwilling to engage with the mess and moral complexity of geopolitics. While pockets of support for defense work have emerged, most funding and talent continue to stream toward consumer technology.
The engineering elite of our country rush to raise capital for video-sharing apps and social media platforms, advertising algorithms and shopping websites. They don’t hesitate to track and monetize people’s every movement online, burrowing their way into our lives. But many balk when it comes to working with the military. The rush is simply to build. Too few ask what ought to be built and why.
In 2018, about 4,000 employees at Google wrote a letter to Sundar Pichai, the chief executive, asking him to abandon a software effort for the U.S. Special Forces, known as Project Maven, that was being used for surveillance and mission planning in Afghanistan and elsewhere. The employees demanded that Google never “build warfare technology,” arguing that assisting soldiers in planning targeting operations and “potentially lethal outcomes” was “not acceptable.”
Google attempted to defend its involvement in Project Maven by saying the company’s work was merely “for non-offensive purposes.” This was a subtle and lawyerly distinction, especially from the perspective of soldiers and intelligence analysts on the front lines who needed better software systems to stay alive. Diane Greene, the head of Google Cloud at the time, held a meeting with employees to announce that the company had decided to end its work on the defense project. An article in Jacobin declared this “an impressive victory against US militarism,” noting that Google employees had successfully risen up against what they believed was a misdirection of their talents.
Yet the peace enjoyed by those in Silicon Valley who oppose working with the military is made possible by that same military’s credible threat of force. At Palantir, we are building the software architecture for U.S. and allied defense and intelligence agencies that will enable the deployment of this century’s AI weaponry. We should, as a society, be capable of debating the merits of using military force abroad without hesitating to provide those sent into harm’s way with the software they need to do their jobs.
What’s most concerning is that a generation’s disenchantment with, and indifference to, our country’s collective defense has led to a massive redirection of resources, intellectual and financial, toward sating the needs of consumer culture. The diminishing demands we place on the technology sector to produce products of enduring and collective value are ceding too much power to the whims of the market. As David Graeber, who taught anthropology at Yale and the London School of Economics, observed in a 2012 essay in The Baffler, “The Internet is a remarkable innovation, but all we are talking about is a super-fast and globally accessible combination of library, post office, and mail-order catalogue.”
The technology world’s drift toward the concerns of the consumer has helped reinforce a certain escapism — Silicon Valley’s instinct to ignore the important issues we face as a society in favor of the trivial and ephemeral. Challenges ranging from national defense and violent crime to education reform and medical research have appeared to many people in the technology industry to be too intractable, thorny and politically fraught to be worth addressing.
One year after the revolt at Google, an uprising by Microsoft employees threatened to halt work on a $480 million project to build an augmented-reality platform for soldiers in the U.S. Army. The workers wrote a letter to Satya Nadella, the company’s chief executive, and Brad Smith, its president, arguing that they “did not sign up to develop weapons” and demanding that the company cancel the contract.
In November 2022, when OpenAI released its chatbot ChatGPT to the public, the company prohibited the use of its technology for “military and warfare” purposes. After OpenAI removed that blanket prohibition on military applications this year, protesters gathered outside the San Francisco office of Sam Altman, the company’s chief executive, to demand that OpenAI “end its relationship with the Pentagon and not take any military clients.”
Such outrage from the crowd has trained leaders and investors across the technology industry to avoid any hint of controversy or disapproval. But their reticence comes with significant costs. Many investors in Silicon Valley and legions of extraordinarily talented engineers simply set the hard problems aside. A generation of ascendant founders say they actively seek out risk, but when it comes to deeper investments in societal challenges, caution often prevails. Why wade into geopolitics when you can build another app?
And build apps they have. A proliferation of social media empires systematically monetizes and channels the human desire for status and recognition.
For its part, the foreign policy establishment has repeatedly miscalculated when dealing with China, Russia and others, believing that economic integration would be sufficient to undercut their leaders’ domestic support and diminish their interest in military escalation abroad. The failure of the Davos consensus was to abandon the stick in favor of the carrot alone. Meanwhile, Xi Jinping of China and other authoritarian leaders have wielded power in a way that political leaders in the West might never understand.
On a visit to the United States in 2015, speaking to a group of business and political leaders at Seattle’s chamber of commerce, Xi recalled with affection reading “The Old Man and the Sea.” He said that when he visited Cuba, he traveled to Cojimar, the village on the island’s northern coast that had inspired Ernest Hemingway’s story of a fisherman and his 18-foot marlin. Xi said he “ordered a mojito,” the author’s favorite drink, “with mint leaves and ice,” explaining that he “just wanted to feel for myself” what Hemingway had been thinking when he wrote his story. The leader of a nation with nearly one-fifth of the world’s population added that it was “important to make an effort to get a deep understanding of the cultures and civilizations that are different from our own.” We would be well advised to do the same.
Our broader reluctance to proceed with the development of effective autonomous weapons systems for military use might stem from a justified skepticism of power itself. Pacifism satisfies our instinctive empathy for the powerless. It also relieves us of the need to navigate the difficult trade-offs that the world presents.
Chloé Morin, a French author and former adviser to the country’s prime minister, suggested in a recent interview that we should resist the facile urge “to divide the world into dominants and dominated, oppressors and oppressed.” It would be a mistake, and indeed a form of moral condescension, to systematically equate powerlessness with piety. The subjugated and the subjugators are equally capable of grievous sin.
We do not advocate a thin and shallow patriotism — a substitute for thought and genuine reflection about the merits of our nation as well as its flaws. We only want America’s technology industry to keep in mind an important question — which is not whether a new generation of autonomous weapons incorporating AI will be built. It is who will build them and for what purpose.