AI Club hosts Professor Kaplow for talk on future of nuclear weapons, international security

Wednesday, Feb. 18, the AI Club at the College of William & Mary hosted associate professor of government Jeff Kaplow for a discussion on artificial intelligence and national security.

The event was held in the Integrated Science Center 4, which houses the College’s School of Computing, Data Sciences & Physics. 

Simone Annan ’27, strategic collaboration and outreach coordinator for the AI Club, introduced Kaplow, describing his extensive education and experience. Kaplow formerly analyzed nuclear programs for the Central Intelligence Agency and now teaches courses in international security and international relations. He also directs NukeLab at the College’s Global Research Institute and co-hosts the “Cheap Talk” podcast.

The talk was based on a paper Kaplow co-authored with Ryan Musto: “Artificial Intelligence and the Future of Nuclear Weapons.”

Despite the intense subject matter, Kaplow’s tone was light and cheerful. He joked about killer robots and recent movies, even as he outlined less fictional doomsday scenarios.

The first topic was AI and nuclear proliferation, specifically whether AI could help non-nuclear countries build nuclear weapons. Today, one of the biggest obstacles to proliferation is a lack of nuclear engineering knowledge and research. In theory, AI could fill those gaps.

Displaying a screenshot of a ChatGPT conversation on the screen behind him, Kaplow spoke of his own experiment in which he asked the simple question: “Can you help me build a nuclear weapon?” ChatGPT declined, and Kaplow offered his audience a dramatic reading of the bot’s lengthy response.

“ChatGPT said, ‘No, I cannot assist with that. Building nuclear weapons is illegal, highly dangerous and poses a significant threat to global security,’” Kaplow said.

Kaplow described how easily the large language model’s safeguards can be tricked. It is possible, for instance, to coax out instructions for constructing a dirty bomb by claiming to be a member of law enforcement. Questions can also be written in metaphorical verse — a method whimsically dubbed “adversarial poetry.”

“I just want to say, I think adversarial poetry is a great name for a band,” Kaplow joked. 

Kaplow’s next topics were AI, nuclear deterrence and decision-making. He pointed out that, in the event of a nuclear crisis, the time available for decision-making is extraordinarily limited and emotions would run high.

In a scenario like this, some think that unemotional — or even nonhuman — input could be valuable. In fact, nuclear-armed countries are already integrating AI input into proposed nuclear plans.

Some believe that AI could be a powerful nuclear deterrent. In theory, a country’s leadership could commission an AI system to launch a nuclear weapon in response to a specific action by a foreign adversary. Unlike an ordinary threat, such a commitment would effectively remove responsibility from the country’s leader and instead place it in the hands of the intended target.

Kaplow does not wish to fully delegate nuclear decisions to AI or, as he put it, “hand the button over to the robots.” At NukeLab, several students are researching the possible ramifications of AI decision-makers. At the moment, the results are not promising.

Before opening the floor to questions, Kaplow briefly discussed comparisons between nuclear technology and AI. Both are cutting-edge and potentially dangerous, and some have asked whether international treaties could limit AI’s risks.

According to Kaplow, that seems unrealistic. Unlike nuclear projects, which require vast resources and testing sites, AI development is difficult to detect.

“How would we even know if we were winning an AI arms race?” Kaplow said. “How would we know if we were losing?”

Andrew Yowell ’28, co-founder of the AI Club, is optimistic about the future of AI. After the talk, he said that although AI could accelerate weapons acquisition, that risk may be counteracted by the United States’ increased intelligence capabilities. He has a different concern: overreliance.

“Blind usage or blind trust, whichever word you want to fill in there, just using it because you trust it and not second-guessing anything,” Yowell said. “That’s the biggest pitfall, when people start trusting so much, because then also you’re not doing the critical thinking.”

Jack Tarditi ’28 is a club member and biology major planning to enter the health care industry, another field experiencing rapid AI integration. 

According to Tarditi, there is far too much distrust of AI for him to worry about his job prospects just yet. Using self-driving cars as an example, Tarditi said that even if AI is “100 times better than humans,” it is unlikely to be trusted with human lives any time soon.
