Tuesday, Oct. 21, the College of William and Mary Geopolitics of Technology Initiative hosted Davison M. Douglas Professor of Law Margaret Hu and former Deputy Director of the National Counterterrorism Center Russell Travers ’78 to analyze the future of law, policy and national security amid artificial intelligence advancements. The GeoTech Initiative is a student-led research project in collaboration with the Yale Policy Institute, focusing on the impacts of technology on Indo-Pacific geopolitical competition. GeoTech Talk Lead Chloe Cohen ’27 moderated the discussion.
The speakers began by reflecting on their backgrounds in public service.
Aside from teaching at the College of William and Mary Law School, Hu is also Director of the Digital Democracy Lab at the Law School and author of the casebook “AI Law and Policy.” She previously worked as special policy counsel in the Civil Rights Division of the U.S. Department of Justice.
“When I entered the Civil Rights Division of the U.S. Department of Justice, my first day was Sept. 10, 2001,” Hu said. “So it was the day before 9/11, and the terrorist attacks, and immediately, they asked for volunteers for a post-9/11 task force, which I volunteered for. And what I saw in the 10 years that I spent in the Justice Department was the rise of AI systems and data collection, aggregation, storage and analysis for counterterrorism purposes, and for the use of border security.”
Travers served as the deputy director and acting director of the National Counterterrorism Center and later served as deputy homeland security advisor on the National Security Council under the Biden administration.
“We are way too siloed in basically everything in this country, and the notion of doing interagency efforts really interested me,” Travers said. “Between the National Counterterrorism Center and a few tours of the National Security Council at the White House, that has largely defined the last 20 years of my career. I would go back and do it all again in a heartbeat.”
Travers explained that as technology use grows, so does the volume of data national security agencies must process.
“Even something that was just dealing with terrorism threats had grown to about 10,000 cables a day, about 16,000 names in those cables that we were having to process as a relatively small center to evaluate for threat,” Travers said. “And so we’ve got to the point where large language models are not optional when it comes to processing information. They are imperative.”
Travers continued by highlighting the negative impacts of AI, namely the spread of false information through deepfakes.
“As a country, we’re not very good at dealing with how you look at information and determine whether it’s good, better and different,” Travers said. “I mean, there’s studies of the last 10, 15 years where young people, college-educated students have a very difficult time distinguishing between misinformation, disinformation and malinformation. And now you overlay onto that AI deepfakes, and you have introduced a whole new level of complexity, of which we are not prepared.”
Hu views the spread of deepfakes and misinformation as implicating national security and democracy at large.
“I think that we’re having difficulty processing information in a way that our founders never intended,” Hu said. “Enlightenment depends on being able to know what is evidence-based reasoning. If we don’t have a sense of knowing where to get to evidence-based reasoning, I think that that also puts us in a very complicated place, not only with national security, but our entire democratic form of the way that we govern.”
Travers furthered Hu’s concerns about AI impeding democratic ideals by acknowledging how technology firms are making deals with U.S. Immigration and Customs Enforcement (ICE) to aid in deportations, creating tension with the Fourth Amendment.
“The ability of technology to compile just unheard of amounts of information and sell it to ICE in a way that they can use to apprehend individuals ostensibly under reasonable suspicion standards, and then deport them is pretty scary,” Travers said. “There have been, at latest count, 170 Americans that have been wrapped up in this. Irrespective of what you think about how immigration is being handled, Americans are losing out on due process because of this.”
Celia Schaefers ’28, the GeoTech Initiative open research call lead, appreciated that the talk challenged the typical industry narrative.
“They brought up the fact that even in this AI arms race, going faster with less regulation doesn’t necessarily mean a better outcome or more AI infrastructure,” Schaefers said. “It just means that we’re skipping maybe foundational steps. I think that showing people that we can slow down and still be really strong is important.”
Anushka Pujara ’28 admired Hu’s emphasis on the Constitution.
“I really like that she went with the democratic side of it and then explained why she thought that that was so important, especially now,” Pujara said. “And she kept rooting it back to history, which I really enjoyed.”
When asked for advice for students seeking to work as public servants during the expansion of AI, Hu encouraged them to build on the U.S. founders’ groundwork by prioritizing innovation without sacrificing democratic values.
“Whatever career that you pursue, AI will be a part of it, whether you want it to be or not,” Hu said. “But I think you have to contextualize that knowledge of these AI systems within the broader goal and the broader ambitions of our founders. So I think we all have a responsibility to be democratic stewards of the AI world that we inherited.”
Travers acknowledged the mounting challenges in the public sector but encouraged students to get involved.
“I am tremendously worried right now about civil service,” Travers said. “But I will tell you that I also believe this would be a wonderful time for young people to get into government, because there’s going to be a lot to fix.”
