Opinion / Analysis
June 15, 2023

Regulating the development of artificial intelligence is possible—and necessary.

By Tristan Paci

In discussions of artificial intelligence, commentary often focuses on how this technology may automate jobs, revolutionize war, or even lead to the collapse of humanity. However, a more immediate issue deserves serious consideration from national security policymakers: the potential for AI advancements—specifically generative AI—to turbocharge disinformation and flood our information ecosystem with untruths.

On the one hand, a new era of AI-powered disinformation could allow malign actors—particularly foreign adversaries—to manipulate the information environment and shape public discourse more easily, as Russia did successfully in the 2016 U.S. election. On the other hand, and perhaps more concerningly, the proliferation of content produced using this technology could entirely erode public trust in the information we consume, undermining the social fabric that holds societies together.

National security policymakers in the United States must recognize the threat these advancements pose to national and international security and prioritize addressing it. Given America’s role as a global leader in the development of AI, U.S. policymakers have a responsibility to coordinate an international response to the coming era of disinformation and work with partners to prevent the further breakdown of our shared reality that this technology threatens.

At the heart of this challenge is generative artificial intelligence. This type of AI helps create hyperrealistic content—including text, images, audio, and video—by learning from large datasets. Interest in generative AI has spiked in recent months largely because of its application in ChatGPT, a sophisticated chatbot developed by OpenAI that has exploded in popularity since its public release in late 2022.

Source: Tristan Paci, National Interest, June 6, 2023, https://nationalinterest.org/feature/policymakers-must-prepare-advent-a…