
Accelerating Responsible AI in the Department of Defense

By Sean Keeley


December 16, 2022

When I joined Pallas as a graduate fellow in September, I quickly learned that a core part of our mission was to educate the next generation of national security leaders. A key facet of this mission is the Pallas Foundation’s role as a convening authority, bringing future leaders together around the most pressing security issues we face. This winter, I saw this mission in action as the Pallas Foundation hosted a moderated discussion on the future of responsible artificial intelligence (AI).


Bringing together stakeholders from across government, industry, media, and think tanks, Pallas Foundation fostered a candid conversation on how to accelerate the adoption of responsible AI in alignment with DoD goals. Three lessons stood out to me.


First, trust is essential to the employment of artificial intelligence. Not only is warfighter trust an official tenet of the Department’s AI strategy; it is also a practical concern for operators on the edge: why should commanders employ AI if they don’t trust it? DoD policymakers, and the industry partners they engage, must work toward building justified confidence in AI if they hope to deploy this innovative technology at scale.


Second, AI literacy is key to developing that trust. In the words of one participant, “we tend to fear the things we don’t understand,” and AI remains a little-understood, much-feared technology. Any effective strategy to deploy AI responsibly will need to foster fluency in AI technology among the Department workforce. This, in turn, will mean breaking down the stovepipes between researchers, operators, validators, and the many other DoD players with a stake in artificial intelligence. Private industry, too, can encourage AI literacy by equipping operators with the tools they need to test and verify how effectively an AI model is performing.


Finally, national security remains a human endeavor, and AI must remain accountable to human-set goals and priorities. One participant described a threefold test for responsible AI: we must ensure that AI works as it was designed to, that it does not do what it was not designed to do, and that operators are trained to use it effectively. To that end, new processes to continuously test and evaluate AI performance can put guardrails around the proper employment of AI while building the trust needed to unlock its potential.


Through conversations like these, the Pallas Foundation aims to bring a variety of perspectives to the table to inform a smarter security strategy for the 21st century. It’s all part of the broader Pallas mission: making a positive impact on national security.


N.B. The event described was held under the Chatham House Rule; the above summary reflects the views of the author alone and does not imply endorsement by any other attendee.


Sean Keeley was a Pallas Foundation Fellow in fall 2022. He is a recent master’s graduate of Columbia University’s School of International and Public Affairs (SIPA) and previously worked as a writer and editor at The American Interest. He will continue with Pallas in 2023 while pursuing candidacy in the U.S. Foreign Service.
