Cybersecurity Challenges and Mitigations for LLMs in DoD Applications
Document Type
Conference Proceeding
Publication Date
6-25-2025
Abstract
Great power competition has escalated globally, making it increasingly important for the Department of Defense (DoD) to adopt artificial intelligence (AI) technologies that are both advanced and secure. Large language models (LLMs), which generate text, code, images, and other digital content based on the data sets used in training, have gained attention for their potential in DoD applications such as data analysis, intelligence processing, and communication. However, because of their complex architecture and extensive data dependency, integrating LLMs into defense operations presents unique cybersecurity challenges. These risks, if not properly managed, could pose severe threats to national security and mission integrity. This survey paper categorizes these challenges into vulnerability-centric risks, such as data leakage and misinformation, and threat-centric risks, including prompt manipulation and data poisoning, providing a comprehensive framework for understanding the potential risks of LLMs in DoD settings. Each category is reviewed to identify the primary risks, current mitigation strategies, and remaining gaps, ultimately identifying where further research is needed. By summarizing the state of the art in LLM cybersecurity, this paper offers a foundational understanding of LLM security within the DoD. By advocating for a dual approach that considers both the evolving nature of cyber threats and the operational needs of the DoD, it aims to provide actionable recommendations to guide ongoing research on the integration of LLMs into DoD operations.
Source Publication
Proceedings of the 24th European Conference on Cyber Warfare and Security
Recommended Citation
Yorkman, C., & Reith, M. G. (2025). Cybersecurity Challenges and Mitigations for LLMs in DoD Applications. Proceedings of the 24th European Conference on Cyber Warfare and Security, 848–855. https://doi.org/10.34190/eccws.24.1.3542
Comments
The "Link to Full Text" on this page opens the conference paper at the publisher website.
This is an Open Access conference paper published by Academic Conferences International and distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License (CC BY-NC-ND 4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited and is not altered, transformed, or built upon in any way.