Artificial intelligence (AI) has become a catchall term for a variety of computer science disciplines, applications, and use cases. Widely understood as technologies that implement intelligent task execution in machines, AI often describes a future state as much as a reality of today. Concrete gains in the AI field have resulted from an increased availability of data and computing power and advances in machine learning (ML) and electronics miniaturization. AI is increasingly profitable in commercial sectors, such as banking and retail. It will likely have significant national security applications in optimizing maintenance and logistics, the development of new weaponry, force training and sustainment, and the command and control of military operations. AI and ML also enable other emerging technologies critical to security, such as hypersonic missile defense, network and communication management and resiliency, Internet of Things, and fifth-generation wireless.
Adversaries and allies alike are issuing AI strategies and including AI in their defense technology portfolios. Development of emerging technologies will largely occur in the private sector, which means, as the 2018 National Defense Strategy [PDF] notes, that “state competitors and non-state actors will also have access to them, a fact that risks eroding the conventional overmatch to which our Nation has grown accustomed.”
Shortcomings in technology, workforce, computing infrastructure, data, and policy slow the ability of the U.S. Department of Defense (DOD) to develop, acquire, and deploy AI capabilities essential to national security. In order to address these challenges, the department should modernize procurement procedures for software, reform hiring authorities, shorten security clearance processing time, actively invest in areas the private sector ignores—such as machine learning system test and evaluation—and be prepared to demonstrate to internal and external audiences a return on investment in AI infrastructure development.
The DOD has a long history of working with AI and has invested in AI research and development (R&D) and used rules-based and expert systems—once considered cutting-edge AI—for decades. Today, AI adoption is common in data-rich areas of the DOD with rigorous analytic needs or repetitive tasking, such as Project Maven, a program that uses machine learning and computer vision to aid video analysis in intelligence, surveillance, and reconnaissance activities. AI is also being developed in robotics and unmanned systems, electronic warfare and electromagnetic spectrum management, logistics and maintenance, command and control, humanitarian assistance and disaster relief, and cyber activities. The DOD is focused on operationalizing AI capability by combining machine learning with other techniques, such as computer vision, robotics, natural language processing, and optimization. In contrast with prevalent science fiction narratives, the most rewarding AI uses across the federal sector have been comparatively mundane.
In addition to operationalizing AI capabilities, the Defense Department has also been trying to develop policy for governing AI use. In 2018, it released its strategic approach to AI [PDF], which focuses on human-centric adoption, rapid AI delivery, technical workforce development, national and international partnerships, and a call to lead on ethical and safe AI use. To provide centralized direction and support throughout the department and armed services, the DOD established the Joint Artificial Intelligence Center (JAIC) under the DOD Chief Information Officer in 2018 to carry out its AI strategy. In 2019, the department issued its DOD Cloud Strategy [PDF] and directed the Defense Innovation Board (DIB), an independent advisory board of technology leaders established to provide recommendations to the DOD, to conduct a study on the department’s principles for ethical and responsible AI use. In February 2020, the DOD formally adopted the AI principles from the DIB study.
The armed services are also developing policies to leverage AI. In 2019, the Air Force published its Science and Technology Strategy [PDF] and Artificial Intelligence Strategy [PDF] and debuted its Computer Language Initiative. The Army established an AI Task Force [PDF] in 2018. The Navy and Marine Corps have increasingly focused their AI R&D efforts on unmanned and learning-enabled robotic systems. Despite the proliferation of DOD initiatives, none have successfully addressed the underlying issues that hinder AI adoption.
Challenges Throughout the AI Ecosystem
The Department of Defense artificial intelligence ecosystem—the complex network of technology, people, computing infrastructure, data, and policy—is underdeveloped. Some significant shortcomings challenge the DOD’s efforts, including a decrease in the overall federal R&D funding budget; the changing nature of work in an increasingly digital economy; skills shortage in computer science and other science, technology, engineering, and mathematics (STEM) fields; and rapid innovation in the commercial sector that outpaces the department. Despite these shortcomings, the DOD can address several areas to operationalize AI for national security.
Technology
While AI continues to demonstrate impressive results in both public and private applications, it is an immature technology, generally useful only in the situations for which it is programmed. Datasets, training approaches, and algorithms developed for one use are generally not transferable to another, and a misunderstanding of these technical limitations could lead to an overreliance on AI, exacerbating the risk and consequences of misuse.
The DOD lacks a system for testing and evaluating AI and ML security, which leaves products more easily exploitable. For example, researchers have discovered that computer vision systems for autonomous vehicles can be easily deceived by placing a sticker over a stop sign, causing the car to mistake stop signs for speed limit signs. Without its own system for detecting these issues, the DOD leaves itself vulnerable to potentially catastrophic accidents. At a minimum, it will be unable to accurately evaluate the quality and safety of AI products that it procures from vendors.
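The stop sign deception above is an instance of an adversarial example: a small, targeted perturbation that flips a model's output. The sketch below illustrates the idea with NumPy and a hypothetical linear classifier; the weights, inputs, and labels are all invented for illustration and do not model any real system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: positive score -> "stop sign", negative -> "speed limit".
w = rng.normal(size=100)

# A clean input the model scores well into "stop sign" territory.
x = w / np.linalg.norm(w) + 0.01 * rng.normal(size=100)

# FGSM-style perturbation: move each feature slightly against the model's
# gradient. For a linear score w @ x, the gradient with respect to x is w.
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)

print("clean score:    ", w @ x)      # comfortably positive
print("perturbed score:", w @ x_adv)  # pushed negative: misclassified
```

A per-feature change of 0.2 is imperceptible in a high-dimensional image yet reverses the classification, which is why test-and-evaluation regimes for ML must probe inputs an ordinary quality check would never try.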
Moreover, the DOD acquisitions process is not designed for the high level of experimentation and modification necessitated by AI. Instead, it assumes that products are ready for use once procured. However, as the stop sign example above demonstrates, this does not hold true for AI, which needs to be constantly improved upon as new data becomes available and new vulnerabilities are uncovered.
People
The DOD lacks a workforce with foundational AI literacy, which hinders its ability to successfully acquire and deploy AI. Despite recognizing this deficiency, DOD efforts to recruit new talent knowledgeable about AI face challenges. First, where these employees will work within the components—the agencies, organizations, and military services included under the DOD umbrella—is unclear. Second, managers tasked with recruiting qualified technical talent often do not possess the foundational knowledge necessary to assess candidates’ qualifications. Third, some in the defense community are resistant to cultural changes that could be necessary to build a technical workforce, and their arguments against incorporating new skillsets into the workforce are often hyperbolic and rely on stereotypes of STEM professionals. The most prominent of these stereotypes is that the cyber community dislikes hierarchy, which would make it incompatible with the famously bureaucratic DOD. Fourth, the DOD is not competing for technical talent in a vacuum or solely against U.S. firms, which usually pay more and offer greater flexibility.
Computing Infrastructure and Data
The DOD struggles to implement up-to-date software because DOD processes do not incentivize leaders to update and modernize IT equipment, operating systems, computing power, and software packages at the pace necessitated by the current rate of technological evolution. System permissions are also not readily available to DOD employees. For example, Python is a computer language widely used in machine learning, yet DOD computers do not come with Python installed, nor do employees have administrative permissions to download it. The DOD also struggles to integrate AI with legacy systems that are not readily compatible with modern computing capabilities. Legacy systems include antiquated hardware, such as floppy disks (which the DOD used to control the U.S. nuclear arsenal until June 2019), and old software that has gone without updates, in some cases for decades. The legacy systems that the department chooses to keep will not be modernized overnight and are not going away any time soon.
The data-scarce environment of the DOD is another hurdle to successful AI adoption. Often, the data required for AI systems is simply not gathered. The data that does exist is frequently “dirty”—siloed, flawed, and unstructured—making it largely unusable for machine-learning applications.
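To make “dirty” concrete, the sketch below shows the kind of cleanup (deduplication, flagging gaps, normalizing inconsistent labels) that records typically need before they can feed a machine-learning pipeline. The records and field names here are hypothetical, standing in for data merged from siloed systems.

```python
# Hypothetical maintenance records pulled from two siloed systems.
raw_records = [
    {"part_id": "A-100", "hours": "1200", "status": "FAILED"},
    {"part_id": "A-100", "hours": "1200", "status": "FAILED"},   # exact duplicate
    {"part_id": "B-205", "hours": "",     "status": "failed"},   # gap + inconsistent label
    {"part_id": "C-310", "hours": "800",  "status": "OK"},
]

def clean(records):
    """Deduplicate, flag missing values, and normalize labels."""
    seen, cleaned = set(), []
    for rec in records:
        key = tuple(sorted(rec.items()))
        if key in seen:
            continue                    # drop exact duplicates
        seen.add(key)
        cleaned.append({
            "part_id": rec["part_id"],
            "hours": int(rec["hours"]) if rec["hours"] else None,  # flag gaps as None
            "status": rec["status"].strip().upper(),               # normalize labels
        })
    return cleaned

print(clean(raw_records))
```

Even this toy cleanup requires decisions (what counts as a duplicate, how to treat gaps) that, at DOD scale and across incompatible systems, demand sustained data-engineering investment before any model training can begin.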
Policy and Guidance
The DOD has fallen behind on communicating with the public about how it will use AI under existing policy. This is partly due to a lack of AI understanding in the general public combined with unclear technical terminology. The conversation surrounding AI often uses the phrases artificial intelligence, autonomy, autonomous, and automation interchangeably. These concepts are distinct but overlapping, blurring the differences between systems. For example, is a suicide drone meaningfully distinct from a loitering cruise missile? Do rules-based systems of decades past count as AI when compared to machine learning–enabled systems? Moreover, due in part to the variety of terms included under the AI umbrella, the total number of AI projects that exist—let alone the dollar value of those projects—is unclear to the public.
The DOD is actively working to address the policy concerns with using AI, such as bias in data for machine learning, through the February 2020 adoption of its AI principles. Furthermore, the DIB’s AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense [PDF] concludes that existing legal and ethical frameworks, such as the international law of war, guide acceptable development and deployment of AI systems. Often cited in AI dialogues, Department of Defense Directive 3000.09 [PDF] covers the development of autonomous and semi-autonomous weapons strictly with respect to the role of humans in selecting and engaging targets. However, AI is not required to implement autonomy in systems. Therefore, the directive would cover AI use in lethal autonomous weapon systems (LAWS) but would not apply to AI projects outside of this specific purpose. In the absence of clear policy on AI to date, segments of the American technology sector, as well as foreign governments and activists, feel uneasy about companies that choose to work with the department on AI projects. It remains to be seen how these policies will be implemented and whether that will assuage concerns.
Budget
Beyond the dollar amount, budget processes, including funding allocation, planning lead times, and compromises among competing priorities, present challenges to AI adoption. In an IBM and GovLoop survey of DOD and intelligence professionals on the challenges of data readiness and new technologies such as machine learning, 49 percent of respondents reported that the budget was their primary constraint (the additional response options were lack of skills, unsure which is best, not a priority, and cultural issues). Strategy documents do not grant the JAIC the authority to allocate funding and resources. It can incentivize and support AI initiatives but cannot force the armed services to start AI projects. How the JAIC will encourage AI adoption is unclear. Also, the U.S. government’s planning period is too long to allow for immediate, widespread AI technology use, and the measures called for by various AI strategies require at least two years’ lead time. For example, the planning for DOD programs that will be used in fiscal years (FY) 2022 to 2026 began in 2019.
Recommendations
The DOD’s early success stories include the continued capability development of Project Maven, partnerships with international counterparts to promote AI for humanitarian assistance and disaster relief, and bringing intelligent automation to the National Geospatial-Intelligence Agency. These examples demonstrate the importance of strengthening the AI ecosystem. While many lasting, structural changes need to occur, the department’s priorities should be fourfold.
First, the department should modernize its computing infrastructure, software procurement efforts, and data architecture. The Defense Department needs modern IT as well as the flexibility to acquire and experiment with new software. The department should implement the recommendations provided in the DIB’s Software Acquisition and Practices study to address these challenges. Successfully tackling the dirty data challenge would significantly improve the department’s ability to work with AI and provide valuable experience in solving a core problem that many AI users face.
Second, the DOD needs to actively invest in innovation where the private sector is not incentivized to focus. The department should prioritize research and investment in security, verification and validation, test and evaluation for machine-learning systems, and AI-specific microprocessors. Further, as the United States and its adversaries increasingly adopt AI technologies, the department should prioritize research on counter-AI techniques to protect its own assets and exploit vulnerabilities in targets.
Third, the department should task all organizations that have a stake in AI development and deployment with demonstrating return on investment of money allocated for AI. The department should prioritize efforts to identify metrics to accurately assess AI program success and routinely collect data related to the metrics it identifies. As the AI hype cycle slows and trade-offs are made among competing priorities, the department will need to account for its investment in AI to itself, Congress, and the American people.
Fourth, the DOD should focus on the talent pipeline for both military and civilian personnel. First, it should address security clearance processing times and reform hiring authorities to better suit the realities of the information age job market. Though the 2020 National Defense Authorization Act requires processing times of 30 days or fewer for a secret clearance and 90 days or fewer for a top secret clearance by December 2021, the processing time for a secret clearance averaged 234 days and a top secret clearance averaged 422 days as of mid–FY 2019. Second, because the DOD is at a salary disadvantage for roles such as software engineers compared to the technology sector, it should follow the example of the Department of the Treasury’s Office of Financial Research, the Securities and Exchange Commission, and the Federal Reserve. In order to compete with the high-paying private financial sector, these federal entities have the authority to hire federal regulators on a pay scale separate from the General Schedule (GS) pay scale. Though perhaps not equivalent to private sector pay, a dedicated STEM pay scale coupled with signing and performance bonuses would make the DOD more competitive for technical talent. Finally, placement, retention, and professional development are critical for keeping talent once hired. The department should create a STEM career track tailored to technical career growth milestones for both early- and mid-career professionals to address challenges in retention.
The Department of Defense has set an ambitious yet necessary direction for AI use in national security. That said, if the department wants to evolve into an organization that designs, develops, and deploys AI, it will have to make bigger investments in people, policy, and foundational technologies.
This Cyber Brief is part of the Digital and Cyberspace Policy program. The Council on Foreign Relations takes no institutional positions on policy issues and has no affiliation with the U.S. government. All views expressed in its publications and on its website are the sole responsibility of the author or authors.