DoD’s New Ethical Principles for AI: Implications for Tech Companies

Posted on 28th February 2020

Richard Upchurch
Policy Manager, Asia & US

On 24 February, the US Department of Defence (DoD) officially adopted a series of ethical principles for the use of artificial intelligence (AI) in both combat and non-combat functions. The Defence Innovation Board, a group of business, academic and non-profit stakeholders advising the Secretary of Defence, developed the principles in consultation with a number of AI experts from industry, government, academia and the public to support the Department’s push to integrate AI and machine learning across its functions in accordance with the National Defence Strategy and DoD AI Strategy. This article examines the principles and their implications for technology companies.

The Five Principles

According to the information that the DoD has released publicly, the Department's principles for the ethical use of AI encompass five major areas:

1. Responsible. DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.

2. Equitable. The Department will take deliberate steps to minimise unintended bias in AI capabilities.

3. Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.

4. Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.

5. Governable. The Department will design and engineer AI capabilities to fulfil their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behaviour.

The similarity between the adopted principles and those recommended by the Defence Innovation Board to the DoD in 2019 is striking; in fact, they are nearly identical. This indicates the high level of trust that the Department places in its advisory committee, and may suggest that the DoD lacked the internal AI expertise necessary to contribute meaningfully to the principles' development. Also noteworthy is the focus on minimising unintended bias in AI capabilities, a growing concern among both policy-makers and the public, not only in the US but in many other countries. The European Commission, for example, includes the avoidance of unfair bias in its “Ethics Guidelines for Trustworthy Artificial Intelligence”.

However, the DoD’s approach to bias in AI applications will likely vary depending on whether an application is used in a combat or non-combat setting. Hiring algorithms that discriminate against people who dress a certain way – an example of the unfair bias that the European Commission and others worry about – might run afoul of DoD hiring practices and national labour laws, while algorithms that identify soldiers in enemy garb to support surveillance efforts might provide a leg up on the battlefield. The key word in the Department’s principles as it relates to bias is “unintended”.

In addition to bias, traceability and transparency – the subject of the DoD’s third principle – are gaining traction in policy fora worldwide, and are particularly susceptible to poor regulation given the complexity of algorithmic processes.

Implications for Tech Companies

The DoD’s newly-adopted principles are high-level and refrain from prescriptive technical language, which is a good sign for technology companies worried that an overzealous government department might take a needlessly restrictive or heavy-handed approach to the development of AI standards. Nonetheless, the principles will affect all AI companies based in the United States – both those that sell, or plan to sell, AI products and services to the Department and those that do not.

The impact on companies that sell to the DoD will be direct: according to DoD Chief Information Officer Dana Deasy, the Department “will use these principles to guide the testing, fielding and scaling of AI-enabled capabilities across the DoD.” The principles are therefore likely to affect the Department’s procurement of AI products and services. For example, the DoD could require future contractors to demonstrate that the tools, technologies or services they provide will minimise unfair bias and ensure that data and algorithmic processes are transparent and traceable. The requirements that contractors are subject to, however, could differ depending on the intended function of the AI. Those providing AI to streamline business processes might be subject to different “unintended bias”-related requirements than those providing AI for imagery analysis, while governability might be a core requirement for AI used in combat situations.

Notably, the principles and any resulting contract requirements could help ease some Silicon Valley employees’ concerns that their firm’s AI technology will be weaponised for war. The principles do not preclude the DoD from using AI to improve warfighting capabilities, but they do seek to ensure that AI technologies are used responsibly, for intended functions, while avoiding unintended consequences. This may allow technology firms that backed out of partnerships with the Pentagon under pressure from ethically-minded employees, à la Project Maven, to return to the table.

The impact on companies that do not count the DoD as a customer will be less direct but equally worthy of attention: the DoD’s principles will inform the development of the regulatory landscape in the US going forward, as policy-makers throughout the administration and Congress will begin to refer to the principles as an example of government-promoted guardrails around the use of AI.


Companies wishing to sell AI technologies to the DoD should monitor the Department’s implementation of its ethical AI principles to determine their impact on procurement of private-sector products and services. Specifically, companies should follow the activities of, and seek to engage, the DoD’s Joint Artificial Intelligence Centre (JAIC), established in 2018, which will coordinate the implementation of the principles as part of its larger mission to accelerate the DoD’s adoption and integration of AI. As a first step, companies can participate in the DoD’s East Coast Artificial Intelligence Symposium and Exposition, scheduled for 29-30 April in Crystal City, VA. The conference will be an opportunity for industry to discuss areas of collaboration with the DoD around the operationalisation of AI, demonstrate AI solutions and services, and learn about the DoD’s current activities, future direction, and challenges. JAIC will hold a West Coast conference later in 2020.

In addition, current and prospective DoD partners should proactively assess their ability to meet potential requirements stemming from the principles and engage Department officials to promote the company’s desire and ability to adhere to the principles in future partnerships.

Industry should also keep a close eye on how the principles influence the development of other AI policies throughout the US government. To the extent that other policies are high-level and avoid prescriptive technical language like the DoD principles, industry will benefit. But as anyone familiar with tech regulation will attest, positive outcomes are hardly guaranteed, and vigilance is key.
