Ethical Principles for Artificial Intelligence in National Defense

Abstract

Defense agencies across the globe identify artificial intelligence (AI) as a key technology to maintain an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defense are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defense domain. This article provides one such framework. It identifies five principles—justified and overridable uses, just and transparent systems and processes, human moral responsibility, meaningful human control and reliable AI systems—and related recommendations to foster ethically sound uses of AI for national defense purposes.

The applications of AI in national defense are virtually unlimited, ranging from support to logistics and transportation systems to target recognition, combat simulation, training and threat monitoring. There is a growing expectation among military planners that AI could enable a speedier and more decisive defeat of the adversary. As with its use in other domains, the potential of AI is coupled with serious ethical problems, ranging from possible conflict escalation, the promotion of mass surveillance measures and the spreading of misinformation to breaches of individual rights and violations of dignity. If these problems are left unaddressed, the use of AI for defense purposes risks undermining the fundamental values of democratic societies and international stability.

Purposes of use of AI in defense span three core categories of action by defense institutions: sustainment and support, adversarial and non-kinetic, and adversarial and kinetic. We shall delve into the ethical implications of each of these, but let us describe them briefly here. Sustainment and support uses of AI refer to all cases in which AI is deployed to support ‘back-office’ functions, as well as the logistical distribution of resources. This category also includes uses of AI to improve the security of the infrastructure and communication systems underpinning national defense. Adversarial and non-kinetic uses of AI range from uses of AI to counter cyber-attacks, to active cyber defense, to offensive cyber operations with non-kinetic aims. Adversarial and kinetic uses refer to the integration of AI systems in combat operations; these range from the use of AI systems to aid the identification of targets to lethal autonomous weapon systems (LAWS).

Ethical Challenges of AI for Defense Purposes

The three categories of use of AI in the defense domain become progressively more ethically problematic as one moves from sustainment and support uses to adversarial and kinetic uses. This is because, alongside the ethical problems related to the use of AI as such (e.g. transparency and fairness), one must also consider the ethical problems related to adversarial uses of this technology, whether non-kinetic or kinetic, and its disruptive and destructive impact.

Each category of use has its own specific ethical requirements but also inherits those of the categories on its left. For example, to be ethically sound, adversarial and non-kinetic uses of AI need to ensure some form of meaningful control and measures to avoid escalation, while also respecting transparency and human autonomy, which appear in the sustainment and support category. Some AI systems have dual capability and can be used both defensively and offensively. Independently of the capacity in which they are used, such systems still need to meet the requirements specified in Fig. 2. For example, whether in an offensive or defensive operation, uses of these systems need to be accountable, proportionate and coherent with the principles of just war theory.

Sustainment and Support Uses of AI
AI can extract information to support logistics and decision-making, as well as foresight analyses, internal governance and policy. These are perhaps the uses of AI with the greatest potential to improve defense operations, as they will facilitate timely and effective management of both human and physical resources, improve risk assessment and support decision-making processes. For example, a report by KPMG stresses that a defense agency could have only a few minutes to decide whether a missile launch represents a threat, share its findings with allies and decide how to respond. AI would be of great help in this scenario, for it could integrate real-time data from satellites and sensors and distill key information that may facilitate and improve the human decision-making process by mitigating uncertainties due to the fog of war and possible human biases. The challenge is that these uses of AI must ensure that the systems do not perpetuate biased decisions or unduly discriminate, while also offering means to maintain accountability, control and transparency.
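To make the decision-support idea concrete, here is a minimal sketch of how independent sensor confidence scores might be fused into a single threat estimate while keeping a human in the loop and an audit trail for accountability. All names, the log-odds fusion rule, and the 0.9 escalation threshold are illustrative assumptions for this article, not a description of any actual defense system:

```python
import math

def fuse_threat_scores(sensor_probs):
    """Combine per-sensor threat probabilities via log-odds.

    Assumes the sensors give conditionally independent estimates;
    returns the fused probability that the launch is a real threat.
    """
    log_odds = sum(math.log(p / (1.0 - p)) for p in sensor_probs)
    return 1.0 / (1.0 + math.exp(-log_odds))

def assess_launch(sensor_probs, threshold=0.9):
    """Return a recommendation plus an audit record for accountability.

    The system never acts on its own: a high fused score only
    escalates the decision to a human commander.
    """
    fused = fuse_threat_scores(sensor_probs)
    decision = ("escalate to human commander"
                if fused >= threshold else "continue monitoring")
    return {"inputs": list(sensor_probs),
            "fused": round(fused, 3),
            "decision": decision}
```

Calling `assess_launch([0.8, 0.7, 0.9])` yields a fused score above the threshold, so the sketch recommends escalation to a human rather than acting autonomously, illustrating how accountability and control can be built into the workflow.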

Enabling AI Data Readiness in the Department of Defense

1. Accessibility to Small Business, Startups, and Non-traditionals

Because AI is a rapidly emerging field, innovative and breakthrough technologies are well distributed across the entire commercial landscape. Innovation is just as likely to be found in the newest startups pioneering breakthrough approaches as it is in the largest traditional companies. In developing the DRAID, we have made every effort to ensure that the best providers—regardless of whether this is their 1st or 101st time interacting with the Federal government—will be able to participate in the RFP process. To ensure that the widest swath of businesses across the commercial spectrum can successfully participate in the RFP process, we have taken a number of key steps.

First, we are releasing an Accessibility Guide for businesses responding to the RFP. The Accessibility Guide clearly and simply lays out the prerequisite steps businesses need to take to submit a response. While the guide can be useful for all responding businesses, it is particularly well suited to small businesses, startups, and non-traditional participants, for whom this may be their first interaction with the Federal acquisitions process.

Second, we have leveraged industry knowledge to make concrete and substantial changes to the vehicle itself to ensure participation from across the enterprise. We reformed the experience requirements to allow newer, non-traditional vendors—such as startups fostering the latest AI breakthroughs—to compete. We enabled teaming in areas that help selected small and non-traditional vendors execute on the requirements. Finally, we clearly noted our willingness to accept non-government experience in responses, ensuring that companies without prior Federal experience can still participate in the RFP process.

2. Ethics: Front and Center

A cornerstone of the DoD’s AI transformation journey is to develop and field AI systems in a responsible and ethical manner. The DoD AI Ethical Principles, which dictate that DoD AI systems must be responsible, equitable, traceable, reliable, and governable, apply across the entire product lifecycle and to both combat and non-combat applications. The DoD recognizes that AI ethics cannot be “bolted on” to an AI system after it is developed. Successfully embodying these principles in our systems requires integrating prompts, tools, and checkpoints to assess ethical risks across the AI product lifecycle, including directly into our technological processes. AI data preparation is a particularly important focal area in this regard.

This philosophy is directly integrated into the DRAID: for an AI system to be responsibly developed, the underlying AI data that is powering that system matters. To work toward our goal of fielding trustworthy and responsible systems, orders executed with the DRAID will explicitly include a task requiring the contractors to demonstrate how their products and solutions address or instantiate the DoD AI Ethical Principles, and/or aid in mitigating ethical risks throughout the AI product lifecycle. Additionally, we have explicitly included tasks to support ethical AI system development, such as providing technologies for identifying bias in data, and mechanisms for data management and data governance.

This point deserves repeating: every AI data preparation order executed with the DRAID will explicitly integrate AI ethics. This illustrates the DoD’s commitment to embedding ethics throughout the entire development process, including within this crucial process of data preparation.

3. Forward Looking

The services addressed by the DRAID span the full set needed to prepare “AI ready” data, from data ingestion right up to before model training begins. While many of these services are the core tasks in the AI data preparation process—including data ingestion, feature engineering, and labeling—we have shaped the DRAID to also include additional services that will become, and are already becoming, areas of critical interest to the DoD.
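The hand-off from raw data to “AI ready” data can be illustrated with a deliberately simplified sketch of the core steps named above: ingestion, feature engineering, and labeling. The field names, cleaning rule, and labeling rule are all hypothetical, chosen only to make the stages visible; real DoD pipelines are far more involved:

```python
import csv
import io

def ingest(raw_csv):
    """Ingest raw CSV text into records, dropping rows with missing fields."""
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    return [r for r in rows if all(v.strip() for v in r.values())]

def engineer_features(records):
    """Derive model-ready numeric features from raw fields."""
    features = []
    for r in records:
        speed = float(r["speed_kmh"])
        features.append({
            "speed_kmh": speed,
            "is_fast": speed > 100.0,  # simple derived feature
        })
    return features

def label(features, rule):
    """Attach labels; in practice this step is human- or tool-assisted."""
    return [dict(f, label=rule(f)) for f in features]

# Toy run: the third row has a missing value and is dropped at ingestion.
raw = "id,speed_kmh\n1,120\n2,80\n3,\n"
data = label(engineer_features(ingest(raw)),
             rule=lambda f: "vehicle" if f["is_fast"] else "other")
```

Everything downstream of `label` is where model training would begin, which is exactly where the DRAID's scope ends; labeling is reduced to a one-line rule here purely for brevity.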

These forward-looking areas include topics such as AI security, synthetic data generation, and data representativeness. While many may not think of these areas as “core” AI data preparation steps in the way one would data labeling, they are critical to the DoD’s success in setting the standard for world-class AI military systems, including putting the DoD AI Ethical Principles into practice. AI security must be accounted for early in the process to ensure the data used to train AI systems has not been manipulated or poisoned in a way that would compromise AI system performance once the system is fielded. Synthetic data generation provides an alternative to collecting, preparing, and labeling significant amounts of data, promising to substantially accelerate the development process. Checking for data representativeness, such as data bias or excluded entities, serves both to instantiate the DoD AI Ethical Principles and to ensure optimal system performance once the AI system is in the hands of the warfighter.
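As a sketch of what a representativeness check might look like before data reaches training, one can flag groups whose share of the dataset falls below a review threshold. The attribute name, the 10% threshold, and the function name are illustrative assumptions, not a prescribed DoD method:

```python
from collections import Counter

def representativeness_report(samples, attribute, min_share=0.1):
    """Flag attribute values that are under-represented in a dataset.

    `min_share` is an illustrative threshold: any group whose share of
    the data falls below it is flagged for human review before training.
    """
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    report = {}
    for value, n in counts.items():
        share = n / total
        report[value] = {"share": round(share, 3),
                         "flagged": share < min_share}
    return report
```

For example, a training set with 19 “desert” samples and 1 “urban” sample would flag “urban” (a 5% share) for review, surfacing a potential bias before the model ever sees the data.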

The inclusion of these technically informed, forward-looking areas will help ensure that the DoD leverages the newest commercial breakthroughs in a responsible manner, enabling the Department to meet and prepare for its strategic needs of both today and tomorrow.

Five Ethical Principles for Sustainment and Support and Adversarial and Non-kinetic Uses of AI

i. Justified and overridable uses

ii. Just and transparent systems and processes

iii. Human moral responsibility

iv. Meaningful human control

v. Reliable AI systems

Conclusion

Global Technology Solutions has the expertise required to meet your needs across every type of quality dataset. Our client pool includes government agencies, police forces, and local authorities, who trust us to deliver top-notch security for all of their data requirements: surveillance video datasets, voice recognition datasets, recognition datasets, and more. We have built our reputation and are respected in the AI industry. Every GTS facility is secured, so you are guaranteed that your data will stay protected at all times.
