Testing comprehensive frameworks for AI in heightened risk contexts

Research output: Book/Report › Commissioned Report › peer-review

Abstract

This policy paper, co-produced by Popa (Delft University) and Paterson-Young (University of Northampton), explores 'Testing comprehensive frameworks for AI in heightened risk contexts'. Seen as a disruptive, game-changing technology, Artificial Intelligence (AI) has received considerable attention from regulators and researchers aiming to establish comprehensive legislation and governance frameworks, explain how it works, and reflect on its ethical implications. However, policies and governance frameworks for excepted or high-risk areas are not as abundant or as visible as those addressing mainstream AI applications.

Research into Artificial Intelligence used in intelligence activities is still scarce and particularly challenging. AI represents an opportunity for the intelligence community in terms of enhanced analytical possibilities and, at the same time, raises new questions of ethics in intelligence and of the ethical or legal use of technology for intelligence purposes. AI use in this sphere presents intrinsic explainability challenges, especially when used to facilitate high-impact decision-making. As such, it offers a unique topic of investigation: the “classical” issues of regulating and deploying AI that mainstream developers and deployers face arise here in a context with its own specific challenges and different priorities. Regulating key enabling technologies while enabling cross-border cooperation to optimise solutions and approaches to common challenges or integrated processes requires continuous feedback, the premise of aligned interests and values, and existing or envisioned incentives for doing so. For intelligence agencies bound by the principles of accountability and transparency, the subject of ex-ante negotiated values and priorities is a pertinent challenge. Whether and how negotiation towards potential alignment takes place requires consideration in the development of guiding principles for the use of AI in intelligence.

The present research investigates how AI values and principles are negotiated in cases where AI is embedded in the core activities and processes of national security agencies. The paper does not investigate data or intelligence sharing practices, but rather the alignment of compatible approaches to technological deployment between like-minded agencies. This niche deployment of AI puts technology, legislation and democratic values to the test, highlighting tension points while putting forward identified solutions. By investigating how AI can be regulated and governed in high-risk areas in a cross-border setting, the paper sheds light on the intersection of practical possibilities for regulating technologies, epistemic positions on how this should be done, the differing value priorities of democratic states and their tension points, such as that between security and privacy, and the cascading effects of geopolitical events.
Original language: English
Publisher: Embassy of the Kingdom of the Netherlands
Commissioning body: Embassy of the Kingdom of the Netherlands
Number of pages: 31
Publication status: Published - 31 Jan 2025

Keywords

  • Artificial Intelligence (AI)
  • High Risk
  • Intelligence
  • Military
  • Ethics
  • Responsible AI
