The project developed a white paper addressing the development and deployment of key enabling technologies in areas that fall outside the jurisdiction of mainstream AI regulatory frameworks, namely national security, intelligence and law enforcement. Key enabling technologies such as AI used in the law enforcement, defence and intelligence fields can bring strategic advantages and effectively counter common and hybrid threats, but they face challenges in articulating accountable, explainable and trustworthy frameworks for their development and deployment, owing to "black box" effects, the complexity of these fields in setting orders of priority, and the time lag between technological development and legislative articulation. In all Five Eyes countries, respect for democracy and adherence to democratic values is a strength, and deploying key enabling technologies in line with these values in the law enforcement, intelligence and national security fields is therefore a common endeavour. Ethics is the most challenging aspect of responsible, explainable and trustworthy AI in the context of civilian deployment, and even more so in the areas exempted from the main legislative frameworks. In the case of AI ethics there is a further gap that national security bodies must fill: they must agree on the core principles they wish to prioritise. Drawing on the conclusions of the Report on Assurance of Third-Party AI Systems for UK National Security (CETaS, 2024) and the challenges identified by GCHQ regarding ethical AI, including the need for the community to negotiate the principles it wishes to prioritise, the paper will investigate the specificity versus universality of the principles put forward in the UK report.
A further challenge to be addressed is reconciling apparently incompatible values: security on the one hand, and on the other the values already identified as major challenges to ethical AI, namely fairness, accountability, empowerment and privacy. Further analysis will examine how values, ethics and principles are negotiated within the community, considering different threat levels and the power matrix both within the community and between AI developers and deployers. One illustration of how difficult it is to reach global agreement on AI ethics in these fields is that, at the September REAIM summit on responsible AI in the military domain, China did not endorse the blueprint on ethical AI. The joint paper will therefore also consider the balance between ethics and human-centric AI in regulating key enabling technologies versus focusing on the hard strategic advantages of deploying the technology. The research will test the feasibility of an ecosystem approach to this pursuit, given that common threats are addressed systemically, while also investigating tension points in the applicability and universality of these principles, given national jurisdiction over the exempted areas. The impact that different oversight mechanisms have on the maturity of the AI landscape, especially in the exempted areas, will also be addressed. The paper abides by a pro-innovation approach to regulation.