EUROPE INSIGHT – The role of ethical and responsible AI in European defence and security

Attilio Caligiani 1, Gen. (Ret.) Koen Gijsbers 2, Lennart Padberg 3

Artificial Intelligence (AI) is rapidly transforming every aspect of society, and its integration into defence and security is reshaping modern strategic and operational frameworks. While AI offers unprecedented advantages, it also raises critical ethical and policy challenges. Questions of accountability, reliability, and adherence to legal frameworks remain at the forefront of discussions on responsible AI deployment. In this article, the authors explore the strategic implications of balancing innovation with ethical compliance and policy standards to ensure the responsible use of AI in defence and security.

The rapid development and growing utilisation of AI across domains is changing the defence and security landscape as we know it – enhancing situational awareness and operational efficiency, supporting decision-making, and driving advancements in autonomous weapon systems, to name only a few.

While AI holds strong potential for strengthening Europe’s strategic autonomy in defence and security, it also entails a series of ethical implications. The potential for AI to autonomously execute life-or-death decisions underscores the necessity of a human-in-the-loop approach, where human oversight is maintained to prevent unintended consequences. Additionally, AI’s ability to process surveillance data at an unprecedented scale raises concerns about privacy, data protection, and the potential misuse of AI-enhanced intelligence tools.

With the adoption of the AI Act, the European Union is at the forefront of regulating AI, offering a comprehensive framework to harmonize national policies. For the Act to be implemented effectively, however, it will be crucial to ensure consistency with other relevant European legislation, such as the Directive on Security of Network and Information Systems (NIS2), the Cyber Resilience Act (CRA), the Critical Entities Resilience Directive (CERD), the General Data Protection Regulation (GDPR), the Law Enforcement Directive (LED), and the European policy framework on drone and counter-drone solutions.

Aiming to ensure the ethical and safe use of AI across various domains, the AI Act classifies AI systems by risk level, balancing the need for security with incentives for innovation. Its interaction with the defence sector, however, presents a particular set of challenges. The ambiguity surrounding military AI exemptions, particularly in dual-use applications where civilian and military technologies overlap, requires further guidance. Since military AI applications are excluded from the AI Act, national governments retain significant autonomy in defining their own defence AI policies, creating a risk of fragmentation in both ethical standards and interoperability between Member States. While there is currently no formal decision that the rules of war apply to digital space, it is widely agreed that exemptions for purely military applications of AI should strictly follow the Law of War, ensuring the preservation of ethical standards.

While European policy efforts prioritize safety, they need to be carefully balanced with innovation, given the rapid pace of AI development among other global powers. Without a well-defined strategic approach, backed by adequate financial and technical support at both the European Union and national levels, European companies risk falling behind in the global AI race.

Such concerns are echoed in the recent proposal by Henna Virkkunen, the European Commission’s Executive Vice-President for technological sovereignty, to introduce a ‘Cloud and AI Development Act’ to counter Europe’s growing productivity gap with the United States and China.

Going forward, trustworthy advanced AI (AAI) will be one of the emerging technologies that the European Union should be paying more attention to. As outlined by the High-Level Expert Group on Artificial Intelligence, trustworthy AAI has three components, which should be met throughout the system’s entire life cycle: it should be lawful, ethical and robust, both from a technical and social perspective.

To address technical, social, and ethical considerations, regulatory sandboxes will be key, offering a controlled environment in which companies can innovate while ensuring compliance. By leveraging ethical requirements as a competitive advantage and turning compliance into an asset rather than a limitation, European companies can lead in responsible AI use, fostering trust among allies at NATO and global level. This is especially crucial as cyber-threats grow more sophisticated: earning public confidence in innovative technologies requires companies to prioritize cybersecurity, developing resilient systems capable of detecting and mitigating increasingly advanced cyber-attacks.

Driving responsible AI development also requires close public-private partnerships and collaboration with policy bodies. This approach allows companies to share knowledge and resources to accelerate innovation while ensuring interoperability and alignment with policy requirements. A unified European strategy, backed by funding programmes such as the European Defence Fund (EDF) and Horizon Europe, is thus critical to strengthening industry collaboration and ensuring that AI research advances.

Moreover, given the dynamic nature of AI technologies, agile strategies that anticipate technological shifts are essential: monitoring global AI advancements and adapting approaches accordingly is pivotal to maintaining Europe’s relevance and strategic advantage in defence and security. To succeed in this evolving environment, companies require a proactive strategic approach, ensuring preparedness and adaptability to both technological and policy challenges.

Image courtesy FGS Global

  • 1 Partner, European Affairs, at FGS Global
  • 2 Senior Advisor at FGS Global, former CIO of the Netherlands Ministry of Defence and General Manager of the NATO Communications and Information Agency (NCIA)
  • 3 Senior Associate at FGS Global
  • FGS Global is a leading strategic advisor for the stakeholder economy