When National Security Becomes a Shield for Evading AI Accountability

As artificial intelligence (AI) becomes embedded in state security and surveillance across Europe, the legal safeguards meant to constrain its use are increasingly being left behind. EU member states are turning to AI to automate decision-making, expand surveillance, and consolidate state power. Yet many of these applications, particularly biometric surveillance and algorithmic risk assessments, remain largely unregulated when it comes to national security. Indeed, broad carve-outs and exemptions for national security in existing AI legislation, including Article 2 of the EU AI Act and Article 3(2) of the CoE Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, have created significant regulatory gaps. Compounding this issue, “national security” itself is so loosely defined that it allows states to bypass fundamental rights while deploying AI with minimal oversight.

Against the backdrop of a rapidly shifting geopolitical environment and rising authoritarianism, national security is becoming a convenient cover for unchecked surveillance and expanded executive authority. This dynamic is setting a dangerous precedent. EU governments and candidate countries are invoking national security to justify AI deployment in ways that evade regulatory scrutiny, particularly in surveillance and counterterrorism. Upholding the jurisprudence of the Court of Justice of the European Union (CJEU) is critical because it provides a legal compass for defining national security and setting clear thresholds for when states can override fundamental rights. Without it, Europe risks building a security architecture powered by AI, but shielded from accountability.

While existing EU law lacks a concrete definition of national security, the CJEU has provided some guidance on this matter. According to CJEU case law (La Quadrature du Net, C‑511/18 and C‑512/18), national security corresponds to the “primary interest in protecting the essential functions of the State and the fundamental interests of society through the prevention and punishment of activities capable of seriously destabilizing the fundamental constitutional, political, economic or social structures of a country and, in particular, of directly threatening society, the population or the State itself, such as terrorist activities.”

This interpretation was reinforced in Commissioner of An Garda Síochána and Others C‑140/20 and in Joined Cases C-793/19 SpaceNet and C-794/19 Telekom Deutschland. By citing the prevention of terrorism as a key example of activities capable of destabilizing national structures, the CJEU closely associates counterterrorism and national security. Under this legal framework, EU member states may seek to justify any counterterrorism initiatives in the name of national security.

However, the court has also imposed limits. In SpaceNet and Telekom Deutschland, it stipulated that a national security threat “must be genuine and present or foreseeable, which presupposes that sufficiently concrete circumstances have arisen,” to justify the indiscriminate retention of data for a limited period of time. As a result, member states are subject to certain conditions when invoking a national security justification.

Identifying government use of AI for national security purposes is challenging, as such initiatives are often classified. Below, we examine AI-driven surveillance and security programs that governments may justify under national security exemptions, alongside cases where national security has been invoked to potentially avoid oversight and compliance requirements.

France

Since the 2015 Intelligence Act granted broad powers to conduct algorithmic analysis of large metadata sets, France has used AI to strengthen its counterterrorism efforts, with the goal of identifying potential terrorist activity. Over the past decade, authorities have expanded the scope of algorithmic systems to include monitoring of websites, messaging apps, and web searches for signs of extremist activity. The precise scope and safeguards of this experiment remain opaque, raising concerns about whether France is normalizing algorithmic surveillance under the banner of national security.

The country has continued to broaden these powers. The new Foreign Interference Law, adopted in 2024, authorizes the deployment of an experimental algorithm to “monitor suspicious activity” linked to foreign interference. The parliamentary intelligence committee described foreign interference as an “omnipresent and lasting threat,” justifying algorithmic surveillance as necessary to protect national security.

Border control agencies and travel authorities have also adopted AI-based risk assessments. Since 2016, French travel authorities have utilized Passenger Name Record (PNR) risk assessments provided by Idemia to flag travelers deemed suspicious based on their travel routes and/or payment methods. Idemia markets its advanced data analytics and AI capabilities as tools to “detect risks and threat patterns in real time from a huge and growing volume of passenger data.” While PNR risk assessment algorithms are prohibited from explicitly considering protected personal characteristics (see Articles 6, 7, and 13 of the PNR Directive), they nevertheless may still reproduce bias against marginalized groups, especially when training data is unrepresentative. Models trained on biased datasets risk forming incorrect correlations, generating false positives, and reinforcing existing forms of discrimination, as the sketch below illustrates.
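To make the mechanism concrete, here is a minimal, hypothetical sketch in Python using entirely synthetic data; it is not based on Idemia’s actual system, whose design is not public. The protected characteristic is never given to the model, yet a correlated proxy feature (a stylized “postal area” value) produces a markedly higher false-positive rate for one group:

```python
# Hypothetical sketch of proxy discrimination in a risk-scoring model.
# All data is synthetic; "postal_area" stands in for any feature that
# correlates with a protected characteristic the model never sees.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected group membership (never provided to the model).
group = rng.integers(0, 2, n)

# Proxy feature: postal areas cluster by group (residential segregation).
postal_area = np.where(group == 1,
                       rng.normal(1.0, 0.5, n),
                       rng.normal(0.0, 0.5, n))

# Ground-truth risk is independent of group, but historical labels
# over-flag group 1 (bias baked into the training data).
true_risk = rng.random(n) < 0.02
label = true_risk | ((group == 1) & (rng.random(n) < 0.03))

model = LogisticRegression().fit(postal_area.reshape(-1, 1), label)
flagged = model.predict_proba(postal_area.reshape(-1, 1))[:, 1] > 0.04

# False-positive rate per group: non-risky travelers wrongly flagged.
for g in (0, 1):
    innocent = (group == g) & ~true_risk
    print(f"group {g}: false-positive rate = {flagged[innocent].mean():.3f}")
```

Because the disparity emerges from biased labels and proxy features rather than any explicit use of protected attributes, audits must examine outcomes, such as per-group false-positive rates, not just the model’s input parameters.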

Another high-profile case arose during the 2024 Paris Olympics, when the government authorized AI-powered “smart cameras,” supplied by companies such as Wintics, Videtics, Orange Business, and ChapsVision, to monitor crowds and detect “abnormal behavior.” Officials justified the deployment as a matter of ensuring public security during the Olympics. Together with other civil society organizations, we criticized the shift, warning that the proposal was a “step towards the normalization of exceptional surveillance powers” and that such practices are prohibited under the EU AI Act. Although a national security argument was not invoked, the case illustrates how similar justifications could be used to defend future AI-driven surveillance initiatives.

Most recently, France established a National Institute for Assessing and Securing AI, explicitly linking AI development to national security. This shift signals a growing effort by governments to institutionalize the expertise of academics and researchers and orient scientific inquiry towards state security objectives.

Hungary

Earlier this year, Hungary criminalized participation in Pride events and deployed facial recognition technology against protesters, enabling real-time remote biometric identification in public spaces. The shift may be a direct breach of Article 5 of the EU AI Act.

ECNL, Liberties and the Hungarian Civil Liberties Union are urging the European Commission to launch infringement proceedings against Hungary for violating the EU AI Act and the EU Charter of Fundamental Rights. At this point, Hungary has not invoked the national security exemption to justify its use of facial recognition technology, but such practices can easily be justified under the pretext of national security.

Serbia

Serbia purchased facial recognition technology from the Swedish company Griffeye, which claims that its software is able to identify individuals based only on eye-related features, even when the eyes are not fully visible. Given Serbia’s past misuse of national security exemptions, there is a credible risk that authorities could invoke national security to justify mass surveillance based on Griffeye’s technology.

For example, Serbia implemented a law in 2021 that introduced algorithmic decision-making into its social services and benefits distribution. The law has been heavily criticized by civil society for its processing of personal data as well as its potential for discrimination against vulnerable communities, such as the Roma people. Following a series of complaints, the Office of the Commissioner for the Protection of Equality of the Republic of Serbia informed other national human rights institutions that it had requested access to the social services algorithm, but the government had rejected the request on national security grounds.

Spain

In 2017, Spain implemented an algorithmic decision-making system, BOSCO, to determine the distribution of social vouchers to help pay for electricity. Civio, a Spanish civil society organization, scrutinized the public benefits algorithm and requested access to its source code in 2018; the government denied the request, citing national security. In 2025, the Supreme Court of Spain sided with civil society, rejecting the national security justification and ordering the government to provide access to BOSCO’s source code. While both the Spanish and Serbian laws were implemented before the EU AI Act took effect, they nevertheless indicate how EU member states may abuse the national security exemption of the EU AI Act to avoid following the requirements for high-risk systems.

The EU AI Act, EU Charter of Fundamental Rights, and existing CJEU case law can serve as useful instruments to constrain government power and surveillance. As governments increasingly deploy AI for national security purposes, understanding how existing regulations and case law can apply to these technologies is critical. Below are some important questions for civil society to consider as they push for more accountability in the use of AI for national security purposes.

  • How far does existing case law extend? Landmark CJEU decisions—such as La Quadrature du Net, SpaceNet and Telekom Deutschland, and Privacy International—establish strict conditions on invoking national security to justify indiscriminate data retention and mass surveillance. But can these rulings, rooted in electronic communications, be extended to AI systems?
  • Who builds national security AI systems? States may either develop algorithmic decision-making systems internally or procure them from private firms. Under the EU AI Act, procurement of high-risk AI systems creates additional responsibilities for public officials regarding transparency, budget allocation, and fundamental rights impact assessments (Article 27 EU AI Act), among others.
  • Do AI regulatory oversight mechanisms apply? In principle, systems such as PNR algorithms and facial recognition technology should require fundamental rights impact assessments under Article 27 of the EU AI Act, and in certain cases could even fall under prohibited practices (Article 5 EU AI Act). Yet, if governments classify these systems as national security tools, they may circumvent these safeguards entirely (Article 2 EU AI Act).

To protect fundamental rights in AI-enabled national security initiatives, civil society may consider the following recommendations:

  1. Collaborate with investigative journalists to uncover information about AI-based national security initiatives. Key questions to consider include: Is information publicly available? What are the objectives of the AI system, and why is the government using it for national security? What are the spillover effects of such use, and how can the AI system be misused or abused? Which companies are involved in developing the AI system? What datasets are used to train the algorithms, and what input parameters (e.g., gender, ethnicity, postal code) are provided to the models? Which governmental authority is responsible for legal oversight of the system’s use?
  2. Pursue strategic litigation. Advocate for strict application of the conditions articulated in CJEU case law (e.g., LQDN, Telekom Deutschland and SpaceNet, and Privacy International) to national security justifications invoked under the EU AI Act. These conditions include prior judicial authorization, temporal limitations, and a requirement to demonstrate that the threat to national security is genuine and present, or at the very least foreseeable. Challenge cases where government authorities misuse the national security exemption to evade compliance with the EU AI Act’s obligations for high-risk AI systems (e.g., the Spanish Supreme Court ruling demanding access to the BOSCO source code).
  3. Engage institutional stakeholders, including counterterrorism agencies, intelligence services, public officials, and tech companies, wherever possible, to better understand their perspectives and institutional culture, and inform advocacy strategies.
  4. Build cross-border coalitions with civil society organizations across Europe and beyond to better understand emerging trends and potential developments at the regional and global levels, and to share best practices for action.

Article 2 of the EU AI Act effectively gives governments a free pass: by classifying AI systems as national security tools, they can bypass transparency, oversight, and fundamental rights safeguards. This loophole risks turning national security into a blanket excuse for mass surveillance and unchecked algorithmic decision-making. Closing it will require clear limits on the Article 2 exemption, strict application of CJEU standards to ensure claims of national security are genuine and proportionate, and active engagement by civil society and courts to hold states accountable. Without these measures, AI in the name of security will continue to expand behind a shield of secrecy and impunity, eroding both rights and democratic accountability.


