The European Commission wants the automated fortress it is building to protect the EU’s external borders with “intelligent” systems to be environmentally sustainable. Or so it claims.
For years, Horizon-funded research and innovation projects in border security have promised the use of “less energy-intensive algorithms” and “frugal AI” models in the surveillance systems they are designing. The aim is to fulfill one of the bloc’s main tenets: applying Artificial Intelligence to “address climate change” — even as it becomes ever more ubiquitous in the attempt to “solve” migration and human mobility.
In written responses provided to AlgorithmWatch, however, the Commission acknowledges lacking the tools to conduct rigorous, evidence-based analyses of the actual environmental impact of AI — including, more specifically, in border surveillance.
“At present, there is no official definition of ‘frugal AI’ or ‘energy-efficient AI’ at the EU level”, the Commission wrote in an email exchange. “These are broad concepts used to signal a general orientation towards reducing the energy and computational resources required to develop and deploy AI systems.”
The Commission also admitted that it does not know how to scientifically measure or compare the promised “reduction” in carbon emissions that would result from the adoption of the researched systems, once designed and operational. “There are no standardized methods to measure energy consumption nor emissions of AI”, a spokesperson added in reply to our detailed questions.
Is this border AI tool good for the planet? Let its designers decide!
This means that the promises made in EU-funded projects such as SafeTravellers, CarMen, PopEye and BorderForce — central to the future of border management and surveillance capabilities in Europe — will not have to respect specific hard-law obligations for their AI-related outputs. Each project will instead establish its own assessment standards.
“In Horizon Europe calls there are no fixed reference values or binding quantitative targets for carbon emission reductions”, the Commission wrote, and even “where quantitative indicators exist, they are self-defined at project level rather than based on a common EU-wide standard.”
But do such indicators actually exist in practice? In an access-to-documents request, we asked the Research Executive Agency (REA), the arm of the Commission that manages Horizon-funded projects, to disclose all documents related to such indicators. Initially, we received only a single deliverable for a single project (BorderForce), and in a heavily redacted form. It offered no insight into the methods or criteria applied by project members, and contained only extremely vague promises and terminology.


When pressed for further disclosures, REA Director Marc Tachelet did permit the release of some additional PopEye and CarMen documents. These, however, provide no details on how to achieve the promised environmental goals — and barely mention the subject at all.
This should come as little surprise. Tachelet further clarified that the sustainability of their technological solutions “is to be considered as an optional contribution, and not a mandatory requirement”. In fact, for both projects “environmental impact and energy efficiency are not explicitly mentioned and thus are of limited relevance”, he argued. Even for BorderForce, where “energy efficiency features were more prominent”, “low-energy systems” are only mentioned as “examples, rather than in a prescriptive way”, he added.
If, as EU Commission President Ursula von der Leyen recently claimed, “only what gets measured gets done”, then the many projects funded with millions of EU taxpayers’ money to develop “intelligent” border security tools are unlikely to deliver much for the environment.
Empty promises, real consequences
And yet, it would be important to properly understand the environmental impact of the AI systems developed in EU-funded border security projects. Some, in fact, are poised to profoundly reshape the way human mobility is managed in Europe.
CarMen and PopEye, for example, are working on new “biometrics-on-the-move” features to further automate checks at EU borders, enhancing the capabilities of systems such as the recently launched EES (Entry-Exit System). Both promise to “align with the European Green Deal”, claiming that improved border control efficiency will reduce CO₂ emissions.
It is not clear how this would be achieved, as public-facing project material only vaguely refers to “minimizing reliance on large, energy-intensive data centers” and controlling “the cost of processing biometrics on-the-move applying frugal AI approaches”.
“Distributed” (SafeTravellers) or “decentralized” (PopEye) approaches are touted but never clearly described. Even the BorderForce project — envisioned as a model border surveillance system to be adapted and replicated all over the EU (and beyond) — is stated to be environmentally sustainable by the circular claim that it will “enhance sustainability of border infrastructure and resources”. Why? Because it will.
Unlimited range or saving the planet? How “green” border AI harms the environment
The techno-solutionist focus on the efficiency of border surveillance systems, rather than on their ubiquity, and on mere carbon emissions rather than the systemic effects deriving from the full lifecycle of AI, raises a fundamental question: can “greener” border AI ultimately result in a larger — rather than smaller — environmental footprint?
The rebound effect, or, more prosaically, the boomerang effect, is all too common across technology development. It describes the phenomenon that even when tech becomes more efficient, it very often — and perhaps paradoxically — still drives increased overall energy consumption. This is a well-documented problem in AI and sustainability research, too. As Alexandra Sasha Luccioni, Emma Strubell, and Kate Crawford warn in a recent paper on calculating AI’s climate footprint, “Effective climate action requires grappling with how these systems reshape markets, cultural norms, and policy priorities” instead.
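The dynamic can be sketched with a toy calculation. All numbers below are invented purely for illustration and are not drawn from any of the projects or studies discussed:

```python
# Hypothetical illustration of the rebound effect: a per-task efficiency
# gain can still yield higher total consumption if usage grows enough.
# All figures are made up for the sake of the example.

def total_energy(kwh_per_check: float, checks: int) -> float:
    """Total consumption = per-check energy cost x number of checks run."""
    return kwh_per_check * checks

# Baseline: 1,000 automated border checks at 2.0 kWh each.
baseline = total_energy(2.0, 1_000)          # 2,000 kWh

# A "frugal" model halves the per-check cost, but cheaper checks invite
# wider deployment, so the number of checks triples.
after_gain = total_energy(1.0, 3_000)        # 3,000 kWh

# Despite a 50% efficiency improvement, total consumption rises by 50%.
assert after_gain > baseline
```

The point is not the specific figures but the structure: as long as efficiency gains are used to expand surveillance rather than cap it, “optimized” energy use and growing total consumption are perfectly compatible.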
An example of this rebound effect comes from Frontex-funded research, and in particular the SEMS4USV project. In their attempt to develop “a fossil-free Unmanned Marine Surface Vehicle (USV) integrating electric propulsion, batteries, photovoltaic panels, cameras, and navigation sensors”, project members highlight that a crucial trade-off must be considered: “a higher energy saving is done at a cost of a higher mission time”.
If the goal, as stated in the project material, is to achieve “extended”, ideally “unlimited”, surveillance operations, then it becomes evident that “energy management strategies” are designed only to “optimize” — rather than reduce — energy consumption.
The same logic is applied in the iS4ASV project, also funded by Frontex, as it is “expected to result in the development of an autonomous unmanned surface vehicle prototype equipped with a smart energy management system to extend mission times for border surveillance”.
This is all the more concerning in the context of EU Commission President von der Leyen’s ambition to make the EU “one of the leading AI continents”, one in which “every company, not only the big players, can access the computing power it needs”. This entails “embracing a way of life where AI is everywhere”, as von der Leyen put it at the 2025 AI Action Summit in Paris. Consequently, despite the rhetoric about sustainability, we can expect to see rebound effects and growing environmental damage all across the continent.
Surveillance first, sustainability later: Why the AI Act won’t help
Even if we cannot measure it properly due to a lack of publicly available information, all estimates and analysts agree: the environmental impact of AI has been growing steadily, and is set to rise significantly further.
The Commission is confident that the AI Act, the seminal risk-based framework regulating AI uses in the EU, will help. In its responses, it pointed to Article 40(2), according to which harmonized standards for high-risk AI systems include “reporting and documentation processes to improve AI systems’ resource performance, such as reducing the high-risk AI system’s consumption of energy”. It also drew attention to Article 95(2), which details “voluntary codes of conduct” to assess and mitigate AI systems’ impact on environmental sustainability, “including energy-efficient programming and techniques for efficient design, training and use”.
Experts are, however, skeptical. To Benedetta Brevini, associate professor at the University of Sydney and author of ‘Is AI good for the Planet?’, the AI Act is a missed opportunity. “While the AI Act sets out a clear set of risk-based rules for AI developers and deployers regarding specific uses of AI”, she told AlgorithmWatch in an interview, “it completely failed to address the problems of the environmental harms of AI, reducing AI’s environmental impact strictly to voluntary codes of conduct”.
For example, she noted, the final version of the law removed a fundamental reference to the “measurement of reasonably foreseeable adverse impacts on the environment of putting the system into use” for high-risk systems — such as BorderForce, once operational.
Brevini is far from optimistic: “With the so-called ‘Washington Effect’ heavily influencing EU policy-making after the Trump Administration took over, it is difficult to imagine a prompt alignment by the European Commission towards environmental sustainability”. The current drive towards “simplification” — or, more precisely, deregulation — of both environmental and AI rules seems to confirm Brevini’s concerns.
Sara Garsia, a doctoral researcher at the Centre for IT & IP Law, offers a more nuanced perspective: “We’re currently in a transition phase — shifting from promises to implementation”, she stated in a video chat. “The regulation has many shortcomings, but Article 40 does call for standardization so that providers can ensure compliance, including improving the energy efficiency of AI models”.
Other initiatives are underway as well, she noted. “The European standardization body, CEN, is also working in this direction. There will be some operationalization of these promises, though I fear it won’t take the entire value chain of AI into account”.
Broader, more fundamental issues must be considered, however: “If we want to achieve an ecological transition while also automating everything, we need to ask whether that’s actually feasible or just a slogan — whether there’s a real trade-off between one goal and the other”. Currently, “in EU policy, the digital transition clearly takes precedence over the ecological one”, Garsia observed.
A matter of political priorities
Ultimately, this is a matter of political priorities, she concluded: “What kind of society do we want? One where we have blanket surveillance across all borders? Even if AI allowed us to do that in the most ‘sustainable’ way possible — which is far from proven — is that the kind of society we want?”
The answer matters far more than the technology. Because a surveillance system that’s “green” is still a surveillance system — and one that’s currently expanding faster than anyone can measure its true cost.
Read more on our policy & advocacy work on ADM and People on the Move.