
Michael McNamara, Co-Rapporteur for the Digital Omnibus on AI, European Parliament. © European Union 2026 – Source: EP
The European Union’s plan to delay key provisions of its landmark AI Act risks doing more than buying time for industry and regulators. By pushing back rules for high-risk systems while keeping the law non-retroactive, the bloc could leave some of the most sensitive AI applications permanently outside its oversight.
With the original August 2026 deadline approaching, positions adopted by the European Parliament and Council as part of the AI Omnibus would postpone core obligations to December 2027 or even August 2028. The move is intended to give companies and regulators more time to prepare, but critics argue it risks weakening the law’s impact at a critical moment.
Beyond the delay, the parliament also introduces other key proposals, including a ban on nudifier apps and a renewed push to address overlaps between the AI Act and sector-specific legislation, a debate that could exempt some systems, such as those used in medical devices, toys, or connected cars, from the framework.
A structural gap in the law
At the center of the issue is how the AI Act applies over time. Because the rules are not retroactive under Article 111 of the law, systems placed on the market before the new deadlines are not required to comply unless they are significantly modified.
In practice, that means systems deployed before the new dates would only fall under the Act if they are substantially altered, or from December 31, 2030, if intended for use by public authorities. Speaking with Tech Policy Press, MEP Sergey Lagodinsky of Germany’s Greens described the provision as “a loophole” and “a weak spot” in the law.
Legal experts say the implications are far-reaching. Laura Caroli, former co-negotiator of the AI Act, pointed to AI used in hiring systems, explicitly classified as high-risk under the AI Act, as an example. If such a system is placed on the market before December 2, 2027, “it may remain outside the AI Act indefinitely, unless it is substantially altered after that date.”
Bram Vranken, researcher and campaigner at Corporate Europe Observatory (CEO), made a similar argument: “a large part of high-risk AI systems that have been placed on the market before December 2027 will never have to comply with the rules.”
Race to market before 2027?
The combination of delay and non-retroactivity may also reshape market behavior. By postponing obligations while exempting existing systems, the framework creates a clear incentive for companies to move early, particularly for the most burdensome categories of AI. MEP Lagodinsky warned that the timeline creates “an incentive to put things on the market before the Act enters into force, and especially put on the market AI systems which are high risk or the more risky ones, because those are the ones that have most obligations.”
CEO’s Vranken echoed that concern. “Some companies might abuse this timeline and quickly push risky AI systems onto the market without having to comply with the Act,” he told Tech Policy Press, adding that this would save companies the compliance costs and could create a race to market before late 2027.
Lobbying pressure and political trade-offs
The proposed delay comes amid broader tensions over the direction of EU digital regulation. New analysis from the Corporate Europe Observatory and LobbyControl shows that 69% of Commission meetings in 2025 were with business groups and only 16% with NGOs. The report concludes that “the Omnibuses are born out of corporate lobby groups’ wish-lists.”
Vranken pointed to Commission consultations on AI policy heavily dominated by industry participants, raising concerns about the balance of input shaping the revisions. In one case, there were only “11 or 12 participants, all from industry, except for one civil society organization,” he told Tech Policy Press, describing it as a “pretty stunning case” of privileged access.
According to the report, “the organizers resorted to electronic votes, or so-called ‘slido polls,’ during a Reality Check to ask which provisions were ‘unnecessarily burdensome and costly for industry,’ leaving no room for those not present to weigh in.” “It is striking to see the Commission use such a basic tactic, essentially asking a room full of industry representatives which rules they would like weakened,” noted the Corporate Europe Observatory and LobbyControl.
Sectoral laws vs. AI Act
Beyond timing, the Omnibus has reopened a fundamental debate about how AI should be regulated in Europe. Many industry groups argue that products already regulated under sector-specific laws (such as medical devices, machinery, toys, radio equipment, or transport) should not also have to comply with a second layer of rules under the AI Act.
Irish MEP Michael McNamara, the Parliament’s lead negotiator on the AI Omnibus, acknowledged in an interview with Tech Policy Press that overlapping rules can be difficult for companies to manage. However, he warned that shifting AI governance into sectoral laws could “delay the implementation of harmonized standards in those sectors” and end up being deregulatory rather than simplifying.
MEP Lagodinsky was more direct, calling such proposals “very dangerous” because existing product laws do not include AI-specific safeguards: “it’s really a way to exempt sectoral legislation from the scope of the AI Act.” He explained that sectoral rules could even make the regulatory landscape more complex: “if you have sectoral legislation, instead of horizontal one, this creates uncertainty for companies because if the operator crosses different sectors they are subject to different requirements.”
Caroli illustrated the risks. “A chatbot in a doll could tell a child to do something harmful, and nobody would be able to hold the manufacturer accountable” until the underlying product safety law is updated to include AI-specific rules, she said.
Nudifier ban faces enforcement challenge
Not all elements of the Omnibus are about loosening rules. Both Parliament and the Council have backed a new ban on so-called nudifier systems, AI tools used to create explicit or intimate images of a person. However, the proposal only applies to non-consensual content, meaning enforcement will depend on whether consent can be established.
Recent analysis by the Centre for Democracy and Technology Europe (CDT Europe) highlights several limitations. While the ban is an important step, researchers Marie Seck and Magdalena Maier argue that the current proposals rely on providers implementing “effective” safeguards but offer little clarity on what that threshold actually means in practice.
Crucially, because the ban only targets non-consensual content, it introduces a difficult requirement: verifying consent. That raises both practical and ethical concerns. As CDT Europe notes, consent verification mechanisms could themselves create privacy risks, particularly for vulnerable groups, while still failing to fully prevent harmful content.
MEP Michael McNamara said that, if agreed in the final text, the nudifier ban could start applying almost immediately after publication, potentially as early as July. However, Caroli noted that the Council is currently proposing a later date, around February 2027, meaning the ban could take effect only after a further delay. She warned that this would leave “almost one year of harm” before the prohibition applies.
A narrow window for agreement
Negotiators now face a tight timeline. To ensure the delay takes legal effect before the original August 2, 2026, deadline, a political agreement must be reached in the coming months, likely before June.
The outcome will determine whether the EU’s AI Act can still function as a comprehensive framework for high-risk systems, or whether, by the time its strictest rules apply, a significant part of the market will already sit beyond its reach.














