The first enforcement deadline of the EU’s sweeping AI Act lands today, February 2, 2025, and the mood across Europe’s startup ecosystem is less celebration than cold sweat. While Brussels has spent years crafting the world’s most comprehensive artificial intelligence regulation, the companies expected to comply are largely telling the same story: we’re not ready.

The regulation, which entered into force in August 2024, follows a phased rollout. Today marks the first real bite — the prohibition of AI systems deemed to pose “unacceptable risk.” That includes social scoring systems, real-time biometric surveillance in public spaces (with narrow exceptions), and AI designed to manipulate human behaviour in ways that cause harm. The broader obligations for high-risk AI systems kick in over the next 18 months, but the starting gun has been fired, and many founders are still lacing up their shoes.
What exactly changes today?
The February 2 deadline activates the AI Act’s Article 5 prohibitions. These are the bright lines — categories of AI use that the EU has decided are simply incompatible with fundamental rights. In practice, this means any company operating within the EU that deploys AI systems falling into these prohibited categories must cease doing so immediately or face fines of up to €35 million or 7% of global annual turnover, whichever is higher.
For most startups, the prohibited categories aren’t where they operate. Few European founders are building social scoring tools or subliminal manipulation engines. But the anxiety isn’t really about today’s deadline — it’s about the cascade of obligations coming next. The AI Act’s requirements for “high-risk” systems, which cover everything from hiring algorithms to credit scoring to medical diagnostic tools, begin applying from August 2026. And the general-purpose AI (GPAI) model obligations, relevant to any startup building on or fine-tuning foundation models, arrive in August 2025.
The timeline, in other words, is not generous. And the guidance has been slow to arrive.
Why startups say they’re behind
A survey by the European Digital SME Alliance found that more than 60% of small and medium-sized tech companies feel unprepared for the AI Act’s requirements. The reasons are structural, not just attitudinal. Three issues come up repeatedly in conversations with founders across the continent.
The guidance gap
The European AI Office, tasked with developing codes of practice and implementation guidance, has been working through its processes, but final documents remain in draft or consultation stages. For a startup trying to build a compliant product, the absence of concrete, finalised guidance creates a paralysing ambiguity. You can read the regulation’s text — all 180 recitals and 113 articles of it — but translating that into engineering decisions requires specificity that doesn’t yet exist.
The resource squeeze
Compliance is expensive. Large corporations can absorb the cost of dedicated legal teams, external consultants, and internal auditing infrastructure. A 15-person startup building an AI-powered recruitment tool cannot. Several founders have described the AI Act as a regulation designed with Big Tech in mind but applied to everyone. The Act does include provisions for regulatory sandboxes and lighter obligations for SMEs, but these have been slow to materialise in practice.
The classification puzzle
Perhaps the most common complaint is uncertainty about whether a given product even falls under the high-risk category. The AI Act’s Annex III lists high-risk use cases, but the boundaries are subject to interpretation. A startup building an AI tool that assists — but doesn’t replace — human decision-making in healthcare, for instance, might or might not be classified as high-risk depending on how the tool is marketed, deployed, and integrated. That ambiguity has a chilling effect on product development.
The psychological weight of regulatory uncertainty
What’s less discussed but equally important is the cognitive toll this takes on founding teams. Regulatory uncertainty doesn’t just slow down compliance efforts — it warps decision-making. Research in organisational psychology consistently shows that ambiguity aversion is one of the strongest forces shaping entrepreneurial behaviour. When the rules are unclear, founders don’t just proceed cautiously — they sometimes freeze entirely, delaying product launches, pivoting away from promising but potentially regulated use cases, or relocating development to jurisdictions with lighter oversight.
This is the quiet cost of the AI Act’s phased rollout. Each deadline creates a new wave of urgency without always providing the clarity needed to act on it. The intention behind phased implementation — giving companies time to adapt — is sound. But without the accompanying infrastructure of guidance, sandbox access, and affordable compliance support, the phases become a series of stress tests rather than stepping stones.
What the smartest teams are doing now
Not every startup is frozen. The ones adapting most effectively share a few common traits. They’ve invested early in understanding which risk category their products fall into, even if that classification isn’t final. They’ve built documentation practices — model cards, data provenance records, risk assessments — into their development workflows from the start, rather than treating compliance as a retrofit. And they’ve engaged with the regulatory sandbox programmes that are beginning to open across member states, including in Spain, the Netherlands, and France.
The most pragmatic founders are also treating the AI Act not just as a compliance burden but as a competitive moat. If you can demonstrate robust AI governance early, you signal trustworthiness to enterprise customers, particularly in sectors like healthcare, finance, and public services where procurement teams are already asking about AI Act compliance. The regulation, in this framing, becomes a barrier to entry that favours the prepared.
What comes next
Today’s enforcement milestone is symbolic as much as it is practical. The real test arrives over the next 18 months as the high-risk and GPAI obligations come online. The European AI Office is expected to finalise its first codes of practice for general-purpose AI models by mid-2025, which should provide much-needed clarity for startups building on top of foundation models.
Meanwhile, member states are establishing their own national AI authorities, which will serve as the frontline enforcement bodies. The quality and consistency of that enforcement — and whether it treats a 10-person startup differently from a trillion-dollar platform — will ultimately determine whether the AI Act becomes a workable framework or an innovation tax.
For European founders working in AI, the message today is uncomfortable but clear: the regulatory future is no longer theoretical. It’s here, it’s binding, and the clock is running. The startups that will thrive aren’t necessarily the ones with the biggest legal budgets — they’re the ones that treated compliance as a design principle rather than an afterthought. That shift in mindset, more than any single article of the regulation, is what separates readiness from scramble.
Feature image by Kindel Media on Pexels