27 February 2025

With EU-US relations under strain as President Macron and Prime Minister Keir Starmer meet President Trump this week, the future of EU tech regulation is uncertain.
Following the AI Action Summit in Paris two weeks ago, the EU appeared on the verge of rolling back technology laws in the face of criticism from the US administration (notably from Vice President JD Vance).
The EU’s Competitiveness Compass, released days before the Summit, promised to reduce the regulatory burden on companies operating in the EU, and pledged to invest more in AI gigafactories. Yesterday that was followed by omnibus legislative proposals “to cut red tape and simplify [the] business environment,” including consolidation efforts around the Corporate Sustainability Reporting Directive.
Within hours of the AI Action Summit ending, the AI Liability Directive – which some MEPs had insisted was vital for responsible AI – was scrapped. And, in spite of DeepSeek’s R1 release suggesting that huge infrastructure projects may not be necessary to achieve high-powered models, both the EU and France announced they were investing hundreds of billions of euros in building gigafactories. Regulation seems to be out of fashion.
But is the EU really considering rolling back the AI Act before most of it is even applicable? Is the Act just wounded, or dead on arrival?
This post will look at:
The political headwinds
Legal developments around the Act
Member States' enforcement plans
1. The political headwinds
The AI Action Summit in Paris on 07-11 February came around three weeks after the release of the DeepSeek-R1 model, with many predicting that R1 would reverse the trend towards huge “gigafactories.”
However, the Summit (renamed “AI Action” from the “AI Safety” Summits of Bletchley Park and Seoul) saw countries re-committing to huge AI infrastructure projects, with France pledging around €100 billion and the EU €200 billion for AI investment, including gigafactories.
“We will simplify […] At the national and European scale, it is very clear that we have to resynchronize with the rest of the world.”
President Macron, 10 February 2025
The first legislative casualty was the AI Liability Directive, proposed on 28 September 2022 to “improve the functioning of the internal market by laying down uniform rules for certain aspects of non-contractual civil liability for damage caused with the involvement of AI systems.”
MEPs were still voicing support for the Directive just before the Summit; MEP Axel Voss has criticised its withdrawal, saying that the Commission has chosen “legal uncertainty, corporate power imbalances and a Wild West approach that only benefits Big Tech.” Just last week, the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) voted to continue their work on liability regulation for AI, despite the Commission’s intention to withdraw the proposal. The EU appears deeply divided over the future of AI regulation.
In a separate move, the UK renamed its AI Safety Institute to the AI Security Institute. This was seen as appeasing the US, whose own AI Safety Institute has come under scrutiny in the wake of proposed staff cuts to its parent agency, the National Institute of Standards and Technology (NIST).
European deregulation
Last week, the EU Commission published its simplification strategy: A simpler and faster Europe: Communication on implementation and simplification. The document did not directly address the AI Act, but it did suggest that the “digital package,” which includes the AI Act, may be subject to review.
The Commission’s communication states that a review of the Cybersecurity Act “will form part of the broader assessment, during the first year of the mandate, of whether the expanded digital acquis adequately reflects the needs and constraints of businesses such as SMEs and small midcaps, going beyond necessary guidance and standards that facilitate compliance” (p6, emphasis added) with the footnotes confirming that the "acquis" includes the AI Act.
In spite of all this, however, the AI Act was not mentioned during the AI Action Summit. Even US Vice-President JD Vance did not refer to the Act in his speech, focusing instead on the Digital Services Act. The EU Competitiveness Compass even cites the AI Act as one of its key enabling regulations for AI growth (p6).
EU AI industry
The test for the AI Act is likely to be how its effects on the EU AI industry are perceived. Big tech lobbied extremely hard during negotiations on the Act to reduce the level of regulation, and repeated its calls for the Act’s provisions to be revised before the Action Summit.
However, with rising tensions between the EU and US, it may be the opinions of European AI companies that have the greatest impact in the long term. Yet the fortunes of Europe’s tech “giants” remain intertwined with US big tech. US dominance of the AI supply chain, including server and compute leasing, was blamed for falls in EU AI-linked stocks on Tuesday, with Reuters reporting that the fall was linked to a “possible slowdown by Microsoft on data centre leasing.”
So for now, at least, the AI Act is not dead. However, some of the regulatory work going on to bring it into full application may hint that it is wounded.
2. Legal developments around the Act
The Act gives the Commission, standards agencies and national governments powers and responsibilities to put in place the implementing laws, regulations, technical standards and guidance needed to bring it into full application. Two key areas of legal development so far are the technical standards being developed by the EU’s main standards bodies, and recent guidelines from the Commission on prohibited practices and the definition of an AI system.
Standards
The work of the Joint Technical Committee (JTC 21) of the EU’s technology standards bodies CEN and CENELEC continues. JTC 21 is producing 10 technical standards in response to the Commission’s 2023 standardisation request, which was adopted to give providers a means of demonstrating compliance with the AI Act. Conformity with the standards will create a “presumption of conformity” with the Act.
The standards are due to be published in April 2025. With the US’s main standards body, NIST, facing staff cuts that are likely to hit its AI Safety Institute particularly hard (because the Institute is new, many of its staff are still in their probationary year), this may be an opportunity for the EU to set international standards for AI.
The standards are to some extent independent of the Act: since they are being prepared pursuant to the Commission’s 2023 standardisation request, they would survive major amendments to the Act. As a result, the technical standards could end up having the greatest international impact of any element of EU AI regulation. In short, the standards are very much alive and (soon to be) kicking.
Commission guidelines
On 04 February, the Commission released its first major guidance document under the Act: Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act). The guidelines are 140 pages in length, and set out in detail the types of systems that will be prohibited, as well as clarifying some of the Act’s inconsistencies around the definition of when a system is placed “on the market” (when the Act applies).
There is no indication in these guidelines that the Act will not be brought into full application, although the final paragraphs do allow for their revocation or revision in light of legal challenges or changes (para 434). Their sister guidelines on the definition of AI, however, are a little more circumspect.
On 06 February, the prohibited practices guidelines were followed by the Commission Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (AI Act). This 13-page document seeks to provide clarity on the definition of an “AI system” under the Act. However, its effect may be the opposite.
Art 3 of the Act gives a somewhat vague description of an “AI system”:
“‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
Recital 12, designed to clarify the definition, is also unclear; it states that the ability to “infer” is an essential requirement of an AI system, but does not provide clarity on what the term means. External sources, including Oracle’s Jeffrey Erickson, give an excellent definition of the term ‘infer,’ which is extremely helpful in elucidating the key characteristic of an AI system:
“AI inference is when an AI model that has been trained to see patterns in curated data sets begins to recognize those patterns in data it has never seen before.”
Jeffrey Erickson of Oracle, 02 April 2024
The Commission’s guidelines on the definition, however, seem to depart from this, suggesting that “a system’s ability to automatically learn, discover new patterns, or identify relationships in the data beyond what it was initially trained on is a facultative and thus not a decisive condition for determining whether the system qualifies as an AI system” (para 23).
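To see what Erickson’s definition means in practice, here is a minimal sketch – using scikit-learn and an invented pass/fail data set, neither of which appears in the guidelines or the Act – of a model that is trained once on curated data and then generates predictions from inputs it has never seen, without any further learning after deployment:

```python
# Minimal sketch of "inference" in Erickson's sense, using scikit-learn.
# The training data below is invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Curated training set: hours of study vs. pass (1) / fail (0).
X_train = [[1.0], [2.0], [3.0], [8.0], [9.0], [10.0]]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)   # learning happens here, before "deployment"

# Inference: the model generates an output for data it has never seen.
print(model.predict([[7.5]]))  # e.g. [1] - a prediction, not a rule a human wrote

# Note that the model learns nothing from this new input. Under para 23 of
# the guidelines, such post-deployment learning is "facultative": optional,
# and not decisive for whether this qualifies as an "AI system".
```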
Rather than giving a clear definition, the guidelines instead provide numerous examples of systems that are, and are not, an "AI system." Systems not meeting the definition may include those “for improving mathematical optimization” (paras 42-45), “basic data processing” (paras 46-47), “systems based on classic heuristics” (para 48) and “simple prediction systems” (paras 49-51).
This may seem logical, but the guidelines' preceding sections on what would be considered "AI systems" include those using reinforcement learning “to optimise personalised content recommendations in search engines” (para 37), “knowledge representation” and “search and optimisation methods” (both para 39).
The distinction between what does and does not fall within the Act is unclear, relying on AI’s ability “to handle complex relationships and patterns in data,” “generate more nuanced outputs” and offer “more sophisticated reasoning in structured environments” (para 59). Older systems are less likely to qualify, according to para 42. In other words, being shiny, new, complicated and sophisticated seem to be the key qualifiers for a system to be deemed an “AI system.”
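To illustrate the line the guidelines appear to draw, here is a hypothetical contrast – the task and data are invented, not taken from the guidelines – between a “classic heuristic” of the kind para 48 would likely place outside the definition, and a trained classifier that would likely fall within it:

```python
# Hypothetical contrast: a classic heuristic vs. a trained classifier on the
# same spam-filtering task. Examples and data are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline


def spam_heuristic(subject: str) -> bool:
    """Classic heuristic (para 48): a rule fixed in advance by a programmer.
    The system applies the rule as written; it does not infer anything."""
    return "free money" in subject.lower() or subject.isupper()


# A trained classifier, by contrast, infers its decision rule from data.
subjects = ["FREE MONEY NOW", "Meeting moved to 3pm",
            "free money inside!!!", "Quarterly report attached"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

spam_model = make_pipeline(CountVectorizer(), MultinomialNB())
spam_model.fit(subjects, labels)  # the rule is learned, not hand-written

print(spam_heuristic("Win free money today"))       # rule-based output: True
print(spam_model.predict(["free money waiting"]))   # inferred output, e.g. [1]
```

Both map an email subject to a spam verdict; the only difference is whether the decision rule was written by a human or inferred from data. That provenance test is arguably clearer than the “sophistication” framing of para 59.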
What is most remarkable about the guidelines is their apparent deference to deregulation. Paragraph 63 states:
“The vast majority of systems, even if they qualify as AI systems within the meaning of Article 3(1) AI Act, will not be subject to any regulatory requirements under the AI Act.”
The document seems to admit defeat before the first regulatory 'shots' are fired.
In a similar, if less dramatic, way, the guidelines also appear to soften the need for strict adherence, describing the elements of Art 3’s definition as matters to be “taken into account” when determining whether a system falls within scope, rather than as decisive legal criteria (para 61).
The guidelines published so far give a mixed picture of the potential for robust AI Act enforcement. On the one hand, the prohibited practices guidelines seem detailed and extensive, with a standard (if possibly pessimistic) acceptance that regulatory or judicial interventions may result in their amendment.
On the other hand, the guidelines on the definition of an AI system are internally inconsistent, could give numerous organisations grounds to argue that their systems fall outside the Act, and even suggest the Act’s definition itself should only be “taken into account” when determining whether the Act applies. Their assertion that “the vast majority of systems” will “not be subject to any regulatory requirements under the AI Act” may become a self-fulfilling prophecy.
3. Member States' enforcement plans
It is not just the EU Commission that is responsible for enforcing and providing guidance on the Act. The Act requires Member States to designate three types of bodies to enforce it and provide guidance on it, as follows:
Market Surveillance Authorities (Art 3(26))
Responsible for market surveillance (checking there are no products on the market that do not conform with acceptable standards) and conformity assessment (checking that individual products conform with acceptable standards)
Notifying Authorities (Art 3(19) and Art 28(1))
Responsible for designating the national conformity assessment bodies and notifying the EU Commission which authorities are designated
National Public Authorities (Art 77(2))
Responsible for the fundamental rights aspects of the Act in relation to high-risk systems (Chapter III), including privacy rights
Member States have broad discretion over which bodies to designate, and a single body can carry out more than one function.
According to the Future of Life Institute, of the 27 Member States required to designate authorities to enforce the Act, only one has clearly designated authorities for both market surveillance and fundamental rights enforcement. A further 14 have designated a fundamental rights authority but not a market surveillance authority, and another 10 have given some clarity on their market surveillance authority (but none on their fundamental rights body).
That leaves a staggering 12 Member States (the 27 minus the 15 with a clear fundamental rights body) with no clarity on their fundamental rights authority, and 16 (the 27 minus the 11 with a clear market surveillance body) with no clarity on their market surveillance authority.

For those wondering which is the star Member State to have given clarity on both, it’s Malta. (Though notably Malta has designated 10 authorities to carry out fundamental rights enforcement, rather than a single entity.)
For a full list of all Member States and their designated authorities, I highly commend the work of the Future of Life Institute who are collating and publishing the data.
Besides the number of States without clearly designated authorities, what is striking about the list is the number of countries that have designated multiple authorities for a single role.
The UK’s AI regulatory policy is currently being debated in Parliament, with no sign of omnibus AI legislation this year. Its approach continues to be to leave the regulation of AI to existing regulators under their existing legal mandates. Four regulators in particular – Ofcom (telecommunications and broadcasting), the Information Commissioner’s Office (data protection), the FCA (Financial Conduct Authority) and the CMA (Competition and Markets Authority) – are all part of the soon-to-be revitalised Digital Regulation Cooperation Forum. This confusing overlap of regulators has often been cited as a key weakness of the UK’s approach.
However, it seems that EU Member States are adopting a similar approach, even with the AI Act in place. Most countries have more than four authorities proposed as possible AI Act regulators, with only Spain creating a single, specialist agency for AI so far (the Agencia Española de Supervisión de la Inteligencia Artificial or AESIA).
How enforcement of the Act will look in future remains to be seen, with so many authorities in each Member State responsible for policing it. The first provisions of the Act became applicable over three weeks ago (02 February 2025), yet most Member States have still not clearly identified their designated authorities, let alone established their legal mandates and operating practices.
However, it should be added that the requirement for notified bodies to be fully operational, and the authorities’ legal enforcement powers, do not come into application until 02 August 2025 (Chapter III, Section 4), with Member States required to report their authorities’ resourcing and headcounts to the Commission by that date.
The next few months may see a scramble to build the AI Act’s enforcement network.
Conclusion: dead on arrival or just wounded?
To use a phrase reminiscent of the recent Commission guidelines on AI systems, the AI Act’s future is certainly uncertain. Deregulation is central to the EU’s current legislative agenda, pressure from the US and China is driving greater infrastructure investment, Member State enforcement looks patchy, and even the Commission’s guidelines appear to underplay the Act’s potential reach.
Even so, the lesson of China’s DeepSeek-R1 model is that tight regulation does not prevent innovation, and can even drive it.
Two of the major provisions of the Act are now applicable – the ban on prohibited AI practices and the requirement for AI literacy – although the penalties for failure to comply do not become enforceable until August. Organisations using AI should put in place AI literacy plans (Art 4) and ensure they cease any prohibited AI practices (Art 5). Certainly some Member States – most notably Malta and Spain – have designated well-resourced authorities to enforce the Act at the national level. Others, however, have yet to give any clarity on their national watchdogs.
With geopolitics shifting rapidly and dramatically in recent weeks, the future of the EU-US relationship is itself uncertain. Whether this will result in increased regulation of US tech giants operating in Europe, or deregulation in an attempt to improve the EU's own economic prospects, remains unclear. Both seem possible at time of publication.
Having survived reaction to the AI Action Summit, and still being lauded as an enabler of innovation, the Act may yet emerge from the EU's consolidation and deregulation agenda unaltered. Although its start has been unpromising, national enforcement may see a last-minute scramble to designate and resource authorities before the 02 August deadline. As with so many policy areas at present, the broader geopolitical context will determine Europe's course.
It is premature to declare the AI Act dead, though recent weeks seem to have left it wounded. Should the EU’s AI industry receive a strong boost, the Act could get a ‘shot in the arm,’ though political leaders may feel such a boost can only come at the cost of enforcing the Act. However, legislators would do well to learn the lesson of DeepSeek-R1: that regulation breeds innovation. Whether we will see a European DeepSeek model remains to be seen. For now, the AI Act remains bloodied but – hopefully – unbowed.