François Souty, PhD
Lecturer in geopolitics at Excelia Business School, La Rochelle and Paris-Cachan
Lecturer in EU competition law and policy at the Faculty of Law of Nantes

By François Souty
In the fall of 2025 and early 2026, amid increased global technological competition between established and emerging powers, the United States unambiguously stated its opposition to the establishment of a centralized global governance regime for artificial intelligence (AI), a position that marked a significant shift from the cautious stance the United States had until then adopted in international debates on the regulation of this strategic technology. This stance was emphatically reaffirmed on February 20, 2026 by Michael Kratsios, White House Advisor for Science and Technology and Director of the Office of Science and Technology Policy, who, at the conclusion of the India AI Impact Summit 2026 held in New Delhi, stated that the United States « totally rejects global AI governance » and that « the adoption of AI cannot lead to a better future if it is subject to bureaucracy and centralized control ».[1] The US administration sees a risk that supranational normative frameworks will stifle innovation and hinder competitiveness, preferring to promote « AI sovereignty » based on national autonomy, cooperation between states sharing similar values and the integration of American « best-in-class » technologies into partners’ digital architectures.[2] The statement is part of a broader sequence of positions opposed to multilateral processes aimed at establishing binding international rules or standards — from refusing to endorse certain global ethical agreements at previous international summits to opposing a « centralized roadmap » for AI governance in the UN General Assembly.[3]
This American inflection must be analysed in the light of a global reorganization of technological and security standards at the beginning of the twenty-first century, where the economic, strategic and cognitive power associated with the mastery of AI systems converges with issues of competition, sovereignty, national security and international leadership. In this perspective, AI governance is itself becoming a structuring geopolitical issue — confronting divergent normative models such as that of the European Union, which in 2024 adopted an ambitious legal framework aimed at framing the risks of AI in a horizontal manner, proportionate to the risk levels of applications (AI Act),[4] and the more fragmented approaches observed in the United States, China, Japan, South Korea or Taiwan. The underlying issues are not limited to the reduction of societal risks or legal compliance mechanisms, but extend to enforcement procedures, independent audit and certification mechanisms, gradients of legal liability between innovators and operators, as well as the ability of States to define normative rules of the game that give them a competitive advantage on the international scene. This debate thus positions AI at the heart of contemporary markets and international relations, inviting a reflection that rigorously articulates the political, legal and strategic dimensions of technological regulation in a multipolar world undergoing recomposition.
I. The European Union and the construction of a normative model of artificial intelligence
The adoption in 2024 of the European regulation on artificial intelligence, commonly known as the AI Act, is the culmination of a process that began in 2018 with the European Commission’s communication « Artificial Intelligence for Europe » and was consolidated by the White Paper on Artificial Intelligence of February 2020. In the latter document, the Commission already stated that the Union should promote « trustworthy artificial intelligence », based on the Union’s values and respectful of fundamental rights.[5] This expression, « trustworthy AI », taken from the work of the High-Level Expert Group on AI established in 2018,[6] revealed a clear normative orientation: AI should not only be competitive, but also consistent with a structuring ethical and legal foundation.
a. The political and strategic context for the adoption of the EU AI Act
The proposal for a regulation presented by the Commission on 21 April 2021 was part of a two-pronged dynamic. On the one hand, it addressed internal concerns relating to the protection of fundamental rights, algorithmic non-discrimination and the safety of products incorporating AI systems. On the other hand, it was part of a more ambitious external strategy, explained in the Communication on the « Digital Compass » of March 2021, aimed at positioning the Union as a normative power in the regulation of emerging technologies.[7] Following the precedent set by the General Data Protection Regulation, the Union intended to assert its ability to set standards that could have a knock-on effect beyond its borders.
The debates in the European Parliament and the Council gradually strengthened the ambition of the text, in particular by extending the list of prohibited practices and specifying the regime applicable to so-called « high-risk » systems. The political compromise reached in December 2023, followed by the formal adoption of the regulation in 2024, enshrined a horizontal legal framework applicable to the entire internal market, based on Article 114 of the Treaty on the Functioning of the European Union, on the harmonisation of internal market rules.[8] This choice of legal basis underlines that the AI Act is designed not as a sectoral instrument, but as an instrument for the structural regulation of the European digital market.
b. The legal architecture: regulation based on risk levels
The AI Act is based on a graduated normative architecture, built on a classification of artificial intelligence systems according to their level of risk. Recital 5 of the Regulation states that ‘AI systems may generate risks and undermine public interests and fundamental rights protected by Union law’ and that an approach proportionate to the risks should therefore be adopted.[9]
The text thus distinguishes four main categories. AI practices deemed unacceptable are outright prohibited. Article 5 prohibits in particular systems exploiting vulnerabilities related to age or disability, generalised social scoring by public authorities, as well as certain forms of real-time biometric recognition in public spaces, subject to strictly regulated exceptions.[10] This ban reflects a clear political will to draw normative red lines, in particular with regard to security or police uses that are likely to infringe on public freedoms.
High-risk AI systems, as defined in Articles 6 et seq., are permitted but subject to a set of substantial requirements: establishment of a risk management system, data governance, technical documentation, registration in a European database, appropriate human oversight and robustness, accuracy and cybersecurity requirements.[11] The objective is to integrate legal compliance into the very life cycle of the system, according to a logic of « compliance by design ».
Systems with limited risk are subject to transparency obligations, in particular when users interact with conversational systems or are exposed to artificially generated content. Finally, systems with minimal risk remain in principle free of any specific constraints, which reveals the European legislator’s concern not to hinder innovation in a disproportionate way.
This architecture reveals a compromise between normative ambition and economic realism. The Union is not seeking to ban AI as such, but to regulate its most sensitive uses, by articulating the protection of fundamental rights with the integration of the internal market.
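To make this graduated logic more concrete, the sketch below shows how a compliance team might encode the four tiers and their associated duties as a simple lookup structure. It is a minimal illustration in Python: the tier names follow the categories described above, but the obligation lists are simplified paraphrases of the Regulation’s requirements, not an authoritative legal mapping.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice (Art. 5)"
    HIGH = "high-risk system (Art. 6 et seq.)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk (no specific constraints)"

# Simplified obligation map paraphrasing the requirements cited in the text;
# not an exhaustive or authoritative statement of the Regulation.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["may not be placed on the EU market"],
    RiskTier.HIGH: [
        "risk management system",
        "data governance",
        "technical documentation",
        "registration in the EU database",
        "human oversight",
        "robustness, accuracy and cybersecurity",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with an AI system or AI-generated content"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the simplified list of duties attached to a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for duty in obligations_for(RiskTier.HIGH):
        print("-", duty)
```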
c. Compliance and enforcement mechanisms
One of the most significant aspects of the AI Act is its enforcement mechanisms. The regulation adopts a structure inspired by European product law, combining self-assessment of conformity, intervention by notified bodies and supervision by the competent national authorities.
Suppliers of high-risk systems must carry out a conformity assessment before placing them on the market, including verification of technical requirements and the preparation of detailed documentation. In some cases, in particular where harmonised standards are lacking, the intervention of a third-party body is required. Member States must designate national market surveillance authorities to monitor compliance with obligations and to impose corrective measures where necessary.[12]
The regulation also provides for the creation of a European Artificial Intelligence Board, intended to promote coordination between national authorities and the Commission, based on the classic model existing in other areas of shared competence between the Commission and the Member States. This institutional dimension reflects the desire to prevent differences of interpretation that could fragment the internal market.
The sanctions regime is particularly dissuasive. Article 99 provides for administrative fines of up to €35 million or 7% of the total worldwide annual turnover, whichever is greater, for certain serious violations.[13] This scale of sanctions, comparable to that established by the GDPR, reflects the ambition to ensure the real effectiveness of the obligations imposed.
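The « whichever is greater » rule described above reduces to a single comparison. The following minimal sketch, for a hypothetical undertaking, simply illustrates the arithmetic of the Article 99 ceiling; how a fine would actually be set in practice depends on many factors not modelled here.

```python
def max_ai_act_fine(worldwide_turnover_eur: float) -> float:
    """Ceiling of an Article 99 fine for the most serious violations:
    the greater of EUR 35 million or 7% of total worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * worldwide_turnover_eur)

# Hypothetical undertaking with EUR 2 billion in worldwide annual turnover:
print(f"{max_ai_act_fine(2_000_000_000):,.0f} EUR")  # 140,000,000 EUR: the 7% branch applies
```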
The AI Act also introduces specific obligations for general-purpose AI models, including transparency, technical documentation, and systemic risk assessment for the most powerful models. This extension of the scope of regulation to actors developing foundation models marks a significant evolution of European law towards taking into account the underlying technological architectures.
d. The European Union as a standard-setting power
Beyond its legal mechanisms, the AI Act must be understood as an instrument of normative projection. The literature on international relations has long identified the Union as a « normative power » capable of influencing international standards through the attractive force of its internal market. Ian Manners theorized this capacity as constitutive of the external identity of the Union.[14]
In the field of AI, the challenge is twofold. On the one hand, the Union seeks to avoid excessive technological dependence on foreign players, particularly the United States and China. On the other hand, it aims to structure an international legal environment in which its companies can operate according to predictable standards that protect fundamental rights.
The adoption of the AI Act was hailed by several European leaders as a founding moment. Commission President Ursula von der Leyen presented the text as establishing « the world’s first comprehensive rules for trustworthy AI ».[15] This rhetoric underlines the symbolic and strategic dimension of regulation: it is not just a question of regulating a market, but of proposing a model.
However, this normative ambition unfolds in an international context marked by divergent approaches. Where the Union favours ex ante, structured and centralised regulation, other powers emphasise flexibility, self-regulation or national sovereignty. This divergence prepares the ground for a normative fragmentation that lies at the heart of the central problem addressed in this article.
It is striking that no European company is among the world’s top fifteen companies in artificial intelligence, either in terms of market valuation or AI-related revenue.[16] The leading European player, according to market rankings and recent financial analysis, is SAP, which is positioned around 20th in the world, with an AI business integrated mainly in its business and cloud management solutions.[17] Europe’s relative backwardness can be explained by several factors: on the one hand, a historical fragmentation of the single market and a low concentration of players capable of investing massively in large-scale AI research and development; on the other hand, an entrepreneurial culture less oriented towards high-value start-ups and a marked dependence on public capital or limited industrial partnerships. Regulatory constraints and the priority given to the protection of fundamental rights may also have slowed down the aggressive commercial development of disruptive technologies. This situation helps to explain why the European Union has chosen to deploy a proactive normative strategy: the AI Act and the associated initiatives aim not only to regulate risks, but also to project a global normative influence, to create a competitive advantage based on trust and legal legitimacy, and to strengthen the role of European companies in a context where they remain structurally disadvantaged against the American and Chinese giants. Three other European regulatory blocks are also at work: competition law and policy, but also the new regulatory block constituted by the Digital Markets Act,[18] and the regulatory package relating to cybersecurity.[19] The European approach thus appears to be a means of compensating for a lack of technological critical mass through normative and regulatory leadership, likely to shape international standards in the long term.
II. The United States between sectoral regulation, power strategy and rejection of centralized global governance
Unlike the European Union, the United States has not opted for a horizontal and binding legislative instrument applicable to all artificial intelligence systems. Its approach is based on a combination of sector-specific regulation, non-binding risk management frameworks and executive instruments, articulated with an avowed strategy of technological leadership. The analysis of the American model thus calls for a successive examination of the internal normative architecture, the recent instruments mobilized by the federal executive, and finally the explicitly geopolitical dimension of the rejection of a centralized global governance of AI.
a. A tradition of sectoral regulation and soft law
The American regulation of artificial intelligence is part of a legal tradition marked by the pre-eminence of sectoral law and the economic analysis of law. Unlike the European approach based on ex ante harmonisation, the US federal system relies on the intervention of specialised agencies – such as the Federal Trade Commission, the Food and Drug Administration or the Department of Transportation – competent to regulate specific uses of AI in their respective fields.
The Federal Trade Commission affirmed in 2020 that algorithmic systems fall under its jurisdiction when they give rise to unfair or deceptive practices within the meaning of the Federal Trade Commission Act.[20] In several public statements, the agency has stressed that the use of AI does not exempt companies from their obligations in terms of consumer protection or non-discrimination. However, this approach remains indirect: it does not create ex ante technical obligations comparable to those provided for by the European AI Act, but sanctions a posteriori the conduct deemed illegal.
In the financial sector, the Securities and Exchange Commission has looked into the uses of AI in asset management and potential conflicts of interest, while the Consumer Financial Protection Bureau has warned of the risks of algorithmic discrimination in the granting of credit. These interventions illustrate functional regulation, based on the application of pre-existing standards to new technologies.
At the same time, the United States has prioritized the development of voluntary risk management frameworks. In January 2023, the National Institute of Standards and Technology published the AI Risk Management Framework, a non-binding document intended to help organizations identify, assess and mitigate risks related to AI systems.[21] The framework is based on four main functions — governing, mapping, measuring and managing — and emphasizes flexibility and adaptability, rather than imposing uniform legal requirements.
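As a purely illustrative sketch of how an organization might track the four functions named above, the snippet below builds a small self-assessment profile. The checklist items are invented examples, not content of the NIST framework itself, which is voluntary and non-prescriptive.

```python
from dataclasses import dataclass, field

@dataclass
class RmfFunction:
    """One of the AI RMF core functions, with locally chosen checklist items."""
    name: str
    items: dict[str, bool] = field(default_factory=dict)

    def completion(self) -> float:
        """Share of checklist items currently satisfied."""
        return sum(self.items.values()) / len(self.items) if self.items else 0.0

# Hypothetical checklist items; the framework leaves their choice to each organization.
profile = [
    RmfFunction("Govern", {"AI policy approved": True, "roles and responsibilities assigned": False}),
    RmfFunction("Map", {"AI use cases inventoried": True}),
    RmfFunction("Measure", {"bias and robustness metrics defined": False}),
    RmfFunction("Manage", {"incident response plan in place": False}),
]

for fn in profile:
    print(f"{fn.name:<8} {fn.completion():.0%}")
```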
This predominance of soft law reflects a legal culture attached to innovation and a posteriori responsibility rather than to prior certification. It also reflects a structural distrust of cross-cutting federal regulations that could hinder the competitiveness of American technology companies.
b. The 2023 Executive Order and the use of executive power
The Democratic administration of President Joe Biden had nevertheless marked a shift by adopting an executive order on October 30, 2023 entitled Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.[22] This Executive Order required federal agencies to develop security standards, provided for information-sharing obligations for the most powerful models developed by private companies, and mobilized the powers of the Defense Production Act to require the communication of certain security test results.
Among other things, the executive order asks the Department of Commerce to develop standards for the « red-teaming »[23] of advanced models and to establish guidelines for the « watermarking » of artificially generated content.[24] It also requires federal agencies to assess the impacts of AI on civil rights, data protection, and the labor market.
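By way of illustration of the provenance idea behind « watermarking », the sketch below attaches a keyed cryptographic tag to generated text. This is only one simple possibility among many, chosen here for brevity; it is not the technique mandated by the executive order, which merely directs agencies to develop guidelines.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"hypothetical-provider-signing-key"  # held by the content provider

def tag_generated_content(text: str, model_id: str) -> dict:
    """Attach a keyed provenance tag (HMAC-SHA256) to AI-generated text.
    Anyone holding the key can later check that this provider emitted the content."""
    digest = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return {"content": text, "model": model_id, "provenance_tag": digest}

def verify_tag(record: dict) -> bool:
    """Recompute the tag and compare it in constant time."""
    expected = hmac.new(SECRET_KEY, record["content"].encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["provenance_tag"])

record = tag_generated_content("Example AI-generated paragraph.", "demo-model-1")
print(json.dumps(record, indent=2))
print("verified:", verify_tag(record))
```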
However, this instrument remains an act of the executive, subject to modification or repeal by a subsequent administration. It is not a federal law passed by Congress and does not create a general liability regime comparable to that established by the European regulation. Its effectiveness depends largely on the ability of the agencies to transform presidential orientations into operational standards.
This configuration highlights a central feature of the American model: the regulation of AI is less the result of systematic codification than of a pragmatic mobilization of existing legal tools, combined with targeted executive injunctions.
c. The rejection of centralized global governance: sovereignty and strategy
It is in this internal context that we must place the statement of February 20, 2026 quoted at the beginning of this article, in which Michael Kratsios (director of the Office of Science and Technology Policy, it should be recalled) declared at the India AI Impact Summit 2026 in New Delhi that the United States « totally rejects global AI governance ».[25] According to accounts published by Agence France-Presse and picked up by several international media outlets, he argued that innovation should not be subject to a « centralized bureaucracy » that could hinder American competitiveness.[26]
This position cannot be interpreted as a rejection of any international cooperation. The United States remains engaged in forums such as the G7, including through the so-called Hiroshima Process on AI, and participates in the OECD’s work on the AI Principles adopted in 2019. However, the American administration clearly distinguishes between cooperation and binding centralized governance.
The rejection of a unified global regime serves several strategic objectives. First, it aims to preserve national normative room for manoeuvre in a sector considered decisive for national security and economic competitiveness. It is also part of a logic of systemic rivalry with China, in which the mastery of AI technologies is perceived as a key factor of military and industrial superiority. Finally, it reflects the desire to protect the ecosystem of large US technology companies, whose capacity for rapid innovation could be affected by rigid international regulatory constraints.
In this sense, the American position is not based on a simple technical disagreement on the modalities of regulation, but on a particular conception of the international technological order: an order based on competition between powers, the dissemination of standards by the market and the preservation of decision-making sovereignty. Where the European Union seeks to project a structured normative model, the United States favours a more flexible architecture, where economic power and technological lead play a decisive role in defining de facto standards.
This divergence feeds the fragmentation of the global governance of artificial intelligence. It raises a central question for contemporary international relations, for the definition of industrial and normative strategies, and for the major diplomatic choices that will shape the alliances of the future: will AI become the object of a structured international regime, comparable to those established in other sensitive technological fields, or will it remain a field of normative rivalry dominated by the national strategies of the great powers? The United States currently seems to be opting for the second approach.
III. Asian models of artificial intelligence governance: technological sovereignty, state supervision and strategic pragmatism
The analysis of the normative fragmentation of the global governance of artificial intelligence cannot be limited to the transatlantic face-off. We must therefore turn to Asia and observe that the Asian powers have developed distinct regulatory frameworks, revealing differentiated conceptions of the relationship between the state, the market and technology. China has opted for a dense legal framework, closely linked to its political and security objectives; Japan and South Korea favour more flexible and adaptive approaches, combining guidelines and industrial strategies; Taiwan, finally, seeks to reconcile economic openness, national security and alignment with democratic standards. The comparative examination of these models sheds light on the geopolitical logics at work in the regional structuring of AI governance.
- China: proactive regulation and political control of digital architectures
The People’s Republic of China was one of the first states to adopt specific legislation governing artificial intelligence algorithms and services. As early as 2021, the Cyberspace Administration of China adopted the Provisions on the Management of Algorithmic Recommendation Services in Internet Information Services, imposing transparency and compliance obligations on digital platforms using algorithmic recommendation systems.[27] These include a requirement for providers to respect « socialist core values » and refrain from producing content that could threaten national security or public order.
In 2022, China strengthened this system with the Provisions on the Administration of Deep Synthesis Internet Information Services, which govern deep synthesis technologies, including artificially generated content.[28] These rules require explicit identification of AI-generated content and provide for obligations to verify the identity of users.
The most significant milestone was reached in July 2023 with the adoption of the Interim Measures for the Management of Generative Artificial Intelligence Services.[29] This text imposes obligations on providers of generative AI services in terms of security, verification of training data and prevention of the production of illegal content. Article 4 stipulates that services must « adhere to socialist core values » and not generate content that undermines national security, territorial unity or social stability, according to formulations equivalent to those that underpinned Chinese competition law in the 2007 Antimonopoly Law.[30]
This normative architecture is part of a broader strategy defined by the New Generation Artificial Intelligence Development Plan adopted in 2017 by the State Council, which sets the goal of making China the world leader in AI by 2030.[31] Chinese regulation therefore does not aim to curb innovation, but to direct it and integrate it into a logic of political control and national security. The state plays a central role, both as a regulator and as a strategic investor.
- Japan: soft law, innovation and agile governance
Unlike China, Japan favours a principled approach and flexible regulation. As early as 2019, the Japanese government supported the adoption within the G20 of the G20 AI Principles, largely inspired by those developed by the OECD. These principles emphasize the promotion of human-centered AI, transparency, and accountability.
Internally, Japan adopted guidelines on the responsible use of AI in 2022, updated in 2023, without establishing a horizontal legislative regime comparable to the European AI Act. The government is focusing on public-private cooperation and promoting an innovation-friendly ecosystem.
Japan’s public policy on artificial intelligence rests on a hybrid approach combining promotional legislation, voluntary standards and ethical guidelines, and centralized government coordination.[32] The Act on the Promotion of Research, Development and Utilization of Artificial Intelligence-related Technology (often referred to as the AI Promotion Act) was passed by the Japanese Diet on May 28, 2025.[33] It came into force gradually from July 2025, with full implementation in September 2025, only a few months ago. It is Japan’s first explicit legal framework to promote innovation and the use of AI while mitigating the associated risks, through cooperation between public and private actors, the encouragement of research, the strengthening of skills, and the development of guiding principles for ethical and transparent AI.[34] In particular, it establishes an AI Strategy Headquarters under the direct authority of the Prime Minister, responsible for coordinating AI policies and developing master plans, but it does not impose direct criminal sanctions: the emphasis is on « soft law » (incentive-based, guided regulation) rather than on strict prohibitions or fines, reflecting a Japanese preference for light-touch regulation that promotes innovation and sectoral interoperability over mandatory general rules. This strategy sits within the broader context of the Society 5.0 socio-economic vision and continued coordination with international standards on AI.
The Japanese choice therefore reflects a desire to maintain normative flexibility in order to support the competitiveness of its technology companies, while affirming its commitment to democratic principles and international cooperation. Regulation appears more as an accompanying instrument than as an ex ante binding mechanism.
- South Korea: between industrial ambition and progressive legal framework
In 2020, South Korea adopted a National Strategy for Artificial Intelligence aimed at positioning the country among the world leaders in the sector by 2030. This strategy combines massive investments, support for start-ups and the development of data infrastructures.
On the normative level, Seoul has gradually developed ethical guidelines and is considering the adoption of a more structured legislative framework. Bills relating to the promotion and regulation of AI have been debated in the National Assembly, seeking to reconcile innovation and the protection of fundamental rights. South Korea is closely observing European and American developments, in a logic of balance between international competitiveness and normative credibility. Most recently, South Korea established a public regulatory framework for artificial intelligence around the Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness (often referred to as the AI Basic Act), which was passed by the National Assembly and came into force on January 22, 2026 after a one-year preparation period.[35] This framework law, which covers 43 articles in six chapters, is one of the world’s first comprehensive legislative regimes governing the use and development of AI, with the stated objective of enhancing transparency, security and public trust while supporting innovation.[36] In particular, it imposes transparency and labelling requirements for AI-generated content, risk management obligations for high-impact systems in critical areas (health, transport, financial services, etc.) and the establishment of a coordinated national governance structure under the authority of the Ministry of Science and ICT; penalties for non-compliance can reach 30 million Korean won, but a one-year grace period applies before fines are imposed, to allow economic actors to comply. This approach aims to balance the promotion of international competitiveness with the protection of citizens, and stands as a model of national regulation alongside the European Union’s AI Act in the global AI governance landscape.
This pragmatic approach illustrates the intermediate position of a technologically advanced state closely inserted in global value chains, dependent on both Western markets and its economic relations with China.
- Taiwan: Technological Security and Democratic Alignment
Taiwan occupies a singular position in the global AI ecosystem due to its central role in the production of advanced semiconductors, especially through companies such as TSMC. Mastering cutting-edge chips is a major strategic lever in the global competition for AI.
The Taiwanese government has adopted guidelines on AI ethics and supports the development of industrial and medical applications, while ensuring the protection of personal data under its personal information protection legislation. Regulation remains relatively flexible, but strongly marked by national security imperatives and the need to maintain close partnerships with the United States and the European Union.
In Taiwan, as in Japan and South Korea, the situation has changed markedly in recent months, in 2025. The regulation and public policy of artificial intelligence was structured around the adoption of the Artificial Intelligence Basic Act, a fundamental law passed on December 23, 2025 by the Legislative Yuan after a preparation phase leading to the approval of the bill by the Executive in August 2025. This law, which aims to reconcile the promotion of innovation and the governance of risks, establishes a national framework for the development, application and supervision of AI. The Act codifies seven guiding principles aligned with international standards — sustainability and well-being, human autonomy, privacy and data governance, cybersecurity and safety, transparency and explainability, fairness and non-discrimination, and accountability — and designates the National Science and Technology Council (NSTC) as the competent authority to coordinate implementation, while the Ministry of Digital Affairs (MODA) is to develop AI risk classification frameworks and practical guidelines.[37] The text also provides for the promotion of research, the equipping of infrastructure, the protection of labour rights in the face of automation and digital equity, while establishing a national strategic committee on AI, chaired by the Prime Minister, to guide policy orientations.[38] This approach emphasizes principles-based governance, cross-sectoral coordination, and a balance between technological competitiveness and societal safeguards, rather than immediate administrative penalties for the private sector.[39]
In the context of growing tensions with Beijing, the governance of AI in Taiwan is inseparable from the challenges of technological sovereignty and strategic resilience. The island is seeking to consolidate its position as an indispensable player in global supply chains, while also being part of the camp of technological democracies.
Through these Asian models, profoundly differentiated conceptions of the relationship between technology, the state and the international order are emerging. China articulates regulation and political control in a perspective of global power; Japan and South Korea favour flexible regulation integrated with industrial strategies; Taiwan links AI governance and strategic security. This diversity confirms that the normative fragmentation of artificial ininformigence is not only between Europe and the United States, but is part of a broader recomposition of the global technological order.
IV. Issues, consequences and geopolitical perspectives of the normative fragmentation of AI
The diversity of regulatory approaches analysed in the previous sections reveals a growing fragmentation of global AI governance. This situation is not limited to a discrepancy between legal models: it is a major strategic, economic and political factor in contemporary international relations. The European Union, the United States, China and the Asian powers express logics of sovereignty, competitiveness and normative projection that combine to produce a fragmented international landscape, where the definition of common standards is uncertain and where AI is becoming a power issue.
a. Major geopolitical implications
On the geopolitical level, normative fragmentation leads to a de facto standardization race among the dominant actors. The United States, by favouring flexibility and sectoral self-regulation, encourages the emergence of standards set by major technology companies, which are becoming global references. The European Union, through the AI Act, is attempting to impose an ex ante regulatory model, based on the protection of fundamental rights, likely to bind international players wishing to access the European market.[40] The table in Appendix 1 below on the top 20 AI companies speaks for itself: the first – and only – AI company of European nationality, the German SAP, comes only in sixteenth place! The top ten are all from the United States. China places five companies between tenth and twentieth place. Taiwan and Japan each place one company, in eleventh and eighteenth place respectively.
This duality of norms creates potential tensions in trade and technological cooperation. International companies must navigate between binding European obligations and a more flexible US environment, while taking into account China’s strict and politically oriented regulations. Tensions are particularly visible in strategic sectors, such as big data processing platforms, biometric recognition systems and large-scale generative AI models.
Normative fragmentation also has implications for national security. AI, as a dual-use technology, combines civilian and military applications. The United States and China consider AI to be a key factor of technological and military superiority, which explains the American rejection of centralized global governance and the strict and political Chinese framework.[41] The European Union, although less focused on the military aspect, sees the control of standards as a lever of normative power and international influence.
b. Economic and industrial consequences
We have just observed the dominance of American companies in the top ten places. The divergence of regulations influences the overall industrial dynamic. Large U.S. companies, such as OpenAI and Google DeepMind, take advantage of a flexible framework that fosters rapid innovation and experimentation. The European AI Act, on the other hand, imposes heavy compliance and documentation constraints, which can slow down market entry but strengthen the confidence of consumers and public institutions; Europe, for its part, places only one company in the top twenty.[42]
Chinese regulation, geared towards security and compliance with policy guidelines, encourages domestic companies to align their innovations with the state’s strategic priorities, limiting foreign influence in the domestic market and consolidating local champions. Japan, South Korea and Taiwan are adopting intermediate models, seeking to balance innovation, security and competitiveness.
All of these strategies produce a fragmented global ecosystem, where mastery of AI standards becomes a key factor in economic competitiveness and geostrategic positioning.
c. Challenges for international governance
This fragmentation poses major challenges for international coordination. Multilateral bodies, such as the OECD, the G7 or the UN, have initiated principles and recommendations, but their scope remains limited in the face of the divergence of national approaches. The risk is that of the emergence of « normative blocs »: a transatlantic bloc influenced by the European Union and the United States, a Sino-centric bloc with politically oriented norms, and intermediary Asian actors that adjust their regulations according to economic and strategic opportunities.
This situation is also fuelling diplomatic and trade tensions, with states seeking to impose their standards as a condition for market access or technological investment. It can generate compliance fragmentation, where global companies must simultaneously comply with divergent regulations, increasing the costs and complexity of compliance.[43]
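The compliance-fragmentation point can be illustrated with a toy lookup: a single product rolled out in several markets accumulates distinct obligation sets. The jurisdiction summaries below are condensed, informal paraphrases of the comparison in Appendix II, not statements of applicable law.

```python
# Highly condensed summaries, loosely following the comparison in Appendix II;
# illustrative only, not a statement of applicable law.
REGIMES: dict[str, list[str]] = {
    "European Union": ["ex ante conformity assessment for high-risk systems", "EU database registration"],
    "United States": ["sectoral rules (FTC, SEC, CFPB)", "voluntary NIST AI RMF alignment"],
    "China": ["algorithm and generative-AI filings", "labelling of synthetic content"],
    "South Korea": ["AI Basic Act transparency and labelling duties"],
    "Japan": ["AI Promotion Act soft-law guidance"],
}

def obligations_for_rollout(markets: list[str]) -> dict[str, list[str]]:
    """Collect the distinct obligation sets one AI product faces per target market."""
    return {market: REGIMES[market] for market in markets if market in REGIMES}

for market, duties in obligations_for_rollout(["European Union", "United States", "China"]).items():
    print(f"{market}: " + "; ".join(duties))
```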
d. Future prospects and strategies
Faced with this fragmentation, several scenarios are possible. The first consists of a gradual convergence towards harmonised international standards, driven by multilateral negotiations and the dissemination of best practices, as has occurred in environmental regulation or digital trade. The second scenario, more likely in the short and medium term, is the coexistence of national and regional regulations, with partial knock-on effects and persistent trade and technological tensions.
The challenge for the European Union is to maintain its standard-setting capacity while facilitating interoperability with other major players. For the United States, it is a question of preserving competitiveness and innovation while addressing ethical and security concerns. China is pursuing a strategy of tight supervision and political control, while the Asian mid-powers are adjusting their frameworks to maximize their technological and economic advantages.
In conclusion, the fragmentation of global AI governance illustrates the complex relationship between law, power and international strategy. It emphasizes that artificial intelligence is not only a technical or economic object, but a lever of power and an instrument of normative projection, the regulation of which becomes a field of geopolitical competition. The tension between national sovereignty and international cooperation will be decisive for the future evolution of the global AI ecosystem.
Conclusion
The analysis of artificial intelligence regulations on a global scale highlights a paradoxical dynamic: while AI represents a universal strategic issue, governance models are deeply divergent, reflecting different visions of the relationship between state, market and society. The European Union embodies a structured normative model, focused on the protection of fundamental rights and the harmonisation of the internal market. The United States favours flexibility, sectoral regulation and the preservation of the competitiveness of its companies, while refusing any centralised global governance. China has a strict legal framework, guided by political and security priorities, while Japan, South Korea and Taiwan adopt intermediate approaches, reconciling innovation, security and international cooperation.
This diversity reflects a growing normative fragmentation, where AI simultaneously becomes an economic object, an instrument of power and a field of geopolitical rivalry. It poses several major challenges: the coexistence of divergent standards, the risk of trade and diplomatic tensions, the need for companies to navigate complex legal frameworks, and the question of the security of dual-use technologies in a context of strategic competition.
For international relations, the fragmentation of AI governance underlines the limits of traditional multilateral approaches to emerging technologies with a strong societal and military impact. It suggests that the definition of global standards will depend less on binding international treaties than on the combined influence of the major powers and their industrial and financial ecosystems. In this context, the European Union appears to be a normative power in search of the extraterritorial dissemination of its standards, while the United States and China exploit regulation to assert their technological sovereignty and their competitive advantage.
The future trajectory of global AI governance will therefore depend on a combination of factors: the ability of states to align their national strategies, the emergence of de facto technical standards, pressure from markets and consumers, and the need to manage ethical, social and security risks. Artificial intelligence is thus confirmed as a field of power where law, politics and geopolitics are inextricably linked, and where normative fragmentation constitutes both a challenge and a strategic lever for the great powers.
This analysis suggests that normative competition will be a central element of international relations in the twenty-first century, and that mastery of AI — technical, legal, and strategic — will partly determine the positioning and influence of states on the world stage.
Appendix 1
Top 20 Global Artificial Intelligence Companies
| Rank | Company | Nationality | Capitalization / Valuation (est.) | Estimated AI Revenue (2025–26) | AI business model | Sources of funding / investments |
|---|---|---|---|---|---|---|
| 1 | NVIDIA | United States | ~$4.5T | ~$210 billion | GPU & AI hardware for training/inference | Self-financing, R&D, industrial partnerships |
| 2 | Microsoft | United States | ~$3.0T | ~$80–100 billion | Cloud AI, SaaS platforms & integration | Self-financing, partnerships (OpenAI) |
| 3 | Alphabet (Google) | United States | ~$3.7T | ~$70–90 billion | Service-integrated AI and cloud | Self-financing, intensive R&D |
| 4 | Amazon | United States | ~$2.4T | ~$50–70 billion | B2B AI cloud, infrastructure & services | Self-financing, AWS capex |
| 5 | OpenAI | United States | ~$700–830B | ~$20–40 billion | Generative AI, SaaS API | Microsoft + Nvidia + SoftBank + VC |
| 6 | Anthropic | United States | ~$380B | ~$5–14 billion | Generative AI, chatbot | GIC, Coatue Management, VC |
| 7 | Meta Platforms | United States | ~$600–800B | ~$30–50 billion | AI for social and metaverse | Self-financing, R&D |
| 8 | IBM | United States | ~$280B | ~$15–25 billion | Enterprise AI solutions | Self-financing |
| 9 | Oracle | United States | ~$560B | ~$10–20 billion | Cloud & enterprise AI | Self-financing |
| 10 | Salesforce | United States | ~$245B | ~$8–15 billion | CRM + AI business workflow | Self-financing |
| 11 | TSMC | Taiwan | ~$1.0–1.1T | ~$30–50 billion | AI chip manufacturing | Self-financing, foundry partnerships |
| 12 | Baidu | China | ~$50–60B | ~$15–25 billion | AI for search & cloud | Self-financing, local investments |
| 13 | Tencent | China | ~$560B | ~$10–20 billion | Applied AI social/gaming | Self-financing |
| 14 | ByteDance | China | ~$330B | ~$20–30 billion | AI recommendation, content | Self-financing, partial IPO envisaged |
| 15 | Adobe | United States | ~$160B | ~$5–10 billion | Creative AI tools | Self-financing, R&D |
| 16 | SAP | Germany (EU) | ~$160B | ~$3–5 billion | Built-in ERP and cloud AI | Self-financing, Europe |
| 17 | Qualcomm | United States | ~$150B | ~$5–8 billion | AI chips and edge computing | Self-financing, licences |
| 18 | Sony | Japan | ~$90–100B | ~$3–6 billion | Applied AI media & games | Self-financing, R&D |
| 19 | Alibaba Cloud | China | ~$230–260B | ~$8–12 billion | AI cloud & e-commerce services | Self-financing, local capital |
| 20 | Tencent Cloud | China | ~$100–120B | ~$6–10 billion | AI cloud & services | Self-financing, international expansion |
Sources:
AI capitalization and revenues: estimates based on the Largest AI companies by market capitalization 2025 (Capital.com) rankings, public financial reports 2025–2026 and sector analyses available in open access. – Source: Largest AI companies by market cap 2025, Capital.com, accessed February 2026. – Source: Top AI & Machine Learning Revenue Leaders 2025; analyses from GIS AI Rankings, AEO SIG.
Valuation of private companies (OpenAI, Anthropic, ByteDance): estimates from institutional press releases, financial wire reports and specialized journals. For unlisted companies (e.g. OpenAI, Anthropic), valuations are recent publicly reported estimates. – Source: Financial Times, OpenAI raises $X trillion, 2026; – Source: The Guardian, Anthropic funding round, 2026.
AI business model and revenues: estimates based on companies’ annual or interim documents (2025/2026 reports), indicating the share of revenue related to AI, cloud or advanced intelligence platform activities.
Appendix II:
Comparison of national artificial intelligence regulations (with sanctions)
| Country / Region | Type of regulation | Constraint level | Enforcement mechanisms | Penalties for violations | Strategic and geopolitical objectives |
|---|---|---|---|---|---|
| European Union | Ex ante, horizontal, risk-based regulation (AI Act, 2024) | High | Mandatory certification for high-risk AI, audits, inspections | Fines of up to 7% of annual worldwide turnover or €35 million, withdrawal from the market, injunctions | Protection of fundamental rights, legal certainty, global normative leadership, influence on international standards |
| United States | Sectoral self-regulation, federal and state guidance, voluntary principles (NIST AI Risk Management Framework) | Low to medium | FTC ad hoc investigations, industry remedies, tax incentives, public-private partnerships | Limited civil penalties, contractual penalties, civil and commercial liability | Industrial competitiveness, rapid innovation, flexibility, technological leadership via private companies, preservation of economic sovereignty |
| China | Strict and centralized regulation, cybersecurity laws and AI guidelines (CAC, 2023–2025) | Very high | Compulsory licensing, government oversight, control of generated content, security audits | Significant fines, suspension of activities, revocation of licenses, criminal sanctions for executives | National security, political control, technological autonomy, consolidation of national champions, limitation of foreign influence |
| Japan | Flexible regulatory framework, voluntary principles, AI sectoral laws (2023–2025) | Medium | Self-certification, recommendations, voluntary audits | Limited administrative fines, formal recommendations, restrictions on access to certain markets | Encourage innovation while ensuring security and trust, alignment with international standards |
| South Korea | Mixed regulation, guiding principles, sector-specific obligations for critical AI | Medium to high | Data protection authority, sector audits, mandatory compliance for risky systems | Turnover-related fines, injunctions, temporary suspension of services | Technological innovation, industrial competitiveness, protection of citizens, alignment with international standards |
| Taiwan | Phased regulation, sectoral recommendations and guidelines | Medium | Supervision by sectoral agencies, voluntary certifications | Administrative fines, voluntary withdrawal or suspension of services | Encourage local R&D, user protection, interoperability with international standards |
[1] Michael Kratsios, Remarks by Director Michael Kratsios at the India AI Impact Summit, India AI Impact Summit, New Delhi, 20 February 2026, Office of Science and Technology Policy (full text consulted online), available on the White House website, https://www.whitehouse.gov/articles/2026/02/remarks-by-director-michael-kratsios-at-the-india-ai-impact-summit/. Also, « White House adviser declares US ‘totally’ rejects global AI governance », AFP article (dispatches of February 20, 2026) reporting on Michael Kratsios’ statements in New Delhi during the India AI Impact Summit 2026, reported by several international press outlets.
[2] Ibid. See in particular the passages where Kratsios criticizes the idea of centralized normative governance and calls for promoting « adoption and human prosperity » via AI. « No Euro tone again: US rejects global AI governance, pushes ‘sovereign’ American stack », New Indian Express, February 20, 2026.
[3] UNGA 80, US rejects a centralised AI rulebook, a record of debates at the United Nations General Assembly where the US rejected centralised AI governance, promoting national sovereignty and cooperation between states. « US promotes AI sovereignty, exports at India AI Impact Summit 2026 », Business Standard, February 20, 2026, https://www.business-standard.com/technology/tech-news/us-promotes-ai-sovereignty-exports-at-india-ai-impact-summit-2026-126022000532_1.html.
[4] European regulation on artificial intelligence, known as the AI Act, adopted in 2024 by the European Union institutions to regulate the « risks » related to AI according to graduated risk levels (texts and official comments of the European institutions available in the EU legal documentation). It should be noted that the habit has been adopted, particularly with the Digital Markets Act, of naming « Acts » in English the category of texts falling under the term « Regulation » within the meaning of the European treaties, to give them readability equivalent to the texts of laws in the United States. However, not all Regulations are called « Acts » but « Regulations ».
[5] European Commission, White Paper on Artificial Intelligence – A European approach to excellence and trust, COM(2020) 65 final, Brussels, 19 February 2020, pp. 2–4.
[6] European Commission, Commission appoints expert group on AI and launches the European AI Alliance, Press release, 14 June 2018. The AI HLEG was set up to provide strategic advice to the Commission on its European AI strategy, to develop public policy recommendations, and to address the ethical, legal and social issues of artificial intelligence in the context of the implementation of the Communication « Artificial Intelligence for Europe » published in April 2018. The group, composed of independent experts from academia, civil society and industry, was also expected to contribute to the development of ethical guidelines and guidance for trustworthy AI in Europe.
[7] European Commission, 2030 Digital Compass: the European way for the Digital Decade, COM(2021) 118 final, Brussels, 9 March 2021.
[8] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), COM(2021) 206 final, 21 April 2021, based on Article 114 TFEU.
[9] AI Act, recital 5.
[10] Ibid., art. 5.
[11] Ibid., arts. 8-15.
[12] Ibid., art. 59 et seq.
[13] Ibid., art. 99.
[14] Ian Manners, « Normative Power Europe: A Contradiction in Terms? », Journal of Common Market Studies, Vol. 40, No. 2, 2002, pp. 235–258.
[15] Ursula von der Leyen, Statement on the political agreement on the AI Act, European Commission, press release, Brussels, 9 December 2023.
[16] Capital.com, Largest AI companies by market capitalization 2025, accessed February 2026: no European company in the top 15.
[17] SAP, Annual Report 2025, SAP Integrated Report 2025, pages 48–52, Walldorf, Germany: AI business estimate and position in global rankings.
[18] See in particular our articles Souty, F., « European Digital Markets Act, competition policy and sovereignty: geopolitical consequences and strategic impact of the law on the digital economy », Le Diplomate Média, 4 February 2026 and « Competition and Antitrust policy in Europe and the United States: transatlantic perspectives and geopolitical issues », Le Diplomate Médias, December 30, 2025. See also the excellent analysis by Babinet, G., « The real subject of AI is antitrust », Les Echos, Wednesday 18 February 2026, p. 12, which quite rightly explains that « the debate on AI is less a debate on labour law than a debate on competition policy ».
[19] Souty, F., « Cyberspace, security, sovereignty, technology and global geopolitical rivalries: legal issues, regulation and priorities for the European Union and for France », Le Diplomate Média, 11 February 2026.
[20] Federal Trade Commission, « Using Artificial Intelligence and Algorithms », Business Blog, April 8, 2020; see also FTC, Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI, April 19, 2021.
[21] National Institute of Standards and Technology, Artificial Intelligence Risk Management Framework (AI RMF 1.0), U.S. Department of Commerce, January 2023.
[22] Executive Office of the President, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, October 30, 2023, Federal Register.
[23] The term « red teaming » of advanced content comes from cybersecurity and risk assessment practices, but applied to generative AI and digital content. It is a proactive approach to testing, detecting, and assessing vulnerabilities or risks associated with AI-generated content.
[24] Content watermarking is a technique that consists of inserting invisible or visible information into content that identifies its source, owner or traceability. It is a bit like signing or marking a document or image to show that it belongs to someone, or to track its circulation in a context of copyright protection, traceability or authentication of AI-generated content.
[25] Statements reported at the India AI Impact Summit 2026, New Delhi, February 20, 2026, op.cit.
[26] AFP dispatch, 20 February 2026, already cited, reprinted in particular by Stratégies, 20 February 2026; see also US promotes AI sovereignty, Business Standard, 20 February 2026 also already cited.
[27] Cyberspace Administration of China, Provisions on the Management of Algorithmic Recommendation Services in Internet Information Services, adopted on December 31, 2021, entered into force on March 1, 2022.
[28] Cyberspace Administration of China, Provisions on the Administration of Deep Synthesis Internet Information Services, November 25, 2022, effective January 10, 2023.
[29] Cyberspace Administration of China et al., Interim Measures for the Management of Generative Artificial Intelligence Services, July 13, 2023, entered into force August 15, 2023.
[30] Souty F. « Chinese Antimonopoly Law – Assessment: The Chinese Competition Authority draws a first assessment of the two years of application of the antimonopoly law », Concurrences N° 4-2010, August 12, 2010, Art. N° 33180, pp. 235-238
[31] State Council of the People’s Republic of China, New Generation Artificial Intelligence Development Plan, July 8, 2017.
[32] « AI Regulation in Japan: Policy Framework & Governance », Nemko Digital, 2026, pp. 2-4 (explanation of the Act’s national objectives, the establishment of the AI Strategy Headquarters under the Cabinet Office, and the integration of existing sector-specific laws for the applicable law, showing the so-called « light touch regulation » approach).
[33] International Bar Association, Japan’s emerging framework for responsible AI: legislation, guidelines and guidance, 12 Nov. 2025, section on the AI Promotion Act.
[34] Vincent Fauchoux, « Japan’s 2025 AI Promotion Act: Structuring Innovation Through Soft Regulation », Tech & DATA, June 2, 2025, pp. 1-3 (presentation of the objectives, main axes and institutional architecture of the law, highlighting the absence of direct sanctions but the promotion of cooperation between stakeholders).
[35] « South Korea Enacts World’s First Comprehensive AI Law, Balancing Innovation and Safety, » Asia Daily, Jan. 25, 2026.
[36] « South Korea’s AI Basic Act Takes Effect Jan 22, 2026 », AI Business Weekly, Jan. 22, 2026. This article provides a series of detailed information on the entry into force, institutional structure, grace period and sanctions, including the broader regulatory context (Articles 25-43, provisions applicable to national governance).
[37] Ministry of Digital Affairs (R.O.C., Taiwan), Legislative Yuan Passes Artificial Intelligence Fundamental Act in Third Reading, Laying Foundation for AI Innovation, Security Governance in Taiwan, Dec. 24, 2025 news release outlining the AI Basic Act and its goals of promoting human-centered AI and the protection of fundamental rights.
[38] Baker McKenzie, Taiwan: AI Basic Act, fact sheet, Jan. 9, 2026. Note that specifies the institutional obligations (NSTC, MODA) and the seven guiding principles aligned with international standards in AI governance. https://www.bakermckenzie.com/en/insight/publications/2026/01/taiwan-ai-basic-act. AI Basic Act passed, tries to balance AI promotion with social welfare, Focus Taiwan (CNA), Dec. 23, 2025: gives details on the principles, the establishment of the national committee, and support measures for industry and infrastructure.
[39] White & Case LLP, AI Watch: Global regulatory tracker – Taiwan, Jan. 2026, discusses the current framework, which relies primarily on principled regulation and guidelines rather than direct sanctions, positioning Taiwan in a « low-constraint » approach.
[40] European Commission, Artificial Intelligence Act, 2024, see recitals and articles on extraterritorial scope and the protection of fundamental rights.
[41] Cyberspace Administration of China, Interim Measures for the Management of Generative Artificial Intelligence Services, July 13, 2023, Article 4.
[42] Ursula von der Leyen, Press release on the adoption of the AI Act, European Commission, 9 December 2023.
[43] OECD, AI Policy Observatory: Mapping AI Regulations, 2025: https://www.oecd.ai/en/policy-regulations .
#ArtificialIntelligence,#AIGovernance,#AISovereignty,#NormativeSovereignty,#Geopolitics,#TechGeopolitics,#GlobalGovernance,#AIPolicy,#AIRegulation,#AIACT,#EuropeanUnion,#UnitedStates,#ChinaTech,#AITaiwan,#SouthKoreaAI,#JapanAI,#DigitalSovereignty,#TechPower,#StrategicAutonomy,#AIStandards,#AICompliance,#AICompetition,#MultipolarWorld,#TechColdWar,#AIEthics,#NationalSecurity,#AIMarket,#AIIndustry,#InnovationPolicy,#TechLeadership,#GlobalStandards,#AIPower,#RegulatoryCompetition,#DigitalOrder,#AIFragmentation,#AIIndustrialPolicy,#EmergingTechnologies,#InternationalRelations,#CyberStrategy,#TechnologicalSovereignty

François Souty is Executive Chairman of LRACG Conseil, a consultancy in European strategies and competition law, teaches at Excelia Business School (La Rochelle-Tours-Cachan) and at the Université Catholique de l’Ouest (Niort), and is a lecturer at the Faculty of Law of the University of Nantes. Previously a Seconded National Expert to the European Commission (antitrust rapporteur on financial markets from 2018 to 2021 and international competition affairs officer at DG Competition from 2021 to 2024), he served as European economic adviser on competition policy to the government of Georgia in Tbilisi in 2017-2018. A long-serving departmental director of the DGCCRF at the Ministry of the Economy and Finance (1982 to 2024), he was also an associate professor at the University of La Rochelle (1996-2018). A member of the OECD and UNCTAD competition expert committees from 1992 to 2018, he took part in the WTO’s work on international trade and competition policy from 1997 to 2004. One of the founders of the Cercle Jefferson, the Cercle K2 and the review Concurrences in 2004, he is the author of a dozen books or international reports and of more than a hundred academic articles on competition law and policy and on economic history. He is currently preparing the 5th edition of « Droit et politique de la concurrence de l’Union Européenne » for LGDJ-Montchrestien (Clefs collection). He is the author of a doctoral thesis in economic history at the Université de Paris III on the monopolies of the Dutch East India Companies in the eighteenth century. François Souty is an Officer of the Ordre National du Mérite.










