
What’s the story
Google has announced its intention to sign the European Union’s (EU) general-purpose artificial intelligence (AI) code of practice.
The voluntary framework is designed to assist AI developers in establishing processes and systems that comply with the bloc’s upcoming AI Act.
The commitment comes just days before rules for providers of “general-purpose AI models with systemic risk” come into effect on August 2.
Google’s commitment comes after Meta’s decline
Google’s decision contrasts with that of Meta, which recently declined to sign the code, calling the EU’s implementation of its AI legislation “overreach.”
The firms affected by these rules include Anthropic, Google, Meta, and OpenAI. They will have two years to fully comply with the AI Act.
In a blog post, Kent Walker, Google’s President of Global Affairs, wrote that while he is hopeful the code will promote access to secure AI tools in Europe, he remains concerned about potential hindrances to AI development.
Concerns over Europe’s competitiveness in tech landscape
Walker expressed concerns that the AI Act and code could slow down Europe’s AI development and deployment.
He specifically warned that departures from EU copyright law, steps that slow approvals, and requirements exposing trade secrets could hamper European model development and deployment.
This, he said, would hurt Europe’s competitiveness in the global tech landscape.
Guidelines for AI companies under the code
By signing the EU’s code of practice, AI firms will have to follow a set of guidelines.
These include providing updated documentation about their AI tools and services, not training AI on pirated content, and complying with requests from content owners not to use their works in datasets.
The code was drawn up by 13 independent experts and is aimed at offering legal certainty on how to meet requirements under the upcoming AI Act.
AI Act could set global tech regulation benchmark
The EU’s AI Act is a risk-based regulation that bans certain “unacceptable risk” use cases, such as cognitive behavioral manipulation and social scoring.
It also defines a set of “high-risk” uses, including biometrics and facial recognition, as well as the use of AI in education and employment.
The act requires developers to register their AI systems and meet risk- and quality-management obligations.
This shift could set a global benchmark for tech regulation, a space currently dominated by the US and China.