EU unveils strict AI code targeting OpenAI, Google, Microsoft

The European Union has introduced a voluntary code of practice for general-purpose artificial intelligence. The guidelines aim to help companies comply with the bloc’s AI Act, set to take effect next month.

The new rules target a small number of powerful tech firms like OpenAI, Microsoft, Google, and Meta, which develop foundational AI models used across multiple products and services.

While the code is not legally binding, it lays out requirements for transparency, copyright protection, and safety. Officials say companies that adopt the code will benefit from a “reduced administrative burden and increased legal certainty.”

Focus on transparency and risk

Under the code, companies must detail the content used to train their AI systems, an issue long raised by publishers and rights holders. Firms will also be required to conduct risk assessments to flag potential misuse, including scenarios like the development of biological weapons.

The rules stem from the broader AI Act, passed last year. While parts of the law take effect on August 2, penalties for noncompliance won’t be enforced until August 2026. Violations could result in fines of up to €35 million ($41 million) or 7% of global revenue.

The European Commission said the new guidelines aim to make cutting-edge AI systems “not only innovative but also safe and transparent.”

Henna Virkkunen, the Commission’s executive vice president for tech sovereignty, security and democracy, said in a news release, “Today’s publication of the final version of the Code of Practice for general-purpose AI marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent.”

Some tech firms are still reviewing the code. OpenAI and Google said they are studying the final text.

CCIA Europe, a trade group representing Amazon, Google, and Meta, criticized the guidelines. The group said the code “imposes a disproportionate burden on A.I. providers,” according to The New York Times.

Industry resistance and lobbying concerns

Critics argue the final version of the code was diluted to appease large firms.

Nick Moës, executive director of The Future Society, said tech firms successfully pushed for softer rules. “The lobbying they did to alter the code really resulted in them determining what is OK to do,” he told The New York Times.

Despite industry pushback, the Commission shows no signs of delaying implementation.

Earlier this year, more than 40 European companies, including Airbus, Mercedes-Benz, Philips, and Mistral, signed an open letter urging a two-year postponement.

They cited “unclear, overlapping and increasingly complex EU regulations” that threaten Europe’s AI competitiveness.

The Trump administration has also weighed in. U.S. Vice President JD Vance, speaking in Paris earlier this year, warned of “excessive regulation” that could stifle innovation.

Europe remains determined to lead on AI regulation even as it relies heavily on foreign-developed systems.

The voluntary code marks one of the first concrete steps in turning its broader AI legislation into action.


