Anthropic loosens safety pledge to compete with its AI peers

Dario Amodei, CEO and co-founder of Anthropic, speaks onstage. Amodei was among those who left OpenAI in 2021 to found a "safer alternative."


Anthropic is loosening its safety standards around when to deploy new, more advanced versions of its artificial intelligence. The startup said it risks being left behind by competitors.

A number of AI safety advocates have been warning for some time now that the industry is moving too fast. And in announcing its changes, Anthropic said federal policy is prioritizing growth over safety.

In AI's early days of 2021, some OpenAI executives left to form what they called a safer alternative, Anthropic, said Owen Daniels, associate director of analysis at Georgetown University's Center for Security and Emerging Technology.

"Anthropic has long been one of the frontier labs that's been most acutely focused on ensuring that its systems and applications are used in kind of a safe, responsible and ethical way," Daniels said.

Core to Anthropic's safety effort has been a pledge called the responsible scaling policy, said Sarah Myers West, co-executive director of the AI Now Institute.

"If they believe that the capabilities of these tools outstrip their ability to control them and ensure that they're safe, they would stop building them," she said of the policy.

Anthropic said it was trying to stave off a race to the bottom with that policy, but it admitted it failed to persuade other companies to adopt its stricter safety approach.

The change it announced yesterday means the company may keep building more advanced models, even if they're riskier.

Brent Thill, a tech industry analyst at Jefferies, said AI companies are getting more ambitious.

"There's incredible pressure from OpenAI, from Microsoft, from many of the top technology companies," Thill said. "AI is going from a consumer-based technology to now, inside the enterprise. The race is on."

To run financial systems, companies need more advanced AI than what's necessary to find the best latte. And as companies spend hundreds of billions of dollars to build those tools, Daniels said, they need to start making money, too.

"There is a tension that's difficult for these companies to navigate in deploying the latest versions of models in ways that are economically useful and can be beneficial to their bottom lines," he said.

They're balancing that against the risks AI may pose.

An Anthropic spokesperson said the company is now committing to more transparency, pledging to publish detailed reports about the risks and capabilities of its AI.
