The head of Anthropic EMEA has stated that the AI lab behind the Claude chatbot is not specifically targeting researchers and engineers from rival labs as it seeks to build up its team across the region.
Guillaume Princen was speaking on the Tech.eu podcast, where he discussed building up Anthropic’s EMEA team, Anthropic’s safety credentials, its focus on enterprise clients, and the debate around the EU AI Act.
He also spoke about how brands like BMW, Novo Nordisk, and the European Parliament are working with Anthropic, which is valued at $61.5bn and backed by Google and Amazon.
Princen, who was appointed in April this year, is overseeing a hiring spree of around one hundred new employees across Anthropic EMEA, doubling its headcount to around 200.
Poaching talent is a hot-button issue in the world of AI, after it was revealed that Mark Zuckerberg was recruiting OpenAI and Apple talent in the AI race.
Princen stated: “We’re very fortunate to have a very strong pool of talent here in Europe and EMEA more generally. Some of them come from other labs, some of them don’t. We’re not seeking specifically to poach engineers or researchers from other labs.”
Anthropic, which was founded by early OpenAI employees with a focus on safety, continues to bang the safety drum.
Princen stated: “Specifically in Europe, this is something we are seeing that is extremely important for customers, especially European enterprise customers.
“Those large companies, they need the safety, they need the trust in those models to make sure they will not start hallucinating. That is something I am hearing from customers every day in the market.”
He also touched on the ongoing debate around the EU AI Act.
He stated: “We deeply believe there are risks to these tools and models. I think the risk would be to hamper innovation and, through regulation, prevent European companies from making the most of these technologies. I think that is something that the EU should be careful with.”