Boris Pistorius stares directly into the camera. Germany’s defence minister is addressing the nation, speaking of “rapid change, new jobs and advanced technologies” that will place Germany at the forefront of the global economy. At the end of the clip Pistorius tells Facebook viewers that this new government programme will “secure profits” for every citizen.
Fine Gael politician and current candidate for president of Ireland, Heather Humphreys, features in a news story in a popular post on Facebook: “I am delighted to introduce Quantum AI, a platform for Irish families to achieve financial independence,” she says from a podium. “The platform allows you to start with a minimum investment of just €300, and within 24 hours you can start receiving payments of up to €4,500 per week.” Humphreys assures people that the Irish government, along with its financial institutions, has now “made this process accessible and secure.”
Except Humphreys, and Pistorius, never said any of this. The videos are part of a growing wave of elaborate AI-generated fakes created with advancing voice-cloning software and duplicated over and over as paid-for financial advertisements across social media platforms, including Facebook, TikTok and Instagram. Behind such clips lies a booming business built on deception: investment scams that use increasingly sophisticated deepfakes, doctored news articles and fabricated testimonials to conjure the illusion of official schemes or celebrity endorsements.
The type of investment scam promoted by these celebrity deepfakes has proven a winner, as countless victims across Europe can attest. If a viewer likes what they see in an ad and clicks through the link, they are instructed to enter their details. With the person’s name and number now on a database, a “financial advisor” soon reaches out, asking for an initial deposit of, for example, €250. Once users become committed investors, the hard sell begins: agents pursue targets for weeks, months, even years, expertly trained to coax customers into big-money investments. Victims are shown screens of fake trading platforms displaying initial profits. This emboldens them to double or even triple their initial investments. They only realise something is awry when they come to withdraw their supposed earnings. Suddenly, transfers stall and agents cannot be reached. By this point, losses may range from hundreds of euros to, in some cases, over a million.
This opportunistic scam has been on the rise globally, with political leaders and prominent figures throwing their weight behind speculative cryptocurrency and lending legitimacy to such investments.
European investigators, law enforcement agencies and cybercrime experts have become increasingly alarmed by the scope and sophistication of online investment fraud flooding users’ social media feeds. This September the EU’s tech chief, Henna Virkkunen, reported that Europeans lose over €4 billion a year to online financial scam ads.
Over the last six months, Investigate Europe has uncovered how online investment fraud schemes, fuelled by suspected illegal call centres and supercharged by AI, have taken hold in Europe. Analysis of private emails and text communication between scammers and their targets, along with interviews with dozens of victims, has revealed the extent to which Europeans are being duped into deceptive investment schemes rife across social media. Reporters conducted over 100 interviews with prosecutors, content moderators, EU officials, cybercrime specialists and bank employees, revealing how Europe is failing to protect citizens from the risk of financial ruin.
An AI-generated scam advert featuring Elon Musk shared widely on social media.
German defence minister Boris Pistorius has also been used to front deepfake scam adverts on platforms.
On the hunt for scams
Valentine Auer knows the scam playbook inside out. The tech researcher leads an online fraud team at the Vienna Institute for Applied Telecommunications, appointed as a ‘trusted flagger’ for Austria. Conceived by the European Commission, trusted flaggers officially started work in 2024 as a way to boost online safety in line with the Digital Services Act (DSA), the European Union’s landmark law on published online content introduced two years earlier.
Together with three colleagues, Auer hunts down specific content on large platforms and search engines such as Facebook, Instagram and Google, tracking harmful and illegal content – including posts or ads promoting investment scams, child sexual abuse material and hate speech – and requesting that the platforms remove it.
Searching through the Meta ad library – a repository of paid-for ads running across all Meta-owned platforms including Instagram and Facebook – with Auer reveals the massive scale of the problem. With just a few dozen search terms, she and her colleagues pull up an avalanche of fake financial ads, many virtually identical, some tweaked slightly to evade automated filters. “We see the same tricks again and again: videos advertised for only a few hours, celebrity accounts hacked and misused for ads,” she says. “In a short time we have found tens of thousands of ads featuring well-known figures, among them the Boris Pistorius video, clearly AI-generated.”
Auer’s searches show how easy it is to find these ads, but not how difficult it is to get Meta platforms to take them down. “If we flag just a handful of ads, they’re often taken down [by the platform] within days,” she says. “But if we submit larger batches, Meta suddenly stops responding or claims the material isn’t available at this time, even though we know that the content is still online.” Auer, like all ‘trusted flaggers’ across the EU, is only allowed to report 20 URLs at a time to Meta platforms. Given the number of suspect ads flooding the platforms daily, Auer’s content monitoring work is labour intensive and time-consuming.
Credit: Georgina Choleva/Spoovio
Trusted flaggers include financial institutions, NGOs, or private companies selected by national authorities for expertise in a given field, such as fraud, child safety, hate speech or cyber violence. The role is not remunerated by authorities or platforms, but flaggers are given “priority” status to communicate with the tech companies through dedicated channels and individualised reports. They are independent of the internal moderators hired by the large platforms to screen for harmful content, as well as of third-party contractors.
Monitoring the vast swathe of content hundreds of millions of European internet users are exposed to daily, the newly appointed trusted flaggers face an uphill battle. Currently there are just 46 trusted flagger organisations active across 17 EU member states, according to Investigate Europe’s analysis, with each focusing on their own specialist areas. More than a third of EU states have no designated flagger organisation currently in operation.
Despite best efforts by watchdogs like Auer, many national authorities and experts say large tech companies and the European Commission are struggling to rein in a rapidly evolving criminal threat.
“Organised crime groups that used to be involved in drugs, weapons and human trafficking are increasingly turning to economic crime.”
— Sebastian Takle, Norwegian bank DNB
Online scams reach ‘unprecedented magnitude’
The European Commission has recently called financial scams in Europe a “systemic risk” to consumer protection. In 2024 it opened a formal investigation into Meta for deceptive advertising, and in the past month requested that other tech titans like Apple, Google and Microsoft demonstrate their compliance with “Know Your Business Customer” rules on their apps, which it says helps to “identify suspicious entities before they cause harm”. This March, Europol’s report on organised crime warned that online financial scams, “driven by advancements in automation and AI, have reached an unprecedented magnitude, and are projected to continue growing.” In response, police forces have launched large-scale operations in Finland, Germany, Belgium, Latvia, Cyprus and the UK, dismantling networks running sprawling, sophisticated scams targeting people in Europe and globally.
Authorities in Norway and Germany say losses from financial scams now outpace other cross-border crimes. “Organised crime groups that used to be involved in drugs, weapons and human trafficking are increasingly turning to economic crime,” says Sebastian Takle, head of the Finance Cyber Crime Centre at Norway’s biggest bank DNB.
In Ireland, the national cybercrime bureau estimates digital finance fraud has stolen nearly €360 million from its citizens since 2021, with online investment fraud accounting for €100 million of that. In Italy, more than a quarter of scams reported last year involved fake online trading, totalling an estimated €145 million. Investment fraud is also on the rise in Portugal. Between 2022 and 2024, police opened 3,000 enquiries into scams with crypto assets, a senior police figure told national broadcaster Renascença last year.
In Portugal, João, 57, logged onto Facebook and found an advert featuring what looked like Cristiano Ronaldo promoting a platform where a deposit of €250 would yield €4,000 in just one month. After weeks of reassuring phone calls from ‘financial advisors’, he discovered that his entire investment was irretrievable.
In Italy, Paolo, a retired banker, lost €15,000 after clicking on a Facebook ad about bitcoin. An apparent initial profit convinced him to pay more money for “taxes and commissions” before the operators disappeared.
In Ireland, Vlad, an IT engineer based in Dublin, clicked on an AI-generated ad featuring Elon Musk on Facebook. Over the course of several months, he was shown false identity documents from an advisor claiming financial trading credentials. On fake investment software he believed he had earned €16,000, but couldn’t retrieve it. Today, he is still being contacted by the same agents posing as different brokers, who promise to retrieve the €6,000 they say is on the blockchain.
In Germany, a consumer protection agency shared the story of a citizen who invested a reported €170,000 in USDT, a cryptocurrency, via an allegedly fraudulent trading platform. They were told by an agent that “as soon as they pay 10 per cent capital gains tax, they will get all the money back.” They say the alleged scam has left them financially ruined.
A screenshot of a fake investment platform, shared by a victim in Ireland. Credit: private individual
Uphill struggle for content watchdogs
Given Meta’s global reach and the ease of setting up advertisements, Facebook and Instagram have become a popular choice for people looking to exploit users. With some 250 million monthly users, more than half the EU’s population is on at least one Meta platform. Its worldwide advertising revenue reached $160 billion last year, accounting for 98 per cent of its global revenue. This year Meta publicly announced that its personalised ads provided a €213 billion boost to the European economy in business and jobs.
At the same time, Meta is frequently cited as a host for scam investment products and fraudulent financial advice. The company’s advertising policy explicitly bans content that misrepresents people or organisations, as well as “misleading or deceptive claims” about financial products, but Valentine Auer says ads found by her team repeatedly violate these standards. Some ads do not show up on Meta’s ad library, but deceptive sponsored content still repeatedly appears on users’ feeds, the researcher says. They often feature deepfake celebrity and politician endorsements, according to Auer, which are also banned under Meta rules.
Sponsored ad posts on the Meta ad library also do not always provide clear information on who is actually posting and paying for the ad, despite a number of DSA requirements to do so, according to multiple trusted flaggers. “It is actually mandatory to state who paid for it [the ad], but it is usually a meaningless name,” Auer says.
It is also easy for advertisers to evade automated detection systems, says Paul O’Brien, head of the cybercrime division at the Bank of Ireland. “You will click on an ad for an Irish tourist trip through Connemara, and really, it will be a financial scam ad.” Filtering out these ads is a full-time job, and once someone clicks on a scam ad, the algorithm relentlessly feeds them the same type of content across its platforms.
Compared to the rapid ascent of financial fraud across Europe, the rollout of trusted flaggers has been both slow and piecemeal. Among the 46 trusted flagger organisations officially in place across the EU, only roughly a third list scams and fraud as an area they monitor.
“There are often several versions of the same advertisement, and the fraudulent one is hidden somewhere in the middle.”
— Valentine Auer, Vienna Institute for Applied Telecommunications
Debunk EU has been a trusted flagger in Lithuania since May this year. Over a video call Viktoras Daukšas demonstrates the software his team uses to map suspected scam networks on Facebook. “We are nowadays seeing a lot of ads using deepfake and AI-generated content,” he explains. By the end of September, the tiny organisation had reported more than a million suspected investment fraud advertisements, which had been viewed by users around 1.4 billion times. He estimates that those who placed the ads could have paid up to €20 million for the advertising space. Like his Austrian counterpart Auer, he says he is limited to just 20 URLs per report.
National cybercrime experts share these concerns. In Poland, CERT Polska – the national Computer Emergency Response Team – has long warned that only large tech companies such as Google and Meta can truly curb the reach of online fraudsters. But, the team observed, “even though reporting mechanisms for harmful ads exist, in practice platforms process reports with significant delays or reject them, especially when the report comes from a regular user. Accounts spreading malicious ads are rarely blocked, allowing fraudsters to continue exploiting them without interruption.”
Erratic response times from Meta
Once suspect content is identified, flaggers say it can take days or weeks for it to be removed. Greece Fact Check has been working as a certified trusted flagger for almost a year, and covers scams and frauds in its mandate.
Founder Thanos Sitistas says he discovered a deepfake video in early October on Facebook featuring a British investor who appeared to promote a supposed investment. By that time, the video had already been viewed by 12.3 million users, he says. Sitistas reported the video to Meta, which removed it right away. But that is not always the case. Sometimes, it takes “up to a month” for Meta to decide on reported content, he adds.
Daukšas, of Debunk EU in Lithuania, agrees that Meta’s response times vary greatly, claiming it can take months before the platform checks and removes content, though he says most of it is eventually taken down.
Claudio Tamburrino works at Barzanò & Zanardo, a private consultancy and trusted flagger in Italy focusing on trademark fraud. While the firm doesn’t specifically focus on investment scams, he says that reporting content works effectively on platforms such as Temu and TikTok, whereas Meta’s platforms often take much longer. According to Tamburrino, Barzanò & Zanardo, along with other trusted flaggers interviewed for this piece, believe that Meta’s reporting mechanism is largely managed by chatbots, and that only repeated removal requests are eventually handled by the company’s human moderators. Most content they flag is, however, eventually removed, he says.
“They’re using the functionality of the platforms to just get your contact details, and then everything moves off the platform.”
— Paul O’Brien, Bank of Ireland
At the same time, AI is turbocharging the game of cat-and-mouse between advertisers and watchdogs. “There are often several versions of the same advertisement, and the fraudulent one is hidden somewhere in the middle. This is done deliberately to make it more difficult – also for us, because the [detection system] doesn’t catch all the variants,” says Auer.
Worse still, as several sources familiar with content moderation confirmed, even when ads are removed, they often reappear in slightly altered form, forcing the entire process to start over. According to the Bank of Ireland’s Paul O’Brien, AI-cloned investment fraud has become more sophisticated with every passing week. “Within one specific ad, there are now about 50+ different versions of the same ad from the same advertiser, slightly different, but essentially doing the same thing or bringing you to the same place.”
The scam advertisers will deliberately activate one ad for just a few hours at a time, before deactivating it and using another version, O’Brien says, to evade detection from flaggers and automated screening systems. “They’re using the functionality of the platforms to just get your contact details, and then everything moves off the platform,” he explains.
Prosecutors and police can’t keep up
For seven years, German public prosecutor Nino Goldbeck has been hunting big scam syndicates, the operators behind the fraudulent online platforms. When the Bavarian Central Office for Cybercrime created its own department for economic cybercrime in late 2018, Goldbeck and just one colleague investigated investment scams. “We had no idea of the scale this would take on,” he says.
Goldbeck now heads two departments alongside another senior prosecutor, together employing 12 prosecutors. He estimates that in Germany alone, fraudulent online trading causes losses in the billions every year. His team can receive up to 40 complaints in a single day, but bringing cases to court can often take years.
In July, a trial began at the Bamberg regional court involving two men accused of defrauding German investors of over half a million euros from a call centre in Bulgaria. The alleged scam dated back to 2018. Inside the courtroom, the prosecutor took several minutes to read out the names of all the victims who for months transferred money to a supposed trading platform in the hopes of a big payoff. Victims testified to the court how the men urged them to continually give more and more money.
Such long investigations are the norm for Goldbeck’s team. Victims usually provide only sparse clues, he says. Names of platforms or supposed employees are often untraceable, hidden behind fake companies within offshore shell companies. Their trail regularly leads to the Balkans, he explains, home to many call centres, often the epicentre of large-scale fraudulent investment schemes.
There are so many cases that not every alleged scam can be brought to trial, nor every victim heard. “We have to prioritise,” says Goldbeck. “There are certain people we focus on because the evidence is strong. In those cases, the provable damages are particularly high. That’s where we have really solid material.” With his team, Goldbeck has already dismantled numerous networks.
In Ireland, Detective Superintendent Michael Cryan says law enforcement has seen a 21 per cent surge in investment fraud reports in just the last three months. A recent press release warned Irish residents of a rise in fake advertisements across popular online platforms, where the scams promote “fake ‘bond’ or ‘deposit’ products using convincing documentation and branding.” Last year victims lost over €30 million to investment fraud, with the police warning that investment fraud “continues to be a major area of criminal activity across Ireland.”
Andre Hvoslef-Eide, a public prosecutor in Norway’s economic and environmental crime unit Økokrim, describes the growth in reported digital financial offences in his country over the last 10 years as “dramatic”, with over 30,000 reported cases last year. “We are approaching 1,000 cases per week,” he says, adding that investigating every complaint would be impossible given the limited resources. “In Europe, we see trends… and reports indicating that money is being used to finance violent crime,” he says. “We suspect that the proceeds of fraud have now replaced earnings from the sale of drugs in the criminal networks.”
“We suspect that the proceeds of fraud have now replaced earnings from the sale of drugs in the criminal networks.”
— Andre Hvoslef-Eide, Norway public prosecutor
EU tech law leaves fraud unchecked
When the Digital Services Act was introduced in 2022, it was hailed as a landmark law reining in Big Tech in Europe. Large platforms found to have breached it by the European Commission can be fined up to 6 per cent of their annual global turnover. From its inception, consumer watchdogs have lamented what they see as shortcomings in tackling illegal or harmful online content.
When it comes to dealing with online scam content, one part of the text has significant implications. Article 8 states that there is “no general obligation” for companies such as Meta and Google to monitor content published by third parties.
As under Section 230 of the US Communications Decency Act, this means platforms cannot be held liable for content they host. On paper, this clause is designed to protect freedom of speech on the internet, mitigating the risks of government censorship or interference. In practice, it means platforms do not police illegal content, though they must have mechanisms in place to ensure content is reasonably reviewed.
For Paul O’Brien, head of the cybercrime unit at the Bank of Ireland, the way the law is written and implemented means action on scam content comes too late. “Our view on the DSA is that it’s all about dealing with fraudulent ads after the fact,” he says. “The potential liability for platforms comes where there is a financial loss that is directly attributed to some content that was notified or reported to the platforms, and they didn’t take it down.” But, he adds, “to be brutally honest, that’s pointless.” Consumers can rarely pinpoint the specific ad post they saw, and even when they can, it often happened months ago, he explains.
The Bank of Ireland, which is applying to become a trusted flagger itself, therefore lobbied the government in Dublin to push for an amendment to a different EU law on consumer payment protection, the Payment Services Regulation, which is still under negotiation. The proposed clause would require all very large online platforms and major online search engines to verify advertisers’ identities before publishing ads related to financial services.
Last October, Google introduced such checks in Ireland. According to O’Brien, the move is helping to curb the presence of certain types of investment scam ads on its search engine. “But now,” he says, “those ads have moved over to Meta.”
Meta does not require companies to be verified before advertising financial products and services on its site, except in Australia, India, Taiwan and the UK. In the EU, advertisers wishing to pay for ads promoting financial products and services require no verification.
Google told Investigate Europe that it has been using advertiser verification since 2020 to screen companies that post financial services ads on its platforms. The company says it has removed nearly 200 million suspected scam financial services ads globally.
EU tech chief Henna Virkkunen reported in September that Europeans lose over €4 billion a year to online financial scam ads. Credit: European Commission
Despite O’Brien’s misgivings, the European Commission still sees its tech law as up to the task. “The fight against financial scams is complex and cross-border. But with the DSA, Europe now has the tools to push platforms to act before harm is done,” European Commissioner Virkkunen said this month.
Asked whether a fully implemented DSA could tackle the issue, a spokesperson for the EU executive pointed to the ongoing investigation into Meta, as well as the recent overtures to Apple, Google and Microsoft. “The European Commission is actively monitoring the issue of deceptive advertising, including in relation to financial scams,” the spokesperson told Investigate Europe.
Back in Vienna, Valentine Auer says the problem is far beyond what trusted flaggers can deal with, particularly since Meta’s decision in January to end its third-party fact-checking system. “We assume that Meta is technically able to stop most of these ads,” the researcher says. “The same images are blocked immediately if uploaded as a private post. But as paid adverts,” she says, “they just keep running.” The researcher sighs. “It is clear that business comes first.”
Meta did not respond to requests for comment by the time of publication.
——
Additional reporting: Amund Trellevik, Marta Portocarrero
Coordination: Mei-Ling McNamara
Editing: Ella Joyner, Chris Matthews
Fact-checking: Ella Joyner
This story is part of Scam Europe, an investigation project led and coordinated by Investigate Europe and the Balkan Investigative Reporting Network. The series is being published with media partners including Altreconomia, Balkan Insight, EU Observer, The Irish Times, La Libre, Netzpolitik.org, Público and Der Standard.
IJ4EU (Investigative Journalism for Europe) provided funding support for the investigation.