Europe firms in dark over AI cyberattacks, ISACA finds


Sofiah Nichole Salivio


News Editor

ISACA has published research showing that 35% of European organisations cannot say whether they have been hit by an AI-powered cyberattack, highlighting weak visibility into a fast-growing security risk across the region.

A survey of 681 digital trust professionals in Europe found that 71% believe AI-powered phishing and social engineering attacks are harder to detect. Another 58% said AI has made it significantly harder to authenticate digital information, while 38% reported declining trust in traditional threat detection methods.

The findings add detail to broader concern among businesses and policymakers about the effect of artificial intelligence on cyber risk. ISACA’s data suggests many organisations are struggling to keep pace not only with AI-enabled attacks, but also with the internal controls needed to oversee the technology’s use in day-to-day work.

Detection gap

Misinformation and disinformation emerged as the top AI-related risk in the survey, cited by 87% of respondents. Privacy violations followed at 75%, while 60% identified social engineering as a major concern.

At the same time, respondents reported that AI is assisting defensive work. Some 43% said it has improved their organisation’s ability to detect and respond to cyber threats, and 34% are already deploying AI specifically to support cybersecurity efforts.

That contrast runs through the results. Businesses are adopting AI tools at scale, but governance appears to be lagging, leaving gaps in oversight and raising concern over misuse.

Across European workplaces, 82% of organisations said they expressly permit AI use and 74% permit generative AI in particular. The most common uses were creating written content, cited by 69%, increasing productivity at 63%, automating repetitive tasks at 54%, and analysing large datasets at 52%.

Many also reported practical gains. Time savings were cited by 77% of respondents, while 40% said AI had increased capacity without additional headcount.

Policy shortfall

Despite AI’s spread in routine operations, only 42% of organisations said they have a formal, comprehensive AI policy in place. The survey also found that 33% do not require employees to disclose when AI has contributed to work products.

That lack of formal controls is feeding concern among professionals responsible for risk, governance and cybersecurity. According to the poll, 87% are worried about employees using AI in an unauthorised capacity. Another 26% said their biggest challenge with AI at work is a lack of trust that it adequately protects intellectual property and sensitive information.

Chris Dimitriadis set out ISACA’s view of the trend.

“AI has fundamentally changed the threat landscape. Attackers can now hack at the speed of intent, and too many organisations don’t even know whether they’ve already been on the receiving end. The fact that so many businesses are operating without the governance to see where AI is being used, let alone how, makes that exposure significantly worse.”

“Ungoverned AI doesn’t just create operational risk. It actively hands an advantage to those who want to cause harm. Closing that gap starts with professional development and advancing the expertise needed to build and embed AI governance that stands up under pressure. Doing so is now a security imperative,” said Dimitriadis, Chief Global Strategy Officer at ISACA.

Skills pressure

The survey suggests the burden of responding to this shift is falling on staff who do not feel fully prepared. More than half of respondents, 54%, said they need to upskill within the next six months to retain their job or advance their career. Over the next year, that figure rose to 79%.

Skills were also identified as a strategic risk. Some 41% named the growing skills gap as one of the biggest risks posed by AI, yet 21% said their organisations still provide no formal AI training.

Regulation is another area where implementation appears uneven. The EU AI Act was the most widely referenced governance framework in the survey, cited by 45% of organisations. NIST followed at 26%.

Even so, 26% of organisations said they do not yet follow any framework. That points to a gap between awareness of regulation and the practical steps needed to embed governance, training and oversight.

Dimitriadis said the challenge is not a departure from established risk management principles, but a test of whether organisations can apply them quickly enough in a more complex environment.

“The fundamentals of good risk management have not changed. What has changed is the complexity and speed of what practitioners are now being asked to govern. AI risk requires professionals who can evaluate exposure, embed oversight across the full lifecycle, and advise on regulatory best practice. Organisations that invest in that capability now will not only be better protected; they will also be better placed to fully realise AI’s benefits. That is the shift credentials like ISACA’s Advanced in AI Risk credential are designed to deliver,” Dimitriadis said.


