In recent years, artificial intelligence has become a staple of daily operations for businesses across the world. In Europe, ISACA’s recent AI Pulse Poll found that 83% of IT and business professionals report AI use within their organisations. However, despite this rapid uptake, only 31% of organisations have a comprehensive AI policy in place.
This disconnect highlights a troubling and widening governance gap. AI offers both promise and peril. It can accelerate productivity, improve efficiency and even strengthen cybersecurity, but the very features that make it powerful for defence – speed, scale, and adaptability – also make it an attractive tool for attackers. To keep pace, organisations must invest not only in adoption, but also in the structures, skills and safeguards that enable responsible, secure and resilient use.
Harnessing the potential of AI
AI’s potential is revolutionary, and its benefits are already being felt across European businesses every day. ISACA’s 2025 AI Pulse Poll found that more than half of professionals (56%) state that AI has boosted organisational productivity, 71% have seen efficiency gains and time savings, and 62% are optimistic that it will positively impact their organisation in the next year.
But these breakthroughs come hand-in-hand with new and evolving risks.
There is potential for data leakage when applying AI tools, and cybercriminals are exploiting generative capabilities to make their attacks more sophisticated and difficult to detect, as was suspected in the recent phishing campaign targeting Microsoft. AI can be used to create highly personalised campaigns, generate malicious code, and produce synthetic content such as deepfakes that are becoming more widespread and harder to identify.
Nearly three-quarters of professionals (71%) expect these risks to grow in the year ahead, yet only 18% of organisations are investing in countermeasures, according to ISACA’s poll.
As the risks grow, so too does the urgency for stronger oversight. The use of AI is becoming more prevalent across workplaces, but the speed of adoption continues to outpace the structures needed to govern it. Regulating its use and ensuring staff are trained to use it safely and securely is essential if we are to keep up with bad actors, protect our organisations and ensure AI is used safely in workplaces.
The gap between adoption and accountability
It’s clear that many organisations still see AI governance as an abstract, future concern, when in reality it is already a core operational issue with regulatory, reputational and ethical dimensions.
Policymakers are moving to catch up, but current regulation is not enough to close the gap in the immediate future. Action is needed now if we are to stem the tide of increasingly sophisticated cyber attacks fuelled by AI developments.
There are key steps businesses can take to boost resilience, including putting in place internal policies, processes and controls that allow employees to use AI responsibly and securely. Robust, role-specific guidelines as part of a wider, formal AI policy – covering everything from “when to use AI” to “how to spot a deepfake” – are essential to help businesses safely maximise AI’s abilities and build resilience into their operations.
Supporting businesses with AI governance
For businesses to govern AI effectively, there’s no doubt that they need practical support and clear direction – from assurance frameworks and audit tools to guidance on stronger collaboration between privacy, cybersecurity and legal teams.
At the same time, skills must keep pace with technology. ISACA’s AI Pulse Poll reveals that 42% of professionals in Europe believe they will need to increase their AI knowledge within the next six months, and almost nine in ten say they will need new skills within the next two years.
To help organisations close this knowledge gap, ISACA is providing a range of practical resources – from free content guides and tailored training courses to new certification programmes, including the Advanced AI Audit (AAIA) and Advanced AI Security Management (AAISM) certifications.
The AAISM certification is the first and only AI-centric security management certification. It is designed to equip cybersecurity and audit professionals with the specialised skills needed to manage evolving security risks related to AI, implement policy and ensure its responsible and effective use across the organisation.
AAIA is the first and only advanced audit-specific artificial intelligence certification designed for experienced auditors. It validates expertise in conducting AI-focused audits, addressing AI integration challenges and enhancing audit processes through AI-driven insights. It covers the key domains of AI governance and risk, AI operations, and AI auditing tools and techniques. ISACA has recently expanded the eligibility scope for the AAIA, giving more auditors the chance to address the challenges and opportunities presented by AI in the field.
These resources are designed to translate fast-moving technological change into practical, actionable steps that businesses can use to strengthen governance and resilience.
Hear from the experts
It’s a complex landscape, and the pace at which technologies like AI evolve can make it difficult for organisations to keep up, especially as we wait for regulatory guidance. For many, the challenge lies not only in understanding the risks, but in knowing how to implement AI safely, securely and efficiently within their business.
To help leaders navigate this, ISACA will bring together global experts to speak at its upcoming Europe Conference 2025, taking place in London from 15–17 October. AI will run as a cross-agenda topic, with sessions exploring its promise, its perils, and the practical steps organisations can take to stay ahead. Themes include safeguarding against privacy trade-offs, building secure foundations for AI deployment, tackling the dual use of AI for both defence and attack, and exploring its impact on trust, regulation and reputation.
A highlight will be a fireside chat between me and a senior representative from the Department for Science, Innovation and Technology (DSIT). This conversation will decode the UK’s evolving policy landscape, explore international approaches to safeguarding infrastructure and supply chains, and outline practical strategies for aligning business resilience with regulatory expectations.
Together, these sessions will translate fast-moving developments into actionable governance, stronger data discipline and measurable resilience. For the full programme, including session details, visit the conference agenda here.
From awareness to action
Organisations that act now to embed governance, train their people and align with emerging frameworks will be better placed not only to withstand AI-powered threats, but also to seize the opportunities of innovation.
The ISACA Europe Conference provides a unique opportunity to hear from experts, exchange experiences, and translate fast-moving technological change into practical resilience. For more details and to register for ISACA Europe Conference 2025 (15–17 October, London), click here: http://bit.ly/48dw6SM