AI-driven Cyber Threats Are the Biggest Concern for Cybersecurity Professionals Going Into 2026
Over half (51%) of European IT and cybersecurity
professionals fear AI-driven cyber threats and deepfakes will keep them up at
night next year, according to new ISACA research.
What’s driving this concern is a lack of preparedness for
AI-related risks across the industry. Only 14% of respondents feel their
organisation is very prepared to manage the risks associated with generative AI
solutions in 2026. The majority (82%) feel they are only somewhat prepared, not
very prepared, or not prepared at all.
Tech professionals clearly perceive AI-driven threats as the most concerning, yet other threats that pose risks to business continuity persist. Beyond AI-driven threats, regulatory complexity and compliance risks (38%), supply chain vulnerabilities (37%), and the failure to detect and respond to a breach, causing irreparable harm to the business (35%), were also cited as concerns keeping cyber professionals up at night. Further, only 7% of respondents feel extremely confident that their organisation could successfully navigate a ransomware attack in 2026.
Despite the risks, AI also has transformative potential for cybersecurity
AI is seen as both a growing threat and an opportunity for
cyber and digital trust professionals, who recognise how transformative it can
be for their organisation. The survey also asked respondents what they believe
will be the top three technology trends or priorities impacting their work in
2026. The leading responses were generative AI and large language models (61%),
which are used for processes such as content and code generation, followed by
AI and machine learning (57%), such as predictive analysis.
But when respondents were asked what they see as the most significant cyber threats facing organisations in 2026, nearly six in ten (59%) said AI-driven social engineering. Another threat facing organisations is insider threats, whether intentional or accidental (29%).
Over six in ten (64%) said business continuity and resilience is a very important focus area in 2026. ISACA believes that training staff both to use AI safely and securely in the workplace and to respond to AI-driven cybersecurity threats will be key to building business resilience. Yet over a quarter (27%) have no plans to hire for digital trust roles, such as audit, risk and cybersecurity, in the year ahead.
“AI represents both the greatest opportunity and the greatest
threat of our time. This research highlights a stark reality: while
organisations are beginning to embrace AI’s transformative potential, many
remain underprepared to manage its risks in the year ahead,” said Chris
Dimitriadis, Chief Global Strategy Officer at ISACA. “AI cybersecurity and
assurance certifications will help cyber professionals manage the evolving risk
related to AI, implement policy, and ensure its responsible and effective use
across the organisation.”
A greater understanding of professionals’ concerns will help them navigate cyber threats with confidence
A further tension is that while over one-third of respondents (38%) cite regulatory complexity and global compliance risks as a concern keeping them up at night, over three-quarters (79%) agree or strongly
agree that cyber-related regulation will advance digital trust, and over half
(53%) agree or strongly agree that it will drive business growth. It is clear
that a better understanding of regulatory change, and the opportunities it can unlock, would help cyber professionals navigate compliance with greater confidence and drive business resilience.
“Many of the concerns the respondents raise signal an
opportunity to transform how we approach these issues, shifting them from
worries to a catalyst for business growth,” Dimitriadis continued. “For
example, when regulation or guidance is viewed not just as a box-ticking
exercise but as an opportunity to innovate in a resilient manner in the long
term.”