Artificial intelligence (AI) is no longer an experimental tool; it has become a mainstream technology. It’s a critical part of how organisations operate, innovate and defend against threats. But as adoption accelerates, so do the risks.
At Microsoft Secure 2025, leaders warned that while AI opens doors to incredible opportunities, it also expands the attack surface for cybercriminals, nation-state actors and insider threats.
Here are the risks you can’t afford to overlook this year.
Risk #1 A Threat Landscape That Moves at Lightning Speed
Cyberattacks have never been faster or more relentless. According to Microsoft, it now takes just 72 minutes for attackers to access private data after a phishing link is clicked. On top of that, password attacks have surged to 7,000 per second, nearly double last year’s rate.
Nation-state actors from Russia, China, Iran and North Korea are increasingly sophisticated, exploiting cloud services, IT vendors and even fake companies to achieve espionage and financial goals. This evolution shows how quickly adversaries adapt and why defenders can’t afford to lag.
Risk #2 AI as a Tool for Attackers
While organisations explore AI to drive efficiency, adversaries are doing the same. Threat groups are using AI to:
- Polish phishing emails for greater believability
- Create deepfakes for social engineering
- Jailbreak AI models to weaponise them for malicious use
Although Microsoft notes that “novel AI-based attacks” haven’t yet gone mainstream, the trajectory is clear: attackers will harness generative AI to scale their operations faster than ever.
Risk #3 Shadow AI: The Invisible Insider Threat
One of the most overlooked risks is shadow AI. When employees use unsanctioned AI tools at work without IT oversight. Microsoft reports that 78% of AI users bring their own AI apps into the workplace, often exposing sensitive data in the process.
“We know 57% of organisations have seen an increase in security incidents from employees using AI and while most organisations recognise the need for AI controls, 60% have not yet started.”
- Vasu Jakkal, CVP, Microsoft Security
The risk isn’t just about productivity. It’s about confidential documents, merger details, or financial data being uploaded into external systems without proper controls in place. Without visibility, organisations are flying blind.
Risk #4 New Attack Surfaces: Prompt Injection, Model Manipulation and Wallet Abuse
AI introduces vulnerabilities unique to its ecosystem. Emerging threats include:
- Prompt injection attacks where malicious inputs manipulate AI outputs
- Credential theft targeting AI systems
- Model manipulation that changes how AI responds
- Wallet abuse overloads AI systems with resource-heavy requests to drive up costs
These risks go beyond data leaks. They strike at the reliability and integrity of AI itself.
Risk #5 Data Breaches That Take Nearly a Year to Contain
Even with strong defences, breaches happen. The aftermath is costly. It takes an average of 292 days to identify and contain a breach involving stolen credentials.
With AI expanding data usage and sharing, the potential blast radius of leaks is greater than ever. New tools, such as Microsoft’s Purview Data Security Investigations, aim to help teams quickly investigate large datasets, but prevention remains paramount.
Risk #6 The Trust Gap: Regulations, Oversharing and Governance
The biggest risk is trust. If employees, customers, or regulators lose faith in how organisations secure AI, the fallout could be devastating.
Key challenges include:
- Preventing oversharing of sensitive data in generative AI apps
- Defending against backdoors in AI models
- Staying compliant with rapidly evolving global AI regulations
Despite recognition of these risks, Microsoft found that while 57% of organisations reported AI-related security incidents, 60% have yet to implement AI controls.
Security is the Foundation of AI’s Future
“Microsoft has been at the forefront of securing AI and we are excited about our new advanced capabilities to help you secure your AI investments.”
- Vasu Jakkal, CVP, Microsoft Security
The message is clear: AI must be secured if it is to be trusted. Without addressing these risks, shadow AI, evolving nation-state threats, prompt injection and regulatory gaps, AI could become as much a liability as an advantage.
As Microsoft executives stressed, “Security is a team sport.” Defenders must leverage AI-powered protections, adopt end-to-end governance and rethink security not in silos, but as a connected graph. Only then can organisations truly unlock AI’s potential without leaving the door wide open to attackers.
Cyber Maturity: From Risk to Resilience
Addressing AI risks is only part of the picture. True resilience comes from building cyber maturity, the ability of an organisation to move from reactive defence to proactive, strategic security.
Cyber maturity is now a boardroom benchmark. It’s not just about having tools in place; it’s about culture, governance and the capacity to adapt to evolving threats.
Organisations that invest in cyber maturity:
- Anticipate risks rather than simply responding to them
- Build resilience across people, processes and technology
- Earn trust from regulators, customers and partners
In 2025, cyber maturity will define the difference between those who struggle to keep up and those who thrive in the face of change.
“Cyber maturity isn’t just about having the latest tools, it’s about embedding resilience into every layer of the organisation. As AI reshapes the threat landscape, boards need to recognise that security and trust are no longer operational issues. They’re strategic ones.”
- Luke Elston, Microsoft Practice Lead, CyberOne
Learn More About Cyber Maturity
Want to learn how your organisation can benchmark its cyber maturity and turn risk into resilience? Register now for our free webinar on Thursday, 30th October:
From Risk to Resilience: Why Cyber Maturity is the New Boardroom Benchmark