Across UK organisations, attackers are using AI to scrape LinkedIn profiles, auto-generate phishing emails that reference real projects and join existing email threads at exactly the right moment. These are not theoretical risks. They are live incidents happening quietly, at scale, inside businesses that believe their security is reasonable.
The uncomfortable truth is that 86% of organisations have already encountered at least one AI-related phishing or social engineering incident [https://www.brightdefense.com/resources/phishing-statistics/]. In 2026, the organisations still treating AI threats as a future problem will be the ones explaining to their boards why paid security controls were switched off when the breach occurred.
Most leaders still think of AI attacks as deepfakes and science fiction. The reality is far more practical and far more dangerous.
AI-Powered Phishing Now Bypasses Human Instinct:
Attackers scrape company websites, LinkedIn and breached data to auto-generate emails that reference real suppliers, ongoing projects and internal language. These messages no longer look suspicious. They look like normal business traffic.
Research shows that 60% of recipients fall for AI-generated phishing emails, with click-through rates reaching 54% compared to just 12% for traditional phishing [https://www.strongestlayer.com/blog/ai-generated-phishing-enterprise-threat]. The quality of the attack has fundamentally changed. Awareness training alone is no longer enough.
Business Email Compromise Has Become Thread-Aware:
AI ingests entire stolen mailbox histories, learns normal payment flows and approval language, then joins existing email threads at the right moment. This is not spoofing. This is impersonation that understands context.
Since the popularisation of generative AI tools, there has been a 1,760% year-over-year increase in BEC attacks. What used to account for 1% of all cyber attacks in 2022 now represents 18.6% of all attacks. The average BEC wire transfer request at the start of 2025 was $24,586 [https://hoxhunt.com/blog/business-email-compromise-statistics].
MFA Is Being Exploited Through Behaviour:
AI allows attackers to observe when and how users authenticate, then deliver MFA prompts at moments that feel routine. These attacks are patient and adaptive, avoiding the noise that made earlier MFA fatigue attacks obvious.
The pattern is clear. AI has not made attacks more complex. It has made them faster, quieter and far more targeted.
Independent research into Microsoft 365 security adoption shows widespread configuration and implementation gaps, with many organisations not fully deploying or enforcing included security controls such as MFA, Conditional Access and configuration management [https://www.coreview.com/resource/coreview-state-of-microsoft-365-snapshot].
Identity Protection Is Licensed But Underused:
Most organisations have Entra ID P2 through M365 E5 or EMS E5. MFA is in place. One or two basic Conditional Access policies exist. What is missing are actively enforced user risk and sign-in risk policies, token and session risk controls, full legacy authentication blocking and automated remediation. MFA is treated as the finish line. Identity risk detection is barely used.
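Checking this gap does not require clicking through the portal; it is scriptable. A minimal sketch, assuming an already-acquired Microsoft Graph access token with Policy.Read.All (the TOKEN placeholder is illustrative, not part of any tenant described here), that inventories Conditional Access policies and flags whether sign-in risk enforcement and legacy authentication blocking actually exist:

```python
# Minimal sketch: inventory Conditional Access policies via Microsoft Graph
# and flag whether sign-in risk and legacy-auth blocking are enforced.
# TOKEN is a placeholder; acquire it however your tenant automation does.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-Policy.Read.All>"  # placeholder

resp = requests.get(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
policies = resp.json().get("value", [])

risk_enforced = False
legacy_blocked = False
for p in policies:
    if p.get("state") != "enabled":  # report-only and disabled don't count
        continue
    cond = p.get("conditions") or {}
    grants = (p.get("grantControls") or {}).get("builtInControls", [])
    if cond.get("signInRiskLevels"):  # any enforced risk-based condition
        risk_enforced = True
    # "other" client apps covers legacy authentication protocols
    if "other" in (cond.get("clientAppTypes") or []) and "block" in grants:
        legacy_blocked = True

print(f"Sign-in risk policies enforced: {risk_enforced}")
print(f"Legacy authentication blocked:  {legacy_blocked}")
```

Report-only policies deliberately do not count as enforced here, because that is exactly the gap this section describes.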
Defender for Office 365 Is Running But Not Trusted:
Default anti-phish and malware policies exist. Safe Links runs in audit mode. Safe Attachments is softened to reduce user friction. What is missing are executive- and finance-grade phishing protection, proper impersonation controls, ZAP tuning, and automated investigation and response. Email security exists, but behaviour-based phishing detection is not relied on.
Defender for Endpoint Is Deployed but Not Operated:
The agent is deployed. Alerts are visible. What is missing are ASR rules in block mode, tamper protection enforcement, EDR in block mode, automated investigations and proactive vulnerability management. Endpoints are watched, not actively defended.
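Active defence is also smaller than it sounds. As a hedged sketch of one "act, not watch" step, assuming the Microsoft Defender for Endpoint API and a token carrying the Machine.Isolate permission (machine_id and TOKEN below are placeholders):

```python
# Minimal sketch: isolate a compromised device through the Microsoft
# Defender for Endpoint API. Placeholders throughout; not a turnkey tool.
import requests

MDE_API = "https://api.securitycenter.microsoft.com/api"
TOKEN = "<access-token-with-Machine.Isolate>"  # placeholder

def isolate_machine(machine_id: str, reason: str) -> dict:
    resp = requests.post(
        f"{MDE_API}/machines/{machine_id}/isolate",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"Comment": reason, "IsolationType": "Full"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # a machine action record you can poll for completion

# usage: isolate_machine("<device-id>", "EDR block-mode alert, out of hours")
```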
Defender for Cloud Apps Is the Blind Spot:
Defender for Cloud Apps is almost always included via E5, yet very little or nothing is in place. OAuth app governance, anomalous app behaviour detection, session controls for risky users and regular app permission reviews are missing. This is where long-lived, low-noise compromises hide.
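A first-pass app permission review can start as a script rather than a project. A minimal sketch against Microsoft Graph, assuming a token with Directory.Read.All; the scope watch-list below is illustrative and should be tuned to your own risk appetite:

```python
# Minimal sketch: list tenant-wide delegated OAuth grants via Microsoft
# Graph and flag mailbox- and file-level scopes worth reviewing first.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-Directory.Read.All>"  # placeholder
RISKY_SCOPES = {"Mail.Read", "Mail.ReadWrite", "Mail.Send", "Files.ReadWrite.All"}

url = f"{GRAPH}/oauth2PermissionGrants"
while url:
    resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    for grant in data.get("value", []):
        hits = set((grant.get("scope") or "").split()) & RISKY_SCOPES
        if hits:
            # clientId is the service principal holding the grant; resolve
            # its displayName via /servicePrincipals/{id} in a real review.
            print(f"Service principal {grant['clientId']} holds: {sorted(hits)}")
    url = data.get("@odata.nextLink")  # follow paging until exhausted
```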
Sentinel Is Owned But Not Run:
Often enabled, often under-ingested. A handful of connectors exist. Default analytics rules run. What is missing are full identity and email log ingestion, UEBA tuned to the organisation, automation and response playbooks and 24x7 monitoring and action. A SIEM exists on paper, not as an operational SOC.
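Under-ingestion is cheap to test. A minimal sketch, assuming a Sentinel-enabled Log Analytics workspace and a token for the Log Analytics query API (WORKSPACE_ID and TOKEN are placeholders), that counts rows per critical table over the last 24 hours:

```python
# Minimal sketch: verify identity and email telemetry is actually flowing
# into the workspace by counting rows per table over the last 24 hours.
import requests

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder
TOKEN = "<access-token-for-api.loganalytics.io>"  # placeholder

# SigninLogs/AuditLogs need the Entra ID connector; EmailEvents needs the
# Defender XDR connector. isfuzzy tolerates tables that do not exist yet.
KQL = """
union isfuzzy=true SigninLogs, AuditLogs, EmailEvents
| where TimeGenerated > ago(24h)
| summarize Rows = count() by Table = Type
"""

resp = requests.post(
    f"https://api.loganalytics.io/v1/workspaces/{WORKSPACE_ID}/query",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"query": KQL},
    timeout=60,
)
resp.raise_for_status()
for row in resp.json()["tables"][0]["rows"]:
    print(f"{row[0]}: {row[1]} rows in last 24h")
```

A zero row count against SigninLogs or EmailEvents is the blind spot this section describes, found in one query.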
The biggest gap is response capability. Even where detection exists, alerts are reviewed once daily or less. There is no out-of-hours ownership. No identity or BEC runbooks. No structured learning after incidents. Detection without response only shortens the time to impact.
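Closing the response gap starts with routing, not headcount. A hedged sketch that polls Microsoft Graph for new high-severity alerts and hands them to whatever paging mechanism the organisation owns; notify() is a placeholder stub, and the token needs SecurityAlert.Read.All:

```python
# Minimal sketch: poll Microsoft Graph for new high-severity alerts so
# that something fires at 2am, not at the next morning's ticket review.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-SecurityAlert.Read.All>"  # placeholder

def notify(alert: dict) -> None:
    # placeholder: wire this to whichever on-call channel is owned 24x7
    print(f"[PAGE] {alert.get('severity')}: {alert.get('title')}")

resp = requests.get(
    f"{GRAPH}/security/alerts_v2",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"$filter": "severity eq 'high' and status eq 'new'", "$top": 50},
    timeout=30,
)
resp.raise_for_status()
for alert in resp.json().get("value", []):
    notify(alert)
```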
Attackers do not defeat Microsoft security. They wait for the unused parts.
When you sit down with a CIO or CFO and show them they are paying for capabilities they are not using, the reasons are remarkably consistent.
"We're worried it will break the business."
This is the number one reason. It comes out as "We can't lock people out" or "The business won't tolerate disruption" or "We tried tightening security once and it caused issues". This fear is usually based on a bad experience years ago, amplified by anecdote rather than evidence and never revisited with modern controls. The irony is that they accept far more risk from a breach than from a controlled access failure.
"We don't have the skills to run it properly."
This is the most honest answer. Microsoft security has become operationally complex. Turning controls on without understanding blast radius, knowing how to roll back or having runbooks feels reckless to internal teams. This is not incompetence. It is a skills-to-scope mismatch.
"We don't have capacity to manage the noise."
Teams know that detection creates obligation, obligation creates cost and cost creates scrutiny. So controls stay off. This is rational behaviour, not negligence.
"We assumed it was already on."
More common than anyone likes to admit. Especially with bundled E5 licensing, security inherited through M&A or legacy tenants. CFOs in particular say "I thought that's what we were paying for". Technically true. Operationally false.
"No one ever made a clear decision."
Features landed gradually. Responsibility was unclear. Security posture drifted. No formal decision was made to leave controls off. They were simply never consciously switched on.
The real reason is structural. Microsoft security assumes continuous operation; IT is built for project delivery. That mismatch creates paralysis.
Controls are not off because leaders are careless. They are off because no one felt safe turning them on.
The business breaks first.
The operational shift does not happen naturally or proactively. It is almost always forced, and the forcing function is an incident that lands faster than the organisation can react.
In 2026, as AI compresses attack timelines from weeks to hours, IT functions optimised for planned change, business-hours response and ticket queues will simply run out of time. AI-driven attacks move laterally in minutes, exploit identity and trust rather than malware, and do their damage before the next stand-up meeting.
The first visible failure is identity. Identity abuse, token replay, OAuth persistence and privilege escalation without malware. There is no "system down" moment. Just quiet, valid access doing the wrong thing. By the time a ticket is raised, the damage is already done.
The second failure is human response lag. Even when detection exists, alerts fire outside business hours. No one is on call. No one feels authorised to act. So organisations wait. AI does not. This is where hours matter and where losses occur.
The third failure is governance and accountability. After the incident, it becomes clear that no one owned the risk explicitly, no one approved the residual exposure and no one signed off on "accepting dwell time". This becomes a board issue very quickly.
The shift does not come from strategy decks, threat reports or vendor roadmaps. It comes from a real financial loss, a regulator asking uncomfortable questions, a board demanding to know why paid controls were not active or a cyber insurer changing terms overnight.
In 2026, breach-to-impact will be measured in hours, not days. "Next business day response" will be indefensible. MFA alone will be treated as baseline hygiene, not protection. Identity and email will be considered critical infrastructure.
Organisations still operating security as a project will be exposed.
When an organisation decides to shift before the incident, the first 90 days are not about "turning everything on". They are about changing how security behaves day to day without breaking the business.
Days 1–30: Stabilise & See Clearly:
Establish 24x7 monitoring and response ownership, even if narrow in scope. Turn on identity, email and endpoint telemetry fully. Enable risk detection in report-only where possible. Baseline normal behaviour for users, devices and apps. Kill obvious debt like legacy auth, unused global admins and stale OAuth apps. No broad block policies. No sweeping Conditional Access changes. No "security big bang".
What gets measured: mean time to detect for identity and email incidents, percentage of logs ingested and visible, number of high-risk misconfigurations removed and volume of alerts versus actionable incidents. Proof it's working: incidents are being seen that were previously invisible and response ownership is clear at 2am, not just at 10am.
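Mean time to detect, the first metric above, reduces to timestamp arithmetic once each incident records a first-event and a first-alert time. A minimal sketch with illustrative data, not real incidents:

```python
# Minimal sketch: compute MTTD in minutes from (first_event, first_alert)
# timestamp pairs, however your tooling exports them.
from datetime import datetime
from statistics import mean

incidents = [  # illustrative data only
    ("2026-01-05T02:14:00", "2026-01-05T02:31:00"),
    ("2026-01-09T19:02:00", "2026-01-09T23:45:00"),
]

def mttd_minutes(pairs):
    deltas = [
        (datetime.fromisoformat(alert) - datetime.fromisoformat(event)).total_seconds() / 60
        for event, alert in pairs
    ]
    return mean(deltas)

print(f"MTTD: {mttd_minutes(incidents):.0f} minutes")
```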
Days 31–60: Contain & Respond Faster:
Move identity risk policies from report-only to enforced for high-risk scenarios. Enable EDR in block mode on priority devices. Turn on automated investigation and response for email and endpoint. Introduce OAuth app governance and session controls. Build and test runbooks for BEC and identity compromise. No perfection tuning. No full ASR rollout to every device. No alert flood without response paths.
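A BEC or identity-compromise runbook does not need to be elaborate to be testable. A minimal sketch of one containment step via Microsoft Graph, assuming a token with User.ReadWrite.All (user_id is a placeholder): disable the account, then revoke refresh tokens so existing sessions cannot renew:

```python
# Minimal sketch of one identity-compromise containment step:
# disable the account, then revoke its refresh tokens via Microsoft Graph.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-User.ReadWrite.All>"  # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def contain_identity(user_id: str) -> None:
    # 1. Block fresh sign-ins
    requests.patch(
        f"{GRAPH}/users/{user_id}",
        headers=HEADERS,
        json={"accountEnabled": False},
        timeout=30,
    ).raise_for_status()
    # 2. Invalidate refresh tokens so existing sessions cannot renew
    requests.post(
        f"{GRAPH}/users/{user_id}/revokeSignInSessions",
        headers=HEADERS,
        timeout=30,
    ).raise_for_status()
    print(f"Contained {user_id}: account disabled, sessions revoked")
```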
What gets measured: mean time to respond, number of incidents auto-contained, reduction in manual analyst effort per incident and time from alert to first action. Proof it's working: incidents are being stopped before business impact and security stops being reactive and starts interrupting attacks.
Days 61–90: Prove Value & Lock In Behaviour:
Extend protections to remaining user groups and endpoints. Tune policies based on real incident data. Introduce executive-level reporting tied to business risk. Run incident simulations with real telemetry. Formalise ownership and escalation paths. No vanity dashboards. No generic "number of alerts" reporting. No security theatre.
What gets measured: dwell time trend, percentage of incidents detected outside business hours, financial risk avoided or incidents averted and control utilisation versus licence entitlement. Proof it's working: leaders can see risk reduction, not just activity and security becomes part of operations, not an annual project.
The first 90 days do not make you "secure". They make you operationally dangerous to attackers. That is the goal.
Boards do not fund capability. They fund avoided pain.
After 90 days, we do not lead with dwell time graphs or MITRE heatmaps. We translate operational change into financially legible deltas.
"Breaches Avoided" Reframed As Loss Events Interrupted.
Before: 1–2 BEC attempts per quarter, detection reliant on user reporting, containment time measured in days.
After 90 Days: X BEC attempts detected and contained automatically, 0 progressed to payment change or fund movement, average containment time reduced from days to minutes. Each interrupted event is mapped to average BEC loss values for the sector. CFOs understand this immediately because it looks like loss prevention, not cyber spend.
Dwell Time Converted Into Exposure Hours Eliminated:
Before: Identity and email incidents visible only in business hours, effective dwell time 12–72 hours or unknown.
After 90 Days: X incidents detected outside business hours, average dwell time reduced to under 60 minutes, zero incidents exceeded defined exposure thresholds. Reducing the exposure window is framed as risk compression. It turns an abstract metric into something intuitively dangerous.
Insurance & Audit Posture Improvements:
Before: Controls present but unproven, incident response largely manual, evidence gaps for underwriters and auditors.
After 90 Days: Documented 24x7 monitoring and response, measurable containment capability, evidence of active control utilisation. This ties security operations directly to cost of capital and risk transfer.
Cost of Inaction vs. Cost of Operation:
Before: Security spend largely fixed, risk largely unquantified, loss impact uncapped.
After 90 Days: Predictable monthly operational cost, quantified avoided loss events, bounded exposure with defined response times. This reframes the decision as "Would we rather fund uncertainty or fund control?" That is a CFO question, not a security one.
"Near Misses" Made Visible For The First Time:
We show incidents that would never have been seen before, how close they were to impact and where automation or out-of-hours response stopped them. Boards are often surprised by the volume of avoided incidents. The reaction is usually "So this was already happening?" Yes. That is the point.
The shift sticks when a board realises they were already exposed, the exposure is now measurable, the cost of control is lower than the cost of loss and the organisation is no longer relying on luck.
If a CIO reads this in early 2025 and thinks "I've got 12–18 months to get ahead of this", the single biggest mistake they will make is spending that time trying to "finish the security programme".
It feels sensible. It is exactly the wrong instinct.
Most CIOs will think "We need a roadmap", "We should modernise the architecture", "Let's rationalise tools and uplift maturity" and "We'll operationalise later once it's tidy". That leads to strategy decks, target operating models, licence changes and long implementation plans. All while attack timelines are collapsing underneath them.
By the time the programme looks finished, the threat model has already moved on.
AI does not care whether your programme is well-designed, aligned to frameworks or on track. It only cares whether someone is watching, someone can act immediately and someone owns the outcome at 2am. A beautiful future-state architecture with no live operational muscle is functionally defenceless.
Operate First. Optimise Later.
Instead of asking "What should our security look like in 18 months?", ask "What must be running continuously in 90 days?"
The single most valuable use of the next 12–18 months is to get continuous detection and response working early, then iterate. That means 24x7 identity, email and endpoint monitoring, clear authority to act without escalation delays, runbooks for the small number of incidents that actually matter and metrics that show response speed and impact avoided.
Once that exists, architecture decisions get better, control tuning becomes evidence-led, tool choices become obvious and the programme stops being theoretical.
In 2026, "ahead of this" does not mean perfect Zero Trust diagrams, every feature switched on or no incidents. It means breaches are detected in minutes, incidents are contained before business impact, leadership is never surprised by an exposure and security risk is bounded and visible.
If you wait to operationalise until the programme feels "ready", AI will take your buffer away.
The organisations that cope in 2026 will not be the ones with the best plans. They will be the ones that started operating sooner than felt comfortable.
AI-driven threats are not arriving in 2026. They are already here, moving quietly inside organisations that believe their security is reasonable.
The gap is not tools or licences. It is operational capability. The shift from project-based security to continuous operation is not optional. It is the difference between controlled change now and forced change after an incident.
If you recognise the patterns described here, the next step is simple. Audit what you have switched on versus what you are paying for. Baseline your current detection and response capability. Identify the 30-day sprint that moves you from watching to acting.
Join our executive webinar Build, Scale, Secure: Harness AI - your 2026 cyber security roadmap. In 60 minutes, you will learn how to prioritise AI-accelerated threats, navigate the UK Cyber Security and Resilience Bill and EU mandates, build a business resilience case your board understands, master supply chain risk and prepare for how AI and quantum will shape cyber insurance in 2026. Expect plain English, practical steps and measures your board will back.
Time is the only asset AI is taking away. Use it while you still have it.