
Front Door Still Open: AI Hype vs Real Cyber Risk

By Philip Ridley, Cyber Risk Management Director | Jan 26, 2026

The Patterns We See Every Week 

The external narrative is AI-fuelled nation-state attacks. The internal reality is incomplete MFA (Multi-Factor Authentication) coverage, weak Conditional Access and unpatched servers. 

Leading 2026 cyber security forecasts point to rising geopolitical tension, AI-driven disruption and analyst burnout. All are real. But run a Microsoft 365 Security Review today and you confront something far less glamorous and just as urgent: the front door remains wide open. 

Clients want urgent answers about AI-powered attacks and zero-day exploits. Yet upon review, admins lack MFA, legacy authentication persists and Conditional Access is riddled with risky exceptions. Ransomware groups exploit flat networks, weak privilege controls and untested recovery plans, and they can strike at any moment. 

The belief that needs to shift this year: owning the tools does not equate to readiness. Consistently operating the basics is what counts. Tools and AI multiply that foundation; they do not replace it. 

What Licensed-But-Unused Actually Looks Like 

A typical Microsoft 365 E3 or E5 tenant appears to have powerful controls on paper. In reality, the effective control surface is far smaller. 

MFA is technically there, but it is not universal: a one-time campaign that did not reach all users. Legacy protocols are still allowed, bypassing MFA entirely. Conditional Access is present, but not decisive: a handful of broad policies with exceptions layered on exceptions to “get things working again”. Entra ID Protection is enabled but not tuned or actioned. PIM (Privileged Identity Management) is licensed but not implemented, or partially configured and then abandoned. 
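
To make that gap visible, it helps to pull the actual policy state rather than rely on memory. Below is a minimal sketch, not a production script, that lists Conditional Access policies and counts their user and group exclusions via Microsoft Graph. It assumes an app registration with the Policy.Read.All permission; token acquisition is left out.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def conditional_access_inventory(token: str) -> None:
    # Pull every Conditional Access policy and count its user/group exclusions.
    headers = {"Authorization": f"Bearer {token}"}
    resp = requests.get(f"{GRAPH}/identity/conditionalAccess/policies", headers=headers)
    resp.raise_for_status()
    for policy in resp.json().get("value", []):
        users = policy.get("conditions", {}).get("users", {}) or {}
        exclusions = len(users.get("excludeUsers") or []) + len(users.get("excludeGroups") or [])
        # state is "enabled", "disabled" or "enabledForReportingButNotEnforced"
        print(f"{policy['displayName']}: state={policy['state']}, exclusions={exclusions}")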

The gap persists for 5 reasons: 

  1. MFA & Conditional Access Are Seen As One-Off Projects, Not A Journey
    Organisations treated rollout as a one-off task. They did not put in place ongoing governance for exceptions, a process to ensure joiners inherit the right controls or regular review of sign-in logs and bypass patterns. Coverage erodes over time as the organisation evolves, but no one owns “MFA completeness” as a KPI (key performance indicator). 
  2. Fear Of Disruption To Legacy Systems 
    Security teams quietly weaken Conditional Access to avoid breaking key apps, service accounts or scripts. Legacy authentication stays enabled for convenience, broad exclusions are created and “trusted networks” persist despite shifts in work patterns. This buys short-term peace at the cost of long-term risk. 
  3. Ownership Is Fuzzy
    Identity sits at an awkward intersection. The identity team focuses on uptime and directory health. The security team wants strong policies, but may not control the tenant. The workplace team is concerned about user experience and helpdesk workload. Without a clear RACI (Responsible, Accountable, Consulted, Informed) matrix, no one feels fully empowered to enforce strong Conditional Access. You end up with a compromise: good enough on paper, weak in reality. 
  4. Licence Sprawl Outpaced Operating Model Maturity
    Customers bought E5 or added security bundles for good reasons: commercial deals, consolidation or board pressure. The operating model lags: insufficient time to design and tune Conditional Access, no roadmap for higher-friction controls and limited skills to interpret Entra risk signals. Powerful features remain half-configured or remain idle due to risk aversion. 
  5. Success Metrics Are About Tools, Not Outcomes
    If your KPI (key performance indicator) is “we have Entra ID P2 and Defender XDR (extended detection and response) deployed”, the job looks done. If your KPI is “more than 98% of sign-ins require phishing-resistant MFA, legacy authentication is eliminated and admin accounts are all just-in-time”, the conversation changes. Most organisations report the number of licences and the number of tools deployed. Very few regularly report the percentage of users and apps that can still bypass MFA, the number of stale exceptions and high-risk sign-ins, or the coverage of Conditional Access for high-value assets. Without those outcome metrics, configuration drift goes unnoticed. (A minimal sketch of pulling one such outcome metric from Entra appears at the end of this section.) 

The technical gap can often be closed within weeks, but the real danger is the habit-and-governance gap, which sustains that licensed-but-unused space. This gap leaves you open to attack, right now. 
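
As one example of an outcome metric, the sketch below pulls MFA registration coverage from Entra’s authentication methods report via Microsoft Graph and flags admin accounts that are not yet MFA-capable. It assumes an app with AuditLog.Read.All; measuring enforcement coverage (the share of sign-ins that actually required MFA) would need sign-in log analysis on top of this.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def mfa_registration_coverage(token: str) -> None:
    # Walk the paged authentication methods report and compute coverage.
    headers = {"Authorization": f"Bearer {token}"}
    url = f"{GRAPH}/reports/authenticationMethods/userRegistrationDetails"
    users = []
    while url:
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        body = resp.json()
        users.extend(body.get("value", []))
        url = body.get("@odata.nextLink")  # follow pagination until exhausted
    capable = sum(1 for u in users if u.get("isMfaCapable"))
    exposed_admins = [u["userPrincipalName"] for u in users
                      if u.get("isAdmin") and not u.get("isMfaCapable")]
    print(f"MFA-capable users: {capable}/{len(users)} ({capable / max(len(users), 1):.0%})")
    print(f"Admin accounts not yet MFA-capable: {len(exposed_admins)}")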

Building on Sand vs. Building on Solid Foundations 

Industry analysts highlight AI SOC (Security Operations Centre) agents and workflow augmentation as major trends for 2026. If you drop AI-driven automation on top of a messy foundation, 3 things happen. 

  1. Bad Decisions Are Automated
    AI and automation act on your data. Patchy MFA, unreliable device health, messy identities and numerous exceptions mean “smart” workflows process noisy, incomplete data. You get auto-isolated business servers and blocked accounts, but you cannot distinguish risk from misconfigurations or failing playbooks. Ownership and escalation paths stay unclear. This is not an AI operations centre, just faster, more confident mistakes. 
  2. Alert Fatigue Increases
    In a noisy, misconfigured environment, AI surfaces clusters of noise. Analysts still solve the basics. Overloaded teams face more panels, more “insights” they cannot act on and less trust in automation, leading to default “approve” or “ignore.”
  3. Workarounds Are Hard-Coded Into The Future 
    Most organisations have years of pragmatic shortcuts: blanket exclusions in Conditional Access, service accounts no one wants to touch, shadow IT routes that keep a key process alive. If you build AI-driven workflows around that reality, you enshrine those compromises. When you later try to clean up identity, network or device posture, all your clever runbooks and AI policies start to break. 

Follow this recommended sequence of actions for building resilient security: 

Phase 1: Stabilise Identity & Hygiene 

Move MFA from “mostly there” to near-universal adoption for staff, admins and high-value service accounts. Introduce or tighten Conditional Access baselines for admin roles, external access, high-risk sign-ins and legacy auth. Roll out PIM (Privileged Identity Management) for global admins and key Azure roles. Clean up a small but critical set of high-risk apps and service principals in Entra. Establish baseline controls on endpoints through Defender for Endpoint and attack-surface reduction, and bring local admin rights under control. Make logging complete and consistent into Sentinel or your MXDR (managed extended detection and response) platform. At this stage, you can already have some basic automation, but keep it limited and well understood. 
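
As an illustration of treating those baselines as code rather than console clicks, here is a minimal sketch of a Conditional Access policy that blocks legacy authentication, created via Microsoft Graph in report-only mode first. It assumes an app with Policy.ReadWrite.ConditionalAccess; the break-glass account ID is a placeholder you would replace with your own.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"

# Report-only first, so you can review the impact before enforcing the block.
BLOCK_LEGACY_AUTH = {
    "displayName": "Baseline - Block legacy authentication",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "applications": {"includeApplications": ["All"]},
        "users": {
            "includeUsers": ["All"],
            "excludeUsers": ["<break-glass-account-object-id>"],  # placeholder
        },
        "clientAppTypes": ["exchangeActiveSync", "other"],  # the legacy protocols
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

def create_baseline_policy(token: str) -> None:
    resp = requests.post(
        f"{GRAPH}/identity/conditionalAccess/policies",
        headers={"Authorization": f"Bearer {token}"},
        json=BLOCK_LEGACY_AUTH,
    )
    resp.raise_for_status()
    print("Created policy:", resp.json().get("id"))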

Phase 2: Define Your Operating Model & Playbooks 

Before adding AI agents, clarify human responsibilities. Who owns incidents and decisions? How will you handle account/device compromises, high-risk sign-ins and ransomware? Define what good detection, response and communication look like. Document playbooks. Run tabletop exercises with leadership and IT teams. Validate processes manually or with light-touch automation first. 

Phase 3: Use AI To Assist Analysts, Not Replace Them

In Microsoft environments, let AI summarise incidents, correlate signals and suggest next actions. Let AI draft queries, hypotheses and reports; automate low-risk, reversible actions like extra prompts or ticket creation. Keep humans in control. Use AI to reduce repetitive work, not hand over critical decisions. 

Phase 4: Move Selected Workflows Towards Higher Automation 

Only once you are comfortable with data quality, playbooks and augmented workflows should you consider auto-isolation for well-understood patterns on well-managed devices, automatic step-up authentication for specific identity risk thresholds or automated ticketing and routing tied to clear owners and SLAs (service level agreements). Even then, you do it in waves, with guardrails and rollback. You measure impact on users and on real incidents, not just incident counts. 
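
A minimal sketch of what “guardrails and rollback” can mean in practice: an auto-isolation step that calls the Microsoft Defender for Endpoint isolate action only when a human approval is recorded, and otherwise falls back to raising a ticket. The approval check is a placeholder, and the API permission (Machine.Isolate) is assumed.

from typing import Optional
import requests

MDE_API = "https://api.securitycenter.microsoft.com/api"

def isolate_with_guardrails(token: str, machine_id: str, approved_by: Optional[str]) -> bool:
    # Guardrail: never isolate without a recorded human approval.
    if approved_by is None:
        print("No approval recorded; raising a ticket instead of isolating.")
        return False
    resp = requests.post(
        f"{MDE_API}/machines/{machine_id}/isolate",
        headers={"Authorization": f"Bearer {token}"},
        json={"Comment": f"Auto-isolation approved by {approved_by}",
              "IsolationType": "Full"},
    )
    resp.raise_for_status()
    print("Isolation action submitted:", resp.json().get("id"))
    return True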

If your foundational controls are weak, prioritise core protections such as robust MFA and Conditional Access before investing in advanced solutions like AI-driven automation. Secure your “front door” first, then layer advanced defences once strong basics are established and well understood. 

Geopolitical Resilience in 90 Days 

When a London firm asks about “geopolitical preparedness”, the first step is to translate the headlines into “what could actually break for you and how you would cope”. 

Structure the conversation around 3 simple questions: 

  1. What Could You Suddenly Lose?

People, locations, networks, cloud regions, key suppliers, payment routes? 

  2. Where Does Your Critical Data Really Live & Who Can Access It? 

Jurisdictions, cloud regions, admin paths, support models? 

  3. If That Broke Tomorrow, What Would You Actually Do In The First 72 Hours? 

Who decides, how do you keep working, and how do you talk to customers and regulators? 

In 90 days, you will not solve geopolitics, but you can materially improve how you would cope through 3 concrete exercises. 

Geopolitical Tabletop Exercise (Half-Day)

Pick one or two scenarios that fit your footprint: a sudden outage or sanctions affecting the region where a key SaaS (software as a service) or outsourcing provider is based; loss of connectivity or degraded performance in a major cloud region on which your workloads rely; or a regulatory change driven by geopolitics that restricts data transfers to or from a country where you currently process data. Run a structured tabletop with the executive team, senior operations, IT, security, risk, communications and customer-facing leads. 

Test who actually makes decisions and on what information. See whether your incident and business continuity plans cope with “upstream failure”, not just internal incidents. Capture specific gaps: missing contact trees, unclear authority to fail over and regulatory blind spots. Outcome: a short, prioritised list of fixes to governance, playbooks and communications you can start immediately. 

Critical Supplier & Cloud Dependency Review 

Identify your top 10–15 critical dependencies: SaaS (Software-as-a-Service), payments, logistics, MSPs (Managed Service Providers) and cloud regions. For each, record jurisdiction, hosting region, exit options, recovery time and whether there is a tested alternative. Map which ones sit on the same cloud provider or in the same geography. 

Look for quick wins. 

  • Where can you implement a simple fallback, a secondary provider, an export path or an offline failover? 
  • Where do you need stronger contractual rights, clarity on data location, or assurances about support access? 
  • Where is there excessive concentration in a single provider or region? 

In a Microsoft-first environment, this often includes ensuring key workloads in Azure are at least zone-redundant and, for the most critical, have a clear cross-region recovery plan. Check whether any “shadow SaaS” holds regulated data in problematic jurisdictions. Outcome: a one-page “concentration and dependency risk” view you can show the board, with 3–5 prioritised actions. 
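
A minimal sketch of that one-page view as a data structure: a simple dependency register with a concentration check by provider and region. The fields mirror the review above; the entries are illustrative, not real suppliers.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Dependency:
    name: str
    kind: str               # SaaS, payments, logistics, MSP, cloud region...
    jurisdiction: str
    hosting_region: str
    provider: str
    recovery_time_hours: int
    tested_alternative: bool

REGISTER = [
    Dependency("Payroll SaaS", "SaaS", "EU", "westeurope", "Azure", 24, False),
    Dependency("Core ERP", "SaaS", "UK", "uksouth", "Azure", 8, True),
    Dependency("Card payments", "payments", "US", "us-east-1", "AWS", 4, True),
]

def concentration_report(register):
    # Where are we concentrated, and which dependencies have no tested fallback?
    print("By provider:", dict(Counter(d.provider for d in register)))
    print("By region:", dict(Counter(d.hosting_region for d in register)))
    print("No tested alternative:", [d.name for d in register if not d.tested_alternative])

concentration_report(REGISTER)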

Data Location & Access Path Check In Microsoft 365 & Azure 

Confirm your Microsoft 365 data residency, where Exchange, SharePoint and Microsoft Teams actually store data. Review Azure region usage for your key workloads. Map admin and support paths: which identities, roles and partners have global access. Prioritise quick, realistic improvements: use Conditional Access to restrict sensitive admin access by country or network, where feasible. Tighten guest and partner access in Entra to ensure you know which external organisations can touch what. For highly sensitive workloads, plan or pilot a move to specific regions that align with regulatory or data sovereignty requirements. 

Document and, where possible, limit support arrangements that allow unrestricted cross-border admin access. Outcome: you can answer, with evidence, “where is our critical data and who can get to it internationally”, which is exactly what regulators and customers will ask in a crisis. 
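
One small piece of that evidence can be gathered programmatically. The sketch below lists guest accounts in Entra via Microsoft Graph, assuming an app with User.Read.All; combining it with role membership and sign-in location data would complete the “who can get to it internationally” picture.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def list_guest_accounts(token: str) -> list:
    # Advanced query: filtering on userType needs $count=true plus this header.
    headers = {"Authorization": f"Bearer {token}", "ConsistencyLevel": "eventual"}
    url = (f"{GRAPH}/users?$filter=userType eq 'Guest'"
           "&$select=displayName,userPrincipalName&$count=true")
    guests = []
    while url:
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        body = resp.json()
        guests.extend(u["userPrincipalName"] for u in body.get("value", []))
        url = body.get("@odata.nextLink")
    return guests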

When an organisation asks about geopolitical resilience, keep it grounded. First 90 days: run one geopolitical tabletop, map top dependencies and concentration risk, baseline data location and admin access in Microsoft 365 and Azure. Next step: ensure identity and hygiene are tidy so your response is credible; implement a small number of technical and contractual changes, and embed geopolitical scenarios into your regular continuity and incident testing. 

If you do those three things well, you move from abstract anxiety about “geopolitics” to concrete answers: what might break first, how you would decide and communicate, where you are over-exposed today and what you are doing about it. 

Institutional Resilience: Service, Not Heroics 

When an organisation loses a key SOC (Security Operations Centre) analyst or their sole in-house security person, 3 things tend to break first. 

  1. Triage Quality & Prioritisation.

Alerts still come in. Tools still run. But nobody is truly owning triage, so queues grow and important signals get buried. Junior or generalist IT staff start closing tickets “to keep the noise down”. Escalation paths become unclear, so issues bounce around rather than being fixed. The visible symptom is “we still have a SOC” or “we still have a SIEM (security information and event management)”, but the effective time to detect and respond quietly lengthens. 

  2. Context & Tribal Knowledge 

The person who left usually held an unwritten map of “what normal looks like” for your business, with mental notes on noisy systems, awkward legacy apps, VIP behaviours and relationships with IT, networks, developers and business owners that made things move. Without that, an incident that used to be resolved in an hour now needs multiple calls, emails and approvals because no one knows which door to knock on. 

  3. Tuning, Hygiene & Continuous Improvement 

When the key person leaves, you feel it most 3–6 months later. Detections stop being tuned, so either noise rises or visibility drops. New projects and apps go live without the security monitoring keeping pace. Playbooks and documentation drift out of date because no one “owns the glue”. You might still be meeting SLAs (Service-Level Agreements) on paper, but your posture is slowly decaying. 

All of that is exactly what MXDR (managed extended detection and response) and managed services are designed to prevent. 

When structuring MXDR for a client, the mental model is simple: the service must survive key-person loss on both sides, in your team and ours. 

5 Design Principles 

1. Service, Not Heroics 

You get a named service lead and a small core pod who know your environment well. Behind them, a 24×7 team working from shared playbooks and a common platform. Clear handover between shifts and locations so knowledge lives in the system, not just in individuals. If your in-house person leaves, the MXDR service does not suddenly become blind, as we were never relying on that individual to carry the whole service. 

2. Playbooks In Tooling, Not In Heads 

Get agreed processes into Sentinel playbooks, Defender automated response rules, case management and runbook systems, so that “how we handle a high-risk sign-in” or “what happens when Defender flags ransomware behaviour” is documented, implemented and rehearsed across the wider team. When a key analyst leaves, the playbook still runs. A new analyst can step in and follow the pattern. 

3. Shared Context & Enrichment 

Remove reliance on a single person’s memory by enriching alerts with platform-level business context: critical apps, VIPs (very important persons) and crown jewels. Keep a living “customer run sheet” that documents networks, key apps, exceptions and contacts. Use tags and classifications so any analyst can see “this is high business impact” at a glance. If your only in-house person leaves, lean harder on that shared context so you do not spend months re-explaining the estate. 
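
A minimal sketch of what that shared context can look like in code: a run-sheet lookup that enriches an incoming alert with business impact and ownership before triage. The entries and field names are illustrative.

# Illustrative run sheet: device or app name -> business context any analyst can use.
RUN_SHEET = {
    "finance-sql-01": {"impact": "high", "owner": "Finance IT", "notes": "Month-end critical"},
    "vip-ceo-laptop": {"impact": "high", "owner": "Exec support", "notes": "VIP device"},
    "dev-test-042":   {"impact": "low",  "owner": "Engineering", "notes": "Disposable test VM"},
}

def enrich_alert(alert: dict) -> dict:
    # Attach business impact and ownership to the alert before triage.
    context = RUN_SHEET.get(alert.get("device", ""), {"impact": "unknown", "owner": "unassigned"})
    return {**alert, "business_impact": context["impact"], "business_owner": context["owner"]}

print(enrich_alert({"device": "finance-sql-01", "title": "Possible ransomware behaviour"}))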

4. Clear Split Of Responsibilities 

Single points of failure often hide in blurry boundaries. Document what the MSP owns: end-to-end detection, investigation and containment recommendations. Document what you own: approving actions, comms, regulatory decisions and patching. Who signs off on higher-risk actions? If the one “security person” leaves, that RACI is the safety net. We already know the alternative contacts, escalation routes and decision-makers because we agreed them upfront. 

5. Resilience In Our Own SOC 

Analyst fatigue and turnover also hit providers. Internally, we rotate analysts across customers and workloads to avoid burnout, cross-train people so no customer is tied to one analyst, and measure the quality and consistency of investigations, not just the volume of tickets. We ensure our runbooks, documentation and automations are up to date and peer-reviewed. That is how we prevent your service quality from being held hostage by “that one brilliant person” on our side. 

When boards ask about this, put it simply. Your risk is not “will that analyst resign”. Your risk is “if they do, does our ability to detect and respond drop off a cliff?” Structure MXDR and managed services so the answer is “no”. You might lose some familiarity for a while, but the core outcomes (time to detect, time to respond and the ability to explain what is happening) stay stable because they are built into a service model, not balanced on one pair of shoulders. 

The Q1 Sprint Plan 

If a CISO (Chief Information Security Officer) or board asks in January, “What should we actually pilot in Q1 to be credibly ready for these 2026 trends?” the answer is pick 2 or 3 cross-cutting pilots, not ten tasks. They should harden the basics, demonstrate some 2026 capabilities and provide metrics the board can track. 

Sprint 1: Identity & Admin Control Baseline 

Goal: close the “front door” and create hard metrics on identity risk that you can show the board. Scope for Q1: pick one tenant and go deep, not wide. Move MFA from “mostly there” to near-universal adoption for staff, admins and high-value service accounts. Introduce or tighten Conditional Access baselines for admin roles, external access, high-risk sign-ins and legacy auth. Roll out Privileged Identity Management for global admins and key Azure roles. Clean up a small but critical set of high-risk apps and service principals in Entra. 

2026 Themes: AI and SOC (security operations centre) automation need reliable identity signals. The software supply chain and CNAPP (cloud-native application protection platform) both rely on strong admin and pipeline access control. Institutional resilience improves because the controls live in policy, not in one person’s memory. How we measure success: by the end of Q1 you should be able to answer, with numbers: 

  • What percentage of interactive sign-ins now require MFA (target more than 97% for staff, 100% for admins)? 
  • How many accounts can still use legacy authentication (target as close to zero as possible, with explicit, time-bound exceptions)? 
  • What percentage of admin roles are now under PIM with just-in-time access? 
  • What reduction have we seen in risky sign-ins and impossible-travel alerts over 4–6 weeks? 

For the board, this becomes a simple chart: identity risk exposure trending down, backed by concrete control changes. 

Sprint 2: Cloud & Data Posture On One Critical Business Service 

Goal: Prove that your Microsoft stack can deliver CNAPP-style control and data-centric security for a real workload, not just a slide. Scope for Q1: choose one important service or application, ideally one that runs largely on Azure or Microsoft 365, handles customer, financial or regulated data and would be hurt if it went down for a day. Then run a focused posture sprint. Use Defender for Cloud to onboard all its resources, address the top critical recommendations and raise the Secure Score for that scope. Implement basic infrastructure-as-code guardrails or policies for new changes, for example, no public storage, enforced encryption and logging. Classify and label the key data involved with Microsoft Purview and apply at least one meaningful DLP or access policy. Wire the main signals into Microsoft Sentinel or your MXDR platform and test at least one playbook. 
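
As an example of such a guardrail in a deployment pipeline, the sketch below scans an ARM template for storage accounts that allow public blob access or do not enforce HTTPS-only traffic, and fails the build when it finds any. The property names follow the Microsoft.Storage resource schema; the template path is whatever your pipeline passes in.

import json
import sys

def check_storage_guardrails(template_path: str) -> list:
    # Flag storage accounts that allow public blob access or skip HTTPS-only traffic.
    with open(template_path) as fh:
        template = json.load(fh)
    findings = []
    for res in template.get("resources", []):
        if res.get("type") != "Microsoft.Storage/storageAccounts":
            continue
        props = res.get("properties", {})
        if props.get("allowBlobPublicAccess", False):
            findings.append(f"{res.get('name')}: public blob access enabled")
        if not props.get("supportsHttpsTrafficOnly", False):
            findings.append(f"{res.get('name')}: HTTPS-only traffic not enforced")
    return findings

if __name__ == "__main__":
    issues = check_storage_guardrails(sys.argv[1])
    for issue in issues:
        print("GUARDRAIL:", issue)
    sys.exit(1 if issues else 0)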

2026 Themes: CNAPP for cloud workloads on a real service, not a lab. Software supply chain maturity at the runtime end of the pipeline. Data-centric security as a foundation for safe AI use on that data in future. How we measure success: by the end of Q1, we should be able to show: 

  • The Secure Score for that workload before and after, and the number of high- and medium-risk issues closed. 
  • The percentage of the workload’s data now discovered and labelled in Purview. 
  • The number of new or improved detections and playbooks in place for that service. 
  • A simple heatmap for that application, with identity, infrastructure and data monitoring all moved from red or amber towards amber or green. 

The board does not need all the details. They need to see that one critical service is now materially more resilient and that the same pattern can be applied to others. 

Sprint 3: Incident & Geopolitics Exercise With AI-Assisted SOC

Goal: Test how your organisation makes decisions under pressure, while quietly introducing AI augmentation in the SOC on a controlled basis. Scope for Q1: two threads in parallel. Scenario exercise: run a half-day tabletop involving IT, security, risk and at least one business leader based on a realistic scenario, for example a major identity compromise that affects a key supplier and your own tenant at the same time, or loss or sanctioning of a cloud region or critical SaaS provider that underpins a London operation. Work through who decides what in the first 24–72 hours, how you communicate with customers, regulators and staff, what data and dashboards you wish you had on the day. Capture gaps in playbooks, contacts, authority and reporting. AI augmentation pilot: in your Microsoft SOC or MXDR environment, enable specific AI-based incident summarisation and investigation helpers in Defender or Sentinel for a subset of alerts. 

Define which types of incident the analysts will use AI for and where human approval is always required. Collect feedback from analysts on time saved, clarity of suggested actions and any errors. 

2026 Themes: Institutional and geopolitical resilience through tested decision-making. AI SOC and workflow augmentation proven in a narrow, safe slice. Better understanding of what the board will actually want to see during complex, externally driven incidents. How we measure success: from the exercise, the time it takes at the tabletop to reach clear decisions on containment, communication and recovery, the number of roles, contact points and playbooks that needed updating as a result, and a short list of concrete follow-up actions with owners and deadlines. From the AI pilot: percentage reduction in time to triage and document selected incidents, based on analyst feedback and sample timing; analyst confidence rating for AI suggestions (e.g., useful, neutral, harmful); and any reductions in repeat work, such as manual query writing or copy-pasting into reports. For the board, you can say: we have rehearsed a geopolitically flavoured incident, updated our plans and started using AI to make our SOC more efficient without giving it unchecked control. 

If I were in front of a board in January, I would frame it like this. In Q1, we are not trying to “solve 2026”. We are running three sprints, each delivering a tangible improvement now and positioning us for AI, CNAPP and resilience in 2026. Sprint 1 hardens identity and admin access, providing us with clean metrics. Sprint 2 proves cloud and data posture on a real business service. Sprint 3 tests our crisis decision-making and introduces AI into the SOC in a controlled way.  

By April, you should be able to demonstrate improved numbers on identity risk, one critical service that is demonstrably more resilient, updated playbooks for complex incidents and evidence that AI can safely take some load off your analysts. That is a credible story to carry into 2026: you are not just buying what appears on industry roadmaps, you are proving the foundations and the future in small, well-chosen pilots. 

The 3 Nuances That Separate Real Readiness from PowerPoint Readiness 

The Change Tax On The Organisation

Most firms have very limited change capacity. Security is competing with CRM (customer relationship management) rollouts, ERP (enterprise resource planning) upgrades, M&A (mergers and acquisitions), restructures and cost-cutting. If a CISO does not factor that into the plan, one of two things happens. You design a “perfect on paper” roadmap that never ships, or you push changes through and quietly burn trust with the business. Readiness is not just “what controls are theoretically in place”, it is “what level of friction the organisation can absorb while still delivering for customers”. The smart CISOs are ruthless about sequencing. One or two big security changes per quarter. Always tied to visible business benefit or regulatory pressure. Always with somewhere for the pain to go: training, support and communications. If you ignore the change tax, the 2026 agenda becomes undeliverable, regardless of how good your tooling is. 

The Missing Middle Between Strategy & Tickets

The symptoms are familiar: strategy documents that never make it past the steering group; SOC reports that never translate into backlog items or changed behaviours; data and AI policies that exist on paper, but that nobody in the business feels personally accountable for. Closing the 2026 gap is less about adding policy at the top and more about creating clear, owned and funded work for the “missing middle” who actually run systems and processes: product owners, ops managers, regional leads and data owners. That is where solutions like CISO-as-a-Service or strong internal governance really earn their keep. Someone has to sit in the middle and turn strategy into concrete changes in products, operations and data. 

Courage To Decommission, Not Just Add 

Very few roadmaps explicitly include turning off old tools, retiring bad exceptions, killing reports no one reads and ending relationships with suppliers who cannot meet requirements. Yet if you do not actively remove items, three problems arise. Analyst fatigue worsens because they are managing multiple platforms and reporting lines. Attack surface grows in ways no one is funded to manage. Executives lose patience because the cost and complexity keep rising without a clear simplification story. The organisations that make real progress are the ones that say, “If we turn on these Defender for Cloud capabilities, which legacy scanner can we retire?”, “If we standardise on one way of doing admin, which ad hoc methods do we shut down?”, “If we bring AI into the SOC, which manual reports do we stop writing?”. A credible 2026 journey includes a decommissioning plan. Readiness is as much about what you stop doing and stop running as what you add. 

To Recap 

Fix identity and hygiene. Use Microsoft as your backbone. Treat AI as an accelerator, not a foundation. Test geopolitics and institutional resilience in real exercises. Respect the change tax, empower the missing middle and be brave enough to turn old stuff off. 

If organisations shift their mindset on those points over the next 12 months, most of the 2026 agenda becomes achievable rather than aspirational. 

Book a Free 30-Minute Cyber Consultation