CyberOne Blog | Cyber Security Trends, Microsoft Security Updates, Advice

What Are Zero-Day Vulnerabilities & How Can Organisations Reduce Exposure?

Written by Mark Terry | Apr 14, 2026 10:19:25 AM


When a client calls after discovering unusual activity that may have been present for days or weeks, you are no longer in an early-warning containment scenario. You are in incident response and recovery mode.

The priority shifts from surgical containment to controlled eradication of a wider compromise.

If the attacker may have had days or weeks inside the environment, you assume:

  • More than one foothold may exist
  • Identity compromise is likely as important as endpoint compromise
  • Persistence may already be in place
  • Logs may be incomplete or partially tampered with
  • Business impact may spread if you only isolate the first visible symptom

The Visibility Gap Most Mid-Market Firms Miss

The first gap we typically find is a visibility gap. The client cannot see enough, fast enough, across identities, endpoints, email and cloud to spot and contain a zero-day before it spreads.

In practice, that usually shows up as fragmented tools, patchy telemetry and no true 24x7x365 detection and response, rather than a single missing product.

Clients often already have several security products but still lack infrastructure-wide visibility, dedicated expertise and 24x7x365 support, leaving response times slow and inconsistent.

The blind spot is rarely “we had no tools”. It is usually “we had signals, but no unified way to separate early exploit indicators from background noise and act on them at speed”.

Fix Telemetry Coverage Before Adding More Rules

The first change that most consistently improves detection speed is fixing telemetry coverage and correlation before adding more rules.

In plain English, that means making sure the right signals from identity, endpoint, email and cloud are flowing into a unified detection layer, then tuning triage so analysts only see what matters.

The fastest win is not another dashboard.

It is getting the right security signals into one place, cutting the noise and making triage immediate.

Why that works:

  • It turns disconnected alerts into one incident picture
  • It reduces false positives and analyst delay
  • It surfaces early exploit indicators sooner because the signals can be correlated across the attack chain
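At its simplest, that correlation step is a join on user and time across sources. A minimal sketch of the idea (the alert records and field names are illustrative, not from any specific product):

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical alert records; field names are invented for illustration.
alerts = [
    {"user": "j.smith", "source": "email",    "time": datetime(2025, 4, 1, 9, 2)},
    {"user": "j.smith", "source": "endpoint", "time": datetime(2025, 4, 1, 9, 5)},
    {"user": "j.smith", "source": "identity", "time": datetime(2025, 4, 1, 9, 9)},
    {"user": "a.jones", "source": "endpoint", "time": datetime(2025, 4, 1, 14, 0)},
]

def correlate(alerts, window=timedelta(minutes=30)):
    """Group alerts by user, then keep users whose alerts span more
    than one control plane inside the time window."""
    by_user = defaultdict(list)
    for a in alerts:
        by_user[a["user"]].append(a)
    incidents = []
    for user, items in by_user.items():
        items.sort(key=lambda a: a["time"])
        sources = {a["source"] for a in items}
        span = items[-1]["time"] - items[0]["time"]
        if len(sources) > 1 and span <= window:
            incidents.append({"user": user, "sources": sorted(sources)})
    return incidents

print(correlate(alerts))
# One incident for j.smith: email + endpoint + identity in under 30 minutes
```

In production this logic lives inside the XDR or SIEM layer rather than a script, but the shape is the same: one user, multiple control planes, short time window.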

According to the Microsoft Digital Defence Report 2025, Microsoft now processes 100 trillion security signals daily, blocks 4.5 million new malware files every day, analyses 38 million identity risk detections in an average day, screens 5 billion emails daily and supports a security ecosystem of more than 15,000 partners, backed by 34,000 full-time equivalent security engineers worldwide.

What Weak Indicators Look Like in Zero-Day Scenarios

In a zero-day scenario, weak indicators are usually behavioural clues, not neat alerts with a product name.

What we are looking for is a chain of small anomalies that are unremarkable on their own, but suspicious when linked across identity, endpoint, email, application and cloud.

Identity

Not “known bad login”, but things like:

  • A user authenticating from a normal location, then suddenly requesting access tokens or elevated privileges they do not usually use
  • Repeated conditional access failures followed by one successful sign-in with a different authentication pattern
  • Unusual consent grants to apps or service principals
  • A normal account touching admin paths, directory objects or sensitive groups for the first time

That suggests someone may be abusing identity control planes before doing anything overt.
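A first-seen check is often enough to surface that last pattern. A minimal sketch, with account names, resource names and the sensitive list all invented for illustration:

```python
# Flag when an account touches a sensitive resource it has never touched
# before. Resource names are illustrative, not from any product.
SENSITIVE = {"directory_objects", "admin_portal", "privileged_group"}

def first_seen_sensitive(events):
    """events: ordered (user, resource) pairs. Returns first-time
    sensitive touches only; repeats are already 'seen'."""
    seen = set()
    flagged = []
    for user, resource in events:
        key = (user, resource)
        if resource in SENSITIVE and key not in seen:
            flagged.append(key)
        seen.add(key)
    return flagged

history = [
    ("j.smith", "sharepoint"),     # routine
    ("j.smith", "admin_portal"),   # first time on an admin path -> flag
    ("j.smith", "admin_portal"),   # second time -> already seen
]
print(first_seen_sensitive(history))
```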

 Microsoft detected 147,000 token replay attacks in the past year, up 111% year-on-year, indicating that attackers now target identity and session security rather than relying on traditional software exploits.  

Endpoint

This is often where the first real smoke appears:

  • A trusted process spawning an unusual child process
  • Office, browser or PDF reader activity followed by PowerShell, cmd, rundll32, mshta or other living-off-the-land binaries
  • Unusual memory access, code injection patterns or script execution in temporary directories
  • A process making network connections it has never made before
  • Security tooling being queried, stopped or bypassed

None of that proves a zero-day. But it does suggest exploitative behaviour.
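The first two bullets reduce to a parent/child process check. A hedged sketch of that idea, using illustrative process names rather than a real detection rule:

```python
# Flag living-off-the-land parent/child pairs: a document or browser
# process spawning a script interpreter. Lists are illustrative.
DOC_PARENTS = {"winword.exe", "excel.exe", "acrord32.exe", "chrome.exe"}
LOLBINS = {"powershell.exe", "cmd.exe", "rundll32.exe", "mshta.exe", "wscript.exe"}

def suspicious_spawns(process_events):
    """Return events where a document/browser parent launched a
    living-off-the-land binary."""
    return [
        e for e in process_events
        if e["parent"].lower() in DOC_PARENTS and e["child"].lower() in LOLBINS
    ]

events = [
    {"parent": "explorer.exe", "child": "chrome.exe"},      # normal
    {"parent": "winword.exe",  "child": "powershell.exe"},  # classic exploit pattern
]
print(suspicious_spawns(events))
```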

Email & Collaboration

If the exploit chain starts with phishing or malicious content, the weak indicators can be:

  • A message that passed filtering but led to odd downstream behaviour on the endpoint
  • Mailbox rule creation, forwarding rule changes or OAuth consent shortly after message delivery
  • A user opening a file and then immediately triggering uncommon process activity or identity events

This is why joining the email to the endpoint and identity data matters so much.
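That join can be as simple as a time-window match on the same account between message delivery and a mailbox-rule change. A sketch with invented field names and timings:

```python
from datetime import datetime, timedelta

# Join a message delivery to mailbox-rule changes on the same account
# shortly afterwards. Records and field names are illustrative.
def rules_after_delivery(deliveries, rule_events, window=timedelta(minutes=15)):
    hits = []
    for d in deliveries:
        for r in rule_events:
            delta = (r["time"] - d["time"]).total_seconds()
            if r["user"] == d["user"] and 0 <= delta <= window.total_seconds():
                hits.append((d["user"], d["subject"], r["rule"]))
    return hits

deliveries = [
    {"user": "j.smith", "subject": "Invoice", "time": datetime(2025, 4, 1, 9, 0)},
]
rule_events = [
    # New forwarding rule six minutes after delivery: worth a look
    {"user": "j.smith", "rule": "forward-to-external", "time": datetime(2025, 4, 1, 9, 6)},
]
print(rules_after_delivery(deliveries, rule_events))
```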

Cloud & SaaS

In Microsoft 365, Azure or other cloud services, we watch for:

  • Impossible changes in admin behaviour rather than impossible travel alone
  • New inbox rules, new app registrations or token use patterns
  • Abnormal data access from SharePoint, OneDrive or Exchange
  • A workload suddenly talking to a region, service or API it does not normally touch
  • Privileged changes outside normal change windows

Again, each event may be explainable. The concern is the sequence.

A very typical “weak signal” chain might be:

  1. User opens a benign-looking file or browses to a compromised site
  2. Browser or Office process spawns a script interpreter
  3. That endpoint makes a rare outbound connection
  4. The same user account requests a token or touches a resource it never normally uses
  5. A new inbox rule, app consent or privilege change appears
  6. Another host then sees the same pattern

No single step screams zero-day. Together, they tell a much stronger story.
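One simple way to express that “stronger together” logic is to count the distinct control planes a chain touches. A deliberately minimal sketch, with an invented threshold:

```python
# Score a weak-signal chain: each step is unremarkable alone, but the
# number of distinct control planes it crosses drives the verdict.
# The threshold of three planes is illustrative, not a standard.
def chain_verdict(steps, planes_needed=3):
    planes = {plane for plane, _ in steps}
    return "escalate" if len(planes) >= planes_needed else "observe"

chain = [
    ("endpoint", "office process spawned script interpreter"),
    ("network",  "rare outbound connection"),
    ("identity", "token request for unused resource"),
    ("email",    "new inbox rule created"),
]
print(chain_verdict(chain))  # four planes involved -> "escalate"
```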

With a zero-day, the giveaway is rarely a signature. It is the behaviour around the exploit: what ran, what it touched, what it asked for and what happened next.

Build a Baseline Fast Enough to Make Decisions

You do not start with a perfect baseline. You build one fast enough to make decisions.

For a mid-market client that has never had proper visibility, the practical answer is to establish “good enough normal” in layers rather than wait months for pristine telemetry.

Start with business context, not just logs.

Before you tune detections, you need to know which users are privileged, which systems are business-critical, what normal admin activity looks like, when real maintenance windows happen and which cloud apps, devices and locations are expected.

For a Microsoft-centric mid-market client, the fastest route is to pull together visibility across identity, endpoint, email, cloud apps, workloads and logs already available in Microsoft 365 and Azure.

Define normal by peer groups, not by every individual event.

For mid-market organisations, a workable starting baseline is standard users, privileged admins, finance and HR users, servers and infrastructure hosts, remote and hybrid workers and high-value applications and data repositories.

You then look for activity that is out of pattern for that group. That is far more realistic than trying to profile every user perfectly from day one.
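A peer-group baseline can start as little more than a lookup of expected activity per group. A sketch with invented group names and actions:

```python
# Peer-group baselining: "normal" is defined per group, not per user.
# Group names and expected actions are invented for illustration.
BASELINES = {
    "standard_users":    {"mail", "sharepoint", "teams"},
    "privileged_admins": {"mail", "sharepoint", "teams", "admin_portal", "directory"},
}

def out_of_pattern(group, actions):
    """Return actions that fall outside the group's expected set."""
    allowed = BASELINES.get(group, set())
    return [a for a in actions if a not in allowed]

# A standard user touching the directory is out of pattern for the group,
# even before any per-user profile exists.
print(out_of_pattern("standard_users", ["mail", "directory"]))
```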

Establish a 2-4 week observation window, but act on obvious risk immediately. You do not wait for a flawless baseline before responding.

Much of the faster detection comes from removing routine noise: scheduled scripts, vulnerability scanners, backup jobs, patching tools, expected service-account behaviour, and known admin tooling.

Once that noise is reduced, weak exploit indicators stand out faster.
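The noise removal itself is often just an allowlist of expected automation. A minimal sketch, with invented service-account names:

```python
# Suppress routine noise so weak indicators stand out. The allowlist of
# expected automation accounts and actions is illustrative.
ROUTINE = {
    ("svc-backup",   "file_read"),
    ("svc-scanner",  "port_scan"),
    ("svc-patching", "process_start"),
}

def denoise(events):
    """Drop events matching known-good (actor, action) pairs."""
    return [e for e in events if (e["actor"], e["action"]) not in ROUTINE]

events = [
    {"actor": "svc-scanner", "action": "port_scan"},  # expected scanner traffic
    {"actor": "j.smith",     "action": "port_scan"},  # a user port-scanning is not routine
]
print(denoise(events))
```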

We do not need a perfect history to define normal. We need enough context, enough telemetry and enough tuning to quickly distinguish expected activity from suspicious behaviour.

When to Escalate from Observation to Action

The decision point is not “this looks weird”. It is “this now shows credible attacker behaviour plus plausible business impact, so delaying is riskier than containing”.

We act now when three things line up:

1. The behaviour chain crosses more than one control plane

A single odd event may stay under observation. But when the pattern links across identity, endpoint, email or cloud, it becomes much more credible.

Example: unusual Office or browser activity on an endpoint, followed by token use or privilege activity in identity, followed by suspicious outbound traffic or access to sensitive data.

2. The activity suggests attacker objectives, not just a technical oddity

The key question is whether the pattern now points to one of the attacker’s next moves: gaining persistence, escalating privilege, moving laterally, accessing sensitive data, disabling controls, staging exfiltration, or impacting availability.

Once the signal chain maps to those behaviours, it is no longer just “interesting”.

3. Containment is low-regret compared with waiting

If you can take a proportionate step (isolating a host, disabling or challenging an account, revoking a token, blocking a malicious connection, quarantining an email or app, or triggering a playbook) and the downside of acting is lower than the downside of waiting, that is usually the decision point.

During the observation window, we are asking: “Do we now have enough correlated evidence to believe this is an attack path in motion, not a noisy anomaly?”

When the answer becomes yes, we escalate.

The trigger to act is not certainty. It is credible evidence of attack progression with a realistic chance to contain it before the business impact spreads.
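The three tests above can be written down as a single decision rule. A sketch, with behaviour labels loosely modelled on common attacker objectives (all names illustrative):

```python
# Escalate only when all three conditions line up: the chain crosses
# more than one control plane, it maps to an attacker objective, and a
# low-regret containment step is available. Labels are illustrative.
ATTACKER_OBJECTIVES = {
    "persistence", "privilege_escalation", "lateral_movement",
    "data_access", "defence_evasion", "exfiltration",
}

def should_escalate(planes, observed_behaviours, containment_low_regret):
    crosses_planes = len(set(planes)) > 1
    maps_to_objective = bool(set(observed_behaviours) & ATTACKER_OBJECTIVES)
    return crosses_planes and maps_to_objective and containment_low_regret

print(should_escalate(
    planes=["endpoint", "identity"],
    observed_behaviours=["privilege_escalation"],
    containment_low_regret=True,
))  # True: all three conditions line up
```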

What Containment Actually Means Without Known IOCs

In practice, containing it means blocking the attacker’s path faster than they can advance, even when you do not yet know the exploit, payload, hash or command-and-control infrastructure.

So instead of trying to block a known bad artefact, you contain by reducing the attacker’s ability to execute, persist, move or access data.

Isolate the affected endpoint.

If a workstation or server is showing exploit-like behaviour, the fastest containment step is often to isolate it from the network while preserving management access.

That stops lateral movement, outbound beaconing, further payload download and access to file shares and internal services.

You are not proving the exploit at this stage. You are removing the host’s ability to do more harm.

Contain the identity, not just the device.

With modern attacks, identity is often more valuable than the initial host. So containment may mean:

  • Disabling or challenging the user account
  • Revoking active sessions or tokens
  • Forcing password reset
  • Re-registering multifactor authentication if compromise is suspected
  • Removing temporary privilege elevation
  • Blocking risky app consent or service principal abuse

Stop the behaviour class, not the specific artefact.

If you cannot block a known hash or IP, you often contain by blocking the behaviour pattern:

  • Office or browser processes spawning script interpreters
  • Suspicious PowerShell or command shell activity
  • Execution from temp paths or user-writable directories
  • New persistence mechanisms
  • Unsigned or unusual child process chains
  • Unmanaged remote admin tooling

This is where attack surface reduction, endpoint hardening and automated investigation matter.

Restrict access to high-value assets.

If the behaviour suggests the attacker is moving towards sensitive systems, containment may mean:

  • Blocking access to critical servers
  • Tightening conditional access
  • Turning off privileged admin routes
  • Restricting access to finance, HR or regulated data stores
  • Segmenting or ringfencing critical workloads

That buys time. Even if the exploit path is not fully understood, the attacker cannot easily convert access into impact.

With a potential zero-day, you are usually containing one or more of these four things: execution, identity use, lateral movement and data access.

If you can break two or three of those quickly, you usually stop the incident from becoming a business event.

Containment is the act of denying the attacker their next move, even before you know exactly what the first move was.
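One way to reason about proportionate containment is to map each candidate action to the vectors it breaks, then check coverage across the four. A sketch with an invented action-to-vector mapping:

```python
# Containment without IOCs: each action breaks one or more of the four
# vectors (execution, identity, lateral movement, data access).
# The mapping below is illustrative, not a playbook.
ACTIONS = {
    "isolate_host":        {"execution", "lateral_movement"},
    "revoke_tokens":       {"identity"},
    "block_admin_routes":  {"lateral_movement", "data_access"},
    "restrict_data_store": {"data_access"},
}

def vectors_broken(chosen_actions):
    """Union of vectors denied by the chosen containment steps."""
    broken = set()
    for action in chosen_actions:
        broken |= ACTIONS.get(action, set())
    return broken

# Two quick steps already break three of the four vectors.
print(sorted(vectors_broken(["isolate_host", "revoke_tokens"])))
```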

When Discovery Comes Late

When a client calls after unusual activity has already been present for days or weeks, you are no longer in an early-warning containment scenario. You are in incident response and recovery mode, with the priority of limiting damage, classifying the scope and preventing the attacker from retaining access while preserving evidence.

If the attacker may have had days or weeks inside the environment, the first question is no longer “Is this suspicious?” It becomes “What can we still trust?”

The response shifts from surgical containment of a single chain to controlled eradication of a broader compromise.

First, stabilise the situation.

Restrict attacker access, contain affected systems, limit privilege use and stop obvious spread paths.

Second, classify the incident properly.

You need a fast impact assessment across identities, endpoints, email, cloud workloads and critical data. The goal is to work out whether this is an isolated compromise, business email compromise, malware or ransomware, data theft, insider activity or something broader.

Third, widen the search immediately.

If a suspicious host is found late, you do not treat it as the only problem. You search laterally for related identities and tokens, persistence on other hosts, mailbox rule changes, privilege changes, unusual cloud access and indicators of staging, exfiltration or lateral movement.

Fourth, eradicate in a controlled way.

If the attacker has been present for some time, containment alone is not enough. You need to remove persistence, reset trust in identities, close exploited gaps and harden the environment before normal operations resume.

Fifth, recover with evidence and governance intact.

In a delayed-discovery incident, leadership usually needs not just technical recovery but also board-level clarity on scope, business impact, lessons learned and what has changed to prevent recurrence.

If we are called in late, the mission shifts from spotting an exploit early to quickly re-establishing control of the environment, proving scope and restoring trust before the attacker can reuse their access.

Measure What Matters

For zero-days, you are not hunting for known bad artefacts. You are hunting for execution out of pattern, access out of pattern, movement out of pattern and persistence out of pattern.

That is why correlation matters. The earlier signals are weak because they look like noise when viewed in isolation. They only become meaningful when you connect them to attacker behaviour.

The three metrics that prove zero-day resilience are time to detect, time to respond and blast radius.

Track those weekly and tune based on real alerts, then publish the numbers.
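Computing those three numbers from incident records is straightforward. A sketch with invented incident data and field names:

```python
from datetime import datetime
from statistics import mean

# Weekly resilience metrics from incident records: time to detect,
# time to respond and blast radius (hosts touched). Data is illustrative.
incidents = [
    {"start": datetime(2025, 4, 1, 9, 0),  "detected": datetime(2025, 4, 1, 9, 20),
     "contained": datetime(2025, 4, 1, 10, 0),  "hosts": 2},
    {"start": datetime(2025, 4, 3, 14, 0), "detected": datetime(2025, 4, 3, 14, 40),
     "contained": datetime(2025, 4, 3, 16, 0),  "hosts": 4},
]

def metrics(incidents):
    mttd = mean((i["detected"] - i["start"]).total_seconds() / 60 for i in incidents)
    mttr = mean((i["contained"] - i["detected"]).total_seconds() / 60 for i in incidents)
    blast = mean(i["hosts"] for i in incidents)
    return {"mttd_min": mttd, "mttr_min": mttr, "avg_blast_radius": blast}

print(metrics(incidents))
```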

You will see the difference inside a fortnight with a light pilot.

Book a 30-minute review and see where to start.