Cyber security has entered a new era.
AI is no longer an emerging technology sitting on the edge of security operations. It is now influencing both sides of the battlefield. Threat actors are using AI to scale attacks faster than ever before, while security teams are leveraging the same technologies to improve detection, investigation and response.
In our latest webinar, MXDR & AI: Rethinking Security Operations, CyberOne Microsoft Practice Director Luke Elston and Head of Security Innovation Lewis Pack explored how AI is reshaping both offensive cyber threats and defensive security operations, and why organisations must rethink cyber resilience in response to increasingly autonomous attacks.
AI is fundamentally changing the way modern security operations function, forcing organisations to rethink how they approach cyber resilience, detection and response. Traditional security models are struggling to keep pace with the scale, speed and sophistication of AI-assisted attacks.
The session focused on a clear reality: AI is not replacing security operations; it is reshaping and augmenting them.
Modern threats are increasingly outpacing traditional defensive capabilities.
Cyber crime continues to grow in scale, sophistication and profitability. AI has dramatically lowered the barrier to entry for attackers, allowing even relatively low-skilled actors to execute highly convincing phishing campaigns, automate reconnaissance and accelerate attack execution.
Lewis Pack highlighted several ways in which threat actors are already weaponising AI.
The discussion noted that malicious AI models such as WormGPT and Mythos are being used to strip away the safeguards built into public AI platforms. Once ethical restrictions are removed, AI becomes capable of supporting far more advanced offensive activity.
The result is a dramatic increase in attack velocity.
Rather than manually conducting attacks one at a time, threat actors can increasingly automate entire attack chains.
AI enables attackers to chain together multiple capabilities into fully orchestrated attack workflows.
Lewis outlined a hypothetical but highly realistic end-to-end attack scenario. Each individual step in it already exists today.
The concern is no longer whether AI-assisted attacks are possible. It is how quickly these capabilities become fully autonomous and continuously operational.
Organisations must now prepare for a future in which automated, AI-driven attack systems operate continuously at scale.
More convincing phishing attacks were identified as the biggest AI-driven threat concern by 50% of respondents in audience polling, highlighting how rapidly AI is improving the scale, speed and realism of social engineering attacks. Deepfake and impersonation attacks followed at 20%, while 30% of respondents admitted they were still unsure where AI-driven risk posed the greatest threat to their organisation.
AI-powered impersonation attacks are becoming increasingly convincing.
Lewis shared a real-world example of a deepfake-enabled fraud attack in which a victim transferred nearly $500,000 after being convinced they were speaking with trusted executives and legal representatives.
Using AI-generated voice and video impersonation, threat actors bypassed traditional trust mechanisms.
This highlights a critical challenge for organisations:
Traditional security awareness training alone is no longer enough.
Nearly half of respondents in audience polling (47%) said their organisation felt only somewhat prepared for AI-driven threats, while 23% said they were not very prepared. Only 23% described themselves as very prepared, reinforcing the growing gap between AI adoption and operational cyber resilience.
As AI-generated content becomes increasingly realistic, businesses must assume that social engineering attacks will continue to evolve in sophistication.
While AI introduces significant offensive risks, it can also dramatically improve defensive operations.
This is where MXDR becomes particularly important.
Managed Extended Detection and Response (MXDR) provides broad visibility and protection across the attack lifecycle, from reconnaissance and initial compromise through to containment and remediation.
According to CyberOne, this broad operational coverage makes MXDR one of the most effective areas for AI augmentation.
AI gives security operations teams meaningful new leverage. The key advantage is not simply automation; it is amplification.
AI enables security teams to do more, respond faster and operate at a greater scale without removing the human element.
AI should be viewed as a force multiplier rather than a replacement for security professionals.
Modern security operations are evolving across three broad phases:
This represents the first stage of adoption, where analysts use AI-assisted capabilities to improve individual tasks.
This level of adoption already exists across many modern security operations teams.
The next phase introduces specialised AI agents that act almost like digital members of the SOC team, each designed to perform a specific operational function.
Rather than replacing analysts, these agents support them by handling repetitive or high-volume operational tasks.
The future model described during the webinar involves human analysts operating at a higher strategic level, while AI systems manage much of the operational workload.
This approach creates a blended operational model where AI increases efficiency while humans retain oversight, context and accountability.
A major theme throughout the webinar was trust.
A useful comparison is modern aircraft autopilot systems.
Commercial aircraft already operate largely autonomously during most flights, yet passengers still expect trained pilots to remain present in the cockpit.
Security operations should function in the same way.
AI can automate and accelerate many operational activities, but fully autonomous security operations without human oversight introduce significant risk.
The operational reality is clear: Security operations should be human-led and AI-augmented.
This is particularly important during complex or novel attacks where human judgement, contextual understanding and stakeholder communication become critical.
Modern SOC teams are evolving into blended human-AI operational models.
At the Level 1 layer, AI handles much of the high-volume triage, while human analysts oversee and validate outcomes.
This allows for faster alert processing while maintaining human quality assurance.
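A minimal sketch of this blended triage model, with hypothetical names and a placeholder scoring function standing in for a real AI classifier: high-confidence benign alerts are closed automatically, while everything else is queued for human validation.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    description: str

def ai_triage_score(alert: Alert) -> float:
    """Placeholder returning a benign-probability between 0 and 1.

    A real SOC would call a trained model or LLM-based service here.
    """
    return 0.1 if "phishing" in alert.description.lower() else 0.9

def triage(alerts: list[Alert], benign_threshold: float = 0.85):
    auto_closed, needs_human = [], []
    for alert in alerts:
        # Only high-confidence benign outcomes are closed without review;
        # anything uncertain goes to a human analyst for validation.
        if ai_triage_score(alert) >= benign_threshold:
            auto_closed.append(alert)
        else:
            needs_human.append(alert)
    return auto_closed, needs_human

closed, queued = triage([
    Alert("A1", "Routine sign-in from known device"),
    Alert("A2", "Possible phishing link clicked"),
])
```

The threshold makes the human-oversight trade-off explicit: lowering it sends more alerts to analysts, raising it automates more aggressively.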
At the Level 2 layer, investigations become more collaborative between human analysts and AI systems.
AI assists with much of the investigative legwork, while human analysts remain responsible for oversight, interpretation and escalation.
CyberOne views Level 3 operations as remaining heavily human-driven.
This is where highly experienced analysts handle the most complex and sensitive incidents.
The webinar stressed that AI cannot currently replace the nuanced communication and contextual reasoning required during critical incidents.
The final layer consists of engineering and innovation teams responsible for designing, building and refining AI systems.
These teams continuously improve operational capabilities based on real-world incident response experience.
CyberOne has also developed internally built AI systems designed to support day-to-day MXDR operations.
These capabilities are embedded directly into the company’s MXDR operations.
Hermes acts as the operational coordination layer, synchronising ticket information, investigation notes and incident status across multiple platforms.
Pythea focuses on automated investigation enrichment, gathering additional intelligence related to each incident.
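One way to picture this kind of enrichment step (a hypothetical sketch, not Pythea's actual implementation): given an indicator from an incident, query several intelligence sources, stubbed here, and merge the results into a single enrichment record.

```python
# Hypothetical enrichment sketch: each "source" function stands in for a
# real threat-intelligence or geolocation API call.

def ip_reputation(indicator: str) -> dict:
    # Stubbed reputation lookup (203.0.113.0/24 is a documentation range).
    return {"reputation": "suspicious" if indicator.startswith("203.") else "clean"}

def geo_lookup(indicator: str) -> dict:
    # Stubbed geolocation result.
    return {"geo": "unknown"}

def enrich(indicator: str) -> dict:
    """Merge the output of every intelligence source into one record."""
    record = {"indicator": indicator}
    for source in (ip_reputation, geo_lookup):
        record.update(source(indicator))
    return record

record = enrich("203.0.113.7")
```

The value is less in any single lookup than in handing the analyst one consolidated record instead of five browser tabs.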
Aries handles escalation workflows by summarising incident information and improving handovers between analysts.
Themis provides quality assurance and secondary validation before incidents are closed. It challenges analysts to verify whether investigations are complete and whether any additional actions are needed.
Hydra orchestrates automated containment actions. This allows multiple response playbooks to activate simultaneously during incidents such as phishing attacks.
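The benefit of running playbooks simultaneously rather than sequentially can be sketched with Python's asyncio (a hypothetical illustration, with made-up playbook names, not Hydra's actual code):

```python
import asyncio

async def quarantine_email(incident_id: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for the real mailbox API call
    return f"{incident_id}: malicious email quarantined"

async def block_sender_domain(incident_id: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for a gateway policy update
    return f"{incident_id}: sender domain blocked"

async def reset_credentials(incident_id: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for an identity-provider call
    return f"{incident_id}: affected credentials reset"

async def contain(incident_id: str) -> list[str]:
    # gather() starts every playbook at once and waits for all of them,
    # so total containment time is the slowest step, not the sum of all.
    return list(await asyncio.gather(
        quarantine_email(incident_id),
        block_sender_domain(incident_id),
        reset_credentials(incident_id),
    ))

results = asyncio.run(contain("INC-1234"))
```

During a phishing incident, where every minute of exposure matters, collapsing three sequential containment steps into one concurrent wave is where the orchestration pays off.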
Artemis delivers AI-powered 24x7 threat hunting designed to identify niche or novel attack behaviour that traditional detection methods may miss.
Together, these systems form what CyberOne refers to as its AI SOC Team.
Organisations should remain sceptical of vague AI claims.
The cyber security industry has seen a significant increase in vendors marketing products as “AI-powered” without clearly explaining what AI actually does or how it improves outcomes.
Organisations evaluating AI-driven MXDR services should ask practical operational questions, such as what the AI actually does, how it improves outcomes and where humans remain accountable.
The webinar stressed that transparency and operational trust are critical.
AI should improve security outcomes, not introduce hidden operational risk.
The threat landscape is evolving too quickly for static or heavily manual security operations.
Attackers are already using AI to scale phishing campaigns, automate reconnaissance and accelerate attack execution.
To counter this shift, security operations must evolve at the same pace. Organisations need faster detection and response capabilities, continuous monitoring, greater visibility across their environments and more intelligent correlation between security signals. At the same time, human expertise remains critical, particularly when responding to complex or novel threats where context and decision-making matter most.
Modern cyber resilience increasingly depends on the ability to combine human expertise with intelligent automation, allowing organisations to operate at the speed and scale required to defend against AI-driven attacks.
The conversation around AI in cyber security is often dominated by extremes.
Some predict complete automation, others focus entirely on the risks.
AI is neither a magic solution nor an existential replacement for security professionals.
Instead, it is becoming an operational force multiplier.
The organisations that will succeed over the next several years will be the ones that combine human expertise with intelligent automation.
As threat actors continue accelerating their use of AI, defensive security operations must evolve at the same pace.
The future of security operations is not fully autonomous; it is intelligently augmented, continuously adaptive and fundamentally human-led.
Want to explore the full discussion around AI-driven threats, modern MXDR operations and the future of security operations?
Watch the full MXDR & AI: Rethinking Security Operations webinar on demand to hear directly from CyberOne’s Microsoft Practice Director Luke Elston and Head of Security Innovation Lewis Pack.