Cyber security has entered a new era.
AI is no longer an emerging technology sitting on the edge of security operations. It is now influencing both sides of the battlefield. Threat actors are using AI to scale attacks faster than ever before, while security teams are leveraging the same technologies to improve detection, investigation and response.
In our latest webinar, MXDR & AI: Rethinking Security Operations, CyberOne Microsoft Practice Director Luke Elston and Head of Security Innovation Lewis Pack explored how AI is reshaping both offensive cyber threats and defensive security operations, and why organisations must rethink cyber resilience in response to increasingly autonomous attacks.
AI is fundamentally changing the way modern security operations function, forcing organisations to rethink how they approach cyber resilience, detection and response. Traditional security models are struggling to keep pace with the scale, speed and sophistication of AI-assisted attacks.
The session focused on a clear reality: AI is not replacing security operations. It is reshaping and augmenting them.
Cyber Crime Is Accelerating Faster Than Traditional Security Models
Modern threats are increasingly outpacing traditional defensive capabilities.
Cyber crime continues to grow in scale, sophistication and profitability. AI has dramatically lowered the barrier to entry for attackers, allowing even relatively low-skilled actors to execute highly convincing phishing campaigns, automate reconnaissance and accelerate attack execution.

Lewis Pack highlighted how threat actors are already weaponising AI in multiple ways:
- Generating convincing phishing emails
- Building malicious scripts and tooling
- Mining publicly available intelligence
- Identifying exploitable vulnerabilities
- Automating credential attacks
- Creating deepfake impersonation attacks
- Cloning websites and synthesising fraudulent communications
The discussion noted that malicious AI models such as WormGPT and Mythos are being used to remove safeguards built into public AI platforms. Once ethical restrictions are removed, AI becomes capable of supporting far more advanced offensive activity.
The result is a dramatic increase in attack velocity.
Rather than manually conducting attacks one at a time, threat actors can increasingly automate entire attack chains.
The Rise of AI-Driven Attack Chains
AI enables attackers to chain together multiple capabilities into fully orchestrated attack workflows.
“Threat actors are already using AI to scale attacks faster, automate reconnaissance and improve success rates. Security operations have to evolve at the same pace if organisations want to remain resilient.”
Lewis outlined a hypothetical but highly realistic scenario:
- AI identifies suitable targets through LinkedIn profiling.
- Additional intelligence is gathered from social media and public sources.
- AI performs smart credential guessing using behavioural patterns.
- Compromised accounts are used to monitor financial communications.
- AI generates convincing invoices and cloned payment portals.
- Phishing emails are automatically generated and delivered.
- Payments are redirected to threat actors.
Each individual step already exists today.
The concern is no longer whether AI-assisted attacks are possible. It is how quickly these capabilities become fully autonomous and continuously operational.
Organisations must now prepare for a future in which automated, AI-driven attack systems operate continuously at scale.
More convincing phishing attacks were identified as the biggest AI-driven threat concern by 50% of respondents in audience polling, highlighting how rapidly AI is improving the scale, speed and realism of social engineering attacks. Deepfake and impersonation attacks followed at 20%, while 30% of respondents admitted they were still unsure where AI-driven risk posed the greatest threat to their organisation.

Deepfake Fraud Is No Longer Theoretical
AI-powered impersonation attacks are becoming increasingly convincing.
Lewis shared a real-world example of a deepfake-enabled fraud attack in which a victim transferred nearly $500,000 after being convinced they were speaking with trusted executives and legal representatives.
Using AI-generated voice and video impersonation, threat actors bypassed traditional trust mechanisms.
This highlights a critical challenge for organisations:
Traditional security awareness training alone is no longer enough.
Nearly half of respondents in audience polling (47%) said their organisation felt only somewhat prepared for AI-driven threats, while 23% said they were not very prepared. Only 23% described themselves as very prepared, reinforcing the growing gap between AI adoption and operational cyber resilience.
As AI-generated content becomes increasingly realistic, businesses must assume that social engineering attacks will continue to evolve in sophistication.
Why AI Matters So Much Inside MXDR
While AI introduces significant offensive risks, it can also dramatically improve defensive operations.
This is where MXDR becomes particularly important.
Managed Extended Detection and Response provides broad visibility and protection across the attack lifecycle, from reconnaissance and initial compromise through to containment and remediation.
According to CyberOne, this broad operational coverage makes MXDR one of the most effective areas for AI augmentation.
AI allows security operations teams to:
- Analyse more data faster.
- Correlate signals across multiple platforms.
- Reduce alert noise.
- Accelerate investigations.
- Improve triage accuracy.
- Prioritise high-risk activity.
- Trigger rapid containment actions.
- Support continuous threat hunting.
The key advantage is not simply automation.
The advantage is amplification.
AI enables security teams to do more, respond faster and operate at a greater scale without removing the human element.
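The correlation and prioritisation capabilities above can be sketched in miniature. The following Python example is purely illustrative, with invented field names and severity weights, and does not reflect any specific vendor's scoring model: it groups raw alerts by affected entity and scores each group so that one prioritised incident surfaces instead of many scattered alerts.

```python
from collections import defaultdict

# Hypothetical severity weights; real platforms expose far richer
# signals than this sketch assumes.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 5}

def correlate_and_prioritise(alerts):
    """Group raw alerts by affected entity and score each group,
    so analysts see one prioritised incident instead of many alerts."""
    incidents = defaultdict(list)
    for alert in alerts:
        incidents[alert["entity"]].append(alert)

    scored = []
    for entity, grouped in incidents.items():
        # Score rises with both alert severity and the number of
        # distinct detection sources reporting on the same entity.
        base = sum(SEVERITY_WEIGHT[a["severity"]] for a in grouped)
        sources = {a["source"] for a in grouped}
        scored.append({
            "entity": entity,
            "alerts": grouped,
            "score": base * len(sources),
        })

    # Highest-risk entities first.
    return sorted(scored, key=lambda i: i["score"], reverse=True)

alerts = [
    {"entity": "host-01", "severity": "high", "source": "edr"},
    {"entity": "host-01", "severity": "medium", "source": "email"},
    {"entity": "host-02", "severity": "low", "source": "edr"},
]
queue = correlate_and_prioritise(alerts)
print(queue[0]["entity"])  # host-01 surfaces first
```

The design choice worth noting is the source multiplier: an entity flagged independently by multiple detection layers is treated as higher risk than one flagged many times by a single layer, which is one simple way AI-assisted correlation reduces alert noise.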
AI As a Force Multiplier for Security Operations
AI should be viewed as a force multiplier rather than a replacement for security professionals.
Modern security operations are evolving across three broad phases:
1. AI As a Tool
This represents the first stage of adoption, where analysts use AI-assisted capabilities to improve individual tasks. Examples include:
- Automated enrichment
- Threat intelligence correlation
- Investigation assistance
- Natural language querying
- Alert summarisation
This level of adoption already exists across many modern security operations teams.
2. AI Agents Operating Alongside Analysts
The next phase introduces specialised AI agents that act almost like digital members of the SOC team. These agents are designed to perform specific functions such as:
- Incident enrichment
- Threat hunting
- Escalation management
- Response orchestration
- Investigation validation
Rather than replacing analysts, these agents support them by handling repetitive or high-volume operational tasks.
3. Human-Led AI-Orchestrated Operations
The future model described during the webinar involves human analysts operating at a higher strategic level, while AI systems manage much of the operational workload. In this model:
- AI handles large-scale operational processing.
- Analysts oversee decision-making.
- Senior engineers continuously improve AI capabilities.
- Human expertise remains central to critical incident handling.
This approach creates a blended operational model where AI increases efficiency while humans retain oversight, context and accountability.
Why Human Oversight Still Matters
A major theme throughout the webinar was trust.
A useful comparison is modern aircraft autopilot systems.
Commercial aircraft already operate largely autonomously during most flights, yet passengers still expect trained pilots to remain present in the cockpit.
Security operations should function in the same way.
AI can automate and accelerate many operational activities, but fully autonomous security operations without human oversight introduce significant risk.
The operational reality is clear: Security operations should be human-led and AI-augmented.
This is particularly important during complex or novel attacks where human judgement, contextual understanding and stakeholder communication become critical.
The Future SOC Model: Blended Human and AI Operations
Modern SOC teams are evolving into blended human-AI operational models.
L1 Operations
At the Level 1 layer, AI handles much of the high-volume triage, while human analysts oversee and validate outcomes.
This allows for faster alert processing while maintaining human quality assurance.
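One minimal way to picture this blended L1 model is a routing function: the AI model produces a risk score, high-confidence outcomes are actioned automatically, and anything uncertain lands in a human review queue. The function name and thresholds below are invented for illustration, not an actual MXDR configuration.

```python
def triage(risk_score, benign_threshold=0.1, escalate_threshold=0.9):
    """Route an alert based on an AI model's risk score (0.0 to 1.0).

    Hypothetical cut-offs: confident outcomes are automated, while the
    uncertain middle band is reserved for human analysts.
    """
    if risk_score >= escalate_threshold:
        return "escalate_to_l2"          # high-confidence threat: fast-track
    if risk_score <= benign_threshold:
        return "auto_close_with_audit"   # likely benign, logged for human QA
    return "human_review"                # uncertain: an analyst decides

print(triage(0.95))  # escalate_to_l2
print(triage(0.50))  # human_review
print(triage(0.02))  # auto_close_with_audit
```

Note that even the automated benign path is audited, which preserves the human quality assurance the model describes.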
L2 Operations
At the Level 2 layer, investigations become more collaborative between human analysts and AI systems.
AI assists with:
- Evidence gathering
- Correlation
- Investigation enrichment
- Timeline reconstruction
- Context generation
Human analysts remain responsible for oversight, interpretation and escalation.
L3 Operations
CyberOne views Level 3 operations as remaining heavily human-driven.
This is where highly experienced analysts handle:
- Novel threats
- Complex investigations
- Major incidents
- Customer engagement
- Strategic response decisions
“AI is not replacing security operations. It is reshaping and augmenting them. The organisations that succeed will combine human expertise with intelligent automation and continuous innovation.”
The webinar stressed that AI cannot currently replace the nuanced communication and contextual reasoning required during critical incidents.
L4 Engineering and Innovation
The final layer consists of engineering and innovation teams responsible for designing, building and refining AI systems.
These teams continuously improve operational capabilities based on real-world incident response experience.
Inside CyberOne’s AI-Powered SOC Team
CyberOne has also developed internally built AI systems designed to support day-to-day MXDR operations.
These capabilities are embedded directly into the company’s MXDR operations.
Hermes
Hermes acts as the operational coordination layer, synchronising ticket information, investigation notes and incident status across multiple platforms.
Pythea
Pythea focuses on automated investigation enrichment, gathering additional intelligence related to incidents.
Aries
Aries handles escalation workflows by summarising incident information and improving handovers between analysts.
Themis
Themis provides quality assurance and secondary validation before incidents are closed. It challenges analysts to verify whether investigations are complete and whether any additional actions are needed.
Hydra
Hydra orchestrates automated containment actions. This allows multiple response playbooks to activate simultaneously during incidents such as phishing attacks.
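The pattern of activating multiple response playbooks simultaneously can be sketched with concurrent tasks. The playbook functions below are invented stand-ins for real API calls and do not reflect Hydra's actual internals; the point is only the concurrency pattern.

```python
import asyncio

# Each coroutine stands in for one containment playbook; the sleep
# simulates a call to an identity, mail or gateway API.
async def disable_account(user):
    await asyncio.sleep(0.1)
    return f"disabled:{user}"

async def quarantine_email(message_id):
    await asyncio.sleep(0.1)
    return f"quarantined:{message_id}"

async def block_sender_domain(domain):
    await asyncio.sleep(0.1)
    return f"blocked:{domain}"

async def contain_phishing_incident():
    # All three playbooks run concurrently rather than one after
    # another, shortening total containment time.
    return await asyncio.gather(
        disable_account("j.smith"),
        quarantine_email("msg-123"),
        block_sender_domain("evil.example"),
    )

results = asyncio.run(contain_phishing_incident())
print(results)
```

Because `asyncio.gather` preserves submission order, the results map cleanly back to each playbook even though they executed in parallel.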
Artemis
Artemis delivers AI-powered 24x7 threat hunting designed to identify niche or novel attack behaviour that traditional detection methods may miss.
Together, these systems form what CyberOne refers to as its AI SOC Team.
Measuring AI By Outcomes, Not Marketing Claims
Organisations should remain sceptical of vague AI claims.
The cyber security industry has seen a significant increase in vendors marketing products as “AI-powered” without clearly explaining what AI actually does or how it improves outcomes.
Organisations evaluating AI-driven MXDR services should be asking practical operational questions such as:
- Where in the incident lifecycle is AI being used?
- Is AI embedded into operational workflows?
- Can the provider demonstrate measurable outcomes?
- What happens if the AI gets something wrong?
- Is there human oversight?
- Is customer data used to train AI models?
- What capabilities deliberately do not use AI?
The webinar stressed that transparency and operational trust are critical.
AI should improve security outcomes, not introduce hidden operational risk.
Security Operations Must Evolve Alongside the Threat Landscape
The threat landscape is evolving too quickly for static or heavily manual security operations.
Attackers are already using AI to:
- Increase attack scale
- Improve attack quality
- Accelerate compromise timelines
- Automate targeting
- Improve social engineering
- Enhance credential attacks
- Reduce operational effort
To counter this shift, security operations must evolve at the same pace. Organisations need faster detection and response capabilities, continuous monitoring, greater visibility across their environments and more intelligent correlation between security signals. At the same time, human expertise remains critical, particularly when responding to complex or novel threats where context and decision-making matter most.
Modern cyber resilience increasingly depends on the ability to combine human expertise with intelligent automation, allowing organisations to operate at the speed and scale required to defend against AI-driven attacks.
The Future of Security Operations
The conversation around AI in cyber security is often dominated by extremes.
Some predict complete automation; others focus entirely on the risks.
AI is neither a magic solution nor an existential replacement for security professionals.
Instead, it is becoming an operational force multiplier.
The organisations that will succeed over the next several years will be the ones that combine:
- Human expertise
- AI augmentation
- Continuous innovation
- Operational transparency
- Resilience-focused security strategies
As threat actors continue accelerating their use of AI, defensive security operations must evolve at the same pace.
The future of security operations is not fully autonomous; it is intelligently augmented, continuously adaptive and fundamentally human-led.
Watch The Webinar on Demand
Want to explore the full discussion around AI-driven threats, modern MXDR operations and the future of security operations?
Watch the full MXDR & AI: Rethinking Security Operations webinar on demand to hear directly from CyberOne’s Microsoft Practice Director Luke Elston and Head of Security Innovation Lewis Pack.