In a recent CyberOne webinar, Luke Elston, Microsoft Practice Lead, and Lewis Pack, Head of Cyber Defence, explored how Artificial Intelligence (AI) is transforming the cyber threat landscape and the tools used to fight it. While AI introduces new capabilities alongside new risks, it's clear that real protection still requires human oversight, critical thinking and experience.
The session opened with a stark reality check: AI is accelerating the pace and sophistication of attacks, yet organisations are struggling to keep up. This imbalance between threat and defence explains why many attacks go undetected until it is too late.
Another trend highlighted was the surge in "Bring Your Own AI" (BYOAI), where employees adopt generative AI tools without formal approval or oversight. This unchecked usage introduces significant vulnerabilities, and attackers are capitalising on the gap, using AI to gain scale, speed and stealth in their campaigns.
While it introduces new challenges, AI offers substantial benefits when used effectively.
Luke described AI as a "force multiplier," enabling faster threat detection, improved signal-to-noise ratios and predictive capabilities. It never tires, processes huge volumes of data in real time and reduces false positives, freeing human analysts to focus on strategic actions rather than alert fatigue.
Combined with human intelligence, this creates a highly adaptive defence capability—blending instinct, judgement and automation.
As AI becomes embedded in modern security tools, the demand shifts toward hybrid roles that blend cyber security fundamentals with AI literacy. The speakers noted a growing need for skills like prompt engineering, AI-driven threat hunting and adversarial AI awareness—especially in tools like Microsoft Security Copilot.
This isn’t about replacing cyber professionals. It’s about upskilling and empowering them to evolve with the tools they now use daily.
Turning to the dark side of generative AI, the speakers explained how threat actors increasingly use this powerful technology to create sophisticated and convincing phishing attacks, deepfakes and malware, and shared examples of each. This evolving threat landscape means detection alone isn't enough: contextual analysis and human oversight are now vital.
The webinar also discussed real-world incidents that highlight how AI is actively being weaponised.
Humans still outperform AI in critical thinking, empathy, creativity and communication.
AI isn’t replacing your team; it’s enhancing their impact.
Crucially, AI is still bound by its data. It can’t always explain why it made a decision, nor can it consider broader business risks or strategies.
CyberOne's experts stressed the importance of AI-powered pre-breach strategies to proactively defend against evolving threats. This proactive stance is essential for resilience, especially as threat actors continue to automate and scale their attacks.
One of the most thought-provoking parts of the session touched on AI's lack of ethical judgement. Unlike humans, AI doesn't consider fairness, privacy or proportionality. It simply optimises for outcomes, without a moral compass.
Organisations pairing AI and Human Intelligence unlock a truly resilient cyber security model.
Human defenders excel at interpreting why something is happening, not just what is happening.
This understanding helps distinguish a harmless anomaly from a strategic insider threat. It also enables humans to adapt in real-time, especially during “low and slow” attacks that AI might miss due to the lack of obvious patterns.
CyberOne combines the speed and scalability of AI with the intelligence and judgement of experienced analysts. Our Managed eXtended Detection and Response (MXDR) service leverages Microsoft Security's tools while our experts interpret the data, investigate threats and take decisive action.
During the webinar session, we asked attendees to share how their organisations are approaching AI in cyber security.
Here’s what we learned:
AI Governance is Inconsistent: Only 38% of attendees reported having a formal, enforced policy governing AI tool usage. Another 33% said they have an informal or evolving policy, while 29% had none at all—though some plan to implement one.
AI Adoption is Varied: 42% of participants have moderately or extensively integrated AI into their cyber security operations. However, 32% are still in early stages or not using AI at all, highlighting a wide maturity gap across organisations.
Top Security Challenge? Resources. When asked about their biggest cyber security challenge, 38% cited limited internal resources or expertise. This was followed by lack of visibility (31%) and the growing complexity driven by digital transformation and AI (15%).
CyberOne is offering on-demand access to the full session. Whether you're a security or IT leader or a business stakeholder, it's a must-watch.
Watch: "AI & Human Intelligence: The Best Defence Against Cyber Threats" On-Demand Webinar
View & Register: CyberOne's Upcoming Events & Webinars