By Lewis Pack, Head of Cyber Defence
TL;DR: Big cyber outages feel like failure. However, sometimes they are the only opportunity you’ll ever get to rebuild things properly. What matters is not how fast you get back online, but how well you were prepared when everything went wrong.
When Marks & Spencer hit the headlines earlier this year for having their click-and-collect service down for nearly four months, the public reaction was predictable.
“Four months? How is that even possible?”
From the outside, it looks outrageous. You imagine a broken button on a webpage. But behind that button is an entire world of interdependent systems.
Ransomware doesn’t politely take down one neat component. It tears through layers of dependency that took years to build.
“What looks like a single outage is often a dozen failures happening at once.”
If you don’t have clean backups, trusted infrastructure or confidence in what the attacker touched, you’re not restoring…you’re rebuilding.
And rebuilding a national retail operation from near-zero in four months is not slow. It’s almost impressive.
Most big organisations live under a “no downtime, ever” mindset. Even small changes require change requests, approvals, maintenance windows and rollback plans.
But when the system is already down, properly down, you suddenly have freedom.
At CyberOne we saw this on a smaller scale when a vendor changed an API and it affected one of our enrichment functions. It was frustrating, but I realised something: while that function was down, we had nothing to lose by changing it properly.
So we used that moment to push through improvements we’d been waiting months to make. We came back not just operational, but better.
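For a sense of what that looked like in practice, here is a minimal sketch of the kind of hardening we folded into the rebuilt function. The vendor endpoint and field names below are invented for illustration and this is not our production code; the point is the pattern: validate what the vendor sends, time out quickly and degrade gracefully, so a surprise API change never stalls the whole pipeline again.

```python
import requests

# Hypothetical vendor endpoint and field names, purely for illustration.
VENDOR_URL = "https://api.example-vendor.com/v2/ip-reputation"
REQUIRED_FIELDS = {"ip", "risk_score", "last_seen"}


def enrich_ip(ip: str, timeout: float = 3.0) -> dict:
    """Enrich an IP with vendor reputation data, degrading gracefully on failure."""
    fallback = {"ip": ip, "risk_score": None, "enrichment_status": "degraded"}

    try:
        resp = requests.get(VENDOR_URL, params={"ip": ip}, timeout=timeout)
        resp.raise_for_status()
        payload = resp.json()
    except (requests.RequestException, ValueError):
        # Network failure, 5xx or an unparseable body: degrade, don't die.
        return fallback

    # Guard against silent schema changes like the one that bit us.
    if not isinstance(payload, dict) or REQUIRED_FIELDS - payload.keys():
        fallback["enrichment_status"] = "schema_drift"
        return fallback

    return {
        "ip": payload["ip"],
        "risk_score": payload["risk_score"],
        "last_seen": payload["last_seen"],
        "enrichment_status": "ok",
    }
```

None of that is glamorous, but the outage bought us the time, and the mandate, to add guardrails we had been deferring.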
M&S and others often do the same at scale. When your house has already fallen down, you don’t rebuild it with the same weak foundations.
“If you’re already at 0% availability, why not return at 95% on a stronger foundation?”
That downtime is painful, but it’s often the only chance an organisation gets to modernise without fighting internal politics.
There’s a quiet truth in cyber security that most people don’t want to admit: a huge amount of what slows businesses down isn’t hackers, compliance or lack of talent. It’s technical debt.
Old code nobody wants to touch. Systems built on systems built on systems. Services that everyone knows are fragile but nobody dares to rebuild because “it works… for now”.
A major outage forces the issue.
Suddenly, the thing everyone avoided touching is already broken. The question stops being “should we change it?” and becomes “how do we make sure it never collapses like this again?”
That’s where the opportunity really lies.
“An outage doesn’t just expose weaknesses. It gives you permission to fix the things you always knew were wrong.”
Rebuilds allow organisations to rethink architecture, strip out legacy complexity and move to platforms that are easier to secure and scale. And right now, that includes a huge opportunity to lean into AI and machine learning in ways that were hard to justify before.
If you’re starting from a clean slate, you can design security, automation and AI-driven detection in from day one rather than bolting them on afterwards.
What was once a fragile chain of legacy systems can become something far more resilient, intelligent and maintainable.
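To make that concrete with one small, hypothetical example: “leaning into machine learning” can be as simple as routing events through an anomaly model from day one of the rebuild rather than retrofitting it later. The sketch below is a toy, not tied to M&S or any particular platform; it uses scikit-learn’s IsolationForest to flag unusual login behaviour from a few made-up features.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy features per login event: [hour_of_day, failed_attempts, mb_downloaded].
# In a real rebuild these would come from the new, consolidated event pipeline.
rng = np.random.default_rng(42)
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),   # mostly office hours
    rng.poisson(0.2, 500),    # very few failed attempts
    rng.normal(50, 15, 500),  # typical download volume
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_logins)

# Score two new events: one ordinary, one suspicious.
new_events = np.array([
    [11, 0, 45],    # looks like business as usual
    [3, 12, 900],   # 3am, many failures, huge download
])
scores = model.decision_function(new_events)  # lower = more anomalous
flags = model.predict(new_events)             # -1 = anomaly, 1 = normal

for event, score, flag in zip(new_events, scores, flags):
    label = "ANOMALY" if flag == -1 else "normal"
    print(f"{event} -> score {score:.3f} ({label})")
```

Trivial on its own, and far easier to justify when you are designing the data pipeline from scratch than when you are bolting it onto a tangle of legacy systems.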
It’s painful to admit, but sometimes the only way organisations ever get the green light for this level of modernisation is when something breaks so badly they have no choice but to rebuild. And if you’re going to rebuild anyway, you may as well come back faster, simpler and smarter than before.
Something else worries me more than any single outage. Cyber incidents used to be rare enough to shock people. Now you see them daily. Sometimes hourly. And that repetition has made people numb.
It’s leading to a subtle but dangerous mindset:
“Everyone gets hacked and they all seem to bounce back… so why invest more?”
What you don’t see in the headlines is everything it cost to get there.
Just because someone is back online doesn’t mean they escaped the impact.
“Survival shouldn’t be mistaken for resilience.”
Attackers always get the first move.
They can choose the timing, the target and the technique, and they only need to be right once.
You can have strong MXDR, strong controls and strong people. But someone, somewhere, will eventually hit you with something you weren’t expecting.
That’s why resilience matters as much as defence.
Being prepared isn’t about assuming you’ll win every battle. It’s about making sure you can stand back up when you don’t.
“Good security isn’t just about stopping everything. It’s about recovering fast when something gets through.”
I wish I could tell you that resilience is created by thick binders full of incident response plans. But when real incidents land, the first thing that usually disappears is the plan.
What doesn’t disappear? The instincts and muscle memory people have built through practice.
If the first time your board sees a ransom note is the real thing, you’re already behind.
If the first time your exec team talks about losing a core system is during an actual outage, expect chaos.
That’s why exercising matters. Not just for IT. For the people who make decisions. For the people who communicate with customers. For the ones who sign off budgets. For the ones who will be blamed if it goes wrong.
An unpractised team panics. A practised team performs.
“When things go bad, people don’t rise to the occasion. They fall to their level of preparation.”
Most boards still lack someone with genuine cyber experience. That means messages about risk go up the chain like a game of whispers: by the time they reach the top, the nuance is gone.
Organisations like CyberOne cannot magically fix that, but we can bridge the gap—if the internal champions trust us enough to bring us into the conversation.
We don’t need to “own the room”.
We just need to make sure the right truth gets heard.
Because our success is tied to our clients’ success. If they fall over after an incident, nobody wins.
When I look at the M&S outage, I don’t judge the four months. I ask different questions:
If your core service went down tomorrow… who would make the decisions? Who would talk to your customers? How quickly could you stand back up?
Those answers matter far more than any headline about downtime. Because resilience isn’t measured in how perfect your defences are. It’s measured in how calmly and decisively you stand back up when they fail.