The Boring Breach
I logged into the database and everything was gone. Not corrupted, not encrypted, just deleted and replaced with a polite request for Bitcoin.
The strange part was not the ransom note. It was realizing the damage happened months after the real mistake.
The Problem No One Wants to Admit
Most people imagine a breach as a sudden event. Something dramatic happens, alarms go off, and a clever attacker slips past defenses in real time. That framing is comforting because it implies urgency and bad luck rather than long-term neglect. Unfortunately, it is usually wrong.
The most damaging breaches are slow. They start with systems that still work, still deliver value, and still feel too annoying to retire. Old WordPress sites, shared servers, reused credentials, environment files full of secrets. Nothing breaks loudly enough to justify stopping and fixing it.
The real danger is not misconfiguration. The real danger is trust that outlives its justification.
Why Shared Infrastructure Multiplies Mistakes
When systems share resources, convenience becomes contagion. A single compromised application quietly becomes a gateway to everything else running nearby. We had five applications sharing one database server. Each had legitimate credentials. The attacker only needed to compromise one.
Internal access feels safe right up until it is not, because internal systems rarely expect to defend themselves. Why would they? They’re internal.
And backups make this worse psychologically. They provide confidence that data loss is recoverable, which is true, but they also mask the fact that compromise is still ongoing. You can restore last week’s data while the attacker still has this week’s credentials.
Backups reduce regret, not risk.
This is why extortion rarely begins with encryption anymore. It begins with patience.
The Fix Was Boring
The solution was not a tool. It was a series of uncomfortable decisions that felt excessive at the time.
We shut down unused sites instead of promising to clean them up later. We stopped sharing database credentials entirely; each application got its own user with access to exactly one database. We treated internal access as hostile by default and assumed every credential stored on a compromised machine was already lost.
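Here is a minimal sketch of what that separation can look like, assuming MySQL/MariaDB (the usual stack behind those old WordPress sites) and the mysql-connector-python driver. The application names, hosts, schema names, and the DB_ADMIN_PASSWORD variable are placeholders, not our actual setup.

# Sketch only: one database user per application, limited to that
# application's own schema. Assumes MySQL/MariaDB and the
# mysql-connector-python package; app names, hosts, schema names,
# and the DB_ADMIN_PASSWORD environment variable are placeholders.
import os
import secrets
import mysql.connector

APPS = {                     # hypothetical app -> the host it connects from
    "blog": "10.0.0.11",
    "shop": "10.0.0.12",
    "intranet": "10.0.0.13",
}

conn = mysql.connector.connect(
    host="db.internal",                        # placeholder database host
    user="admin",
    password=os.environ["DB_ADMIN_PASSWORD"],  # placeholder admin credential
)
cur = conn.cursor()

for app, host in APPS.items():
    password = secrets.token_urlsafe(32)       # unique secret per application
    cur.execute(f"CREATE USER '{app}'@'{host}' IDENTIFIED BY %s", (password,))
    # Data access only, on exactly one schema: no admin privileges,
    # no visibility into the other applications' databases.
    cur.execute(
        f"GRANT SELECT, INSERT, UPDATE, DELETE ON `{app}_db`.* TO '{app}'@'{host}'"
    )
    print(f"{app}: {password}   (goes into that one app's config and nowhere else)")

cur.close()
conn.close()

The specific statements matter less than the outcome: a credential stolen from one application can no longer read, let alone delete, anyone else's data.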
Most importantly, we stopped trying to clean compromised systems. We rebuilt them from scratch. This felt wasteful until we realized how expensive uncertainty really is. Knowing a system is clean is worth more than believing it probably is.
Security rarely fails because people do nothing. It fails because people keep doing what worked yesterday.
The Timeline That Changed Everything
When the database disappeared, I assumed the breach was recent. That assumption lasted until I found a cron job quietly restarting a fake system process every hour. It was designed to look boring: something you’d skip over in a process list.
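If you want to run that kind of check deliberately rather than stumble into it, here is a minimal sketch, assuming standard Linux cron locations (paths vary by distribution) and enough privileges to read them. It prints each cron file's modification time next to its active entries; anything with a timestamp you cannot explain deserves a closer look.

# Sketch: list system cron entries alongside each file's modification time.
# Assumes standard Linux locations; adjust the paths for your distribution.
import datetime
import glob
import os

CRON_LOCATIONS = [
    "/etc/crontab",
    "/etc/cron.d/*",
    "/var/spool/cron/crontabs/*",   # Debian/Ubuntu; RHEL uses /var/spool/cron/*
]

for pattern in CRON_LOCATIONS:
    for path in sorted(glob.glob(pattern)):
        mtime = datetime.datetime.fromtimestamp(os.path.getmtime(path))
        print(f"{mtime:%Y-%m-%d %H:%M}  {path}")
        with open(path, encoding="utf-8", errors="replace") as handle:
            for line in handle:
                line = line.strip()
                if line and not line.startswith("#"):
                    print(f"    {line}")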
The timestamps told the real story.
The attacker had been there since October. The database deletion in January was not the breach. It was the invoice.
That was the moment I understood: the problem was not the database at all. The problem was that we trusted systems we had stopped actively thinking about.
What Stays With Me
The most dangerous systems are the ones no one is actively maintaining. The ones that “just work” until they spectacularly don’t. The temporary solutions that became permanent infrastructure because removing them felt riskier than leaving them alone.
If you are running something temporary that has been around for years, that is probably where your breach will start.
If you just mentally inventoried your own infrastructure and felt a twinge of dread, you should probably subscribe. I write about the boring failures that quietly become expensive ones.

