The Play-Pretend Problem

Walk into any large enterprise’s security team and ask them to show you what they’ve built. You’ll see tools. Lots of them. A SAST platform, an SCA scanner, a CSPM dashboard, maybe a GRC suite, probably a SIEM. You’ll see policies — information security policy, acceptable use, data classification, incident response. You’ll see frameworks — ISO 27001 mapped, SOC 2 controls documented, maybe a NIST alignment matrix. You’ll see org charts with roles and responsibilities clearly delineated.

From the outside, it looks mature. Everything a security programme should have is there. The audit checklist is ticked. The board gets a quarterly report with a dashboard that’s mostly green.

And yet — the vulnerability backlog hasn’t shrunk in two years. The GRC team and the security engineers have never been in the same room. The risk register exists but hasn’t been updated since it was created — and there’s a decent chance it’s still an Excel spreadsheet being emailed between three people who each have a different version. The CSPM tool generates 300 findings a week and nobody can tell you which ones matter. A third-party firewall sits in front of a cloud environment that already has a cheaper, better-integrated native alternative.

This is play-pretend security. Everything looks right. Nothing works.


How it happens

Nobody sets out to build a security programme that’s all theatre. It accumulates.

A team gets budget allocated for the year. The budget needs to be spent or it gets cut next cycle. A vendor pitches a product that looks good in the demo and ticks an auditor’s checkbox. The product gets bought. It gets deployed — partially. The team that bought it uses it for the narrow use case they had in mind. Nobody else in the organisation knows it exists, or if they do, they don’t have access. The GRC team has their own tools. The security operations team has theirs. The developers have theirs. Each team can point to their stack and say “we’re covered.”

But no one looks across the boundaries. The CSPM tool that could power risk management for the entire cloud environment sits in one team’s silo, generating alerts that go nowhere because the team running it doesn’t have the context to prioritise them and the team that does have the context doesn’t know the tool exists.

Third-party products get stacked on top of native capabilities for no reason other than procurement inertia. I’ve seen organisations deploy commercial firewall appliances in front of cloud environments where the cloud provider’s own firewall is more tightly integrated, more capable for that specific environment, cheaper to maintain, and introduces no additional third-party supply chain risk. The commercial product was bought because that’s what the networking team knew, or because it was in an existing enterprise agreement, or because someone in procurement had a relationship with the vendor. Nobody asked whether the native option was better — because the question wasn’t about capability. It was about budget utilisation and vendor management.

Just as security is often an afterthought in development and deployment, risk is an afterthought in tool acquisition. The question “what problem does this solve?” gets asked. The question “does something we already have solve it better?” almost never does. There’s internal competition where there should be collaboration.


Where risk management should be — and where it actually is

If there’s one thing that’s supposed to tie all of this together, it’s risk management: the function that sits above individual tools and teams and asks what our actual risks are, who owns them, and whether they’re being managed.

In practice, risk management in most organisations I’ve worked with is a compliance exercise maintained by a disconnected GRC team. They have a risk register. It was populated during an audit prep cycle. It maps to a framework — ISO 27001, NIST, whatever the auditors want to see. It lives in a platform that nobody outside the compliance function logs into.

The security engineers don’t contribute to it. The developers don’t know it exists. The operational risks — the ones that actually matter day to day — aren’t in it. Business risks that should inform security priorities aren’t captured. The register is technically complete, in the sense that it has rows and columns and a last-modified date. It doesn’t reflect reality.

Policies and standards exist alongside it, equally disconnected. An information security policy that runs to forty pages, approved by an executive who didn’t read it, filed in a SharePoint folder that nobody visits. Standards that describe what should happen without any mechanism to verify whether it does. Annual reviews that consist of changing the date on the cover page.

None of this is cynical. The people maintaining these artefacts are usually doing their best within the constraints they’re given. The problem is structural: GRC operates in a silo, security engineering operates in a different silo, and the bridge between “what risks have we identified?” and “what are we actually doing about them?” doesn’t exist.


Three companies, same gap

I’ve worked with three organisations on this specific problem. Each had a different starting point. All had the same gap: risk management was an artefact, not an operation.

The first had the best setup of the three — a solid risk management framework, good documentation, clear structure. What was missing was the downstream translation. The framework existed on paper, but the annual risk review was a bureaucratic exercise. Nobody was interviewing teams, nobody was assessing whether controls were actually working. What we built was the collaboration layer — running actual risk reviews with actual teams, feeding results directly into the security programme instead of filing them in a register. When a Big Four auditor reviewed the process, they admired the maturity. Not the documentation — the fact that it was being used.

The second had budget and a licensed GRC platform that was already deployed. But it was being used by a handful of people in legal and compliance. The broader security organisation didn’t touch it. No asset inventory. No risk assessment process. No structured preparedness for the compliance frameworks they were working toward. The platform wasn’t the problem — it was capable. The process around it didn’t exist. I set it up from scratch: asset inventory, risk assessment questionnaires, compliance mapping, review meetings. The platform that had been collecting dust became the backbone of a functional programme.

The third had no mature GRC platform at all. A stack of frameworks that had never been operationalised. The proposal carried the same intent — asset inventory, risk management, vulnerability management — but this time building from zero. That work is still in progress.

Three very different organisations. One had the framework but not the practice. One had the tool but not the process. One had neither. Same underlying problem every time.


Why it persists

The consulting industry has a role in this, and it’s not a comfortable one to name.

When the incentive structure rewards billable days over solved problems, there’s a pull toward work that generates artefacts rather than outcomes. Another framework document. Another maturity assessment. Another policy review. These are legitimate activities, and the people doing them aren’t doing anything wrong. But they can become self-perpetuating — the client gets a document, the document needs a review cycle, the review cycle generates more work, and somewhere along the way the question “is the problem actually getting smaller?” stops being asked.

I’ve been in engagements where the right answer was: build something operational today on the platform your teams already use, and keep the compliance artefacts separate for the auditors. That’s a less glamorous deliverable than a 60-page framework document. It’s also what actually changes things.

The play-pretend problem persists because it’s comfortable for everyone involved. The organisation gets to tell its board that security tools are deployed, frameworks are in place, and risk is being managed. The consulting firm gets to bill for producing and reviewing documents. The auditor gets to check boxes. And the actual security posture — the thing all of this exists to improve — stays flat, measured by metrics that are themselves part of the theatre.


What breaking it looks like

The pattern that worked was the same in every engagement where I got the chance to apply it.

Start with asset inventory. You can’t manage risk for things you don’t know you have. I’ve walked into organisations with hundreds of applications and no central register of what exists, who owns it, or what data it handles. Every other security activity — pentesting, vulnerability management, risk assessment, compliance — is guesswork until this is in place.
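
To make that concrete, here’s a minimal sketch of what a single register entry might capture, written in Python purely for illustration. The field names and example assets are hypothetical, not a prescribed schema; the point is that ownership, data classification and exposure live in one queryable record instead of in people’s heads.

```python
# Illustrative only: a minimal asset register entry. Field names and the
# example assets below are hypothetical, not a prescribed schema.
from dataclasses import dataclass, field


@dataclass
class Asset:
    name: str
    owner: str                      # a named team, not "IT"
    data_classification: str        # e.g. "public", "internal", "confidential"
    internet_facing: bool
    dependencies: list[str] = field(default_factory=list)


register = [
    Asset("billing-api", owner="payments-team",
          data_classification="confidential", internet_facing=True,
          dependencies=["postgres-prod", "payment-gateway"]),
    Asset("internal-reporting", owner="finance-eng",
          data_classification="internal", internet_facing=False),
]

# Every downstream activity becomes a query instead of guesswork, e.g.
# "which confidential, internet-facing assets do we even have?"
exposed = [a.name for a in register
           if a.internet_facing and a.data_classification == "confidential"]
print(exposed)  # -> ['billing-api']
```

Whether this lives in a GRC platform, a CMDB or a spreadsheet matters less than that it exists, has named owners, and is kept current.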

Build risk management as an operational function. Risk assessments should involve actual conversations with actual teams — not forms filled out by a compliance analyst working from last year’s register. The output should feed directly into what gets prioritised for security work. If a risk assessment identifies a critical gap and nothing changes in the security programme as a result, the assessment was theatre.

Connect the tools that already exist. Most of the organisations I’ve worked with didn’t need more tools. They needed the tools they had to be used by the right people, with the right process, pointed at the right problems.

Here’s what this looks like in practice. At one engagement, the vulnerability management process was drowning in raw output — Nessus reports, pentest findings, CSPM alerts, third-party scanner results — all accumulating with default CVSS scores as the only metric for severity. Even the pentest reports were using unmodified CVSS, which was the most absurd part. A critical-rated finding on an internal tool reachable from four machines, used by five people, was being treated with the same urgency as a critical on a public-facing API handling customer data. Without a risk profile, there was no way to prioritise meaningfully. Nobody knew what compensating controls existed. Nobody had mapped which assets were actually exposed. The mitigation teams lost trust in the severity ratings immediately — when a “critical” doesn’t match their understanding of the actual risk, they stop taking the next one seriously.

Then came the customer pressure. External stakeholders wanted reports, and there was nothing to share except raw, unfiltered scanner output. That created another layer of play-pretend — customers seeing inflated severity counts, demanding urgent fixes for issues that weren’t urgent in context, putting pressure on DevOps teams who knew the real risk didn’t match what the report said but had no way to explain it.

What I built was a contextual severity layer. Every finding got assessed against the actual state of the asset — what controls exist, what’s the real exposure, what’s the calculated impact, what compensating controls reduce the likelihood. The CVSS score got recalculated to reflect the environment, not the generic vulnerability. The report that went to the customer showed real findings — triaged, contextually scored, with adjusted severities that reflected actual risk. Findings got fixed within SLA because the teams trusted the ratings and the customers saw a report that made sense.
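
For a sense of what that recalculation can look like, here’s a simplified sketch in Python. It’s a heuristic illustration, not the official CVSS environmental formula, and the field names and deduction values are assumptions that would be tuned to the organisation.

```python
# A heuristic sketch of contextual re-scoring. Not the CVSS environmental
# formula; the fields and deductions are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AssetContext:
    internet_facing: bool        # reachable from outside the network?
    handles_customer_data: bool  # would a breach expose customer data?
    user_count: int              # rough blast radius
    compensating_controls: int   # e.g. WAF, segmentation, MFA in front of it


def contextual_severity(base_cvss: float, ctx: AssetContext) -> float:
    """Adjust a generic CVSS base score to reflect the asset's real exposure."""
    score = base_cvss
    if not ctx.internet_facing:
        score -= 2.0             # internal-only reachability lowers urgency
    if not ctx.handles_customer_data:
        score -= 1.0             # limited impact without sensitive data
    if ctx.user_count < 10:
        score -= 0.5             # tiny blast radius
    score -= 0.5 * min(ctx.compensating_controls, 3)  # cap credit for controls
    return max(round(score, 1), 0.1)                  # keep a non-zero floor


# The internal tool used by five people from the example above:
internal_tool = AssetContext(internet_facing=False, handles_customer_data=False,
                             user_count=5, compensating_controls=2)
print(contextual_severity(9.8, internal_tool))  # ~5.3: high, not critical

# The public-facing API handling customer data:
public_api = AssetContext(internet_facing=True, handles_customer_data=True,
                          user_count=10_000, compensating_controls=0)
print(contextual_severity(9.8, public_api))     # stays 9.8: genuinely critical
```

The exact numbers matter less than the fact that every adjustment is explicit and tied to a property of the asset, so the team receiving the finding can see exactly why a 9.8 became a 5.3.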

That’s not a tool. It’s a process that sits between detection and decision-making — the exact gap that play-pretend leaves empty.

Security will always be treated as overhead if what lands on a DevOps team’s desk is raw, context-free scanner output with 30 policies and 60 standards documents stacked on top of it — none of which make sense to the people who actually get things done. That’s not a security programme. That’s a burden dressed up as governance.

But when the policies resonate — when they’re written in language the teams recognise, tied to problems they’ve actually seen — something shifts. At one client, risk assessments stopped being a compliance chore and became something the teams genuinely engaged with. Developers started raising risks during interviews unprompted — not because they were asked to fill a form, but because they could see what was wrong and understood why it needed to be fixed. That’s what risk management looks like when it’s operational instead of performative. The people closest to the code become part of the process instead of being on the receiving end of it.

Stop buying tools to spend budget. Every tool acquisition should start with: what problem does this solve that our existing capabilities can’t, and who specifically will use it daily? If the answer to the second question isn’t clear, the purchase is play-pretend. It’s adding a line item to the security stack without adding security.


The uncomfortable bit

This is the post in the series that’s hardest to write because there’s no clean villain. The organisations aren’t being negligent — they’re doing what the incentive structure rewards. The consulting firms aren’t being dishonest — they’re delivering what the client asked for. The auditors aren’t being careless — they’re checking against a standard. Everyone is playing their role correctly within a system that optimises for appearances over outcomes.

Breaking the pattern means someone has to step outside that system and ask whether the security programme is actually making the organisation more secure — not whether it looks like it is. That’s a harder question to ask than it sounds, because the answer is often uncomfortable, and the people who need to hear it are the ones signing the invoices.

Every problem I’ve written about in this series — how findings get delivered, how teams relate to each other, what tools actually do versus what they’re bought to do — feeds into this one. Fix the delivery and teams start collaborating. Fix the collaboration and tools start being used properly. Fix the tooling approach and risk starts being managed for real. But if the underlying incentive is to look secure rather than be secure, each of those fixes gets absorbed back into the theatre.

The only counter I’ve found is to build something that works — visibly, operationally, on a platform people actually use — and let the results speak louder than the documentation. It’s not a framework. It’s not a methodology. It’s a decision to measure success by whether the problem is getting smaller, not by whether the report is getting longer.
