Every year, security teams run tabletop exercises and check a box. Compliance is satisfied. The calendar clears. And almost nothing changes.
That is the core problem with how most organizations approach tabletops today.
The Checkbox Mentality
Compliance frameworks require incident response testing. PCI DSS, HIPAA, SOC 2, NIST CSF: they all ask organizations to demonstrate that they have tested their plans. Tabletop exercises satisfy that requirement at the lowest possible cost. Run a scenario, document the conversation, file the report.
The exercise becomes an artifact, not a training event.
This mindset shapes every decision downstream. Generic scenarios get selected because they are easy to facilitate and hard to fail. The guest list skews toward IT and security, leaving out legal, communications, finance, and operations. Discussions happen over PowerPoint slides. Findings get noted and forgotten.
The result is a team that has technically "done a tabletop" but has never felt the pressure of a real incident.
Why Generic Scenarios Don't Work
A ransomware scenario can be a powerful exercise. It can also be a complete waste of time. The difference is specificity.
Generic ransomware scenarios describe a company called "AcmeCorp" with unnamed systems and vague stakeholders. Participants know it is a simulation. They answer questions hypothetically, without the cognitive load that comes from seeing their actual environment under attack.
A realistic scenario names the actual file servers. It references the specific backup solution the organization uses. It includes the name of the executive whose credentials were compromised. It describes the threat actor's lateral movement through the real network segments the team manages every day.
That level of detail forces genuine engagement. It surfaces actual gaps: the runbook that references a system that was decommissioned, the contact list with wrong phone numbers, the escalation path nobody has tested.
Building Realistic Scenarios at Scale
Creating that kind of customized scenario by hand is expensive. Ron Dilley, a SANS faculty member and former Warner Bros. CISO, described the challenge clearly:
"I was blown away... the good ones, they're a month's worth of work. Not just creating everything, creating the scenario, creating all the material, the artifacts, the deck, but also testing them, because you have to play through them again and again and again and refine them before you can put them in front of the target audience. And the reality is that this capability does an order of magnitude more data collection and analysis, and then creates everything in 40 minutes."
Reflex Security built a platform that solves this problem through automated OSINT collection. Enter a domain name. In 10 to 15 minutes, the platform generates five fully customized scenarios based on publicly available information about that organization.
The platform pulls from job postings to identify the technology stack. It finds executive names and organizational structure. It reviews recent security incidents affecting similar organizations. It analyzes DNS records, maps subprocessors and vendors, and reviews patents, blogs, and public filings.
The scenarios that emerge are rich in forensic detail: specific subnets, registry keys, command and control infrastructure, lateral movement paths. They read like real incident reports because they are built from real data about the target organization.
This is not a generic template with the company name swapped in. It is a scenario that could actually happen to that organization, built from the same information an attacker would use.
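The collect-then-render flow described above can be sketched in miniature. The profile fields, names, and inject text below are illustrative assumptions for the sketch, not Reflex's actual data model or output:

```python
from dataclasses import dataclass

@dataclass
class OrgProfile:
    """Public signals gathered about a target organization (illustrative fields)."""
    domain: str
    tech_stack: list[str]   # e.g., inferred from job postings
    executives: list[str]   # e.g., from filings and org pages
    subnets: list[str]      # hypothetical internal ranges, used for realism
    backup_solution: str

def render_injects(profile: OrgProfile) -> list[str]:
    """Turn a collected profile into scenario injects with org-specific detail.

    This is what separates a data-driven scenario from a find-and-replace
    template: every inject references something real about the organization.
    """
    exec_name = profile.executives[0]
    return [
        f"Phishing email delivered to {exec_name}; {profile.domain} "
        f"SSO credentials harvested.",
        f"Attacker enumerates {profile.subnets[0]} and stages tooling "
        f"on a file server.",
        f"{profile.backup_solution} console accessed; retention policies "
        f"modified before encryption begins.",
    ]

# Example profile (all values are placeholders):
profile = OrgProfile(
    domain="example.com",
    tech_stack=["VMware", "Veeam"],
    executives=["J. Doe (CFO)"],
    subnets=["10.20.0.0/16"],
    backup_solution="Veeam",
)
for inject in render_injects(profile):
    print(inject)
```

Even this toy version makes the point: swap in a different profile and every inject changes, because the detail comes from the data, not the template.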
The Facilitation Problem
Scenario quality is only half the equation. Even the best scenario can produce a poor exercise if the facilitation is weak.
Most traditional tabletops rely on a human facilitator reading inject cards, moderating discussion, and keeping the conversation moving. The quality of the exercise depends entirely on the facilitator's expertise and preparation. That creates inconsistency, and it does not scale.
How the exercise runs, and who runs it, determines whether participants actually change behavior. Good scenarios set the stage, but what happens when the exercise begins and there is no script? The next article examines facilitation: what makes a tabletop feel like a real incident rather than a guided discussion, how AI-driven adaptive simulation replaces inject-based tabletops, and why that difference matters.
