This post explains how small and mid-sized organizations can design, run, document, and improve tabletop and live incident response exercises to validate their incident response capability and meet NIST SP 800‑171 Rev. 2 / CMMC 2.0 Level 2 control IR.L2‑3.6.3. It covers practical steps, technical details, evidence mapping, and real‑world examples.
Why IR.L2‑3.6.3 matters (practical compliance context)
IR.L2‑3.6.3 expects organizations handling Controlled Unclassified Information (CUI) to validate their incident response (IR) capability through exercises. For auditors and assessors, this means you must show regular, objective testing of your IR plan, team roles, communications, technical detection/remediation workflows, and triage/forensics processes. Practical validation closes gaps you won't discover until an actual incident — e.g., misrouted escalation emails, missing SIEM parsers, or backup restores that fail. For small businesses, exercises are a low-cost, high-value control: they provide evidence (exercise plans, attendee logs, AARs, artifacts) and materially reduce time to containment and the risk of CUI exposure.
Types of exercises and tangible objectives
Use a matrix of exercise types: tabletop (discussion-based), walkthrough (step-through procedures), functional (technical validation of a subsystem, e.g., EDR response), and full‑scale/live (production-impacting simulation). Each has clear objectives: tabletop — leadership decision-making and communications; walkthrough — plan completeness and checklist accuracy; functional — SIEM/EDR alerting and automated containment; live — end‑to‑end detection, containment, and recovery including backups and incident communications. For compliance, prioritize at least annual tabletop and periodic functional/live tests for high‑risk assets or after major changes (cloud migration, new HR systems storing CUI).
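The exercise matrix above can be kept as data rather than a static document, so your annual plan stays checkable. The following is a minimal sketch, assuming illustrative cadences (none are prescribed by IR.L2‑3.6.3) and hypothetical names:

```python
# Hypothetical exercise-type matrix: each entry pairs a format with its
# primary objective and an assumed cadence in months. Adjust cadences to
# your own risk profile; IR.L2-3.6.3 does not mandate specific intervals.
EXERCISE_MATRIX = {
    "tabletop":    {"objective": "leadership decision-making and communications",
                    "cadence_months": 12},
    "walkthrough": {"objective": "plan completeness and checklist accuracy",
                    "cadence_months": 12},
    "functional":  {"objective": "SIEM/EDR alerting and automated containment",
                    "cadence_months": 6},
    "live":        {"objective": "end-to-end detection, containment, recovery",
                    "cadence_months": 12},
}

def due_exercises(months_since: dict) -> list:
    """Return exercise types whose cadence has elapsed.

    months_since maps exercise type -> months since it was last run;
    types never run are treated as overdue.
    """
    return [t for t, spec in EXERCISE_MATRIX.items()
            if months_since.get(t, 999) >= spec["cadence_months"]]
```

A quick call such as `due_exercises({"tabletop": 13, "functional": 3})` flags the tabletop as overdue (and anything never run), which doubles as input for the exercise schedule assessors ask to see.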
How to run a tabletop exercise — step by step
1. Scope and objectives: pick 1–2 objectives (e.g., validate the escalation path, test legal/vendor notifications, exercise CUI containment).
2. Assemble participants: IR lead, IT/sysadmin, security analyst, legal/compliance, HR (if insider risk), ops, and an executive decision-maker.
3. Create a 1‑page scenario rooted in recent threat intelligence (ransomware encrypting a file share with CUI, or a contractor clicking a phishing link leading to credential theft).
4. Draft injects (emails, logs, vendor calls) delivered every 10–20 minutes to push decisions.
5. Assign a facilitator/controller to keep time and record decisions; appoint observers to note policy gaps.
6. Run a 60–120 minute session: present the scenario, walk through actions and decisions, capture timelines, and note any process gaps or tooling failures.
7. Immediately after, conduct a 30–60 minute after‑action review (AAR) producing a prioritized POA&M (plan of action and milestones).
8. Save all artifacts (slide deck, participant list with titles, chat logs, AAR, screenshots) as compliance evidence.
How to run a live/functional exercise — step by step (technical details)
1. Start with a safety and rules of engagement (ROE) document: whitelist IPs, define the blast radius, have rollback procedures and backups tested in advance, and notify upstream providers if necessary.
2. Select a contained environment (segregated VLAN, test tenant, or cloud sandbox).
3. Example technical scenario: simulate credential compromise and lateral movement using mock accounts and benign tools (e.g., adversary emulation frameworks in a controlled lab).
4. Validate detection logic: ensure SIEM parsers collect Windows Event logs, Sysmon, PowerShell transcription, EDR telemetry, and firewall logs; run prewritten queries to verify alerts fire (YARA, Sigma rules, or custom EDR detections).
5. Test response automation: confirm the EDR quarantines the host, NAC isolates it, backup restores work, and an incident ticket is created automatically.
6. Capture evidence in real time: SIEM alerts, EDR screenshots, packet captures (pcap), system images if safe/required, and time‑stamped command output.
7. After the exercise, produce a technical AAR with root cause analysis, remediation steps, metrics (MTTD, MTTR), and updated runbooks.
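Time‑stamped command output is easiest to defend when it is captured the same way every time and hashed at capture. A minimal sketch of such an evidence-capture helper, assuming a POSIX host; the command shown is a placeholder for whatever query or check your exercise runs:

```python
import hashlib
import json
import subprocess
from datetime import datetime, timezone

def capture_evidence(label: str, cmd: list) -> dict:
    """Run a command and return a time-stamped, hashed evidence record.

    The SHA-256 over the serialized record lets you show later that the
    artifact in your evidence index has not been altered.
    """
    out = subprocess.run(cmd, capture_output=True, text=True)
    record = {
        "label": label,
        "utc": datetime.now(timezone.utc).isoformat(),
        "command": " ".join(cmd),
        "stdout": out.stdout,
        "returncode": out.returncode,
    }
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

# e.g. capture_evidence("host-isolation-check", ["ping", "-c", "1", "10.0.0.5"])
```

Appending each record to a JSON lines file during the exercise gives you the "time‑stamped command output" artifact in one pass, ready to link from the evidence index described below.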
Evidence mapping, metrics, and artifacts for auditors
Map exercise artifacts directly to IR.L2‑3.6.3 evidence expectations: the exercise schedule and roster (e.g., a table of planned periodic exercises referenced in your IR policy), the exercise plan and objectives (scope and ROE), the scenario book and injects, attendance logs with the roles exercised, the AAR with prioritized findings and POA&M, and technical artifacts (SIEM queries, EDR alerts, pcap, screenshots, ticket history, backup restore logs). Track metrics: time to detection (MTTD), time to containment (MTTC), time to recovery (MTTR), number of playbooks executed, and percentage of findings remediated within planned time. Keep a simple evidence index (spreadsheet) linking each artifact to the control requirement and the assessor's expected evidence so audits are straightforward.
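The MTTD/MTTC/MTTR metrics above fall out of a handful of timeline entries. A minimal sketch, assuming ISO 8601 timestamps and field names of our own choosing (use whatever your ticketing system exports):

```python
from datetime import datetime

def ir_metrics(timeline: dict) -> dict:
    """Compute detection/containment/recovery metrics in minutes.

    timeline maps milestone name -> ISO 8601 timestamp string. The
    milestone names here are illustrative, not a standard schema.
    """
    ts = {k: datetime.fromisoformat(v) for k, v in timeline.items()}

    def minutes(a, b):
        return (ts[b] - ts[a]).total_seconds() / 60

    return {
        "MTTD_min": minutes("compromise", "detected"),   # compromise -> detection
        "MTTC_min": minutes("detected", "contained"),    # detection -> containment
        "MTTR_min": minutes("detected", "recovered"),    # detection -> recovery
    }

print(ir_metrics({
    "compromise": "2024-05-01T09:00:00",
    "detected":   "2024-05-01T09:42:00",
    "contained":  "2024-05-01T10:15:00",
    "recovered":  "2024-05-01T13:30:00",
}))
```

Computing these the same way for every exercise is what makes the trend line (is MTTC shrinking release over release?) meaningful to an assessor.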
Small business examples and low‑cost implementations
Scenario A (small MSP): a phishing email with a malicious link leads to a compromised service account. Tabletop objective: test notification to affected customers and escalation to third‑party forensics. Use low-cost injects (printed emails and mock SIEM screenshots) and a one-hour tabletop with the owner, IT technician, and client account manager. Scenario B (manufacturing shop handling CUI): ransomware hits a file server. Run a walkthrough to verify backup restores to a clean environment and that firewall rules can be updated to block C2 traffic. For technical validation when budgets are tight, use open-source tools (Zeek for network logs, osquery, Velociraptor) and a home‑lab or cloud instance to run functional tests. If you lack in-house skills, contract a short engagement with a local MSSP/IR firm to run a single live exercise and produce an AAR you can reuse and expand into internal training.
Compliance tips, best practices, and risks of not implementing
Best practices: tie exercises to real threats and recent incidents, involve executives (decision-makers must be exercised, too), make playbooks actionable and versioned in a runbook repository, and automate evidence collection (SIEM dashboards, ticket exports). Prioritize remediation of high‑impact AAR findings and track closure in your POA&M. Technical tip: ensure all systems have synchronized clocks (NTP), centralize logs with at least 90 days of retention for investigation, and enable PowerShell and Sysmon logging on Windows hosts that store CUI. The risks of not implementing IR.L2‑3.6.3 are significant: slower detection and containment, prolonged business interruption, uncontrolled exfiltration of CUI, contract breaches with the DoD or prime contractors, failed CMMC/NIST assessments, regulatory fines, and reputational damage. Demonstrable exercises materially reduce these risks and provide the audit trail assessors expect.
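The 90‑day retention recommendation is worth verifying rather than assuming. A hedged sketch that checks the oldest file in a central log directory; the path and threshold are assumptions to adapt to your environment:

```python
import os
import time

def retention_days(log_dir: str) -> float:
    """Age in days of the oldest file under log_dir (0.0 if empty).

    A quick sanity check that the retention window you claim in policy
    actually exists on disk; it does not validate log completeness.
    """
    mtimes = [os.path.getmtime(os.path.join(root, f))
              for root, _, files in os.walk(log_dir) for f in files]
    if not mtimes:
        return 0.0
    return (time.time() - min(mtimes)) / 86400

# e.g. assert retention_days("/var/log/central") >= 90, "retention gap"
```

Running this as part of exercise preparation catches the common failure mode where a log rotation change silently shortened retention months earlier.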
Summary
To satisfy IR.L2‑3.6.3 you must move beyond a written plan: run regular tabletop and periodic live/functional exercises, document objectives and participants, collect technical and administrative artifacts, measure MTTD/MTTR, and close findings via a POA&M. Small businesses can scale exercises to budget and risk, leverage open‑source tooling or short external engagements, and still produce the evidence auditors require. Start with an annual tabletop, add targeted functional tests for critical systems, and use each AAR to harden your detection, containment, and recovery processes so your organization reliably protects CUI and meets CMMC/NIST expectations.