Requirement
NIST SP 800-171 Rev. 2 / CMMC 2.0 Level 2 - Control IR.L2-3.6.3 - Test the organizational incident response capability.
Understanding the Requirement
This control requires an organization to validate its incident response capability by running tests that reveal process, tooling, and staffing gaps before a real incident occurs. The assessment objective is simple: the incident response capability is tested. Under NIST SP 800-171 Rev. 2 / CMMC 2.0 Level 2, testing should expose weaknesses in procedures, escalation paths, skillsets, and technical controls so you can make focused improvements and reduce response time and impact.
Technical Implementation
- Choose test types and schedule: Start with a mix of walk-throughs, tabletop exercises, and one live simulation per year. For most SMBs, a twice-yearly cadence (one tabletop and one simulation or walk-through) balances cost and effectiveness. Put these on the calendar and treat them as required business events.
- Build realistic scenarios: Develop scenarios that reflect your actual risks: failed patches, phishing-caused credential compromise, ransomware on a file share, or cloud misconfiguration. Use published tabletop exercise guides (such as CISA's Tabletop Exercise Packages) to structure injects, timelines, and expected decisions.
- Define participants and roles: Identify and involve employees with incident reporting, response, and operational responsibilities: system/network admins, on-call technicians, security leads, and an executive decision-maker. Assign a facilitator and a notetaker to capture decisions and action items during the exercise.
- Test tools and runbooks: During simulation exercises, validate that runbooks (playbooks) work: notification lists, escalation paths, rollback procedures, backups, and communications templates. Verify technical capabilities such as isolation steps, restore from backups, and rollback of patches in a controlled environment.
- Document and act on findings: Produce an after-action report that lists gaps, owners, deadlines, and severity. Convert findings into tracked remediation tasks in your ticketing or project system and verify closure with follow-up tests.
- Measure and improve: Track measurable outcomes such as time-to-detect, time-to-contain, time-to-recover, and number of runbook failures. Use those metrics to prioritize training, tool investments, and policy changes.
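The "Document and act on findings" step above can be sketched as a small script. This is a minimal illustration, not a prescribed format: the `Finding` fields mirror the after-action report contents the text calls for (gap, owner, deadline, severity), and the sorting rule (highest severity first, earliest deadline breaking ties) is one reasonable prioritization, not a requirement of the control.

```python
from dataclasses import dataclass
from datetime import date
from enum import IntEnum

class Severity(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Finding:
    """One gap identified during an incident response exercise."""
    description: str
    owner: str
    deadline: date
    severity: Severity

def to_remediation_queue(findings):
    """Order findings for remediation: highest severity first,
    earliest deadline breaking ties."""
    return sorted(findings, key=lambda f: (-f.severity, f.deadline))

# Example findings (hypothetical, echoing the AcmeTech scenario below)
findings = [
    Finding("Change control policy unclear", "Security lead",
            date(2024, 8, 1), Severity.MEDIUM),
    Finding("On-call tech lacks rollback credentials", "IT manager",
            date(2024, 7, 15), Severity.HIGH),
]
queue = to_remediation_queue(findings)
print(queue[0].description)  # highest-severity item first
```

In practice these records would be created as tickets in your tracking system; the point is that every finding carries an owner, a deadline, and a severity so closure can be verified in a follow-up test.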
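The metrics in the "Measure and improve" step above reduce to simple timestamp arithmetic. A sketch, assuming a common convention in which each interval is measured from the preceding milestone (some teams instead measure containment and recovery from the moment of occurrence; either works if applied consistently):

```python
from datetime import datetime

def response_metrics(occurred, detected, contained, recovered):
    """Compute time-to-detect/contain/recover in minutes.

    Each interval is measured from the previous milestone:
    detect = occurred->detected, contain = detected->contained,
    recover = contained->recovered.
    """
    minutes = lambda start, end: (end - start).total_seconds() / 60
    return {
        "time_to_detect_min": minutes(occurred, detected),
        "time_to_contain_min": minutes(detected, contained),
        "time_to_recover_min": minutes(contained, recovered),
    }

# Hypothetical exercise timeline for illustration
m = response_metrics(
    occurred=datetime(2024, 6, 1, 9, 0),
    detected=datetime(2024, 6, 1, 9, 45),
    contained=datetime(2024, 6, 1, 11, 0),
    recovered=datetime(2024, 6, 1, 14, 30),
)
print(m["time_to_detect_min"])   # 45.0 minutes
```

Tracking these numbers across successive exercises shows whether runbook changes and training are actually shortening the response cycle.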
Example in a Small or Medium Business
AcmeTech, a 75-person engineering firm, schedules twice-yearly incident response tests. For their tabletop, leadership designs a scenario in which a rushed patch deployment prevents users from logging in. They invite the network administrator, the on-call technician, the IT manager, and a member of HR to simulate communications. During the exercise, the team walks through first-response steps: who receives the initial report, how systems are isolated, and whether rollback options exist for the patch. The on-call technician demonstrates their runbook but notes they lack the credentials to execute the rollback, and escalation to the network admin is slow. The group also discovers the change control policy isn't well understood and that disciplinary steps are not clearly defined. The facilitator documents actions: update the patch rollback procedure, add the necessary credentials to the vault with proper controls, schedule a focused training session, and clarify change control responsibilities. Within 30 days, AcmeTech updates its runbooks, conducts the training, and assigns owners to remediate the deficiencies, then verifies improvements in a follow-up walk-through.
Summary
Testing your incident response capability combines practical policy and technical measures: scheduled exercises, realistic scenarios, clear roles, validated runbooks, and a formal corrective action process. For SMBs this approach uncovers process and staffing gaps early, produces actionable remediation tasks, and measurably improves response times and business continuity—helping you meet IR.L2-3.6.3 while strengthening your overall security posture.