Control IR.L2-3.6.3 in NIST SP 800-171 Rev. 2 and CMMC 2.0 Level 2 requires organizations to test their incident response capability. This post walks small and medium businesses (SMBs) through a practical, step-by-step approach to designing, executing, measuring, and documenting those tests so you can demonstrate compliance and improve real-world readiness.
Step-by-step implementation plan
Step 1 — Define scope and objectives: map the systems that process Controlled Unclassified Information (CUI) and document which assets, users, and data flows are in-scope for IR.L2-3.6.3 testing. Your objectives should be measurable: validate detection of simulated malware, validate containment playbooks for a compromised workstation, verify backup restoration for critical CUI repositories, and confirm notification procedures to internal stakeholders and the contracting officer where required. Record this in the System Security Plan (SSP) and link each test to a specific requirement or control clause.
Step 2 — Prepare the test plan and artifacts: create a written test plan that includes test types (tabletop, walk-through, technical/functional, full-scale), schedule, required tools, logging and evidence collection requirements, success criteria, and roles. For each scenario include a timeline of injects (events you will introduce), expected detections and alerts, and required logs: Windows Security and Sysmon logs, Linux auditd, AWS CloudTrail, Azure Activity Logs, firewall and IDS alerts, email gateway logs, and endpoint telemetry from EDR. Specify technical prerequisites such as NTP synchronization across hosts, hashing algorithm for artifact integrity (e.g., SHA-256), and chain-of-custody templates for evidence preservation.
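The SHA-256 integrity and chain-of-custody requirements above can be scripted so every collected artifact gets a consistent record. This is a minimal sketch: the metadata fields and collector identity are illustrative, and a real evidence pipeline would write the record to a tamper-evident store.

```python
# Sketch: SHA-256 integrity hash plus a basic chain-of-custody record
# for a collected artifact. Field names and values are illustrative.
import hashlib
import json
import os
import tempfile
from datetime import datetime, timezone

def hash_artifact(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def custody_record(path: str, collector: str, method: str) -> dict:
    """Build a chain-of-custody entry: who collected what, when, and how."""
    return {
        "artifact": path,
        "sha256": hash_artifact(path),
        "collected_by": collector,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "method": method,
    }

# Example: a small stand-in artifact written to a temp file.
with tempfile.NamedTemporaryFile(delete=False, suffix=".img") as f:
    f.write(b"example evidence bytes")
    artifact = f.name
record = custody_record(artifact, "analyst@example.com", "disk image (illustrative)")
print(json.dumps(record, indent=2))
os.unlink(artifact)
```

Re-hashing the stored image later and comparing digests demonstrates the artifact has not been altered since collection.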
Step 3 — Execute progressively: start with a tabletop exercise to walk through the incident response playbook with stakeholders (IT, security, legal, HR, executive), then run a functional exercise that triggers real alerts in your environment, and finally conduct a technical validation where you run controlled malware simulations or use frameworks like Atomic Red Team or MITRE Caldera to validate detections. During the technical phase, validate that your SIEM/SOAR receives and correlates events, that EDR generates alerts and can quarantine endpoints, that network segmentation blocks lateral movement, and that immutable/air-gapped backups can be restored. Always use isolated testbeds or time-boxed actions to avoid production harm, and document each action with timestamps and supporting logs.
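The requirement to document each action with timestamps can be made routine with a small exercise log. The sketch below is an assumption about how you might structure such a log; the exercise ID, actors, and actions are hypothetical.

```python
# Sketch: timestamped action log for a technical exercise, so each inject
# and observed result is recorded in UTC. Entries shown are illustrative.
from datetime import datetime, timezone

class ExerciseLog:
    def __init__(self, exercise_id: str):
        self.exercise_id = exercise_id
        self.entries = []

    def record(self, actor: str, action: str, expected: str, observed: str = ""):
        """Append one timestamped inject or response action."""
        self.entries.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "expected": expected,
            "observed": observed,
        })

log = ExerciseLog("2024-Q2-functional")
log.record("red", "Run controlled simulation in isolated testbed",
           expected="EDR process-creation alert within 15 minutes")
log.record("blue", "Quarantine test endpoint via EDR console",
           expected="Host isolated; lateral movement blocked")
print(f"{len(log.entries)} documented actions for {log.exercise_id}")
```

Exporting these entries alongside the raw SIEM and EDR logs gives assessors a correlated timeline of what was done and what was detected.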
Technical specifics to implement during testing
Configure log collection and retention before testing: centralize logs to a SIEM with retention consistent with contract and organizational policy; ensure the Windows event channels (Security, System, Application) are collected, Sysmon is configured to capture process creation and network connections, Linux audit rules cover file access to CUI, and CloudTrail/Azure Monitor is enabled at the account/subscription level. Deploy EDR to all endpoints and confirm it reports telemetry to your console. Test forensic readiness: capture memory and disk images using trusted tools (FTK Imager, Magnet, or open-source alternatives), compute SHA-256 hashes for the images, store artifacts in a tamper-evident repository, and include metadata (who collected, when, how). Validate detection rules by creating synthetic events (e.g., suspicious PowerShell command lines or unexpected process spawns) and confirming that SIEM correlation rules fire as expected.
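The synthetic-event validation step can be rehearsed offline before touching the SIEM. The toy check below mimics Sysmon-style process-creation fields and a single correlation rule; the event data and rule logic are illustrative assumptions, not real SIEM syntax.

```python
# Sketch: a toy correlation check over synthetic endpoint events, to confirm
# the kind of rule you expect your SIEM to fire actually matches the inject.
# Fields mimic Sysmon process-creation data; all values are illustrative.
SYNTHETIC_EVENTS = [
    {"event": "process_create", "image": "powershell.exe",
     "cmdline": "powershell -enc SQBFAFgA", "parent": "winword.exe"},
    {"event": "process_create", "image": "notepad.exe",
     "cmdline": "notepad.exe report.txt", "parent": "explorer.exe"},
]

def rule_encoded_powershell(event: dict) -> bool:
    """Fires on PowerShell launched with an encoded command flag."""
    return (event.get("image") == "powershell.exe"
            and "-enc" in event.get("cmdline", "").lower())

hits = [e for e in SYNTHETIC_EVENTS if rule_encoded_powershell(e)]
print(f"rule fired on {len(hits)} of {len(SYNTHETIC_EVENTS)} synthetic events")
```

Once the logic is agreed, translate it into your SIEM's rule language and confirm the production rule fires on the same inject.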
Small business examples and scenarios
Example A: A 25-person marketing firm hosting CUI in Google Workspace and on an on-prem file server runs a quarterly tabletop and a semi-annual functional test. The tabletop exercises stakeholder communications and CUI breach notification; the functional test simulates a user credential compromise by sending a targeted phishing test and validating that SSO logs, mailbox audit logs, and EDR process creation logs provide the alerts needed for containment.
Example B: A small MSP with mixed client environments performs an annual red-team style exercise via a trusted third party to simulate ransomware. They validate that backups are immutable, test a full restore of a critical VM from an offline backup, and time the MTTR (mean time to recovery) to confirm service-level commitments can be met. Both examples document findings in the SSP and create POA&Ms for deficiencies.
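The MTTR timing in Example B reduces to simple arithmetic over recorded restore windows. A minimal sketch, with made-up timestamps for illustration:

```python
# Sketch: compute MTTR (mean time to recovery) across recorded restore
# tests, as in Example B. VM names and timestamps are made up.
from datetime import datetime

RESTORE_TESTS = [
    # (system, restore start, restore verified)
    ("vm-fileserver", "2024-03-01T09:00:00", "2024-03-01T11:30:00"),  # 2.5 h
    ("vm-erp",        "2024-03-01T09:15:00", "2024-03-01T13:15:00"),  # 4.0 h
]

def mttr_hours(tests) -> float:
    """Mean restore duration, in hours, over the recorded tests."""
    durations = [
        (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds()
        for _, start, end in tests
    ]
    return sum(durations) / len(durations) / 3600

print(f"MTTR: {mttr_hours(RESTORE_TESTS):.2f} hours")  # → MTTR: 3.25 hours
```

Comparing the computed MTTR against the service-level commitment turns "can we recover in time?" into a pass/fail test result.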
Compliance tips and best practices
Run tabletop exercises at least annually and after major changes, run functional/technical tests at least every six months or whenever you deploy new detection tooling, and re-run tests after any significant incident. Keep runbooks and playbooks versioned and stored with access controls. Link test results to the SSP and POA&M, and track remediation items to closure with owners and deadlines. Use external assessors for adversary emulation if you lack internal capability, and preserve evidence from tests the same way you would from a real incident so auditors can validate your process.
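Tracking remediation items to closure with owners and deadlines can be as simple as a structured list with an overdue check. The sketch below uses illustrative findings, owners, and dates; a real POA&M would live in your GRC tool or SSP appendix.

```python
# Sketch: POA&M items with owners and deadlines, plus an overdue check.
# Findings, owners, and dates are illustrative examples only.
from datetime import date

POAM_ITEMS = [
    {"id": "POAM-01", "finding": "EDR missed encoded-PowerShell inject",
     "owner": "security-lead", "due": date(2024, 6, 30), "closed": False},
    {"id": "POAM-02", "finding": "Restore of CUI share exceeded 4h target",
     "owner": "it-ops", "due": date(2024, 5, 31), "closed": True},
]

def open_overdue(items, today: date):
    """Return open items whose remediation deadline has passed."""
    return [i for i in items if not i["closed"] and i["due"] < today]

overdue = open_overdue(POAM_ITEMS, date(2024, 7, 15))
print([i["id"] for i in overdue])  # → ['POAM-01']
```

Reviewing the overdue list at a recurring cadence gives you the closure evidence assessors look for.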
Risks of not implementing this control include delayed detection and containment of breaches, loss or unauthorized exfiltration of CUI, contract penalties or loss of DoD contracts, and an inability to demonstrate CMMC/NIST compliance during assessments. Operationally, inadequate testing can lead to failed restorations, incomplete forensic data, breakdowns in communication, and misalignment between technical teams and leadership during a real incident, which magnifies business impact, downtime, and recovery cost.
Summary: To satisfy IR.L2-3.6.3 you must plan, run, measure, and document incident response tests mapped to your CUI environment and SSP; use progressively realistic exercises (tabletop to technical), validate logging, EDR, SIEM, and backups, preserve forensic evidence, and convert findings into SSP updates and POA&Ms. For SMBs, leverage automation frameworks, low-cost tools, and third-party expertise where needed, and ensure you can show repeatable evidence of testing, remediation, and continuous improvement during assessments.