{
  "title": "How to create and retain system audit logs to meet NIST SP 800-171 REV.2 / CMMC 2.0 Level 2 - Control - AU.L2-3.3.1: A practical implementation checklist",
  "date": "2026-04-19",
  "author": "Lakeridge Technologies",
  "featured_image": "/assets/images/blog/2026/4/how-to-create-and-retain-system-audit-logs-to-meet-nist-sp-800-171-rev2-cmmc-20-level-2-control-aul2-331-a-practical-implementation-checklist.jpg",
  "content": {
    "full_html": "<p>NIST SP 800-171 Rev.2 control AU.L2-3.3.1 requires organizations to create and retain system audit logs sufficient to support after-action reviews, incident investigations, and forensic analysis — a requirement that, for small businesses, translates into practical decisions around what to log, how to protect logs, where to store them, and how long to keep them.</p>\n\n<h2>What AU.L2-3.3.1 requires (practical interpretation for Compliance Framework)</h2>\n<p>This control does not simply mean “turn on logging”; it means instrument systems so you capture relevant events, ensure logs are protected and tamper-evident, centralize and index logs for efficient access, and retain them for a period that supports incident response and investigations. Within the Compliance Framework scope, focus on (a) identifying critical systems and events, (b) producing reliable audit records, (c) protecting log integrity and confidentiality, and (d) establishing retention and access controls aligned to business and contractual requirements.</p>\n\n<h2>Practical implementation checklist</h2>\n\n<h3>1) Identify and prioritize log sources</h3>\n<p>Begin by creating an inventory of systems that process Controlled Unclassified Information (CUI) or are critical to operations: servers (Windows & Linux), firewalls, VPNs, identity providers (Azure AD, Okta), endpoints, cloud management APIs, application logs, and databases. For each system record the minimum event types required (e.g., authentication success/failure, privilege escalations, file access to sensitive directories, configuration changes, admin commands, network ACL changes). 
Prioritize logging for systems exposed to the internet and those storing or processing CUI.</p>\n\n<h3>2) Configure systems to generate authoritative logs</h3>\n<p>Use native and best-practice logging configurations: enable Windows Security Event Auditing + Sysmon for process and network details; enable auditd on Linux with rules for execve, file opens on sensitive paths, and changes to /etc; enable CloudTrail for AWS API activity and Azure Activity Logs for subscription-level changes. Example auditd rules to capture execs and writes to /etc: <code>-a always,exit -F arch=b64 -S execve -k execs</code> and <code>-w /etc -p wa -k etc_changes</code>. Document the specific event IDs to capture, and avoid noisy configurations (too many informational events) that drown out real signals.</p>\n\n<h3>3) Centralize, secure, and retain logs</h3>\n<p>Forward logs to a centralized collection and storage system (a SIEM, a log aggregator, or a cloud logging service). For small businesses, managed services reduce operational burden: AWS CloudTrail → S3 bucket with server-side KMS encryption + lifecycle to Glacier; CloudWatch Logs with Log Insights; Azure Monitor with a Log Analytics workspace. For on-prem or hybrid environments, use rsyslog/Fluentd/Vector to forward to a hardened ELK, Graylog, or Splunk instance. Protect storage with encryption at rest (KMS), strict IAM/ACLs, and network segmentation. Implement write-once/read-many (WORM) capabilities where legally required (S3 Object Lock, on-prem WORM appliances).</p>\n\n<h3>4) Ensure log integrity, time synchronization, and access controls</h3>\n<p>Apply tamper-evidence and chain-of-custody controls: sign or hash logs (SHA-256) as they are ingested, and store the hashes separately or in an append-only ledger. Use centralized timestamps and sync all hosts to a reliable NTP pool (chrony or ntpd); misaligned clocks undermine correlation and forensics. Restrict who can view or delete logs: use role-based access control and multifactor authentication for administrative accounts. 
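</p>\n\n<p>One way to implement the ingest-time hashing is a simple hash chain; the sketch below uses only the Python standard library (the batch contents and chaining scheme are illustrative, not a prescribed format):</p>

```python
import hashlib

def chained_digest(prev_hex, batch):
    # SHA-256 over (previous digest || batch) forms a hash chain:
    # altering any earlier batch invalidates every later digest.
    h = hashlib.sha256()
    h.update(bytes.fromhex(prev_hex))
    h.update(batch)
    return h.hexdigest()

genesis = '00' * 32  # fixed starting value for the chain
d1 = chained_digest(genesis, b'2026-04-19T10:00Z logon ok user=alice')
d2 = chained_digest(d1, b'2026-04-19T10:05Z sudo user=alice cmd=systemctl')
# Store d1 and d2 apart from the logs (e.g., in an append-only ledger);
# recomputing the chain later verifies that nothing was altered.
```

<p>Recomputing the chain from archived batches and comparing the final digest is a cheap integrity check to fold into your retrieval tests.</p>\n\n<p>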
Enable immutable storage options when available and maintain an audit trail of who accessed logs.</p>\n\n<h3>5) Operationalize logs: monitoring, retention, and incident workflows</h3>\n<p>Define retention policies that satisfy contractual and investigative needs (recommendation: at a minimum, 90 days of readily searchable logs and 1 year of encrypted archive; retain longer if required by contracts or litigation). Implement automated alerts for key events (multiple failed logins, privilege escalations, unusual data exfiltration patterns). Integrate alerts into incident response playbooks so that analysts know which logs to collect, where to find them, and how long they will be available. Regularly test log collection and retention by performing tabletop exercises and verifying you can retrieve and read logs from archives.</p>\n\n<h2>Small-business examples and scenarios</h2>\n<p>Example A — Cloud-first 30-person contractor: Enable AWS CloudTrail and configure multi-region trails to an S3 bucket with server-side encryption (SSE-KMS), enable CloudTrail Insights for unusual API activity, set S3 Object Lock in compliance mode for critical trails, and set lifecycle rules: 90 days in S3 Standard for quick search, then transition to S3 Glacier Deep Archive for 2+ years. Example B — Small hybrid shop with 20 endpoints and an on-prem server: deploy an open-source stack (Filebeat → Elasticsearch + Kibana) on a locked-down VM; enable Windows Event Forwarding from endpoints and rsyslog from Linux servers; store daily offsite encrypted backups of the ELK indices and hash them for integrity verification.</p>\n\n<h2>Compliance tips, best practices, and the risk of not implementing</h2>\n<p>Best practices: keep a logging baseline document, capture config-change events and privileged activity, automate evidence collection for audits, and use separation of duties for log administration. 
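</p>\n\n<p>A related best practice is to define a small set of sentinel events and filter for them at collection time. A hedged Python sketch (the sample records are fabricated; 4625, failed logon, and 4672, special privileges assigned, are standard Windows Security event IDs):</p>

```python
# Sketch: keep only sentinel events from a noisy record stream.
SENTINEL_EVENT_IDS = {4625, 4672}  # extend per your logging baseline document

# Fabricated sample records for illustration.
records = [
    {'event_id': 4624, 'msg': 'successful logon'},
    {'event_id': 4625, 'msg': 'failed logon'},
    {'event_id': 4672, 'msg': 'special privileges assigned to new logon'},
    {'event_id': 4798, 'msg': 'local group membership enumerated'},
]

sentinels = [r for r in records if r['event_id'] in SENTINEL_EVENT_IDS]
print([r['event_id'] for r in sentinels])  # [4625, 4672]
```

<p>In production, this filtering would typically live in the forwarder or SIEM pipeline configuration rather than in ad-hoc scripts.</p>\n\n<p>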
Don’t over-retain noisy, low-value logs; identify sentinel events and tune collection to save storage and reduce analyst fatigue. Risk if you fail: inability to prove what happened during an incident, failed CMMC/NIST assessments, potential contract loss for DoD contractors, increased time-to-detect and time-to-respond, and higher forensic costs. Attackers often try to erase or alter local logs; without centralized protected logging, you lose critical investigative data.</p>\n\n<h2>Summary</h2>\n<p>Meeting AU.L2-3.3.1 is achievable for small businesses with an inventory-driven approach: identify what to log, configure authoritative sources, centralize and protect logs (encryption, immutability, RBAC), keep clocks synchronized, implement searchable retention strategies, and integrate logs into incident response. Use managed cloud logging where possible to reduce operational load, document your logging architecture and retention policy in the Compliance Framework artifacts, and validate log availability through regular tests — these steps together provide the evidence auditors and investigators need while lowering your operational risk.</p>",
    "plain_text": "NIST SP 800-171 Rev.2 control AU.L2-3.3.1 requires organizations to create and retain system audit logs sufficient to support after-action reviews, incident investigations, and forensic analysis — a requirement that, for small businesses, translates into practical decisions around what to log, how to protect logs, where to store them, and how long to keep them.\n\nWhat AU.L2-3.3.1 requires (practical interpretation for Compliance Framework)\nThis control does not simply mean “turn on logging”; it means instrument systems so you capture relevant events, ensure logs are protected and tamper-evident, centralize and index logs for efficient access, and retain them for a period that supports incident response and investigations. Within the Compliance Framework scope, focus on (a) identifying critical systems and events, (b) producing reliable audit records, (c) protecting log integrity and confidentiality, and (d) establishing retention and access controls aligned to business and contractual requirements.\n\nPractical implementation checklist\n\n1) Identify and prioritize log sources\nBegin by creating an inventory of systems that process Controlled Unclassified Information (CUI) or are critical to operations: servers (Windows & Linux), firewalls, VPNs, identity providers (Azure AD, Okta), endpoints, cloud management APIs, application logs, and databases. For each system record the minimum event types required (e.g., authentication success/failure, privilege escalations, file access to sensitive directories, configuration changes, admin commands, network ACL changes). 
Prioritize logging for systems exposed to the internet and those storing or processing CUI.\n\n2) Configure systems to generate authoritative logs\nUse native and best-practice logging configurations: enable Windows Security Event Auditing + Sysmon for process and network details; enable auditd on Linux with rules for execve, file opens on sensitive paths, and changes to /etc; enable CloudTrail for AWS API activity and Azure Activity Logs for subscription-level changes. Example auditd rules to capture execs and writes to /etc: -a always,exit -F arch=b64 -S execve -k execs; -w /etc -p wa -k etc_changes. Document the specific event IDs to capture, and avoid noisy configurations (too many informational events) that drown out real signals.\n\n3) Centralize, secure, and retain logs\nForward logs to a centralized collection and storage system (a SIEM, a log aggregator, or a cloud logging service). For small businesses, managed services reduce operational burden: AWS CloudTrail → S3 bucket with server-side KMS encryption + lifecycle to Glacier; CloudWatch Logs with Log Insights; Azure Monitor with a Log Analytics workspace. For on-prem or hybrid environments, use rsyslog/Fluentd/Vector to forward to a hardened ELK, Graylog, or Splunk instance. Protect storage with encryption at rest (KMS), strict IAM/ACLs, and network segmentation. Implement write-once/read-many (WORM) capabilities where legally required (S3 Object Lock, on-prem WORM appliances).\n\n4) Ensure log integrity, time synchronization, and access controls\nApply tamper-evidence and chain-of-custody controls: sign or hash logs (SHA-256) as they are ingested, and store the hashes separately or in an append-only ledger. Use centralized timestamps and sync all hosts to a reliable NTP pool (chrony or ntpd); misaligned clocks undermine correlation and forensics. Restrict who can view or delete logs: use role-based access control and multifactor authentication for administrative accounts. 
Enable immutable storage options when available and maintain an audit trail of who accessed logs.\n\n5) Operationalize logs: monitoring, retention, and incident workflows\nDefine retention policies that satisfy contractual and investigative needs (recommendation: at a minimum, 90 days of readily searchable logs and 1 year of encrypted archive; retain longer if required by contracts or litigation). Implement automated alerts for key events (multiple failed logins, privilege escalations, unusual data exfiltration patterns). Integrate alerts into incident response playbooks so that analysts know which logs to collect, where to find them, and how long they will be available. Regularly test log collection and retention by performing tabletop exercises and verifying you can retrieve and read logs from archives.\n\nSmall-business examples and scenarios\nExample A — Cloud-first 30-person contractor: Enable AWS CloudTrail and configure multi-region trails to an S3 bucket with server-side encryption (SSE-KMS), enable CloudTrail Insights for unusual API activity, set S3 Object Lock in compliance mode for critical trails, and set lifecycle rules: 90 days in S3 Standard for quick search, then transition to S3 Glacier Deep Archive for 2+ years. Example B — Small hybrid shop with 20 endpoints and an on-prem server: deploy an open-source stack (Filebeat → Elasticsearch + Kibana) on a locked-down VM; enable Windows Event Forwarding from endpoints and rsyslog from Linux servers; store daily offsite encrypted backups of the ELK indices and hash them for integrity verification.\n\nCompliance tips, best practices, and the risk of not implementing\nBest practices: keep a logging baseline document, capture config-change events and privileged activity, automate evidence collection for audits, and use separation of duties for log administration. Don’t over-retain noisy, low-value logs; identify sentinel events and tune collection to save storage and reduce analyst fatigue. 
Risk if you fail: inability to prove what happened during an incident, failed CMMC/NIST assessments, potential contract loss for DoD contractors, increased time-to-detect and time-to-respond, and higher forensic costs. Attackers often try to erase or alter local logs; without centralized protected logging, you lose critical investigative data.\n\nSummary\nMeeting AU.L2-3.3.1 is achievable for small businesses with an inventory-driven approach: identify what to log, configure authoritative sources, centralize and protect logs (encryption, immutability, RBAC), keep clocks synchronized, implement searchable retention strategies, and integrate logs into incident response. Use managed cloud logging where possible to reduce operational load, document your logging architecture and retention policy in the Compliance Framework artifacts, and validate log availability through regular tests — these steps together provide the evidence auditors and investigators need while lowering your operational risk."
  },
  "metadata": {
    "description": "A practical, step‑by‑step checklist for small organizations to create, protect, centralize, and retain system audit logs to satisfy NIST SP 800-171 Rev.2 / CMMC 2.0 Level 2 (AU.L2-3.3.1).",
    "permalink": "/how-to-create-and-retain-system-audit-logs-to-meet-nist-sp-800-171-rev2-cmmc-20-level-2-control-aul2-331-a-practical-implementation-checklist.json",
    "categories": [],
    "tags": []
  }
}