{
  "title": "How to Create Traffic Baselines and Anomaly Detection Rules for Inbound/Outbound Communications — NIST SP 800-171 REV.2 / CMMC 2.0 Level 2 - Control - SI.L2-3.14.6",
  "date": "2026-04-22",
  "author": "Lakeridge Technologies",
  "featured_image": "/assets/images/blog/2026/4/how-to-create-traffic-baselines-and-anomaly-detection-rules-for-inboundoutbound-communications-nist-sp-800-171-rev2-cmmc-20-level-2-control-sil2-3146.jpg",
  "content": {
    "full_html": "<p>Creating effective traffic baselines and anomaly detection rules for inbound and outbound communications is a concrete, auditable step toward meeting NIST SP 800-171 Rev.2 / CMMC 2.0 Level 2 control SI.L2-3.14.6 — it reduces dwell time for attackers, helps detect data exfiltration and command-and-control activity, and provides the evidence auditors expect for monitoring CUI-related communications.</p>\n\n<h2>Why this control matters for Compliance Framework</h2>\n<p>SI.L2-3.14.6 requires organizations to know what \"normal\" looks like for network traffic and to detect deviations that could indicate incidents involving controlled unclassified information (CUI). For auditors, compliance is demonstrated by documented baselining procedures, collected logs (firewall, proxy, DNS, flow, endpoint), detection rules, and evidence of tuning and response. Without baselines and detection rules, you can miss slow or low-and-slow exfiltration, DNS tunnels, or beaconing from compromised hosts — all common tactics used against small organizations supporting DoD supply chains.</p>\n\n<h2>Implementation steps (high level)</h2>\n<h3>1) Inventory and log sources</h3>\n<p>Start by listing all inbound/outbound telemetry sources you control: firewall/NAT logs, next‑generation firewall (NGFW) logs, web proxy, secure email gateway, DNS recursive resolver logs, VPC Flow Logs (AWS), Azure NSG Flow Logs, endpoint telemetry (EDR), and NetFlow/sFlow/IPFIX from network devices. For small businesses, a minimal effective set is firewall logs + DNS logs + endpoint telemetry + VPC Flow Logs if using public cloud. Ensure centralized collection into a SIEM, log lake (ELK/Opensearch), or managed detection service with timestamps normalized to UTC.</p>\n\n<h3>2) Establish baseline methodology</h3>\n<p>Define baseline metrics and windows. 
Useful baseline metrics: bytes/second per host and subnet, connections/minute, distinct external IPs contacted per hour, DNS query types and entropy, number of new domains contacted per day, and median session duration for web/HTTPS. For a 50-user small business, capture a 30-day rolling baseline broken into weekday/work-hours, weekday/off-hours, and weekend patterns. Use percentile baselines (e.g., 95th percentile for bytes/minute) and moving averages (7- and 30-day) so seasonal or weekly patterns don't generate noise. Store baseline values with versioning and timestamp — auditors will want to see historical baselines and tuning notes.</p>\n\n<h3>3) Write concrete anomaly detection rules</h3>\n<p>Translate deviations into rule types: threshold rules, rate/volume anomalies, behavioral rules, and protocol-anomaly rules. Examples: (a) Outbound data transfer > 3x 95th percentile of bytes/minute for host/subnet; (b) Host contacting > 20 new external IPs in 1 hour (possible scanning/beaconing); (c) High-entropy DNS responses or unusually long TXT records (DNS tunneling); (d) Large number of failed TLS handshakes or connections to known bad IPs. Implement a mix of deterministic rules (for fast, low‑latency alerts) and statistical rules (z-score, EWMA) for spotting subtle changes.</p>\n\n<h2>Technical examples and snippets</h2>\n<p>Below are practical rule snippets and queries you can adapt. 
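To ground the percentile-baseline and DNS-entropy metrics above, here is a minimal, self-contained Python sketch (function and variable names are illustrative, not tied to any particular SIEM):

```python
import math
import statistics
from collections import Counter

def baseline_95th(bytes_per_minute):
    """95th percentile of historical per-minute outbound byte counts.

    statistics.quantiles(..., n=100) returns the 1st-99th percentiles,
    so index 94 is the 95th percentile.
    """
    return statistics.quantiles(bytes_per_minute, n=100)[94]

def exceeds_baseline(current_bpm, baseline, multiplier=3):
    """Rule of thumb used above: outbound rate > 3x the 95th-percentile baseline."""
    return current_bpm > multiplier * baseline

def shannon_entropy(label):
    """Shannon entropy (bits per character) of a DNS label; tunneled or
    machine-generated names tend to score higher than human-chosen ones."""
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# 30 days of synthetic per-minute samples clustered around 5 MB/min
history = [5_000_000 + (i % 60) * 10_000 for i in range(1000)]
b95 = baseline_95th(history)
print(exceeds_baseline(4_000_000, b95))    # ordinary minute -> False
print(exceeds_baseline(120_000_000, b95))  # 120 MB/min spike -> True
print(shannon_entropy("mail") < shannon_entropy("a9x2k8q1z7w3r5t0"))  # -> True
```

In production the baseline would be recomputed on a rolling window and stored with a version and timestamp, as described above; the threshold check then runs per host or subnet on each new minute of flow data.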
Suricata/Zeek/ELK examples are popular in small environments because they are open and auditable.</p>\n<p>Example Zeek notice (detect large outbound transfer > 100 MB):</p>\n<pre><code>@load base/frameworks/notice\n\nredef enum Notice::Type += { Large_Outgoing_Transfer };\n\nevent connection_state_remove(c: connection)\n{\n  # c$orig$size = bytes sent by the originator (outbound); requires Site::local_nets to be configured\n  if (Site::is_local_addr(c$id$orig_h) && c$orig$size > 100000000)\n    NOTICE([$note=Large_Outgoing_Transfer, $conn=c, $msg=fmt(\"Outbound bytes %d\", c$orig$size)]);\n}</code></pre>\n<p>Example Suricata threshold rule (many outbound connection attempts in a short window):</p>\n<pre><code># Fires once a source sends 20+ outbound SYNs within an hour (an approximation:\n# per-rule thresholds count matches, not distinct destination IPs)\nalert tcp $HOME_NET any -> $EXTERNAL_NET any (msg:\"MULTIPLE_EXTERNAL_IPS_CONTACTED\"; flags:S; threshold: type threshold, track by_src, count 20, seconds 3600; sid:1000001; rev:2;)</code></pre>\n<p>Example detection logic in pseudocode (implement as, e.g., an Elastic threshold rule against the stored 30-day baseline):</p>\n<pre><code>IF bytes_out_per_minute > (baseline_95th * 3) AND host NOT IN (allowlist) THEN alert</code></pre>\n\n<h2>Tuning, alerting, and incident playbooks</h2>\n<p>Tune iteratively: log false positives with tags and adjust thresholds or add allowlists (e.g., scheduled backups to cloud storage). Categorize alerts (P1–P4) and map to response playbooks: P1 (possible exfiltration) — isolate host, collect full packet capture, preserve logs, notify IR lead; P2 (suspicious DNS tunneling) — block domain at resolvers, escalate to security analyst. Record every tuning change in a baseline/tuning log with rationale to satisfy evidence requirements. For small businesses, automate containment steps (block IP, quarantine via EDR) for high-confidence alerts to reduce response time when staff are limited.</p>\n\n<h2>Real-world small-business scenario</h2>\n<p>Example: a 40-person engineering shop uses Office 365, AWS, and an NGFW. Baseline shows normal outgoing traffic of 5–10 MB/min per subnet during business hours, with weekly backups pushing bursts of up to 200 MB/min at midnight. An anomaly rule flags any host sending > 50 MB/min outside scheduled backup windows. 
One week after deploying rules, the team detected a developer workstation sending 120 MB/min to an unknown external IP at 02:30 — automated containment triggered via EDR, and investigation showed a cloud-synced repo with accidentally embedded CUI; data access was blocked and the incident was documented for auditors. That small change turned a potential breach into a controllable event.</p>\n\n<h2>Risks of not implementing this control</h2>\n<p>Without baselining and anomaly detection you risk undetected CUI exposure, successful long-term compromises (beaconing/exfiltration), and failure in audits. Regulatory and contractual fallout can include loss of DoD contracts, required corrective action plans, reputational harm, and fines. Operationally, the business remains blind to attackers using low-noise channels (DNS, HTTPS tunnels) or to misconfigured services that leak data.</p>\n\n<h2>Compliance tips and best practices</h2>\n<p>Document everything: baseline methodology, tools used, tuning decisions, retained alerts and incident logs. Keep retention and access controls aligned with NIST/CMMC expectations (retain detection and investigation artifacts long enough to support audits — typically 90 days minimum for logs, longer for key incident artifacts). Use role-based access controls to protect log integrity, and schedule quarterly baseline reviews. For small shops, consider outsourced SOC-as-a-Service for 24/7 coverage and to provide attestation evidence. Finally, map each detection rule back to SI.L2-3.14.6 in your SSP and POA&M so auditors can trace implementation back to the requirement.</p>\n\n<p>Summary: Build baselines from firewall, DNS, flow, and endpoint telemetry; use percentile and moving-average baselines; implement a layered set of deterministic and statistical anomaly rules; tune and document iteratively; and integrate alerts into a practiced incident response. 
Following these steps gives small organizations a practical, auditable path to meet SI.L2-3.14.6 while materially reducing risk of CUI compromise.</p>",
    "plain_text": "Creating effective traffic baselines and anomaly detection rules for inbound and outbound communications is a concrete, auditable step toward meeting NIST SP 800-171 Rev.2 / CMMC 2.0 Level 2 control SI.L2-3.14.6 — it reduces dwell time for attackers, helps detect data exfiltration and command-and-control activity, and provides the evidence auditors expect for monitoring CUI-related communications.\n\nWhy this control matters for Compliance Framework\nSI.L2-3.14.6 requires organizations to know what \"normal\" looks like for network traffic and to detect deviations that could indicate incidents involving controlled unclassified information (CUI). For auditors, compliance is demonstrated by documented baselining procedures, collected logs (firewall, proxy, DNS, flow, endpoint), detection rules, and evidence of tuning and response. Without baselines and detection rules, you can miss slow or low-and-slow exfiltration, DNS tunnels, or beaconing from compromised hosts — all common tactics used against small organizations supporting DoD supply chains.\n\nImplementation steps (high level)\n1) Inventory and log sources\nStart by listing all inbound/outbound telemetry sources you control: firewall/NAT logs, next‑generation firewall (NGFW) logs, web proxy, secure email gateway, DNS recursive resolver logs, VPC Flow Logs (AWS), Azure NSG Flow Logs, endpoint telemetry (EDR), and NetFlow/sFlow/IPFIX from network devices. For small businesses, a minimal effective set is firewall logs + DNS logs + endpoint telemetry + VPC Flow Logs if using public cloud. Ensure centralized collection into a SIEM, log lake (ELK/Opensearch), or managed detection service with timestamps normalized to UTC.\n\n2) Establish baseline methodology\nDefine baseline metrics and windows. 
Useful baseline metrics: bytes/second per host and subnet, connections/minute, distinct external IPs contacted per hour, DNS query types and entropy, number of new domains contacted per day, and median session duration for web/HTTPS. For a 50-user small business, capture a 30-day rolling baseline broken into weekday/work-hours, weekday/off-hours, and weekend patterns. Use percentile baselines (e.g., 95th percentile for bytes/minute) and moving averages (7- and 30-day) so seasonal or weekly patterns don't generate noise. Store baseline values with versioning and timestamp — auditors will want to see historical baselines and tuning notes.\n\n3) Write concrete anomaly detection rules\nTranslate deviations into rule types: threshold rules, rate/volume anomalies, behavioral rules, and protocol-anomaly rules. Examples: (a) Outbound data transfer > 3x 95th percentile of bytes/minute for host/subnet; (b) Host contacting > 20 new external IPs in 1 hour (possible scanning/beaconing); (c) High-entropy DNS responses or unusually long TXT records (DNS tunneling); (d) Large number of failed TLS handshakes or connections to known bad IPs. Implement a mix of deterministic rules (for fast, low‑latency alerts) and statistical rules (z-score, EWMA) for spotting subtle changes.\n\nTechnical examples and snippets\nBelow are practical rule snippets and queries you can adapt. 
Suricata/Zeek/ELK examples are popular in small environments because they are open and auditable.\nExample Zeek notice (detect large outbound transfer > 100 MB):\n@load base/frameworks/notice\n\nredef enum Notice::Type += { Large_Outgoing_Transfer };\n\nevent connection_state_remove(c: connection)\n{\n  # c$orig$size = bytes sent by the originator (outbound); requires Site::local_nets to be configured\n  if (Site::is_local_addr(c$id$orig_h) && c$orig$size > 100000000)\n    NOTICE([$note=Large_Outgoing_Transfer, $conn=c, $msg=fmt(\"Outbound bytes %d\", c$orig$size)]);\n}\nExample Suricata threshold rule (many outbound connection attempts in a short window):\n# Fires once a source sends 20+ outbound SYNs within an hour (an approximation:\n# per-rule thresholds count matches, not distinct destination IPs)\nalert tcp $HOME_NET any -> $EXTERNAL_NET any (msg:\"MULTIPLE_EXTERNAL_IPS_CONTACTED\"; flags:S; threshold: type threshold, track by_src, count 20, seconds 3600; sid:1000001; rev:2;)\nExample detection logic in pseudocode (implement as, e.g., an Elastic threshold rule against the stored 30-day baseline):\nIF bytes_out_per_minute > (baseline_95th * 3) AND host NOT IN (allowlist) THEN alert\n\nTuning, alerting, and incident playbooks\nTune iteratively: log false positives with tags and adjust thresholds or add allowlists (e.g., scheduled backups to cloud storage). Categorize alerts (P1–P4) and map to response playbooks: P1 (possible exfiltration) — isolate host, collect full packet capture, preserve logs, notify IR lead; P2 (suspicious DNS tunneling) — block domain at resolvers, escalate to security analyst. Record every tuning change in a baseline/tuning log with rationale to satisfy evidence requirements. For small businesses, automate containment steps (block IP, quarantine via EDR) for high-confidence alerts to reduce response time when staff are limited.\n\nReal-world small-business scenario\nExample: a 40-person engineering shop uses Office 365, AWS, and an NGFW. Baseline shows normal outgoing traffic of 5–10 MB/min per subnet during business hours, with weekly backups pushing bursts of up to 200 MB/min at midnight. An anomaly rule flags any host sending > 50 MB/min outside scheduled backup windows. 
One week after deploying rules, the team detected a developer workstation sending 120 MB/min to an unknown external IP at 02:30 — automated containment triggered via EDR, and investigation showed a cloud-synced repo with accidentally embedded CUI; data access was blocked and the incident was documented for auditors. That small change turned a potential breach into a controllable event.\n\nRisks of not implementing this control\nWithout baselining and anomaly detection you risk undetected CUI exposure, successful long-term compromises (beaconing/exfiltration), and failure in audits. Regulatory and contractual fallout can include loss of DoD contracts, required corrective action plans, reputational harm, and fines. Operationally, the business remains blind to attackers using low-noise channels (DNS, HTTPS tunnels) or to misconfigured services that leak data.\n\nCompliance tips and best practices\nDocument everything: baseline methodology, tools used, tuning decisions, retained alerts and incident logs. Keep retention and access controls aligned with NIST/CMMC expectations (retain detection and investigation artifacts long enough to support audits — typically 90 days minimum for logs, longer for key incident artifacts). Use role-based access controls to protect log integrity, and schedule quarterly baseline reviews. For small shops, consider outsourced SOC-as-a-Service for 24/7 coverage and to provide attestation evidence. Finally, map each detection rule back to SI.L2-3.14.6 in your SSP and POA&M so auditors can trace implementation back to the requirement.\n\nSummary: Build baselines from firewall, DNS, flow, and endpoint telemetry; use percentile and moving-average baselines; implement a layered set of deterministic and statistical anomaly rules; tune and document iteratively; and integrate alerts into a practiced incident response. 
Following these steps gives small organizations a practical, auditable path to meet SI.L2-3.14.6 while materially reducing risk of CUI compromise."
  },
  "metadata": {
    "description": "Practical guide to building network traffic baselines and anomaly detection rules to meet NIST SP 800-171 Rev.2 / CMMC 2.0 Level 2 SI.L2-3.14.6 compliance for inbound/outbound communications.",
    "permalink": "/how-to-create-traffic-baselines-and-anomaly-detection-rules-for-inboundoutbound-communications-nist-sp-800-171-rev2-cmmc-20-level-2-control-sil2-3146.json",
    "categories": [],
    "tags": []
  }
}