
How to Configure SIEM for Audit Record Reduction and On-Demand Reporting to Meet NIST SP 800-171 Rev. 2 / CMMC 2.0 Level 2 Control AU.L2-3.3.6

Practical guidance to configure your SIEM to reduce audit record volume and enable fast, on-demand reporting to satisfy NIST SP 800-171 Rev.2 / CMMC 2.0 Level 2 AU.L2-3.3.6.

April 22, 2026 • 5 min read



This post explains how to configure a Security Information and Event Management (SIEM) system to meet NIST SP 800-171 Rev. 2 / CMMC 2.0 Level 2 control AU.L2-3.3.6 — "Provide audit reduction and report generation to support on-demand reporting" — with practical, vendor-agnostic steps, small-business examples, and specific technical snippets you can adapt to Splunk, Elastic/OpenSearch, Microsoft Sentinel, or other common SIEMs.

Understanding AU.L2-3.3.6 and operational objectives

AU.L2-3.3.6 requires reducing audit record volume to a manageable level while preserving the ability to generate accurate, timely reports on demand. The objective is twofold: 1) avoid overwhelming storage, indexing, and analyst resources with noisy events; and 2) retain the evidence and drill-down capability auditors and incident responders need. For NIST SP 800-171 / CMMC implementation, that means documenting reduction rules, validating that filtered events do not remove forensic value, and demonstrating repeatable, auditable report generation.

Practical SIEM configuration steps (high level)

1) Inventory, categorization, and prioritization

Start by cataloging log sources and mapping them to security objectives: authentication, privilege changes, system integrity, remote access, and boundary devices. For a small business this typically includes domain controllers (AD), endpoint EDR alerts, VPN logs, firewall/NGFW syslogs, email gateway, and critical application logs. Label each source with a priority (high/medium/low) and identify the event types that are relevant to detection or audit (e.g., Windows Event IDs 4624/4625/4672, VPN connect/disconnect, firewall deny entries). This inventory is the basis for reduction rules.
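As a starting point, the inventory can be captured in a simple machine-readable file. The sketch below is illustrative only — hostnames, priorities, and event lists are assumptions to adapt to your environment:

```yaml
# Hypothetical log-source inventory (all names and values are examples)
- source: dc01.corp.example.com
  type: windows-security          # domain controller security log
  priority: high
  events_of_interest: [4624, 4625, 4672]   # logon, failed logon, special privileges
- source: vpn-gw01
  type: vpn
  priority: high
  events_of_interest: [connect, disconnect]
- source: fw-edge01
  type: firewall-syslog
  priority: medium
  events_of_interest: [deny]               # allows are aggregated, denies kept in full
```

Keeping this file under version control gives you a documented, reviewable basis for every reduction rule that follows.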

2) Ingest-time filtering and drop-rules (reduce at the source)

Implement filtering as early as practical — on collectors, forwarders, or ingest pipelines — to reduce storage and indexing costs. Examples:

- Splunk: use props.conf/transforms.conf on an indexer or heavy forwarder to route or drop events; use whitelist/blacklist transforms to drop known-noise sourcetypes or low-value event IDs.
- Elastic/OpenSearch: create an ingest pipeline with a "drop" processor that discards documents matching a condition (e.g., informational heartbeat events, or noisy Windows Filtering Platform events 5156/5158). Note that Painless has no "in" operator for list membership; use .contains() instead, e.g. ctx.event?.action == 'heartbeat' || ['5156', '5158'].contains(ctx.winlog?.event_id) (winlog.event_id is typically shipped as a string).
- Syslog collectors (rsyslog/syslog-ng): use filter rules to stop forwarding repetitive info-level events.

For a small business, filter non-actionable info-level logs (e.g., periodic service health pings) but keep security-relevant subtypes. Document each rule and the rationale for retention versus drop to satisfy auditors.
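Expanding the Elastic example into a complete pipeline definition (PUT to _ingest/pipeline/drop-noise): the pipeline name is illustrative, and the event.action and winlog.event_id fields follow ECS/Winlogbeat conventions — verify them against your own shipper, since winlog.event_id is usually indexed as a string:

```json
{
  "description": "Drop heartbeat events and noisy Windows Filtering Platform events (5156/5158)",
  "processors": [
    {
      "drop": {
        "if": "ctx.event?.action == 'heartbeat' || ['5156', '5158'].contains(ctx.winlog?.event_id)"
      }
    }
  ]
}
```

Attach the pipeline via the index's default_pipeline setting or the shipper's pipeline option, and record the rule and its rationale in your change log so an assessor can trace why those event classes are dropped.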

3) Normalization, deduplication, and enrichment

Normalize fields (user, src_ip, dest_ip, event_id, timestamp) so rules and reports are consistent across sources. Implement deduplication and enrichment at ingest or indexing:

- Deduplicate identical events within short windows (e.g., drop exact duplicates or keep one instance per minute) to avoid event storms.
- Enrich events with asset ownership, criticality tags, and business unit to support filtered reports (source: asset database/CMDB).

Technical example (Splunk): use the transaction or dedup command in scheduled summary-indexing searches to reduce volume and create a summarized event that retains counts and examples. For Elastic, use ingest processors to add enrichment fields (script processor or enrich processor) and use cardinality aggregations in dashboards to show unique occurrences rather than raw counts.
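A minimal sketch of the Splunk approach, run as a scheduled hourly search: it collapses repeated failed logons (EventCode 4625) into one summarized record per user/host per hour and writes the result to a summary index via collect. The index names (wineventlog, summary_auth) and field names are assumptions:

```spl
index=wineventlog EventCode=4625
| bin _time span=1h
| stats count AS failure_count, earliest(_time) AS first_seen, latest(_time) AS last_seen,
        values(src_ip) AS src_ips BY _time, user, host
| collect index=summary_auth source="auth_failures_hourly"
```

The summarized record keeps counts, time bounds, and sample source IPs, so reports stay fast while the raw events remain available for drill-down during their retention window.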

4) Aggregation, summary indexing, and sampling

Use aggregation and summary indexing to reduce archive size while retaining forensic capability. Options:

- Create hourly/daily roll-up indices that store counts, top sources, unique user lists, and sample event IDs.
- Use sampling for extremely high-volume feeds (e.g., network flow telemetry): keep 1% of routine flows but 100% of flows matching deny or anomaly heuristics.
- Implement summary indices or data models (Splunk summary indexing; Elastic rollup indices) for fast, on-demand reporting.

Example: a small company might forward all firewall denies and critical IDS alerts in full, but aggregate allow flows into hourly counts per subnet, storing full flows for 30 days and keeping rollups for 1 year.
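If your pipeline runs through Logstash, the sampling rule above can be sketched with the drop filter's percentage option: keep every deny in full, drop roughly 99% of routine allow flows. The field names ([event][action]) are an ECS-style assumption — adjust to your flow source:

```conf
filter {
  if [event][action] == "denied" {
    # denies always pass through in full
  } else if [event][action] == "allowed" {
    # drop ~99% of routine allow flows, keeping a ~1% sample
    drop { percentage => 99 }
  }
}
```

Because sampling is probabilistic, pair it with the hourly roll-up counts so aggregate totals stay accurate even though only a sample of raw allow events is retained.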

5) Retention policies, index lifecycle, and cold storage

Design retention and ILM policies that align with compliance and business risk. For NIST SP 800-171 / CMMC, document retention decisions (e.g., 90 days hot, 1 year warm, 3 years cold) and implement automated index rollover and shrink/forcemerge actions to reduce storage. For Elastic, configure ILM policies with hot/warm/cold phases. For Splunk, configure indexes with frozenTimePeriodInSecs and a coldToFrozenScript (or coldToFrozenDir) to archive rather than delete frozen buckets. Ensure that archived data remains searchable for the audit window required by your contracting or regulatory needs and that chain-of-custody for exported audit data is maintained.
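An Elastic ILM policy matching the 90-day/1-year/3-year example might look like the following sketch (PUT to _ilm/policy/audit-logs; policy name and thresholds are assumptions):

```json
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "7d", "max_primary_shard_size": "50gb" }
        }
      },
      "warm": {
        "min_age": "90d",
        "actions": {
          "shrink": { "number_of_shards": 1 },
          "forcemerge": { "max_num_segments": 1 },
          "readonly": {}
        }
      },
      "cold": {
        "min_age": "365d",
        "actions": {
          "allocate": { "number_of_replicas": 0 }
        }
      },
      "delete": {
        "min_age": "1095d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

Note that min_age is measured from rollover, not from event time, so tune the phase ages alongside your rollover cadence, and document the resulting effective retention in your SSP.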

On-demand reporting: fast queries and exportability

AU.L2-3.3.6 specifically requires supporting on-demand reporting. Implement these capabilities:

- Prebuilt saved searches and dashboards for common audit requests (authentication events, privilege escalations, remote access reports).
- Summary/accelerated data models (Splunk data models and pivots; Elastic rollups or materialized views) so queries run quickly.
- Exportable, tamper-evident reports (PDF/CSV) and the ability to export raw event subsets with metadata for forensic review.

Example saved search (Splunk pseudo): a saved search "Auth_Weekly_Summary" uses tstats to show unique users and failed login counts per host over a range; make it runnable on demand via the UI with role-based access. For Elastic, use Kibana saved queries and a watch or dashboard that can be exported as CSV/PNG when requested.
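The "Auth_Weekly_Summary" idea can be sketched as an SPL search against the CIM Authentication data model — this assumes the data model is populated and accelerated in your environment, and the rename targets are illustrative:

```spl
| tstats summariesonly=true count
    FROM datamodel=Authentication
    WHERE Authentication.action="failure"
    BY Authentication.user, Authentication.dest
| rename Authentication.user AS user, Authentication.dest AS host
| sort - count
```

Save it with a default 7-day time range, restrict it to an audit-reporting role, and schedule a PDF/CSV export so the same report can be produced on demand during an assessment.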

Access control, documentation, and validation

Control who can modify reduction rules and who can run/see audit reports. Use RBAC to prevent accidental disabling of collectors or drop rules. Keep a change log (ticketed approvals) for every filter or drop rule and periodically validate that no critical event class has been removed — e.g., run weekly audits that search raw collectors for a sample of dropped event classes to confirm the filter remains safe. For small businesses, a simple ticketed change control process and a quarterly review by the security owner is often sufficient.
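On Elastic, the weekly validation can lean on the ingest pipeline simulate API: replay representative sample documents through a drop pipeline and confirm only the intended class is discarded. The pipeline name (drop-noise) and sample fields below are assumptions (POST to _ingest/pipeline/drop-noise/_simulate):

```json
{
  "docs": [
    { "_source": { "event": { "action": "heartbeat" } } },
    { "_source": { "event": { "action": "user_logon" }, "winlog": { "event_id": "4625" } } }
  ]
}
```

The response should show the heartbeat document dropped and the failed-logon document untouched; attach the simulate output to the quarterly review ticket as evidence that the filter remains safe.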

Risks of not implementing AU.L2-3.3.6 properly

Failing to implement audit reduction and on-demand reporting has operational and compliance risks: SIEM overload leading to missed detection and slower incident response, escalating storage and licensing costs, inability to produce timely audit evidence during assessments, and accidental deletion of forensically valuable events. Worse, poorly documented or ad-hoc drop rules create audit findings and may break chain-of-custody for investigations.

In summary, meet AU.L2-3.3.6 by combining source-side filtering, ingest-time processors, deduplication and aggregation, retention lifecycle planning, and prebuilt on-demand reports with role-based controls and documented change management. For a small business, focus first on inventory/prioritization, implement conservative drop rules with validation, build summary indices for rapid reporting, and document everything so you can demonstrate repeatable, auditable processes to assessors.

 
