Essential Cybersecurity Controls (ECC-2:2024) Control 2-1-6 requires organizations to regularly review and document their asset inventory and associated security posture; using automated tools to schedule, track, and capture evidence reduces human error and produces an auditable trail that satisfies Compliance Framework expectations.
Practical implementation overview for Compliance Framework
To implement Control 2-1-6 in a reproducible way you need four things: a reliable asset inventory, a policy that defines review frequency and owners, an automation layer that schedules and runs reviews, and a secure evidence repository that documents findings and remediation. For small businesses this often means combining built-in management platforms (e.g., Microsoft Intune/Azure AD, Jamf, or a lightweight CMDB) with automated scans, scheduled jobs, and ticketing integration so every review produces verifiable evidence aligned with Compliance Framework reporting needs.
Step 1 — Build and maintain a discoverable asset inventory
Start with discovery tools that can integrate into your CMDB or asset database. Options range from agent-based systems (Intune/SCCM, Jamf, Tanium) to agentless scanners (Nmap/NetBox + scripts) and SaaS vulnerability platforms (Tenable, Qualys). For a 50-seat small business example: enable Intune device inventory, configure Azure AD device sync, and run a nightly discovery job that exports managed devices via the Microsoft Graph PowerShell SDK: Connect-MgGraph -Scopes "DeviceManagementManagedDevices.Read.All"; Get-MgDeviceManagementManagedDevice | Export-Csv -Path devices.csv. Ensure each record contains a unique asset ID, owner, location, classification, and last-review timestamp so reviews can be automated and filtered by risk or owner.
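A nightly export is only useful if every record actually carries the metadata the review workflow depends on. The sketch below, in Python, loads a discovery export such as devices.csv and separates complete records from those missing required fields; the field names are illustrative assumptions, not a fixed schema.

```python
import csv

# Illustrative field names - adjust to match your CMDB schema.
REQUIRED_FIELDS = ("asset_id", "owner", "location", "classification", "last_review")

def load_inventory(path):
    """Read a discovery export and flag records missing required metadata.

    Returns (complete, incomplete) lists of row dicts; incomplete records
    should be triaged before they enter the automated review cycle.
    """
    complete, incomplete = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if all(row.get(field) for field in REQUIRED_FIELDS):
                complete.append(row)
            else:
                incomplete.append(row)
    return complete, incomplete
```

Running this as part of the nightly job gives you an early warning when discovery produces records that cannot be scheduled for review.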
Step 2 — Define review frequency, scope, and owners
Create a compliance policy that maps asset categories to review cadence and acceptance criteria: e.g., critical servers monthly, laptops and mobile devices quarterly, IoT/OT devices biannually, and cloud resources monthly. Encode this policy as metadata in your CMDB (fields like review_frequency, owner, next_review_date). Use that metadata to generate schedules: a simple cron entry for a nightly scheduler might look like 0 2 * * * /opt/tools/run-periodic-asset-reviews.sh, where the script queries assets with next_review_date <= today and triggers the review workflow.
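The selection logic that run-periodic-asset-reviews.sh would perform can be sketched in Python: pick assets whose next_review_date has arrived, and roll the date forward by the cadence the policy assigns to each category. The category names and intervals below are illustrative placeholders for your own policy.

```python
from datetime import date, timedelta

# Assumed cadence policy: category -> days between reviews (illustrative).
REVIEW_INTERVAL_DAYS = {"critical-server": 30, "laptop": 90, "iot": 182, "cloud": 30}

def assets_due(assets, today=None):
    """Return assets whose next_review_date is today or earlier."""
    today = today or date.today()
    return [a for a in assets if a["next_review_date"] <= today]

def schedule_next(asset, reviewed_on):
    """Advance next_review_date according to the asset's category cadence."""
    interval = REVIEW_INTERVAL_DAYS[asset["category"]]
    asset["next_review_date"] = reviewed_on + timedelta(days=interval)
    return asset
```

Keeping the cadence table as data rather than code means a policy change (say, moving laptops from quarterly to monthly) is a one-line CMDB update, not a script rewrite.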
Step 3 — Automate reviews and capture evidence
Automation should perform three tasks: run the checks, collect artifacts, and record outcomes. For checks, call APIs and scanners—e.g., run authenticated vulnerability scans on critical hosts or query configuration states (patch level, AV status, EDR presence) via APIs. Example: export a device's compliance state from Intune with PowerShell and save the JSON with a timestamped filename: Get-MgDeviceManagementManagedDevice -ManagedDeviceId <id> | ConvertTo-Json | Out-File device-<id>-$(Get-Date -Format yyyyMMddHHmmss).json. For ticketing, auto-create remediation tasks using REST calls to your ITSM (Jira example): curl -u user@example.com:APIToken -X POST -H "Content-Type: application/json" --data '{"fields":{"project":{"key":"IT"},"summary":"Asset review: missing AV on host-01","description":"Details...","labels":["asset-review"]}}' https://your-domain.atlassian.net/rest/api/2/issue. Always capture evidence artifacts (scanner reports, device JSON, screenshots, logs) and compute a SHA-256 hash (e.g., sha256sum artifact.zip > artifact.zip.sha256) before storing them, so integrity can be verified later.
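The artifact-capture step above can be sketched in Python: write the review output as timestamped JSON and record its SHA-256 in a sidecar file, mirroring the sha256sum convention. The filename pattern and function name are illustrative, not part of any tool's API.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def store_evidence(outdir, asset_id, payload):
    """Write a review artifact as timestamped JSON and record its SHA-256
    in a sidecar file (equivalent to: sha256sum artifact > artifact.sha256).
    """
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    artifact = Path(outdir) / f"device-{asset_id}-{stamp}.json"
    artifact.write_text(json.dumps(payload, indent=2))
    digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
    sidecar = artifact.with_name(artifact.name + ".sha256")
    sidecar.write_text(f"{digest}  {artifact.name}\n")
    return artifact, digest
```

Storing the hash at capture time, before the artifact leaves the automation host, is what lets an auditor later confirm the evidence was not altered in transit or at rest.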
Step 4 — Track remediation and build an auditable trail
Tracking is best achieved by integrating your workflow engine with ticketing and the CMDB so each asset's record holds the last review date, outcome, and links to evidence/tickets. When an automated review generates findings, auto-open a ticket, assign the asset owner, and set SLA-driven reminders. Use the CMDB's API to record the ticket link: after creating a Jira issue, update the asset record with the issue key and new next_review_date. Use immutable storage or write-once S3 buckets for evidence retention and enable object versioning; keep an audit log that records who accessed evidence and when (CloudTrail, SIEM logs, or the audit features of your storage solution).
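The CMDB update that closes out a review can be sketched as a single function: link the ticket and evidence to the asset record and roll the review window forward. All field names here are illustrative assumptions about your CMDB schema.

```python
from datetime import date, timedelta

def close_review(asset, issue_key, reviewed_on, interval_days, evidence_path):
    """Record the review outcome on the asset record: link the remediation
    ticket and evidence artifact, and advance the next review date.
    Field names are illustrative, not a fixed CMDB schema.
    """
    asset.update(
        last_review_date=reviewed_on,
        next_review_date=reviewed_on + timedelta(days=interval_days),
        ticket=issue_key,
        evidence=evidence_path,
    )
    return asset
```

Because the record now points at both the ticket and the evidence artifact, an auditor can walk from any asset to its full review history without reconstructing it from separate systems.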
Documentation, retention, and compliance proofs
Compliance Framework auditors will expect consistent evidence: the original report, hashes proving integrity, ticket history showing remediation, and a policy demonstrating review criteria and frequency. Store evidence with metadata (asset_id, review_date, tool_version, scan_policy_id). Implement a retention policy that aligns with your regulatory obligations—common practice is 1–7 years depending on sector—and document it in your compliance repository. Regularly test evidence retrieval by running an internal audit exercise: extract a random asset's last three review cycles and verify all artifacts, hashes, and tickets are present and complete.
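The retrieval exercise described above can itself be automated: recompute the SHA-256 of every artifact that has a .sha256 sidecar and report any mismatch. This is a minimal sketch assuming the sidecar naming convention from the hashing step (artifact.ext.sha256, sha256sum format).

```python
import hashlib
from pathlib import Path

def verify_evidence(directory):
    """Recompute SHA-256 for every artifact with a .sha256 sidecar and
    return the names of artifacts whose hash no longer matches.
    """
    failures = []
    for sidecar in Path(directory).glob("*.sha256"):
        artifact = sidecar.with_name(sidecar.name[: -len(".sha256")])
        recorded = sidecar.read_text().split()[0]
        actual = hashlib.sha256(artifact.read_bytes()).hexdigest()
        if actual != recorded:
            failures.append(artifact.name)
    return failures
```

Running this check on a random sample of assets each quarter turns the internal audit exercise into a scheduled job rather than a manual scramble before the external audit.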
Failing to implement automated asset reviews introduces several risks: undetected unmanaged or shadow devices, missed patches and vulnerabilities, delayed remediation, and inability to demonstrate compliance during an audit, which can lead to fines, contractual penalties, or breach notification obligations. For a small business, a single unmanaged device can become an initial access vector; automated, scheduled reviews materially reduce that risk. Best practices include assigning a named asset owner, enforcing multi-factor authentication for tool access, backing up your CMDB and evidence store, and conducting periodic tabletop exercises to validate the end-to-end workflow.
In summary, meeting ECC 2-1-6 under the Compliance Framework is a matter of codifying review policies, maintaining a reliable asset inventory, automating scheduled scans and evidence capture, and integrating tracking into your ticketing/CMDB systems so every review produces verifiable artifacts. Start small—use existing management platforms and lightweight scripting to automate tasks—then iterate by adding richer scanning, stronger evidence controls, and auditor-friendly reporting to make reviews repeatable, auditable, and defensible.