{
  "title": "How to integrate automated security testing (SAST/DAST) into CI/CD for external web apps to satisfy Essential Cybersecurity Controls (ECC – 2 : 2024) - Control - 2-15-2",
  "date": "2026-04-02",
  "author": "Lakeridge Technologies",
  "featured_image": "/assets/images/blog/2026/4/how-to-integrate-automated-security-testing-sastdast-into-cicd-for-external-web-apps-to-satisfy-essential-cybersecurity-controls-ecc-2-2024-control-2-15-2.jpg",
  "content": {
    "full_html": "<p>Automating security testing (SAST for source and DAST for running apps) inside CI/CD pipelines is a required and practical control under Essential Cybersecurity Controls (ECC – 2 : 2024) — Control - 2-15-2; this post explains how to integrate both methods for external web applications, with concrete pipeline patterns, tool options, compliance evidence needed, and small-business scenarios to make implementation realistic and auditable for the Compliance Framework.</p>\n\n<h2>What Control 2-15-2 Requires (Compliance Framework mapping)</h2>\n<p>Control - 2-15-2 in ECC – 2 : 2024 mandates that external-facing web applications be subject to automated security testing as part of the software delivery lifecycle. That means integrating static application security testing (SAST) to catch code-level issues early and dynamic application security testing (DAST) to find runtime vulnerabilities, with evidence that scans run, results are triaged, and remediation or risk-acceptance is tracked in accordance with your Compliance Framework policies.</p>\n\n<h3>Key Objectives</h3>\n<p>The primary objectives to satisfy the Compliance Framework are: (1) implement automated SAST and DAST in CI/CD for all external web apps, (2) ensure authenticated/runtime scanning of production-like environments (preferably staging) for DAST, (3) produce standardized, exportable scan evidence (SARIF, XML, JSON) and retain logs for audit windows, and (4) enforce remediation and exception workflows tied to ticketing and SLAs.</p>\n\n<h3>Implementation Notes</h3>\n<p>Practical notes for Compliance Framework compliance: run SAST on pre-merge and pre-release stages, run DAST after deployment to a staging environment that mirrors production, use authenticated scanning where applicable, avoid scanning live production without controls, and store scan artifacts and SARIF outputs in the build system or compliance repository for the retention period required by your organization (e.g., 12 
months).</p>\n\n<h2>Planning the Integration: pipeline placement, environments, and scope</h2>\n<p>Design your CI/CD so SAST runs earlier (developer feedback loop) and DAST runs later (after deployment to an environment that mimics production). For external web apps: SAST should be executed on feature branches and as part of the merge pipeline; DAST should run in a dedicated staging environment with production-like configuration and sanitized or synthetic data. Define clear scan scopes (allowed hosts, paths to exclude, authenticated endpoints) and register them in pipeline configuration so scans are repeatable and auditable.</p>\n\n<h2>Hands-on: SAST integration (practical steps)</h2>\n<p>Choose a SAST tool that fits your language/platform and compliance needs: Semgrep (fast, free rules + custom rules), GitHub CodeQL (native SARIF integration in GH), SonarQube (enterprise-grade), or commercial scanners (Veracode, Checkmarx). Implement SAST as follows: add a CI job that runs on pull requests and on main branch builds, output SARIF for standardization, fail the build for new HIGH/CRITICAL findings (or create a policy to block merges), and attach the SARIF artifact to the pipeline run for auditors. Example GitHub Actions step (conceptual):</p>\n\n<pre><code>- name: Run Semgrep SAST\n  uses: returntocorp/semgrep-action@v1\n  with:\n    config: p/ci\n    format: sarif\n</code></pre>\n\n<p>Ensure SAST is configured for incremental scans (only changed files) where supported to keep developer feedback fast, but always run a full scan for release builds. Map rule severities to Compliance Framework severity bands so remediation SLAs are consistent with policy.</p>\n\n<h2>Hands-on: DAST integration (practical steps)</h2>\n<p>For DAST, tools like OWASP ZAP (free, scriptable), Burp Suite Enterprise, Detectify, and Nikto are common. Integrate DAST as a pipeline job triggered after deployment to staging. 
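One conceptual wiring in GitHub Actions, using job dependencies so the scan only runs after a successful staging deploy (the deploy-staging job name, the STAGING_URL variable, and the action version pin are illustrative placeholders, not from any specific project):</p>\n\n<pre><code>dast-scan:\n  needs: deploy-staging   # run only after the staging deploy job succeeds\n  runs-on: ubuntu-latest\n  steps:\n    - name: ZAP baseline scan of staging\n      uses: zaproxy/action-baseline@v0.12.0\n      with:\n        target: ${{ vars.STAGING_URL }}\n</code></pre>\n\n<p>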
Use a headless browser or scripted authentication to reach protected endpoints (ZAP contexts plus user scripts, or a recorded Selenium login script). Run authenticated scans against a staging URL and export the results as JSON/XML for evidence. Example conceptual pipeline step using Dockerized ZAP:</p>\n\n<pre><code>- name: Start ZAP and Run DAST\n  run: |\n    docker run --rm -v $(pwd)/zap-output:/zap/wrk/:rw owasp/zap2docker-stable zap-full-scan.py -t https://staging.example.com -r zap-report.html -J zap-report.json\n</code></pre>\n\n<p>Be careful with DAST in production: use rate limits, scanning windows, and an approved IP allowlist. For external apps, authenticated DAST is essential to uncover logic and access-control issues; use short-lived service accounts, store credentials in a secrets manager (Vault, AWS Secrets Manager, GitHub Secrets), and inject them securely into the scan job.</p>\n\n<h2>Operationalizing for Compliance: triage, evidence, and SLAs</h2>\n<p>To meet Compliance Framework audit expectations, implement a triage and remediation workflow: automatically create tickets (Jira/Trello) for findings above a threshold, assign severity-based SLAs (e.g., Critical: 72 hours; High: 14 days; Medium: 60 days), and require evidence of remediation (an updated SARIF/DAST report showing closure). Retain raw scan output, reports, and triage tickets for the framework's retention period. Use CI artifacts and a secure compliance bucket (e.g., S3 with restricted access) to store SARIF/JSON reports and pipeline logs.</p>\n\n<h2>Small business scenarios and cost-effective approaches</h2>\n<p>Small teams can achieve compliance affordably: use GitHub CodeQL (free on public repos), Semgrep (community rules), and OWASP ZAP in GitHub Actions for DAST. 
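A minimal nightly DAST workflow can be sketched as follows (conceptual; the cron schedule, staging URL, and artifact name are placeholders). Note that zap-full-scan.py exits non-zero when it flags findings, so the evidence-upload step runs unconditionally:</p>\n\n<pre><code>name: nightly-dast\non:\n  schedule:\n    - cron: \"0 2 * * *\"   # every night at 02:00 UTC\njobs:\n  zap-scan:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Run ZAP full scan against staging\n        run: |\n          mkdir -p zap-output\n          docker run --rm -v $(pwd)/zap-output:/zap/wrk/:rw owasp/zap2docker-stable zap-full-scan.py -t https://staging.example.com -J zap-report.json\n      - name: Upload scan evidence\n        if: always()   # keep the report even when the scan flags findings\n        uses: actions/upload-artifact@v4\n        with:\n          name: zap-report\n          path: zap-output/\n</code></pre>\n\n<p>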
Example: a 3-developer SaaS startup uses GitHub Actions to run Semgrep on PRs and a nightly ZAP scan against staging, with automated creation of GitHub Issues for High/Critical results; they set a policy that Criticals must be fixed before release and Highs are remediated within two sprints. Managed SaaS offerings (Snyk, Detectify, Burp Suite Enterprise) are also cost-effective when factoring in the engineer time saved on maintenance and false-positive triage.</p>\n\n<h2>Compliance tips and best practices</h2>\n<ul>\n  <li>Standardize outputs: export SARIF for SAST and structured JSON/XML for DAST so reports can be ingested by your compliance tooling.</li>\n  <li>Manage false positives: maintain a documented suppression list and a \"reason for exception\" record that includes compensating controls and expiration dates.</li>\n  <li>Use authenticated scanning with service accounts and rotate credentials regularly; never hard-code secrets in pipeline YAML.</li>\n  <li>Define gating rules: block merges for new Criticals, require owner approval for accepted exceptions, and run full scans before major releases.</li>\n  <li>Maintain an audit trail: retain scan artifacts, pipeline logs, and ticketing history to demonstrate compliance to auditors.</li>\n</ul>\n\n<p>The risk of not implementing Control 2-15-2 is high: without integrated SAST/DAST you push vulnerabilities into production, increasing the chance of data breaches, regulatory penalties, customer churn, and costly post-release remediation. 
External web apps are frequent attack surfaces; skipping automated testing delays detection, elevates risk, and makes proving due diligence to auditors difficult.</p>\n\n<p>In summary, to satisfy ECC – 2 : 2024 Control 2-15-2 under your Compliance Framework: implement SAST early in CI, run authenticated DAST in staging as part of deployment pipelines, export standardized evidence, enforce triage and remediation SLAs, and retain artifacts for audits. Small businesses can meet these requirements with open-source tools and practical pipeline patterns, while larger organizations should adopt enterprise scanners and centralized reporting to scale assurance across many apps.</p>",
    "plain_text": "Automating security testing (SAST for source and DAST for running apps) inside CI/CD pipelines is a required and practical control under Essential Cybersecurity Controls (ECC – 2 : 2024) — Control - 2-15-2; this post explains how to integrate both methods for external web applications, with concrete pipeline patterns, tool options, compliance evidence needed, and small-business scenarios to make implementation realistic and auditable for the Compliance Framework.\n\nWhat Control 2-15-2 Requires (Compliance Framework mapping)\nControl - 2-15-2 in ECC – 2 : 2024 mandates that external-facing web applications be subject to automated security testing as part of the software delivery lifecycle. That means integrating static application security testing (SAST) to catch code-level issues early and dynamic application security testing (DAST) to find runtime vulnerabilities, with evidence that scans run, results are triaged, and remediation or risk-acceptance is tracked in accordance with your Compliance Framework policies.\n\nKey Objectives\nThe primary objectives to satisfy the Compliance Framework are: (1) implement automated SAST and DAST in CI/CD for all external web apps, (2) ensure authenticated/runtime scanning of production-like environments (preferably staging) for DAST, (3) produce standardized, exportable scan evidence (SARIF, XML, JSON) and retain logs for audit windows, and (4) enforce remediation and exception workflows tied to ticketing and SLAs.\n\nImplementation Notes\nPractical notes for Compliance Framework compliance: run SAST on pre-merge and pre-release stages, run DAST after deployment to a staging environment that mirrors production, use authenticated scanning where applicable, avoid scanning live production without controls, and store scan artifacts and SARIF outputs in the build system or compliance repository for the retention period required by your organization (e.g., 12 months).\n\nPlanning the Integration: pipeline 
placement, environments, and scope\nDesign your CI/CD so SAST runs earlier (developer feedback loop) and DAST runs later (after deployment to an environment that mimics production). For external web apps: SAST should be executed on feature branches and as part of the merge pipeline; DAST should run in a dedicated staging environment with production-like configuration and sanitized or synthetic data. Define clear scan scopes (allowed hosts, paths to exclude, authenticated endpoints) and register them in pipeline configuration so scans are repeatable and auditable.\n\nHands-on: SAST integration (practical steps)\nChoose a SAST tool that fits your language/platform and compliance needs: Semgrep (fast, free rules + custom rules), GitHub CodeQL (native SARIF integration in GH), SonarQube (enterprise-grade), or commercial scanners (Veracode, Checkmarx). Implement SAST as follows: add a CI job that runs on pull requests and on main branch builds, output SARIF for standardization, fail the build for new HIGH/CRITICAL findings (or create a policy to block merges), and attach the SARIF artifact to the pipeline run for auditors. Example GitHub Actions step (conceptual):\n\n- name: Run Semgrep SAST\n  uses: returntocorp/semgrep-action@v1\n  with:\n    config: p/ci\n    format: sarif\n\n\nEnsure SAST is configured for incremental scans (only changed files) where supported to keep developer feedback fast, but always run a full scan for release builds. Map rule severities to Compliance Framework severity bands so remediation SLAs are consistent with policy.\n\nHands-on: DAST integration (practical steps)\nFor DAST, tools like OWASP ZAP (free, scriptable), Burp Suite Enterprise, Detectify, and Nikto are common. Integrate DAST as a pipeline job triggered after deployment to staging. Use a headless browser or scripted authentication to reach protected endpoints (ZAP contexts + user scripts or Selenium login record). 
Run authenticated scans against a staging URL and export the results as JSON/XML for evidence. Example conceptual pipeline step using Dockerized ZAP:\n\n- name: Start ZAP and Run DAST\n  run: |\n    docker run --rm -v $(pwd)/zap-output:/zap/wrk/:rw owasp/zap2docker-stable zap-full-scan.py -t https://staging.example.com -r zap-report.html -J zap-report.json\n\n\nBe careful with DAST in production: use rate limits, scanning windows, and an approved IP allowlist. For external apps, authenticated DAST is essential to uncover logic and access-control issues; use short-lived service accounts, store credentials in a secrets manager (Vault, AWS Secrets Manager, GitHub Secrets), and inject them securely into the scan job.\n\nOperationalizing for Compliance: triage, evidence, and SLAs\nTo meet Compliance Framework audit expectations, implement a triage and remediation workflow: automatically create tickets (Jira/Trello) for findings above a threshold, assign severity-based SLAs (e.g., Critical: 72 hours; High: 14 days; Medium: 60 days), and require evidence of remediation (an updated SARIF/DAST report showing closure). Retain raw scan output, reports, and triage tickets for the framework's retention period. Use CI artifacts and a secure compliance bucket (e.g., S3 with restricted access) to store SARIF/JSON reports and pipeline logs.\n\nSmall business scenarios and cost-effective approaches\nSmall teams can achieve compliance affordably: use GitHub CodeQL (free on public repos), Semgrep (community rules), and OWASP ZAP in GitHub Actions for DAST. Example: a 3-developer SaaS startup uses GitHub Actions to run Semgrep on PRs and a nightly ZAP scan against staging, with automated creation of GitHub Issues for High/Critical results; they set a policy that Criticals must be fixed before release and Highs are remediated within two sprints. 
Managed SaaS offerings (Snyk, Detectify, Burp Suite Enterprise) are also cost-effective when factoring in the engineer time saved on maintenance and false-positive triage.\n\nCompliance tips and best practices\n\n  Standardize outputs: export SARIF for SAST and structured JSON/XML for DAST so reports can be ingested by your compliance tooling.\n  Manage false positives: maintain a documented suppression list and a \"reason for exception\" record that includes compensating controls and expiration dates.\n  Use authenticated scanning with service accounts and rotate credentials regularly; never hard-code secrets in pipeline YAML.\n  Define gating rules: block merges for new Criticals, require owner approval for accepted exceptions, and run full scans before major releases.\n  Maintain an audit trail: retain scan artifacts, pipeline logs, and ticketing history to demonstrate compliance to auditors.\n\n\nThe risk of not implementing Control 2-15-2 is high: without integrated SAST/DAST you push vulnerabilities into production, increasing the chance of data breaches, regulatory penalties, customer churn, and costly post-release remediation. External web apps are frequent attack surfaces; skipping automated testing delays detection, elevates risk, and makes proving due diligence to auditors difficult.\n\nIn summary, to satisfy ECC – 2 : 2024 Control 2-15-2 under your Compliance Framework: implement SAST early in CI, run authenticated DAST in staging as part of deployment pipelines, export standardized evidence, enforce triage and remediation SLAs, and retain artifacts for audits. Small businesses can meet these requirements with open-source tools and practical pipeline patterns, while larger organizations should adopt enterprise scanners and centralized reporting to scale assurance across many apps."
  },
  "metadata": {
    "description": "Practical, step-by-step guidance for integrating SAST and DAST into CI/CD pipelines for external web applications to meet ECC – 2 : 2024 Control 2-15-2 compliance requirements.",
    "permalink": "/how-to-integrate-automated-security-testing-sastdast-into-cicd-for-external-web-apps-to-satisfy-essential-cybersecurity-controls-ecc-2-2024-control-2-15-2.json",
    "categories": [],
    "tags": []
  }
}