{
  "title": "How to Design Cloud Subnetworks in AWS/Azure/GCP for Public-Facing Components — FAR 52.204-21 / CMMC 2.0 Level 1 - Control - SC.L1-B.1.XI Implementation Playbook",
  "date": "2026-04-19",
  "author": "Lakeridge Technologies",
  "featured_image": "/assets/images/blog/2026/4/how-to-design-cloud-subnetworks-in-awsazuregcp-for-public-facing-components-far-52204-21-cmmc-20-level-1-control-scl1-b1xi-implementation-playbook.jpg",
  "content": {
    "full_html": "<p>Designing cloud subnetworks for public-facing components is a core control for small businesses working under federal contracts (FAR 52.204-21) or pursuing CMMC 2.0 Level 1 compliance (SC.L1-B.1.XI); this playbook gives you an actionable, platform-specific approach for AWS, Azure, and GCP that minimizes exposure for Federal Contract Information (FCI) and controlled unclassified information (CUI) while producing the artifacts auditors expect.</p>\n\n<h2>High-level design principles (Compliance Framework)</h2>\n<p>Start with segmentation: isolate public-facing load balancers and jump boxes in narrow public subnets and keep application and data stores in private subnets with no direct internet route. Use defense-in-depth: network security groups / firewall rules + host-based firewall rules + application layer protections (WAF) + strong IAM. Plan CIDR ranges to avoid overlap with on-premises networks (use RFC1918 private ranges) and allocate subnet sizes per availability zone (AZ) for HA. Document the design and map each element to the Compliance Framework controls (e.g., show how public subnet = controlled ingress point and private subnet = restricted processing of FCI/CUI).</p>\n\n<h2>Platform-specific implementation notes</h2>\n<p>AWS example: Create a VPC per environment (prod/stage/dev). In each AZ create one public subnet for an internet-facing Application Load Balancer (ALB) and one or more private subnets for application servers and databases. Attach an Internet Gateway (IGW) to the VPC; route 0.0.0.0/0 from public subnet route tables to IGW. Keep private subnets without a direct route to IGW; if instances need outbound internet for updates, use a NAT Gateway in a public subnet. Use Security Groups: ALB SG allows 443/80 from 0.0.0.0/0 (or restricted IP ranges if possible), app SG allows traffic only from ALB SG, DB SG allows only app SG on DB port. Enable VPC Flow Logs and ship them to CloudWatch/central logging for audit evidence. 
Use AWS WAF on the ALB and AWS Certificate Manager (ACM) for TLS certificates.</p>\n\n<h3>Azure example</h3>\n<p>In Azure, use a Virtual Network (VNet) with subnets per role. Place an Application Gateway (or Azure Front Door for global edge) in a public subnet; keep app servers in private subnets without public IPs, and put databases in isolated subnets with service endpoints or private endpoints. Use Network Security Groups (NSGs) to restrict ingress/egress per subnet and Azure Firewall or a network virtual appliance (NVA) for centralized egress control. Enable NSG Flow Logs to Log Analytics and turn on Azure Monitor and Microsoft Defender for Cloud alerts. Use Azure Key Vault for certificate management and avoid storing credentials on public-facing VMs.</p>\n\n<h3>GCP example</h3>\n<p>GCP VPCs are global, with subnets defined per region. Front your backends with an external HTTP(S) Load Balancer (optionally adding Cloud CDN and Cloud Armor). Use private GCE instances without external IPs in private subnets and configure Cloud NAT for outbound access when necessary. Use VPC firewall rules to limit ingress to the load balancer and restrict backend network tags to accept traffic only from it. Enable VPC Flow Logs (Cloud Logging, formerly Stackdriver) and enable Security Command Center for continuous assessment. Use Identity-Aware Proxy (IAP) or Private Service Connect for management-plane access instead of exposing SSH/RDP to the internet.</p>\n\n<h2>Small-business real-world scenarios and examples</h2>\n<p>Scenario 1 — Customer portal: A small software vendor hosts a DoD contractor portal. Design: an ALB in the public subnets terminates TLS and forwards only to application servers in private subnets; the database sits in a separate private subnet with no internet route. Use strict SG/NSG rules that allow traffic only from the ALB. 
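Whichever cloud you use, the subnet plan itself can be checked for overlap before anything is provisioned, using Python's standard ipaddress module. The CIDR values below are illustrative only; substitute your own RFC 1918 allocations.

```python
import ipaddress

# Illustrative /16 VPC/VNet split into public and private subnets per AZ.
vpc = ipaddress.ip_network("10.20.0.0/16")
subnets = {
    "public-az1":  ipaddress.ip_network("10.20.0.0/24"),
    "public-az2":  ipaddress.ip_network("10.20.1.0/24"),
    "private-az1": ipaddress.ip_network("10.20.10.0/23"),
    "private-az2": ipaddress.ip_network("10.20.12.0/23"),
}
on_prem = ipaddress.ip_network("192.168.0.0/16")  # example corporate range

# Every subnet must sit inside the VPC, no two subnets may overlap,
# and nothing may collide with on-premises address space.
names = list(subnets)
for n in names:
    assert subnets[n].subnet_of(vpc), f"{n} is outside the VPC range"
    assert not subnets[n].overlaps(on_prem), f"{n} collides with on-prem"
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not subnets[a].overlaps(subnets[b]), f"{a} overlaps {b}"
print("subnet plan is consistent")
```

Keeping this plan in version control alongside your IaC gives auditors a dated, verifiable record of the CIDR allocations shown in your topology diagrams.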
Maintain documented diagrams (public subnet -> ALB -> private app -> DB) and export security group/NSG configurations as compliance evidence.</p>\n\n<p>Scenario 2 — API for partners: Use API Gateway or managed ingress (Application Gateway/Cloud Endpoints). Place edge services in the public subnet but require mutual TLS or token-based auth, use WAF rules and rate limits, and send all logs to a centralized SIEM (CloudWatch Logs/Log Analytics/Cloud Logging). For management access, use a bastion host in a public subnet with tightly restricted IP allow-lists or, preferably, use jump hosts accessible only via VPN or IAP to avoid exposing SSH/RDP publicly.</p>\n\n<h2>Practical configuration details and artifacts for auditors</h2>\n<p>Concrete items to create and retain as evidence: network topology diagrams annotated with CIDR blocks and AZs; exports of security group, NSG, and firewall rules; VPC/subnet route table snapshots; VPC Flow Logs / NSG Flow Logs retention configuration (retain logs for your required retention period); WAF rule set configuration and a sampling of blocked requests; Terraform, ARM/Bicep, or GCP deployment scripts showing idempotent configuration; and change control tickets for any modifications. Example AWS SG rule snippet: ingress for the app SG: Type=HTTPS, Protocol=TCP, Port=443, Source=sg-ALB (the ALB's security-group ID). Document how each control reduces exposure per the Compliance Framework.</p>\n\n<h2>Compliance tips, best practices, and risk of non-implementation</h2>\n<p>Best practices: minimize public IP usage (use private endpoints and reverse proxies), enforce least privilege on network and IAM, automate provisioning with IaC and scan templates for drift, enable platform-native threat detection, and schedule regular review of public-facing endpoints. 
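The rule exports described above are easier to review (and to hand to an assessor) when the raw API output is flattened into simple rows. This sketch uses a canned sample shaped like an EC2 DescribeSecurityGroups response; in practice the response would come from boto3's describe_security_groups, and the group ID here is a hypothetical placeholder.

```python
import json

def summarize_sg_export(response: dict) -> list:
    """Flatten a DescribeSecurityGroups-shaped response into auditable rows."""
    rows = []
    for sg in response["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            sources = [r["CidrIp"] for r in perm.get("IpRanges", [])]
            sources += [p["GroupId"] for p in perm.get("UserIdGroupPairs", [])]
            rows.append({
                "group": sg["GroupId"],
                "port": perm.get("FromPort"),
                "sources": sources,
            })
    return rows

# Canned sample standing in for a live API call; sg-0abc is hypothetical.
sample = {"SecurityGroups": [{
    "GroupId": "sg-0abc",
    "IpPermissions": [{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                       "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
                       "UserIdGroupPairs": []}],
}]}

rows = summarize_sg_export(sample)
print(json.dumps(rows, indent=2))  # save dated output as audit evidence
```

Running an export like this on a schedule, and keeping the dated output with your change tickets, directly supports the "what changed, when, and why" record auditors ask for.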
For CMMC/FAR compliance, maintain a configuration baseline and change log so you can show auditors \"what changed, when, and why.\" Risks of not implementing this control include direct exposure of FCI/CUI leading to data leakage, lateral movement into private networks, failed audits, contractual penalties, and reputational harm. Even a single misconfigured security group that allows broad ingress can lead to a high-impact breach.</p>\n\n<p>Summary: For FAR 52.204-21 and CMMC 2.0 Level 1 SC.L1-B.1.XI, design cloud subnets with narrow public-facing choke points (load balancers, WAFs, bastions) and keep processing and storage in private subnets with strict, auditable rules. Use platform-native logging, apply least-privilege network rules, automate deployments, and keep clear documentation and logs for audit evidence. These steps give you a defensible, practical architecture that reduces risk and demonstrates compliance.</p>",
    "plain_text": "Designing cloud subnetworks for public-facing components is a core control for small businesses working under federal contracts (FAR 52.204-21) or pursuing CMMC 2.0 Level 1 compliance (SC.L1-B.1.XI); this playbook gives you an actionable, platform-specific approach for AWS, Azure, and GCP that minimizes exposure for Federal Contract Information (FCI) and controlled unclassified information (CUI) while producing the artifacts auditors expect.\n\nHigh-level design principles (Compliance Framework)\nStart with segmentation: isolate public-facing load balancers and jump boxes in narrow public subnets and keep application and data stores in private subnets with no direct internet route. Use defense-in-depth: network security groups / firewall rules + host-based firewall rules + application layer protections (WAF) + strong IAM. Plan CIDR ranges to avoid overlap with on-premises networks (use RFC1918 private ranges) and allocate subnet sizes per availability zone (AZ) for HA. Document the design and map each element to the Compliance Framework controls (e.g., show how public subnet = controlled ingress point and private subnet = restricted processing of FCI/CUI).\n\nPlatform-specific implementation notes\nAWS example: Create a VPC per environment (prod/stage/dev). In each AZ create one public subnet for an internet-facing Application Load Balancer (ALB) and one or more private subnets for application servers and databases. Attach an Internet Gateway (IGW) to the VPC; route 0.0.0.0/0 from public subnet route tables to IGW. Keep private subnets without a direct route to IGW; if instances need outbound internet for updates, use a NAT Gateway in a public subnet. Use Security Groups: ALB SG allows 443/80 from 0.0.0.0/0 (or restricted IP ranges if possible), app SG allows traffic only from ALB SG, DB SG allows only app SG on DB port. Enable VPC Flow Logs and ship them to CloudWatch/central logging for audit evidence. 
Use AWS WAF on the ALB and AWS Certificate Manager (ACM) for TLS certificates.\n\nAzure example\nIn Azure, use a Virtual Network (VNet) with subnets per role. Place an Application Gateway (or Azure Front Door for global edge) in a public subnet; keep app servers in private subnets without public IPs, and put databases in isolated subnets with service endpoints or private endpoints. Use Network Security Groups (NSGs) to restrict ingress/egress per subnet and Azure Firewall or a network virtual appliance (NVA) for centralized egress control. Enable NSG Flow Logs to Log Analytics and turn on Azure Monitor and Microsoft Defender for Cloud alerts. Use Azure Key Vault for certificate management and avoid storing credentials on public-facing VMs.\n\nGCP example\nGCP VPCs are global, with subnets defined per region. Front your backends with an external HTTP(S) Load Balancer (optionally adding Cloud CDN and Cloud Armor). Use private GCE instances without external IPs in private subnets and configure Cloud NAT for outbound access when necessary. Use VPC firewall rules to limit ingress to the load balancer and restrict backend network tags to accept traffic only from it. Enable VPC Flow Logs (Cloud Logging, formerly Stackdriver) and enable Security Command Center for continuous assessment. Use Identity-Aware Proxy (IAP) or Private Service Connect for management-plane access instead of exposing SSH/RDP to the internet.\n\nSmall-business real-world scenarios and examples\nScenario 1 — Customer portal: A small software vendor hosts a DoD contractor portal. Design: an ALB in the public subnets terminates TLS and forwards only to application servers in private subnets; the database sits in a separate private subnet with no internet route. Use strict SG/NSG rules that allow traffic only from the ALB. Maintain documented diagrams (public subnet -> ALB -> private app -> DB) and export security group/NSG configurations as compliance evidence.\n\nScenario 2 — API for partners: Use API Gateway or managed ingress (Application Gateway/Cloud Endpoints). 
Place edge services in the public subnet but require mutual TLS or token-based auth, use WAF rules and rate limits, and send all logs to a centralized SIEM (CloudWatch Logs/Log Analytics/Cloud Logging). For management access, use a bastion host in a public subnet with tightly restricted IP allow-lists or, preferably, use jump hosts accessible only via VPN or IAP to avoid exposing SSH/RDP publicly.\n\nPractical configuration details and artifacts for auditors\nConcrete items to create and retain as evidence: network topology diagrams annotated with CIDR blocks and AZs; exports of security group, NSG, and firewall rules; VPC/subnet route table snapshots; VPC Flow Logs / NSG Flow Logs retention configuration (retain logs for your required retention period); WAF rule set configuration and a sampling of blocked requests; Terraform, ARM/Bicep, or GCP deployment scripts showing idempotent configuration; and change control tickets for any modifications. Example AWS SG rule snippet: ingress for the app SG: Type=HTTPS, Protocol=TCP, Port=443, Source=sg-ALB (the ALB's security-group ID). Document how each control reduces exposure per the Compliance Framework.\n\nCompliance tips, best practices, and risk of non-implementation\nBest practices: minimize public IP usage (use private endpoints and reverse proxies), enforce least privilege on network and IAM, automate provisioning with IaC and scan templates for drift, enable platform-native threat detection, and schedule regular review of public-facing endpoints. For CMMC/FAR compliance, maintain a configuration baseline and change log so you can show auditors \"what changed, when, and why.\" Risks of not implementing this control include direct exposure of FCI/CUI leading to data leakage, lateral movement into private networks, failed audits, contractual penalties, and reputational harm. 
Even a single misconfigured security group that allows broad ingress can lead to a high-impact breach.\n\nSummary: For FAR 52.204-21 and CMMC 2.0 Level 1 SC.L1-B.1.XI, design cloud subnets with narrow public-facing choke points (load balancers, WAFs, bastions) and keep processing and storage in private subnets with strict, auditable rules. Use platform-native logging, apply least-privilege network rules, automate deployments, and keep clear documentation and logs for audit evidence. These steps give you a defensible, practical architecture that reduces risk and demonstrates compliance."
  },
  "metadata": {
    "description": "Practical playbook for designing AWS/Azure/GCP subnetworks for public-facing components to meet FAR 52.204-21 and CMMC 2.0 Level 1 SC.L1-B.1.XI requirements, with step-by-step configuration, examples, and audit evidence guidance.",
    "permalink": "/how-to-design-cloud-subnetworks-in-awsazuregcp-for-public-facing-components-far-52204-21-cmmc-20-level-1-control-scl1-b1xi-implementation-playbook.json",
    "categories": [],
    "tags": []
  }
}