{
  "title": "Step-by-Step: Migrating Public Services into Isolated Subnetworks Without Downtime to Comply with FAR 52.204-21 / CMMC 2.0 Level 1 - Control - SC.L1-B.1.XI",
  "date": "2026-04-24",
  "author": "Lakeridge Technologies",
  "featured_image": "/assets/images/blog/2026/4/step-by-step-migrating-public-services-into-isolated-subnetworks-without-downtime-to-comply-with-far-52204-21-cmmc-20-level-1-control-scl1-b1xi.jpg",
  "content": {
"full_html": "<p>This post shows a practical, low-risk approach to migrating publicly accessible services into isolated subnetworks (a DMZ or internet-facing subnet architecture) without downtime, focused on meeting the Compliance Framework requirements in FAR 52.204-21 and CMMC 2.0 Level 1 control SC.L1-B.1.XI. The steps are oriented to small businesses using cloud (AWS/Azure) or modest on-prem infrastructure, and include concrete configuration examples, rollback strategies, and compliance evidence you can collect along the way.</p>\n\n<h2>Why isolate public services (and what the control requires)</h2>\n<p>FAR 52.204-21 and CMMC 2.0 Level 1 expect contractors to implement basic safeguarding—one core element is separating internet-facing services from systems that process or store Federal Contract Information (FCI) or Controlled Unclassified Information (CUI). Isolation reduces lateral movement risk, simplifies access control, and limits blast radius if a public system is compromised. For the Compliance Framework, demonstrate design decisions, implemented network segmentation, and evidence of traffic controls and logging.</p>\n\n<h2>Pre-migration preparation</h2>\n<h3>Inventory, classification, and scope</h3>\n<p>Start by inventorying all public endpoints: web servers, APIs, remote management interfaces, DNS names, and their back-end dependencies (databases, auth services, file stores). Classify each service: does it handle FCI/CUI or only public data? For each service collect: IP/DNS, ports, protocols, session/state behavior, health-check endpoints, and expected traffic patterns. This inventory is a required artifact for compliance audits and drives your migration plan.</p>\n\n<h3>Design the isolated topology</h3>\n<p>Design a minimal target topology: public subnet(s)/DMZ for internet-facing components (load balancers, reverse proxies, WAF), private subnets for application servers and data stores, and a management subnet for admin access. 
In AWS terms: public subnets (routed through an Internet Gateway, IGW) hosting the ALB or NGINX, private subnets with a NAT Gateway for outbound traffic, Security Groups enforcing ALB → app only on required ports. In Azure: Application Gateway/WAF in a dedicated subnet, backend pools in private subnets with NSGs restricting inbound traffic to gateway IPs only. Document routing, firewall rules, and required ports—this documentation is evidence for Compliance Framework assessments.</p>\n\n<h2>Step-by-step migration without downtime</h2>\n<p>1) Build the parallel environment: provision the isolated subnets, network ACLs/NSGs, load balancer or reverse proxy, WAF, and monitoring (VPC Flow Logs/Azure NSG Flow Logs, SIEM ingestion). 2) Deploy the service into the private subnet (or behind the new ALB) but keep it out of public DNS by assigning a temporary hostname or a private alias. 3) Configure the load balancer to route to the new target group and set health checks that mirror production readiness endpoints. 4) Reduce DNS TTL for the live hostname to a low value (e.g., 60 seconds) at least 48 hours before cutover; this supports rapid rollback if needed.</p>\n\n<p>5) Start traffic shifting: use the load balancer to mirror or split traffic (canary/weighted routing) from the old environment to the new environment. With an AWS Application Load Balancer you can register targets gradually or use weighted target groups; in Azure, Traffic Manager or Front Door supports priority/weighted routing. Monitor application metrics, error rates, logs, and network telemetry closely. 
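The weighted canary split in step 5 can be sketched as a stable-hash bucket check. This is an illustrative Python sketch only (the function and names are invented, not an ALB or Front Door API); in practice the split is configured on the load balancer or DNS layer:

```python
import hashlib

def pick_target(request_id: str, canary_weight: int = 10) -> str:
    """Route a request to the 'new' or 'old' environment.

    A stable hash of the request/client ID keeps routing sticky across
    retries; canary_weight is the percentage (0-100) sent to 'new'.
    """
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    return "new" if bucket < canary_weight else "old"

# With a 10% weight, roughly 10% of a large sample lands on 'new'.
sample = [pick_target(f"req-{i}") for i in range(10_000)]
new_share = sample.count("new") / len(sample)
```

Because the bucket comes from a hash rather than a random draw, the same client always hits the same environment during the canary phase, which keeps sessions and error attribution clean.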
6) Handle session state: if your app is stateful, move session state to a shared store (Redis, DynamoDB, SQL) before cutover or enable sticky sessions on the load balancer to preserve sessions during traffic shifting.</p>\n\n<p>7) Final cutover: once metrics are stable and you’ve validated functionality under production load, update DNS to point to the new load balancer/ALB IP/CNAME and keep the old environment running as a warm standby for a defined rollback window (e.g., 24–72 hours). 8) Decommission and document: after the rollback window, decommission public routing to the old environment, archive configurations and logs, and update your network diagrams and compliance artifacts.</p>\n\n<h2>Technical examples and small-business scenarios</h2>\n<p>Example: Small SaaS on AWS using EC2 and ALB. Create two private subnets across AZs for app servers, and place the ALB in public subnets with an ALB security group (inbound 443/80 from 0.0.0.0/0, outbound to the app SG only); the app SG allows inbound 443 from the ALB SG only, with outbound restricted to the DB SG. Use AWS WAF with an OWASP-style core rule set. Use Route 53 weighted records to split traffic 10/90 during the canary phase. Enable VPC Flow Logs to CloudWatch and ship to your SIEM for retention. Evidence: Route 53 change history, Security Group rules, ALB access logs, WAF logs, VPC Flow Logs—store in a compliance folder.</p>\n\n<p>On-prem example for a small business: use a managed switch to create VLANs (VLAN 10 = DMZ, VLAN 20 = internal), place a reverse proxy (NGINX) in the DMZ, and configure firewall rules (pfSense, Ubiquiti): allow 443 to the proxy, allow proxy→internal app traffic on specific ports only, and block direct internet-to-internal traffic. Migrate using a temporary proxy hostname and switch firewall NAT rules during a quiet window, or use DNS weight-based rotation if your DNS provider supports it. 
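The security-group layout in the AWS example above can double as checkable compliance evidence. A minimal audit sketch, assuming a hypothetical exported ruleset (the field names and SG IDs are invented for illustration, not the AWS export schema):

```python
# App-tier inbound rules as exported from the cloud provider
# (illustrative format and IDs, not the real AWS API shape).
APP_SG_RULES = [
    {"port": 443, "source": "sg-alb-frontend"},  # ALB -> app only
]
DMZ_SOURCES = {"sg-alb-frontend"}

def app_tier_is_isolated(rules, dmz_sources):
    """True only if every inbound rule originates from a known DMZ component."""
    return all(rule["source"] in dmz_sources for rule in rules)

isolated = app_tier_is_isolated(APP_SG_RULES, DMZ_SOURCES)
# Adding a world-open rule should fail the check.
violating = APP_SG_RULES + [{"port": 443, "source": "0.0.0.0/0"}]
still_isolated = app_tier_is_isolated(violating, DMZ_SOURCES)
```

Run against real rule exports on a schedule, a check like this turns the "app SG accepts traffic from the ALB SG only" design statement into a repeatable audit artifact.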
Collect firewall configs, VLAN maps, and change control tickets as compliance artifacts.</p>\n\n<h2>Compliance tips, controls, and best practices</h2>\n<p>Document the design and produce evidence for each requirement: network diagrams (annotated), ACL/Security Group/NSG screenshots or exports, WAF rule sets, logging retention policies, and incident response playbooks. Automate provisioning with IaC (Terraform/ARM/AWS CloudFormation) and check-in templates to version-control your network baselines; this supports repeatable audits. Schedule periodic verification (quarterly) — run vulnerability scans, review security groups for excess open ports, and validate that public IPs are limited to DMZ components only.</p>\n\n<h2>Risks of not implementing isolation</h2>\n<p>Failing to isolate public services increases the risk of lateral movement: an attacker who compromises an internet-facing service could pivot and access internal systems or CUI, leading to data breaches, contract suspension or termination, and regulatory penalties. Non-compliance with FAR 52.204-21 or CMMC can disqualify a contractor from bidding on government work; beyond regulatory risk, the financial and reputational damage from a breach can be catastrophic for small businesses. From a technical perspective, lack of segmentation also makes incident detection and forensics harder because network telemetry is noisy and undifferentiated.</p>\n\n<p>Summary: migrating public services into isolated subnetworks is achievable for small businesses with careful planning, a parallel build-and-validate approach, traffic-shifting (blue/green or canary) techniques, and clear documentation for the Compliance Framework. Follow the step-by-step plan above, collect the specified evidence, and automate where possible to maintain compliance posture while minimizing downtime and risk.</p>",
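The quarterly verification described above ("validate that public IPs are limited to DMZ components only") also lends itself to a scripted check. A sketch over an invented inventory format; in practice the inventory would come from your cloud API or CMDB export:

```python
# Illustrative host inventory; zone names, host names, and the
# documentation-range IP are made up for this sketch.
INVENTORY = [
    {"name": "alb-prod", "zone": "dmz", "public_ip": "203.0.113.10"},
    {"name": "app-01", "zone": "private", "public_ip": None},
    {"name": "db-01", "zone": "private", "public_ip": None},
]

def public_ips_outside_dmz(inventory):
    """Return names of hosts exposing a public IP from a non-DMZ zone."""
    return [host["name"] for host in inventory
            if host["public_ip"] and host["zone"] != "dmz"]

violations = public_ips_outside_dmz(INVENTORY)  # empty when compliant
```

An empty result each quarter, stored alongside the raw inventory export, is straightforward evidence that public exposure stayed confined to the DMZ.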
"plain_text": "This post shows a practical, low-risk approach to migrating publicly accessible services into isolated subnetworks (a DMZ or internet-facing subnet architecture) without downtime, focused on meeting the Compliance Framework requirements in FAR 52.204-21 and CMMC 2.0 Level 1 control SC.L1-B.1.XI. The steps are oriented to small businesses using cloud (AWS/Azure) or modest on-prem infrastructure, and include concrete configuration examples, rollback strategies, and compliance evidence you can collect along the way.\n\nWhy isolate public services (and what the control requires)\nFAR 52.204-21 and CMMC 2.0 Level 1 expect contractors to implement basic safeguarding—one core element is separating internet-facing services from systems that process or store Federal Contract Information (FCI) or Controlled Unclassified Information (CUI). Isolation reduces lateral movement risk, simplifies access control, and limits blast radius if a public system is compromised. For the Compliance Framework, demonstrate design decisions, implemented network segmentation, and evidence of traffic controls and logging.\n\nPre-migration preparation\nInventory, classification, and scope\nStart by inventorying all public endpoints: web servers, APIs, remote management interfaces, DNS names, and their back-end dependencies (databases, auth services, file stores). Classify each service: does it handle FCI/CUI or only public data? For each service collect: IP/DNS, ports, protocols, session/state behavior, health-check endpoints, and expected traffic patterns. This inventory is a required artifact for compliance audits and drives your migration plan.\n\nDesign the isolated topology\nDesign a minimal target topology: public subnet(s)/DMZ for internet-facing components (load balancers, reverse proxies, WAF), private subnets for application servers and data stores, and a management subnet for admin access. 
In AWS terms: public subnets (routed through an Internet Gateway, IGW) hosting the ALB or NGINX, private subnets with a NAT Gateway for outbound traffic, Security Groups enforcing ALB → app only on required ports. In Azure: Application Gateway/WAF in a dedicated subnet, backend pools in private subnets with NSGs restricting inbound traffic to gateway IPs only. Document routing, firewall rules, and required ports—this documentation is evidence for Compliance Framework assessments.\n\nStep-by-step migration without downtime\n1) Build the parallel environment: provision the isolated subnets, network ACLs/NSGs, load balancer or reverse proxy, WAF, and monitoring (VPC Flow Logs/Azure NSG Flow Logs, SIEM ingestion). 2) Deploy the service into the private subnet (or behind the new ALB) but keep it out of public DNS by assigning a temporary hostname or a private alias. 3) Configure the load balancer to route to the new target group and set health checks that mirror production readiness endpoints. 4) Reduce DNS TTL for the live hostname to a low value (e.g., 60 seconds) at least 48 hours before cutover; this supports rapid rollback if needed.\n\n5) Start traffic shifting: use the load balancer to mirror or split traffic (canary/weighted routing) from the old environment to the new environment. With an AWS Application Load Balancer you can register targets gradually or use weighted target groups; in Azure, Traffic Manager or Front Door supports priority/weighted routing. Monitor application metrics, error rates, logs, and network telemetry closely. 6) Handle session state: if your app is stateful, move session state to a shared store (Redis, DynamoDB, SQL) before cutover or enable sticky sessions on the load balancer to preserve sessions during traffic shifting.\n\n7) Final cutover: once metrics are stable and you’ve validated functionality under production load, update DNS to point to the new load balancer/ALB IP/CNAME and keep the old environment running as a warm standby for a defined rollback window (e.g., 24–72 hours). 
8) Decommission and document: after the rollback window, decommission public routing to the old environment, archive configurations and logs, and update your network diagrams and compliance artifacts.\n\nTechnical examples and small-business scenarios\nExample: Small SaaS on AWS using EC2 and ALB. Create two private subnets across AZs for app servers, and place the ALB in public subnets with an ALB security group (inbound 443/80 from 0.0.0.0/0, outbound to the app SG only); the app SG allows inbound 443 from the ALB SG only, with outbound restricted to the DB SG. Use AWS WAF with an OWASP-style core rule set. Use Route 53 weighted records to split traffic 10/90 during the canary phase. Enable VPC Flow Logs to CloudWatch and ship to your SIEM for retention. Evidence: Route 53 change history, Security Group rules, ALB access logs, WAF logs, VPC Flow Logs—store in a compliance folder.\n\nOn-prem example for a small business: use a managed switch to create VLANs (VLAN 10 = DMZ, VLAN 20 = internal), place a reverse proxy (NGINX) in the DMZ, and configure firewall rules (pfSense, Ubiquiti): allow 443 to the proxy, allow proxy→internal app traffic on specific ports only, and block direct internet-to-internal traffic. Migrate using a temporary proxy hostname and switch firewall NAT rules during a quiet window, or use DNS weight-based rotation if your DNS provider supports it. Collect firewall configs, VLAN maps, and change control tickets as compliance artifacts.\n\nCompliance tips, controls, and best practices\nDocument the design and produce evidence for each requirement: network diagrams (annotated), ACL/Security Group/NSG screenshots or exports, WAF rule sets, logging retention policies, and incident response playbooks. Automate provisioning with IaC (Terraform/ARM/AWS CloudFormation) and check-in templates to version-control your network baselines; this supports repeatable audits. 
Schedule periodic verification (quarterly) — run vulnerability scans, review security groups for excess open ports, and validate that public IPs are limited to DMZ components only.\n\nRisks of not implementing isolation\nFailing to isolate public services increases the risk of lateral movement: an attacker who compromises an internet-facing service could pivot and access internal systems or CUI, leading to data breaches, contract suspension or termination, and regulatory penalties. Non-compliance with FAR 52.204-21 or CMMC can disqualify a contractor from bidding on government work; beyond regulatory risk, the financial and reputational damage from a breach can be catastrophic for small businesses. From a technical perspective, lack of segmentation also makes incident detection and forensics harder because network telemetry is noisy and undifferentiated.\n\nSummary: migrating public services into isolated subnetworks is achievable for small businesses with careful planning, a parallel build-and-validate approach, traffic-shifting (blue/green or canary) techniques, and clear documentation for the Compliance Framework. Follow the step-by-step plan above, collect the specified evidence, and automate where possible to maintain compliance posture while minimizing downtime and risk."
  },
  "metadata": {
    "description": "Practical, step-by-step guidance for small businesses to migrate public-facing services into isolated subnetworks (DMZ/private subnets) without downtime to meet FAR 52.204-21 and CMMC 2.0 Level 1 SC.L1-B.1.XI requirements.",
    "permalink": "/step-by-step-migrating-public-services-into-isolated-subnetworks-without-downtime-to-comply-with-far-52204-21-cmmc-20-level-1-control-scl1-b1xi.json",
    "categories": [],
    "tags": []
  }
}