
Best Practices

This guide covers proven strategies for designing your cloud network architecture with Subnetter.

CIDR Allocation Strategy

Start Large, Subdivide Small

Always start with the largest CIDR block you can reasonably use. It’s much easier to leave space unused than to expand later.

| Base CIDR | Total IPs | Recommended For |
|-----------|-----------|-----------------|
| /8        | 16.7M     | Large enterprise, multi-cloud |
| /12       | 1M        | Medium enterprise, single cloud |
| /16       | 65K       | Small organization, single region |

Reserve Space Between Allocations

Allocate larger blocks (shorter prefix lengths) than strictly necessary to leave room for growth:

Recommended Hierarchy:
├── Base: /8 (16.7M addresses)
├── Account: /16 (65K per account, supports 256 accounts)
├── Region: /20 (4K per region, supports 16 regions per account)
├── AZ: /22 (1K per AZ, supports 4 AZs per region)
└── Subnet: /24-/28 (varies by workload)
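The capacities in this hierarchy can be sanity-checked with Python's standard ipaddress module. This is an illustrative sketch: the level names and the 10.0.0.0 base are assumptions for the example, not Subnetter output.

```python
import ipaddress

# Recommended hierarchy: base /8 -> account /16 -> region /20 -> AZ /22
levels = [("base", 8), ("account", 16), ("region", 20), ("az", 22)]

for (parent, p_len), (child, c_len) in zip(levels, levels[1:]):
    # Each extra prefix bit doubles the number of child blocks.
    children = 2 ** (c_len - p_len)
    child_size = ipaddress.ip_network(f"10.0.0.0/{c_len}").num_addresses
    print(f"/{p_len} {parent}: {children} x /{c_len} {child} blocks "
          f"({child_size} addresses each)")
```

Running this confirms the figures above: 256 accounts of 65,536 addresses, 16 regions of 4,096, and 4 AZs of 1,024.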

Ensure Sufficient Space at Each Level

Each level must have enough space to contain all children. Use the prefixLengths configuration to control this:

{
  "baseCidr": "10.0.0.0/8",
  "prefixLengths": {
    "account": 16,
    "region": 20,
    "az": 22
  }
}

Rule of thumb: Each level’s prefix should be at least 2-4 bits longer than its parent’s to allow for growth.

| Parent | Child | Bits Difference | Children Supported |
|--------|-------|-----------------|--------------------|
| /8     | /16   | 8 bits          | 256 |
| /16    | /20   | 4 bits          | 16 |
| /20    | /22   | 2 bits          | 4 |

Plan for Multi-Cloud from Day One

Even if you’re single-cloud today, reserve space for future cloud providers:

{
  "baseCidr": "10.0.0.0/8",
  "accounts": [
    {
      "name": "production",
      "clouds": {
        "aws": {
          "baseCidr": "10.0.0.0/12",
          "regions": ["us-east-1", "us-west-2"]
        }
      }
    }
  ]
}

This reserves 10.0.0.0/12 for AWS, leaving 10.16.0.0/12 through 10.240.0.0/12 available for Azure, GCP, or future expansion.
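As a quick check, Python's ipaddress module can enumerate the sixteen /12 blocks inside 10.0.0.0/8 (illustrative sketch; the AWS assignment of the first block mirrors the configuration above):

```python
import ipaddress

base = ipaddress.ip_network("10.0.0.0/8")
# The sixteen /12 blocks inside 10.0.0.0/8; the first goes to AWS,
# leaving the rest for other providers or future expansion.
blocks = list(base.subnets(new_prefix=12))

print(blocks[0])    # 10.0.0.0/12   (AWS)
print(blocks[1])    # 10.16.0.0/12  (available)
print(blocks[-1])   # 10.240.0.0/12 (available)
```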

Subnet Sizing Guidelines

Size by Workload

Different workloads have different IP requirements:

| Prefix | Subnet Type  | Usable IPs | Use Case |
|--------|--------------|------------|----------|
| /22    | Large        | 1,022      | Kubernetes node pools, container clusters |
| /23    | Medium-Large | 510        | Application tier, microservices |
| /24    | Standard     | 254        | General purpose, web tier |
| /25    | Medium       | 126        | Smaller app deployments |
| /26    | Small        | 62         | Databases, caches |
| /27    | Smaller      | 30         | Management, monitoring |
| /28    | Minimal      | 14         | Load balancers, NAT gateways |
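To go from a required host count to a prefix length, round up to the next power of two after adding the reserved addresses. The helper below is a hypothetical sketch (not part of Subnetter); the default of 2 reserved addresses matches the plain network-plus-broadcast arithmetic used in the table above.

```python
import math

def prefix_for_hosts(hosts: int, reserved: int = 2) -> int:
    # Smallest subnet (i.e. longest prefix) whose address count,
    # minus the reserved addresses, still fits `hosts` usable IPs.
    size = hosts + reserved
    return 32 - max(2, math.ceil(math.log2(size)))

print(prefix_for_hosts(200))              # 24 (254 usable)
print(prefix_for_hosts(50))               # 26 (62 usable)
print(prefix_for_hosts(200, reserved=5))  # 24 (AWS-style reservation)
```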

Account for Cloud Provider Reservations

Cloud providers reserve several IPs in each subnet, not just the standard network and broadcast addresses:

| Provider | Reserved IPs | What’s Reserved | Usable in /24 |
|----------|--------------|-----------------|---------------|
| AWS      | 5 | Network, VPC router, DNS, future use, broadcast | 251 |
| Azure    | 5 | Network, default gateway, 2× Azure DNS, broadcast | 251 |
| GCP      | 4 | Network, gateway, second-to-last, broadcast | 252 |
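The usable-IP figures follow directly from subtracting each provider's reserved count from the subnet's total addresses. A minimal sketch (the RESERVED map and usable_ips helper are illustrative, not Subnetter APIs):

```python
# Reserved addresses per subnet, by provider (includes network/broadcast).
RESERVED = {"aws": 5, "azure": 5, "gcp": 4}

def usable_ips(prefix: int, provider: str) -> int:
    # Total addresses in the subnet minus the provider's reservations.
    return 2 ** (32 - prefix) - RESERVED[provider]

print(usable_ips(24, "aws"))    # 251
print(usable_ips(24, "azure"))  # 251
print(usable_ips(24, "gcp"))    # 252
print(usable_ips(28, "aws"))    # 11
```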

Respect Cloud Provider CIDR Limits

Each cloud provider has specific constraints on VPC/VNet and subnet sizes:

| Provider | VPC/VNet Range | Subnet Range | Min Subnet | Notes |
|----------|----------------|--------------|------------|-------|
| AWS      | /16 to /28 | /16 to /28 | /28 (11 usable) | Most restrictive; use secondary CIDRs for larger VPCs |
| Azure    | /2 to /29  | /2 to /29  | /29 (3 usable)  | Very permissive; /8 recommended as practical max |
| GCP      | /8 to /29  | /8 to /29  | /29 (4 usable)  | Global VPC spans all regions |
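These constraints reduce to a simple range check per provider. A hypothetical helper (the LIMITS map mirrors the subnet ranges in the table above):

```python
# (shortest, longest) allowed subnet prefix per provider.
LIMITS = {"aws": (16, 28), "azure": (2, 29), "gcp": (8, 29)}

def subnet_prefix_allowed(prefix: int, provider: str) -> bool:
    shortest, longest = LIMITS[provider]
    return shortest <= prefix <= longest

print(subnet_prefix_allowed(28, "aws"))  # True
print(subnet_prefix_allowed(29, "aws"))  # False: AWS subnets stop at /28
print(subnet_prefix_allowed(29, "gcp"))  # True
```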

Example Subnet Type Configuration

{
  "subnetTypes": {
    "Kubernetes": 22,
    "Private": 23,
    "Public": 24,
    "Data": 26,
    "Management": 28
  }
}

CIDR Allocation Patterns

Pattern 1: Account-Based Isolation

Best for organizations with strict account boundaries (compliance, multi-tenant):

10.0.0.0/8 (Base)
├── 10.0.0.0/16 - Account A (Development)
├── 10.1.0.0/16 - Account B (Staging)
├── 10.2.0.0/16 - Account C (Production)
└── 10.3.0.0/16 - Account D (Shared Services)

Benefits:

  • Complete network isolation between accounts
  • Simplified security group and NACL rules
  • Clear audit boundaries
  • Easy to delegate to different teams

Pattern 2: Environment-Based Isolation

Best for organizations that want environment-level grouping:

10.0.0.0/8 (Base)
├── 10.0.0.0/12 - Development Environments
│ ├── 10.0.0.0/16 - Team A Dev
│ └── 10.1.0.0/16 - Team B Dev
├── 10.16.0.0/12 - Staging Environments
└── 10.32.0.0/12 - Production Environments

Benefits:

  • Environment-wide policies
  • Consistent routing across teams
  • Clear promotion path (dev → staging → prod)

Pattern 3: Cloud Provider Isolation

Best for multi-cloud organizations:

10.0.0.0/8 (Base)
├── 10.0.0.0/12 - AWS
├── 10.16.0.0/12 - Azure
└── 10.32.0.0/12 - GCP

Benefits:

  • Provider-specific routing tables
  • Simplified cross-cloud connectivity
  • Clear provider boundaries

Naming Conventions

Account Naming

Use consistent, descriptive names:

| Pattern | Examples | Best For |
|---------|----------|----------|
| Environment | prod, staging, dev | Simple organizations |
| Team + Environment | platform-prod, data-dev | Multi-team organizations |
| Business Unit | finance, marketing, engineering | Large enterprises |
| Project | project-alpha, initiative-beta | Project-based organizations |

Subnet Type Naming

Choose names that reflect purpose:

{
  "subnetTypes": {
    "Public": 24,
    "Private": 24,
    "Protected": 25,
    "Isolated": 26
  }
}

Version Control Best Practices

Store Configurations in Git

network-configs/
├── production.json
├── staging.json
├── development.json
└── allocations/
    ├── production.csv
    ├── staging.csv
    └── development.csv

Use Branching for Changes

# Create a branch for network changes
git checkout -b add-new-region
# Edit configuration
vim production.json
# Generate and verify
subnetter generate -c production.json -o allocations/production.csv
# Commit with descriptive message
git add .
git commit -m "feat(network): add ap-southeast-1 region to production"
# Create PR for review
git push origin add-new-region

Tag Deployed Configurations

git tag -a v1.2.0 -m "Production network v1.2.0 with AP Southeast region"
git push origin v1.2.0

Avoiding Common Mistakes

❌ Starting Too Small

Using a /16 base CIDR when a /8 would allow future growth without re-IPing.

❌ Inconsistent Patterns

Different allocation patterns across accounts make automation and troubleshooting harder.

❌ No Reserved Space

Allocating every available block leaves no room for new regions or accounts.

❌ Overlapping Overrides

Using custom baseCidr values that overlap with auto-allocated ranges.

How to Fix: Space Exhaustion

If you’re running out of space:

  1. Use larger prefix lengths for subnets (/26 instead of /24)
  2. Override specific accounts with dedicated CIDR ranges
  3. Split by provider using different RFC 1918 ranges
  4. Consolidate regions if some are underutilized

How to Fix: Overlapping CIDRs

If you have overlapping ranges:

  1. Run validation to identify conflicts:
     subnetter validate -c config.json -v
  2. Check custom overrides for conflicts with auto-allocation
  3. Use non-overlapping ranges for different cloud providers

Integration with IaC

Terraform Integration

locals {
  # Parse the Subnetter CSV output
  allocations = csvdecode(file("${path.module}/allocations.csv"))

  # Filter for AWS subnets only
  aws_subnets = {
    for row in local.allocations :
    "${row["Account Name"]}-${row["Availability Zone"]}-${row["Subnet Role"]}" => {
      account     = row["Account Name"]
      vpc_cidr    = row["VPC CIDR"]
      az_cidr     = row["AZ CIDR"]
      subnet_cidr = row["Subnet CIDR"]
      az          = row["Availability Zone"]
      role        = row["Subnet Role"]
      region      = row["Region Name"]
      usable_ips  = row["Usable IPs"]
    }
    if row["Cloud Provider"] == "aws"
  }
}

resource "aws_subnet" "this" {
  for_each = local.aws_subnets

  vpc_id            = aws_vpc.main.id
  cidr_block        = each.value.subnet_cidr
  availability_zone = each.value.az

  tags = {
    Name      = each.key
    Role      = each.value.role
    Account   = each.value.account
    UsableIPs = each.value.usable_ips
  }
}

CI/CD Pipeline Integration

.github/workflows/network.yml
name: Network Validation
on:
  pull_request:
    paths:
      - 'network-configs/**'
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm install -g subnetter
      - run: |
          for config in network-configs/*.json; do
            subnetter validate -c "$config"
          done

Next Steps