0062: Shared VPC Networking

STATUS

Accepted (Historical)

CONTEXT

This ADR documents the historical choice of a Shared VPC setup for our workload accounts. The decision was made back in 2022, before the ADR repository was set up, and the setup has been in use since then. This ADR is an attempt to capture the context of that decision as best as possible.

Considered Options

  • Shared VPCs using AWS Resource Access Manager
  • Workload Specific Networks with Transit Gateway
  • Isolated Workload Specific Networks

DECISION

We decided on Shared VPCs using AWS Resource Access Manager because of the complexity reduction it offered application developers and the overall cost savings it provided.

Complexity Reduction

Offering a set of Shared VPCs for the production and pre-production environments removes the need for application developers to provision networking resources in their workload accounts and simplifies establishing connectivity between services.

The Shared VPC model also means we do not need to provision and manage a Transit Gateway to facilitate communication between services, which further reduces complexity for the DevOps engineers.

Cost Reduction

If every workload, or every workload account, provisioned its own networking, VPC costs would rise due to the additional NAT Gateways, plus the added cost of a Transit Gateway.

Isolated Not an Option

We ruled out the Isolated Workload Specific Networks option because we knew we would need inter-workload communication, and being forced to deploy all inter-related resources into a single workload account would have greatly limited us.

3 Shared Networks + Development

To provide network isolation between our Production workload accounts, our Pre-Prod workload accounts, and our Sandbox accounts, we provisioned 3 shared VPCs in 3 separate accounts under the Infrastructure OU, using the OUs to control the Access Policy for the Resource Share.
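When a Resource Share is scoped to an OU, RAM expects the OU to be named by its Organizations ARN. A minimal sketch of that ARN format, using entirely hypothetical account, organization, and OU IDs:

```python
def ou_principal_arn(management_account: str, org_id: str, ou_id: str) -> str:
    """Build the Organizations OU ARN used as a RAM resource-share principal."""
    return f"arn:aws:organizations::{management_account}:ou/{org_id}/{ou_id}"

# Hypothetical IDs for illustration only; real IDs come from AWS Organizations.
print(ou_principal_arn("111111111111", "o-abcd1234", "ou-ab12-workloads"))
# → arn:aws:organizations::111111111111:ou/o-abcd1234/ou-ab12-workloads
```

Because the principal is an OU rather than a list of account IDs, accounts moved into (or out of) the OU gain or lose access to the shared subnets without any change to the Resource Share itself.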

| Account | Account Name | Shared VPC | CIDR | RAM Policy | Notes |
|---|---|---|---|---|---|
| 890019845446 | networking-prod | Production | 10.30.0.0/16 | Shared with Workloads/Prod OU and Infrastructure/Prod OU | |
| 248281107104 | networking-stage | SDLC | 10.31.0.0/16 | Shared with Workloads/SDLC OU and Infrastructure/SDLC OU | The account was incorrectly named "stage"; SDLC or PreProd would have been better suited. |
| 896634455853 | networking-sandbox | Sandbox | 10.32.0.0/16 | Shared with Sandbox OU | |

There is also a "Development" Shared VPC in the CDK app, but it is used only by the DevOps engineers to work on the Shared VPC infrastructure from their Sandbox.

CIDR Allocation

We wanted to avoid CIDR collisions with our existing VPCs in the event that we ever needed to peer with one of these legacy VPCs, so we allocated the new Shared VPC CIDR ranges to avoid any overlap with them.

The CIDR ranges of our existing VPCs that we wanted to avoid colliding with:

  • 10.0.0.0/16 (AdGem API dedicated VPC)
  • 10.20.0.0/16 (AdGem General VPC)
  • 172.31.0.0/16 (AdGem Public VPC)
  • 192.168.0.0/16 (AdAction Vega VPC)
  • 172.31.0.0/16 (AdAction General VPC)

We allocated these CIDR ranges to the new networks:

  • 10.30.0.0/16 - Production
  • 10.31.0.0/16 - SDLC
  • 10.32.0.0/16 - Sandbox
  • 10.35.0.0/16 - Development
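The non-overlap property above is easy to check mechanically. A small sketch using Python's standard `ipaddress` module to confirm that none of the new allocations collide with the legacy ranges listed above:

```python
import ipaddress

# Legacy VPC CIDRs we must not collide with (from the list above).
legacy = [
    "10.0.0.0/16",     # AdGem API dedicated VPC
    "10.20.0.0/16",    # AdGem General VPC
    "172.31.0.0/16",   # AdGem Public VPC / AdAction General VPC
    "192.168.0.0/16",  # AdAction Vega VPC
]

# CIDRs allocated to the new Shared VPCs.
shared = {
    "Production": "10.30.0.0/16",
    "SDLC": "10.31.0.0/16",
    "Sandbox": "10.32.0.0/16",
    "Development": "10.35.0.0/16",
}

def collisions(new_cidrs, existing_cidrs):
    """Return (name, legacy_cidr) pairs for every overlapping allocation."""
    found = []
    for name, cidr in new_cidrs.items():
        net = ipaddress.ip_network(cidr)
        for old in existing_cidrs:
            if net.overlaps(ipaddress.ip_network(old)):
                found.append((name, old))
    return found

print(collisions(shared, legacy))  # → []
```

An empty result confirms the allocation is collision-free; the same check can be rerun whenever a new range is carved out.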

CONSEQUENCES

The Shared VPC approach has trade-offs that should be considered.

Risks

  • Increased networking blast radius
  • Reduced network isolation between services (all in the same VPC)
  • Reliance on Peering Connections for VPC-to-VPC connectivity in the absence of a Transit Gateway

NOTES

This is a historical ADR documenting a decision made in Summer 2022.

References

Original Author

Nick Haynes

Approval Date

Historical (Summer 2022)