Migrating Your Virtual Private Cloud (VPC) From AWS to Akamai Cloud
A virtual private cloud (VPC) is a private network environment that lets you define IP address ranges, segment workloads into subnets, and control how resources communicate. VPCs help isolate and organize infrastructure while enabling internal and external traffic control.
A managed VPC service handles key networking functions like NAT, internet access, and routing, while integrating with other cloud features, so you don't need to configure them manually.
This guide covers how to migrate a basic AWS VPC environment to Akamai Cloud. The AWS setup includes three private EC2 instances, a NAT gateway for selective outgoing traffic, and a bastion host for SSH access. It walks through how to recreate this setup in Akamai Cloud using Linode compute instances and a manually configured NAT router.
Feature comparison
Before migrating, it’s useful to understand the differences between managed VPC offerings from AWS and Akamai.
What AWS VPCs offer
With AWS VPCs, administrators can define public and private subnets, attach internet gateways for public-facing resources, and use NAT gateways for managed outgoing internet access from private subnets.
In addition to security group controls, AWS VPCs are tightly integrated with managed services like RDS, Lambda, and ECS. These services can be deployed directly into your VPC with minimal configuration.
What Akamai Cloud VPCs offer
VPCs from Akamai provide private, regional networks that let you define custom IP ranges and subnets. All traffic is isolated from the public internet unless explicitly routed through a configured gateway. This lightweight model is ideal for tightly scoped environments where users want fine-grained control without added complexity.
How to adapt
Some AWS features don’t have direct equivalents in Linode, but can be replicated with custom configuration. For example, Akamai Cloud doesn’t offer a managed NAT service. However, outgoing traffic can be enabled using a Linode Compute Instance manually configured to act as a NAT router. This approach suits teams that prefer direct management of network behavior.
At present, Akamai Cloud does not integrate other services (such as NodeBalancers, LKE clusters, or managed databases) with its VPCs. However, some of these services can be replaced with self-managed equivalents and open-source tooling.
Prerequisites and assumptions
This guide assumes access to administrative credentials and CLI tools for both AWS and Akamai Cloud. You should have the ability to view and modify relevant cloud resources in both environments.
AWS CLI and permissions
Ensure that the AWS CLI is installed and configured with a user or role that has permission to manage EC2 instances, subnets, route tables, and NAT gateways.
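For a quick sanity check, you can confirm which identity the AWS CLI is using; the output includes your account ID and the ARN of the user or role:
aws sts get-caller-identity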
Linode CLI and permissions
Install the Linode CLI and authenticate using a personal access token with permissions for managing Linode instances and VPCs. You may also need some familiarity with creating and modifying basic Linux network configuration, including IP routes and ufw rules.
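If you haven't installed the CLI yet, one common approach (assuming Python and pip are available) is to install it from PyPI and run the interactive configuration, which prompts for your personal access token:
pip3 install linode-cli
linode-cli configure
Recent versions of the package also provide a shorter linode alias, which is the form used in the commands throughout this guide; if yours does not, substitute linode-cli.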
Example environment used in this guide
The example used throughout this guide involves four AWS EC2 instances that all belong to a single VPC:
- Alice: A private EC2 instance with no internet access.
- Bob: Another private EC2 instance with no internet access.
- Charlie: A private EC2 instance that requires outgoing internet access via a NAT gateway, but is not accessible from the public internet.
- Bastion: A public EC2 instance with a public IP address, used for SSH access to Alice, Bob, and Charlie.
These instances are distributed across two subnets within a single AWS VPC:
- A private subnet (10.0.1.0/24) that hosts Alice, Bob, and Charlie.
- A public subnet (10.0.2.0/24) that hosts the bastion instance, with an attached NAT gateway to enable internet connectivity.
Visually, the AWS environment is the private subnet's three instances (Alice, Bob, and Charlie) sending selective outgoing traffic through the NAT gateway in the public subnet, with the bastion host reachable from the internet through the VPC's internet gateway.
This example layout is representative of many small-to-medium AWS environments where internal workloads are kept isolated from the public internet but require selective outgoing access and secure administrative access.
Document and back up your current configuration
Before making any changes, document the current AWS setup. Having a full record of your environment will help you replicate the configuration accurately and recover if needed.
VPC CIDR block
Start by recording the CIDR block used by your AWS VPC, along with the IP ranges and names of each subnet. In the AWS Console, navigate to the VPC service. Click on your VPC in the list of Your VPCs.
The IPv4 CIDR is listed there, and you can also find it on the VPC details page, under the CIDRs tab.
To obtain this VPC information from the command line, run the following command:
aws ec2 describe-vpcs \
--region REPLACE-AWS-REGION \
--query "Vpcs[*].[VpcId,CidrBlock]
[
[
"vpc-0f4b6caa9cd0062d3",
"10.0.0.0/16"
]
]
Subnets and CIDR blocks
On the Resource map tab on the VPC details page, you’ll see the list of subnets associated with your VPC.
Click on the “open link” icon to see details for each one. The public subnet is configured to auto-assign public IPv4 addresses. EC2 instances on this subnet will have private IP addresses in the 10.0.2.0/24 CIDR block.
The private subnet does not auto-assign public IPv4 addresses. EC2 instances on this subnet will have private IP addresses in the 10.0.1.0/24 CIDR block.
From the command line, you can obtain this information with the following command:
aws ec2 describe-subnets \
--region REPLACE-AWS-REGION \
--filter "Name=vpc-id,Values=REPLACE-VPC-ID" \
--query "Subnets[].[SubnetId,MapPublicIpOnLaunch,CidrBlock]"
[
[
"subnet-0e3fea9ada10be83e",
true,
"10.0.2.0/24"
],
[
"subnet-0e9396ff4d2a41385",
false,
"10.0.1.0/24"
]
]
IP addresses and subnets of EC2 instances
Next, find the private IP addresses assigned to each EC2 instance in your VPC. Note also which subnet each instance belongs to. This can also be discerned by viewing the CIDR block to which the IP address belongs (for example, 10.0.1.0/24 versus 10.0.2.0/24).
On the EC2 service page, navigate to Instances.
To obtain this information with the aws CLI, run the following command:
aws ec2 describe-instances \
--region REPLACE-AWS-REGION \
--filter "Name=vpc-id,Values=REPLACE-VPC-ID" \
--query "Reservations[].Instances[].[InstanceId,SubnetId,PrivateIpAddress,Tags[?Key=='Name']]"
[
[
"i-00d4deeef9349223c",
"subnet-0e9396ff4d2a41385",
"10.0.1.18",
[
{
"Key": "Name",
"Value": "Alice"
}
]
],
[
"i-0d406a226a301f056",
"subnet-0e9396ff4d2a41385",
"10.0.1.179",
[
{
"Key": "Name",
"Value": "Charlie"
}
]
],
[
"i-02a88dad9d18bbe4e",
"subnet-0e3fea9ada10be83e",
"10.0.2.78",
[
{
"Key": "Name",
"Value": "Bastion"
}
]
],
[
"i-03101f93798bc4190",
"subnet-0e9396ff4d2a41385",
"10.0.1.236",
[
{
"Key": "Name",
"Value": "Bob"
}
]
]
]
If your environment also uses a bastion instance, note its public IP address and verify that its security group rules allow inbound SSH from your IP. Under the Security tab for the EC2 instance details, view the inbound rules associated with the instance's security group.
To fetch associated security group rules from the command line, do the following:
aws ec2 describe-instances \
--region REPLACE-AWS-REGION \
--instance-ids REPLACE-EC2-INSTANCE-ID \
--query "Reservations[0].Instances[0].SecurityGroups"
[
{
"GroupName": "aws-vpc-to-migrate-to-linode-BastionSG-1H0AVaL61L8l",
"GroupId": "sg-0a4a889e8c6b94b74"
}
]
aws ec2 describe-security-group-rules \
--region REPLACE-AWS-REGION \
--filters Name=group-id,Values=sg-0a4a889e8c6b94b74
{
"SecurityGroupRules": [
{
"SecurityGroupRuleId": "sgr-0e5a9ef366545f63f",
"GroupId": "sg-0a4a889e8c6b94b74",
"IsEgress": true,
"IpProtocol": "-1",
"FromPort": -1,
"ToPort": -1,
"CidrIpv4": "0.0.0.0/0",
"Tags": []
},
{
"SecurityGroupRuleId": "sgr-0ac42e41d86867f5c",
"GroupId": "sg-0a4a889e8c6b94b74",
"IsEgress": false,
"IpProtocol": "tcp",
"FromPort": 22,
"ToPort": 22,
"CidrIpv4": "174.17.11.41/32",
"Tags": []
}
]
}
Route tables
Note your route table entries, especially any 0.0.0.0/0 routes pointing to the NAT gateway or internet gateway. On the Resource map tab for your VPC, you can see how each route table in your VPC may be associated with a subnet, a NAT gateway, or an internet gateway.
A closer examination of each route table shows how its 0.0.0.0/0 route maps to the network resource.
Clicking on the Subnet associations tab shows the individual subnet associated with the route table.
To see this information from the command line, run the following command:
aws ec2 describe-route-tables \
--region REPLACE-AWS-REGION \
--filter "Name=vpc-id,Values=REPLACE-VPC-ID" \
--query "RouteTables[][RouteTableId, Routes, Associations[][Routes[][DestinationCidrBlock, GatewayId, NatGatewayId],SubnetId]]"
[
[
"rtb-03081cf3572ddcd9b",
[
{
"DestinationCidrBlock": "10.0.0.0/16",
"GatewayId": "local",
"Origin": "CreateRouteTable",
"State": "active"
}
],
[
[
null,
null
]
]
],
[
"rtb-0d6cff44cf0ff4cf2",
[
{
"DestinationCidrBlock": "10.0.0.0/16",
"GatewayId": "local",
"Origin": "CreateRouteTable",
"State": "active"
},
{
"DestinationCidrBlock": "0.0.0.0/0",
"NatGatewayId": "nat-0413463bf1921ad76",
"Origin": "CreateRoute",
"State": "active"
}
],
[
[
null,
"subnet-0e9396ff4d2a41385"
]
]
],
[
"rtb-0d9886d4223f8764c",
[
{
"DestinationCidrBlock": "10.0.0.0/16",
"GatewayId": "local",
"Origin": "CreateRouteTable",
"State": "active"
},
{
"DestinationCidrBlock": "0.0.0.0/0",
"GatewayId": "igw-0859c2900292745ed",
"Origin": "CreateRoute",
"State": "active"
}
],
[
[
null,
"subnet-0e3fea9ada10be83e"
]
]
]
]
The goal is to have a complete snapshot of your VPC layout, connectivity, and access controls before starting the migration.
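If you want these records saved in files rather than terminal scrollback, a minimal sketch like the following gathers the outputs shown above into JSON documents (the REPLACE-* values are placeholders, as elsewhere in this guide):
#!/bin/bash
# Hypothetical backup helper: snapshot the AWS VPC configuration to JSON files.
REGION=REPLACE-AWS-REGION
VPC_ID=REPLACE-VPC-ID
aws ec2 describe-vpcs --region "$REGION" --filters "Name=vpc-id,Values=$VPC_ID" > vpc.json
aws ec2 describe-subnets --region "$REGION" --filters "Name=vpc-id,Values=$VPC_ID" > subnets.json
aws ec2 describe-instances --region "$REGION" --filters "Name=vpc-id,Values=$VPC_ID" > instances.json
aws ec2 describe-route-tables --region "$REGION" --filters "Name=vpc-id,Values=$VPC_ID" > route-tables.json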
Plan your VPC mapping strategy
With your AWS environment documented, the next step is to design the equivalent layout in Akamai Cloud. Your goal is to replicate routing behavior, instance roles, and access controls as closely as possible. For example, to replicate the AWS environment in Akamai Cloud, you would need:
- An Akamai VPC with a CIDR block that matches the AWS configuration, if possible:
  - 10.0.1.0/24 for private workloads
  - 10.0.2.0/24 for public resources
- 2 Linode instances (Alice and Bob) which will be isolated within the private subnet
- 1 Linode instance (Charlie) with access to the internet but within the private subnet
- 1 Linode instance (Bastion) for SSH access to all instances, within the public subnet, also acting as a NAT router
- Static private IPs assigned to all Linode instances, to match their AWS counterparts
Recreate the environment in Akamai Cloud (Linode)
With your strategy mapped out, you can begin provisioning resources in Akamai Cloud.
Create the VPC and subnets
Start by creating a new VPC in your preferred region. This can be done within the Akamai Cloud Manager console, or via the linode CLI. In the same command for creating the VPC, define two subnets: a private subnet for Alice, Bob, and Charlie, and a public subnet for the bastion host.
With the CLI, the command to create an equivalent VPC would be:
linode vpcs create \
--label "my-migrated-vpc" \
--description "VPC migrated from AWS" \
--region us-lax \
--subnets '[{"label":"private-subnet","ipv4":"10.0.1.0/24"},{"label":"public-subnet","ipv4":"10.0.2.0/24"}]' \
--pretty
[
{
"description": "VPC migrated from AWS",
"id": 197854,
"label": "my-migrated-vpc",
"region": "us-lax",
"subnets": [
{
"databases": [],
"id": 199163,
"ipv4": "10.0.1.0/24",
"label": "private-subnet",
"linodes": [],
"nodebalancers": []
},
{
"databases": [],
"id": 199164,
"ipv4": "10.0.2.0/24",
"label": "public-subnet",
"linodes": [],
"nodebalancers": []
}
]
}
]
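To double-check the result, you can list your VPCs and confirm that the new VPC and both subnets appear (this assumes the same CLI alias used above):
linode vpcs list --pretty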
Create the private Linode compute instances
Next, deploy Linode compute instances that correspond with the private EC2 instances from your AWS environment. For the example in this guide, this means deploying Alice, Bob, and Charlie in the private subnet.
The Linodes will be able to communicate with one another through a VLAN. A VLAN is a private Layer 2 network between Linodes in the same region. It allows internal traffic to flow securely, even between instances in different subnets, as long as they share the same VLAN.
Use the Linode CLI to create each of the Linodes that are in the private subnet. The important configurations to keep in mind are:
- It has a VPC network interface using the private subnet.
- It is not assigned a public IPv4 address.
- It has a VLAN network interface with an IP address management (IPAM) address.
For example, to create the Alice instance and attach it to the private subnet (with the same VPC IP address as in the original AWS environment) and make it part of a VLAN, run the following command:
linode linodes create \
--region us-lax \
--type g6-standard-2 \
--image linode/ubuntu20.04 \
--label alice \
--backups_enabled false \
--private_ip false \
--root_pass mylinodepassword \
--interfaces '[{"purpose":"vpc","subnet_id":199163,"ipv4":{"vpc":"10.0.1.18"}},{"purpose":"vlan","label":"my-vlan","ipam_address":"10.0.1.18/24"}]' \
--pretty
[
{
...
"disk_encryption": "enabled",
"group": "",
"has_user_data": false,
"host_uuid": "6b4ea6a82e3a8bfe2bee090b3caf31dee2f94850",
"hypervisor": "kvm",
"id": 78417007,
"image": "linode/ubuntu20.04",
"ipv4": [
"172.235.252.246"
],
"label": "alice",
"region": "us-lax",
"status": "provisioning",
"tags": [],
"type": "g6-nanode-1",
...
}
]
To see the network interfaces, with VPC IP and VLAN IPAM addresses, retrieve the Linode's network configuration, supplying the Linode's id:
linode linodes configs-list 78417007 --pretty
[
{
...
"interfaces": [
{
"active": true,
"id": 5629467,
"ip_ranges": [],
"ipam_address": null,
"ipv4": {
"nat_1_1": null,
"vpc": "10.0.1.18"
},
"label": null,
"primary": false,
"purpose": "vpc",
"subnet_id": 199163,
"vpc_id": 197854
},
{
"active": true,
"id": 5629468,
"ip_ranges": null,
"ipam_address": "10.0.1.18/24",
"label": "my-vlan",
"primary": false,
"purpose": "vlan",
"subnet_id": null,
"vpc_id": null
}
],
...
}
]
The command to create the Alice Linode provided a VPC interface with the VPC IP address 10.0.1.18 and a VLAN IPAM address of 10.0.1.18/24.
A public IP address is created for every Linode, but because the Linode creation command did not include a public interface, the public IP address is not actually attached to a network interface for the Linode.
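Bob and Charlie follow the same pattern as Alice, changing only the label and the addresses recorded from AWS. For example, a sketch of the command for Bob (reusing the private subnet ID and VLAN label from above) would be:
linode linodes create \
--region us-lax \
--type g6-standard-2 \
--image linode/ubuntu20.04 \
--label bob \
--backups_enabled false \
--private_ip false \
--root_pass mylinodepassword \
--interfaces '[{"purpose":"vpc","subnet_id":199163,"ipv4":{"vpc":"10.0.1.236"}},{"purpose":"vlan","label":"my-vlan","ipam_address":"10.0.1.236/24"}]' \
--pretty
For Charlie, substitute the label charlie and the address 10.0.1.179.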
After creating the Linodes for Bob and Charlie, you should have:
Linode | VPC IP
---|---
Alice | 10.0.1.18
Bob | 10.0.1.236
Charlie | 10.0.1.179
Create the public Linode compute instance
Deploy the bastion host in the public subnet. This will be the only instance you can SSH into from the public internet. From this machine, you will be able to SSH into the other private instances in the VLAN.
In the original AWS environment, the NAT gateway service was used to allow outgoing internet access for a machine in the private subnet (for example, Charlie). Because Linode does not offer a NAT gateway service, the bastion host can be configured to function as a NAT router.
The important configurations for the bastion instance are:
- It has a VPC network interface using the public subnet.
- It is assigned a public IPv4 address.
- It has a VLAN network interface with an IP address management (IPAM) address.
linode linodes create \
--region us-lax \
--type g6-standard-2 \
--image linode/ubuntu20.04 \
--label bastion \
--backups_enabled false \
--private_ip false \
--root_pass mylinodepassword \
--interfaces '[{"purpose":"vpc","subnet_id":199164,"ipv4":{"vpc":"10.0.2.78","nat_1_1":"any"}},{"purpose":"vlan","label":"my-vlan","ipam_address":"10.0.2.78/24"}]' \
--pretty
[
{
...
"disk_encryption": "enabled",
"id": 78472676,
"image": "linode/ubuntu20.04",
"ipv4": [
"172.236.243.216"
],
"label": "bastion",
"lke_cluster_id": null,
"region": "us-lax",
"status": "provisioning",
"tags": [],
"type": "g6-standard-2",
}
]
Including "nat_1_1":"any" in the options for the VPC interface enables the assigned public IPv4 address for the Linode. Examine the resulting network configuration for the bastion instance.
linode linodes configs-list 78472676 --pretty
[
{
...
"interfaces": [
{
"active": false,
"id": 5646878,
"ip_ranges": [],
"ipam_address": null,
"ipv4": {
"nat_1_1": "172.236.243.216",
"vpc": "10.0.2.78"
},
"label": null,
"primary": false,
"purpose": "vpc",
"subnet_id": 199164,
"vpc_id": 197854
},
{
"active": false,
"id": 5646879,
"ip_ranges": null,
"ipam_address": "10.0.2.78/24",
"label": "my-vlan",
"primary": false,
"purpose": "vlan",
"subnet_id": null,
"vpc_id": null
}
],
...
}
]
Notice for this instance that the nat_1_1 address for the VPC interface is set to the auto-generated public IP address (172.236.243.216).
Verify SSH access for Bastion
Using the public IP address for the bastion instance, connect with SSH to verify access.
ssh root@172.236.243.216
The remainder of this guide assumes commands performed while logged in as root. However, you should consider creating and using a limited sudo user to reduce your risk of accidentally performing damaging operations.
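For example, a minimal sketch for creating such a user on Ubuntu (admin is a placeholder username) looks like this:
adduser admin         # prompts interactively for a password and details
usermod -aG sudo admin  # adds the user to the sudo group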
Configure Linodes for SSH access within the VPC
You can connect to the bastion instance with SSH because it has a public IP address attached to its VPC network interface. As expected, you cannot connect to any of the private Linode instances from outside the VPC. This matches the original AWS VPC environment.
To ensure you can SSH into each of the private instances, configure VLAN and firewall settings on each Linode.
Verify VLAN IPAM address on Bastion
Using your already established SSH connection to the bastion instance, examine its VLAN network interface configuration with the following command:
ip addr show dev eth1
The command output should show a line with the IPAM address you specified when creating the Linode, similar to this:
inet 10.0.2.78/24 scope global eth1
If it does not, then you will need to edit the system's Netplan config, which is a YAML file found in /etc/netplan/. Edit the contents of the file to look like the following:
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
    eth1:
      addresses:
        - 10.0.2.78/24
Save the file. This assigns a static IP of 10.0.2.78/24 to eth1. This will also ensure that the setting persists even when the machine is rebooted. Set proper permissions on the file, then apply the new Netplan configuration.
chmod 600 /etc/netplan/01-netcfg.yaml
netplan apply
Check the network interface configuration again. You should see the line that begins with inet, which includes the IPAM address you specified.
ip addr show dev eth1
4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 90:de:01:3c:2e:58 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.78/24 brd 10.0.2.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::92de:1ff:fe3c:2e58/64 scope link
valid_lft forever preferred_lft forever
Configure private instances for intra-VLAN SSH
For each of the private Linode instances (Alice, Bob, Charlie), you will need to:
- Verify properly configured VLAN IPAM addresses
- Use ufw (firewall) to allow SSH connections from within the VLAN
Because these Linode instances do not have a public IP address, you cannot connect with SSH from either your local machine or the bastion instance. You will need to log in to your Akamai Cloud Manager, navigate to each Linode, and click Launch LISH Console.
Within the LISH Console, connect to the machine as root, using the password specified when creating the Linode. Verify that the eth1 VLAN interface shows the inet line with the expected IPAM address.
ip addr show dev eth1
If the output does not include the inet line, manually assign the address by editing /etc/netplan/01-netcfg.yaml.
File: /etc/netplan/01-netcfg.yaml on Alice to set eth1 VLAN address
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
    eth1:
      addresses:
        - 10.0.1.18/24
File: /etc/netplan/01-netcfg.yaml on Bob to set eth1 VLAN address
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
    eth1:
      addresses:
        - 10.0.1.236/24
Set proper permissions on the file, then apply the new Netplan configuration.
chmod 600 /etc/netplan/01-netcfg.yaml
netplan apply
Each machine also needs to have ufw rules configured to:
- Deny any incoming or outgoing connections by default
- Explicitly allow incoming and outgoing SSH connections within the VLAN
To do this, run the following commands:
ufw default deny incoming
ufw default deny outgoing
ufw allow from 10.0.0.0/16 to any port 22 proto tcp
ufw allow out to 10.0.0.0/16 port 22 proto tcp
Enable or restart ufw, then verify the rule setup with the following commands:
ufw enable
ufw reload
ufw status numbered
Status: active
To Action From
-- ------ ----
[ 1] 22/tcp ALLOW IN 10.0.0.0/16
[ 2] 10.0.0.0/16 22/tcp ALLOW OUT Anywhere (out)
Now, you can SSH into these private Linodes from the bastion instance, using the VLAN IPAM address and the root password. From within your existing SSH connection to the bastion instance, run the following command:
ssh root@10.0.1.18
root@10.0.1.18's password: ****************
Welcome to Ubuntu 24.04.2 LTS (GNU/Linux 6.14.3-x86_64-linode168 x86_64)
...
After configuring each of the Linode instances in the VPC, you will be able to SSH from any machine in the VPC to any other machine within the VPC.
Now that you have successfully configured this Linode instance for SSH access from the Bastion Linode, you can close the LISH Console.
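As a convenience, you can also reach a private instance from your local machine in a single step by jumping through the bastion with SSH's ProxyJump option; the second hop originates from the bastion, so it is allowed by the ufw rules above:
ssh -J root@172.236.243.216 root@10.0.1.18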
Configure private instance for outgoing internet access
At this point, the configuration for Alice and Bob is complete. However, the Charlie instance needs outgoing internet access. To do this, it will use the bastion instance, which will be configured to function like a NAT router.
Verify outgoing internet access from bastion instance
With the SSH connection to the bastion instance established, verify that it has outgoing internet access. ifconfig.me is an online service that returns the IP address of the calling machine.
curl -i ifconfig.me
HTTP/1.1 200 OK
Content-Length: 15
access-control-allow-origin: *
content-type: text/plain
...
172.236.243.216
Enable IP forwarding on bastion instance
IP forwarding enables a machine to forward packets between network interfaces. To turn the bastion instance into a basic router, it must be configured to forward packets received on one interface to another, for example, between the VLAN on eth1 and the public VPC subnet on eth0.
Tell the Linux kernel to pass IPv4 packets between interfaces by modifying /etc/sysctl.conf on the bastion instance. Find and uncomment the following line:
net.ipv4.ip_forward=1
Save the file and apply the new settings with the following command:
sysctl -p /etc/sysctl.conf
net.ipv4.ip_forward = 1
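To double-check that forwarding is now active, read the kernel parameter directly; it should print 1:
cat /proc/sys/net/ipv4/ip_forward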
Configure ufw on bastion instance to allow packet forwarding
By default, ufw drops forwarded packets. Change this behavior on the bastion instance by modifying /etc/default/ufw. Change the value of the DEFAULT_FORWARD_POLICY line from DROP to ACCEPT.
DEFAULT_FORWARD_POLICY="ACCEPT"
Add ufw rules on the bastion instance to allow inbound traffic from eth1 (the VLAN) and outgoing traffic on eth0 (the public interface).
ufw allow in on eth1
ufw allow out on eth0
ufw reload
ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), allow (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp ALLOW IN 10.0.0.0/16
Anywhere on eth1 ALLOW IN Anywhere
10.0.0.0/16 22/tcp ALLOW OUT Anywhere
Anywhere ALLOW OUT Anywhere on eth0
Configure Bastion to use NAT masquerading
NAT masquerading rewrites the source IP addresses of outgoing packets forwarded from other machines in the VPC so that they use the bastion instance's public IP address. This is what allows the Charlie instance's traffic to be routed to the internet, even though it doesn't have a public IP address. The response packets come back to the bastion instance, which maps them back to Charlie.
On the bastion instance, edit /etc/ufw/before.rules to add NAT masquerading. Near the top of the file, above the *filter line, add the following lines:
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Masquerade traffic from private VLAN subnet to the public internet
-A POSTROUTING -s 10.0.0.0/16 -o eth0 -j MASQUERADE
COMMIT
After saving these changes, restart ufw and inspect the NAT table.
ufw reload
iptables -t nat -L -n -v
...
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE 0 -- * eth0 10.0.0.0/16 0.0.0.0/0
This confirms that the private subnet traffic is being masqueraded out the external interface (eth0) on the bastion instance.
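Once Charlie's routing is configured in the next section, one way to observe masquerading in action is to watch the bastion's public interface with tcpdump (installing it with apt if needed) while Charlie pings an external host; the ICMP packets should appear with the bastion's source address:
tcpdump -i eth0 -n icmp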
Configure private instance to route outgoing traffic through Bastion
On the private Linode instance (Charlie), you will need to:
- Set the default route to use the bastion instance's VLAN IPAM address.
- Configure ufw to allow outgoing traffic.
From the bastion instance, SSH into Charlie. Confirm that Charlie does not currently have outgoing internet access:
curl ifconfig.me
The above command will hang, with no response. This is expected.
Set the default route for this private instance to use the VLAN IPAM address of the bastion instance. This will send all non-local traffic to the bastion instance's VLAN interface. Do this by editing the Netplan config file in /etc/netplan/. Edit the contents of the file to look like the following:
File: /etc/netplan/01-netcfg.yaml on Charlie to set default IP behavior
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 10.0.1.179/24
    eth1:
      addresses:
        - 10.0.2.100/24
      nameservers:
        addresses:
          - 8.8.8.8
          - 8.8.4.4
      routes:
        - to: 0.0.0.0/0
          via: 10.0.2.78
Save the file.
This configuration assigns Charlie two static IPs:
- 10.0.1.179 on eth0 to communicate with other machines in the internal subnet
- 10.0.2.100 on eth1 to route outgoing traffic through the NAT router on the bastion instance at 10.0.2.78
By placing Charlie's default route on eth1, all internet-bound traffic is directed through the bastion instance, which handles NAT and forwards the traffic externally. This setup keeps internal communication and internet routing on separate interfaces, helping isolate local traffic from upstream NAT operations.
Set proper permissions on the file, then apply the new Netplan configuration.
chmod 600 /etc/netplan/01-netcfg.yaml
netplan apply
Change the ufw rules to allow outgoing traffic, which will now route through the bastion instance.
ufw default allow outgoing
ufw reload
With these configurations in place, verify that Charlie now has outgoing access to the internet:
curl -i ifconfig.me
HTTP/1.1 200 OK
Content-Length: 15
access-control-allow-origin: *
content-type: text/plain
...
172.236.243.216
ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=110 time=0.639 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=110 time=0.724 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=110 time=0.668 ms
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2038ms
rtt min/avg/max/mdev = 0.639/0.677/0.724/0.035 ms
Monitor post-migration behavior
After initial testing, continue to monitor the new environment to ensure it operates as expected.
On the NAT router, check for dropped or rejected traffic using tools like dmesg, journalctl, or iptables. For example:
- dmesg | grep -i drop shows kernel log messages that contain the word “drop”, which can surface dropped packets.
- journalctl -u ufw shows ufw logs.
- journalctl -k shows kernel messages.
- iptables -t nat -L POSTROUTING -v -n helps you confirm that your NAT rules (such as MASQUERADE) are being hit by showing how many packets/bytes have matched each rule.
iptables -t nat -L POSTROUTING -v -n
Chain POSTROUTING (policy ACCEPT 25 packets, 1846 bytes)
pkts bytes target prot opt in out source destination
653 149K MASQUERADE 0 -- * eth0 10.0.0.0/16 0.0.0.0/0
Monitor resource usage on the NAT router to ensure it is not becoming a bottleneck. Tools like top, htop, and iftop can help you keep an eye on CPU, memory, and bandwidth usage.
Within the Akamai Cloud Manager, you can set up monitoring and alerts for Linode Compute Instances.
Alternatively, install monitoring agents or set up log forwarding to external observability platforms for more detailed insight into traffic flow, resource utilization, and system health.
Periodic SSH audits and basic connectivity checks between instances can also help validate that the VPC remains stable over time. For example, to check SSH activity, run the following command:
grep 'sshd' /var/log/auth.log
...
2025-06-16T23:28:59.223088-07:00 my-linode sshd[9355]: Accepted password for root from 10.0.2.78 port 53520 ssh2
2025-06-16T23:28:59.227749-07:00 my-linode sshd[9355]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
2025-06-16T23:43:44.812075-07:00 my-linode sshd[9526]: Accepted password for root from 10.0.2.78 port 32886 ssh2
2025-06-16T23:43:44.816294-07:00 my-linode sshd[9526]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
2025-06-16T23:44:30.593329-07:00 my-linode sshd[9355]: Received disconnect from 10.0.2.78 port 53520:11: disconnected by user
2025-06-16T23:44:30.597043-07:00 my-linode sshd[9355]: Disconnected from user root 10.0.2.78 port 53520
2025-06-16T23:44:30.597234-07:00 my-linode sshd[9355]: pam_unix(sshd:session): session closed for user root
Finalize your migration
Once you’ve verified that the Linode environment is functioning correctly, complete the migration by updating services and decommissioning the original AWS infrastructure.
Update any scripts, applications, or service configurations that reference AWS-specific hostnames or IPs. If you use DNS, point records to any new Linode instances with public IPs. This helps minimize downtime and makes the transition seamless to users.
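For example, after updating a DNS record you might confirm that it now resolves to the bastion's new public IP (example.com is a placeholder domain):
dig +short example.com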
Check your monitoring and alerting setup. Make sure Linode Compute Instances are covered by any health checks or observability tools your team depends on. If you used CloudWatch or other AWS-native tools, replace them with Linode monitoring or third-party alternatives. See “Migrating From AWS CloudWatch to Prometheus and Grafana on Akamai.”
Decommission AWS resources that are no longer needed. This includes EC2 instances, NAT gateways, Elastic IPs, route tables, subnets, and eventually the VPC itself. Make sure to clean up all resources to avoid unnecessary charges.
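A sketch of the teardown with the aws CLI might look like the following; the REPLACE-* values are placeholders, order matters because the VPC cannot be deleted while dependent resources remain, and additional commands may be needed for subnets, route tables, and gateways:
aws ec2 terminate-instances --region REPLACE-AWS-REGION --instance-ids REPLACE-EC2-INSTANCE-ID
aws ec2 delete-nat-gateway --region REPLACE-AWS-REGION --nat-gateway-id REPLACE-NAT-GATEWAY-ID
aws ec2 release-address --region REPLACE-AWS-REGION --allocation-id REPLACE-ALLOCATION-ID
aws ec2 delete-vpc --region REPLACE-AWS-REGION --vpc-id REPLACE-VPC-ID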
Finally, update internal documentation, runbooks, and network diagrams to reflect the new environment. A clear and current record will support future audits, troubleshooting, and onboarding.
Additional Resources
The resources below are provided to help you become familiar with Akamai VPC when migrating from AWS VPC.
- AWS
  - What is Amazon VPC?
  - VPC Service
  - NAT Gateways
- Akamai
  - Akamai Cloud Manager
  - VPC Documentation
  - VLAN Documentation
  - API reference for VPC management