Migrating Your Azure Virtual Network (VNet) to Akamai Cloud VPC
A Virtual Private Cloud (VPC) is a private network environment that lets you define IP address ranges, create subnets, and control how resources communicate. VPCs isolate workloads while providing controlled access to internal and external traffic.
This guide shows how to migrate a basic Azure VNet environment to Akamai Cloud. The Azure setup includes three private Azure VM instances, a NAT gateway for selective outgoing traffic, and a public VM instance acting as a bastion for SSH access. It walks through how to recreate this setup in Akamai Cloud using Linode compute instances, a VLAN, and a manually configured NAT router.
Feature Comparison
Before migrating, it’s useful to understand the differences between Azure VNet and Akamai Cloud VPCs.
Akamai Cloud VPCs
- Deliver private, regional networks where you control IP ranges and subnets.
- Isolate all traffic by default, with internet access only where you explicitly configure it.
- Provide a straightforward model: no hidden routing, fewer managed abstractions, and predictable behavior.
- Ideal for teams that want fine-grained control and freedom to use open source or self-managed components.
Azure VNets
- Allow administrators to define custom IP ranges, segment workloads into subnets, and configure routing and firewall rules.
- Provide network security groups (NSGs) and managed NAT Gateway services for outbound traffic.
- Integrate tightly with Azure’s managed services (e.g., databases, Kubernetes, and App Service Environments).
How to Adapt
Azure’s managed NAT Gateway can be replicated on Akamai Cloud with a Linode Compute Instance configured as a NAT router. Services integrated into Azure VNets (e.g., databases, load balancers) can be replaced with self-managed equivalents or open source tools on Akamai Cloud.
Before You Begin
Complete the following prerequisites prior to following the steps in this guide:
Follow our Get Started guide to create an Akamai Cloud account if you do not already have one.
Create a personal access token with permissions to manage Linode instances and VPCs using the instructions in our Manage personal access tokens guide.
Install the Linode CLI using the instructions in the Install and configure the CLI guide. See our API reference for comprehensive documentation of Linode CLI functionality.
You need an Azure account with a user or role that has permission to manage VMs, subnets, network security groups, and NAT gateways.
Ensure that the Azure CLI (az) is installed and configured.
Example Environment
The example used throughout this guide involves four Azure VMs that all belong to a single VNet:
- Alice (10.0.1.18): Private VM instance with no internet access.
- Bob (10.0.1.236): Private VM instance with no internet access.
- Charlie (10.0.1.179): Private VM instance that requires outgoing internet access via a NAT, but no direct inbound access.
- Bastion (10.0.2.78): Public VM instance with a public IP address, used to SSH into Alice, Bob, and Charlie.
These instances are distributed across two subnets within a single Azure VNet:
- Private subnet (10.0.1.0/24): Alice, Bob, and Charlie.
- Public subnet (10.0.2.0/24): Bastion, with a NAT gateway to provide outbound internet for the private subnet.
The diagram below offers a visual representation of the example Azure VNet setup:


This reflects common small-to-medium Azure environments where workloads remain private but need selective egress and secure administrative access.
Document Your Current Configuration
Before migrating, capture the details of your Azure setup. This record ensures you can replicate it in Akamai Cloud.
VNet and Subnet CIDR Blocks
Record the CIDR block used by your Azure VNet, along with the IP ranges and names of each subnet.
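If the VNet has many subnets, the Azure CLI can capture this in one command. A minimal sketch, assuming the az CLI from the prerequisites; the resource group and VNet names are placeholders for your own values:

```shell
# Print the VNet's address space plus each subnet's name and prefix.
# "my-resource-group" and "my-vnet" are placeholders.
az network vnet show \
  --resource-group my-resource-group \
  --name my-vnet \
  --query "{addressSpace: addressSpace.addressPrefixes, subnets: subnets[].{name: name, prefix: addressPrefix}}" \
  --output json
```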
IP Addresses and Subnets of VM Instances
Record the private IPs of each VM instance in your VNet and the subnet it belongs to (e.g., 10.0.1.0/24 versus 10.0.2.0/24).
In Azure, each VM is attached to a network interface card (NIC). The NIC resource belongs to a subnet and has a private IP address assigned.
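The Azure CLI can list these NIC assignments in one pass. A sketch with a placeholder resource group name:

```shell
# Show each VM's private (and any public) IP addresses in a resource group.
az vm list-ip-addresses \
  --resource-group my-resource-group \
  --output table
```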
NAT Gateway and Network Security Group
The example Azure VNet has a NAT gateway and a network security group with firewall rules to enable outbound internet access for Charlie while also denying access for Alice and Bob.
In the example environment, a network security group rule denies outbound internet access for the subnet. However, a higher priority rule (lower number) allows outbound traffic specifically to the Charlie VM at 10.0.1.179.
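To record these rules verbatim, including their priorities, you can dump them with the Azure CLI. A sketch with placeholder resource group and NSG names:

```shell
# List NSG rules, including priority, direction, and action.
az network nsg rule list \
  --resource-group my-resource-group \
  --nsg-name my-nsg \
  --output table
```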
The goal is to have a complete snapshot of your Azure VNet layout, connectivity, and access controls before starting the migration.
Recreate the Environment in Akamai Cloud
With your Azure VNet environment documented, the next step is to design the equivalent layout in Akamai Cloud. The goal is to replicate routing behavior, instance roles, and access controls as closely as possible. To replicate the example Azure VNet in Akamai Cloud, you would need:
- An Akamai VPC with a CIDR block that matches the Azure VNet configuration, for example:
  - 10.0.1.0/24 for private workloads
  - 10.0.2.0/24 for public resources
- 2 Linode instances (Alice and Bob) isolated within the private subnet
- 1 Linode instance (Charlie) with access to the internet, but within the private subnet
- 1 Linode instance (Bastion) for SSH access to all instances, within the public subnet, which also acts as a NAT router
- Static private IPs assigned to all Linode instances, to match their Azure counterparts
Additionally, a VLAN provides a private Layer-2 link between Linodes in the same VPC, enabling secure internal communication across subnets without exposing traffic to the public internet.
The diagram below offers a visual representation of the equivalent Akamai Cloud setup:


With your strategy mapped out, you can begin provisioning resources in Akamai Cloud.
Create the VPC and Subnets
Create a new VPC in your preferred region. Within the VPC, define a private subnet for Alice, Bob, and Charlie, and a public subnet for Bastion. This can be done within the Akamai Cloud Manager, or via the Linode CLI.
Run the following linode-cli command to create an equivalent VPC, replacing AKAMAI_REGION (e.g., us-mia) with the Akamai Cloud region closest to you or your users:
linode-cli vpcs create \
--label "my-migrated-vpc" \
--description "VPC migrated from Azure" \
--region AKAMAI_REGION \
--pretty \
--subnets '[{"label":"private-subnet","ipv4":"10.0.1.0/24"},{"label":"public-subnet","ipv4":"10.0.2.0/24"}]'

Take note of the id fields associated with each subnet (e.g., 254564 for private-subnet and 254565 for public-subnet) for use in subsequent commands:
[
{
"created": "2025-09-05T16:25:24",
"description": "VPC migrated from Azure",
"id": 249729,
"label": "my-migrated-vpc",
"region": "us-mia",
"subnets": [
{
"created": "2025-09-05T16:25:24",
"databases": [],
"id": 254564,
"ipv4": "10.0.1.0/24",
"label": "private-subnet",
"linodes": [],
"nodebalancers": [],
"updated": "2025-09-05T16:25:24"
},
{
"created": "2025-09-05T16:25:24",
"databases": [],
"id": 254565,
"ipv4": "10.0.2.0/24",
"label": "public-subnet",
"linodes": [],
"nodebalancers": [],
"updated": "2025-09-05T16:25:24"
}
],
"updated": "2025-09-05T16:25:24"
}
]

Create the Private Linodes
Deploy Linode compute instances that correspond with the private VMs from your Azure environment (e.g., Alice, Bob, and Charlie) to the private-subnet. The Linodes can communicate with each other through a VLAN, which is a private network link between Linodes in the same VPC. It allows internal traffic to flow securely, even between instances in different subnets, as long as they share the same VLAN.
The command below creates the Alice instance, attaches it to the private subnet, assigns it the same VPC IP address used in the original Azure environment (e.g., 10.0.1.18), and adds it to the VLAN at 10.0.99.18/24. Substitute AKAMAI_REGION (e.g., us-mia), ALICE_ROOT_PASSWORD (e.g., myalicerootpassword), and AKAMAI_PRIVATE_SUBNET_ID (e.g., 254564) with your own values:

linode-cli linodes create \
--region AKAMAI_REGION \
--type g6-standard-2 \
--image linode/ubuntu24.04 \
--label alice \
--backups_enabled false \
--private_ip false \
--root_pass ALICE_ROOT_PASSWORD \
--interfaces '[{"purpose":"vpc","subnet_id":AKAMAI_PRIVATE_SUBNET_ID,"ipv4":{"vpc":"10.0.1.18"}},{"purpose":"vlan","label":"my-vlan","ipam_address":"10.0.99.18/24"}]' \
--pretty

[
  {
    ...
    "disk_encryption": "enabled",
    "group": "",
    "has_user_data": false,
    "host_uuid": "4a53d44b88e1a32a4194cde65aa65f04721a8a7d",
    "hypervisor": "kvm",
    "id": 84164398,
    "image": "linode/ubuntu24.04",
    "ipv4": [
      "172.238.217.174"
    ],
    "ipv6": "2a01:7e04::2000:43ff:feae:fdd2/128",
    "label": "alice",
    "lke_cluster_id": null,
    "region": "us-mia",
    "specs": {
      "disk": 81920,
      "gpus": 0,
      "memory": 4096,
      "transfer": 4000,
      "vcpus": 2
    },
    "status": "provisioning",
    "tags": [],
    "type": "g6-standard-2",
    ...
  }
]

Note: A public IP address is created for every Linode. However, because the Linode creation command did not include a public interface, the public IP address is not attached to a network interface.

Use a variation of the create command to deploy the Bob Linode, attach it to the private subnet, assign it the original Azure VNet IP (e.g., 10.0.1.236), and add it to the VLAN at 10.0.99.236/24:

linode-cli linodes create \
--region AKAMAI_REGION \
--type g6-standard-2 \
--image linode/ubuntu24.04 \
--label bob \
--backups_enabled false \
--private_ip false \
--root_pass BOB_ROOT_PASSWORD \
--interfaces '[{"purpose":"vpc","subnet_id":AKAMAI_PRIVATE_SUBNET_ID,"ipv4":{"vpc":"10.0.1.236"}},{"purpose":"vlan","label":"my-vlan","ipam_address":"10.0.99.236/24"}]' \
--pretty

Repeat the create command once more to deploy the Charlie Linode, attach it to the private subnet, assign it the original Azure VNet IP (e.g., 10.0.1.179), and add it to the VLAN at 10.0.99.179/24:

linode-cli linodes create \
--region AKAMAI_REGION \
--type g6-standard-2 \
--image linode/ubuntu24.04 \
--label charlie \
--backups_enabled false \
--private_ip false \
--root_pass CHARLIE_ROOT_PASSWORD \
--interfaces '[{"purpose":"vpc","subnet_id":AKAMAI_PRIVATE_SUBNET_ID,"ipv4":{"vpc":"10.0.1.179"}},{"purpose":"vlan","label":"my-vlan","ipam_address":"10.0.99.179/24"}]' \
--pretty
Afterwards, you should have three instances with the following corresponding VPC IP addresses and VLAN IPAM addresses:
| Instance | VPC IP Address | VLAN IPAM Address |
|---|---|---|
| Alice | 10.0.1.18 | 10.0.99.18/24 |
| Bob | 10.0.1.236 | 10.0.99.236/24 |
| Charlie | 10.0.1.179 | 10.0.99.179/24 |
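As a quick sanity check, you can list the new instances with the Linode CLI. The format fields below are standard list columns; the VPC and VLAN addresses live in each instance's interface configuration rather than this summary view:

```shell
# Confirm the three private instances exist and are running.
linode-cli linodes list --text --format "id,label,region,status"
```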
Create the Public Linode
In the original Azure VNet, the NAT gateway service is used to allow outgoing internet access for a machine in the private subnet (e.g., Charlie). Because Linode does not offer a NAT gateway service, Bastion is configured to function as a NAT router.
Use the following command to create the Bastion instance on the public subnet. This command assigns it the same VPC IP as in the original Azure VNet (e.g., 10.0.2.78) and adds it to the VLAN at 10.0.99.78/24. Replace AKAMAI_REGION (e.g., us-mia), BASTION_ROOT_PASSWORD (e.g., mybastionrootpassword), and AKAMAI_PUBLIC_SUBNET_ID (e.g., 254565) with your own values:
linode-cli linodes create \
--region AKAMAI_REGION \
--type g6-standard-2 \
--image linode/ubuntu24.04 \
--label bastion \
--backups_enabled false \
--private_ip false \
--root_pass BASTION_ROOT_PASSWORD \
--interfaces '[{"purpose":"vpc","subnet_id":AKAMAI_PUBLIC_SUBNET_ID,"ipv4":{"vpc":"10.0.2.78","nat_1_1":"any"}},{"purpose":"vlan","label":"my-vlan","ipam_address":"10.0.99.78/24"}]' \
--pretty

Including nat_1_1:any in the options for the VPC interface enables the assigned public IPv4 address for the Linode (e.g., 172.233.162.30):
[
{
...
"id": 83120114,
"image": "linode/ubuntu24.04",
"ipv4": [
"172.233.162.30"
],
...
}
]

Afterwards, you should have one instance with the following addresses:
| Instance | VPC IP Address | VLAN IPAM Address | Public IPv4 Address |
|---|---|---|---|
| Bastion | 10.0.2.78 | 10.0.99.78/24 | 172.233.162.30 |
Connect to the Public Linode
You can connect to the Bastion instance with SSH because it has a public IP address. This is the only instance you can SSH into from the public internet. From this machine, you can SSH into the other private instances in the VLAN. This matches the original Azure VNet environment.
Supply the BASTION_PUBLIC_IP_ADDRESS (e.g., 172.236.243.216) to connect with SSH and verify access to the Linode:
ssh root@BASTION_PUBLIC_IP_ADDRESS

Note: This guide connects to each Linode as root. However, you should consider creating and using a limited sudo user on each Linode to reduce your risk of accidentally performing damaging operations.

Enable IP Forwarding
IP forwarding enables a machine to forward packets between network interfaces, for example, between the VLAN on eth1 and the public VPC subnet on eth0.
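You can inspect the current state before making changes. On most Linux systems these two reads are equivalent (0 means forwarding is disabled, 1 means it is enabled):

```shell
# Query the kernel's forwarding flag via sysctl.
sysctl net.ipv4.ip_forward
# Equivalent read straight from procfs.
cat /proc/sys/net/ipv4/ip_forward
```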
From your existing SSH connection, use nano to modify /etc/sysctl.conf on the Bastion instance:

Bastion via SSH:
nano /etc/sysctl.conf

Add or uncomment the following line to enable IP forwarding:

File: /etc/sysctl.conf

net.ipv4.ip_forward=1

When done, press CTRL+X, followed by Y then Enter to save the file and exit nano.

Reload sysctl to apply the change:

Bastion via SSH:
sysctl -p /etc/sysctl.conf

net.ipv4.ip_forward = 1
Allow Packet Forwarding
By default, ufw drops forwarded packets, so you need to change that behavior on the Bastion instance.
Modify the /etc/default/ufw file:

Bastion via SSH:
nano /etc/default/ufw

Locate the DEFAULT_FORWARD_POLICY line and change the value from DROP to ACCEPT:

File: /etc/default/ufw

DEFAULT_FORWARD_POLICY="ACCEPT"

When done, press CTRL+X, followed by Y then Enter to save the file and exit nano.

Add ufw rules to allow inbound traffic from eth1 (the VLAN), outgoing traffic on eth0 (the public interface), and SSH everywhere:

Bastion via SSH:
ufw allow in on eth1
ufw allow out on eth0
ufw allow 22/tcp
ufw allow out to 10.0.0.0/16 port 22 proto tcp

Rules updated
Rules updated (v6)
Rules updated
Rules updated (v6)
Rules updated
Rules updated (v6)
Rules updated

Enable ufw:

Bastion via SSH:
ufw enable

Command may disrupt existing ssh connections. Proceed with operation (y|n)? y

When prompted, press Y then Enter to proceed:

Firewall is active and enabled on system startup

Restart ufw and verify the rule setup:

Bastion via SSH:
ufw reload
ufw status verbose

Firewall reloaded
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), allow (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
Anywhere on eth1           ALLOW IN    Anywhere
22/tcp                     ALLOW IN    Anywhere
Anywhere (v6) on eth1      ALLOW IN    Anywhere (v6)
22/tcp (v6)                ALLOW IN    Anywhere (v6)

Anywhere                   ALLOW OUT   Anywhere on eth0
10.0.0.0/16 22/tcp         ALLOW OUT   Anywhere
Anywhere (v6)              ALLOW OUT   Anywhere (v6) on eth0
Configure NAT Masquerading
NAT masquerading rewrites the source IP of packets from private instances with Bastion’s public IP, allowing them to reach the internet. Bastion then maps the return traffic back to the originating instance (e.g., Charlie).
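For reference, the rule this guide adds to /etc/ufw/before.rules is equivalent to the following one-off iptables command. The ufw-managed file is used in the steps below so the rule survives reloads; this form is shown only to illustrate the mechanism:

```shell
# Rewrite the source address of packets arriving from 10.0.0.0/16
# and leaving via eth0 to Bastion's public IP (masquerading).
iptables -t nat -A POSTROUTING -s 10.0.0.0/16 -o eth0 -j MASQUERADE
```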
On the Bastion instance, edit /etc/ufw/before.rules to add NAT masquerading:

Bastion via SSH:
nano /etc/ufw/before.rules

Near the top of the file, add the following lines above the *filter line:

File: /etc/ufw/before.rules

# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Masquerade traffic from private VLAN subnet to the public internet
-A POSTROUTING -s 10.0.0.0/16 -o eth0 -j MASQUERADE
COMMIT

When done, press CTRL+X, followed by Y then Enter to save the file and exit nano.

Restart ufw, then verify NAT masquerading behavior:

Bastion via SSH:
ufw reload
iptables -t nat -L -n -v

...
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MASQUERADE  0    --  *      eth0    10.0.0.0/16          0.0.0.0/0

This confirms that private subnet traffic exits via Bastion's external interface (eth0).
Secure Firewall on Private Linodes
Set the firewall (ufw) on each of the private instances (Alice, Bob, Charlie) to only allow SSH connections from within the VLAN.
Use your existing SSH connection to Bastion to connect to Alice (e.g., 10.0.99.18), Bob (e.g., 10.0.99.236), and Charlie (e.g., 10.0.99.179) via their respective VLAN IP addresses:

Bastion via SSH:
ssh root@VLAN_IP_ADDRESS

Note: If your VLAN configuration initially prevents SSH access, you can use Lish (Linode Shell) instead. Log in to your Akamai Cloud Manager, navigate to each Linode, and click Launch LISH Console:


Within the LISH Console, connect to the machine as root, using the password specified when creating the Linode.

Configure ufw rules to deny all incoming and outgoing connections by default, but explicitly allow incoming and outgoing SSH connections within the VLAN:

Alice/Bob/Charlie via SSH from Bastion:
ufw default deny incoming
ufw default deny outgoing
ufw allow from 10.0.0.0/16 to any port 22 proto tcp
ufw allow out to 10.0.0.0/16 port 22 proto tcp

Default incoming policy changed to 'deny' (be sure to update your rules accordingly)
Default outgoing policy changed to 'deny' (be sure to update your rules accordingly)
Rules updated
Rules updated

Enable ufw:

Alice/Bob/Charlie via SSH from Bastion:
ufw enable

Command may disrupt existing ssh connections. Proceed with operation (y|n)? y

When prompted, press Y then Enter to proceed:

Firewall is active and enabled on system startup

Restart ufw and verify the rule setup:

Alice/Bob/Charlie via SSH from Bastion:
ufw reload
ufw status numbered

Firewall reloaded
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 22/tcp                     ALLOW IN    10.0.0.0/16
[ 2] 10.0.0.0/16 22/tcp         ALLOW OUT   Anywhere (out)

Log out and return to the Bastion instance:

Alice/Bob/Charlie via SSH from Bastion:
exit

Repeat the steps above for the Bob and Charlie instances.
Configure Charlie for Internet Access
At this point, Alice and Bob are fully configured. However, Charlie requires outgoing internet access. To enable this, Charlie routes traffic through Bastion, which is now configured to function as a NAT router.
Disable Network Helper
By default, Linode’s Network Helper rewrites systemd-networkd configs at boot and forces a public default route (10.0.1.1). For Charlie to use Bastion as its gateway for public internet access, you must first disable Network Helper.
In the Akamai Cloud Manager, navigate to Linodes and click on the entry for charlie.
Click the three horizontal dots (…) in the upper-right corner and select Power Off, then choose Power Off Linode.
Once the instance reports as Offline, open the Configurations tab.
Click the three horizontal dots (…) to the right of the listed Network Interfaces and select Edit.
Scroll to the bottom of the window and switch the toggle next to Auto-configure networking to the off position, then click Save Changes to disable Network Helper.
Click the three horizontal dots (…) in the upper-right corner and select Power On, then choose Power On Linode.
Route Outgoing Traffic Through Bastion
Set Charlie’s default route to use Bastion’s VLAN IPAM address and configure ufw to allow outgoing traffic.
SSH into the Charlie instance from your existing SSH connection to the Bastion instance:
Bastion via SSH:
ssh root@CHARLIE_VLAN_IP

Use nano to edit the 05-eth0.network configuration file in /etc/systemd/network/ on the Charlie instance:

Charlie via SSH from Bastion:
nano /etc/systemd/network/05-eth0.network

Comment out the Gateway line:

File: /etc/systemd/network/05-eth0.network

[Match]
Name=eth0

[Network]
DHCP=no
DNS=172.233.160.27 172.233.160.30 172.233.160.34
Domains=members.linode.com
IPv6PrivacyExtensions=false
#Gateway=10.0.1.1
Address=10.0.1.179/24

When done, press CTRL+X, followed by Y then Enter to save the file and exit nano.

Now edit the 05-eth1.network configuration file:

Charlie via SSH from Bastion:
nano /etc/systemd/network/05-eth1.network

Add the following lines for Gateway and DNS:

File: /etc/systemd/network/05-eth1.network

[Match]
Name=eth1

[Network]
DHCP=no
Domains=members.linode.com
IPv6PrivacyExtensions=false
Address=10.0.99.179/24
Gateway=10.0.99.78
DNS=1.1.1.1
DNS=8.8.8.8

By setting Charlie's default route on eth1, all internet-bound traffic goes through Bastion. This separates internal communication from external routing, isolating local traffic from NAT operations.

When done, press CTRL+X, followed by Y then Enter to save the file and exit nano.

Restart networkd to apply the new configuration:

Charlie via SSH from Bastion:
systemctl restart systemd-networkd

Change the ufw rules to allow outgoing traffic, which is now routed through the Bastion instance:

Charlie via SSH from Bastion:
ufw default allow outgoing
ufw reload

Default outgoing policy changed to 'allow' (be sure to update your rules accordingly)
Firewall reloaded

Use curl to query ifconfig.me, an online service that returns the public IP address of the calling machine, to verify that Charlie now has outgoing internet access:

Charlie via SSH from Bastion:
curl -i ifconfig.me

HTTP/1.1 200 OK
Content-Length: 15
access-control-allow-origin: *
content-type: text/plain
...
172.236.243.216

Use ping to test for outgoing internet access from Charlie:

Charlie via SSH from Bastion:
ping -c 3 8.8.8.8

PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=110 time=0.639 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=110 time=0.724 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=110 time=0.668 ms

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2038ms
rtt min/avg/max/mdev = 0.639/0.677/0.724/0.035 ms
Monitor Post-Migration Behavior
After initial testing, continue to monitor the new environment to ensure it operates as expected.
On the NAT router (Bastion), check for dropped or rejected traffic using tools like dmesg, journalctl, or iptables. For example:
- dmesg | grep -i drop shows kernel log messages that contain the word "drop", which can surface dropped packets.
- journalctl -u ufw shows ufw logs.
- journalctl -k shows kernel messages.
- iptables -t nat -L POSTROUTING -v -n helps confirm that NAT rules such as MASQUERADE are being used. For example:

Bastion via SSH:
iptables -t nat -L POSTROUTING -v -n

This shows how many packets and bytes have matched each rule:

Chain POSTROUTING (policy ACCEPT 25 packets, 1846 bytes)
 pkts bytes target     prot opt in     out     source               destination
  653  149K MASQUERADE  0    --  *      eth0    10.0.0.0/16          0.0.0.0/0
Monitor resource usage on the NAT router to ensure it is not becoming a bottleneck. Tools like top, htop, and iftop can help you keep an eye on CPU, memory, and bandwidth usage.
Within the Akamai Cloud Manager, you can set up monitoring and alerts for Linode compute instances.



Alternatively, install monitoring agents or set up log forwarding to external observability platforms for more detailed insight into traffic flow, resource utilization, and system health.
Periodic SSH audits and basic connectivity checks between instances can also help validate that the VPC remains stable over time. For example, run the following command to check auth.log for SSH activity:
grep 'sshd' /var/log/auth.log

...
2025-06-16T23:28:59.223088-07:00 my-linode sshd[9355]: Accepted password for root from 10.0.2.78 port 53520 ssh2
2025-06-16T23:28:59.227749-07:00 my-linode sshd[9355]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
2025-06-16T23:43:44.812075-07:00 my-linode sshd[9526]: Accepted password for root from 10.0.2.78 port 32886 ssh2
2025-06-16T23:43:44.816294-07:00 my-linode sshd[9526]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
2025-06-16T23:44:30.593329-07:00 my-linode sshd[9355]: Received disconnect from 10.0.2.78 port 53520:11: disconnected by user
2025-06-16T23:44:30.597043-07:00 my-linode sshd[9355]: Disconnected from user root 10.0.2.78 port 53520
2025-06-16T23:44:30.597234-07:00 my-linode sshd[9355]: pam_unix(sshd:session): session closed for user root

Finalize Your Migration
Once you’ve verified that the Linode environment is functioning correctly, complete the migration by updating services and decommissioning the original Azure infrastructure.
Update any scripts, applications, or service configurations that reference Azure-specific hostnames or IPs. If you use DNS, point records to any new Linode instances with public IPs. This helps minimize downtime and makes the transition seamless to users.
Check your monitoring and alerting setup. Make sure Linode compute instances are covered by any health checks or observability tools your team depends on. If you used Azure Monitor or other Azure-native tools, replace them with Linode monitoring or third-party alternatives. See Migrating From Azure Monitor to Prometheus and Grafana on Akamai for more information.
Decommission Azure resources that are no longer needed. This includes VM instances, NAT gateways, network security groups, subnets, and eventually the virtual network itself. Make sure to clean up all resources to avoid unnecessary charges.
Finally, update internal documentation, runbooks, and network diagrams to reflect the new environment. A clear and current record helps with future audits, troubleshooting, and onboarding.
More Information
You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.