
CHR: Installing on Microsoft Azure

RouterOS CHR can be deployed on Microsoft Azure by uploading the MikroTik VHD disk image and creating a virtual machine from it. Azure does not currently list CHR in the Azure Marketplace, so manual image upload is required.

Prerequisites:

  • Azure account with an active subscription
  • az CLI installed and logged in (az login)
  • Resource group created in your target region
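Before proceeding, it is worth confirming that the CLI session is active and pointed at the intended subscription. A quick sanity check (the fields queried are standard `az account show` properties):

```shell
# Confirm the Azure CLI login and the active subscription
az account show --query '{subscription: name, id: id, user: user.name}' -o table

# Switch subscriptions if the wrong one is active:
# az account set --subscription "My Subscription"
```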

Azure expects VHD images, while MikroTik distributes CHR as a VHDX image, so the download must be converted to fixed-size VHD format:

Terminal window
# Download MikroTik CHR VHDX
wget https://download.mikrotik.com/routeros/7.x/chr-7.x.vhdx
# Convert VHDX to fixed-size VHD (Azure requirement)
qemu-img convert -f vhdx -O vpc -o subformat=fixed chr-7.x.vhdx chr-7.x.vhd

Azure requires fixed-size VHD format (not dynamic). Dynamic VHDs will be rejected during upload. Use subformat=fixed with qemu-img.
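Azure also requires the VHD's virtual size to be aligned to 1 MiB, or the image will be rejected even if it is fixed-size. A sketch for rounding the size up before conversion (filenames follow the chr-7.x placeholders used above; `force_size` tells qemu-img to keep the rounded size through the VPC conversion):

```shell
# Round the virtual size up to the next 1 MiB boundary (Azure requirement),
# then convert to fixed-size VHD, preserving the rounded size with force_size.
MB=$((1024 * 1024))
size=$(qemu-img info --output json chr-7.x.vhdx | grep -oP '"virtual-size":\s*\K[0-9]+')
rounded=$(( ((size + MB - 1) / MB) * MB ))
qemu-img resize -f vhdx chr-7.x.vhdx "$rounded"
# If qemu-img refuses to resize a VHDX, convert to raw first, resize the raw
# image, then convert the raw image to VHD instead.
qemu-img convert -f vhdx -O vpc -o subformat=fixed,force_size chr-7.x.vhdx chr-7.x.vhd
```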

Terminal window
# Set variables
RESOURCE_GROUP="chr-resources"
LOCATION="eastus"
STORAGE_ACCOUNT="chrimagestorage$RANDOM"
CONTAINER="vhds"
VHD_NAME="chr-7.x.vhd"
# Create resource group
az group create --name $RESOURCE_GROUP --location $LOCATION
# Create storage account for VHD upload
az storage account create \
--name $STORAGE_ACCOUNT \
--resource-group $RESOURCE_GROUP \
--location $LOCATION \
--sku Standard_LRS
# Create blob container
az storage container create \
--name $CONTAINER \
--account-name $STORAGE_ACCOUNT
Terminal window
# Get storage account key
STORAGE_KEY=$(az storage account keys list \
--resource-group $RESOURCE_GROUP \
--account-name $STORAGE_ACCOUNT \
--query '[0].value' -o tsv)
# Upload VHD to blob storage
az storage blob upload \
--account-name $STORAGE_ACCOUNT \
--account-key $STORAGE_KEY \
--container-name $CONTAINER \
--name $VHD_NAME \
--file chr-7.x.vhd \
--type page
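Before moving on, the upload can be verified from the blob's metadata; Azure stores VHDs as page blobs, so the blob type and byte count should match expectations (a sketch reusing the variables set above):

```shell
# Optional: confirm the VHD landed as a page blob of the expected size
az storage blob show \
  --account-name $STORAGE_ACCOUNT \
  --account-key $STORAGE_KEY \
  --container-name $CONTAINER \
  --name $VHD_NAME \
  --query '{type: properties.blobType, bytes: properties.contentLength}'
```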

Step 4: Create a Managed Disk from the VHD

Terminal window
# Get the VHD blob URI
VHD_URI=$(az storage blob url \
--account-name $STORAGE_ACCOUNT \
--container-name $CONTAINER \
--name $VHD_NAME \
--output tsv)
# Create a managed disk from the VHD
az disk create \
--resource-group $RESOURCE_GROUP \
--name chr-os-disk \
--location $LOCATION \
--source $VHD_URI \
--os-type Linux \
--hyper-v-generation V1

Create Network Interface with IP Forwarding


CHR must forward packets between interfaces. Azure requires IP forwarding to be enabled on each NIC:
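The NIC commands below attach to a virtual network named chr-vnet with a subnet chr-subnet. If no such network exists yet, a minimal sketch for creating one (the names match those used below, but the 10.0.0.0/16 and 10.0.1.0/24 address ranges are illustrative assumptions; choose ranges that fit your environment):

```shell
# Create a virtual network and subnet for the CHR (address ranges are examples)
az network vnet create \
  --resource-group $RESOURCE_GROUP \
  --name chr-vnet \
  --location $LOCATION \
  --address-prefix 10.0.0.0/16 \
  --subnet-name chr-subnet \
  --subnet-prefix 10.0.1.0/24
```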

Terminal window
# Create a public IP
az network public-ip create \
--resource-group $RESOURCE_GROUP \
--name chr-public-ip \
--allocation-method Static \
--sku Standard
# Create NIC with IP forwarding enabled
az network nic create \
--resource-group $RESOURCE_GROUP \
--name chr-nic \
--vnet-name chr-vnet \
--subnet chr-subnet \
--public-ip-address chr-public-ip \
--ip-forwarding true
# Create VM using the NIC
az vm create \
--resource-group $RESOURCE_GROUP \
--name chr-router \
--attach-os-disk chr-os-disk \
--os-type Linux \
--size Standard_B2s \
--nics chr-nic \
--location $LOCATION

IP forwarding must be enabled on the Azure NIC (not just in RouterOS) for CHR to forward packets between subnets or to the internet. Without this, Azure drops forwarded packets at the NIC level.

Use Case          VM Size           vCPUs   RAM
Lab / testing     Standard_B1s      1       1 GB
Small router      Standard_B2s      2       4 GB
Production        Standard_D2s_v3   2       8 GB
High throughput   Standard_F4s_v2   4       8 GB

Azure NSG rules control inbound and outbound traffic. Create rules for CHR management access:

Terminal window
# Create NSG
az network nsg create \
--resource-group $RESOURCE_GROUP \
--name chr-nsg
# Allow SSH
az network nsg rule create \
--resource-group $RESOURCE_GROUP \
--nsg-name chr-nsg \
--name allow-ssh \
--priority 100 \
--direction Inbound \
--access Allow \
--protocol Tcp \
--destination-port-ranges 22 \
--source-address-prefixes 203.0.113.0/24
# Allow WinBox
az network nsg rule create \
--resource-group $RESOURCE_GROUP \
--nsg-name chr-nsg \
--name allow-winbox \
--priority 110 \
--direction Inbound \
--access Allow \
--protocol Tcp \
--destination-port-ranges 8291 \
--source-address-prefixes 203.0.113.0/24
# Associate NSG with the NIC
az network nic update \
--resource-group $RESOURCE_GROUP \
--name chr-nic \
--network-security-group chr-nsg

Common ports required for MikroTik CHR:

Service        Port   Protocol
SSH            22     TCP
WinBox         8291   TCP
API            8728   TCP
BGP            179    TCP
IPsec IKE      500    UDP
IPsec NAT-T    4500   UDP
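The per-rule commands above can get repetitive when opening all of these ports. A sketch that generates one NSG rule command per service from the table (shown as a dry run with echo; remove the echo to execute, and replace MGMT_PREFIX with your management network):

```shell
# Dry-run sketch: print one "az network nsg rule create" per CHR service.
# Priorities start at 100 and step by 10. Remove "echo" to run for real.
RESOURCE_GROUP="chr-resources"
MGMT_PREFIX="203.0.113.0/24"   # replace with your management network
priority=100
for svc in ssh:22:Tcp winbox:8291:Tcp api:8728:Tcp bgp:179:Tcp ipsec-ike:500:Udp ipsec-natt:4500:Udp; do
  name=${svc%%:*}; rest=${svc#*:}; port=${rest%%:*}; proto=${rest#*:}
  echo az network nsg rule create \
    --resource-group "$RESOURCE_GROUP" --nsg-name chr-nsg \
    --name "allow-$name" --priority "$priority" \
    --direction Inbound --access Allow --protocol "$proto" \
    --destination-port-ranges "$port" \
    --source-address-prefixes "$MGMT_PREFIX"
  priority=$((priority + 10))
done
```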

Get the public IP address:

Terminal window
az vm show \
--resource-group $RESOURCE_GROUP \
--name chr-router \
--show-details \
--query publicIps -o tsv

Connect via SSH:

Terminal window
ssh admin@<public-ip>

Default credentials are admin with no password. Change the password immediately.

# Set admin password
/user set admin password=StrongPassword123!
# Verify interface detection
/interface print
# Configure static IP if DHCP is not used
/ip address add address=10.0.1.4/24 interface=ether1
/ip route add dst-address=0.0.0.0/0 gateway=10.0.1.1
# Set DNS (168.63.129.16 is Azure's built-in DNS endpoint)
/ip dns set servers=168.63.129.16,8.8.8.8
# Enable SSH and restrict access
/ip service set ssh address=203.0.113.0/24
# Set identity
/system identity set name=azure-chr-01

Azure supports attaching multiple NICs to a VM, which lets CHR route between subnets. Add the interfaces at VM creation time:

Terminal window
# Create second NIC (internal, no public IP)
az network nic create \
--resource-group $RESOURCE_GROUP \
--name chr-nic-internal \
--vnet-name chr-vnet \
--subnet internal-subnet \
--ip-forwarding true
# Create VM with both NICs
az vm create \
--resource-group $RESOURCE_GROUP \
--name chr-router \
--attach-os-disk chr-os-disk \
--os-type Linux \
--size Standard_D2s_v3 \
--nics chr-nic chr-nic-internal \
--location $LOCATION

Inside RouterOS:

# ether1 = first NIC (WAN/public)
# ether2 = second NIC (internal/LAN)
/interface print
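If the VM already exists, a second NIC can also be attached afterwards; Azure requires the VM to be stopped (deallocated) first, and the VM size must support multiple NICs. A sketch:

```shell
# Attach a second NIC to an existing VM (VM must be deallocated first)
az vm deallocate --resource-group $RESOURCE_GROUP --name chr-router
az vm nic add --resource-group $RESOURCE_GROUP --vm-name chr-router --nics chr-nic-internal
az vm start --resource-group $RESOURCE_GROUP --name chr-router
```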

Apply a CHR license after deployment:

/system/license renew account=your-mikrotik-account password=your-password level=p1

VMs cloned from the same managed disk image may share a system ID. Run /system license generate-new-id on each new instance before requesting a license.

Verify IP forwarding is enabled on each Azure NIC:

Terminal window
az network nic show \
--resource-group $RESOURCE_GROUP \
--name chr-nic \
--query enableIPForwarding
# Should return: true

If false, update the NIC:

Terminal window
az network nic update \
--resource-group $RESOURCE_GROUP \
--name chr-nic \
--ip-forwarding true
If you cannot connect to the CHR:

  1. Verify the NSG inbound rule allows your source IP on the required port
  2. Confirm the public IP is associated with the NIC
  3. Use Azure Serial Console (portal → VM → Serial Console) for out-of-band access
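The Serial Console is also reachable from the CLI via the serial-console extension; boot diagnostics must be enabled on the VM first (a sketch):

```shell
# Enable boot diagnostics (required by Serial Console), then connect
az vm boot-diagnostics enable --resource-group $RESOURCE_GROUP --name chr-router
az serial-console connect --resource-group $RESOURCE_GROUP --name chr-router
```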

Ensure the VHD is fixed-size (not dynamic):

Terminal window
qemu-img info chr-7.x.vhd | grep -E "virtual size|disk size|file format"
# file format should be: vpc
# qemu-img does not report the subformat directly; for a fixed VHD the
# disk size matches the virtual size, while a dynamic VHD is much smaller
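One way to verify fixed-size format without re-converting: a fixed VHD produced by qemu-img is exactly the virtual size plus a 512-byte footer, while a dynamic VHD's file is noticeably smaller. A sketch (assumes GNU grep and stat):

```shell
# Heuristic: fixed VHD => file size == virtual size + 512-byte footer
virtual=$(qemu-img info --output json chr-7.x.vhd | grep -oP '"virtual-size":\s*\K[0-9]+')
actual=$(stat -c %s chr-7.x.vhd)
if [ "$actual" -eq $((virtual + 512)) ]; then
  echo "fixed"
else
  echo "dynamic - re-convert with subformat=fixed"
fi
```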