
Bandwidth Test

The Bandwidth Test tool measures network throughput between two MikroTik devices. It runs a client on one router and a server on another, generating traffic in one or both directions and reporting real-time throughput statistics. Use it to verify link capacity, identify bottlenecks, and validate QoS policy behaviour.

One router acts as the server (passively accepts test connections) and the other acts as the client (initiates the test and reports results). The client connects to the server’s IP address, pushes or receives traffic for the specified duration, and then displays aggregate throughput.

Enable the bandwidth-test server on the target router:

/tool/bandwidth-server set enabled=yes

By default the server requires authentication using the router’s user credentials. To allow unauthenticated tests:

/tool/bandwidth-server set enabled=yes authenticate=no

Full server properties:

| Property | Default | Description |
| --- | --- | --- |
| enabled | no | Enable or disable the bandwidth server |
| authenticate | yes | Require client to provide valid username/password |
| allocate-udp-ports-from | 2000 | Starting port for UDP test sessions |
| max-sessions | 100 | Maximum concurrent test sessions |

From the client router, run /tool/bandwidth-test:

/tool/bandwidth-test address=192.168.88.1
| Parameter | Description |
| --- | --- |
| address | IP address of the bandwidth server |
| direction | Traffic direction: transmit, receive, or both (default: receive) |
| protocol | Transport protocol: tcp or udp (default: tcp) |
| duration | Test duration (e.g. 30s, 2m); 0s runs until interrupted |
| user | Username for server authentication |
| password | Password for server authentication |
| connection-count | Number of parallel TCP/UDP connections (default: 20) |
| random-data | Send random (incompressible) data; increases CPU load but defeats compression |
| local-udp-tx-size | UDP payload size in bytes for local transmit (range: 28–64000; default: 1500); only affects direction=transmit or direction=both |
| local-tx-speed | Limit local transmit speed (e.g. 100M) |
| remote-tx-speed | Limit remote transmit speed |
Direction values:

| Value | Meaning |
| --- | --- |
| transmit | Client sends traffic to server |
| receive | Server sends traffic to client |
| both | Traffic flows in both directions simultaneously |

Test how fast the client can receive data over TCP:

/tool/bandwidth-test address=10.0.0.1 direction=receive protocol=tcp duration=30s

Measure full-duplex UDP throughput:

/tool/bandwidth-test address=10.0.0.1 direction=both protocol=udp duration=60s

With server authentication enabled, supply credentials:

/tool/bandwidth-test address=192.168.1.1 user=admin password=secret direction=transmit duration=30s

Increase connection-count above the default of 20 to better saturate high-capacity links, since a single flow often cannot fill the pipe:

/tool/bandwidth-test address=10.0.0.1 direction=both protocol=tcp connection-count=50 duration=30s

Test transmit throughput with jumbo-frame-sized UDP payloads. local-udp-tx-size controls the local transmit packet size, so direction=transmit is required for it to take effect:

/tool/bandwidth-test address=10.0.0.1 protocol=udp direction=transmit local-udp-tx-size=9000 duration=30s
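Larger payloads help because, at a fixed bit rate, they cut the number of packets the CPU must generate per second. A quick back-of-envelope calculation (illustrative only; it ignores IP/UDP/Ethernet header overhead):

```python
def packets_per_second(rate_bps: float, payload_bytes: int) -> float:
    """Packets per second needed to sustain rate_bps at a given UDP payload size.

    Header overhead is ignored for simplicity, so the real packet rate for a
    given wire speed is slightly lower than this.
    """
    return rate_bps / (payload_bytes * 8)

# At 1 Gbit/s, a 9000-byte payload needs 6x fewer packets than 1500 bytes:
pps_1500 = packets_per_second(1e9, 1500)   # ≈ 83,333 pps
pps_9000 = packets_per_second(1e9, 9000)   # ≈ 13,889 pps
print(round(pps_1500), round(pps_9000))
```

Per-packet cost dominates on software-forwarding routers, so the 9000-byte test stresses the link more and the CPU less.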

The protocol parameter controls how test traffic is generated. The two protocols serve different diagnostic purposes.

TCP uses flow control, congestion control, and retransmission. The measured rate reflects practical bulk-transfer throughput — similar to file transfers, SMB, or HTTP downloads over the link.

Use TCP when you want to answer: “How fast can my applications transfer data across this link?”

/tool/bandwidth-test address=10.0.0.1 protocol=tcp direction=both duration=30s

UDP sends at a configured offered rate (local-tx-speed) without retransmission. The result shows whether the link can carry that load without dropping packets, exposing congestion, MTU issues, or queue behaviour under a controlled load.

Use UDP when you want to answer: “Does this link drop packets when loaded at a specific rate?” — particularly useful for validating QoS policies and wireless links.

/tool/bandwidth-test address=10.0.0.1 protocol=udp direction=transmit \
local-tx-speed=50M local-udp-tx-size=1400 duration=30s
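The loss percentage and delivered rate can be estimated from the reported lost-packets count. A hedged helper sketch (it assumes the sender actually achieved the offered rate and ignores header overhead; the field names mirror the output table):

```python
def udp_test_summary(offered_bps: float, payload_bytes: int,
                     duration_s: float, lost_packets: int) -> dict:
    """Estimate loss percentage and delivered rate for a UDP bandwidth test.

    Illustrative only: assumes the full offered rate was transmitted and
    ignores IP/UDP header overhead.
    """
    sent = offered_bps * duration_s / (payload_bytes * 8)  # packets transmitted
    loss_fraction = lost_packets / sent
    return {"sent_packets": round(sent),
            "loss_pct": round(100 * loss_fraction, 2),
            "delivered_mbps": round(offered_bps * (1 - loss_fraction) / 1e6, 1)}

# 50 Mbit/s offered for 30 s at 1400-byte payloads, 2000 packets lost:
print(udp_test_summary(50e6, 1400, 30, 2000))
```

In this example roughly 1.5% of packets are dropped, so the link delivers about 49.3 Mbit/s of the 50 Mbit/s offered.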
| Characteristic | TCP | UDP |
| --- | --- | --- |
| Retransmits lost packets | Yes | No |
| Flow / congestion control | Yes | No |
| local-tx-speed (offered rate) | Optional cap | Target rate |
| lost-packets counter meaningful | No | Yes |
| Reflects real application throughput | Yes | No |
| Best for QoS / queue validation | No | Yes |

The test displays a continuously updated table during the run. Key output fields:

| Field | Description |
| --- | --- |
| tx-current | Current transmit throughput (instantaneous sample) |
| rx-current | Current receive throughput (instantaneous sample) |
| tx-10-second-average | Rolling average TX over the last 10 seconds |
| rx-10-second-average | Rolling average RX over the last 10 seconds |
| tx-total-average | Average TX for the entire test; use this as the headline result |
| rx-total-average | Average RX for the entire test |
| lost-packets | Packets not received at the far end (meaningful in UDP tests) |
| random-data | Whether random (incompressible) data mode is active |
| local-cpu-load | CPU utilisation on the client router during the test |
| remote-cpu-load | CPU utilisation on the server router during the test |

tx-total-average is the most reliable result. The current and 10-second values show stability over time — large swings indicate an unstable path or CPU contention.

In TCP bidirectional tests, tx and rx rates often differ. This is expected: TCP acknowledgements flow in the opposite direction and consume some bandwidth, and flow control on each direction operates independently.
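The reverse-direction ACK stream consumes a small but predictable share of bandwidth. A rough estimate, under assumptions that are mine rather than RouterOS's (delayed ACKs covering every second segment, 66-byte ACK frames, 1460-byte MSS):

```python
def ack_overhead_bps(data_rate_bps: float, mss: int = 1460,
                     ack_frame_bytes: int = 66,
                     acks_per_segments: int = 2) -> float:
    """Approximate bandwidth consumed by TCP ACKs flowing against a data stream.

    Assumptions (illustrative): one ACK per `acks_per_segments` full-size
    segments, each ACK a 66-byte Ethernet frame. Real stacks vary.
    """
    segments_per_s = data_rate_bps / (mss * 8)
    acks_per_s = segments_per_s / acks_per_segments
    return acks_per_s * ack_frame_bytes * 8

# ACKs for a 1 Gbit/s data stream consume roughly 22.6 Mbit/s in the
# opposite direction, a bit over 2% of the forward rate:
print(round(ack_overhead_bps(1e9) / 1e6, 1))
```

So in a bidirectional test, each direction's data competes with the other direction's ACKs, which partly explains the asymmetry.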

High lost-packets in UDP means the link or a queue is dropping packets at the offered rate. Reduce local-tx-speed until lost-packets drops to zero to find the link’s usable capacity for that traffic type.

High local-cpu-load or remote-cpu-load means the test is CPU-limited. The measured throughput reflects the device’s processing capacity, not the link’s physical capacity. On lower-end hardware, this is the most common cause of unexpectedly low results.

Example: a suspiciously low result
tx-total-average: 45Mbps (expected 100Mbps)
local-cpu-load: 98% ← CPU bottleneck, not the link

Use this sequence to find a link’s practical capacity, ruling out CPU and configuration factors.

Step 1: Enable the bandwidth server

On the remote router:

/tool/bandwidth-server set enabled=yes authenticate=yes

Step 2: Run a TCP baseline test

TCP gives a quick estimate of bulk-transfer throughput:

/tool/bandwidth-test address=<server-ip> user=admin password=<pass> \
protocol=tcp direction=both duration=30s

Check local-cpu-load and remote-cpu-load in the output. If either exceeds ~80%, lighten the load (for example, a single connection in one direction) to rule out CPU limits:

/tool/bandwidth-test address=<server-ip> user=admin password=<pass> \
protocol=tcp direction=transmit connection-count=1 duration=30s

Step 3: Run a UDP test at the expected rate


To verify the link can carry a specific offered load without loss:

/tool/bandwidth-test address=<server-ip> user=admin password=<pass> \
protocol=udp direction=transmit local-tx-speed=100M \
local-udp-tx-size=1400 random-data=yes duration=30s

A lost-packets count above zero means the link, a queue, or a device is dropping packets at that rate.

Reduce local-tx-speed in 10% steps until lost=0. That rate is the link’s reliable UDP capacity for that packet size.
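The step-down procedure can be sketched as a loop. Here `run_udp_test` is a hypothetical callback standing in for launching /tool/bandwidth-test at a given local-tx-speed (e.g. via the RouterOS API) and reading the final lost-packets count:

```python
def find_lossless_rate(start_bps: float, run_udp_test,
                       step: float = 0.10, floor_bps: float = 1e6) -> float:
    """Lower the offered rate in `step`-sized fractions of the starting rate
    until a test run reports zero lost packets.

    run_udp_test(rate_bps) -> lost_packets is a hypothetical stand-in for
    running /tool/bandwidth-test and reading its lost-packets field.
    """
    rate = start_bps
    while rate > floor_bps:
        if run_udp_test(rate) == 0:
            return rate              # highest tested rate with zero loss
        rate -= start_bps * step     # back off by 10% of the starting rate
    return floor_bps

# Toy stand-in link: pretend it drops packets above 70 Mbit/s.
fake_link = lambda bps: 0 if bps <= 70e6 else int(bps / 1e4)
print(find_lossless_rate(100e6, fake_link) / 1e6)  # → 70.0
```

Each probe should run long enough (30 s or more) that transient bursts do not mask steady-state loss.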

Bandwidth Test is the most practical way to confirm that queue rules actually enforce priorities under congestion. The principle: generate deliberate overload, then observe whether queue statistics match the configured policy.

1. Generate test traffic that matches a queue mark or interface.

Configure the test so the generated traffic is classified by your existing mangle/queue rules, for example by sourcing it from an address or interface that your rules match:

# On client: run test to fill the link
/tool/bandwidth-test address=10.0.0.1 user=admin password=<pass> \
protocol=udp direction=transmit local-tx-speed=110M duration=60s

2. While the test is running, watch queue statistics on the router:

/queue simple print stats
# or for queue tree:
/queue tree print stats

3. Verify that:

  • High-priority queues show their allocated rate and low/zero drops.
  • Low-priority queues are rate-limited and show drops or queued bytes.
  • Total measured TX in bandwidth-test does not exceed the configured interface rate.
Example: confirm a simple-queue cap from the client side:

# Queue limits 10.0.0.50 to 20Mbps
/queue simple add name=client-limit target=10.0.0.50 max-limit=20M/20M
# Run test from 10.0.0.50 — server should see ~20Mbps not more
/tool/bandwidth-test address=10.0.0.1 user=admin password=<pass> \
protocol=tcp direction=transmit duration=30s
# Expected: tx-total-average ≈ 20Mbps despite link being faster

Watch queue counters during the test:

/queue simple print stats where name=client-limit
# bytes-out grows; dropped > 0 confirms the shaper is active
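To check programmatically that the shaper held, compare the measured average against the configured cap with a small tolerance. A hedged sketch (the 5% margin is an arbitrary choice, not a RouterOS value):

```python
def shaper_holds(measured_bps: float, limit_bps: float,
                 tolerance: float = 0.05) -> bool:
    """True if measured throughput stays within `tolerance` above the
    configured max-limit; slight overshoot is normal with bursty TCP."""
    return measured_bps <= limit_bps * (1 + tolerance)

# tx-total-average of 20.4 Mbit/s against a 20M max-limit passes;
# 25 Mbit/s would suggest traffic is bypassing the queue.
print(shaper_holds(20.4e6, 20e6), shaper_holds(25e6, 20e6))  # → True False
```

A failing check usually means the test traffic is not matching the queue's target, so verify the source address and queue counters first.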

Bandwidth Test runs in software on the router’s CPU. On low-end hardware, the test may saturate the CPU before saturating the link, producing lower-than-actual results. Run tests when the router is otherwise idle.

When random-data=yes, the router generates non-compressible random data for each packet. This is more representative of real traffic but substantially increases CPU usage and typically reduces measured throughput compared to random-data=no.

If you run Bandwidth Test on a router that is also forwarding production traffic, test results will reflect both forwarding overhead and test-generated load. Use a dedicated test path or test during low-traffic periods.

UDP tests do not retransmit lost packets. High loss percentages in UDP tests indicate congestion, MTU mismatches, or hardware queuing drops — not a broken connection.

Traffic generated by Bandwidth Test passes through the router’s CPU. Hardware-offloaded bridge or switch paths are not exercised, so results may not reflect true hardware forwarding throughput.