
IP Traffic Flow: NetFlow and IPFIX Export

RouterOS Traffic Flow exports per-connection flow records to an external collector using NetFlow v1/v5/v9 or IPFIX. This gives you per-IP, per-protocol visibility into traffic patterns without inspecting every packet in real time.

Common uses: bandwidth accounting, anomaly detection, capacity planning, and feeding tools like ntopng, ElastiFlow, or Elastic Stack.


How It Works

When a new connection passes through the router, Traffic Flow creates a flow record in a local cache. The record accumulates packet and byte counters until:

  • the connection is idle for longer than inactive-flow-timeout, or
  • the connection has been active for longer than active-flow-timeout.

When either timer fires, the record is exported to all configured targets via UDP and removed from the cache.
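The two-timer expiry described above can be sketched in a few lines of Python. This is a simplified model for illustration only, not RouterOS internals; all names (`FlowRecord`, `should_export`) are invented:

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    first_seen: float   # when the flow entered the cache
    last_seen: float    # when the last packet matched this flow
    packets: int = 0
    bytes: int = 0

def should_export(rec: FlowRecord, now: float,
                  active_timeout: float = 30 * 60,   # active-flow-timeout (30m)
                  inactive_timeout: float = 15.0     # inactive-flow-timeout (15s)
                  ) -> bool:
    """A record is exported (and removed from the cache) when either timer fires."""
    idle_too_long = (now - rec.last_seen) > inactive_timeout
    active_too_long = (now - rec.first_seen) > active_timeout
    return idle_too_long or active_too_long

now = 1000.0
idle = FlowRecord(first_seen=900.0, last_seen=980.0, packets=10, bytes=4200)
busy = FlowRecord(first_seen=900.0, last_seen=995.0, packets=10, bytes=4200)
print(should_export(idle, now))  # True  (idle for 20 s > 15 s)
print(should_export(busy, now))  # False (under both timers)
```

The practical consequence: long-lived connections show up in the collector at most every `active-flow-timeout`, while short bursts appear about `inactive-flow-timeout` after they end.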


Prerequisites

  • RouterOS 7.x (IPFIX available from 7.1; NetFlow v5/v9 also available in earlier releases)
  • Reachable collector host (ntopng, ElastiFlow, nfdump/nfcapd, Elastic Agent, etc.)
  • UDP port open on the collector (default 2055)

Step 1 — Enable Traffic Flow

```
/ip traffic-flow
set enabled=yes interfaces=all
```
| Property | Default | Notes |
| --- | --- | --- |
| enabled | no | Master on/off switch |
| interfaces | all | Comma-separated list, or all |
| cache-entries | 64k | Flow cache size; tune down on low-memory devices, up on high-connection-rate routers |
| active-flow-timeout | 30m | Export long-lived flows before they end |
| inactive-flow-timeout | 15s | Export idle flows promptly |
| packet-sampling | no | Enable packet sampling (uses sampling-interval / sampling-space below) |
| sampling-interval | 0 | 0 = all packets; N = sample 1 in every N packets |
| sampling-space | 0 | Packets to skip between samples (advanced sampling) |

For most deployments, leaving sampling-interval=0 (no sampling) gives accurate counts. Enable sampling only on very high-throughput links where flow export overhead is a concern.
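When sampling is enabled, exported counters reflect only the sampled packets, so collectors typically scale them back up by the sampling factor. A rough sketch of that estimate (an illustrative helper, not part of any collector's API):

```python
def scale_sampled(sampled_bytes: int, sampled_packets: int, interval: int):
    """Estimate true totals from 1-in-N sampled flow counters.

    `interval` follows the sampling-interval semantics described above:
    0 means every packet is examined (no scaling), N means 1 in every N.
    """
    n = max(interval, 1)  # interval 0 => no sampling, scale factor 1
    return sampled_bytes * n, sampled_packets * n

print(scale_sampled(1_500_000, 1000, 10))  # (15000000, 10000)
print(scale_sampled(1_500_000, 1000, 0))   # unchanged: (1500000, 1000)
```

This also shows why sampling trades accuracy for overhead: small flows may be missed entirely, and scaled totals are estimates rather than exact counts.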


Step 2 — Configure Export Targets

```
/ip traffic-flow target
add dst-address=192.0.2.50 port=2055 version=ipfix
```
| Property | Default | Notes |
| --- | --- | --- |
| dst-address | — | Collector IP address |
| port | 2055 | UDP port on the collector |
| version | 9 | Export format: 1, 5, 9, or ipfix |
| v9-template-refresh | 20 | Re-send template every N flow packets (v9/IPFIX) |
| v9-template-timeout | 30m | Re-send template after this interval even if the packet count has not been reached |

Version guidance:

| Version | Use when |
| --- | --- |
| NetFlow v5 | Legacy collectors that do not support v9/IPFIX |
| NetFlow v9 | Broad compatibility; supports IPv6 and custom fields via templates |
| IPFIX | Recommended for modern collectors (ntopng, ElastiFlow); supports all RouterOS fields |
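For a sense of what a collector actually receives, a NetFlow v5 datagram starts with a fixed 24-byte header followed by up to thirty 48-byte flow records (v9 and IPFIX replace this fixed layout with templates). A minimal Python sketch of the v5 header layout; the field names are descriptive, not taken from any library:

```python
import struct

V5_HEADER = struct.Struct("!HHIIIIBBH")  # fixed 24-byte NetFlow v5 header
V5_RECORD_LEN = 48                        # each v5 flow record is 48 bytes

def parse_v5_header(datagram: bytes) -> dict:
    """Decode the NetFlow v5 header from the start of a UDP datagram."""
    (version, count, sys_uptime, unix_secs, unix_nsecs,
     flow_sequence, engine_type, engine_id, sampling) = V5_HEADER.unpack(datagram[:24])
    assert version == 5, "not a NetFlow v5 packet"
    return {
        "version": version,
        "count": count,                  # flow records in this datagram
        "sys_uptime_ms": sys_uptime,     # ms since the exporter booted
        "unix_secs": unix_secs,          # export timestamp
        "flow_sequence": flow_sequence,  # running count of exported flows
    }

# Build a synthetic datagram (header + two zeroed records) to show the layout.
pkt = V5_HEADER.pack(5, 2, 123456, 1_700_000_000, 0, 42, 0, 0, 0) \
      + b"\x00" * (2 * V5_RECORD_LEN)
print(parse_v5_header(pkt))
```

Gaps in `flow_sequence` between consecutive datagrams are a quick way to detect dropped export packets on the collector side.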

Multiple targets are supported — the same flows are exported to every target:

```
/ip traffic-flow target
add dst-address=192.0.2.60 port=2055 version=ipfix
add dst-address=192.0.2.70 port=9995 version=9
```

Step 3 — Configure IPFIX Fields (IPFIX/v9 Only)


RouterOS exports a global set of IPFIX Information Elements. All fields are enabled by default. View and adjust them with:

```
/ip traffic-flow ipfix print
/ip traffic-flow ipfix set nat-events=yes
```

The ipfix print output lists individual field toggles (e.g. bytes, packets, src-port, dst-port, nat-src-address, etc.). These apply to all IPFIX/v9 targets — there is no per-target template selection.

RouterOS exports the following fields (verified on 7.15.3):

| Category | Fields |
| --- | --- |
| Layer 2 | Source/destination MAC address |
| Layer 3 | Source/destination IP, source/destination prefix mask, protocol, TOS, TTL, gateway, IP total length |
| Layer 4 | Source/destination port, TCP flags, TCP seq/ack/window, UDP length, ICMP type/code, IGMP type |
| Counters | Packet count, byte count |
| Timing | First/last forwarded timestamps, system init time |
| Interfaces | Ingress/egress interface index |
| IPv6 | IPv6 flow label |
| NAT | Pre-NAT/post-NAT address and port; NAT events |

Step 4 — Verify Flow Export

Check that flows are being generated:

```
/ip traffic-flow print
```

Confirm targets are receiving data. On the collector side, most tools show a “last received” timestamp per exporter. If no flows arrive:

  1. Confirm the collector host is reachable from the router: /ping 192.0.2.50 (note this tests ICMP, not the UDP export path)
  2. Check that enabled=yes and interfaces includes the active interface
  3. Verify the collector is listening on the configured port
  4. For IPFIX: ensure v9-template-refresh is low enough that the collector receives a template before timing out (default 20 packets is usually fine)
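Before involving the full collector, a throwaway UDP listener is a quick way to confirm export datagrams are arriving at all. A minimal Python sketch; run it on the collector host in place of the real collector (`LISTEN_PORT` is an assumption matching the default port above):

```python
import socket
import struct

LISTEN_PORT = 2055  # match the port configured in /ip traffic-flow target

def export_version(datagram: bytes) -> int:
    """The first 16 bits of NetFlow v1/v5/v9 and IPFIX datagrams carry the
    export version number; IPFIX datagrams report version 10."""
    (version,) = struct.unpack("!H", datagram[:2])
    return version

def listen() -> None:
    # Stop the real collector first so the port is free; Ctrl-C to exit.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", LISTEN_PORT))
    print(f"listening on UDP {LISTEN_PORT} ...")
    while True:
        data, (src, _port) = sock.recvfrom(65535)
        print(f"{src}: {len(data)} bytes, export version {export_version(data)}")

# Call listen() to start; seeing datagrams here but not in the collector
# points at the collector config rather than the router.
```

If nothing arrives here either, the problem is upstream: routing, firewall filters on the path, or the Traffic Flow target configuration.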

ntopng accepts IPFIX and NetFlow v9 natively. Configure a flow interface in ntopng pointing to UDP port 2055, then set RouterOS to export to the ntopng host:

```
/ip traffic-flow target
add dst-address=<ntopng-ip> port=2055 version=ipfix \
    v9-template-refresh=20 v9-template-timeout=30m
```

Use the default ipv4/ipv6 templates. Custom templates work but require matching field mappings in ntopng’s decoder configuration.

ElastiFlow works best with IPFIX using MikroTik’s default templates. The Elastic Agent or standalone ElastiFlow collector should listen on UDP 2055.

```
/ip traffic-flow target
add dst-address=<elastiflow-ip> port=2055 version=ipfix
```

If you see unknown fields in ElastiFlow dashboards, verify your ElastiFlow version includes MikroTik IE definitions, or switch to NetFlow v9 for broader compatibility.

Use NetFlow v5 or v9 for the widest nfdump compatibility:

```
/ip traffic-flow target
add dst-address=<nfcapd-ip> port=9995 version=5
```

RouterOS does not write directly to InfluxDB. You need an intermediate collector that transforms flow records into time-series data. A common open source option is pmacct (with the nfacctd daemon):

RouterOS ──IPFIX/NetFlow──► nfacctd (pmacct) ──► InfluxDB ──► Grafana
  1. Configure nfacctd to listen on UDP 2055 and write aggregated metrics (bytes, packets, src/dst IP, protocol) to InfluxDB using pmacct’s print_output: json + InfluxDB plugin.
  2. Point RouterOS at the nfacctd host:

```
/ip traffic-flow target
add dst-address=<pmacct-ip> port=2055 version=ipfix
```

  3. Build Grafana dashboards querying the InfluxDB measurement for top talkers, protocol distribution, and bandwidth over time.

ntopng path: ntopng 5.x exposes a REST API and Prometheus metrics endpoint. Grafana can scrape ntopng’s Prometheus endpoint directly:

RouterOS ──IPFIX──► ntopng ──Prometheus metrics──► Grafana

Enable the Prometheus endpoint in ntopng’s ntopng.conf:

```
--prometheus-port=7070
```

Then add a Prometheus data source in Grafana pointing to http://<ntopng-ip>:7070.


The MikroTik-maintained guide uses ElastiFlow as the Logstash pipeline that decodes IPFIX records and writes enriched documents to Elasticsearch:

RouterOS ──IPFIX──► ElastiFlow/Logstash ──► Elasticsearch ──► Kibana

RouterOS configuration (export to the Logstash host):

```
/ip traffic-flow
set enabled=yes interfaces=all cache-entries=4k \
    active-flow-timeout=1m inactive-flow-timeout=15s
/ip traffic-flow target
add dst-address=<logstash-ip> port=2055 version=ipfix \
    v9-template-refresh=20
```

ElastiFlow decodes MikroTik’s default ipv4/ipv6 IPFIX templates without extra field mapping. If you see unmapped fields in Kibana, confirm your ElastiFlow version is ≥ 4.x, which includes MikroTik enterprise element definitions. Switch to NetFlow v9 if you need broader compatibility with older ElastiFlow releases.


To export flows only from specific interfaces rather than all of them, create an interface list:

```
/interface list
add name=flow-interfaces
/interface list member
add list=flow-interfaces interface=ether1
add list=flow-interfaces interface=bridge
/ip traffic-flow
set enabled=yes interfaces=flow-interfaces
```

This is useful on routers with many internal-only or management interfaces where flow export would generate noise without adding value.


```
# Enable Traffic Flow on all interfaces
/ip traffic-flow
set enabled=yes interfaces=all cache-entries=16k \
    active-flow-timeout=30m inactive-flow-timeout=15s

# Export to ntopng (IPFIX) and a backup nfcapd (NetFlow v5)
/ip traffic-flow target
add dst-address=192.0.2.50 port=2055 version=ipfix \
    v9-template-refresh=20 v9-template-timeout=30m
add dst-address=192.0.2.51 port=9995 version=5
```

Once flows are reaching a collector, the following query patterns are most useful for RouterOS deployments:

Aggregate exported flows by source IP and sort by bytes descending over a time window (e.g. last hour). This identifies internal hosts generating the most traffic, useful for bandwidth accountability and spotting unexpected uploads.

Aggregate by destination IP or destination AS (RouterOS exports BGP AS numbers when the router is running BGP). Useful for understanding which external services dominate outbound traffic.

Group flows by L4 protocol (TCP, UDP, ICMP) and by destination port to build a traffic mix profile. A sudden rise in UDP flows may indicate streaming, gaming, or amplification traffic.
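All three query patterns above are simple group-and-sum aggregations. A toy Python sketch over already-decoded flow tuples; the record shape is invented for illustration, and real collectors run equivalent queries in their own query language:

```python
from collections import defaultdict

# Toy exported-flow records: (src_ip, dst_ip, proto, dst_port, bytes)
flows = [
    ("10.0.0.5", "203.0.113.9",  "tcp",  443,   9_000_000),
    ("10.0.0.7", "198.51.100.3", "udp",  51820, 2_500_000),
    ("10.0.0.5", "203.0.113.20", "tcp",  443,   4_000_000),
    ("10.0.0.9", "203.0.113.9",  "icmp", 0,     12_000),
]

def top_talkers(records, n=10):
    """Aggregate bytes by source IP, sorted descending."""
    totals = defaultdict(int)
    for src, _dst, _proto, _port, nbytes in records:
        totals[src] += nbytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

def protocol_mix(records):
    """Share of total bytes per L4 protocol."""
    totals = defaultdict(int)
    for _src, _dst, proto, _port, nbytes in records:
        totals[proto] += nbytes
    grand = sum(totals.values())
    return {p: b / grand for p, b in totals.items()}

print(top_talkers(flows))   # 10.0.0.5 leads with 13 MB across two flows
print(protocol_mix(flows))
```

Destination-based aggregation is the same pattern keyed on `dst_ip` (or destination AS) instead of `src_ip`.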

Watch for these deviations from baseline in your collector dashboards:

| Signal | Possible cause |
| --- | --- |
| Spike in total flows/s | Port scan, SYN flood |
| High unique source IPs with low bytes/flow | Distributed scan or spoofed traffic |
| Large single-source bytes/s above normal | Upload exfiltration, misconfigured backup |
| Surge in ICMP flows | Ping flood or network discovery sweep |
| Short-duration flows to many destinations | Malware C2 beaconing |

RouterOS provides the raw telemetry; baselining and alerting are performed in the collector (Kibana alerts, ntopng thresholds, Grafana alert rules).


  • cache-entries: Default is 64k. On memory-constrained devices, reduce to 4k or 16k. On very high-connection-rate routers, the default is already generous; increase only if you observe dropped flow records. Each entry uses a small amount of RAM.
  • active-flow-timeout: Shorter values (e.g. 1m) give finer-grained time-series data in your collector but increase export volume.
  • inactive-flow-timeout: The default 15s works well. Very short values (1–2 s) can flood the collector on high-connection-rate routers.
  • Template refresh: If your collector loses template mapping after a router reboot, lower v9-template-refresh to 5 or 10 so templates are resent quickly once flows resume.
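A back-of-envelope way to sanity-check cache-entries against your flow rate: steady-state occupancy is roughly the new-flow rate times how long a record lives in the cache before export. This is illustrative arithmetic with invented names, not a RouterOS formula:

```python
def cache_entries_needed(new_flows_per_sec: float,
                         inactive_timeout_s: float = 15.0,
                         mean_flow_lifetime_s: float = 30.0,
                         headroom: float = 2.0) -> int:
    """Estimate steady-state flow-cache occupancy.

    A record occupies the cache for roughly its lifetime plus the
    inactive timeout; headroom covers bursts above the average rate.
    """
    occupancy = new_flows_per_sec * (mean_flow_lifetime_s + inactive_timeout_s)
    return int(occupancy * headroom)

# e.g. 500 new flows/s, 30 s average flow lifetime, 15 s idle timeout:
print(cache_entries_needed(500))  # 45000 -> the 64k default is comfortable
```

If the estimate approaches or exceeds cache-entries, either raise the cache size or shorten the timeouts so records are exported (and evicted) sooner.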