IP Traffic Flow: NetFlow and IPFIX Export
RouterOS Traffic Flow exports per-connection flow records to an external collector using NetFlow v1/v5/v9 or IPFIX. This gives you per-IP, per-protocol visibility into traffic patterns without inspecting every packet in real time.
Common uses: bandwidth accounting, anomaly detection, capacity planning, and feeding tools like ntopng, ElastiFlow, or Elastic Stack.
How Traffic Flow Works
When a new connection passes through the router, Traffic Flow creates a flow record in a local cache. The record accumulates packet and byte counters until:
- the connection is idle for longer than inactive-flow-timeout, or
- the connection has been active for longer than active-flow-timeout.
When either timer fires, the record is exported to all configured targets via UDP and removed from the cache.
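The two timers can be sketched as a small decision function. This is an illustrative model only: the field names, cache shape, and `should_export` helper are hypothetical, not RouterOS internals.

```python
# Hypothetical in-memory flow cache illustrating the two export timers.
# Field names (first_seen, last_seen) are illustrative, not RouterOS internals.
ACTIVE_TIMEOUT = 30 * 60   # active-flow-timeout default: 30m
INACTIVE_TIMEOUT = 15      # inactive-flow-timeout default: 15s

def should_export(flow, now):
    """Return True when either timer says the record must be flushed."""
    if now - flow["last_seen"] > INACTIVE_TIMEOUT:
        return True  # flow went idle: export and evict
    if now - flow["first_seen"] > ACTIVE_TIMEOUT:
        return True  # long-lived flow: export a partial record
    return False

now = 1_000_000
print(should_export({"first_seen": now - 60, "last_seen": now - 20}, now))   # idle for 20 s
print(should_export({"first_seen": now - 60, "last_seen": now - 1}, now))    # still active
print(should_export({"first_seen": now - 3600, "last_seen": now - 1}, now))  # active for 1 h
```

Long-lived flows thus show up in the collector as a series of partial records, one per active-flow-timeout interval, rather than a single record at connection close.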
Prerequisites
- RouterOS 7.x (IPFIX available from 7.1+; NetFlow v5/v9 available from earlier)
- Reachable collector host (ntopng, ElastiFlow, nfdump/nfcapd, Elastic Agent, etc.)
- UDP port open on the collector (default 2055)
Step 1 — Enable Traffic Flow Globally
```
/ip traffic-flow
set enabled=yes interfaces=all
```

| Property | Default | Notes |
|---|---|---|
| enabled | no | Master on/off switch |
| interfaces | all | Comma-separated list, or all |
| cache-entries | 64k | Flow cache size; tune down on low-memory devices, up on high-connection-rate routers |
| active-flow-timeout | 30m | Export long-lived flows before they end |
| inactive-flow-timeout | 15s | Export idle flows promptly |
| packet-sampling | no | Enable packet sampling (uses sampling-interval / sampling-space below) |
| sampling-interval | 0 | 0 = all packets; N = sample 1 in every N packets |
| sampling-space | 0 | Packets to skip between samples (advanced sampling) |
For most deployments, leaving sampling-interval=0 (no sampling) gives accurate
counts. Enable sampling only on very high-throughput links where flow export
overhead is a concern.
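When sampling is enabled, the collector scales observed counters back up to estimate true totals. A sketch of the arithmetic (the `estimate_totals` helper is hypothetical, shown only to make the scaling explicit):

```python
# With sampling-interval=N, roughly 1 of every N packets is counted,
# so the collector multiplies observed counters by N to estimate totals.
def estimate_totals(sampled_packets, sampled_bytes, sampling_interval):
    n = max(sampling_interval, 1)  # 0 means "no sampling" in RouterOS
    return sampled_packets * n, sampled_bytes * n

pkts, byts = estimate_totals(sampled_packets=1_000, sampled_bytes=1_400_000,
                             sampling_interval=100)
print(pkts, byts)  # 100000 140000000
```

The estimate is statistically sound for large flows but can badly misrepresent small or short flows, which is another reason to leave sampling off unless throughput forces it.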
Step 2 — Add a Collector Target
```
/ip traffic-flow target
add dst-address=192.0.2.50 port=2055 version=ipfix
```

| Property | Default | Notes |
|---|---|---|
| dst-address | — | Collector IP address |
| port | 2055 | UDP port on the collector |
| version | 9 | Export format: 1, 5, 9, or ipfix |
| v9-template-refresh | 20 | Re-send template every N flow packets (v9/IPFIX) |
| v9-template-timeout | 30m | Re-send template after this interval even if packet count not reached |
Version guidance:
| Version | Use when |
|---|---|
| NetFlow v5 | Legacy collectors that do not support v9/IPFIX |
| NetFlow v9 | Broad compatibility; supports IPv6 and custom fields via templates |
| IPFIX | Recommended for modern collectors (ntopng, ElastiFlow); supports all RouterOS fields |
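The practical difference between the formats: v1/v5 use a fixed record layout, while v9/IPFIX are template-driven and cannot be decoded until a template packet arrives. As a sketch, the fixed 24-byte NetFlow v5 header can be parsed directly, which is handy for sanity-checking raw export packets (the parser below is illustrative, not a full decoder):

```python
import struct

# NetFlow v5 header: version, count, SysUptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval (24 bytes total,
# network byte order). Flow records follow the header.
V5_HEADER = struct.Struct("!HHIIIIBBH")

def parse_v5_header(datagram: bytes) -> dict:
    (version, count, _sys_uptime, _unix_secs, _unix_nsecs,
     flow_sequence, _engine_type, _engine_id, _sampling) = V5_HEADER.unpack_from(datagram)
    return {"version": version, "count": count, "flow_sequence": flow_sequence}

# Synthetic header: version 5, 3 flow records, sequence number 42.
fake = V5_HEADER.pack(5, 3, 0, 0, 0, 42, 0, 0, 0)
print(parse_v5_header(fake))  # {'version': 5, 'count': 3, 'flow_sequence': 42}
```

For v9 and IPFIX, the equivalent check is just reading the version field (9 or 10) from the first two bytes; the rest of the packet depends on templates.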
Multiple targets are supported — the same flows are exported to every target:
```
/ip traffic-flow target
add dst-address=192.0.2.60 port=2055 version=ipfix
add dst-address=192.0.2.70 port=9995 version=9
```

Step 3 — Configure IPFIX Fields (IPFIX/v9 Only)
RouterOS exports a global set of IPFIX Information Elements. All fields are enabled by default. View and adjust them with:

```
/ip traffic-flow ipfix print
/ip traffic-flow ipfix set nat-events=yes
```

The ipfix print output lists individual field toggles (e.g. bytes, packets, src-port, dst-port, nat-src-address, etc.). These apply to all IPFIX/v9 targets — there is no per-target template selection.
Supported IPFIX Information Elements
RouterOS exports the following fields (verified on 7.15.3):
| Category | Fields |
|---|---|
| Layer 2 | Source/destination MAC address |
| Layer 3 | Source/destination IP, source/destination prefix mask, protocol, TOS, TTL, gateway, IP total length |
| Layer 4 | Source/destination port, TCP flags, TCP seq/ack/window, UDP length, ICMP type/code, IGMP type |
| Counters | Packet count, byte count |
| Timing | First/last forwarded timestamps, system init time |
| Interfaces | Ingress/egress interface index |
| IPv6 | IPv6 flow label |
| NAT | Pre-NAT/post-NAT address and port; NAT events |
Step 4 — Verify Export
Check that flows are being generated:

```
/ip traffic-flow print
```

Confirm targets are receiving data. On the collector side, most tools show a “last received” timestamp per exporter. If no flows arrive:
- Confirm UDP reachability: /tool ping 192.0.2.50
- Check that enabled=yes and interfaces includes the active interface
- Verify the collector is listening on the configured port
- For IPFIX: ensure v9-template-refresh is low enough that the collector receives a template before timing out (default 20 packets is usually fine)
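Before debugging the collector software itself, it can help to confirm that flow datagrams are reaching the host at all. A minimal stand-in listener, run on the collector host with the real collector stopped (the helper name and port are illustrative):

```python
import socket

def wait_for_flow_packet(port=2055, timeout=30.0):
    """Bind UDP and report the first datagram seen, or a timeout message."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.bind(("0.0.0.0", port))
    try:
        data, (src, _sport) = sock.recvfrom(65535)
        # First two bytes carry the version for NetFlow v1/v5/v9; IPFIX uses 10.
        version = int.from_bytes(data[:2], "big")
        return f"{len(data)}-byte packet from {src}, version field {version}"
    except socket.timeout:
        return "no packets received — check firewall and target config"
    finally:
        sock.close()
```

If this sees packets but the collector does not, the problem is collector configuration rather than network path or RouterOS export.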
Collector Integration
ntopng
ntopng accepts IPFIX and NetFlow v9 natively. Configure a flow interface in ntopng pointing to UDP port 2055, then set RouterOS to export to the ntopng host:
```
/ip traffic-flow target
add dst-address=<ntopng-ip> port=2055 version=ipfix \
    v9-template-refresh=20 v9-template-timeout=30m
```

Use the default ipv4/ipv6 templates. Custom templates work but require matching field mappings in ntopng’s decoder configuration.
ElastiFlow
ElastiFlow works best with IPFIX using MikroTik’s default templates. The Elastic Agent or standalone ElastiFlow collector should listen on UDP 2055.

```
/ip traffic-flow target
add dst-address=<elastiflow-ip> port=2055 version=ipfix
```

If you see unknown fields in ElastiFlow dashboards, verify your ElastiFlow version includes MikroTik IE definitions, or switch to NetFlow v9 for broader compatibility.
nfdump / nfcapd (Linux)
Use NetFlow v5 or v9 for the widest nfdump compatibility:

```
/ip traffic-flow target
add dst-address=<nfcapd-ip> port=9995 version=5
```

Grafana (via InfluxDB)
RouterOS does not write directly to InfluxDB. You need an intermediate collector that transforms flow records into time-series data. A common open source option is pmacct (with the nfacctd daemon):

RouterOS ──IPFIX/NetFlow──► nfacctd (pmacct) ──► InfluxDB ──► Grafana

- Configure nfacctd to listen on UDP 2055 and write aggregated metrics (bytes, packets, src/dst IP, protocol) to InfluxDB using pmacct’s print_output: json + InfluxDB plugin.
- Point RouterOS at the nfacctd host:

```
/ip traffic-flow target
add dst-address=<pmacct-ip> port=2055 version=ipfix
```

- Build Grafana dashboards querying the InfluxDB measurement for top talkers, protocol distribution, and bandwidth over time.
ntopng path: ntopng 5.x exposes a REST API and Prometheus metrics endpoint. Grafana can scrape ntopng’s Prometheus endpoint directly:
RouterOS ──IPFIX──► ntopng ──Prometheus metrics──► Grafana

Enable the Prometheus endpoint in ntopng’s ntopng.conf:

```
--prometheus-port=7070
```

Then add a Prometheus data source in Grafana pointing to http://<ntopng-ip>:7070.
Elasticsearch (ElastiFlow / Logstash)
The MikroTik-maintained guide uses ElastiFlow as the Logstash pipeline that decodes IPFIX records and writes enriched documents to Elasticsearch:

RouterOS ──IPFIX──► ElastiFlow/Logstash ──► Elasticsearch ──► Kibana

RouterOS configuration (export to the Logstash host):

```
/ip traffic-flow
set enabled=yes interfaces=all cache-entries=4k \
    active-flow-timeout=1m inactive-flow-timeout=15s

/ip traffic-flow target
add dst-address=<logstash-ip> port=2055 version=ipfix \
    v9-template-refresh=20
```

ElastiFlow decodes MikroTik’s default ipv4/ipv6 IPFIX templates without extra field mapping. If you see unmapped fields in Kibana, confirm your ElastiFlow version is ≥ 4.x, which includes MikroTik enterprise element definitions. Switch to NetFlow v9 if you need broader compatibility with older ElastiFlow releases.
Selective Interface Monitoring
To export flows only from specific interfaces rather than all of them, create an interface list:

```
/interface list
add name=flow-interfaces

/interface list member
add list=flow-interfaces interface=ether1
add list=flow-interfaces interface=bridge

/ip traffic-flow
set enabled=yes interfaces=flow-interfaces
```

This is useful on routers with many internal-only or management interfaces where flow export would generate noise without adding value.
Example: Full Configuration
```
# Enable Traffic Flow on all interfaces
/ip traffic-flow
set enabled=yes interfaces=all cache-entries=16k \
    active-flow-timeout=30m inactive-flow-timeout=15s

# Export to ntopng (IPFIX) and a backup nfcapd (NetFlow v5)
/ip traffic-flow target
add dst-address=192.0.2.50 port=2055 version=ipfix \
    v9-template-refresh=20 v9-template-timeout=30m
add dst-address=192.0.2.51 port=9995 version=5
```

Traffic Analysis Patterns
Once flows are reaching a collector, the following query patterns are most useful for RouterOS deployments:
Top Talkers
Aggregate exported flows by source IP and sort by bytes descending over a time window (e.g. last hour). This identifies internal hosts generating the most traffic, useful for bandwidth accountability and spotting unexpected uploads.
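The aggregation itself is a simple group-by. A toy sketch over synthetic decoded flow records (the record shape is illustrative; real collectors operate on their own decoded schema):

```python
from collections import defaultdict

# Synthetic decoded flow records: (src_ip, dst_ip, bytes).
flows = [
    ("10.0.0.5",  "198.51.100.1", 50_000_000),
    ("10.0.0.9",  "203.0.113.7",   1_200_000),
    ("10.0.0.5",  "203.0.113.7",  30_000_000),
    ("10.0.0.12", "198.51.100.1",    400_000),
]

def top_talkers(records, n=10):
    """Group by source IP, sum bytes, return the top n descending."""
    totals = defaultdict(int)
    for src, _dst, nbytes in records:
        totals[src] += nbytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_talkers(flows, n=2))
# [('10.0.0.5', 80000000), ('10.0.0.9', 1200000)]
```

Swapping the group key to destination IP, protocol, or port gives the other patterns in this section.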
Top Destinations
Aggregate by destination IP or destination AS (RouterOS exports BGP AS numbers when the router is running BGP). Useful for understanding which external services dominate outbound traffic.
Protocol Distribution
Group flows by L4 protocol (TCP, UDP, ICMP) and by destination port to build a traffic mix profile. A sudden rise in UDP flows may indicate streaming, gaming, or amplification traffic.
Anomaly Signals
Watch for these deviations from baseline in your collector dashboards:
| Signal | Possible cause |
|---|---|
| Spike in total flows/s | Port scan, SYN flood |
| High unique source IPs with low bytes/flow | Distributed scan or spoofed traffic |
| Large single-source bytes/s above normal | Upload exfiltration, misconfigured backup |
| Surge in ICMP flows | Ping flood or network discovery sweep |
| Short-duration flows to many destinations | Malware C2 beaconing |
RouterOS provides the raw telemetry; baselining and alerting are performed in the collector (Kibana alerts, ntopng thresholds, Grafana alert rules).
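As a sketch of what such baselining looks like, the simplest useful rule flags a sample that sits well above the recent mean (all constants here are illustrative, not recommendations):

```python
from statistics import mean, stdev

def is_anomalous(history, current, k=3.0):
    """Flag `current` if it exceeds mean + k standard deviations of `history`."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    # Floor sigma so a perfectly flat baseline doesn't hair-trigger alerts.
    return current > mu + k * max(sigma, 1.0)

baseline = [120, 130, 125, 118, 127, 122, 131, 126]  # flows/s, recent samples
print(is_anomalous(baseline, 135))  # within normal variation
print(is_anomalous(baseline, 900))  # likely scan or flood
```

Production tools layer seasonality, per-host baselines, and multi-signal correlation on top of this idea, but the mean-plus-deviations threshold is the common core.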
Tuning Notes
- cache-entries: Default is 64k. On memory-constrained devices, reduce to 4k or 16k. On very high-connection-rate routers, the default is already generous; increase only if you observe dropped flow records. Each entry uses a small amount of RAM.
- active-flow-timeout: Shorter values (e.g. 1m) give finer-grained time-series data in your collector but increase export volume.
- inactive-flow-timeout: The default 15s works well. Very short values (1–2 s) can flood the collector on high-connection-rate routers.
- Template refresh: If your collector loses template mapping after a router reboot, lower v9-template-refresh to 5 or 10 so templates are resent quickly once flows resume.