
Syslog with Elasticsearch

Elasticsearch is a popular NoSQL database that can store a wide range of data, including Syslog data from RouterOS devices. Combined with Kibana, it creates a powerful tool to analyze Syslog data across multiple routers. This guide covers setting up Syslog log collection and analysis using Elastic integrations.

The typical deployment uses the following components:

| Component | IP Address | Purpose |
| --- | --- | --- |
| RouterOS device | 10.0.0.1 | Sends Syslog data |
| Elastic Agent (Custom UDP) | 10.0.0.2 | Ingests and processes Syslog data |
| Fleet Server | 10.0.0.3 | Manages Elastic Agents |
| Elasticsearch | 10.0.0.4 | Stores log data |
| Kibana | 10.0.0.5 | Visualizes and searches logs |

This guide uses the Custom UDP logs integration instead of Logstash, with Fleet Server managing the Elastic Agent that ingests the data.

All components (Elasticsearch, Kibana, Fleet Server, Custom UDP) can be installed on the same device for testing, but production deployments typically separate them for performance.

Before configuring RouterOS, ensure you have a running Elasticsearch cluster, Kibana, and a Fleet Server reachable from your Elastic Agent host (see the component table above).

The following steps configure the Elastic stack to receive Syslog from RouterOS.

  1. Log into Kibana and navigate to Fleet > Agent policies
  2. Click Create agent policy
  3. Name the policy (e.g., “Syslog policy”)
  4. Configure advanced settings as needed and create the policy

Alternatively, use the API:

POST kbn:/api/fleet/agent_policies
{
  "name": "Syslog policy",
  "description": "",
  "namespace": "default",
  "monitoring_enabled": ["logs", "metrics"],
  "inactivity_timeout": 1209600,
  "is_protected": false
}
Next, add the Custom UDP logs integration to the policy:

  1. Open your created agent policy
  2. Click Add integration
  3. Search for “Custom UDP logs” and add it
  4. Configure:
    • Listen Address: IP of your server (e.g., 10.0.0.2)
    • Listen Port: 5514 (an unprivileged alternative to the standard Syslog port 514)
    • Dataset name: routeros
    • Ingest Pipeline: logs-routeros@custom
    • Syslog Parsing: Yes
  5. Save the integration
  6. Follow the Elastic Agent installation instructions to deploy the agent to your host

The ingest pipeline parses RouterOS Syslog messages into structured fields.

  1. In Kibana, go to Stack Management > Ingest Pipelines
  2. Click Create pipeline > New pipeline
  3. Name: logs-routeros@custom
  4. Click Import processors and paste:
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "^first L2TP UDP packet received from %{IP:source.ip}$",
          "^login failure for user %{USERNAME:user.name} from %{IP:source.ip} via %{DATA:service.name}$",
          "^%{USERNAME:user.name} logged in, %{IP:client.ip} from %{IP:source.ip}$",
          "^dhcp alert on %{DATA}: discovered unknown dhcp server, mac %{MAC:source.mac}, ip %{IP:source.ip}$",
          "in:%{DATA} out:%{DATA}, ?(connection-state:%{DATA},|)?(src-mac %{MAC:source.mac},|) proto %{DATA:network.transport} \\(%{DATA}\\), %{IP:source.ip}:?(%{INT:source.port}|)->%{IP:destination.ip}:?(%{INT:destination.port}|), len %{INT:network.bytes}$",
          "in:%{DATA} out:%{DATA}, ?(connection-state:%{DATA},|)?(src-mac %{MAC:source.mac},|) proto %{DATA:network.transport}, %{IP:source.ip}:?(%{INT:source.port}|)->%{IP:destination.ip}:?(%{INT:destination.port}|), len %{INT:network.bytes}$",
          "^%{DATA:network.name} (deassigned|assigned) %{IP:client.ip} for %{MAC:client.mac} %{DATA}$",
          "^%{DATA:user.name} logged out, %{INT:event.duration} %{INT} %{INT} %{INT} %{INT} from %{IP:client.ip}$",
          "^user %{DATA:user.name} logged out from %{IP:source.ip} via %{DATA:service.name}$",
          "^user %{DATA:user.name} logged in from %{IP:source.ip} via %{DATA:service.name}$",
          "^%{DATA:network.name} client %{MAC:client.mac} declines IP address %{IP:client.ip}$",
          "^%{DATA:network.name} link up \\(speed %{DATA}\\)$",
          "^%{DATA:network.name} link down$",
          "^user %{DATA:user.name} authentication failed$",
          "^%{DATA:network.name} fcs error on link$",
          "^phase1 negotiation failed due to time up %{IP:source.ip}\\[%{INT:source.port}\\]<=>%{IP:destination.ip}\\[%{INT:destination.port}\\] %{DATA}:%{DATA}$",
          "^%{DATA:network.name} (learning|forwarding)$",
          "^user %{DATA:user.name} is already active$",
          "^%{GREEDYDATA}$"
        ]
      }
    },
    {"lowercase": {"field": "network.transport", "ignore_missing": true}},
    {
      "append": {
        "field": "event.category",
        "value": ["authentication"],
        "if": "ctx.message =~ /(login failure for user|logged in from|logged in,)/"
      }
    },
    {
      "append": {
        "field": "event.outcome",
        "value": ["success"],
        "if": "ctx.message =~ /(logged in from|logged in,)/"
      }
    },
    {
      "append": {
        "field": "event.outcome",
        "value": ["failure"],
        "if": "ctx.message =~ /(login failure for user)/"
      }
    },
    {
      "append": {
        "field": "event.category",
        "value": ["network"],
        "if": "ctx.message =~ /( fcs error on link| link down| link up)/"
      }
    },
    {
      "append": {
        "field": "event.outcome",
        "value": ["failure"],
        "if": "ctx.message =~ /( fcs error on link)/"
      }
    },
    {
      "append": {
        "field": "event.category",
        "value": ["session"],
        "if": "ctx.message =~ /(logged out)/"
      }
    },
    {
      "append": {
        "field": "event.category",
        "value": ["threat"],
        "if": "ctx.message =~ /(from address that has not seen before)/"
      }
    },
    {
      "append": {
        "field": "service.name",
        "value": ["l2tp"],
        "if": "ctx.message =~ /(^L2TP\\/IPsec VPN)/"
      }
    },
    {"geoip": {"field": "source.ip", "target_field": "source.geo", "ignore_missing": true}},
    {"geoip": {"field": "destination.ip", "target_field": "destination.geo", "ignore_missing": true}},
    {"geoip": {"field": "client.ip", "target_field": "client.geo", "ignore_missing": true}}
  ]
}
  5. Save the pipeline
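Grok patterns can be hard to debug blind. The following Python sketch hand-translates the login-failure pattern above into a plain regex so you can check sample messages locally; the character classes only approximate grok's USERNAME/IP/DATA sub-patterns, and the sample message is illustrative.

```python
import re

# Hand-translated equivalent of the grok pattern
# "^login failure for user %{USERNAME:user.name} from %{IP:source.ip} via %{DATA:service.name}$"
# (approximate: real grok IP/USERNAME patterns are stricter).
LOGIN_FAILURE = re.compile(
    r"^login failure for user (?P<user_name>\S+) "
    r"from (?P<source_ip>\d{1,3}(?:\.\d{1,3}){3}) "
    r"via (?P<service_name>.+)$"
)

msg = "login failure for user admin from 203.0.113.7 via ssh"
m = LOGIN_FAILURE.match(msg)
# Rename captured groups to the dotted ECS field names the pipeline produces
fields = {k.replace("_", "."): v for k, v in m.groupdict().items()}
print(fields)
# {'user.name': 'admin', 'source.ip': '203.0.113.7', 'service.name': 'ssh'}
```

For patterns beyond this one, Kibana's Grok Debugger (under Dev Tools) tests the actual grok syntax rather than an approximation.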
With the pipeline in place, create a component template that applies it and defines field mappings:

  1. Go to Stack Management > Index Management > Component templates
  2. Create a new template named logs-routeros@custom
  3. Under Index settings:
{
  "index": {
    "lifecycle": {
      "name": "logs"
    },
    "default_pipeline": "logs-routeros@custom"
  }
}
  4. Under Mappings, click Load JSON and paste:
{
  "dynamic_templates": [],
  "properties": {
    "service": {"type": "object", "properties": {"name": {"type": "keyword"}}},
    "destination": {"type": "object", "properties": {
      "port": {"type": "long"}, "ip": {"type": "ip"}
    }},
    "host": {"type": "object", "properties": {"ip": {"type": "ip"}}},
    "client": {"type": "object", "properties": {
      "ip": {"type": "ip"}, "mac": {"type": "keyword"}
    }},
    "source": {"type": "object", "properties": {
      "geo": {"type": "object", "properties": {
        "continent_name": {"type": "keyword", "ignore_above": 1024},
        "region_iso_code": {"type": "keyword", "ignore_above": 1024},
        "city_name": {"type": "keyword", "ignore_above": 1024},
        "country_iso_code": {"type": "keyword", "ignore_above": 1024},
        "country_name": {"type": "keyword", "ignore_above": 1024},
        "location": {"type": "geo_point"},
        "region_name": {"type": "keyword", "ignore_above": 1024}
      }},
      "as": {"type": "object", "properties": {
        "number": {"type": "long"},
        "organization": {"type": "object", "properties": {
          "name": {"type": "keyword", "fields": {"text": {"type": "match_only_text"}}}
        }}
      }},
      "address": {"type": "keyword", "ignore_above": 1024},
      "port": {"type": "long"},
      "domain": {"type": "keyword", "ignore_above": 1024},
      "ip": {"type": "ip"},
      "mac": {"type": "keyword"}
    }},
    "event": {"type": "object", "properties": {
      "duration": {"type": "long"},
      "category": {"type": "keyword"},
      "outcome": {"type": "keyword"}
    }},
    "message": {"type": "match_only_text"},
    "user": {"type": "object", "properties": {"name": {"type": "keyword"}}},
    "network": {"type": "object", "properties": {
      "bytes": {"type": "long"},
      "name": {"type": "keyword"},
      "transport": {"type": "keyword"}
    }},
    "tags": {"type": "keyword", "ignore_above": 1024}
  }
}
  5. Save the component template
Finally, create an index template that combines the built-in and custom component templates:

  1. Go to Stack Management > Index Management > Index templates
  2. Create a new template named logs-routeros
  3. Set Index patterns to logs-routeros-*
  4. Under Component templates, add:
    • logs@settings
    • logs-routeros@custom
    • ecs@mappings
  5. Save the template

Ensure UDP port 5514 is open on your Elastic Agent host and any firewalls between RouterOS and the agent.
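To confirm the UDP path before touching the router, you can push a hand-crafted Syslog datagram yourself. The Python sketch below is self-contained: the listener stands in for the Custom UDP integration and the sender for the router. On a real deployment you would run only the sender and point it at the agent host (e.g. 10.0.0.2:5514); the address and message text here are placeholders.

```python
import socket

HOST, PORT = "127.0.0.1", 5514  # placeholder; substitute the Elastic Agent host

# Listener standing in for the Custom UDP logs integration
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind((HOST, PORT))
listener.settimeout(2)

# Sender standing in for the RouterOS device;
# "<134>" is a BSD-syslog priority prefix (facility local0, severity info)
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"<134>router1 system,info test message", (HOST, PORT))

data, addr = listener.recvfrom(4096)
print(data.decode())  # <134>router1 system,info test message
sender.close()
listener.close()
```

If the message arrives at the agent but never shows up in Kibana, the problem is downstream of the network (integration config, pipeline, or index template) rather than connectivity.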

Configure RouterOS to send Syslog messages to your Elastic Agent.

Set up a remote logging action pointing to your Elastic Agent:

/system logging action
set [find where name="remote"] bsd-syslog=yes remote=10.0.0.2 remote-port=5514 syslog-facility=syslog

This configures the default “remote” action to send BSD-format Syslog to 10.0.0.2 on port 5514.

Configure which events to send to the remote Syslog server:

/system logging
add action=remote topics=info
add action=remote topics=error
add action=remote topics=critical
add action=remote topics=warning
add action=remote topics=bridge,stp

Add topics based on what you need to monitor. Common topics include:

  • info - General informational messages
  • error - Error conditions
  • critical - Critical system events
  • warning - Warning conditions
  • firewall - Firewall events
  • interface - Interface state changes
  • ip - IP-related events

After configuration, Syslog data should start flowing to Elasticsearch almost immediately.

  1. Log into Kibana
  2. Navigate to Discover
  3. Add a filter:
    • Field: data_stream.dataset
    • Operator: IS
    • Value: routeros
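That Discover filter boils down to a term query on the dataset field. A hedged sketch of the equivalent query body you could POST to `/logs-routeros-*/_search` follows; the 15-minute time-range clause is an assumption mirroring Discover's default time scoping, not part of the filter itself.

```python
import json

# Equivalent of the Discover filter: a term clause on data_stream.dataset,
# combined with a recent time window (Discover always scopes to a time range).
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"data_stream.dataset": "routeros"}},
                {"range": {"@timestamp": {"gte": "now-15m"}}},
            ]
        }
    }
}
print(json.dumps(query, indent=2))
```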

Add these fields to your Discover view for better analysis:

  • message - The raw Syslog message
  • log.syslog.hostname - Router hostname
  • source.ip - Source IP address
  • destination.ip - Destination IP address
  • network.transport - Protocol (TCP, UDP, etc.)
  • event.category - Event category (authentication, network, session)
  • event.outcome - Event outcome (success, failure)

Save your search for quick access:

  1. Configure your desired columns and filters
  2. Click Save and give it a name

Kibana alerts help notify you of important events:

  1. Go to Alerts > Create rule
  2. Select a rule type:
    • Spike in failed logon events - Alert on excessive failed login attempts
    • Threshold rule - Custom threshold for specific events
  3. Configure connectors (email, webhook, Slack, etc.) to receive notifications

Example: Create a threshold alert for failed login attempts:

  1. Create a new threshold rule
  2. Set field to event.category = authentication
  3. Set condition where event.outcome = failure
  4. Set threshold (e.g., > 5 in 5 minutes)
  5. Configure notification connector
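The threshold rule's logic can be sketched in a few lines. This is an illustration of what Kibana evaluates server-side, not a real Elasticsearch query; the event shapes below are simplified documents carrying the `event.category`/`event.outcome` values the ingest pipeline appends.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)  # the rule's look-back window
THRESHOLD = 5                  # alert when failures exceed this count

def should_alert(events, now):
    # Count failed-authentication events inside the sliding window
    failures = [
        e for e in events
        if "authentication" in e["event"]["category"]
        and "failure" in e["event"]["outcome"]
        and now - e["@timestamp"] <= WINDOW
    ]
    return len(failures) > THRESHOLD

now = datetime(2024, 1, 1, 12, 0, 0)
events = [
    {"@timestamp": now - timedelta(seconds=30 * i),
     "event": {"category": ["authentication"], "outcome": ["failure"]}}
    for i in range(6)
]
print(should_alert(events, now))  # True: 6 failures within 5 minutes
```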
If no data appears in Kibana, work through these checks:

  1. Verify RouterOS logging configuration:

    /system logging print
    /system logging action print
  2. Check Elastic Agent is running:

    systemctl status elastic-agent
  3. Verify UDP port 5514 is open:

    ss -uap | grep 5514
  4. Check Elastic Agent logs:

    journalctl -u elastic-agent
  5. Verify index template was created:

    GET /logs-routeros-*/_mapping

If messages appear unparsed in Kibana:

  1. Check the ingest pipeline is correctly configured
  2. Verify the pipeline is set as the default for the index
  3. Test grok patterns using Kibana’s Grok Debugger

If no data is received at all, ensure:

  • RouterOS can reach 10.0.0.2:5514 (UDP)
  • No firewall blocking UDP port 5514
  • Elastic Agent is bound to the correct interface