Syslog with Elasticsearch
Elasticsearch is a distributed search and analytics engine that can store a wide range of data, including Syslog data from RouterOS devices. Combined with Kibana, it makes a powerful tool for analyzing Syslog data across multiple routers. This guide covers setting up Syslog log collection and analysis using Elastic integrations.
Architecture Overview
The typical deployment uses the following components:
| Component | IP Address | Purpose |
|---|---|---|
| RouterOS device | 10.0.0.1 | Sends Syslog data |
| Elastic Agent (Custom UDP) | 10.0.0.2 | Ingests and processes Syslog data |
| Fleet Server | 10.0.0.3 | Manages Elastic Agents |
| Elasticsearch | 10.0.0.4 | Stores log data |
| Kibana | 10.0.0.5 | Visualizes and searches logs |
This guide uses the Custom UDP logs integration instead of Logstash, with Fleet Server managing the data pipeline.
All components (Elasticsearch, Kibana, Fleet Server, Custom UDP) can be installed on the same device for testing, but production deployments typically separate them for performance.
Prerequisites
Before configuring RouterOS, ensure you have:
- Elasticsearch - Set up per Elastic’s documentation. Production deployments should use a cluster.
- Kibana - Installed per Elastic’s documentation. Can be co-located with Elasticsearch or on a separate server.
- Fleet Server - Set up per Elastic’s documentation. Elastic recommends installing on a separate server from Elasticsearch for production.
Elastic Configuration
The following steps configure the Elastic stack to receive Syslog from RouterOS.
Create Agent Policy
- Log into Kibana and navigate to Fleet > Agent policies
- Click Create agent policy
- Name the policy (e.g., “Syslog policy”)
- Configure advanced settings as needed and create the policy
Alternatively, use the API:
```
POST kbn:/api/fleet/agent_policies
{
  "name": "Syslog policy",
  "description": "",
  "namespace": "default",
  "monitoring_enabled": ["logs", "metrics"],
  "inactivity_timeout": 1209600,
  "is_protected": false
}
```

Add Custom UDP Logs Integration
- Open your created agent policy
- Click Add integration
- Search for “Custom UDP logs” and add it
- Configure:
  - Listen Address: IP of your server (e.g., 10.0.0.2)
  - Listen Port: 5514 (an unprivileged alternative to the standard Syslog port 514)
  - Dataset name: routeros
  - Ingest Pipeline: logs-routeros@custom
  - Syslog Parsing: Yes
- Save the integration
- Follow the Elastic Agent installation instructions to deploy the agent to your host
Create Ingest Pipeline
The ingest pipeline parses RouterOS Syslog messages into structured fields.
- In Kibana, go to Stack Management > Ingest Pipelines
- Click Create pipeline > New pipeline
- Name: logs-routeros@custom
- Click Import processors and paste:
```json
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "^first L2TP UDP packet received from %{IP:source.ip}$",
          "^login failure for user %{USERNAME:user.name} from %{IP:source.ip} via %{DATA:service.name}$",
          "^%{USERNAME:user.name} logged in, %{IP:client.ip} from %{IP:source.ip}$",
          "^dhcp alert on %{DATA}: discovered unknown dhcp server, mac %{MAC:source.mac}, ip %{IP:source.ip}$",
          "in:%{DATA} out:%{DATA}, ?(connection-state:%{DATA},|)?(src-mac %{MAC:source.mac},|) proto %{DATA:network.transport} \\(%{DATA}\\), %{IP:source.ip}:?(%{INT:source.port}|)->%{IP:destination.ip}:?(%{INT:destination.port}|), len %{INT:network.bytes}$",
          "in:%{DATA} out:%{DATA}, ?(connection-state:%{DATA},|)?(src-mac %{MAC:source.mac},|) proto %{DATA:network.transport}, %{IP:source.ip}:?(%{INT:source.port}|)->%{IP:destination.ip}:?(%{INT:destination.port}|), len %{INT:network.bytes}$",
          "^%{DATA:network.name} (deassigned|assigned) %{IP:client.ip} for %{MAC:client.mac} %{DATA}$",
          "^%{DATA:user.name} logged out, %{INT:event.duration} %{INT} %{INT} %{INT} %{INT} from %{IP:client.ip}$",
          "^user %{DATA:user.name} logged out from %{IP:source.ip} via %{DATA:service.name}$",
          "^user %{DATA:user.name} logged in from %{IP:source.ip} via %{DATA:service.name}$",
          "^%{DATA:network.name} client %{MAC:client.mac} declines IP address %{IP:client.ip}$",
          "^%{DATA:network.name} link up \\(speed %{DATA}\\)$",
          "^%{DATA:network.name} link down$",
          "^user %{DATA:user.name} authentication failed$",
          "^%{DATA:network.name} fcs error on link$",
          "^phase1 negotiation failed due to time up %{IP:source.ip}\\[%{INT:source.port}\\]<=>%{IP:destination.ip}\\[%{INT:destination.port}\\] %{DATA}:%{DATA}$",
          "^%{DATA:network.name} (learning|forwarding)$",
          "^user %{DATA:user.name} is already active$",
          "^%{GREEDYDATA}$"
        ]
      }
    },
    { "lowercase": { "field": "network.transport", "ignore_missing": true } },
    {
      "append": {
        "field": "event.category",
        "value": ["authentication"],
        "if": "ctx.message =~ /(login failure for user|logged in from|logged in,)/"
      }
    },
    {
      "append": {
        "field": "event.outcome",
        "value": ["success"],
        "if": "ctx.message =~ /(logged in from|logged in,)/"
      }
    },
    {
      "append": {
        "field": "event.outcome",
        "value": ["failure"],
        "if": "ctx.message =~ /(login failure for user)/"
      }
    },
    {
      "append": {
        "field": "event.category",
        "value": ["network"],
        "if": "ctx.message =~ /( fcs error on link| link down| link up)/"
      }
    },
    {
      "append": {
        "field": "event.outcome",
        "value": ["failure"],
        "if": "ctx.message =~ /( fcs error on link)/"
      }
    },
    {
      "append": {
        "field": "event.category",
        "value": ["session"],
        "if": "ctx.message =~ /(logged out)/"
      }
    },
    {
      "append": {
        "field": "event.category",
        "value": ["threat"],
        "if": "ctx.message =~ /(from address that has not seen before)/"
      }
    },
    {
      "append": {
        "field": "service.name",
        "value": ["l2tp"],
        "if": "ctx.message =~ /(^L2TP\\/IPsec VPN)/"
      }
    },
    { "geoip": { "field": "source.ip", "target_field": "source.geo", "ignore_missing": true } },
    { "geoip": { "field": "destination.ip", "target_field": "destination.geo", "ignore_missing": true } },
    { "geoip": { "field": "client.ip", "target_field": "client.geo", "ignore_missing": true } }
  ]
}
```

- Save the pipeline
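Once saved, the pipeline can be exercised against a sample message from Kibana Dev Tools using Elasticsearch's _simulate API (the sample user name and IP below are illustrative):

```
POST _ingest/pipeline/logs-routeros@custom/_simulate
{
  "docs": [
    { "_source": { "message": "login failure for user admin from 192.0.2.10 via ssh" } }
  ]
}
```

The response should show user.name, source.ip, and service.name extracted by the grok processor, plus event.category and event.outcome added by the append processors.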
Create Component Template
- Go to Stack Management > Index Management > Component templates
- Create a new template named logs-routeros@custom
- Under Index settings, paste:

```json
{
  "index": {
    "lifecycle": { "name": "logs" },
    "default_pipeline": "logs-routeros@custom"
  }
}
```

- Under Mappings, click Load JSON and paste:

```json
{
  "dynamic_templates": [],
  "properties": {
    "service": { "type": "object", "properties": { "name": { "type": "keyword" } } },
    "destination": { "type": "object", "properties": { "port": { "type": "long" }, "ip": { "type": "ip" } } },
    "host": { "type": "object", "properties": { "ip": { "type": "ip" } } },
    "client": { "type": "object", "properties": { "ip": { "type": "ip" }, "mac": { "type": "keyword" } } },
    "source": {
      "type": "object",
      "properties": {
        "geo": {
          "type": "object",
          "properties": {
            "continent_name": { "type": "keyword", "ignore_above": 1024 },
            "region_iso_code": { "type": "keyword", "ignore_above": 1024 },
            "city_name": { "type": "keyword", "ignore_above": 1024 },
            "country_iso_code": { "type": "keyword", "ignore_above": 1024 },
            "country_name": { "type": "keyword", "ignore_above": 1024 },
            "location": { "type": "geo_point" },
            "region_name": { "type": "keyword", "ignore_above": 1024 }
          }
        },
        "as": {
          "type": "object",
          "properties": {
            "number": { "type": "long" },
            "organization": {
              "type": "object",
              "properties": {
                "name": { "type": "keyword", "fields": { "text": { "type": "match_only_text" } } }
              }
            }
          }
        },
        "address": { "type": "keyword", "ignore_above": 1024 },
        "port": { "type": "long" },
        "domain": { "type": "keyword", "ignore_above": 1024 },
        "ip": { "type": "ip" },
        "mac": { "type": "keyword" }
      }
    },
    "event": { "type": "object", "properties": { "duration": { "type": "long" }, "category": { "type": "keyword" }, "outcome": { "type": "keyword" } } },
    "message": { "type": "match_only_text" },
    "user": { "type": "object", "properties": { "name": { "type": "keyword" } } },
    "network": { "type": "object", "properties": { "bytes": { "type": "long" }, "name": { "type": "keyword" }, "transport": { "type": "keyword" } } },
    "tags": { "type": "keyword", "ignore_above": 1024 }
  }
}
```

- Save the component template
Create Index Template
- Go to Stack Management > Index Management > Index templates
- Create a new template named logs-routeros
- Set Index patterns to logs-routeros-*
- Under Component templates, add:
  - logs@settings
  - logs-routeros@custom
  - ecs@mappings
- Save the template
Firewall Configuration
Ensure UDP port 5514 is open on your Elastic Agent host and on any firewalls between RouterOS and the agent.
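To confirm the listener is reachable before touching the router, you can hand-craft a BSD-syslog datagram and send it over UDP. This is a minimal Python sketch (the helper name and test message are illustrative; facility 5 matches the syslog-facility=syslog setting used later in this guide):

```python
import socket

def bsd_syslog(message, facility=5, severity=6, tag="router"):
    """Build a minimal BSD-style (RFC 3164) Syslog payload.

    PRI = facility * 8 + severity; facility 5 = syslog, severity 6 = info.
    """
    return f"<{facility * 8 + severity}>{tag}: {message}".encode()

# Send one test datagram to the Custom UDP listener.
# Replace 127.0.0.1 with your Elastic Agent host (10.0.0.2 in this guide).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(bsd_syslog("test message from workstation"), ("127.0.0.1", 5514))
sock.close()
```

If the test message never appears in Discover, the problem is on the Elastic side rather than on the router.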
RouterOS Configuration
Configure RouterOS to send Syslog messages to your Elastic Agent.
Configure Logging Action
Set up a remote logging action pointing to your Elastic Agent:
```
/system logging action
set [find where name="remote"] bsd-syslog=yes remote=10.0.0.2 remote-port=5514 syslog-facility=syslog
```

This configures the default “remote” action to send BSD-format Syslog to 10.0.0.2 on port 5514.
Add Logging Topics
Configure which events to send to the remote Syslog server:
```
/system logging
add action=remote topics=info
add action=remote topics=error
add action=remote topics=critical
add action=remote topics=warning
add action=remote topics=bridge,stp
```

Add topics based on what you need to monitor. Common topics include:
- info - General informational messages
- error - Error conditions
- critical - Critical system events
- warning - Warning conditions
- firewall - Firewall events
- interface - Interface state changes
- ip - IP-related events
After configuration, Syslog data should begin flowing to Elasticsearch immediately.
Using Kibana
Viewing Logs
- Log into Kibana
- Navigate to Discover
- Add a filter:
  - Field: data_stream.dataset
  - Operator: IS
  - Value: routeros
Useful Fields
Add these fields to your Discover view for better analysis:
- message - The raw Syslog message
- log.syslog.hostname - Router hostname
- source.ip - Source IP address
- destination.ip - Destination IP address
- network.transport - Protocol (TCP, UDP, etc.)
- event.category - Event category (authentication, network, session)
- event.outcome - Event outcome (success, failure)
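With event.category and event.outcome populated by the ingest pipeline, these fields can also be combined in Discover’s KQL search bar. For example, this query (assuming the dataset name routeros configured earlier) narrows the view to failed logins:

```
data_stream.dataset : "routeros" and event.category : "authentication" and event.outcome : "failure"
```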
Saving Searches
Save your search for quick access:
- Configure your desired columns and filters
- Click Save and give it a name
Creating Alerts
Kibana alerts help notify you of important events:
- Go to Alerts > Create rule
- Select a rule type:
- Spike in failed logon events - Alert on excessive failed login attempts
- Threshold rule - Custom threshold for specific events
- Configure connectors (email, webhook, Slack, etc.) to receive notifications
Example: Create a threshold alert for failed SSH logins:
- Create a new threshold rule
- Set field to event.category=authentication
- Set condition where event.outcome=failure
- Set threshold (e.g., > 5 in 5 minutes)
- Configure notification connector
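To sanity-check the numbers such a rule would fire on, the same condition can be counted directly in Kibana Dev Tools (a sketch using Elasticsearch’s _count API):

```
GET logs-routeros-*/_count
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "event.category": "authentication" } },
        { "term": { "event.outcome": "failure" } }
      ]
    }
  }
}
```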
Troubleshooting
Logs Not Appearing in Kibana

- Verify RouterOS logging configuration:

```
/system logging print
/system logging action print
```

- Check that the Elastic Agent is running:

```
systemctl status elastic-agent
```

- Verify UDP port 5514 is open and the agent is listening:

```
ss -uap | grep 5514
```

- Check the Elastic Agent logs:

```
journalctl -u elastic-agent
```

- Verify the index template was created (via Kibana Dev Tools):

```
GET /logs-routeros-*/_mapping
```
Parsing Issues
If messages appear unparsed in Kibana:
- Check the ingest pipeline is correctly configured
- Verify the pipeline is set as the default for the index
- Test grok patterns using Kibana’s Grok Debugger
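Grok patterns compile down to regular expressions, so a quick local approximation can help isolate a misbehaving pattern before editing the pipeline. This Python sketch uses a hypothetical regex that loosely mirrors the pipeline’s “login failure” grok pattern and shows the fields it should yield:

```python
import re

# Rough regex equivalent of the pipeline's grok pattern:
# ^login failure for user %{USERNAME:user.name} from %{IP:source.ip} via %{DATA:service.name}$
LOGIN_FAILURE = re.compile(
    r"^login failure for user (?P<user_name>\S+) "
    r"from (?P<source_ip>\d{1,3}(?:\.\d{1,3}){3}) via (?P<service_name>.+)$"
)

m = LOGIN_FAILURE.match("login failure for user admin from 192.0.2.10 via ssh")
print(m.groupdict())
# {'user_name': 'admin', 'source_ip': '192.0.2.10', 'service_name': 'ssh'}
```

If a sample message matches locally but not in the pipeline, check for leading Syslog header text that the agent’s Syslog parsing should have stripped.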
Network Connectivity
Ensure:
- RouterOS can reach 10.0.0.2:5514 (UDP)
- No firewall blocking UDP port 5514
- Elastic Agent is bound to the correct interface
Related Topics
- System Logging - RouterOS local logging configuration
- NetFlow with Elasticsearch - Flow-based analysis with Elastic
- CEF with Elasticsearch - Common Event Format logging