OSSEC Log Management with Elasticsearch

by 날으는물고기 2013. 11. 23.


Log Management System Architecture

The OSSEC log management system I’ll discuss here relies on three open source technologies, in addition to OSSEC:

  • Logstash – Parses and stores syslog data to Elasticsearch
  • Elasticsearch – General purpose indexing and data storage system
  • Kibana – User interface that comes with Elasticsearch

Logstash is configured to receive OSSEC syslog output, then parse it and forward it to Elasticsearch for indexing and long-term storage. Kibana is designed to make it easy to submit queries to Elasticsearch and display the results in a number of user-designed dashboards. So the steps involved in building an OSSEC log management system with Elasticsearch are:

  1. Configure OSSEC to output alerts to syslog.
  2. Install and configure Logstash to input OSSEC alerts, parse them and input the fields to Elasticsearch.
  3. Install and configure Elasticsearch to store OSSEC alerts from Logstash.
  4. Install and configure Kibana to work with Elasticsearch.

Configure OSSEC Syslog Output

To keep this article as brief as possible, I won’t go over how to install OSSEC. That is well documented on the OSSEC Project website. To configure OSSEC to send alerts to another system via syslog follow these steps:

  1. Login as root to the OSSEC server.
  2. Open /var/ossec/etc/ossec.conf in an editor.
  3. Let’s assume you want to send the alerts to a syslog server at 10.0.0.1 listening on UDP port 9000. Add these lines to ossec.conf right above the closing </ossec_config> statement:
    <syslog_output>
       <server>10.0.0.1</server>
       <port>9000</port>
       <format>default</format>
    </syslog_output>
  4. Enable syslog output with this command:
    /var/ossec/bin/ossec-control enable client-syslog
  5. Restart the OSSEC server with this command:
    /var/ossec/bin/ossec-control restart
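
With syslog output enabled, each alert is forwarded as a single syslog message. For reference, a default-format alert arriving at the collector looks roughly like the following made-up example (the hostnames, rule number, timestamp and trailing log content are all illustrative):

    Feb 12 10:00:00 ossec-server ossec: Alert Level: 3; Rule: 5715 - SSHD authentication success.; Location: (web01) 10.0.0.5->/var/log/secure; srcip: 192.168.1.33; user: admin;

This is the message shape that the Logstash grok pattern in the next section is written to parse.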

Install and Configure Logstash

Now Logstash needs to be configured to receive OSSEC syslog output on UDP port 9000, or whatever port you decide to use. The easiest way to do that is to use the precompiled Logstash JAR, which includes all the necessary core functionality and plugins. The version of Logstash used for this article was logstash-1.3.3-flatjar.jar. Note that Logstash as of version 1.4.x is run differently than documented here.
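
If you don’t already have the flatjar, it was distributed from the same download site as the other components in this article; at the time, a command along these lines fetched it (the exact URL is an assumption based on that site’s naming convention):

wget https://download.elasticsearch.org/logstash/logstash/logstash-1.3.3-flatjar.jar --no-check-certificate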

The configuration file you need to capture and parse syslog input is adapted from the rsyslog recipe in the Logstash cookbook, with a few tweaks for OSSEC derived from a blog post by Dan Parriott, my colleague on the OSSEC Project team and an early adopter of Logstash and Elasticsearch:

 1  input {
 2  # stdin{}
 3    udp {
 4       port => 9000
 5       type => "syslog"
 6    }
 7  }
 8
 9  filter {
10    if [type] == "syslog" {
11      grok {
12        match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_host} %{DATA:syslog_program}: Alert Level: %{BASE10NUM:Alert_Level}; Rule: %{BASE10NUM:Rule} - %{GREEDYDATA:Description}; Location: %{GREEDYDATA:Details}" }
13        add_field => [ "ossec_server", "%{host}" ]
14      }
15      mutate {
16        remove_field => [ "syslog_hostname", "syslog_message", "syslog_pid", "message", "@version", "type", "host" ]
17      }
18    }
19  }
20
21  output {
22  #  stdout {
23  #    codec => rubydebug
24  #  }
25     elasticsearch_http {
26       host => "10.0.0.1"
27     }
28  }

Lines [1 – 7] Every Logstash configuration file contains input, filter and output sections. The input section in this case tells Logstash to listen for syslog UDP packets on any IP address on port 9000. For debugging, you can uncomment line 2 to take input from stdin instead, which is handy when testing the parsing code in the filter section.

Lines [9 – 11] The filter section breaks up the incoming syslog lines, which Logstash places in the input field called “message”, using the match directive. Logstash grok filters do the basic pattern matching and parsing. You can get a detailed explanation of how grok works on the Logstash grok documentation page. The syntax for parsing a field is %{<pattern>:<field>}, where <pattern> is the grok pattern to match and <field> is the name given to the captured value. For example, %{BASE10NUM:Alert_Level} captures the digits following "Alert Level: " into a field named Alert_Level.

Line [12] The syslog_timestamp, syslog_host and syslog_program fields are parsed first. The next three fields are specific to OSSEC: Alert_Level, Rule and Description. The remainder of the message is placed into Details. Here is the parsing sequence for these fields:

  1. Alert_Level – skip past the "Alert Level: " string, then extract the numeric characters that follow.
  2. Rule – skip past the "Rule: " string, then extract the numeric characters up to the " - " string.
  3. Description – skip past the " - " string, then extract any characters, including spaces, up to the "; Location: " string.
  4. Details – skip past the "; Location: " string, then extract the remaining characters, including spaces, from the original "message" field.
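
Applied to the illustrative alert shown in the OSSEC section above, the grok pattern would produce fields along these lines (the values come from that made-up example):

    syslog_timestamp: Feb 12 10:00:00
    syslog_host:      ossec-server
    syslog_program:   ossec
    Alert_Level:      3
    Rule:             5715
    Description:      SSHD authentication success.
    Details:          (web01) 10.0.0.5->/var/log/secure; srcip: 192.168.1.33; user: admin;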

Line [13] The host field, which contains the address of the system that sent the syslog message (that is, the OSSEC server), is mapped to the ossec_server field with the add_field directive in grok.

Lines [15 – 17] Once all the fields have been parsed, the extraneous fields are trimmed from the output with the remove_field directive in the mutate block.

Lines [21 – 24] The output section sends the parsed output to Elasticsearch or to stdout. You can uncomment the stdout block, with its codec => rubydebug setting, to pretty-print the parsed fields for debugging.

Lines [25 – 26] The elasticsearch_http directive sends the Logstash output to the Elasticsearch instance running at the IP address specified by the host field, in this case 10.0.0.1. It talks to the Elasticsearch REST API, which listens on port 9200 by default.

Assuming you saved your configuration in a file called logstash.conf that resides in the same directory as Logstash itself, run Logstash with this command:

java -jar logstash-1.3.3-flatjar.jar agent -f ./logstash.conf

You’ll need at least Java 1.6 on your system to be able to run this command.
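
Before pointing real OSSEC traffic at it, you can sanity-check the pipeline by hand. One quick sketch, assuming netcat is installed and Logstash is listening locally on UDP port 9000, is to send a fabricated alert and watch the parsed result show up in Elasticsearch (or on stdout, if you enabled the rubydebug output):

echo 'Feb 12 10:00:00 ossec-server ossec: Alert Level: 3; Rule: 5715 - SSHD authentication success.; Location: (web01) 10.0.0.5->/var/log/secure;' | nc -u -w1 127.0.0.1 9000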

Install and Configure Elasticsearch

The easiest way to install Elasticsearch is from RPM or DEB packages. I use CentOS most of the time, so I’ll discuss how to install from RPMs. You can install Elasticsearch in a cluster, but to keep things simple, I’ll cover installation on a single server and will assume that is the same system where Logstash is installed.

With that said, here is how you install and configure Elasticsearch:

  1. Download the RPM:
    wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.7.noarch.rpm --no-check-certificate
  2. Login as root.
  3. Install the RPM with this command:
    rpm -Uvh elasticsearch-0.90.7.noarch.rpm
  4. The RPM will install Elasticsearch in /usr/share/elasticsearch and the configuration files /etc/elasticsearch/elasticsearch.yml and /etc/sysconfig/elasticsearch. It also creates a service script to start, stop and check the status of Elasticsearch. Start Elasticsearch with the service command:
    service elasticsearch start

By default, the Elasticsearch data files are kept in /var/lib/elasticsearch and logs in /var/log/elasticsearch. You can change that in elasticsearch.yml, but for now leave them as is. However, let’s give the Elasticsearch cluster an explicit name, “mycluster”. (Since the elasticsearch_http output used in the Logstash config talks to Elasticsearch over HTTP, the cluster name does not have to match anything on the Logstash side; naming the cluster mainly keeps your node from auto-discovering and joining some other unnamed cluster on the same network.) To do that, open /etc/elasticsearch/elasticsearch.yml and set the following line as shown:

# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: mycluster
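
For the new name to take effect, restart Elasticsearch and verify that the node is up. The curl check below is a quick sketch; run it on the Elasticsearch host itself or substitute its IP address:

service elasticsearch restart
curl -s 'http://localhost:9200/_cluster/health?pretty'

The response should report "cluster_name" : "mycluster" along with a green or yellow status.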

Install and Configure Kibana

At this point you are able to collect OSSEC alerts and query them with the Elasticsearch RESTful API; a sample raw query is shown after the installation steps below. But Elasticsearch also has a web console called Kibana, which enables you to build consoles that post queries automatically to your Elasticsearch backend. To install and configure Kibana, follow this procedure.

  1. Download Kibana
    wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.0milestone4.zip --no-check-certificate
  2. Unzip the downloaded package.
  3. Copy the src directory from the unzipped Kibana directory to your Apache web server’s htdocs directory or Tomcat webapps directory, depending on which web server you are using.
  4. Rename the copied directory to “kibana”.
  5. Open the kibana/config.js file in an editor.
  6. Change the “elasticsearch:” field value to the IP address of your Elasticsearch system. For the example system I’ve been using so far the IP would be 10.0.0.1 so the line would look like this (including the comma):
    elasticsearch: "http://10.0.0.1:9200",
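
Here is the sample raw RESTful query mentioned above, for checking stored alerts without Kibana (a sketch: Logstash writes alerts to daily indices named logstash-YYYY.MM.dd by default, and the rule number is illustrative):

curl -s 'http://10.0.0.1:9200/logstash-*/_search?q=Rule:5715&pretty'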

To test the installation, open the Kibana URL – http://10.0.0.1/kibana/ – in a browser. You should be greeted by the Kibana welcome screen.

To get to the console screen, click on the Logstash Dashboard link in the Yes bullet point under “Are you a Logstash User?”.

Query Elasticsearch with Kibana

If you let your OSSEC system run for a while you should have collected some alerts that were stored in Elasticsearch. After going to the Logstash Dashboard, you’ll see a screen that has some panels on it. The top panel queries Elasticsearch for all alerts by default.

To get specific alerts, you enter a query string for one of the OSSEC fields, such as "Rule:70001", then you’ll see the results in the panel called EVENTS OVER TIME, which shows counts of the events returned from Elasticsearch over time. You can run additional queries by clicking on the plus icon of the most recent query, entering the new query string and clicking on the magnifying glass icon. For example, I ran three queries looking for alerts for OSSEC rules 700001, 591 and 700012.
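
The query bar accepts Lucene query string syntax, so a field query takes the form <field>:<value>, and queries can be combined with boolean operators. A couple of illustrative examples using the field names from the Logstash configuration above (the values are made up):

    Rule:5715
    Alert_Level:10 AND Description:authentication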

The alert fields are displayed in the panel below EVENTS OVER TIME. You select the fields you want to see by clicking on the checkboxes in the Fields list in the lower left-hand corner of the console. In this case, I selected @timestamp, Alert_Level, Rule, Description and Details.

As new alerts are stored in Elasticsearch, they will appear in the Kibana console if you refresh the screen in your browser. Alternatively, you can have the console refresh automatically by clicking the time scale menu item, which is labeled something like a day ago to a few seconds ago, then selecting Auto-refresh and one of the several refresh intervals ranging from seconds to 1 day. The panels will then refresh at the interval you specified, and you should see new alerts pop up on the screen as they are generated on your OSSEC agent systems.

When you get this system working try experimenting with different queries for other OSSEC alerts. I’ve just scratched the surface of what can be done with Elasticsearch.



Source: vichargrave.com
