Winsyk · 2014/08/15 10:58

0x00 Preface:


Big data should be a familiar topic to most readers, especially now that we live in the big-data era; buzzword technologies such as Hadoop, HBase, Spark, and Storm roll off many people's tongues. The question is whether, on the data-collection side, there is a good solution to help enterprises with security management and policy analysis. After researching a great deal of material and experimenting myself, I want to share my views on analyzing and processing big-data logs. The solution described here is entirely open source, built on Logstash, Elasticsearch, Redis, and OSSEC. In "No Place to Hide", Snowden describes the NSA's principle for network data collection as "collect it all", and enterprise security should work the same way. As everyone knows, security risks come in from the perimeter, and attackers deliberately operate at the edges; anything we fail to record is lost to later incident response and intrusion detection. So the principle of enterprise big data should likewise be "collect everything". All of this is simply to set the stage for what follows: this article focuses on collecting and processing system security logs, and I hope it provides you with some ideas. The whole stack is open source, and for a scrappy, low-budget security team the essence is speed — these days, only speed wins.

0x01 Technical Architecture:

The tools used are as follows:


  - OSSEC: event and alert source
  - Logstash: log collection and processing
  - Elasticsearch: data storage and full-text search
  - Kibana3: log display
  - Redis: buffering the log stream, preventing data loss

The advantage of this solution is that it is fully open source; the disadvantage is that documentation is scarce, especially for Kibana.
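As a rough sketch of how these components fit together (based on the roles each one plays in this article), data flows through the stack like this:

```
clients (syslog/rsyslog) --> OSSEC server (decode, rules, alerts log)
                                  |
                                  v
                          Logstash agent --> Redis list (buffer, prevents data loss)
                                                  |
                                                  v
                                         Logstash indexer --> Elasticsearch
                                                                   |
                                                                   v
                                                                Kibana3 (dashboard)
```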

0x02 Implementation Scheme:


For installation instructions, please refer to: https://app.yinxiang.com/shard/s21/sh/f4e62686-16ef-4549-beb1-c5124b232df6/f538a1ea304ff4191acf494a1a1bd4f9

0x03 Technical Practice:


1. System log collection:

Operating-system logs can be collected with syslog or rsyslog; this article uses syslog to collect security logs. The relevant log content ranges from /var/log/secure to /var/log/lastlog. From /var/log/secure you can audit users' login failures, abnormal logins, and logins from whitelisted IPs, and then perform correlation analysis on top of that. There is much more here to discover and explore on your own; this article will not go into further detail.
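As an illustrative sketch of shipping these logs off the host (the collector address 10.10.2.100 is hypothetical, and exact syntax depends on your rsyslog version), forwarding the authpriv facility — which feeds /var/log/secure on Red Hat-style systems — looks like this:

```conf
# /etc/rsyslog.conf on each client (illustrative)
# authpriv.* covers the sshd/login events written to /var/log/secure
authpriv.*    @@10.10.2.100:514   # "@@" forwards over TCP; a single "@" uses UDP
```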

Why collect logs centrally? Application scenario: system security logs on the host itself are a weak point of defense. During an intrusion, if the attacker deletes the logs or suspends the syslog service, intrusion analysis is hampered and tracing becomes difficult; shipping logs off the host as they are produced avoids this.

2. Ossec:

Collected logs that nobody analyzes are dead data, not live data. Here we use the open-source IDS OSSEC, which most readers will have heard of. OSSEC supports two deployment modes: 1) the OSSEC agent; 2) syslog-based collection, where clients are configured to send their logs to the OSSEC server via syslog. The OSSEC server then decodes and normalizes the logs against its rules, judges anomalies, and handles them, for example by writing alerts to a database or triggering notifications.
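For the syslog-based mode, the OSSEC server must be told to accept remote syslog. A minimal ossec.conf fragment might look like the following (the 10.10.2.0/24 client network is a hypothetical example):

```xml
<!-- ossec.conf on the OSSEC server: listen for syslog from clients -->
<ossec_config>
  <remote>
    <connection>syslog</connection>
    <allowed-ips>10.10.2.0/24</allowed-ips>
  </remote>
</ossec_config>
```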

Figure 2 shows the entire working process of OSSEC:

3. Centralized log management:

Logstash is a completely open-source tool that collects, parses, and stores your logs for later use (e.g., search). Speaking of search, Logstash ships with a web interface that can search and display all collected logs, but since it is not as polished as Kibana, Kibana is used here for log presentation instead. The version I installed is logstash-1.2.2-flatjar.jar, started with the following command:

```
java -jar logstash-1.2.2-flatjar.jar agent -f logstash_agent.conf
```

This starts Logstash with the logstash_agent.conf configuration, whose behavior is described below.

Breaking it down, the Logstash agent configuration does the following:

  1. The Logstash agent reads events from the OSSEC alert log.
  2. Logstash checks whether the event source is OSSEC; if so, the message is split into fields.
  3. The processed log is written to Redis.
  4. Redis caches the logs read by Logstash, to prevent data loss.
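The steps above can be sketched as a minimal logstash_agent.conf (illustrative only: the alert-log path and the grok pattern are assumptions that depend on your OSSEC install and alert format):

```conf
input {
  # 1. read the OSSEC alert log
  file {
    path => "/var/ossec/logs/alerts/alerts.log"
    type => "ossec"
  }
}
filter {
  # 2. if the event source is ossec, split the message into fields
  if [type] == "ossec" {
    grok {
      match => ["message", "Rule: %{NUMBER:rule_id} \(level %{NUMBER:severity}\) -> '%{DATA:rule_description}'"]
    }
  }
}
output {
  # 3./4. buffer into a Redis list to prevent data loss
  redis {
    host => "127.0.0.1"
    port => "6379"
    data_type => "list"
    key => "logstash"
  }
}
```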

4. Full-text search with Elasticsearch:

Elasticsearch (ES) is an open-source, distributed, RESTful search engine based on Lucene. Designed for the cloud, it delivers real-time search and is stable, reliable, fast, and easy to install and use. In this article, ES indexes the OSSEC logs buffered by Logstash so they can be searched full-text, which makes subsequent log display and querying convenient. The default ES HTTP port is 9200.

After Elasticsearch starts, visit http://127.0.0.1:9200 to check whether ES is working properly; if output like the following is returned, it is. Port 9300 is used for communication within the ES cluster and for sending data.

```json
{
  "status": 200,
  "name": "N'Gabthoth",
  "version": {
    "number": "1.2.2",
    "build_hash": "9902f08efc3ad14ce27882b991c4c56b920c9872",
    "build_timestamp": "2014-07-09T12:02:32Z",
    "build_snapshot": false,
    "lucene_version": "4.8"
  },
  "tagline": "You Know, for Search"
}
```

The data previously written to Redis is read back out and written to ES. The configuration is as follows:

```conf
input {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    port => "6379"
    key => "logstash"
    type => "ossec"
  }
}
output {
  stdout { codec => rubydebug }
  if [type] == "ossec" {
    elasticsearch {
      host => "127.0.0.1"
      port => "9300"
      #cluster => "ossec"
      index => "logstash-ossec-%{+YYYY.MM.dd}"
      index_type => "ossec"
      template_name => "template-ossec"
      template => "/usr/local/share/logstash/elasticsearch_template.json"
      template_overwrite => true
    }
  }
}
```

5. Log Display:

Now that the above work is done, we need an interface to display the results of our labor and support subsequent log analysis; Kibana is used here for display. Please look up the details of Kibana yourself — I will just share my own take, which is to use a dashboard to display the logs. A ready-made dashboard can be downloaded here: https://github.com/magenx/Logstash/blob/master/kibana/kibana_dashboard.json

6. Results:

6.1 Application scenario combined with OSSEC

For example, in our daily operations environment we see routine login events, and we also restrict which IP addresses may log in to a host. To detect an intruder logging in from a non-approved IP, we can define a custom rule in OSSEC's sshd_rules.xml as follows:

```xml
<rule id="5739" level="10">
  <if_sid>5700</if_sid>
  <group>authentication_failure</group>
  <srcip>!10.10.2.1</srcip>
  <description>not come from 10.10.2.1</description>
</rule>
</group>
```

The &lt;if_sid&gt; references rule 5700, which groups the SSHD messages decoded from syslog:

```xml
<!-- SSHD messages -->
<group name="syslog,sshd,">
  <rule id="5700" level="0" noalert="1">
    <decoded_as>sshd</decoded_as>
    <description>SSHD messages grouped.</description>
  </rule>
```

If a user triggers this rule, Kibana displays an alert like the following:
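To make the rule's logic concrete, here is a small Python sketch (not part of OSSEC, purely illustrative) of what rule 5739 effectively does: flag an SSH authentication failure whose source IP is not the whitelisted 10.10.2.1.

```python
import re

WHITELISTED_IP = "10.10.2.1"  # the trusted source from rule 5739

def check_login(line):
    """Return an alert dict if a failed SSH login comes from a
    non-whitelisted IP, mimicking the OSSEC rule above; else None."""
    m = re.search(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+)", line)
    if m and m.group(1) != WHITELISTED_IP:
        return {"rule_id": 5739, "level": 10, "srcip": m.group(1)}
    return None

print(check_login("Aug 12 10:57:00 host sshd[1234]: "
                  "Failed password for root from 10.10.2.9 port 2244 ssh2"))
# -> {'rule_id': 5739, 'level': 10, 'srcip': '10.10.2.9'}
```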

0x04 Conclusion:


To borrow a saying: "Security is like this — you have encountered what I have not, and I have encountered what you may not have; but once one of us says it, we all understand." I hope this article is helpful to you. Thank you!