  • 📣 Small knowledge, big challenge! This article is participating in the “Essential Tips for Programmers” creation activity.

Background

For tracking-data (event) reporting, there are three common methods: API reporting, image (GIF) reporting, and sendBeacon, each with its own advantages and disadvantages. Today, let’s look at the server-side configuration needed for image reporting.

Runtime environment

Zookeeper, Kafka, kafka-manager (optional), nginx, rsyslog, rsyslog-kafka (optional)

Project Process Description

  • The front end, through a JS SDK, requests a minimal GIF image and appends the collected project data to the GIF request path, so nginx receives the request and writes it to access.log. Meanwhile, rsyslog, configured with the rsyslog-kafka module, watches access.log and forwards new entries to a Kafka topic; at this point nginx acts as the producer. The Tianyan server subscribes to this Kafka topic, so it can monitor the forwarded log data, consume it, and save the collected information to the database.
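The front-end half of this flow can be sketched as follows. This is a minimal, illustrative example (the `buildTrackingUrl` helper and field names are hypothetical, not part of any specific SDK): the collected data is serialized into the GIF request’s query string, so nginx can log it without any application code on the server.

```typescript
// Minimal sketch of the reporting side of a JS SDK (names are illustrative).
// The collected data rides in the GIF URL's query string, so nginx captures
// it in $args when writing access.log.
function buildTrackingUrl(base: string, data: Record<string, string>): string {
  const query = Object.entries(data)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .join("&");
  return `${base}?${query}`;
}

// In the browser, firing the request is just creating an image:
//   new Image().src = buildTrackingUrl("https://example.com/_.gif", payload);

const url = buildTrackingUrl("https://example.com/_.gif", {
  event: "page_view",
  path: "/home",
});
console.log(url); // https://example.com/_.gif?event=page_view&path=%2Fhome
```

Using an `Image` request rather than `fetch`/XHR avoids CORS preflight and works on very old browsers, which is why GIF reporting remains popular for analytics SDKs.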

Configuration

Nginx configuration

  • log_format sets the storage format of nginx logs. To make parsing easier, set the format to JSON. The configuration is as follows:
log_format log_json '{ "@timestamp": "$time_local", '
'"ipaddress": "$remote_addr", '
'"args": "$args", '
'"method": "$request_method", '
'"referer": "$http_referer", '
'"request": "$request", '
'"status": $status, '
'"bytes": $body_bytes_sent, '
'"agent": "$http_user_agent", '
'"x_forwarded": "$http_x_forwarded_for", '
'"up_addr": "$upstream_addr",'
'"up_host": "$upstream_http_host",'
'"up_resp_time": "$upstream_response_time",'
'"request_time": "$request_time"'
' }';
	
access_log  logs/access.log  log_json;
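Because each access.log line is now a JSON object, any downstream consumer can `JSON.parse` it and then decode the tracking data out of the `args` field. A hedged sketch (the sample log line below is illustrative, not real traffic):

```typescript
// Parse one JSON-formatted access.log line (shape matches the log_format
// above) and decode the tracking parameters carried in "args".
interface AccessLogEntry {
  "@timestamp": string;
  ipaddress: string;
  args: string;
  status: number;
}

function parseArgs(args: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const pair of args.split("&")) {
    if (!pair) continue;
    const [k, v = ""] = pair.split("=");
    out[decodeURIComponent(k)] = decodeURIComponent(v);
  }
  return out;
}

// Illustrative sample line, as nginx would write it with the log_format above.
const line =
  '{ "@timestamp": "01/Jan/2024:00:00:00 +0000", "ipaddress": "127.0.0.1", ' +
  '"args": "event=page_view&path=%2Fhome", "status": 200 }';

const entry = JSON.parse(line) as AccessLogEntry;
const data = parseArgs(entry.args);
console.log(data.event, data.path);
```

This is exactly the parsing the Tianyan consumer would do after reading a message from the Kafka topic.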

GIF configuration

  • Configure the server to respond with an empty GIF; see the ngx_http_empty_gif_module documentation:
location = /_.gif {
    empty_gif;
}

Header_buffer configuration

Because the SDK collects a lot of data, you need to enlarge nginx’s request header buffers (large_client_header_buffers defaults to 8k). If the request line is longer than the buffer, nginx truncates it before the request is stored in the log, and parameters are lost. For details, see the client_header_buffer_size and large_client_header_buffers directives.

client_header_buffer_size 512k;
# 512k is generous for now; adjust to your actual payload size
large_client_header_buffers 4 512k;
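Even with a 512k buffer, it is worth guarding on the client side so oversized payloads are caught before nginx silently truncates them. A hypothetical sketch (the limit constant simply mirrors the server configuration above; it is not read from the server):

```typescript
// Client-side guard mirroring the server's 512k header buffer. If the
// serialized URL would exceed the limit, the SDK can trim the payload or
// fall back to another channel instead of losing parameters to truncation.
const MAX_URL_BYTES = 512 * 1024; // assumption: matches large_client_header_buffers

function fitsHeaderBuffer(url: string, limit: number = MAX_URL_BYTES): boolean {
  // Measure bytes, not characters: multi-byte UTF-8 characters inflate the
  // request line beyond the JS string length.
  return new TextEncoder().encode(url).length <= limit;
}

console.log(fitsHeaderBuffer("https://example.com/_.gif?event=page_view")); // true
console.log(fitsHeaderBuffer("x".repeat(600 * 1024))); // false
```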

Rsyslog related configurations

Installation

Install rsyslog and rsyslog-kafka. After the installation completes, check whether omkafka.so exists in /lib64/rsyslog/ to verify that rsyslog-kafka was installed successfully.

yum install rsyslog
yum install rsyslog-kafka.x86_64

Configuration

Edit the rsyslog configuration file (path /etc/rsyslog.conf) and add the following after the #### MODULES #### section (or drop an xxx.conf file into the /etc/rsyslog.d/ directory):

####### rsyslog configuration #######
# Load the omkafka and imfile modules
module(load="omkafka")
module(load="imfile")

# nginx log template
template(name="nginxAccessTemplate" type="string" string="%msg%")

# ruleset: forward matched messages to Kafka
ruleset(name="nginx-kafka") {
    action(type="omkafka"
           template="nginxAccessTemplate"
           # must match the Kafka topic name
           topic="tianyan-test"
           broker="xx.xxx.xx.xx:9092")
}

# Define the message source and bind it to the ruleset above
input(type="imfile"
      Tag="nginx-accesslog"
      File="/var/log/access.log"
      Ruleset="nginx-kafka")

Simple configuration description:

  • xx.xx.xxx.xx:9092 — change to the Kafka address of your server (separate multiple addresses with commas)
  • /var/log/access.log — the nginx access.log file to monitor
  • topic: tianyan-test — the corresponding Kafka topic, which can be created later through kafka-manager

Run rsyslogd -N 1 (validate the configuration) or rsyslogd -dn (debug mode in the foreground) to check whether the configuration has errors.

Then restart rsyslog with the service rsyslog restart command and check whether errors are reported in /var/log/messages.

Kafka configuration

Docker-compose is used to install Kafka and kafka-manager; if you already have these services running, there is no need to install them again. After installing Kafka and kafka-manager, you can refer to my previous article on using Kafka from Node.js.

At this point, the entire configuration is complete. By accessing the nginx service address, you can observe whether logs are being written and forwarded to Kafka.