Issue

Investigate having Kong push logs to Kafka via a plugin, with ELK collecting the logs from Kafka to store and display them.

Background

The Kafka Log plugin is listed on Kong's Plugin Hub, which brings bad news and good news.

  • The bad news is that it is only available in the Enterprise version of Kong.
  • The good news is that I found a project on GitHub called kong-plugin-kafka-log. Comparing its README with the Kong Enterprise Edition Kafka Log documentation, I suspect the two are the same thing, or rather that the GitHub project is the original. A bold guess: the author stopped updating the open source project after Kong integrated it into the Enterprise edition.

The plugin targets Kong >= 0.14.x. It is an early plugin and does not work seamlessly with the latest version of Kong (2.0.2 at the time of writing). Therefore, I intend to do secondary development based on this open source plugin and adapt it to the version of Kong I have chosen.

I updated the kong-plugin-kafka-log plugin source to adapt it to Kong 2.0.x; for details, see my kafka-log code analysis and the updated kafka-log code.

Start

Looking at the kong-plugin-kafka-log source code, one of the most interesting parts is the following line:

local ok, err = producer:send(conf.topic, nil, cjson_encode(message))

This is the line that actually does the sending. I expected the plugin to send and receive network requests by calling the cosocket API provided by OpenResty, which is the heart and soul of OpenResty.

Here is how OpenResty columnist Ming Wen explains it: "Cosocket is an OpenResty term that combines the English words for coroutine and network socket: cosocket = coroutine + socket. So you can translate cosocket as 'coroutine socket'. Cosocket requires not only the support of Lua's coroutine feature but also the very important event mechanism in Nginx; combined, these ultimately implement non-blocking network I/O."

(Figure: the internal implementation of cosocket.)

What the cosocket API buys you is that pushing logs to Kafka runs in Lua coroutines instead of blocking the main flow of processing normal requests, a classic trade of space for time.
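Since Kong log plugins run in the log phase, where the cosocket API is not available, the usual pattern is to hand the message off to a zero-delay timer whose callback runs in its own coroutine. A hedged sketch of that pattern (the handler and serializer names are my assumptions; log is the callback shown later in this post):

function KafkaLogHandler:log(conf)
  -- serialize the request/response info into a log payload (Kong 2.x PDK)
  local message = kong.log.serialize()
  -- defer the actual Kafka I/O to a timer so the request is never blocked
  local ok, err = ngx.timer.at(0, log, conf, message)
  if not ok then
    kong.log.err("failed to create timer: ", err)
  end
end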

Back to the code above: kong-plugin-kafka-log does not define send itself. Tracing it down, I found that the plugin references lua-resty-kafka, an OpenResty library, which means the sending is delegated to that library.

lua-resty-kafka describes itself in its documentation as follows:

lua-resty-kafka - Lua kafka client driver for the ngx_lua based on the cosocket API
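Its README makes the basic usage easy to sketch. A minimal example of producing a message, following the lua-resty-kafka documentation (the broker address and topic below are placeholders, not values from the plugin):

local producer = require "resty.kafka.producer"
local cjson = require "cjson"

local broker_list = {
  { host = "127.0.0.1", port = 9092 },  -- example broker
}

-- "async" buffers messages and flushes them from a background timer,
-- so an individual request never waits on Kafka
local p = producer:new(broker_list, { producer_type = "async" })

local ok, err = p:send("kong-kafka-log", nil, cjson.encode({ uri = "/demo" }))
if not ok then
  ngx.log(ngx.ERR, "kafka send failed: ", err)
end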

The place in its code where the cosocket API is actually called is broker.lua:

local tcp = ngx.socket.tcp

...

function _M.send_receive(self, request)
    local sock, err = tcp()
    ...
end

ngx.socket.tcp is the entry point in the cosocket API for creating TCP objects.
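To make the non-blocking behavior concrete, here is a bare-bones cosocket example (illustrative only, not taken from lua-resty-kafka); every call below yields the current coroutine instead of blocking the Nginx worker:

local sock = ngx.socket.tcp()
sock:settimeout(1000)  -- milliseconds

local ok, err = sock:connect("127.0.0.1", 9092)
if not ok then
  ngx.log(ngx.ERR, "connect failed: ", err)
  return
end

local bytes, err = sock:send("ping\r\n")  -- placeholder payload
sock:setkeepalive(10000, 100)  -- return the connection to the pool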

So far, you can see that kong-plugin-kafka-log plus lua-resty-kafka make up exactly the path I expected Kong to use to push logs to Kafka.

Practice 1

  1. Install lua-resty-kafka in Kong's container:
luarocks install lua-resty-kafka

After the installation completes, you can see the installed kafka package in Kong's runtime folder.
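If in doubt, a standard luarocks command confirms the rock is visible to Kong's Lua environment:

luarocks show lua-resty-kafka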

  2. Modify the kong-plugin-kafka-log plugin. The main change is in handler.lua: the function there that generates a UUID is obsolete, so replace it with a new way of generating the UUID.

The original

--- Computes a cache key for a given configuration.
local function cache_key(conf)
  -- here we rely on validation logic in schema that automatically assigns a unique id
  -- on every configuration update
  return conf.uuid
end

local function log(premature, conf, message)
  ...
  local cache_key = cache_key(conf)
  ...
end

After the modification

local uuid = require("kong.tools.utils").uuid

local function log(premature, conf, message)
  ...
  local cache_key = uuid()
  ...
end

After installing the modified plugin, start Kong.

  3. Configure the plugin and fill in the Kafka connection information.

These configuration options are provided by kong-plugin-kafka-log and can be adjusted later as required.
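For reference, a hedged example of enabling the plugin through Kong's Admin API; the service name, broker address, and topic are placeholders, and the config field names follow the kong-plugin-kafka-log README:

curl -X POST http://localhost:8001/services/example-service/plugins \
  --data "name=kafka-log" \
  --data "config.bootstrap_servers=kafka:9092" \
  --data "config.topic=kong-kafka-log"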

In KafkaTools, you can observe that Kong's logs are successfully pushed to Kafka.
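If you don't have KafkaTools at hand, a console consumer gives the same confirmation (assuming the broker runs in a container named kafka, as in the compose file below):

docker exec kafka \
kafka-console-consumer.sh \
--bootstrap-server localhost:9092 \
--topic kong-kafka-log \
--from-beginning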

Practice 2

Set up a Kafka + ZooKeeper + Logstash + Elasticsearch + Kibana environment.

All of these services are provided via docker-compose.

docker-compose.yml

version: '3'
services:
  zookeeper:
    container_name: zookeeper
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
  kafka:
    container_name: kafka
    image: wurstmeister/kafka:2.11-0.11.0.3
    depends_on:
      - zookeeper
    ports:
      - "9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.137.10
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LOG_RETENTION_HOURS: "168"
      KAFKA_LOG_RETENTION_BYTES: "100000000"
      KAFKA_CREATE_TOPICS: "log:1:1"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    container_name: elasticsearch
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.6.1
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
  logstash:
    container_name: logstash
    image: docker.elastic.co/logstash/logstash:7.6.1
    volumes:
      - "./logstash.conf:/config-dir/logstash.conf"
    restart: always
    command: logstash -f /config-dir/logstash.conf
    ports:
      - "9600:9600"
      - "7777:7777"
    depends_on:
      - elasticsearch
      - kafka
volumes:
  elasticsearch-data:
    driver: local

logstash.conf

input {
  kafka {
    bootstrap_servers => "kafka:9092"
    client_id => "logstash"
    group_id => "logstash"
    consumer_threads => 1
    topics => ["kong-kafka-log"]
    codec => "json"
    tags => ["log", "kafka_source"]
    type => "log"
  }
}
output {
  elasticsearch {
       hosts => ["elasticsearch:9200"]
       index => "logstash-%{[type]}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
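With docker-compose.yml and logstash.conf in the same directory, bring everything up:

docker-compose up -d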

Once the services have started successfully, create the topic manually (the compose file sets KAFKA_AUTO_CREATE_TOPICS_ENABLE to 'false'):

docker exec kafka \
kafka-topics.sh \
--create --topic kong-kafka-log \
--partitions 1 \
--zookeeper zookeeper:2181 \
--replication-factor 1
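To double-check that the topic exists:

docker exec kafka \
kafka-topics.sh --list \
--zookeeper zookeeper:2181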

The final result: logs generated by Kong can be queried in Kibana.

Conclusion

Kong can push logs to Kafka via a plugin, and ELK can collect the logs from Kafka to store and display them: this path works. It does, however, require secondary development on top of the existing open source code.