The project source has been uploaded to gitee.com/yangleliu/l… Search for [AI Code Master] on WeChat and follow handsome me; reply [dry goods] to receive the latest 2021 interview materials.

【Introduction】

The moment you finish a development task and submit the code, your mood runs free… eighty miles an hour…

I thought what lay ahead was a round of games, some GAI, or a warm bed.

However: the plumper the dream, the bonier the reality.

There is always a QA little sister to teach you an emergency brake, calling you back to remake (fix) yourself (the bugs): "AI Master, this just won't do!" (The audience sits in rows and laughs.)

I looked down at my eight-pack abs: that's not for you to decide!

The little sister is no pushover either: she rolls up her sleeves and flips open her tenth-generation Lenovo: "Your interface keeps throwing errors, poison-milking (jinxing) your teammates!"

Me (with a guilty conscience): Uh… if you say so… No, wait, I thought…

The little sister, face full of doubt: "Thought what? You really think you're a god!"

I cough away my embarrassment, refusing to admit defeat: "I think you passed the wrong parameters. After all, this master had no problems at all when debugging locally."

The little sister, battle-hardened and undefeated: "Not! Poss! Ible!!! It's you, you, you! I am never wrong."

At that moment, my girlfriend during her period seemed to flash before my eyes; my heart broke.

So we argued for a long time, and in the end, unsurprisingly, I surrendered.

After all,

The traditional virtue (vice) of the Chinese nation is: a good man does not fight with a woman!

So I could only go to the server and read the logs. But the logs piled up like mountains and were as dense as ox hair, a full 3.5 GB. Helpless, I had to use a pile of fancy Linux commands to cut the huge file into smaller ones; fortunately, I finally located the request and, after some digging, found the cause.

It made me think: how great would it be if there were a platform that could collect our logs in real time and present them in a visual interface? Then we would no longer have to dig through those thick log files.

【Secret Revealed】

ELK is a combination of Elasticsearch, Logstash, and Kibana.

With the architecture diagram in hand, let's take a look at how these three gods work together:

  1. The user sends a request to our server
  2. The server sends the data to be logged to Logstash via a network request
  3. Logstash filters and cleans the data, then passes it to Elasticsearch
  4. Elasticsearch creates the indexes and stores the data
  5. Users view the logs in real time through the Kibana web page

All right, I have told you the secret; now let me lead you onto the battlefield.

【Essential Skills】

Before we go to war, our soldiers need the following skills; otherwise, once on the battlefield, they will be beaten to pieces:

  • Understand the three components of ELK
  • Have hands-on experience with Docker
  • Have a working Docker environment locally
  • Have the IDEA development tool installed
  • Have a reasonably well-configured weapon (computer), otherwise it may crash

【Prepare Provisions】 Prepare a SpringBoot program

Start by creating a SpringBoot project with the following structure

Introduce project prerequisite dependencies

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.35</version>
</dependency>
<dependency>
    <groupId>cn.hutool</groupId>
    <artifactId>hutool-all</artifactId>
    <version>5.4.0</version>
</dependency>
```

Create some basic components

Create an aspect for low-coupling logging

The core code

@around ("controllerLog()") public Object doAround(ProceedingJoinPoint joinPoint) throws Throwable {long startTime = System.currentTimeMillis(); // Get the current request object ServletRequestAttributes attributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes(); HttpServletRequest request = attributes.getRequest(); ReqRspLog webLog = new ReqRspLog(); Object result = joinPoint.proceed(); Signature signature = joinPoint.getSignature(); MethodSignature methodSignature = (MethodSignature) signature; Method method = methodSignature.getMethod(); Long endTime = System.currentTimemillis (); String urlStr = request.getRequestURL().toString(); webLog.setBasePath(StrUtil.removeSuffix(urlStr, URLUtil.url(urlStr).getPath())); webLog.setIp(request.getRemoteUser()); webLog.setMethod(request.getMethod()); webLog.setParameter(getParameter(method, joinPoint.getArgs())); webLog.setResult(result); webLog.setSpendTime((int) (endTime - startTime)); webLog.setStartTime(startTime); webLog.setUri(request.getRequestURI()); webLog.setUrl(request.getRequestURL().toString()); logger.info("{}", JSONUtil.parse(webLog)); return result; }Copy the code

Create a test interface

@RestController @RequestMapping("/api") public class ApiController { @GetMapping public R<String> AddLog (@requestParam (value = "param1", Required = false) String param1){return r.uccess (" Hello, this will be logged "); }}Copy the code

If we now request the interface, we will see this log printed on the console

{" method ":" GET ", "uri" : "/ API", "url" : "http://localhost:8080/api", "result" : {" code ": 200," data ":" hello, This message will be logged ","message":" operation succeeded "},"basePath":"http://localhost:8080","parameter":{"param1":" test ELK"},"startTime":1611529379353 ,"spendTime":9}Copy the code

According to the architecture diagram, we now need to send logs to ES through Logstash. Next, let's integrate Logstash to implement log shipping.

【Recruit Troops】 Integrate Logstash

Add a Logstash dependency

```xml
<!-- Logstash -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <optional>true</optional>
</dependency>
```

Edit the configuration file logback-spring.xml

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <!-- Application name -->
    <property name="APP_NAME" value="mall-admin"/>
    <!-- Log file path -->
    <property name="LOG_FILE_PATH" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/logs}"/>
    <contextName>${APP_NAME}</contextName>
    <!-- Appender that logs to a new file every day -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE_PATH}/${APP_NAME}-%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${FILE_LOG_PATTERN}</pattern>
        </encoder>
    </appender>
    <!-- Appender that ships logs to Logstash -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- Reachable Logstash collection port -->
        <destination>127.0.0.1:4560</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root level="info">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
        <appender-ref ref="LOGSTASH"/>
    </root>
</configuration>
```
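A quick note on what this buys us: all three appenders hang off the root logger, so every ordinary SLF4J call in the application is written to the console, the rolling file, and Logstash at the same time. A minimal sketch (the class name is hypothetical):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AnyService { // hypothetical class

    private static final Logger logger = LoggerFactory.getLogger(AnyService.class);

    public void doWork() {
        // This single call fans out to the CONSOLE, FILE, and LOGSTASH appenders
        logger.info("one log line, three destinations");
    }
}
```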

After editing, the project structure looks like this:

Although Logstash support is now integrated into the project, the logs still don't know where to go, because we haven't built the city (the ELK environment) yet.

No city? Then build one!

【Build the City】 Set up the ELK environment

For ELK, I use docker-compose. First, let's agree on a root directory (mine is /Users/yangle/docker), then execute the following commands as required:

```bash
mkdir -p /Users/yangle/docker
cd /Users/yangle/docker
mkdir elk_stanrd
cd elk_stanrd
mkdir logstash
cd logstash
vim logstash.conf
```

Copy the following contents into logstash.conf:

```
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4560
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => "es:9200"
    index => "logstash-service-%{+YYYY.MM.dd}"
  }
}
```

Run the following commands:

```bash
cd ../
vim docker-compose.yml
```

Copy the following into the configuration file as well

```yaml
version: '3'
services:
  elasticsearch:
    image: elasticsearch:6.4.0
    container_name: elasticsearch
    environment:
      - "cluster.name=elasticsearch"  # set the cluster name to elasticsearch
      - "discovery.type=single-node"  # start in single-node mode
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"  # JVM heap size
    volumes:
      - /Users/yangle/docker/elk_stanrd/elasticsearch/plugins:/usr/share/elasticsearch/plugins  # mount plugin files
      - /Users/yangle/docker/elk_stanrd/elasticsearch/data:/usr/share/elasticsearch/data  # mount data files
    ports:
      - 9200:9200
      - 9300:9300
  kibana:
    image: kibana:6.4.0
    container_name: kibana
    links:
      - elasticsearch:es  # reach elasticsearch via the alias "es"
    depends_on:
      - elasticsearch  # kibana starts after elasticsearch
    environment:
      - "elasticsearch.hosts=http://es:9200"  # address for accessing elasticsearch
    ports:
      - 5601:5601
  logstash:
    image: logstash:6.4.0
    container_name: logstash
    volumes:
      - /Users/yangle/docker/elk_stanrd/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf  # mount the logstash config file
    depends_on:
      - elasticsearch  # logstash starts after elasticsearch
    links:
      - elasticsearch:es  # reach elasticsearch via the alias "es"
    ports:
      - 4560:4560
```

So far, the preparation for setting up the ELK environment is complete; note that hosts => "es:9200" in logstash.conf resolves because docker-compose links the elasticsearch container under the alias es. To start ELK, run the following command in the /Users/yangle/docker/elk_stanrd directory:

```bash
docker-compose up -d
```

If the following message is displayed, the containers were created successfully.

Next, run docker ps to check whether the containers have started.

If it looks like the figure, the containers have started normally. You still need to wait about a minute before the visualization platform is reachable at http://localhost:5601/.

If this page appears, the ELK stack has been set up; now we need to stuff some data into it.

【Launch an Attack】 Send a request

Once the ELK environment is set up, we need to generate a bit of data. How? Just call the http://localhost:8080/api?param1=test ELK interface a few times and some test data will be produced; if you would rather script it, see the sketch below. Beyond that, a little configuration is still needed before users can see the collected logs; those steps follow the sketch.
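A minimal request generator using Spring's RestTemplate; the URL and parameter simply mirror the test interface above, and the LogGenerator class name is made up for illustration:

```java
import org.springframework.web.client.RestTemplate;

public class LogGenerator { // hypothetical helper, not part of the original project

    public static void main(String[] args) {
        RestTemplate restTemplate = new RestTemplate();
        for (int i = 0; i < 10; i++) {
            // Each call hits /api, so the logging aspect emits one entry to Logstash
            String response = restTemplate.getForObject(
                    "http://localhost:8080/api?param1={p}", String.class, "test ELK " + i);
            System.out.println(response);
        }
    }
}
```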

Select the field and create the index

The screen after a successful index creation

After selecting logstash-service, the interface looks like this:

You can see that the logs in the system have been collected. Try searching “Hello”.

All the logs containing "hello" are filtered out. There are many other search criteria too, such as the time filter in the upper right corner, which I won't demonstrate one by one; if you are interested, explore them yourself. Repository: gitee.com/yangleliu/l… As a helpful and selfless youth, I have uploaded all the code above to the git repository. Grab it yourself, and remember to give it a star.

【Summary After the War】

Every new technology is designed to solve some kind of problem.

ELK, for example, was created to cut the time that balding code warriors spend digging through massive logs, so they can focus more on business logic. With ELK, we simply type a keyword into the input box, press Enter, and the data we need is presented before our eyes.

The shorter the QA little sister's wait, the better her mood, and naturally the fewer the conflicts.

Think of it this way: if there were a platform that displayed, in real time, the causes of a girlfriend's hundred thousand emotional outbursts, the world would have a wonderful tomorrow!

SHH ~

A more mature way to optimize this logging architecture is Filebeat + Kafka + Logstash + Elasticsearch + Kibana. If you are interested, please give me a like.