Starting with this article, we begin a series that walks through a complete microservices practice: from requirements to going live, from code to K8S deployment, from logging to monitoring, and more.

Hands-on project repository: github.com/Mikaelemmmm…

I. Project Introduction

The whole project is built as microservices with go-zero, and it uses middleware developed by the go-zero team and related authors. The technology stack consists almost entirely of components developed by the go-zero project team; it is essentially the full go-zero family.

The project directory structure is as follows:

  • app: all business code, including API, RPC, and MQ (message queue, delay queue, scheduled tasks)
  • common: common components such as error handling, middleware, interceptors, tools, ctxData, etc.
  • data: the data generated by all the middleware the project depends on (MySQL, Elasticsearch, Redis, Grafana, etc.); everything in this directory should be git-ignored and never committed
  • deploy:
    • filebeat: Filebeat configuration for Docker
    • go-stash: go-stash configuration
    • nginx: nginx gateway configuration
    • prometheus: Prometheus configuration
    • script:
      • gencode: statements for generating API and RPC code and for creating kafka topics, ready to copy and paste
      • mysql: shell tool for generating model code
    • goctl: the project's goctl templates for customizing generated code; to use them, copy them to your home directory (see the go-zero docs)
  • doc: documentation for this project series

II. Technology Stack

  • go-zero
  • Nginx gateway
  • filebeat
  • kafka
  • go-stash
  • elasticsearch
  • kibana
  • prometheus
  • grafana
  • jaeger
  • go-queue
  • asynq
  • asynqmon
  • dtm
  • docker
  • docker-compose
  • mysql
  • redis

III. Project Architecture Diagram

IV. Business Architecture Diagram

V. Setting Up the Project Environment

This project uses air for hot reloading: code changes take effect immediately, with no need to restart each time, because the changed code is automatically recompiled and reloaded inside the container. There is no need to run the services locally; the locally installed Go SDK is only used for code completion, so GoLand and VS Code work equally well.

1. Clone the code & update dependencies

$ git clone git@github.com:Mikaelemmmm/go-zero-looklook.git
$ go mod tidy

2. Start the base environment the project depends on

$ docker-compose -f docker-compose-env.yml up -d

Jaeger: http://127.0.0.1:16686/search

Asynq (delayed and scheduled message queue): http://127.0.0.1:8980/

Kibana: http://127.0.0.1:5601/

Elasticsearch: http://127.0.0.1:9200/

Prometheus: http://127.0.0.1:9090/

Grafana: http://127.0.0.1:3001/

MySQL: use your own client tools (Navicat, Sequel Pro)

  • Host: 127.0.0.1
  • port : 33069
  • username : root
  • pwd : PXDN93VRKUm8TeE7

Redis: use your own tool (redisManager) to view

  • Host: 127.0.0.1
  • port : 63799
  • pwd : G62m50oigInC30sf

Kafka: use your own client tool to view

  • Host: 127.0.0.1
  • port : 9092

3. Pull the image the project depends on

Because the project is hot-reloaded with air, it runs inside an air + Golang image. Running docker-compose directly also works, but since the image is fairly large, pulling it during startup would slow the project down, so it is better to pull the image first, before starting the project:

$ docker pull cosmtrek/air:latest

4. Import MySQL data

Create database looklook_order && import deploy/sql/looklook_order.sql

Create database looklook_payment && import deploy/sql/looklook_payment.sql

Create database looklook_travel && import deploy/sql/looklook_travel.sql

Create database looklook_usercenter && import deploy/sql/looklook_usercenter.sql
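
As a quick sanity check that the import worked, you can query one of the databases from Go with go-zero's sqlx. This is a throwaway sketch, not project code: the table name homestay_order is an assumption for illustration, and the DSN reuses the MySQL settings listed above.

package main

import (
	"fmt"

	"github.com/zeromicro/go-zero/core/stores/sqlx"
)

func main() {
	// Host/port/credentials as listed in the environment section above.
	dsn := "root:PXDN93VRKUm8TeE7@tcp(127.0.0.1:33069)/looklook_order?charset=utf8mb4&parseTime=true"
	conn := sqlx.NewMysql(dsn)

	var count int
	// homestay_order is an illustrative table name, not confirmed from the docs.
	if err := conn.QueryRow(&count, "select count(*) from homestay_order"); err != nil {
		fmt.Println("query failed:", err)
		return
	}
	fmt.Println("rows:", count)
}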

5. Start the project

$ docker-compose up -d 

The docker-compose.yml configuration it uses is in the project root directory.

6. Check that the project is running

Visit http://127.0.0.1:9090/, click “Status” in the top menu, then click “Targets”. Blue means a service started successfully; red means it did not.

[Note] If this is the first time you pull the project, each container has to fetch its dependencies for the first time. Depending on network conditions, some services may be slow to come up, which can cause a service to fail to start, or to fail because a service it depends on has not started yet. This is perfectly normal. If you run into a service that fails to start, such as order-api, just check its log:

$ docker logs -f order-api 

If the log shows order-api waiting for order-rpc, it means order-rpc took too long to start: order-rpc did not come up within a certain window, and order-api gave up waiting. Just restart order-api. This only happens the first time the containers are created; it will not recur as long as you don't destroy the containers. Go to the project root directory and restart it:

$ docker-compose restart order-api

[Note] Be sure to run the restart from the project root directory, because that is where docker-compose.yml lives.

Then check again, this time using docker logs:

  __    _   ___
 / /\  | | | |_)
/_/--\ |_| |_| \_ , built with Go 1.17.6

mkdir /go/src/github.com/looklook/app/order/cmd/api/tmp
watching .
watching desc
watching desc/order
watching etc
watching internal
watching internal/config
watching internal/handler
watching internal/handler/homestayOrder
watching internal/logic
watching internal/logic/homestayOrder
watching internal/svc
watching internal/types
!exclude tmp
building...
running...

You can see that order-api has started successfully; now look at Prometheus again.

Prometheus now shows it as up too. Check the other services the same way for a successful start.

7. Access the project

Because we use nginx as the gateway, and the nginx gateway is configured in docker-compose, the port nginx exposes is 8888, so we access the project through port 8888:

$ curl -X POST "http://127.0.0.1:8888/usercenter/v1/user/register" -H "Content-Type: application/json" -d "{\"mobile\":\"18888888888\",\"password\":\"123456\"}"

Returns:

{"code":200,"msg":"OK","data":{"accessToken":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NzM5NjY0MjUsImlhdCI6MTY0MjQzMDQyNSwiand0VXNlcklkIjo1fQ.E5-yMF0OvNpBcfr0WyDxuTq1SRWGC3yZb9_Xpxtzlyw","accessExpire":1673966425,"refreshAfter":1658198425}}
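
The same request from Go, if you prefer scripting the check (a plain net/http sketch; endpoint and payload exactly as in the curl above):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	body := `{"mobile":"18888888888","password":"123456"}`
	resp, err := http.Post(
		"http://127.0.0.1:8888/usercenter/v1/user/register",
		"application/json",
		strings.NewReader(body),
	)
	if err != nil {
		panic(err) // gateway not reachable
	}
	defer resp.Body.Close()
	data, _ := io.ReadAll(resp.Body)
	fmt.Println(string(data)) // expect {"code":200,"msg":"OK",...}
}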

[Note] If the request above succeeds, you can ignore this step. If accessing nginx fails, it is usually because nginx started before the backend services were ready; just restart nginx:

$ docker-compose restart nginx

VI. Log Collection

Project logs are collected into Elasticsearch (filebeat collects the logs -> kafka -> go-stash consumes the kafka logs -> output to Elasticsearch, and Kibana views the Elasticsearch data).
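
The services themselves need nothing special: go-zero writes structured JSON logs to stdout, and the pipeline above is pure plumbing. A minimal sketch of the producing side (assuming go-zero v1.3+ for logx.Infow; this is illustrative, not code from the project):

package main

import "github.com/zeromicro/go-zero/core/logx"

func main() {
	// go-zero services log JSON to stdout by default; filebeat ships
	// container stdout to kafka, and go-stash consumes kafka and
	// writes the entries into Elasticsearch for Kibana to query.
	logx.Infow("order created", logx.Field("orderId", 123))
}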

Since the pipeline runs through kafka, we need to create the logging topic in kafka ahead of time.

Enter the Kafka container

$ docker exec -it kafka /bin/sh

Create log topic

$ cd /opt/kafka/bin
$ ./kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic looklook-log

Visit Kibana http://127.0.0.1:5601/ to create the log index

Click the menu in the upper left corner (three horizontal lines) and go to Analytics -> Discover

Then on the current page, enter looklook-* -> Next step -> select @timestamp -> Create index pattern

Then click the menu in the upper left corner again and go to Analytics -> Discover; the logs should be displayed (if not, check filebeat and go-stash, e.g. with docker logs -f filebeat)

VII. Images Used in This Project

Once all services have started successfully, the images in use are the following, for reference:

  • nginx: gateway (nginx -> API -> RPC)
  • cosmtrek/air: the environment image our business development depends on. We use it because air hot-reloads and recompiles the code as you write it, which is extremely convenient. The image is air + Golang, and our business services actually run inside it.
  • wurstmeister/kafka: the kafka used by the business
  • wurstmeister/zookeeper: the zookeeper kafka depends on
  • redis: the redis used by the services
  • mysql: the database used by the services
  • prom/prometheus: monitors the services
  • grafana/grafana: Prometheus's own UI is ugly, so Grafana is used to display the data Prometheus collects
  • elastic/filebeat: collects logs into kafka
  • go-stash: consumes logs from kafka, desensitizes and filters them, and outputs them to Elasticsearch
  • docker.elastic.co/elasticsearch/elasticsearch: stores the collected logs
  • docker.elastic.co/kibana/kibana: displays the Elasticsearch data
  • jaegertracing/jaeger-query, jaegertracing/jaeger-collector, jaegertracing/jaeger-agent: distributed tracing

VIII. Project Development Suggestions

  • app holds the code of all the business services
  • common is the shared base library for all services
  • data holds the data generated by the middleware the project depends on; in real development this directory and the data inside it should be git-ignored
  • Generating API and RPC code:

When generating API or RPC code, the goctl command to type by hand is long and hard to remember, so we simply copy it from deploy/script/gencode/gen.sh. For example, when I added a new change-password feature to the usercenter service: after finishing the .api file, enter the usercenter/cmd directory and run the API-generation command copied from deploy/script/gencode/gen.sh:

$ goctl api go -api ./api/desc/*.api -dir ./api -style=goZero
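
For orientation, this is roughly the shape goctl generates for each route: a thin handler that parses the request and hands off to a logic type. The sketch below is illustrative only; the handler name, package paths, and exact generated calls depend on your goctl version and service layout:

package handler

import (
	"net/http"

	"github.com/zeromicro/go-zero/rest/httpx"

	// illustrative generated packages of the usercenter API service:
	"looklook/app/usercenter/cmd/api/internal/logic/user"
	"looklook/app/usercenter/cmd/api/internal/svc"
	"looklook/app/usercenter/cmd/api/internal/types"
)

func ChangePasswordHandler(svcCtx *svc.ServiceContext) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var req types.ChangePasswordReq
		if err := httpx.Parse(r, &req); err != nil {
			httpx.Error(w, err)
			return
		}
		// All business logic lives in internal/logic; the handler stays thin.
		l := user.NewChangePasswordLogic(r.Context(), svcCtx)
		resp, err := l.ChangePassword(&req)
		if err != nil {
			httpx.Error(w, err)
		} else {
			httpx.OkJson(w, resp)
		}
	}
}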

Generating RPC code is the same: after finishing the .proto file, copy and run the RPC-generation commands from deploy/script/gencode/gen.sh:

$ goctl rpc proto -src rpc/pb/*.proto -dir ./rpc -style=goZero
$ sed -i 's/,omitempty//g'  ./rpc/pb/*.pb.go

[Note] When generating the RPC files, it is recommended to also run the sed command above to strip the omitempty that protobuf adds to the json tags; otherwise fields with zero values will not be returned in responses.
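
To see what that sed edit actually changes, compare the json tags in a generated *.pb.go struct (an illustrative field, not taken from this project):

package pb

// Before the sed edit: with ,omitempty a zero-valued field is
// dropped from the JSON response entirely.
type UserRespBefore struct {
	Balance int64 `protobuf:"varint,1,opt,name=balance,proto3" json:"balance,omitempty"`
}

// After the sed edit: the field is always serialized, even when it is 0.
type UserRespAfter struct {
	Balance int64 `protobuf:"varint,1,opt,name=balance,proto3" json:"balance"`
}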

  • Generate kafka code:

    Because this project uses kq from go-queue as the message queue, and kq is built on kafka (it really is kafka doing the message queuing), the topic has to be created in advance; it is not created automatically. That statement is also prepared: just copy the create-kafka-topic command from deploy/script/gencode/gen.sh (a Go sketch of using kq follows this list).

     kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic {topic}
  • To generate model code, directly run deploy/script/mysql/genModel.sh with its parameters

  • In the API projects we split the .api files and put them in each API's desc folder, because putting everything into a single .api file makes it hard to read. So we split them: all the methods go into one .api file, and the other entities and req/resp types are defined separately in their own folder, which is clearer.

  • The goctl templates used in the project are redefined under the project's data/goctl directory
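
As referenced in the kafka item above, here is a minimal sketch of using kq from go-queue on both sides of a topic. It assumes the kq API as of early 2022 (newer go-queue releases add a context.Context parameter to Push and to the handler), and the broker address, group, and topic are illustrative:

package main

import (
	"fmt"

	"github.com/zeromicro/go-queue/kq"
)

func main() {
	brokers := []string{"127.0.0.1:9092"}

	// Producer: pushes onto a topic that must already exist (see above).
	pusher := kq.NewPusher(brokers, "looklook-log")
	if err := pusher.Push(`{"level":"info","content":"hello"}`); err != nil {
		fmt.Println("push failed:", err)
	}

	// Consumer: Conns/Consumers/Processors are set explicitly because we
	// are not going through conf.MustLoad, which would fill in defaults.
	q := kq.MustNewQueue(kq.KqConf{
		Brokers:    brokers,
		Group:      "looklook-group",
		Topic:      "looklook-log",
		Conns:      1,
		Consumers:  1,
		Processors: 1,
	}, kq.WithHandle(func(k, v string) error {
		fmt.Printf("key: %s, value: %s\n", k, v)
		return nil
	}))
	defer q.Stop()
	q.Start() // blocks until stopped
}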

IX. What's Next

Because the project involves a fairly large technology stack, it will be covered step by step in separate chapters, so stay tuned.

The project address

github.com/zeromicro/g…

Welcome to use go-zero and star it to support us!

WeChat communication group

Follow the official account “Microservice Practice” and click “exchange group” to get the QR code for the community group.