Hello! I'm an old hand in the Internet industry with more than a decade of technical experience, still fighting on the technical front line, currently working as an architect at a not-so-well-known Internet company. Thanks for your attention, and let's communicate more in the future.

This article takes a hands-on approach to showing how performance improves as the service deployment architecture evolves.

Refer to it as needed.

If you find it helpful, don't forget to like and share.

The WeChat official account CTO Union has been opened; if you haven't followed it yet, remember to follow!

1. Standalone deployment

1.1. Deployment architecture

If the database and the application run on the same physical machine, they will contend with each other for resources (CPU, memory, disk I/O).

1.2. Pressure test

Aggregate report: average response time and throughput (requests per second)

TPS curve:

RT curve:

Strict requirement: each interface should respond within tens of milliseconds, i.e. under 100 ms.
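The kind of pressure test described above (measuring average RT and TPS) can be sketched in plain Python as a minimal stand-in for a tool like JMeter or wrk. The in-process test server, request counts, and concurrency level are all illustrative assumptions, not the article's actual setup:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler
from urllib.request import urlopen

class OkHandler(BaseHTTPRequestHandler):
    """Tiny test endpoint standing in for the real application."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence per-request logging
        pass

def measure(url, total_requests=200, concurrency=20):
    """Fire `total_requests` GETs with `concurrency` workers;
    return (average RT in ms, throughput in requests/second)."""
    latencies = []
    lock = threading.Lock()

    def one_request():
        t0 = time.perf_counter()
        urlopen(url).read()
        with lock:
            latencies.append(time.perf_counter() - t0)

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(total_requests):
            pool.submit(one_request)
    elapsed = time.perf_counter() - start
    avg_rt_ms = 1000 * sum(latencies) / len(latencies)
    tps = len(latencies) / elapsed
    return avg_rt_ms, tps

if __name__ == "__main__":
    server = ThreadingHTTPServer(("127.0.0.1", 0), OkHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    rt, tps = measure(f"http://127.0.0.1:{server.server_port}/")
    print(f"avg RT: {rt:.1f} ms, throughput: {tps:.0f} req/s")
    server.shutdown()
```

Pointing `measure()` at the deployed service instead of the local stub gives the same two numbers the aggregate report tracks.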

2. Separate modes

2.1. Separated deployment architecture

The servers communicate over the intranet, so the network impact is negligible.

2.2. Pressure test

Aggregate report: roughly a 400 QPS improvement over standalone deployment

TPS curve:

RT (response time) curve:

CPU, I/O, and memory are now dedicated exclusively to the application.

3. Distributed deployment

3.1. Deployment architecture

Traffic estimation: single machine ≈ 3288 TPS; 2 machines: 3288 × 2 ≈ 6500+
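The back-of-envelope math above can be written out explicitly. The 3288 TPS figure is the article's single-machine result; the optional efficiency factor modeling load-balancer overhead is my assumption:

```python
def cluster_capacity(single_machine_tps, machines, efficiency=1.0):
    """Linear scaling estimate for N machines behind a load balancer.
    `efficiency` < 1.0 models LB/network overhead (an assumed factor,
    not something the article measured)."""
    return single_machine_tps * machines * efficiency

# One machine benchmarked at ~3288 TPS:
print(cluster_capacity(3288, 2))        # → 6576.0 (the "6500+" estimate)
print(cluster_capacity(3288, 2, 0.95))  # slightly lower with 5% overhead
```

In practice scaling is rarely perfectly linear, which is why the article rounds 6576 down to "6500+".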

3.2. Openresty

Download openresty.org/cn/download…

./configure

make && make install

Default installation path: /usr/local/openresty

Configuration:
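The article's configuration screenshot is not reproduced here, so below is a minimal sketch of what an OpenResty/nginx load-balancing configuration for this setup typically looks like. The upstream name, IP addresses, and ports are assumptions:

```nginx
# /usr/local/openresty/nginx/conf/nginx.conf (sketch; addresses are assumed)
http {
    # the two application nodes from the distributed deployment
    upstream app_servers {
        server 192.168.1.10:8080;
        server 192.168.1.11:8080;
    }

    server {
        listen 80;
        location / {
            # round-robin across the upstream nodes by default
            proxy_pass http://app_servers;
        }
    }
}
```

After editing the config, reload with `/usr/local/openresty/nginx/sbin/nginx -s reload`.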

3.3. Pressure test

Aggregate report:

TPS curve:

Ordinary servers: multiple services deployed on one machine result in poor performance and poor stability. High-end server (e.g. 64 CPUs, 128 GB RAM, IBM Unix, roughly ¥2,000,000): still a single-machine deployment, so a restart means downtime.

4. K8s elastic scaling

4.1. Cloud native

Alibaba: all projects moved to the cloud last year (orders: 500,000+/s)

Cloud native:

A massive fleet of servers forms one environment, and this environment can be treated as a whole, as a single computer (CPU: pooled, memory: pooled).

Problem: Difficult to manage

OpenStack builds enterprise-class private clouds (VMs); Docker containers are used to deploy the services. A large number of services means a large number of containers (Google: ~200 million containers per day).

Problem: Difficult to manage

Kubernetes

Cloud native:

3. DevOps: development + operations
4. CI/CD

CNCF later added service mesh to the cloud-native definition (service rate limiting, circuit breaking, governance...): Istio. The architect's role: build a project architecture suited to cloud native.

Serverless: "no servers" from the developer's perspective

4.2. Deployment architecture

If CPU or memory usage climbs too high, the Pod service must scale out immediately to cope with heavy traffic, so you also need to deploy an HPA (Horizontal Pod Autoscaler) for automatic scale-out and scale-in.

Stress test: as load rises, memory and CPU usage increase, the HPA scales out, and the sustainable concurrency increases.
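An HPA of the kind described above can be sketched as a Kubernetes manifest. The Deployment name, replica bounds, and 60% CPU threshold are assumptions, not the article's actual values:

```yaml
# HPA sketch (autoscaling/v2); target name and thresholds are assumed
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app          # the Deployment running the service Pods
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # scale out above 60% average CPU
```

Apply it with `kubectl apply -f hpa.yaml` and watch replicas change under load with `kubectl get hpa -w`.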

4.3. Deployment

The deployment uses several resource objects: a Service, an Ingress, and an HPA.

Throughput of a single Pod (1 CPU, 2 GB memory):

TPS:

4.4. Scaling implementation

You can see that when the traffic peak arrives, the Pod replicas are scaled out.