Kubernetes is a new, leading distributed-architecture solution based on container technology. Although the solution itself is new, it is the culmination of more than a decade of Google's experience in applying container technology at large scale. In fact, Kubernetes is an open-source version of Borg, Google's closely guarded secret weapon for more than a decade. Borg is Google's renowned internal large-scale cluster management system. Built on container technology, it automates resource management and maximizes resource utilization across multiple data centers, and for more than ten years Google has managed huge clusters of applications through it. Little was known about Borg for a long time, because Google employees sign confidentiality agreements that prevent them from revealing its internal design even after leaving the company. It was not until April 2015, when Google published the long-rumored Borg paper alongside its high-profile promotion of Kubernetes, that more details came to light. Thanks to the experience and lessons Borg accumulated over that decade, Kubernetes became an instant hit upon open-sourcing and quickly came to dominate the container space.

Furthermore, if our system design follows the design philosophy of Kubernetes, the low-level code and functional modules of a traditional architecture that have little to do with the business itself simply disappear from view: we no longer need to worry about selecting and deploying a load balancer, introducing or developing a complex service governance framework, or building our own service monitoring and troubleshooting modules.
In short, with the solution Kubernetes provides, we not only save no less than 30% of development cost but can also focus on the business itself; and because Kubernetes supplies powerful automation mechanisms, the difficulty and cost of operating and maintaining the system later on are greatly reduced.

Moreover, Kubernetes is an open development platform. Unlike J2EE, it is not tied to any programming language and does not impose any programming interface, so services written in Java, Go, C++, or Python can all be mapped to Kubernetes Services and interact with one another through the standard TCP protocol. In addition, the Kubernetes platform is non-invasive with respect to existing programming languages, frameworks, and middleware, so existing systems are easy to upgrade and migrate onto it.

Finally, Kubernetes is a complete platform for supporting distributed systems. It has comprehensive cluster management capabilities, including multi-level security protection and admission mechanisms, multi-tenant application support, transparent service registration and service discovery, a built-in intelligent load balancer, powerful fault detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource scheduling mechanism, and multi-granularity resource quota management. At the same time, Kubernetes provides a comprehensive set of management tools covering development, deployment, testing, and operations monitoring. Kubernetes is therefore a new distributed-architecture solution based on container technology, and a one-stop platform for developing and supporting distributed systems.

In Kubernetes, the Service is the core of the distributed cluster architecture. A Service object has the following key characteristics.
◎ It has a uniquely assigned name (such as mysql-server).
◎ It has a virtual IP (Cluster IP, Service IP, or VIP) and a port number.
◎ It provides some kind of remote service capability.
◎ It is mapped to a set of container applications that provide this service capability.

Currently, a Service's processes provide their services based on socket communication, for example Redis, Memcached, MySQL, a Web server, or a TCP server process implementing some specific business service. Although a Service is usually served by multiple related server processes, each with its own Endpoint (IP + Port) access point, Kubernetes lets us connect to the specified Service through a single virtual endpoint (Cluster IP + Service Port). With Kubernetes' built-in transparent load balancing and failover mechanisms, callers of the service need not care how many server processes sit behind it, or whether a server process has been redeployed to another machine after a failure. More importantly, the Service itself does not change once created, which means we no longer have to deal with the problem of changing service IP addresses in a Kubernetes cluster.

Containers provide powerful isolation, so it makes sense to isolate the group of processes that serve a Service inside containers. For this purpose, Kubernetes designed the Pod object: each server process is wrapped into a corresponding Pod and runs as a container inside it. To establish the relationship between a Service and its Pods, Kubernetes first attaches a Label to each Pod, such as name=mysql on the Pod running MySQL and name=php on the Pod running PHP, and then defines a Label Selector for the corresponding Service. For example, the Label Selector of the MySQL Service is name=mysql, which means the Service applies to all Pods carrying the Label name=mysql. This neatly solves the problem of associating Services with Pods.
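The Service-to-Pod association described above can be sketched as a manifest like the following. This is a minimal illustration, not an example from the text: the image tag, container port, and password value are assumptions, while the names (mysql-server, name=mysql) follow the examples given above.

```yaml
# Pod running a MySQL server process, tagged with the Label name=mysql
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql        # the Label that the Service's Label Selector will match
spec:
  containers:
  - name: mysql
    image: mysql:5.7   # illustrative image tag
    ports:
    - containerPort: 3306
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"   # illustrative only; use a Secret in real deployments
---
# Service whose Label Selector (name=mysql) binds it to the Pod above
apiVersion: v1
kind: Service
metadata:
  name: mysql-server
spec:
  selector:
    name: mysql        # applies to all Pods carrying the Label name=mysql
  ports:
  - port: 3306         # the Service Port behind the virtual Cluster IP
```

Applying this manifest (for example with `kubectl apply -f mysql.yaml`) gives the Service a stable Cluster IP; traffic sent to that virtual endpoint is transparently load-balanced across all Pods whose Labels match the selector.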