The meaning of a PaaS platform

In many companies, operations handles a constant stream of support requests from development: configuring domain names, setting up deployment environments, modifying or resetting configuration, restarting services, expanding capacity, sorting out and improving monitoring, pulling logs on request, and so on. Each of these needs a great deal of back-and-forth with development, for example: what a domain name is, what an internal domain name is, what A records and CNAMEs are, how firewall rules should be set, and what dependencies the underlying environment such as the operating system requires. Because many developers do not understand operations terminology and concepts, communication is difficult and inefficient. And because such requests are so numerous, operations is overwhelmed; they eat up nearly all available time, yet development needs may still be met late.

Such companies may encounter the following problems:

  • The system architecture is outdated, and its performance and reliability no longer meet current requirements.

  • The original IT architecture is inflexible, and adding or changing business modules brings huge cost pressure.

  • System functionality has sprawled, the structure is disorderly, and custom code is tightly coupled to the system.

  • There is a wide variety of services, and technology stacks have proliferated unchecked.

  • Handover between staff was inadequate; the new team does not understand the deployment environment, so it can only make peripheral fixes and dares not migrate anything.

How can these problems be solved? The answer is processes, standardization, automation, and platformization.

Processes

That is, proactively sort out operational tasks and form standardized operating procedures, especially for tasks that need multiple parties to complete, such as application release and deployment. Solidify these procedures into a process platform so that everyone executes strictly according to the process and no link is missed. The current state of each process is visible to all: you can clearly see who is working on it, and the handler is more motivated to finish the task as quickly as possible.

Standardization

From an architectural perspective, formulate deployment standards by application category, such as Web applications, service-oriented applications (JSF, used internally), or newer microservice applications (Spring Boot, etc.). Deployment scripts and tool platforms are then designed and developed according to these agreed specifications, which keeps the variety of applications from inflating the complexity of the tools and platform.

Automation

In the early days we wrote a lot of scripts: the task-execution machine authenticated to the servers with passwordless SSH keys, and commands were executed in batches as needed. As the number of servers and applications grew, this script-driven approach could no longer keep up with the needs of the business. Multiple scripts coexisted for the same type of task, leading to misoperations, and there was no clear operation history or rollback mechanism. Even after switching to configuration management tools such as Puppet, SaltStack, and Ansible, the fundamental problem remained unsolved.
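
For reference, here is a minimal sketch of the kind of batch task described above, written as an Ansible playbook; the inventory group, package path, and service name are hypothetical, not taken from the original setup.

```yaml
# Hypothetical batch operation: push a new package to the "web" host group
# and restart the application service. Names and paths are illustrative only.
- hosts: web
  become: yes
  tasks:
    - name: Copy the new application package
      copy:
        src: ./app.war
        dest: /opt/tomcat/webapps/app.war

    - name: Restart the application service
      service:
        name: tomcat
        state: restarted
```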

Platformization

Platformization is the focus of this share, and it must be built on the foundation of the previous three. Without clear processes and clear standards, platform construction becomes merely an integration of automation tools and cannot solve the company's core problems.

The following content focuses on PaaS platformization, that is, on the application and middleware perspective; IaaS is not discussed here.

PaaS platformization raises the focus from basic resources up to the application level. The goal is to provide a platform that helps developers run and manage applications, letting users pay more attention to the running code (the business logic).

Problems PaaS can solve:

  • Application aggregation: if development needs a Redis, just start a Redis container (a minimal sketch follows this list)

  • Service discovery, fast scaling, state management, and so on

  • Service monitoring, recovery, and disaster recovery

  • Cost accounting: summarize computing resource usage and bill different projects accordingly

  • Security management and control: security matters on any platform, for example allowing application A to access application B while forbidding B from accessing A, plus security auditing

  • Rapid deployment
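
As a minimal sketch of the Redis example above (the object names, labels, and image tag are illustrative, not the platform's actual manifests), a single Pod plus a Service is enough to give a development team its own Redis:

```yaml
# Illustrative only: a throwaway Redis instance for development.
apiVersion: v1
kind: Pod
metadata:
  name: dev-redis
  labels:
    app: dev-redis
spec:
  containers:
    - name: redis
      image: redis:5          # hypothetical version tag
      ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: dev-redis
spec:
  selector:
    app: dev-redis
  ports:
    - port: 6379
      targetPort: 6379
```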

With the emergence of Docker container technology, we have more suitable tools for building a PaaS platform and for delivering application-centric services.

Among the Docker container scheduling frameworks, we chose Kubernetes.

Why Kubernetes

First, the features and characteristics of the three mainstream container scheduling frameworks:

Kubernetes

Resource scheduling, service discovery, service orchestration, logical resource isolation, service self-healing, secure configuration management, Job/task support, automatic rollback, internal domain name service, health checks, stateful service support, runtime monitoring/logging, scaling up and down, load balancing, gray (canary) releases, disaster recovery, and application high availability.

Mesos

Mesos is a distributed kernel moving toward a data center operating system (DCOS); frameworks such as Marathon, Kubernetes, and Swarm can run on top of it. Mesosphere is also part of the Kubernetes ecosystem.

Note that Marathon's technology stack is Scala, while Docker and Kubernetes are both written in Go.

Swarm

Since Docker 1.12, Swarm has been bundled with Docker by default; because it ships with the Docker Engine, no additional installation is required and configuration is simple.

It supports service registration and service discovery, and has a built-in overlay network and load balancer.

Its commands are very similar to the Docker CLI, so anyone familiar with Docker can pick it up easily.

Each tool has its own core idea.

Mesos is a data center operating system (DCOS) designed to address networking, computing, and storage issues at the IaaS layer. At its core, Mesos addresses the physical resource layer.

The core of Kubernetes is how to automate deployment, scale, and manage containerized applications.

Therefore, I think Mesos and Kubernetes address two different dimensions. In our scenario we care more about application state than about managing the physical resource layer, that is, about managing containerized applications, and this is the main reason we chose Kubernetes.

In addition, other advantages factored into choosing Kubernetes, such as:

  • Born at Google, its design and development were influenced by Google's famous Borg system;

  • A large number of developers follow the Kubernetes project on GitHub and submit code. The community is active, so when problems arise it is quick to consult the community and get them solved;

  • Kubernetes supports stateful services very well.

Microservice-architecture applications on the PaaS platform

Let's look at a design based on the currently popular microservice architecture, that is, the typical application we want to run on our PaaS platform.

When client requests come in, they first reach the front-end Nginx servers and then the Zuul proxy gateway, which routes them to the different services. The Eureka cluster acts as the registry, providing service discovery and registration. The service back ends then call the other services they depend on, such as the database cluster and the Redis cluster.

Because the microservice architecture uses a registry to handle service registration and discovery automatically, internal services no longer depend on a traditional load balancer or similar tools, which makes it easy to migrate the individual services onto the Docker-based PaaS platform for unified management.
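
As a minimal sketch of how such a service joins the registry (the service name and Eureka addresses are hypothetical, not the author's configuration), a Spring Cloud application registers with the Eureka cluster through a few lines of application.yml:

```yaml
# Hypothetical application.yml: an order service registering with a two-node Eureka cluster.
spring:
  application:
    name: order-service
eureka:
  client:
    serviceUrl:
      defaultZone: http://eureka-0:8761/eureka/,http://eureka-1:8761/eureka/
```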

PaaS platform architecture design

Principles of Platform Construction

Before building the PaaS platform, we set down construction principles, borrowing from The Seven Habits of Highly Effective People:

  1. Begin with the end in mind: know what you want to achieve before you start, so that your actions can be organized around that goal.

  2. Seek first to understand: as operations staff, we need a relatively broad range of knowledge; only then can we develop plans and products that genuinely suit us.

  3. Be proactive: building a PaaS requires us to bring our own initiative to bear.

  4. Put first things first: the PaaS touches many of the areas mentioned above, but time and people are limited, so pick the most important pieces from the many infrastructure components and start building there.

  5. Think win-win: ideally the system benefits development, operations, and the company alike.

  6. Synergize: everyone has an opinion on the same issue, so ask your partners, gather all the viewpoints, and analyze and compare them thoroughly to reach a better result.

  7. Sharpen the saw: keep learning and embrace change, so that you can see the platform's limitations and keep iterating toward a better product.

Based on the above principles, our team sketched a minimal PaaS diagram:

  1. The uppermost Ingress service functions like a traditional load balancer, providing request distribution.

  2. A Service acts as a proxy for its back-end Pods; external clients need to reach a Service through the Ingress.

  3. A Pod corresponds to what we traditionally think of as a service.

  4. The minimal PaaS also uses a DNS component, which assigns an internal domain name for access once an internal service is up and running.

Kubernetes exposes services externally through LVS plus Ingress. Each time a new service is added, the DNS API is called once and an internal domain name is generated for access according to the naming rules.
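
A minimal sketch of that exposure path is shown below; the host name, service name, and port are illustrative, the Ingress API version shown is the current one, and the real internal domain name would be generated by the DNS platform's own rules.

```yaml
# Illustrative only: an Ingress routes the generated internal domain name
# to a Service, and the Service proxies the application's Pods.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: order-service
spec:
  rules:
    - host: order.paas.example.internal    # hypothetical generated domain name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: order-service
                port:
                  number: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
    - port: 8080
      targetPort: 8080
```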

Key platform capabilities

  • Continuous application deployment: supports rapid, visual application deployment. You only need to select the image and component, fill in the configuration, and click the Deploy button to install or upgrade the application. In the test environment, Jenkins provides continuous integration and fully automatic application upgrades, along with one-click rollback and resumable releases.

  • Application elastic scaling: an elastic-scaling subsystem with demand forecasting and on-demand container provisioning. It scales on application load and resource usage to cope with the high concurrency and traffic spikes typical of Internet users; scaling covers both containers and physical machines (a minimal autoscaling sketch follows this list).

  • Unified management of containers and components: from the perspective of the application as a whole, the platform manages not only images and containers but all components an application involves, such as front-end DNS, load balancing (F5/Nginx), and back-end databases. By centrally managing system components and containers, it provides unified deployment, configuration, upgrade/rollback, monitoring, and troubleshooting.

  • High reliability and container fault recovery: when a server goes down, the platform automatically restarts its containers on other servers and allocates resources to them, restoring service within seconds so the business stays online. Registry reliability is improved by expanding the standalone image registry into a registry cluster, making the Registry stateless and easier to run in a highly available setup.

  • Docker-based application packaging: the system supports common applications out of the box, such as Tomcat, JBoss, Nginx, Redis, ZooKeeper, and so on.
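
For the elastic-scaling capability above, a minimal sketch using Kubernetes' standard HorizontalPodAutoscaler is shown below; the target Deployment, replica bounds, and CPU threshold are hypothetical, and the demand-forecasting subsystem described in the text would be custom-built on top of signals like these.

```yaml
# Hypothetical example: keep order-service between 2 and 10 replicas,
# scaling on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: order-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```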

PaaS platform functional components

During implementation, there are several basic components to develop:

  • Image management: the actual application image is built automatically from a “base middleware image”, the “application package”, and the “configuration”, so developers do not need to understand images or build them by hand;

  • DNS management: a custom DNS management platform for internal use that centrally manages the company's DNS;

  • Service management: a custom set of Kubernetes deployment templates that defines everything from Ingress to Service to RC in one place, making it convenient to scale containers up and down and delete them (a template sketch follows this list);

  • Pod management within a Service: this falls within Kubernetes' own scope and covers checking how the Pods behind a Service are running, viewing Pod log output, and similar functions;

  • Log management: logs are shipped to the company's log platform (such as an ELK stack), which developers use to troubleshoot problems and to work with instrumentation (event-tracking) data;

  • Monitoring and management; reference solutions: cAdvisor + InfluxDB + Grafana, Heapster + InfluxDB + Grafana, Prometheus, or Zabbix.
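
As a minimal sketch of the service-management template mentioned above (the names and image address are hypothetical), the RC piece might look like the fragment below; combined with a Service and an Ingress like the ones shown earlier, one template then describes an application end to end.

```yaml
# Illustrative fragment: the ReplicationController part of a combined
# Ingress -> Service -> RC template. Names and image are hypothetical.
apiVersion: v1
kind: ReplicationController
metadata:
  name: order-service
spec:
  replicas: 3
  selector:
    app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: app
          image: registry.example.internal/order-service:1.0.0
          ports:
            - containerPort: 8080
```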

Once a small or medium-sized enterprise gets this far, the daily operations workload drops dramatically: two or three people can handle day-to-day application operations. If you are interested, you can try this yourself. Of course, even with all of this in place, it is still only a small PaaS platform.

What else does a more complex PaaS platform need?

Environment management, where a single platform manages multiple different Kubernetes clusters, plus functional modules such as security management, process management, and billing management.

It also needs corresponding optimizations to the network, I/O, and so on, because of the larger scale and higher reliability requirements.

There are many other functions not listed here; you can add modules according to your actual situation and design a PaaS platform with your own company's characteristics.

