The term “High Availability” describes a system that is specifically designed to reduce downtime and keep its services continuously available. In short, it means providing services to the outside world without interruption.

Early-stage architecture

Architecture diagram

  

Architecture in brief

This type of architecture suits startups or platforms with low traffic. It is generally used when the platform first goes into operation and the average daily PV is small, so a simple architecture is enough to handle user requests. For example, Apache or Nginx can serve as the front-end web server, the application server can run a Java environment such as Tomcat, most Internet platforms use MySQL for the database, and a backup server regularly backs up configuration files, code, and database data.
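To make the backup step concrete, here is a minimal Python sketch of such a nightly backup job, assuming hypothetical paths, a database named app_db, and MySQL credentials supplied outside the script (for example via ~/.my.cnf); a real setup would schedule it with cron and copy the resulting files to the backup server.

```python
import subprocess
import tarfile
from datetime import date

# Hypothetical paths and names -- adjust to the real environment.
BACKUP_DIR = "/data/backup"
SOURCE_DIRS = ["/etc/nginx", "/usr/local/tomcat/conf", "/var/www/app"]
DB_USER, DB_NAME = "backup_user", "app_db"   # password assumed to come from ~/.my.cnf

def backup_database(stamp: str) -> None:
    """Dump the MySQL database with mysqldump into a dated .sql file."""
    out = f"{BACKUP_DIR}/{DB_NAME}-{stamp}.sql"
    with open(out, "w") as f:
        subprocess.run(["mysqldump", "-u", DB_USER, DB_NAME], stdout=f, check=True)

def backup_files(stamp: str) -> None:
    """Archive configuration and code directories into one tarball."""
    with tarfile.open(f"{BACKUP_DIR}/files-{stamp}.tar.gz", "w:gz") as tar:
        for d in SOURCE_DIRS:
            tar.add(d)

if __name__ == "__main__":
    stamp = date.today().isoformat()
    backup_database(stamp)
    backup_files(stamp)
```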

Mid-stage architecture

As the number of users and visits grows, a single server can no longer withstand the access traffic, so the initial infrastructure needs to be adjusted as follows:

Architecture diagram

  

There are many load balancing options:

Hardware load balancing – F5, a Layer 4 or Layer 7 network proxy appliance

Software load balancing – open-source load balancers such as Nginx and HAProxy

Load balancing algorithms include the following (a few of them are sketched in code after the list):

Round robin

Random

Source address hashing

Weighted round robin

Weighted random

Least connections
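As referenced above, the following is a minimal Python sketch of a few of these strategies over a hypothetical back-end pool (the addresses and weights are illustrative only); production load balancers such as Nginx, HAProxy, or F5 implement these algorithms internally.

```python
import itertools
import random
import zlib

# Hypothetical back-end pool; addresses and weights are illustrative only.
SERVERS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]
WEIGHTS = [5, 3, 2]

# Round robin: hand requests to servers in a fixed rotating order.
_rr = itertools.cycle(SERVERS)
def round_robin() -> str:
    return next(_rr)

# Weighted random: pick a server with probability proportional to its weight.
def weighted_random() -> str:
    return random.choices(SERVERS, weights=WEIGHTS, k=1)[0]

# Source address hashing: the same client IP always maps to the same server.
def source_hash(client_ip: str) -> str:
    return SERVERS[zlib.crc32(client_ip.encode()) % len(SERVERS)]

# Least connections: route to the server with the fewest active connections.
_active = {s: 0 for s in SERVERS}
def least_connections() -> str:
    server = min(_active, key=_active.get)
    _active[server] += 1   # the caller should decrement when the request finishes
    return server
```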

Which approach to use depends entirely on the enterprise’s actual needs and on the input/output ratio (cost). This architecture also has shortcomings, even if we temporarily ignore the high availability of the front-end load balancing devices themselves. For example, there is the file upload and viewing problem: a user may upload a file through server A, but the next request to view it may be routed to server B, so the user cannot see the file; the same applies to the APP servers. The database servers also have a single point problem with the master and slave libraries: if a database server fails, it can be switched over manually, but this may cause data loss or inconsistency, which to a certain extent gives users a bad experience. So the architecture still needs to evolve.

Architecture diagram

  

To address the shared file problem, a network or distributed file system can be used, for example (a small sketch follows the list):

NFS network shared storage file system

FastDFS lightweight network file system

MooseFS distributed file system

GlusterFS file system
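As one possible illustration of how shared storage removes the upload/view problem, here is a minimal sketch that writes uploads to a mount point shared by all app servers (for example an NFS or GlusterFS mount); the path /mnt/shared/uploads and the function name are assumptions, not part of the original text.

```python
import os
import shutil

# Hypothetical shared mount point visible to every app server.
SHARED_UPLOAD_ROOT = "/mnt/shared/uploads"

def save_upload(user_id: str, filename: str, local_tmp_path: str) -> str:
    """Move an uploaded file onto shared storage so any app server can serve it."""
    target_dir = os.path.join(SHARED_UPLOAD_ROOT, user_id)
    os.makedirs(target_dir, exist_ok=True)
    target_path = os.path.join(target_dir, filename)
    shutil.move(local_tmp_path, target_path)
    return target_path
```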

However, every architecture should be driven by practical requirements (such as data integrity and consistency). At this stage the goals are to solve the single point problem of the master library and the performance problem of heavy reads on the slave libraries, ensuring platform performance and high availability under a certain level of traffic.
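One common way to relieve the master’s single point of pressure and spread read load is read/write splitting between the master and the slave libraries. The sketch below only illustrates the routing idea, with hypothetical database addresses; real deployments usually rely on middleware or driver-level support for this.

```python
import random

# Hypothetical database endpoints.
MASTER = "10.0.1.10:3306"
SLAVES = ["10.0.1.11:3306", "10.0.1.12:3306"]

def route(sql: str) -> str:
    """Very simple read/write splitting: writes go to the master,
    reads are spread across the slave (replica) databases."""
    is_read = sql.lstrip().lower().startswith("select")
    return random.choice(SLAVES) if is_read else MASTER

# Example: a query is routed to a replica, an insert to the master.
print(route("SELECT id FROM users WHERE name = 'alice'"))  # -> one of the slaves
print(route("INSERT INTO users(name) VALUES ('bob')"))     # -> the master
```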

Final-stage architecture

Architecture diagram

  

Architecture in brief

The architecture eventually evolves to a point where horizontal scaling, that is, simply adding more servers, may no longer solve the performance problems on its own. Therefore:

First, CDN acceleration is applied at the entry point of user access to improve the user’s access experience;

Second, the system can be split into separate services by business domain. By splitting services and deploying multiple instances of the same service, the platform can be scaled out, which improves performance and helps ensure high availability. Service separation also eases releases to a certain extent: because services are independent of each other and loosely coupled, day-to-day maintenance becomes more convenient.

Third, to address the bottleneck between users and the database, a caching layer can be added to improve access performance and availability. A large share of read traffic, sometimes more than half, can be filtered out before it reaches the database, reducing database pressure. Caching is especially suitable for read-heavy, write-light workloads, where it speeds up query services and relieves the database. Redis is usually used as the cache server, and its clustering mechanisms largely ensure the high availability of the cache service; if the cache fails, data can still be read from the database. In addition, the backup servers are adjusted slightly to ensure data integrity and timely recovery of the applications. Some large distributed systems prefer Memcached instead; for example, when users’ login information is synchronized to other systems, the cache is generally deployed between the front-end application layer and the data storage layer to absorb the large volume of requests, respond quickly to the front end, reduce database access pressure, and improve the user experience.
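As an illustration of the caching idea, here is a minimal cache-aside sketch using the redis-py client; the host address, key format, TTL, and the load_user_from_db placeholder are assumptions, not part of the original text. On a cache miss or a cache failure the code falls back to the database, matching the behaviour described above.

```python
import json
import redis

# Hypothetical cache settings.
cache = redis.Redis(host="10.0.2.10", port=6379, db=0)
TTL_SECONDS = 300

def load_user_from_db(user_id: str) -> dict:
    """Placeholder for the real database query."""
    return {"id": user_id, "name": "example"}

def get_user(user_id: str) -> dict:
    """Cache-aside read: try Redis first, fall back to the database on a miss."""
    key = f"user:{user_id}"
    try:
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)
    except redis.RedisError:
        pass  # cache failure: fall back to the database, as described above
    user = load_user_from_db(user_id)
    try:
        cache.setex(key, TTL_SECONDS, json.dumps(user))
    except redis.RedisError:
        pass  # a failed cache write should not break the request
    return user
```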

Parts of the platform architecture are also layered:

Front-end application layer:

Provides users with page display, search and query interfaces, and the related final result pages.

Back-end service layer:

Generally includes common background services such as user management, interface management, and payment management, which provide services to the front-end application.

Data storage layer:

This is easy to understand: databases, file systems, caches, and so on provide data access and storage.

Hence the emergence of the following type of architecture.