Preface

At present, organizations across industries are actively embracing cloud computing, but for historical and compliance reasons many enterprises find it difficult to move entirely to the cloud. For example, regulatory and compliance requirements may mandate that certain core databases stay in the local data center; local data centers, as enterprise fixed assets, are not easy to write off all at once; and the IT architecture of some large groups is so complex that the impact of a full migration to the cloud is hard to assess. As a result, many enterprises in China still run on a traditional IT architecture, deploying services on their own servers or in rented IDC facilities.

A local or IDC-hosted server cluster is limited in computing, storage, backup, and security protection, while the public cloud offers flexible scaling, elastic pay-as-you-go billing, and strong security and reliability. If the local environment and the public cloud are combined into a hybrid cloud architecture, an enterprise can keep its local deployment independently controllable while also enjoying the dividends of the public cloud.

Gartner predicts that in 2021, more than 75% of medium and large enterprises will adopt some form of hybrid cloud and/or hybrid IT strategy. IDC predicts that by 2022, more than 90% of the world’s enterprises will rely on a combination of locally deployed/dedicated private clouds, multiple public clouds, and traditional platforms to meet their infrastructure needs. A hybrid IT strategy, which combines the flexibility and low cost of the public cloud with the privacy and security of local deployment, is being widely adopted by enterprises around the world.

Hybrid cloud background

Before cloud computing emerged, and in its early days, many enterprises chose to rent servers in a traditional IDC or to build their own IDC as the local environment. Renting IDC servers in particular carried enterprise IT systems for a long time and was far more convenient than purchasing servers and building networks oneself. Compared with cloud computing, however, traditional IDC suffers from long delivery cycles, poor elasticity of resources, and large one-time hardware costs.

The local environment also includes servers that the enterprise deploys in its own offices, generally used to run internal systems, internal test environments, the official website, and other services. Even after these businesses move to the cloud, it is difficult to retire the local servers all at once.

In addition, many enterprises build the local environment as a private cloud. Although a private cloud offers advantages over IDC such as on-demand billing and independent management, it also faces problems such as insufficient data storage and backup capacity, an incomplete product line, and weak security protection.

Given the above problems, a hybrid cloud architecture is undoubtedly the best choice for enterprises. While retaining the original local data center resources, an enterprise can use the public cloud platform to scale resources elastically, making up for the local data center's shortcomings in security protection, data storage, and backup.

Three forms of hybrid cloud architecture



(Architecture Diagram: UCloud Hybrid Cloud Architecture Solution)

As shown in the figure above, a hybrid cloud architecture generally takes one of the following forms:

Based on compliance and system requirements, some users in specific industries do not run their business on the public cloud; instead they purchase hardware servers themselves or rent IDC space. The drawback is high maintenance cost and the need for a professional operations team.

Users who have already purchased or rented a server cluster can consider hosting those servers on the UCloud platform, which offloads hardware operation and maintenance from the user and provides the same level of power, networking, and resource maintenance as a public cloud data center.

Alternatively, UCloud's cloud computing virtualization operating system can be deployed privately, that is, delivered as the UCloudStack private cloud. This replicates the usage model of the public cloud, provides a stable cloud operating system with a range of products and services, and includes operation and maintenance by the UCloud technical team, delivering a private cloud to users in a "turnkey" manner.

IDC server + public cloud

Cloud computing first appeared in 2006 and only began to develop in China around 2010 or later, while traditional IT systems, server clusters, and data centers had long been mature, using virtualization technologies such as VMware or the OpenStack operating system. Banks, government agencies, traditional enterprises, e-commerce platforms, and others have built very complete infrastructure and application systems on this traditional IT architecture.

These devices and systems run stably, but as the business develops they encounter problems such as aging equipment that needs replacement, system capacity that can no longer support business growth, and limited security protection. Upgrading equipment also means repurchasing resources and estimating usage for the next few years, which in turn means paying for redundant computing capacity.

With a hybrid cloud architecture, the original server cluster or data center can be connected to the public cloud over the network. The application system and part of the data are replicated to the public cloud, while all write operations on the core database remain in the local environment, so that more business requests can be distributed to back-end resources on the cloud platform.

Managed cloud + public cloud



(Architecture diagram: UCloud hosted hybrid cloud architecture)

The managed (hosted) server model is more suitable for projects that need to move to the cloud urgently, do not want to modify or adapt the existing business, and at the same time need to keep part of the core business physically isolated. By moving at the server level, the services can run in the managed cloud zone exactly as they did in the original local environment.

The managed cloud can serve as an intermediate stage in a complete migration from the local environment to the public cloud. Alternatively, the managed zone can act as the user's "private IDC" on the public cloud, hosting the user's own physical servers and connecting to the public cloud zone over an internal dedicated line. Resource expansion and the adoption of more public cloud services then take place in the public cloud zone, finally forming a hybrid cloud architecture of the managed zone plus the public cloud zone.

Private cloud + public cloud

The IDC / server cluster form mentioned above means that users deploy VMware, OpenStack, or similar systems on their servers, or handle virtualization on their own, which carries a relatively high maintenance cost and requires a mature technical team. A better option is to privately deploy a cloud vendor's cloud computing operating system.

UCloud provides a more lightweight cloud operating system, UCloudStack. Because it is optimized at the cloud operating system level, the deployment scheme needs only three or more cloud servers for the management nodes. By default only part of the basic cloud computing functions are configured, and more products and services can be delivered in the private deployment by selecting additional modules. UCloudStack can run a POC on a single server and build a production environment on three servers, with deployment taking only a few hours. It is suitable for users who cannot use the public cloud in the short term for security or compliance reasons but still have cloud or virtualization requirements. UCloudStack also has an interface for connecting to the public cloud, making integrated management of the private cloud and the public cloud straightforward.

Comparison of three forms:

Usage scenarios for hybrid cloud architectures

1. Expand computing power

Because data center capacity and server cluster computing power are limited, the local environment cannot scale resources quickly. Especially in the face of a sudden surge in business traffic, a predicted traffic peak, or temporary traffic jitter, it is difficult to complete resource expansion in a short time. If servers are sized for peak traffic, resources sit idle for long periods during the business lull, which is not cost-effective.

For example, in the new retail scenario, business traffic keeps rising during the Singles' Day promotion and falls afterwards, a typical pattern of traffic peaks and troughs. The platform needs to carry huge traffic during the peak while keeping overall expenditure down.

The solution

Connecting the local environment to the public cloud to form a hybrid cloud architecture allows the computing capacity of the local environment to be expanded quickly. To build the hybrid cloud architecture, the network must be connected first so that cross-platform database writes and component calls are possible. Next, the business and data of the local environment are synchronized to the cloud so that the cloud can carry business traffic. Finally, the traffic is split and part of it is forwarded to the cloud platform. In this way, a hybrid cloud architecture greatly extends the computing power of the local environment.



(Architecture diagram: Expanding computing power through UCloud hybrid cloud architecture)

  1. Connect the local environment and the cloud platform through a UConnect dedicated line, the public network, or SD-WAN;
  2. Following the typical three-tier architecture of Internet business, design the application layer and logic layer to be stateless so that they can be deployed independently in the local environment and in the cloud environment;
  3. Use the local self-built database as the master and the cloud database as the slave to synchronize data (the cloud database can also serve as the master). The master and slave maintain one-way data synchronization through the binlog mechanism of MySQL or a similar database. The logic layer in each environment connects to its nearby master or slave database for read operations, while write operations always connect to the master database in the local environment (see the sketch after this list);
  4. Route normal traffic to the local environment and the additional traffic to the cloud platform;
  5. On the cloud platform, auto scaling (UAS) can automatically add or remove UHost cloud servers according to monitoring data such as CPU utilization to cope with changes in business traffic.
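To make step 3 concrete, here is a minimal sketch of read/write splitting between the local master and the cloud replica, assuming MySQL and the PyMySQL client; the hostnames, credentials, and the `orders` table are illustrative placeholders rather than part of the original architecture.

```python
import pymysql

# Placeholder endpoints: the on-premises master handles all writes, while the
# cloud replica (kept in sync through MySQL binlog replication) serves reads.
LOCAL_MASTER = dict(host="10.0.0.10", user="app", password="***", database="shop")
CLOUD_REPLICA = dict(host="172.16.0.20", user="app", password="***", database="shop")

def get_connection(for_write: bool):
    """Route write connections to the local master and reads to the nearby replica."""
    cfg = LOCAL_MASTER if for_write else CLOUD_REPLICA
    return pymysql.connect(**cfg)

def create_order(order_id: str, amount: int) -> None:
    conn = get_connection(for_write=True)   # writes always go to the local master
    try:
        with conn.cursor() as cur:
            cur.execute("INSERT INTO orders (id, amount) VALUES (%s, %s)",
                        (order_id, amount))
        conn.commit()
    finally:
        conn.close()

def get_order(order_id: str):
    conn = get_connection(for_write=False)  # cloud-side reads hit the cloud replica
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT id, amount FROM orders WHERE id = %s", (order_id,))
            return cur.fetchone()
    finally:
        conn.close()
```

In the cloud environment the same logic layer points its read connection at the cloud slave, while writes still cross the dedicated line back to the local master.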

The above describes a hybrid cloud built local-environment-first, so the master database is deployed in the local environment. In a hybrid cloud architecture the master database can sit on either side, with the slave database on the other.

If the business runs locally, the current load is close to the upper limit of its computing capacity, and that limit is expected to be reached within a foreseeable period, part of the business traffic can be shifted to the public cloud, starting with a small share.

2. Expand storage and backup capacity

Local storage has limited capacity, is hard to expand in time, and future storage requirements are difficult to predict. With a hybrid cloud architecture, data storage can be expanded onto the public cloud, whose capacity is effectively "unlimited" for users; users can focus on accessing their data while the cloud platform takes care of capacity and reliability.

The solution



(Architecture Diagram: Expanding Storage Backup Capability through UCloud Hybrid Cloud Architecture)

To store logs and other data on the public cloud, the local environment is first connected to the public cloud for intranet communication, and the local data is then written to the cloud. A host in the local environment is set aside as a storage gateway; it collects local data and, according to the configured rules, transfers it to UFS or to US3 object storage on the UCloud public cloud. Logs from the local environment can also be shipped directly to the public cloud by pointing the log output at the intranet IP of the public cloud Elasticsearch service.
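As an illustration of the gateway's upload step, the sketch below pushes a rotated log file to an S3-compatible object storage bucket with boto3. Whether US3 is reached this way should be confirmed against the UCloud documentation; the endpoint, credentials, and bucket name are placeholders.

```python
import boto3  # pip install boto3

# Placeholder S3-compatible endpoint and credentials for the object storage
# service; substitute the real values from the cloud console.
client = boto3.client(
    "s3",
    endpoint_url="https://object-storage.example.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

def ship_log(local_path: str, bucket: str = "hybrid-logs") -> None:
    """Upload one rotated log file from the on-premises storage gateway."""
    key = "gateway/" + local_path.rsplit("/", 1)[-1]
    client.upload_file(local_path, bucket, key)

if __name__ == "__main__":
    ship_log("/var/log/app/access-20240101.log.gz")
```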

Some users have their own business needs, compliance requirements, or industry regulations for data backup, and building a standards-compliant backup data center usually takes a long time and a large budget. A UCloud public cloud availability zone in the same city as the local environment is therefore a very good backup target: the public cloud data center is at least Tier 3, its operation has been validated by a large number of user workloads, and its stability and security are beyond doubt. A dedicated storage space for backups is created in UCloud US3 object storage; the local environment collects and encrypts the data through the storage backup gateway before uploading it to US3. In US3, long-term backup files can be kept as "infrequent-access storage" under a lifecycle policy to reduce cost, and after one or three months they can be automatically transitioned to "archive storage" according to the configured policy to reduce storage costs further.
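The tiering policy described above could be expressed roughly as follows, assuming the object storage exposes an S3-compatible lifecycle API; the storage class names and the 30/90-day thresholds are placeholders to be replaced with the platform's actual values.

```python
import boto3

client = boto3.client("s3", endpoint_url="https://object-storage.example.com")

# Hypothetical lifecycle rule: move backups under backup/ to an infrequent-access
# class after 30 days and to an archive class after 90 days.
client.put_bucket_lifecycle_configuration(
    Bucket="hybrid-backup",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backup/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```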

In addition, warm standby can be implemented on the UCloud public cloud by migrating hosts from the local environment to the cloud. There is no need to mirror the full number of hosts in the production environment; selecting a minimal running environment in the cloud is enough to provide disaster recovery for the local environment under the hybrid cloud architecture.

When local storage capacity is limited, expansion is inconvenient, and future storage requirements are hard to predict, the local data can be desensitized and then stored or backed up to the public cloud.

3. Expand security protection capability

Because security protection equipment in the local environment is upgraded slowly, local capabilities for attack protection and risk prediction are limited against ever-evolving attacks. In addition, the local environment usually relies on hardware WAF and access devices for attack detection and interception, so it is difficult to deploy hardware security services quickly enough to respond to large-scale network attacks.

The solution

The UCloud platform provides computing, storage, and security services for a large number of tenants. Facing increasingly varied and complex attacks, a cloud service provider must keep investing time and energy to keep its users' businesses secure and reliable, so it can offer better security solutions and has more practical experience in handling security attacks.

In a hybrid cloud architecture, when the business runs in the local environment, all traffic can first be directed to the cloud and filtered by the cloud security services; the normal business traffic is then split between the back ends in the local environment and the cloud environment for processing. When a network attack occurs, the attack traffic is likewise routed to the cloud platform for scrubbing, so that it is filtered and blocked there.

From the perspective of security protection, the cloud platform is effectively an extension of the local environment. Besides using the massive resources of the cloud platform to scrub DDoS attack traffic, the businesses, resources, and data in the local environment can rely on the richer security product portfolio and stronger protection of the UCloud platform for measures such as attack interception, risk early warning, and risk identification.



(Architecture diagram: providing DDoS high-protection service for the local environment through UCloud hybrid cloud architecture)

As shown in the figure, when a DDoS attack occurs, requests can be switched to the cloud and the traffic scrubbed by the DDoS high-protection service deployed in front of the business system. After all requests have been scrubbed and filtered in the cloud, the normal traffic is split between the back-end resources in the local environment and the cloud environment for processing. UCloud distinguishes two cases:

When the attack traffic is below 10 Gbps, the high-protection service scrubs the traffic automatically and applies a variety of defense strategies against network-layer attacks such as malformed TCP packet attacks, SYN flood, and ACK flood. This traffic scrubbing is usually provided free of charge to handle small-scale attacks and traffic bursts, and because it is handled by the DDoS high-protection service, administrators do not need to intervene;

When the attack traffic exceeds 10 Gbps, it is recommended to use a high-protection IP to divert the traffic and hide the origin IP. This requires manual intervention to replace the origin IP with the high-protection IP, and the switchover also takes some time to take effect.

Beyond DDoS protection, Web application attacks can be handled by the cloud Web application firewall (WAF), which defends against CC attacks, SQL injection, XSS attacks, and so on. When an application is onboarded, the WAF assigns a CNAME domain; a new CNAME record added at the DNS provider routes the traffic into the WAF, and after filtering the traffic is returned to the origin IP, which can be in the local environment, in the cloud, or on another cloud platform.
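Before cutting traffic over, it can be worth verifying that the domain really resolves to the WAF-assigned CNAME. Below is a small sketch using the dnspython library; both domain names are placeholders.

```python
import dns.resolver  # pip install dnspython

DOMAIN = "www.example.com"                 # protected business domain (placeholder)
WAF_CNAME = "example.waf.example-cdn.net"  # CNAME assigned by the WAF (placeholder)

def routed_through_waf(domain: str, expected: str) -> bool:
    """Check that the domain's CNAME record points at the WAF entry point."""
    try:
        answers = dns.resolver.resolve(domain, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False
    return any(str(r.target).rstrip(".") == expected.rstrip(".") for r in answers)

if __name__ == "__main__":
    print(routed_through_waf(DOMAIN, WAF_CNAME))
```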

If the local environment has no effective security protection and scrubbing of DDoS and CC attack traffic is needed, all traffic can take a detour through the public cloud to be filtered and scrubbed before returning to the origin.

4. Expand product capabilities

The capabilities of the products in the local environment are limited, perhaps covering compute virtualization, storage, a MySQL database, Hadoop, and similar services. As the business develops, beyond scaling compute and storage, more technical capabilities may be needed, such as connecting to a data lake, analyzing and processing massive logs, or adopting a serverless development framework. Installing these in the local environment requires long-term maintenance and a certain technical threshold.

The solution

For example, host logs and business logs are generated in the local environment and stored locally in a self-built Elasticsearch cluster. After an upgrade, a weekly report now needs to be generated and pushed to the relevant administrators or users. The public cloud log service can store the logs, and spot (bidding) instances can handle the report task. The specific steps are as follows:

The local log analysis module can be moved to the public cloud. As logs are generated they are shipped through tools such as Logstash to the cloud Elasticsearch service for storage and analysis. Log collection and storage require the ELK stack of big data components; using Elasticsearch on the public cloud avoids installing and deploying it yourself and saves maintenance cost.
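Where running Logstash is inconvenient, log records can also be written straight to the cloud Elasticsearch over the intranet with the official Python client, as in this sketch (assuming an 8.x `elasticsearch` client; the endpoint, credentials, and index name are placeholders).

```python
from datetime import datetime, timezone
from elasticsearch import Elasticsearch  # pip install elasticsearch

# Placeholder intranet endpoint and credentials of the cloud Elasticsearch service.
es = Elasticsearch("http://10.9.0.100:9200", basic_auth=("elastic", "***"))

def ship_log_line(source: str, message: str) -> None:
    """Index one log record directly into the cloud Elasticsearch cluster."""
    es.index(
        index="app-logs",
        document={
            "@timestamp": datetime.now(timezone.utc).isoformat(),
            "source": source,
            "message": message,
        },
    )

if __name__ == "__main__":
    ship_log_line("web-01", "GET /index.html 200")
```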

Generating and pushing reports is not a real-time requirement, and doing it locally would take resources away from the current core business. The report generation and push function can be built on a cloud host or triggered as a serverless function, and costs are reduced by flexibly using pay-as-you-go computing resources on the public cloud. With spot (bidding) instances, the task waits until a spot cloud host is obtained, processes the report, and promptly writes or delivers the report data to US3 object storage. If no spot instance can be obtained for a long time and the report deadline is approaching, a regular cloud host is created for the computing task instead, so that the business is not affected (a decision sketch follows).
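The decision sketch below captures that fallback: keep trying for a spot instance while there is still time, then switch to a regular cloud host so the report deadline is met. The two provisioning helpers are hypothetical placeholders, not real SDK calls.

```python
import time
from typing import Optional

# Hypothetical stand-ins for the cloud provider's SDK calls; these are not real
# API names, just placeholders for "try to obtain a spot/bidding instance" and
# "create a regular pay-as-you-go cloud host".
def request_spot_instance() -> Optional[str]:
    ...

def create_on_demand_instance() -> str:
    ...

def acquire_report_worker(deadline_ts: float, poll_seconds: int = 300) -> str:
    """Prefer a cheap spot instance for the weekly report job, but fall back to a
    regular cloud host if no spot capacity arrives before the push deadline."""
    while time.time() < deadline_ts - poll_seconds:
        instance_id = request_spot_instance()
        if instance_id:
            return instance_id              # got spot capacity; run the report here
        time.sleep(poll_seconds)            # wait and retry while there is still time
    return create_on_demand_instance()      # guarantee the report still ships on time
```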

As another example, workloads that train AI models can also be handed over to the UCloud public cloud. Transferring data over the public network usually takes a long time; if that is unacceptable, a dedicated line or shipped hard drives can be used to upload the data, or the data can be loaded and the model trained directly on the UCloud UAI-Train platform. The trained model is then brought back to the local environment, each optimized model is synchronized back as well, and UAI-Inference online services are provided locally.

Functional components such as big data analysis and AI model training that can be decoupled from the main business can be computed on the public cloud; the results can then be transferred back to the local environment, stored in the cloud, or pushed out as messages, reducing local deployment, operation, and maintenance.

5. Achieve smooth business transition

When designing and implementing a business migration plan, it is difficult to fully grasp the existing IT resources and business architecture. Migrating to the public cloud involves changing and replacing some products, and there are doubts about whether the system will really work after the migration, all of which makes business migration difficult.

The solution

First, replicate the business and data so that the local environment and the public cloud hold the same business and data, split the traffic through global load balancing, and switch over once the business is stable. The advantage of this approach is that the migrated business is validated with a small share of real requests, ensuring that the migrated environment is usable.



(Architecture diagram: smooth business migration through UCloud hybrid cloud architecture)

As shown in the figure above, the process of achieving smooth business migration can be divided into the following steps:

1. Replicate the business and data to the cloud and provide service through the hybrid cloud architecture during the migration. The application layer keeps no state, which makes it easier to copy the business to the public cloud;
2. For the database, copy the historical data first, then gradually achieve incremental data synchronization with the migration tool UDTS. All database write operations stay in the local environment, while read operations from cloud hosts in the public cloud can go to the database in the public cloud;
3. During the overall migration a temporary hybrid cloud architecture is formed, and user request traffic is split between the local environment and the public cloud environment in a certain proportion;
4. In the migration cutover window, write operations for the database, message queue, and other services are switched from the local environment to the public cloud, while the original read operations remain unchanged;
5. Gradually reduce the traffic to the local environment to zero, completing the smooth transfer of the business and data to the cloud (see the sketch after this list);
6. Finally, clean up the business and data in the local environment.
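As a toy illustration of step 5, the sketch below steps the global load-balancing weights toward the cloud while watching a health signal; `set_traffic_split` and `healthy` are hypothetical hooks rather than a real API.

```python
import time
from typing import Callable

def set_traffic_split(local_pct: int, cloud_pct: int) -> None:
    """Hypothetical hook that applies weighted global load-balancing records;
    in practice this would call the DNS or GSLB provider's API."""
    print(f"routing {local_pct}% of requests on-premises, {cloud_pct}% to the public cloud")

def ramp_traffic_to_cloud(steps=(95, 80, 50, 20, 0),
                          soak_seconds: int = 3600,
                          healthy: Callable[[], bool] = lambda: True) -> None:
    """Shift traffic to the cloud step by step, validating the migrated
    environment with a small share of real requests at each step."""
    for local_pct in steps:
        set_traffic_split(local_pct, 100 - local_pct)
        time.sleep(soak_seconds)            # soak period at this split
        if not healthy():                   # roll everything back if checks regress
            set_traffic_split(100, 0)
            raise RuntimeError("cutover aborted: health check failed")
```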

This smooth migration mode based on a hybrid cloud architecture can be considered for both business migration and data migration.

Conclusion

This article has introduced three forms of hybrid cloud architecture: connecting rented IDC or self-owned servers with the public cloud, connecting a private cloud (a privately deployed cloud operating system) with the public cloud, and connecting a managed cloud with the public cloud. It has also focused on five scenarios for extending the local environment through a hybrid cloud architecture and the corresponding solutions. In the author's view, whether the local environment is rented IDC, self-purchased servers, a private cloud, or managed servers, its resources and capabilities are limited, while public cloud resources and capabilities are "infinitely" extensible. Hybrid cloud architectures can therefore be used to extend computing power, storage and backup, security protection, and access to new products and services, as well as to achieve a smooth business transition during migration.

In the next part, we will continue with network connectivity under the hybrid cloud architecture, achieving global load balancing through DNS and forwarding clusters, the cloud management platform (CMP) for unified management of the hybrid cloud architecture, cross-platform scheduling of resources, and hands-on experience in ensuring business continuity.