Preface

At present, all industries are actively embracing cloud computing. However, due to historical reasons and compliance requirements, many enterprises find it difficult to move fully to the cloud. For example, some core databases must stay in local data centers, and local data centers, as fixed assets of an enterprise, cannot simply be abandoned. Some large conglomerates have complex IT architectures, and it is difficult to assess the impact of an overall migration to the cloud. As a result, many domestic enterprises still run on a traditional IT architecture, deploying services on their own servers, in IDCs, and on similar infrastructure.

Server clusters hosted locally or in an IDC are limited in computing, storage, backup, and security protection, while the public cloud offers flexible scaling, elastic pay-as-you-go billing, and strong security and reliability. If the local environment and the public cloud are merged into a hybrid cloud architecture, the local deployment remains independently controlled while the enterprise also enjoys the dividends of the public cloud.

According to Gartner, more than 75% of medium to large enterprises will adopt some form of hybrid cloud and/or hybrid IT strategy in 2021. IDC predicts that by 2022, more than 90% of enterprises worldwide will rely on a combination of on-premises deployments/dedicated private clouds, multiple public clouds, and legacy platforms to meet their infrastructure needs. Hybrid IT strategies, which combine the flexibility and low cost of the public cloud with the privacy and security of local deployment, are being widely adopted by global enterprises.

Hybrid cloud background

Before the emergence of cloud computing, and during its early development, many enterprises chose to rent traditional IDC facilities or build their own IDC server rooms for local environments. Renting IDC servers in particular carried enterprise IT systems for a long time: compared with buying servers and building networks themselves, it provided great convenience. However, compared with cloud computing, traditional IDC has many problems, such as long delivery cycles, poor elastic expansion of resources, and large one-time hardware costs.

The local environment also includes servers deployed in the office, which run services such as internal systems, internal test environments, and official websites. Even if these services move to the cloud, the local servers cannot all be removed at once.

In addition, many enterprises use private clouds to set up local environments. Although private clouds have advantages over IDC in on-demand billing and independent management, they also face problems such as insufficient data storage and backup capabilities, incomplete product lines, and weak security protection capabilities.

Given the above problems, a hybrid cloud architecture is undoubtedly the best choice. Enterprises can use public cloud platforms to flexibly expand resources while retaining the original local DC resources, making up for the local DC's shortcomings in security protection and in data storage and backup.

Three forms of hybrid cloud architecture

(Architecture diagram: UCloud Hybrid Cloud architecture solution)

As shown in the figure above, the forms of hybrid cloud architecture generally include:

Driven by compliance and system requirements, some users in specific industries choose to purchase hardware servers or rent IDC facilities instead of using the public cloud. The drawback is high maintenance labor cost, requiring a professional operation and maintenance team.

If you have already purchased or rented a server cluster, you can host the servers on the UCloud platform, reducing hardware operation and maintenance on your side while enjoying the same power, networking, and resource maintenance services as public cloud data centers.

In contrast, private deployment of UCloud's cloud computing virtualization operating system, that is, the UCloudStack private cloud delivery model, replicates the usage mode of the public cloud: users get a stable cloud operating system and its various products and services, with the UCloud technical team providing operation and maintenance, delivering private cloud services in turnkey mode.

IDC server + public cloud

Cloud computing first appeared in 2006 and took off in China around 2010 or later. By then, traditional IT systems, server clusters, and data centers were already mature, typically virtualized with VMware or OpenStack. Banking, government, traditional enterprise, e-commerce, and other platforms had built very complete infrastructure resources and application systems on this traditional IT architecture.

These devices and systems run stably, but problems emerge over time: aging devices need replacement, system capacity becomes insufficient to support current service growth, and security protection capabilities are limited. Upgrading equipment means repurchasing resources and projecting usage over the next few years, which in turn means paying for redundant computing capacity.

With a hybrid cloud architecture, the original server clusters and data centers are connected to the public cloud over the network, the application system and part of the data are replicated to the public cloud, and all write operations on the core database remain in the local environment. More business requests can then be distributed to the back-end cloud platform.

Hosted cloud + public cloud

(Architecture diagram: UCloud Hosted hybrid cloud Architecture)

The managed server form suits businesses that urgently need to move to the cloud and do not want to modify or adapt existing projects, but still require physical isolation for part of the core business. Through server-level relocation, the managed cloud zone can directly run the same services as the original local environment.

The managed cloud can serve as an intermediate stage in a complete migration from the local environment to the public cloud. Alternatively, the managed zone on the public cloud can act as the user's “private IDC”: users host their own physical servers there and connect them to the public cloud zone through an internal private line. Resource expansion and scaling, and the adoption of more public cloud services, are completed in the public cloud zone, finally realizing a hybrid cloud architecture of managed zone plus public cloud zone.

Private cloud + public cloud

In the IDC/server-cluster form mentioned above, users deploy VMware, OpenStack, or similar platforms on the servers and handle virtualization themselves. The maintenance cost is relatively high, however, and a mature technical team is needed to maintain the cluster. A better solution is to privately deploy a cloud vendor's cloud computing operating system.

UCloud provides a lighter cloud operating system, UCloudStack. The deployment solution requires only three or more cloud servers for the management nodes, and by default only basic cloud computing functions are configured at the cloud operating system level. UCloudStack can run a POC verification on a single server and build a production environment on three servers, with deployment taking only a few hours. It suits users who cannot use the public cloud due to security or compliance restrictions but still have cloud or virtualization requirements. UCloudStack also provides interfaces for connecting to the public cloud, facilitating integrated management of private and public clouds.

(Table: Comparison of the three forms)

Usage scenarios of hybrid cloud architectures

1. Expand computing power

In a local environment, data center capacity and the computing power of server clusters are limited, making rapid resource expansion difficult. It is especially hard to expand resources on short notice when a sudden traffic surge, a predicted traffic peak, or temporary traffic jitter occurs. If servers are sized for peak traffic, resources sit idle for long stretches during off-peak periods, which is not cost-effective.

For example, in the new retail scenario, business traffic keeps climbing up to Singles' Day (November 11) and falls afterwards, a typical peak-and-trough traffic pattern. The business must carry huge traffic during the peak while also keeping overall expenditure down.

The solution

The local environment and the public cloud are connected to form a hybrid cloud architecture, enabling rapid expansion of local computing capacity. First, network connectivity is required so that database write requests and component calls can cross platforms. Second, services and data in the local environment must be synchronized to the cloud so that the cloud can carry service traffic. Finally, traffic is split and part of it is forwarded to the cloud platform. Seen this way, building a hybrid cloud architecture can greatly expand the computing power of the local environment.

(Architecture diagram: Extending computing power through UCloud hybrid cloud architecture)

  1. Connect the local environment and the cloud platform through UConnect, the public network, or SD-WAN.
  2. Referring to a typical three-tier Internet service, the application layer and logic layer are stateless and can be deployed independently in the local environment and the cloud environment.
  3. The local self-built database serves as the master and the cloud database as the slave, realizing master/slave data synchronization (the cloud database could equally serve as the master). The master and slave keep one-way data synchronization through the binlog mechanism of MySQL and similar databases. The logic layer in each environment connects to its local master or slave for read operations, while write operations must connect to the master database in the local environment (see the sketch after this list).
  4. Normal traffic is routed to the local environment, and additional traffic is routed to the cloud platform.
  5. On the cloud platform, auto scaling (UAS) can add or remove cloud host resources based on UHost CPU monitoring data to cope with changes in service traffic.
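
To make the read/write split in step 3 concrete, here is a minimal sketch in Python. It assumes MySQL-compatible databases at the hypothetical addresses db-master.local and db-slave.cloud; the connection details and the choice of the pymysql driver are illustrative, not part of the UCloud solution itself.

    import pymysql

    # Hypothetical endpoints: the on-premises master and the cloud slave.
    MASTER = dict(host="db-master.local", user="app", password="***", database="shop")
    SLAVE = dict(host="db-slave.cloud", user="app", password="***", database="shop")

    def run_query(sql, params=None, write=False):
        """Route writes to the local master and reads to the cloud slave."""
        conn = pymysql.connect(**(MASTER if write else SLAVE))
        try:
            with conn.cursor() as cur:
                cur.execute(sql, params or ())
                if write:
                    conn.commit()      # writes only ever hit the master
                    return cur.rowcount
                return cur.fetchall()  # reads are served by the slave
        finally:
            conn.close()

    # Reads hit the cloud slave; the write goes back to the local master.
    orders = run_query("SELECT id, status FROM orders WHERE user_id=%s", (42,))
    run_query("UPDATE orders SET status='paid' WHERE id=%s", (7,), write=True)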

The above describes a hybrid cloud built on top of an existing local environment, which is why the master database is deployed locally. In general, either side of a hybrid cloud architecture can host the master database, with the slave on the other side.

If a service runs locally, its current load is close to the upper limit of local computing capacity, and that limit is expected to be reached within a foreseeable period, you can offload part of the service traffic to the public cloud, starting with a small share.

2. Expand storage and backup capabilities

Local storage has a limited capacity range, cannot be expanded promptly, and future capacity needs are hard to predict. With a hybrid cloud architecture, data storage is expanded onto the public cloud, whose capacity is, from the user's point of view, “infinite”: the user focuses on accessing the data, while capacity and reliability are guaranteed by the cloud platform.

The solution

(Architecture diagram: Expanding storage backup capability through UCloud hybrid cloud architecture)

To save logs and other data to the public cloud, connect the local environment to the public cloud over the private network and store local data in the cloud. In the local environment, dedicated hosts act as storage gateways: local data is collected and, according to configured rules, stored in UCloud public cloud file storage (UFS) and object storage (US3). Local logs can be collected with Logstash and shipped to the public cloud's Elasticsearch via its private network IP address, so that local logs are uploaded directly to the public cloud.
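
As a minimal sketch of that log path, the snippet below ships one parsed log record to Elasticsearch with the official Python client; the private network address 10.0.0.5:9200 and the index name local-logs are illustrative assumptions, and in production Logstash would normally play this role.

    from datetime import datetime, timezone
    from elasticsearch import Elasticsearch

    # Hypothetical private network address of the cloud Elasticsearch cluster.
    es = Elasticsearch("http://10.0.0.5:9200")

    # One parsed log record from the local environment.
    doc = {
        "host": "web-01.local",
        "level": "ERROR",
        "message": "payment gateway timeout",
        "@timestamp": datetime.now(timezone.utc).isoformat(),
    }

    # Index the record; Elasticsearch assigns the document id.
    es.index(index="local-logs", document=doc)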

Some users have business and compliance requirements for same-city data backup, and some industrial systems also require data backup. However, building a standards-compliant backup data center takes considerable time and money. An availability zone of the UCloud public cloud in the same city as the local environment is a good backup choice: public cloud data centers are at least Tier 3 and have been validated by a large number of user workloads, ensuring stability and security. Create a storage space for backups in UCloud object storage US3; the local environment uses a storage backup gateway to collect and encrypt data and then upload it to US3. US3 can manage long-term backup files by storage tier: long-term backups can be kept as infrequent-access storage to reduce cost, and after one or three months they can be automatically transitioned to archive storage by preset lifecycle policies to reduce storage costs further.
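
The sketch below shows what such a tiering policy can look like, expressed as an S3-style lifecycle rule in Python with boto3. It assumes an S3-compatible endpoint for US3; whether US3 exposes lifecycle rules through exactly this API is an assumption, and the endpoint, bucket name, and storage-class names are illustrative.

    import boto3

    # Hypothetical S3-compatible endpoint and credentials for US3.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.example-region.ufileos.com",
        aws_access_key_id="<public-key>",
        aws_secret_access_key="<private-key>",
    )

    # Move backups to infrequent-access storage after 30 days and to
    # archive storage after 90 days; storage-class names are illustrative.
    s3.put_bucket_lifecycle_configuration(
        Bucket="backup-space",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "tier-down-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backup/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }]
        },
    )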

In addition, a warm backup can be maintained on the UCloud public cloud platform: hosts in the local environment are replicated to the public cloud, without having to deploy as many hosts as there are in the production environment.

If local storage capacity is limited, inconvenient to expand, or hard to forecast, desensitize the local data and then store or back it up to the public cloud platform.
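
A minimal sketch of that desensitization step: before upload, personally identifiable fields are replaced with salted hashes, so the backup remains usable for analysis without directly exposing identities. The field names and the salt handling are illustrative assumptions.

    import hashlib

    SALT = b"rotate-me-regularly"          # illustrative; manage via a secret store
    SENSITIVE_FIELDS = {"name", "phone", "id_card"}

    def desensitize(record: dict) -> dict:
        """Replace sensitive fields with salted SHA-256 digests before backup."""
        masked = dict(record)
        for field in SENSITIVE_FIELDS & record.keys():
            digest = hashlib.sha256(SALT + str(record[field]).encode()).hexdigest()
            masked[field] = digest[:16]    # truncated digest keeps records joinable
        return masked

    print(desensitize({"name": "Alice", "phone": "13800000000", "order": 7}))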

3. Expand security protection capabilities

In the local environment, security protection equipment is updated slowly, and local capabilities for defending against attacks and predicting security risks are limited in the face of endlessly evolving attacks. In addition, the local environment usually relies on hardware WAF and access devices for attack detection and interception; once a large-scale network attack hits, responding effectively with hardware security appliances alone is difficult.

The solution

The UCloud platform provides computing, storage, security, and other services to many tenants and faces increasingly varied and complex attacks. To keep the platform secure and reliable for user businesses, a cloud provider must continually invest time and effort in countering these challenges and attacks, and can therefore offer better security solutions and more practical experience in handling security attack risks.

In a hybrid cloud architecture, while services run in the local environment, all traffic can be diverted to the cloud and filtered by the cloud security service; normal traffic is then distributed to the local environment and the cloud back end for processing. When network attacks occur, the attack traffic likewise arrives at the cloud platform for scrubbing, where it is filtered and blocked.

From the perspective of security protection, the cloud platform is an extension of the local environment. Beyond scrubbing DDoS attack traffic with its massive resources, the UCloud platform provides a variety of security products and stronger defense capabilities, offering measures such as attack interception and security risk warning and identification for the services, resources, and data of the local environment.

(Architecture diagram: Using UCloud hybrid cloud architecture to implement anti-DDoS services for the local environment)

As shown in the figure, when a DDoS attack occurs, requests are switched to the cloud center and scrubbed by the advanced anti-DDoS service deployed there. After all requests are scrubbed and filtered in the cloud center, normal traffic is distributed to the local environment and the cloud center's back-end resources for processing. UCloud distinguishes two cases:

When attack traffic is below 10 Gbps, the anti-DDoS service scrubs it automatically, applying various defense policies against network-layer attacks such as TCP packet attacks, SYN Flood, and ACK Flood. Traffic scrubbing at this scale is provided free of charge and defends against smaller volumetric attacks; the advanced anti-DDoS service handles the attack by itself, so administrators do not need to intervene.

If attack traffic exceeds 10 Gbps, you are advised to use a high-defense IP address to divert traffic and hide the origin IP. Replacing the origin IP with the high-defense IP requires manual intervention, however, and the switchover takes some time to take effect.

In addition to DDoS defense, the Web application firewall (WAF) on the cloud can handle CC attacks, SQL injection, and XSS attacks. When an application is onboarded, the WAF allocates a CNAME domain name; a CNAME record added at the DNS provider then resolves the domain to the WAF and diverts traffic to it. After filtering, the traffic is forwarded back to the origin, which can be in the local environment, on the cloud platform, or on another cloud platform.
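
As a small sketch of verifying that DNS cutover, the snippet below checks that a domain's CNAME now points at the WAF-assigned name, using the dnspython package; both domain names are illustrative assumptions.

    import dns.resolver  # pip install dnspython

    DOMAIN = "www.example.com"                    # the protected site (illustrative)
    WAF_CNAME = "example.waf-vendor-cname.com."   # name assigned by the WAF (illustrative)

    # Resolve the CNAME record the DNS provider should now be serving.
    answers = dns.resolver.resolve(DOMAIN, "CNAME")
    targets = {rr.target.to_text() for rr in answers}

    if WAF_CNAME in targets:
        print("traffic is being diverted through the WAF")
    else:
        print(f"unexpected CNAME target(s): {targets}")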

If the local environment lacks effective security defenses and needs to scrub and defend against DDoS and CC attacks, all traffic can be filtered and scrubbed on the public cloud before being returned to the origin.

4. Expand product capabilities

In the local environment, the available products have limited capabilities, perhaps covering only compute virtualization, storage, MySQL databases, and Hadoop services. As the business develops, more technical capabilities may be needed beyond expanded computing and storage, such as data lake integration, analysis of massive logs, and Serverless development frameworks; installing and maintaining these locally over the long term demands considerable technical expertise.

The solution

For example, host logs and user service logs are generated in the local environment and stored locally in Elasticsearch. After a version upgrade, weekly reports need to be generated and sent to administrators or users. You can use the public cloud log service to store the logs and use spot (bidding) instances to produce the reports. The specific steps are as follows:

The local log-analysis workload can be moved to the public cloud. When logs are generated, a Logstash tool uploads them to the cloud Elasticsearch for storage and analysis. Log collection and storage rely on the ELK stack, and using Elasticsearch on the public cloud avoids self-installation and deployment and reduces maintenance costs.

Report generation and pushing are not real-time requirements, yet they would occupy resources of the core business. The report-generation and push functions can run on a cloud host or be triggered as Serverless functions, with costs reduced by per-hour or per-use billing on the public cloud. Spot (bidding) instances can be used for the report jobs: when no spot instance is available, the task waits; once an instance is obtained, the task runs and the report data is promptly written to US3 object storage or delivered. If no spot instance can be obtained for a long time and the deadline for pushing the report approaches, an ordinary cloud host is created for the computing task, ensuring that the service is not affected.
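
A minimal sketch of that spot-with-fallback logic follows, with the UCloud API calls stubbed out as hypothetical helpers (request_spot_instance, create_ondemand_instance, run_report_job); the real UCloud SDK calls differ, and the deadline is illustrative.

    import time

    DEADLINE = time.time() + 6 * 3600   # report must ship within 6 hours (illustrative)
    POLL_INTERVAL = 300                 # retry the spot market every 5 minutes

    def request_spot_instance():        # hypothetical: bid for a spot UHost
        ...

    def create_ondemand_instance():     # hypothetical: create a regular UHost
        ...

    def run_report_job(instance):       # hypothetical: generate report, write to US3
        ...

    def run_with_fallback():
        """Prefer cheap spot capacity; fall back to on-demand before the deadline."""
        while time.time() < DEADLINE - POLL_INTERVAL:
            instance = request_spot_instance()
            if instance is not None:    # bid accepted: run on the spot host
                return run_report_job(instance)
            time.sleep(POLL_INTERVAL)   # no capacity yet; wait and retry
        # Deadline is near and no spot host was granted: pay for on-demand.
        return run_report_job(create_ondemand_instance())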

As another example, AI model training can also be handed over to the UCloud public cloud. Transferring data over the public network is usually slow; if that is unacceptable, consider leased-line transmission or shipping hard drives to upload the data, or load the data directly on the UCloud UAI-Train training platform. The trained model is returned to the local environment, each optimized model is likewise synchronized back, and UAI-Inference online services are provided locally at the same time.

Functional components that can be decoupled from the main services, such as big data analysis and AI model training, can be handed to the public cloud for computation, with the results sent back to the local environment, stored in the cloud, or pushed out as messages, reducing local deployment, operation, and maintenance.

5. Smooth service transition

During the design and implementation of a migration plan, it is difficult to fully grasp the existing IT resources and business architecture; some products must be changed or replaced during migration to the public cloud, and whether everything will work after migration is also uncertain. All of this makes service migration hard.

The solution

Services, including their data, are replicated first, so that the same services and data exist in both the local environment and the public cloud. Traffic is split through global load balancing, and the switchover is performed once the cloud side runs stably. The advantage of this approach is that the migrated business is verified with a small amount of real requests, ensuring that the migrated environment is usable.

(Architecture diagram: Smooth business migration through UCloud hybrid cloud architecture)

As shown in the figure above, the process of realizing smooth service migration can be divided into the following steps:

  1. During the migration, services and data are copied to the cloud, and service is provided through the hybrid cloud architecture. The application layer holds no state, so services can easily be replicated to the public cloud.
  2. For the database, historical data is replicated first, and incremental data is then synchronized continuously through UDTS. All database write operations stay in the local environment, while read operations from public cloud hosts can go to the public cloud database.
  3. The migration as a whole passes through a temporary hybrid cloud state, in which user request traffic is split between the local environment and the public cloud in a certain proportion (see the traffic-splitting sketch after this list).
  4. During the migration cutover window, write operations for the database, message queue, and other services are switched from the local environment to the public cloud; the original read paths can remain unchanged.
  5. Traffic to the local environment is gradually reduced to zero, completing the smooth migration of services and data to the cloud.
  6. Services and data in the local environment are cleaned up.
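
A minimal sketch of the proportional traffic split used in steps 3 and 5, routing each request to the local or cloud back end according to a weight that is gradually shifted toward the cloud; the back-end addresses and the weight are illustrative.

    import random

    # Share of traffic sent to the cloud; raise this gradually from 0.0 to 1.0
    # as the migrated environment proves stable (illustrative schedule).
    CLOUD_WEIGHT = 0.1

    BACKENDS = {"local": "10.1.0.10:80", "cloud": "10.2.0.10:80"}  # illustrative

    def pick_backend() -> str:
        """Weighted random choice between the local and cloud back ends."""
        side = random.choices(["cloud", "local"],
                              weights=[CLOUD_WEIGHT, 1 - CLOUD_WEIGHT])[0]
        return BACKENDS[side]

    # Roughly CLOUD_WEIGHT of requests land on the cloud back end.
    print([pick_backend() for _ in range(10)])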

If services and data need to be migrated, the smooth migration mode of hybrid cloud architecture can be considered.

Conclusion

This article has introduced three forms of hybrid cloud architecture: connecting rented IDC or self-purchased servers with the public cloud, connecting a hosted cloud with the public cloud, and connecting a private cloud (a privately deployed cloud computing operating system) with the public cloud. It then focused on five scenarios, and the corresponding solutions, for extending the local environment through a hybrid cloud architecture. The author believes that the resources and capabilities of a local environment are limited, whether it is a rented IDC, self-purchased servers, a private cloud, or managed servers, whereas public cloud resources and capabilities scale almost without limit. A hybrid cloud architecture can therefore be used to expand computing, storage and backup, security protection, and product capabilities, and to achieve a smooth service transition during migration.

In the next chapter, we will continue with network connectivity under a hybrid cloud architecture, global load balancing through DNS and forwarding clusters, the cloud management platform (CMP) for unified management of the hybrid cloud, and hands-on experience with cross-platform resource scheduling and ensuring business continuity.