You may already be familiar with using Kubernetes to manage applications, but do edge scenarios impose different requirements on application deployment and management? This article explores common container application management solutions for edge scenarios.

Simple edge service scenario

Some of the edge requirements the author has encountered are relatively simple, such as a dial-test (network probing) service. This scenario is characterized by wanting the same service on every node, with exactly one Pod per node, and it is generally recommended that users deploy it directly with a DaemonSet. See the official Kubernetes documentation for details on the characteristics and usage of DaemonSet.
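
As a reference, a minimal DaemonSet sketch for such a dial-test service might look like the following; the name dial-test and its image are hypothetical placeholders, not from the original article:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dial-test              # hypothetical probe service
  namespace: default
spec:
  selector:
    matchLabels:
      app: dial-test
  template:
    metadata:
      labels:
        app: dial-test
    spec:
      containers:
      - name: dial-test
        image: dial-test:latest    # placeholder image for the probe
        resources:
          limits:
            cpu: 100m
            memory: 64Mi

Because a DaemonSet schedules exactly one Pod per matching node, this gives the desired "same service on every node, one Pod per node" layout without any extra scheduling configuration.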

Single-site edge microservice deployment scenario

The second scenario is the deployment of edge SaaS services, which I will not illustrate in detail here because of customer trade secrets. Users deploy a complete set of microservices in an edge machine room, including account services, access services, business services, storage, and message queues, with service registration and discovery handled by Kubernetes DNS. In this case, the client can use Deployment and Service directly; the main difficulty is not service management but edge autonomy.
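
As a rough sketch of this pattern (service name, image, and port are hypothetical), each microservice is an ordinary Deployment exposed through a Service, and other services in the same site reach it by its Service name via cluster DNS:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: account-service             # hypothetical microservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: account-service
  template:
    metadata:
      labels:
        app: account-service
    spec:
      containers:
      - name: account-service
        image: account-service:v1   # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: account-service             # other services call http://account-service:8080
spec:
  selector:
    app: account-service
  ports:
  - port: 8080
    targetPort: 8080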

For details on Deployment and Service usage, you can read the official Kubernetes documentation. We will introduce the edge autonomy of TKE@edge in a subsequent article.

Multi-site edge microservice deployment scenario

Scenario characteristics:

• Edge computing scenarios tend to have multiple edge sites managed in the same cluster, with one or more compute nodes within each edge site.

• Each site is expected to run a group of business-related services; the services in each site form a complete set of microservices that can serve users on their own.

• Due to network constraints, cross-site access between these interconnected services is either undesirable or impossible.

Common solutions:

1. Restrict service calls to a single node

Characteristics of this approach:

• Services are deployed as DaemonSets so that every edge node has a Pod for each service. As shown in the figure above, the cluster has services A and B, and each edge node runs one Pod-A and one Pod-B deployed via DaemonSet (a minimal sketch appears at the end of this scheme).

• Services are accessed through localhost so that the call chain stays within the same node. As shown in the figure above, Pod-A and Pod-B access each other via localhost.

Disadvantages of this scheme:

• Because of DaemonSet, each service can only have one Pod per node, which is inconvenient for services that need to run multiple Pods on the same node.

• Pods need to use hostNetwork mode so that they can be reached via localhost plus a port. This means users must manage service resource usage carefully and avoid resource contention, such as port conflicts.
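
A minimal sketch of scheme 1, assuming service A listens on a fixed host port (the image and port are placeholders): with hostNetwork enabled, any Pod on the same node, such as service B, can reach it at localhost:8080.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: service-a
spec:
  selector:
    matchLabels:
      app: service-a
  template:
    metadata:
      labels:
        app: service-a
    spec:
      hostNetwork: true              # Pod shares the node's network namespace
      containers:
      - name: service-a
        image: service-a:latest      # placeholder image
        ports:
        - containerPort: 8080        # service B on the same node calls localhost:8080

This is also where the port-contention problem mentioned above comes from: every service must pick a host port that no other service on the node uses.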

2. Give the service a different name in each site

Characteristics of this approach:

• The same service uses a different Service name in each site so that access between services stays within the site. As shown in the figure above, the cluster has two services A and B, named SVC-A-1 and SVC-B-1 in Site-1 and SVC-A-2 and SVC-B-2 in Site-2 (a sketch of such per-site Services appears after this scheme's disadvantages).

Disadvantages of this scheme:

• Services have different names in different sites, so a service can no longer be called simply by its name A or B; it must be called SVC-A-1 and SVC-B-1 in Site-1 and SVC-A-2 and SVC-B-2 in Site-2. This is extremely unfriendly to microservice businesses built on K8s DNS.
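
To make scheme 2 concrete, here is a hedged sketch assuming the Pods of service A carry a per-site label such as site (all names and label keys are illustrative): every site needs its own Service object that differs only in name and selector.

apiVersion: v1
kind: Service
metadata:
  name: svc-a-1                  # name used only inside Site-1
spec:
  selector:
    app: service-a
    site: site-1                 # selects only the Pods of service A running in Site-1
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: svc-a-2                  # the same service needs a different name in Site-2
spec:
  selector:
    app: service-a
    site: site-2
  ports:
  - port: 80
    targetPort: 8080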

Scenario pain points:

1. K8S itself does not directly provide solutions for the following scenarios.

• The first is the problem of deploying to a large number of regions. An edge cluster usually manages many edge sites, each containing one or more compute nodes. Unlike central cloud scenarios, which typically have a few large central machine rooms, edge regions are numerous; even a small city may have its own edge machine room, so the number of regions can be very large. In native K8s, the nodes on which a Deployment's Pods are created cannot be specified directly; the only option is to write a node-affinity Deployment for each region. If there are dozens or even hundreds of regions, and multiple services must be deployed in each region, then each Deployment needs a different name and selector; dozens of regions mean hundreds of Deployment YAMLs with different names, selectors, Pod labels, and affinities (a sketch of one such per-region Deployment appears after this section). Just writing these YAMLs is a huge amount of work.

• Services need to be associated with regions. For example, the transcoding and composition services of an audio/video product must serve access within their own region, so users want calls between services to stay inside the region rather than crossing regions. Again, this requires the user to prepare hundreds of Service YAMLs, each with a different selector and name, bound to the region-specific Deployments.

• A further complication: if the user's application uses service names for mutual access, it will not even run unchanged in this environment, because the service name differs from one region to another. Adapting the application for every region is far too complicated for the user.

2. In addition, in order to keep containerized services consistent, in terms of scheduling, with services running on VMs or physical machines, users naturally want to allocate a public IP to each Pod; however, the number of available public IPs is clearly insufficient.

To sum up, although native K8s can meet requirement 1) in a roundabout way, the actual scheme is very complicated, and there is no good solution for requirement 2).
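
To make the YAML explosion of requirement 1) concrete, the following is a sketch of what one region's copy of a single service might look like in native K8s; every region needs its own variant with a different name, selector, Pod labels, and affinity value (the region label key and all names are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a-region-1             # repeated per region: service-a-region-2, ...
spec:
  replicas: 2
  selector:
    matchLabels:
      app: service-a
      region: region-1
  template:
    metadata:
      labels:
        app: service-a
        region: region-1
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: region            # node label identifying the edge region
                operator: In
                values:
                - region-1
      containers:
      - name: service-a
        image: service-a:v1            # placeholder image

Multiply this by the number of services and the number of regions, plus a matching region-specific Service for each Deployment, and the management burden described above becomes clear.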

In order to solve the above pain points, TKE@edge creatively proposed and implemented the ServiceGroup feature. With just two YAML files, services can be deployed to hundreds of regions without any application adaptation or modification.

ServiceGroup features

ServiceGroup makes it easy to deploy a group of services across different machine rooms or regions that belong to the same cluster, while keeping requests between those services inside the same machine room or region and avoiding cross-regional access.

Native K8s cannot control the specific nodes on which a Deployment's Pods are created; this can only be arranged indirectly through careful planning of node affinities. When the number of edge sites or the number of services to be deployed is large, management and deployment become extremely complex, or even only theoretically possible. At the same time, in order to restrict service calls to a certain scope, the business side needs to create a dedicated Service for each Deployment, which requires a huge amount of management work and is prone to errors that cause online business exceptions.

ServiceGroup is designed precisely for this kind of scenario. Using DeploymentGrid and ServiceGrid, two Kubernetes custom resources developed for TKE@edge, customers can easily deploy services to these node groups and control service traffic; in addition, the number of service replicas and the disaster tolerance within each region are guaranteed.

Key ServiceGroup concepts

1. Overall structure

NodeUnit

• A NodeUnit is usually one or more compute resource instances located in the same edge site; you need to ensure that nodes within the same NodeUnit can reach one another over the internal network

• The services in a ServiceGroup run within a single NodeUnit

• TKE@edge allows users to set the number of Pods a service runs in a NodeUnit

• TKE@edge can restrict calls between services to within the same NodeUnit

NodeGroup

• A NodeGroup contains one or more NodeUnits

• It ensures that the services of a ServiceGroup are deployed to every NodeUnit in the group

• When a NodeUnit is added to the cluster, the services of the ServiceGroup are automatically deployed to the new NodeUnit

ServiceGroup

• A ServiceGroup contains one or more business services

• Applicable scenarios: 1) the business needs to be deployed and bound as a group; 2) the business needs to run in each NodeUnit with a guaranteed number of Pods; 3) calls between services need to be controlled within the same NodeUnit, and traffic must not be forwarded to other NodeUnits.

• Note: ServiceGroup is an abstract resource and multiple ServiceGroups can be created in a cluster

2. Types of resources involved

DeploymentGrid

The format of DeploymentGrid is similar to that of Deployment. The <deployment-template> field corresponds to the template field of the original Deployment; the special gridUniqKey field specifies the key of the label used to group nodes:

apiVersion: tkeedge.io/v1
kind: DeploymentGrid
metadata:
  name:
  namespace:
spec:
  gridUniqKey: <NodeLabel Key>
  <deployment-template>

ServiceGrid

The format of ServiceGrid is similar to that of Service. The <service-template> field corresponds to the template field of the original Service; the special gridUniqKey field likewise specifies the key of the label used to group nodes:

apiVersion: tkeedge.io/v1
kind: ServiceGrid
metadata:
  name:
  namespace:
spec:
  gridUniqKey: <NodeLabel Key>
  <service-template>

3. Usage example

Taking the deployment of Nginx at the edge as an example, we want to deploy the Nginx service separately in multiple node groups. The steps are as follows:

1) Determine the ServiceGroup's unique identifier

This step is purely logical planning and requires no action. For the ServiceGroup we are creating here, we will use zone as the UniqKey.

2) Group edge nodes

In this step, label the edge nodes using the TKE@edge console or kubectl. The console entry is shown below:

3) In the node list page of the cluster, click “Edit Label” to label the node

• Following the diagram in the "Overall structure" section, we label Node12 and Node14 with zone=nodeUnit1, and label the nodes of the second site with zone=nodeUnit2.

• Note: the key of the label added in the previous step must match the UniqKey of the ServiceGroup, and its value is the unique identifier of the NodeUnit; nodes with the same value belong to the same NodeUnit. The same node can carry multiple labels so that it can be grouped into NodeUnits along several dimensions; for example, Node12 could additionally be labeled test=a1 (a sketch of the resulting node labels appears after these notes).

• If there is more than one ServiceGroup in the same cluster, assign a different UniqKey to each ServiceGroup.
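
Equivalently, after labeling, the metadata of Node12 would contain labels whose key matches the ServiceGroup UniqKey. The following is only an illustrative sketch of the resulting node object (node names must be lowercase in a real cluster):

apiVersion: v1
kind: Node
metadata:
  name: node12
  labels:
    zone: nodeUnit1      # UniqKey "zone"; nodes with the same value form one NodeUnit
    test: a1             # an extra label grouping NodeUnits along another dimension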

4) Deploy the DeploymentGrid

apiVersion: tkeedge.io/v1
kind: DeploymentGrid
metadata:
  name: deploymentgrid-demo
  namespace: default
spec:
  gridUniqKey: zone
  template:
    selector:
      matchLabels:
        appGrid: nginx
    replicas: 2
    template:
      metadata:
        labels:
          appGrid: nginx
      spec:
        containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
          - containerPort: 80

5) Deploy the ServiceGrid

apiVersion: tkeedge.io/v1
kind: ServiceGrid
metadata:
  name: servicegrid-demo
  namespace: default
spec:
  gridUniqKey: zone
  template:
    selector:
      appGrid: nginx
    ports:
    - protocol: TCP
      port: 80
      targetPort: 80
    sessionAffinity: ClientIP

Here gridUniqKey is set to zone, so if there are three groups of nodes, we label them with zone: zone-0, zone: zone-1, and zone: zone-2 respectively. At this point, each group of nodes has its own nginx Deployment and corresponding Pods, and accessing the unified service name from within a node only sends requests to the nodes of that group.

In addition, for node groups added to the cluster after the DeploymentGrid and ServiceGrid have been deployed, this feature automatically creates the specified Deployment and Service in the new node group.