Preface

Having already deployed a highly available Kubernetes cluster and a Rancher cluster, I needed to deploy Seata-Server ahead of time over these two days. It seemed like a good low-risk opportunity to experiment with Kubernetes.

Architectural description

Before deploying, I need to describe the current state of the existing architecture. It is mainly a microservice system based on the Spring Cloud framework, with Alibaba's Nacos serving as both the configuration center and the service registry.

From the perspective of architectural migration, it is not realistic to switch fully to a Kubernetes-native microservice system. In most mainstream solutions, Kubernetes handles the infrastructure and service scheduling, while the software layer is still developed in the Spring Cloud style. A gradual switchover is the more realistic plan.

So under the current architecture, Seata-Server will be deployed and managed on Kubernetes, the Seata client applications will be deployed as Docker containers, and Nacos will be installed directly on physical hosts from binaries. Seata-Server has no management console, so it does not need to expose a service to the outside world; only the Seata client applications make inter-service calls to it.

The deployment process

The focus of this description is not on how to deploy Seata-Server itself, but on how to deploy an application through Kubernetes.

The official Seata documentation supports deployment on Kubernetes and provides resource files in YAML format (since Nacos is used as the registry here, the files have been customized):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: seata-server
  namespace: default
  labels:
    k8s-app: seata-server
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: seata-server
  template:
    metadata:
      labels:
        k8s-app: seata-server
    spec:
      containers:
        - name: seata-server
          image: docker.io/seataio/seata-server:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: SEATA_CONFIG_NAME
              value: file:/root/seata-config/registry
          ports:
            - name: http
              containerPort: 8091
              protocol: TCP
          volumeMounts:
            - name: seata-config
              mountPath: /root/seata-config
      volumes:
        - name: seata-config
          configMap:
            name: seata-server-config

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: seata-server-config
data:
  registry.conf: |
    registry {
      type = "nacos"
      nacos {
        application = "seata-server"
        # configure the Nacos address as needed
        serverAddr = "192.168.199.2"
      }
    }
    config {
      type = "nacos"
      nacos {
        serverAddr = "192.168.199.2"
        group = "SEATA_GROUP"
      }
    }

Resources can be submitted by using the following command:

kubectl apply -f seata-server.yaml

Of course, it can also be created from the Rancher interface:

This process is quick, and you can see that the pods have started successfully.

Pod External access mode

Seata-Server is registered with Nacos as a service. To support inter-service calls, the corresponding Pod needs to be reachable from outside the cluster. This brings us to the (roughly categorized) ways in which Kubernetes exposes a service:

  • Define the Pod network directly

    hostNetwork: true and hostPort are both ways to configure the Pod network directly: the former makes the Pod use the host node's network namespace, while the latter maps a container port to a port on the host node. The drawbacks are obvious: because of Pod scheduling you must know exactly which node the Pod is currently running on in order to reach it, and since the port is fixed the number of Pods cannot exceed the total number of cluster nodes. In our usage scenario, however, this is not a real problem: the Seata client obtains the Seata-Server address from Nacos before calling it, so even if the Pod IP changes, the only thing to watch is that the number of Pods does not exceed the total number of nodes in the cluster.

  • Through a Service

    NodePort is not recommended in any case, and LoadBalancer is the standard way to expose a service to the public network (a minimal NodePort Service sketch follows after this list for reference).

  • Ingress

    Ingress may be the most powerful way to expose a service, but it is also the most complex. I will not expand on it here; it deserves a separate write-up.
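
For reference, a minimal Service of type NodePort for the Deployment above might look like the sketch below; the names and port values simply mirror the earlier YAML, and this Service is not actually used in my setup:

apiVersion: v1
kind: Service
metadata:
  name: seata-server
  namespace: default
spec:
  type: NodePort
  selector:
    k8s-app: seata-server
  ports:
    - name: http
      port: 8091
      targetPort: 8091
      # nodePort: 30091  # optional; otherwise Kubernetes assigns a port from the 30000-32767 range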

In this scenario I chose hostPort. The implementation is very simple: configure the Pod and add a hostPort attribute to the container's port configuration, as sketched below.
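
A minimal sketch of the change, showing only the ports section of the container spec from the Deployment above and mapping container port 8091 to the same port on the host node:

          ports:
            - name: http
              containerPort: 8091
              hostPort: 8091
              protocol: TCP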

With Rancher, the same hostPort mapping can also be configured from the interface.

IP issues

Seata-server is successfully deployed, but the IP registered with Nacos is the Kubernetes internal cluster IP:

As such, the Seata client applications cannot reach the Seata-Server service unless Nacos and the client applications are also deployed inside the Kubernetes cluster, which is obviously not an option at this stage. What we need instead is for Seata-Server to register the host node's IP address with Nacos.

Use the Pod field as the value of the environment variable

Following the Kubernetes documentation (exposing Pod fields to containers as environment variables via the Downward API), the YAML is modified as follows.
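
A minimal sketch of the adjusted env section, assuming the seataio/seata-server image supports a SEATA_IP environment variable for the address it registers (check the image documentation for the exact variable name); the Downward API fieldRef injects the IP of the node the Pod is scheduled on:

          env:
            - name: SEATA_CONFIG_NAME
              value: file:/root/seata-config/registry
            # assumption: SEATA_IP is read by the image as the address to register with Nacos
            - name: SEATA_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP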

Using Rancher, the configuration is as follows

Port problem

In theory, when deploying applications with Kubernetes you should not have to care about IPs and ports, and scheduling should be free of such constraints. However, because an external Nacos is used for service registration, inter-service calls must know the exact IP address and port of a service, which implicitly places constraints on Pod scheduling:

(1) The number of instances (Pods) of a service cannot exceed the number of nodes in the cluster

(2) Port numbers must not conflict, so each cluster node can host only one instance of a given service

This is a weakness of the current architecture; it can be resolved later by introducing Spring Cloud Kubernetes. The port issue is also one of the reasons for choosing hostPort: when fetching Pod information through the Downward API, Kubernetes exposes fields such as the host IP but not a port number, so the correct port could not be registered with Nacos that way. With hostPort, the host port can simply be kept identical to the container port (8091).

Data Volume Configuration

I won’t go into details about the data volume, but it is not too different from the concept of Dokcer, so far I just mapped the log directory.