This is the seventh day of my participation in the First Challenge 2022.

1. Download the Official Nacos-K8S Package

git clone https://github.com/nacos-group/nacos-k8s.git

If GitHub is slow or unreachable, you can download the repository from a Gitee mirror instead:
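A sketch of the clone command, where the mirror path is a placeholder rather than a verified address:

git clone https://gitee.com/<your-gitee-mirror>/nacos-k8s.git
# <your-gitee-mirror> is a placeholder; substitute the Gitee mirror repository you actually use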

2. Deploy NFS

Nacos has automatic capacity expansion and reduction and data persistence features in K8S. Please note that if you need to use these features, use PVC persistent volumes. The automatic capacity expansion and reduction of Nacos depends on persistent volumes, as well as data persistence.

2.1 Installing the NFS Service

  • Install nfs-utils on master, node01 and node02

    yum -y install nfs-utils rpcbind
  • Add the NFS exports configuration on master, node01 and node02

    Create the following directories on all three nodes:

    mkdir -p /data/nfs-share
    mkdir -p /data/mysql
    
    chmod 777 /data/nfs-share
    chmod 777 /data/mysql
    vi /etc/exports

    Add the following configuration to /etc/exports on all three nodes:

    /data/mysql *(insecure,fsid=0,rw,async,no_root_squash)
    /data/nfs-share *(rw,fsid=0,async,no_root_squash)
  • Start NFS and rpcbind, and enable them at boot:

    systemctl enable nfs
    systemctl enable rpcbind
    
    systemctl start nfs
    systemctl start rpcbind
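After starting NFS, you can apply and verify the exports with a quick check (not part of the original steps):

exportfs -ra            # re-export everything in /etc/exports
showmount -e localhost  # list the directories exported by this node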

2.2 Creating a Role

If your K8s namespace is not default, run the following script before deploying the RBAC manifests:

# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
NAMESPACE=${NS:-default}
sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/nfs/rbac.yaml

Since the default namespace is used here, create the RBAC objects with the following command:

kubectl create -f deploy/nfs/rbac.yaml
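As a quick sanity check, you can confirm the RBAC objects were created (the exact resource names are whatever deploy/nfs/rbac.yaml defines):

kubectl get sa,role,rolebinding,clusterrole,clusterrolebinding | grep -i nfs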

2.3 Creating a ServiceAccount and Deploying the NFS Client Provisioner
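Before applying the manifest, the provisioner needs to know where the NFS server is. In the common nfs-client-provisioner template this is set through environment variables roughly like the fragment below (an illustrative sketch; confirm the field names and values against deploy/nfs/deployment.yaml):

env:
- name: NFS_SERVER
  value: 192.168.184.139    # your NFS server address
- name: NFS_PATH
  value: /data/nfs-share    # the export created in step 2.1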

kubectl create -f deploy/nfs/deployment.yaml

2.4 Creating an NFS StorageClass

kubectl create -f deploy/nfs/class.yaml
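For reference, an NFS StorageClass for this provisioner typically looks like the sketch below; the class name and provisioner string here are the common defaults, and the authoritative values are in deploy/nfs/class.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs   # must match the provisioner name configured in deployment.yaml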

2.5 Verifying NFS Deployment

kubectl get pod -l app=nfs-client-provisioner

If the Pod stays in a non-Running state, view the error message as follows:

kubectl describe pods nfs-client-provisioner-9647df776-szzx8

The docker image may be slow to download during creation. Please be patient.

2.6 What If the Image Cannot Be Downloaded

The describe output shows that the image is still being pulled:

You can change the image address in deployment.yaml to the following mirror:

registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner:latest
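One way to switch images is to edit the image field in deploy/nfs/deployment.yaml and re-apply it. Alternatively, patch the running Deployment; the container name nfs-client-provisioner below is an assumption, so check the manifest first:

kubectl set image deployment/nfs-client-provisioner \
  nfs-client-provisioner=registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner:latest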

3. Install MySQL

It is recommended to use an external database. If you install MySQL with YAML instead, use the following.

Modify the mysql-nfs.yaml file

apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  replicas: 1
  selector:
    name: mysql
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
      - name: mysql
        image: nacos/nacos-mysql:5.7
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "root"
        - name: MYSQL_DATABASE
          value: "nacos_devtest"
        - name: MYSQL_USER
          value: "nacos"
        - name: MYSQL_PASSWORD
          value: "nacos"
      volumes:
      - name: mysql-data
        nfs:
          server: 192.168.184.139   # NFS server address; replace with your own
          path: /data/mysql
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
  selector:
    name: mysql

View status:

[root@k8s-master deploy]# kubectl get pods -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
mysql-9lbkp                               1/1     Running   0          3m30s   10.244.4.9   k8s-node2   <none>           <none>
nfs-client-provisioner-557df77947-npjdt   1/1     Running   0          33m     10.244.4.2   k8s-node2   <none>           <none>
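Once the MySQL Pod is Running, a quick connectivity check can be done with the credentials defined in the manifest (the Pod name comes from the output above and will differ in your cluster):

kubectl exec -it mysql-9lbkp -- mysql -unacos -pnacos nacos_devtest -e "SHOW TABLES;"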

4. Deploy Nacos

kubectl create -f nacos/nacos-pvc-nfs.yaml

If the following problem occurs:

For a K8s cluster installed with kubeadm, add the configuration below to kube-apiserver.yaml; the kube-apiserver restarts automatically after its static Pod manifest is modified:

vi /etc/kubernetes/manifests/kube-apiserver.yaml

Add the following flag:

- --feature-gates=RemoveSelfLink=false

The flag goes alongside the other command-line arguments of the kube-apiserver container.

After the modification, view the Pods again to confirm everything is running.
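To confirm the flag was picked up once the apiserver restarted (kubeadm labels the static Pod with component=kube-apiserver):

kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep RemoveSelfLink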

Delete Nacos and the PVCs, then recreate them:

kubectl delete -f deploy/nacos/nacos-pvc-nfs.yaml
kubectl delete pvc datadir-nacos-0 logdir-nacos-0 plugindir-nacos-0
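After deleting, recreate Nacos and check whether the PVCs now bind to the NFS StorageClass:

kubectl create -f nacos/nacos-pvc-nfs.yaml
kubectl get pvc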

4.1 Key Issues

  • nacos-pvc-nfs.yaml still fails to start: the volume cannot be mounted to the specified path, so the whole service cannot come up. This is noted here for the record; the root cause has not been found yet.

    Therefore, this article uses nacos-quick-start.yaml to start Nacos directly:

    kubectl create -f nacos/nacos-quick-start.yaml
  • A Pod stays in Pending because, by default, K8s does not schedule workloads on the master node. You can remove the master taint:

    kubectl taint nodes --all node-role.kubernetes.io/master-

    Remember to use a NodePort Service or an Ingress to expose external access (a sketch is included at the end of this section).

  • If the Nacos page cannot be accessed after the Pod starts, troubleshoot as follows

    First use the kubectl logs command to determine whether the Pod started successfully; in most cases the page is unreachable because the service never came up. For Nacos this is usually caused by the database configuration, so double-check the database settings in the Nacos YAML.

    If the startup is successful and the page still cannot be accessed, try modifying the Docker startup parameters: add the following at the end of the [Service] section:

    vi /usr/lib/systemd/system/docker.service
    # Add the following configuration at the end of the [Service] section
    ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT

    systemctl daemon-reload
    systemctl restart docker
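As mentioned above, the Nacos console needs a NodePort Service or an Ingress for external access. A minimal NodePort sketch, assuming the default Nacos port 8848 and an app: nacos Pod label (verify both against nacos-quick-start.yaml):

apiVersion: v1
kind: Service
metadata:
  name: nacos-nodeport
spec:
  type: NodePort
  selector:
    app: nacos            # assumed Pod label; check the labels in nacos-quick-start.yaml
  ports:
  - port: 8848            # default Nacos console/API port
    targetPort: 8848
    nodePort: 30848       # any free port in the 30000-32767 range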