
In this article you will learn how to run Longhorn on Civo using K3s. If you haven’t used Civo yet, you can sign up at www.civo.com and apply for a free subscription. First we need a Kubernetes cluster, then we’ll install Longhorn and walk through an example that shows how to use it.

One of the tenets of cloud native applications is that they should be stateless, so that they can be scaled horizontally. The reality, however, is that unless your website or application holds remarkably little data, that state has to be stored somewhere.

Industry giants such as Google and Amazon have custom-built scalable storage systems for their own products. But what about smaller businesses?

In March 2018, Rancher Labs (Rancher), creator of one of the most widely adopted Kubernetes management platforms in the industry, released Longhorn, a containerized distributed storage project (since donated to the CNCF) that fills this gap. In short, Longhorn uses the existing disks of your Kubernetes nodes to provide stable storage for Kubernetes pods.

Preparation

Before installing Longhorn, you need a running Kubernetes cluster. You can simply install a K3s cluster yourself (github.com/rancher/k3s), or, if you have access to Civo’s Kubernetes service, you can use that. This article uses Civo’s Kubernetes service to create the cluster.
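If you’d rather run K3s yourself instead of using a managed service, the standard one-line installer from the K3s repository is enough for a test cluster (a sketch of the simplest single-node case; see the K3s README for options and multi-node setup):

curl -sfL https://get.k3s.io | sh -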

We recommend using at least Medium-sized instances, because we will be testing stateful storage with MySQL, which can use a fair amount of RAM.

$ civo k8s create longhorn-test --wait
Building new Kubernetes cluster longhorn-test: \
Created Kubernetes cluster longhorn-test

Your cluster needs open-iscsi installed on each node, so if you are not using Civo’s Kubernetes service, you will need to run the following command on each node in addition to the instructions in the link above:

sudo apt-get install open-iscsi
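If you want to confirm that the iSCSI daemon is actually available on a node (assuming a systemd-based distribution such as Ubuntu), you can check the service status:

sudo systemctl status iscsid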

Next, download the cluster’s Kubernetes configuration file, save it locally, and point the KUBECONFIG environment variable at it:

cd ~/longhorn-play
civo k8s config longhorn-test > civo-longhorn-test-config
export KUBECONFIG=civo-longhorn-test-config
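At this point it is worth a quick sanity check that kubectl can reach the new cluster; the node names and count will of course depend on your cluster:

$ kubectl get nodes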

Install Longhorn

Installing Longhorn on an existing Kubernetes cluster takes only two steps: install the Longhorn controller and its supporting components, then create a StorageClass that pods can use. The first step:

$ kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml
namespace/longhorn-system created
serviceaccount/longhorn-service-account created
...
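It can take a minute or two for all of the Longhorn components to start up. You can watch them come up with a standard pod listing (the exact pod names will vary):

$ kubectl get pods -n longhorn-system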

Creating the StorageClass takes one more command, and as an optional extra step you can mark the new class as the default so you don’t have to specify it every time:

$ kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/master/examples/storageclass.yaml
storageclass.storage.k8s.io/longhorn created

$ kubectl get storageclass
NAME       PROVISIONER           AGE
longhorn   rancher.io/longhorn   3s

$ kubectl patch storageclass longhorn -p \
  '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/longhorn patched

$ kubectl get storageclass
NAME                 PROVISIONER           AGE
longhorn (default)   rancher.io/longhorn   72s
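With longhorn marked as the default class, any PersistentVolumeClaim that omits storageClassName will now be provisioned by Longhorn automatically. For example, a claim as minimal as this (test-claim is just an illustrative name) would get a Longhorn volume:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi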

Access the Longhorn Dashboard

Longhorn has a neat dashboard where you can see used space, available space, the list of volumes, and so on. But first, we need to create authentication details:

$ htpasswd -c ./ing-auth admin
$ kubectl create secret generic longhorn-auth \
  --from-file ing-auth --namespace=longhorn-system

Now we’ll create an Ingress object that uses the Traefik ingress controller built into K3s to expose the dashboard externally. Create a file called longhorn-ingress.yaml with the following content:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: longhorn-ingress
  annotations:
    ingress.kubernetes.io/auth-type: "basic"
    ingress.kubernetes.io/auth-secret: "longhorn-auth"
spec:
  rules:
  - host: longhorn-frontend.example.com
    http:
      paths:
      - backend:
          serviceName: longhorn-frontend
          servicePort: 80

Then apply it:

$ kubectl apply -f longhorn-ingress.yaml -n longhorn-system
ingress.extensions/longhorn-ingress created
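You can confirm that the Ingress exists in the right namespace with:

$ kubectl get ingress -n longhorn-system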

Now add an entry to your /etc/hosts file pointing the IP address of any of your Kubernetes nodes at longhorn-frontend.example.com (replace 1.2.3.4 below with that node’s IP):

echo "Longhorn-frontend.example.com" 1. 2. >> /etc/hosts
Copy the code
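If you’re not sure which IP address to use, list your nodes’ addresses; on Civo, the external IP shown here is the one to put in /etc/hosts:

$ kubectl get nodes -o wide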

Now you can go to http://longhorn-frontend.example.com in your browser and, after authenticating with admin and the password you entered with htpasswd, you should see the Longhorn dashboard.
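If you’d like to check from the command line first, a quick curl with basic auth (it will prompt for the password you set with htpasswd) should return the dashboard’s HTML:

$ curl -u admin http://longhorn-frontend.example.com/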

Install MySQL with persistent storage

Running MySQL with only container-local storage makes no sense: as soon as the underlying container or node dies, the data goes with it, and so do your customers and their orders. So here we will configure a new Longhorn persistent volume for it.

First, we need to create a few resources in Kubernetes. Each can be its own YAML file in an empty directory, or you can put them all in one file separated by ---.

A persistent volume in mysql/pv.yaml:


apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  namespace: apps
  labels:
    name: mysql-data
    type: longhorn
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  csi:
    driver: io.rancher.longhorn
    fsType: ext4
    volumeAttributes:
      numberOfReplicas: '2'
      staleReplicaTimeout: '20'
    volumeHandle: mysql-data

A claim for that volume in mysql/pv-claim.yaml (essentially an abstract request so that something can use the volume):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    type: longhorn
    app: example
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Finally, a Deployment in mysql/pod.yaml runs MySQL and uses the volume claim described above. (Please note: we are using password as the MySQL root password here purely to keep things simple; in practice you should use a secure password and store it in a Kubernetes Secret rather than in the YAML, as in the sketch after the Deployment below.)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-mysql
  labels:
    app: example
spec:
  selector:
    matchLabels:
      app: example
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: example
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
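As mentioned above, a slightly safer variant is to keep the root password in a Kubernetes Secret and reference it from the Deployment rather than hard-coding it. A minimal sketch, where the secret name mysql-root-pass and the key password are just example names:

kubectl create secret generic mysql-root-pass --from-literal=password='choose-a-strong-password'

Then the env section of the container spec becomes:

        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root-pass
              key: password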

Now apply either the folder or the single file (depending on which you chose earlier):

$ kubectl apply -f mysql.yaml
persistentvolumeclaim/mysql-pv-claim created
persistentvolume/mysql-pv created
deployment.apps/my-mysql created

# or

kubectl apply -f ./mysql/
persistentvolumeclaim/mysql-pv-claim created
persistentvolume/mysql-pv created
deployment.apps/my-mysql created

Test whether MySQL storage persists

Our test is simple: create a new database, delete the pod (Kubernetes will recreate it for us), reconnect, and check that the new database is still there.

Create a database called should_still_be_here:

$ kubectl get pods | grep mysql
my-mysql-d59b9487b-7g644   1/1     Running   0          2m28s
$ kubectl exec -it my-mysql-d59b9487b-7g644 /bin/bash
root@my-mysql-d59b9487b-7g644:/# mysql -u root -p mysql
Enter password: 
mysql> create database should_still_be_here;
Query OK, 1 row affected (0.00 sec)

mysql> show databases;
+----------------------+
| Database             |
+----------------------+
| information_schema   |
| #mysql50#lost+found |
| mysql                |
| performance_schema   |
| should_still_be_here |
+----------------------+
5 rows in set (0.00 sec)

mysql> exit
Bye
root@my-mysql-d59b9487b-7g644:/# exit
exit

Now we will delete the pod:

kubectl delete pod my-mysql-d59b9487b-7g644
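If you want to watch Kubernetes schedule the replacement pod, you can follow the pod list (press Ctrl-C to stop watching):

kubectl get pods -w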

After about a minute, look up the new pod name, connect to it, and check whether our database still exists:

$ kubectl get pods | grep mysql
my-mysql-d59b9487b-8zsn2   1/1     Running   0          84s
$ kubectl exec -it my-mysql-d59b9487b-8zsn2 /bin/bash
root@my-mysql-d59b9487b-8zsn2:/# mysql -u root -p mysql
Enter password: 
mysql> show databases;
+----------------------+
| Database             |
+----------------------+
| information_schema   |
| #mysql50#lost+found |
| mysql                |
| performance_schema   |
| should_still_be_here |
+----------------------+
5 rows in set (0.00 sec)

mysql> exit
Bye
root@my-mysql-d59b9487b-8zsn2:/# exit
exit

Complete success! Our storage persisted even though the pod was killed and recreated.
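As a final check, you can confirm that the claim is still bound to the Longhorn volume; the exact capacity and age will differ in your cluster:

$ kubectl get pv,pvc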