This article comes from the edge-computing K3s community

About the author

Cello Spring, from Switzerland. He started out in electronics and holds a degree in electrical engineering, then moved into computing and has many years of experience in software development.

Traefik is a very reliable cloud-native dynamic reverse proxy. The lightweight Kubernetes distribution K3s has shipped with Traefik as its default reverse proxy and Ingress controller since last year. However, the version of Traefik built into K3s at the time of writing is v1.7.14. It works fine, but it is missing some useful features. The feature I want most is automatic generation of Let's Encrypt certificates for the IngressRoutes in use. Traefik 2.x offers this and more, so let's see how to set up and use the new version of Traefik on K3s.

The goals of this article are to set up a new K3s cluster, install the Traefik 2.x version, and configure some Ingress that will be protected by an automatically generated Let’s Encrypt certificate.

Here are the steps we are going to take:

  • Create a tiny K3s cluster on Civo

  • Point our domain (I’ll use my virtual domain celleri.ch) to the cluster IP

  • Install Klipper LB as our LoadBalancer

  • Install Traefik V2 on the cluster

  • Deploy a small workload (WHOAMI) to the cluster

  • Create Traefik IngressRoutes for the service (with and without TLS termination)

  • Use the Traefik middleware to access the Traefik Dashboard with basic authentication

Create a Civo cluster

To do this, go to Civo (civo.com/) and create a very small cluster with only 2 nodes. If you don’t already have an account, you can sign up and apply for the KUBE100 Beta program to use the Kubernetes products it offers.

Make sure Traefik is not installed with the basic settings (deselect Traefik on the Architecture tab).

After about 2 minutes, we will have the following cluster:

Next, we need to note the IP address of the master node and download the kubeconfig file. In this case it is named civo-k3s-with-traefik2-kubeconfig because we named the cluster k3s-with-traefik2. To access the cluster from the command line with kubectl, we point the KUBECONFIG environment variable at the kubeconfig file and switch the context to our new cluster.

# set env variable with new cluster config
export KUBECONFIG=./civo-k3s-with-traefik2-kubeconfig
kubectl config use-context k3s-with-traefik2

# check the available nodes
kubectl get nodes

NAME               STATUS   ROLES    AGE     VERSION
kube-master-de56   Ready    master   9m15s   v1.16.3-k3s.2
kube-node-40e7     Ready    <none>   7m21s   v1.16.3-k3s.2

As we can see, the cluster with 1 master node and 1 worker node is ready! Proceed to the next step.

Point the domain name celleri.ch to the new cluster IP address

Recently I have been using Cloudflare’s DNS service (cloudflare.com/dns/) for my Kubernetes projects. It is reliable, has a user-friendly interface, and the basic services I use are free.

In Cloudflare we apply the following Settings:

Since we don’t want to create a CNAME entry for every subdomain we might use, we create a wildcard (*) entry as a CNAME. Traefik will make sure that traffic is later routed to the correct service.

Install Klipper LB as our LoadBalancer

By default, K3s ships with Traefik v1.7.x built in, and the default installation also deploys Klipper LB, Rancher’s internal LoadBalancer. Since we skipped Traefik when we set up the cluster, we now have to install Klipper LB manually.

Klipper will hook itself up to the host port of the cluster node and use ports 80, 443, and 8080.

You can find all the files I mentioned on my Github Repo:

github.com/cellerich/k…

# install Klipper LB
kubectl apply -f 00-klipper-lb/klipper.yaml

# see if klipper hooked up to the host ports and is working
kubectl get pods --all-namespaces | grep svclb

kube-system  svclb-traefik-gc8lg     3/3     Running   0          96s
kube-system  svclb-traefik-pqbzb     3/3     Running   0          96s

The pods are running, with three containers (one for each host port) in each. Next, let’s install Traefik v2.
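For illustration, the pattern Klipper LB uses can be sketched as a DaemonSet with one container per exposed host port. This is a simplified, hypothetical manifest (names and image tag are assumptions), not the exact one from the repo:

```yaml
# Hypothetical sketch of the Klipper LB pattern: a DaemonSet whose pods
# bind host ports 80, 443 and 8080 on every node and forward traffic
# to the traefik LoadBalancer service.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: svclb-traefik
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: svclb-traefik
  template:
    metadata:
      labels:
        app: svclb-traefik
    spec:
      containers:
        - name: lb-port-80
          image: rancher/klipper-lb
          ports:
            - containerPort: 80
              hostPort: 80
        - name: lb-port-443
          image: rancher/klipper-lb
          ports:
            - containerPort: 443
              hostPort: 443
        - name: lb-port-8080
          image: rancher/klipper-lb
          ports:
            - containerPort: 8080
              hostPort: 8080
```

This is why the svclb pods above report 3/3 containers: one per host port.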

Install Traefik V2 in the cluster

Traefik v2 comes with a number of CRDs (Custom Resource Definitions), the new way to extend Kubernetes with custom objects. I haven’t studied these CRDs in depth, but we are going to use them anyway. You can find them in the Traefik documentation (docs.traefik.io/v2.0/user-g…) or use the 01-traefik-crd/traefik-crd.yaml file available in the repo.

# apply traefik crd's
kubectl apply -f 01-traefik-crd/traefik-crd.yaml

This command should create five CRDs.

In order for Traefik to do what it needs to do, we need a ClusterRole and a ClusterRoleBinding. We can use the following command:

# apply clusterrole and clusterrolebinding
kubectl apply -f 01-traefik-crd/traefik-clusterrole.yaml

Note: the ClusterRoleBinding points to a ServiceAccount in the namespace kube-system, because we will install Traefik into that namespace later.


kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller

roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system
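For context, the ClusterRole bound above grants Traefik read access to the resources it watches. A minimal sketch is shown below (the exact rules are in 01-traefik-crd/traefik-clusterrole.yaml in the repo; treat this as an approximation):

```yaml
# Sketch of the matching ClusterRole: read access to core resources,
# Ingress objects and Traefik's own CRDs.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["ingressroutes", "ingressroutetcps", "middlewares", "tlsoptions"]
    verbs: ["get", "list", "watch"]
```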

Finally, we deploy Traefik Service, ServiceAccount, and Deployment to the cluster:

kubectl apply -f 02-traefik-service/traefik.yaml

This should give us a LoadBalancer service with the external address of the cluster’s master node:

# get traefik service
kubectl get svc -n kube-system | grep traefik

traefik   LoadBalancer   192.168.211.177   185.136.232.122   80:32286/TCP,443:30108/TCP,8080:30582/TCP   3m43s
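The LoadBalancer service itself is defined in 02-traefik-service/traefik.yaml. As a rough sketch (port names and the selector label are assumptions), it exposes the three entry points that Klipper LB picks up:

```yaml
# Sketch of the traefik LoadBalancer service exposing ports 80, 443 and 8080.
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  type: LoadBalancer
  selector:
    app: traefik
  ports:
    - name: web
      port: 80
    - name: websecure
      port: 443
    - name: admin        # dashboard/API when started with --api.insecure
      port: 8080
```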

Deploy a small workload (WHOAMI) to the cluster

Now it’s time to create a service in our cluster and call it from outside through our Traefik proxy. I’m using the whoami service in this example, which is also used throughout the Traefik documentation. Let’s deploy it:


# deploy `whoami` in namespace `default`
kubectl apply -f 03-workload/whoami-service.yaml

#check the deployment 
kubectl get pods | grep whoami

whoami-bd6b677dc-lfxbx   1/1     Running   0          5m37s
whoami-bd6b677dc-92jzj   1/1     Running   0          5m37s

It looks like it’s working. Now for the exciting part, Traefik Ingress.

Create two Traefik IngressRoutes for the service (with and without TLS termination)

We want to access the whoami service externally, so we can finally define IngressRoute objects. Yes, these are the objects defined by the CRDs we installed earlier; now they come in handy. We deploy the IngressRoutes as follows:

kubectl apply -f 03-workload/whoami-ingress-route.yaml

As you can see in the definition, the route with PathPrefix(`/tls`) specifies tls.certResolver: default. This certResolver is defined in our 02-traefik-service/traefik.yaml file.


apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroute-notls
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`celleri.ch`) && PathPrefix(`/notls`)
      kind: Rule
      services:
        - name: whoami
          port: 80

---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroute-tls
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`celleri.ch`) && PathPrefix(`/tls`)
      kind: Rule
      services:
        - name: whoami
          port: 80
  tls:
    certResolver: default

Now launch your browser and visit http://celleri.ch/notls. The pod responds:

What about https://celleri.ch/tls? It also works, but the browser warns us that the connection is not secure. If we look at the certificate, we can see why:

To keep the Let’s Encrypt production server from being hit with many requests while our setup wasn’t yet working properly, we initially pointed the Traefik service at the Let’s Encrypt staging server. Now we change this setting and use the production server to get a real certificate.

Change the certResolver to use the Let’s Encrypt production server

In our definition of Traefik Deployment, we have the following parameters:

... cropped for readability ...
spec:
  serviceAccountName: traefik-ingress-controller
  containers:
    - name: traefik
      image: traefik:v2.0
      args:
        - --api.insecure
        - --accesslog
        - --entrypoints.web.Address=:80
        - --entrypoints.websecure.Address=:443
        - --providers.kubernetescrd
        - --certificatesresolvers.default.acme.tlschallenge
        - --certificatesresolvers.default.acme.email=...
        - --certificatesresolvers.default.acme.storage=acme.json
        # Please note that this is the staging Let's Encrypt server.
        - --certificatesresolvers.default.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
... cropped for readability ...

We tell Traefik to use the tlsChallenge method with a certificate resolver named default. We also provide an email address and a storage location for the certificates. And, as noted in the snippet, we point it at the staging caServer for now.

Important: in our deployment we did not configure any persistent volume. This means the certificate exists only in the pod’s memory and will disappear when the deployment reloads. In a production environment we would have to address this and provide a volume.
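One way to address this is to mount a volume for acme.json. The following is only a sketch, assuming a hostPath volume (a PersistentVolumeClaim would be the cleaner choice in production; paths and names are assumptions):

```yaml
# Hypothetical fragment: persist acme.json so certificates survive restarts.
spec:
  containers:
    - name: traefik
      image: traefik:v2.0
      volumeMounts:
        - name: acme-storage
          mountPath: /acme
  volumes:
    - name: acme-storage
      hostPath:
        path: /var/lib/traefik-acme
        type: DirectoryOrCreate
```

With such a mount, the storage argument would point at the mounted path, e.g. --certificatesresolvers.default.acme.storage=/acme/acme.json.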

Ok, let’s comment out the caserver line and redeploy the Traefik deployment to see if we get a real certificate:

... cropped for readability ...
spec:
  serviceAccountName: traefik-ingress-controller
  containers:
    - name: traefik
      image: traefik:v2.0
      args:
        - --api.insecure
        - --accesslog
        - --entrypoints.web.Address=:80
        - --entrypoints.websecure.Address=:443
        - --providers.kubernetescrd
        - --certificatesresolvers.default.acme.tlschallenge
        - --certificatesresolvers.default.acme.email=...
        - --certificatesresolvers.default.acme.storage=acme.json
        # Please note that this is the staging Let's Encrypt server.
        # - --certificatesresolvers.default.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
... cropped for readability ...
# deploy the changed file
kubectl apply -f 02-traefik-service/traefik.yaml

After a while, we’ll get a valid certificate:

After this, we can relax a little. But we want to go a step further: Traefik v2 also has a nice dashboard showing all the running Ingress configuration. We don’t want everyone to be able to access it, though, so we add basic authentication to protect the dashboard.

Use the Traefik middleware to access the Traefik Dashboard with basic authentication

Traefik 2.x introduced a new mechanism, middleware, that can help us with many tasks when processing incoming requests. We will use basicAuth middleware to protect the Traefik Dashboard and expose it to the outside world.

First, we need to create a Secret with our username and hash password for the basicAuth middleware to use later:

# create user:password file 'user'
htpasswd -c ./user cellerich

# enter password twice...

# create secret from password file 
kubectl create secret generic traefik-admin --from-file user -n kube-system

Be sure to create the Secret in the kube-system namespace, because the Traefik service and its dashboard live there as well.

We then deploy the middleware and IngressRoute to our cluster:

kubectl apply -f 04-traefik-dashboard/traefik-admin-withauth.yaml
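The real file is in the repo; as a hypothetical sketch, traefik-admin-withauth.yaml combines a basicAuth Middleware referencing the traefik-admin Secret with an IngressRoute for the dashboard host that applies it (object names and the dashboard port are assumptions):

```yaml
# basicAuth middleware backed by the `traefik-admin` secret
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: traefik-basic-auth
  namespace: kube-system
spec:
  basicAuth:
    secret: traefik-admin
---
# IngressRoute for the dashboard, protected by the middleware above
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: kube-system
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.celleri.ch`)
      kind: Rule
      services:
        - name: traefik
          port: 8080
      middlewares:
        - name: traefik-basic-auth
  tls:
    certResolver: default
```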

Now when we visit https://traefik.celleri.ch, we get a login prompt:

With the correct credentials, we will access Traefik V2’s dashboard:

And we will get a lot of information about our Ingress Route:

Conclusion

That’s all for this tutorial. In the course of my research I didn’t find many examples of how to set up Traefik v2 on K3s, and the Klipper LB part in particular was never mentioned. That’s why I wanted to share my experience, in the hope that it helps you, or at the very least my future self.

Reference links:

GitHub:

github.com/cellerich/k…

About Rancher’s Klipper LB:

github.com/rancher/kli…

Traefik Middleware documentation:

docs.traefik.io/v2.0/middle…