In this article, we’ll take a quick look at k3d, a tool that lets you spin up throwaway Kubernetes clusters anywhere you have Docker installed, and explore issues that can arise when using it.

What is K3D?

k3d is a small program for running K3s clusters in Docker. K3s is a CNCF-certified, lightweight Kubernetes distribution and sandbox project. Designed for resource-constrained environments, it is packaged as a single binary and requires less than 512 MB of RAM. To learn more about K3s, check out our previous articles on our official account and the video on our website.

k3d uses a Docker image built from the K3s repository to spin up multiple K3s nodes in Docker containers on any machine where Docker is installed. That way, a single physical (or virtual) machine (called the Docker host) can run multiple K3s clusters, each with multiple server and agent nodes.
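For instance (a small sketch; the cluster names dev and test are arbitrary):

k3d cluster create dev
k3d cluster create test
docker ps --filter name=k3d-   # shows the containers backing both clusters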

What can K3D do?

In January 2021, k3d v4.0.0 was released with the following feature set:

  • Create/stop/start/delete/grow/shrink K3s clusters (and individual nodes)
      • via command-line flags
      • via a configuration file
  • Manage and interact with container image registries that can be used with the clusters
  • Manage the kubeconfigs of the clusters
  • Import images from your local Docker daemon into the container runtime running inside the cluster
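A few sketches of what these operations look like on the command line (the cluster name demo and the image myapp:dev are placeholders):

k3d cluster create demo --servers 1 --agents 2     # create a cluster
k3d cluster stop demo && k3d cluster start demo    # stop and start it again
k3d node create extra --cluster demo --role agent  # grow it by one agent node
k3d kubeconfig get demo                            # print the cluster's kubeconfig
k3d image import myapp:dev --cluster demo          # import an image from the local Docker daemon
k3d cluster delete demo                            # delete it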

Obviously, there are many more ways to tweak the details of these operations.

What is K3D used for?

The main use case for k3d is local development on Kubernetes, where its lightweight and simple nature keeps hassle and resource usage low. k3d was created to give developers a simple tool for running lightweight Kubernetes clusters on their development machines, resulting in fast iteration times in an environment close to production (much closer than, say, running docker-compose locally while running Kubernetes in production).

Over time, k3d has also evolved into an operations tool for testing certain Kubernetes (or, specifically, K3s) capabilities in an isolated environment. For example, with k3d you can easily create a multi-node cluster, deploy some application on it, then simply stop a node and see how Kubernetes reacts and possibly reschedules your app to another node.

In addition, you can use k3d in a continuous integration system to quickly spin up a cluster, deploy a test stack on it, and run integration tests. Once you’re done, you can simply tear down the whole cluster, with no need to worry about proper cleanup or possible leftovers.
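A minimal CI job could look like the following sketch (the manifest path and deployment name are placeholders):

k3d cluster create ci-test --wait --timeout 120s
kubectl apply -f ./deploy/test-stack.yaml
kubectl wait --for=condition=available deployment/myapp --timeout=120s
# ... run integration tests against the cluster ...
k3d cluster delete ci-test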

We also provide a k3d-dind image (a bit like a dream within a dream in the movie Inception, where we put containers inside containers). With it, you can create a Docker-in-Docker environment running k3d, which in turn spawns a K3s cluster in Docker. This means you only have a single container (k3d-dind) running on your Docker host, which in turn runs a whole K3s/Kubernetes cluster inside it.

How do I use K3D?

1. Install k3d (and, if needed, kubectl)

Note: this article has version requirements; please use k3d v4.1.1 or later.
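One common way to install it at the time of writing (see the official documentation for package-manager alternatives):

curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
k3d version   # verify the installation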

2. Try one of the following examples, or use the documentation or CLI help text to find your own way (k3d [command] --help)

The “easy” way

k3d cluster create

This command creates a K3s cluster with two containers: a Kubernetes control-plane node (server) and a load balancer (serverlb) in front of it. It puts both of them in a dedicated Docker network and exposes the Kubernetes API on a randomly chosen free port on the Docker host. It also creates a named Docker volume in the background in preparation for image imports.

By default, if no name argument is provided, the cluster is named k3s-default and the containers follow the naming scheme k3d-<clustername>-<role>-<#>, so in this case the two containers will show up as k3d-k3s-default-serverlb and k3d-k3s-default-server-0.

k3d waits until everything is ready, pulls the kubeconfig from the cluster and merges it into your default kubeconfig (usually located at $HOME/.kube/config or wherever the KUBECONFIG environment variable points). Don’t worry, you can adjust this behavior as well.

Use kubectl to inspect what you just created and display the nodes: kubectl get nodes. k3d also provides commands to list the things it created: k3d cluster|node|registry list.
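For convenience, here are those inspection commands in one place:

kubectl get nodes     # shows the node(s) of the new cluster
k3d cluster list      # lists all k3d-managed clusters
k3d node list         # lists all k3d-managed nodes (server and loadbalancer)
k3d registry list     # lists k3d-managed registries (none exist yet at this point)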

The “simple but sophisticated” way

k3d cluster create mycluster --api-port 127.0.0.1:6445 --servers 3 --agents 2 --volume '/home/me/mycode:/code@agent[*]' --port '8080:80@loadbalancer'

This command generates a K3s cluster with six containers:

  • 1 load balancer
  • 3 servers (control-plane nodes)
  • 2 agents (formerly known as worker nodes)

With --api-port 127.0.0.1:6445, you tell k3d to map the Kubernetes API port (6443 internally) to port 6445 on 127.0.0.1/localhost. That means you will have a connection string like server: https://127.0.0.1:6445 in your kubeconfig to connect to this cluster. This port is mapped from the load balancer to your host system. From there, requests are proxied to the server nodes, effectively simulating a production setup where a server node can fail and you want to fail over to another server.
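As a quick check (assuming the merged kubeconfig’s current context points at this cluster), you can print the API endpoint your kubeconfig uses:

kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# prints https://127.0.0.1:6445, matching the --api-port flag above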

--volume /home/me/mycode:/code@agent[*] bind-mounts your local directory /home/me/mycode to the path /code inside all ([*]) of your agent nodes. Replace * with an index (here: 0 or 1) to mount it into only one of those nodes. The specification that tells k3d which nodes a flag applies to is called a “node filter”, and it is used by other flags as well, e.g. the --port flag for port mappings.

That is, --port '8080:80@loadbalancer' maps localhost port 8080 to port 80 on the load balancer (serverlb), which can be used to forward HTTP ingress traffic into the cluster. For example, you can now deploy a web app into the cluster, exposed (via Service) through an Ingress such as myapp.k3d.localhost.

Then (assuming everything is set up to resolve that domain to the localhost IP), you can point your browser to myapp.k3d.localhost:8080 to access your app. Traffic flows from your host through the Docker bridge interface to the load balancer. From there, it is proxied into the cluster, passes through Ingress and Service, and reaches your application Pod.
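As an illustration, deploying and exposing such a web app could look like the following sketch (the name myapp and the nginx image are placeholders; it assumes the default Traefik ingress controller that K3s ships with):

kubectl create deployment myapp --image=nginx
kubectl expose deployment myapp --port 80

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
  - host: myapp.k3d.localhost
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
EOF

# with the 8080:80@loadbalancer mapping, the app is now reachable at
# http://myapp.k3d.localhost:8080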

Note: you must set up some mechanism to route myapp.k3d.localhost to the localhost IP (127.0.0.1). The most common way is an entry like 127.0.0.1 myapp.k3d.localhost in your /etc/hosts file (C:\Windows\System32\drivers\etc\hosts on Windows). However, that file doesn’t allow wildcards (*.localhost), so it can get a little cumbersome after a while; you might therefore want to look into tools like dnsmasq (macOS/UNIX) or Acrylic (Windows) to ease the burden. Tip: on some systems (at least Linux operating systems, including SUSE Linux and openSUSE) you can install the libnss-myhostname package to resolve *.localhost domains to 127.0.0.1 automatically, which means you don’t have to do this manually, e.g. whenever you want to test something via Ingress and need a domain for it.

One thing to note here: because multiple server nodes were requested, k3d passes the --cluster-init flag to K3s, which means K3s switches its internal datastore from the default (SQLite) to etcd.

The “configuration as code” way

Starting with k3d v4.0.0 (released in January 2021), we support configuration files to configure everything you previously did via command-line flags (and possibly even more, soon). As of this writing, the JSON schema used to validate configuration files can be found in the repo: github.com/rancher/k3d…

Sample configuration file:

# k3d configuration file, saved as e.g. /home/me/myk3dcluster.yaml
apiVersion: k3d.io/v1alpha2 # this will change in the future as we make everything more stable
kind: Simple # internally, we also have a Cluster config, which is not yet available externally
name: mycluster # name that you want to give to your cluster (will still be prefixed with `k3d-`)
servers: 1 # same as `--servers 1`
agents: 2 # same as `--agents 2`
kubeAPI: # same as `--api-port 127.0.0.1:6445`
  hostIP: "127.0.0.1"
  hostPort: "6445"
ports:
  - port: 8080:80 # same as `--port 8080:80@loadbalancer`
    nodeFilters:
      - loadbalancer
options:
  k3d: # k3d runtime settings
    wait: true # wait for cluster to be usable before returning; same as `--wait` (default: true)
    timeout: "60s" # wait timeout before aborting; same as `--timeout 60s`
  k3s: # options passed on to K3s itself
    extraServerArgs: # additional arguments passed to the `k3s server` command
      - --tls-san=my.host.domain
    extraAgentArgs: [] # additional arguments passed to the `k3s agent` command
  kubeconfig:
    updateDefaultKubeconfig: true # add new cluster to your default Kubeconfig; same as `--kubeconfig-update-default` (default: true)
    switchCurrentContext: true # also set current-context to the new cluster's context; same as `--kubeconfig-switch-context` (default: true)

Assuming we saved it as /home/me/myk3dcluster.yaml, we can use it to configure a new cluster:

k3d cluster create --config /home/me/myk3dcluster.yaml

Note: You can still set additional arguments or flags on the command line, and they will take precedence over (or be merged with) whatever you defined in the configuration file.
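For example (illustrative; the --agents flag here simply overrides the value from the config file):

k3d cluster create --config /home/me/myk3dcluster.yaml --agents 3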

What else can K3D do?

You can use K3D in many scenarios, such as:

  • Create a cluster together with a k3d-managed container registry (see the sketch after this list)
  • Use the cluster for rapid development with hot code reloading
  • Use k3d together with other development tools, such as Tilt or Skaffold
      • both can take advantage of the image import feature via k3d image import
      • both can also leverage k3d-managed registries to speed up the development cycle
  • Use k3d in your CI system (for which we provide a PoC: github.com/iwilltry42/… )
  • Use the community-maintained vscode extension (github.com/inercia/vsc… )
  • Use it to set up highly available K3s clusters
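As a sketch of the first item (names and ports are placeholders; the k3d- prefix is added automatically to the registry name):

k3d registry create myregistry.localhost --port 5000
k3d cluster create newcluster --registry-use k3d-myregistry.localhost:5000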

You can try all of this yourself using the scripts prepared in this demo repo: github.com/iwilltry42/…

Thorsten Klein: DevOps engineer at Trivago, freelance software engineer at SUSE, and maintainer of k3d.