Recently, I have been doing various tests and deployments on Kubernetes, which means creating and destroying Kubernetes clusters over and over again, sometimes several times in an hour. And because some of what I test requires a brand-new cluster, simply deleting all the Pods, Services, Deployments, and so on to make the cluster “like new” isn’t an option.

At the same time, I needed a cluster as similar to a production environment as possible, so all the local solutions (Minikube, Vagrant, etc.) were out.

At first, I used a cloud provider’s hosted Kubernetes, because it was easy to deploy and, once the cluster started, I could download the kubectl configuration with the click of a button. But there were three problems with it:

  • It takes a long time: about 10 minutes per cluster. If I deploy and destroy clusters every day, that adds up.

  • You have to download and load the kubectl configuration file manually (a simple operation, but still a bit of a hassle).

  • It is a managed service, so I don’t have full access to the cluster.

So I decided to build a solution that would let me quickly and easily deploy and destroy Kubernetes clusters in the cloud:

Github.com/DavidZisky/…

I ended up with a simple Bash script that creates virtual machines on Google Cloud, deploys a 4-node Kubernetes cluster (1 master node and 3 worker nodes), downloads the kubectl configuration, and loads it into my system in just 60 seconds! Starting from scratch (without even a virtual machine) to being able to run kubectl apply -f any_deployment.yaml in less than 1 minute! So how is it done?

The specific requirements

An important consideration for me was that the solution be as portable as possible, so I tried not to depend on too many tools (hence no Terraform or Ansible, nothing extra to install or configure). That’s why I wrote it in Bash; the only dependency is an installed and configured gcloud CLI (with a default region and project set).

Start the VM in 30 seconds

Let’s start with the virtual machine. Typically it takes about 45 to 60 seconds before a cloud VM is actually usable. On DigitalOcean, for example, the VM boots in about 40 seconds (meaning ping starts responding), but you need roughly 15 more seconds for the other system services to start (most importantly, for the SSH server to accept connections).

So, first we need to make this part of the process roughly twice as fast.

We can do this by using a smaller OS image. That’s why I stuck with Google Cloud: they offer minimal Ubuntu images (less than 200 MB). I also tried many lightweight distributions, but they were either missing kernel modules or took a long time to boot.

Creating and booting a minimal-Ubuntu VM on Google Cloud takes about 30 seconds (from the gcloud API call to the SSH server being ready). That takes care of the first step; now let’s look at the remaining 30 seconds.
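
For reference, a minimal sketch of that VM-creation call (the instance name and machine type are my placeholder choices, and the exact minimal-Ubuntu image family depends on the release you pick):

```shell
# Create a VM from Google's minimal Ubuntu image (under 200 MB).
# "k3s-master" and the machine type are placeholder choices.
gcloud compute instances create k3s-master \
  --machine-type=e2-medium \
  --image-family=ubuntu-minimal-2204-lts \
  --image-project=ubuntu-os-cloud
```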

Deploy the K8s cluster in 30 seconds

How do we deploy a Kubernetes cluster in 30 seconds? The answer is k3s! If you haven’t heard of k3s, it’s a lightweight, certified Kubernetes distribution built by Rancher.

With k3s, we don’t have to worry much about getting Kubernetes up and running, because the k3s installer does it for us. So my script just downloads and executes it.
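
Concretely, on the master node this boils down to the one-line installer from the k3s docs:

```shell
# Download and run the k3s installer; it sets k3s up as a service
# and starts the control plane:
curl -sfL https://get.k3s.io | sh -

# Once it finishes, the bundled kubectl can already list the node:
sudo k3s kubectl get nodes
```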

Connect everything

We start the virtual machine in less than 30 seconds using a lightweight OS image, and k3s lets us get Kubernetes running in less than 20 seconds. Now we need to connect all the pieces. To do this, the Bash script:

  • Runs the gcloud commands to deploy the virtual machines
  • Downloads and executes the k3s installer on the master node
  • Retrieves the token generated by k3s, which is needed to join nodes to the cluster
  • Downloads and executes the k3s installer on each worker node (passing the token as an argument)
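
The steps above can be sketched roughly like this (the instance names and SSH invocations are my placeholders; the installer URL, the node-token path, and the K3S_URL/K3S_TOKEN variables come from the k3s documentation):

```shell
#!/usr/bin/env bash
set -euo pipefail

MASTER="k3s-master"
WORKERS="k3s-worker-1 k3s-worker-2 k3s-worker-3"

# 1. Install k3s on the master node
gcloud compute ssh "$MASTER" -- "curl -sfL https://get.k3s.io | sh -"

# 2. Read the join token the installer generated
TOKEN=$(gcloud compute ssh "$MASTER" -- \
  "sudo cat /var/lib/rancher/k3s/server/node-token")

# 3. Join each worker, pointing it at the master's API server
for w in $WORKERS; do
  gcloud compute ssh "$w" -- \
    "curl -sfL https://get.k3s.io | \
     K3S_URL=https://$MASTER:6443 K3S_TOKEN=$TOKEN sh -"
done
```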

The only real challenge is the generated kubectl configuration: the Google VM’s public IP address is not visible from inside the machine (it doesn’t show up when you run “ip addr” or “ifconfig”, because the VM only knows its internal address). So the certificates and kubeconfig that k3s generates are not valid for external access to the cluster.

But after much searching, I found the “--tls-san=” parameter, which adds extra IP addresses to the generated certificate. So we can look up the external IP with a gcloud command and pass it as the value of this parameter when installing k3s. Once k3s is deployed on all nodes and the workers have registered with the master, the cluster is ready.
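
A sketch of that lookup (the instance name is a placeholder; “--tls-san” and the installer’s INSTALL_K3S_EXEC variable are documented by k3s):

```shell
# Ask GCP for the VM's external (NAT) IP, which the VM itself can't see:
EXTERNAL_IP=$(gcloud compute instances describe k3s-master \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)')

# Pass it to the installer so the serving certificate includes it:
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_EXEC="--tls-san $EXTERNAL_IP" sh -
```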

The last remaining step is downloading the kubectl configuration (using scp to fetch the file from the master node). All the steps together take only 55 to 58 seconds. As you can see, there’s nothing special about this solution: just a few gcloud and curl commands glued together in a Bash script. But it gets the job done quickly.
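
One detail worth noting: k3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml on the master with the server address set to 127.0.0.1, so after copying the file down it has to be pointed at the external IP. A minimal sketch (the helper name is mine):

```shell
# Rewrite the API server address in a kubeconfig file: k3s generates
# "https://127.0.0.1:6443", which only works on the master itself.
point_kubeconfig_at() {   # $1 = kubeconfig file, $2 = external IP
  sed -i "s/127\.0\.0\.1/$2/" "$1"
}

# In the script this runs right after fetching the file, e.g.:
#   gcloud compute scp "k3s-master:/etc/rancher/k3s/k3s.yaml" ./kubeconfig
#   point_kubeconfig_at ./kubeconfig "$EXTERNAL_IP"
#   export KUBECONFIG="$PWD/kubeconfig"
```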

What’s next?

First, the whole solution is currently hard-coded to a four-node cluster (one master node and three worker nodes). It would be easy to make this configurable, but I haven’t tested larger clusters yet. I’ll add that option soon.

Second, right now the script can only download the kubectl configuration as a separate file (so you can pass it as a parameter to kubectl commands) or overwrite your existing kubectl configuration (which works for my needs, since I don’t keep long-running clusters). In the long run, though, an option to merge the new configuration into the existing one and switch the context would be useful.
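
For the record, one way to do that merge with plain kubectl (the file paths are placeholders; “default” is the context name in the kubeconfig k3s generates):

```shell
# Merge the downloaded kubeconfig into the existing one. Listing both
# files in KUBECONFIG and flattening produces a single combined config:
KUBECONFIG="$HOME/.kube/config:./kubeconfig" \
  kubectl config view --flatten > /tmp/merged-config
mv /tmp/merged-config "$HOME/.kube/config"

# Switch to the new cluster's context:
kubectl config use-context default
```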