In a job long, long ago, I was tasked with migrating our old LAMP stack to Kubernetes. My boss was always chasing new technologies and figured the switch would take only a few days, a bold estimate considering we didn't even know how containers worked.

After reading the official documentation and searching through plenty of material, we began to feel overwhelmed: there were many new concepts to learn, such as pods, containers, and replicas. It seemed to me that Kubernetes had been designed only for teams of very smart developers.

Then I did what I always do in such situations: I learned by doing. A simple example makes the complexities much easier to grasp, so I worked through the deployment process step by step.

In the end, we did it, although it took far longer than the estimated week: nearly a month to create the three clusters for development, testing, and production.

In this article, I'll show you in detail how to deploy an application to Kubernetes. By the end, you will have an efficient Kubernetes deployment and continuous delivery workflow.

Continuous integration and delivery

Continuous integration is the practice of building and testing the application on every update. By working in small increments, errors are detected earlier and can be resolved immediately.

With integration in place and all tests passing, we can add continuous delivery to automate the release and deployment process. Projects that use CI/CD can release more frequently and reliably.

We will use Semaphore, a fast, powerful, and easy-to-use continuous integration and delivery (CI/CD) platform, to automate the whole process:

1. Install project dependencies

2. Run unit tests

3. Build a Docker image

4. Push image to Docker Hub

5. One-click Kubernetes deployment

For the application, we have a Ruby Sinatra microservice that exposes a few HTTP endpoints. The project already includes everything needed for deployment, but a few pieces still have to be set up.


The preparatory work


To start, you'll need GitHub and Semaphore accounts. In addition, sign in to Docker Hub so you can pull and push Docker images later.

Next, you need to install some tools on your computer:

  • Git: to manage the code
  • Curl: The Swiss Army Knife of the Web
  • Kubectl: Remotely control your cluster

And of course, don't forget Kubernetes itself. Most cloud providers offer a managed Kubernetes service in one form or another, so choose one that suits your needs. The smallest machine configuration and cluster size are enough to run our sample app. I like to start with a three-node cluster, but a single-node cluster works just as well.
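The exact steps depend on the provider. As one illustration only, assuming Google Kubernetes Engine and the gcloud CLI, a small three-node cluster could be created roughly like this (the cluster name, zone, and machine type are placeholders):

# Sketch only: a small 3-node GKE cluster; adjust name, zone, and machine type
$ gcloud container clusters create demo-cluster \
    --zone us-central1-a \
    --num-nodes 3 \
    --machine-type e2-small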

Once the cluster is ready, download the kubeconfig file from your vendor. Some let you download it directly from their web console, while others require a helper tool. We need this file to connect to the cluster.
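A quick way to confirm the file works is to point kubectl at it and list the nodes (the file name below is just an example):

# Verify the downloaded kubeconfig by listing the cluster nodes
$ kubectl --kubeconfig=./my-cluster-kubeconfig.yaml get nodes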

With this, we’re ready to go. The first thing to do is fork the repository.


Fork the repository


Fork the demo application we'll use in this article:

  1. Access the Semaphore-Demo-Ruby-Kubernetes repository and click the Fork button in the upper right
  2. Click the Clone or Download button and copy the address
  3. Clone it: $ git clone https://github.com/your_repository_path…

Connect the new repository to Semaphore

1. Log in to Semaphore

2. Click the link in the sidebar to create a new project

3. Click the Add Repository button next to your Repository

Run the tests with Semaphore


Continuous integration makes testing fun and efficient. A well-developed CI pipeline can create a rapid feedback loop to detect errors before any damage is done. Our project comes with some off-the-shelf tests.

Open the initial pipeline file at .semaphore/semaphore.yml and take a quick look. This pipeline describes all the steps Semaphore follows to build and test the application. It starts with the version and name.

version: v1.0
name: CI

Next comes the Agent, which is the virtual machine that powers the job. We can choose from three types:

agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804

Blocks, tasks, and jobs define the actions to be performed at each step of the pipeline. In Semaphore, blocks run sequentially, while the jobs within a block run in parallel. The pipeline contains two blocks: one installs the libraries and the other runs the tests.

The first block downloads and installs Ruby Gems.

- name: Install dependencies
  task:
    jobs:
      - name: bundle install
        commands:
          - checkout
          - cache restore gems-$SEMAPHORE_GIT_BRANCH-$(checksum Gemfile.lock),gems-$SEMAPHORE_GIT_BRANCH,gems-master
          - bundle install --deployment --path .bundle
          - cache store gems-$SEMAPHORE_GIT_BRANCH-$(checksum Gemfile.lock) .bundle

The checkout command clones the code from GitHub. Since each job runs on a completely separate machine, we rely on the cache to store and retrieve files between job runs.

The second block runs the tests. Note that we reuse the checkout and cache commands to bring the initial files into the job. The last command starts the RSpec test suite.

- name: Tests
  task:
    jobs:
      - name: rspec
        commands:
          - checkout
          - cache restore gems-$SEMAPHORE_GIT_BRANCH-$(checksum Gemfile.lock),gems-$SEMAPHORE_GIT_BRANCH,gems-master
          - bundle install --deployment --path .bundle
          - bundle exec rspec

And finally, let's look at promotions. A promotion can trigger another pipeline when certain conditions are met, which lets us chain pipelines into complex workflows. Because we use auto_promote_on, the next pipeline starts automatically once all jobs have passed.

promotions:
  - name: Dockerize
    pipeline_file: docker-build.yml
    auto_promote_on:
      - result: passed

The workflow proceeds to the next pipeline.


Build the Docker image


We can run anything on Kubernetes as long as it's packaged in a Docker image. In this section, we'll learn how to build that image.

Our Docker image will contain the application’s code, Ruby, and all of its libraries. Let’s take a look at Dockerfile first:

FROM ruby:2.5
 
RUN apt-get update -qq && apt-get install -y build-essential
 
ENV APP_HOME /app
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
 
ADD Gemfile* $APP_HOME/
RUN bundle install --without development test
 
ADD . $APP_HOME
 
EXPOSE 4567
 
CMD ["bundle"."exec"."rackup"."--host"."0.0.0.0"."-p"."4567"]
Copy the code

The Dockerfile is like a detailed recipe containing all the steps and commands needed to build the container image:

1. Start with a pre-built Ruby image

2. Install the build tools with apt-get

3. Copy the Gemfiles, since they list all the dependencies

4. Install the dependencies with Bundler

5. Copy the app source code

6. Define the listening port and the startup command

We will bake our production image on Semaphore. However, if you want to do a quick test on your own computer, type:

$ docker build . -t test-image

To start the server locally, run the container with Docker and publish internal port 4567:

$ docker run -p 4567:4567 test-image

You can now test an available HTTP endpoint:

$ curl -w "\n" localhost:4567
hello world :))

Add Docker Hub account to Semaphore


Semaphore has a secure mechanism to store sensitive information such as passwords, tokens, or keys. In order to be able to push images into your Docker Hub repository, you need to create a Secret using your username and password:

  1. Open your Semaphore dashboard
  2. In the left navigation bar, click Secrets
  3. Click Create New Secret
  4. Name the secret dockerhub, enter your Docker Hub username and password, and save it
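Alternatively, if you have Semaphore's sem CLI installed and connected to your organization, the same secret can be created from the terminal. This is only a sketch, assuming the -e flag of sem create secret; the credentials are placeholders:

# Assumption: sem CLI is installed and authenticated; values are placeholders
$ sem create secret dockerhub \
    -e DOCKER_USERNAME="your-docker-username" \
    -e DOCKER_PASSWORD="your-docker-password"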

Build the Docker pipeline


This pipeline builds the image and pushes it to Docker Hub, with just one block and one job:

This time, we need a machine with more power, because Docker builds tend to be resource-intensive. We choose e1-standard-4, a mid-range machine with four CPUs, 8 GB of RAM, and 35 GB of disk space:

name: Docker build
agent:
  machine:
    type: e1-standard-4
    os_image: ubuntu1804

The build block starts by logging in to Docker Hub; the username and password are imported from the secret we just created. Once logged in, Docker has direct access to the image repository.

The next command is docker pull, which tries to pull the latest image. If an image is found, Docker may be able to reuse some of its layers to speed up the build. If there is no latest image yet, don't worry; the build just takes a little longer.

Finally, we push the new image. Note that we use the SEMAPHORE_WORKFLOW_ID variable to tag the image.

blocks:
  - name: Build
    task:
      secrets:
        - name: dockerhub
      jobs:
      - name: Docker build
        commands:
          - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
          - checkout
          - docker pull "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:latest || true
          - docker build --cache-from "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:latest -t "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID .
          - docker images
          - docker push "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID

When the image is ready, we move into the delivery phase of the project. We extend our Semaphore pipeline with a manual promotion.

promotions:
  - name: Deploy to Kubernetes
    pipeline_file: deploy-k8s.yml

To trigger the first automated build, push a commit:

$ touch test-build
$ git add test-build
$ git commit -m "Initial run on Semaphore" $git push originCopy the code

With the image ready, we can move into the deployment phase.

Deploy to Kubernetes


Automated deployment is Kubernetes’ strong suit. All we need to do is tell the cluster our final expected state, and it will take care of the rest.

However, before deploying, you must upload the kubeconfig file to Semaphore.


Upload Kubeconfig to Semaphore


We need a second secret: the kubeconfig for the cluster. This file grants administrative access to the cluster, so we don't want to check it into the repository.

Create a secret called do-k8s and upload your kubeconfig file to /home/semaphore/.kube/dok8s.yaml:
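If you prefer the sem CLI over the web console, a file-based secret can be created roughly like this. This is a sketch that assumes the --file flag of sem create secret maps a local path to a path inside the job; the local path is a placeholder:

# Assumption: sem CLI with --file local_path:path_in_job; local path is a placeholder
$ sem create secret do-k8s \
    --file ~/Downloads/kubeconfig.yaml:/home/semaphore/.kube/dok8s.yaml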

The deployment manifest

Although Kubernetes is a container orchestration platform, we don't manage containers directly. The smallest unit of deployment is actually a pod. A pod is like a group of inseparable friends who always go to the same place together: all containers in a pod run on the same node and share the same IP. They start and stop together, and because they run on the same machine, they can share resources.

The problem with pods is that they can be started and stopped at any time, and there is no way to predict which IP a pod will be assigned. To forward users' HTTP traffic, we also need a public IP and a load balancer that keeps track of the pods and forwards client traffic to them.
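Once the app is running later on, you can see this for yourself: kubectl can list each pod together with the node it landed on and the IP it was given, and those IPs change whenever pods are recreated.

# Show each pod's IP and the node it runs on
$ kubectl get pods -o wide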

Open the file deployment.yml. This is the manifest for deploying our application, split into two resources separated by three dashes (---). First, the deployment resource:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: semaphore-demo-ruby-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: semaphore-demo-ruby-kubernetes
  template:
    metadata:
      labels:
        app: semaphore-demo-ruby-kubernetes
    spec:
      containers:
        - name: semaphore-demo-ruby-kubernetes
          image: $DOCKER_USERNAME/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID

Here are a few concepts to clarify:

  • Resources have a name and several labels for organization
  • The spec defines the final desired state, and the template is the model used to create the pods
  • replicas sets how many copies of the pod to run. We often set it to the number of nodes in the cluster; since we are using three nodes, change this line to replicas: 3 (it can also be adjusted later on a live cluster, as shown below)
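If you later want to change the replica count without editing the manifest, kubectl can scale the running deployment directly; for example:

# Scale the running deployment to three replicas
$ kubectl scale deployment/semaphore-demo-ruby-kubernetes --replicas=3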


The second resource is the service. It binds to port 80 and forwards HTTP traffic to the deployment's pods:

---
 
apiVersion: v1
kind: Service
metadata:
  name: semaphore-demo-ruby-kubernetes-lb
spec:
  selector:
    app: semaphore-demo-ruby-kubernetes
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 4567

Kubernetes connects the service to the pods by matching the service's selector against the pods' labels. This way we can have many services and deployments in the same cluster and wire them together as needed.
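Once the application is deployed (we'll get there shortly), you can check which pods the service has matched by filtering on the same label and listing the service's endpoints:

# Pods carrying the label the service selects, and the endpoints it routes to
$ kubectl get pods -l app=semaphore-demo-ruby-kubernetes
$ kubectl get endpoints semaphore-demo-ruby-kubernetes-lb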

Deployment pipeline


We are now entering the final stage of the CI/CD configuration. At this point, we have a CI pipeline defined in .semaphore/semaphore.yml and a Docker pipeline defined in docker-build.yml. In this step, we will deploy to Kubernetes.

Open the deployment pipeline at .semaphore/deploy-k8s.yml:

version: v1.0
name: Deploy to Kubernetes
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804

Two jobs make up this final pipeline:

Job 1 performs the deployment. After importing the kubeconfig file, envsubst replaces the placeholder variables in deployment.yml with their actual values. kubectl apply then sends the manifest to the cluster.

blocks:
  - name: Deploy to Kubernetes
    task:
      secrets:
        - name: do-k8s
        - name: dockerhub
 
      env_vars:
        - name: KUBECONFIG
          value: /home/semaphore/.kube/dok8s.yaml
 
      jobs:
      - name: Deploy
        commands:
          - checkout
          - kubectl get nodes
          - kubectl get pods
          - envsubst < deployment.yml | tee deployment.yml
          - kubectl apply -f deployment.yml
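If you want to preview what envsubst produces before running the pipeline, you can reproduce the substitution locally. This is a minimal sketch, assuming you export the same variables the pipeline provides (the values below are placeholders):

# Print the manifest with placeholders replaced; values are placeholders
$ export DOCKER_USERNAME=your-docker-username
$ export SEMAPHORE_WORKFLOW_ID=local-test
$ envsubst < deployment.yml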

Job 2 tags the image as latest so that we can use it as a cache on the next run.

- name: Tag latest release
  task:
    secrets:
      - name: dockerhub
    jobs:
    - name: docker tag latest
      commands:
        - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
        - docker pull "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID
        - docker tag "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:latest
        - docker push "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:latest

This is the last step in the workflow.


Deploying the application

Let’s teach our Sinatra app to sing. Add the following code to the app class in app.rb:

get "/sing" do
  "And now, the end is near And so I face the final curtain..."
end

Push the modified files to GitHub:

$ git add .semaphore/*
$ git add deployment.yml
$ git add app.rb
$ git commit -m "Test Deployment" $git push origin masterCopy the code

Once the Docker pipeline is complete, you can check Semaphore’s progress:

When it’s time to deploy, click the Promote button to see if it works:

We've made a good start; now it's up to Kubernetes. We can use kubectl to check the deployment status. The initial state shows three desired pods with none available yet:

$ kubectl get deployments
NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
semaphore-demo-ruby-kubernetes   3         0         0            0           15m

A few seconds later, the pods start and the reconciliation is complete:

$ kubectl get deployments
NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
semaphore-demo-ruby-kubernetes   3         3         3            3           15m

Use kubectl get all to see the overall status of the cluster, which shows the pods, service, deployment, and replica set:

$ kubectl get all
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/semaphore-demo-ruby-kubernetes-7d985f8b7c-454dh    1/1     Running   0          2m
pod/semaphore-demo-ruby-kubernetes-7d985f8b7c-4pdqp    1/1     Running   0          119s
pod/semaphore-demo-ruby-kubernetes-7d985f8b7c-9wsgk    1/1     Running   0          2m34s

NAME                                        TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
service/kubernetes                          ClusterIP      10.12.0.1     <none>         443/TCP        24m
service/semaphore-demo-ruby-kubernetes-lb   LoadBalancer   10.12.15.50   35.232.70.45   80:31354/TCP   17m

NAME                                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/semaphore-demo-ruby-kubernetes   3         3         3            3           17m

NAME                                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/semaphore-demo-ruby-kubernetes-7d985f8b7c   3         3         3       2m3s

The service IP is shown after the pods. For me, the load balancer was assigned the external IP 35.232.70.45; yours will be whatever your provider assigned. Replace YOUR_EXTERNAL_IP in the command below with that address, and let's try out the new endpoint.
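If you'd rather query the address directly than scan the get all output, kubectl can print just that field (this assumes your load balancer exposes an IP rather than a hostname):

# Print only the load balancer's external IP
$ kubectl get service semaphore-demo-ruby-kubernetes-lb \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'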

$ curl -w "\n" http://YOUR_EXTERNAL_IP/sing
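If everything is wired up correctly, the response should be the lyric we added earlier:

And now, the end is near And so I face the final curtain...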

Now, the end is not far away.

Victory is within reach


Deploying to Kubernetes is not that difficult when you use the right CI/CD solution. You now have a fully automated continuous delivery pipeline for Kubernetes.

Here are a few suggestions for forking and playing with semaphore-demo-ruby-kubernetes:

  • Create a staging cluster
  • Build a deployment container and run tests in it
  • Extend the project with more microservices