This article is licensed under a CC BY 4.0 (Creative Commons Attribution 4.0 International) license. You are welcome to reprint or modify this article, but please credit the source.

By Su Yang

Published: Sep 08, 2019 · Word count: 15,348 · Reading time: 31 minutes · Link: soulteary.com/2019/09/08/…


Build your K8s environment with MicroK8s

Last year I wrote about building a simple Kubernetes cluster with the official toolkit, kubeadm, which is too complicated for anyone who just wants to try Kubernetes out. Here is a simpler tool: MicroK8s.

The official site describes the tool as "zero-ops Kubernetes for workstations and the Internet of Things." Its greatest value is the ability to quickly stand up a single-node container orchestration system for production testing.

The official documentation briefly covers installation and usage, but it does not account for the network problems that users in mainland China hit during installation. This article covers that scenario as well.

Before we begin

The MicroK8s team announced a while ago that Docker would be replaced with containerd:

The upcoming release of v1.14 Kubernetes will mark the MicroK8s switch to Containerd and enhanced security. As this is a big step forward we would like to give you a heads up and offer you a preview of what is coming. Give it a test drive with:

snap install microk8s --classic --channel=1.13/edge/secure-containerd

You can read more in our blog, and the respective pull request. Please, let us know how we can make this transition smoother for you. Thanks

The community has already discussed (and joked about) the change. To keep things simple, I'll stay on 1.13, which still uses Docker as the container runtime, and leave the new version for the next round of tinkering.

Install MicroK8s using Snap

Snap is Canonical's more "advanced" package management solution, first available on Ubuntu Phone.

Installing K8s with Snap is really simple, with a single command like the following:

snap install microk8s --classic --channel=1.13/stable

However, unless you run this command on an overseas host, you will likely find the installation painfully slow.

snap install microk8s --classic --channel=1.13/stable
Download snap "microk8s" (581) from channel "1.13/stable"    0%  25.9kB/s 2h32m

For now the only workaround is to configure a proxy for Snap. Snap does not read the system's environment variables; it only reads the service's own environment file.

You can easily modify snapd's environment variables with the following command, but note that the default editor is **nano**, which is awkward to use.

systemctl edit snapd.service

Here we can first switch the default editor to the familiar **vim**:

sudo update-alternatives --install "$(which editor)" editor "$(which vim)" 15
sudo update-alternatives --config editor

The interactive prompt asks us to type a number and press Enter to confirm the selection:

There are 5 choices for the alternative editor (providing /usr/bin/editor).

  Selection    Path                 Priority   Status
------------------------------------------------------------
* 0            /bin/nano             40        auto mode
  1            /bin/ed              -100       manual mode
  2            /bin/nano             40        manual mode
  3            /usr/bin/vim          15        manual mode
  4            /usr/bin/vim.basic    30        manual mode
  5            /usr/bin/vim.tiny     15        manual mode

Press <enter> to keep the current choice[*], or type selection number: 5
update-alternatives: using /usr/bin/vim.tiny to provide /usr/bin/editor (editor) in manual mode

Run the environment-editing command again and add the proxy configuration:

[Service]
Environment="HTTP_PROXY=http://10.11.12.123:10240"
Environment="HTTPS_PROXY=http://10.11.12.123:10240"
Environment="NO_PROXY=localhost,127.0.0.1,192.168.0.0/24,*.domain.ltd"
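Before retrying, you can check that snapd actually picked up the override; systemctl show prints a unit's effective environment (just a sanity check, nothing more):

systemctl show snapd.service --property=Environment
# should echo back the HTTP_PROXY / HTTPS_PROXY values configured above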

Run the installation again, and the download speed takes off:

snap install microk8s --classic --channel=1.13/stable
Download snap "microk8s" (581) from channel "1.13/stable"    31%  14.6MB/s 11.2s

If the speed doesn't change, try reloading and restarting the Snap service:

systemctl daemon-reload && systemctl restart snapd

If all goes well, you should see something like this:

snap install microk8s --classic --channel=1.13/stable
microk8s (1.13/stable) v1.13.6 from Canonical✓ installed

Run the following command to view the tools installed in snap:

snap list
Name      Version   Rev   Tracking  Publisher   Notes
core      16-2.40   7396  stable    canonical✓  core
microk8s  v1.13.6   581   1.13      canonical✓  classic

Previously, installing K8s on your own meant installing Docker first; with the Snap installation, all of this is in place by default.

docker version
Client:
 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        6247962
 Built:             Tue Feb 26 23:56:24 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       6247962
  Built:            Tue Feb 12 22:47:29 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Get the Kubernetes dependency images

To use Kubernetes, in addition to installing MicroK8s you also need the images of the tools it depends on. Getting them takes some work. First, fetch the 1.13 branch of the MicroK8s code:

git clone --single-branch --branch=1.13 https://github.com/ubuntu/microk8s.git

Then get the list of container images declared therein:

grep -ir 'image:' * | awk '{print $3 $4}' | uniq

Because the official code is rather wild, we get some grotesque image names back:

localhost:32000/my-busybox
elasticsearch:6.5.1
alpine:3.6
docker.elastic.co/kibana/kibana-oss:6.3.2
time="2016-02-04T07:53:57.505612354Z"level=error
cdkbot/registry-$ARCH:2.6
...
quay.io/prometheus/prometheus
quay.io/coreos/kube-rbac-proxy:v0.4.0
k8s.gcr.io/metrics-server-$ARCH:v0.2.1
cdkbot/addon-resizer-$ARCH:1.8.1
cdkbot/microbot-$ARCH
"k8s.gcr.io/cuda-vector-add:v0.1"
nginx:latest
istio/examples-bookinfo-details-v1:1.8.0
busybox:1.28.4

Based on the architecture of the target server we are deploying to, replace the $ARCH variable and strip out the meaningless entries (a scripted approach is sketched after the list). The cleaned-up list is as follows:

k8s.gcr.io/fluentd-elasticsearch:v2.2.0
elasticsearch:6.5.1
alpine:3.6
docker.elastic.co/kibana/kibana-oss:6.3.2
cdkbot/registry-amd64:2.6
gcr.io/google_containers/defaultbackend-amd64:1.4
quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:0.22.0
jaegertracing/jaeger-operator:1.8.1
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
k8s.gcr.io/heapster-grafana-amd64:v4.4.3
k8s.gcr.io/heapster-amd64:v1.5.2
cdkbot/addon-resizer-amd64:1.8.1
cdkbot/hostpath-provisioner-amd64:latest
quay.io/coreos/k8s-prometheus-adapter-amd64:v0.3.0
grafana/grafana:5.2.4
quay.io/coreos/kube-rbac-proxy:v0.4.0
quay.io/coreos/kube-state-metrics:v1.4.0
quay.io/coreos/addon-resizer:1.0
quay.io/prometheus/prometheus
quay.io/coreos/prometheus-operator:v0.25.0
quay.io/prometheus/alertmanager
quay.io/prometheus/node-exporter:v0.16.0
k8s.gcr.io/metrics-server-amd64:v0.2.1
nvidia/k8s-device-plugin:1.11
cdkbot/microbot-amd64
k8s.gcr.io/cuda-vector-add:v0.1
nginx:latest
istio/examples-bookinfo-details-v1:1.8.0
istio/examples-bookinfo-ratings-v1:1.8.0
istio/examples-bookinfo-reviews-v1:1.8.0
istio/examples-bookinfo-reviews-v2:1.8.0
istio/examples-bookinfo-reviews-v3:1.8.0
istio/examples-bookinfo-productpage-v1:1.8.0
busybox:1.28.4
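If you'd rather not do the substitution and cleanup entirely by hand, a small pipeline along these lines can do most of it, writing the result straight to package-list.txt (a rough sketch: the amd64 target and the grep filters are my assumptions, and the output still deserves a manual once-over):

grep -ir 'image:' * | awk '{print $3 $4}' | uniq \
    | sed 's/\$ARCH/amd64/g' \
    | grep -vE '^\.|localhost:32000|time=' \
    | tr -d '"' \
    | sort -u > package-list.txt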

Save the list above as package-list.txt, then run the following script on a cloud server with unrestricted network access to pull the K8s dependency images and save them offline:

PACKAGES=`cat ./package-list.txt`;

for package in $PACKAGES; do docker pull "$package"; done

docker images | tail -n +2 | grep -v "<none>" | awk '{printf("%s:%s\n", $1, $2)}' | while read IMAGE; do
    for package in $PACKAGES;
    do
        # images without an explicit tag default to :latest
        if [[ $package != *[':']* ]]; then package="$package:latest"; fi

        if [ "$IMAGE" == "$package" ]; then
            echo "[find image] $IMAGE"
            filename="$(echo $IMAGE | tr ':' '-' | tr '/' '-').tar"
            echo "[save as] $filename"
            docker save ${IMAGE} -o $filename
        fi
    done
done

There are many ways to get the saved images onto the server to be deployed; here is the simplest option: scp.

PACKAGES=`cat ./package-list.txt`;

for package in $PACKAGES;
do
    if [[ $package != *[':']* ]]; then package="$package:latest"; fi
    filename="$(echo $package | tr ':' '-' | tr '/' '-').tar"
    # change the addresses according to your actual setup
    scp "mirror-server:~/images/$filename" .
    scp "./$filename" "deploy-server:"
done

If all goes well you will see a log like this:

k8s.gcr.io-fluentd-elasticsearch-v2.2.0.tar              100%  140MB  18.6MB/s   00:07
elasticsearch-6.5.1.tar                                  100%  748MB  19.4MB/s   00:38
alpine-3.6.tar                                           100% 4192KB  15.1MB/s   00:00
docker.elastic.co-kibana-kibana-oss-6.3.2.tar            100%  614MB  22.8MB/s   00:26
cdkbot-registry-amd64-2.6.tar                            100%  144MB  16.1MB/s   00:08
gcr.io-google_containers-defaultbackend-amd64-1.4.tar    100% 4742KB  13.3MB/s   00:00
...

Finally, use the docker load command on the target server to import the image.

ls *.tar | xargs -I {} microk8s.docker load -i {}
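As a quick sanity check, listing the images through MicroK8s' bundled Docker should show roughly as many entries as package-list.txt:

microk8s.docker images | tail -n +2 | wc -l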

Next you can officially install K8s.

Start installing Kubernetes

Configuring various components with MicroK8s is simple with a single command:

microk8s.enable dashboard dns ingress istio registry storage

The complete list of components can be viewed with microk8s.enable --help:

microk8s.enable --help
Usage: microk8s.enable ADDON...
Enable one or more ADDON included with microk8s
Example: microk8s.enable dns storage

Available addons:

  dashboard
  dns
  fluentd
  gpu
  ingress
  istio
  jaeger
  metrics-server
  prometheus
  registry
  storage

If the enable execution went well, you should see a log like this:

logentry.config.istio.io/accesslog created
logentry.config.istio.io/tcpaccesslog created
rule.config.istio.io/stdio created
rule.config.istio.io/stdiotcp created
...
...
Istio is starting
Enabling the private registry
Enabling default storage class
deployment.extensions/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
Storage will be available soon
Applying registry manifest
namespace/container-registry created
persistentvolumeclaim/registry-claim created
deployment.extensions/registry created
service/registry created
The registry is enabled
Enabling default storage class
deployment.extensions/hostpath-provisioner unchanged
storageclass.storage.k8s.io/microk8s-hostpath unchanged
Storage will be available soon

Use microk8s.status to check the status of each component:

microk8s is running
addons:
jaeger: disabled
fluentd: disabled
gpu: disabled
storage: enabled
registry: enabled
ingress: enabled
dns: enabled
metrics-server: disabled
prometheus: disabled
istio: enabled
dashboard: enabled

However, enabling the components does not mean the K8s installation is ready. Run microk8s.inspect to check the result:

Inspecting services
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-docker is running
  Service snap.microk8s.daemon-apiserver is running
  Service snap.microk8s.daemon-proxy is running
  Service snap.microk8s.daemon-kubelet is running
  Service snap.microk8s.daemon-scheduler is running
  Service snap.microk8s.daemon-controller-manager is running
  Service snap.microk8s.daemon-etcd is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system info
  Copy network configuration to the final report tarball
  Copy processes list to the final report tarball
  Copy snap list to the final report tarball
  Inspect kubernetes cluster

 WARNING:  IPtables FORWARD policy is DROP. Consider enabling traffic forwarding with: sudo iptables -P FORWARD ACCEPT

The solution is as simple as adding a few rules to UFW and iptables:

sudo ufw allow in on cbr0 && sudo ufw allow out on cbr0
sudo ufw default allow routed
sudo iptables -P FORWARD ACCEPT

Check again with the microk8s.inspect command and you will see that the WARNING has disappeared.

But is Kubernetes really installed? Follow the next section to find out.

Kubernetes cannot start properly

With that done, use microk8s.kubectl get pods to check the current state of the Kubernetes pods. If you see ContainerCreating, Kubernetes still needs some extra "tinkering."

NAME                                      READY   STATUS              RESTARTS   AGE
default-http-backend-855bc7bc45-t4st8     0/1     ContainerCreating   0          16m
nginx-ingress-microk8s-controller-kgjtl   0/1     ContainerCreating   0          16m

Using microk8s.kubectl get pods --all-namespaces, you will see output similar to this:

NAMESPACE            NAME                                      READY   STATUS              RESTARTS   AGE
container-registry   registry-7fc4594d64-rrgs9                 0/1     Pending             0          15m
default              default-http-backend-855bc7bc45-t4st8     0/1     ContainerCreating   0          16m
default              nginx-ingress-microk8s-controller-kgjtl   0/1     ContainerCreating   0          16m
...

The first problem to solve is the pods stuck in Pending or ContainerCreating. Using microk8s.kubectl describe pod, you can quickly see the detailed status of a problematic pod:

Events:
  Type     Reason                  Age                 From                         Message
  ----     ------                  ----                ----                         -------
  Normal   Scheduled               22m                 default-scheduler            Successfully assigned default/default-http-backend-855bc7bc45-t4st8 to ubuntu-basic-18-04
  Warning  FailedCreatePodSandBox  21m                 kubelet, ubuntu-basic-18-04  Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  FailedCreatePodSandBox  43s (x45 over 21m)  kubelet, ubuntu-basic-18-04  Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

From the log output we can see that the dependency image list we compiled earlier let one slip through the net: MicroK8s also uses images that are not declared in the code but come from remote configuration.

In this case, we can either configure a proxy for Docker or pull the missing images manually, one by one.
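For reference, the manual route is the same export/import dance we used earlier; a sketch for the pause image the log complains about, assuming a machine with unrestricted network access:

# on the machine with unrestricted network access
docker pull k8s.gcr.io/pause:3.1
docker save k8s.gcr.io/pause:3.1 -o k8s.gcr.io-pause-3.1.tar
scp k8s.gcr.io-pause-3.1.tar deploy-server:

# on the target server
microk8s.docker load -i k8s.gcr.io-pause-3.1.tar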

For the proxy route, edit the environment configuration file of the Docker daemon used by MicroK8s, /var/snap/microk8s/current/args/dockerd-env (with vi, for example), and add the proxy configuration:

HTTP_PROXY=http://10.11.12.123:10555
HTTPS_PROXY=http://10.11.12.123:10555
NO_PROXY=127.0.0.1

Restart Docker:

sudo systemctl restart snap.microk8s.daemon-docker.service

With that in place, execute the following commands to reset MicroK8s and try again to install the various components:

microk8s.reset
microk8s.enable dashboard dns ingress istio registry storage

After the command completes, wait a moment and run microk8s.kubectl get pods again; all pods should be in the Running state:

NAME                                      READY   STATUS    RESTARTS   AGE
default-http-backend-855bc7bc45-w62jd     1/1     Running   0          46s
nginx-ingress-microk8s-controller-m9lc2   1/1     Running   0          46s

Run microk8s.kubectl get pods --all-namespaces for the full picture:

NAMESPACE            NAME                                              READY   STATUS      RESTARTS   AGE
container-registry   registry-7fc4594d64-whjnl                         1/1     Running     0          2m
default              default-http-backend-855bc7bc45-w62jd             1/1     Running     0          2m
default              nginx-ingress-microk8s-controller-m9lc2           1/1     Running     0          2m
istio-system         grafana-59b8896965-xtc27                          1/1     Running     0          2m
istio-system         istio-citadel-856f994c58-fbc7c                    1/1     Running     0          2m
istio-system         istio-cleanup-secrets-9q8tw                       0/1     Completed   0          2m
istio-system         istio-egressgateway-5649fcf57-cbqlv               1/1     Running     0          2m
istio-system         istio-galley-7665f65c9c-l7grc                     1/1     Running     0          2m
istio-system         istio-grafana-post-install-sl6mb                  0/1     Completed   0          2m
istio-system         istio-ingressgateway-6755b9bbf6-hvnld             1/1     Running     0          2m
istio-system         istio-pilot-698959c67b-zts2v                      2/2     Running     0          2m
istio-system         istio-policy-6fcb6d655f-mx68m                     2/2     Running     0          2m
istio-system         istio-security-post-install-5d7bb                 0/1     Completed   0          2m
istio-system         istio-sidecar-injector-768c79f7bf-qvcjd           1/1     Running     0          2m
istio-system         istio-telemetry-664d896cf5-jz22s                  2/2     Running     0          2m
istio-system         istio-tracing-6b994895fd-z8jn9                    1/1     Running     0          2m
istio-system         prometheus-76b7745b64-fqvn9                       1/1     Running     0          2m
istio-system         servicegraph-5c4485945b-spf77                     1/1     Running     0          2m
kube-system          heapster-v1.5.2-64874f6bc6-8ghnr                  4/4     Running     0          2m
kube-system          hostpath-provisioner-599db8d5fb-kxtjw             1/1     Running     0          2m
kube-system          kube-dns-6ccd496668-98mvt                         3/3     Running     0          2m
kube-system          kubernetes-dashboard-654cfb4879-vzgk5             1/1     Running     0          2m
kube-system          monitoring-influxdb-grafana-v4-6679c46745-68vn7   2/2     Running     0          2m

If you see something like this, Kubernetes is really ready.

Creating applications quickly

With the installation finished, we of course have to play with it a bit; I won't just follow the crowd, show off the admin dashboard, and hastily put down the pen.

Use Kubectl to create a Deployment based on a ready-made container:

microk8s.kubectl create deployment microbot --image=dontrebootme/microbot:v1

Since we are using the most advanced orchestration system, it would be a shame not to experience scaling:

microk8s.kubectl scale deployment microbot --replicas=2

Expose the service and set up traffic forwarding:

microk8s.kubectl expose deployment microbot --type=NodePort --port=80 --name=microbot-service

Run the get command to check the service status:

microk8s.kubectl get all

If all goes well, you should see something like the following log output:

NAME                                          READY   STATUS    RESTARTS   AGE
pod/default-http-backend-855bc7bc45-w62jd     1/1     Running   0          64m
pod/microbot-7c7594fb4-dxgg7                  1/1     Running   0          13m
pod/microbot-7c7594fb4-v9ztg                  1/1     Running   0          13m
pod/nginx-ingress-microk8s-controller-m9lc2   1/1     Running   0          64m

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/default-http-backend   ClusterIP   10.152.183.13   <none>        80/TCP         64m
service/kubernetes             ClusterIP   10.152.183.1    <none>        443/TCP        44m
service/microbot-service       NodePort    10.152.183.15   <none>        80:31354/TCP   13m

NAME                                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/nginx-ingress-microk8s-controller   1         1         1       1            1           <none>          64m

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/default-http-backend   1/1     1            1           64m
deployment.apps/microbot               2/2     2            2           13m

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/default-http-backend-855bc7bc45   1         1         1       64m
replicaset.apps/microbot-7c7594fb4                2         2         2       13m

You can see that the service we just created is reachable at 10.11.12.234:31354 (the node IP plus the NodePort). Open that address in a browser and you'll see the application already running.
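You can also confirm this from the command line; assuming the same node IP and NodePort as above:

curl -I http://10.11.12.234:31354
# an HTTP 200 response means the microbot pods are serving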

In line with the green principle of "whoever makes it cleans it up," besides "brainless" creation we also need to learn how to manage (destroy) things with the delete command. First, destroy the Deployment:

microk8s.kubectl delete deployment microbot

After the execution, the log output is as follows:

deployment.extensions "microbot" deleted

To destroy the service, we first need its name, again via the get command:

microk8s.kubectl get services microbot-service
NAME               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
microbot-service   NodePort   10.152.183.15   <none>        80:31354/TCP   24m

Then use the delete command to remove the service:

microk8s.kubectl delete service microbot-service

The execution result is as follows:

service "microbot-service" deleted

View the Dashboard

Some of you might want a peek at the Dashboard.

You can run the microk8s.config command to find the address the current server is listening on:

microk8s.config
apiVersion: v1
clusters:
- cluster:
    server: http://10.11.12.234:8080
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    username: admin

You can see that the listening address is 10.11.12.234. Run the proxy command to enable traffic forwarding:

microk8s.kubectl proxy --accept-hosts=.* --address=0.0.0.0

Then visit the following address to see the familiar Dashboard:

http://10.11.12.234:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

Other

The complete installation, system included, consumed nearly 8G of storage in total. If you intend to keep using it, plan your disk space in advance, for example by migrating the Docker storage location so there is room for future Docker images to grow.

df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
udev            7.8G     0   7.8G    0%  /dev
tmpfs           1.6G  1.7M   1.6G    1%  /run
/dev/sda2        79G   16G    59G   21%  /
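As a sketch of the migration idea, and only that: my assumption is that the bundled daemon reads its arguments from /var/snap/microk8s/current/args/dockerd, next to the dockerd-env file we edited earlier; --data-root itself is a standard dockerd flag:

# stop the bundled Docker daemon first
sudo systemctl stop snap.microk8s.daemon-docker.service
# point dockerd at a roomier disk (path is an example)
echo '--data-root /mnt/bigdisk/docker' | sudo tee -a /var/snap/microk8s/current/args/dockerd
sudo systemctl start snap.microk8s.daemon-docker.service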

Finally

This article was written a month ago. Since it uses the Docker-based scheme, it should in theory still hold up. If you run into any problems, feel free to discuss.

As the draft box fills up with more and more interesting content, it may be time to consider a "co-authoring" model.

– EOF


I now run a small tinkering group that has gathered some friends who enjoy tinkering.

With no ads, we talk about software, HomeLab, and programming problems, and from time to time share information about tech meetups in the group.

Friends who like to tinker are welcome to scan the QR code to add me. (Please note where you found me and why you're adding, or the request won't be approved.)

All that stuff about joining the group