1. Environment preparation

Before you can deploy Kubernetes, you need to prepare the required hardware and software environment. This document describes how to install and deploy Kubernetes when there is no direct Internet connection.

1.1 Operating System

Recommended operating systems:

  • Ubuntu 16.04 (64-bit)
  • Red Hat Enterprise Linux 7.5 (64-bit)
The hosts used in this deployment are listed below.

1. k8s-master (master node)
  • OS: CentOS Linux release 7.2.1511 (Core)
  • Container runtime: Docker 1.13.1
  • Hardware: 8 CPU cores, 8 GB memory; 500 GB root disk, 1 TB total disk
  • Role: runs the Kubernetes master components and etcd; manages the worker nodes and monitors their liveness.

2. k8s-worker01 (worker node)
  • OS: CentOS Linux release 7.2.1511 (Core)
  • Container runtime: Docker 1.13.1
  • Hardware: 8 CPU cores, 8 GB memory; 500 GB root disk, 1 TB total disk
  • Role: runs containerized applications as a Kubernetes worker node.

3. k8s-worker02 (worker node)
  • OS: CentOS Linux release 7.2.1511 (Core)
  • Container runtime: Docker 1.13.1
  • Hardware: 8 CPU cores, 8 GB memory; 500 GB root disk, 1 TB total disk
  • Role: runs containerized applications as a Kubernetes worker node.

4. k8s-worker03 (worker node)
  • OS: CentOS Linux release 7.2.1511 (Core)
  • Container runtime: Docker 1.13.1
  • Hardware: 8 CPU cores, 8 GB memory; 500 GB root disk, 1 TB total disk
  • Role: runs containerized applications as a Kubernetes worker node.

5. nfs-server (NFS server)
  • OS: CentOS Linux release 7.2.1511 (Core)
  • NFS: NFSv4
  • Hardware: 8 CPU cores, 8 GB memory; 1 TB root disk, 2 TB total disk
  • Role: provides persistent NFS storage for all upper-layer applications.

6. registry-server (private image registry)
  • OS: CentOS Linux release 7.2.1511 (Core)
  • Container runtime: Docker 1.13.1
  • Registry software: Sonatype Nexus3
  • Hardware: 8 CPU cores, 8 GB memory; 1 TB root disk, 2 TB total disk
  • Role: hosts the image registry used to store and pull images.

7. kubectl/helm (tool node)
  • OS: Windows
  • Command-line tool: kubectl
  • Package tool: Helm
  • Hardware: 8 CPU cores, 8 GB memory; 500 GB disk

8. public (download node)
  • OS: CentOS Linux release 7.2.1511 (Core)
  • Container runtime: Docker 1.13.1
  • Hardware: 8 CPU cores, 8 GB memory; 500 GB root disk, 1 TB total disk
  • Role: an Internet-connected machine used to download the installation media described in section 2.
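
So that the machines can reach each other by hostname, it helps to map the names above to their IP addresses in /etc/hosts on every Linux node. A minimal sketch with placeholder addresses ({master-ip} and friends are not from this article and must be replaced with your own):

$ cat >> /etc/hosts << 'EOF'
{master-ip}    k8s-master
{worker01-ip}  k8s-worker01
{worker02-ip}  k8s-worker02
{worker03-ip}  k8s-worker03
{nfs-ip}       nfs-server
{registry-ip}  registry-server
EOF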

1.2 Configuring Firewall Policies

The following ports need to be open:

Protocol   Port          Description
TCP        80            Rancher UI/API when external SSL termination is used
TCP        443           Rancher agent, Rancher UI/API, kubectl
TCP        6443          Kubernetes apiserver
TCP        22            SSH provisioning of nodes using Node Driver
TCP        2379          etcd client requests
TCP        2380          etcd peer communication
UDP        8472          Canal/Flannel VXLAN overlay networking
TCP        10250         kubelet
TCP/UDP    30000-32767   NodePort range
TCP        8081          Nexus port
TCP        5001          Registry port
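
On CentOS nodes that keep firewalld running, the ports above can be opened with firewall-cmd. A minimal sketch (which ports each node actually needs depends on its role; this opens all of them):

$ firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp --add-port=6443/tcp --add-port=10250/tcp
$ firewall-cmd --permanent --add-port=2379-2380/tcp --add-port=8472/udp
$ firewall-cmd --permanent --add-port=30000-32767/tcp --add-port=30000-32767/udp
$ firewall-cmd --permanent --add-port=8081/tcp --add-port=5001/tcp
$ firewall-cmd --reload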

If you are just doing a trial run, you can simply stop the firewall instead:

$ systemctl stop firewalld

Ubuntu does not enable the UFW firewall by default; if it is on, it can likewise be disabled manually:

$ sudo ufw disable

1.3 (Optional) Clearing the Environment

1) Check whether /var/lib/rancher/state/ exists and delete it if it does.
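
A one-line version of this check (a sketch; the directory only exists on machines that have run Rancher before):

$ [ -d /var/lib/rancher/state ] && rm -rf /var/lib/rancher/state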

2) If Kubernetes was previously installed via Rancher, clean up the leftover containers, volumes, and directories:

# Delete running containers

$ docker rm -f -v $(docker ps -aq)

# Delete storage volumes

$ docker volume rm $(docker volume ls -q)

# Delete leftover directories

$ rm -rf /etc/kubernetes/ssl
$ rm -rf /var/lib/etcd
$ rm -rf /etc/cni
$ rm -rf /opt/cni
$ rm -rf /var/run/calico

2. Download and prepare installation media

1) Docker

Download docker-engine-1.12.6-1.el7.centos.x86_64.rpm and docker-engine-selinux-1.12.6-1.el7.centos.noarch.rpm:

$ wget https://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-1.12.6-1.el7.centos.x86_64.rpm
$ wget https://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-selinux-1.12.6-1.el7.centos.noarch.rpm

2) Private image registry

Pull the Nexus3 image, which will serve as the private image registry, and save it to a tar file:

$ docker pull sonatype/nexus3:latest
$ docker save sonatype/nexus3:latest > nexus3.tar

3) Rancher images

Download rancher-save-images.sh, the script that pulls all required images, and rancher-load-images.sh, the script that uploads them to the image registry:

$ wget https://github.com/rancher/rancher/releases/download/v2.0.0/rancher-save-images.sh
$ wget https://github.com/rancher/rancher/releases/download/v2.0.0/rancher-load-images.sh

Run rancher-save-images.sh to pull the images:

$ . rancher-save-images.sh

The script downloads all images required for the deployment and compresses them into rancher-images.tar.gz.

4) kubectl

Download the kubectl tool for Windows:

$ wget https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/windows/amd64/kubectl.exe

5) Helm

Download the v2.8.0 Helm client for Windows:

$ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.8.0-windows-amd64.tar.gz

Download the Helm server-side component, tiller, as a Docker image (it runs inside the cluster rather than on Windows) and save it to a tar file:

$ docker pull rancher/tiller:v2.8.2
$ docker save rancher/tiller:v2.8.2 > tiller.tar

3. Install Docker

Kubernetes 1.8 supports Docker 1.12.6, 1.13.1, and 17.03; it does not support later Docker releases.

3.1 Installing Docker

Copy docker-engine-1.12.6-1.el7.centos.x86_64.rpm and docker-engine-selinux-1.12.6-1.el7.centos.noarch.rpm to each node and install them with yum localinstall:

$ yum localinstall -y docker-engine-1.12.6-1.el7.centos.x86_64.rpm docker-engine-selinux-1.12.6-1.el7.centos.noarch.rpm
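
Installing the RPMs does not start the Docker daemon, so enable and start it with the standard systemd commands:

$ systemctl enable docker
$ systemctl start docker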

3.2 (Optional) Setting the Root Directory

After Docker is installed, run the following command to view its configuration:

$ docker info

By default, Docker's root directory is /var/lib/docker, which can consume a lot of disk space. It is therefore best to move it to a dedicated disk with sufficient capacity. The following steps assume a new disk at /dev/vdc.

1) Create a new dedicated root directory

$ mkdir /docker-root
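
If /dev/vdc is a brand-new disk with no filesystem on it (an assumption here; skip this step if the disk is already formatted, since mkfs destroys existing data), create an ext4 filesystem first to match the fstab entry below:

$ mkfs.ext4 /dev/vdc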

2) Mount the disk to the new root directory

$ mount /dev/vdc /docker-root

3) Make the mount permanent

$ echo "/dev/vdc /docker-root ext4 defaults 0 0" >> /etc/fstab

4) Set the docker to use the new root directory

$ vi /etc/docker/daemon.json

Add "graph": "/docker-root":

{
  "graph": "/docker-root"
}

5) Restart docker

$ systemctl daemon-reload
$ systemctl restart docker
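
To confirm that Docker picked up the new root directory, check docker info again:

$ docker info | grep "Root Dir"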

4. Provide network storage (optional)

In this document, NFS is used as the network storage.

4.1 Configuring a Shared Directory

Configure a shared directory for clients on the NFS server:

$ mkdir -p /nfs-share/docker-registry
$ echo "/nfs-share *(rw,async,no_root_squash)" >> /etc/exports

You can run the following command to make the configuration take effect:

$ exportfs -r
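
To double-check what is actually being exported, list the active exports:

$ exportfs -v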

4.2 Starting the Services

1) Start the rpcbind service first; the NFS service must register with rpcbind to start successfully:

$ systemctl start rpcbind

2) Start the NFS service:

$ systemctl start nfs-server

Enable rpcbind and nfs-server to start at boot:

$ systemctl enable rpcbind
$ systemctl enable nfs-server

4.3 Checking That the NFS Service Works

$ showmount -e localhost
$ mount -t nfs 127.0.0.1:/nfs-share /mnt

5. Install the private image registry

1) Import the image

Copy the nexus3.tar file to the machine that will host the image registry and import the image with docker load:

$ docker load < nexus3.tar

2) Set the storage directory

Create a persistent directory and mount the NFS shared directory:

$ mkdir /mnt/nexus-data && chmod 777 /mnt/nexus-data
$ mount -t nfs {nfs-server}:/nfs-share/docker-registry /mnt/nexus-data

3) Run the private image registry

Run the Nexus3 container, publishing port 8081 (the Nexus web UI) and port 5001 (the external port of the Docker registry):

$ docker run -d -p 8081:8081 -p 5001:5001 -v /mnt/nexus-data:/nexus-data --name nexus sonatype/nexus3
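
Nexus3 can take a minute or two to initialize; you can follow the container log until the web UI responds on port 8081 (using the container name from the command above):

$ docker logs -f nexus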

4) Create a Docker image repository

In the Nexus3 web UI, create a Docker image repository named docker that listens on port 5001.
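
Because this registry is served over plain HTTP rather than TLS, every Docker host that pushes to or pulls from it must list it as an insecure registry. A sketch of /etc/docker/daemon.json on each node, assuming the example registry address 10.10.30.190:5001 used later in this article (keep the "graph" entry only on hosts where section 3.2 was applied):

$ vi /etc/docker/daemon.json

{
  "graph": "/docker-root",
  "insecure-registries": ["10.10.30.190:5001"]
}

Restart Docker after editing the file:

$ systemctl restart docker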

6. Installation and Deployment

Copy tiller.tar, rancher-images.tar.gz, and rancher-load-images.sh to the machine where Rancher will be installed.

1) Upload the Rancher images to the private registry

Run rancher-load-images.sh:

$ . rancher-load-images.sh

The script imports all the images, tags them with the private registry address, and pushes them to the private registry.

2) Upload the tiller image to the private registry

$ docker tag rancher/tiller:v2.8.2 {registry-ip}/rancher/tiller:v2.8.2
# Push to the private image registry
$ docker push 10.10.30.190:5001/rancher/tiller:v2.8.2

6.1 Installing the Rancher Service

Install the Rancher service by executing the docker run command:

$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 {registry-ip}/rancher/rancher:v2.0.0

6.2 Creating a Cluster

6.2.1 Logging In to the System

1) Log in to Rancher

After the Rancher service starts normally, access Rancher through a browser.

2) Set the administrator password

On first login, set the administrator password as prompted.

3) Set the access address

After setting the administrator password, set the server URL that nodes will use to access Rancher.

4) Set the private image registry

Set system-default-registry to the private registry created earlier in this article, i.e. 10.10.30.190:5001.

6.2.2 Creating a Cluster

Once in Rancher, create a Custom type cluster named Demo.

6.3 Adding a Node

6.3.1 Adding Master and etcd Nodes

On the create-cluster page, select "etcd" and "Control" as the node roles, i.e. the node to add is a master and etcd node:

Then run the following command on the target machine to add it to the cluster as a master and etcd node.

$ sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  {registry-ip}/rancher/rancher-agent:v2.0.0 --server https://10.0.32.172 \
  --token pn7g52q7htck8s5pgmpdvbsq2lrplw8cxnvhjm4rp5kvf2k9ntx7tt \
  --ca-checksum d8be0a0b9f16c3238836e23b338630ab0c737051ceb14ccc35afd13c2898369a \
  --etcd --controlplane

6.3.2 Adding Workers

On the same page, select "Worker" as the node role, i.e. the node to add is a worker node:

Then run the following command on the target machine to add it to the cluster as a worker node.

$ sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  {registry-ip}/rancher/rancher-agent:v2.0.0 --server https://10.0.32.172 \
  --token pn7g52q7htck8s5pgmpdvbsq2lrplw8cxnvhjm4rp5kvf2k9ntx7tt \
  --ca-checksum d8be0a0b9f16c3238836e23b338630ab0c737051ceb14ccc35afd13c2898369a \
  --worker

7. Install kubectl

kubectl is installed on the Windows tool node:

1) Install kubectl

Copy kubectl.exe to the tool node and add its directory to the Windows Path environment variable.
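
One way to do this from a command prompt, assuming kubectl.exe was copied to a hypothetical C:\k8s folder (the System Properties dialog works just as well):

> setx PATH "%PATH%;C:\k8s"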

2) Configure the Kubeconfig file

In the current user's home directory, create a .kube folder and a config file inside it.

In Rancher, open the home page of the cluster you created and click "Kubeconfig File" to display the kubeconfig contents; copy them into ~/.kube/config.

3) Verify

Verify that kubectl is configured correctly by executing:

$ kubectl get nodes

8. Install Helm

Helm is installed on Windows, on the same machine as kubectl:

8.1 Installing the Helm Client

Copy helm-v2.8.0-windows-amd64.tar.gz to the tool node, unzip it locally, and add the directory containing helm.exe to the Windows Path environment variable.

8.2 Installing the Tiller Server

1) Create a Service Account named tiller

$ kubectl create serviceaccount tiller --namespace kube-system

2) Grant the cluster-admin role to the tiller Service Account:

The YAML file (rbac-config.yaml) that binds tiller to the cluster administrator role looks like this:

apiVersion: rbac.authorization.k8s.io/v1beta1 
kind: ClusterRoleBinding 
metadata: 
  name: tiller 
roleRef: 
  apiGroup: rbac.authorization.k8s.io 
  kind: ClusterRole 
  name: cluster-admin 
subjects: 
- kind: ServiceAccount 
  name: tiller 
  namespace: kube-system

Create the binding, granting tiller the cluster administrator role, by executing kubectl create -f:

$ kubectl create -f rbac-config.yaml

3) Install the Tiller server

Since the installation is offline, start a local chart repository (helm serve runs in the foreground, so use a separate terminal window):

$ helm serve

Install the tiller server into the Kubernetes cluster with helm init: the --service-account flag sets the service account to tiller, --stable-repo-url points the stable repository at the local repository, and --tiller-image specifies the tiller:v2.8.2 image from the private registry.

$ helm init --service-account=tiller --stable-repo-url=http://127.0.0.1:8879 \
  --tiller-image={registry-ip}/rancher/tiller:v2.8.2

8.3 Verifying the Installation

After the installation completes, run the following command to check whether it succeeded:

$ helm version

If the versions of the Helm client and Tiller server are displayed correctly, the installation is successful.

Alternatively, confirm that the tiller server pod is running by executing:

$ kubectl get pods -n kube-system

References

1. Single Node Installation: https://rancher.com/docs/rancher/v2.x/en/installation/single-node-install

2. Quick Start Guide: https://rancher.com/docs/rancher/v2.x/en/quick-start-guide/

3. Installation: https://rancher.com/docs/rancher/v2.x/en/installation/

About the author: Ji Xiangyuan, product manager at Beijing Shenzhou Aerospace Software Technology Co., Ltd. The copyright of this article belongs to the author.

Wechat official account: IK8S