2019-k8s-centos

Most of the CentOS + Kubernetes guides on the Internet are outdated or incomplete; the majority date from 2016 or 2017. After N attempts my CentOS system was a mess, with various configurations conflicting with and affecting each other, and kubeadm init kept failing. So I reinstalled CentOS and started over from scratch.

Thanks to everyone who provided technical support and patient guidance.

  • First, clone my GitHub repo locally:
git clone https://github.com/qxl1231/2019-k8s-centos.git
cd 2019-k8s-centos
  • After setting up the master, install the dashboard; see the separate dashboard .md document in the repo.

Deploying a k8s cluster on CentOS 7

Install docker-ce

The official documentation

Docker must be installed and configured on both the master and the worker nodes.

# Uninstall the original Docker
sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

# install dependencies
sudo yum update -y && sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
  
# add the official yum library
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
    
# Install docker-ce
sudo yum install docker-ce docker-ce-cli containerd.io

# Check the docker version
docker --version

# Enable and start docker on boot
systemctl enable --now docker

Alternatively, install it with the one-click script:

curl -fsSL "https://get.docker.com/" | sh
systemctl enable --now docker

Change the docker cgroup driver to systemd, the same driver k8s uses

# Set the docker cgroup driver: native.cgroupdriver=systemd
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

systemctl restart docker
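
A quick way to confirm the driver change took effect after the restart (standard docker command, added here for convenience):

docker info | grep -i cgroup    # should report: Cgroup Driver: systemd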

Install kubelet kubeadm kubectl

The official documentation

kubelet, kubeadm and kubectl must be installed on both the master and the worker nodes.

Installing Kubernetes requires the kubelet and kubeadm packages, but the yum source on the k8s website is packages.cloud.google.com, which is not reachable from mainland China. We can use the Aliyun yum mirror instead.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Disable SELinux (set it to permissive)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Install kubelet kubeadm kubectl
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet

# CentOS 7 users also need to set up bridge routing:
yum install -y bridge-utils.x86_64
modprobe br_netfilter    # load the module; run lsmod to view the loaded modules
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system                      # reload all sysctl configuration files
systemctl disable --now firewalld    # disable the firewall

# k8s requires swap to be disabled (qxl)
swapoff -a && sysctl -w vm.swappiness=0    # turn off swap
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab    # comment out the swap entries in /etc/fstab
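
A quick sanity check after the steps above (optional verification commands added here; not part of the original notes):

getenforce                                   # should print Permissive
free -m | grep -i swap                       # the Swap line should show 0 total after swapoff
sysctl net.bridge.bridge-nf-call-iptables    # should print net.bridge.bridge-nf-call-iptables = 1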

If you are working with VMs, perform the preceding steps once and then clone the VM. The lab environment here is 1 master and 2 nodes.
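
Cloned VMs come up with identical hostnames, and each cluster member needs a unique one. A minimal sketch (the names master/node1/node2 match the output shown later; the node IPs are placeholders, adjust them to your own network):

# run on each machine with its own name
hostnamectl set-hostname master    # use node1 / node2 on the other machines

# optional: make the names resolvable on every machine (example IPs)
cat >> /etc/hosts <<EOF
192.168.200.25 master
192.168.200.26 node1
192.168.200.27 node2
EOF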

Preparations for creating a cluster

# On the master:
kubeadm config images pull    # pull the images kubeadm needs

# -- If you cannot get over the wall, try the following --
kubeadm config images list    # list the required images (versions are not necessarily the ones below; adjust to your actual output)

# Pull the images from a domestic mirror, using the names from the list above
docker pull mirrorgooglecontainers/kube-apiserver:v1.14.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.14.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.14.1
docker pull mirrorgooglecontainers/kube-proxy:v1.14.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1    # coredns is not in mirrorgooglecontainers

# Retag the images
docker tag mirrorgooglecontainers/kube-apiserver:v1.14.1 k8s.gcr.io/kube-apiserver:v1.14.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.14.1 k8s.gcr.io/kube-controller-manager:v1.14.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.14.1 k8s.gcr.io/kube-scheduler:v1.14.1
docker tag mirrorgooglecontainers/kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

# Download the required images in advance; kubeadm init will not pull them itself,
# because it cannot reach the Google registry and errors out

# Delete the original (mirror-named) images
docker rmi mirrorgooglecontainers/kube-apiserver:v1.14.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.14.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.14.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.14.1
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.3.10
docker rmi coredns/coredns:1.3.1
# -- end of the "cannot get over the wall" workaround --

# On the nodes:
# Pull the required images from the domestic mirror
docker pull mirrorgooglecontainers/kube-proxy:v1.14.1
docker pull mirrorgooglecontainers/pause:3.1

# Retag the images
docker tag mirrorgooglecontainers/kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1

# Delete the original (mirror-named) images
docker rmi mirrorgooglecontainers/kube-proxy:v1.14.1
docker rmi mirrorgooglecontainers/pause:3.1
# without these images the node cannot load/run its pods
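The pull / tag / rmi sequence above can also be written as a short loop; a minimal sketch, assuming the same image list and versions as above:

images=(
  kube-apiserver:v1.14.1
  kube-controller-manager:v1.14.1
  kube-scheduler:v1.14.1
  kube-proxy:v1.14.1
  pause:3.1
  etcd:3.3.10
)
for img in "${images[@]}"; do
  docker pull "mirrorgooglecontainers/${img}"                        # pull from the domestic mirror
  docker tag  "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"    # retag to the name kubeadm expects
  docker rmi  "mirrorgooglecontainers/${img}"                        # drop the mirror-named copy
done
# coredns lives in its own repository
docker pull coredns/coredns:1.3.1
docker tag  coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi  coredns/coredns:1.3.1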

Create the cluster using kubeadm

# On my first attempt at initialization, /etc/kubernetes/admin.conf already existed as an empty
# file (I had created it by hand), and kubeadm panicked with:
# "panic: runtime error: invalid memory address or nil pointer dereference"
ls /etc/kubernetes/admin.conf && mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.bak    # move it aside as a backup
# Initialize the master (the master needs at least 2 CPU cores) -- this is the make-or-break step
kubeadm init --apiserver-advertise-address 192.168.200.25 --pod-network-cidr 10.244.0.0/16    # optionally add --kubernetes-version v1.14.1
# --apiserver-advertise-address  the address the API server uses to talk to the other nodes
# --pod-network-cidr             the pod network subnet; this exact CIDR is required when using the flannel network
  • When you run the initialization, kubeadm first checks the environment for consistency; fix whatever problems the error messages point out and run it again.
  • kubeadm queries https://dl.k8s.io/release/stable-1.txt for the latest k8s version; reaching that link requires getting over the wall. If it cannot be reached, kubeadm falls back to its own client version as the version to install (check it with kubeadm version). You can also pin the version explicitly with --kubernetes-version, as in the sketch below.
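For example, a minimal sketch (my addition) that checks the client version and re-runs the init command from above with the version pinned, so no online lookup is needed:

kubeadm version -o short    # prints the client version, e.g. v1.14.1
kubeadm init --kubernetes-version v1.14.1 \
    --apiserver-advertise-address 192.168.200.25 \
    --pod-network-cidr 10.244.0.0/16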
Initialization result:
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.503375 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: w2i0mh.5fxxz8vk5k8db0wq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

# The join command printed here is different for every cluster; save your own copy -- qxl
kubeadm join 192.168.200.25:6443 --token our9a0.zl490imi6t81tn5u \
    --discovery-token-ca-cert-hash sha256:b93f710eb9b389a69f0cd0d6dcf7c82e389a68f009eb6b2028f69d54b099de16

Set up kubectl permissions for a regular user

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Use the Flannel network

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
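
After the flannel manifest is applied, the kube-flannel DaemonSet pods should come up in kube-system and the nodes should turn Ready. A quick check (ordinary kubectl commands, added here for convenience):

kubectl get pods -n kube-system | grep flannel    # one kube-flannel-ds pod per node, all Running
kubectl get nodes                                 # every node should report Ready once the network is up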

Join the nodes to the cluster

# node1:
kubeadm join 192.168.20.5:6443 --token w2i0mh.5fxxz8vk5k8db0wq \
    --discovery-token-ca-cert-hash sha256:65e82e987f50908f3640df7e05c7a91f390a02726c9142808faa739d4dc24252

# node2:
kubeadm join 192.168.20.5:6443 --token w2i0mh.5fxxz8vk5k8db0wq \
    --discovery-token-ca-cert-hash sha256:65e82e987f50908f3640df7e05c7a91f390a02726c9142808faa739d4dc24252

Output log:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
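
If you join a node later and the token from kubeadm init has expired (kubeadm tokens are valid for 24 hours by default), generate a fresh join command on the master:

kubeadm token create --print-join-command    # prints a ready-to-run kubeadm join command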
# On the master:
kubectl get pods --all-namespaces
# -- Output --
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-rn8kd          1/1     Running   0          170m
kube-system   coredns-fb8b8dccf-slwr4          1/1     Running   0          170m
kube-system   etcd-master                      1/1     Running   0          169m
kube-system   kube-apiserver-master            1/1     Running   0          169m
kube-system   kube-controller-manager-master   1/1     Running   0          169m
kube-system   kube-flannel-ds-amd64-l8c7c      1/1     Running   0          130m
kube-system   kube-flannel-ds-amd64-lcmxw      1/1     Running   1          117m
kube-system   kube-flannel-ds-amd64-pqnln      1/1     Running   1          72m
kube-system   kube-proxy-4kcqb                 1/1     Running   0          170m
kube-system   kube-proxy-jcqjd                 1/1     Running   0          72m
kube-system   kube-proxy-vm9sj                 1/1     Running   0          117m
kube-system   kube-scheduler-master            1/1     Running   0          169m
# -- End of output --


kubectl get nodes
# -- Output --
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   171m   v1.14.1
node1    Ready    <none>   118m   v1.14.1
node2    Ready    <none>   74m    v1.14.1
# -- End of output --

Troubleshooting

journalctl -f -u kubelet    # follow only the kubelet service log
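
A few more commands that are generally useful when something misbehaves (standard systemd/kubectl/kubeadm tooling; my additions, not part of the original notes):

systemctl status kubelet                            # is the kubelet service running at all?
kubectl get pods -n kube-system -o wide             # which system pod is failing, and on which node
kubectl describe pod <pod-name> -n kube-system      # events and reasons for a failing pod
kubeadm reset                                       # last resort: wipe kubeadm state on a machine and start over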