Preface

I was recently assigned to set up a K8S environment for the department, so I single-handedly became an operations team (just kidding).

It took 4-5 days of stepping on pitfalls to build a usable environment.

So I'm writing this article to record the pitfalls I ran into and how I solved them. It's all practical takeaways, and I hope it helps those of you who are building a K8S cluster environment.

This article does not introduce the basic concepts again, so please bear with me; the concepts involved can be studied systematically in other excellent articles.

Cluster setup

I used 4 machines in total, all CentOS 7, with K8S version 1.17.0 and Docker version 19.03.5: one as the master and three as nodes. I can only say the cluster isn't very HA, but it can cope with low-concurrency scenarios.

I set up the cluster following another article on Juejin; if you follow his approach step by step you can basically build a working cluster. Of course there were still some problems, otherwise there would be nothing for this article to record.

Link to that article: juejin.cn/post/684490… (the one I used)

By the way, there is an open-source project on GitHub that can install everything with one click via Ansible scripts: github.com/easzlab/kub…

Pitfall records

First of all, the original blog assumes the firewall is turned off, but in a production environment we cannot simply disable it, so I chose to open the corresponding ports on the master and the nodes instead.
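The exact list depends on which components you run; a sketch with firewalld, assuming the standard ports from the Kubernetes documentation plus Calico's BGP port:

    # master:
    firewall-cmd --permanent --add-port=6443/tcp        # kube-apiserver
    firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd
    firewall-cmd --permanent --add-port=10250/tcp       # kubelet
    firewall-cmd --permanent --add-port=10251/tcp       # kube-scheduler
    firewall-cmd --permanent --add-port=10252/tcp       # kube-controller-manager
    firewall-cmd --permanent --add-port=179/tcp         # Calico BGP
    firewall-cmd --reload

    # nodes:
    firewall-cmd --permanent --add-port=10250/tcp       # kubelet
    firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort services
    firewall-cmd --permanent --add-port=179/tcp         # Calico BGP
    firewall-cmd --reload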

0. A pitfall in the original article:

The article has you change CALICO_IPV4POOL_CIDR in calico.yaml from the default 192.168.0.0/16 to 10.96.0.0/12, the same segment used during initialization.

If you do this, your pods will not be able to access the Internet later, and this pitfall bothered me for a long time. The specific reason is related to K8S's underlying network forwarding: the pod network segment cannot be the same as the segment you used when initializing the cluster, otherwise pod requests will never be forwarded. So what we need to do is change CALICO_IPV4POOL_CIDR in calico.yaml to 192.168.0.0/16 or another segment of its own.
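For reference, this is the env entry in calico.yaml to change; any value works as long as it does not clash with the segment used at initialization:

            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"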

1. Modifying calico.yaml

We need to modify calico.yaml to fix Calico communication failures between nodes.

Add the following two lines to the env list in the yaml file (the interface value should match how your machine's actual NIC name starts):

            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens.*"

In context, the configuration looks like this:

            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens.*"
              # or name the NIC exactly, e.g. value: "interface=ens160"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"

Then we also need to open port 179, which Calico uses for BGP:

docs.projectcalico.org/v3.8/gettin…
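With firewalld, for example, run the following on each machine:

    firewall-cmd --permanent --add-port=179/tcp && firewall-cmd --reload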

2. Image pull failures

When a pod fails to start, describe it first to see the specific problem.
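For example (the pod name and namespace are placeholders):

    kubectl describe pod <pod-name> -n <namespace>

Image pull problems show up in the Events section as ErrImagePull or ImagePullBackOff.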

If it is an image pull failure, first check whether you can log in to your image registry (harbor.xxxx stands for your registry address; I use Harbor as the example), entering your password when prompted:

    docker login --username=<your-username> harbor.xxxx

Then you need to generate a Secret for registry authentication:

    kubectl create secret docker-registry <custom-name> --docker-server=harbor.xxxxx.tech --docker-username=<username> --docker-password=<password>

Finally, reference that Secret in your deployment.yaml:

    imagePullSecrets:
      - name: <custom-name>
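For context, imagePullSecrets sits at the pod-spec level, alongside containers. A minimal sketch with hypothetical names and image path:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-app                  # hypothetical name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: demo-app
      template:
        metadata:
          labels:
            app: demo-app
        spec:
          imagePullSecrets:
            - name: <custom-name>     # the Secret created above
          containers:
            - name: demo-app
              image: harbor.xxxxx.tech/library/demo-app:v1   # hypothetical image path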

3. Ingress requests to port 10254 fail

This problem occurs because the ingress is configured to hit port 10254 for its health check. The usual advice is to add a line with --masquerade-all=true to the kube-proxy configuration file, but with the installation method from that blog post, kube-proxy is started by Docker and there is no configuration file to be found 😒.

The kube-proxy configuration actually lives in a ConfigMap, which we can edit directly:

    kubectl edit cm kube-proxy -n kube-system

I switched to the IPVS network model (the default is iptables): in the ConfigMap, change mode to "ipvs", and change masqueradeAll: false to masqueradeAll: true.
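The relevant part of the ConfigMap ends up looking roughly like this (a sketch; all other fields keep their defaults):

    iptables:
      masqueradeAll: true
    mode: "ipvs"

IPVS mode also needs its kernel modules loaded, so create the following script: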

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

Then execute:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Finally, restart kube-proxy by deleting its pods; the DaemonSet will recreate them:

kubectl get pod -n kube-system | grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'

For the record, I am not sure whether switching to the IPVS model is required, or whether setting masqueradeAll to true in iptables mode would also work, but there is no harm in using ipvs 🤣.

4. Error: services "ingress-nginx" not found

Please refer to the solution on GitHub:

github.com/kubernetes-…

To be continued…

That's all the pitfalls I'll write up for now; I will keep adding more as I encounter them. I'm a complete K8S beginner who was forced by life to double as ops, and I now understand how hard the ops comrades have it! If there are any mistakes, please point them out promptly!

To finish, here is a map of pod statuses, a gospel for the obsessive-compulsive 😉
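For reference, the five pod phases as defined in the Kubernetes docs:

- Pending: the cluster has accepted the pod, but one or more containers have not been created yet (e.g. still scheduling or pulling images)
- Running: the pod is bound to a node and all containers are created, with at least one running, starting, or restarting
- Succeeded: all containers terminated successfully and will not be restarted
- Failed: all containers terminated and at least one exited with failure
- Unknown: the pod's state could not be obtained, typically because of a communication error with its node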