Install Kubernetes

Reference: the Kubernetes website

# Check the system
honglei@k8s-master-ubuntu:~$ uname -a
Linux K8S-master-Ubuntu 5.4.0-58-generic #64~18.04.1-Ubuntu SMP Wed Dec 9 17:11:11 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

Ready to start

  • One or more machines running one of the following systems:
    • Ubuntu 16.04+
    • Debian 9+
    • CentOS 7
    • Red Hat Enterprise Linux (RHEL) 7
    • Fedora 25+
    • HypriotOS v1.0.1+
    • Container Linux (tested with version 1800.6.0)
  • 2 GB or more RAM per machine (less leaves too little room for your applications); see the quick check after this list
  • 2 CPU cores or more
  • Full network connectivity between all machines in the cluster (a public or private network is fine)
  • Unique hostname, MAC address, and product_uuid on every node. See here for more details.
  • Certain ports open on the machines. See here for more details.
  • Swap disabled. You must disable swap for the kubelet to work properly.
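A quick way to verify the resource and swap items above on each machine (a minimal sketch using standard tools):

nproc                                    # CPU cores: should print 2 or more
free -m | awk '/^Mem:/{print $2}'        # RAM in MB: should be about 2048 or more
swapon --show                            # should print nothing once swap is disabled
sudo cat /sys/class/dmi/id/product_uuid  # compare across nodes; values must differ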

Disable the firewall and swap

The firewalls on all hosts must be disabled

sudo ufw disable

Disable swap on all hosts

Since version 1.8, Kubernetes requires swap to be disabled; if it is not, kubelet will not start with the default configuration. Comment out the swap line in /etc/fstab:

root@master:/etc/docker# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=d1cc0144-5acb-4e88-95f5-965ce4d0bc30 /               ext4    errors=remount-ro 0       1
#/swapfile none swap sw 0 0
/dev/fd0        /media/floppy0  auto    rw,user,noauto,exec,utf8 0       0

# Then turn swap off immediately:
sudo swapoff -a
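If you prefer to script the /etc/fstab edit rather than doing it by hand, a one-line sed can comment out the swap entry (a sketch; it keeps a .bak backup of the original file):

# Comment out every line containing " swap " in /etc/fstab, backing up first
sudo sed -i.bak '/ swap / s/^/#/' /etc/fstab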

Root account in Ubuntu

The root account is disabled by default in Ubuntu, so you cannot log in as root out of the box

Open a terminal and enable the root account:

# Unlock the root account
sudo passwd -u root
# Set the root password (enter it twice when prompted)
sudo passwd root
# Switch to the root user
su -
# Leave the root shell
exit

Ensure that the MAC address and product_uuid are unique on each node

  • You can use the commands ip link or ifconfig -a to obtain the MAC addresses of the network interfaces
  • You can run sudo cat /sys/class/dmi/id/product_uuid to check the product_uuid

Generally, hardware devices have unique addresses, but some virtual machines may have duplicate addresses. Kubernetes uses these values to uniquely identify the nodes in the cluster. If these values are not unique on each node, the installation may fail.

Check network adapters

If you have more than one network adapter and your Kubernetes components are not reachable over the default route, we recommend adding IP routes in advance so that cluster traffic goes through the intended adapter.
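For example, if cluster traffic should use a second adapter, a static route can be added up front (a hypothetical sketch; the subnet and device name are placeholders for your environment):

# Send traffic for the node subnet through the second adapter (placeholders)
sudo ip route add 192.168.39.0/24 dev eth1
# Check which route the kernel would pick for a peer node
ip route get 192.168.39.4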

Ensure that the iptables tool does not use the nftables backend

In Linux, nftables is available as a modern replacement for the kernel's iptables subsystem, and the iptables tool can act as a compatibility layer that behaves like iptables while actually configuring nftables. This nftables backend is incompatible with the current kubeadm packages: it causes duplicated firewall rules and breaks kube-proxy.

If your system's iptables tool uses the nftables backend, you need to switch it to legacy mode to avoid these problems. This is the case by default on at least Debian 10 (Buster), Ubuntu 19.04, Fedora 29, and newer releases of those distributions. RHEL 8 does not support switching to legacy mode, so it is incompatible with the current kubeadm packages.

# Switch the tools to legacy mode
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
update-alternatives --set arptables /usr/sbin/arptables-legacy
update-alternatives --set ebtables /usr/sbin/ebtables-legacy
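To see which backend is currently active, the iptables version string usually says so, and update-alternatives shows the selected implementation (the output in the comment is illustrative):

iptables --version                      # e.g. "iptables v1.8.4 (nf_tables)" or "(legacy)"
update-alternatives --display iptables  # shows which alternative is selected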

Modify the hostname and hosts file

# Change the hostname
sudo vi /etc/hostname
# Edit the hosts file
sudo vi /etc/hosts
# /etc/hosts stores the mapping between names and IP addresses. A name has no
# fixed relationship to a particular host: you can map any name to any IP address.

# /etc/hosts on master
192.168.39.3 master
192.168.39.4 node1
192.168.39.5 node2
# /etc/hosts on node1
192.168.39.3 master
192.168.39.4 node1
192.168.39.5 node2
# /etc/hosts on node2
192.168.39.3 master
192.168.39.4 node1
192.168.39.5 node2

Install the container runtime

As of v1.6.0, Kubernetes enables the Container Runtime Interface (CRI) by default.

As of v1.14.0, kubeadm automatically detects the container runtime on Linux nodes by probing a list of well-known UNIX domain sockets. The detectable runtimes and their socket paths are:

Runtime      Domain socket
Docker       /var/run/docker.sock
containerd   /run/containerd/containerd.sock
CRI-O        /var/run/crio/crio.sock

If both Docker and containerd are detected, Docker takes precedence. This is necessary because Docker 18.09 ships with containerd, so both are detectable. If any other two or more runtimes are detected, kubeadm exits with an error.
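To see which runtime sockets exist on a node, you can simply check the paths from the table above:

# Lists whichever of the known runtime sockets are present on this node
ls -l /var/run/docker.sock /run/containerd/containerd.sock /var/run/crio/crio.sock 2>/dev/null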

On non-Linux nodes, Docker is used as the container runtime by default.

If the container runtime is Docker, kubelet talks to it through the built-in dockershim CRI implementation.

Other CRI-based runtimes are:

  • containerd (via containerd's built-in CRI plugin)
  • cri-o
  • frakti

Refer to the CRI Installation Guide for more information.

Install Docker

Install Docker CE on each of your nodes (all hosts).

The Kubernetes release notes list which versions of Docker are compatible with that version of Kubernetes.

Install Docker on your operating system using the following command:

# Install Docker CE
## Set up the repository:
### Install packages to allow apt to use a repository over HTTPS
sudo apt-get update && sudo apt-get install -y \
  apt-transport-https ca-certificates curl software-properties-common gnupg2

### Add Docker's official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
### If the address above is unreachable, a domestic mirror can be used instead.
### Add the Docker GPG key from the University of Science and Technology of China:
curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

## Add the Docker apt repository:
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
## Or use the Aliyun mirror:
sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

## Install Docker CE at pinned versions:
sudo apt-get update && sudo apt-get install -y \
  containerd.io=1.2.13-2 \
  docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \
  docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)
## Or install the latest versions; the effect is the same:
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io

# Kubernetes recommends the systemd cgroup driver for Docker. You can leave it
# unchanged, but kubeadm init will then print a warning.
# Set up the Docker daemon
cat <<EOF | sudo tee /etc/docker/daemon.conf
{
  "registry-mirrors": ["https://dr6xf1z7.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

# Create /etc/systemd/system/docker.service.d
sudo mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker
# Check the version and the active cgroup driver
docker version
docker info | grep "Cgroup Driver"

If you want to start the Docker service at startup, run the following command:

sudo systemctl enable docker

Please refer to the official Docker Installation Guide for more information.

Install kubeadm, kubelet, and kubectl

You need to install the following packages on each machine:

  • kubeadm: the instruction used to initialize the cluster.
  • kubelet: used to start pods, containers, etc., on each node in the cluster.
  • kubectl: command line tool used to communicate with the cluster.

kubeadm does not install or manage kubelet or kubectl for you, so you need to make sure their versions match the control plane installed through kubeadm. Otherwise you risk version skew, which can lead to unexpected errors and problems. However, one minor version of skew between the control plane and kubelet is supported, as long as the kubelet version does not exceed the API server version. For example, kubelet 1.7.0 is fully compatible with a 1.8.0 API server, but not vice versa.
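To keep routine apt upgrades from silently introducing skew, you can install an explicit, matching version of all three packages and then hold them (the version string below is illustrative; list what your mirror offers with apt-cache madison kubeadm):

# Install one matching version of all three tools, then freeze them
sudo apt-get install -y kubelet=1.20.0-00 kubeadm=1.20.0-00 kubectl=1.20.0-00
sudo apt-mark hold kubelet kubeadm kubectl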

For information about installing Kubectl, see the Installing and Setting up Kubectl documentation.

Warning:

These instructions exclude the Kubernetes packages from system upgrades, because kubeadm and Kubernetes require special attention when upgrading.

Add the Aliyun source

The official package sources are slow and unreliable to reach from China, so add a domestic mirror first.

Add the following to /etc/apt/sources.list:

vi /etc/apt/sources.list
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main

Add the source key:

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

Update the source:

sudo apt-get update

Start the installation

sudo apt-get install -y kubelet kubeadm kubectl

# Check the version:
root@master:/etc/docker# kubelet --version
Kubernetes v1.20.0

Start the kubelet:

systemctl start kubelet
# To start the kubelet service at boot, run:
sudo systemctl enable kubelet

Deploy the master node

View the list of images required by Kubernetes

# Get the list of images required for the latest version
root@master:/etc/docker# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.20.1
k8s.gcr.io/kube-controller-manager:v1.20.1
k8s.gcr.io/kube-scheduler:v1.20.1
k8s.gcr.io/kube-proxy:v1.20.1
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

Run kubeadm init

kubeadm init --kubernetes-version=v1.20.0 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16

If execution succeeds, output ending like the following is displayed:

kubeadm join 192.168.39.3:6443 --token 973slm.bl1aa33bx0wns5sj \
    --discovery-token-ca-cert-hash sha256:9ca9b8eb33e291f83acc2245d919443431692be5ab8527bd8cf58f57c5a18be5

Retrying after a failure

If the init command fails, run the following commands to clean up everything init left behind:

kubeadm reset
rm -rf /etc/kubernetes
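If a subsequent init still trips over stale state, some extra cleanup can help; kubeadm reset itself reminds you that it does not flush iptables rules or remove CNI configuration (a sketch; run as root):

# Flush iptables rules left behind by kube-proxy and the CNI plugin
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# Remove leftover CNI configuration and any stale kubeconfig
rm -rf /etc/cni/net.d $HOME/.kube/config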

The root user

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

Running kubectl as a non-root user

After deployment succeeds, to run kubectl as a non-root user, use the following commands, which are also part of the kubeadm init output. Without this step, kubectl fails with:

The connection to the server localhost:8080 was refused

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
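A quick sanity check that the kubeconfig is picked up:

kubectl cluster-info   # should print the control-plane endpoint
kubectl get nodes      # should list the master node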

Run join on the worker nodes

Run the join command printed by kubeadm init on each worker node:

kubeadm join 192.168.39.3:6443 --token 973slm.bl1aa33bx0wns5sj \
    --discovery-token-ca-cert-hash sha256:9ca9b8eb33e291f83acc2245d919443431692be5ab8527bd8cf58f57c5a18be5
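Bootstrap tokens expire after 24 hours by default. If the token from kubeadm init is no longer valid by the time a node joins, generate a fresh join command on the master:

kubeadm token create --print-join-command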

The join process

root@node2:/# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@node2:/etc/apt# kubeadm join 192.168.39.3:6443 --token 973slm.bl1aa33bx0wns5sj \
>     --discovery-token-ca-cert-hash sha256:9ca9b8eb33e291f83acc2245d919443431692be5ab8527bd8cf58f57c5a18be5
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

View node status

Check the status of the master node:

You can see that all the nodes are still in the NotReady state:

root@master:/# kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
master   NotReady   control-plane,master   24m   v1.20.0
node1    NotReady   <none>                 14s   v1.20.0
node2    NotReady   <none>                 24s   v1.20.0

Install the Pod network add-on

Note:

This section contains important information about network setup and deployment sequence. Please read all the suggestions carefully before proceeding.

You must deploy a Container Network Interface (CNI)-based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up until a network is installed.

  • Take care that your Pod network does not overlap with any of the host networks: if it does, you are likely to run into problems. If you find a conflict between your network plugin's preferred Pod network and a host network, choose a suitable CIDR block instead, pass it to kubeadm init via the --pod-network-cidr parameter, and substitute it in your network plugin's YAML.
  • By default, kubeadm sets up the cluster to use and enforce RBAC (role-based access control). Make sure your Pod network plugin supports RBAC, and so do any manifests you use to deploy it.
  • If you want to use IPv6 (dual-stack or single-stack IPv6 networking) for your cluster, make sure your Pod network plugin supports IPv6. IPv6 support was added to CNI in v0.6.0.

Note: Currently Calico is the only CNI plugin that the kubeadm project runs e2e tests against. If you find an issue related to a CNI plugin, report it in that plugin's own issue tracker, not in the kubeadm or Kubernetes issue trackers.

Add-ons implementing the Kubernetes network model

Flannel is a very simple overlay network that satisfies the Kubernetes requirements. Many people have reported success using Flannel with Kubernetes.

You can install a Pod network add-on on the control-plane node, or on any node that has the kubeconfig credentials, with the following command:

# Set up the flannel network; the original manifest lives at: https://github.com/coreos/flannel#flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Only one Pod network can be installed per cluster.

Execution output

root@master:/opt/k8syaml# kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

# The quay.io/coreos/flannel image will be downloaded.
root@master:/opt/k8syaml# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f89b7bc75-7vwsj         1/1     Running   0          47m
kube-system   coredns-7f89b7bc75-sb9d2         1/1     Running   0          47m
kube-system   etcd-master                      1/1     Running   0          47m
kube-system   kube-apiserver-master            1/1     Running   0          47m
kube-system   kube-controller-manager-master   1/1     Running   0          47m
kube-system   kube-flannel-ds-8qd6j            1/1     Running   0          13m
kube-system   kube-flannel-ds-cwjpg            1/1     Running   0          13m
kube-system   kube-flannel-ds-ql9kn            1/1     Running   0          13m
kube-system   kube-proxy-hcgqb                 1/1     Running   0          24m
kube-system   kube-proxy-qbl9b                 1/1     Running   0          47m
kube-system   kube-proxy-t9zd9                 1/1     Running   0          24m
kube-system   kube-scheduler-master            1/1     Running   0          47m


Check the node status again. All the nodes are ready:

root@master:/opt/k8syaml# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   49m   v1.20.0
node1    Ready    <none>                 25m   v1.20.0
node2    Ready    <none>                 25m   v1.20.0

If the following error occurs:

Unable to connect to the server: read tcp 192.168.20.5:37246->151.101.228.133:443: read: connection reset by peer

Install the missing certificate packages: apt-get install ca-certificates and apt-get install ssl-cert.

# If the flannel image cannot be downloaded but the local Docker already has it,
# change the yml file to prefer the local image: imagePullPolicy: IfNotPresent
# Get the image ID
docker images
# -o and > both write the image to a file; the source image can be referenced
# by image ID or by name (name:tag)
docker save ImagesId > /opt/flannel.tar
docker save -o /opt/flannel.tar quay.io/coreos/flannel:v0.13.1-rc1
# Transfer the file to the node that needs it
scp flannel.tar root@<node-ip>:/opt
# Load the image on that machine; -i and < both read from the file
docker load -i /opt/flannel.tar
docker load < flannel.tar
# If the image name is inconsistent, retag it
docker tag ImagesId quay.io/coreos/flannel:v0.13.1-rc1

Test the K8S cluster

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
# Visit http://NodeIp:Port
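To find the concrete port, read the PORT(S) column of the service; the output below is illustrative:

kubectl get svc nginx
# NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
# nginx   NodePort   10.96.201.17   <none>        80:31234/TCP   1m
# Then, from any machine that can reach a node:
curl http://192.168.39.4:31234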

Dashboard UI

Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot them, and manage cluster resources. Dashboard gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (such as Deployments, Jobs, and DaemonSets). For example, you can scale a Deployment, initiate a rolling update, restart a Pod, or use the wizard to deploy a new application.

Dashboard also displays resource status information and any error messages in the Kubernetes cluster.

Deploy the Dashboard UI

The Dashboard UI is not deployed by default. Deploy it with the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

If the VM cannot reach that URL, open it in a browser elsewhere, create a file on the VM, and copy the content into it.

If a remote download is not possible, load the image locally and, in recommended.yaml, add imagePullPolicy: IfNotPresent under image, which means the local image is preferred.

root@master:/opt/k8syaml# vi recommended.yaml
root@master:/opt/k8syaml# kubectl apply -f recommended.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

# check all pods
root@master:/opt/k8syaml# kubectl get pod --all-namespaces 
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            coredns-7f89b7bc75-7vwsj                     1/1     Running   0          14h
kube-system            coredns-7f89b7bc75-sb9d2                     1/1     Running   0          14h
kube-system            etcd-master                                  1/1     Running   0          14h
kube-system            kube-apiserver-master                        1/1     Running   0          14h
kube-system            kube-controller-manager-master               1/1     Running   0          14h
kube-system            kube-flannel-ds-8qd6j                        1/1     Running   0          14h
kube-system            kube-flannel-ds-cwjpg                        1/1     Running   0          14h
kube-system            kube-flannel-ds-ql9kn                        1/1     Running   0          14h
kube-system            kube-proxy-hcgqb                             1/1     Running   0          14h
kube-system            kube-proxy-qbl9b                             1/1     Running   0          14h
kube-system            kube-proxy-t9zd9                             1/1     Running   0          14h
kube-system            kube-scheduler-master                        1/1     Running   0          14h
kubernetes-dashboard   dashboard-metrics-scraper-79c5968bdc-49b5z   1/1     Running   0          142m
kubernetes-dashboard   kubernetes-dashboard-b8995f9f8-nvz8f         1/1     Running   0          142m
# If a pod fails to start, inspect the reason
kubectl describe pod dashboard-metrics-scraper-7b59f7d4df-5vmn2 -n kubernetes-dashboard


The image downloads can be slow; you can pull them in advance:

docker pull kubernetesui/metrics-scraper:v1.0.4
docker pull kubernetesui/dashboard:v2.0.0

Access the Dashboard UI

To protect your cluster data, Dashboard is deployed by default with minimal RBAC configuration. Currently, Dashboard only supports login using Bearer tokens. To create a token for this sample demonstration, you can follow the instructions on creating a sample user.

Insufficient Kubernetes Dashboard login permissions

Create a cluster-administrator service account. In this step, we create a service account for the Dashboard and obtain its credentials.

Run the following command:

This command creates a service account for the Dashboard in the kubernetes-dashboard namespace

kubectl create serviceaccount admin-user -n kubernetes-dashboard

Add cluster binding rules to your dashboard account

kubectl create clusterrolebinding admin-user -n kubernetes-dashboard  --clusterrole=cluster-admin  --serviceaccount=kubernetes-dashboard:admin-user

Use the following command to copy the token required for the dashboard login:

kubectl -n kubernetes-dashboard  describe secret admin-user | awk '$1=="token:"{print $2}'

# More information
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

Copy the token, select the Token option on the Dashboard login page, and paste it in.

Command line proxy

You can use the Kubectl command line tool to access Dashboard as follows:

kubectl proxy

kubectl makes the Dashboard available at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.

The UI can only be accessed from the machine where the command is executed. See kubectl proxy --help for more options.

# Obtain the token and log in
kubectl describe secret admin-user -n kubernetes-dashboard

A cross-domain problem occurs

Error trying to reach service: dial tcp 10.244.1.18:8443: connect: connectio

# Use port forwarding instead:
kubectl --namespace=kube-system port-forward <pod-name> 8443

[root@master01 ~]# kubectl -n kubernetes-dashboard get pod -o name | grep dashboard
pod/dashboard-metrics-scraper-6cd59dd9c7-tbh2h
pod/kubernetes-dashboard-5b9d976b79-7clvr
[root@master01 ~]# kubectl --namespace=kubernetes-dashboard port-forward pod/kubernetes-dashboard-5b9d976b79-7clvr 8443
Forwarding from 127.0.0.1:8443 -> 8443
Forwarding from [::1]:8443 -> 8443

# Access address
https://localhost:8443

Q&A

When installing Docker, apt reports that signatures cannot be verified because the public key is not available

honglei@master:~$ sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common gnupg2
Hit:1 http://mirrors.aliyun.com/ubuntu bionic InRelease
Hit:2 http://mirrors.aliyun.com/ubuntu bionic-security InRelease
Hit:3 http://mirrors.aliyun.com/ubuntu bionic-updates InRelease
Hit:4 http://mirrors.aliyun.com/ubuntu bionic-backports InRelease
Hit:5 http://mirrors.aliyun.com/ubuntu bionic-proposed InRelease
Get:6 https://download.docker.com/linux/ubuntu bionic InRelease [64.4 kB]
Err:6 https://download.docker.com/linux/ubuntu bionic InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 7EA0A9C3F273FCD8
Reading package lists... Done
W: Skipping acquire of configured file 'multivers/source/Sources' as repository 'http://mirrors.aliyun.com/ubuntu bionic-backports InRelease' doesn't have the component 'multivers' (component misspelt in sources.list?)
W: GPG error: https://download.docker.com/linux/ubuntu bionic InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 7EA0A9C3F273FCD8
E: The repository 'https://download.docker.com/linux/ubuntu bionic InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
honglei@master:~$

Solution:

Add the following to sources.list:

deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable

# Update the apt sources:
apt-get update

Dec 19 20:33:46 master systemd[1]: docker.socket: Failed with result 'service-start-limit-hit'.

# Docker failed to start because of /etc/docker/daemon.json; rename daemon.json to daemon.conf
mv daemon.json daemon.conf
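Before renaming the file away, it is worth checking what actually made the daemon fail; often the cause is a JSON syntax error in the config, such as a trailing comma (a hedged suggestion):

# Show the most recent Docker daemon log entries
sudo journalctl -u docker.service --no-pager -n 20
# Or run the daemon in the foreground to surface a config parse error
sudo dockerd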