Using Kubernetes from 0 to 1 (Part 2): Start CentOS virtual machines with VirtualBox + Vagrant and build a Kubernetes cluster with Ansible. This article shows how to start CentOS virtual machines using VirtualBox + Vagrant, how to use the Ansible scripts to build a Kubernetes cluster in those virtual machines, and how to add new nodes to an existing cluster.

Starting a VM

First, clone and enter the project with the following command:

git clone https://github.com/choerodon/kubeadm-ansible.git && cd kubeadm-ansible

VirtualBox + Vagrant are used to start three CentOS VMs. The Vagrantfile is stored in the root directory of the project.

The Vagrantfile is as follows:

Vagrant.configure(2) do |config|
  (1..3).each do |i|
    config.vm.define "node#{i}" do |s|
      s.vm.box = "bento/centos-7.3"
      s.vm.box_url = "http://file.choerodon.com.cn/vagrant/box/bento_centos-7.3.box"
      s.vm.hostname = "node#{i}"
      n = 10 + i
      s.vm.network "private_network", ip: "192.168.56.#{n}"
      s.vm.provider "virtualbox" do |v|
        v.cpus = 2
        v.memory = 4096
      end
    end
  end
end

In the file, box_url specifies the download address of the box image, hostname specifies the VM hostname, private_network specifies the internal IP address, and cpus and memory specify the VM's hardware resources.
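
Before booting, the Vagrantfile can optionally be checked for syntax errors with the vagrant validate command (available in recent Vagrant releases; this check is an optional addition, not part of the original guide):

# Run from the project root, where the Vagrantfile lives
vagrant validate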

The Vagrant-Cachier plugin is used to share a common package cache between the virtual machines, reducing package download time; if it is missing, it can be installed as shown after the table below. The Vagrantfile starts VMs with the following configuration:

Hostname  CPU  Memory  IP             System
node1     2    4G      192.168.56.11  CentOS 7.3
node2     2    4G      192.168.56.12  CentOS 7.3
node3     2    4G      192.168.56.13  CentOS 7.3
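
The plugin can be installed on the host with a single command; this is standard Vagrant plugin usage rather than a step from the project's documentation, and the cluster also works without it:

# Install the optional vagrant-cachier plugin on the host machine
vagrant plugin install vagrant-cachier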

In the project root directory, run the following command to start the VM:

* Ensure that CPU virtualization is enabled on the host.

vagrant up
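
Once the command finishes, the state of the three VMs can be checked from the host; this verification step is an optional addition to the original instructions:

# All three nodes should be reported as "running"
vagrant status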

Log in to VM node1

vagrant ssh node1

Deploying Kubernetes

Install the environment required to run Ansible on node1:

sudo yum install -y epel-release && \
sudo yum install -y \
    ansible \
    git \
    httpd-tools \
    pyOpenSSL \
    python-cryptography \
    python-lxml \
    python-netaddr \
    python-passlib \
    python-pip
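
A quick way to confirm the installation succeeded is to check the Ansible version; this check is an optional addition for convenience:

# Should print the installed Ansible version without errors
ansible --version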

Clone the project code again inside node1 (to prevent deployment errors caused by line-ending changes):

git clone https://github.com/choerodon/kubeadm-ansible.git && cd kubeadm-ansible

Edit the kubeadm-ansible/inventory/hosts file on node1 to set the access address, user name, and password of each machine, and to map each node to its roles. The name at the start of each line is the machine's hostname. The user must have root privileges, but does not have to be the root user; any other user with root privileges also works. For example, to deploy a cluster with a single master node, configure it like this:

* In the [all] section, each row contains the information of one node: node1 is the node's hostname, ansible_host is the node's internal IP address, ip is the IP address of the network interface Kubernetes should bind to, ansible_user is a user on the node with administrator privileges, ansible_ssh_pass is that user's password, and ansible_become indicates that commands are executed with administrator privileges.

Nodes in kube-master are Kubernetes master nodes, nodes in kube-node are ordinary Kubernetes nodes, and nodes in etcd are the nodes where etcd will be deployed; in this tutorial the kube-master and etcd roles are placed on the same node. The etcd project recommends an odd number of etcd cluster members (e.g., 1, 3, 5) to prevent split-brain.

[all]
node1 ansible_host=192.168.56.11 ip=192.168.56.11 ansible_user=root ansible_ssh_pass=vagrant ansible_become=true
node2 ansible_host=192.168.56.12 ip=192.168.56.12 ansible_user=root ansible_ssh_pass=vagrant ansible_become=true
node3 ansible_host=192.168.56.13 ip=192.168.56.13 ansible_user=root ansible_ssh_pass=vagrant ansible_become=true
[kube-master]
node1
[etcd]
node1
[kube-node]
node1
node2
node3

* The kubeadm-ansible/inventory/hosts file in the project defaults to 3 master nodes.
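
For reference, a three-master layout built from the same vagrant nodes would look roughly like the following; this is an illustrative sketch rather than the exact file shipped with the repository:

[all]
node1 ansible_host=192.168.56.11 ip=192.168.56.11 ansible_user=root ansible_ssh_pass=vagrant ansible_become=true
node2 ansible_host=192.168.56.12 ip=192.168.56.12 ansible_user=root ansible_ssh_pass=vagrant ansible_become=true
node3 ansible_host=192.168.56.13 ip=192.168.56.13 ansible_user=root ansible_ssh_pass=vagrant ansible_become=true
[kube-master]
node1
node2
node3
[etcd]
node1
node2
node3
[kube-node]
node1
node2
node3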

Run the following command on node1 to deploy the cluster:

ansible-playbook -i inventory/hosts -e @inventory/vars cluster.yml
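
Before running the full playbook, it can be worth confirming that Ansible can reach every host in the inventory; this pre-flight check is an optional addition and uses Ansible's standard ping module, assuming password-based SSH works from node1:

# Each node should answer with "pong"
ansible -i inventory/hosts all -m ping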

In cluster.yml, we divide the cluster installation into 6 stages, which are as follows:

  • Installation preparation
    • Pre-installation checks: check the system, confirm the yum repositories, download cfssl.
    • Docker-related checks: check the Docker engine, configuration, and proxy.
  • Etcd installation
    • Generate etcd certificates
    • Install Docker
    • Configure the system environment
  • Components that both kube-master and kube-node must install
    • kubelet
  • kube-master installation
    • Check kubeadm
    • Generate certificates
    • Modify the configuration
  • kube-node installation
    • Generate the configuration file
    • kubeadm join
  • Installation of other components
    • Configure the Flannel network
    • Install ingress-nginx
    • Install the dashboard
    • Install heapster
    • Install kube-lego

Run the following command to check the pod status; if all pods are Running, the cluster deployment succeeded:

kubectl get po -n kube-system
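
If any pod stays in a non-Running state, standard kubectl commands can be used to inspect it; the pod name below is a placeholder, not a value from this tutorial:

# Show events and status details for a problematic pod
kubectl describe po <pod-name> -n kube-system
# Show the pod's container logs
kubectl logs <pod-name> -n kube-system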

If the deployment fails and you want to reset the cluster (this removes all cluster data), run:

ansible-playbook -i inventory/hosts reset.yml

Add a node

To add a new node to an existing cluster, edit kubeadm-ansible/inventory/hosts and add the new node's information. For example, if the new node's hostname is node4, its IP address is 192.168.56.14, and the other settings are the same as the existing nodes, add the following:

[all]
node1 ansible_host=192.168.56.11 ip=192.168.56.11 ansible_user=root ansible_ssh_pass=vagrant ansible_become=true
node2 ansible_host=192.168.56.12 ip=192.168.56.12 ansible_user=root ansible_ssh_pass=vagrant ansible_become=true
node3 ansible_host=192.168.56.13 ip=192.168.56.13 ansible_user=root ansible_ssh_pass=vagrant ansible_become=true
node4 ansible_host=192.168.56.14 ip=192.168.56.14 ansible_user=root ansible_ssh_pass=vagrant ansible_become=true
[kube-master]
node1
[etcd]
node1
[kube-node]
node1
node2
node3
node4
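
Optionally, confirm that the new node is reachable before scaling; this check again uses Ansible's ping module and is an addition to the original steps:

# node4 should answer with "pong"
ansible -i inventory/hosts node4 -m ping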

After the node information is added, run the following command to add the node:

ansible-playbook -i inventory/hosts -e @inventory/vars scale.yml

View node information after the node is added:

kubectl get node

That concludes the introduction to cluster deployment, and the next article will show you how to set up your first application.

For more articles in the Kubernetes series, see:

  • Use Kubernetes from 0 to 1
  • From 0 to 1 using Kubernetes series (2) – Installation tool introduction

About the Choerodon toothfish

Choerodon is an open source enterprise services platform that builds on Kubernetes’ container orchestration and management capabilities and integrates DevOps toolchains, microservices and mobile application frameworks to help enterprises achieve agile application delivery and automated operations management. It also provides IoT, payment, data, intelligent insights, enterprise application marketplace and other business components to help enterprises focus on their business and accelerate digital transformation.

You can also follow the latest developments and product features of Choerodon, and participate in community contributions, through the following community channels:

  • Website: choerodon.io
  • Forum: forum.choerodon.io
  • GitHub: github.com/choerodon/
  • Choerodon toothfish

Welcome to join the Choerodon Toothfish community to create an open ecological platform for enterprise digital services.