The project is open source at https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster; issues and PRs are welcome.

Just yesterday at midnight, Istio 1.0 was released as production-ready! Many readers are eager to try Istio 1.0 but have no environment to run it in. Today I'm offering an out-of-the-box, easily customizable distributed development environment for testing Istio, Kubernetes itself, and your own applications, built with Vagrant and VirtualBox.

Note: This setup runs Istio 1.0 and Kubernetes 1.11 with kube-proxy in IPVS mode; a Mac with at least 16 GB of memory is recommended.

Prepare the environment

You will need the following software and environment:

  • More than 8GB memory

  • Vagrant 2.0 +

  • VirtualBox 5.0 +

  • The Kubernetes release package, version 1.9 or later (up to 1.11.0), downloaded in advance

  • Mac or Linux; Windows is not supported

The cluster

We use Vagrant and VirtualBox to install a three-node Kubernetes cluster, where the master also acts as a worker node:

  • node1: 172.17.8.101 (master and node)

  • node2: 172.17.8.102

  • node3: 172.17.8.103

Note: The IPs, hostnames, and components above are fixed to these nodes and remain unchanged even after the nodes are destroyed and rebuilt with Vagrant.

Container IP range: 172.33.0.0/30

Kubernetes service IP range: 10.254.0.0/16

Installed Components

The installed cluster contains the following components:

  • Flannel (Host-GW mode)

  • Kubernetes Dashboard

  • Etcd (single node)

  • kubectl

  • CoreDNS

  • Kubernetes (version depends on the release package you download; 1.9+ supported)

Optional plug-ins

  • Heapster + InfluxDB + Grafana

  • ElasticSearch + Fluentd + Kibana

  • Istio service mesh

  • Vistio

Directions for use

Clone the repo locally and download Kubernetes to the root directory of the project.

```shell
git clone https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster.git
cd kubernetes-vagrant-centos-cluster
wget https://storage.googleapis.com/kubernetes-release/release/v1.11.0/kubernetes-server-linux-amd64.tar.gz
```

Note: Kubernetes release packages can be downloaded from the official Kubernetes release page.

Start the cluster using Vagrant.

```shell
vagrant up
```

On first deployment, the centos/7 box is downloaded automatically, which takes some time; in addition, a series of packages must be downloaded and installed on each node. The whole process takes about 10 minutes.

If the centos/7 box fails to download while running `vagrant up`, you can download it manually and add it to Vagrant.

Add centos/7 Box manually

```shell
wget -c http://cloud.centos.org/centos/7/vagrant/x86_64/images/CentOS-7-x86_64-Vagrant-1801_02.VirtualBox.box
vagrant box add CentOS-7-x86_64-Vagrant-1801_02.VirtualBox.box --name centos/7
```

The next time you run `vagrant up`, the locally added centos/7 box will be used instead of being downloaded from the Internet.

Access the Kubernetes cluster

There are three ways to access the Kubernetes cluster:

  • Local access

  • Access within the VM

  • Kubernetes dashboard

Local access

You can operate the Kubernetes cluster directly from your own local environment without having to log in to a virtual machine.

To operate the Kubernetes cluster locally, install the kubectl command-line tool on your computer. Mac users can run:

```shell
wget https://storage.googleapis.com/kubernetes-release/release/v1.11.0/kubernetes-client-darwin-amd64.tar.gz
tar xvf kubernetes-client-darwin-amd64.tar.gz
cp kubernetes/platforms/darwin/amd64/kubectl /usr/local/bin/
```

Copy the repo's conf/admin.kubeconfig file to ~/.kube/config so you can run kubectl commands locally.

```shell
mkdir -p ~/.kube
cp conf/admin.kubeconfig ~/.kube/config
```

We recommend that you use this approach.
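Copying conf/admin.kubeconfig over ~/.kube/config silently discards any kubeconfig you already have. Here is a minimal sketch of a safer copy that backs up an existing config first; KUBE_DIR and SRC are temporary stand-in paths (so the sketch runs anywhere), and in practice you would use ~/.kube and conf/admin.kubeconfig:

```shell
# Stand-ins for ~/.kube and conf/admin.kubeconfig (hypothetical paths for illustration).
KUBE_DIR=$(mktemp -d)
SRC=$(mktemp)
echo "apiVersion: v1" > "$SRC"   # stand-in content for admin.kubeconfig

mkdir -p "$KUBE_DIR"
# Preserve any existing config before overwriting it.
if [ -f "$KUBE_DIR/config" ]; then
  cp "$KUBE_DIR/config" "$KUBE_DIR/config.bak"
fi
cp "$SRC" "$KUBE_DIR/config"
```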

Access from within the VM

If you run into problems, you can log in to a VM to debug:

```shell
vagrant ssh node1
sudo -i
kubectl get nodes
```

Kubernetes dashboard

You can also access the cluster directly through the Dashboard UI: https://172.17.8.101:8443

You can obtain the token value locally by executing the following command (requires kubectl to be installed):

```shell
kubectl -n kube-system describe secret `kubectl -n kube-system get secret | grep admin-token | cut -d " " -f1` | grep "token:" | tr -s " " | cut -d " " -f2
```

Note: The token value can also be found at the end of the `vagrant up` log.
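To make the token command easier to follow, here is the same extraction pipeline run step by step on fabricated sample output (the secret name and token below are made up purely for illustration):

```shell
# Fabricated sample of `kubectl -n kube-system get secret` output (illustrative only).
sample_secrets='default-token-abcde   kubernetes.io/service-account-token   3   1d
admin-token-xyz12   kubernetes.io/service-account-token   3   1d'

# Step 1: keep the admin-token line and take the first space-separated field (the secret name).
secret_name=$(printf '%s\n' "$sample_secrets" | grep admin-token | cut -d " " -f1)
echo "$secret_name"   # admin-token-xyz12

# Fabricated sample of the matching `describe secret` output line.
sample_describe='token:      eyJhbGciOiJSUzI1NiJ9.sample'

# Step 2: squeeze repeated spaces with tr, then take the second field (the token itself).
token=$(printf '%s\n' "$sample_describe" | grep "token:" | tr -s " " | cut -d " " -f2)
echo "$token"
```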

The Dashboard will only show monitoring metrics once you have installed the Heapster component described below.

Components

Heapster monitoring

Create Heapster monitor:

```shell
kubectl apply -f addon/heapster/
```

Visit Grafana

For services exposed through Ingress, add an entry to /etc/hosts:

```
172.17.8.102 grafana.jimmysong.io
```

Visit Grafana: http://grafana.jimmysong.io
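Blindly appending to /etc/hosts duplicates entries every time you re-run the setup. Here is a small sketch of an idempotent append; it writes to a temp file so it can run standalone, but in real use you would point HOSTS_FILE at /etc/hosts (with sudo):

```shell
# Temp file stands in for /etc/hosts so the sketch is self-contained.
HOSTS_FILE=$(mktemp)

# Append "IP hostname" only if that exact entry is not already present.
add_host() {
  entry="$1 $2"
  grep -qF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
}

add_host 172.17.8.102 grafana.jimmysong.io
add_host 172.17.8.102 grafana.jimmysong.io   # re-running is a no-op
```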

Traefik

Deploy Traefik Ingress Controller and add ingress configuration:

```shell
kubectl apply -f addon/traefik-ingress
```

Add a configuration to /etc/hosts:

```
172.17.8.102 traefik.jimmysong.io
```

Visit Traefik UI: http://traefik.jimmysong.io

EFK

Use EFK for log collection.

```shell
kubectl apply -f addon/efk/
```

Note: EFK consumes significant CPU and memory on each node that runs it. Make sure each virtual machine is allocated at least 4 GB of memory.

Helm

Deploy Helm:

```shell
hack/deploy-helm.sh
```

Service Mesh

We use Istio as the service mesh.

Installation

Go to the Istio release page, download the installation package, and put the istioctl command-line tool somewhere on your $PATH. Mac users can run:

```shell
wget https://github.com/istio/istio/releases/download/1.0.0/istio-1.0.0-osx.tar.gz
tar xvf istio-1.0.0-osx.tar.gz
mv istio-1.0.0/bin/istioctl /usr/local/bin/
```

Deploy Istio in Kubernetes:

```shell
kubectl apply -f addon/istio/
```

Run the example

```shell
kubectl apply -n default -f <(istioctl kube-inject -f yaml/istio-bookinfo/bookinfo.yaml)
istioctl create -f yaml/istio-bookinfo/bookinfo-gateway.yaml
```

Add the following entries to the /etc/hosts file on your local host:

```
172.17.8.102 grafana.istio.jimmysong.io
172.17.8.102 servicegraph.istio.jimmysong.io
```

You can then access these services at the following URLs:

  • http://grafana.istio.jimmysong.io

  • http://servicegraph.istio.jimmysong.io


See Vistio – Visualizing Istio Service Mesh using Netflix’s Vizceral for more details.

Management

Unless otherwise noted, the following commands operate in the current repo directory.

Suspend

Suspend the current virtual machines; they can be resumed later.

```shell
vagrant suspend
```

Resume

Resume the virtual machines from their last saved state.

```shell
vagrant resume
```

Note: After suspending and resuming, the clock inside the virtual machines still reads the time at which they were suspended, which makes monitoring data hard to interpret. Consider halting and restarting the VMs instead.

Restart

Halt the VMs, bring them back up, and re-run the provision scripts:

```shell
vagrant halt
vagrant up

# log in to node1 and run the provision script
vagrant ssh node1
/vagrant/hack/k8s-init.sh
exit

# log in to node2 and run the provision script
vagrant ssh node2
/vagrant/hack/k8s-init.sh
exit

# log in to node3, run the provision script, then deploy the base services
vagrant ssh node3
/vagrant/hack/k8s-init.sh
sudo -i
cd /vagrant/hack
./deploy-base-services.sh
exit
```

Now that you have a complete basic Kubernetes runtime, execute the following command in the root directory of the repo to obtain the admin user token for the Kubernetes Dashboard.

```shell
hack/get-dashboard-token.sh
```

Log in as prompted.

Clean up

Destroy the virtual machines and remove Vagrant state:

```shell
vagrant destroy
rm -rf .vagrant
```

Notes

This project is for development and testing only; do not use it in production.

References

  • Kubernetes Handbook — Kubernetes Chinese Handbook/Cloud Native Application Architecture Practice Manual

  • duffqiu/centos-vagrant

  • coredns/deployment

  • Enabling IPVS in Kubernetes 1.8 kube-proxy

  • Vistio – Visualize the Istio Service mesh using Netflix’s Vizceral



Contributions to this public account are warmly welcome; to learn how to contribute, visit https://github.com/servicemesher/trans.