Overview

For a two-node cluster there is very little to configure on the compute node, so it is easiest to set up the compute node first and then switch to the Master node for the longer part of the work. I opened two pay-as-you-go ECS instances in the Aliyun US (Silicon Valley) region:

  • Master: 2 CPU, 16GB RAM, CentOS 7.4 64-bit
  • Node1: 1 CPU, 8GB RAM, CentOS 7.4 64-bit

Custom images can be copied across regions, but in the end the whole process only ran through thanks to the US network; reaching overseas mirrors from a domestic Chinese network stalled in all sorts of ways.

Configuration

The configuration of the compute node and the control node differs only slightly, as follows.

Compute node

# Install dependencies
yum install -y docker wget git net-tools bind-utils iptables-services bridge-utils bash-completion
# Enable and start Docker
systemctl enable docker
systemctl start docker
# Enable NetworkManager
systemctl enable NetworkManager
# Stop and disable firewalld
systemctl stop firewalld
systemctl disable firewalld
# Ansible conflicts with urllib3:
#   Error unpacking rpm package python-urllib3-1.10.2-3.el7.noarch
pip uninstall urllib3
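If you want to double-check the services before moving on, a quick sanity check like the following should do (standard systemctl queries, my addition rather than part of the original write-up):

# Docker should report "active", firewalld should report "disabled"
systemctl is-active docker
systemctl is-enabled firewalld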

Master control node

Echo "172.20.62.195 master.example.com" >> /etc/hosts echo "172.20.62.196 node1.example.com" >> /etc/hosts # install yum install -y docker wget git nettools bind-utils Iptables -services bridge-utils bash-completion # : enable Docker; Systemctl start docker # enable systemctl enable NetworkManager; Systemctl stop firewalld; systemctl stop firewalld Systemctl diable Firewalld # Ansible is in conflict with urllib3 Error unpacking RPM package python-urllib3-1.10.2-3. El7. Noarch PIP uninstall urllib3 # Yum -y install ETCD systemctl enable ETCD; Systemctl start etcd # download EPEL yum -y install https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-10.noarch.rpm # enable=0 sed -i -e "S / ^ enabled = 1 / enabled = 0 /"/etc/yum repos. D/epel. '# install yum - y - enablerepo = epel install ansible pyOpenSSL # to generate the secret key Ssh-keygen-f /root/.ssh/ id_rsa-n "# Copy the secret key to all nodes in the cluster, enabling password-free access for host in master.example.com; do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; Done # Download OpenShift-Ansible wget https://github.com/openshift/openshift-ansible/archive/openshift-ansible-3.7.0-0.126.0.tar.gz tar ZXVF Openshift-ansible -3.7.0-0.126.0.tar.gz # backup cp /etc/ansible/hosts /etc/ansible/hosts.bak # configure /etc/ansible/hosts # The contents of the /etc/ansible/hosts file are changed to the following code block
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

openshift_deployment_type=origin
openshift_release=3.6.0

# Skip the preflight checks if the hosts fall short of the official requirements:
# a Master needs 2 CPU cores, 16GB RAM and a 40GB disk;
# a Node needs 1 CPU core, 8GB RAM and a 20GB disk
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
master.example.com

# host group for nodes, includes region info
[nodes]
master.example.com
node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'east'}"

[etcd]
master.example.com
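Before kicking off the installation it is worth confirming that Ansible can actually reach both machines over the password-free SSH set up above. A quick connectivity test using Ansible's built-in ping module (my own addition; the inventory sits at Ansible's default path, /etc/ansible/hosts, so no -i flag is needed):

# Every host in the OSEv3 group should answer with "pong"
ansible OSEv3 -m ping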

Kick off the install and wait for the result

ansible-playbook ~/openshift-ansible-openshift-ansible-3.7.0-0.126.0/playbooks/byo/config.yml

Afterwards

If anything goes wrong, copy the error message into Google (Baidu was no help!). If all is well, you can inspect the cluster with the commands below.
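When a run does fail, rerunning the playbook with Ansible's verbose flag usually narrows down the failing task (a generic Ansible option, not something from the original post):

# -vvv prints each task's raw output, which makes the failure point easier to spot
ansible-playbook -vvv ~/openshift-ansible-openshift-ansible-3.7.0-0.126.0/playbooks/byo/config.yml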

View the Node List

oc get nodes

Who am I

Who is the current logged-in user?

oc whoami

Displays a list of cluster resources

oc get all -o wide

Create a user

htpasswd -b /etc/origin/master/htpasswd dev dev
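To double-check that the entry landed, you can grep the htpasswd file (my addition; the path comes from the identity-provider setting in the inventory above):

# The dev user should now have a hashed entry
grep dev /etc/origin/master/htpasswd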

Log in as cluster administrator

oc login -u system:admin

Add the cluster administrator role to the dev account

oc adm policy add-cluster-role-to-user cluster-admin dev
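As a quick check (not in the original post), log in as dev and make sure the new role actually took effect:

# Log in with the dev account created above
oc login -u dev -p dev
# With cluster-admin, this should now list every node instead of failing with a permissions error
oc get nodes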

Punching a hole (SSH tunnel)

master.example.com and node1.example.com resolve only through the local /etc/hosts file and are not reachable from the public network. Proper public access would require real DNS.

On your own machine, add the following line to /etc/hosts:

127.0.0.1 master.example.com

Then run the following command to punch a hole through to the remote Master:

ssh -L 127.0.0.1:8443:master.example.com:8443 root@47.88.54.94
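If you would rather not keep an interactive shell open just for the tunnel, the standard -N (no remote command) and -f (run in background) flags of OpenSSH work here too (my addition):

# Forward the port in the background without opening a shell
ssh -f -N -L 127.0.0.1:8443:master.example.com:8443 root@47.88.54.94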

47.88.54.94 is the real IP, though who will be using it after me, I have no idea!!

Open in a browser: https://master.example.com:8443
