Initializing the Environment

Start from a minimal operating system installation. The officially recommended operating system is CentOS 7.3 or later.

[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)
[root@localhost ~]# uname -r
3.10.0-1127.el7.x86_64

The machine needs at least 4 GB of memory; with less, the cluster may fail to start.
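A quick pre-check of memory, CPU, and disk space can catch an undersized machine before deployment; a minimal sketch using standard Linux tools:

[root@localhost ~]# free -g     # total memory should be 4 GB or more
[root@localhost ~]# nproc       # number of CPU cores
[root@localhost ~]# df -h /     # free space for /tidb-deploy and /tidb-data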

Initialize the environment with the following script:

[root@localhost ~]# vi tidb-init.sh
#!/bin/bash

# Disable the firewall
echo "=========stop firewalld============="
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld

# Disable NetworkManager
echo "=========stop NetworkManager ============="
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl status NetworkManager

# Disable SELinux
echo "=========disable selinux============="
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0
getenforce

# Disable swap
echo "=========close swap============="
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
free -m

# Synchronize time
echo "=========sync time============="
yum install chrony -y
cat >> /etc/chrony.conf << EOF
server ntp.aliyun.com iburst
EOF
systemctl start chronyd
systemctl enable chronyd
chronyc sources

# Configure yum
echo "=========config yum============="
yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix -y
wget -O /etc/yum.repos.d/CentOS-Base.repo  http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo  http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache

[root@localhost ~]# sh tidb-init.sh

Configuring the network:

[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="96350fad-1ffc-4410-a068-5d13244affb7"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.44.134
NETMASK=255.255.255.0
GATEWAY=192.168.44.2
DNS1=192.168.44.2

[root@localhost ~]# /etc/init.d/network restart

Change the host name:

[root@localhost ~]# hostnamectl set-hostname tidbtest01
[root@tidbtest01 ~]# echo "192.168.44.134   tidbtest01" >> /etc/hosts
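A quick check that the new hostname and the /etc/hosts entry resolve as expected:

[root@tidbtest01 ~]# hostname
[root@tidbtest01 ~]# ping -c 1 tidbtest01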

Cluster topology

Minimal TiDB cluster topology:

Instance   Count   IP                                                Configuration
TiKV       3       192.168.44.134, 192.168.44.134, 192.168.44.134    Avoid port and directory conflicts
TiDB       1       192.168.44.134                                    Default port, global directory configuration
PD         1       192.168.44.134                                    Default port, global directory configuration
TiFlash    1       192.168.44.134                                    Default port, global directory configuration
Monitor    1       192.168.44.134                                    Default port, global directory configuration

Deploying the cluster

  1. Download and install TiUP
[root@tidbtest01 ~]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8697k  100 8697k    0     0  4637k      0  0:00:01  0:00:01 --:--:-- 4636k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Set mirror to https://tiup-mirrors.pingcap.com success
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================

[root@tidbtest01 ~]# source /root/.bash_profile
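Before moving on, it is worth confirming that tiup is now on the PATH; printing its version is enough:

[root@tidbtest01 ~]# tiup --version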
  2. Install the cluster component of TiUP
[root@tidbtest01 ~]# tiup cluster
The component `cluster` is not installed; downloading from the repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.3.2-linux-amd64.tar.gz 10.05 MiB / 10.05 MiB 100.00% 9.91 MiB/s
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster
Deploy a TiDB cluster for production

Usage:
  tiup cluster [command]

Available Commands:
  check       Perform preflight checks for the cluster
  deploy      Deploy a cluster for production
  start       Start a TiDB cluster
  stop        Stop a TiDB cluster
  restart     Restart a TiDB cluster
  scale-in    Scale in a TiDB cluster
  scale-out   Scale out a TiDB cluster
  destroy     Destroy a specified cluster
  clean       (EXPERIMENTAL) Cleanup a specified cluster
  upgrade     Upgrade a specified TiDB cluster
  exec        Run shell command on host in the tidb cluster
  display     Display information of a TiDB cluster
  prune       Destroy and remove instances that is in tombstone state
  list        List all clusters
  audit       Show audit log of cluster operation
  import      Import an exist TiDB cluster from TiDB-Ansible
  edit-config Edit TiDB cluster config. Will use editor from environment variable `EDITOR`, default use vi
  reload      Reload a TiDB cluster's config and restart if needed
  patch       Replace the remote package with a specified package and restart the service
  rename      Rename the cluster
  enable      Enable a TiDB cluster automatically at boot
  disable     Disable starting a TiDB cluster automatically at boot
  help        Help about any command

Flags:
  -h, --help                help for tiup
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
  -v, --version             version for tiup
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'

Use "tiup cluster help [command]" for more information about a command.
  3. Increase the connection limit of the sshd service
[root@tidbtest01 ~]# vi /etc/ssh/sshd_config
MaxSessions 20
[root@tidbtest01 ~]# service sshd restart
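If you prefer not to edit the file interactively, the same change can be made with sed; a sketch, assuming the stock CentOS sshd_config still carries the commented-out #MaxSessions line:

[root@tidbtest01 ~]# sed -i 's/^#\?MaxSessions.*/MaxSessions 20/' /etc/ssh/sshd_config    # uncomment and raise MaxSessions
[root@tidbtest01 ~]# systemctl restart sshd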
  4. Create the topology configuration file
[root@tidbtest01 ~]# vi topo.yaml
# Global variables are applied to all deployments and used as the default value of
# the deployments if a specific deployment value is missing.
global:
 user: "tidb"
 ssh_port: 22
 deploy_dir: "/tidb-deploy"
 data_dir: "/tidb-data"

# Monitored variables are applied to all the machines.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115

server_configs:
 tidb:
   log.slow-threshold: 300
 tikv:
   readpool.storage.use-unified-pool: false
   readpool.coprocessor.use-unified-pool: true
 pd:
   replication.enable-placement-rules: true
   replication.location-labels: ["host"]
 tiflash:
   logger.level: "info"

pd_servers:
 - host: 192.168.44.134

tidb_servers:
 - host: 192.168.44.134

tikv_servers:
 - host: 192.168.44.134
   port: 20160
   status_port: 20180
   config:
     server.labels: { host: "logic-host-1" }
 - host: 192.168.44.134
   port: 20161
   status_port: 20181
   config:
     server.labels: { host: "logic-host-2" }
 - host: 192.168.44.134
   port: 20162
   status_port: 20182
   config:
     server.labels: { host: "logic-host-3" }

tiflash_servers:
 - host: 192.168.44.134

monitoring_servers:
 - host: 192.168.44.134

grafana_servers:
 - host: 192.168.44.134
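Before deploying, the topology file can be run through TiUP's preflight check (the check subcommand shown in the command list above); a sketch, with the same user and interactive-password flags used for deploy later on:

[root@tidbtest01 ~]# tiup cluster check ./topo.yaml --user root -p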
  5. View the available versions

Run the tiup list tidb command to view the TiDB versions available for deployment:

[root@tidbtest01 ~]# tiup list tidb
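The version list is long; to narrow it to one release line, the output can simply be piped through grep (v4.0 here is just an example pattern):

[root@tidbtest01 ~]# tiup list tidb | grep v4.0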
  6. Deploy the cluster

The command format is as follows:

tiup cluster deploy <cluster-name> <tidb-version> ./topo.yaml --user root -p
  • The cluster-name parameter sets the name of the cluster
  • The tidb-version parameter sets the TiDB version to deploy
[root@tidbtest01 ~]# tiup cluster deploy tidb-cluster v4.0.10 ./topo.yaml --user root -p
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster deploy tidb-cluster v4.0.10 ./topo.yaml --user root -p
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidb-cluster
Cluster version: v4.0.10
Type        Host            Ports                            OS/Arch       Directories
----        ----            -----                            -------       -----------
pd          192.168.44.134  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv        192.168.44.134  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv        192.168.44.134  20161/20181                      linux/x86_64  /tidb-deploy/tikv-20161,/tidb-data/tikv-20161
tikv        192.168.44.134  20162/20182                      linux/x86_64  /tidb-deploy/tikv-20162,/tidb-data/tikv-20162
tidb        192.168.44.134  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tiflash     192.168.44.134  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus  192.168.44.134  9090                             linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana     192.168.44.134  3000                             linux/x86_64  /tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: y
Input SSH password:
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.10 (linux/amd64) ... Done
  - Download tikv:v4.0.10 (linux/amd64) ... Done
  - Download tidb:v4.0.10 (linux/amd64) ... Done
  - Download tiflash:v4.0.10 (linux/amd64) ... Done
  - Download prometheus:v4.0.10 (linux/amd64) ... Done
  - Download grafana:v4.0.10 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.44.134:22 ... Done
+ Copy files
  - Copy pd -> 192.168.44.134 ... Done
  - Copy tikv -> 192.168.44.134 ... Done
  - Copy tikv -> 192.168.44.134 ... Done
  - Copy tikv -> 192.168.44.134 ... Done
  - Copy tidb -> 192.168.44.134 ... Done
  - Copy tiflash -> 192.168.44.134 ... Done
  - Copy prometheus -> 192.168.44.134 ... Done
  - Copy grafana -> 192.168.44.134 ... Done
  - Copy node_exporter -> 192.168.44.134 ... Done
  - Copy blackbox_exporter -> 192.168.44.134 ... Done
+ Check status
Enabling component pd
        Enabling instance pd 192.168.44.134:2379
        Enable pd 192.168.44.134:2379 success
Enabling component node_exporter
Enabling component blackbox_exporter
Enabling component tikv
        Enabling instance tikv 192.168.44.134:20162
        Enabling instance tikv 192.168.44.134:20160
        Enabling instance tikv 192.168.44.134:20161
        Enable tikv 192.168.44.134:20162 success
        Enable tikv 192.168.44.134:20161 success
        Enable tikv 192.168.44.134:20160 success
Enabling component tidb
        Enabling instance tidb 192.168.44.134:4000
        Enable tidb 192.168.44.134:4000 success
Enabling component tiflash
        Enabling instance tiflash 192.168.44.134:9000
        Enable tiflash 192.168.44.134:9000 success
Enabling component prometheus
        Enabling instance prometheus 192.168.44.134:9090
Enabling component grafana
        Enabling instance grafana 192.168.44.134:3000
        Enable grafana 192.168.44.134:3000 success
Cluster `tidb-cluster` deployed successfully, you can start it with command: `tiup cluster start tidb-cluster`
  7. Start the cluster
[root@tidbtest01 ~]# tiup cluster start tidb-cluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster start tidb-cluster
Starting cluster tidb-cluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [Parallel] - UserSSH: user=tidb, host=192.168.44.134
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance pd 192.168.44.134:2379
        Start pd 192.168.44.134:2379 success
Starting component node_exporter
        Starting instance 192.168.44.134
        Start 192.168.44.134 success
Starting component blackbox_exporter
        Starting instance 192.168.44.134
        Start 192.168.44.134 success
Starting component tikv
        Starting instance tikv 192.168.44.134:20162
        Starting instance tikv 192.168.44.134:20160
        Starting instance tikv 192.168.44.134:20161
        Start tikv 192.168.44.134:20162 success
        Start tikv 192.168.44.134:20161 success
        Start tikv 192.168.44.134:20160 success
Starting component tidb
        Starting instance tidb 192.168.44.134:4000
        Start tidb 192.168.44.134:4000 success
Starting component tiflash
        Starting instance tiflash 192.168.44.134:9000
        Start tiflash 192.168.44.134:9000 success
Starting component prometheus
        Starting instance prometheus 192.168.44.134:9090
        Start prometheus 192.168.44.134:9090 success
Starting component grafana
        Starting instance grafana 192.168.44.134:3000
        Start grafana 192.168.44.134:3000 success
+ [ Serial ] - UpdateTopology: cluster=tidb-cluster
Started cluster `tidb-cluster` successfully

Accessing the cluster

  1. Install the MySQL client
[root@tidbtest01 ~]# yum -y install mysql
  2. Connect to TiDB with an empty password
[root@tidbtest01 ~]# mysql -h 192.168.44.134 -P 4000 -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.25-TiDB-v4.0.10 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
5 rows in set (0.00 sec)

MySQL [(none)]> select tidb_version()\G
*************************** 1. row ***************************
tidb_version(): Release Version: v4.0.10
Edition: Community
Git Commit Hash: dbade8cda4c5a329037746e171449e0a1dfdb8b3
Git Branch: heads/refs/tags/v4.0.10
UTC Build Time: 2021-01-15 02:59:27
GoVersion: go1.13
Race Enabled: false
TiKV Min Version: v3.0.0-60965b006877ca7234adaced7890d7b029ed1306
Check Table Before Drop: false
1 row in set (0.00 sec)
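Since the root account starts with no password, it is prudent to set one right away; a minimal sketch using the same client, where 'YourNewPassword' is a placeholder you should replace:

[root@tidbtest01 ~]# mysql -h 192.168.44.134 -P 4000 -u root -e "SET PASSWORD FOR 'root'@'%' = 'YourNewPassword'"

After this, connect with the -p flag and enter the new password when prompted.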
  3. View the list of deployed clusters
[root@tidbtest01 ~]# tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster list
Name          User  Version  Path                                               PrivateKey
----          ----  -------  ----                                               ----------
tidb-cluster  tidb  v4.0.10  /root/.tiup/storage/cluster/clusters/tidb-cluster  /root/.tiup/storage/cluster/clusters/tidb-cluster/ssh/id_rsa
  4. View the topology and status of the cluster
[root@tidbtest01 ~]# tiup cluster display tidb-cluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster display tidb-cluster
Cluster type:    tidb
Cluster name:    tidb-cluster
Cluster version: v4.0.10
SSH type:        builtin
Dashboard URL:   http://192.168.44.134:2379/dashboard
ID                    Role        Host            Ports                            OS/Arch       Status   Data Dir                    Deploy Dir
--                    ----        ----            -----                            -------       ------   --------                    ----------
192.168.44.134:3000   grafana     192.168.44.134  3000                             linux/x86_64  Up       -                           /tidb-deploy/grafana-3000
192.168.44.134:2379   pd          192.168.44.134  2379/2380                        linux/x86_64  Up|L|UI  /tidb-data/pd-2379          /tidb-deploy/pd-2379
192.168.44.134:9090   prometheus  192.168.44.134  9090                             linux/x86_64  Up       /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
192.168.44.134:4000   tidb        192.168.44.134  4000/10080                       linux/x86_64  Up       -                           /tidb-deploy/tidb-4000
192.168.44.134:9000   tiflash     192.168.44.134  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
192.168.44.134:20160  tikv        192.168.44.134  20160/20180                      linux/x86_64  Up       /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
192.168.44.134:20161  tikv        192.168.44.134  20161/20181                      linux/x86_64  Up       /tidb-data/tikv-20161       /tidb-deploy/tikv-20161
192.168.44.134:20162  tikv        192.168.44.134  20162/20182                      linux/x86_64  Up       /tidb-data/tikv-20162       /tidb-deploy/tikv-20162
Total nodes: 8
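When only one component's status is of interest, the display output can simply be filtered, for example to see just the TiKV instances:

[root@tidbtest01 ~]# tiup cluster display tidb-cluster | grep tikv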
  5. Access the Dashboard

Open the Dashboard URL from the output above, http://192.168.44.134:2379/dashboard, to reach the cluster console. The default user name is root and the password is empty.

  6. Access TiDB's Grafana monitoring

Access the Grafana monitoring page at the Grafana host and port shown above (http://192.168.44.134:3000). The default user name and password are both admin.