My first blog post on TiDB covers deploying a standalone TiDB cluster.

Blogging philosophy:

  1. Don't record every step; the official documentation is the primary material.
  2. While working through the official docs, note the points where you ran into problems.
  3. Supplement with those problem points and the overall process.

Prepare the environment

  • Please refer to pingcap.com/docs-cn/sta…

  • Main concerns: firewall, passwordless SSH login, and disabling swap (a sketch of these steps follows this list).

  • Note the following when preparing the environment: the tidb user must be able to log in to the current host without a password:

ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@localhost
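A minimal sketch of those preparation steps, assuming a CentOS-style host with systemd and firewalld (illustrative only; the official document above is authoritative):

# as root on the target host
useradd tidb && passwd tidb               # create the deployment user
swapoff -a                                # turn swap off now
sed -i '/ swap / s/^/#/' /etc/fstab       # keep swap off after reboot
systemctl stop firewalld                  # or open the required ports instead
systemctl disable firewalld
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa  # create a key pair if you have none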

Start the installation

  • Please refer to pingcap.com/docs-cn/sta…

    Follow Solution 3: use TiUP Cluster to simulate the production deployment steps on a single machine.

Download and install TiUP

curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
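The install script adds tiup to your PATH through your shell profile; reload the profile before continuing (which file it edits depends on your shell, so adjust as needed):

source ~/.bash_profile   # or ~/.profile
tiup --version           # confirm tiup is available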

Install the Cluster component of TiUP


tiup cluster
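The first run of tiup cluster downloads the component and prints its help text. If you installed it some time ago, update TiUP and the component first:

tiup update --self
tiup update cluster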

Configuration template; replace 10.0.1.1 with your own IP address and save the file as topology.yaml:

# Global variables are applied to all deployments and used as the default value of
# the deployments if a specific deployment value is missing.
global:
 user: "tidb"
 ssh_port: 22
 deploy_dir: "/tidb-deploy"
 data_dir: "/tidb-data"
    
# Monitored variables are applied to all the machines.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115
    
server_configs:
 tidb:
   log.slow-threshold: 300
 tikv:
   readpool.storage.use-unified-pool: false
   readpool.coprocessor.use-unified-pool: true
 pd:
   replication.enable-placement-rules: true
 tiflash:
   logger.level: "info"Tidb_servers: -host: 10.0.1.1 tikv_Servers: -host: 10.0.1.1 port: 20160 status_port: 20180-host: 10.0.1.1 port: 20161 status_port: 20181 -host: 10.0.1.1 port: 20162 status_port: 20182 tiflash_Servers: -host: 10.0.1.1 monitoring_Servers: -host: 10.0.1.1 grafana_Servers: -host: 10.0.1.1Copy the code

Run the cluster deployment command



[root@localhost tidb]# tiup cluster deploy tidb-test v4.0.0 ./topology.yaml -i ~/.ssh/id_rsa --user tidb
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.3/tiup-cluster deploy tidb-test v4.0.0 ./topology.yaml -i /root/.ssh/id_rsa --user tidb
Please confirm your topology:
TiDB Cluster: tidb-test
TiDB Version: v4.0.0
Type        Host           Ports                            OS/Arch       Directories
----        ----           -----                            -------       -----------
pd          192.168.56.14  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv        192.168.56.14  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv        192.168.56.14  20161/20181                      linux/x86_64  /tidb-deploy/tikv-20161,/tidb-data/tikv-20161
tikv        192.168.56.14  20162/20182                      linux/x86_64  /tidb-deploy/tikv-20162,/tidb-data/tikv-20162
tidb        192.168.56.14  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tiflash     192.168.56.14  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus  192.168.56.14  9090                             linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana     192.168.56.14  3000                             linux/x86_64  /tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.0 (linux/amd64) ... Done
  - Download tikv:v4.0.0 (linux/amd64) ... Done
  - Download tidb:v4.0.0 (linux/amd64) ... Done
  - Download tiflash:v4.0.0 (linux/amd64) ... Done
  - Download prometheus:v4.0.0 (linux/amd64) ... Done
  - Download grafana:v4.0.0 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.56.14:22 ... Done
+ Copy files
  - Copy pd -> 192.168.56.14 ... Done
  - Copy tikv -> 192.168.56.14 ... Done
  - Copy tikv -> 192.168.56.14 ... Done
  - Copy tikv -> 192.168.56.14 ... Done
  - Copy tidb -> 192.168.56.14 ... Done
  - Copy tiflash -> 192.168.56.14 ... Done
  - Copy prometheus -> 192.168.56.14 ... Done
  - Copy grafana -> 192.168.56.14 ... Done
  - Copy node_exporter -> 192.168.56.14 ... Done
  - Copy blackbox_exporter -> 192.168.56.14 ... Done
+ Check status
Deployed cluster `tidb-test` successfully, you can start the cluster via `tiup cluster start tidb-test`

Start the cluster

[root@localhost tidb]# tiup cluster start tidb-test
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.3/tiup-cluster start tidb-test
Starting cluster tidb-test...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.14
+ [ Serial ] - ClusterOperate: operation=StartOperation, options={Roles:[] Nodes:[] Force:false SSHTimeout:5 OptTimeout:60 APITimeout:300}
Starting component pd
        Starting instance pd 192.168.56.14:2379
        Start pd 192.168.56.14:2379 success
Starting component node_exporter
        Starting instance 192.168.56.14
        Start 192.168.56.14 success
Starting component blackbox_exporter
        Starting instance 192.168.56.14
        Start 192.168.56.14 success
Starting component tikv
        Starting instance tikv 192.168.56.14:20162
        Starting instance tikv 192.168.56.14:20160
        Starting instance tikv 192.168.56.14:20161
        Start tikv 192.168.56.14:20160 success
        Start tikv 192.168.56.14:20162 success
        Start tikv 192.168.56.14:20161 success
Starting component tidb
        Starting instance tidb 192.168.56.14:4000
        Start tidb 192.168.56.14:4000 success
Starting component tiflash
        Starting instance tiflash 192.168.56.14:9000
        Start tiflash 192.168.56.14:9000 success
Starting component prometheus
        Starting instance prometheus 192.168.56.14:9090
        Start prometheus 192.168.56.14:9090 success
Starting component grafana
        Starting instance grafana 192.168.56.14:3000
        Start grafana 192.168.56.14:3000 success
Checking service state of pd
        192.168.56.14   Active: active (running) since Sat 2020-06-06 05:51:30 UTC; 25s ago
Checking service state of tikv
        192.168.56.14   Active: active (running) since Sat 2020-06-06 05:51:32 UTC; 23s ago
        192.168.56.14   Active: active (running) since Sat 2020-06-06 05:51:32 UTC; 23s ago
        192.168.56.14   Active: active (running) since Sat 2020-06-06 05:51:32 UTC; 24s ago
Checking service state of tidb
        192.168.56.14   Active: active (running) since Sat 2020-06-06 05:51:36 UTC; 19s ago
Checking service state of tiflash
        192.168.56.14   Active: active (running) since Sat 2020-06-06 05:51:43 UTC; 13s ago
Checking service state of prometheus
        192.168.56.14   Active: active (running) since Sat 2020-06-06 05:51:47 UTC; 9s ago
Checking service state of grafana
        192.168.56.14   Active: active (running) since Sat 2020-06-06 05:51:48 UTC; 8s ago
+ [ Serial ] - UpdateTopology: cluster=tidb-test
Started cluster `tidb-test` successfully

Check the status

[root@localhost tidb]# tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.3/tiup-cluster list
Name       User  Version  Path                                            PrivateKey
----       ----  -------  ----                                            ----------
tidb-test  tidb  v4.0.0   /root/.tiup/storage/cluster/clusters/tidb-test  /root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa

[root@localhost tidb]# tiup cluster display tidb-test
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.3/tiup-cluster display tidb-test
TiDB Cluster: tidb-test
TiDB Version: v4.0.0
ID                   Role        Host           Ports                            OS/Arch       Status        Data Dir                    Deploy Dir
--                   ----        ----           -----                            -------       ------        --------                    ----------
192.168.56.14:3000   grafana     192.168.56.14  3000                             linux/x86_64  Up            -                           /tidb-deploy/grafana-3000
192.168.56.14:2379   pd          192.168.56.14  2379/2380                        linux/x86_64  Healthy|L|UI  /tidb-data/pd-2379          /tidb-deploy/pd-2379
192.168.56.14:9090   prometheus  192.168.56.14  9090                             linux/x86_64  Up            /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
192.168.56.14:4000   tidb        192.168.56.14  4000/10080                       linux/x86_64  Up            -                           /tidb-deploy/tidb-4000
192.168.56.14:9000   tiflash     192.168.56.14  9000/8123/3930/20170/20292/8234  linux/x86_64  Up            /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
192.168.56.14:20160  tikv        192.168.56.14  20160/20180                      linux/x86_64  Up            /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
192.168.56.14:20161  tikv        192.168.56.14  20161/20181                      linux/x86_64  Up            /tidb-data/tikv-20161       /tidb-deploy/tikv-20161
192.168.56.14:20162  tikv        192.168.56.14  20162/20182                      linux/x86_64  Up            /tidb-data/tikv-20162       /tidb-deploy/tikv-20162
[root@localhost tidb]#


Run a command-line test

[root@localhost tidb]# mysql -h 127.0.0.1 -P 4000 -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 86
Server version: 5.7.25-TiDB-v4.0.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 
mysql> 
mysql> 
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
5 rows in set (0.00 sec)

mysql>
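As a quick smoke test, you can create a table, insert a row, and read it back; the table name t1 below is just an illustrative choice:

mysql> create table test.t1 (id int primary key, name varchar(32));
mysql> insert into test.t1 values (1, 'tidb');
mysql> select * from test.t1;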

The web interfaces

Grafana monitoring of TiDB
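Grafana listens on port 3000 of the host in grafana_servers, i.e. http://192.168.56.14:3000 in this setup; a TiUP-deployed Grafana typically accepts the default admin/admin login.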

The TiDB Dashboard
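In v4.0 the TiDB Dashboard is built into PD, so with this topology it should be reachable at http://192.168.56.14:2379/dashboard.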