1 An overview of etcd

Background: there has been some confusion recently about what etcd does in Kubernetes (k8s) applications. Studying etcd on its own can help you understand some Kubernetes features more deeply.

1.1 Introduction to etcd

etcd is an open source project launched by the CoreOS team in June 2013 with the goal of building a highly available distributed key-value database. etcd uses the Raft protocol as its consensus algorithm and is implemented in Go.

1.2 Development History

1.3 Features of etcd

  • Simple: easy to install and configure, and easy to use through the HTTP API it provides
  • Secure: supports SSL certificate verification
  • Fast: according to the official benchmark, a single instance supports 2k+ read operations per second
  • Reliable: uses the Raft algorithm to achieve availability and strong consistency of data in a distributed system

1.4 Concepts and Terms

  • Raft: the algorithm etcd uses to guarantee strong consistency in a distributed system.

  • Node: an instance of the Raft state machine.

  • Member: an etcd instance. It manages one Node and serves client requests.

  • Cluster: an etcd cluster consisting of multiple Members that can work together.

  • Peer: the name for another Member in the same etcd cluster.

  • Client: a client that sends HTTP requests to the etcd cluster.

  • WAL: the write-ahead log format etcd uses for persistent storage.

  • Snapshot: a snapshot of etcd's data state, taken to keep the number of WAL files from growing too large.

  • Proxy: an etcd mode that provides a reverse proxy service for an etcd cluster.

  • Leader: the node elected through campaigning in the Raft algorithm; it processes all data submissions.

  • Follower: a node that loses the election acts as a slave node in Raft, providing the strong consistency guarantee for the algorithm.

  • Candidate: when a Follower has not received the Leader's heartbeat for a certain period, it becomes a Candidate and starts a new campaign.

  • Term: the period from when a node becomes Leader until the next election starts.

  • Index: the data item number. Raft uses Term and Index to locate a piece of data.

1.5 Data read and write sequence

To ensure strong consistency, all data in an etcd cluster flows in one direction, from the Leader (the primary node) to the Followers; that is, every Follower's data must be consistent with the Leader's, and inconsistent data is overwritten.

Users can read from and write to any node in the etcd cluster:

  • Read: since the data on all nodes in the cluster is strongly consistent, a read can be served by any node in the cluster
  • Write: the etcd cluster has a Leader; a write sent to the Leader is applied directly and then distributed by the Leader to all Followers, while a write sent to a Follower is forwarded to the Leader, which then distributes it to all Followers (see the sketch after this list)
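A minimal sketch of this behavior with etcdctl (the endpoints are hypothetical; any reachable member works):

# Write through one member; if it is a follower, the write is forwarded to the leader
$ etcdctl --endpoints=http://172.16.0.14:2379 set /demo "hello"
hello

# Read the same key back through a different member
$ etcdctl --endpoints=http://172.16.0.8:2379 get /demo
hello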

1.6 Leader election

The Raft algorithm uses a randomized timer to kick off the Leader election. Taking a three-node cluster as an example: the node whose timer expires first sends a request to the other two nodes asking to become Leader; once the other nodes respond with their votes, that node is elected Leader.

After becoming Leader, the node sends notifications to the other nodes at a fixed interval to confirm that it is still the Leader. If the Followers stop receiving these notifications, for example because the Leader is down or disconnected, the remaining nodes repeat the election process and elect a new Leader.

1.7 Determining whether data has been written

etcd considers a write successful once the write request has been processed by the Leader node and distributed to a majority of nodes. How large is a majority? Assuming the total number of nodes is N, Quorum = N/2 + 1. On the question of how many nodes an etcd cluster should have, the table below shows the Quorum corresponding to each total number of cluster nodes (Instances); Instances minus Quorum is the number of fault-tolerant nodes (nodes allowed to fail) in the cluster.
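Working this out with Quorum = N/2 + 1 (integer division):

Instances  Quorum  Fault tolerance
1          1       0
2          2       0
3          2       1
4          3       1
5          3       2
6          4       2
7          4       3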

Therefore, the recommended minimum number of nodes in a cluster is 3: with 1 or 2 nodes the fault tolerance is 0, so as soon as one node goes down the whole cluster stops working properly.

2 etcd architecture and analysis

2.1 Architecture diagram

2.2 Architecture Analysis

From the etcd architecture diagram, we can see that etcd is divided into four main parts.

  • HTTP Server: handles API requests sent by users as well as synchronization and heartbeat requests from other etcd nodes.
  • Store: handles the transactions for the various features etcd supports, including data indexing, node state changes, monitoring and feedback, and event processing and execution; it is the concrete implementation of most of the API features etcd exposes to users.
  • Raft: the implementation of the Raft strong-consistency algorithm; it is the core of etcd.
  • WAL: Write-Ahead Log, etcd's storage format for data. Besides holding the state of all data and the index of the node in memory, etcd persists data through the WAL: all data is logged before it is committed.
    • Snapshot: a state snapshot, taken to keep the log from growing too large.
    • Entry: the stored log content.

In general, a request sent by a user is forwarded by the HTTP Server to the Store for transaction processing. If a modification is involved, it is handed to the Raft module for state change and log recording, then synchronized to the other etcd nodes to confirm the commit; finally, the data is committed and synchronized once more.
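To make the HTTP Server component concrete: the key-value operations shown later through etcdctl can also be issued directly against the v2 HTTP API. A minimal sketch, assuming a local node on the default client port (response bodies abbreviated):

# Set a key via the v2 keys API
$ curl -s http://127.0.0.1:2379/v2/keys/msg -XPUT -d value="hello etcd"
{"action":"set","node":{"key":"/msg","value":"hello etcd",...}}

# Read it back
$ curl -s http://127.0.0.1:2379/v2/keys/msg
{"action":"get","node":{"key":"/msg","value":"hello etcd",...}}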

3 Application Scenarios

3.1 Service Registration and Discovery

etcd can be used for service registration and discovery.

  • Front-end and back-end service registration and discovery

Back-end services and middleware register themselves in etcd; the front end and the middleware can then easily look up the relevant servers in etcd, and the servers call each other based on the bindings registered there.

  • Registration and discovery of multiple groups of back-end servers

Multiple identical replicas of a stateless app can be registered in etcd. The front end obtains the back end's IP-address-and-port group from etcd through HAProxy and then forwards requests; this provides failover and hides the back-end ports and the multiple app instances from the caller (see the sketch below).
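A minimal sketch of TTL-based registration (the /services prefix and the address are made up for illustration): each instance keeps refreshing a key that expires if the instance dies, and a consumer or HAProxy configuration generator lists the prefix to discover live instances.

# Each app instance registers itself with a TTL and refreshes it periodically
$ etcdctl set /services/app/172.16.0.21:8080 up --ttl 10
up

# Discover the currently live instances
$ etcdctl ls /services/app
/services/app/172.16.0.21:8080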

3.2 Message Publishing and Subscription

etcd can act as message middleware: a producer registers a topic in etcd and sends messages to it, and a consumer subscribes to the topic in etcd to receive the messages the producer publishes.
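A sketch built on the watch mechanism described in section 5.5 (the /topics prefix is hypothetical): the consumer watches a topic directory while the producer writes messages into it.

# Consumer: subscribe to the topic and print every new message
$ etcdctl watch --forever --recursive /topics/orders

# Producer: publish a message (from another shell)
$ etcdctl set /topics/orders/1001 "order 1001 created"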

3.3 Load Balancing

Multiple groups of providers of the same back-end service can register themselves in etcd, and etcd monitors and health-checks the registered services. A service request first obtains the real IP:port of an available provider from etcd and then sends the request to one of those providers; in this way etcd plays the load-balancing role.
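Continuing the hypothetical /services/app registry from section 3.1, a simple client-side balancing step is to list the provider group and pick one entry, for example at random:

# Pick a random live provider to spread requests across the group
$ etcdctl ls /services/app | shuf -n 1
/services/app/172.16.0.21:8080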

3.4 Distributed Notification and Coordination

  • When an etcd watch detects that a service has gone missing, it notifies the service owner to check (see the exec-watch sketch after this list)
  • A controller writes a start-service command into etcd, and etcd notifies the service to perform the corresponding operation
  • When the service completes its work, it updates its status in etcd, and etcd notifies the user
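One way to wire up such notifications is the exec-watch command shown in section 5.5 (a sketch; the key and the command are hypothetical):

# Run a check whenever the watched status key changes
$ etcdctl exec-watch /deploy/app/status -- sh -c 'echo "status changed, running check"'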

3.5 Distributed Lock

When multiple nodes compete for a resource, etcd acts as the arbiter and successfully grants the lock to exactly one node in the distributed cluster.
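The v3 etcdctl exposes this directly as a lock command; a sketch (the lock name is arbitrary, and the printed lock-key suffix is illustrative):

# Session 1: acquire the lock; it is held until this process exits
$ ETCDCTL_API=3 etcdctl lock mylock
mylock/694d77aa9e6740dc

# Session 2: the same command blocks until session 1 releases the lock
$ ETCDCTL_API=3 etcdctl lock mylock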

3.6 Distributed Queue

For a distributed queue, etcd creates a corresponding ordered node under a queue key for each enqueued item, and the corresponding competitors can locate their entries in etcd through the different queues (see the sketch below).
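A sketch using the in-order keys shown in section 5 (the /queue key and job values are illustrative):

# Enqueue: create automatically ordered keys under /queue
$ etcdctl mk --in-order /queue job1
job1
$ etcdctl mk --in-order /queue job2
job2

# Dequeue side: list the queue in creation order and process the head
$ etcdctl ls --sort /queue
/queue/00000000000000000021
/queue/00000000000000000022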

3.7 Cluster Monitoring and Leader Election

Based on the Raft algorithm, etcd can elect a leader from among multiple nodes.
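Applications can also run their own elections on top of etcd; the v3 etcdctl exposes this as an elect command (a sketch; the election name, proposal, and printed key suffix are illustrative):

# Campaign with the proposal "node-1"; leadership is held until this process exits
$ ETCDCTL_API=3 etcdctl elect my-election node-1
my-election/694d77aa9e6740dd
node-1

# Observe the current leader from another shell
$ ETCDCTL_API=3 etcdctl elect -l my-election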

4 Installation and Deployment

4.1 Single-Node Deployment

You can install etcd from a binary or source download, but then you have to write the configuration file and the service startup file yourself, so installing with yum is recommended:

hostnamectl set-hostname etcd-1
wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -ivh epel-release-latest-7.noarch.rpm
#The etcd version in the yum repository is 3.3.11; install from the binary release if you need the latest etcd
yum -y install etcd
systemctl enable etcd

You can inspect the effective configuration file of the yum-installed etcd and adjust the data storage directory, the listen URLs, and the etcd name to your needs. With the default configuration:

  • etcd stores its data in the default.etcd/ directory under the current path by default
  • It communicates with other cluster nodes on http://localhost:2380
  • It serves the HTTP API for clients on http://localhost:2379
  • The default node name is default
  • The heartbeat interval is 100 ms (this setting is explained later)
  • The election timeout is 1000 ms (this setting is explained later)
  • The snapshot count is 10000 (this setting is explained later)
  • A UUID is generated for the cluster and for each node
  • On startup, Raft runs and a Leader is elected
[root@VM_0_8_centos tmp]# grep -Ev "^#|^$" /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
ETCD_NAME="default"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
[root@VM_0_8_centos tmp]# systemctl status etcd

4.2 Cluster Deployment

For cluster deployments, an odd number of nodes is recommended to achieve the best fault tolerance.

4.2.1 Host information

Hostname    System      IP address    Deployed component
etcd-0-8    CentOS 7.3  172.16.0.8    etcd
etcd-0-17   CentOS 7.3  172.16.0.17   etcd
etcd-0-14   CentOS 7.3  172.16.0.14   etcd

4.2.2 Host configuration

In this example, the etcd cluster is deployed on three nodes; modify /etc/hosts on each node:

cat >> /etc/hosts << EOF
172.16.0.8 etcd-0-8
172.16.0.14 etcd-0-14
172.16.0.17 etcd-0-17
EOF

4.2.3 etcd installation

Install etcd on all three nodes:

wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -ivh epel-release-latest-7.noarch.rpm
yum -y install etcd
systemctl enable etcd
mkdir -p /data/app/etcd/
chown etcd:etcd /data/app/etcd/

4.2.4 etcd configuration

  • The default etcd configuration file:
[root@etcd-0-8 app]# cat /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/data/app/etcd/"                             # data directory
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://172.16.0.8:2380"              # address for communicating with the other nodes
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://172.16.0.8:2379"     # address for serving clients
#ETCD_MAX_SNAPSHOTS="5"                                     # maximum number of snapshots etcd keeps
#ETCD_MAX_WALS="5"                                          # maximum number of WAL files etcd keeps
ETCD_NAME="etcd-0-8"                                        # etcd node name, unique within the cluster
#ETCD_SNAPSHOT_COUNT="100000"                               # number of committed transactions that triggers a snapshot to be saved to disk
#ETCD_HEARTBEAT_INTERVAL="100"                              # how often the leader sends heartbeats to followers; default 100 ms
#ETCD_ELECTION_TIMEOUT="1000"                               # if a follower receives no heartbeat within this interval, a new election is triggered; default 1000 ms
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.0.8:2380"   # peer address announced to the other nodes in the cluster
ETCD_ADVERTISE_CLIENT_URLS="http://127.0.0.1:2379,http://172.16.0.8:2379"  # client address announced to the other nodes in the cluster
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd-0-8=http://172.16.0.8:2380,etcd-0-17=http://172.16.0.17:2380,etcd-0-14=http://172.16.0.14:2380"   # all members of the cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-token"                     # cluster token; if the cluster is recreated, new cluster and node UUIDs are generated even if the configuration is the same as before
ETCD_INITIAL_CLUSTER_STATE="new"                            # "new" when creating a new cluster; "existing" if the cluster already exists
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"

etcd-0-8 configuration:

[root@etcd-server ~]# hostnamectl set-hostname etcd-0-8
[root@etcd-0-8 ~]# egrep "^#|^$" /etc/etcd/etcd.conf -v
ETCD_DATA_DIR="/data/app/etcd/"
ETCD_LISTEN_PEER_URLS="http://172.16.0.8:2380"
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://172.16.0.8:2379"
ETCD_NAME="etcd-0-8"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.0.8:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://127.0.0.1:2379,http://172.16.0.8:2379"
ETCD_INITIAL_CLUSTER="etcd-0-8=http://172.16.0.8:2380,etcd-0-17=http://172.16.0.17:2380,etcd-0-14=http://172.16.0.14:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-token"
ETCD_INITIAL_CLUSTER_STATE="new"

etcd-0-14 configuration:

[root@etcd-server ~]# hostnamectl set-hostname etcd-0-14
[root@etcd-server ~]# mkdir -p /data/app/etcd/
[root@etcd-0-14 ~]# egrep "^#|^$" /etc/etcd/etcd.conf -v
ETCD_DATA_DIR="/data/app/etcd/"
ETCD_LISTEN_PEER_URLS="http://172.16.0.14:2380"
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://172.16.0.14:2379"
ETCD_NAME="etcd-0-14"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.0.14:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://127.0.0.1:2379,http://172.16.0.14:2379"
ETCD_INITIAL_CLUSTER="etcd-0-8=http://172.16.0.8:2380,etcd-0-17=http://172.16.0.17:2380,etcd-0-14=http://172.16.0.14:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-token"
ETCD_INITIAL_CLUSTER_STATE="new"
  • etcd-0-17 configuration:
[root@etcd-server ~]# hostnamectl set-hostname etcd-0-17
[root@etcd-server ~]# mkdir -p /data/app/etcd/
[root@etcd-0-17 ~]# egrep "^#|^$" /etc/etcd/etcd.conf -v
ETCD_DATA_DIR="/data/app/etcd/"
ETCD_LISTEN_PEER_URLS="http://172.16.0.17:2380"
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://172.16.0.17:2379"
ETCD_NAME="etcd-0-17"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.0.17:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://127.0.0.1:2379,http://172.16.0.17:2379"
ETCD_INITIAL_CLUSTER="etcd-0-8=http://172.16.0.8:2380,etcd-0-17=http://172.16.0.17:2380,etcd-0-14=http://172.16.0.14:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-token"
ETCD_INITIAL_CLUSTER_STATE="new"
  • After the configuration is done, start the service on each node:
systemctl start etcd

4.2.5 Checking cluster status

  • Check the etcd service status
[root@etcd-0-8 default.etcd]# systemctl status etcd
● etcd.service
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-12-03 15:55:28 CST; 8s ago
 Main PID: 24510 (etcd)
   CGroup: /system.slice/etcd.service
           └─24510 /usr/bin/etcd --name=etcd-0-8 --data-dir=/data/app/etcd/ --listen-client-urls=http://172.16.0.8:2379
Dec 03 15:55:28 etcd-0-8 etcd[24510]: set the initial cluster version to 3.0
Dec 03 15:55:28 etcd-0-8 etcd[24510]: enabled capabilities for version 3.0
Dec 03 15:55:30 etcd-0-8 etcd[24510]: peer 56e0b6dad4c53d42 became active
Dec 03 15:55:30 etcd-0-8 etcd[24510]: established a TCP streaming connection with peer 56e0b6dad4c53d42 (stream Message reader)
Dec 03 15:55:30 etcd-0-8 etcd[24510]: established a TCP streaming connection with peer 56e0b6dad4c53d42 (stream Message writer)
Dec 03 15:55:30 etcd-0-8 etcd[24510]: established a TCP streaming connection with peer 56e0b6dad4c53d42 (stream MsgApp v2 reader)
Dec 03 15:55:30 etcd-0-8 etcd[24510]: established a TCP streaming connection with peer 56e0b6dad4c53d42 (stream MsgApp v2 writer)
Dec 03 15:55:32 etcd-0-8 etcd[24510]: updating the cluster version from 3.0 to 3.3
Dec 03 15:55:32 etcd-0-8 etcd[24510]: updated the cluster version from 3.0 to 3.3
Dec 03 15:55:32 etcd-0-8 etcd[24510]: enabled capabilities for version 3.3
  • Check the listening ports (if etcd is not listening on the local loopback address, etcdctl cannot connect properly from the local machine)
[root@etcd-0-8 default.etcd]# netstat -lntup | grep etcd
tcp    0    0 172.16.0.8:2379    0.0.0.0:*    LISTEN    25167/etcd
tcp    0    0 127.0.0.1:2379     0.0.0.0:*    LISTEN    25167/etcd
tcp    0    0 172.16.0.8:2380    0.0.0.0:*    LISTEN    25167/etcd
  • View the cluster status (here etcd-0-14 is the leader)
[root@etcd-0-8 default.etcd]# etcdctl member list
2d2e457c6a1a76cb: name=etcd-0-8 peerURLs=http://172.16.0.8:2380 clientURLs=http://127.0.0.1:2379,http://172.16.0.8:2379 isLeader=false
56e0b6dad4c53d42: name=etcd-0-14 peerURLs=http://172.16.0.14:2380 clientURLs=http://127.0.0.1:2379,http://172.16.0.14:2379 isLeader=true
d2d2e9fc758e6790: name=etcd-0-17 peerURLs=http://172.16.0.17:2380 clientURLs=http://127.0.0.1:2379,http://172.16.0.17:2379 isLeader=false
[root@etcd-0-8 ~]# etcdctl cluster-health
member 2d2e457c6a1a76cb is healthy: got healthy result from http://127.0.0.1:2379
member 56e0b6dad4c53d42 is healthy: got healthy result from http://127.0.0.1:2379
member d2d2e9fc758e6790 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy

5 Simple usage

5.1 Create

  • set

Sets the value of a key. For example:

$ etcdctl set /testdir/testkey "Hello world"
Hello world

Supported options include:

--ttl '0'                  timeout in seconds; if unset (default 0), the key never expires
--swap-with-value value    perform the set only if the key's current value is value
--swap-with-index '0'      perform the set only if the key's current index is the specified index
  • mk

If the given key does not exist, a new key-value pair is created. For example:

$ etcdctl mk /testdir/testkey "Hello world"
Hello world

When the key is present, an error is reported, for example:

$ etcdctl mk /testdir/testkey "Hello world"
Error:  105: Key already exists (/testdir/testkey) [8]

The following options are supported:

--ttl '0'    timeout in seconds; optional, defaults to 0 (never expires)
  • mkdir

If the given key directory does not exist, a new key directory is created. For example:

$ etcdctl mkdir testdir2

If the key directory exists, an error is reported when executing this command, for example:

$ etcdctl mkdir testdir2
Error:  105: Key already exists (/testdir2) [9]

The following options are supported:

--ttl '0'    timeout in seconds; if unset (default 0), the key never expires
  • setdir

Creates a key directory: creates the directory if it does not exist, and updates the directory's TTL if it does.

$ etcdctl setdir testdir3

The following options are supported:

--ttl '0'    timeout in seconds; if unset (default 0), the key never expires

5.2 Delete

  • rm

Deletes a key-value pair. For example:

$ etcdctl rm /testdir/testkey
PrevNode.Value: Hello

An error is reported if the key does not exist. For example:

$ etcdctl rm /testdir/testkey
Error:  100: Key not found (/testdir/testkey) [7]

The following options are supported:

--dir             delete the key if it is an empty directory or a key-value pair
--recursive       delete the directory and all subkeys
--with-value      delete only if the existing value matches
--with-index '0'  delete only if the existing index matches
  • rmdir

Delete an empty directory, or a key-value pair.

$ etcdctl setdir dir1
$ etcdctl rmdir dir1

If the directory is not empty, an error is reported:

$ etcdctl set /dir/testkey hi
hi
$ etcdctl rmdir /dir
Error:  108: Directory not empty (/dir) [17]

5.3 Update

  • update

Updates the value when the key exists. For example:

$ etcdctl update /testdir/testkey "Hello"
Hello

An error is reported if the key does not exist. For example:

$ etcdctl update /testdir/testkey2 "Hello"
Error:  100: Key not found (/testdir/testkey2) [6]

The following options are supported:

--ttl '0'    timeout in seconds; if unset (default 0), the key never expires
  • updatedir

Update an existing directory.

$ etcdctl updatedir testdir2

The following options are supported:

--ttl '0'    timeout in seconds; if unset (default 0), the key never expires

5.4 Query

  • get

Gets the value of the specified key. For example:

$ etcdctl get /testdir/testkey
Hello world

An error is reported if the key does not exist. For example:

$ etcdctl get /testdir/testkey2
Error:  100: Key not found (/testdir/testkey2) [5]

The following options are supported:

--sort          sort the results
--consistent    send the request to the leader node to guarantee the consistency of the returned content
  • ls

List the keys or subdirectories under the directory (root by default). By default, the contents of subdirectories are not displayed.

For example:

$ etcdctl ls
/testdir
/testdir2
/dir

$ etcdctl ls dir
/dir/testkey

Supported options include:

--sort         sort the output
--recursive    print the contents of subdirectories recursively, if any
-p             append / to directories in the output, to distinguish them

5.5 Watch

  • watch

Monitor changes in a key value, and when the key is updated, output the latest value and exit.

For example, update testkey to "Hello watch":

$ etcdctl get /testdir/testkey
Hello world
$ etcdctl set /testdir/testkey "Hello watch"
Hello watch
$ etcdctl watch testdir/testkey
Hello watch

Supported options include:

--forever            watch until the user presses CTRL+C to exit
--after-index '0'    watch events after the specified index
--recursive          return the values of all keys and subkeys
  • exec-watch

Monitors changes in a key value and executes the given command once the key is updated.

For example, update the testkey value.

$ etcdctl exec-watch testdir/testkey -- sh -c 'ls'
config	Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md

Supported options include:

--after-index '0'    watch events after the specified index
--recursive          return the values of all keys and subkeys

5.6 Backup

Backs up etcd data. For example:

$ etcdctl backup --data-dir /var/lib/etcd  --backup-dir /home/etcd_backup

Supported options include:

--data-dir      the etcd data directory
--backup-dir    back up to the specified path

5.7 Member

With the list, add, and remove subcommands you can list, add, and remove etcd instances in an etcd cluster.

View the nodes in the cluster

$ etcdctl member list
8e9e05c52164694d: name=dev-master-01 peerURLs=http://localhost:2380 clientURLs=http://localhost:2379 isLeader=true

Delete a node from the cluster

$ etcdctl member remove 8e9e05c52164694d
Removed member 8e9e05c52164694d from cluster

Add a node to the cluster

$ etcdctl member add etcd3 http://192.168.1.100:2380
Added member named etcd3 with ID 8e9e05c52164694d to cluster

Examples

#Set a key value
[root@etcd-0-8 ~]# etcdctl set /msg "hello k8s"
hello k8s

#Gets the value of key
[root@etcd-0-8 ~]# etcdctl get /msg
hello k8s

#Gets details about the key value
[root@etcd-0-8 ~]# etcdctl -o extended get /msg
Key: /msg
Created-Index: 12
Modified-Index: 12
TTL: 0
Index: 12

hello k8s

#Getting a nonexistent key returns an error
[root@etcd-0-8 ~]# etcdctl get /xxzx
Error:  100: Key not found (/xxzx) [12]

#Set the TTL of the key. After the TTL expires, the key will be automatically deleted
[root@etcd-0-8 ~]# etcdctl set /testkey "tmp key test" --ttl 5
tmp key test
[root@etcd-0-8 ~]# etcdctl get /testkey
Error:  100: Key not found (/testkey) [14]

#Key replacement operation
[root@etcd-0-8 ~]# etcdctl get /msg
hello k8s
[root@etcd-0-8 ~]# etcdctl set --swap-with-value "hello k8s" /msg "goodbye"
goodbye
[root@etcd-0-8 ~]# etcdctl get /msg
goodbye

#mk creates the key only if it does not exist (set overwrites an existing key)
[root@etcd-0-8 ~]# etcdctl get /msg
goodbye
[root@etcd-0-8 ~]# etcdctl mk /msg "mktest"
Error:  105: Key already exists (/msg) [18]
[root@etcd-0-8 ~]# etcdctl mk /msg1 "mktest"
mktest

#Create a self-sorted key
[root@etcd-0-8 ~]# etcdctl mk --in-order /queue s1
s1
[root@etcd-0-8 ~]# etcdctl mk --in-order /queue s2
s2
[root@etcd-0-8 ~]# etcdctl ls --sort /queue
/queue/00000000000000000021
/queue/00000000000000000022
[root@etcd-0-8 ~]# etcdctl get /queue/00000000000000000021
s1

#Update the key value
[root@etcd-0-8 ~]# etcdctl update /msg1 "update test"
update test
[root@etcd-0-8 ~]# etcdctl get /msg1
update test

#Update the TTL and value of the key
[root@etcd-0-8 ~]# etcdctl update --ttl 5 /msg "aaa"
aaa

#Create a directory
[root@etcd-0-8 ~]# etcdctl mkdir /testdir

#Deleting an empty directory
[root@etcd-0-8 ~]# etcdctl mkdir /test1
[root@etcd-0-8 ~]# etcdctl rmdir /test1

#Delete a non-empty directory
[root@etcd-0-8 ~]# etcdctl get /testdir
/testdir: is a directory
[root@etcd-0-8 ~]#
[root@etcd-0-8 ~]# etcdctl rm --recursive /testdir

#Listing the contents of the directory
[root@etcd-0-8 ~]# etcdctl ls /
/tmp
/msg1
/queue
[root@etcd-0-8 ~]# etcdctl ls /tmp
/tmp/a
/tmp/b

#Recursively lists the contents of the directory
[root@etcd-0-8 ~]# etcdctl ls --recursive /
/msg1
/queue
/queue/00000000000000000021
/queue/00000000000000000022
/tmp
/tmp/b
/tmp/a

#Listen for the key and print the changes when the key changes
[root@etcd-0-8 ~]# etcdctl watch /msg1
xxx

[root@VM_0_17_centos ~]# etcdctl update /msg1 "xxx"
xxx

#Listen to a directory and print out any node changes in the directory
[root@etcd-0-8 ~]# etcdctl watch --recursive /
[update] /msg1
xxx

[root@VM_0_17_centos ~]# etcdctl update /msg1 "xxx"
xxx

#Keep watching until CTRL+C quits the watch
[root@etcd-0-8 ~]# etcdctl watch --forever /


#Listen to the directory and execute a command when it changes
[root@etcd-0-8 ~]# etcdctl exec-watch --recursive / -- sh -c "echo change"
change

# Backup
[root@etcd-0-14 ~]# etcdctl backup --data-dir /data/app/etcd --backup-dir /root/etcd_backup
2019-12-04 10:25:16.113237 I | ignoring EntryConfChange raft entry
2019-12-04 10:25:16.113268 I | ignoring EntryConfChange raft entry
2019-12-04 10:25:16.113272 I | ignoring EntryConfChange raft entry
2019-12-04 10:25:16.113293 I | ignoring member attribute update on /0/members/2d2e457c6a1a76cb/attributes
2019-12-04 10:25:16.113299 I | ignoring member attribute update on /0/members/d2d2e9fc758e6790/attributes
2019-12-04 10:25:16.113305 I | ignoring member attribute update on /0/members/56e0b6dad4c53d42/attributes
2019-12-04 10:25:16.113310 I | ignoring member attribute update on /0/members/56e0b6dad4c53d42/attributes
2019-12-04 10:25:16.113314 I | ignoring member attribute update on /0/members/2d2e457c6a1a76cb/attributes
2019-12-04 10:25:16.113319 I | ignoring member attribute update on /0/members/d2d2e9fc758e6790/attributes
2019-12-04 10:25:16.113384 I | ignoring member attribute update on /0/members/56e0b6dad4c53d42/attributes
#Using the v3 API
[root@etcd-0-14 ~]# export ETCDCTL_API=3
[root@etcd-0-14 ~]# etcdctl --endpoints="http://172.16.0.8:2379,http://172.16.0.14:2379,http://172.16.0.17:2379" snapshot save mysnapshot.db
Snapshot saved at mysnapshot.db
[root@etcd-0-14 ~]# etcdctl snapshot status mysnapshot.db -w json
{"hash":928285884,"revision":0,"totalKey":5,"totalSize":20480}

6 Summary

  • By default, etcd keeps only the most recent 1000 events of history, so it is not suitable for scenarios with a large number of update operations, where earlier history is lost. Typical etcd application scenarios are configuration management and service discovery, which are read-heavy and write-light.

  • etcd is much simpler to use than ZooKeeper. For true service discovery, however, etcd needs to be combined with other tools (such as Registrator or confd) to implement automatic registration and updating of services.

  • There is currently no graphical tool for etcd.
