
I. Standalone

Official website: clickhouse.yandex/

Installation steps

Official installation documentation: clickhouse.tech/docs/en/get…

1. Upload the four RPM packages to a directory on the server

  • clickhouse-common-static – the compiled ClickHouse binaries.
  • clickhouse-server – creates a symbolic link for clickhouse-server and installs the default server configuration.
  • clickhouse-client – creates a symbolic link for the clickhouse-client tool and installs the client configuration files.
  • clickhouse-common-static-dbg – ClickHouse binaries with debugging information.
# These 4 files:
# clickhouse-common-static-20.5.4.40-1.el7.x86_64.rpm
# clickhouse-server-20.5.4.40-1.el7.x86_64.rpm
# clickhouse-server-common-20.5.4.40-1.el7.x86_64.rpm
# clickhouse-client-20.5.4.40-1.el7.x86_64.rpm

# Download the RPM packages for offline installation with wget:
wget --content-disposition https://packagecloud.io/Altinity/clickhouse/packages/el/7/clickhouse-server-common-20.5.4.40-1.el7.x86_64.rpm/download.rpm
wget --content-disposition https://packagecloud.io/Altinity/clickhouse/packages/el/7/clickhouse-common-static-20.5.4.40-1.el7.x86_64.rpm/download.rpm
wget --content-disposition https://packagecloud.io/Altinity/clickhouse/packages/el/7/clickhouse-server-20.5.4.40-1.el7.x86_64.rpm/download.rpm
wget --content-disposition https://packagecloud.io/Altinity/clickhouse/packages/el/7/clickhouse-client-20.5.4.40-1.el7.x86_64.rpm/download.rpm
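Before installing, it can be worth confirming the downloads completed and inspecting the package metadata (a small optional check, not part of the original steps):

# List the downloaded packages and show basic info without installing them.
ls -lh ./*.rpm
for f in ./*.rpm; do
    rpm -qpi "$f" | head -n 5
done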

2. Install the four rpm packages

rpm -ivh ./*.rpm

# The actual operation is as follows

[root@linux121 ck]# pwd
/opt/software/ck
[root@linux121 ck]# ll
total 90452
-rw-r--r-- 1 root root     6376 Apr 25 11:51 clickhouse-client-20.5.4.40-1.el7.x86_64.rpm
-rw-r--r-- 1 root root 35102796 Apr 25 11:51 clickhouse-common-static-20.5.4.40-1.el7.x86_64.rpm
-rw-r--r-- 1 root root      ... Apr 25 11:51 clickhouse-server-20.5.4.40-1.el7.x86_64.rpm
-rw-r--r-- 1 root root    12988 Apr 25 11:51 clickhouse-server-common-20.5.4.40-1.el7.x86_64.rpm
[root@linux121 ck]# rpm -ivh ./*.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:clickhouse-server-common-20.5.4.4################################# [ 25%]
   2:clickhouse-common-static-20.5.4.4################################# [ 50%]
   3:clickhouse-server-20.5.4.40-1.el7################################# [ 75%]
Create user clickhouse.clickhouse with datadir /var/lib/clickhouse
   4:clickhouse-client-20.5.4.40-1.el7################################# [100%]
Create user clickhouse.clickhouse with datadir /var/lib/clickhouse
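To confirm the installation on this node succeeded, a quick optional check:

# The four clickhouse packages should be listed, and the client should report 20.5.4.40.
rpm -qa | grep clickhouse
clickhouse-client --version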

3. Start clickhouse-server

  1. Start: systemctl start clickhouse-server
  2. Check status: systemctl status clickhouse-server
The actual operation is as follows

[root@linux121 clickhouse-server]# systemctl start clickhouse-server
[root@linux121 clickhouse-server]# systemctl status clickhouse-server
● clickhouse-server.service - LSB: Yandex clickhouse-server daemon
   Loaded: loaded (/etc/rc.d/init.d/clickhouse-server; bad; vendor preset: disabled)
   Active: active (exited) since Sun 2021-04-25 12:08:44 CST; 5s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 5248 ExecStart=/etc/rc.d/init.d/clickhouse-server start (code=exited, status=0/SUCCESS)

Apr 25 12:08:44 linux121 systemd[1]: Starting LSB: Yandex clickhouse-server daemon...
Apr 25 12:08:44 linux121 su[5256]: (to clickhouse) root on none
Apr 25 12:08:44 linux121 clickhouse-server[5248]: Start clickhouse-server service: Path to data directory in /etc/clickhouse-server/config.xml: ...ckhouse/
Apr 25 12:08:44 linux121 su[5263]: (to clickhouse) root on none
Apr 25 12:08:44 linux121 su[5265]: (to clickhouse) root on none
Apr 25 12:08:44 linux121 su[5269]: (to clickhouse) root on none
Apr 25 12:08:44 linux121 su[5273]: (to clickhouse) root on none
Apr 25 12:08:44 linux121 clickhouse-server[5248]: DONE
Apr 25 12:08:44 linux121 systemd[1]: Started LSB: Yandex clickhouse-server daemon.
Hint: Some lines were ellipsized, use -l to show in full.
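Once the service shows active, ClickHouse should be listening on its default ports; a quick sanity check (assuming the default configuration):

# 9000 is the native TCP port, 8123 the HTTP port.
ss -lntp | grep clickhouse
# The server log lives under /var/log/clickhouse-server/ by default.
tail -n 20 /var/log/clickhouse-server/clickhouse-server.log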

4. Use the client to connect to the server

Enter the client: clickhouse-client

The actual operation is as follows

[root@linux121 clickhouse-server]# clickhouse-client
ClickHouse client version 20.5.4.40.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.5.4 revision 54435.

linux121 :) show databases;

SHOW DATABASES

┌─name───────────────────────────┐
│ _temporary_and_external_tables │
│ default                        │
│ system                         │
└────────────────────────────────┘

3 rows in set. Elapsed: 0.005 sec.
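As an optional smoke test of the standalone server, the statements below create a table, insert a row, and read it back through clickhouse-client (a minimal sketch; the demo database and table names are only examples, not from the original):

clickhouse-client --query "CREATE DATABASE IF NOT EXISTS demo"
clickhouse-client --query "CREATE TABLE IF NOT EXISTS demo.t (id UInt32, name String) ENGINE = MergeTree ORDER BY id"
clickhouse-client --query "INSERT INTO demo.t VALUES (1, 'hello')"
clickhouse-client --query "SELECT * FROM demo.t"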

II. Cluster

ZooKeeper is required.

Installation steps

The environment is as follows:

hostname    IP              note
linux121    172.16.64.121   master node
linux122    172.16.64.122   worker node
linux123    172.16.64.123   worker node

Each node has Zookeeper.

1. Installation and deployment

Send the four packages to each node:

[root@linux121 opt]# pwd
/opt

$ scp -r software root@linux122:$PWD
$ scp -r software root@linux123:$PWD

rpm -ivh ./*.rpm

# The installation output on each node is the same as in the standalone installation above.
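To confirm that every node installed the packages correctly, a small loop over the hosts can help (a sketch; it assumes passwordless SSH from linux121 to the other nodes):

for host in linux121 linux122 linux123; do
    echo "===== $host ====="
    ssh "$host" "rpm -qa | grep clickhouse; clickhouse-client --version"
done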

2. Modify the configuration file config.xml on each node

Run the vi /etc/clickhouse-server/config.xml command

[root@linux121 clickhouse-server]# pwd
/etc/clickhouse-server
[root@linux121 clickhouse-server]# vi config.xml

Modify config.xml as follows:

    <listen_host>::</listen_host>
    <!-- Same for hosts with disabled ipv6: -->
    <!-- <listen_host>0.0.0.0</listen_host> -->

Add the following to config.xml:

# Add the following above the <zookeeper> tag
<!-- Include the external configuration file metrika.xml -->
<include_from>/etc/clickhouse-server/config.d/metrika.xml</include_from>

Send this configuration to other nodes:

scp config.xml root@linux122:/etc/clickhouse-server/config.xml
scp config.xml root@linux123:/etc/clickhouse-server/config.xml
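A quick way to confirm the listen_host and include_from edits are present on every node (a sketch, again assuming passwordless SSH):

for host in linux121 linux122 linux123; do
    echo "===== $host ====="
    ssh "$host" "grep -nE 'listen_host|include_from' /etc/clickhouse-server/config.xml"
done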

3. Create the configuration file metrika.xml on each node

Tip: ClickHouse uses port 9000, which conflicts with Hadoop, so Hadoop is stopped here.

Command: vi /etc/clickhouse-server/config.d/metrika.xml

A total of three shards are configured, and each shard has only one replica.

<yandex>
    <clickhouse_remote_servers>
        <perftest_3shards_1replicas>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>linux121</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>linux122</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>linux123</host>
                    <port>9000</port>
                </replica>
            </shard>
        </perftest_3shards_1replicas>
    </clickhouse_remote_servers>
    <!-- ZooKeeper configuration -->
    <zookeeper-servers>
        <node index="1">
            <host>linux121</host>
            <port>2182</port>
        </node>
        <node index="2">
            <host>linux122</host>
            <port>2182</port>
        </node>
        <node index="3">
            <host>linux123</host>
            <port>2182</port>
        </node>
    </zookeeper-servers>
    <macros>
        <replica>linux121</replica>
    </macros>
    <networks>
        <ip>::/0</ip>
    </networks>
    <clickhouse_compression>
        <case>
            <min_part_size>10000000000</min_part_size>
            <min_part_size_ratio>0.01</min_part_size_ratio>
            <method>lz4</method>
        </case>
    </clickhouse_compression>
</yandex>

Send this configuration to other nodes:

scp -r config.d root@linux122:$PWD
scp -r config.d root@linux123:$PWD
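It is worth confirming that metrika.xml actually arrived on every node before starting the servers (a small optional check, assuming passwordless SSH):

for host in linux121 linux122 linux123; do
    echo "===== $host ====="
    ssh "$host" "ls -l /etc/clickhouse-server/config.d/metrika.xml"
done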

4. Start clickhouse-server on all nodes

Before starting clickhouse-server, ZooKeeper needs to be started first.

sh zk.sh start

# The zk.sh script is on linux121
[root@linux121 shells]# pwd
/root/shells
[root@linux121 shells]# ll
total 4
-rw-r--r--. 1 root root 240 Aug 27  2020 zk.sh

[root@linux121 shells]# sh zk.sh start
start zookeeper server...
ZooKeeper JMX enabled by default
Using config: /opt/lagou/servers/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
ZooKeeper JMX enabled by default
Using config: /opt/lagou/servers/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
ZooKeeper JMX enabled by default
Using config: /opt/lagou/servers/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

# Check whether it is up
# QuorumPeerMain is the ZooKeeper process
[root@linux121 shells]# jps
1120 -- process information unavailable
5842 Jps
5812 QuorumPeerMain
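The zk.sh helper itself is not shown in the original. A minimal sketch of what such a script might look like, assuming passwordless SSH and the ZooKeeper install path seen in the output above:

#!/bin/bash
# zk.sh - hypothetical helper that runs zkServer.sh on all three nodes.
ZK_HOME=/opt/lagou/servers/zookeeper-3.4.14
case "$1" in
    start|stop|status)
        for host in linux121 linux122 linux123; do
            echo "----- zookeeper $1 on $host -----"
            ssh "$host" "source /etc/profile; $ZK_HOME/bin/zkServer.sh $1"
        done
        ;;
    *)
        echo "Usage: sh zk.sh {start|stop|status}"
        ;;
esac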

Command: systemctl start clickhouse-server

# Start on each node
[root@linux121 software]# systemctl start clickhouse-server
[root@linux122 software]# systemctl start clickhouse-server
[root@linux123 software]# systemctl start clickhouse-server
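To confirm the service is active on every node (a sketch assuming passwordless SSH from linux121):

for host in linux121 linux122 linux123; do
    echo -n "$host: "
    ssh "$host" "systemctl is-active clickhouse-server"
done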

5. View the deployment result

# 1. Enter the client
clickhouse-client


# 2. View the results
select * from system.clusters;

The actual operation is as follows:

[root@linux121 software]# clickhouse-client
ClickHouse client version 20.5.4.40.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.5.4 revision 54435.

linux121 :) select * from system.clusters;

SELECT * FROM system.clusters

┌─cluster────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─┬─host_address──┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ perftest_3shards_1replicas │         1 │            1 │           1 │ linux121  │ 172.16.64.121 │ 9000 │        1 │ default │                  │            0 │                       0 │
│ perftest_3shards_1replicas │         2 │            1 │           1 │ linux122  │ 172.16.64.122 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ perftest_3shards_1replicas │         3 │            1 │           1 │ linux123  │ 172.16.64.123 │ 9000 │        0 │ default │                  │            0 │                       0 │
│ test_cluster_two_shards    │         1 │            1 │           1 │ 127.0.0.1 │ 127.0.0.1     │ 9000 │        1 │ default │                  │            0 │                       0 │
│ ... (the remaining built-in test_* clusters are omitted here) ...                                                                                                                         │
└────────────────────────────┴───────────┴──────────────┴─────────────┴───────────┴───────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘

11 rows in set. Elapsed: 0.007 sec.

If the three perftest_3shards_1replicas shards appear with the correct hosts, the cluster has started successfully.
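As a further end-to-end check, you can create a local table on every node plus a Distributed table over the perftest_3shards_1replicas cluster, then insert and query through it (a minimal sketch; the table names are examples, not from the original, and distributed DDL relies on the ZooKeeper setup above):

# Create a local MergeTree table on every node of the cluster (distributed DDL via ZooKeeper).
clickhouse-client --query "CREATE TABLE default.t_local ON CLUSTER perftest_3shards_1replicas (id UInt32, name String) ENGINE = MergeTree ORDER BY id"

# Create a Distributed table that shards rows by id over the local tables.
clickhouse-client --query "CREATE TABLE default.t_all ON CLUSTER perftest_3shards_1replicas AS default.t_local ENGINE = Distributed(perftest_3shards_1replicas, default, t_local, id)"

# Insert through the Distributed table and read back; distributed inserts are
# asynchronous by default, so rows may take a moment to reach the remote shards.
clickhouse-client --query "INSERT INTO default.t_all VALUES (1, 'a'), (2, 'b'), (3, 'c')"
clickhouse-client --query "SELECT hostName(), * FROM default.t_all ORDER BY id"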