Original link: javajgs.com/archives/21…

Preface

Unlike Crane, Yunbang from GoodRain (now renamed Rainbond) is based on Kubernetes, and frankly its open-source attitude feels a bit more serious than Crane's, which suggests they mean it. As for Crane, nobody has replied to my issue so far; I think that project is going cold.

1. Introduction:

For Yunbang's positioning, you can refer to the official FAQ:

Q: What is the positioning of the open source version of Yunbang?

A: A CI/CD platform for small and medium-sized enterprises, and an application management platform for production environments. Rather than papering over the gap between development and operations, Yunbang lets development and operations each do what they are supposed to do: development is responsible for programs and business, operations is responsible for resources, and Yunbang is the assistant to both.

Q: What is the purpose of releasing an open source version?

A: We hope more enterprises and individual enthusiasts can enjoy the efficiency and convenience brought by container and cloud computing technology, and that the community edition helps a wider audience understand Yunbang's product design philosophy.

Q: What is the development plan for the open source version?

A: Yunbang is a platform-level product. Even for the open source version, our primary concern is stability. The product design follows the principle of simple but sufficient functionality to lower the barrier to entry, so that users can experience the benefits of container technology in the simplest way possible.

Q: Are there cases of Yunbang Enterprise Edition running in production? Is the open source version just a "toy" for demos and testing?

A: Speaking of which, I think we first need to be clear about the criteria by which we judge whether a technology or product is fit for a "production environment". Only when the standard or definition is clear does it make sense to discuss the question. Let's explain it from four aspects: stability, maintainability, scalability, and support services.

1. Stability: The Yunbang public cloud is in fact a deployment of Yunbang Enterprise Edition running on a public IaaS platform. It has been in operation for more than 700 days without a single incident caused by the underlying platform code, and its SLA reaches 99.999%. The core code of the open source version's base modules is 100% identical to that of the enterprise version.

2. Maintainability: Docker is Yunbang's basic technical unit, and Kubernetes is used for service orchestration and scheduling. Yunbang's other modules are packaged as Docker images, with high availability guaranteed by Kubernetes' own HA mechanisms. The platform's maintenance cost is therefore very low, and the GoodRain technical team has many years of platform operations experience, so Yunbang's deployment and monitoring systems are very mature.

3. Scalability: The Yunbang platform supports distributed deployment. With Kubernetes' container scheduling, it can start and stop thousands of containers within seconds. Container hosts can also be scaled dynamically, and a new container host can be brought online within 3 minutes.

4. Support services: For the open source version we provide timely product updates; platform bugs and security issues are patched as quickly as the development schedule allows. We offer community and WeChat/QQ group online support, as well as complete documentation. In addition, the enterprise version of Yunbang currently focuses on private deployments: more than 100 enterprises have deployed it, such as Zdoo.com and the Yaoji Lottery website, and many small and medium enterprises also run the community edition in production.

Whatever else you make of that, it does look like Yunbang will keep being maintained and won't suddenly go quiet; it isn't open source just for the sake of releasing a toy. So if you need a management platform based on Kubernetes, Yunbang may well be an option.

2. Installation:

The reason both 3.4 and 3.5 appear here is that I played with this tutorial for a while and then discovered that 3.5 had been officially released… So I will cover 3.5; after all, 3.4 is the more experimental of the two.

3.5 (new): www.rainbond.com/docs/stable… 3.4 (old version): www.kancloud.cn/good-rain/c…

The prerequisites, in summary:

  1. CentOS 7 (must use systemd; Debian should work in theory, but CentOS 7 is recommended)

  2. Sufficient hardware (the requirements are not low, and a cluster is recommended)

  3. NTP time synchronization (configure ntpd and set the correct time zone; see the sketch after this list)

  4. A static IP address on the server, so the address does not change after a reboot or DHCP lease renewal

  5. A clean environment (the official advice is to uninstall any existing Docker and kubelet first and start fresh)

  6. Avoid changing the hostname after installation (if you must, update /etc/hosts and /etc/hostname together and make sure no two nodes share a hostname)
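
A minimal prep sketch for items 3 and 6 on CentOS 7 (the time zone, hostname and IP below are placeholders; the static IP from item 4 is configured in your usual network scripts and is not shown here):

# item 3: install ntpd and set the time zone
yum install -y ntp
systemctl enable ntpd && systemctl start ntpd
timedatectl set-timezone Asia/Shanghai

# item 6: give each node a unique hostname before installing
hostnamectl set-hostname node1
echo "192.168.1.10 node1" >> /etc/hosts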

Install Yunbang

bash <(curl -s http://repo.goodrain.com/install/3.5/start.sh)

Basically a one-click install (if your environment is ok)

You can see that this gr-docker-engine appears to be GoodRain's own modified build based on Docker 1.12.

Hmm, this installation takes quite a long time, so please be patient; it is recommended to run it inside screen to avoid losing the session if your SSH connection drops.
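
For example, wrapping the installer in a screen session could look like this (rainbond-install is just a session name I made up):

screen -S rainbond-install
bash <(curl -s http://repo.goodrain.com/install/3.5/start.sh)
# detach with Ctrl-A then d; reattach later with:
screen -r rainbond-install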

PS: If it appears stuck for a long time, refer to the official workaround below.

If it makes no progress, open another terminal and run systemctl restart rainbond-node. If that does not help, run grctl tasks get <task> to check whether the dependent tasks completed successfully; if not, check the execution logs of those dependent tasks.
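
Put together, a rough troubleshooting sequence looks like this (the grctl subcommands are the ones quoted above; journalctl is simply the standard systemd way to read the service log):

systemctl restart rainbond-node            # restart the install/node service
grctl tasks get <task>                     # check whether the dependent tasks succeeded
journalctl -u rainbond-node --no-pager     # inspect the service log if a task keeps failing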

Then access the URL that is displayed, i.e. http://<server IP>:7070/

Register on first access (by default, the first account registered becomes the administrator).

After registering you are logged in automatically, and the interface actually looks pretty decent.

The commands to add a compute node are as follows. They are run on the management node; just make sure that password-free SSH login to the target machine has been configured (it is recommended to first test that you can log in directly), then run them as-is:

# Add the compute node to the cluster
grctl node add -i <compute node internal IP> --role compute
# Get the compute node's uuid
uuid=$(grctl node list | grep <compute node internal IP> | awk '{print $1}')
# Bring the compute node service online
grctl node up $uuid
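
If password-free login is not set up yet, the standard ssh-copy-id route works; afterwards grctl node list (already used above) confirms the node is there. The IP address is a placeholder:

# on the management node: push an SSH key to the new compute node
ssh-keygen -t rsa                 # skip if a key pair already exists
ssh-copy-id root@192.168.1.20     # placeholder IP of the compute node

# after node add / node up, confirm the node is listed and online
grctl node list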

Yunbang supports quite a few usage patterns; for the complete list, see the official documentation -> portal

I only cover the basics here

This is deployed directly from the official example. It is worth knowing that every Yunbang installation is assigned a second-level domain by the GoodRain cloud, which generally resolves to your node, so you can probably change the domain used for access by modifying WILD_DOMAIN in /etc/goodrain/console.py. I have not verified whether that actually works, though.

DockerCompose deployment is also supported. In my opinion the panel does not give you many options: it exists purely for application management, and all the (server-side) setup is handled for you automatically. The enterprise edition may expose more settings, but what is here already seems sufficient, and it probably covers what a team or small company needs(?)

3. Optimization:

1. Raise the file descriptor limit

vi /etc/security/limits.conf
# add the following lines
root soft nofile 102400
root hard nofile 102400
* soft nofile 102400
* hard nofile 102400

You need to log out and back in (or reboot) for this to take effect.
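
After logging back in you can confirm the new limit:

ulimit -n     # should now print 102400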

2. Tune kernel parameters

vi /etc/sysctl.conf
# add the following lines
net.ipv4.neigh.default.gc_stale_time=120
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce=2
net.ipv4.conf.all.arp_announce=2
net.ipv4.tcp_max_tw_buckets=5000
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_max_syn_backlog=1024
net.ipv4.tcp_synack_retries=2
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
net.ipv4.conf.lo.arp_announce=2
vm.swappiness=10
vm.vfs_cache_pressure=50
vm.overcommit_memory=1
net.core.somaxconn=65535
net.netfilter.nf_conntrack_max=655350
net.netfilter.nf_conntrack_tcp_timeout_established=1200

Run sysctl -p for the changes to take effect immediately.
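
To spot-check a couple of the values after running sysctl -p (the two keys below are just examples picked from the list above):

sysctl net.core.somaxconn      # should print net.core.somaxconn = 65535
sysctl vm.swappiness           # should print vm.swappiness = 10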

3. Increase the number of concurrent NFS requests

Add the following to /etc/sysctl.conf:

sunrpc.tcp_slot_table_entries=128

Run sysctl -p for the change to take effect immediately.
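
Note that the sunrpc.* keys only exist once the sunrpc kernel module is loaded (which it normally is on a machine that mounts NFS); you can check the value with:

sysctl sunrpc.tcp_slot_table_entries     # should print sunrpc.tcp_slot_table_entries = 128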

4. Enable container swap usage restrictions

If only -m is specified when starting a container, without --memory-swap, then --memory-swap defaults to twice the value of -m. For example:

docker run -it -m 200M image

This means the container can use at most 200M of physical memory plus 200M of swap. If the following warning is shown instead, the likely cause is that swap accounting in cgroups is not enabled on the host by default:

WARNING: Your kernel does not support swap limit capabilities, memory limited without swap.

You can run the following commands to resolve the problem:

echo 'GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"' >> /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg > /dev/stdout 2>&1
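
A quick way to verify after rebooting, assuming any small image you have locally (centos:7 below is just a placeholder):

reboot
# once the machine is back up:
cat /proc/cmdline | grep swapaccount           # should include cgroup_enable=memory swapaccount=1
docker run -it -m 200M centos:7 /bin/true      # should no longer print the swap limit warning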

For more information, please refer to the official documentation:

www.rainbond.com/docs/stable…
Previous version: www.kancloud.cn/good-rain/c…
New documentation: www.rainbond.com/docs/stable…