Overview: I have been using KubeSphere (from QingCloud) recently. The user experience is excellent: it supports private deployment, has no infrastructure or Kubernetes dependency, can be deployed across physical machines, virtual machines, and cloud platforms, and can manage Kubernetes clusters of different versions from different vendors. On top of Kubernetes it implements role-based access control, and its enterprise pipeline feature makes CI/CD quick to set up, with commonly used tools such as Harbor, GitLab, Jenkins, and SonarQube built in. Based on OpenPitrix it provides full lifecycle management of applications, covering development, testing, release, upgrade, and removal, which makes for a very good experience. As an open source project it inevitably has some bugs, and I ran into a few issues in my own use; many thanks to the QingCloud community for the technical assistance they provided. If you are interested in Kubernetes, this home-grown platform is worth a try, and it feels silky smooth; Rancher users may want to give it a try as well.

One Clean up containers in the Exited state

After the cluster runs for a period of time, some containers end up in the Exited state because of abnormal exits. They need to be cleaned up in time to free disk space; the cleanup command below can be run as a scheduled task.

docker rm `docker ps -a |grep Exited |awk '{print $1}'`
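For example, this command can be wired into cron so that it runs nightly (a minimal sketch; the 02:00 schedule and the discarded output are assumptions, adjust as needed):

# crontab entry (crontab -e): remove Exited containers every night at 02:00
0 2 * * * docker rm $(docker ps -a | grep Exited | awk '{print $1}') >/dev/null 2>&1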

Two Clean up abnormal or evicted pods

  • Clean up the kubesphere-devops-system namespace
kubectl delete pods -n kubesphere-devops-system $(kubectl get pods -n kubesphere-devops-system |grep Evicted|awk '{print $1}')
kubectl delete pods -n kubesphere-devops-system $(kubectl get pods -n kubesphere-devops-system |grep CrashLoopBackOff|awk '{print $1}')
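An alternative sketch, assuming a kubectl version that supports field selectors: evicted pods are left in the Failed phase, so they can also be deleted without grep:

kubectl delete pods -n kubesphere-devops-system --field-selector=status.phase=Failed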
  • For convenience, the script below cleans up Evicted or CrashLoopBackOff pods in a given namespace, or cleans up exited containers:
#!/bin/bash
# auth:kaliarch

clear_evicted_pod() {
  ns=$1
  kubectl delete pods -n ${ns} $(kubectl get pods -n ${ns} | grep Evicted | awk '{print $1}')
}

clear_crash_pod() {
  ns=$1
  kubectl delete pods -n ${ns} $(kubectl get pods -n ${ns} | grep CrashLoopBackOff | awk '{print $1}')
}

clear_exited_container() {
  docker rm $(docker ps -a | grep Exited | awk '{print $1}')
}

echo "1.clear evicted pod"
echo "2.clear crash pod"
echo "3.clear exited container"
read -p "Please input num:" num


case ${num} in
"1")
  read -p "Please input oper namespace:" ns
  clear_evicted_pod ${ns}
  ;;
"2")
  read -p "Please input oper namespace:" ns
  clear_crash_pod ${ns}
  ;;
"3")
  clear_exited_container
  ;;
*)
  echo "input error"
  ;;
esac
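A sample interactive run, assuming the script is saved under the hypothetical name clean_k8s.sh:

$ bash clean_k8s.sh
1.clear evicted pod
2.clear crash pod
3.clear exited container
Please input num:1
Please input oper namespace:kubesphere-devops-system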
  • Clear Evicted/CrashLoopBackOff pods in all namespaces
# Get all namespaces
kubectl get ns|grep -v "NAME"|awk '{print $1}'

# Clear the pod in expulsion state
for ns in `kubectl get ns|grep -v "NAME"|awk '{print $1}'`;do kubectl delete pods -n ${ns} $(kubectl get pods -n ${ns} |grep Evicted|awk '{print $1}');done
# Clear abnormal pod
for ns in `kubectl get ns|grep -v "NAME"|awk '{print $1}'`;do kubectl delete pods -n ${ns} $(kubectl get pods -n ${ns} |grep CrashLoopBackOff|awk '{print $1}');done
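The same sweep can also be done in a single pass over kubectl get pods --all-namespaces (a sketch; assumes GNU xargs, whose -r flag skips the delete when nothing matches):

# Delete Evicted pods across all namespaces in one pipeline
kubectl get pods --all-namespaces | grep Evicted | awk '{print $1, $2}' | xargs -r -n2 kubectl delete pod -n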

Three Migrate Docker data

The Docker data directory was not specified during installation and the system disk is only 50 GB; as time goes by the disk fills up, so the Docker data needs to be migrated. The symlink approach is used here: first, mount the new disk at the /data directory.

systemctl stop docker

mkdir -p /data/docker/

# Sync the existing data to the new location
rsync -avz /var/lib/docker/ /data/docker/

# Keep the old directory as a backup
mv /var/lib/docker /data/docker_bak

# Symlink the new location into place
ln -s /data/docker /var/lib/

systemctl daemon-reload

systemctl start docker
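Instead of the symlink, Docker can also be pointed directly at the new directory through its daemon configuration (a sketch, assuming a Docker version that supports data-root in /etc/docker/daemon.json; merge the key into any existing file rather than overwriting it):

systemctl stop docker
# Point Docker's data directory at the new disk
cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/data/docker"
}
EOF
systemctl start docker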

Four KubeSphere network errors

  • Problem Description:

On a KubeSphere node or master node, a manually started container cannot reach the public network. Is something wrong with my configuration? The cluster used the default Calico before; switching to Flannel did not help either. Containers in pods deployed through KubeSphere can reach the public network, but containers started manually on a node or master cannot.

A manually started container uses the docker0 network:


root@fd1b8101475d:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
105: eth0@if106: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

The container network inside pods uses kube-ipvs0:


1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether c2:27:44:13:df:5d brd ff:ff:ff:ff:ff:ff
    inet 10.233.97.175/32 scope global eth0
       valid_lft forever preferred_lft forever
  • Solution:

View the Docker startup configuration

Modify the file /etc/systemd/system/docker.service.d/docker-options.conf and remove the parameter --iptables=false. When this parameter is set to false, Docker does not write iptables rules, so manually started containers cannot reach the public network. After removal the file looks like:

[Service]
Environment="DOCKER_OPTS= --registry-mirror=https://registry.docker-cn.com --data-root=/var/lib/docker --log-opt max-size=10m --log-opt max-file=3 --insecure-registry=harbor.devops.kubesphere.local:30280"
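After removing the parameter, reload and restart Docker, then verify outbound connectivity from a plain container (a quick check; the busybox image and the ping target are arbitrary assumptions):

systemctl daemon-reload
systemctl restart docker
# Should now receive replies, since Docker writes its iptables rules again
docker run --rm busybox ping -c 3 114.114.114.114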

Five KubeSphere application routing anomaly

In KubeSphere, the application route (Ingress) uses nginx. Configuring it on the web interface resulted in two hosts sharing the same CA certificate; this can be fixed by configuring the Ingress through its YAML file instead.

⚠️ Note: the Ingress is configured as follows:

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: prod-app-ingress
  namespace: prod-net-route
  resourceVersion: '8631859'
  labels:
    app: prod-app-ingress
  annotations:
    desc: production environment application routing
    nginx.ingress.kubernetes.io/client-body-buffer-size: 1024m
    nginx.ingress.kubernetes.io/proxy-body-size: 2048m
    nginx.ingress.kubernetes.io/proxy-read-timeout: '3600'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '1800'
    nginx.ingress.kubernetes.io/service-upstream: 'true'
spec:
  tls:
    - hosts:
        - smartms.tools.anchnet.com
      secretName: smartms-ca
    - hosts:
        - smartsds.tools.anchnet.com
      secretName: smartsds-ca
  rules:
    - host: smartms.tools.anchnet.com
      http:
        paths:
          - path: /
            backend:
              serviceName: smartms-frontend-svc
              servicePort: 80
    - host: smartsds.tools.anchnet.com
      http:
        paths:
          - path: /
            backend:
              serviceName: smartsds-frontend-svc
              servicePort: 80
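Once the YAML is applied, it can be verified that each host serves its own certificate (a sketch; assumes openssl is available and the ingress listens for TLS on port 443):

# Each command should print the subject of that host's own certificate
openssl s_client -connect smartms.tools.anchnet.com:443 -servername smartms.tools.anchnet.com </dev/null 2>/dev/null | openssl x509 -noout -subject
openssl s_client -connect smartsds.tools.anchnet.com:443 -servername smartsds.tools.anchnet.com </dev/null 2>/dev/null | openssl x509 -noout -subject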

Six Update KubeSphere's Jenkins agent

Users may need different language versions or tool versions in their own application scenarios. This section describes how to replace the built-in agent.

The default base-build image does not include the sonar-scanner tool. Every agent in KubeSphere's Jenkins is a pod; to replace the built-in agent, you need to replace the agent's corresponding image.

Build the latest kubesphere/builder-base:advanced-1.0.0 agent image.

Update it to the specified custom image: ccr.ccs.tencentyun.com/testns/base:v1.

Reference link: kubesphere.io/docs/advanc…

After modifying jenkins-casc-config in KubeSphere, you need to reload the updated system configuration on the Configuration as Code page under the Jenkins dashboard's system administration.

Reference:

kubesphere.io/docs/advanc…

jenkins-casc-config

Seven DevOps mail sending

Reference: www.cloudbees.com/blog/mail-s…

Built-in variables:

Variable name | Description
BUILD_NUMBER The current build number, such as “153”
BUILD_ID The current build ID, identical to BUILD_NUMBER for builds created in 1.597+, but a YYYY-MM-DD_hh-mm-ss timestamp for older builds
BUILD_DISPLAY_NAME The display name of the current build, which is something like “#153” by default.
JOB_NAME Name of the project of this build, such as “foo” or “foo/bar”. (To strip off folder paths from a Bourne shell script, try: ${JOB_NAME##*/})
BUILD_TAG String of “jenkins-{BUILD_NUMBER}”. Convenient to put into a resource file, a jar file, etc for easier identification.
EXECUTOR_NUMBER The unique number that identifies the current executor (among executors of the same machine) that’s carrying out this build. This is the number you see in the “build executor status”, except that the number starts from 0, not 1.
NODE_NAME Name of the slave if the build is on a slave, or “master” if run on master
NODE_LABELS Whitespace-separated list of labels that the node is assigned.
WORKSPACE The absolute path of the directory assigned to the build as a workspace.
JENKINS_HOME The absolute path of the directory assigned on the master node for Jenkins to store data.
JENKINS_URL Full URL of Jenkins, like http://server:port/jenkins/ (note: only available if Jenkins URL set in system configuration)
BUILD_URL Full URL of this build, like http://server:port/jenkins/job/foo/15/ (Jenkins URL must be set)
SVN_REVISION Subversion revision number that’s currently checked out to the workspace, such as “12345”
SVN_URL Subversion URL that’s currently checked out to the workspace.
JOB_URL Full URL of this job, like http://server:port/jenkins/job/foo/ (Jenkins URL must be set)
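Inside a pipeline sh step these variables are available as ordinary environment variables; a trivial sketch:

# Hypothetical shell step body: print where this build runs and its log URL
echo "Building ${JOB_NAME} #${BUILD_NUMBER} on ${NODE_NAME}, logs at ${BUILD_URL}"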

Finally, I wrote a template adapted to my own business, which can be used directly:

mail to: '[email protected]',
     charset: 'UTF-8', // or GBK/GB18030
     mimeType: 'text/plain', // or text/html
     subject: "Kubesphere ${env.JOB_NAME} [${env.BUILD_NUMBER}] release normal Running Pipeline: ${currentBuild.fullDisplayName}",
     body: """
--------- Anchnet enterprise Kubesphere Pipeline job ---------------------
Project name: ${env.JOB_NAME}
Build number: ${env.BUILD_NUMBER}
Scanning information address: ${SONAR_HOST}
Image address: ${REGISTRY}/${QHUB_NAMESPACE}/${APP_NAME}:${IMAGE_TAG}
SUCCESSFUL: Job ${env.JOB_NAME} [${env.BUILD_NUMBER}]
Build status: ${env.JOB_NAME} Jenkins post works fine
Build URL: ${env.BUILD_URL}
"""