Welcome to my GitHub

Github.com/zq2599/blog…

Content: a categorized index of all my original articles plus their companion source code, covering Java, Docker, Kubernetes, DevOps, and more.

Links to articles

  1. Kubebuilder part 1: preparation
  2. Kubebuilder part 2: first experience
  3. Kubebuilder part 3: a quick overview of the basics
  4. Kubebuilder part 4: operator requirements and design
  5. Kubebuilder part 5: operator coding
  6. Kubebuilder part 6: build, deploy, run
  7. Kubebuilder part 7: webhook
  8. Kubebuilder part 8: additional notes

Overview of this article

  • This is the sixth installment of the "KubeBuilder in Action" series. The coding was finished in the previous articles; now it is time to verify that everything works end to end. Make sure your Docker and Kubernetes environments are healthy, then complete the following steps:
  1. Deploy the CRD
  2. Run the controller locally
  3. Create an ElasticWeb resource object from a YAML file
  4. Use logs and kubectl commands to verify that ElasticWeb works properly
  5. Access the web page from a browser to verify that the service is up
  6. Modify singlePodQPS and check whether ElasticWeb adjusts the pod count automatically
  7. Modify totalQPS and check whether ElasticWeb adjusts the pod count automatically
  8. Delete the ElasticWeb object and confirm that the associated Service and Deployment are removed automatically
  9. Build the controller image, run the controller in Kubernetes, and verify all of the above again
  • A seemingly simple round of deployment and verification adds up to quite a few steps. All right, let's get started;

Deploy the CRD

  • To deploy the CRD to Kubernetes, open a console in the directory containing the Makefile and run make install:
zhaoqin@zhaoqindeMBP-2 elasticweb % make install
/Users/zhaoqin/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
kustomize build config/crd | kubectl apply -f -
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/elasticwebs.elasticweb.com.bolingcavalry configured
  • As the output shows, make install actually uses kustomize to merge the YAML resources under config/crd and apply them to Kubernetes.

  • Run kubectl api-versions to verify that the CRD was deployed successfully:

zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl api-versions|grep elasticweb
elasticweb.com.bolingcavalry/v1

Run the controller locally

  • First, verify the controller in the simplest possible way: the MacBook is my development environment, and I can run the controller code locally straight from the Makefile in the elasticweb project:

  • Go to the directory containing the Makefile and run make run to compile and start the controller:
zhaoqin@zhaoqindeMBP-2 elasticweb % pwd
/Users/zhaoqin/github/blog_demos/kubebuilder/elasticweb
zhaoqin@zhaoqindeMBP-2 elasticweb % make run
/Users/zhaoqin/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
/Users/zhaoqin/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
go run ./main.go
2021-02-20T20:46:16.774+0800 INFO controller-runtime.metrics metrics server is starting to listen {"addr": ":8080"}
2021-02-20T20:46:16.774+0800 INFO setup starting manager
2021-02-20T20:46:16.775+0800 INFO controller-runtime.controller Starting EventSource {"controller": "elasticweb", "source": "kind source: /, Kind="}
2021-02-20T20:46:16.776+0800 INFO controller-runtime.manager starting metrics server {"path": "/metrics"}
2021-02-20T20:46:16.881+0800 INFO controller-runtime.controller Starting Controller {"controller": "elasticweb"}
2021-02-20T20:46:16.881+0800 INFO controller-runtime.controller Starting workers {"controller": "elasticweb", "worker count": 1}

Create an ElasticWeb resource object

  • The elasticweb controller is now up and running; next, create an ElasticWeb resource object from a YAML file.

  • Under the config/samples directory, KubeBuilder generated the demo file elasticweb_v1_elasticweb.yaml for us, but its spec does not contain the four fields we defined, so change the file to the following:

apiVersion: v1
kind: Namespace
metadata:
  name: dev
  labels:
    name: dev
---
apiVersion: elasticweb.com.bolingcavalry/v1
kind: ElasticWeb
metadata:
  namespace: dev
  name: elasticweb-sample
spec:
  # Add fields here
  image: tomcat:8.0.18-jre8
  port: 30003
  singlePodQPS: 500
  totalQPS: 600
  • The parameters above mean the following:
  1. The namespace used is dev
  2. This test deploys Tomcat
  3. The Service exposes the Tomcat service on the host's port 30003
  4. We assume that a single pod can sustain 500 QPS and that the expected external load is 600 QPS
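Given those numbers, the sizing rule the controller applies (described in the earlier coding article) is simply "total QPS divided by single-pod QPS, rounded up". A minimal sketch of that arithmetic, with a function name of my own choosing for illustration:

```go
package main

import (
	"fmt"
	"math"
)

// expectReplicas mirrors the sizing rule described above: pod count is
// the total expected QPS divided by the QPS one pod can sustain,
// rounded up. The function name is mine, for illustration only.
func expectReplicas(totalQPS, singlePodQPS int32) int32 {
	return int32(math.Ceil(float64(totalQPS) / float64(singlePodQPS)))
}

func main() {
	// The scenario in this article: one pod handles 500 QPS, 600 QPS expected.
	fmt.Println(expectReplicas(600, 500)) // prints 2
}
```

With the values above, 600/500 rounds up to 2 pods (1000 QPS of capacity); later in this article, raising singlePodQPS to 800 yields 1 pod, and raising totalQPS to 2600 yields 4.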
  • Run kubectl apply -f config/samples/elasticweb_v1_elasticweb.yaml to create the elasticweb instance in Kubernetes:
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl apply -f config/samples/elasticweb_v1_elasticweb.yaml
namespace/dev created
elasticweb.elasticweb.com.bolingcavalry/elasticweb-sample created
  • Switch to the controller window and you will find plenty of new logs. They show that the Reconcile method ran twice, and that resources such as the Deployment and Service were created during the first run:
2021-02-21T10:03:57.108+0800 INFO controllers.ElasticWeb 1. start reconcile logic {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.108+0800 INFO controllers.ElasticWeb 3. Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [nil] {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.210+0800 INFO controllers.ElasticWeb 4. deployment not exists {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.313+0800 INFO controllers.ElasticWeb set reference {"func": "createService"}
2021-02-21T10:03:57.313+0800 INFO controllers.ElasticWeb start create service {"func": "createService"}
2021-02-21T10:03:57.364+0800 INFO controllers.ElasticWeb create service success {"func": "createService"}
2021-02-21T10:03:57.365+0800 INFO controllers.ElasticWeb expectReplicas [2] {"func": "createDeployment"}
2021-02-21T10:03:57.365+0800 INFO controllers.ElasticWeb set reference {"func": "createDeployment"}
2021-02-21T10:03:57.365+0800 INFO controllers.ElasticWeb start create deployment {"func": "createDeployment"}
2021-02-21T10:03:57.382+0800 INFO controllers.ElasticWeb create deployment success {"func": "createDeployment"}
2021-02-21T10:03:57.382+0800 INFO controllers.ElasticWeb singlePodQPS [500], replicas [2], realQPS [1000] {"func": "updateStatus"}
2021-02-21T10:03:57.407+0800 DEBUG controller-runtime.controller Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800 INFO controllers.ElasticWeb 1. start reconcile logic {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800 INFO controllers.ElasticWeb 3. Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [1000] {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800 INFO controllers.ElasticWeb expectReplicas [2], realReplicas [2] {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800 INFO controllers.ElasticWeb 10. return now {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800 DEBUG controller-runtime.controller Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
  • Use kubectl get commands to check the resources; everything is as expected:
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl apply -f config/samples/elasticweb_v1_elasticweb.yaml
namespace/dev created
elasticweb.elasticweb.com.bolingcavalry/elasticweb-sample created
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get elasticweb -n dev
NAME                AGE
elasticweb-sample   35s
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get service -n dev
NAME                TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
elasticweb-sample   NodePort   10.107.177.158   <none>        8080:30003/TCP   41s
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get deployment -n dev
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
elasticweb-sample   2/2     2            2           46s
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
NAME                                 READY   STATUS    RESTARTS   AGE
elasticweb-sample-56fc5848b7-l5thk   1/1     Running   0          50s
elasticweb-sample-56fc5848b7-lqjk5   1/1     Running   0          50s

Verify the service in a browser

  • This deployment uses the Tomcat image, so verification is simple: if the default page shows the cat, Tomcat started successfully. My Kubernetes host's IP address is 192.168.50.75, so I open http://192.168.50.75:30003 in a browser; the familiar Tomcat homepage appears, confirming that the service works normally;

Modify the single-pod QPS

  • Suppose that, thanks to an optimization, a single pod can now serve 800 QPS instead of 500. Let's see whether our operator adjusts automatically (total QPS is still 600, so the pod count should drop from 2 to 1):

  • Add a file named update_single_pod_qps.yaml under config/samples/ with the following contents:

spec:
  singlePodQPS: 800
  • Run the following command to update the single-pod QPS from 500 to 800 (note the --type argument, it matters):
kubectl patch elasticweb elasticweb-sample \
-n dev \
--type merge \
--patch "$(cat config/samples/update_single_pod_qps.yaml)"
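The --type merge flag tells kubectl to send a JSON merge patch, so only the fields present in the file are changed and the rest of the spec is left alone. As a sketch, the JSON body this command effectively sends can be built like this (the helper name is mine, for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildMergePatch builds the JSON body that `kubectl patch --type merge`
// sends for the YAML file above. The field name must match a field of
// the CRD's spec; the helper itself is illustrative, not project code.
func buildMergePatch(field string, value int32) ([]byte, error) {
	patch := map[string]map[string]int32{"spec": {field: value}}
	return json.Marshal(patch)
}

func main() {
	b, err := buildMergePatch("singlePodQPS", 800)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // prints {"spec":{"singlePodQPS":800}}
}
```

Because it is a merge patch, fields not mentioned (image, port, totalQPS) keep their current values.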
  • The controller log confirms two things: the spec has been updated, and the pod count recalculated from the latest parameters matches expectations;

  • Use kubectl get to check the pods: only one pod remains, as expected:
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev                                                                                       
NAME                                 READY   STATUS    RESTARTS   AGE
elasticweb-sample-56fc5848b7-l5thk   1/1     Running   0          30m
  • Remember to check Tomcat in your browser as well;

Modify the total QPS

  • The external QPS also changes frequently, and our operator needs to adjust the pod count promptly as the total QPS changes, to keep overall service quality steady. Next, modify the total QPS and see whether the operator reacts:

  • Add a file named update_total_qps.yaml under the config/samples/ directory with the following contents:

spec:
  totalQPS: 2600
  • Run the following command to update the total QPS from 600 to 2600 (again, note the --type argument):
kubectl patch elasticweb elasticweb-sample \
-n dev \
--type merge \
--patch "$(cat config/samples/update_total_qps.yaml)"
  • The controller log again confirms that the spec has been updated and that the pod count recalculated from the latest parameters matches expectations;

  • Use kubectl get to check the pods: the count has risen to 4, and four pods can support 3200 QPS, which satisfies the current requirement of 2600:
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
NAME                                 READY   STATUS    RESTARTS   AGE
elasticweb-sample-56fc5848b7-8n7tq   1/1     Running   0          8m22s
elasticweb-sample-56fc5848b7-f2lpb   1/1     Running   0          8m22s
elasticweb-sample-56fc5848b7-l5thk   1/1     Running   0          48m
elasticweb-sample-56fc5848b7-q8p5f   1/1     Running   0          8m22s
  • Remember to check Tomcat in your browser as well;
  • You may be thinking that this is a rather low-tech way to adjust the pod count. Well... you're right, it is, but you could build a service that collects the current QPS and then calls client-go to update the ElasticWeb object's totalQPS automatically!
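As a sketch of what such an auto-tuner's policy could look like: the rule below is entirely my assumption (not part of the elasticweb project) and simply adds 20% headroom to the observed request rate; the resulting value would then be written into spec.totalQPS, for example with a merge patch sent via client-go:

```go
package main

import "fmt"

// desiredTotalQPS is a hypothetical tuning policy: take the observed
// request rate and add 20% headroom, so the operator scales pods out
// before traffic actually peaks. Both the policy and the name are
// assumptions made for this sketch.
func desiredTotalQPS(observedQPS int32) int32 {
	return observedQPS + observedQPS/5
}

func main() {
	// With 2000 QPS observed, ask elasticweb for 2400 QPS of capacity.
	// A real tuner would now patch the ElasticWeb object (e.g. via
	// client-go) with a body like {"spec":{"totalQPS":2400}}.
	fmt.Println(desiredTotalQPS(2000)) // prints 2400
}
```

The interesting part is the feedback loop, not the formula: monitor, compute a target, patch the custom resource, and let the operator do the rest.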

Verify deletion

  • The dev namespace currently holds service, deployment, pod, and elasticweb resource objects. To remove them all, it is enough to delete the elasticweb object alone, because the controller code associates the Service and the Deployment with the ElasticWeb object as their owner:
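This cascade works because the controller sets an owner reference when creating the Service and Deployment (the "set reference" lines in the logs above), and Kubernetes garbage collection removes dependents once their owner is gone. Roughly, the generated Deployment's metadata would look like the sketch below; the uid value is invented for illustration:

```yaml
# Illustrative metadata on the generated Deployment (uid is made up).
# The ownerReferences entry is what lets Kubernetes garbage-collect the
# Deployment automatically when the ElasticWeb object is deleted.
metadata:
  name: elasticweb-sample
  namespace: dev
  ownerReferences:
    - apiVersion: elasticweb.com.bolingcavalry/v1
      kind: ElasticWeb
      name: elasticweb-sample
      uid: 00000000-0000-0000-0000-000000000000
      controller: true
      blockOwnerDeletion: true
```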

  • Command to delete ElasticWeb:
kubectl delete elasticweb elasticweb-sample -n dev
  • Check the other resources; they are all deleted automatically:
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl delete elasticweb elasticweb-sample -n dev
elasticweb.elasticweb.com.bolingcavalry "elasticweb-sample" deleted
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
NAME                                 READY   STATUS        RESTARTS   AGE
elasticweb-sample-56fc5848b7-9lcww   1/1     Terminating   0          45s
elasticweb-sample-56fc5848b7-n7p7f   1/1     Terminating   0          45s
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
NAME                                 READY   STATUS        RESTARTS   AGE
elasticweb-sample-56fc5848b7-n7p7f   0/1     Terminating   0          73s
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
No resources found in dev namespace.
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get deployment -n dev
No resources found in dev namespace.
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get service -n dev
No resources found in dev namespace.
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get namespace dev
NAME   STATUS   AGE
dev    Active   97s

Build the image

  1. We have now verified all the controller's functions in the development environment. In a real production environment, however, the controller does not run outside of Kubernetes; it runs inside it, as a pod. Next, let's compile the controller code into a Docker image and run it on Kubernetes;
  2. First, press Ctrl+C in the controller console from earlier to stop the locally running controller;
  3. You need an image registry that Kubernetes can reach, such as a Harbor instance on your LAN or the public hub.docker.com. For convenience I chose hub.docker.com, which requires a registered account;
  4. On the machine where KubeBuilder runs, open a console, run docker login, and enter your hub.docker.com account and password as prompted. After that you can run docker push from this console to push images to hub.docker.com (the network to this registry can be poor, so several login attempts may be needed);
  5. Run the following command to build the Docker image and push it to hub.docker.com; the image is named bolingcavalry/elasticweb:002:
make docker-build docker-push IMG=bolingcavalry/elasticweb:002
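For reference, the image is built from the Dockerfile that KubeBuilder scaffolds into the project root. Reconstructed here as a sketch from the build steps visible in the output below, it is the standard two-stage layout: compile a static binary with Go, then copy it into a minimal distroless base:

```dockerfile
# Build the manager binary (stage names and steps match the build log below)
FROM golang:1.13 as builder
WORKDIR /workspace
# Copy go.mod/go.sum first so dependency download is cached
COPY go.mod go.mod
COPY go.sum go.sum
RUN go mod download
# Copy the sources and build a static binary
COPY main.go main.go
COPY api/ api/
COPY controllers/ controllers/
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -a -o manager main.go

# Minimal runtime image
FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
USER nonroot:nonroot
ENTRYPOINT ["/manager"]
```

The cache-friendly ordering explains why most COPY steps show as CACHED in the log while the go build step dominates the build time.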
  1. Network access to hub.docker.com tends to be very poor, so Docker on the KubeBuilder machine must have a registry mirror configured. If the command above fails with a timeout, retry a few times. The build also downloads many Go module dependencies, so be patient there as well; that step is equally prone to network problems and retries. All of this is why a Harbor service on your LAN is the better option;
  2. After the command succeeds, the output looks like this:
zhaoqin@zhaoqindeMBP-2 elasticweb % make docker-build docker-push IMG=bolingcavalry/elasticweb:002
/Users/zhaoqin/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
/Users/zhaoqin/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
go test ./... -coverprofile cover.out
?       elasticweb      [no test files]
?       elasticweb/api/v1       [no test files]
ok      elasticweb/controllers  8.287s  coverage: 0.0% of statements
docker build . -t bolingcavalry/elasticweb:002
[+] Building 146.8s (17/17) FINISHED
 => [internal] load build definition from Dockerfile                                 0.1s
 => [internal] load .dockerignore                                                    0.0s
 => => transferring context: 2B                                                      0.0s
 => [internal] load metadata for gcr.io/distroless/static:nonroot                    1.8s
 => [internal] load metadata for docker.io/library/golang:1.13                       0.7s
 => [builder 1/9] FROM docker.io/library/golang:1.13@sha256:8ebb6d5a48deef738381b56b1d4cd33d99a5d608e0d03c5fe8dfa3f68d41a1f8  0.0s
 => [stage-1 1/3] FROM gcr.io/distroless/static:nonroot@sha256:b89b98ea1f5bc6e0b48c8be6803a155b2a3532ac6f1e9508a8bcbf99885a9152  0.0s
 => [internal] load build context                                                    0.0s
 => => transferring context: 14.51kB                                                 0.0s
 => CACHED [builder 2/9] WORKDIR /workspace                                          0.0s
 => CACHED [builder 3/9] COPY go.mod go.mod                                          0.0s
 => CACHED [builder 4/9] COPY go.sum go.sum                                          0.0s
 => CACHED [builder 5/9] RUN go mod download                                         0.0s
 => CACHED [builder 6/9] COPY main.go main.go                                        0.0s
 => [builder 7/9] COPY api/ api/                                                     0.1s
 => [builder 8/9] COPY controllers/ controllers/                                     0.1s
 => [builder 9/9] RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -a -o manager main.go  144.5s
 => CACHED [stage-1 2/3] COPY --from=builder /workspace/manager .                    0.0s
 => exporting to image                                                               0.0s
 => => exporting layers                                                              0.0s
 => => writing image sha256:622d30aa44c77d93db4093b005fce86b39d5ba5c6cd29f1fb2accb7e7f9b23b8  0.0s
 => => naming to docker.io/bolingcavalry/elasticweb:002                              0.0s
docker push bolingcavalry/elasticweb:002
The push refers to repository [docker.io/bolingcavalry/elasticweb]
eea77d209b68: Layer already exists
8651333b21e7: Layer already exists
002: digest: sha256:c09ab87f6fce3d85f1fda0ffe75ead9db302a47729aefd3ef07967f2b99273c5 size: 739
  1. Log in to hub.docker.com and you can see that the new image has been uploaded, so any machine with Internet access can now pull it for local use;

  1. With the image ready, run the following command to deploy the controller into the Kubernetes environment:
make deploy IMG=bolingcavalry/elasticweb:002
  1. Next, create an ElasticWeb resource object as before and verify that every resource is created successfully:
zhaoqin@zhaoqindeMBP-2 elasticweb % make deploy IMG=bolingcavalry/elasticweb:002
/Users/zhaoqin/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
cd config/manager && kustomize edit set image controller=bolingcavalry/elasticweb:002
kustomize build config/default | kubectl apply -f -
namespace/elasticweb-system created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/elasticwebs.elasticweb.com.bolingcavalry configured
role.rbac.authorization.k8s.io/elasticweb-leader-election-role created
clusterrole.rbac.authorization.k8s.io/elasticweb-manager-role created
clusterrole.rbac.authorization.k8s.io/elasticweb-proxy-role created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/elasticweb-metrics-reader created
rolebinding.rbac.authorization.k8s.io/elasticweb-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/elasticweb-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/elasticweb-proxy-rolebinding created
service/elasticweb-controller-manager-metrics-service created
deployment.apps/elasticweb-controller-manager created
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl apply -f config/samples/elasticweb_v1_elasticweb.yaml
namespace/dev created
elasticweb.elasticweb.com.bolingcavalry/elasticweb-sample created
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get service -n dev
NAME                TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
elasticweb-sample   NodePort   10.96.234.7   <none>        8080:30003/TCP   13s
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get deployment -n dev
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
elasticweb-sample   2/2     2            2           18s
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
NAME                                 READY   STATUS    RESTARTS   AGE
elasticweb-sample-56fc5848b7-559lw   1/1     Running   0          22s
elasticweb-sample-56fc5848b7-hp4wv   1/1     Running   0          22s
  1. That is not all! There is one more important thing to check: the controller's own logs. First, locate its pod:
zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pods --all-namespaces
NAMESPACE           NAME                                             READY   STATUS    RESTARTS   AGE
dev                 elasticweb-sample-56fc5848b7-559lw               1/1     Running   0          68s
dev                 elasticweb-sample-56fc5848b7-hp4wv               1/1     Running   0          68s
elasticweb-system   elasticweb-controller-manager-5795d4d98d-t6jvc   2/2     Running   0          98s
kube-system         coredns-7f89b7bc75-5pdwc                         1/1     Running   15         20d
kube-system         coredns-7f89b7bc75-nvbvm                         1/1     Running   15         20d
kube-system         etcd-hedy                                        1/1     Running   15         20d
kube-system         kube-apiserver-hedy                              1/1     Running   15         20d
kube-system         kube-controller-manager-hedy                     1/1     Running   16         20d
kube-system         kube-flannel-ds-v84vc                            1/1     Running   22         20d
kube-system         kube-proxy-hlppx                                 1/1     Running   15         20d
kube-system         kube-scheduler-hedy                              1/1     Running   16         20d
test-clientset      client-test-deployment-7677cc9669-kd7l7          1/1     Running   9          9d
test-clientset      client-test-deployment-7677cc9669-kt5rv          1/1     Running   9          9d
  1. The controller runs in the pod elasticweb-controller-manager-5795d4d98d-t6jvc in the elasticweb-system namespace. That pod contains two containers, so you must specify the right one to see the controller's logs:
kubectl logs -f \
elasticweb-controller-manager-5795d4d98d-t6jvc \
-c manager \
-n elasticweb-system
  1. Once again, the familiar business logs appear:
2021-02-21T08:52:27.064Z INFO controllers.ElasticWeb 1. start reconcile logic {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.064Z INFO controllers.ElasticWeb 3. Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [nil] {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.064Z INFO controllers.ElasticWeb 4. deployment not exists {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.064Z INFO controllers.ElasticWeb set reference {"func": "createService"}
2021-02-21T08:52:27.064Z INFO controllers.ElasticWeb start create service {"func": "createService"}
2021-02-21T08:52:27.107Z INFO controllers.ElasticWeb create service success {"func": "createService"}
2021-02-21T08:52:27.107Z INFO controllers.ElasticWeb expectReplicas [2] {"func": "createDeployment"}
2021-02-21T08:52:27.107Z INFO controllers.ElasticWeb set reference {"func": "createDeployment"}
2021-02-21T08:52:27.107Z INFO controllers.ElasticWeb start create deployment {"func": "createDeployment"}
2021-02-21T08:52:27.119Z INFO controllers.ElasticWeb create deployment success {"func": "createDeployment"}
2021-02-21T08:52:27.119Z INFO controllers.ElasticWeb singlePodQPS [500], replicas [2], realQPS [1000] {"func": "updateStatus"}
2021-02-21T08:52:27.198Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
2021-02-21T08:52:27.198Z INFO controllers.ElasticWeb 1. start reconcile logic {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.198Z INFO controllers.ElasticWeb 3. Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [1000] {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.198Z INFO controllers.ElasticWeb expectReplicas [2], realReplicas [2] {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.198Z INFO controllers.ElasticWeb 10. return now {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T08:52:27.198Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
  1. Use a browser to verify that Tomcat started successfully;

Uninstall and clean up

  • When you are done experimenting, you can remove all the resources created above with the following command:
make uninstall
  • At this point, the whole process of designing, developing, deploying, and verifying an operator is complete. I hope this article can serve as a useful reference for your own operator development;

You are not alone: Xinchen's original articles accompany you all the way

  1. Java series
  2. Spring series
  3. Docker series
  4. Kubernetes series
  5. Database and middleware series
  6. Conversation series

Welcome to follow my official account: Programmer Xinchen

Search WeChat for "Programmer Xinchen". I am Xinchen, looking forward to exploring the Java world with you...

Github.com/zq2599/blog…