Triton overview

As cloud native technology is adopted by more and more enterprises, Kubernetes and containers have entered the mainstream market and become the new interface of cloud computing. They help enterprises fully enjoy the advantages of cloud native, accelerate application iteration and innovation, and reduce development and maintenance costs. However, a number of issues still need to be addressed in the transition to a cloud-native architecture, such as the complexity of native Kubernetes, lifecycle management of containerized applications, and the stability challenges services face as they migrate to container-based infrastructure.

Triton, an open source cloud native application release component, was created to help enterprise applications land safely during containerization. Using OpenKruise as its container application automation engine, Triton extends and enhances application workloads and brings a comprehensive upgrade to the existing continuous delivery system. It not only solves application lifecycle management across development, deployment, operation, and maintenance, but also improves continuous delivery efficiency through microservice governance.

For a detailed introduction to Triton's design and implementation principles, please refer to this article. This article introduces Triton's core features through source-code installation, debugging, and a demo application release walkthrough, as well as how to quickly get started with using and developing it. Finally, it introduces Triton's roadmap. Due to time constraints, one-click installation and Helm installation are still under development and will be provided in the official release.

Core capabilities

The v0.1.0 open source version has been refactored: it temporarily removes dependencies on specific network solutions and microservice architectures and abstracts the concept of an application model, making it more universal. The core features are as follows:

  • Fully hosted in the Kubernetes cluster, making component installation, maintenance, and upgrades easy;

  • Supports using the API and a kubectl plugin (planned) to create, deploy, and upgrade applications, with support for single-batch, multi-batch, and canary releases;

  • Provides lifecycle management from creation to running, including publishing, starting, stopping, scaling out, scaling in, and deleting applications, and can easily manage the delivery of thousands of application instances;

  • Provides a number of APIs to simplify deployment operations, such as Next, Cancel, Pause, Resume, Scale, Get, and Restart.

Operation guide

Before you start, check that the following prerequisites are met in your current environment:

1. Ensure that the environment can connect to kube-apiserver;

2. Ensure that OpenKruise has been installed in the current Kubernetes cluster. If OpenKruise is not installed, refer to its documentation;

3. Ensure you have a Golang development environment, then fork and git clone the code and run make install to install the DeployFlow CRD;

4. The grpcurl tool is required to call the gRPC API. Refer to the grpcurl documentation for installation. A consolidated sketch of these steps follows this list.
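For reference, a minimal sketch of the prerequisite steps above. The OpenKruise Helm chart location and the grpcurl install command are assumptions taken from those projects' own documentation, not from Triton:

# Install OpenKruise into the current cluster (chart location assumed from the OpenKruise docs)
helm repo add openkruise https://openkruise.github.io/charts/
helm install kruise openkruise/kruise

# Install grpcurl (one common method; see the grpcurl docs for alternatives)
go install github.com/fullstorydev/grpcurl/cmd/grpcurl@latest

# Fork the project, then clone it and install the DeployFlow CRD
git clone https://github.com/triton-io/triton.git
cd triton
make install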

Create a DeployFlow to publish the Nginx demo application

Run the DeployFlow controller

Go to the project root directory and run make run:
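A minimal sketch, assuming the repository was cloned into a triton directory as in the prerequisites above:

cd triton
# Runs the controller locally against the cluster in your current kubeconfig;
# the grpcurl commands later in this article assume its gRPC API listens on localhost:8099
make run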

Create a DeployFlow to get the application release started:

kubectl apply -f https://github.com/triton-io/triton/raw/main/docs/tutorial/v1/nginx-deployflow.yaml

A DeployFlow resource and the application's corresponding Service are created. See the YAML file for the detailed DeployFlow definition.

apiVersion: apps.triton.io/v1alpha1
kind: DeployFlow
metadata:
  labels:
    app: "12122"
    app.kubernetes.io/instance: 12122-sample-10010
    app.kubernetes.io/name: deploy-demo-hello
    group: "10010"
    managed-by: triton-io
  name: 12122-sample-10010-df
  namespace: default
spec:
  action: create
  application:
    appID: 12122
    appName: deploy-demo-hello
    groupID: 10010
    instanceName: 12122-sample-10010
    replicas: 3
    selector:
      matchLabels:
        app: "12122"
        app.kubernetes.io/instance: 12122-sample-10010
        app.kubernetes.io/name: deploy-demo-hello
        group: "10010"
        managed-by: triton-io
    template:
      metadata: {}
      spec:
        containers:
          - image: nginx:latest
            name: 12122-sample-10010-container
            ports:
              - containerPort: 80
                protocol: TCP
            resources: {}
  updateStrategy:
    batchSize: 1
    batchIntervalSeconds: 10
    canary: 1 # the number of canary batches
    mode: auto # the mode is auto after the canary batch

As you can see, the application instance being released is 12122-sample-10010, the number of replicas is 3, and the batchSize is 1. There is one canary batch, also of size 1. The release mode is auto, which means this release will only pause between the canary batch and the normal batches; the remaining two batches are triggered automatically at batchIntervalSeconds intervals.

Check the DeployFlow status

As you can see, we created a DeployFlow resource named 12122-sample-10010-df. The fields shown indicate that this release is split into 3 batches, the current batchSize is 1, and the number of updated and finished replicas is 0.

A few dozen seconds after starting the DeployFlow, check its status field:

kubectl get df 12122-sample-10010-df -o yaml

status:
  availableReplicas: 0
  batches: 3
  conditions:
  - batch: 1
    batchSize: 1
    canary: true
    failedReplicas: 0
    finishedAt: null
    phase: Smoked
    pods:
    - ip: 172.31.230.23
      name: 12122-sample-10010-2mwkt
      phase: ContainersReady
      port: 80
      pullInStatus: ""
      pulledInAt: null
      startedAt: "2021-09-13T12:49:04Z"
  failedReplicas: 0
  finished: false
  finishedAt: null
  finishedBatches: 0
  finishedReplicas: 0
  paused: false
  phase: BatchStarted
  pods:
  - 12122-sample-10010-2mwkt
  replicas: 1
  replicasToProcess: 3
  startedAt: "2021-09-13T12:49:04Z"
  updateRevision: 12122-sample-10010-6ddf9b7cf4
  updatedAt: "2021-09-13T12:49:21Z"
  updatedReadyReplicas: 0
  updatedReplicas: 1

We can see that the canary batch is now activated, and the pod in that batch is 12122-sample-10010-2mwkt. We can also see the pull-in status and pull-in time of the pods in the current batch.
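To cross-check from the pod side, you can also list the pods of this instance by the labels defined in the DeployFlow YAML above (standard kubectl, nothing Triton-specific):

kubectl get pods -l app.kubernetes.io/instance=12122-sample-10010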

Pull the application into traffic

We can check the status of the Service before doing this:

kubectl describe svc sample-12122-svc

As you can see, the Service's Endpoints are still empty: pod 12122-sample-10010-2mwkt is only in the ContainersReady state and has not been pulled into traffic yet.

Name:               sample-12122-svc
Namespace:          default
Labels:             app=12122
                    app.kubernetes.io/instance=12122-sample-10010
                    app.kubernetes.io/name=deploy-demo-hello
                    group=10010
                    managed-by=triton-io
Annotations:        <none>
Selector:           app.kubernetes.io/instance=12122-sample-10010,app.kubernetes.io/name=deploy-demo-hello,app=12122,group=10010,managed-by=triton-io
Type:               ClusterIP
IP Families:        <none>
IP:                 10.22.6.154
IPs:                <none>
Port:               web  80/TCP
TargetPort:         80/TCP
Endpoints:
Session Affinity:   None
Events:             <none>

Then we perform a bake operation by calling the Next API. The pod state changes from ContainersReady to Ready, and the pod is mounted to the Service's Endpoints:

grpcurl --plaintext -d '{"deploy":{"name":"12122-sample-10010-df","namespace":"default"}}' localhost:8099 deployflow.DeployFlow/Next

Checking the status of the DeployFlow, Service, and CloneSet again shows that the pod is mounted into the Endpoints and that the updatedReadyReplicas field of the DeployFlow has changed to 1: the canary batch has entered the Baking phase. If the application is working well at this point, we repeat the Next operation above to move the DeployFlow to the Baked stage, indicating that the batch has been baked and application traffic is normal.
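A quick way to verify the pull-in, reusing the Service name from the earlier step, is to watch its Endpoints directly:

kubectl get endpoints sample-12122-svc
# after the bake step, the canary pod's IP (172.31.230.23 in this walkthrough) should be listed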

Rollout operation

After the canary batch reaches the Baked stage, the Next operation starts the normal batch release that follows. Since the application's replica count is set to 3, after subtracting the one canary replica there are still 2 replicas left; with a batchSize of 1, the remaining normal batches are released in two batches, triggered 10 seconds apart.

grpcurl --plaintext -d '{"deploy":{"name":"12122-sample-10010-df","namespace":"default"}}' localhost:8099 deployflow.DeployFlow/Next

Finally, the application release completes and the DeployFlow status shows Success.

Looking at the Service's Endpoints again, you can see that all three replicas of this release have been mounted.
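A sketch of that check, again using the Service name from earlier; the three pod IPs match those reported in the DeployFlow status shown at the end of this article, while the AGE column is illustrative:

kubectl get endpoints sample-12122-svc
NAME               ENDPOINTS                                             AGE
sample-12122-svc   172.31.226.94:80,172.31.227.215:80,172.31.230.23:80   20m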

Reviewing the entire release process again, this can be summarized as the following state flow diagram:
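As a rough text sketch, limited to the phases actually observed in this walkthrough:

Per batch:  Smoked -> (Next) -> Baking -> (Next) -> Baked
Overall:    BatchStarted -> canary batch -> normal batches -> Success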

Pause/resume a DeployFlow

To suspend a DeployFlow during a release, perform Pause:

grpcurl --plaintext -d '{"deploy":{"name":"12122-sample-10010-df","namespace":"default"}}' localhost:8099 deployflow.DeployFlow/Pause

Perform Resume when you are ready to continue:

grpcurl --plaintext -d '{"deploy":{"name":"12122-sample-10010-df","namespace":"default"}}' localhost:8099 deployflow.DeployFlow/Resume

Cancel this release

If a startup failure or pull-in failure occurs during a release, you can cancel the release with the following operation:

grpcurl --plaintext -d '{"deploy":{"name":"12122-sample-10010-df","namespace":"default"}}' localhost:8099 deployflow.DeployFlow/Cancel

Start a scale-up/scale-down DeployFlow

You can also use auto or manual mode to split a scale-up or scale-down into multiple batches. When a CloneSet scales down, users sometimes prefer to remove specific pods; the podsToDelete field lets you specify which pods to delete:

kubectl get pod | grep 12122-sample
12122-sample-10010-2mwkt                      1/1     Running             0          29m
12122-sample-10010-hgdp6                      1/1     Running             0          9m55s
12122-sample-10010-zh98f                      1/1     Running             0          10m

When scaling down, we specify the pod to be removed as 12122-sample-10010-zh98f:

grpcurl --plaintext -d '{"instance":{"name":"12122-sample-10010","namespace":"default"},"replicas":2,"strategy":{"podsToDelete":["12122-sample-10010-zh98f"],"batchSize":"1","batches":"1","batchIntervalSeconds":10}}' \
localhost:8099 application.Application/Scale
{
  "deployName": "12122-sample-10010-kvn6b"
}

❯ kubectl get pod | grep 12122-sample
12122-sample-10010-2mwkt                      1/1     Running             0          29m
12122-sample-10010-zh98f                      1/1     Running             0          11m

The CloneSet is scaled down to two replicas, and the deleted pod is the one specified. This capability comes from the enhanced stateless workload CloneSet in OpenKruise; for a detailed feature description, refer to the OpenKruise documentation.
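Under the hood, this maps to the scaleStrategy.podsToDelete field on the CloneSet, as described in the OpenKruise documentation. A minimal sketch of what the managed CloneSet effectively ends up with (the CloneSet name is illustrative; Triton manages the real one):

apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
metadata:
  name: 12122-sample-10010   # illustrative name
spec:
  replicas: 2
  scaleStrategy:
    podsToDelete:
    - 12122-sample-10010-zh98f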

As part of its operation API, Triton also provides a Get method to fetch the pod information of the current DeployFlow in real time:

grpcurl --plaintext -d '{"deploy":{"name":"12122-sample-10010-df","namespace":"default"}}' localhost:8099 deployflow.DeployFlow/Get
{
  "deploy": {
    "namespace": "default",
    "name": "12122-sample-10010-df",
    "appID": 12122,
    "groupID": 10010,
    "appName": "deploy-demo-hello",
    "instanceName": "12122-sample-10010",
    "replicas": 3,
    "action": "create",
    "availableReplicas": 3,
    "updatedReplicas": 3,
    "updatedReadyReplicas": 3,
    "updateRevision": "6ddf9b7cf4",
    "conditions": [
      {
        "batch": 1,
        "batchSize": 1,
        "canary": true,
        "phase": "Baked",
        "pods": [
          {
            "name": "12122-sample-10010-2mwkt",
            "ip": "172.31.230.23",
            "port": 80,
            "phase": "Ready",
            "pullInStatus": "PullInSucceeded"
          }
        ],
        "startedAt": "2021-09-13T12:49:04Z",
        "finishedAt": "2021-09-13T13:07:43Z"
      },
      {
        "batch": 2,
        "batchSize": 1,
        "phase": "Baked",
        "pods": [
          {
            "name": "12122-sample-10010-zh98f",
            "ip": "172.31.226.94",
            "port": 80,
            "phase": "Ready",
            "pullInStatus": "PullInSucceeded"
          }
        ],
        "startedAt": "2021-09-13T13:07:46Z",
        "finishedAt": "2021-09-13T13:08:03Z"
      },
      {
        "batch": 3,
        "batchSize": 1,
        "phase": "Baked",
        "pods": [
          {
            "name": "12122-sample-10010-hgdp6",
            "ip": "172.31.227.215",
            "port": 80,
            "phase": "Ready",
            "pullInStatus": "PullInSucceeded"
          }
        ],
        "startedAt": "2021-09-13T13:08:15Z",
        "finishedAt": "2021-09-13T13:08:45Z"
      }
    ],
    "phase": "Success",
    "finished": true,
    "batches": 3,
    "batchSize": 1,
    "finishedBatches": 3,
    "finishedReplicas": 3,
    "startedAt": "2021-09-13T12:49:04Z",
    "finishedAt": "2021-09-13T13:08:45Z",
    "mode": "auto",
    "batchIntervalSeconds": 10,
    "canary": 1,
    "updatedAt": "2021-09-13T13:08:45Z"
  }
}

Roadmap

These are the core capabilities Triton provides. For our infrastructure team, Triton is not just an open source project; it is a real, down-to-earth cloud-native continuous delivery project. Through open source, we hope Triton will enrich the continuous delivery tooling of the cloud native community, enabling more developers and enterprises to build modern and efficient cloud native PaaS solutions.

Open source is just a small step, and we will continue to push Triton to improve in the future, including but not limited to the following:

  • Support custom registries. Triton currently uses the Kubernetes native Service as the application registry, but as far as we know, many enterprises use custom registries, such as Spring Cloud's Nacos;

  • Provide a Helm installation mode;

  • Improve the REST & gRPC APIs and their documentation;

  • Continue to iterate based on internal and external user needs. Now that the project is open source, we will also iterate based on developer needs.

You are welcome to contribute to the Triton community by submitting issues and PRs to Triton. We look forward to more developers joining us, and to Triton helping more and more companies quickly build cloud-native continuous delivery platforms. If your enterprise or team is interested, we can provide dedicated technical support and communication; you are welcome to join the group for consultation.

Links

Project address: github.com/triton-io/t…

Communication group