Linkerd 2.10 series

  • Linkerd v2.10 Service Mesh
  • Deploying a Service Mesh on Tencent Cloud Kubernetes — Linkerd2 & Traefik2 with the emojivoto application
  • Learning the basic features of Linkerd 2.10 and stepping into the Service Mesh era
  • Adding your service to Linkerd
  • Automated canary releases
  • Automatically rotating control plane TLS and webhook TLS credentials
  • How to configure an external Prometheus instance
  • Configuring proxy concurrency
  • Configuring retries
  • Configuring timeouts
  • Control plane debug endpoints
  • Customizing Linkerd configuration with Kustomize
  • Distributed tracing with Linkerd
  • Debugging 502s
  • Debugging HTTP applications with per-route metrics
  • Debugging gRPC applications with request tracing
  • Exporting metrics
  • Exposing the dashboard
  • Generating your own mTLS root certificates
  • Getting per-route metrics
  • Injecting faults (chaos engineering)
  • Graceful pod shutdown
  • Ingress traffic
  • Installing multicluster components
  • Installing Linkerd
  • Installing Linkerd with Helm
  • Linkerd and Pod Security Policies (PSP)
  • Manually rotating control plane TLS credentials
  • Modifying the proxy log level
  • Multicluster communication
  • Using GitOps with Linkerd and Argo CD
  • Using the debug sidecar to inject a debug container and capture network packets

Linkerd 2.10 (Chinese translation)

  • linkerd.hacker-linner.com

Service profiles provide Linkerd with additional information about a service and how requests to that service should be handled.

When the Linkerd proxy receives an HTTP (not HTTPS) request, it identifies the destination service of the request. If a service profile exists for that destination service, it is used to provide per-route metrics, retries, and timeouts.

The destination service of a request is determined by taking the value of the first of these headers that exists: l5d-dst-override, :authority, and Host. The port component, if present, is stripped. This value is then mapped to a fully qualified DNS name. When that name matches the name of a service profile in the sender's or receiver's namespace, Linkerd uses that service profile to provide per-route metrics, retries, and timeouts.
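To make this concrete, here is a minimal sketch of what such a service profile can look like. The route, retry, and timeout values below are illustrative only; the service and route names are borrowed from the webapp example used later in this article.

apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: webapp.default.svc.cluster.local  # fully qualified DNS name of the service
  namespace: default
spec:
  routes:
  - name: POST /books/{id}/edit
    condition:
      method: POST
      pathRegex: /books/[^/]*/edit
    isRetryable: true   # let Linkerd retry failed requests on this route
    timeout: 300ms      # fail the request if no response arrives within 300ms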

Sometimes you may need to define a service profile for a service that lives in a namespace you do not control. To do this, create the service profile as usual, but set its namespace to the namespace of the pods that call the service. When the Linkerd proxy sends requests to a service, a service profile in the source namespace takes precedence over a service profile in the destination namespace.
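For example, assuming your calling pods run in a namespace called my-app and the service you do not control is books in a namespace called third-party (both names are hypothetical), the profile is still named after the target service's fully qualified DNS name but is created in my-app:

apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: books.third-party.svc.cluster.local  # FQDN of the service you do not control
  namespace: my-app                          # namespace of the calling pods
spec:
  routes:
  - name: GET /books
    condition:
      method: GET
      pathRegex: /books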

Your destination service might be an ExternalName service. In this case, use the service's metadata.name and metadata.namespace values to name your ServiceProfile. For example:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com

Use the name my-service.prod.svc.cluster.local for your ServiceProfile.
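In other words, the ServiceProfile for the example above would look something like this (the route shown is only a placeholder):

apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: my-service.prod.svc.cluster.local  # metadata.name + metadata.namespace of the Service
  namespace: prod
spec:
  routes:
  - name: GET /
    condition:
      method: GET
      pathRegex: /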

Note that you currently cannot view the statistics collected for the routes in this ServiceProfile in the web dashboard, but you can get them with the CLI.
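For example, assuming a hypothetical client workload deploy/my-client in the prod namespace calls my-service, the outbound per-route statistics can be queried with something like:

linkerd viz routes deploy/my-client --to svc/my-service -n prod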

For a complete walkthrough, check out the Books Demo.

There are several different ways to create service profiles with the linkerd profile command.

Requests that have been associated with a route will carry an rt_route annotation. To manually verify that requests are being associated correctly, run tap on your own deployment:

linkerd viz tap -o wide <target> | grep req

The output streams, in real time, the requests that deploy/webapp is receiving. A sample:

req id=0:1 proxy=in src=10.1.3.76:57152 dst=10.1.3.74:7000 tls=disabled :method=POST :authority=webapp.default:7000 :path=/books/2878/edit src_res=deploy/traffic src_ns=foobar dst_res=deploy/webapp dst_ns=default rt_route=POST /books/{id}/edit

Conversely, if rt_route is not present, the request was not associated with any route. Try running:

linkerd viz tap -o wide <target> | grep req | grep -v rt_route

Swagger

If your service has an OpenAPI (Swagger) specification, you can use the --open-api flag to generate a service profile from the OpenAPI specification file.

linkerd profile --open-api webapp.swagger webapp

This generates a service profile for the webapp service from the webapp.swagger OpenAPI specification file. The generated service profile can be piped directly into kubectl apply and installed in the service's namespace.

linkerd profile --open-api webapp.swagger webapp | kubectl apply -f -

Protobuf

If your service has a protobuf format, you can use the --proto flag to generate a service profile.

linkerd profile --proto web.proto web-svc

This generates a service profile for the web-svc service from the web.proto format file. The generated service profile can be piped directly into kubectl apply and installed in the service's namespace.
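As with the Swagger example, the output can be piped straight into kubectl apply:

linkerd profile --proto web.proto web-svc | kubectl apply -f -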

Automatic creation

It is common to have neither an OpenAPI specification nor a protobuf format. You can also generate service profiles by watching live traffic. This is based on tap data and is a good way to see what a service profile can do for you. To start the generation process, use the --tap flag:

linkerd viz profile -n emojivoto web-svc --tap deploy/web --tap-duration 10s

This generates a service profile from the deploy/web traffic observed during the 10 seconds the command runs. The generated service profile can be piped directly into kubectl apply and installed in the service's namespace.
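Again, the output can be piped straight into kubectl apply:

linkerd viz profile -n emojivoto web-svc --tap deploy/web --tap-duration 10s | kubectl apply -f -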

Template

In addition to all the ways of creating service profiles automatically, you can also get a template that lets you add routes manually. To generate the template, run:

linkerd profile -n emojivoto web-svc --template

This generates a service profile template with examples that you can update by hand. Once you have edited the template, install it into the service's namespace on the cluster with kubectl apply.
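A possible workflow (the file name here is just an illustration):

linkerd profile -n emojivoto web-svc --template > web-svc-profile.yaml
# edit web-svc-profile.yaml to add your routes, then:
kubectl apply -f web-svc-profile.yaml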