In Inter-Service Communication and Service Governance for Microservices, we mentioned that Scallop’s current Service Mesh architecture is built on Envoy.

The protagonist of this article is the newcomer: Envoy.

Why We Abandoned Nginx and Chose Envoy

In fact, Scallop is a heavy user of Nginx and has extensive experience deploying, configuring, and tuning it. If possible, we would have preferred to use Nginx as the core of our Service Mesh solution.

But there were a few problems:

  1. Nginx’s reverse proxy does not support HTTP/2 or gRPC.
  2. Unlike Envoy, where almost all network configuration can be modified dynamically through the xDS API, Nginx lacks an effective hot-reload mechanism for configuration (short of in-depth custom development or continuous reloads).
  3. Many of Nginx’s microservice-oriented features are only available in the commercial Nginx Plus.

Envoy, by contrast, was designed with Service Mesh in mind from the start. After some investigation, we decided to take the plunge with Envoy.

What Is Envoy

We quote from the official website:

Envoy is an L7 proxy and communication bus designed for large modern service oriented architectures. The project was born out of the belief that: “The network should be transparent to applications. When network and application problems do occur it should be easy to determine the source of the problem.”

Envoy’s core features/selling points

  • Non-intrusive architecture: Envoy runs alongside the application service and transparently proxies the traffic the service sends and receives. The application only needs to talk to Envoy and does not need to know where other microservices are located.
  • Implemented in modern C++11, with excellent performance.
  • L3/L4 filter architecture: at its core, Envoy is an L3/L4 proxy; a pluggable chain of network filters performs TCP/UDP-level tasks such as TCP forwarding, TLS authentication, etc.
  • HTTP L7 filter architecture: HTTP is such an important application-layer protocol in modern applications that Envoy ships with a central built-in filter: http_connection_manager. http_connection_manager is special and complex in its own right: it supports rich configuration and is itself a filter architecture, using a chain of HTTP filters to implement HTTP-level tasks such as routing, redirection, CORS support, and so on.
  • HTTP/2 as a first-class citizen: Envoy supports both HTTP/1.1 and HTTP/2, with HTTP/2 recommended.
  • gRPC support: thanks to its solid HTTP/2 support, Envoy can easily proxy and load-balance gRPC traffic.
  • Service discovery: supports multiple service discovery solutions, including DNS and EDS.
  • Health checking: a built-in health check subsystem.
  • Advanced load balancing: beyond basic load balancing, Envoy supports advanced schemes such as Automatic Retries, Circuit Breaking, and Global Rate Limiting (the latter backed by an external rate limit service).
  • Tracing: integrates easily with OpenTracing-compatible systems to trace requests.
  • Statistics and monitoring: a built-in stats module that integrates easily with monitoring solutions such as Prometheus/statsd.
  • Dynamic configuration: configuration can be adjusted through dynamic configuration APIs without restarting the Envoy service.
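To make the circuit-breaking feature above concrete, here is a sketch of how circuit-breaking thresholds might be attached to a cluster in the v2 API. The threshold values are illustrative defaults, not recommendations from this article:

```yaml
clusters:
- name: some_service
  connect_timeout: 0.25s
  type: STATIC
  lb_policy: ROUND_ROBIN
  # Illustrative circuit-breaking thresholds; tune for your own workload.
  circuit_breakers:
    thresholds:
    - priority: DEFAULT
      max_connections: 1024       # cap on concurrent upstream connections
      max_pending_requests: 1024  # cap on queued requests
      max_retries: 3              # cap on concurrent retries
  hosts: [{ socket_address: { address: 127.0.0.2, port_value: 1234 }}]
```

When any threshold is hit, Envoy fails new requests fast instead of piling load onto a struggling upstream.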

Core Terminology Explanation

Host

Here, a Host is a service instance uniquely identified by its IP and port.

Downstream

A Host that sends requests to Envoy is a Downstream, e.g. a gRPC client.

Upstream

A Host that receives requests from Envoy is an Upstream, e.g. a gRPC server.

Listener

An address Envoy listens on, e.g. IP:port, a Unix domain socket, etc.

Cluster

A group of upstream Hosts serving the same function is called a Cluster, similar to a Kubernetes Service or an Nginx upstream.

Http Route Table

HTTP routing rules: for example, which Cluster a request is forwarded to, based on the request’s domain name and which rules its Path matches.
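As a sketch, a v2 route table might match on domain and path prefix and dispatch to different clusters. The service and cluster names here are hypothetical:

```yaml
route_config:
  name: example_route
  virtual_hosts:
  - name: example_service
    domains: ["api.example.com"]   # matched against the request's Host/authority
    routes:
    - match: { prefix: "/users" }  # path-prefix match, tried in order
      route: { cluster: user_service }
    - match: { prefix: "/" }       # catch-all fallback
      route: { cluster: default_service }
```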

Configuration (v2)

The Envoy configuration interface is defined in proto3 (Protocol Buffers v3) and is fully specified in the Data Plane API repository.

Specific configuration files can be written in four formats: YAML, JSON, PB, and PB_TEXT. For readability, we use the YAML format as an example.

Let’s start with a simple example:

Completely static configuration

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 127.0.0.1, port_value: 9901 }

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 127.0.0.1, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["example.com"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: some_service }
          http_filters:
          - name: envoy.router
  clusters:
  - name: some_service
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    hosts: [{ socket_address: { address: 127.0.0.2, port_value: 1234 }}]

In the example above, we configured an Envoy instance listening on 127.0.0.1:10000 that accepts HTTP requests for the domain example.com. All HTTP traffic it receives is forwarded to the service at 127.0.0.2:1234.

The hosts of some_service are fixed here (127.0.0.2:1234), but the actual hosts may change: new hosts may be added (horizontal scaling), or, in a Kubernetes environment, new code may be deployed (Pods change), and so on. Does that mean we have to keep editing the hosts in the cluster by hand? Relax: if that were the case, we wouldn’t be using Envoy.

Dynamically Configuring Cluster Hosts via EDS

The first example of dynamic configuration is dynamically configuring the hosts of a cluster.

Let’s assume we have a service A at 127.0.0.3:5678 that returns the following proto3-encoded response:

version_info: "0"
resources:
- "@type": type.googleapis.com/envoy.api.v2.ClusterLoadAssignment
  cluster_name: some_service
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: 127.0.0.2
            port_value: 1234

We then change the Envoy configuration to:

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 127.0.0.1, port_value: 9901 }

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 127.0.0.1, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["example.com"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: some_service }
          http_filters:
          - name: envoy.router
  clusters:
  - name: some_service
    connect_timeout: 0.25s
    lb_policy: ROUND_ROBIN
    type: EDS
    eds_cluster_config:
      eds_config:
        api_config_source:
          api_type: GRPC
          cluster_names: [xds_cluster]
  - name: xds_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    hosts: [{ socket_address: { address: 127.0.0.3, port_value: 5678 }}]

In the new configuration, the hosts of the cluster some_service can be updated dynamically: they are determined by the EDS response, i.e. EDS returns the host list for some_service. The address of the EDS service itself is defined by the xds_cluster cluster, which points to the service A we assumed at the beginning.

Envoy listeners, clusters, HTTP routes, and so on can be configured dynamically in the same way as EDS, via LDS, CDS, and RDS; together these are collectively called xDS. A complete dynamic configuration example follows:

Fully Dynamic Configuration Based on xDS

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 127.0.0.1, port_value: 9901 }

dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      cluster_names: [xds_cluster]
  cds_config:
    api_config_source:
      api_type: GRPC
      cluster_names: [xds_cluster]

static_resources:
  clusters:
  - name: xds_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    hosts: [{ socket_address: { address: 127.0.0.3, port_value: 5678 }}]

Here we assume that 127.0.0.3:5678 provides the full set of xDS services:

LDS:

version_info: "0"
resources:
- "@type": type.googleapis.com/envoy.api.v2.Listener
  name: listener_0
  address:
    socket_address:
      address: 127.0.0.1
      port_value: 10000
  filter_chains:
  - filters:
    - name: envoy.http_connection_manager
      config:
        stat_prefix: ingress_http
        codec_type: AUTO
        rds:
          route_config_name: local_route
          config_source:
            api_config_source:
              api_type: GRPC
              cluster_names: [xds_cluster]
        http_filters:
        - name: envoy.router

RDS:

version_info: "0"
resources:
- "@type": type.googleapis.com/envoy.api.v2.RouteConfiguration
  name: local_route
  virtual_hosts:
  - name: local_service
    domains: ["*"]
    routes:
    - match: { prefix: "/" }
      route: { cluster: some_service }

CDS:

version_info: "0"
resources:
- "@type": type.googleapis.com/envoy.api.v2.Cluster
  name: some_service
  connect_timeout: 0.25s
  lb_policy: ROUND_ROBIN
  type: EDS
  eds_cluster_config:
    eds_config:
      api_config_source:
        api_type: GRPC
        cluster_names: [xds_cluster]

EDS:

version_info: "0"
resources:
- "@type": type.googleapis.com/envoy.api.v2.ClusterLoadAssignment
  cluster_name: some_service
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: 127.0.0.2
            port_value: 1234
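As an aside, the separate xDS subscriptions can also be aggregated onto a single gRPC stream via ADS (Aggregated Discovery Service), which preserves update ordering across resource types. A sketch of the v2 bootstrap, assuming the same xds_cluster as above:

```yaml
dynamic_resources:
  # All xDS resources are fetched over one aggregated gRPC stream.
  ads_config:
    api_type: GRPC
    cluster_names: [xds_cluster]
  # Point LDS and CDS at the ADS stream instead of separate subscriptions.
  lds_config: { ads: {} }
  cds_config: { ads: {} }
```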

Conclusion

As a quick primer on Envoy, we have given an overview of Envoy’s core features, terminology, and configuration. For more advanced customization, check out Envoy’s official documentation.

At Scallop, Envoy serves as the core component of our Service Mesh and carries all direct call traffic between microservices.

Each of our microservices corresponds to one Cluster, and the Route Table routes requests based on their authority (Host header).

We use Envoy stats for performance monitoring of service calls, Envoy access logs for traffic log collection, and rate limiting for service protection.
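For reference, file access logging of the kind mentioned above can be enabled on http_connection_manager in the v2 API; the log path below is illustrative:

```yaml
- name: envoy.http_connection_manager
  config:
    stat_prefix: ingress_http
    # Write one line per request to a local file (path is illustrative).
    access_log:
    - name: envoy.file_access_log
      config:
        path: /var/log/envoy/access.log
```

Stats, meanwhile, can be scraped from the admin interface (the admin listener configured on port 9901 in the examples above).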