In the previous post, we explained that, for the sake of performance and stability, we did not adopt Istio, the representative second-generation Service Mesh technology, but instead built our own xDS services directly on top of Envoy.

However, it is still important to understand Istio, which represents the future of Service Mesh. We expect to migrate to a second-generation Service Mesh framework in the near future.

This article presents a brief analysis of Istio's architecture, touching on parts of the source code.

1. Istio architecture

When we introduced Envoy, we saw that dynamic configuration makes it possible to control Envoy's routing, traffic rules, rate limiting, logging, and so on, by implementing services that expose the specific APIs described in the Envoy documentation.
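To make this concrete, below is a minimal Go sketch of such a service. It assumes Envoy's v1 REST-based SDS protocol, in which Envoy polls GET /v1/registration/<service_name> and expects a JSON list of hosts; the service lookup and addresses are made up for illustration and are not Istio code.

package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strings"
)

// host mirrors one entry in an Envoy v1 SDS response.
type host struct {
	IPAddress string `json:"ip_address"`
	Port      int    `json:"port"`
}

func main() {
	// Envoy polls /v1/registration/<service_name> and expects a JSON
	// list of hosts for that service.
	http.HandleFunc("/v1/registration/", func(w http.ResponseWriter, r *http.Request) {
		service := strings.TrimPrefix(r.URL.Path, "/v1/registration/")
		log.Printf("SDS request for %q", service)
		// A real control plane would look the service up in a registry
		// (e.g. Kubernetes); this address is a hard-coded stand-in.
		json.NewEncoder(w).Encode(map[string][]host{
			"hosts": {{IPAddress: "10.0.0.1", Port: 8080}},
		})
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}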

In Istio's terminology, proxies such as Envoy that actually forward and control traffic are called the data plane, while the services that provide the APIs controlling Envoy's behavior are called the control plane.

To illustrate Istio's architecture, we borrow a diagram from the official website (not reproduced here):

The data plane is deployed alongside the microservices in the form of sidecars. Each microservice instance sends and receives requests through its own sidecar; microservices do not communicate with each other directly, but only through the forwarding proxies of their sidecars. The sidecars thus weave the calls between services into a "mesh", which is where the name Service Mesh comes from.

The control plane consists of Pilot, Mixer, and Istio-Auth. In what follows, we take Istio deployed on Kubernetes with Envoy as the data plane as our example.

Pilot is the heart of the control plane and is indispensable. It translates Kubernetes resource information into the xDS APIs (CDS, SDS/EDS, RDS) that Envoy needs for service discovery, and translates user-defined Istio configuration into routing rules (RDS) that Envoy understands.

Mixer implements data collection as well as some additional traffic control. First, the data plane reports request data, structured according to Istio's specification, to Mixer. Mixer then processes the reported data in various ways, such as printing logs, exposing the metrics that Prometheus scrapes for performance monitoring, and enforcing rate limits. Mixer is plugin-based, and each kind of processing is carried out by a configured plug-in.
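Conceptually, every plug-in sees the same reported attributes and handles them in its own way. The following Go sketch illustrates this fan-out design only; the interface is hypothetical and is not Mixer's actual adapter API.

package main

import "log"

// Attributes stands in for the structured request data the data plane
// reports; Mixer's real attribute vocabulary is much richer.
type Attributes map[string]interface{}

// Adapter is a hypothetical plug-in interface: every configured
// plug-in receives the same report and processes it independently.
type Adapter interface {
	Report(attrs Attributes) error
}

type logAdapter struct{}

func (logAdapter) Report(attrs Attributes) error {
	log.Printf("access: %v", attrs) // e.g. print access logs
	return nil
}

// dispatch fans one report out to every configured plug-in.
func dispatch(adapters []Adapter, attrs Attributes) {
	for _, a := range adapters {
		if err := a.Report(attrs); err != nil {
			log.Printf("adapter error: %v", err)
		}
	}
}

func main() {
	dispatch([]Adapter{logAdapter{}}, Attributes{"request.path": "/ping"})
}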

2. Pilot

Take the Kubernetes environment as an example: each Pilot Pod actually contains two containers: discovery and istio-proxy.

2.1 discovery

The discovery container uses the image docker.io/istio/pilot:0.4.0 and is the real provider of Pilot's functionality. It listens on tcp://127.0.0.1:8080, and its main job is to translate Kubernetes resources, through the xDS services, into configuration that Envoy understands.

The core xDS code is located in pilot/proxy/envoy/discovery.go:

// DiscoveryService is the core data structure of the discovery service.
type DiscoveryService struct {
	proxy.Environment
	server *http.Server

	// Response caches, one per xDS API.
	sdsCache *discoveryCache
	cdsCache *discoveryCache
	rdsCache *discoveryCache
	ldsCache *discoveryCache
}

// Register adds the xDS routes to a web service container.
func (ds *DiscoveryService) Register(container *restful.Container) {
	ws := &restful.WebService{}
	ws.Produces(restful.MIME_JSON)

	// Example: list all known services (informational, not invoked by Envoy).
	ws.Route(ws.
		GET("/v1/registration").
		To(ds.ListAllEndpoints).
		Doc("Services in SDS"))

	// Other xDS routes...
}
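Since /v1/registration is a plain HTTP endpoint, it is easy to poke at a running discovery instance by hand. A small sketch, assuming discovery is reachable on the listening address from above:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Query discovery's informational endpoint (address from Section 2.1).
	resp, err := http.Get("http://127.0.0.1:8080/v1/registration")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // JSON description of the known services
}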

The core code for translating Kubernetes resources is located in pilot/platform/kube/controller.go:

// Example: list services
func (c *Controller) Services() ([]*model.Service, error) {
	list := c.services.informer.GetStore().List()
	out := make([]*model.Service, 0, len(list))

	for _, item := range list {
		if svc := convertService(*item.(*v1.Service), c.domainSuffix); svc != nil {
			out = append(out, svc)
		}
	}
	return out, nil
}

2.2 istio-proxy

The istio-proxy container uses the image docker.io/istio/proxy:0.4.0 and serves as Pilot's sidecar, reverse-proxying requests to discovery. It listens on tcp://0.0.0.0:15003. Its core Envoy configuration is essentially a plain TCP proxy from port 15003 to the local discovery service on port 8080:

{
  "listeners": [
    {
      "address": "tcp://0.0.0.0:15003",
      "name": "tcp_0.0.0.0_15003",
      "filters": [
        {
          "type": "read",
          "name": "tcp_proxy",
          "config": {
            "stat_prefix": "tcp",
            "route_config": {
              "routes": [{"cluster": "in.8080"}]
            }
          }
        }
      ],
      "bind_to_port": true
    }
  ],
  "admin": {
    "access_log_path": "/dev/stdout",
    "address": "tcp://127.0.0.1:15000"
  },
  "cluster_manager": {
    "clusters": [
      {
        "name": "in.8080",
        "connect_timeout_ms": 1000,
        "type": "static",
        "lb_type": "round_robin",
        "hosts": [{"url": "tcp://127.0.0.1:8080"}]
      }
    ]
  }
}

3. Sidecar

Take the Kubernetes environment as an example: the sidecar, acting as the data plane, actually injects two containers into each Pod of the microservice: proxy-init and istio-proxy.

3.1 istio-proxy

The istio-proxy container uses the image docker.io/istio/proxy:0.4.0 and does the actual work of the sidecar. It listens on tcp://0.0.0.0:15003, accepting all TCP traffic sent to the Pod and distributing all TCP traffic sent from the Pod. The proxy actually consists of two parts: a management process, the agent, and the actual proxy process, Envoy.

The agent is responsible for generating the Envoy configuration, monitoring Envoy's health, and managing the Envoy process as necessary (for example, reloading Envoy after a configuration change). It is also responsible for interacting with the Mixer component (reporting data and so on).
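To make that loop concrete, here is a simplified Go sketch. Envoy's -c (config path) flag is real; everything else is illustrative, and the actual agent uses Envoy's hot-restart mechanism (an incrementing --restart-epoch) rather than cold-starting a fresh process.

package main

import (
	"log"
	"os"
	"os/exec"
)

// A simplified sketch of the agent's job: write the rendered Envoy
// configuration to disk, then launch Envoy against it. The real agent
// also watches for changes and hot-restarts Envoy with an incremented
// --restart-epoch instead of starting a brand-new process.
func main() {
	config := []byte(`{"listeners": []}`) // stand-in for the rendered config
	path := "/tmp/envoy.json"             // hypothetical config location
	if err := os.WriteFile(path, config, 0644); err != nil {
		log.Fatal(err)
	}
	cmd := exec.Command("envoy", "-c", path)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}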

The agent's core code for generating the Envoy configuration is located in pilot/proxy/envoy/config.go:

func buildConfig(config meshconfig.ProxyConfig, pilotSAN []string) *Config {
	listeners := Listeners{}

	clusterRDS := buildCluster(config.DiscoveryAddress, RDSName, config.ConnectTimeout)
	clusterLDS := buildCluster(config.DiscoveryAddress, LDSName, config.ConnectTimeout)
	clusters := Clusters{clusterRDS, clusterLDS}

	out := &Config{
		Listeners: listeners,
		LDS: &LDSCluster{
			Cluster:        LDSName,
			RefreshDelayMs: protoDurationToMS(config.DiscoveryRefreshDelay),
		},
		Admin: Admin{
			AccessLogPath: DefaultAccessLog,
			Address:       fmt.Sprintf("tcp://%s:%d", LocalhostAddress, config.ProxyAdminPort),
		},
		ClusterManager: ClusterManager{
			Clusters: clusters,
			SDS: &DiscoveryCluster{
				Cluster:        buildCluster(config.DiscoveryAddress, SDSName, config.ConnectTimeout),
				RefreshDelayMs: protoDurationToMS(config.DiscoveryRefreshDelay),
			},
			CDS: &DiscoveryCluster{
				Cluster:        buildCluster(config.DiscoveryAddress, CDSName, config.ConnectTimeout),
				RefreshDelayMs: protoDurationToMS(config.DiscoveryRefreshDelay),
			},
		},
		StatsdUDPIPAddress: config.StatsdUdpAddress,
	}
	// Other related logic...
}

It is worth noting that istio-proxy can run in multiple roles and generates a different configuration for each role; for example, the Pilot-side proxy in Section 2.2 is configured differently from a sidecar. Note also that buildConfig above produces only a bootstrap configuration: Listeners starts out empty, and the actual listeners, routes, and clusters are fetched dynamically from discovery through the LDS, RDS, SDS, and CDS clusters it builds.

3.2 proxy-init

The proxy-init container uses the image docker.io/istio/proxy_init:0.4.0. In Section 3.1 we said that istio-proxy accepts all TCP traffic sent to the Pod and distributes all TCP traffic sent from the Pod, yet when we actually write our application code we do not have to think about this at all. How does Istio pull this off? The answer is proxy-init! It injects iptables rules that redirect the Pod's inbound and outbound traffic to the listening ports of istio-proxy.

For example, here are the redirection rules for inbound traffic:

iptables -t nat -N ISTIO_REDIRECT                                             -m comment --comment "istio/redirect-common-chain"
iptables -t nat -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-port ${ENVOY_PORT}  -m comment --comment "istio/redirect-to-envoy-port"
iptables -t nat -A PREROUTING -j ISTIO_REDIRECT                               -m comment --comment "istio/install-istio-prerouting"
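A side effect of REDIRECT is that istio-proxy accepts every connection on its own port, so it must recover the address the traffic was originally headed to. On Linux this is done with the SO_ORIGINAL_DST socket option; the sketch below shows the idea in Go (IPv4 only, and merely an illustration of the mechanism, not Istio's code).

package main

import (
	"fmt"
	"log"
	"net"

	"golang.org/x/sys/unix"
)

// originalDst recovers the pre-REDIRECT destination of a TCP connection
// via the Linux SO_ORIGINAL_DST socket option (IPv4 only in this sketch).
func originalDst(conn *net.TCPConn) (string, error) {
	raw, err := conn.SyscallConn()
	if err != nil {
		return "", err
	}
	var addr *unix.IPv6Mreq
	var sockErr error
	if err := raw.Control(func(fd uintptr) {
		// SO_ORIGINAL_DST returns a sockaddr_in; the generic IPv6Mreq
		// getter is the customary way to fetch it in Go.
		addr, sockErr = unix.GetsockoptIPv6Mreq(int(fd), unix.IPPROTO_IP, unix.SO_ORIGINAL_DST)
	}); err != nil {
		return "", err
	}
	if sockErr != nil {
		return "", sockErr
	}
	m := addr.Multiaddr
	port := int(m[2])<<8 | int(m[3]) // network byte order
	ip := net.IPv4(m[4], m[5], m[6], m[7])
	return fmt.Sprintf("%s:%d", ip, port), nil
}

func main() {
	// Usage sketch: accept redirected connections on the istio-proxy
	// port from Section 3.1 and print their original destinations.
	ln, err := net.Listen("tcp", ":15003")
	if err != nil {
		log.Fatal(err)
	}
	for {
		c, err := ln.Accept()
		if err != nil {
			continue
		}
		if dst, err := originalDst(c.(*net.TCPConn)); err == nil {
			fmt.Println("original destination:", dst)
		}
		c.Close()
	}
}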

4. Summary

As a next-generation Service Mesh framework, Istio is not yet ready for production, but its ideas and architecture are well worth learning from. I hope this article is helpful to readers who are interested in Istio.