1.1 Istio Introduction and Installation

Microservices built on Spring Cloud get service governance from the framework itself, but Spring Cloud is Java-only. If you want service governance that works regardless of language, you need Istio.

A service mesh is a concept, an idea; Istio is just one implementation of it.

Spring Cloud cannot go cross-platform. Think of two services as two Pods: if communication between the two service Pods is to be governed, Spring Cloud puts that governance logic in the business code.

A service mesh instead adds an extra container (usually called a sidecar) to each Pod. All traffic passes through the sidecar, so it can be processed there. You can think of it as adding an nginx-like container to every service's Pod: the service governance capability moves into the sidecar. Business code is still written as before; governance is no longer written in the code but provided by the platform.

The sidecar can do a lot of things: service registration, health checks, circuit breaking, and more.

In the diagram, the blue boxes are the sidecars. Sidecars communicate with each other, and the transfer of traffic between them is handled by the platform.

In effect, a proxy layer is added next to each service.

Service mesh is an idea with multiple implementations. The first generation, Linkerd and Envoy, were mostly proxy-layer implementations.

What they lacked was a good control plane: I can write the configuration, but how do I deliver it to all the proxies?

Istio, the second generation, has effectively become the standard for service mesh; it builds on Envoy and adds the missing control-plane functionality.

Next to each microservice there is a proxy (the Envoy sidecar, which is a container). Inbound traffic goes into the proxy first, and outbound traffic also leaves through the proxy.

Forwarding and control rules are delivered from the control plane down to the proxies.

Download an Istio release; it contains istioctl, the client control tool.

Copy the istioctl binary into your PATH.

The version used here is 1.7.3.

Command completion is available; it just requires sourcing a shell script shipped with the release.
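
For reference, a sketch of those steps (the download URL and paths are the usual ones and may differ in your environment):

```bash
# download Istio 1.7.3 and enter the release directory
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.7.3 sh -
cd istio-1.7.3

# put istioctl on the PATH
cp bin/istioctl /usr/local/bin/

# optional: enable bash completion (the script ships in tools/)
source tools/istioctl.bash

istioctl version
```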

What does a profile mean?

It can be understood as a set of configurations; just like trim levels when buying a car, Istio ships with several built-in profiles.

If you select the default profile, only certain components are installed; in the comparison table an X marks the components that get installed.

Istiod is the control plane

To see which components each profile installs, see the profile comparison page linked in the Istio documentation.

istioctl has a manifest generate subcommand, which generates the YAML manifests for a specified profile.
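
For example, installing the demo profile and rendering its manifests might look like this (a sketch; exact flags depend on the Istio version):

```bash
# install Istio using the demo profile
istioctl install --set profile=demo

# render the manifests for a profile without installing them
istioctl manifest generate --set profile=demo > demo-manifest.yaml
```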

Suppose we want to see what the demo profile actually contains.

Everything it installs is just ordinary Kubernetes resources.

An extra namespace, istio-system, is created.

In fact three Deployments are created, along with three Services.

CRDs are also created for Istio's own resource types.

With that, Istio is installed; uninstalling simply means removing all of those resources from Kubernetes.
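
You can verify this with plain kubectl (component names below assume the demo profile):

```bash
kubectl get ns                            # istio-system shows up
kubectl -n istio-system get deploy,svc    # istiod, istio-ingressgateway, istio-egressgateway
kubectl get crd | grep istio.io           # Istio's own CRDs
```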

1.2 Istio Traffic Model: Example 1

There is a front-end service, deployed as a front-tomcat v1 Deployment, which calls a billing service deployed as a bill-service v1 Deployment; the billing Pods are exposed to the front end through a Service named bill-service.

Create the front end first.

The Pod's container is started with the command specified in the Deployment.

A Service is still missing: one whose selector matches the bill-service Pods, which is the Service shown here.

Create the istio-demo namespace first.

Now we have the Service.

Enter the Tomcat container and curl the Service by name to access it.

The response comes back from v1 of the billing service.
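
A sketch of that check (Pod label, Service name and port are assumed from this demo):

```bash
# find the front-end Pod (assuming it is labelled app=front-tomcat)
POD=$(kubectl -n istio-demo get pod -l app=front-tomcat -o jsonpath='{.items[0].metadata.name}')

# call the billing Service by name from inside the Tomcat container
kubectl -n istio-demo exec "$POD" -- curl -s http://bill-service:9999/
```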

In the second model the backend is updated: a v2 version of bill-service is added alongside v1.

Create the resource

Now we have two pods

But now the traffic is split roughly evenly between v1 and v2.

That is how a Kubernetes Service works: it balances across endpoints evenly, with no way to express a weighted split.

When the Service is accessed, the first step is DNS resolution: the Service name resolves to an IP address.

(A Pod can of course also be reached directly by its Pod IP.)

That is what the resolution looks like.

This address is the ClusterIP of the bill-service Service.

So accessing the Service name and accessing the ClusterIP are equivalent.

Enter the Pod's container and look at the routing table; this is the Flannel IP range.

There is obviously no route for the 10.105.42.x range.

Therefore the traffic can only be forwarded to the default gateway, 10.244.2.1.

That gateway lives on the host machine.

In other words, the traffic is forwarded to the host.

But the host has no route for that 10.x Service range either.

The forwarding is implemented in iptables: traffic destined for 10.105.42.173 jumps to a KUBE-SVC chain (KUBE-SVC-PK4BNTKC2… here).

Inside that chain there is a random rule: with probability 0.5 the traffic jumps to one KUBE-SEP chain (KUBE-SEP-HMSXX67…), and the rest goes to the other.

Each KUBE-SEP chain corresponds to one of the two Pods and uses DNAT to forward the traffic to that Pod's IP.
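
You can trace this on the node yourself (the ClusterIP is from this example; the KUBE-SVC/KUBE-SEP chain names are generated per Service, so take them from the previous command's output):

```bash
# Service entry: ClusterIP -> KUBE-SVC-... chain
iptables -t nat -L KUBE-SERVICES -n | grep 10.105.42.173

# the KUBE-SVC chain holds a statistic/random 0.5 rule over two KUBE-SEP chains
iptables -t nat -L <KUBE-SVC-chain-from-above> -n

# each KUBE-SEP chain ends in a DNAT to one Pod IP
iptables -t nat -L <KUBE-SEP-chain-from-above> -n
```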

These rules are maintained by kube-proxy and cannot be edited by hand; they only split traffic evenly, so if you want a 90/10 split you need Istio.

At this point each Pod still has only one container.

Now the sidecar needs to be added, i.e. injected into the service.

The injection produces a large amount of extra YAML.

After injection, the beginning of the YAML is unchanged.

What actually changes the behaviour is in the spec.

That is only part of it; the relevant additions start here.

An additional container named istio-proxy is added; this is the sidecar container.

An init container is also added; it runs a few commands to set up iptables rules and then exits.

Apply the injected manifest directly.

Istio's sidecar container has now been added alongside the original one.

Inject the front end and the v2 version as well.
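
The manual injection step, sketched (file and namespace names assumed):

```bash
# render the Deployment with the sidecar and init container injected, then apply it
istioctl kube-inject -f bill-service-v2.yaml | kubectl -n istio-demo apply -f -
```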

This gives us a small mesh with three workloads in it.

How to write the rules is the key point: how does traffic distinguish between the two versions?

A DestinationRule is defined. The DestinationRule is used to distinguish the two groups of Pods behind the service.

subsets is a list: the first subset, v1, matches Pods labelled version=v1, and the second subset, v2, matches version=v2.

It does not match every v1 or v2 Pod in the cluster, only the v1 and v2 Pods behind bill-service.

In this way, the two groups of Pods are given names inside Istio.
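
A sketch of that DestinationRule (names and namespace assumed from this demo):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bill-service-dr
  namespace: istio-demo
spec:
  host: bill-service        # the Kubernetes Service
  subsets:
  - name: v1
    labels:
      version: v1           # Pods labelled version=v1
  - name: v2
    labels:
      version: v2           # Pods labelled version=v2
```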

Next, a VirtualService is defined.

Its hosts field is bill-service, the rule applies to HTTP traffic, and under http comes the route section.

The destination with weight 90 is subset v1 of bill-service, and the one with weight 10 is subset v2.

Once this rule is created, every workload in the mesh that addresses the host bill-service follows it: 90% of the traffic goes to v1 and 10% to v2.
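
A sketch of that VirtualService (again, names assumed):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bill-service-route
  namespace: istio-demo
spec:
  hosts:
  - bill-service            # applies to traffic addressed to this host
  http:
  - route:
    - destination:
        host: bill-service
        subset: v1          # subset defined in the DestinationRule
      weight: 90
    - destination:
        host: bill-service
        subset: v2
      weight: 10
```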

To achieve this, first create the DestinationRule.

vs is the short name for VirtualService.

Access the bill-service Service from the front-end Pod.

Be sure to select which container to exec into with -c, since the Pod now has more than one.

Accessing bill-service from inside the mesh now splits roughly 9 to 1.
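
A quick way to verify the split (label, container name and port are assumptions for this demo):

```bash
POD=$(kubectl -n istio-demo get pod -l app=front-tomcat -o jsonpath='{.items[0].metadata.name}')

# call bill-service 20 times from the Tomcat container (-c picks the container)
for i in $(seq 1 20); do
  kubectl -n istio-demo exec "$POD" -c front-tomcat -- curl -s http://bill-service:9999/
  echo
done
# the responses should split roughly 9:1 between v1 and v2
```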

Next, let's understand Envoy.

The 9-to-1 rule is defined by the VirtualService.

What ultimately takes effect inside the Pod is quite complex.

If this were nginx, the equivalent would be configured like this.

We added two containers, istio-init and istio-proxy

Exec into the istio-proxy container and have a look.

So Envoy is really just a proxy, a lightweight proxy.

When we create resources through the Kubernetes API, they are stored in etcd.

istiod will watch the VirtualService rules we create.

Each Pod gets an Envoy (the istio-proxy container); istiod notices the rules the user creates and pushes the corresponding configuration from the istiod server down to each Envoy.

Envoy's configuration format is not the same as the VirtualService you wrote; istiod performs the translation between the two.

istiod watches the VirtualService rules as they are created.

Some background knowledge about Envoy.

Nginx is also a proxy, envoy is also a proxy

So how does an nginx configuration map onto Envoy?

Put the nginx configuration next to the Envoy configuration: nginx's server block is called a listener in Envoy.

A location corresponds to a route.

Processing logic is called a filter in Envoy.

The core one filters HTTP requests, parsing raw packets into HTTP.

Traffic enters the filter, goes through route_config, and is then matched against routing conditions.

proxy_pass and upstream correspond to Envoy's clusters.

In other words, an nginx configuration can be translated into an Envoy configuration once you know these Envoy concepts.

An Envoy configuration file that does such forwarding looks like the following.

Envoy opens one port as its administration port.

Then comes the actual proxy configuration.

A listener named listener_0 is created, listening on local port 10000.

Traffic received there is handed to filter_chains for processing.

A route matches the / prefix and forwards to a cluster named some_service (the upstream).

What that service actually is gets defined by the clusters section further down.

Start Envoy 1.15.2 with the YAML file mounted into the container at /etc/envoy/envoy.yaml.

The cluster points at the Service's ClusterIP, which is expected to be 10.105.42.173.
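
A minimal sketch of such a static Envoy configuration (the listener port, upstream address and cluster name follow this example; the admin port is an assumption, and field names use the v3 static-config layout):

```yaml
admin:
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }   # administration port

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }           # match the slash
                route: { cluster: some_service } # forward to the upstream cluster
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: some_service
    connect_timeout: 1s
    type: STATIC
    load_assignment:
      cluster_name: some_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 10.105.42.173, port_value: 9999 }
```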

An envoy is a proxy

The core concept is listeners

filter_chains contains the http_connection_manager, which processes HTTP traffic, matches routing rules, and forwards it to clusters.

Configuring an ELB or SLB on the cloud is similar: you also configure listeners plus routing and forwarding rules.

Another core Envoy concept: xDS.

Static configuration lives in the YAML file; dynamic configuration is fetched at runtime through an API from somewhere else.

Listeners can actually be dynamic: LDS, the Listener Discovery Service.

Similarly there is RDS: route configuration can also be dynamic rather than hard-coded.

Clusters can be dynamic too (CDS).

And endpoints can be dynamic (EDS).

LDS, RDS, CDS and EDS are collectively called xDS, where x is a placeholder. This suits Envoy to cloud environments, where nothing stays fixed and the source of truth (in Kubernetes, the objects stored in etcd) keeps changing.

The core objects are listener, route, cluster and endpoint, i.e. LDS, RDS, CDS and EDS.

All of them can be served over the xDS API, so Envoy can fetch its configuration dynamically and start serving.

Traffic initiated by a client hits a listener, flows through the listener, route, cluster and endpoint configuration (LDS, RDS, CDS, EDS), and Envoy thereby knows which endpoint to send the request to, i.e. which machine actually provides the service.

Listener filters can modify some connection metadata; after them the listener's filter_chains take over.

To summarise: with Envoy, a request enters a listener, is routed via a route to a cluster, and eventually reaches the server. The configuration can be static or dynamic, and xDS is what makes the dynamic case possible.

So an envoy is just a proxy

In Kubernetes, Envoy is a process inside the istio-proxy container that can observe traffic, do circuit breaking, and do tracing.

Envoy only provides the capability; you still need a server exposing an interface that tells Envoy what configuration to fetch, and in Istio, Pilot provides exactly that xDS server implementation.

Inside istiod there is a pilot-discovery process; it is this process that provides the xDS interface Envoy needs, i.e. the xDS server.

That, in outline, is how it works.

First question: what does Envoy's dynamic configuration actually look like?

This is the envoy’s administrative port

Envoy needs a configuration file

The admin section configures the address and port the administration interface listens on.

You can query that port to get all sorts of information about the running Envoy.

Information from the Envoy can be retrieved there.

Download the Envoy configuration from it.

This configuration file is very long.

It is more than 700 KB of configuration.
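
A sketch of pulling that dump out of the sidecar (Pod label assumed; 15000 is the sidecar's admin port):

```bash
POD=$(kubectl -n istio-demo get pod -l app=front-tomcat -o jsonpath='{.items[0].metadata.name}')

# dump the full configuration of the Envoy running in the istio-proxy container
kubectl -n istio-demo exec "$POD" -c istio-proxy -- \
  curl -s localhost:15000/config_dump > envoy-config.json

wc -c envoy-config.json   # several hundred KB
```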

The most important configuration falls into a few large sections.

This one is more complicated

Inside the cluster, before Istio is injected, the bill-service address is resolved first, the traffic is routed to the host, and the Service is reached on port 9999.

After injection, the flow changes to the following.

For the front-end Tomcat, its traffic is first diverted into the istio-proxy container, which then forwards it to the different versions of the service.

istio-init exists to modify the iptables rules.

Recall iptables: four tables and five chains.

Incoming traffic enters the PREROUTING chain, where routing decides whether the destination is a local address; local traffic goes to INPUT, other traffic to FORWARD.

Locally generated traffic goes through the local stack into the OUTPUT chain and then the POSTROUTING chain.

If istio-proxy wants to manage the traffic, it must intercept Tomcat's traffic. It does this by adding rules to the iptables chains inside the Pod's own network namespace, and setting up those rules is exactly what istio-init does.

Istio ships a wrapper around iptables.

-p redirects all outbound TCP traffic to port 15001; -z is the port to which all inbound TCP traffic to the Pod/VM is redirected (default $INBOUND_CAPTURE_PORT, i.e. 15006).

Together these intercept all inbound and outbound traffic.

Finally there are exclusions: traffic initiated by a particular UID (1337, the user the sidecar runs as) and traffic to ports 15090, 15021 and 15020 is left alone; everything else is intercepted.

Our application runs as UID 0, so its requests are intercepted.

They must be intercepted, because they meet neither exclusion: they are not initiated by UID 1337, nor destined for 15090, 15021 or 15020, so Istio redirects them.

The excluded ports are described in the documentation: 15090 is Envoy's Prometheus telemetry port, 15021 the health-check port, and 15020 the istio-agent telemetry port.

If those were intercepted too, the sidecar would be intercepting its own traffic and you would get an endless loop.

So istio-init initializes the iptables rules that let istio-proxy intercept both inbound and outbound traffic.
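
The init container essentially runs Istio's iptables wrapper with arguments along these lines (a sketch; exact flags vary by Istio version):

```bash
# -p 15001    : redirect all outbound TCP traffic to port 15001
# -z 15006    : redirect all inbound TCP traffic to port 15006
# -u 1337     : traffic originating from UID 1337 (the sidecar itself) is excluded
# -m REDIRECT : use the iptables REDIRECT target
# -i / -b '*' : capture all outbound destinations / all inbound ports
# -d ...      : except these inbound ports (telemetry and health checks)
istio-iptables -p 15001 -z 15006 -u 1337 -m REDIRECT -i '*' -x "" -b '*' -d 15090,15021,15020
```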

Let's look at the rules from inside istio-proxy, on node slave1.

Find the PID of the corresponding process.

The iptables rules are set inside the Pod, which has its own network namespace.

We can enter that network namespace and view the rules.
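
One way to do that on the node (a sketch assuming a Docker runtime; the grep patterns are just examples):

```bash
# find the sidecar container of the front-tomcat Pod and its PID on slave1
CID=$(docker ps | grep istio-proxy | grep front-tomcat | awk '{print $1}' | head -n1)
PID=$(docker inspect -f '{{.State.Pid}}' "$CID")

# enter the Pod's network namespace and list the NAT rules Istio installed
nsenter -t "$PID" -n iptables -t nat -L -n -v
```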

The PREROUTING chain handles inbound traffic, and it ends up redirected to port 15006.

The PREROUTING rule sends all inbound TCP traffic to the ISTIO_INBOUND chain.

Within ISTIO_INBOUND, everything except the excluded ports is redirected to ISTIO_IN_REDIRECT.

Those exclusions are what the -d flag generated.

So non-business traffic is excluded and everything else is sent to port 15006; Envoy only needs to listen on 15006 to capture all of this inbound traffic.

The -p flag covers outbound traffic.

All outbound traffic is sent to the ISTIO_OUTPUT chain.

This interception is set up inside every injected Pod.

Port 15001 intercepts all outbound traffic; 15006 intercepts all inbound traffic.

The client's traffic is forwarded straight to its istio-proxy, which processes it and sends it to the server side; the server side's istio-proxy listens on 15006 and receives all inbound traffic.

Once the traffic is intercepted on 15001, how is it forwarded? Through Envoy's listeners and their filter_chains.

This command extracts only the configuration fragments you care about.

Outbound traffic is intercepted on 15001 (so the two key ports are 15006 and 15001).

When a request is sent from front-tomcat and intercepted by istio-proxy, the next thing to check is the Pod's listeners inside istio-proxy.
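
istioctl proxy-config (abbreviated pc) is the tool for this; a sketch (substitute the actual Pod name):

```bash
# list the listeners configured in the front-tomcat Pod's sidecar
istioctl -n istio-demo proxy-config listener <front-tomcat-pod-name>

# look specifically at the outbound-capture listener on 15001, as JSON
istioctl -n istio-demo proxy-config listener <front-tomcat-pod-name> --port 15001 -o json
```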

It is hard to find anything in the raw dump because it is far too long.

With this command you can, for example, see each listener's port.

There is one on 15001; the output shown here is abbreviated.

A listener on 15001 appears, but it does no real processing of its own.

It uses the original destination IP address and port.

It only passes the traffic along and does nothing else.

So the request really was intercepted by 15001.

The traffic is simply passed through there.

15001 does not process it; it is handed on toward the original destination, the service's port 9999.

Normally there will be a more specific listener for that, shown below.

Let's see what that listener looks like.

RDS provides the dynamic route: this listener hands off to the route named 9999.

The route's name is 9999.

This entry looks like a catch-all that forwards everything to the PassthroughCluster, but our request will not land there, because there is a more specific match.

The wildcard entry is the fallback: it is only chosen when nothing more specific matches.

Without the * entry, traffic to destinations outside the mesh (for example, the Internet) could not get out, so a passthrough entry must be there.

prefix: matches on the URL prefix.

The rules here are the translated form of the Istio rules we created.

weighted_clusters distributes traffic across clusters based on weights.

Next, look up the clusters by those two names.

Each cluster has an FQDN that helps you look it up.

Now check the endpoints; if you don't know how, -h shows the help.

The v1 cluster carries 90% of the traffic.

The endpoint port maps to port 80 on the Pod.

The endpoint ending in .104 is the v2 Pod.
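
A sketch of the whole lookup chain with istioctl (the Pod name is a placeholder, and the cluster FQDN shown is what Istio would typically generate for this Service in the istio-demo namespace):

```bash
POD=<front-tomcat-pod-name>

# route named 9999: shows the weighted_clusters 90/10 split
istioctl -n istio-demo proxy-config route "$POD" --name 9999 -o json

# clusters for the bill-service subsets
istioctl -n istio-demo proxy-config cluster "$POD" --fqdn bill-service.istio-demo.svc.cluster.local

# endpoints (Pod IP:port) behind one of those clusters
istioctl -n istio-demo proxy-config endpoint "$POD" \
  --cluster "outbound|9999|v1|bill-service.istio-demo.svc.cluster.local"
```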

LDS (listener), RDS (route), CDS (cluster), EDS (endpoint).

The flow runs listener, then route, then cluster.

The traffic is captured by istio-proxy on 15001. 15001 does not process it but hands it on according to the original destination IP and port; the listener for port 9999 then picks it up, matches the route for 9999, and forwards it to the cluster.

Once the endpoint address is found, it is effectively Envoy issuing the request to that address from inside istio-proxy.

The request then travels over the Flannel network; since the destination is a Pod IP, the Flannel routes know how to reach the Pod.

On the receiving side, istio-proxy intercepts on the PREROUTING chain, and the traffic is ultimately redirected to port 15006.

To inspect the 15006 listener, use the same command; -h shows the help, and the flag to filter by port is --port.

Convert the output to JSON.

This is the inbound (virtualInbound) listener for inbound requests.

Inbound traffic goes directly to 15006 for processing.

The 15006 listener contains inbound filter chains that are matched by port.

There is also a route, and processing is directed to the endpoint behind the matching cluster.

It matches port 80 and goes to the corresponding route.

Finally the request reaches the local application that actually provides the service.

To summarise: the VirtualService rules we create are recognised by istiod, translated into configuration fragments that Envoy understands, and pushed out by the istiod server; only after the sidecars have been synchronised does the traffic behaviour change. The Envoy configuration fragments can be inspected via config_dump or istioctl proxy-config, where you can view listeners, routes, clusters and endpoints.

A few more small points about Envoy.

Consider two Pods talking to each other, each with two containers: the application container and the Envoy sidecar. When you curl reviews:9080 from the productpage container, the traffic first hits iptables: the OUTPUT chain jumps to ISTIO_OUTPUT, which redirects to ISTIO_REDIRECT and thus to port 15001 (the virtualOutbound listener Envoy listens on). From there it is handed to the listener matching the original destination, 0.0.0.0_9080, whose filter_chains start processing; a route is matched and an endpoint IP and port are found. The request then goes to the backend Pod, where the receiving Envoy intercepts it on the PREROUTING chain (via ISTIO_IN_REDIRECT), processes it on port 15006, and finds the matching inbound cluster to handle it.

The specific steps can again be checked with istioctl proxy-config (pc).

Exec into the Tomcat container and curl.

If you curl from inside the istio-proxy container instead, the 9-to-1 ratio no longer applies.

Because that container runs as UID 1337, its requests are not intercepted by iptables; if they are not intercepted they never pass through Envoy, so they are treated as if they came from outside the mesh.

Such a request still follows the normal path: the iptables rules maintained by kube-proxy on the host, and then, once it reaches the host, the forwarding set up by Flannel.

To prove the point, we can stop the kube-proxy Pods and see whether the mesh is affected. Simply scaling them down does not work, because kube-proxy runs as a DaemonSet.

You can see how to do it by looking at the DaemonSet's YAML.

The relevant part is a label: the node selector.

Point the node selector at a label that no node has, and the kube-proxy Pods automatically exit.

Then clean up the NAT rules kube-proxy left behind.

After that, those Service rules are gone.
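
A sketch of that experiment (the selector label is arbitrary; run the flush on every node, and only in a throwaway cluster):

```bash
# make the kube-proxy DaemonSet schedulable on no node, so its Pods exit
kubectl -n kube-system patch daemonset kube-proxy \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"no-such-node":"true"}}}}}'

# on each node: flush the NAT rules kube-proxy had programmed
iptables -t nat -F
```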

In this situation, access from inside the Istio mesh does not rely on the host's rules at all, because the Envoy sidecar intercepts the traffic directly.

Envoy actually addresses the Pod IP directly.

Clear the iptables rules on slave1.

The service is still accessible.

Clean up on the master node as well.

The forwarding now happens entirely inside the Pod's istio-proxy container.

Now kube-proxy has been stopped and its iptables rules have been removed from the hosts.

Try deleting a Pod.

Then access it over the network again.

Before the deletion there may have been a cached resolution.

Access from inside the containers still works, but the host, which no longer has the iptables rules, may run into problems accessing the Service.

Istio creates a virtual listener for every Service.

Access the Service from inside a Pod.

Normal access works, going through Istio's listeners.

Inside Istio, a listener is set up for every Service in the cluster.

Even Services that already existed before Istio, such as the one on port 9000, get one.

Each entry shows the Service's IP and port.

Istio acts as a manager that brings every Service in the cluster into the mesh configuration.

Finally, follow it down to the route.

To find the route, use the r (route) subcommand.

We can output it as JSON and take a look.

The domains match, the / prefix matches, the traffic goes to a cluster, and the endpoints are found.

These are all the listener entries created for the existing Services; the more Services there are, the longer the listener list gets, and most of the entries are outbound listeners.

Istio in effect no longer wants to rely on kube-proxy: it manages its own iptables rules and controls traffic within the mesh directly.

This holds as long as the workload is inside the mesh.

Because Istio itself cannot know in advance which Services your Pod will need to access, it configures listeners for all of them.

All of this is maintained dynamically: Envoy configuration can be static or dynamic, and here the dynamic part comes from the istiod service.

istiod is that dynamic configuration server of Istio, a gRPC server.

The rules are converted into Envoy configuration fragments and forwarded to the Envoys.

Next: accessing services in the mesh through the ingress gateway.

In this scenario, 90% of the client traffic should reach front-tomcat v1 and 10% should reach v2. To achieve this, create the v2 Pod and a VirtualService.

The Service's selector is app: front-tomcat.

Below is the Deployment file.

Its labels are app: front-tomcat, the same name used before, plus version: v2.

All you need to do is create a VirtualService whose host is front-tomcat: inside the Istio mesh, traffic addressed to front-tomcat is matched by it and forwarded according to the route configuration, 90 to v1 and 10 to v2.

The different groups of Pods are again distinguished by a DestinationRule, which in Envoy terms becomes clusters.

The key field is host: it names the Service, and the subsets define the grouping rules.

This is the VirtualService that gets created.
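
A sketch of the two resources for front-tomcat (names and labels assumed, mirroring the bill-service example):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: front-tomcat-dr
  namespace: istio-demo
spec:
  host: front-tomcat
  subsets:
  - name: v1
    labels: { version: v1 }
  - name: v2
    labels: { version: v2 }
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: front-tomcat-route
  namespace: istio-demo
spec:
  hosts:
  - front-tomcat
  http:
  - route:
    - destination: { host: front-tomcat, subset: v1 }
      weight: 90
    - destination: { host: front-tomcat, subset: v2 }
      weight: 10
```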

The Pods need to have Istio injected.

One of them still shows 1/1, meaning no sidecar has been injected.

Without injection, access that comes in from the front through the ingress does not count as access from inside the mesh.

Access to port 8080 from inside the mesh should now split 9 to 1.

The service can now also be accessed from the web page.

Right now, external access still goes through the ingress, on top of the Service.

And there it is 1 to 1, not following the Istio rules.

Access through the ingress does not follow the Istio rules,

because the rules only apply to access from inside the mesh.

Right now the access comes from the host, which is outside the mesh; "inside the mesh" means inside Pods with the injected istio-proxy container, and from the host the traffic just follows kube-proxy's iptables.

The front-end page is also reached through the ingress, bypassing Istio, which defeats the purpose; the entry traffic for the front end should also go through Istio rather than kube-proxy.

An Ingress can only do simple diversion of external HTTP traffic; the point here is to highlight its limitations.

What you need instead is the ingress gateway.

The mesh has a boundary: traffic from outside that wants to enter the mesh comes in through the ingress gateway component, which is itself just an Envoy, like a standalone sidecar. For traffic leaving the mesh there is an egress gateway, but using it is optional.

A Kubernetes Ingress supports only HTTP traffic, and any more advanced functionality is cumbersome to implement through it.

So external access is still round-robin 1 to 1, while internal access follows the weighted rule.

There can be multiple IngressGateways within a cluster

Find the ingress gateway.

It carries a label like istio=ingressgateway.

The Gateway rule selects which gateway Pods handle it via that istio=ingressgateway label; since a cluster can have more than one ingress gateway, this is how you specify which one should handle the requests.

Creating such a resource in Kubernetes adds a rule to Istio's ingress gateway that allows HTTP traffic for tomcat.istio-demo.com into the mesh: traffic for tomcat.istio-demo.com is received by the gateway Pods labelled istio=ingressgateway and from then on follows the Istio rules.

In other words, we add a resource of kind Gateway that allows HTTP traffic for that host in from outside the mesh.
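
A sketch of that Gateway (host and names taken from this example):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: front-tomcat-gateway
  namespace: istio-demo
spec:
  selector:
    istio: ingressgateway      # handled by Istio's default ingress gateway Pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - tomcat.istio-demo.com    # allow HTTP traffic for this host into the mesh
```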

The Gateway only lets the traffic in; where it then goes is defined by the VirtualService created earlier, which can be used together with the Gateway.

The VirtualService gets tomcat.istio-demo.com added to its hosts and is bound to the Gateway via its gateways field. With that binding, the rule applies to traffic coming in through the Gateway; without any binding the rule defaults to the whole mesh, so every workload in the mesh follows it.

Once the VirtualService is bound only to the Gateway, there is no mesh-wide rule any more: accessing front-tomcat from inside the mesh no longer follows this VirtualService.

Now let the external traffic in; that is the Gateway rule's job.

HTTP traffic for tomcat.istio-demo.com can now be handled by the ingress gateway.

The VirtualService below is bound to that Gateway.
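
A sketch of the gateway-bound VirtualService (a variant of the earlier one; names assumed):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: gateway-front-tomcat
  namespace: istio-demo
spec:
  gateways:
  - front-tomcat-gateway       # only traffic entering via this Gateway is matched
  hosts:
  - tomcat.istio-demo.com      # the external host name
  http:
  - route:
    - destination: { host: front-tomcat, subset: v1 }
      weight: 90
    - destination: { host: front-tomcat, subset: v2 }
      weight: 10
```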

Only traffic that comes in through this gateway matches the rule; other external requests are rejected.

Now the Gateway has been created.

Create the routing rule as well.

This VirtualService is used only for front-tomcat-gateway.

The rule applies only to traffic coming in through that Gateway.

Requests from other Pods that do not pass through the front-tomcat-gateway gateway are not affected by the rule above.

As long as traffic for this domain name is directed at the ingress gateway,

it ends up at the gateway Pod.

So requests to this domain name are now handled by the ingress gateway.

The LoadBalancer-type Service exposes several ports; NodePort 30779 maps to port 80.

Check it with -o yaml.

There’s an HTTP port

Get the port with this command

Accessed this way, the split is nine to one.

Use curl with the domain name as the Host header, plus the node address and port.

The ingress gateway Service is at this address.

Combine that with resolution of the domain name.

Add the port and you get access, with 90% of the traffic hitting v1 and 10% hitting v2.
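
A sketch of that check (namespace, domain and port names follow this example; the NodePort will differ per cluster):

```bash
# find the NodePort that maps to the ingress gateway's HTTP port
kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'

# call the gateway with the expected Host header (node IP and port are placeholders)
curl -s -H "Host: tomcat.istio-demo.com" http://<node-ip>:30779/
```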

But if you access port 80 the old way, it is still 1 to 1.

Accessing 30779 is essentially handing the request to the ingress gateway that Istio provides.

But that port is ugly to use; you can put an nginx proxy in front, or a cloud load balancer such as a CLB or ELB.

Pull an nginx image and run it in front.

Change the address resolution for the domain name.

The Gateway only accepts this domain name,

so other requests cannot get in.

Access must be by domain name.

The nginx server listens for that domain name and proxies with HTTP/1.1.

It is now reachable externally, but not from inside the nginx container.

So point the upstream at .69, which is the local host machine.

Now it works, and the split is nine to one.

With nginx in front, problems are also easier to trace; in practice, though, you would at least buy a cloud LB to do this proxying.

Traffic routing, in summary:

External traffic comes in; in short, inside the Istio mesh the ingress is replaced by the ingress gateway, and all the traffic that used to go to the ingress controller is handed to the ingress gateway.

nginx in front does the reverse proxying; because a Gateway was added allowing HTTP traffic for this domain name, the request is let in,

and goes on to the Service behind it.

A VirtualService is created and bound to the Gateway, so the traffic is then further matched against the VirtualService rules.

Now for a new example: the Bookinfo bookstore application.

Create a namespace

productpage is the front-end application; it calls the reviews service (whose v1 does not call the ratings service, while v2 and v3 do) and the details service.

Create an ingress for the Product page

Both ports are 9080

It can now be accessed externally, directly through the original Kubernetes Ingress.

When you visit the productpage, the details and reviews services are called.

reviews itself also calls the ratings service.

If you visit productpage repeatedly, reviews is served randomly by one of its three versions.

If you want traffic control, the services need to be brought into Istio, i.e. the sidecar needs to be injected.

One way is to run istioctl kube-inject on the specified YAML and then kubectl apply it; this injects only the specified workloads in the namespace (so a namespace can contain a mix of services that are in Istio and services that are not).

The other way is to label the namespace: for example, with istio-injection=enabled on the default namespace, anything deployed into default is injected automatically.
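
The two injection approaches, sketched (file and namespace names are assumptions):

```bash
# per-workload manual injection
istioctl kube-inject -f bookinfo.yaml | kubectl -n bookinfo apply -f -

# or automatic injection for everything deployed into the namespace
kubectl label namespace bookinfo istio-injection=enabled
kubectl -n bookinfo apply -f bookinfo.yaml
```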

For now, we still inject per workload.

The Pods now show 2/2 instead of 1/1, so the injection has happened.

The extra container is an Envoy; each Pod has been injected with one.

External access comes in through the ingress gateway, which is itself an Envoy.

Create a Gateway for it, again selecting the specified ingress gateway.
