In Kubernetes' network model, with the widely used CNI plug-in Flannel providing an overlay network, Pod-to-Pod communication works out of the box. When we migrate Spring Cloud-based microservices to K8s, the microservice Pods can register with Eureka and reach each other without any changes. In addition, we can use an Ingress plus an Ingress Controller to bring user traffic on HTTP port 80 and HTTPS port 443 through each node into the services running in the cluster.

This post is shared by @tangt.

But in practice, we have the following requirements:

  • 1. The office network cannot reach the K8s Pod network. When a developer finishes a microservice module on their own machine, they want to start it locally and register it with the service registry in the K8s development environment for debugging, instead of having to run a pile of dependent services locally.
  • 2. The office network cannot reach the K8s Service (SVC) network either. MySQL, Redis and similar services running in K8s cannot be exposed through a layer-7 Ingress, so client tools on office machines cannot connect to them directly. Exposing every such service via NodePort would create a huge maintenance burden.

Network Communication Configuration

A low-spec node (node-30, 2 cores / 4 GB) is added to the K8s cluster to act as a router/forwarder, connecting the office network with the Pod and Service (SVC) networks of the cluster:

  • Node-30 IP address 10.60.20.30
  • Intranet DNS IP address 10.60.20.1
  • Pod network segment 10.244.0.0/24, SVC network segment 10.96.0.0/12
  • Office network segment 192.168.0.0/22

Taint node-30 so that K8s does not schedule ordinary Pods onto it and consume its resources:

kubectl taint nodes node-30 forward=node-30:NoSchedule
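To confirm the taint is in place (an optional check, assuming kubectl access to the cluster):

kubectl describe node node-30 | grep Taints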

Enable IP forwarding and configure SNAT on node-30:

# Enable IP forwarding (e.g. in /etc/sysctl.conf)
net.ipv4.ip_forward = 1
sysctl -p

# SNAT rules
iptables -t nat -A POSTROUTING -s 192.168.0.0/22 -d 10.244.0.0/24 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 192.168.0.0/22 -d 10.96.0.0/12 -j MASQUERADE
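Note that iptables rules added this way do not survive a reboot. One common way to persist them (the exact path and tooling vary by distribution) is:

# Persist the NAT rules; adjust the target file to your distribution's convention
iptables-save > /etc/iptables.rules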

Configure static routes on the office egress router so that the K8s Pod and Service segments are routed to node-30:

ip route 10.244.0.0 255.255.255.0 10.60.20.30
ip route 10.96.0.0 255.240.0.0 10.60.20.30
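The two routes above are written in router (IOS-style) syntax. If the office gateway were instead a Linux box, a sketch of the equivalent with iproute2 would be:

ip route add 10.244.0.0/24 via 10.60.20.30
ip route add 10.96.0.0/12 via 10.60.20.30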

DNS Resolution Configuration

After the steps above, we can already reach Pod IPs and Service IPs from an office computer. However, Pod IPs can change at any time in K8s, and Service IPs are not convenient for development and test colleagues to look up. What we want is for the intranet DNS to forward any query for *.cluster.local to CoreDNS and return its answer.

For example, suppose we agree to deploy the MySQL instance of project A, development environment 1, into the namespace projecta-dev1. Since the office network can now reach the cluster's Service network, a developer connecting from a local MySQL client only needs to enter mysql.projecta-dev1.svc.cluster.local as the host. The DNS query first reaches the intranet DNS, which forwards it to CoreDNS and gets back the Service IP.
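Once the DNS chain is in place, connecting from an office machine with the standard MySQL client might look like the sketch below (the user name and port are placeholders; substitute your own credentials):

# Hypothetical credentials and default port; adjust to your deployment
mysql -h mysql.projecta-dev1.svc.cluster.local -P 3306 -u root -p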

Because the intranet DNS forwards *.cluster.local queries to CoreDNS, it must be able to reach CoreDNS over the network. There are several ways to achieve this:

  • Scheme 1, the simplest: run the intranet DNS on node-30 itself, so that it can reach kube-dns at 10.96.0.10 directly.
# kubectl get svc -n kube-system | grep kube-dns
kube-dns   ClusterIP   10.96.0.10   <none>   53/UDP,53/TCP   20d
  • Scheme 2: in our scenario the intranet DNS (10.60.20.1) does not run on node-30, so we need to give 10.60.20.1 a route to the SVC segment 10.96.0.0/12.
# On the intranet DNS host (10.60.20.1), add a route to the SVC segment
route add -net 10.96.0.0/12 gw 10.60.20.30
# On node-30 (10.60.20.30), SNAT traffic from the DNS host to the SVC segment
iptables -t nat -A POSTROUTING -s 10.60.20.1/32 -d 10.96.0.0/12 -j MASQUERADE
  • Scheme 3 (the one chosen for this experiment): since the intranet DNS (10.60.20.1) is not on node-30, we can use a nodeSelector to deploy an Nginx ingress controller on node-30 and expose CoreDNS's TCP/UDP port 53 as a layer-4 service.

Label Node-30:

kubectl label nodes node-30 node=dns-l4
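Optionally, confirm the label was applied:

kubectl get nodes -l node=dns-l4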

Create a namespace:

kubectl create ns dns-l4

Install the nginx-ingress controller in the dns-l4 namespace, pin it to node-30 via a nodeSelector, and tolerate the taint:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: dns-l4
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: dns-l4
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  53: "kube-system/kube-dns:53"
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: dns-l4
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  53: "kube-system/kube-dns:53"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: dns-l4
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: dns-l4
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: dns-l4
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: dns-l4
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: dns-l4
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: dns-l4
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      nodeSelector:
        node: dns-l4
      hostNetwork: true
      serviceAccountName: nginx-ingress-serviceaccount
      tolerations:
        # Tolerate the forward=node-30:NoSchedule taint applied earlier
        - key: "forward"
          operator: "Equal"
          value: "node-30"
          effect: "NoSchedule"
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1

After the deployment completes, verify from an office computer that the exposed port 53 works:

nslookup -q=A kube-dns.kube-system.svc.cluster.local 10.60.20.30
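If dig is available on the office machine, an equivalent check is the sketch below; it should return the kube-dns ClusterIP (10.96.0.10):

dig @10.60.20.30 kube-dns.kube-system.svc.cluster.local A +short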

KubeDNS is now reachable through 10.60.20.30, so configure the intranet DNS (dnsmasq) to forward cluster.local resolution to it:

# dnsmasq.conf
strict-order
listen-address=10.60.20.1
bogus-nxdomain=61.139.2.69
server=/cluster.local/10.60.20.30
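After editing the configuration, dnsmasq needs to be reloaded, and a query from any office machine pointed at the intranet DNS confirms the whole chain (a sketch, assuming dnsmasq is managed by systemd):

systemctl restart dnsmasq
# From an office machine that uses 10.60.20.1 as its DNS server:
nslookup mysql.projecta-dev1.svc.cluster.local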

With the above steps completed, the office network and the Kubernetes network can communicate as required, and we can access services inside K8s directly by their K8s Service domain names.
