For developers who use Kubernetes as their application runtime environment, Namespaces make it easy to create multiple isolated environments in the same cluster. Within a Namespace, services can reach one another through the internal DNS name of each Service. Building on Kubernetes' strong isolation and service orchestration capabilities, a single set of orchestration definitions (YAML) can be used to deploy multiple environments.

However, a Kubernetes cluster generally runs a container network that is not directly reachable from the developer's office network. How to efficiently use Kubernetes for joint debugging between services has therefore become an unavoidable problem in daily development work. In this article, we will talk about how to improve the efficiency of Kubernetes-based development.

Use an automated pipeline

To enable developers to quickly deploy modified code to a test environment in the cluster, we generally introduce a continuous delivery pipeline that automates compiling the code, packaging and uploading the image, and deploying it, as illustrated below.
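In essence, such a pipeline automates steps along the lines of the following sketch; the build command, image name, registry address, and Deployment name are placeholders, not taken from any specific product:

# compile and test the code (the build command depends on your stack)
$ mvn package
# package the image and push it to a registry (image name and registry are placeholders)
$ docker build -t registry.example.com/demo/service-a:latest .
$ docker push registry.example.com/demo/service-a:latest
# roll out the new image to the test Namespace
$ kubectl -n test set image deployment/service-a service-a=registry.example.com/demo/service-a:latest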



To some extent, this approach saves developers a lot of repetitive work. But although the process is automated, developers now have to wait for the pipeline to run after every code change, and that wait has arguably become the worst part of the development experience.

Break the network barrier: local joint debugging

Ideally, a developer could start the service locally and have it interact seamlessly with the other services running in the remote Kubernetes cluster. Two problems need to be solved:

  • I rely on other services: code running locally can directly access other applications deployed in the cluster through their Pod IP, ClusterIP, or even Kubernetes internal DNS names, as shown on the left of the figure.
  • Other services rely on me: other applications running in the Kubernetes cluster can access the code running on my local machine without any modification, as shown on the right of the figure.



To support both of these local joint debugging scenarios, three problems need to be solved:

  • Connect the local network directly to the Kubernetes cluster network
  • Resolve Kubernetes internal service DNS names on the local machine
  • Forward traffic that targets specific Pods in the cluster to the local machine

KT: the Cloud Effect developer tool

To reduce the complexity of joint debugging and testing on Kubernetes, Cloud Effect built KT, a free helper tool for developers (click to download), on top of an SSH tunnel network combined with Kubernetes features, as shown below:



When the locally running Service C′ needs to directly access Service A and Service B in the cluster's default namespace, run the following command:

$ ktctl -namespace=default

KT automatically deploys the SSH/DNS proxy container in the cluster, builds a VPN network from the local machine to the Kubernetes cluster, and resolves cluster service DNS names through the DNS proxy. Once KT is running, the developer's local application can call other applications deployed in the cluster directly by service name, just as if it were running inside the cluster:
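Assuming, purely for illustration, that Service A exposes an HTTP endpoint on port 8080 (the service name, port, and path below are assumptions, not taken from the figure), the local code or a local shell could now call it like this:

# call Service A in the default namespace by its Kubernetes DNS name
$ curl http://service-a.default.svc.cluster.local:8080/health
# the plain service name works too, since the DNS proxy resolves cluster domains
$ curl http://service-a:8080/health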

If you want other Pods in the cluster (such as Pod D and Pod E in the figure) to reach the locally running service C′ through Service C, specify the target Deployment to replace and the local service port with the following command:

# -swap-deployment specifies the target Deployment to be replaced
# -expose specifies the port on which the local service runs
$ ktctl -swap-deployment c-deployment -expose=8080

While building the VPN network, KT also uses the proxy container to take over the original Pod C instances in the cluster and forwards the requests they receive directly to local port 8080, enabling joint debugging of cluster applications against local code.
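As a quick check (the local start command and the pod and service names below are hypothetical), you can start service C locally on the exposed port and send a request from any other Pod in the cluster; it should be answered by the local process:

# start the local build of service C on the port given to -expose (command is illustrative)
$ ./c-service --port=8080
# from another Pod in the cluster, the original Service name now reaches the local process
$ kubectl exec -it pod-d -- curl http://c-service:8080/health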

With these two commands, developers can develop and debug Kubernetes applications in a truly cloud-native way.

How KT works

Below we analyze how KT works. If you are already eager to try it out, you can go ahead and download the KT tool directly.

KT is mainly composed of two parts:

  • The command-line tool ktctl, which runs locally
  • An SSH/DNS proxy container, which runs in the cluster

In essence, KT is an SSH-based VPN built on top of Kubernetes' own capabilities. In this part, we will look at how the Cloud Effect Kubernetes developer tool KT works in detail.

Opening the SSH channel

The port-forward command built into the Kubernetes command line tool kubectl helps users establish network forwarding between a local port and a specific Pod instance port in a Kubernetes cluster.

After deploying a container that runs an SSHD service in the cluster, we can use port-forward to map the container's SSH service port to a local port:

# forward local port 2222 to port 22 of the kt-proxy instance
$ kubectl port-forward deployments/kt-proxy 2222:22
Forwarding from 127.0.0.1:2222 -> 22
Forwarding from [::1]:2222 -> 22

Once port forwarding is running, we can reach the kt-proxy instance in the Kubernetes cluster over the SSH protocol via local port 2222. This establishes the SSH network link between the local machine and the cluster.
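As a sanity check, you can log in to the proxy container through the forwarded port with a plain SSH client (assuming the proxy image accepts a root login, which is an assumption here rather than something stated above):

# log in to the kt-proxy container through the locally forwarded port
$ ssh root@127.0.0.1 -p 2222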

Local dynamic port forwarding and VPN

With the SSH link in place, we can route network requests from the local machine to the cluster through the SSH channel. The most basic way to do this is SSH dynamic port forwarding.

The following command starts a SOCKS proxy on local port 2000 that forwards network requests through the kt-proxy container running in the cluster:

# ssh -D [local bind address:]<local port> <user>@<host> -p <local port mapped to port 22 of kt-proxy>
$ ssh -D 2000 root@127.0.0.1 -p 2222

After SSH dynamic port forwarding is enabled and the http_proxy environment variable is set, you can directly access the cluster network from the command line interface:

# export http_proxy=socks5://127.0.0.1:<port of the SSH dynamic forward>
$ export http_proxy=socks5://127.0.0.1:2000
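With the variable set, command-line tools that honor http_proxy can reach in-cluster addresses through the SOCKS5 tunnel, for example by ClusterIP (the address below is made up for illustration):

# reach a Service through the SOCKS5 proxy by its ClusterIP
$ curl http://172.21.0.10:8080/health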

However, native SSH dynamic port forwarding has a limitation: it cannot carry UDP traffic directly. Here we choose an alternative, sshuttle. The command is as follows:



$ sshuttle --dns --to-ns 172.16.1.36 -e 'ssh -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null' -r root@127.0.0.1:2222 172.16.1.0/16 172.19.1.0/16 -vv

sshuttle builds a simple SSH-based VPN and supports DNS forwarding.

The remaining problem is providing a DNS service that can resolve cluster names; in KT, this DNS proxy is built directly into the kt-proxy image.
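Once DNS queries are forwarded to that in-cluster DNS proxy, cluster service names resolve on the local machine. A quick sanity check could look like this (the service name is hypothetical):

# resolve a cluster Service name locally; the answer should be its ClusterIP
$ nslookup service-a.default.svc.cluster.local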

Remote port forwarding

With the link from local to cluster established, the next step is the access path from the cluster back to the local machine. Here we use SSH remote port forwarding: as shown below, all network requests that reach port 8080 of kt-proxy are forwarded directly to local port 8080 through the SSH tunnel:

# forward all requests that reach port 8080 of kt-proxy to local port 8080
$ ssh -R 8080:localhost:8080 root@127.0.0.1 -p 2222

Therefore, in KT's implementation, thanks to Kubernetes' label-based loose coupling, we only need to clone the YAML description of the original application instance and replace its container with kt-proxy. In this way, requests to the existing application in the cluster are forwarded to the local machine through the SSH remote port, as the sketch below illustrates.
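The sketch below shows the idea with plain kubectl; it is not ktctl's actual implementation, and the Deployment name, file name, and proxy image are placeholders. Because a Service selects Pods by label, a cloned Deployment that keeps the original Pod labels but runs the kt-proxy image receives the Service's traffic in place of the original application.

# dump the original Deployment as a starting point for the shadow copy
$ kubectl get deployment c-deployment -o yaml > c-shadow.yaml
# edit c-shadow.yaml: rename the Deployment, keep the Pod labels,
# and replace the application container image with the kt-proxy image
$ kubectl scale deployment c-deployment --replicas=0
$ kubectl apply -f c-shadow.yaml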

To sum up, by building on Kubernetes' native capabilities with a modest amount of extension, KT lets developers quickly break down the boundary between the local network and the Kubernetes network and greatly improves the efficiency of joint debugging on Kubernetes.

Summary

Tools carry solutions to specific problems, while engineering practices magnify their value. The Alibaba Cloud Effect platform is committed to providing one-stop enterprise R&D and collaboration services for developers and to giving Alibaba's years of software engineering practice back to the technical community; more developers are welcome to come on board.

