“K8S Ecology Weekly” mainly covers noteworthy news from the K8S ecosystem that I have come across during the week. Welcome to subscribe to the Zhihu column “K8S Ecology”.

KIND v0.11.0 released

My friends must be familiar with KIND (Kubernetes IN Docker). This is a project I have been participating in and using a lot: it makes it very convenient to run Docker containers as Kubernetes nodes and to quickly start one or more test clusters. It has been four months since the last release, so let’s take a look at some of the notable changes.
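
For example, a three-node test cluster (one control plane, two workers) can be described in a small config file. This is just an illustrative sketch; the node layout is arbitrary:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
# One control-plane node and two workers
nodes:
- role: control-plane
- role: worker
- role: worker

Assuming the file is saved as kind-config.yaml, running kind create cluster --config kind-config.yaml brings the whole cluster up in about a minute.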

Breaking changes

  • The default Kubernetes version in this release is v1.21.1;
  • The Bazel-based way of building images has been removed, so the --type parameter of kind build node-image is no longer valid;
  • The --kube-root parameter of kind build node-image is deprecated; the Kubernetes source directory will now be located following the standard lookup rules;

New features

  • kind build node-image gained a new --arch parameter that supports building multi-architecture images;
  • KIND’s pre-built images are now multi-arch and run on both AMD64 and ARM64 architectures;
  • KIND can now run in rootless Docker and rootless Podman. For details, see “KIND running in Rootless Docker”;
  • KIND’s default CNI, kindnetd, now supports dual-stack networking, which is enabled by default in k8s v1.21 (see the sketch after this list);
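
As an illustration of the dual-stack support, a cluster config along these lines should work. This is a minimal sketch assuming the ipFamily networking option described in KIND’s documentation:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # Request IPv4/IPv6 dual-stack networking (requires a k8s v1.21 node image)
  ipFamily: dual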

You can install the latest version of KIND in any of the following ways:

  • GO111MODULE="on" go get sigs.k8s.io/kind@v0.11.0;
  • wget -O kind https://kind.sigs.k8s.io/dl/v0.11.0/kind-linux-amd64;
  • clone the KIND code repository and run make build;

For more about using KIND, please refer to the official documentation at kind.sigs.k8s.io. Welcome to download and use it.

apisix-ingress-controller v0.6.0 released

Apache APISIX Ingress Controller is the control-plane component for Apache APISIX. It can publish custom resources (CRs) and native Ingress resources in Kubernetes to APISIX, which then acts as the gateway managing north-south traffic. Let’s take a look at some of the notable changes in v0.6.0:

  • #115 supports TCP proxying;
  • #242 adds labels to resources pushed by the ingress controller;
  • Added JSON Schema validation for ApisixUpstream and ApisixTls (see the sketch after this list);
  • Records Kubernetes events during resource processing;
  • #395 reports the status of resources;
  • Added global_rules configuration for cluster-level plugins.
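
To give a feel for the resources being validated, here is a minimal sketch of an ApisixUpstream that tunes load balancing for a backing Service. The field names follow the apisix.apache.org/v1 API group as I understand it from the project’s examples and may differ between versions, so treat this as illustrative:

apiVersion: apisix.apache.org/v1
kind: ApisixUpstream
metadata:
  name: httpbin
spec:
  # Use round-robin load balancing for the backing Service
  loadbalancer:
    type: roundrobin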

Cilium v1.10.0 has been released

Cilium, which I have covered many times in previous articles, is based on eBPF and provides transparent proxying and protection for network and API connections between application services in Kubernetes. For a quick look at Cilium, you can refer to my earlier article “Getting Started with Cilium”; for a quick look at eBPF, you can check out my talk at PyCon China 2020.

Cilium v1.10 is a major feature release that brings many notable capabilities. Let’s take a look!

Egress IP Gateway

When integrating cloud-native applications with traditional applications, which are mostly authorized via IP whitelists, the dynamic nature of Pod IPs makes IP address management a pain point.

In the new version of Cilium, a new Kubernetes CRD makes it possible to associate a static IP with traffic as packets leave the Kubernetes cluster, so that external firewalls can identify Pod traffic by this consistent static IP.

In effect, Cilium performs the NAT for you, and it is very simple to use:

apiVersion: cilium.io/v2alpha1
kind: CiliumEgressNATPolicy
metadata:
  name: egress-sample
spec:
  egress:
  - podSelector:
      matchLabels:
        # The following label selects default namespace
        io.kubernetes.pod.namespace: default
  destinationCIDRs:
  - 192.168.33.13/32
  egressSourceIP: "192.168.33.100"

This configuration means that outgoing traffic from Pods in the default namespace to the destination CIDR leaves the cluster with the IP configured in egressSourceIP as its source address.

Support for BGP integration

The lack of BGP support used to be one of the main reasons people passed on Cilium, but as of this release there is no need to worry!

Cilium achieves BGP L3 support by integrating MetalLB, so that Cilium can assign IP addresses to LoadBalancer Services and advertise them to routers via BGP, allowing external traffic to reach the services normally.

Configuring BGP support is also simple:

apiVersion: v1
kind: ConfigMap
metadata:
  name: bgp-config
  namespace: kube-system
data:
  config.yaml: |
    peers:
      - peer-address: 10.0.0.1
        peer-asn: 64512
        my-asn: 64512
    address-pools:
      - name: default
        protocol: bgp
        addresses:
          - 192.0.2.0/24

peers is used to peer with existing BGP routers in the network; address-pools defines the IP pools from which Cilium allocates addresses for LoadBalancer Services.
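
With this in place, an ordinary LoadBalancer Service picks up an address from the pool and gets advertised over BGP. A minimal sketch, where the app name and ports are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  # Cilium assigns an IP from 192.0.2.0/24 and advertises it via BGP
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080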

Standalone load balancing based on XDP

Cilium’s eBPF-based load balancer recently added support for Maglev consistent hashing and forwarding-plane acceleration at the eXpress Data Path (XDP) layer, which allows it to also serve as a standalone layer-4 load balancer.

Cilium’s XDP L4LB has full IPv4/IPv6 dual-stack support and can be deployed independently of a Kubernetes cluster as a programmable L4 load balancer.
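
If you install Cilium via Helm, the Maglev and XDP features can be switched on with values along these lines. The loadBalancer.* keys are my reading of Cilium’s documentation, so verify them against your chart version:

# Cilium Helm values (sketch; verify keys against your chart version)
loadBalancer:
  # Maglev consistent hashing for backend selection
  algorithm: maglev
  # Accelerate the forwarding plane with native XDP
  acceleration: native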

Other

In addition, this release adds WireGuard support for encrypting traffic between Pods, introduces a new Cilium CLI for managing Cilium clusters, and delivers better performance than ever!
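
For example, WireGuard encryption can be enabled through Helm values like the following; the encryption.* keys are an assumption based on Cilium’s documentation:

# Cilium Helm values (sketch; verify keys against your chart version)
encryption:
  # Encrypt Pod-to-Pod traffic with WireGuard
  enabled: true
  type: wireguard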

For more information about the changes in the Cilium project, refer to its release notes.

Upstream progress

  • runc released v1.0.0-rc95, probably the last release before v1.0;

  • The CNCF networking team has defined a set of specifications, Service Mesh Performance, to provide a unified standard for measuring the performance of service meshes.

Please feel free to subscribe to my official account [MoeLove]