The original link: isovalent.com/blog/post/2…

Translators: Fan Bin, Di Wei hua, Miyang Yang, of the Isovalent team (the company behind Cilium)

Note: this translation has been authorized by the original author.

The Cilium project has become a star project, and we are proud to be at its heart. A few days ago we released Cilium 1.11, an exciting release with many new features, among them the much-anticipated beta version of Cilium Service Mesh. In this article, we take a closer look at some of these new features.

Service Mesh (Beta)

Before we dive into version 1.11, let’s take a look at the new features of the Service Mesh announced by the Cilium community.

  • Service Mesh (Beta) based on eBPF technology: defines new service mesh capabilities, including L7 traffic management and load balancing, TLS termination, canary releases, tracing, and more.
  • Integrated Kubernetes Ingress (Beta) functionality: Support for Kubernetes Ingress through a powerful combination of eBPF and Envoy.

An article on the Cilium website describes the Service Mesh Beta in detail, including how to participate in the development of this feature. Currently, these Beta features are part of the Cilium project, being developed in a separate branch for independent testing, feedback and modification, and we look forward to incorporating them into the Cilium main branch prior to the release of Cilium version 1.12 in early 2022.

Cilium 1.11

Cilium 1.11 includes many new Kubernetes features, as well as enhancements to the standalone load balancer.

  • OpenTelemetry support: export of tracing data and metrics in OpenTelemetry format. (More details)
  • Kubernetes APIServer policy matching: a new policy entity makes it easy to model policies for traffic to and from the Kubernetes APIServer. (More details)
  • Topology-aware routing: improved load balancing that routes traffic to the closest endpoints or keeps it within a region, based on topology-aware hints. (More details)
  • BGP announcement of Pod CIDRs: uses BGP to announce Pod CIDR IP routes to the network. (More details)
  • Graceful termination of service backends: graceful connection termination is supported, so traffic sent through the load balancer to terminating Pods can still be processed and drained normally. (More details)
  • Host firewall promoted to stable: the host firewall has been upgraded to a stable, production-ready feature. (More details)
  • Improved load balancer scalability: the Cilium load balancer now supports more than 64k backend endpoints. (More details)
  • Improved load balancer device support: the load balancer's accelerated XDP fast path now supports bond devices (more details) and is more broadly usable in multi-device setups. (More details)
  • Kube-proxy-replacement support for Istio: Cilium's kube-proxy replacement mode is compatible with Istio sidecar deployments. (More details)
  • Egress gateway optimization: egress gateway capabilities have been enhanced to support additional datapath modes. (More details)
  • Managed IPv4/IPv6 neighbor discovery: the Linux kernel and the Cilium load balancer have been extended to remove Cilium's internal ARP library and delegate next-hop discovery, for IPv4 and now also IPv6, to the kernel. (More details)
  • Routing-based device detection: external network devices are now detected automatically based on routes, improving the user experience of multi-device Cilium setups. (More details)
  • Kubernetes cgroup enhancements: Cilium's kube-proxy replacement is integrated with cgroup v2 mode, and the Linux kernel has been enhanced for mixed cgroup v1/v2 environments. (More details)
  • Cilium Endpoint Slices: Cilium in CRD mode can interact with the Kubernetes control plane more efficiently, without requiring a dedicated etcd instance, and scales to 1,000+ nodes. (More details)
  • Mirantis Kubernetes Engine integration: the Mirantis Kubernetes Engine is now supported. (More details)

What is Cilium?

Cilium is open source software that transparently provides network and API connectivity and security between services deployed on the Kubernetes-based Linux container management platform.

Cilium is built on eBPF, a new Linux kernel technology that can dynamically inject powerful security, visibility, and network-control logic into the Linux kernel. Cilium uses eBPF to provide multi-cluster routing, load balancing that replaces kube-proxy, transparent encryption, network and service security, and many other capabilities. In addition to providing traditional network security, the flexibility of eBPF enables security that is aware of application protocols and DNS requests/responses. Meanwhile, Cilium is tightly integrated with Envoy and provides a Go-based extension framework. Because eBPF runs inside the Linux kernel, all Cilium functionality can be applied without any changes to application code or container configuration.

Please see the [Introduction to Cilium] section for a more detailed introduction to Cilium.

OpenTelemetry support

The new version adds support for OpenTelemetry.

OpenTelemetry is a CNCF project that defines telemetry protocols and data formats, covering distributed tracing, metrics, and logging. The project provides an SDK and a collector running on Kubernetes. Typically, application instrumentation exposes OpenTelemetry data directly, and this instrumentation is most often implemented within the application using the OpenTelemetry SDK. The OpenTelemetry collector is used to collect data from various applications in the cluster and send it to one or more back ends. The CNCF project Jaeger is one of the backends that can be used to store and render trace data.

The OpenTelemetry-enabled Hubble adapter is an add-on that can be deployed to clusters running Cilium (Cilium 1.11 is preferred, but it should also work with older versions). The adapter is an OpenTelemetry collector with an embedded Hubble receiver, and we recommend deploying it with the OpenTelemetry Operator (see the user guide). The Hubble adapter reads flow data from Hubble and converts it into trace and log data.

Adding the Hubble adapter to a cluster that already uses OpenTelemetry provides valuable observability by correlating network events with application-level telemetry. The current version correlates HTTP traffic with spans exported via the OpenTelemetry SDK.
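
The user guide walks through the exact deployment. As a rough, hedged sketch only (the collector image, the hubble receiver name, and the endpoint below are assumptions and will likely differ from the shipped configuration), an OpenTelemetryCollector resource managed by the OpenTelemetry Operator could look something like this:

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: hubble-otel
  namespace: kube-system
spec:
  mode: daemonset
  # hypothetical image: a collector build that bundles the Hubble receiver
  image: ghcr.io/cilium/hubble-otel/otelcol:latest
  config: |
    receivers:
      hubble:
        # assumed address of the local Hubble socket
        endpoint: unix:///var/run/cilium/hubble.sock
    exporters:
      jaeger:
        endpoint: jaeger-collector.jaeger.svc:14250
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [hubble]
          exporters: [jaeger]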

Topology-aware load balancing

It is common for Kubernetes clusters to be deployed across multiple data centers or availability zones. This not only brings high availability benefits, but also some operational complexity.

So far, Kubernetes has had no built-in construct for describing the topological location of Kubernetes Service endpoints. This means that the Service endpoint a Kubernetes node selects when making a load-balancing decision may be in a different availability zone than the client requesting the service. This scenario has a number of side effects: cloud service costs may increase, usually because cloud providers charge extra for traffic that crosses availability zones, and request latency may rise. More broadly, we need a way to define the location of service endpoints topologically, for example so that service traffic is load balanced between endpoints on the same node, rack, zone, region, or cloud provider.

Kubernetes v1.21 introduces a feature called topology-aware routing to address this limitation. When the service.kubernetes.io/topology-aware-hints annotation is set to auto on a Service, the EndpointSlice controller sets hints on the endpoints in that Service's EndpointSlice objects, indicating the zone each endpoint runs in. The zone name is taken from the node's topology.kubernetes.io/zone label; two nodes with the same zone label value are considered to be at the same topology level.
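
For example, a Service opts in to topology-aware hints like this (the Service name and ports are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # ask the EndpointSlice controller to populate per-zone hints for this Service
    service.kubernetes.io/topology-aware-hints: auto
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080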

These hints are consumed by Cilium's kube-proxy replacement, which filters the endpoints it routes to based on the hints set by the EndpointSlice controller, allowing the load balancer to preferentially select endpoints in the same zone.

The Kubernetes feature is currently in Alpha, so it needs to be enabled with the --feature-gates flag. Please refer to the official documentation for more information.

Kubernetes APIServer policy matching

The IP address of kube-apiserver is opaque in managed Kubernetes environments such as GKE, EKS, and AKS. In previous versions of Cilium, there was no formal way to write Cilium network policies that define access control to kube-apiserver; doing so involved implementation details such as how Cilium assigns security identities and whether kube-apiserver is deployed inside or outside the cluster.

To address this issue, Cilium 1.11 adds a new feature that gives users a way to define access control for traffic to and from the apiserver using a dedicated policy object. Underneath this feature is an entity selector that understands the meaning of the reserved kube-apiserver label and automatically applies it to the IP addresses associated with kube-apiserver.

This new feature will be of particular interest to security teams, because it provides an easy way to define Cilium network policies that allow or deny Pods access to kube-apiserver. The CiliumNetworkPolicy fragment below allows all Cilium endpoints in the kube-system namespace to access kube-apiserver, while all other Cilium endpoints are denied access to kube-apiserver.

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-to-apiserver
  namespace: kube-system
spec:
  endpointSelector: {}
  egress:
  - toEntities:
    - kube-apiserver

BGP announcement of Pod CIDRs

With the growing focus on private Kubernetes environments, we want to integrate well with existing data center network infrastructure, which is usually based on BGP routing. In the previous release, the Cilium agent gained initial BGP integration, advertising the VIPs of LoadBalancer services to BGP routers.

Cilium 1.11 now also introduces the ability to announce Kubernetes Pod subnets over BGP. Cilium can create a BGP peering with any connected downstream BGP infrastructure and advertise the subnets from which Pod IP addresses are assigned. This allows the downstream infrastructure to distribute these routes as appropriate, so that the data center can route to the Pod subnets via various private/public next hops.

To start using this feature, Kubernetes nodes running Cilium need to read BGP settings from a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: bgp-config
  namespace: kube-system
data:
  config.yaml: |
    peers:
      - peer-address: 192.168.1.11
        peer-asn: 64512
        my-asn: 64512

At the same time, Cilium should be installed with the following parameter:

$ cilium install \
    --config="bgp-announce-pod-cidr=true"

After Cilium starts up, it will announce the Pod CIDR range to the BGP router at 192.168.1.11.

Below is a full demo video from a recent episode of eCHO:

  • www.youtube.com/watch?v=nsf…

For more information, such as how to configure LoadBalancer IP announcement for Kubernetes Services and how to advertise a node's Pod CIDR range via BGP, see docs.cilium.io.

Managed IPv4/IPv6 neighbor discovery

When Cilium's eBPF kube-proxy replacement is enabled, Cilium performs neighbor discovery of cluster nodes to collect the L2 addresses of direct neighbors or next hops in the network. This is necessary for service load balancing: the eXpress Data Path (XDP) fast path reliably supports traffic rates of millions of packets per second, and in this mode dynamic on-demand resolution is technically impossible, because it would mean waiting for the neighboring back end to be resolved.

In Cilium 1.10 and earlier, the Cilium agent itself included an ARP resolution library whose controller triggered discovery and periodic refresh of newly added cluster nodes. The manually resolved neighbor entries were pushed into the kernel and refreshed as PERMANENT entries, which the eBPF load balancer relies on to direct traffic to the back ends. The agent's ARP resolution library lacked support for IPv6 neighbor resolution, and PERMANENT neighbor entries had many problems: for example, entries could become stale, and because of their static nature the kernel refused to learn address updates, which in some cases caused packets to be dropped between nodes. In addition, tightly coupling neighbor resolution to the Cilium agent has the drawback that no address updates are learned while the agent is restarting.

In Cilium 1.11, neighbor discovery has been completely redesigned, and the internal ARP resolution library has been removed from the agent. The agent now relies on the Linux kernel to discover next hops or hosts in the same L2 domain, and both IPv4 and IPv6 neighbor discovery are supported. For v5.16 or newer kernels, we have upstreamed the work on "managed" neighbor entries that we presented at the BPF & Networking Summit at this year's Linux Plumbers Conference (parts 1, 2, 3). In this case, the agent pushes down the L3 addresses of new cluster nodes and lets the kernel automatically and periodically resolve their corresponding L2 addresses.

These neighbor entries are pushed into the kernel via netlink with the "externally learned" and, where supported, "managed" neighbor attributes. While the externally-learned attribute ensures that these entries are not removed by the kernel's garbage collector under pressure, the managed attribute tells the kernel to automatically keep them in REACHABLE state whenever possible. That is, even if the node's upper stack is not actively sending or receiving traffic to the backend node, the kernel can relearn and keep the entries in REACHABLE state by periodically triggering explicit neighbor resolution through an internal kernel work queue. For older kernels without the managed attribute, the agent controller periodically prompts the kernel to trigger a new resolution when needed. As a result, Cilium no longer uses PERMANENT neighbor entries, and on upgrade the agent automatically migrates old entries to dynamic neighbor entries so the kernel can learn address updates for them.
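
To illustrate (assuming a v5.16+ kernel and an iproute2 release that understands the new flag; the address and device below are hypothetical), the new entries can be inspected and reproduced with ip neigh:

# Next hops managed by Cilium should now appear with the "extern_learn"
# (and, on v5.16+, "managed") flags instead of PERMANENT entries.
ip -4 neigh show
ip -6 neigh show

# Roughly what the agent pushes down for a new cluster node's next hop:
# the kernel then resolves and periodically refreshes the L2 address itself.
ip neigh replace 192.168.1.12 dev eth0 managed extern_learn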

In addition, in the case of multipath routing, the load balancing performed by the agent can now skip failed next hops during route lookups. This means that instead of replacing all routes, failed paths are avoided by consulting the state of the neighbor subsystem. Overall, this work significantly simplifies neighbor management for the Cilium agent, and the datapath adapts more readily when a node's neighbor address or a next hop in the network changes.

XDP Multi-device load balancer

Prior to this release, XDP-based load balancer acceleration could only be enabled on a single network device, running in hairpin mode, where packets are forwarded out of the same device they arrived on. This initial restriction existed because multi-device forwarding at the XDP layer (XDP_REDIRECT) has limited driver support, whereas same-device forwarding (XDP_TX) is supported by every XDP-capable driver in the Linux kernel.

This meant that in environments with multiple network devices we had to fall back to the tc eBPF mechanism, i.e. Cilium's regular kube-proxy replacement without XDP acceleration. A typical example of such an environment is a host with two network devices, one facing the public network and receiving external requests for Kubernetes Services, and the other a private network used for intra-cluster communication between Kubernetes nodes.

Since most 40G and 100G upstream NIC drivers support XDP_REDIRECT out of the box on modern LTS Linux kernels, this restriction can finally be lifted. In this release, Cilium's kube-proxy replacement and Cilium's standalone load balancer implement load balancing across multiple network devices at the XDP layer, so packet-processing performance can be maintained even in more complex environments.
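
As a hedged sketch (the Helm values below are assumptions based on Cilium's charts and may differ slightly between versions), enabling the XDP fast path across several devices could look roughly like this:

helm install cilium cilium/cilium --version 1.11.0 \
  --namespace kube-system \
  --set kubeProxyReplacement=strict \
  --set loadBalancer.acceleration=native \
  --set 'devices={eth0,eth1}'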

XDP transparently supports bond devices

In many enterprise or cloud environments, nodes use bond devices to aggregate dual-port NICs for external traffic. With recent Cilium optimizations such as the kube-proxy replacement and the standalone load balancer at the XDP layer, a question we often receive from users is whether XDP acceleration can be used together with bond network devices. While the vast majority of 10/40/100 Gbit/s network drivers in the Linux kernel support XDP, the kernel lacked the ability to operate XDP transparently in bonding (and 802.3ad) mode.

One option would have been to implement 802.3ad in user space and bond load balancing in an XDP program, but managing bond devices this way would have been fairly tedious: for example, it requires watching netlink events, and the orchestrator would also need separate programs for the native and bonded cases. Instead, the native kernel implementation addresses these issues, provides more flexibility, and can handle eBPF programs without the need to change or recompile them. The kernel takes care of managing the bond device group and can automatically propagate eBPF programs. For v5.15 or newer kernels, we have implemented XDP support for bond devices upstream (part 1, part 2).

When an XDP program is attached to a bond device, the semantics of XDP_TX are equivalent to those of tc eBPF programs attached to the bond device: the packet is transmitted out of a slave device selected according to the bond's configured transmit policy. Both failover and link-aggregation modes can be used under XDP. For transmitting packets back out of the bond device via XDP_TX, we implemented round-robin, active-backup, 802.3ad, and hash device selection. This case is particularly relevant for hairpin load balancers such as Cilium.

Device detection based on routing

Version 1.11 significantly improves the automatic device detection used by the eBPF kube-proxy replacement, the bandwidth manager, and the host firewall.

In earlier versions, Cilium only auto-detected the devices carrying the default route and the devices with the Kubernetes NodeIP. Device detection is now performed against all routing-table entries in the host namespace; that is, all non-bridge, non-bond, and non-virtual devices that carry a global unicast route are now detected.

With this improvement, Cilium should now be able to automatically detect the correct devices in more complex network setups, rather than requiring devices to be specified manually with the devices option. With the latter option, device names must follow a consistent naming convention, for example matching a common prefix regular expression.

Graceful termination of service back-end traffic

Kubernetes can terminate a Pod for a variety of reasons, such as rolling updates, scaling, or user-initiated deletions. In this case, it is important to gracefully terminate the active connection to the Pod, giving the application time to complete the request to minimize disruptions. Abnormal connection termination can result in data loss or delay application recovery.

The Cilium agent listens for service endpoint updates through the EndpointSlice API. When a service endpoint is being terminated, Kubernetes marks the endpoint as terminating. The Cilium agent then removes the endpoint's datapath state so that the endpoint is not selected for new requests, while the connections it is currently serving can still complete within a user-defined grace period.

At the same time, Kubernetes tells the container runtime to send SIGTERM signals to the service’s Pod container and wait for the termination grace period. The container application can then initiate graceful termination of an active connection, for example, by closing a TCP socket. Once the grace period ends, Kubernetes eventually triggers a forced shutdown of processes still running in the Pod container via SIGKILL signals. At this point, the agent also receives a deletion event for the endpoint and then completely deletes the endpoint’s data path state. However, if the application Pod exits before the grace period ends, Kubernetes will immediately send a delete event regardless of the grace period setting.
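
For illustration, a hypothetical Deployment that cooperates with this sequence simply sets a termination grace period and gives itself a moment to drain in a preStop hook (names and values below are examples, not part of Cilium):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      # how long Kubernetes waits after SIGTERM before sending SIGKILL
      terminationGracePeriodSeconds: 60
      containers:
        - name: server
          image: nginx:1.21
          lifecycle:
            preStop:
              exec:
                # small pause so the load balancer stops selecting this Pod
                # before the server begins shutting down
                command: ["/bin/sh", "-c", "sleep 5"]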

Follow the guide at docs.cilium.io for more details.

Egress gateway optimization

In simple scenarios, Kubernetes applications only communicate with other Kubernetes applications, so traffic can be controlled through mechanisms such as network policies. This is not always the case in the real world: some privately deployed applications are not containerized, and Kubernetes applications need to communicate with services outside the cluster. These legacy services are typically configured with static IPs and protected by firewall rules. So how should traffic be controlled and audited in this case?

The egress IP gateway feature, introduced in Cilium 1.10, addresses this problem by using a Kubernetes node as a gateway for cluster egress traffic. Users specify via policies which traffic should be forwarded to the gateway node and how it should be forwarded. The gateway node then masquerades the traffic with a static egress IP, so rules can be established on traditional firewalls.

apiVersion: cilium.io/v2alpha1
kind: CiliumEgressNATPolicy
metadata:
  name: egress-sample
spec:
  egress:
    - podSelector:
        matchLabels:
          app: test-app
  destinationCIDRs:
    - 1.2.3.0/24
  egressSourceIP: 20.0.0.1

In the example policy above, traffic from Pods with the app: test-app label and destined for the CIDR 1.2.3.0/24 communicates with the outside of the cluster through the gateway node, SNATed to the egress IP 20.0.0.1.

During the Cilium 1.11 development cycle, we put a lot of effort into stabilizing the egress gateway functionality and making it production-ready. The egress gateway now works in direct-routing mode, can distinguish internal traffic (i.e., egress policies whose destination CIDRs overlap with Kubernetes addresses), and can use the same egress IP in different policies. Issues such as replies being incorrectly classified as egress traffic have been fixed, and testing has been improved to catch potential problems early.

Kubernetes Cgroup enhancement

One of the advantages of using Cilium's eBPF-based standalone load balancer instead of kube-proxy is its ability to attach eBPF programs to socket hooks such as connect(2), bind(2), sendmsg(2), and various other related system calls, in order to transparently connect local applications to back-end services. However, these programs can only be attached to cgroup v2. Although Kubernetes is working on migrating to cgroup v2, the vast majority of users currently run a mixed cgroup v1/v2 environment.
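
To check which cgroup mode a node is actually running (ordinary Linux commands, nothing Cilium-specific), you can inspect the cgroup filesystem:

# "cgroup2fs" indicates pure cgroup v2; "tmpfs" usually indicates cgroup v1 or a hybrid setup
stat -fc %T /sys/fs/cgroup/

# In hybrid setups, the v2 hierarchy is typically mounted separately (often under "unified")
mount -t cgroup2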

Linux records the relationship between a socket and its cgroup in the kernel socket object, and due to a restriction introduced six years ago, cgroup v1 and v2 socket labels are mutually exclusive. This means that if a socket is created as a cgroup v2 member but is later tagged by a net_prio or net_cls controller with cgroup v1 membership, the eBPF programs attached to the Pod's cgroup v2 subpath are not executed; instead, execution falls back to the eBPF program attached at the root of the cgroup v2 hierarchy. If no program is attached at the cgroup v2 root, the cgroup v2 hierarchy is bypassed entirely.

Today, the assumption that cgroup v1 and v2 cannot run in parallel no longer holds, as explained in a Linux Plumbers Conference talk earlier this year. It is only in rare cases that a cgroup v1 network controller in a Kubernetes cluster will bypass eBPF programs attached to a subpath of the cgroup v2 hierarchy. To address this problem as early as possible in the packet-processing path, the Cilium team recently implemented a fix in the Linux kernel that allows the two cgroup versions to operate safely alongside each other in all scenarios (part 1, part 2). This fix not only makes Cilium's cgroup operation fully robust, it also benefits all other eBPF cgroup users in Kubernetes.

In addition, Kubernetes and container runtimes such as Docker have recently announced support for cgroup v2. In cgroup v2 mode, Docker switches to private cgroup namespaces by default, i.e. each container (including Cilium) runs in its own private cgroup namespace. Cilium ensures that its eBPF programs are attached to the correct socket hooks in the cgroup hierarchy, so that Cilium's socket-based load balancing also works in cgroup v2 environments.

Improved load balancer scalability

Main external contributor: Weilong Cui (Google)

Recent testing has shown that the service load balancer hits limits in large Kubernetes environments running Cilium with more than 64,000 Kubernetes endpoints. There were two constraints:

  • Cilium’s local back-end ID allocator for a standalone load balancer using eBPF instead of Kube-Proxy is still limited to the 16-bit ID space.
  • Cilium’s eBPF Datapath back-end mappings for IPv4 and IPv6 are limited to the 16-bit ID space.

To enable the Kubernetes cluster to scale to more than 64,000 Endpoints, Cilium’s ID allocator and associated Datapath structures have been converted to use 32-bit ID space.

Cilium Endpoint Slices

Key external contributors: Weilong Cui (Google), Gobinath Krishnamoorthy (Google)

In version 1.11, Cilium added support for a new mode of operation that greatly improves Cilium’s scalability with a more efficient Pod message broadcast method.

Previously, Cilium propagated Pod IP addresses and security identities by having every cilium-agent watch CiliumEndpoint (CEP) objects, which presented some scalability challenges. The creation, update, or deletion of each CEP object triggers a multicast of watch events whose scale is linear in the number of cilium-agents in the cluster, and each cilium-agent can trigger such a fan-out. With N nodes in the cluster, total watch events and traffic can therefore grow quadratically, at a rate of roughly N^2.

Cilium 1.11 introduces a new CRD, CiliumEndpointSlice (CES), in which CEPs from the same namespace are grouped into CES objects by the operator. In this mode, cilium-agents no longer watch CEPs but watch CESs instead, which greatly reduces the watch events and traffic that need to be broadcast through kube-apiserver, lowering the pressure on kube-apiserver and improving Cilium's scalability.
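
As a hedged sketch (the Helm value name below is an assumption; check the Cilium 1.11 documentation for the exact option), CES mode could be switched on for an existing installation roughly like this:

helm upgrade cilium cilium/cilium --version 1.11.0 \
  --namespace kube-system \
  --reuse-values \
  --set enableCiliumEndpointSlice=true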

Because CES takes so much pressure off kube-apiserver, Cilium in CRD mode no longer relies on a dedicated etcd instance (KVStore mode). For clusters where the number of Pods fluctuates dramatically, we still recommend KVStore in order to offload processing from kube-apiserver to an etcd instance.

This mode strikes a balance between faster propagation of endpoint information and a more scalable control plane. Note that compared with CEP mode, if the number of Pods changes dramatically at larger scale (for example, massive scale-up or scale-down), endpoint information may propagate to remote nodes with higher latency.

We ran a series of "worst-case" scale tests on GKE, the earliest adopter of CES, and found that Cilium scales much better in CES mode than in CEP mode. In a 1,000-node load test, enabling CES reduced peak watch events from 18k/s with CEP to 8k/s with CES, and peak watch traffic from 36.6 Mbps with CEP to 18.1 Mbps with CES. In terms of control-plane node resource usage, peak CPU usage dropped from 28 cores to 10.5 cores.

Please refer to Cilium’s official documentation for details.

Kube-proxy-replacement support for Istio

Many users replace kube-proxy with Cilium's eBPF load balancer in Kubernetes to benefit from the efficient processing of the eBPF datapath, and to avoid kube-proxy's chain of iptables rules, which grows linearly with cluster size.

Architecturally, eBPF's handling of Kubernetes Service load balancing is split into two parts:

  • Handling external service traffic into the cluster (north-south direction)
  • Handle service traffic from within the cluster (east-west)

With eBPF, Cilium handles north-south traffic as close to the driver as possible (for example, via XDP). East-west traffic is handled as close to the application as possible, at the eBPF socket layer: application requests (such as TCP connect(2)) are "connected" directly from the Service virtual IP to one of the back-end IP addresses, avoiding per-packet NAT costs.

Cilium's approach works for most scenarios, with some exceptions such as common service mesh solutions (Istio, etc.). Istio relies on iptables to insert additional redirection rules into the Pod's network namespace, so that application traffic first reaches the proxy sidecar (e.g. Envoy) before leaving the Pod and entering the host namespace. The sidecar then queries the netfilter connection tracker directly from its internal socket via SO_ORIGINAL_DST to retrieve the original service destination address.

Therefore, for Istio and other service mesh scenarios, Cilium changes how Pod-to-Pod (east-west) traffic is handled: it now uses eBPF-based per-packet DNAT, while applications in the host namespace can still use the socket-based load balancer to avoid per-packet NAT costs.

To enable this feature, set bpf-lb-sock-hostns-only: true in the Helm chart of a new Cilium agent deployment. Please refer to Cilium's official documentation for details.
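
As a hedged sketch (assuming the Helm value bpf.lbSockHostnsOnly maps to that agent option; the exact name may differ between chart versions), enabling it on an existing installation could look like this:

helm upgrade cilium cilium/cilium --version 1.11.0 \
  --namespace kube-system \
  --reuse-values \
  --set bpf.lbSockHostnsOnly=true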

Feature enhancements and deprecations

The following features have been further enhanced:

  • The Host Firewall has been promoted from beta to stable. The host firewall protects the host network namespace of cluster nodes by allowing CiliumClusterwideNetworkPolicies to select cluster nodes. Since introducing the host firewall feature, we have significantly increased test coverage and fixed a number of bugs. We have also received feedback from community users who are happy with the feature and consider it ready for production (a minimal example policy is sketched below).
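
A minimal sketch of such a policy (the node label and allowed port below are illustrative, not taken from this release announcement):

apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: host-fw-example
spec:
  description: "Allow intra-cluster traffic and SSH on selected nodes"
  nodeSelector:
    matchLabels:
      node-access: restricted   # example node label selecting which hosts to protect
  ingress:
    # allow traffic from other nodes and Pods in the cluster
    - fromEntities:
        - cluster
    # allow SSH to the host from anywhere
    - toPorts:
        - ports:
            - port: "22"
              protocol: TCP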

The following features have been deprecated:

  • Consul, previously available as a KVStore backend for Cilium, has been deprecated in favor of the better-supported etcd and Kubernetes backends. Cilium developers previously used Consul mainly for local end-to-end testing, but during the recent development cycle it became possible to use Kubernetes directly as the test backend, so Consul can be retired.
  • IPVLAN was previously offered as an alternative to veth for providing cross-node Pod network communication. Driven by the Cilium community, the Linux kernel's veth performance has been greatly improved, to the point where veth is now on par with IPVLAN. See this article: eBPF host routing.
  • Policy Tracing was used by many Cilium users in early versions of Cilium, executed inside a Pod via the cilium policy trace command-line tool. Over time, however, it failed to keep pace with the feature development of the Cilium policy engine. Cilium now provides better tools for examining Cilium policy decisions, such as the Network Policy Editor and Policy Verdicts.
