For how to use Open Policy Agent to implement admission policy control, please refer here.

Open Policy Agent: The Top 5 Kubernetes Admission Control Policies

Kubernetes developers and platform engineers are often under heavy pressure to keep application deployments moving quickly, and frequently make compromises for the sake of speed and schedule. Platform teams are increasingly responsible for ensuring that these compromises, such as a hastily configured Ingress, do not end up exposing customer data to the Internet.

Fortunately, Kubernetes makes it possible to set up policies that avoid these consequences by checking for deployment errors and stopping them before they reach production. To make sure your team's speed does not come at the expense of confidence, here are the top five Kubernetes admission control policies that should be running in your cluster right now.

1. Trusted image repository

This policy is simple but powerful: allow container images to be pulled only from trusted registries, and optionally allow only those images whose addresses match an allow-list of repositories.

Of course, pulling an unknown image from the Internet (or from anywhere other than a trusted registry) carries risks, such as malware. But there are other good reasons to maintain a single trusted source, such as enabling supportability in the enterprise. By ensuring that images come only from trusted registries, you can tightly control the image inventory, reduce the risk of software entropy, and improve the overall security of the cluster.

Related policies:

  • Disallow all images that use the latest tag
  • Allow only images that are signed or match a specific hash/SHA
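The first of these related policies can be sketched in Rego. This is a minimal, hypothetical example of my own (the rule body and message are not from the original article); note that it only catches an explicit latest tag, while an image with no tag at all also defaults to latest and would need an additional check:

```rego
package kubernetes.validating.images

# Deny any Pod container image that explicitly uses the mutable "latest" tag.
deny[msg] {
    some i
    input.request.kind.kind == "Pod"
    image := input.request.object.spec.containers[i].image
    endswith(image, ":latest")
    msg := sprintf("Image '%v' uses the 'latest' tag, which is not allowed", [image])
}
```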

Example policy:

package kubernetes.validating.images

deny[msg] {
    some i
    input.request.kind.kind == "Pod"
    image := input.request.object.spec.containers[i].image
    # "trusted-registry.example.com/" is a placeholder; substitute
    # the prefix of your own trusted registry here.
    not startswith(image, "trusted-registry.example.com/")
    msg := sprintf("Image '%v' comes from untrusted registry", [image])
}

2. Label safety

This policy requires that all Kubernetes resources carry the specified labels in the appropriate format. Labels determine how Kubernetes objects are grouped and how policies apply to them, including where a workload can run (front end, back end, data layer) and which resources can send it traffic, so labeling mistakes can cause hard-to-diagnose deployment and supportability problems in production. In addition, without admission control over how labels are applied, the cluster lacks a basic safeguard. Finally, labels in Kubernetes are flexible and powerful, so entering them by hand invites errors that spread. Apply this policy to ensure that labels are configured correctly and consistently.

Related policies:

  • Ensure that each workload carries specific required annotations
  • Specify taints and tolerations to limit where workloads can be deployed
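The annotation variant of this policy follows the same shape as the label example below it. Here is a hedged sketch, assuming a required "owner" annotation (the annotation key and message are my own illustration, not from the original article):

```rego
package kubernetes.validating.annotations

# Hypothetical sketch: require every resource to carry an "owner"
# annotation so that responsibility for a workload is unambiguous.
deny[msg] {
    not input.request.object.metadata.annotations.owner
    msg := "Every resource must have an 'owner' annotation"
}
```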

Example policy:

package kubernetes.validating.existence

deny[msg] {
    not input.request.object.metadata.labels.costcenter
    msg := "Every resource must have a costcenter label"
}

deny[msg] {
    value := input.request.object.metadata.labels.costcenter
    not startswith(value, "cccode-")
    msg := sprintf("Costcenter code must start with `cccode-`; found `%v`", [value])
}

3. Disallow (or specify) privileged modes

This policy ensures that, by default, containers cannot run in privileged mode, except in the (usually rare) cases where it is explicitly permitted.

In general, you want to avoid running containers in privileged mode because it grants access to host resources and kernel capabilities, including the ability to disable host-level protections. Although containers are somewhat isolated, they ultimately share the same kernel. This means that if a privileged container is compromised, it can become the starting point for an attack on the entire system. Still, there are legitimate reasons to run in privileged mode; just make sure those cases are the exception, not the rule.

Related policies:

  • Disallow unsafe capabilities
  • Disallow containers from running as root (require running as non-root)
  • Set the user ID
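The "run as non-root" related policy can be sketched along the same lines as the privileged-mode example below. This is an assumption-laden sketch of my own (package name and message text are illustrative); the field names follow the standard Pod securityContext schema:

```rego
package kubernetes.validating.nonroot

# Sketch: deny any Pod container that does not explicitly set
# securityContext.runAsNonRoot to true.
deny[msg] {
    some i
    input.request.kind.kind == "Pod"
    c := input.request.object.spec.containers[i]
    not c.securityContext.runAsNonRoot
    msg := sprintf("Container '%v' must set securityContext.runAsNonRoot: true", [c.name])
}
```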

Example policy:

package kubernetes.validating.privileged

deny[msg] {
    some c
    input_container[c]
    c.securityContext.privileged
    msg := sprintf("Container '%v' should not run in privileged mode.", [c.name])
}

input_container[container] {
    container := input.request.object.spec.containers[_]
}

input_container[container] {
    container := input.request.object.spec.initContainers[_]
}

4. Define and control Ingress

An Ingress policy lets you expose specific services as needed (allowing Ingress), or expose no services at all. In Kubernetes, it is very easy to accidentally launch a service that talks to the public Internet (there are many examples of this in the Kubernetes failure stories). An overly loose Ingress can also spin up an unnecessary external LoadBalancer, which can become very expensive (think monthly budget) very quickly. In addition, when two services try to share the same Ingress, it can break the application.

The following example policy prevents Ingress objects in different namespaces from sharing the same hostname. This common problem means that new workloads can “steal” Internet traffic from existing workloads, which can have a range of negative consequences — from service outages to data exposure.

Related policies:

  • Require TLS
  • Disallow/allow specific ports
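The "require TLS" related policy can be sketched as follows. This is a minimal, hypothetical example of my own (the package name and message are illustrative, not from the original article); it simply rejects Ingress objects that define no spec.tls section:

```rego
package kubernetes.validating.ingress_tls

# Sketch: deny Ingress objects that do not configure TLS at all.
deny[msg] {
    input.request.kind.kind == "Ingress"
    not input.request.object.spec.tls
    msg := "Ingress objects must configure TLS"
}
```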

Example policy:

package kubernetes.validating.ingress

deny[msg] {
    is_ingress
    input_host := input.request.object.spec.rules[_].host
    some other_ns, other_name
    other_host := data.kubernetes.ingresses[other_ns][other_name].spec.rules[_].host
    [input_ns, input_name] != [other_ns, other_name]
    input_host == other_host
    msg := sprintf("Ingress host conflicts with ingress %v/%v", [other_ns, other_name])
}

input_ns = input.request.object.metadata.namespace

input_name = input.request.object.metadata.name

is_ingress {
    input.request.kind.kind == "Ingress"
    input.request.kind.group == "extensions"
    input.request.kind.version == "v1beta1"
}

5. Define and control Egress

Every application needs guardrails on how its egress traffic flows. This policy lets you specify both intra-cluster and out-of-cluster traffic. As with Ingress, it is easy to accidentally "allow Egress" to every IP in the world by default. Sometimes this is not even an accident: opening everything up is often a last-ditch effort to make a newly deployed application reachable, even though it is far too loose and introduces risk. At the intra-cluster level, it is also possible to inadvertently send data to services that should not receive it. In both cases, there is a risk of data breach and theft if a service is compromised. On the other hand, an Egress policy that is too strict can cause configuration errors that break the application. Getting the best of both worlds means using this policy to select and specify exactly when and where Egress is allowed to occur.

Related policies:

  • See the Ingress policies above

Example policy:

package kubernetes.validating.egress

# The two allow-list entries were blank in the source; fill in the
# CIDR ranges that egress traffic is allowed to reach.
allow_list := {"", ""}

deny[reason] {
    network_policy_allows_all_egress
    reason := "Network policy allows access to any IP address."
}

deny[reason] {
    count(allow_list) > 0
    input.request.kind.kind == "NetworkPolicy"
    input.request.object.spec.policyTypes[_] == "Egress"
    ipBlock := input.request.object.spec.egress[_].to[_].ipBlock
    not any({t | t := net.cidr_contains(allow_list[_], ipBlock.cidr)})
    reason := "Network policy allows egress traffic outside of allowed IP ranges."
}

network_policy_allows_all_egress {
    input.request.kind.kind == "NetworkPolicy"
    input.request.object.spec.policyTypes[_] == "Egress"
    egress := input.request.object.spec.egress[_]
    # an egress rule with no "to" selector allows all destinations
    not egress.to
}

With these policies in place, you can focus on building a world-class platform. Of course, if you want to add more basic policies to your Kubernetes cluster, check out the OPA documentation.

This article is also published on the Cloud Native Zhibei official account.