Yesterday's exam question

Deploy three applications as Deployments (A, B, and C), and allow A to access B while preventing C from accessing B.

* The Deployments are named cka-1128-01, cka-1128-02, and cka-1128-03

* The NetworkPolicy is named cka-1128-np

Note: include the commands used and the complete YAML for the Deployments and NetworkPolicy you create, and prove that A can access application B while C cannot access application B. The answer may be split across multiple comments.

Yesterday's answer

The first Deployment manifest, cka-1128-01.yaml, uses the radial/busyboxplus image because plain busybox does not include curl.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cka-1128-01
spec:
  selector:
    matchLabels:
      app: cka-1128-01
  template:
    metadata:
      labels:
        app: cka-1128-01
    spec:
      containers:
        - name: cka-1128-01
          image: radial/busyboxplus
          command: ['sh', '-c', 'sleep 1000']
          imagePullPolicy: IfNotPresent

cka-1128-02.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cka-1128-02
spec:
  selector:
    matchLabels:
      app: cka-1128-02
  template:
    metadata:
      labels:
        app: cka-1128-02
    spec:
      containers:
        - name: cka-1128-02
          image: radial/busyboxplus
          command: ['sh', '-c', 'sleep 1000']
          imagePullPolicy: IfNotPresent

cka-1128-03.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cka-1128-03
spec:
  selector:
    matchLabels:
      app: cka-1128-03
  template:
    metadata:
      labels:
        app: cka-1128-03
    spec:
      containers:
        - name: cka-1128-03
          image: radial/busyboxplus
          command: ['sh', '-c', 'sleep 1000']
          imagePullPolicy: IfNotPresent
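
With the three manifests saved, the Deployments can be created with kubectl apply and checked with kubectl get; a minimal sketch, assuming the files are in the current directory:

kubectl apply -f cka-1128-01.yaml
kubectl apply -f cka-1128-02.yaml
kubectl apply -f cka-1128-03.yaml
# confirm all three Pods are Running and note their IPs for the ping tests below
kubectl get pod -owide | grep cka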

Before the NetworkPolicy is applied, you can see that both A and C can reach B:

[root@liabio cka]# kubectl get pod -owide | grep cka
cka-1128-01-7b8b8cb79-mll6d    1/1   Running   0   3m5s   192.168.155.124   liabio   <none>   <none>
cka-1128-02-69dd65bdb7-mfq26   1/1   Running   0   3m8s   192.168.155.117   liabio   <none>   <none>
cka-1128-03-66f8f69ff-64q75    1/1   Running   0   3m3s   192.168.155.116   liabio   <none>   <none>
[root@liabio cka]# kubectl exec -ti cka-1128-01-7b8b8cb79-mll6d -- ping 192.168.155.117
PING 192.168.155.117 (192.168.155.117): 56 data bytes
64 bytes from 192.168.155.117: seq=0 ttl=63 time=0.146 ms
64 bytes from 192.168.155.117: seq=1 ttl=63 time=0.095 ms
[root@liabio cka]# kubectl exec -ti cka-1128-03-66f8f69ff-64q75 -- ping 192.168.155.117
PING 192.168.155.117 (192.168.155.117): 56 data bytes
64 bytes from 192.168.155.117: seq=0 ttl=63 time=0.209 ms
64 bytes from 192.168.155.117: seq=1 ttl=63 time=0.112 ms

Next, create cka-1128-np.yaml and apply it with kubectl apply -f cka-1128-np.yaml to create the NetworkPolicy. spec.podSelector.matchLabels selects B's Pods as the policy's target; spec.ingress.from.podSelector.matchLabels whitelists only traffic coming from A.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cka-1128-np
spec:
  podSelector:
    matchLabels:
      app: cka-1128-02
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: cka-1128-01

After the NetworkPolicy is applied, A can still ping B, but C can no longer ping B:

[root@liabio cka]# kubectl apply -f cka-1128-np.yaml
networkpolicy.networking.k8s.io/cka-1128-np created
[root@liabio cka]# kubectl get networkpolicies
NAME          POD-SELECTOR      AGE
cka-1128-np   app=cka-1128-02   13s
[root@liabio cka]# kubectl get pod -owide | grep cka
cka-1128-01-7b8b8cb79-mll6d    1/1   Running   1   24m   192.168.155.124   liabio   <none>   <none>
cka-1128-02-69dd65bdb7-mfq26   1/1   Running   1   24m   192.168.155.117   liabio   <none>   <none>
cka-1128-03-66f8f69ff-64q75    1/1   Running   1   24m   192.168.155.116   liabio   <none>   <none>
[root@liabio cka]# kubectl exec -ti cka-1128-01-7b8b8cb79-mll6d -- ping 192.168.155.117
PING 192.168.155.117 (192.168.155.117): 56 data bytes
64 bytes from 192.168.155.117: ...
[root@liabio cka]# kubectl exec -ti cka-1128-03-66f8f69ff-64q75 -- ping 192.168.155.117
PING 192.168.155.117 (192.168.155.117): 56 data bytes
......
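
To double-check what the policy enforces, you can also inspect the stored object; these are standard kubectl commands, shown here only as a quick sanity check:

# show the target Pods and the allowed ingress sources
kubectl describe networkpolicy cka-1128-np
# or dump the object as it is stored in the cluster
kubectl get networkpolicy cka-1128-np -o yaml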

Yesterday's analysis

This question covers the Kubernetes NetworkPolicy resource. The following is an excerpt from the official documentation:

Concepts

A NetworkPolicy is a specification of the communication rules allowed between Pods, and between Pods and other network endpoints. The NetworkPolicy resource uses labels to select Pods and defines the communication rules allowed for the selected Pods. Network policies are implemented by the network plugin, so you must use a networking solution that supports NetworkPolicy.

Official documentation for NetworkPolicy concepts: https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource

Official walkthrough for declaring a network policy, which this question can follow end to end: https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/

Prerequisites

NetworkPolicy is implemented by the network plugin, so you must use a networking solution that supports NetworkPolicy; merely creating the resource object, without a controller to enforce it, has no effect.

Supported network solutions

Calico, Kube-Router, Cilium, Romana, Weave Net

K8s official documentation: https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/

Isolated and non-isolated Pods

By default, pods are non-isolated and they accept traffic from any source.

Pods become isolated when a NetworkPolicy selects them. Once a Pod is selected by any NetworkPolicy in its namespace, it rejects any connection that is not allowed by a NetworkPolicy. (Other Pods in the namespace that are not selected by any network policy continue to accept all traffic.)

NetworkPolicy resources

Here is an example of NetworkPolicy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978

Sending the above example to the API server has no effect unless your chosen networking solution supports NetworkPolicy.

spec: the NetworkPolicy spec contains all the information needed to define a particular network policy in the given namespace.

podSelector: each NetworkPolicy includes a podSelector that selects the group of Pods to which the policy applies; in other words, it defines the "target Pods" of the policy. The example policy selects Pods with the label "role=db". An empty podSelector selects all Pods in the namespace.

policyTypes: each NetworkPolicy includes a policyTypes list, which may contain Ingress, Egress, or both. The policyTypes field indicates whether the policy applies to ingress traffic to the selected Pods, egress traffic from the selected Pods, or both. If no policyTypes are specified, Ingress is always set by default, and Egress is set if the NetworkPolicy has any egress rules.
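
For example, the following sketch (the policy name is made up for illustration) omits policyTypes; because it defines only ingress rules, it is treated as policyTypes: [Ingress] and places no restriction on egress traffic:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-only-example   # hypothetical name, for illustration only
spec:
  podSelector:
    matchLabels:
      role: db
  # no policyTypes specified, so Ingress is inferred from the ingress rules below
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend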

ingress: each NetworkPolicy may include a whitelist of ingress rules. Each rule allows traffic that matches both its from and ports sections. The example policy contains a single rule that matches traffic on one port from one of three sources: the first specified by an ipBlock, the second by a namespaceSelector, and the third by a podSelector.

egress: each NetworkPolicy may include a whitelist of egress rules. Each rule allows traffic that matches both its to and ports sections. The example policy contains a single rule that matches traffic on one port to any destination in 10.0.0.0/24.

So, the example network policy:

Isolates Pods labeled "role=db" in the "default" namespace for both ingress and egress traffic, if they were not already isolated.

Allows connections (ingress rule) to TCP port 6379 of Pods labeled "role=db" in the "default" namespace from:

  • any Pod in the "default" namespace labeled "role=frontend";
  • any Pod in a namespace labeled "project=myproject";
  • IP addresses in the ranges 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255 (that is, all of 172.17.0.0/16 except 172.17.1.0/24).

Allows connections (egress rule) from any Pod labeled "role=db" in the "default" namespace to TCP port 5978 of any destination in 10.0.0.0/24.

Behavior of the to and from selectors

Four selectors can be specified in the ingress from section or egress to section:

podSelector: selects particular Pods in the same namespace as the NetworkPolicy that should be allowed as ingress sources or egress destinations.

namespaceSelector: selects particular namespaces; all Pods in them are allowed as ingress sources or egress destinations.

namespaceSelector and podSelector: a single to/from entry that specifies both a namespaceSelector and a podSelector selects particular Pods within particular namespaces. Be careful to use the correct YAML syntax; this policy:

...
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        user: alice
    podSelector:
      matchLabels:
        role: client
...

contains a single element in the from array, and allows connections only from Pods labeled role=client in namespaces labeled user=alice. In contrast, this policy:

...
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        user: alice
  - podSelector:
      matchLabels:
        role: client
...

contains two elements in the from array, and allows connections from Pods labeled role=client in the policy's own namespace, or from any Pod in any namespace labeled user=alice.

ipBlock: selects particular IP CIDR ranges to allow as ingress sources or egress destinations. These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable.

Cluster ingress and egress mechanisms often require rewriting the source or destination IP of packets. Where this happens, it is not defined whether it happens before or after NetworkPolicy processing, and the behavior may differ across combinations of network plugin, cloud provider, Service implementation, and so on.

For ingress, this means that in some cases you can filter incoming packets based on the actual original source IP, while in other cases the "source IP" that the NetworkPolicy acts on may be the IP of a LoadBalancer or of the Pod's node, etc.

For egress, this means that connections from Pods to Service IPs that get rewritten to cluster-external IPs may or may not be subject to ipBlock-based policies.

Default policies

By default, if no policies exist in a namespace, all ingress and egress traffic for Pods in that namespace is allowed. The following examples show how to change that default behavior for a namespace.

Deny all ingress traffic by default

You can create a "default" ingress isolation policy for a namespace by creating a NetworkPolicy that selects all Pods but does not allow any ingress traffic to those Pods.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress

This ensures that Pods are still isolated for ingress even if they are not selected by any other NetworkPolicy. This policy does not change the default egress isolation behavior.

Allow all ingress traffic by default

If you want to allow all ingress traffic to all Pods in a namespace (even if other policies cause some Pods to be treated as "isolated"), you can create a policy that explicitly allows all ingress traffic in that namespace.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress

Deny all egress traffic by default

You can create a "default" egress isolation policy for a namespace by creating a NetworkPolicy that selects all Pods but does not allow any egress traffic from those Pods.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Egress

This ensures that even Pods not selected by any other NetworkPolicy are not allowed any egress traffic. This policy does not change the default ingress isolation behavior.

Allow all egress traffic by default

If you want to allow all egress traffic from all Pods in a namespace (even if other policies cause some Pods to be treated as "isolated"), you can create a policy that explicitly allows all egress traffic in that namespace.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress

Deny all ingress and all egress traffic by default

You can create a “default” policy for a namespace to block all inbound and outbound traffic by creating the following NetworkPolicy in that namespace.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

This ensures that even Pods not selected by any other NetworkPolicy are not allowed any ingress or egress traffic.

Today's exam question

Create a Role (granting operation permissions only on Pods in the cka namespace) and a RoleBinding (bound to a ServiceAccount), then use that ServiceAccount's credentials to operate on Pods in the cka namespace and on Pods in the default namespace.

* The Role and RoleBinding are named cka-1202-role and cka-1202-rb

Note: include the commands used and the full YAML of the Role, RoleBinding, and ServiceAccount you create. The answer may be split across multiple comments.
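
A rough sketch of one possible approach, using imperative kubectl commands (the ServiceAccount name cka-1202-sa and the verb list are assumptions, since the question does not specify them); kubectl auth can-i then verifies the permissions before testing with the ServiceAccount itself:

# namespace and ServiceAccount (the name cka-1202-sa is assumed)
kubectl create namespace cka
kubectl create serviceaccount cka-1202-sa -n cka
# Role limited to Pods in the cka namespace (verb list is an assumption)
kubectl create role cka-1202-role --verb=get,list,watch,create,delete --resource=pods -n cka
# RoleBinding that grants the Role to the ServiceAccount
kubectl create rolebinding cka-1202-rb --role=cka-1202-role --serviceaccount=cka:cka-1202-sa -n cka
# expect "yes" in the cka namespace and "no" in the default namespace
kubectl auth can-i list pods -n cka --as=system:serviceaccount:cka:cka-1202-sa
kubectl auth can-i list pods -n default --as=system:serviceaccount:cka:cka-1202-sa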

About the author

Author: Xiaowantang, a passionate and careful writer who maintains the original WeChat public account "My Xiaowantang", focused on articles about Linux, Golang, Docker, Kubernetes, and other skills that build hard technical strength. Your follow is appreciated. Note: when reposting, please credit the source (from the public account "My Xiaowantang", by Xiaowantang).
