“This is the 12th day of my participation in the First Challenge 2022. For details: First Challenge 2022”

【K8S series】How to access Pod metadata

When we run containers in a pod, don't we also want to get information about the environment of the current pod? Our YAML manifest is very simple, yet after deployment K8S fills in many extra fields that we never wrote in the manifest. So how do we pass the pod's environment information and the container's metadata into the container? Is it done by reading those fields that K8S fills in for us by default?

There are 3 ways:

  • By way of environment variables
  • Through the Downward API volume
  • By interacting with the ApiServer

By way of environment variables

Getting pod information through environment variables is the easier way. Remember how we previously passed data into the container as environment variables?

This time we will pass data in a similar way, even more simply than before, but what we pass is the pod's environment information, such as the pod IP, pod name, namespace, the pod's service account, the node name, and the CPU/memory request/limit sizes, etc.

Let’s take a look at the YAML manifest information for pod

Each of the fields in the YAML manifest above can be passed to the container as an environment variable, so let's try writing one:

  • Write a YAML manifest and create a pod named my-downward
  • The container uses busybox as the base image. Since the container needs to keep running in the pod, we have it run a long-lived program, such as sleep 8888888

apiVersion: v1
kind: Pod
metadata:
  name: my-downward
spec:
  containers:
  - name: my-downward
    image: busybox
    command: ["sleep"."8888888"]
    env:
    - name: XMT_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: XMT_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: XMT_NODENAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: XMT_SERVICE_ACCOUNT
      valueFrom:
        fieldRef:
          fieldPath: spec.serviceAccountName
    - name: XMT_REQUEST_CPU
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
          divisor: 1m
    - name: XMT_LIMITS_MEMORY
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
          divisor: 1Ki

Use kubectl create -f to create the pod, then go into the container and check the environment variables.
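
For example, a minimal sketch, assuming we saved the manifest above as my-downward.yaml (the filename is just for illustration):

kubectl create -f my-downward.yaml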

The environment variables shown above can be used whenever we need them in the container, which is very convenient

You can see that the environment variables in the container correspond to env in the YAML listing
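
To verify quickly, a minimal sketch assuming the pod is named my-downward as above:

kubectl exec my-downward -- env | grep XMT_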

Through the Downward API volume

Of course we can also use the second way, a Downward API volume. It works much like the environment variables above, except that the data is written to files under a path we specify.

The name Downward API makes it sound like a RESTful API that we call to fetch data, but that is not what it is.

It is simply a way of passing the pod's definition and status information into the container, either as environment variables or as files, as shown in the figure below.

The Downward API volume can be written like this:

apiVersion: v1
kind: Pod
metadata:
  name: my-downward-vv
  labels:
    xmtconf: dev
spec:
  containers:
  - name: my-downward-vv
    image: busybox
    command: ["sleep"."8888888"]
    volumeMounts:
    - name: my-downward-vv
      mountPath: /etc/myvv
  volumes:
  - name: my-downward-vv
    downwardAPI:
      items:
      - path: "xmtPodName"
        fieldRef:
          fieldPath: metadata.name
      - path: "xmtNamespace"
        fieldRef:
          fieldPath: metadata.namespace
      - path: "xmtLabels"
        fieldRef:
          fieldPath: metadata.labels

The YAML is fairly simple and looks much like an ordinary volume mount. Here we use items under downwardAPI to pass each piece of data, and the data sources are written in the same way as the environment variables above.

You can see that the data mounted through the Downward API shows up as key-value pairs or plain text in the corresponding files.
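
To take a look, a minimal sketch assuming the pod above is running as my-downward-vv:

kubectl exec my-downward-vv -- ls /etc/myvv
kubectl exec my-downward-vv -- cat /etc/myvv/xmtLabels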

Let's change the pod label to prod and verify whether the corresponding file in the container changes accordingly:

kubectl label pod my-downward-vv xmtconf=prod --overwrite

Go into the container and check whether the /etc/myvv/xmtLabels file has changed.

From the result above we can see that when using a Downward API volume, the metadata we selected exists as files in the directory we specified.

And if we modify the pod's metadata, such as its labels, while it is running, the contents of the files in the volume are updated accordingly, something that environment variables cannot do once the pod has started.

How do we interact with the ApiServer?


The Downward API can only expose the data of the pod itself; if you want resource information about other pods, you need to interact with the ApiServer.

Something like this:

So let’s write a pod and let the container inside the POD interact with the ApiServer. Here we need to pay attention to two things:

  • We need to locate the ApiServer so that we have a chance to access it properly
  • ApiServer authentication is required

Using curl to access the ApiServer

First, build a simple image of our own with curl installed:

FROM ubuntu:latest
RUN  apt-get update -y
RUN  apt-get install -y curl
ENTRYPOINT ["sleep", "8888888"]

Build the image and push it to Docker Hub:

docker build -t xiaomotong888/xmtcurl .
docker push xiaomotong888/xmtcurl

Write a simple YAML and run the pod:

mycurl.yaml

apiVersion: v1
kind: Pod
metadata:
  name: xmt-curl
spec:
  containers:
  - name: xmt-curl
    image: xiaomotong888/xmtcurl
    command: ["sleep"."8888888"]

After the pod is running successfully, we go into the container:

kubectl exec -it xmt-curl -- bash

In the K8S environment we can look up the cluster IP of the kubernetes service; inside the pod, this is the address we use to reach the ApiServer.
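
For example, a minimal sketch; the kubernetes service in the default namespace and the KUBERNETES_SERVICE_* environment variables are injected by K8S itself:

kubectl get svc kubernetes
# inside the container:
env | grep KUBERNETES_SERVICE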

Access Kubernetes in the container

This fails because there is no certificate. We need to bring in the certificate and the token so that we can access the ApiServer correctly, and there is one more important step:

Create a ClusterRoleBinding; we can access the ApiServer only after it is created. If the ClusterRoleBinding is missing, the ApiServer will still report 403 Forbidden.

kubectl create clusterrolebinding gitlab-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts --namespace=dev

Do you remember where the certificate is located?

As we saw before, K8S mounts it by default at /var/run/secrets/kubernetes.io/serviceaccount, which contains the namespace, the certificate, and the token.
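
We can confirm this from inside the container, for example:

ls /var/run/secrets/kubernetes.io/serviceaccount
# ca.crt  namespace  token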

Now we can pass the certificate when accessing the K8S ApiServer:

curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://kubernetes

We can also export an environment variable so that curl picks up the certificate automatically and we do not have to pass it on every request:

export CURL_CA_BUNDLE=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

You can see the effect is different from before: now it reports 403 because there is no token. So let's add the token:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
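
Then pass it in the Authorization header, for example:

curl -H "Authorization: Bearer $TOKEN" https://kubernetes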

This lists the APIs the ApiServer exposes, the same group/version values we fill in for apiVersion when writing YAML manifests.

To wrap up today's share, here is one more picture.

The application in the container uses the certificate to authenticate the ApiServer, and interacts with the ApiServer using the token and the namespace.
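
Putting it together inside the container, a minimal sketch; /api/v1/namespaces/<ns>/pods is the standard pod-list endpoint:

export CURL_CA_BUNDLE=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
curl -H "Authorization: Bearer $TOKEN" https://kubernetes/api/v1/namespaces/$NS/pods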

That's all for today; if anything here is off, corrections are welcome.

Feel free to like, follow, and bookmark.

Friends, your support and encouragement are what keep me sharing and what push me to improve the quality.

All right, that’s it for this time

Technology is open, our mentality, should be more open. Embrace change, live in the sun, and strive to move forward.

I am Nezha, welcome to like, see you next time ~