0 – Preface

This tutorial is suitable only for learning and for deployment in development environments; it is strongly not recommended for production use.

There are many articles online about deploying the Ingress Controller, and the deployment methods vary widely. I combined them with my actual environment, found an approach that suits me, and am recording it here for my future self.

Installing and deploying the Ingress Controller consists of two steps:

  • The first step is to install the component in the K8S cluster.
  • The second step is deployment: deciding how to expose the component outside the cluster so that it can be called.

Together, these two steps build a bridge of communication between the inside and the outside of the K8S cluster.
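In command form, the two steps boil down to something like this sketch (using the NodePort Service variant from section 3.1; both files are reproduced in Appendix II):

kubectl apply -f mandatory.yml                          # step 1: install the components in the cluster
kubectl apply -f ingress-controller-nodeport-svc.yml    # step 2: expose the controller outside the cluster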

1 – Implementation Schemes of the Ingress Controller

"Ingress Controller" is just an abstract, general name. Its concrete implementations include:

  • Ingress NGINX: the solution maintained by the Kubernetes project. This is the controller used in this installation.
  • F5 BIG-IP Controller: a controller developed by F5 that lets administrators manage F5 BIG-IP devices from Kubernetes and OpenShift via CLI or API.
  • Ingress Kong: the Kubernetes Ingress Controller maintained by Kong, the well-known open source API gateway.
  • Traefik: an open source HTTP reverse proxy and load balancer that also supports Ingress.
  • Voyager: an Ingress Controller based on HAProxy.

Source

NOTE: Ingress NGINX is the component installed and deployed in this document. Hereafter, "Ingress Controller" refers to the Ingress NGINX solution.

2 – Installation

A one-line command (verified on K8S v1.19.2):

kubectl apply -f mandatory.yml

mandatory.yml is the official manifest file. Many Ingress Controller deployment tutorials do not provide the full mandatory.yml, and the links they do provide suffer two common problems:

  • The file can no longer be found when the URL is clicked.
  • The version provided is too old.

So I am posting a full copy of mandatory.yml (see Appendix II): it gives you a quick install for verification and a backup for myself (again, it is strongly not recommended for production).

One change from the official file:

kind.Deployment.spec.template.spec.containers.imagePullPolicy: IfNotPresent

With this policy you can pull the image in advance and then deploy:

docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.31.1
kubectl apply -f mandatory.yml
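To confirm the installation succeeded, a quick check (the resource names come from the manifest in Appendix II):

kubectl get pods -n ingress-nginx -o wide   # the nginx-ingress-controller Pod should reach Running
kubectl get all -n ingress-nginx            # everything the manifest created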

Knowledge:

The default image pull policy is IfNotPresent: the kubelet does not pull an image that already exists on the node. If you want the image to always be pulled instead, do one of the following:

  • Set the container’s imagePullPolicy to Always.
  • Omit imagePullPolicy and use :latest as the image tag.
  • Omit both imagePullPolicy and the image tag.
  • Enable the AlwaysPullImages admission controller.

Note: if imagePullPolicy is omitted and the image tag is :latest (or no tag is given at all), the policy is automatically set to Always.
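As an illustration, here is a minimal container spec with the policy made explicit (the image name is the one used in this document’s manifest):

containers:
  - name: nginx-ingress-controller
    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.31.1
    imagePullPolicy: IfNotPresent   # skip the pull when the image already exists on the node
    # imagePullPolicy: Always       # alternative: force a pull on every Pod start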

Source

3 – Deployment

There are many ways to deploy the Ingress Controller. The following are three common ways:

  • Deployment+Service(NodePort)
  • DaemonSet+HostNetwork+nodeSelector
  • Deployment+Service(LoadBalancer)

Here, I’ll focus on the first two.

3.1 Deployment + Service (NodePort)

NodePort deployment roadmap:

  • First, expose a NodePort Service across all nodes.
  • The Service opens the same nodePort on every node (this port is a host port).
  • If the Ingress Controller container is not deployed on the node that receives the traffic, iptables forwards the traffic to the node running the Ingress-Controller (Nginx) container.
  • Nginx then evaluates the Ingress rules and forwards the request to the matching web application container.

Note that the Ingress Controller container can be deployed on a single node, including the master node (not recommended), or on all nodes and master nodes. The latter reduces NAT forwarding of network traffic and improves efficiency; the disadvantage is that a resource-constrained cluster cannot afford an Ingress Controller container on every node.

The Service YAML for this deployment mode is ingress-controller-nodeport-svc.yml (see Appendix II). It opens port 30080 on every node as the nodePort and exposes it outside the cluster.

External access: http://node_ip:30080/url
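A quick way to verify (a sketch; /url stands for whatever path your test Ingress routes):

kubectl apply -f ingress-controller-nodeport-svc.yml
kubectl get svc -n ingress-nginx    # the Service should show 80:30080/TCP
curl http://<node_ip>:30080/url     # any node IP works; iptables forwards when needed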

3.2 DaemonSet + HostNetwork + nodeSelector

HostNetwork deployment roadmap:

  • Run an Ingress Controller container on every node.
  • Set the network mode of these containers to hostNetwork (ports 80 and 443 of each node are then occupied by the Ingress Controller container).
  • When external traffic enters a node on port 80/443, it goes straight into the Ingress Controller container; Nginx then forwards it to the corresponding web application container according to the Ingress rules.

Compared with the nodePort mode, the hostNetwork mode needs no SVC to funnel the traffic; each node forwards its own traffic.

To deploy this scheme, change mandatory.yml as follows (a sketch of the result appears after this list):

  • Delete kind.Deployment.spec.replicas: 1
  • Change kind: Deployment to kind: DaemonSet
  • Add kind.DaemonSet.spec.template.spec.hostNetwork: true
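A sketch of the changed portion of mandatory.yml after these edits (unchanged fields omitted):

apiVersion: apps/v1
kind: DaemonSet            # was: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  # replicas removed: a DaemonSet runs one Pod per matching node
  template:
    spec:
      hostNetwork: true    # bind Nginx directly to ports 80/443 on the host
      # ... remaining fields identical to the Deployment in Appendix II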

External access: http://node_ip/url
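To verify on a node that runs the controller, one option (a sketch; assumes the ss tool from iproute2 is on the node, run as root to see process names):

ss -lntp | grep -E ':(80|443) '   # nginx should be listening directly on the host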

Knowledge: What is a DaemonSet deployment?

A DaemonSet ensures that a copy of a Pod runs on all (or some) nodes. When a node joins the cluster, a Pod is added to it; when a node is removed from the cluster, its Pod is reclaimed. Deleting a DaemonSet removes all the Pods it created. Source

Knowledge: Why deploy with a DaemonSet?

Combining the definition of a DaemonSet with the hostNetwork deployment idea, a DS conveniently achieves exactly what we want: a copy of the Ingress Controller container on every node. You can also restrict the copies to one or more specific nodes by using K8S label selectors. The operation is as follows:

  • Label the nodes (such as node-1) on which the Ingress Controller container is to be deployed:

$ kubectl label node node-1 isIngressController="true"

  • Modify mandatory.yml:

Change kind.DaemonSet.spec.template.spec.nodeSelector from kubernetes.io/os: linux to isIngressController: "true"
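The resulting section would look like this sketch; you can check which nodes carry the label with kubectl get nodes -l isIngressController=true:

spec:
  template:
    spec:
      nodeSelector:
        isIngressController: "true"   # only nodes carrying this label run the controller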

3.3 Comparison of the NodePort and HostNetwork Schemes

  • Container resource consumption:

    • In the nodePort deployment mode, fewer Ingress-Controller containers are needed; a cluster only has to run a few of them.
    • In the hostNetwork mode, an Ingress-Controller container runs on every node, which consumes more resources.
  • Host port usage:

    • The nodePort mode mainly occupies the nodePort of the SVC.
    • The hostNetwork mode occupies ports 80 and 443 on every host that runs the controller.
  • Network traffic flow:

    • In the nodePort mode, traffic that arrives on a node without a controller is NAT-forwarded by iptables to a node that has one, adding a hop.
    • In the hostNetwork mode, an Ingress Controller container runs on every node, so incoming traffic needs no extra forwarding, which improves efficiency.
  • Source IP tracking:

    • When accessed via nodePort, the source IP in the HTTP request that Nginx receives is rewritten to the IP address of the node that accepted the request, not the real client IP.
  • DNS resolution:

    • In the hostNetwork mode, the ingress-controller uses the host’s DNS resolution (i.e., the host’s /etc/resolv.conf); in-cluster resolvers such as CoreDNS are not available to it.
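If preserving the real client IP matters in the NodePort scheme, one commonly used Kubernetes option (not covered in the walkthrough above) is setting externalTrafficPolicy on the Service, sketched here:

spec:
  type: NodePort
  externalTrafficPolicy: Local   # keep the client source IP; only nodes actually running
                                 # the controller Pod will answer on the nodePort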

Appendix I – References

  • Nginx Ingress Controller Installation Guide
  • K8s Ingress principle and ingress-nginx deployment test
  • Update image
  • Kubernetes Ingress Controller
  • Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?

Appendix II – YAML Files

ingress-controller-nodeport-svc.yml

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

mandatory.yml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.31.1
          imagePullPolicy: IfNotPresent
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
---
apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
  - min:
      memory: 90Mi
      cpu: 100m
    type: Container