Preface

Recently, while building microservices on K8S, we found ourselves manually editing Pod names, Pod images, SVC names, Ingress TLS settings, and so on in YAML files, which is very tedious. The situation is different with Helm, the package manager for K8S. Much like Ubuntu’s apt-get and CentOS’s yum, Helm makes installing RabbitMQ easy. Here’s how to install RabbitMQ with Helm.

Preparation

  • Install K8S. I use Alibaba Cloud’s ACK service.
  • Install the K8S client kubectl (see the kubectl installation guide).
  • Install the Helm client (see the Helm installation guide).
  • Configure Helm repo sources. Below are the three sources I added: stable, bitnami, and ali.

    helm repo add stable https://charts.helm.sh/stable
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo add ali https://apphub.aliyuncs.com/stable/

    View the installed Repo sources

    $ helm repo list                                                    
    NAME       URL
    stable     https://charts.helm.sh/stable
    bitnami    https://charts.bitnami.com/bitnami
    ali        https://apphub.aliyuncs.com/stable/
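    After adding the sources, it is worth refreshing the local chart index so that later searches see the latest chart versions:

    $ helm repo update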

RabbitMQ installation

There are several common ways to install RabbitMQ:

  • Install it directly on an ECS instance running CentOS.
  • Follow the official RabbitMQ documentation for installing on K8S.
  • Install it with Helm, using a community-maintained chart.

RabbitMQ installation methods in different environments

Here we use Helm to install RabbitMQ. But is RabbitMQ installed the same way for development and testing as it is for pre-production and production? Of course not, so let’s look at how the installation differs from environment to environment.

  • Development and test environments (dev, test)

    • K8S Service: type: NodePort | LoadBalancer
    • RMQ management interface: accessed via IP:port, e.g. 192.168.0.1:15672
    • AMQP port 5672: accessed via IP:port, e.g. 192.168.0.1:5672
  • Pre-production and production environments (UAT, PROD)

    • K8S Service: type: ClusterIP, plus Ingress
    • RMQ management interface: accessed via the domain name configured in the Ingress, e.g. rabbitmq.demo.com
    • AMQP port 5672: accessed via the K8S internal DNS name, e.g. test-rabbitmq-headless.rabbit.svc.cluster.local:5672

Let’s take a look at how RabbitMQ is installed in different environments.

Preparation before installation

First of all, let’s check whether a RabbitMQ chart is available in our Helm repos:

$ helm search repo rabbitmq
NAME                                  CHART VERSION   APP VERSION   DESCRIPTION
ali/prometheus-rabbitmq-exporter      0.5.5           v0.29.0       Exporter of RabbitMQ metrics for Prometheus
ali/rabbitmq                          ...             ...           RabbitMQ cluster...
bitnami/rabbitmq                      8.16.1          3.8.18        Open source message broker software that implem...
stable/prometheus-rabbitmq-exporter   ...             ...           Exporter of RabbitMQ metrics for Promet...
stable/rabbitmq                       ...             ...           DEPRECATED Open source message broker...
stable/rabbitmq-ha                    ...             ...           RabbitMQ cluster...

As you can see, different repo sources provide different chart versions of RabbitMQ. We choose bitnami/rabbitmq, chart version 8.16.1, app version 3.8.18.
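If you want to see every chart/app version pairing a repo offers, which is handy when pinning a version like we do here, helm search accepts a --versions flag; for example:

$ helm search repo bitnami/rabbitmq --versions | head -n 5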

Next, we need to download the chart for bitnami/rabbitmq. The chart is packaged as a .tgz archive.

$ helm pull bitnami/rabbitmq
$ tar zxf rabbitmq-8.16.1.tgz
$ cd rabbitmq && ls -ls
total 168
 8 -rwxr-xr-x   1 zhangwei  staff    435  1  1  1970 Chart.yaml
72 -rwxr-xr-x   1 zhangwei  staff  34706  1  1  1970 README.md
 0 drwxr-xr-x   5 zhangwei  staff    160  6 30 14:43 ci
 0 drwxr-xr-x  19 zhangwei  staff    608  6 30 14:43 templates
40 -rwxr-xr-x   1 zhangwei  staff  19401  1  1  1970 values-production.yaml
 8 -rwxr-xr-x   1 zhangwei  staff   2854  1  1  1970 values.schema.json
40 -rwxr-xr-x   1 zhangwei  staff  18986  1  1  1970 values.yaml

Then, let’s look at the available values.yaml parameters, which can also be viewed with the command:

helm show values bitnami/rabbitmq
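If you prefer to edit a local copy of these defaults, you can also dump them to a file; the --version flag pins the chart version we chose above:

helm show values bitnami/rabbitmq --version 8.16.1 > values.yaml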

values.yaml

## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName
#   storageClass: myStorageClass

## Bitnami RabbitMQ image version
## ref: https://hub.docker.com/r/bitnami/rabbitmq/tags/
##
image:
  registry: docker.io
  repository: bitnami/rabbitmq
  tag: 3.8.2-debian-10-r30

  ## set to true if you would like to see extra information on logs
  ## it turns BASH and NAMI debugging in minideb
  ## ref:  https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
  debug: false

  ## Specify an imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName

## String to partially override rabbitmq.fullname template (will maintain the release name)
##
# nameOverride:

## String to fully override rabbitmq.fullname template
##
# fullnameOverride:

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

## does your cluster have rbac enabled? assume yes by default
rbacEnabled: true

## RabbitMQ should be initialized one by one when building cluster for the first time.
## Therefore, the default value of podManagementPolicy is 'OrderedReady'
## Once the RabbitMQ participates in the cluster, it waits for a response from another
## RabbitMQ in the same cluster at reboot, except the last RabbitMQ of the same cluster.
## If the cluster exits gracefully, you do not need to change the podManagementPolicy
## because the first RabbitMQ of the statefulset always will be last of the cluster.
## However if the last RabbitMQ of the cluster is not the first RabbitMQ due to a failure,
## you must change podManagementPolicy to 'Parallel'.
## ref : https://www.rabbitmq.com/clustering.html#restarting
##
podManagementPolicy: OrderedReady

## section of specific values for rabbitmq
rabbitmq:
  ## RabbitMQ application username
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  username: user

  ## RabbitMQ application password
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  # password:
  # existingPasswordSecret: name-of-existing-secret

  ## Erlang cookie to determine whether different nodes are allowed to communicate with each other
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  # erlangCookie:
  # existingErlangSecret: name-of-existing-secret

  ## Node name to cluster with. e.g.: `clusternode@hostname`
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  # rabbitmqClusterNodeName:

  ## Value for the RABBITMQ_LOGS environment variable
  ## ref: https://www.rabbitmq.com/logging.html#log-file-location
  ##
  logs: '-'

  ## RabbitMQ Max File Descriptors
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ## ref: https://www.rabbitmq.com/install-debian.html#kernel-resource-limits
  ##
  setUlimitNofiles: true
  ulimitNofiles: '65536'

  ## RabbitMQ maximum available scheduler threads and online scheduler threads
  ## ref: https://hamidreza-s.github.io/erlang/scheduling/real-time/preemptive/migration/2016/02/09/erlang-scheduler-details.html#scheduler-threads
  ##
  maxAvailableSchedulers: 2
  onlineSchedulers: 1

  ## Plugins to enable
  plugins: "rabbitmq_management rabbitmq_peer_discovery_k8s"

  ## Extra plugins to enable
  ## Use this instead of `plugins` to add new plugins
  extraPlugins: "rabbitmq_auth_backend_ldap"

  ## Clustering settings
  clustering:
    address_type: hostname
    k8s_domain: cluster.local
    ## Rebalance master for queues in cluster when new replica is created
    ## ref: https://www.rabbitmq.com/rabbitmq-queues.8.html#rebalance
    rebalance: false

  loadDefinition:
    enabled: false
    secretName: load-definition

  ## environment variables to configure rabbitmq
  ## ref: https://www.rabbitmq.com/configure.html#customise-environment
  env: {}

  ## Configuration file content: required cluster configuration
  ## Do not override unless you know what you are doing. To add more configuration, use `extraConfiguration` or `advancedConfiguration` instead
  configuration: |-
    ## Clustering
    cluster_formation.peer_discovery_backend  = rabbit_peer_discovery_k8s
    cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
    cluster_formation.node_cleanup.interval = 10
    cluster_formation.node_cleanup.only_log_warning = true
    cluster_partition_handling = autoheal
    # queue master locator
    queue_master_locator=min-masters
    # enable guest user
    loopback_users.guest = false

  ## Configuration file content: extra configuration
  ## Use this instead of `configuration` to add more configuration
  extraConfiguration: |-
    #disk_free_limit.absolute = 50MB
    #management.load_definitions = /app/load_definition.json

  ## Configuration file content: advanced configuration
  ## Use this as additional configuration in classic config format (Erlang term configuration format)
  ##
  ## If you set LDAP with TLS/SSL enabled and you are using self-signed certificates, uncomment these lines.
  ## advancedConfiguration: |-
  ##   [{
  ##     rabbitmq_auth_backend_ldap,
  ##     [{
  ##         ssl_options,
  ##         [{
  ##             verify, verify_none
  ##         }, {
  ##             fail_if_no_peer_cert,
  ##             false
  ##         }]
  ##     ]}
  ##   }].
  ##
  advancedConfiguration: |-

  ## Enable encryption to rabbitmq
  ## ref: https://www.rabbitmq.com/ssl.html
  ##
  tls:
    enabled: false
    failIfNoPeerCert: true
    sslOptionsVerify: verify_peer
    caCertificate: |-
    serverCertificate: |-
    serverKey: |-
    # existingSecret: name-of-existing-secret-to-rabbitmq

## LDAP configuration
##
ldap:
  enabled: false
  server: ""
  port: "389"
  user_dn_pattern: cn=${username},dc=example,dc=org
  tls:
    # If you enabled TLS/SSL you can set advanced options using the advancedConfiguration parameter.
    enabled: false

## Kubernetes service type
service:
  type: ClusterIP
  ## Node port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  # nodePort: 30672

  ## Set the LoadBalancerIP
  ##
  # loadBalancerIP:

  ## Node port Tls
  ##
  # nodeTlsPort: 30671

  ## Amqp port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  port: 5672

  ## Amqp Tls port
  ##
  tlsPort: 5671

  ## Dist port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  distPort: 25672

  ## RabbitMQ Manager port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  managerPort: 15672

  ## Service annotations
  annotations: {}
    # service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0

  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  # - 10.10.10.0/24

  ## Extra ports to expose
  # extraPorts:

  ## Extra ports to be included in container spec, primarily informational
  # extraContainerPorts:

# Additional pod labels to apply
podLabels: {}

## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001
  extra: {}

persistence:
  ## this enables PVC templates that will create one per pod
  enabled: true

  ## rabbitmq data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessMode: ReadWriteOnce

  ## Existing PersistentVolumeClaims
  ## The value is evaluated as a template
  ## So, for example, the name can depend on .Release or .Chart
  # existingClaim: ""

  # If you change this value, you might have to adjust `rabbitmq.diskFreeLimit` as well.
  size: 8Gi

  # persistence directory, maps to the rabbitmq data directory
  path: /opt/bitnami/rabbitmq/var/lib/rabbitmq

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}

networkPolicy:
  ## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
  ##
  enabled: false

  ## The Policy model to apply. When set to false, only pods with the correct
  ## client label will have network access to the ports RabbitMQ is listening
  ## on. When true, RabbitMQ will accept connections from any source
  ## (with the correct destination port).
  ##
  allowExternal: true

  ## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
  ##
  # additionalRules:
  #  - matchLabels:
  #    - role: frontend
  #  - matchExpressions:
  #    - key: role
  #      operator: In
  #      values:
  #        - frontend

## Replica count, set to 1 to provide a default available cluster
replicas: 1

## Pod priority
## https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
# priorityClassName: ""

## updateStrategy for RabbitMQ statefulset
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
  type: RollingUpdate

## Node labels and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
nodeSelector: {}
tolerations: []
affinity: {}
podDisruptionBudget: {}
  # maxUnavailable: 1
  # minAvailable: 1
## annotations for rabbitmq pods
podAnnotations: {}

## Configure the ingress resource that allows you to access the
## RabbitMQ installation. Set up the URL
## ref: http://kubernetes.io/docs/user-guide/ingress/
##
ingress:
  ## Set to true to enable ingress record generation
  enabled: false

  ## The list of hostnames to be covered with this ingress record.
  ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
  ## hostName: foo.bar.com
  path: /

  ## Set this to true in order to enable TLS on the ingress record
  ## A side effect of this will be that the backend rabbitmq service will be connected at port 443
  tls: false

  ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
  tlsSecret: myTlsSecret

  ## Ingress annotations done as key:value pairs
  ## If you're using kube-lego, you will want to add:
  ## kubernetes.io/tls-acme: true
  ##
  ## For a full list of possible ingress annotations, please see
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
  ##
  ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
  annotations: {}
  #  kubernetes.io/ingress.class: nginx
  #  kubernetes.io/tls-acme: true

## The following settings are to configure the frequency of the liveness and readiness probes
livenessProbe:
  enabled: true
  initialDelaySeconds: 120
  timeoutSeconds: 20
  periodSeconds: 30
  failureThreshold: 6
  successThreshold: 1

readinessProbe:
  enabled: true
  initialDelaySeconds: 10
  timeoutSeconds: 20
  periodSeconds: 30
  failureThreshold: 3
  successThreshold: 1

metrics:
  enabled: false
  image:
    registry: docker.io
    repository: bitnami/rabbitmq-exporter
    tag: 0.29.0-debian-10-r28
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    # pullSecrets:
    #   - myRegistryKeySecretName
  ## environment variables to configure rabbitmq_exporter
  ## ref: https://github.com/kbudde/rabbitmq_exporter#configuration
  env: {}
  ## Metrics exporter port
  port: 9419
  ## RabbitMQ address to connect to (from the same Pod, usually the local loopback address).
  ## If your Kubernetes cluster does not support IPv6, you can change to `127.0.0.1` in order to force IPv4.
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#networking
  rabbitmqAddress: localhost
  ## Comma-separated list of extended scraping capabilities supported by the target RabbitMQ server
  ## ref: https://github.com/kbudde/rabbitmq_exporter#extended-rabbitmq-capabilities
  capabilities: "bert,no_sort"
  resources: {}
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9419"

  livenessProbe:
    enabled: true
    initialDelaySeconds: 15
    timeoutSeconds: 5
    periodSeconds: 30
    failureThreshold: 6
    successThreshold: 1

  readinessProbe:
    enabled: true
    initialDelaySeconds: 5
    timeoutSeconds: 5
    periodSeconds: 30
    failureThreshold: 3
    successThreshold: 1

  ## Prometheus Service Monitor
  ## ref: https://github.com/coreos/prometheus-operator
  ##      https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
  serviceMonitor:
    ## If the operator is installed in your cluster, set to true to create a Service Monitor Entry
    enabled: false
    ## Specify the namespace in which the serviceMonitor resource will be created
    # namespace: ""
    ## Specify the interval at which metrics should be scraped
    interval: 30s
    ## Specify the timeout after which the scrape is ended
    # scrapeTimeout: 30s
    ## Specify Metric Relabellings to add to the scrape endpoint
    # relabellings:
    ## Specify honorLabels parameter to add the scrape endpoint
    honorLabels: false
    ## Specify the release for ServiceMonitor. Sometimes it should be custom for prometheus operator to work
    # release: ""
    ## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    additionalLabels: {}

  ## Custom PrometheusRule to be defined
  ## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart
  ## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions
  prometheusRule:
    enabled: false
    additionalLabels: {}
    namespace: ""
    rules: []
      ## List of rules, used as template by Helm.
      ## These are just example rules inspired from https://awesome-prometheus-alerts.grep.to/rules.html
      ## Please adapt them to your needs.
      ## Make sure to constrain the rules to the current rabbitmq service.
      ## Also make sure to escape what looks like a helm template.
      # - alert: RabbitmqDown
      #   expr: rabbitmq_up{service="{{ template "rabbitmq.fullname" . }}"} == 0
      #   for: 5m
      #   labels:
      #     severity: error
      #   annotations:
      #     summary: Rabbitmq down (instance {{ "{{ $labels.instance }}" }})
      #     description: RabbitMQ node down

      # - alert: ClusterDown
      #   expr: |
      #     sum(rabbitmq_running{service="{{ template "rabbitmq.fullname" . }}"})
      #     < {{ .Values.replicas }}
      #   for: 5m
      #   labels:
      #     severity: error
      #   annotations:
      #     summary: Cluster down (instance {{ "{{ $labels.instance }}" }})
      #     description: |
      #         Less than {{ .Values.replicas }} nodes running in RabbitMQ cluster
      #         VALUE = {{ "{{ $value }}" }}

      # - alert: ClusterPartition
      #   expr: rabbitmq_partitions{service="{{ template "rabbitmq.fullname" . }}"} > 0
      #   for: 5m
      #   labels:
      #     severity: error
      #   annotations:
      #     summary: Cluster partition (instance {{ "{{ $labels.instance }}" }})
      #     description: |
      #         Cluster partition
      #         VALUE = {{ "{{ $value }}" }}

      # - alert: OutOfMemory
      #   expr: |
      #     rabbitmq_node_mem_used{service="{{ template "rabbitmq.fullname" . }}"}
      #     / rabbitmq_node_mem_limit{service="{{ template "rabbitmq.fullname" . }}"}
      #     * 100 > 90
      #   for: 5m
      #   labels:
      #     severity: warning
      #   annotations:
      #     summary: Out of memory (instance {{ "{{ $labels.instance }}" }})
      #     description: |
      #         Memory available for RabbmitMQ is low (< 10%)\n  VALUE = {{ "{{ $value }}" }}
      #         LABELS: {{ "{{ $labels }}" }}

      # - alert: TooManyConnections
      #   expr: rabbitmq_connectionsTotal{service="{{ template "rabbitmq.fullname" . }}"} > 1000
      #   for: 5m
      #   labels:
      #     severity: warning
      #   annotations:
      #     summary: Too many connections (instance {{ "{{ $labels.instance }}" }})
      #     description: |
      #         RabbitMQ instance has too many connections (> 1000)
      #         VALUE = {{ "{{ $value }}" }}\n  LABELS: {{ "{{ $labels }}" }}

##
## Init containers parameters:
## volumePermissions: Change the owner of the persist volume mountpoint to RunAsUser:fsGroup
##
volumePermissions:
  enabled: false
  image:
    registry: docker.io
    repository: bitnami/minideb
    tag: buster
    pullPolicy: Always
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    # pullSecrets:
    #   - myRegistryKeySecretName
  resources: {}

## forceBoot: executes 'rabbitmqctl force_boot' to force boot a cluster that was shut down unexpectedly in an
## unknown order.
## ref: https://www.rabbitmq.com/rabbitmqctl.8.html#force_boot
##
forceBoot:
  enabled: false

## Optionally specify extra secrets to be created by the chart.
## This can be useful when combined with load_definitions to automatically create the secret containing the definitions to be loaded.
##
extraSecrets: {}
  # load-definition:
  #   load_definition.json: |
  #     {
  #       ...
  #     }

The values.yaml file is the chart’s default configuration and can be modified as needed. Because we use Alibaba Cloud, alicloud-disk-ssd is used for storage; note that the minimum size of an alicloud-disk-ssd disk is 20Gi:

persistence:
  storageClass: "alicloud-disk-ssd"
  size: 20Gi
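Before relying on this, it is worth confirming that the StorageClass actually exists in your cluster (output varies by cluster and cloud provider):

# alicloud-disk-ssd should appear in this list on an ACK cluster
kubectl get storageclass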

Next, we create the rabbit namespace:

kubectl create namespace rabbit
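Alternatively, helm install can create the namespace itself via the --create-namespace flag (available since Helm 3.2), in which case this step can be skipped:

helm install test-rabbitmq bitnami/rabbitmq --namespace rabbit --create-namespace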

Installing in the development and test environment

Since the development environment is accessed via IP:port, we need to configure the Service accordingly. The final complete values.yaml looks like this:

values.yaml

service:
  type: NodePort
persistence:
  storageClass: "alicloud-disk-ssd"
  size: 20Gi
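As an aside, the same overrides can also be passed on the command line with --set instead of a values file; this sketch is equivalent to the values.yaml above:

helm install test-rabbitmq bitnami/rabbitmq --namespace rabbit \
  --set service.type=NodePort \
  --set persistence.storageClass=alicloud-disk-ssd \
  --set persistence.size=20Gi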

Now let’s install RabbitMQ and run it with the following command:

$ helm install -f values.yaml test-rabbitmq bitnami/rabbitmq --namespace rabbit

Here is the output at install time:

$ helm install -f test-values.yaml test-rabbitmq bitnami/rabbitmq --namespace rabbit
NAME: test-rabbitmq
LAST DEPLOYED: Thu Jul 8 10:07:50 2021
NAMESPACE: rabbit
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **

Credentials:

    echo "Username      : user"
    echo "Password      : $(kubectl get secret --namespace rabbit test-rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)"
    echo "ErLang Cookie : $(kubectl get secret --namespace rabbit test-rabbitmq -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode)"

Note that the credentials are saved in persistent volume claims and will not be changed upon upgrade or reinstallation unless the persistent volume claim has been deleted. If this is not the first installation of this chart, the credentials may not be valid. This is applicable when no passwords are set and therefore the random password is autogenerated. In case of using a fixed password, you should specify it when upgrading.
More information about the credentials may be found at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases.

RabbitMQ can be accessed within the cluster on port 5672 at test-rabbitmq.rabbit.svc.

To access for outside the cluster, perform the following steps:

Obtain the NodePort IP and ports:

    export NODE_IP=$(kubectl get nodes --namespace rabbit -o jsonpath="{.items[0].status.addresses[0].address}")
    export NODE_PORT_AMQP=$(kubectl get --namespace rabbit -o jsonpath="{.spec.ports[1].nodePort}" services test-rabbitmq)
    export NODE_PORT_STATS=$(kubectl get --namespace rabbit -o jsonpath="{.spec.ports[3].nodePort}" services test-rabbitmq)

To Access the RabbitMQ AMQP port:

    echo "URL : amqp://$NODE_IP:$NODE_PORT_AMQP/"

To Access the RabbitMQ Management interface:

    echo "URL : http://$NODE_IP:$NODE_PORT_STATS/"

Check all RabbitMQ resources in the rabbit namespace to see if they were created successfully!

$ kubectl get all -n rabbit
NAME                  READY   STATUS    RESTARTS   AGE
pod/test-rabbitmq-0   1/1     Running   0          3h56m

NAME                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                 AGE
service/test-rabbitmq            ClusterIP   172.21.15.81   <none>        4369/TCP,5672/TCP,25672/TCP,15672/TCP   3h56m
service/test-rabbitmq-headless   ClusterIP   None           <none>        4369/TCP,5672/TCP,25672/TCP,15672/TCP   3h56m

NAME                             READY   AGE
statefulset.apps/test-rabbitmq   1/1     3h56m

After waiting for a while, we will find that the Pod, SVC, PVC, PV, and StatefulSet have all been created. How do we access the RabbitMQ admin interface? From the output we find:

  • NODE_IP

    Command: kubectl get nodes --namespace rabbit -o jsonpath="{.items[0].status.addresses[0].address}"
  • NODE_PORT_AMQP

    Command: kubectl get --namespace rabbit -o jsonpath="{.spec.ports[1].nodePort}" services test-rabbitmq
  • NODE_PORT_STATS

    Command: kubectl get --namespace rabbit -o jsonpath="{.spec.ports[3].nodePort}" services test-rabbitmq

The NODE_IP above is actually the real IP of a K8S node (check whether it is reachable from the public network; if it is not, you can apply for an Alibaba Cloud EIP), e.g. 192.168.0.1, while NODE_PORT_AMQP and NODE_PORT_STATS are the NodePorts mapped to ports 5672 and 15672. To access the RabbitMQ admin interface, type the node IP plus the management NodePort into the browser, e.g. http://192.168.0.1:31010:

In this way, Spring Boot can access RabbitMQ by IP address (when connecting from outside the cluster, use the AMQP NodePort instead of 5672; see problem 8 below):

spring:
  rabbitmq:
    host: 192.168.0.1
    port: 5672

Installing in the pre-production and production environments

Because we need to access RabbitMQ through a domain name, we enable Ingress and configure the host:

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hostName: rabbitmq.demo.com

Finally, to access it over HTTPS, we need to enable TLS:

tls: true
tlsSecret: tls-secret-name

I use a cert-manager.io/cluster-issuer annotation (the ClusterIssuer is one I have already created in K8S), so the TLS certificate is generated automatically. If you are interested in cert-manager, you can use it to apply for a free HTTPS certificate:

annotations:
  cert-manager.io/cluster-issuer: your-cert-manager-name
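Before installing, you can check that the referenced ClusterIssuer exists and is ready; this assumes cert-manager is already installed, and your-cert-manager-name is a placeholder for your own issuer:

# The READY column should show True before the chart is installed
kubectl get clusterissuer your-cert-manager-name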

The final complete values.yaml file looks like this:

values.yaml

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: your-cert-manager-name
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
  hostName: rabbitmq.demo.com
  tls: true
  tlsSecret: tls-secret-name
persistence:
  storageClass: "alicloud-disk-ssd"
  size: 20Gi

Now let’s install RabbitMQ and run it with the following command:

$ helm install -f values.yaml test-rabbitmq bitnami/rabbitmq --namespace rabbit

Here is the output at install time:

NAME: test-rabbitmq
LAST DEPLOYED: Thu Jul 8 19:06:58 2021
NAMESPACE: rabbit
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **

Credentials:

    echo "Username      : user"
    echo "Password      : $(kubectl get secret --namespace rabbit test-rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)"
    echo "ErLang Cookie : $(kubectl get secret --namespace rabbit test-rabbitmq -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode)"

Note that the credentials are saved in persistent volume claims and will not be changed upon upgrade or reinstallation unless the persistent volume claim has been deleted. If this is not the first installation of this chart, the credentials may not be valid. This is applicable when no passwords are set and therefore the random password is autogenerated. In case of using a fixed password, you should specify it when upgrading.
More information about the credentials may be found at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases.

RabbitMQ can be accessed within the cluster on port 5672 at test-rabbitmq.rabbit.svc.

To access for outside the cluster, perform the following steps:

To Access the RabbitMQ AMQP port:

    1. Create a port-forward to the AMQP port:

        kubectl port-forward --namespace rabbit svc/test-rabbitmq 5672:5672 &
        echo "URL : amqp://127.0.0.1:5672/"

    2. Access RabbitMQ using the obtained URL.

To Access the RabbitMQ Management interface:

    1. Get the RabbitMQ Management URL and associate its hostname to your cluster external IP:

        export CLUSTER_IP=$(minikube ip) # On Minikube. Use: `kubectl cluster-info` on others K8s clusters
        echo "RabbitMQ Management: http://rabbitmq.demo.com/"
        echo "$CLUSTER_IP rabbitmq.demo.com" | sudo tee -a /etc/hosts

    2. Open a browser and access RabbitMQ Management using the obtained URL.

Check all RabbitMQ resources in the rabbit namespace to see if they were created successfully!

$ kubectl get all -n rabbit
NAME                  READY   STATUS    RESTARTS   AGE
pod/test-rabbitmq-0   1/1     Running   0          3h56m

NAME                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                 AGE
service/test-rabbitmq            ClusterIP   172.21.15.81   <none>        4369/TCP,5672/TCP,25672/TCP,15672/TCP   3h56m
service/test-rabbitmq-headless   ClusterIP   None           <none>        4369/TCP,5672/TCP,25672/TCP,15672/TCP   3h56m

NAME                             READY   AGE
statefulset.apps/test-rabbitmq   1/1     3h56m

After waiting for some time, we will find that the Pod, SVC, Ingress, PVC, PV, and StatefulSet have all been created. How do we access the RabbitMQ admin interface? From the output we find the address http://rabbitmq.demo.com/; entering it in the browser brings up the RabbitMQ management interface:
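If the page does not come up, the Ingress record itself can be inspected; the resource name test-rabbitmq below assumes the chart’s default naming:

# Confirm the host and TLS secret were applied as configured
kubectl get ingress -n rabbit
kubectl describe ingress test-rabbitmq -n rabbit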

Spring Boot can then access RabbitMQ via the K8S internal DNS name:

spring:
  rabbitmq:
    host: test-rabbitmq-headless.rabbit.svc.cluster.local
    port: 5672
    username: user
    password:
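The password field above is left blank on purpose. One way to avoid hard-coding it is to read the chart-generated secret into an environment variable and reference it with a Spring placeholder; a sketch, where RABBITMQ_PASSWORD is a name of my own choosing:

# Export the password generated by the chart into the application's environment
export RABBITMQ_PASSWORD=$(kubectl get secret --namespace rabbit test-rabbitmq \
  -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)

Then set password: ${RABBITMQ_PASSWORD} in application.yml and Spring Boot resolves it at startup.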

Common problems

1. Visiting RabbitMQ returns a 503. If you configure a domain name with a path, such as demo.com/rabbitmq, you have to configure it as follows to make access work; note that the path is /rabbitmq/, not /rabbitmq.

rabbitmq:
  extraConfiguration: |-
    management.path_prefix = /rabbitmq/
ingress:
...
  hostName: demo.com
  path: /rabbitmq/
...

2. kubectl describe svc your-service-name -n rabbit shows that the service endpoints are empty.

  • Reason: a PVC left over from a previous installation has not been removed (refer to this article)
  • Solution:

    kubectl get pvc -n rabbit
    kubectl delete pvc <name> -n rabbit

3. running "VolumeBinding" filter plugin for pod "test-rabbitmq-0": pod has unbound immediate PersistentVolumeClaims

  • Reason: the StorageClass must be set, and Alibaba Cloud disks have a minimum size of 20Gi
  • Solution:

    persistence:
      storageClass: "alicloud-disk-ssd"
      size: 20Gi

4. persistentVolumeClaim "data-test-rabbitmq-0" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims

  • Reason: these PVC fields do not support online modification
  • Solution: delete the PVC and let it be recreated

5. Warning ProvisioningFailed 6s (x4 over 14m) diskplugin.csi.alibabacloud.com_iZbp1d2cbgi4jt9oty4m9iZ_3408e051-98e8-4295-8a21-7f1af0807958 (combined from similar events): failed to provision volume with StorageClass "alicloud-disk-ssd": rpc error: code = Internal desc = SDK.ServerError ErrorCode: InvalidAccountStatus.NotEnoughBalance

  • Reason: an SSD cloud disk can only be created when the Alibaba Cloud account balance is at least 100 yuan
  • Solution: top up the Alibaba Cloud account

6. Get the RabbitMQ login password:

echo "Password      : $(kubectl get secret --namespace rabbit test-rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)"

7. The spring.rabbitmq.host address should not be the external Ingress domain name (rabbitmq.demo.com) but the cluster-internal DNS name. Use the following command to look up the internal DNS name of test-rabbitmq, then assign it to spring.rabbitmq.host and reconnect.

# If the pod and the service are not in the same namespace, the DNS query must specify the namespace the service resides in.
# For test-rabbitmq-headless.rabbit, the namespace is rabbit.
# If no namespace is specified, the lookup is done in the pod's own namespace:
$ kubectl exec -i -t dnsutils -- nslookup test-rabbitmq-headless
$ kubectl exec -i -t dnsutils -- nslookup test-rabbitmq-headless.rabbit
Server:    172.21.0.10
Address:   172.21.0.10#53

Name:   test-rabbitmq-headless.rabbit.svc.cluster.local
Address: 172.20.0.124

application.yml

spring:
  rabbitmq:
    host: test-rabbitmq-headless.rabbit.svc.cluster.local

8. The NodePort corresponding to 5672 is not obtained correctly in the development and test environment

kubectl get --namespace rabbit -o jsonpath="{.spec.ports[1].nodePort}" services test-rabbitmq

Use the following command instead to see that port 5672 maps to NodePort 31972:

$ kubectl get svc test-rabbitmq -n rabbit
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                         AGE
test-rabbitmq   NodePort   172.21.14.17   <none>        5672:31972/TCP,4369:31459/TCP,25672:31475/TCP,15672:31696/TCP   14m

Conclusion

1. kubectl needs the K8S connection configuration before use; Helm picks up the same connection information by default, from ~/.kube/config.
2. Helm 3 has removed Tiller and the helm init command, so Helm 2 is not discussed here.
3. If errors occur during or after installation, you can delete the release through the App -> Helm release list in the Alibaba Cloud ACK console.
4. rabbitmq-plugins enable rabbitmq_management is not required, because the management plugin is already enabled by default after Helm installs RabbitMQ.
5. If you want to debug locally, you can use the following method:

To Access the RabbitMQ AMQP port:

    kubectl port-forward --namespace rabbit svc/test-rabbitmq 5672:5672
    echo "URL : amqp://127.0.0.1:5672/"

To Access the RabbitMQ Management interface:

    kubectl port-forward --namespace rabbit svc/test-rabbitmq 15672:15672
    echo "URL : http://127.0.0.1:15672/"

6. RabbitMQ’s port 5672 speaks only the AMQP protocol, which means it cannot be exposed through Ingress; Ingress only handles HTTP and HTTPS. This is why we set the service type to NodePort.
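If NodePort is awkward in your environment, one alternative for exposing AMQP outside the cluster is a LoadBalancer service, which this chart also supports; a sketch of the values override (on ACK this provisions an SLB instance, and billing applies):

service:
  type: LoadBalancer
  # 5672 (AMQP) and 15672 (management) become reachable on the external IP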

References

  • Helm Deploy RabbitMQ Clusters
  • Helm documentation
  • How does kubectl exec work?
  • Helm tutorial