Original link: fuckcloudnative.io/posts/docke…

When using Docker and Kubernetes, we often need to pull images from registries such as gcr.io and quay.io. For well-known reasons, these registries cannot be reached from mainland China; the only one that can is Docker Hub, and it is extremely slow. *.azk8s.cn used to proxy these registries — for example, gcr.azk8s.cn proxied gcr.io — but it is now restricted to IP addresses inside Azure China and no longer serves the public. Most other domestic acceleration schemes cache images by periodic synchronization; this introduces a delay and cannot guarantee timely updates. I have tried the USTC and Qiniu Cloud mirror accelerators, and they are very unreliable — many images are simply missing.

To access gcr.io and the other registries smoothly, we need to set up a registry proxy site similar to gcr.azk8s.cn outside the GFW. Docker’s open source Registry project can fulfill this requirement: besides serving as a local private registry, Registry can also act as a cache for an upstream registry, known as a pull-through cache.

To get a feel for the speed:

1. Prerequisites


  • A server outside the GFW (i.e. with direct access to gcr.io)
  • A domain name and an SSL certificate for it (docker pull requires a valid certificate for the domain). A free Let’s Encrypt certificate is generally sufficient.

2. Core ideas


Registry can act as a cache for a remote registry by setting the proxy.remoteurl parameter. When you pull an image through the private registry’s address, Registry caches the image to local storage while serving it to the client (the two steps may happen simultaneously; I’m not sure). So we can deploy a private Registry, point remoteurl at the registry that needs accelerating, and that’s basically it.
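Concretely, the pull-through cache is just a `proxy` section in Registry’s config.yml. A minimal sketch (the storage path and upstream URL here are illustrative; the field names follow Registry’s documented configuration format):

```yaml
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry   # where cached blobs land
http:
  addr: :5000
proxy:
  # Upstream registry to cache; e.g. Docker Hub's actual registry endpoint
  remoteurl: https://registry-1.docker.io
```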

3. The custom registry

Since we need to proxy multiple registries — docker.io, gcr.io, k8s.gcr.io, quay.io and ghcr.io — we need to customize Registry’s configuration file. The Dockerfile is as follows:

FROM registry:2.6
LABEL maintainer="registry-proxy Docker Maintainers https://fuckcloudnative.io"
ENV PROXY_REMOTE_URL="" \
    DELETE_ENABLED=""
COPY entrypoint.sh /entrypoint.sh

entrypoint.sh passes the environment variables into the configuration file:

{{< expand “entrypoint.sh” >}}

#!/bin/sh

set -e

CONFIG_YML=/etc/docker/registry/config.yml

if [ -n "$PROXY_REMOTE_URL" ] && [ "$(grep -c "$PROXY_REMOTE_URL" $CONFIG_YML)" -eq 0 ]; then
    echo "proxy:" >> $CONFIG_YML
    echo "  remoteurl: $PROXY_REMOTE_URL" >> $CONFIG_YML
    echo "  username: $PROXY_USERNAME" >> $CONFIG_YML
    echo "  password: $PROXY_PASSWORD" >> $CONFIG_YML
    echo "------ Enabled proxy to remote: $PROXY_REMOTE_URL ------"
elif [ "$DELETE_ENABLED" = "true" ] && [ "$(grep -c "delete:" $CONFIG_YML)" -eq 0 ]; then
    sed -i '/rootdirectory/a\  delete:' $CONFIG_YML
    sed -i '/delete/a\    enabled: true' $CONFIG_YML
    echo "------ Enabled local storage delete ------"
fi

sed -i "/headers/a\    Access-Control-Allow-Origin: ['*']" $CONFIG_YML
sed -i "/headers/a\    Access-Control-Allow-Methods: ['HEAD', 'GET', 'OPTIONS', 'DELETE']" $CONFIG_YML
sed -i "/headers/a\    Access-Control-Expose-Headers: ['Docker-Content-Digest']" $CONFIG_YML

case "$1" in
    *.yaml|*.yml) set -- registry serve "$@" ;;
    serve|garbage-collect|help|-*) set -- registry "$@" ;;
esac

exec "$@"

{{< /expand >}}

4. Start the cache service

Once the Docker image is built, you can start the service. If you don’t want to build it yourself, you can use my image directly: yangchuansheng/registry-proxy.

To cache docker.io, gcr.io, k8s.gcr.io, quay.io, and ghcr.io, a 1-core 2 GB cloud host is generally enough (provided you don’t run other services on it). My blog, comment service and a bunch of other services all run on cloud hosts, so one isn’t enough for my needs; I bought two Tencent Cloud Hong Kong lightweight servers.

Since I bought two, I naturally set up a K3s cluster. The hostnames make each node’s role obvious: the 2-core 4 GB machine serves as the master node, and the 1-core 2 GB machine serves as the worker node.

Take docker.io as an example and create a resource manifest:

{{< expand “dockerhub.yaml” >}}

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dockerhub
  labels:
    app: dockerhub
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dockerhub
  template:
    metadata:
      labels:
        app: dockerhub
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - dockerhub
              topologyKey: kubernetes.io/hostname
            weight: 1
      dnsPolicy: None
      dnsConfig:
        nameservers:
          - 8.8.8.8
          - 8.8.4.4
      containers:
      - name: dockerhub
        image: yangchuansheng/registry-proxy:latest
        env:
        - name: PROXY_REMOTE_URL
          value: https://registry-1.docker.io
        - name: PROXY_USERNAME
          value: yangchuansheng
        - name: PROXY_PASSWORD
          value: "********"
        ports:
        - containerPort: 5000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/localtime
          name: localtime
        - mountPath: /var/lib/registry
          name: registry
      volumes:
      - name: localtime
        hostPath:
          path: /etc/localtime
      - name: registry
        hostPath:
          path: /var/lib/registry
---
apiVersion: v1
kind: Service
metadata:
  name: dockerhub
  labels:
    app: dockerhub
spec:
  selector:
    app: dockerhub
  ports:
    - protocol: TCP
      name: http
      port: 5000
      targetPort: 5000

{{< /expand >}}

Use the resource list to create the corresponding service:

🐳 → kubectl apply -f dockerhub.yaml

If you only have one host, you can run the containers with docker-compose instead, adapting the configuration from the Kubernetes manifests above.
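For example, a single-host docker-compose.yml might look like the following sketch; the image and environment variables follow the manifest above, while the host paths and port mapping are assumptions to adjust for your setup:

```yaml
version: "3"
services:
  dockerhub:
    image: yangchuansheng/registry-proxy:latest
    restart: always
    environment:
      # Upstream registry to cache, as in the Kubernetes manifest
      PROXY_REMOTE_URL: https://registry-1.docker.io
    ports:
      - "5000:5000"
    volumes:
      - /var/lib/registry:/var/lib/registry
      - /etc/localtime:/etc/localtime:ro
```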

5. Choosing an edge proxy

If you only cache docker.io, you can simply change registry-proxy’s port to 443 and add the SSL certificate configuration. This is not recommended if you want to cache multiple public registries: there is only one port 443, and multiple registry-proxy services cannot share it. The reasonable approach is an edge proxy that forwards requests to the different registry-proxy services based on the domain name.

For Kubernetes clusters, the Ingress controller is the edge proxy. Common Ingress controllers are built on either Nginx or Envoy. Envoy is the newcomer, but it was born at the right time: many of its features are designed for the cloud, making it a truly cloud-native L7 proxy and communication bus. Take its service discovery and dynamic configuration: unlike the hot reload of proxies such as Nginx, Envoy’s control plane can, via APIs, centralize service discovery and dynamically update the data plane’s configuration without restarting the data-plane proxies. Moreover, the control plane can layer the configuration through the API and update it layer by layer.

Envoy-based Ingress controllers include Contour, Ambassador, and Gloo; if you want to use an Ingress controller as the edge proxy, try Contour. But an Ingress controller abstracts over the underlying proxy and hides many details; it cannot cover every configuration knob, and will not necessarily support all the configuration items of the proxy beneath it. So I chose to use native Envoy as the edge proxy. If you run registry-proxy on a standalone server, you can try Envoy too.

6. Configure the proxy

First create Envoy’s resource manifest:

{{< expand “envoy.yaml” >}}

apiVersion: apps/v1
kind: Deployment
metadata:
  name: envoy
  namespace: kube-system
  labels:
    app: envoy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: envoy
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: envoy
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: envoy
        image: envoyproxy/envoy:v1.17-latest
        imagePullPolicy: IfNotPresent
        command:
        - envoy
        - -c
        - /etc/envoy/envoy.yaml
        ports:
        - containerPort: 443
          name: https
        - containerPort: 80
          name: http
        - containerPort: 15001
          name: http-metrics
        volumeMounts:
        - mountPath: /etc/localtime
          name: localtime
        - mountPath: /etc/envoy
          name: envoy
        - mountPath: /root/.acme.sh/fuckcloudnative.io
          name: ssl
      volumes:
      - name: localtime
        hostPath:
          path: /etc/localtime
      - name: ssl
        hostPath:
          path: /root/.acme.sh/fuckcloudnative.io
      - name: envoy
        hostPath:
          path: /etc/envoy

{{< /expand >}}

Use the resource list to create the corresponding service:

🐳  → kubectl apply -f envoy.yaml

Here I chose to mount Envoy’s configuration into the container with hostPath and update it dynamically through files. Let’s look at the Envoy configuration under /etc/envoy.
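One caveat with file-based dynamic configuration: Envoy watches the xDS files via inotify move events, so an update should land as an atomic rename rather than an in-place write. A standalone sketch against a scratch directory (the paths are illustrative; in practice the target would be /etc/envoy/lds.yaml):

```shell
# Scratch directory standing in for /etc/envoy
ENVOY_DIR=$(mktemp -d)
printf 'version_info: "0"\n' > "$ENVOY_DIR/lds.yaml"

# Edit a copy, then move it over the watched file in one atomic step;
# an in-place edit may not trigger Envoy's reload.
cp "$ENVOY_DIR/lds.yaml" "$ENVOY_DIR/lds.yaml.new"
sed -i 's/"0"/"1"/' "$ENVOY_DIR/lds.yaml.new"
mv "$ENVOY_DIR/lds.yaml.new" "$ENVOY_DIR/lds.yaml"
```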

The bootstrap configuration:

{{< expand “envoy.yaml” >}}

node:
  id: node0
  cluster: cluster0
dynamic_resources:
  lds_config:
    path: /etc/envoy/lds.yaml
  cds_config:
    path: /etc/envoy/cds.yaml
admin:
  access_log_path: "/dev/stdout"
  address:
    socket_address:
      address: "0.0.0.0"
      port_value: 15001

{{< /expand >}}

LDS configuration:

{{< expand “lds.yaml” >}}

version_info: "0"
resources:
- "@type": type.googleapis.com/envoy.config.listener.v3.Listener
  name: listener_http
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 80
  filter_chains:
  - filters:
    - name: envoy.filters.network.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        stat_prefix: ingress_http
        codec_type: AUTO
        access_log:
          name: envoy.access_loggers.file
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
            path: /dev/stdout
        route_config:
          name: http_route
          virtual_hosts:
          - name: default
            domains:
            - "*"
            routes:
            - match:
                prefix: "/"
              redirect:
                https_redirect: true
                port_redirect: 443
                response_code: "FOUND"
        http_filters:
        - name: envoy.filters.http.router
- "@type": type.googleapis.com/envoy.config.listener.v3.Listener
  name: listener_https
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 443
  listener_filters:
  - name: "envoy.filters.listener.tls_inspector"
    typed_config: {}
  filter_chains:
  - transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
        common_tls_context:
          alpn_protocols: ["h2", "http/1.1"]
          tls_certificates:
          - certificate_chain:
              filename: "/root/.acme.sh/fuckcloudnative.io/fullchain.cer"
            private_key:
              filename: "/root/.acme.sh/fuckcloudnative.io/fuckcloudnative.io.key"
    filters:
    - name: envoy.filters.network.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        stat_prefix: ingress_https
        codec_type: AUTO
        use_remote_address: true
        access_log:
          name: envoy.access_loggers.file
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
            path: /dev/stdout
        route_config:
          name: https_route
          response_headers_to_add:
          - header:
              key: Strict-Transport-Security
              value: "max-age=15552000; includeSubdomains; preload"
          virtual_hosts:
          - name: docker
            domains:
            - docker.fuckcloudnative.io
            routes:
            - match:
                prefix: "/"
              route:
                cluster: dockerhub
                timeout: 600s
        http_filters:
        - name: envoy.filters.http.router
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router 

{{< /expand >}}

CDS configuration:

{{< expand “cds.yaml” >}}

version_info: "0"
resources:
- "@type": type.googleapis.com/envoy.config.cluster.v3.Cluster
  name: dockerhub
  connect_timeout: 15s
  type: strict_dns
  dns_lookup_family: V4_ONLY
  lb_policy: ROUND_ROBIN
  load_assignment:
    cluster_name: dockerhub
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: dockerhub.default
              port_value: 5000

{{< /expand >}}

The address used here is the Kubernetes cluster-internal domain name; adjust it for other deployment methods.

Once configured, images from docker.io can be pulled through the proxy site.

7. Verify acceleration

Now you can pull public images through the proxy site. For example, to pull the nginx:alpine image:

🐳 → docker pull docker.fuckcloudnative.io/library/nginx:alpine
alpine: Pulling from library/nginx
801bfaa63ef2: Pull complete
b1242e25d284: Pull complete
7453d3e6b909: Pull complete
07ce7418c4f8: Pull complete
e295e0624aa3: Pull complete
Digest: sha256:c2ce58e024275728b00a554ac25628af25c54782865b3487b11c21cafb7fabda
Status: Downloaded newer image for docker.fuckcloudnative.io/library/nginx:alpine
docker.fuckcloudnative.io/library/nginx:alpine

8. Cache all mirror repositories

The previous example only caches docker.io. To cache the other public registries, repeat sections 4–6 for each of them. Using k8s.gcr.io as an example, prepare a resource manifest:

{{< expand “gcr-k8s.yaml” >}}

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gcr-k8s
  labels:
    app: gcr-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gcr-k8s
  template:
    metadata:
      labels:
        app: gcr-k8s
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - gcr-k8s
              topologyKey: kubernetes.io/hostname
            weight: 1
      dnsPolicy: None
      dnsConfig:
        nameservers:
          - 8.8.8.8
          - 8.8.4.4
      containers:
      - name: gcr-k8s
        image: yangchuansheng/registry-proxy:latest
        env:
        - name: PROXY_REMOTE_URL
          value: https://k8s.gcr.io
        ports:
        - containerPort: 5000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/localtime
          name: localtime
        - mountPath: /var/lib/registry
          name: registry
      volumes:
      - name: localtime
        hostPath:
          path: /etc/localtime
      - name: registry
        hostPath:
          path: /var/lib/registry
---
apiVersion: v1
kind: Service
metadata:
  name: gcr-k8s
  labels:
    app: gcr-k8s
spec:
  selector:
    app: gcr-k8s
  ports:
    - protocol: TCP
      name: http
      port: 5000
      targetPort: 5000

{{< /expand >}}

Deploy it to a Kubernetes cluster:

🐳 → kubectl apply -f gcr-k8s.yaml

Add the configuration to lds.yaml:

          virtual_hosts:
          - name: docker
            .
            .
          - name: k8s
            domains:
            - k8s.fuckcloudnative.io
            routes:
            - match:
                prefix: "/"
              route:
                cluster: gcr-k8s
                timeout: 600s

Add the configuration to cds.yaml:

- "@type": type.googleapis.com/envoy.config.cluster.v3.Cluster
  name: gcr-k8s
  connect_timeout: 1s
  type: strict_dns
  dns_lookup_family: V4_ONLY
  lb_policy: ROUND_ROBIN
  load_assignment:
    cluster_name: gcr-k8s
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: gcr-k8s.default
              port_value: 5000

Other registries can be set up following the same steps. Here are all the cache service pods I run myself:

🐳 → kubectl get pod -o wide
gcr-8647ffb586-67c6g        1/1  Running  0  21h  10.42.1.52   blog-k3s02
ghcr-7765f6788b-hxxvc       1/1  Running  0  21h  10.42.1.55   blog-k3s01
dockerhub-94bbb7497-x4zwg   1/1  Running  0  21h  10.42.1.54   blog-k3s02
gcr-k8s-644db84879-7xssb    1/1  Running  0  21h  10.42.1.53   blog-k3s01
quay-559b65848b-ljclb       1/1  Running  0  21h  10.42.0.154  blog-k3s01

9. Container runtime configuration

Once all the caching services are up, you can pull public images through the proxy by replacing the registry part of the image address according to the following table:

Original URL              Replacement URL
docker.io/xxx/xxx         docker.fuckcloudnative.io/xxx/xxx
docker.io/library/xxx     docker.fuckcloudnative.io/library/xxx
gcr.io/xxx/xxx            gcr.fuckcloudnative.io/xxx/xxx
k8s.gcr.io/xxx/xxx        k8s.fuckcloudnative.io/xxx/xxx
quay.io/xxx/xxx           quay.fuckcloudnative.io/xxx/xxx
ghcr.io/xxx/xxx           ghcr.fuckcloudnative.io/xxx/xxx
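The mapping in the table can be expressed as a small helper. This is a hypothetical sketch — `rewrite_image` is not part of any tool here, and the fuckcloudnative.io domains are my own, so substitute yours. The first path component is treated as a registry host only if it contains a dot or colon, matching Docker's own convention:

```shell
# Rewrite an image reference to go through the proxy domains.
rewrite_image() {
    ref=$1
    case "$ref" in
        */*) first=${ref%%/*} ;;   # candidate registry host
        *)   first="" ;;           # bare name like "nginx"
    esac
    case "$first" in
        *.*|*:*|localhost) registry=$first; rest=${ref#*/} ;;       # explicit registry
        "")                registry=docker.io; rest="library/$ref" ;;  # official image
        *)                 registry=docker.io; rest=$ref ;;            # user/image on Hub
    esac
    case "$registry" in
        docker.io)  echo "docker.fuckcloudnative.io/$rest" ;;
        k8s.gcr.io) echo "k8s.fuckcloudnative.io/$rest" ;;
        gcr.io)     echo "gcr.fuckcloudnative.io/$rest" ;;
        quay.io)    echo "quay.fuckcloudnative.io/$rest" ;;
        ghcr.io)    echo "ghcr.fuckcloudnative.io/$rest" ;;
        *)          echo "$ref" ;;   # unknown registry: leave unchanged
    esac
}

rewrite_image nginx
rewrite_image k8s.gcr.io/pause:3.2
```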

The best approach, though, is to configure a registry mirror directly. Docker only supports registry mirrors for docker.io, while containerd and Podman support registry mirrors for any registry.

Docker

For Docker, modify the configuration file /etc/docker/daemon.json and add the following:

{
    "registry-mirrors": [
        "https://docker.fuckcloudnative.io"
    ]
}

Then restart the Docker service. Now images from docker.io can be pulled directly, without explicitly specifying the proxy address; the Docker daemon itself automatically pulls them through the proxy. For example:

🐳 → docker pull nginx:alpine
🐳 → docker pull docker.io/library/nginx:alpine

Containerd

Containerd is simpler: it supports mirrors for any registry. Just modify the configuration file /etc/containerd/config.toml and add the following:

    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://docker.fuckcloudnative.io"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
          endpoint = ["https://k8s.fuckcloudnative.io"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
          endpoint = ["https://gcr.fuckcloudnative.io"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."ghcr.io"]
          endpoint = ["https://ghcr.fuckcloudnative.io"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
          endpoint = ["https://quay.fuckcloudnative.io"]

After restarting the Containerd service, it can directly pull all images without changing any prefixes. Containerd automatically selects the corresponding proxy URL to pull images based on the configuration.

Podman

Podman also supports mirrors for any registry. Just modify the configuration file /etc/containers/registries.conf and add the following:

unqualified-search-registries = ['docker.io', 'k8s.gcr.io', 'gcr.io', 'ghcr.io', 'quay.io']

[[registry]]
prefix = "docker.io"
insecure = true
location = "registry-1.docker.io"

[[registry.mirror]]
location = "docker.fuckcloudnative.io"

[[registry]]
prefix = "k8s.gcr.io"
insecure = true
location = "k8s.gcr.io"

[[registry.mirror]]
location = "k8s.fuckcloudnative.io"

[[registry]]
prefix = "gcr.io"
insecure = true
location = "gcr.io"

[[registry.mirror]]
location = "gcr.fuckcloudnative.io"

[[registry]]
prefix = "ghcr.io"
insecure = true
location = "ghcr.io"

[[registry.mirror]]
location = "ghcr.fuckcloudnative.io"

[[registry]]
prefix = "quay.io"
insecure = true
location = "quay.io"

[[registry.mirror]]
location = "quay.fuckcloudnative.io"

You can then pull all images directly without changing any prefixes; Podman automatically selects the appropriate proxy URL based on the configuration. Podman also has a fallback mechanism: it first tries to pull the image from the URL in the location field of [[registry.mirror]], and if that fails, falls back to the URL in the location field of [[registry]].

10. Clear the cache

The cache service stores pulled images on the local disk, consuming disk capacity. Cloud hosts generally do not have large disks, and OSS or S3 storage is expensive and not cost-effective.

To solve this problem, I recommend periodically deleting some or all of the images cached on the local disk. It is simple: deploy a separate Registry that shares the other Registries’ storage, enable deletion, and then delete images through the API or a dashboard.
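Deletion through the Registry v2 API works by digest: request the manifest with the v2 media type, read the Docker-Content-Digest response header, then issue a DELETE for that digest. A sketch — the live curl calls are shown as comments since they need the reg-local service running; the sample header below reuses the digest from the pull output earlier:

```shell
# In practice, fetch the digest from the cache registry like this:
#   curl -sI -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
#        http://reg-local:5000/v2/library/nginx/manifests/alpine
# Sample response header:
headers='Docker-Content-Digest: sha256:c2ce58e024275728b00a554ac25628af25c54782865b3487b11c21cafb7fabda'

# Extract the digest value after the first ": " separator
digest=$(printf '%s\n' "$headers" | awk -F': ' '/Docker-Content-Digest/ {print $2}')
echo "$digest"

# Then delete the manifest by digest (requires delete to be enabled):
#   curl -X DELETE "http://reg-local:5000/v2/library/nginx/manifests/$digest"
```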

Prepare a resource list:

{{< expand “reg-local.yaml” >}}

apiVersion: apps/v1
kind: Deployment
metadata:
  name: reg-local
  labels:
    app: reg-local
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reg-local
  template:
    metadata:
      labels:
        app: reg-local
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - reg-local
              topologyKey: kubernetes.io/hostname
            weight: 1
      containers:
      - name: reg-local
        image: yangchuansheng/registry-proxy:latest
        env:
        - name: DELETE_ENABLED
          value: "true"
        ports:
        - containerPort: 5000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/localtime
          name: localtime
        - mountPath: /var/lib/registry
          name: registry
      volumes:
      - name: localtime
        hostPath:
          path: /etc/localtime
      - name: registry
        hostPath:
          path: /var/lib/registry
---
apiVersion: v1
kind: Service
metadata:
  name: reg-local
  labels:
    app: reg-local
spec:
  selector:
    app: reg-local
  ports:
    - protocol: TCP
      name: http
      port: 5000
      targetPort: 5000

{{< /expand >}}

Deploy it to a Kubernetes cluster:

🐳 → kubectl apply -f reg-local.yaml

Prepare a resource list for the Docker Registry UI:

{{< expand “registry-ui.yaml” >}}

apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry-ui
  labels:
    app: registry-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry-ui
  template:
    metadata:
      labels:
        app: registry-ui
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - registry-ui
              topologyKey: kubernetes.io/hostname
            weight: 1
      tolerations:
      - key: node-role.kubernetes.io/ingress
        operator: Exists
        effect: NoSchedule
      containers:
      - name: registry-ui
        image: joxit/docker-registry-ui:static
        env:
        - name: REGISTRY_TITLE
          value: My Private Docker Registry
        - name: REGISTRY_URL
          value: "http://reg-local:5000"
        - name: DELETE_IMAGES
          value: "true"
        ports:
        - containerPort: 80
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/localtime
          name: localtime
      volumes:
      - name: localtime
        hostPath:
          path: /etc/localtime
---
apiVersion: v1
kind: Service
metadata:
  name: registry-ui
  labels:
    app: registry-ui
spec:
  selector:
    app: registry-ui
  ports:
    - protocol: TCP
      name: http
      port: 80
      targetPort: 80

{{< /expand >}}

Deploy it to a Kubernetes cluster:

🐳 → kubectl apply -f registry-ui.yaml

This lets you clean up images through the dashboard and free up space.

Or simply delete the entire storage directory periodically. For example, run crontab -e and add the following entry:

0 0 */2 * * /usr/bin/rm -rf /var/lib/registry/* >/dev/null 2>&1

This clears the /var/lib/registry/ directory every two days.

11. Keeping freeloaders out: authentication

Finally, one remaining problem: all my cache-service domains are public, and if everyone freeloads off them, my cloud hosts certainly can’t take the load. To prevent freeloading, the registry proxy needs authentication. The easiest way is basic auth, with htpasswd storing the passwords.

  1. Create a password file for user admin with the password admin:

    🐳 → docker run \
      --entrypoint htpasswd \
      registry:2.6 -Bbn admin admin > htpasswd
  2. Create a Secret:

    🐳 → kubectl create secret generic registry-auth --from-file=htpasswd
  3. Modify the resource manifest, using docker.io as an example:

    {{< expand “dockerhub.yaml” >}}

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: dockerhub
      labels:
        app: dockerhub
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: dockerhub
      template:
        metadata:
          labels:
            app: dockerhub
        spec:
          affinity:
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - podAffinityTerm:
                  labelSelector:
                    matchExpressions:
                    - key: app
                      operator: In
                      values:
                      - dockerhub
                  topologyKey: kubernetes.io/hostname
                weight: 1
          dnsPolicy: None
          dnsConfig:
            nameservers:
              - 8.88.8.
              - 8.84.4.
          containers:
          - name: dockerhub
            image: yangchuansheng/registry-proxy:latest
            env:
            - name: PROXY_REMOTE_URL
              value: https://registry-1.docker.io
            - name: PROXY_USERNAME
              value: yangchuansheng
            - name: PROXY_PASSWORD
              value: "********"
    +       - name: REGISTRY_AUTH_HTPASSWD_REALM
    +         value: Registry Realm
    +       - name: REGISTRY_AUTH_HTPASSWD_PATH
    +         value: /auth/htpasswd 
            ports:
            - containerPort: 5000
              protocol: TCP
            volumeMounts:
            - mountPath: /etc/localtime
              name: localtime
            - mountPath: /var/lib/registry
              name: registry
    +       - mountPath: /auth
    +         name: auth
          volumes:
          - name: localtime
            hostPath:
              path: /etc/localtime
          - name: registry
            hostPath:
              path: /var/lib/registry
    +     - name: auth
    +       secret:
    +         secretName: registry-auth

    {{< /expand >}}

    Apply it to take effect:

    🐳 → kubectl apply -f dockerhub.yaml
  4. Try pulling an image:

    🐳 → docker pull docker.fuckcloudnative.io/library/nginx:latest
    Error response from daemon: Get https://docker.fuckcloudnative.io/v2/library/nginx/manifests/latest: no basic auth credentials
  5. Log in to the registry:

    🐳 → docker login docker.fuckcloudnative.io
    Username: admin
    Password:
    WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
    Configure a credential helper to remove this warning. See
    https://docs.docker.com/engine/reference/commandline/login/#credentials-store
    
    Login Succeeded

    Now you can pull images normally.

If you want finer-grained permission control, you can use token authentication; for details, see docker_auth.

12. Cost assessment

All right, now let’s assess the cost of all this. First of all you need a server that can get past the GFW — a domestic one is out of the question, it must be abroad, with decent speed back to China, and that will cost at the very least 30 RMB/month. On top of that you need a personal domain name, whose price varies; all in all it certainly adds up to no less than 30 RMB a month, which most people won’t want to spend. No matter — I have a cheaper option. I have already deployed everything, so you can use my service directly. Of course, I pay for the servers myself every month, so if you really want to use it, you only need to pay 3 RMB a month to help cover my server costs. You certainly wouldn’t be the only one; there are about a dozen users at present, and if the user count grows large I’ll consider adding servers. Think it over; if you’re interested, scan the QR code below to contact me: