preface

As front-end projects multiply, more and more dynamic and static resources are deployed separately, and separated deployment often involves proxy forwarding. Our private-network projects mainly use an Nginx + Docker + K8s deployment mode.

background

The company's cloud environment provides an object storage service (similar to Tencent Cloud COS), but for security reasons the whole environment runs on a private network, and the HTTPS certificate is not signed by an accredited CA. When the private-network service is exposed for public client access, browsers warn about the non-CA-certified HTTPS and require the user to click through a confirmation. The user experience was so poor that we decided to proxy-forward the static service at deployment time, which turned the whole solution into a load/proxy problem across Nginx1 (the pure front-end application) and Nginx2 (static service forwarding)

case

Environmental consistency issues

During development, environment problems come up all the time, and when a tester brings us a bug we often respond with the classic "it works on my machine". In essence, this is an environment-consistency problem. For front-end engineering, environment consistency is a fairly common operations concern, usually addressed with a cloud IDE or a unified configuration file. Here, borrowing the DLL idea when building the scaffolding, we use a config.json to configure the URL that each request to the server resolves to: in production we rely on Nginx, and in development we use dev.proxy.js

  • config.json
{
    "IS_NGINX": 1,
    "API": {
        "TGCOS": {
            "alias": "/tgcos/",
            "url": "http://bfftest.default.service.local:8148/tgcos/"
        }
    }
}
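As a minimal sketch of how such a config.json can drive URL resolution (the helper name `resolveBase` and the inlined config are illustrative assumptions, not part of the scaffolding): behind Nginx only the alias prefix is needed, since Nginx rewrites it to the real service, while without Nginx the full service URL is used directly.

```javascript
// Illustrative helper: pick the request base for a named API entry.
// When IS_NGINX is 1, requests go through the Nginx alias; otherwise
// they hit the backing service URL directly.
const config = {
  IS_NGINX: 1,
  API: {
    TGCOS: {
      alias: '/tgcos/',
      url: 'http://bfftest.default.service.local:8148/tgcos/'
    }
  }
};

function resolveBase(name) {
  const entry = config.API[name];
  return config.IS_NGINX ? entry.alias : entry.url;
}

console.log(resolveBase('TGCOS')); // '/tgcos/' when IS_NGINX is 1
```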
  • dev.proxy.js
module.exports = {
    '/tgcos': { target: 'http://172.24.131.166:8148' }
}
  • Nginx1.conf (pure front-end application)
server {
    location /tgcos/ {
        proxy_pass http://bfftest.default.service.local:8148/tgcos/;
    }
}
  • Nginx2.conf (Static service proxy forwarding)
server {
    location / {
        proxy_pass http://cos.zz-hqc.cos.tg.ncmp.unicom.local/;
    }

    location /tgcos/ {
        proxy_pass http://cos.zz-hqc.cos.tg.ncmp.unicom.local/;
    }
}

Problem: with this proxy configured, webpack's dev-server proxy adds yet another forwarding hop, so one forwarding layer goes missing during proxying and the proxied address cannot be found. The solution is to also proxy the root path to the same COS address, which is ugly but solves the problem

K8s domain name resolution issues

During deployment, because Pod IP addresses drift inside K8s, we wanted to pin the proxy-forwarding domain name to K8s's internal DNS. Since Kubernetes 1.12, CoreDNS has been the default DNS server. The resolver behavior a Pod sees can be inspected in its /etc/resolv.conf, which has three main configuration keywords

  • nameserver Defines the IP address of the DNS server

  • search Specifies the domain-name search list: when a queried name contains fewer dots than options.ndots, each suffix in the list is appended and tried in turn

  • options Specifies additional options for the lookup, such as ndots
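To make the search/ndots interaction concrete, here is a rough illustrative sketch (in plain JavaScript; the real resolver lives in glibc/musl and CoreDNS, so this only models the ordering) of the candidate list a resolver derives from these settings:

```javascript
// Sketch of resolver search-list behavior: a relative name with fewer
// dots than ndots is first tried with each search domain appended, and
// only then as-is; names with enough dots (or a trailing dot) are tried
// as-is first.
function lookupOrder(name, searchList, ndots) {
  const dots = (name.match(/\./g) || []).length;
  const candidates = [];
  if (dots < ndots && !name.endsWith('.')) {
    for (const domain of searchList) candidates.push(`${name}.${domain}`);
    candidates.push(name); // the bare name is tried last
  } else {
    candidates.push(name); // tried as-is first
    for (const domain of searchList) candidates.push(`${name}.${domain}`);
  }
  return candidates;
}

// With the Pod's search list, a bare service name like "bff" expands to
// bff.default.svc.cluster.local first.
console.log(
  lookupOrder('bff', ['default.svc.cluster.local', 'svc.cluster.local', 'cluster.local'], 5)
);
```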

Let's exec into a running Pod and look at its resolv.conf

nameserver 11.254.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

I didn't configure anything else here, so the default K8s DNS policy, ClusterFirst, should normally apply

Problem: normally the corresponding domain name should resolve, but it could not be found, so I suspected a port-mapping problem

K8s port mapping is abnormal

K8s is an elegant distributed resource-scheduling framework whose architecture schedules and runs the core objects (e.g. Pod, Service, RC). In the overall K8s architecture, the kubectl command line drives the API Server, whose wrapped APIs handle access control, registration, etcd operations, and so on, while the Scheduler and Controller Manager underneath provide the actual scheduling and resource-management capabilities. Service proxying is implemented by kube-proxy, which provides the reverse proxying and load balancing

Here the front end's Service and Deployment are configured in YAML

apiVersion: v1
kind: Service
metadata:
  name: bff
spec:
  ports:
    - port: 80
  selector:
    app: bff
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bff
  labels:
    app: bff
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bff
  template:
    metadata:
      labels:
        app: bff
    spec:
      containers:
        - name: bff
          image: harbor.dcos.ncmp.unicom.local/fe/bff:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 80

Problem: the newly configured port 8148 differs from the port 80 written in the YAML. A lookup purely by IP would have no trouble finding the service, but the lookup goes through DNS, and as the previous section noted, K8s's internal default DNS policy is ClusterFirst, so there are two mismatched name/port combinations; essentially two services were scheduling resources in the same Pod at the same time
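One way to reconcile the mismatch, assuming the container still listens on 80 while clients address the service on 8148, would be to map the ports explicitly in the Service. This is a sketch under those assumptions, not the deployed config:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bff
spec:
  ports:
    - port: 8148      # port exposed on the service's cluster DNS name
      targetPort: 80  # containerPort the Pod actually listens on
  selector:
    app: bff
```

With `port` and `targetPort` split like this, the DNS name resolves and the service port matches what the proxy configuration expects, while the Pod keeps its original containerPort.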

conclusion

Production stability is an important consideration in front-end engineering. Beyond the traditional front-end infrastructure, we should also exercise precise control over problem locating, performance monitoring, log collection, and link tracing. Of course, this requires the front end to know more about back-end, cloud, and container topics. Future front-end development may move toward an "end + cloud" model, and well-rounded learning is the inevitable path for the big front end. Let's encourage each other!
