As front-end projects grow, dynamic and static resources are increasingly deployed separately, which often raises proxy-forwarding problems. Our private-network projects mainly use an Nginx + Docker + K8S deployment mode. This article shares the hands-on process and retrospective thinking from several related projects.


The corporate cloud environment provides an object storage service (similar to Tencent Cloud Object Storage, COS), but for security reasons the whole environment runs on a private network, and its HTTPS certificate is not issued by an accredited CA. Since the internal service still has to be shown to public-facing clients, browsers warn on the non-CA-certified HTTPS connection and require users to click through a confirmation. Because this user experience is poor, we decided to proxy-forward the static service at deployment time, splitting the load between NGINX1 (the pure front-end application) and NGINX2 (static-service forwarding).


Environmental consistency problem

Environment problems come up constantly during development. When a tester files a bug, we often reply: "It works fine here, try refreshing (restarting)". This is, in essence, the environment-consistency problem. For front-end engineering it is a common operations concern, typically addressed with cloud IDEs and unified configuration files. Here we borrowed the DLL idea when building our scaffolding: a config.json configures the URL that every request is rewritten to after being parsed on the server side. Nginx handles this in the production environment, and dev.proxy.js handles it in the development environment.

  • config.json

```json
{
    "IS_NGINX": 1,
    "API": {
        "TGCOS": {
            "alias": "/tgcos/",
            "url": "http://bfftest.default.service.local:8148/tgcos/"
        }
    }
}
```
  • dev.proxy.js

```javascript
// the target value was left empty in the original
module.exports = {
    '/tgcos': {
        target: ''
    }
};
```
  • nginx1.conf (pure front-end application)

```nginx
server {
    location /tgcos/ {
        proxy_pass http://bfftest.default.service.local:8148/tgcos/;
    }
}
```
  • nginx2.conf (static service proxy forwarding)

```nginx
server {
    location / {
        # (body omitted in the original)
    }

    location /tgcos/ {
        # (body omitted in the original)
    }
}
```
Problem: after the proxy is configured this way, the request passes through an extra forwarding layer inside webpack, so one layer of forwarding is lost along the proxy chain and the proxied address can no longer be found. The solution is to proxy the root directory to the same COS address; it is ugly, but it solves the problem.
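A minimal sketch of that workaround in NGINX2; the COS upstream address here is a hypothetical placeholder, not the project's real endpoint:

```nginx
server {
    # Workaround: proxy the root location to the same COS address,
    # so requests that lost one layer of forwarding still resolve.
    location / {
        proxy_pass http://cos.internal.example/;         # hypothetical COS endpoint
    }

    location /tgcos/ {
        proxy_pass http://cos.internal.example/tgcos/;   # hypothetical COS endpoint
    }
}
```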

K8S domain name problem

During deployment, because IPs drift inside K8S, we wanted the proxy's forwarding target to be a stable domain name, served by K8S's internal DNS. K8S has two common DNS plugins: KubeDNS and CoreDNS. After Kubernetes 1.12, CoreDNS became the default DNS server, and the DNS configuration a pod actually receives can be inspected in its /etc/resolv.conf. Three configuration keywords matter here:

  • nameserver defines the IP address of the DNS server
  • search specifies the list of domain suffixes appended when resolving a name: if the queried name contains fewer dots than the options.ndots value, each suffix in the list is tried in turn
  • options specifies options for the domain-name lookup, such as ndots

Let’s go into the launched Pod and look at its resolv.conf

```
nameserver <cluster DNS IP>
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```
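To see how this interacts with our internal hostname: `bfftest.default.service.local` contains only 3 dots, fewer than `ndots:5`, so the resolver tries the search suffixes before the literal name:

```
bfftest.default.service.local.default.svc.cluster.local
bfftest.default.service.local.svc.cluster.local
bfftest.default.service.local.cluster.local
bfftest.default.service.local    # tried last, as-is
```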

I didn’t do any other DNS-related configuration here, so the pods should normally use K8S’s default DNS policy, ClusterFirst.
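For reference, this default can also be made explicit in the Pod spec. A sketch, where the pod name and image are placeholders rather than anything from this project:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo            # hypothetical
spec:
  dnsPolicy: ClusterFirst   # the default when dnsPolicy is not set
  containers:
    - name: app
      image: nginx          # placeholder image
```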

Problem: normally the corresponding domain name should be found, but it was not, so the question became one of port mapping.

K8S port mapping problem

K8S is an elegant distributed resource-scheduling framework whose well-designed architecture is organized around different core objects (such as Pod, Service, and RC). The whole K8S architecture is driven through the API Server via the kubectl command line, whose wrapped APIs handle access control, registration, etcd interaction, and other behaviors. Beneath that, the Scheduler and Controller Manager provide the ability to schedule and manage resources. The proxying capability of the whole service is realized through kube-proxy, which implements reverse proxying and load balancing.

The Service and Deployment are configured here in the YAML written on the front-end side:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bff
spec:
  ports:
    - port: 80
  selector:
    app: bff
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bff
  labels:
    app: bff
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bff
  template:
    metadata:
      labels:
        app: bff
    spec:
      containers:
        - name: bff
          image: harbor.dcos.ncmp.unicom.local/fe/bff:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 80
```

Problem: when the CLB was created, a second Service was created as well, and its newly configured port, 8148, differed from port 80 written in the YAML above. Looking things up purely by IP would have worked, but as the previous section showed, the default DNS policy in K8S is ClusterFirst, so the two Services' names and ports did not line up exactly. In essence, two Services were scheduling resources on the same pods at the same time.
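A hedged sketch of what aligning the two would look like, assuming the cluster should expose 8148 while the container keeps listening on 80 (the names are taken from the earlier YAML; the port split is an assumption, not the project's actual fix):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bff
spec:
  ports:
    - port: 8148        # port the Service exposes inside the cluster
      targetPort: 80    # must match containerPort in the Deployment
  selector:
    app: bff
```

With `port` and `targetPort` stated explicitly, the DNS name and the port it answers on stay consistent even when a second Service is layered on top.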


Production stability is an important consideration for front-end engineering. Beyond the traditional front-end infrastructure, we should also gain precise control over problem localization, performance monitoring, log collection, and link tracing. Of course, this requires front-end engineers to learn more about back-end, cloud, and container topics. In the future, front-end development may move toward an "end + cloud" model, and comprehensive learning is the only way forward for the big front end. Let's encourage each other!

