  1. Front-end Advanced: How JavaScript code is minified
  2. Front-end Advanced: How to better optimize packaged resources
  3. Front-end Advanced: Best practices and considerations for website cache-control strategies
  4. Front-end Advanced: Increase your npm install speed by 50% in production
  5. Front-end Advanced: Deploy your front-end applications efficiently with Docker
  6. Front-end Advanced: Deploying a multi-feature-branch front-end environment under CI/CD
  7. Front-end Advanced: The evolution of front-end deployment

More articles: Front-end engineering series

I have set up a new repository on GitHub that posts one interview question per day; you are welcome to join the discussion.

  • Front-end interview notes
  • A complete collection of big-tech front-end interview experiences
  • A summary of computer science fundamentals interview questions

When the front end talks about its slash-and-burn days, the topic of front-end engineering is sure to follow. With the development of React/Vue/Angular, ES6+, webpack, Babel, TypeScript, and Node, the front end gradually moved beyond dropping script tags pointing at a CDN into the page, setting off a great wave of engineering. Thanks to this engineering progress and the open-source ecosystem, the usability and efficiency of front-end applications have improved enormously.

Front-end development used to be slash-and-burn, and so did front-end deployment. So is front-end engineering what improved front-end deployment, its byproduct?

That's part of it, but the bigger reason is the rise of DevOps.

To gain a clearer picture of the history of front-end deployment, and of the division of responsibilities between operations and the front end (or, more broadly, business developers), keep two questions in mind each time the deployment approach changes:

  1. Caching: who configures the HTTP response header of the front-end application? Thanks to advances in engineering, files can be packaged with hash values in their names and cached permanently
  2. Cross-domain: who configures the /api proxy? In the development environment you can spin up a small service and use webpack-dev-server to proxy across domains, but what about production?

Both are high-frequency front-end interview questions, but is the final say actually in the front end's hands?

Let's start when React was just taking off: applications were already being developed with React and bundled with webpack, but front-end deployment was still slash-and-burn.

This article requires that you have some knowledge of Docker, DevOps, and front-end engineering. If not, this series and the Docker section of the Personal Server Maintenance Guide will help you.

The process at that time required:

A jump server

A production server

A deployment script

The front end tuned up its webpack config and happily sent operations a deployment email with the deployment script attached, reflecting that for the first time the front end could deploy independently of back-end templates. Thinking of its territory expanding further, the front end could not help smiling happily.

Following the front end's deployment email, operations pulled the code, adjusted the configuration, and wrote out the try_files and proxy_pass rules.

At this point the front end's static files are hosted by nginx, and the nginx configuration file looks something like this

server {
  listen 80;
  server_name shanyue.tech;

  location / {
    # avoid non-root path 404
    try_files $uri $uri/ /index.html;
  }

  # Resolve cross-domain
  location /api {
    proxy_pass http://api.shanyue.tech;
  }

  # Configure permanent caching for files with hash values
  location ~* \.(?:css|js)$ {
      try_files $uri =404;
      expires 1y;
      add_header Cache-Control "public";
  }

  location ~ ^.+\..+$ {
      try_files $uri =404;
  }
}

But... the script often fails to run.

Operations complains that the front end's deployment scripts are poorly versioned; the front end complains that everything works fine in the test environment.

At this point operations puts a lot of effort into deployment, including the test environment, and the front end puts a lot of effort into figuring out how operations deploys. For fear of breaking the live environment, releases are usually scheduled at night, leaving both the front end and operations physically and mentally exhausted.

But it has always been this way.

Lu Xun asked: it has always been so, but does that make it right?

At this point both the cross-domain and cache configurations are managed by operations, which does not understand the front end; yet the configuration itself is provided by the front end, which is not familiar with nginx.

Build the image using Docker

The introduction of Docker largely fixed the big bug of deployment scripts that would not run: the dockerfile is the deployment script, and the deployment script is the dockerfile. This also greatly eased the friction between the front end and operations; after all, the front end was becoming more reliable, or at least its deployment scripts were no longer a problem (laughs).

Instead of handing over static resources, the front end now delivers a service: an HTTP service.

The front-end dockerfile looks something like this

FROM node:alpine

# Indicates the production environment
ENV PROJECT_ENV production
# Many packages behave differently depending on this environment variable
# webpack also optimizes the build based on it, though create-react-app hard-codes it at build time
ENV NODE_ENV production
WORKDIR /code
ADD . /code
RUN npm install && npm run build && npm install -g http-server
EXPOSE 80

CMD http-server ./public -p 80

Next, the image needs to be orchestrated in production; docker-compose is enough for now, and starting the service takes a single command: docker-compose up -d. The front end writes a dockerfile and a docker-compose.yaml for the first time and plays an ever more important role in the deployment process. Thinking of its territory expanding further, the front end could not help smiling happily once again.

version: "3"
services:
  shici:
    build: .
    expose:
      - 80

The nginx configuration file maintained by operations now looks something like this

server {
  listen 80;
  server_name shanyue.tech;

  location / {
    proxy_pass http://static.shanyue.tech;
  }

  location /api {
    proxy_pass http://api.shanyue.tech;
  }
}

In addition to configuring nginx, operations also executes one command: docker-compose up -d

Now think about the first two questions in the article

  1. Caching: since the static files became a service, caching is now under the front end's control (though the http-server in the image is not really suited to the job)
  2. Cross-domain: still configured in operations' nginx

The front end can now do some of the things it is supposed to do, and that's a good thing.

Of course, the front end's dockerfile also improved gradually. So what problems does the image have at this stage?

  1. The built image is too large
  2. Building the image takes too long

Use multi-stage builds to optimize the image

In fact, there have been a lot of ups and downs in this process. Please refer to another article of mine: How to deploy front-end applications with Docker.

The main optimizations are also in the two areas mentioned above

  1. Image size dropped from 1 GB+ to 10 MB+
  2. Image build time dropped from 5 min+ to 1 min+ (depending on the complexity of the project; most of the time goes to building and uploading static resources)

FROM node:alpine as builder

ENV PROJECT_ENV production
ENV NODE_ENV production

WORKDIR /code

ADD package.json /code
RUN npm install --production

ADD . /code

# npm run uploadCdn is a script that uploads static assets to the CDN/OSS
RUN npm run build && npm run uploadCdn

# Choose a smaller base image
FROM nginx:alpine
COPY --from=builder code/public/index.html code/public/favicon.ico /usr/share/nginx/html/
COPY --from=builder code/public/static /usr/share/nginx/html/static

So what does this achieve?

  1. First ADD package.json /code, then run npm install --production, and only afterwards ADD the remaining files. This takes full advantage of image layer caching to reduce build time
  2. Multi-stage builds, which greatly reduce image size

There can also be some minor optimizations, such as

  • An npm cache base image or an npm private registry, to reduce npm install time and thus build time
  • npm install --production, to install only the necessary packages

The front end looked at its optimized dockerfile and recalled being scolded by operations a few days earlier because front-end images took up about half the disk space. Having just cut the image size by an order of magnitude and saved the company a lot of server cost, and thinking of its territory expanding further, the front end could not help smiling happily yet again.

Now think about the first two questions in the article

  1. Caching: controlled by the front end. Files are cached on OSS, with a CDN accelerating OSS, and the cache is controlled by scripts the front end writes
  2. Cross-domain: still configured in operations' nginx

CI/CD with GitLab

At this point the front end has a sense of accomplishment. And operations? Operations still handles every release, repeating three actions over and over for each deployment:

  1. Pull the code
  2. docker-compose up -d
  3. Restart nginx

Operations decided this could not go on, so it introduced CI: GitLab CI, a natural companion to the existing GitLab code repository.

  • CI: Continuous Integration
  • CD: Continuous Delivery

What matters is not what CI/CD stands for; what matters is that operations no longer has to shepherd every business release or watch front-end deployments all the time. That is now CI/CD's job: automated deployment. The three actions above are handed over to CI/CD.

.gitlab-ci.yml is GitLab's CI configuration file, and it looks something like this

deploy:
  stage: deploy
  only:
    - master
  script:
    - docker-compose up --build -d
  tags:
    - shell

CI/CD not only liberates the deployment of business projects, it also greatly improves the quality of business code before delivery. It can run lint, tests, and package security checks, and even drive multi-feature-branch, multi-environment deployments, which I will cover in a future article. A minimal sketch of such a quality gate follows.
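For instance, a test job can gate the deploy job shown above. This is a minimal sketch, assuming package.json defines lint and test scripts; the stage layout and runner tag are illustrative:

stages:
  - test
  - deploy

lint-and-test:
  stage: test
  only:
    - merge_requests
    - master
  script:
    - npm install
    - npm run lint
    - npm test
  tags:
    - shell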

One of my server-rendered projects, shfshanyue/shici, was previously deployed on my server using docker/docker-compose/gitlab-ci. If you are interested, check out its configuration files:

  • shfshanyue/shici:Dockerfile
  • shfshanyue/shici:docker-compose.yml
  • shfshanyue/shici:gitlab-ci.yml

If you have a personal server, I also recommend building a front-end application you are interested in, along with a supporting back-end API service, and deploying it on your own server with CI/CD.

If you host your code on GitHub, try GitHub Actions for CI/CD.

You can also try drone.ci; for how to deploy it, see my earlier article: Introduction and deployment of the continuous integration solution Drone on GitHub.

Deploy using Kubernetes

As the business grows and images multiply, docker-compose can no longer cope. Meanwhile the servers have gone from one to many, and multiple servers bring problems of distribution.

The advent of a new technology introduces complexity as well as solving previous problems.

The benefits of deploying with k8s are obvious: health checks, rolling upgrades, elastic scaling, fast rollback, resource limits, better monitoring, and more.

So what’s the new problem now?

Until now, the server that builds the image, the server that runs the containers, and the server that does continuous integration have been one and the same!

A private image registry is needed; that is an operations matter, and Harbor was quickly set up by operations, but for front-end deployment the complexity has increased.

Let’s take a look at the old process:

  1. The front end configures the dockerfile and docker-compose
  2. The CI runner on the production server (think of it as the operations person of old) pulls the code, starts the service with docker-compose up -d, and then reloads nginx to serve as the external reverse proxy

The problem with the old process has already been pointed out: the server that builds the image, the server that runs the containers, and the server that does continuous integration are one and the same. So we need a private image registry and a continuous-integration server with access to the k8s cluster.

With k8s, the improved process looks like this:

  1. The front end configures the dockerfile, builds the image, and pushes it to the image registry
  2. Operations writes the k8s resource configuration files for the front-end application and runs kubectl apply -f, which pulls the image and deploys the resources (a sketch of such resources follows)
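A minimal sketch of the resource files operations might write, assuming the image pushed above; the names are illustrative, and it also shows two of the k8s benefits mentioned earlier, health checks and rolling upgrades:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: shanyue-fe
spec:
  replicas: 2
  strategy:
    type: RollingUpdate          # rolling upgrades
  selector:
    matchLabels:
      app: shanyue-fe
  template:
    metadata:
      labels:
        app: shanyue-fe
    spec:
      containers:
      - name: shanyue-fe
        image: harbor.shanyue.tech/fe/shanyue
        ports:
        - containerPort: 80
        livenessProbe:           # health check
          httpGet:
            path: /
            port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: shanyue-fe
spec:
  selector:
    app: shanyue-fe
  ports:
  - port: 80
    targetPort: 80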

Operations asks the front end whether it wants to expand its territory further by writing the k8s resource configuration files for front-end projects itself, and lists a few articles:

  • Deploy your first application with K8S: Pod, Deployment, and Service
  • Use K8S to configure the domain name for your application: Ingress
  • Use K8S to add HTTPS to your domain name

The front end took one look at the back end's dozen-plus k8s configuration files and shook its head.

At this point .gitlab-ci.yml looks something like this, with permission over the resource configuration files managed solely by operations.

deploy:
  stage: deploy
  only:
    - master
  script:
    - docker build -t harbor.shanyue.tech/fe/shanyue .
    - docker push harbor.shanyue.tech/fe/shanyue
    - kubectl apply -f https://k8s-config.default.svc.cluster.local/shanyue.yaml
  tags:
    - shell

Now think about the first two questions in the article

  1. Caching: controlled by the front end
  2. Cross-domain: still controlled by operations, in the Ingress of the back end's k8s resource configuration files (a sketch follows)
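For illustration only, a minimal Ingress of the kind operations might maintain, assuming an nginx ingress controller; the host, service names, and ports are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shanyue
spec:
  rules:
  - host: shanyue.tech
    http:
      paths:
      - path: /                  # the static front end
        pathType: Prefix
        backend:
          service:
            name: shanyue-fe
            port:
              number: 80
      - path: /api               # proxy to the back end, solving cross-domain
        pathType: Prefix
        backend:
          service:
            name: shanyue-api
            port:
              number: 80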

Deploy using helm

At this point the front end barely interacts with operations, except when new projects occasionally need operations' help.

But one day the front end finds it cannot even change an environment variable by itself! It has to ask operations to modify the configuration files again and again, and operations gets annoyed too.

Enter helm. Explained in one sentence, helm is k8s resource configuration files with templating; as the front end, you just fill in the parameters. For more details, see my earlier article on deploying k8s resources with helm.

If we use bitnami/nginx as the helm chart, the front end might write a configuration file like this

image:
  registry: harbor.shanyue.tech
  repository: fe/shanyue
  tag: 8a9ac0

ingress:
  enabled: true
  hosts:
  - name: shanyue.tech
    path: /

  tls:
  - hosts:
      - shanyue.tech
    secretName: shanyue-tls

    # livenessProbe:
    #   httpGet:
    #     path: /
    #     port: http
    #   initialDelaySeconds: 30
    #   timeoutSeconds: 5
    #   failureThreshold: 6
    #
    # readinessProbe:
    #   httpGet:
    #     path: /
    #     port: http
    #   initialDelaySeconds: 5
    #   timeoutSeconds: 3
    #   periodSeconds: 5
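With the values file in place, deploying or upgrading a release is then a single command, something like helm upgrade --install shanyue bitnami/nginx -f values.yaml (the release name here is illustrative).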

Now think about the first two questions in the article

  1. Caching: controlled by the front end
  2. Cross-domain: controlled by the back end, configured in the back-end chart's values.yaml

So what do the front end and operations each do at this point?

What the front end needs to do:

  1. Write the dockerfile for building the front end; this is a one-time job, and there are references to copy from
  2. Specify the parameters when deploying with helm

And what about operations?

  1. Provide a helm chart for all front-end projects; if operations is lazy it need not even provide one and can just use bitnami/nginx. Also a one-time job
  2. Provide permission control around helm, so business developers do not get excessive authority; if operations is lazy this can also be skipped by using helm directly

The front end focuses on its own business, operations focuses on its own cloud-native work, and the division of responsibilities has never been clearer.

Unified front-end deployment platform

Later, operations realized that front-end applications are, in essence, a pile of static files: relatively uniform and easy to standardize, which would avoid the uneven quality of individual front-end images. So operations prepared a unified node base image and built a unified front-end deployment platform. What can this platform do?

  1. CI/CD: code is deployed automatically when pushed to a particular branch of the repository
  2. http headers: you can customize the http headers of your resources, enabling cache optimization and the like
  3. http redirect/rewrite: like nginx, you can configure /api to solve cross-domain problems
  4. hostname: you can set the domain name
  5. CDN: push your static resources to the CDN
  6. https: certificates are prepared for you
  7. Prerender: pre-rendering for SPAs

The front end no longer needs to build images or upload to the CDN; it only writes a configuration file, which looks something like this

build:
  command: npm run build
  dist: /dist

hosts:
- name: shanyue.tech
  path: /

headers:
- location: /*
  values:
  - cache-control: max-age=7200
- location: assets/*
  values:
  - cache-control: max-age=31536000

redirects:
- from: /api
  to: https://api.shanyue.tech
  status: 200

At this point the front end only writes one configuration file, setting the cache and configuring the proxy, doing everything that was supposed to belong to the front end, and operations no longer needs to worry about front-end deployment at all.

The front end looks at the configuration file it just wrote and feels lost…

If you are interested in netlify, you can check out my article: Deploying your front-end applications with Netlify

Server-side rendering and back-end deployment

Most front-end applications are essentially static resources; the remaining few are server-side rendered, which is essentially a back-end service, and its deployment can be treated as a back-end deployment.

Back-end deployment is more complex. For example (a small configuration sketch follows the list):

  1. Configuration: the back end needs access to sensitive data but cannot keep it in the code repository; it can be maintained in environment variables, consul, or a k8s configmap
  2. Upstream and downstream services: it depends on databases and upstream services
  3. Access control: restricting IPs, blacklists and whitelists
  4. RateLimit
  5. And so on
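To illustrate the first point only, a minimal sketch of keeping configuration in a k8s configmap and injecting it as environment variables; all names and values are illustrative, and truly sensitive data would belong in a Secret instead:

apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
data:
  DATABASE_HOST: postgres.default.svc.cluster.local
  LOG_LEVEL: info
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shanyue-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: shanyue-api
  template:
    metadata:
      labels:
        app: shanyue-api
    spec:
      containers:
      - name: shanyue-api
        image: harbor.shanyue.tech/be/shanyue-api
        envFrom:                 # expose every key as an environment variable
        - configMapRef:
            name: api-config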

I will share how to deploy a back end in k8s in a future article.

Summary

As DevOps develops, front-end deployment has become simpler and more controllable, and it is worth everyone's while to learn a little DevOps.

The road is long and obstructed, but keep walking and you will arrive.

Communicate with me

Scan the QR code to add my bot on WeChat, and it will automatically pull you into the front-end advanced learning group (the auto-invite program is under development).

I also recommend a public account about big-tech recruitment, [Internet Big Factory Recruitment]. Its author keeps posting the openings and requirements of the major companies and is in direct contact with their interviewers and hiring managers; if you are interested, you can talk to them directly.

In addition, the author keeps sharing quality interview experiences from big companies, exclusive interview questions, and excellent articles, covering not only the front end but also the back end, operations, and system design.
