Author: Mu Tong

Preface

Nginx is not new to most developers: corporate teams use it to build request gateways, and privately some of us use it to tunnel around network restrictions (you know the drill). For front-end engineers, however, day-to-day work is focused on business logic, with little need or opportunity to touch Nginx, so our understanding of it tends to be shallow. And with the spread of Serverless, more and more people believe they can realize their technical ideas simply and quickly without mastering any operations knowledge.

In reality, that is not the case. The rise of Node has let front-end engineers step into the back-end field, and we can now maintain some BFF services independently; even for simple applications, that demands a certain amount of operations skill. At the same time, in a rapidly changing software development landscape, the boundaries between roles are becoming increasingly blurred, and the deepening of the DevOps philosophy pushes us to look at application operations and think about how to build integrated engineering under this new system. It is therefore well worth a front-end developer's time to pick up some easy-to-use Nginx techniques.

As the saying goes, "extra skills are never a burden" — while you are still deciding whether to learn, someone else already has.

What is Nginx

Nginx is an open-source, high-performance, reliable HTTP middleware and proxy service. Nginx (pronounced "engine X") is a web server that can also be used as a reverse proxy, load balancer, and HTTP cache.

This is the classic overview. Nginx's "high performance" mainly shows in its ability to serve massive concurrency as a web server, while "reliable" refers to its high stability and fault tolerance. And since Nginx's architecture is module-based, we can freely combine built-in and third-party modules to build an Nginx service tailored to our own business. This is why Nginx is so popular: it can be found in teams of all sizes and plays an important role in many technology stacks.

There’s a lot we can explore with Nginx, but for front-end developers, mastering how to read and write nginx.conf, the core configuration file of Nginx, solves 80% of the problems.

Quickly building an Nginx service with Docker

The classic manual installation of Nginx is a string of checks, installs, and initialization steps — tedious and full of pitfalls — but with Docker there is no need to bother. Docker is an open-source application container engine written in Go that lets developers package their applications and dependencies into a lightweight, portable sandboxed container. With it we can easily build an Nginx service locally and skip the installation entirely. Docker itself will not be covered in detail here; interested readers can study it on their own.

For a simple demonstration, we use the more convenient docker-compose to build our Nginx service. docker-compose is a command-line tool provided by Docker for defining and running applications composed of multiple containers. It lets you declaratively define the individual services of your application in a YAML file, then create and start the whole application with a single command.

To complete the next step, you’ll need to install Docker first, which varies depending on the operating system.

Create a new docker-compose.yml:

version: "3"

services:
  nginx: # service name
    image: nginx
    volumes: # folder mapping
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf # nginx configuration file
    ports: Port forwarding
      - "8080:80"
Copy the code

We defined one service, nginx, which starts a Docker container from the nginx image. Inside the container, the Nginx service listens on port 80, which is exposed externally as port 8080, and the local ./nginx/nginx.conf is mapped to /etc/nginx/nginx.conf in the container.

Create nginx/nginx.conf:

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn; # warn indicates the log level
pid /var/run/nginx.pid;

events {
  accept_mutex on;         # serialize network connections to prevent the thundering herd
  multi_accept on;
  worker_connections 1024; # maximum number of connections per worker process
}

http {
  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"'; # log format

  access_log /var/log/nginx/access.log main;

  sendfile on;
  keepalive_timeout 65;

  server {
    listen      80;
    server_name localhost;

    root  /usr/share/nginx/html; # root directory
    index index.html index.htm;  # default page
  }
}

This is a basic Nginx configuration; see the comments for what each item means. Here we simply configure a listener on localhost:80.

Run docker-compose up -d to start the service, then visit localhost:8080 and you should see the default "Welcome to nginx!" page.

Run docker exec -it nginx-quick_nginx_1 bash to get a shell inside the container, then cat /etc/nginx/nginx.conf to confirm that the default Nginx configuration file has been successfully overwritten.
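Put together, the verification looks like this (a minimal sketch; the container name nginx-quick_nginx_1 comes from docker-compose's project-directory naming and may differ on your machine):

docker-compose up -d                      # start the services in the background
docker-compose ps                         # list running containers and their generated names
docker exec -it nginx-quick_nginx_1 bash  # open a shell inside the nginx container
cat /etc/nginx/nginx.conf                 # confirm the mounted configuration took effect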

HTTP configuration for Nginx

HTTP configuration is the most critical part of Nginx configuration, and the part most practical tips revolve around. Nginx's HTTP configuration is divided into three levels of context: http → server → location.

http

The http context mainly stores protocol-level configuration, including common network-connection settings such as file types, connection timeouts, log storage, and data formats, which apply to all services.

server

The server context is the virtual-host configuration and mainly stores service-level settings, including the service address and port, encoding format, and the service's default root directory and home page. Some configuration items can appear at several context levels, such as charset (encoding) and access_log, so you can specify a separate access log for one service; if you omit it, the value is inherited from the level above.
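For instance, a minimal sketch of that inheritance behavior (the ports and log file names here are made up for illustration):

http {
  access_log /var/log/nginx/access.log main; # inherited by servers that do not override it

  server {
    listen 8081;
    access_log /var/log/nginx/svc-a.access.log main; # this server logs separately
  }

  server {
    listen 8082; # no access_log here, so the http-level setting applies
  }
}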

location

Location is a request-level configuration that specifies how to handle a request through URL regular matching, including proxy configuration, cache configuration, and so on. The syntax rules for location are as follows:

# location [modifier] match pattern { ... }
location [ = | ~ | ~* | ^~ ] pattern { ... }

1) No modifier means plain prefix matching. The example below matches both http://www.jd.com/test and http://www.jd.com/test/may.

server {
  server_name www.jd.com;
  location /test { }
}

2) = indicates exact path matching. In this example, only http://www.jd.com/test is matched.

server {
  server_name www.jd.com;
  location = /test { }
}

3) ~ means case-sensitive regular-expression matching. In this example, http://www.jd.com/test is matched but http://www.jd.com/TEST is not.

server {
  server_name www.jd.com;
  location ~ ^/test$ { }
}

4) ~* means case-insensitive regular-expression matching. In the following example, both http://www.jd.com/test and http://www.jd.com/TEST are matched.

server {
  server_name www.jd.com;
  location ~* ^/test$ { }
}

5) ^~ means that if the prefix following the symbol is the best match for the path, this rule is adopted and no further (regular-expression) search is performed.

Nginx location has its own set of matching priorities:

  • Exact match = first
  • Prefix match ^~ second
  • Regular-expression matches ~ and ~* next, in the order they appear in the file
  • Plain prefix match without any modifier last

In the example below, http://www.jd.com/test/may hits both location rules, but because ^~ has higher priority than ~*, the second location is used.

server {
  server_name www.jd.com;
  location ~* ^/test/may$ { }
  location ^~ /test { }
}

Nginx practical tips

Having learned some basic Nginx syntax, let's look at some practical scenarios and techniques Nginx offers in the hands of a front-end developer.

Forward proxy

Proxy forwarding is the most common usage scenario for Nginx, and the forward proxy is one form of it.

The client accesses a proxy service, the proxy forwards the request to the target service, receives the response from the target service, and finally returns it to the client — that is the proxying process. Tunneling around network restrictions is a typical forward proxy, with Nginx acting as the proxy middleman.

We create a new web/ directory under the project root and add an index1.html as the home page for the target service:


      
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>Web Services Home page</title>
  <style>
  p {
    margin: 80px 0;
    text-align: center;
    font-size: 28px;
  }
  </style>
</head>
<body>
  <p>This is the front page of the Web1 service</p>
</body>
</html>

Modify docker-compose.yml: add a new Nginx service web1 as the target service, use our custom HTML to override the default home page, and use links: - web1:web1 to establish a container connection between the proxy service nginx and the target service web1:

version: "3"

services:
  nginx: # service name
    image: nginx
    links:
      - web1:web1
    volumes: # folder mapping
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf # nginx configuration file
    ports: Port forwarding
      - "8080:80"
  web1:
    image: nginx
    volumes:
      - ./web/index1.html:/usr/share/nginx/html/index.html
    ports:
      - "80"
Copy the code

Change the location configuration in nginx.conf, using the proxy_pass directive to forward main-path requests to the target service web1:

# ...
location / {
  proxy_redirect off;
  proxy_pass http://web1; # forward to web1
}
# ...

Restart the container and visit localhost:8080 to see that the proxy service successfully forwarded our request to the target Web service:

Load balancing

Proxying also includes the reverse proxy. The load balancing we most often mention in business is a typical reverse proxy. When a site's traffic grows to the point where a single server can no longer satisfy user requests, multiple servers are needed to form a cluster; the load is then shared among them in a sensible way, avoiding the situation where one server goes down under high load while another sits idle.

Load balancing is simple with Nginx's upstream configuration. It requires multiple target services, so we create index2.html and index3.html in the web directory as the home pages of the new services.

Modify docker-compose.yml, add two services web2 and web3, and establish container connections:

#...

services:
  nginx: # service name
    #...
    links:
      #...
      - web2:web2
      - web3:web3

  #...

  web2:
    image: nginx
    volumes:
      - ./web/index2.html:/usr/share/nginx/html/index.html
    ports:
      - "80"
  web3:
    image: nginx
    volumes:
      - ./web/index3.html:/usr/share/nginx/html/index.html
    ports:
      - "80"

In nginx.conf, we create an upstream named web-app that configures the three target services; our requests are then proxied to the targets through web-app. Nginx provides multiple load-balancing strategies, including the default round-robin mode, weighted distribution, ip_hash allocation by client IP address, and least_conn based on the fewest connections. Which strategy to adopt depends on the service and its concurrency pattern. Here we use the least_conn strategy to distribute requests.

# ...
upstream web-app {
  least_conn; # pick the server with the lowest ratio of active connections to weight
  server web1 weight=10 max_fails=3 fail_timeout=30s;
  server web2 weight=10 max_fails=3 fail_timeout=30s;
  server web3 weight=10 max_fails=3 fail_timeout=30s;
}

server {
  listen       80;         # listening port
  server_name  localhost;  # listening address

  location / {
    proxy_redirect off;
    proxy_pass http://web-app; # forward to the web-app upstream
  }
}
# ...

Restart the container and you can see that the proxy service is forwarded to different target Web services for multiple requests:
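Switching strategies is a one-line change. As a sketch (not part of the demo above), the same upstream using ip_hash instead, which pins each client IP to the same backend and is handy when sessions are stateful:

upstream web-app {
  ip_hash; # requests from the same client IP always go to the same server
  server web1;
  server web2;
  server web3;
}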

Server-side Include

Server-side Include (SSI for short) is a simple, interpreted server-side scripting language. When a page is fetched, the server can parse SSI directives and insert dynamically generated content into the existing HTML page. SSI was an important means of early web modularization; it works in many runtime environments, parses more efficiently than JSP, and is still widely used on some large websites.

Using SSI in HTML looks like this:

<!--#include virtual="/global/foot.html"-->

This single line of comment, after SSI parsing on the server, is replaced with the contents of /global/foot.html. virtual can be an absolute or a relative path.

Nginx supports SSI quickly and easily, letting your pages dynamically import other HTML content. Let's create a new HTML page slice in the web directory, sinclude.html:

<style>
* {
  color: red;
}
</style>

Modify web/index1.html and add an SSI directive to include the page slice ./sinclude.html:

<head>
  <!-- ... -->
  <!--#include virtual="./sinclude.html"-->
</head>

Modify docker-compose.yml to add sinclude.html to the web service's root directory:

version: "3"

services:
  nginx: # service name
    image: nginx
    links:
      - web1:web1
    volumes: # folder mapping
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf # nginx configuration file
    ports: Port forwarding
      - "8080:80"
  web1:
    image: nginx
    volumes:
      - ./web/index1.html:/usr/share/nginx/html/index.html
      - ./web/sinclude.html:/usr/share/nginx/html/sinclude.html
    ports:
      - "80"
Copy the code

Modify nginx.conf to enable Nginx's SSI support; ssi_silent_errors suppresses the error message that would otherwise be output when SSI processing fails:

location / {
  ssi on;
  ssi_silent_errors on; # suppress error output when SSI processing fails; default off

  proxy_redirect off;
  proxy_pass http://web1; # forward to web1
}

Nginx successfully parses the SSI directive and inserts the page slice into the HTML page:

Note that if a reverse proxy is in front and there are multiple web services, make sure each of them contains the same sinclude.html at the same path, because fetching index.html and fetching sinclude.html are two separate request distributions. Unless the ip_hash strategy is used, the two requests may be forwarded to different services and the page slice may not be found.

GZIP compression

HTTP traffic is mostly text, much of it static resource files such as JS, CSS, HTML, and images. GZIP compression compresses the content in transit, reducing bandwidth pressure and improving user-perceived load speed — an effective way to optimize web page performance.

Nginx enables gzip compression of response content with the gzip directive:

location / {
  #...
  gzip on;
  gzip_min_length 1k; # only compress files larger than 1K

  #...
}

gzip_min_length specifies the minimum response size worth compressing. The response headers of a compressed request include Content-Encoding: gzip. We can add more content to the HTML file to make the effect of compression more visible. Here is a before-and-after comparison of enabling gzip:

1) HTML size 3.3KB before compression

2) After GZIP compression is enabled, the HTML size is 555 B
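In practice, gzip is usually tuned with a few companion directives. A sketch of a fuller setup, typically placed in the http context (the values here are illustrative, not from the demo above):

gzip on;
gzip_min_length 1k;   # skip responses smaller than 1 KB
gzip_comp_level 5;    # compression level 1-9, trading CPU for size
gzip_types text/plain text/css application/javascript application/json image/svg+xml;
gzip_vary on;         # add "Vary: Accept-Encoding" so caches store both variants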

Preventing hotlinking

In some cases, we do not want our resource files to be used by external websites. For example, I sometimes copy an image link from JD's image service straight into GitHub. If JD wanted to block image access coming from GitHub, what could it do? It is simple:

location ~* \.(gif|jpg|png|webp)$ {
   valid_referers none blocked server_names jd.com *.jd.com;
   if ($invalid_referer) {
    return 403;
   }
   return 200 "get image success\n";
}

Nginx uses the valid_referers directive to validate the request's Referer header, and the $invalid_referer variable holds the result of that validation. Testing the access results, we can see that requests with an illegal referer are blocked:

ECCMAC-48ed2e556:nginx-quick hankle$ curl -H 'referer: http://jd.com' http://localhost:8080/test.jpg
get image success
ECCMAC-48ed2e556:nginx-quick hankle$ curl -H 'referer: http://wq.jd.com' http://localhost:8080/test.jpg
get image success
ECCMAC-48ed2e556:nginx-quick hankle$ curl -H 'referer: http://baidu.com' http://localhost:8080/test.jpg
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.17.5</center>
</body>
</html>

HTTPS

HTTPS introduces an SSL layer on top of HTTP to establish a secure channel, encrypting and authenticating the transmitted content to prevent data from being hijacked, tampered with, or stolen by middlemen in transit. Starting with version 62, Chrome marks HTTP sites with input fields, and all HTTP sites viewed in incognito mode, as "not secure". As security standards spread, HTTPS is clearly the direction the web is heading.

Nginx can set up an HTTPS service simply and quickly, relying on the http_ssl_module module. Run nginx -V to list the Nginx compile-time parameters and check whether http_ssl_module is installed.
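A quick sketch of that check (nginx -V prints to stderr, hence the redirect):

nginx -V 2>&1 | grep -o http_ssl_module  # prints the module name if SSL support is compiled in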

To set up the HTTPS service, we need to generate a key and a self-signed SSL certificate (for testing; production requires a certificate signed by a trusted third-party CA), using the OpenSSL library. Create the nginx/ssl_cert directory and run the following inside it:

1) Generate key.key

openssl genrsa -out nginx_quick.key 1024

2) Generate certificate signing request file.csr

openssl req -new -key nginx_quick.key -out nginx_quick.csr

3) Generate the certificate signature file.crt

openssl x509 -req -days 3650 -in nginx_quick.csr -signkey nginx_quick.key -out nginx_quick.crt

After completing these three steps, we have the key and SSL certificate required for HTTPS; configure them directly in nginx.conf:

#...
server {
  listen       443 ssl;    # listening port
  server_name  localhost;  # listening address

  ssl_certificate /etc/nginx/ssl_cert/nginx_quick.crt;
  ssl_certificate_key /etc/nginx/ssl_cert/nginx_quick.key;

  #...
}
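A common companion to this (not part of the original demo) is redirecting plain HTTP to HTTPS with a second server block, sketched here:

server {
  listen      80;
  server_name localhost;
  return 301 https://$host$request_uri; # permanently redirect all HTTP traffic to HTTPS
}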

Modify docker-compose.yml to mount the custom certificate files into the Nginx container:

services:
  nginx: # service name
    #...
    volumes: # folder mapping
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf # nginx configuration file
      - ./nginx/ssl_cert:/etc/nginx/ssl_cert # certificate files
    ports: # port forwarding
      - "443:443"

After restarting, visit https://localhost and you will find the page marked as unsafe by Chrome. This is because our self-signed certificate is not trusted; click "Proceed" to access the page normally:

Page caching

Page caches fall into three main categories: client caches, proxy caches, and server caches, with the focus on proxy caches.

When Nginx is acting as a proxy, using Nginx caching can greatly speed up requests that are received with little change in response data, such as static resource requests. Caching in Nginx is implemented as a hierarchical data store on the file system, with configurable cache keys and different request-specific parameters to control what goes into the cache.

Nginx enables content caching using proxy_cache_path, which sets the path and configuration of the cache, and proxy_cache, which enables caching:

http {
  #...
  proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=mycache:10m max_size=10g inactive=60m;

  server {
    #...

    proxy_cache mycache;

    #...
  }
}

The proxy_cache_path parameters above define the cache named mycache:

1) /data/nginx/cache specifies the root directory of the local cache;

2) levels says the cache directory structure has two levels (up to three are allowed); the numbers give the name lengths, so 1:2 generates directories like /data/nginx/cache/w/0d. For scenarios with large numbers of cached files, a sensible directory hierarchy is necessary;

3) keys_zone sets up a shared memory zone; 10m is the size of that zone. It stores cache keys and metadata so that Nginx can quickly decide whether a request is a cache hit without going to disk;

4) max_size sets the upper limit of the cache. Default is unlimited.

5) inactive sets the maximum time a cached item may go unaccessed before it is evicted, i.e. its inactivity timeout.
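To round this out, a sketch of how the cache is typically wired into a location (the directives are standard Nginx, but the values and the web-app upstream are illustrative):

location / {
  proxy_cache mycache;
  proxy_cache_valid 200 302 10m; # cache successful responses for 10 minutes
  proxy_cache_valid 404 1m;      # cache 404s briefly
  add_header X-Cache-Status $upstream_cache_status; # expose HIT/MISS for debugging
  proxy_pass http://web-app;
}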

Final words

The above are some simple, practical Nginx scenarios and tips that front-end developers would do well to understand in depth. Faced with Nginx's intricate configuration and hard-to-read official documentation, many people can only complain; even those fluent in the syntax can waste plenty of time on debugging difficulties and other problems. Fortunately, there are visual tools that can generate an nginx.conf simply and quickly, so don't let that put you off learning Nginx.


If you found this article valuable, please like it and follow our official account, WecTeam; we publish quality articles every week: