
Nginx is a lightweight web server, reverse proxy server, and email (IMAP/POP3) proxy server that many companies use today. This article is a basic introduction to Nginx; more detailed articles will follow.


What this article covers

  • The role of Nginx
  • Basic Nginx information, including the architecture, basic configuration, and basic concepts

The role of Nginx

  • Web server
  • Reverse proxy server
  • Load balancing

A preliminary study of Nginx architecture

  • Multiprocess mode
  • Background (daemon) mode can be disabled manually so that Nginx runs in the foreground, and the master process can be turned off so that Nginx runs as a single process
  • After Nginx starts, there will be a master process and multiple worker processes
  • Master is mainly used to manage worker processes
    • Its tasks include: receiving signals from the outside world, forwarding signals to all worker processes, monitoring the workers' running state, and automatically starting a new worker process when a worker exits abnormally
  • Network events are handled in the worker processes
  • Multiple worker processes are equal and compete equally for requests from clients, and each process is independent of each other
  • A request can only be processed in one worker process
  • A worker process cannot process requests from other processes
  • The number of worker processes can be set, which is usually the same as the number of CPU cores on the machine
  • nginx -s reload performs a graceful restart: after receiving the signal, the master process reloads the configuration file, starts new worker processes, and then signals all of the old workers that they can retire. The new workers start accepting new requests right away, while each old worker stops accepting new requests once it gets the master's signal and exits after finishing the requests it is currently handling.
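The process-model settings above map onto a handful of top-level directives. A minimal sketch (the values are illustrative, not recommendations):

```nginx
# Run in the foreground instead of daemonizing (handy for containers and debugging):
daemon off;

# Turn off the master/worker split and run a single process (debugging only):
# master_process off;

# One worker per CPU core is the usual rule of thumb; "auto" picks it for you:
worker_processes auto;
```

A graceful restart is then triggered from the shell with `nginx -s reload`, which sends the reload signal to the master process.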

Nginx process model

  • Each worker process is forked from the master process (after the master has set up the socket it will listen on).

  • To ensure that only one worker process handles a given connection, each worker grabs accept_mutex before registering the read event on the listening fd

  • When the read event fires, accept is called to accept the connection; the worker then reads the request, parses it, processes it, generates and returns the response data, and finally disconnects

  • Benefits of this model

    • Independent processes need no locking, which keeps the code simple
    • If one worker exits, the others keep working, so the service is not interrupted and the workers do not affect each other
  • Handle events

    • Events are handled in an asynchronous, non-blocking manner
      • Non-blocking: if an event is not ready, the call returns EAGAIN immediately, meaning "not ready yet, come back later". The worker can do other work in the meantime and check the event again later, until it is ready
  • There are three types of events: network events, signals, and timers

Basic concepts of Nginx

connection

  • The maximum number of connections for one Nginx instance is worker_connections * worker_processes
  • For HTTP requests served from local resources, the maximum number of concurrent requests it can support is worker_connections * worker_processes
  • When Nginx acts as an HTTP reverse proxy, the maximum number of concurrent requests is worker_connections * worker_processes / 2 (each proxied request occupies two connections: one to the client and one to the backend service)
  • The accept_mutex option: a worker must grab accept_mutex before it adds the accept event
    • This controls whether a worker registers the accept event at all
    • It avoids the situation where some processes still have free connection slots but get no chance to accept, while others have run out of slots and connections are dropped
    • ngx_accept_disabled controls whether a worker competes for accept_mutex
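As a rough sizing sketch of the formulas above (the numbers are hypothetical): with 4 workers and 1024 connections per worker, this instance tops out at 4096 connections, or about 2048 concurrent proxied requests, since each proxied request holds both a client connection and an upstream connection:

```nginx
worker_processes 4;

events {
    worker_connections 1024;
    # Serialize accept() across workers; note the default for this flag
    # varies across Nginx versions.
    accept_mutex on;
}
```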

request

  • ngx_http_request_t is Nginx's encapsulation of an HTTP request; it holds the data produced while parsing the request and while generating the response
  • Processing starts from ngx_http_init_request, which sets the read-event handler to ngx_http_process_request_line to parse the request line; ngx_http_read_request_header is then used to read the request headers
    • The virtual host is looked up from the Host field found while parsing the request
  • The parsed data is stored in the ngx_http_request_t structure
  • When Nginx sees two consecutive carriage-return/line-feed pairs, marking the end of the request headers, it calls ngx_http_process_request to process the request
  • ngx_http_process_request sets the handler function to ngx_http_request_handler
  • Then ngx_http_handler is called to actually process the complete HTTP request

keepalive

  • Except for HTTP/1.0 requests without a content-length and HTTP/1.1 chunked requests without a content-length, the length of the request body is known
  • If the client's request header carries connection: close, the client asks to close the connection after this request
  • If it is keep-alive, the client asks to keep the connection open
  • If the request carries no connection header, the protocol default applies: close for HTTP/1.0 and keep-alive for HTTP/1.1
  • If the server finally decides to keep the connection alive, the response header contains connection: keep-alive; otherwise connection: close
  • Benefit of enabling keepalive: when a client needs to access the same server multiple times, the number of connections left in the time-wait state is greatly reduced
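On the server side, keepalive behaviour is tuned with a couple of directives; an illustrative fragment (the values are examples, not recommendations):

```nginx
http {
    keepalive_timeout 65;     # keep an idle client connection open for up to 65s
    keepalive_requests 100;   # recycle a connection after it has served 100 requests
}
```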

pipeline

  • pipeline (pipelining) is a refinement of keepalive
  • Like keepalive, it is based on a persistent connection
  • It allows multiple requests to be carried over a single connection
  • Difference from keepalive:
    • keepalive: the client cannot send the second request until the response to the first has been fully received
    • pipeline: Nginx still does not process the pipelined requests in parallel, but the client may send the second request before the response to the first has fully arrived
  • Implementation: Nginx reads incoming data into a buffer; after finishing the previous request, if there is still data in the buffer it treats it as the start of the next request and processes it, otherwise it puts the connection into the keepalive state

lingering_close

  • Delayed close: when Nginx closes a connection, it keeps the TCP connection open for reads for a period of time before fully closing it
  • This gives better compatibility with some clients, at the cost of extra resources (the connection stays occupied longer)
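Delayed close is controlled by the `lingering_close` family of directives; a sketch using values taken from the documented defaults (treat them as examples):

```nginx
http {
    lingering_close on;      # on | off | always
    lingering_time 30s;      # total time to keep reading leftover client data
    lingering_timeout 5s;    # max wait for each individual read before giving up
}
```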

Nginx configuration system

  • Consists of a master configuration file and some other auxiliary configuration files
  • These files are plain text files
  • They all live in the conf directory under the Nginx installation directory
  • Only the master configuration file nginx.conf is used in all circumstances
  • In nginx.conf, each configuration item consists of a configuration directive and its parameters

instruction

  • A configuration directive is a string
  • It can be enclosed in single or double quotation marks, or left unquoted
  • If a configuration directive contains spaces, it must be quoted

Directive parameter

  • Directive parameters are separated from the directive by one or more Spaces or TAB characters

  • Directive parameters consist of one or more TOKEN strings, which are separated by Spaces or TAB keys

  • Simple configuration item: error_page 500 502 503 504 /50x.html;

  • Complex configuration items:

    location / {
        root   /home/jizhao/nginx-book/build/html;
        index  index.html index.htm;
    }

Instruction context

  • nginx.conf groups configuration information, according to its logical meaning, into multiple scopes (directive contexts)
  • Different scopes contain one or more configuration items
  • Several instruction contexts supported by Nginx
    • main: parameters for the Nginx runtime itself, such as the number of worker processes and the user the processes run as; these are independent of any specific service (HTTP service or mail proxy)
    • http: configuration parameters related to providing the HTTP service, e.g. whether to use keepalive or gzip compression
    • server: the HTTP service can host several virtual hosts, and each virtual host has a corresponding server context holding its configuration; when proxying mail services, several server contexts can likewise be defined, each identified by the address it listens on
    • location: within the HTTP service, a set of configuration items applied to certain URLs
    • mail: shared configuration items for the SMTP/IMAP/POP3 mail proxies (multiple proxies can be run, listening on multiple addresses)
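The contexts above nest inside one another, as this minimal sketch shows (names and paths are hypothetical):

```nginx
worker_processes auto;            # main context

http {                            # http context
    keepalive_timeout 65;

    server {                      # one virtual host
        listen 80;
        server_name example.com;

        location / {              # configuration for a set of URLs
            root /var/www/html;
            index index.html;
        }
    }
}
```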

The modular architecture of Nginx

  • The internal structure of NGINx is composed of core parts and a series of functional modules
  • Benefits: The function of each module is relatively simple, easy to develop, and easy to expand the system

The module overview

  • Each functional module is organized into a chain. When a request arrives, the request passes through part or all of the modules on the chain in turn for processing.
  • There are two modules in particular that sit between the Nginx core and the functional modules. These two modules are the HTTP module and the Mail module.
  • The HTTP and Mail modules implement another layer of abstraction on top of the Nginx core: they handle events related to the HTTP protocol and the email protocols (SMTP/POP3/IMAP), and make sure those events invoke the other functional modules in the correct order

Classification of modules

  • event module: provides an OS-independent framework for the event-handling mechanism, plus handlers for specific events. These modules include ngx_events_module, ngx_event_core_module, and ngx_epoll_module. Which event module Nginx uses depends on the operating system and the compile options
  • phase handler: also referred to simply as handler modules. They are responsible for processing client requests and generating response content; for example, ngx_http_static_module handles client requests for static pages and reads the corresponding disk files to produce the response
  • output filter: also known as filter modules; responsible for post-processing the output, which they may modify. For example, adding a predefined footer to every outgoing HTML page, or rewriting the URLs of outgoing images
  • upstream: implements the reverse proxy. It forwards the real request to a backend server, reads the response from the backend, and sends it back to the client. upstream is a special kind of handler, except that it does not generate the response content itself but reads it from a backend server
  • load-balancer: the load-balancing module implements a specific algorithm to pick one server out of many backends as the target to forward a request to
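The upstream and load-balancer modules are what a typical reverse-proxy configuration exercises; a hypothetical example (the hosts and ports are made up):

```nginx
http {
    upstream backend {
        # Round-robin is the default balancing algorithm; weight skews the choice.
        server 10.0.0.1:8080 weight=2;
        server 10.0.0.2:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;   # forward the real request to a backend server
        }
    }
}
```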

The processing flow of the request

  • Initialize the HTTP Request: read data from the client and build an HTTP Request object containing all of the request's information.
  • Process the request headers
  • Process the request body
  • If one exists, call the handler associated with the request's URL or location
  • Call each phase handler in turn
  • Usually a phase handler processes the request and produces some output; phase handlers are typically associated with a location defined in the configuration file
  • A phase handler usually performs the following tasks:
    • Obtain the location configuration
    • Generate an appropriate response
    • Send the response header
    • Send the response body

