Lock screen interview questions, 100 days: one new set of interview questions every working day. The lock-screen interview questions app and mini-program have launched; website: https://www.demosoftware.cc/#/introductionPage. It includes all of the daily interview questions, plus features such as reviewing questions from the lock screen and daily programming-question email push. This will give you a head start in the interview and help you impress the interviewer! Here are today's interview questions:

==== How does rate limiting work in Nginx? Nginx limits the rate of user requests to keep the server from being overwhelmed. It provides three rate-limiting approaches:

  1. Limiting the normal request rate (normal traffic)
  2. Limiting the request rate with bursts (burst traffic)
  3. Limiting the number of concurrent connections

All three are built on the leaky bucket algorithm.

1. Limiting the normal request rate (normal traffic): control how often Nginx accepts requests from a single user. Nginx uses the ngx_http_limit_req_module module to limit the request rate, following the leaky bucket algorithm. The limit_req_zone and limit_req directives in the nginx.conf configuration file limit the request-processing rate for a single IP:

```nginx
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/m;

server {
    location /seckill.html {
        limit_req zone=one;
        proxy_pass http://lj_seckill;
    }
}
```

Here 1r/s means one request per second and 1r/m means one request per minute. If requests arrive faster than the configured rate, Nginx rejects the excess requests.

2. Limiting the request rate with bursts (burst traffic): the configuration above limits the request rate to some extent, but there is a problem: what happens when legitimate burst traffic, for example during a flash-sale event, exceeds the rate and gets rejected? Nginx provides the burst parameter, combined with the nodelay parameter, to handle traffic bursts: burst sets how many requests beyond the configured rate may still be accepted.

```nginx
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/m;

server {
    location /seckill.html {
        limit_req zone=one burst=5 nodelay;
        proxy_pass http://lj_seckill;
    }
}
```

With burst=5 nodelay, Nginx immediately processes up to 5 requests from a user beyond the configured rate; any further requests beyond the burst are rejected.

3. Limiting the number of concurrent connections: the ngx_http_limit_conn_module module in Nginx provides this capability, configured with the limit_conn_zone and limit_conn directives:

```nginx
http {
    limit_conn_zone $binary_remote_addr zone=myip:10m;
    limit_conn_zone $server_name zone=myServerName:10m;

    server {
        location / {
            limit_conn myip 10;
            limit_conn myServerName 100;
            rewrite / http://www.lijie.net permanent;
        }
    }
}
```

This configuration allows at most 10 concurrent connections per client IP and at most 100 concurrent connections for the entire virtual server. Note that a connection is only counted once Nginx has finished processing the request headers. In practice, rate limiting is usually implemented with the leaky bucket algorithm or the token bucket algorithm.

==== How do the leaky bucket algorithm and the token bucket algorithm work?

1) The leaky bucket algorithm is commonly used in networking for traffic shaping and rate limiting. Its main purpose is to control the rate at which data is injected into the network and to smooth out burst traffic, reshaping bursts into a steady flow. This is exactly the mechanism described above: burst traffic enters a leaky bucket, and the bucket processes the queued requests in turn at the rate we define. If the inflow is too large, the bucket overflows and the excess requests are rejected. The leaky bucket algorithm therefore controls the data transmission rate.

2) The token bucket algorithm is one of the most commonly used algorithms for network traffic shaping and rate limiting. It is typically used to control the amount of data sent over the network while still allowing bursts. The RateLimiter in Google's open-source Guava library uses the token bucket algorithm. It works as follows: a bucket of fixed size is filled with tokens at a constant rate; if tokens are produced faster than they are consumed, they accumulate until the bucket is full. Each request must take a token before it is processed; if no token is available, the request is rejected.
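As a rough illustration of the token bucket idea (a minimal sketch, not Nginx's or Guava's actual implementation), the refill-and-consume logic can be written like this:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch only)."""

    def __init__(self, rate, capacity):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum tokens the bucket can hold
        self.tokens = capacity        # start with a full bucket
        self.last = time.monotonic()  # time of the last refill

    def allow(self):
        now = time.monotonic()
        # Refill tokens at a constant rate, capped at the bucket's capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1          # a request consumes one token
            return True
        return False                  # no token available: reject the request

bucket = TokenBucket(rate=1, capacity=5)
results = [bucket.allow() for _ in range(7)]
print(results)  # the first 5 burst requests pass; the remaining 2 are rejected
```

Because the bucket starts full, a burst of up to `capacity` requests is admitted immediately; afterwards requests are admitted only as fast as tokens are refilled, which is the behavior that distinguishes the token bucket from the strictly smoothing leaky bucket.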

==== What is dynamic-static separation? Why separate static and dynamic content? Nginx is currently one of the most popular web servers. An important part of website optimization is handling static content, and the key to that is dynamic-static separation: splitting a dynamic website's resources, according to certain rules, into resources that rarely change and resources that change frequently, and then caching the static resources according to their characteristics. Nginx is very good at serving static content but weaker at processing dynamic content, so dynamic-static separation is widely used in enterprises. Static resources such as images, JS, and CSS files are cached on the reverse proxy server, Nginx. When the browser requests a static resource, Nginx serves it directly without forwarding the request to the back-end Tomcat server. When the user requests a dynamic resource, such as a servlet or JSP, the request is forwarded to the Tomcat server for processing. This achieves dynamic-static separation, which is one of the important functions of a reverse proxy server.
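A typical configuration for this might look as follows (a sketch only: the upstream name, port, and root path are hypothetical and would differ per deployment):

```nginx
# Hypothetical back-end Tomcat server.
upstream tomcat_backend {
    server 127.0.0.1:8080;
}

server {
    listen 80;

    # Static resources: served directly by Nginx, with browser caching.
    location ~* \.(jpg|jpeg|png|gif|js|css)$ {
        root /usr/share/nginx/html;
        expires 7d;
    }

    # Dynamic requests (e.g. servlets/JSP): forwarded to Tomcat.
    location / {
        proxy_pass http://tomcat_backend;
    }
}
```

The regex `location` block matches static file extensions so those requests never reach Tomcat, while everything else falls through to the proxied `location /`.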

More interview questions or learning resources can be found on my homepage or in the comments