
What is Nginx?

Nginx is a lightweight, high-performance web server and reverse proxy for the HTTP, HTTPS, SMTP, POP3 and IMAP protocols. It provides very efficient reverse proxying and load balancing: it can handle 20,000 to 30,000 concurrent connections, and official benchmarks report support for up to 50,000 concurrent connections. Many large Chinese sites use Nginx, such as Sina, NetEase and Tencent.

What are the advantages of Nginx?

  • Cross-platform and simple configuration.
  • Non-blocking, high concurrent connections: handle 20,000 to 30,000 concurrent connections, official monitoring can support 50,000 concurrent connections.
  • Small memory consumption: 10 Nginx processes take only about 150 MB of memory.
  • It’s cheap and open source.
  • High stability, very small probability of downtime.
  • Built-in health checks: if one of the backend servers goes down, failed requests are not re-sent to the downed server but are resubmitted to another node.

Nginx application scenarios?

  • HTTP server. Nginx can provide HTTP service on its own, acting as a static web server.
  • Virtual hosting. Multiple websites can run on one server, for example virtual hosts for personal websites.
  • Reverse proxy and load balancer. When a website's traffic grows to the point where a single server cannot satisfy user requests, multiple servers are clustered and Nginx acts as a reverse proxy in front of them. The servers then share the load evenly, so no single machine sits idle while another crashes under excessive load.
  • Nginx can also handle security management; for example, you can use Nginx to build an API gateway that intercepts each interface service.

How does Nginx handle requests?

```
server {                     # first server block: a standalone virtual host site
    listen      80;
    server_name localhost;
    location / {             # first location block
        root  html;          # site root, relative to the Nginx install directory
        index index.html index.htm;
    }
}
```
  • When Nginx starts, it parses the configuration file to determine the ports and IP addresses to listen on, then initializes the listening sockets in the master process: it binds them to the specified IP and port and listens.
  • The master process then forks several child processes (an existing process calls fork to create a new process; processes created this way are called child processes).
  • The child processes compete to accept new connections. At this point a client can initiate a connection to Nginx; once the three-way handshake establishes the connection, one of the child processes accepts the socket and creates Nginx's encapsulation of the connection, the ngx_connection_t structure.
  • Next, read/write event handlers are set up and read/write events are registered so data can be exchanged with the client.
  • Finally, either Nginx or the client actively closes the connection, and the connection's life ends.

How does Nginx achieve high concurrency?

If a server has a process (or thread) handling a request, then the number of processes is the number of concurrent requests. Obviously, there are a lot of processes in waiting. Waiting for? Most of them are waiting for network transmission.

Nginx’s asynchronous, non-blocking way of working reclaims this waiting time: when a request must wait, the process is freed to serve other requests. A small number of processes can thus handle a large number of concurrent connections.

Suppose a traditional server uses the same four processes, one process per request: each process handles its request until the session closes. If a fifth request arrives in the meantime, it cannot be served promptly because none of the four processes is free, so a scheduler has to start a new process for every additional request.

Think back, is there a problem with BIO?

Nginx does not work this way. Each incoming request is handled by a worker process, but not from start to finish in one go. How far, then? Up to the point where blocking might occur, such as forwarding the request to an upstream (backend) server and waiting for the response. Instead of waiting, the worker registers an event after sending the request ("notify me when the upstream returns, and I'll continue") and goes off to do other work. If another request arrives at this point, the worker can handle it right away. Once the upstream server responds, the event fires, the worker picks the request back up, and processing continues.

That’s why Nginx is based on the event model.

Since the nature of the Web Server’s work dictates that the majority of the life of each request is spent on network traffic, the actual time slice spent on the server machine is small. This is the secret to solving high concurrency in just a few processes. That is:

Webserver happens to be a network IO intensive application, not a computationally intensive one.

Asynchrony, non-blocking I/O, the use of epoll, and a great many detail optimizations: this is the technology that makes Nginx what it is.
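The event model can be illustrated in a few lines. The sketch below is not Nginx's code; it uses Python's selectors module (epoll on Linux), with a socketpair standing in for a client connection, to show one process waiting on events rather than blocking on any single connection:

```python
# Minimal sketch of the event-driven model: one process registers
# sockets with the OS event API (epoll on Linux, via selectors) and
# acts only when a read event fires, instead of blocking per connection.
import selectors
import socket

def demo_event_loop():
    sel = selectors.DefaultSelector()
    # a connected socket pair stands in for a client connection
    server_side, client_side = socket.socketpair()
    server_side.setblocking(False)
    sel.register(server_side, selectors.EVENT_READ)

    client_side.send(b"hello")            # the "client" writes, making the socket readable

    received = []
    for key, _ in sel.select(timeout=1):  # wait for events, not for one socket
        received.append(key.fileobj.recv(1024))

    sel.close()
    server_side.close()
    client_side.close()
    return received

print(demo_event_loop())   # [b'hello']
```

A real server would register many such sockets and loop over `sel.select()` forever, which is exactly the shape of a worker's event loop.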

What is forward proxy?

A forward proxy is a server that sits between the client and the origin server. To get content from the origin server, the client sends a request to the proxy and names the destination (the origin server); the proxy then forwards the request to the origin server and returns the content to the client.

Only the client uses the forward proxy. A forward proxy in one sentence: it acts on behalf of the client. Example: the OpenVPN setups we use, and so on.

What is a reverse proxy?

In Reverse Proxy mode, a Proxy server receives Internet connection requests, sends the requests to the Intranet server, and returns the results to the Internet client. In this case, the proxy server acts as a reverse proxy server.

A reverse proxy in one sentence: it acts on behalf of the server.
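A minimal reverse-proxy configuration sketch; the backend address and headers below are illustrative assumptions, not values from this article:

```nginx
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;         # assumed backend address
        proxy_set_header Host $host;              # pass the original Host header
        proxy_set_header X-Real-IP $remote_addr;  # pass the real client IP
    }
}
```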

What are the advantages of a reverse proxy server?

A reverse proxy server can hide the existence and features of the source server. It acts as an intermediate layer between the Internet cloud and the Web server. This is great for security, especially if you use web hosting services.

What is the Nginx directory structure?

```
[root@localhost ~]# tree /usr/local/nginx
/usr/local/nginx
├── conf                        # configuration file directory
│   ├── fastcgi.conf            # FastCGI parameters
│   ├── fastcgi.conf.default
│   ├── fastcgi_params
│   ├── fastcgi_params.default
│   ├── koi-utf
│   ├── mime.types              # media (MIME) type definitions
│   ├── mime.types.default
│   ├── nginx.conf              # main Nginx configuration file
│   ├── nginx.conf.default
│   ├── scgi_params
│   └── uwsgi_params
├── html                        # default site directory
│   ├── 50x.html                # friendly error page shown in place of 50x errors
│   └── index.html              # default home page
├── logs                        # log directory
│   ├── access.log
│   ├── error.log
│   └── nginx.pid               # pid file holding the master process ID
├── proxy_temp                  # temporary directory
├── sbin                        # Nginx command directory
│   └── nginx                   # the nginx startup command
├── scgi_temp                   # temporary directory
└── uwsgi_temp                  # temporary directory
```

What attribute modules does the Nginx configuration file nginx.conf have?

```
worker_processes  1;                # number of worker processes
events {
    worker_connections  1024;       # maximum connections per worker
}
http {                              # http block start
    include       mime.types;
    default_type  application/octet-stream;   # default media type
    sendfile      on;
    keepalive_timeout  65;
    server {                        # first server block: a standalone virtual host site
        listen       80;
        server_name  localhost;
        location / {                # first location block
            root   html;            # site root, relative to the Nginx install directory
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {      # serve /50x.html
            root   html;            # from the html directory
        }
    }
    ......
}
```

What’s the difference between cookie and session?


Both cookies and sessions store user information, as key-value pairs.

The differences:

Cookie:

  • Stored in the client browser
  • Each domain name has its own cookies; cookies cannot be accessed across domains
  • Users can view or modify cookies
  • Set by the server through the HTTP response headers
  • Acts as the key (used to open the lock held on the server)

Session:

  • Stored on the server (file, database, Redis)
  • Used for sensitive information
  • Acts as the lock

Why doesn’t Nginx use multithreading?

Apache: creates multiple processes or threads, and each process or thread allocates its own CPU and memory (threads cost much less than processes, so the worker MPM supports higher concurrency than prefork), which can drain server resources.

Nginx: uses single-threaded, asynchronous, non-blocking request processing based on epoll (administrators can configure the number of worker processes under the Nginx master process). It does not allocate CPU and memory per request, which saves a great deal of resources and also avoids a lot of CPU context switching. This is why Nginx supports higher concurrency.

Differences between Nginx and Apache

Lightweight: Nginx is also a web server, but it uses less memory and fewer resources than Apache.

High concurrency: Nginx handles requests asynchronously and without blocking, whereas Apache blocks on each request; under high concurrency Nginx maintains low resource consumption and high performance.

Highly modular design, writing modules is relatively simple.

The core difference is that Apache is a synchronous multi-process model, one connection corresponds to one process, while Nginx is asynchronous, multiple connections can correspond to one process.

What is dynamic resource and static resource separation?

Dynamic/static resource separation means letting a dynamic website distinguish, by fixed rules, resources that rarely change from resources that change often. Once the static resources are split out, they can be cached according to their characteristics; this is the core idea of static-site handling.

Dynamic resource, static resource separation simple summary is: dynamic file and static file separation.

Why do we separate dynamic and static?

In software development, some requests need backend processing (such as .jsp or .do requests), while others do not (such as css, html, jpg and js files). Files that need no backend processing are called static files; the rest are dynamic files.

One could simply serve the static files from the backend as well. That works, but it significantly increases the number of backend requests. When response speed matters, we should apply the dynamic/static separation strategy: deploy static resources (html, JavaScript, css, img and other files) separately from the backend application, improving access speed for static content and reducing the load on the backend application.

Here we put the static resources into Nginx and forward the dynamic resources to the Tomcat server.
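A hedged sketch of that split, assuming static files live under /usr/local/static and Tomcat listens on 127.0.0.1:8080 (both are assumptions for illustration):

```nginx
server {
    listen 80;
    # static resources are served directly by Nginx
    location ~* \.(html|js|css|jpg|png|gif)$ {
        root /usr/local/static;     # assumed static file directory
        expires 7d;                 # let browsers cache static files
    }
    # everything else (dynamic requests) is forwarded to Tomcat
    location / {
        proxy_pass http://127.0.0.1:8080;   # assumed Tomcat address
    }
}
```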

Of course, since CDN services such as Qiniu and Alibaba Cloud have become very mature, the mainstream approach is to cache static resources in a CDN to improve access speed.

Compared with the local Nginx, the CDN server has more nodes in the country, which can realize the nearest access of users. Moreover, CDN services can provide greater bandwidth, unlike our own application services, which provide limited bandwidth.

What is CDN service?

CDN, content delivery network.

Its purpose is, by adding a new layer of network architecture in the existing Internet, the content of the website will be published to the edge of the network closest to the user, the user can get the content needed nearby, improve the speed of user access to the website.

Generally speaking, because CDN service is popular now, almost all companies will use CDN service.

How does Nginx do static separation?

You only need to specify the directory corresponding to each path: location blocks can be matched with regular expressions, and each points to a directory on disk. For example (on Linux):

```
location /image/ {
    root   /usr/local/static/;
    autoindex on;
}
```


Create the directory /usr/local/static/image, put an image in it (for example /usr/local/static/image/1.jpg), and reload Nginx:

```
sudo nginx -s reload
```

Open your browser and type server_name/image/1.jpg to access the static image

Nginx load balancing algorithm how to achieve? What are the strategies?

To avoid server crashes, load balancing is used to share server load. When users visit, they will first visit a forwarding server, and then the forwarding server will distribute the access to the server with less pressure.

Nginx implements the following five load balancing policies:

1. Polling (default)

Each request is allocated to a different back-end server one by one in chronological order. If a back-end server goes down, the failed system can be automatically removed.

```
upstream backserver {
    server 192.168.0.1;    # backend addresses are placeholders; the originals were elided
    server 192.168.0.2;
}
```
2. Weight weight

The larger the value of weight is, the higher the access probability is. It is mainly used when the performance of each back-end server is unbalanced. The second is to set different weights in the case of master and slave to achieve reasonable and effective use of host resources.

```
# The higher the weight, the greater the access probability; here 20% and 80%.
# Backend addresses are placeholders; the originals were elided.
upstream backserver {
    server 192.168.0.1 weight=2;
    server 192.168.0.2 weight=8;
}
```
3. Ip_hash (IP binding)

Each request is allocated according to the hash result of access IP, so that visitors from the same IP address can access a back-end server, and effectively solve the session sharing problem existing in dynamic web pages

```
upstream backserver {
    ip_hash;               # hash by client IP so the same visitor hits the same backend
    server 192.168.0.1;    # backend addresses are placeholders
    server 192.168.0.2;
}
```
4. Fair (third-party plugin)

The upstream_fair module must be installed.

Compared with smarter load balancing algorithms such as weight and IP_hash, Fair performs intelligent load balancing based on the page size and load time. The algorithm with short response time is preferentially allocated.

Requests are allocated to whichever server responds fastest.

```
upstream backserver {
    server server1;
    server server2;
    fair;
}
```
5. Url_hash (third-party plugin)

You must install the Nginx Hash software package

Requests are allocated based on the hash results of urls so that each URL is directed to the same backend server, which further improves the efficiency of the backend cache server.

```
upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
```

How to solve front-end cross domain problems with Nginx?

Use Nginx to forward the requests: write the cross-domain interfaces as same-origin interfaces, then have Nginx forward those requests to the real target addresses.
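A hypothetical sketch, assuming the front end is served from this Nginx and the real backend lives on another origin (both domain names are placeholders):

```nginx
server {
    listen 80;
    server_name www.example.com;                  # assumed front-end domain
    # the browser calls /api/... on its own origin, so no CORS applies;
    # Nginx forwards the call to the real cross-origin backend
    location /api/ {
        proxy_pass http://api.other-domain.com/;  # assumed real backend
    }
}
```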

How to configure Nginx virtual host?

1. Name-based virtual hosts, distinguished by domain name. Typical use: public websites.

2. Port-based virtual hosts, distinguished by port. Typical use: a company's internal website, or the admin backend of a public site.

3. IP-based virtual hosts.

Configure domain names based on virtual hosts

You need to create the /data/www and /data/bbs directories, add entries mapping the domain names to the VM's IP address in the local Windows hosts file, and put an index.html file in each site's directory.

```
server {
    listen      80;
    server_name www.example.com;    # domain names were elided; placeholders
    location / {
        root  /data/www;
        index index.html index.htm;
    }
}
server {
    listen      80;
    server_name bbs.example.com;    # placeholder
    location / {
        root  /data/bbs;
        index index.html index.htm;
    }
}
```
Port-based virtual host

Use port to distinguish, browser use domain name or IP address: port number access

```
# port 8080 serves /data/www directly
server {
    listen      8080;
    server_name localhost;                  # server names were elided; placeholders
    location / {
        root  /data/www;
        index index.html index.htm;
    }
}
# clients hitting port 80 are proxied to the real server address
server {
    listen      80;
    server_name localhost;                  # placeholder
    location / {
        proxy_pass http://127.0.0.1:8080;   # proxy target was elided; placeholder
        index index.html index.htm;
    }
}
```

What does location do?

The function of the location directive is to perform different applications according to the URI requested by the user, that is, to match the website URL requested by the user, and relevant operations will be carried out once the match is successful.

Location syntax?

Note: ~ indicates a case-sensitive regular-expression match.

Location regex examples:

```
# Priority 1: exact match, the root path
location = / {
    return 400;
}
# Priority 2: prefix match that suppresses regex matching, e.g. paths starting with /av
location ^~ /av {
    root /data/av/;
}
# Priority 3: case-sensitive regex match, any path containing /media
location ~ /media {
    alias /data/static/;
}
# Priority 4: case-insensitive regex match; all *.jpg/png/gif/js/css requests land here
location ~* .*\.(jpg|png|gif|js|css)$ {
    root /data/av/;
}
# Priority 5: generic prefix match, everything else
location / {
    return 403;
}
```

How is the current limiting done?

Nginx traffic limiting is to limit the speed of user requests and prevent the server from being overwhelmed

There are three types of current limiting

  • Normal Restricted access frequency (normal traffic)
  • Burst limiting access Frequency (Burst traffic)
  • Limit the number of concurrent connections

Nginx traffic limiting is based on leaky bucket flow algorithm

Three traffic limiting algorithms are implemented

1. Normal restricted access frequency (normal traffic) :

Limit the number of requests a user sends, i.e. how often Nginx accepts one request.

Nginx uses the ngx_http_limit_req_module module to limit access frequency; the limit is implemented on the principle of the leaky-bucket algorithm. In the nginx.conf configuration file, you can use the limit_req_zone and limit_req directives to limit the request-processing frequency of a single IP address.

```
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/m;
server {
    location /seckill.html {
        limit_req zone=one;
        proxy_pass http://lj_seckill;
    }
}
```

rate=1r/s means one request per second; 1r/m means one request per minute. If Nginx has not finished processing within the interval, additional requests from that user are rejected.

2. Burst limit access frequency (burst traffic) :

Limit the number of requests a user sends and how often Nginx receives one.

The configuration above limits access frequency to some extent, but there is a problem: what if burst traffic arrives that exceeds the limit, yet during an event (such as a flash sale) those requests must be handled rather than rejected?

Nginx provides the burst parameter combined with the nodelay parameter to solve the problem of traffic burst. You can set the number of requests that can be processed in addition to the set number of requests. We can add burst parameters and nodelay parameters to the previous example:

```
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/m;
server {
    location /seckill.html {
        limit_req zone=one burst=5 nodelay;
        proxy_pass http://lj_seckill;
    }
}
```

burst=5 nodelay means Nginx will immediately process up to five requests from a user beyond the configured rate; any excess beyond that is rejected rather than queued.

3. Limit the number of concurrent connections

The ngx_http_limit_conn_module module in Nginx provides the ability to limit the number of concurrent connections, configured with the limit_conn_zone and limit_conn directives. Here's a simple example:

```
http {
    limit_conn_zone $binary_remote_addr zone=myip:10m;
    limit_conn_zone $server_name zone=myServerName:10m;
}
server {
    location / {
        limit_conn myip 10;
        limit_conn myServerName 100;
        rewrite / permanent;    # rewrite target elided in the original
    }
}
```

Here the maximum number of concurrent connections for a single IP address is 10, and the maximum for the entire virtual server is 100. Note that a connection is counted against the virtual server only after the request header has been fully processed. As mentioned earlier, Nginx's limiting is based on the leaky-bucket algorithm; in practice, rate limiting is generally implemented with the leaky-bucket or token-bucket algorithm.

Leaky bucket flow algorithm and token bucket algorithm know?

Leaky bucket algorithm

The idea of the leaky-bucket algorithm is simple: compare requests to water and the bucket to the system's capacity limit. Water is poured into the bucket and flows out at a fixed rate. When water arrives faster than it flows out, the bucket, having limited capacity, overflows, and the overflow (excess requests) is rejected directly. That is how the rate limit is enforced.

Token bucket algorithm

The principle of token bucket algorithm is also relatively simple, we can understand it as the registration of the hospital to see a doctor, only after getting the number can be diagnosed.

The system maintains a token bucket and puts tokens into the bucket at a constant speed. If a request comes in and wants to be processed, it needs to obtain a token from the bucket first. If no token is available in the bucket, the request will be denied service. Token bucket algorithm can limit requests by controlling bucket capacity and token issuing rate.
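Both algorithms can be sketched in a few lines; this is an illustration only (not Nginx's internal code), and the capacities and rates are arbitrary example values:

```python
# Illustrative sketch of both rate-limiting algorithms.
import time

class LeakyBucket:
    """Water (requests) leaks out at a fixed rate; arrivals that would
    overflow the bucket's capacity are rejected."""
    def __init__(self, capacity, leak_rate):
        self.capacity = capacity        # bucket size
        self.leak_rate = leak_rate      # units drained per second
        self.water = 0.0
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.water = max(0.0, self.water - (now - self.last) * self.leak_rate)
        self.last = now
        if self.water + 1 <= self.capacity:
            self.water += 1             # request enters the bucket
            return True
        return False                    # bucket full: reject (overflow)

class TokenBucket:
    """Tokens accumulate at a fixed rate; each request must take one
    token, otherwise it is rejected."""
    def __init__(self, capacity, fill_rate):
        self.capacity = capacity        # max tokens held
        self.fill_rate = fill_rate      # tokens added per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1            # spend one token
            return True
        return False                    # no token available: reject

# With rates of 0 the time dimension drops out, making behavior exact:
lb = LeakyBucket(capacity=3, leak_rate=0)
print([lb.allow() for _ in range(5)])   # [True, True, True, False, False]
tb = TokenBucket(capacity=2, fill_rate=0)
print([tb.allow() for _ in range(5)])   # [True, True, False, False, False]
```

The key practical difference: a token bucket can bank tokens and therefore absorb bursts, while a leaky bucket enforces a smooth outflow rate.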

How to configure Nginx high availability?

If the upstream server (real access server) fails or does not respond in time, it should be directly rotated to the next server to ensure high availability of the server

Nginx configuration code:

```
server {
    listen      80;
    server_name localhost;       # server name elided in the original; placeholder
    location / {
        ### forward to the upstream load-balanced server group
        proxy_pass http://backServer;
        ### timeout for nginx to connect to the upstream (real) server
        proxy_connect_timeout 1s;
        ### timeout for nginx to send to the upstream (real) server
        proxy_send_timeout 1s;
        ### timeout for nginx to read from the upstream (real) server
        proxy_read_timeout 1s;
        index index.html index.htm;
    }
}
```

How does Nginx determine whether other IP addresses are inaccessible?

```
if ($remote_addr = 192.168.0.1) {    # the IP was elided in the original; placeholder
    return 403;
}
```

How can an undefined server name be used in Nginx to prevent requests from being processed?

Simply define a catch-all server for such requests:

The server name is set to an empty string, which matches requests without a Host header field, and a special non-standard Nginx code is returned that terminates the connection.
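In configuration terms, that pattern can be sketched as follows (return 444 is the non-standard Nginx code that closes the connection without sending a response):

```nginx
server {
    listen 80;
    server_name "";     # matches requests that carry no Host header field
    return 444;         # non-standard code: terminate the connection without replying
}
```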

How do I limit browser access?

```
if ($http_user_agent ~ Chrome) {
    return 500;
}
```

What are the Rewrite module's global variables?

```
$remote_addr         // client IP address
$binary_remote_addr  // client IP address in binary form
$remote_port         // client port, e.g. 50472
$remote_user         // user name authenticated by the Auth Basic module
$host                // request Host header field, otherwise the server name
$request             // user request line, e.g. GET /?a=1&b=2 HTTP/1.1
$request_filename    // path of the requested file: root or alias combined with the URI,
                     // e.g. /2013/81.html
$status              // response status code of the request, e.g. 200
$body_bytes_sent     // bytes of body sent in the response; accurate even if the
                     // connection is broken
$http_referer        // referring URL
$http_user_agent     // client agent info, e.g. Mozilla/5.0 (Windows NT 5.1)
                     // AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.76 Safari/537.36
$args                // request parameters, e.g. a=1&b=2
$document_uri        // current URI without any parameters (see $args), e.g. /2013/81.html
$document_root       // root path configured for the current request
$hostname            // host name, e.g. centos53.localdomain
$http_cookie         // client cookie information
$cookie_COOKIE       // value of the COOKIE cookie variable
$is_args             // "?" if $args is set, otherwise the empty string
$limit_rate          // connection rate limit; 0 if none
$query_string        // same as $args
$request_body_file   // temporary file name of the client request body
$request_method      // request action, usually GET or POST
$request_uri         // original URI including parameters, without the host name,
                     // e.g. /2013/81.html?a=1&b=2
$scheme              // scheme (http, https), e.g. http
$uri                 // current request URI without any parameters (see $args),
                     // e.g. /2013/81.html
$request_completion  // "OK" when the request finishes; empty if the request is not
                     // completed or is not the last in a request chain
$server_protocol     // protocol used, usually HTTP/1.0 or HTTP/1.1
$server_addr         // server IP address, determined after a system call
$server_name         // server name
$server_port         // port the request arrived on, e.g. 80
```

How does Nginx implement backend service health check?

Method 1: use Nginx's ngx_http_proxy_module and ngx_http_upstream_module modules to perform health checks on backend nodes.

Method 2 (recommended) : Use the nginx_upstream_check_module to perform a health check on the back-end node.
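A hedged sketch of method 2: the check directives below come from the third-party nginx_upstream_check_module and assume it is compiled in; the backend addresses are placeholders.

```nginx
upstream backend {
    server 192.168.0.1:8080;     # placeholder backends
    server 192.168.0.2:8080;
    # probe every 3s; mark up after 2 successes, down after 5 failures
    check interval=3000 rise=2 fall=5 timeout=1000 type=http;
    check_http_send "HEAD / HTTP/1.0\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}
```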

How does Nginx enable compression?

After nginx gzip compression is enabled, the size of static resources such as web pages, CSS, and JS is greatly reduced, which saves a lot of bandwidth, improves transmission efficiency, and gives users a faster experience. Although it consumes CPU resources, it is worth it to give the user a better experience.

The enabled configuration is as follows:

Put the following configuration into the http { ... } block:

```
http {
    # enable gzip
    gzip on;
    # minimum file size to compress; smaller files are not worth compressing
    gzip_min_length 1k;
    # compression level, 1-9; 2 is a reasonable speed/size trade-off
    gzip_comp_level 2;
    # file types to compress
    gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png;
    # add the "Vary: Accept-Encoding" response header; recommended
    gzip_vary on;
}
```

Save, restart nginx, and force-refresh the page (to avoid the cache) to see the effect. Taking Google Chrome as an example, press F12 and inspect the response headers: with gzip enabled, the response carries Content-Encoding: gzip, and the transferred file sizes shrink noticeably. Comparing jquery before and after gzip compression: its original size of about 90 KB drops to only about 30 KB.

Gzip is useful, but compressing the following types of resources is not recommended.

1. Picture type

Reason: images such as jpg and png are already compressed formats, so even with gzip enabled there is little difference between the sizes before and after compression; enabling it for images wastes CPU. (Tip: try compressing a jpg into a zip and you'll see the size barely changes. Zip and gzip use different algorithms, but it illustrates that compressing images adds little value.)

2. Large files

Cause: It consumes a lot of CPU resources, and may not have obvious effect.

What does ngx_http_upstream_module do?

ngx_http_upstream_module is used to define groups of servers that can be referenced by the proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass and memcached_pass directives.

What is a C10K problem?

The C10K problem is the inability to handle a large number of clients (10,000) of network sockets simultaneously.

Does Nginx support compressing requests upstream?

You can use the Nginx gunzip module for this. The gunzip module is a filter that decompresses responses carrying "Content-Encoding: gzip" for clients or servers that do not support the gzip encoding method.
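A minimal sketch, assuming Nginx was built with ngx_http_gunzip_module and that "backend" is a defined upstream group:

```nginx
location / {
    proxy_pass http://backend;   # assumed upstream group
    gunzip on;                   # decompress gzipped upstream responses for clients without gzip support
}
```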

How do I get the current time in Nginx?

To get the current time in Nginx, use the SSI module's $date_local and $date_gmt variables:

```
proxy_set_header THE-TIME $date_gmt;
```

What is the purpose of the -s option of the nginx command?

The -s argument of the nginx executable sends a signal to the running master process, e.g. nginx -s stop | quit | reload | reopen.

How to add modules to Nginx server?

During compilation, you must select the Nginx module because Nginx does not support runtime selection for modules.

How to set the number of worker processes in production?

With multiple CPUs, multiple workers can be configured, and the number of worker processes is usually set equal to the number of CPU cores. Setting multiple worker processes on a single CPU makes the operating system schedule among them, which reduces performance; if there is only one CPU, start just one worker process.
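In configuration terms this is usually written as follows (worker_cpu_affinity auto assumes a reasonably recent Nginx on Linux):

```nginx
worker_processes auto;       # one worker per CPU core
worker_cpu_affinity auto;    # pin each worker to a core to avoid cross-core scheduling
```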

Nginx status code


499: the server takes too long to process the request, and the client actively closes the connection.


502 (typical FastCGI-related causes):

(1). Check whether the FastCGI process has started

(2). Check whether the number of FastCGI worker processes is sufficient

(3). Check whether FastCGI execution time is too long; if so, increase the timeouts:

  • fastcgi_connect_timeout 300;
  • fastcgi_send_timeout 300;
  • fastcgi_read_timeout 300;

(4). The FastCGI buffer is insufficient. Nginx, like Apache, has a front-end buffer limit; adjust the buffer parameters:

  • fastcgi_buffer_size 32k;
  • fastcgi_buffers 8 32k;

(5). The proxy buffer is insufficient. If you use proxying, adjust:

  • proxy_buffer_size 16k;
  • proxy_buffers 4 16k;

(6). The PHP script execution time is too long

  • Change the 0s value in php-fpm.conf to a concrete time limit
