Preface:

I have organized some basic Java flow charts and architecture diagrams and taken some notes, so we can study them together.

1. Spring lifecycle

Spring is the most popular and powerful lightweight container framework in Java, so it is necessary to understand the life cycle of Spring.

  • First, after the container starts, the bean is instantiated
  • Properties are injected according to the bean definition
  • Spring checks whether the object implements any of the xxxAware interfaces (such as BeanNameAware) and injects the corresponding dependency into the bean
  • Once the bean has been constructed as above, custom processing can be added by implementing the BeanPostProcessor interface, e.g. postProcessBeforeInitialization
  • After the BeanPostProcessor pre-processing is complete, we can add our own logic through @PostConstruct, afterPropertiesSet, init-method and similar callbacks
  • By implementing the BeanPostProcessor interface, postProcessAfterInitialization is then invoked as post-processing
  • Then the Bean is ready to be used.
  • Once the container is closed, if the Bean implements the DisposableBean interface, the destroy() method of that interface is called back
  • By specifying the destroy-method function, you can execute the specified logic before the bean is destroyed
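
To make the order of these callbacks concrete, here is a minimal sketch assuming Java-based Spring configuration; the class and bean names (DemoBean, LoggingPostProcessor, customInit, customDestroy) are made up for illustration, and the numbered comments mirror the steps above.

```java
import org.springframework.beans.factory.BeanNameAware;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// A bean that hooks into several lifecycle callbacks.
class DemoBean implements BeanNameAware, InitializingBean, DisposableBean {
    public DemoBean() { System.out.println("1. constructor (instantiation)"); }
    @Override public void setBeanName(String name) { System.out.println("2. BeanNameAware.setBeanName: " + name); }
    @Override public void afterPropertiesSet() { System.out.println("4. InitializingBean.afterPropertiesSet"); }
    public void customInit() { System.out.println("5. init-method"); }
    @Override public void destroy() { System.out.println("7. DisposableBean.destroy"); }
    public void customDestroy() { System.out.println("8. destroy-method"); }
}

// A post-processor that wraps the initialization phase of every bean.
class LoggingPostProcessor implements BeanPostProcessor {
    @Override public Object postProcessBeforeInitialization(Object bean, String name) {
        System.out.println("3. postProcessBeforeInitialization: " + name);
        return bean;
    }
    @Override public Object postProcessAfterInitialization(Object bean, String name) {
        System.out.println("6. postProcessAfterInitialization: " + name);
        return bean;
    }
}

@Configuration
class LifecycleConfig {
    @Bean public LoggingPostProcessor loggingPostProcessor() { return new LoggingPostProcessor(); }
    @Bean(initMethod = "customInit", destroyMethod = "customDestroy")
    public DemoBean demoBean() { return new DemoBean(); }
}

public class LifecycleDemo {
    public static void main(String[] args) {
        AnnotationConfigApplicationContext ctx =
                new AnnotationConfigApplicationContext(LifecycleConfig.class);
        ctx.close(); // closing the container triggers the destroy callbacks
    }
}
```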

2. TCP three-way handshake and four-way wave

TCP’s three handshakes and four waves should be familiar to every programmer.

Three handshakes:

  • After the first handshake (SYN=1, seq=x) is sent, the client enters the SYN_SENT state
  • After the second handshake (SYN=1, ACK=1, seq=y, ACKnum=x+1), the server enters the SYN_RCVD state.
  • After the third handshake (ACK=1, ACKnum=y+1) is sent, the client enters the ESTABLISHED state. When the server receives the packet, it enters the ESTABLISHED state as well. The three-way handshake is complete and data transmission can begin.

Four waves:

  • After the first wave (FIN=1, seq=a) is sent, the client enters FIN_WAIT_1 state
  • After the second wave (ACK=1, ACKnum=a+1), the server enters CLOSE_WAIT state and the client enters FIN_WAIT_2 state after receiving the acknowledgement packet
  • After the third wave (FIN=1, seq=b), the server enters the LAST_ACK state and waits for the last ACK from the client.
  • On the fourth wave (ACK=1, ACKnum=b+1), the client receives the shutdown request from the server, sends an acknowledgement packet, and enters the TIME_WAIT state, where it waits for 2 MSL (twice the Maximum Segment Lifetime). If no retransmitted FIN arrives within that time, the client assumes the ACK was received and the connection has been closed normally, so it enters the CLOSED state. The server, after receiving the acknowledgement packet, closes the connection and enters the CLOSED state as well.
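
The handshake and teardown are carried out by the operating system's TCP stack; application code only triggers them. As a minimal sketch (the host and port are placeholders), the Java Socket calls below map onto the steps above: connect() drives the three-way handshake, and close() starts the four-way wave.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class TcpDemo {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket();
        // connect() blocks until the three-way handshake completes
        // (SYN -> SYN/ACK -> ACK) and the connection is ESTABLISHED.
        socket.connect(new InetSocketAddress("example.com", 80), 3_000);
        System.out.println("connected: " + socket.isConnected());
        // close() sends FIN and starts the four-way wave; the local endpoint
        // passes through FIN_WAIT_1 / FIN_WAIT_2 and finally TIME_WAIT.
        socket.close();
    }
}
```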

3. Thread pool execution flowchart

Thread pool: a thread usage pattern. Too many threads incur scheduling overhead, which hurts cache locality and overall performance. A thread pool maintains multiple threads that wait for a supervisor to assign tasks to execute concurrently, avoiding the cost of creating and destroying threads for short-lived tasks. The thread-pool execution flow is a must-know for every developer.

Execution flow

  • When a task is submitted and the number of live core threads in the pool is less than corePoolSize, the pool creates a new core thread to process the submitted task.
  • If the core threads of the pool are all in use, that is, the number of threads equals corePoolSize, a newly submitted task is placed in the workQueue to wait for execution.
  • When the number of threads in the pool equals corePoolSize and the workQueue is full, the pool checks whether the number of threads has reached maximumPoolSize. If not, it creates a non-core thread to execute the submitted task.
  • If the current number of threads has reached maximumPoolSize and new tasks keep arriving, the rejection policy is applied directly.

The JDK provides four built-in rejection policy implementations (a usage sketch follows the list below)

  • AbortPolicy (throws an exception; the default)
  • DiscardPolicy (silently discards the task)
  • DiscardOldestPolicy (discards the oldest task in the queue and submits the current task to the pool)
  • CallerRunsPolicy (the task is executed by the thread that submitted it, i.e. the caller)
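
The flow above maps directly onto the ThreadPoolExecutor constructor. A minimal sketch, with arbitrary example values for the pool sizes and queue capacity:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                   // corePoolSize: created first for new tasks
                4,                                   // maximumPoolSize: upper bound including non-core threads
                60, TimeUnit.SECONDS,                // keep-alive time for idle non-core threads
                new ArrayBlockingQueue<>(10),        // workQueue: holds tasks once core threads are busy
                new ThreadPoolExecutor.AbortPolicy() // rejection policy when queue and pool are both full
        );

        for (int i = 0; i < 20; i++) {
            final int id = i;
            try {
                pool.execute(() -> {
                    try {
                        Thread.sleep(100); // simulate some work so the queue can actually fill up
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    System.out.println("task " + id + " ran on " + Thread.currentThread().getName());
                });
            } catch (RejectedExecutionException e) {
                // AbortPolicy throws once 4 threads are busy and the queue holds 10 tasks.
                System.out.println("task " + id + " rejected");
            }
        }
        pool.shutdown();
    }
}
```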

4. JVM memory structure

The JVM memory structure is the foundation that Java programmers must master.

Program counter (PC register)

The program counter is a small piece of memory that can be seen as an indicator of the line number of the bytecode being executed by the current thread. In the conceptual model of the virtual machine, the bytecode interpreter works by changing the value of this counter to select the next bytecode instruction to execute. Basic features such as branching, looping, exception handling, and thread recovery all depend on this counter.

Java virtual machine stack

  • Like the program counter, the Java virtual machine stack is thread-private and has the same life cycle as its thread
  • Each time a method is executed, a “stack frame” is created to store the local variable table (including parameters), operand stack, dynamic links, method exit, and other information. Each method invocation, from the call until execution completes, corresponds to one stack frame being pushed onto and then popped off the virtual machine stack.
  • The local variable table stores the basic data types, such as boolean, byte, char, and short

Native method stack

Similar to the virtual machine stack; the difference is that the virtual machine stack serves the Java methods executed by the virtual machine, while the native method stack serves Native methods.

The Java heap

  • The GC heap is the largest area of memory managed by the Java virtual machine; it is shared by all threads and is created when the JVM starts.
  • Its size is set with the -Xms (initial) and -Xmx (maximum) parameters: -Xms is the heap size the JVM requests at startup, and -Xmx is the maximum heap size the JVM may grow to.
  • Because today's collectors use generational collection algorithms, the heap is divided into a young generation and an old generation. The young generation consists of Eden plus the two survivor spaces S0 and S1, and its size can be specified with the -Xmn parameter.
  • All object instances and arrays are allocated on the heap.

Method area

  • Also known as the “permanent generation”, it stores class information loaded by the virtual machine, constants, and static variables, and is a memory area shared by all threads. The size of the method area can be limited with the -XX:PermSize and -XX:MaxPermSize parameters.
  • Runtime constant pool: part of the method area; its main content comes from the classes loaded by the JVM.
  • In addition to the Class version, field, method, interface, and so on, the Class file contains constant pool information, which is used to store various symbolic references generated by the compiler. This information is placed in the runtime constant pool in the method area after the Class is loaded.
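
As a small illustration of how these regions and parameters show up from application code, here is a sketch; the flag values in the comment are arbitrary examples, and the MemoryMXBean reports only aggregate heap and non-heap usage rather than every region individually.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class MemoryDemo {
    public static void main(String[] args) {
        // Typical startup flags (example values only):
        //   java -Xms512m -Xmx1024m -Xmn256m MemoryDemo
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap (-Xmx):   " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("current heap size: " + rt.totalMemory() / (1024 * 1024) + " MB");

        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.out.println("heap usage:     " + mem.getHeapMemoryUsage());
        System.out.println("non-heap usage: " + mem.getNonHeapMemoryUsage()); // method area / metaspace, etc.
    }
}
```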

5. Java memory model

  • Java threads communicate with each other through shared memory. This communication raises a series of problems such as visibility, atomicity, and ordering. The JMM (Java Memory Model) is the model built around multi-threaded communication and these related properties. The JMM defines the rules behind Java language constructs such as volatile and synchronized. If you are interested, see my other note: www.jianshu.com/p/3c1691aed…
  • The Java memory model states that all variables are stored in main memory and that each thread has its own working memory. A thread's working memory holds copies of the main-memory variables that the thread uses. All operations on variables must be performed in working memory rather than reading or writing main memory directly. Different threads cannot directly access variables in each other's working memory; passing variable values between threads requires synchronization between their working memory and main memory.
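
A minimal sketch of the visibility problem the JMM addresses: without volatile, the worker thread may keep reading the stale copy of the flag in its own working memory; declaring it volatile forces reads and writes to go through main memory.

```java
public class VisibilityDemo {
    // Without 'volatile', the worker thread may never observe the update
    // made by main(), because it can keep using its own working copy.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy loop; exits only once the write to 'running' becomes visible
            }
            System.out.println("worker observed running = false, exiting");
        });
        worker.start();

        Thread.sleep(1000);
        running = false; // volatile write: flushed to main memory, visible to the worker
        worker.join();
    }
}
```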

6. SpringMVC implementation flowchart

  • The user sends a request to the server, and it is captured by the front controller DispatcherServlet;
  • The DispatcherServlet parses the request URL and calls HandlerMapping to obtain all the objects configured for this Handler, returned as a HandlerExecutionChain object.
  • The DispatcherServlet selects an appropriate HandlerAdapter based on the obtained Handler.
  • Extract the model data from the Request, fill in the Handler parameters, and start executing Handler (Controller)
  • When the Handler finishes executing, it returns a ModelAndView object to the DispatcherServlet
  • Based on the returned ModelAndView, select an appropriate ViewResolver
  • The ViewResolver combines the Model with the View to render the View
  • Return the render result to the client.
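
A minimal handler sketch (the class, mapping, and view names are made up) showing where these objects appear: DispatcherServlet routes the request to this method through HandlerMapping, a HandlerAdapter binds the parameters, and the returned ModelAndView is handed to a ViewResolver.

```java
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.servlet.ModelAndView;

@Controller
public class HelloController {

    // DispatcherServlet -> HandlerMapping finds this method for GET /hello,
    // a HandlerAdapter binds the 'name' request parameter, then invokes it.
    @GetMapping("/hello")
    public ModelAndView hello(@RequestParam(defaultValue = "world") String name) {
        ModelAndView mav = new ModelAndView("hello"); // logical view name, resolved by a ViewResolver
        mav.addObject("name", name);                  // model data used when rendering the view
        return mav;
    }
}
```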

7. JDBC execution process

JDBC execution process:

  • Connecting to a data source
  • Pass query and update instructions to the database
  • Process the database response and return the result
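
A minimal sketch of that flow; the JDBC URL, credentials, and table are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcDemo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/demo"; // placeholder data source
        // 1. Connect to the data source.
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             // 2. Pass the query to the database.
             PreparedStatement ps = conn.prepareStatement("SELECT id, name FROM users WHERE id = ?")) {
            ps.setLong(1, 1L);
            // 3. Process the response and return the result.
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " -> " + rs.getString("name"));
                }
            }
        }
    }
}
```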

8. Spring Cloud Component Architecture

Spring Cloud is a cloud-native application development toolkit built on Spring Boot. It provides a simple development style for JVM-based cloud-native applications, covering configuration management, service discovery, circuit breaking, intelligent routing, micro proxy, control bus, distributed sessions, and cluster state management.

  • Eureka is responsible for service registration and discovery.
  • Hystrix is responsible for monitoring calls between services, acting as the circuit breaker and providing degradation (fallback).
  • Spring Cloud Config provides a unified configuration center service.
  • All external requests are forwarded through Zuul, which acts as the API gateway.
  • Finally, we use Sleuth + Zipkin to record all the request data for further analysis.
  • Spring Cloud Ribbon is a set of client load balancing tools based on the Netflix Ribbon. It is a client load balancer based on HTTP and TCP.
  • Feign is a declarative Web Service client whose purpose is to make Web Service invocation easier.
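
As a rough sketch of how a couple of these pieces appear in code (the service name and endpoint are made-up examples, and the exact starters and annotations depend on your Spring Cloud version), a Feign client resolved through the service registry might look like this:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Declarative HTTP client; "user-service" is resolved through the registry
// (e.g. Eureka) and calls are load-balanced across its instances.
@FeignClient(name = "user-service")
interface UserClient {
    @GetMapping("/users/{id}")
    String findUser(@PathVariable("id") Long id);
}

@SpringBootApplication
@EnableDiscoveryClient   // register this application with the service registry
@EnableFeignClients      // scan for @FeignClient interfaces
public class OrderServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}
```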

9. Dubbo calls

Dubbo is a distributed service framework dedicated to providing a high-performance and transparent remote service invocation solution. It is easy to confuse this with load balancing, which exposes a single public address and routes requests to different servers through polling, random selection, and so on. A minimal code sketch follows the list below.

  • Provider: indicates the service Provider of the exposed service.
  • Consumer: The service Consumer that invokes the remote service.
  • Registry: The Registry for service registration and discovery.
  • Monitor: the monitoring center that collects statistics on service invocation counts and invocation times.
  • Container: the Container in which the service runs.
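
A rough sketch of the Provider/Consumer split described above; the interface and class names are made up, the annotation API (@DubboService / @DubboReference) of recent Dubbo versions is assumed, and the registry and protocol configuration is omitted.

```java
// --- shared API module: GreetingService.java (depended on by provider and consumer) ---
public interface GreetingService {
    String sayHello(String name);
}

// --- provider module: GreetingServiceImpl.java ---
import org.apache.dubbo.config.annotation.DubboService;

@DubboService // publishes this implementation to the registry as a Dubbo service
public class GreetingServiceImpl implements GreetingService {
    @Override
    public String sayHello(String name) {
        return "Hello, " + name;
    }
}

// --- consumer module: GreetingCaller.java ---
import org.apache.dubbo.config.annotation.DubboReference;
import org.springframework.stereotype.Component;

@Component
public class GreetingCaller {
    @DubboReference // injects a remote proxy resolved through the registry
    private GreetingService greetingService;

    public String greet(String name) {
        return greetingService.sayHello(name); // the call goes over the wire to the provider
    }
}
```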

10. To be updated later…

Personal public account

Welcome to follow it, so we can study and discuss together.