First interview

1. Self-introduction, projects and technical fields

Open questions

2. Monitoring in the project: What are the common monitoring indicators?

A: CPU, memory, I/O, etc. The nmon tool is handy here; it collects a wide range of metrics.

Database: MySQL (cache hit rate, indexes, single-SQL performance, database thread count, connection-pool connection count)

Middleware: 1. Message queues 2. Load balancers 3. Caches (including thread count, connection count and logs).

Network: throughput

Applications: JVM memory, logging, Full GC frequency

3. What are the technologies involved in microservices and the problems that need attention?

4. What do you know about the registry?

A: Consul, Eureka, and ZooKeeper

5. Do you know the reliability of Consul?

6. Have you looked into Consul's mechanism in detail? Have you compared it with other registries?

7. The project uses Spring heavily. Do you know how Spring works? Explain the principles of AOP and IoC.

(1). IoC (Inversion of Control) means the container controls the relationships between program objects, instead of the traditional approach where program code controls them directly. This transfer of control from application code to an external container is the "inversion". In Spring's case, the container controls the life cycle of objects and the relationships between them. IoC goes by another name, Dependency Injection: the dependencies between components are determined by the container at run time, i.e., the container dynamically injects certain dependencies into a component.

(2). In Spring's way of working, all classes are registered with the Spring container: you tell Spring what each class is and what it needs, and at run time Spring gives it what it needs while also handing it to other objects that need it. The creation and destruction of all classes is controlled by Spring, which means it is no longer the referencing object but Spring that controls an object's life cycle. Where an object used to control its collaborators directly, now all objects are controlled by Spring; hence "inversion of control".

(3). At run time, the container dynamically provides an object with the other objects it needs.

(4). Dependency injection is realized through reflection: when instantiating a class, the container uses reflection to call the class's setter methods, injecting the property values previously saved (e.g., in a HashMap) into the instance. In summary, in traditional object creation the caller creates the callee's instance, whereas with Spring the container creates the callee and then injects it into the caller; this is dependency injection, or inversion of control. The two common injection styles are constructor injection and setter injection. Advantages of IoC: it reduces coupling between components, lowers the cost of swapping business objects, and enables flexible object management.
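A toy sketch of setter injection via reflection, in the spirit of the HashMap-of-properties description above (MiniContainer and UserService are invented names for illustration, not Spring API):

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Hypothetical bean with a setter the "container" can discover.
class UserService {
    private String name;
    public void setName(String name) { this.name = name; }
    public String getName() { return name; }
}

public class MiniContainer {
    // Property values are kept in a map and injected through setters via reflection.
    public static Object createBean(Class<?> clazz, Map<String, Object> props) throws Exception {
        Object bean = clazz.getDeclaredConstructor().newInstance();
        for (Map.Entry<String, Object> e : props.entrySet()) {
            // Derive the setter name: "name" -> "setName"
            String setter = "set" + Character.toUpperCase(e.getKey().charAt(0))
                          + e.getKey().substring(1);
            Method m = clazz.getMethod(setter, e.getValue().getClass());
            m.invoke(bean, e.getValue());   // dependency injected at run time
        }
        return bean;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Object> props = new HashMap<>();
        props.put("name", "spring");
        UserService svc = (UserService) MiniContainer.createBean(UserService.class, props);
        System.out.println(svc.getName());
    }
}
```

The caller never calls `new UserService()` or `setName` itself; the container does, which is the inversion.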

AOP (Aspect Oriented Programming)

(1). AOP (aspect-oriented programming) builds on IoC and is a useful complement to OOP;

(2). AOP uses a technique called "crosscutting" to open up the interior of wrapped objects and encapsulate the common behavior that affects multiple classes into a reusable module called an "aspect". An aspect, simply put, encapsulates logic or responsibilities that are unrelated to the business itself but are invoked by many business modules, such as logging. This reduces duplicated code in the system, lowers coupling between modules, and improves future operability and maintainability.

(3). AOP expresses a horizontal relationship. If an "object" is a hollow cylinder encapsulating the object's properties and behavior, the aspect-oriented approach slices into that cylinder in cross-sections and selectively adds business logic; the cut surface is the so-called "aspect". Then, with a masterful hand, the cut surfaces are rejoined without leaving a trace, yet the work has been done.

(4). AOP implementations fall into two main categories: one uses dynamic proxies, intercepting messages and decorating them to replace the original object's behavior; the other is static weaving, which introduces special syntax to create "aspects" so the compiler can weave in the aspect code at compile time.

(5). JDK dynamic proxy: the proxied object must implement an interface; at run time a class implementing that interface is generated to proxy the target object. The two core classes are InvocationHandler and Proxy. CGLIB proxy: similar to JDK dynamic proxies, except the proxy object generated at run time is a subclass of the target class. CGLIB is an efficient code-generation package relying on ASM (an open-source Java bytecode manipulation library); asm.jar and cglib.jar must be on the classpath. Both AspectJ-style aspect injection and @AspectJ annotation-driven aspects are ultimately implemented underneath through dynamic proxies.
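A minimal sketch of the JDK dynamic proxy mechanism described above, with a hypothetical logging "aspect" (the Greeter interface and all names are invented):

```java
import java.lang.reflect.Proxy;

// The proxy must be created from an interface...
interface Greeter {
    String greet(String name);
}

// ...with a concrete target implementation behind it.
class GreeterImpl implements Greeter {
    public String greet(String name) { return "hello " + name; }
}

public class ProxyDemo {
    // Wraps any Greeter so every call passes through an InvocationHandler.
    public static Greeter logged(Greeter target) {
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                (proxy, method, args) -> {
                    System.out.println("before " + method.getName()); // "advice"
                    Object result = method.invoke(target, args);       // real call
                    System.out.println("after " + method.getName());
                    return result;
                });
    }

    public static void main(String[] args) {
        System.out.println(logged(new GreeterImpl()).greet("aop"));
    }
}
```

The business class knows nothing about logging; the crosscutting behavior lives entirely in the handler, which is the core idea Spring AOP builds on.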

(6). AOP usage scenarios:

Authentication (permission checks)

Caching

Context passing

Error handling

Lazy loading

Debugging

Logging, tracing, profiling and monitoring

Performance optimization

Persistence

Resource pooling

Synchronization

Transaction management

In addition, servlet Filters and Struts2 interceptors are embodiments of the AOP idea.

8. Besides auto-configuration, what other differences does Spring Boot have compared to traditional Spring?

Spring Boot provides a cleaner way to develop within the Spring ecosystem, offering many non-functional features such as an embedded server, security, metrics, health checks, and externalized configuration. The main points:

1. Spring Boot can create stand-alone Spring applications;

2. It embeds containers such as Tomcat, Jetty and Undertow, so applications can be run without being deployed to a server;

3. No more tedious XML configuration as in classic Spring;

4. Spring can be configured automatically. Spring Boot replaces XML configuration with Java configuration, replaces bean wiring with annotation injection (@Autowired), and condenses multiple XML and properties files into a single application.yml configuration file.

5. It provides ready-made features such as metrics, form data validation, and externalized configuration, as well as some third-party functionality;

6. It integrates common dependencies (development libraries such as spring-webmvc, jackson-json, validation-api and Tomcat) and provides POMs that simplify Maven configuration: introducing a core starter dependency pulls in the rest transitively.

9. What do you know about Spring Cloud?

Spring Cloud is an ordered collection of frameworks. It leverages Spring Boot's development convenience to subtly simplify the infrastructure of distributed systems, such as service discovery and registration, configuration center, message bus, load balancing, circuit breakers, and data monitoring, all of which can be started and deployed with one click in the Spring Boot style. Spring Cloud does not reinvent the wheel: it takes the mature, battle-tested service frameworks built by various companies and re-encapsulates them in the Spring Boot style, hiding the complex configuration and implementation principles, finally leaving developers a distributed-system toolkit that is easy to understand, deploy, and maintain.

10. Life cycle of Spring Beans

From creation to destruction, a Bean generated and managed by a BeanFactory (Beans in a Spring ApplicationContext are similar) goes through the following steps:

1. Instantiate the Bean, i.e. new;

2. Configure the instantiated Bean according to the Spring context, i.e. IoC injection;

3. If the Bean implements the BeanNameAware interface, its setBeanName(String) method is called, passing in the Bean's ID from the Spring configuration file;

4. If the Bean implements the BeanFactoryAware interface, its setBeanFactory(BeanFactory) method is called, passing in the Spring factory itself (for this, simply configure a normal Bean in the Spring configuration file);

5. If the Bean implements the ApplicationContextAware interface, the setApplicationContext(ApplicationContext) method is called, passing in the Spring context. This goes further than step 4, because ApplicationContext is a sub-interface of BeanFactory with more methods;

6. If a BeanPostProcessor is associated with the Bean, its postProcessBeforeInitialization(Object obj, String s) method is called. A BeanPostProcessor is often used to modify the Bean's content; since it is invoked around Bean initialization, it can also be applied to memory or caching techniques;

7. If the Bean has an init-method attribute configured in the Spring configuration file, the configured initialization method is called automatically;

8. If a BeanPostProcessor is associated with the Bean, its postProcessAfterInitialization(Object obj, String s) method is called;

Note: after the steps above, the Bean is ready to use. By default it is a singleton, so calls to getBean with the same ID normally return an instance at the same address. A non-singleton scope can of course be configured in the Spring configuration file; we won't go into that here.

9. When the Bean is no longer needed, it enters the destruction phase. If the Bean implements the DisposableBean interface, its destroy() method is called;

10. Finally, if the Bean has the destroy-method property configured in its Spring configuration, its configured destruction method is automatically called.

The above describes the life cycle of a Bean in the Spring application context; if using a bare BeanFactory instead, simply remove step 5.
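The callback order can be re-enacted in plain Java with stand-in interfaces (a toy sketch, not the real Spring API; in Spring the container performs these calls, never user code):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-ins for the Spring callback interfaces named above.
interface BeanNameAware { void setBeanName(String name); }
interface InitializingBean { void afterPropertiesSet(); }
interface DisposableBean { void destroy(); }

class TraceBean implements BeanNameAware, InitializingBean, DisposableBean {
    static List<String> trace = new ArrayList<>();
    TraceBean() { trace.add("instantiate"); }                                  // step 1: new
    public void setBeanName(String name) { trace.add("setBeanName:" + name); } // step 3: aware callback
    public void afterPropertiesSet() { trace.add("init"); }                    // initialization callback
    public void destroy() { trace.add("destroy"); }                            // step 9: destruction
}

public class LifecycleDemo {
    public static void main(String[] args) {
        TraceBean b = new TraceBean();  // the container instantiates the bean
        b.setBeanName("traceBean");     // then fires the aware callbacks
        b.afterPropertiesSet();         // then initialization
        b.destroy();                    // and destruction at container shutdown
        System.out.println(TraceBean.trace);
    }
}
```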

11. What is the difference between HashMap and Hashtable?

Difference: Hashtable is thread-safe but less efficient.

Hashtable does not support null keys or values; this is noted in the comment on Hashtable's put() method.

The default size of a Hashtable is 11, and the size is 2n+1 each time it is expanded.

The default initialization size of a HashMap is 16. After each expansion, the capacity is doubled

A Hashtable computes the position of an element by performing a division operation, which can be time-consuming

To make this computation efficient, HashMap fixes the table size to a power of two, so the modulo operation can be replaced by a bitwise AND; bit operations are much cheaper than division.
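The power-of-two trick above can be checked directly: for a capacity of 2^k, `h % capacity` equals `h & (capacity - 1)`:

```java
public class IndexDemo {
    public static void main(String[] args) {
        int capacity = 16;   // HashMap's default initial capacity
        int h = 12345;       // some hash value
        // Same bucket index, but the AND needs no division.
        System.out.println(h % capacity);        // 9
        System.out.println(h & (capacity - 1));  // 9
    }
}
```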

HashMap derives from the AbstractMap class, while Hashtable derives from the Dictionary class. However, both implement the Map, Cloneable, and Serializable interfaces.

12. Object's hashCode() and equals(): if two objects have the same hash code, are they necessarily equal?

No. The Object class has two methods, equals and hashCode, used to determine whether two objects are equal. If two objects are equal, they must have the same hash code;

Even if two objects have the same hash code, they are not necessarily equal

Overriding the equals() method requires overriding hashCode(), but overriding hashCode() does not require overriding equals().
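A minimal sketch of the contract (the Point class is invented for illustration): overriding equals() forces a matching hashCode(), otherwise equal objects could land in different HashMap buckets.

```java
import java.util.Objects;

class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    // Must agree with equals: equal points produce equal hash codes.
    @Override public int hashCode() { return Objects.hash(x, y); }
}

public class ContractDemo {
    public static void main(String[] args) {
        Point a = new Point(1, 2), b = new Point(1, 2);
        System.out.println(a.equals(b));                  // true
        System.out.println(a.hashCode() == b.hashCode()); // true: required by the contract
    }
}
```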

13. How does HashMap's thread-unsafety manifest, and what do you do about it?

Use ConcurrentHashMap for thread safety

HashMap itself is not safe under multithreaded access.

In HashMap, the size field is not volatile, i.e., its value is not guaranteed visible across threads: a thread usually copies the variable from main memory, operates on the copy, and later writes size back to main memory, so updates can be lost.

Thread-unsafety is one class of concurrency problem and is relatively advanced: the issue is not limited to the code level and often needs to be analyzed together with the JVM.
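A small sketch of the ConcurrentHashMap fix: merge() applies the increment atomically, so the lost-update problem described above cannot occur (the class and key names are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

public class SafeCountDemo {
    public static void main(String[] args) throws InterruptedException {
        // merge() is atomic, so concurrent increments are never lost;
        // the same loop over a plain HashMap could drop updates.
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) counts.merge("hits", 1, Integer::sum);
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counts.get("hits")); // 2000
    }
}
```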

14. An online service's CPU usage is very high. What do you do? What steps locate the problem?

Locate the offending stack trace, then troubleshoot the specific problem:

1. top: a Linux command. Shows real-time CPU usage, as well as usage over a recent period.

2. ps: a Linux command; a powerful process-status tool. Shows the current CPU usage of a process and of the threads inside it; the data is a sample of the current state.

3. jstack: a command shipped with Java. Dumps the current thread stacks of a process; its output locates what every thread is running, the code involved, whether threads are deadlocked, and so on.

4. pstack: a Linux command. Shows the current thread stack of a process.
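One fiddly step in this workflow: top -H and ps print thread ids in decimal, while jstack labels each thread with a hexadecimal nid; converting the hot thread's id is how you find it in the dump. A tiny sketch (the thread id is made up):

```java
public class NidDemo {
    public static void main(String[] args) {
        long tid = 28279;  // hypothetical hot thread id taken from `top -H`
        // Search the jstack output for this nid value.
        System.out.println("nid=0x" + Long.toHexString(tid)); // nid=0x6e77
    }
}
```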

15. Which thread pools does the JDK provide? Talk about thread pools.

JUC provides the factory class Executors to create thread pools. Four kinds can be created:

1. newFixedThreadPool creates a thread pool with a fixed number of worker threads. Each submitted task gets a worker thread created for it; once the worker count reaches the pool's maximum, further submitted tasks are placed in the pool's queue.

2. newCachedThreadPool creates a cacheable thread pool. Its characteristics are:

1) There is almost no limit on the number of worker threads it creates (the ceiling is Integer.MAX_VALUE), so threads can be added to the pool flexibly.

2) If no task is submitted for a while, i.e., a worker thread stays idle for the keep-alive time (default: 1 minute), the worker terminates automatically. If a new task arrives afterwards, the pool creates a new worker thread.

3. newSingleThreadExecutor creates a single-threaded Executor: one worker thread executes the tasks, and if that thread terminates abnormally another takes its place. Its most important property is that tasks are guaranteed to execute sequentially, with no more than one thread active at any time.

4. newScheduledThreadPool creates a fixed-size thread pool that supports delayed and periodic tasks, similar to a Timer. (I haven't fully understood this pool's internals yet.)
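A quick sketch of two of the factory methods above (nothing here is project-specific):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        // Fixed pool: bounded workers, extra tasks wait in the queue.
        ExecutorService fixed = Executors.newFixedThreadPool(2);
        Future<Integer> f = fixed.submit(() -> 21 * 2);
        System.out.println(f.get()); // 42
        fixed.shutdown();

        // Single-thread executor: tasks run one at a time, in order.
        ExecutorService single = Executors.newSingleThreadExecutor();
        single.submit(() -> System.out.println("runs alone"));
        single.shutdown();
    }
}
```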

16. What are the common methods of SQL optimization?

Reduce the use of functions in query conditions to avoid full table scans

Reduce unnecessary table joins

Some business logic for data operations can be implemented in the application layer

You can use with as

Avoid cursors because cursors are inefficient

Don’t make your SQL statements too complex

Don't execute queries inside a loop

Use exists instead of in

Don’t be too tangled with table associations

Use charindex or like [0-9] instead of a '%…%' pattern

The inner table can be filtered down before the LEFT JOIN association

Table joins can be split: query the core data first, then fetch the other data via the core data, which is much faster

Optimize by referring to the SQL execution order

Use aliases in table joins to improve efficiency

Use views to index and optimize views

In the form of a data warehouse, separate tables are set up to store data and periodically update data according to time stamps. The data associated with multiple tables is extracted and stored in a single table, which improves the query efficiency

Queries should avoid full table scans; first consider indexes on the columns used in WHERE and ORDER BY

Avoid null checks on fields in the WHERE clause, which cause the engine to abandon the index and perform a full table scan, as in:

select id from t where num is null   

Better: give num a default value of 0 so the column contains no nulls, then query:

select id from t where num=0    

Avoid using != or <> in WHERE clauses, otherwise the engine abandons the index for a full table scan

17. SQL index order and field order

18. How do you check whether a SQL statement uses an index? (What tools are there?)

Simply prefix the SELECT statement with EXPLAIN.

19. The difference between TCP and UDP? How does TCP make data transmission reliable?

UDP (User Datagram Protocol) is the counterpart protocol to TCP; both belong to the TCP/IP protocol family.

(1) To guarantee reliable delivery, the sender must keep each sent packet in a buffer;

(2) Start a timeout timer for each sent packet;

(3) If a reply message is received before the timer times out, the buffer occupied by the packet is released.

(4) Otherwise, the packet is retransmitted until the response is received or the retransmission times exceed the specified maximum times.

(5) On receiving a packet, the receiver first performs a CRC check; if it passes, the data is handed to the upper-layer protocol and a cumulative acknowledgment is sent back to the sender. If the receiver happens to have data for the sender, the acknowledgment can be piggybacked on that data packet.

20. Talk about the sorting algorithms you know.

Common internal sorting algorithms include insertion sort, Shell sort, selection sort, bubble sort, merge sort, quick sort, heap sort, radix sort, and so on.

21. Find the median of two sorted arrays.

Find the median by binary search.

The basic idea: suppose ar1[i] is the median after merging; then ar1[i] is greater than the first i-1 numbers in ar1[] and greater than the first j = n-i-1 numbers in ar2[]. By comparing ar1[i] with ar2[j] and ar2[j+1], the binary search continues on the left or right half of ar1[]. For two arrays ar1[] and ar2[], first binary-search within ar1[]; if the median is not found in ar1[], continue the search in ar2[].

Algorithm flow:

1) Take the middle element of array ar1[], say at index i.

2) Compute the corresponding index j in ar2[]: j = n-i-1.

3) If ar1[i] >= ar2[j] and ar1[i] <= ar2[j+1], then ar1[i] and ar2[j] are the two middle elements; return the average of ar2[j] and ar1[i].

4) If ar1[i] is greater than both ar2[j] and ar2[j+1], binary-search the left part of ar1[] (i.e., arr[left…i-1]).

5) If ar1[i] is less than both ar2[j] and ar2[j+1], binary-search the right part of ar1[] (i.e., arr[i+1…right]).

6) If the search runs off the left or right end of ar1[], binary-search in ar2[].

Time complexity: O(log n).
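A sketch of the binary-search median, using the standard partition formulation on the shorter array rather than the equal-length variant described above; it runs in O(log min(m, n)) and handles arrays of different lengths:

```java
public class MedianDemo {
    // Binary-search a partition of a[] such that the left halves of a[] and b[]
    // together hold the smaller half of all elements.
    static double median(int[] a, int[] b) {
        if (a.length > b.length) return median(b, a); // search the shorter array
        int m = a.length, n = b.length;
        int lo = 0, hi = m;
        while (lo <= hi) {
            int i = (lo + hi) / 2;           // elements taken from a's left part
            int j = (m + n + 1) / 2 - i;     // elements taken from b's left part
            int aLeft  = (i == 0) ? Integer.MIN_VALUE : a[i - 1];
            int aRight = (i == m) ? Integer.MAX_VALUE : a[i];
            int bLeft  = (j == 0) ? Integer.MIN_VALUE : b[j - 1];
            int bRight = (j == n) ? Integer.MAX_VALUE : b[j];
            if (aLeft <= bRight && bLeft <= aRight) {       // valid partition
                if (((m + n) & 1) == 1) return Math.max(aLeft, bLeft);
                return (Math.max(aLeft, bLeft) + Math.min(aRight, bRight)) / 2.0;
            } else if (aLeft > bRight) hi = i - 1;          // took too many from a
            else lo = i + 1;                                // took too few from a
        }
        throw new IllegalArgumentException("inputs must be sorted");
    }

    public static void main(String[] args) {
        System.out.println(median(new int[]{1, 3, 5}, new int[]{2, 4, 6})); // 3.5
    }
}
```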

Second interview

Do you have any questions you want to ask me?

1. Self-introduction, work experience and technology stack

2. What skills did you learn in the project?

3. What is the granularity of microservices?

4. How to guarantee the high availability of microservices?

Load balancing and reverse proxying, isolation, rate limiting, degradation, timeouts and retries, rollback, stress testing and contingency plans.

5. What common load-balancing approaches do you know, and how are they used?

1. HTTP redirection

When an HTTP agent (such as a browser) requests a URL from a Web server, the Web server can return a new URL via the Location header in the HTTP response. The HTTP agent then requests the new URL to complete the automatic redirect.

2. DNS load balancing

DNS provides domain name resolution: when accessing a site, the IP address behind the domain name is obtained from the site's DNS server, which maps the domain name to an IP address. The DNS server thus acts as a load-balancing scheduler, spreading requests across multiple servers with a strategy similar to HTTP redirection, though the implementation mechanism is completely different.

3. Reverse proxy load balancing

This is no doubt familiar, as almost all major Web servers are keen to support reverse proxy based load balancing. Its core job is to forward HTTP requests.

In contrast to the previous HTTP redirection and DNS resolution, the reverse proxy scheduler acts as an intermediary between the user and the real server:

1. Any HTTP request to the real server must go through the scheduler;

2. The scheduler must wait for the HTTP response from the real server and send it back to the user (the first two methods do not require scheduling feedback, the real server sends it directly to the user).

4. IP Load Balancing (LVS-NAT)

Because the reverse proxy server works at the HTTP layer, its own overhead severely limits its scalability and therefore its performance ceiling. Can load balancing be done below the HTTP level?

NAT server: It works at the transport layer and modifies the destination address of incoming IP packets to the actual server address

5. Direct Routing (LVS-DR)

NAT works at the transport layer (layer 4) of the network model, while direct routing works at the data link layer (layer 2), which is even lower. It rewrites the destination MAC address of the packet (leaving the destination IP unchanged) and forwards it to the real server. The difference is that the real server's response packet is sent directly to the client without passing back through the scheduler.

6. IP Tunnel (LVS-TUN)

Request forwarding based on an IP tunnel: the scheduler encapsulates each received IP packet in a new IP packet and forwards it to the real server, whose response packets can then reach the client directly. This is implemented with LVS-TUN. Unlike LVS-DR, with LVS-TUN the real servers and the scheduler can be on different WAN segments; the scheduler forwards requests through IP tunneling, so each real server must also have a valid public IP address.

In general, LVS-DR and LVS-TUN suit Web servers whose responses are much larger than their requests. Choosing between them depends on your network deployment: because LVS-TUN can place real servers in different regions as required and route requests by proximity, choose LVS-TUN if you have such requirements.

6. What benefits can gateways bring to back-end services?

Back-end servers can focus on processing business requests, saving a lot of connection management overhead

7. Spring Bean lifecycle

8. How do the init/destroy methods configured in XML end up calling specific methods?

9. The mechanism of reflection

As everyone knows, for a Java program to run, its classes must be loaded by the Java virtual machine; a class cannot run until it is loaded. In ordinary programs, all the classes needed are known at compile time and loaded accordingly.

Java's reflection mechanism means the classes to load are not fixed at compile time: the program loads, probes, and inspects classes while it runs, using classes unknown at compile time. This characteristic is reflection.

Via the setAccessible(boolean flag) method, reflection can obtain a class's private methods and fields and use them.
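A minimal sketch of setAccessible at work (the class and field names are invented for illustration):

```java
import java.lang.reflect.Field;

class Secret {
    private String token = "hidden"; // private field, no getter
}

public class ReflectDemo {
    public static void main(String[] args) throws Exception {
        Secret s = new Secret();
        Field f = Secret.class.getDeclaredField("token");
        f.setAccessible(true);          // bypass the access check
        System.out.println(f.get(s));   // read the private field: hidden
        f.set(s, "exposed");            // even write to it
        System.out.println(f.get(s));   // exposed
    }
}
```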

10. Methods of the Object class

1. The constructor;

2. hashCode() and equals(), used to determine whether objects are the same;

3. wait(), wait(long), wait(long, int), notify() and notifyAll();

4. toString() and getClass();

5. clone();

6. finalize(), used in garbage collection.

11. Where are hashCode and equals commonly used?

12. How do you compare whether two objects are the same?

Equals is used to compare whether the contents of two objects are equal, and == is used to compare whether the addresses of two objects are equal

13. How does HashMap's put method check for duplicates?

It compares the key's hashCode first and then equals, so override hashCode() and equals() if you want duplicate detection to work for your own elements.

14. Why override Object's toString method? Where is it commonly used?

It is commonly overridden in domain model classes.

If user is a User object, user.toString() by default gives an unhelpful result, because the User object may have multiple attributes such as age and name, and the default toString does not know which attributes should be rendered into the string.

15. What is the difference between Set and List?

Set: Objects in a collection are not ordered in a particular way, and there are no duplicate objects. Some of its implementation classes can sort objects in collections in specific ways.

List: Objects in a collection are sorted by index position, can have duplicate objects, allowing objects to be retrieved by their index position in the collection.

16. The difference between ArrayList and LinkedList?

ArrayList is a data structure based on dynamic arrays, and LinkedList is a data structure based on linked lists

ArrayList extends AbstractList.

LinkedList extends AbstractSequentialList.

An ArrayList is an array holding objects in contiguous positions, so its biggest drawback is that insertion and deletion are expensive.

A LinkedList stores each object in its own node along with a reference to the next node; the drawback is that reaching a given index requires traversing from the start.

17. Which takes up more space if you access the same data, ArrayList or LinkedList?

For random-access get and set, ArrayList beats LinkedList, because LinkedList has to walk its pointers.

For add and remove operations, LinkedList has the advantage, because ArrayList must move data: deleting or inserting an element shifts the following elements and re-indexes them, which takes time. LinkedList, being a linked list, only needs to change the references before and after the affected node.

18. Is a Set stored in order?

No, it is unordered.

A Set is essentially a thin wrapper over a Map, which implements most of the logic.

19. What are the common implementations of Set?

HashSet

LinkedHashSet

TreeSet

20. What requirements does TreeSet place on the data it stores?

TreeSet sorts its elements and also guarantees uniqueness, so the elements must be sortable: they either implement Comparable or a Comparator is supplied.

21. What is the underlying implementation of HashSet?

A HashSet is backed by a HashMap; the elements are stored as the map's keys.

22. Have you seen the underlying source code of TreeSet?

The underlying implementation of TreeSet is TreeMap

public TreeSet(Comparator<? super E> comparator) {

    this(new TreeMap<>(comparator));

}

23. Is HashSet thread-safe? Why not?

To put it bluntly, a HashSet is a HashMap with limited functionality, so learn how to implement a HashMap

24. What are thread-safe Maps in Java?

ConcurrentHashMap

25. How is ConcurrentHashMap thread-safe?

Most operations of ConcurrentHashMap are the same as HashMap's, such as initialization, resizing, and converting linked lists to red-black trees. However, ConcurrentHashMap makes heavy use of U.compareAndSwapXXX (CAS operations on the Unsafe instance U).

These methods use a CAS algorithm to modify values without locks, which greatly reduces the performance cost of locking. The basic idea of the algorithm is to keep comparing a variable's current value in memory with the value you expect: if they are equal, your specified new value is accepted; otherwise the operation is rejected, because the value you read is no longer the latest and your change could overwrite another thread's result. The idea is similar to optimistic locking (and to SVN's commit model).

ConcurrentHashMap also defines three atomic operations for manipulating nodes at a given position. These atomic operations are used extensively in methods such as get and put on ConcurrentHashMap,

It is these atomic operations that make ConcurrentHashMap thread-safe.

Before ConcurrentHashMap, the JDK used Hashtable for thread safety, but Hashtable locked the entire table, making it inefficient.

ConcurrentHashMap (in the JDK 7 implementation) divides the data into multiple Segments (16 by default); each Segment contains an array of HashEntry.

For a key, three hash operations are needed to locate an element:

1. Hash the key once to get h1: h1 = hash1(key).

2. Hash the high bits of h1 a second time to get h2: h2 = hash2(high bits of h1).

3. Hash h1 a third time to get h3, which determines which HashEntry the element is placed in: h3 = hash3(h1).

Each Segment has a lock. When writing data, only one Segment needs to be locked, while data in other segments is accessible.

26. Have you heard about Hashtable?

Hashtable does not support null keys or values; this is noted in the comment on its put() method.

Hashtable is thread-safe: every method is synchronized, which makes it inefficient.

The default size of a Hashtable is 11, and the size is 2n+1 each time it is expanded.

A Hashtable computes the position of an element by performing a division operation, which can be time-consuming.

27. How to ensure thread safety?

28. synchronized

Synchronized is a keyword in Java, that is, a built-in feature of the Java language

If a code block is guarded by synchronized, once one thread acquires the lock and executes the block, other threads must wait until the lock holder releases it. The holder releases the lock in only two cases:

1) The thread that acquires the lock completes the code block and then releases the lock;

2) If the thread's execution throws an exception, the JVM makes it release the lock automatically.

So if the lock-holding thread is blocked waiting on IO or for some other reason (such as a sleep call) but does not release the lock, other threads can only wait idly; imagine how inefficient that is.

Therefore a mechanism is needed so waiting threads do not have to wait indefinitely (e.g., they wait only a certain time, or can respond to interruption); this can be done with a Lock.

Another example: When multiple threads read and write files, the read and write operations conflict, and the write and write operations conflict, but the read and read operations do not conflict.

But using the synchronized keyword causes a problem:

If multiple threads are only reading, while one thread is reading, the other threads can only wait and cannot read.

So a mechanism is needed that lets multiple readers proceed without conflict; this too can be done with a Lock (a read-write lock).

In addition, a Lock lets you know whether the thread acquired the lock successfully; synchronized cannot do this.

29. Atomicity of volatile? Why is i++ not atomic? From the principles of computer architecture, why can't atomicity be guaranteed here?
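The core of the answer: i++ compiles to three separate steps (read, add, write back), and volatile only makes the read and the write individually visible; it cannot fuse them into one atomic action. A deterministic simulation of the lost update, with the two "threads" interleaved by hand:

```java
public class LostUpdateDemo {
    public static void main(String[] args) {
        int shared = 0;
        // Interleaving of two threads each doing shared++:
        int t1 = shared;   // thread 1 reads 0
        int t2 = shared;   // thread 2 also reads 0, before thread 1 writes back
        shared = t1 + 1;   // thread 1 writes 1
        shared = t2 + 1;   // thread 2 overwrites with 1 -- one increment is lost
        System.out.println(shared); // 1, not 2
    }
}
```

A real fix uses AtomicInteger.incrementAndGet() or a synchronized block, both of which make the read-modify-write a single indivisible step.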

30. The happens-before principle

31. CAS operations

The java.util.concurrent package uses CAS to implement a kind of optimistic locking that differs from synchronized.

CAS is the compare-and-swap algorithm.

CAS has three operands: the memory value V, the old expected value A, and the new value B. The memory value V is changed to B if and only if the expected value A equals the memory value V; otherwise nothing is done.

The JDK provides the AtomicReference class to ensure atomicity between reference objects, allowing multiple variables to be placed in one object for CAS operations.
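A short demonstration of the CAS semantics just described, using AtomicInteger (which is built on CAS):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(10);
        // compareAndSet(expected A, new B): succeeds only while V == A.
        System.out.println(v.compareAndSet(10, 11)); // true: V was 10, swapped to 11
        System.out.println(v.compareAndSet(10, 12)); // false: V is now 11, rejected
        System.out.println(v.get());                 // 11
    }
}
```

The failed second call is exactly the "reject the operation" branch: the caller must re-read and retry, which is the optimistic-locking loop.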

32. What is the difference between Lock and synchronized?

1) Lock is an interface; synchronized is a Java keyword, a built-in language feature.

2) synchronized releases the lock automatically when an exception occurs, so it does not cause deadlock for that reason; with Lock, you must call unlock() yourself (typically in a finally block), otherwise the lock may never be released and deadlock can result.

3) Lock can make the thread waiting for the Lock respond to the interrupt, but synchronized cannot. When synchronized is used, the waiting thread will wait forever and cannot respond to the interrupt;

4) Using a Lock can tell whether a Lock has been acquired successfully, whereas synchronized cannot.

5) Lock can improve the efficiency of multiple threads to perform read operations.

In terms of performance, if the resource competition is not fierce, the performance of the two is similar, but when the resource competition is very fierce (that is, there are a large number of threads competing at the same time), the performance of Lock is much better than synchronized. So, in the specific use of appropriate circumstances to choose.
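Point 4 above (knowing whether the lock was acquired) comes from Lock's tryLock() method, which synchronized has no equivalent of. A minimal sketch:

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    static final ReentrantLock lock = new ReentrantLock();

    // Returns whether the current thread managed to acquire the lock.
    static boolean tryWork() {
        if (lock.tryLock()) {          // non-blocking attempt to acquire
            try {
                return true;           // got the lock; do the protected work here
            } finally {
                lock.unlock();         // always release in finally
            }
        }
        return false;                  // lock held elsewhere; fall back to other work
    }

    public static void main(String[] args) {
        System.out.println(tryWork()); // true when no other thread holds the lock
    }
}
```

There is also a timed variant, tryLock(long, TimeUnit), and lockInterruptibly() for the interrupt-responsive waiting mentioned in point 3.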

 

| Category | synchronized | Lock |
| --- | --- | --- |
| Level | A Java keyword, at the JVM level | A class |
| Lock release | If an exception occurs, the JVM makes the thread release the lock automatically | Must be released manually in finally; otherwise thread deadlock can occur |
| Lock acquisition | Suppose thread A holds the lock and thread B waits; if thread A blocks, thread B keeps waiting | A Lock can be acquired in several ways depending on the situation; a thread can attempt to acquire it without waiting forever |
| Lock state | Cannot be determined | Can be determined |
| Lock type | Reentrant, non-interruptible, non-fair | Reentrant, state can be judged, fair or non-fair (both available) |
| Performance | Suited to a small amount of contention | Suited to a large amount of contention |

33. Fair lock and unfair lock

The wait queues for both fair and unfair locks are based on a doubly linked list maintained inside the lock; each node's value is a thread requesting the lock. A fair lock always takes the thread at the head of the queue.

With an unfair lock, while threads are waiting in the queue, a newly arriving thread that attempts to acquire the lock is very likely to acquire it directly.

It is clear from the ReentrantLock source that there are two synchronizer implementations, FairSync and NonfairSync. A fair lock executes threads strictly in the order they arrived and does not allow other threads to jump the queue; a non-fair lock allows queue-jumping.

By default, ReentrantLock synchronizes through a non-fair lock (as does the synchronized keyword), because performance is better. After a thread enters the RUNNABLE state, it may still take a while before it actually executes. Furthermore, after a lock is released, the other threads have to acquire it again: the holder releases the lock, the waiting threads recover from suspension to the RUNNABLE state, and then they request the lock, acquire it, and execute. If a newly arriving thread can request the lock directly, it often avoids the cost of resuming a suspended thread to RUNNABLE, so performance is better.
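The fairness choice is made in ReentrantLock's constructor, and can be inspected with isFair(). A minimal sketch:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();     // default: NonfairSync, allows queue-jumping
        ReentrantLock fair = new ReentrantLock(true);   // FairSync: strict FIFO hand-off

        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```

Use the fair variant only when starvation is a real concern; the FIFO hand-off costs throughput.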

34. Java read/write lock

35. What are the main problems solved by read-write lock design?

Multithreading:

Read operations can be shared while write operations are exclusive: multiple threads may read at the same time, but only one thread may write, and nothing can be read while a write is in progress.

This solves: read-read can proceed concurrently; read-write cannot proceed concurrently; write-write cannot proceed concurrently.
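These rules map directly onto ReentrantReadWriteLock. A hypothetical thread-safe cache as a sketch (the class and map contents are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    public String get(String key) {
        rw.readLock().lock();          // shared: any number of readers may hold it
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rw.writeLock().lock();         // exclusive: blocks all readers and writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        RwCache c = new RwCache();
        c.put("a", "1");
        System.out.println(c.get("a")); // 1
    }
}
```

Compared with guarding the map with a single Lock, concurrent readers no longer block each other, which is the efficiency gain point 5 of question 32 refers to.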

36. In addition to writing Java code, your project also has front-end code. Do you know what front-end frameworks there are?

Vue, Layer, React, Element UI

37. MySQL paging SQL?

LIMIT [offset,] rows

Offset specifies the offset of the first row to return, and rows specifies the maximum number of rows to return
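The LIMIT semantics can be mimicked in plain Java over an in-memory list (illustrative only; real paging should stay in SQL so the database does not return the full result set):

```java
import java.util.List;

public class LimitDemo {
    // Mimics MySQL's `LIMIT offset, rows`: skip `offset` rows, return at most `rows`.
    static <T> List<T> limit(List<T> all, int offset, int rows) {
        if (offset >= all.size()) return List.of();              // past the end: empty page
        return all.subList(offset, Math.min(offset + rows, all.size()));
    }

    public static void main(String[] args) {
        List<Integer> rows = List.of(1, 2, 3, 4, 5, 6, 7);
        System.out.println(limit(rows, 2, 3)); // [3, 4, 5], like LIMIT 2, 3
    }
}
```

So `LIMIT 10, 20` skips the first 10 rows and returns at most the next 20 (rows 11 through 30).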

38. MySQL transaction features and isolation levels

Fundamentals of transactions (ACID)

1. Atomicity: After a transaction starts, its operations are either all completed or none of them are; it cannot stop partway. If an error occurs during execution, the database is rolled back to the state before the transaction began, as if none of the operations had happened. In other words, a transaction is an indivisible whole, like the atom in chemistry, the basic unit of matter.

2. Consistency: The database's integrity constraints are not broken before or after the transaction. For example, if A transfers money to B, it is impossible for A to be debited while B never receives the money.

3. Isolation: Only one transaction at a time is allowed to operate on the same data, and different transactions do not interfere with each other. For example, if A is withdrawing money from a bank card, B cannot transfer money to that card until the withdrawal completes.

4. Durability: Once a transaction completes, all the updates it made to the database are persisted and cannot be rolled back.

Second, concurrency problems of transactions

1. Dirty read: Transaction A reads data updated by transaction B, and then transaction B rolls back.

2. Non-repeatable read: Transaction A reads the same data several times; between those reads, transaction B updates and commits the data, so transaction A gets inconsistent results across its reads.

3. Phantom read: System administrator A changes the grades of all students in the database from concrete scores to letter grades (A-E); meanwhile, system administrator B inserts a new record that still holds a concrete score. When administrator A checks afterwards, there is a record that appears not to have been changed, as if it were an illusion; this is called a phantom read.

MySQL transaction isolation level

| Transaction isolation level | Dirty read | Non-repeatable read | Phantom read |
| --- | --- | --- | --- |
| Read uncommitted | Possible | Possible | Possible |
| Read committed | Not possible | Possible | Possible |
| Repeatable read | Not possible | Not possible | Possible |
| Serializable | Not possible | Not possible | Not possible |

39. In what scenario does a non-repeatable read occur?

40. Usage scenario of SQL HAVING

If you need the result of an aggregate (group) function as a filter condition, you cannot use the WHERE clause; you must use the HAVING clause, for example: SELECT dept, AVG(salary) FROM emp GROUP BY dept HAVING AVG(salary) > 5000 (table and column names illustrative).

41. What is the whole process of an HTTP request, from the address typed into the browser to the back end? Can you describe it?

The browser resolves the domain name via DNS -> establishes a TCP connection with the server -> sends the HTTP request -> the server responds to the HTTP request -> the browser gets the HTML code -> the browser parses the HTML code and requests the resources it references (JS, CSS, images, etc.) -> the browser renders the page to the user.

42. Default HTTP port and default HTTPS port

The default HTTP port is 80; common HTTP proxy server ports are 8080, 3128, 8081, and 9080.

The default HTTPS port is 443 (TCP 443 / UDP 443).

43. DNS: do you know what it is?

DNS is the Domain Name System. On the Internet, domain names correspond to IP addresses; domain names are easy for people to remember, but machines only know each other by IP address. The translation between the two is called domain name resolution, and it is performed by dedicated domain name servers.

44. What IDE do you develop with? Can you name some common IDEA shortcuts?

45. What do you use for code versioning?

Git. Branches are combined with git merge or git rebase.

Do you work much overtime at your company?

 

Note: Some questions need to be added.