
Words: 2000

Reading time: 4 minutes

Performance tuning is a big topic. The book Java Program Performance Tuning identifies five levels of performance tuning: design tuning, code tuning, JVM tuning, database tuning, and operating system tuning. Each level contains many methodologies and best practices. This article does not attempt a broad overview of all of them; it simply presents a few common Java code optimization techniques that readers can put into practice in their own code right away.

The singleton pattern is used when the creation of instances must be limited, or when a single shared instance should always be reused to save system overhead, for resource-intensive operations such as I/O processing, database connections, and parsing and loading configuration files.

public class Singleton {

    private volatile static Singleton singleton;

    private Singleton() {}

    public static Singleton getSingleton() {
        if (singleton == null) {
            synchronized (Singleton.class) {
                if (singleton == null) {
                    singleton = new Singleton();
                }
            }
        }
        return singleton;
    }
}

There are many ways to write a singleton, and this public account has already published quite a few singleton-related articles:

Seven ways to write a singleton pattern

Design pattern ii — singleton pattern

Design pattern 3 — those singletons in the JDK

How do you implement a thread-safe singleton without using synchronized and Lock?

How do you implement a thread-safe singleton without using synchronized and Lock? (2)

Deep analysis of the love and hate between singleton and serialization ~

There are three benefits to using thread pools properly.

First: reduce resource consumption. Reduce the cost of thread creation and destruction by reusing created threads.

Second: improve response speed. When a task arrives, it can be executed immediately without waiting for the thread to be created.

Third: improve thread manageability. Threads are scarce resources. If they are created without limit, they will not only consume system resources, but also reduce system stability. Thread pools can be used for unified allocation, tuning, and monitoring.

Java 5 introduced a set of new APIs for starting, scheduling, and managing threads. The Executor framework in the java.util.concurrent package uses a thread pool mechanism to control the starting, execution, and shutdown of threads, which simplifies concurrent programming.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import com.google.common.util.concurrent.ThreadFactoryBuilder;

public class MultiThreadTest {

    public static void main(String[] args) {
        // ThreadFactoryBuilder comes from Guava; it gives the pool threads readable names.
        ThreadFactory threadFactory = new ThreadFactoryBuilder().setNameFormat("thread-%d").build();
        ExecutorService executor = new ThreadPoolExecutor(2, 5, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(), threadFactory);
        executor.execute(new Runnable() {
            @Override
            public void run() {
                System.out.println("hello world!");
            }
        });
        System.out.println(" ===> main Thread!");
        executor.shutdown(); // allow the JVM to exit once the submitted task has finished
    }
}

Suppose a task takes some time to execute. To avoid unnecessary waiting, you first obtain a “bill of lading”, that is, a Future, and continue with other work; when the “goods” arrive, meaning the task has finished executing, you use the “bill of lading” to pick them up, that is, you obtain the result from the Future object.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

public class RealData implements Callable<String> {

    protected String data;

    public RealData(String data) {
        this.data = data;
    }

    @Override
    public String call() throws Exception {
        // Simulate a time-consuming computation.
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return data;
    }
}

public class Application {

    public static void main(String[] args) throws Exception {
        FutureTask<String> futureTask = new FutureTask<String>(new RealData("name"));
        ExecutorService executor = Executors.newFixedThreadPool(1);
        // Submit the request; the "bill of lading" (futureTask) is returned immediately.
        executor.submit(futureTask);
        // Continue with other work while the real data is being prepared.
        Thread.sleep(2000);
        // Pick up the "goods": get() returns the result, blocking if it is not ready yet.
        System.out.println("data = " + futureTask.get());
        executor.shutdown();
    }
}

Since version 1.4 the JDK has offered a new I/O library, NIO for short. It introduces not only efficient buffers and channels, but also a Selector-based non-blocking I/O mechanism in which many asynchronous I/O operations are multiplexed onto one or a few threads. Using NIO instead of blocking I/O can improve a program's concurrent throughput and reduce system overhead.

If a separate thread is opened for each request, then whenever client data arrives intermittently rather than continuously, the corresponding thread has to wait on I/O and suffer context switches. With the Selector mechanism introduced by NIO, this situation is avoided and the concurrency efficiency of the program improves.

import java.io.FileInputStream;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class NioTest {

    public static void main(String[] args) throws Exception {
        FileInputStream fin = new FileInputStream("c:\\test.txt");
        FileChannel fc = fin.getChannel();
        ByteBuffer buffer = ByteBuffer.allocate(1024);
        // Read data from the channel into the buffer.
        fc.read(buffer);
        // Flip the buffer so it can be read from the beginning.
        buffer.flip();
        while (buffer.remaining() > 0) {
            byte b = buffer.get();
            System.out.print((char) b);
        }
        fin.close();
    }
}
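
The snippet above only shows channels and buffers. The Selector mechanism described earlier is what lets one thread serve many connections; below is a minimal sketch of that idea, assuming a toy echo server on port 8080 (the port and the echo behavior are arbitrary choices for illustration), not a production-ready server.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorSketch {

    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);                      // non-blocking mode is required for selectors
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            selector.select();                                // block until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {                     // new connection: register it for reads
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {                // data arrived: echo it back
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int read = client.read(buffer);
                    if (read == -1) {
                        client.close();
                    } else {
                        buffer.flip();
                        client.write(buffer);
                    }
                }
            }
        }
    }
}

The single thread blocks only in select() and then handles whichever channels are actually ready, instead of one thread blocking per connection.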

In concurrent scenarios, locks are often used in our code. Where there are locks there is lock contention, and lock contention consumes a lot of resources. So how do we optimize the locks in our Java code? It can mainly be considered from the following aspects (a short code sketch follows the list):

  • Reduce lock holding time

    • You can use synchronized code blocks instead of synchronized methods. This can reduce the time the lock is held.

  • Reduce lock granularity

    • When using maps in concurrent scenarios, use ConcurrentHashMap instead of Hashtable or a plain HashMap: Hashtable locks the whole table for every operation, while ConcurrentHashMap locks at a much finer granularity.

  • Lock separation

    • An ordinary exclusive lock (such as synchronized) makes reads block reads as well as writes. Read and write operations can be separated, for example with a read-write lock, so that reads no longer block one another.

  • Lock coarsening

    • In some cases we want to consolidate multiple lock requests into one request to reduce the performance cost of a large number of lock requests, synchronizations, and releases in a short period of time.

  • Lock elimination

    • Lock elimination is an optimization in which the JVM, during JIT compilation, scans the running context and removes locks on objects that cannot possibly be contended. Lock elimination saves the cost of meaningless lock requests.
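
As a quick illustration of the first and third points above, here is a minimal sketch (the class and method names are made up for this example) that shrinks a synchronized method down to a synchronized block and separates reads from writes with a ReentrantReadWriteLock.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockOptimizationSketch {

    // 1. Reduce lock holding time: synchronize only the statements that touch shared state
    //    instead of declaring the whole method synchronized.
    static class Counter {
        private int count;

        public void handle(String request) {
            String parsed = parse(request);     // slow work, needs no lock
            synchronized (this) {
                count++;                        // lock held only for the shared update
            }
            log(parsed);                        // again outside the lock
        }

        private String parse(String request) { return request.trim(); }
        private void log(String s) { System.out.println("handled: " + s); }
    }

    // 2. Lock separation: readers share a read lock, writers take the exclusive write lock,
    //    so reads no longer block each other.
    static class ReadMostlyMap {
        private final Map<String, String> data = new HashMap<>();
        private final ReadWriteLock lock = new ReentrantReadWriteLock();

        public String get(String key) {
            lock.readLock().lock();
            try {
                return data.get(key);
            } finally {
                lock.readLock().unlock();
            }
        }

        public void put(String key, String value) {
            lock.writeLock().lock();
            try {
                data.put(key, value);
            } finally {
                lock.writeLock().unlock();
            }
        }
    }
}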

There will be an article about lock optimization later.

Compressing data before transmission reduces the number of bytes sent over the network and speeds up the transfer; the receiver then decompresses the data to restore it. Compressed data also saves space on the storage medium (disk or memory) and saves network bandwidth, which reduces cost. Of course, compression is not free of overhead: it requires a lot of CPU computation, and both the computational complexity and the compression ratio vary greatly between algorithms, so in general you should choose the compression algorithm to fit the business scenario.
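
As a concrete example, the JDK's java.util.zip package can gzip a payload before it is sent and restore it on the receiving side. The sketch below is only illustrative; the sample data and buffer size are arbitrary.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipSketch {

    // Compress a payload before sending it over the network or writing it to disk.
    public static byte[] compress(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(bos)) {
            gzip.write(data);
        }
        return bos.toByteArray();
    }

    // Decompress the payload on the receiving side to restore the original data.
    public static byte[] decompress(byte[] compressed) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            byte[] buf = new byte[1024];
            int len;
            while ((len = gzip.read(buf)) > 0) {
                bos.write(buf, 0, len);
            }
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Repetitive data compresses well.
        String text = String.join(",", Collections.nCopies(200, "hello world"));
        byte[] original = text.getBytes(StandardCharsets.UTF_8);
        byte[] compressed = compress(original);
        byte[] restored = decompress(compressed);
        System.out.println("original:   " + original.length + " bytes");
        System.out.println("compressed: " + compressed.length + " bytes");
        System.out.println("round trip ok: " + new String(restored, StandardCharsets.UTF_8).equals(text));
    }
}

For a payload this repetitive the compressed form is dramatically smaller; for very small or already-compressed data, gzip can actually add overhead, which is exactly why the algorithm should be chosen to match the business scenario.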

For the same user request, repeatedly querying the database and repeating the same computation every time wastes a lot of time and resources. Caching computed results in local memory, or in a distributed cache, saves precious CPU cycles and avoids repeated database queries or disk I/O, turning physical disk-head movement into electronic access in memory. This improves response speed, lets threads be released sooner, and increases the application's capacity.
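
A minimal sketch of local caching follows. It assumes a hypothetical loadUserFromDatabase method standing in for the expensive query; ConcurrentHashMap.computeIfAbsent computes each key at most once even under concurrency, so repeated requests for the same user skip the database entirely.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UserCacheSketch {

    // Cache of computed results keyed by user id; lives in local memory.
    private final Map<Long, String> userCache = new ConcurrentHashMap<>();

    public String getUser(long userId) {
        // computeIfAbsent runs the loader at most once per key, even under concurrency,
        // so repeated requests for the same user never touch the database again.
        return userCache.computeIfAbsent(userId, this::loadUserFromDatabase);
    }

    // Hypothetical stand-in for the real, expensive database query.
    private String loadUserFromDatabase(long userId) {
        System.out.println("querying database for user " + userId);
        return "user-" + userId;
    }

    public static void main(String[] args) {
        UserCacheSketch cache = new UserCacheSketch();
        System.out.println(cache.getUser(1L)); // first call hits the "database"
        System.out.println(cache.getUser(1L)); // second call is served from the cache
    }
}

In a real system the cache would also need an expiry or eviction policy, or be backed by a distributed cache, so that stale entries do not accumulate.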

