
Introduction

In this series of articles, you will learn how to build and develop microservices with Quarkus, a framework optimized for Kubernetes. You will learn how to set up the Quarkus microservices environment and scaffolding, develop Quarkus endpoint services, handle system- and application-level configuration, analyze the Quarkus programming model, create Quarkus application Uber-JAR files, and integrate them into a Kubernetes environment.

  1. Learn how to build and develop cloud-native Quarkus microservices from scratch
  2. Analyze the Quarkus programming model and its integration with the Kubernetes environment

Target audience

Java software developers, system architects, microservices development enthusiasts, operations and deployment personnel, and so on.

Current status

In recent years, with the popularity of cloud-native technology, more and more users have started running microservice applications in containers. With the rapid development of microservices, the Spring stack has become the de facto standard Java framework: Spring Framework and Spring Boot for individual applications, and Spring Cloud as the service governance framework between microservices, with a complete ecosystem and an endless stream of components.

Java cloud-native pain points

  • The advent of lightweight container technology has made JVM services look all the more bloated

    • With the introduction of microservices architecture, service granularity is becoming smaller and smaller, and lightweight, fast-starting applications are better suited to containerized environments. For a typical Spring Boot RESTful service today, the jar package is about 30M; once the JDK and related dependencies are packaged into a Docker image, the image is about 140M.

    • A typical Go executable, by contrast, usually produces an image of no more than 50M. How to slim down bloated Java applications and make them easy to containerize has become a problem that Java cloud-native adoption must solve.

  • The advent of lightweight container technology has highlighted the excessive memory usage of JVM services

    • The growing memory footprint of the JVM can lead to frequent Full GC or even OOM.
  • Spring Boot microservice applications start more and more slowly (JVM startup speed)

    • Between JVM startup and the actual execution of the application, the virtual machine must be loaded, bytecode files must be loaded, and the JVM applies JIT (just-in-time) compilation to locally optimize the interpreted bytecode, generating natively executed code through the compiler for efficiency. On top of that comes the time spent on garbage collection inside the JVM.

Typical Java applications take seconds just to start, and it is normal for a large application to take several minutes to load. In the past, since we rarely restarted Java applications, the problem of long Java startup times was rarely exposed.

  • In cloud-native application scenarios, however, deployments become very frequent as the service granularity becomes very fine

    • We often restart applications for rolling upgrades or in serverless scenarios, so Java application startup time becomes an urgent problem for Java cloud-native adoption.

The introduction of Quarkus

  • Quarkus is positioned as the Kubernetes Native Java framework tailored for GraalVM and OpenJDK HotSpot.

  • Quarkus is an open source project from Red Hat. With the help of the open source community, it adapts frameworks widely used in the industry and combines them with the characteristics of cloud-native applications to provide an end-to-end Java cloud-native application solution.

  • Although it has not been open source for long, its ecosystem has already reached a usable state: it ships its own extension framework and already supports frameworks such as Netty, Undertow, Hibernate, and JWT, which is enough for enterprise application development, and users can also build their own extensions on top of the extension framework.
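
To make the idea of a Quarkus endpoint service concrete, here is a minimal sketch of a JAX-RS resource as Quarkus would serve it. The package and class names are illustrative only, and it assumes the RESTEasy extension is on the classpath (newer Quarkus releases use the jakarta.ws.rs packages instead of javax.ws.rs).

    package org.example.hello;  // hypothetical package name

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // Quarkus discovers JAX-RS resources at build time, so no web.xml or
    // explicit registration is required.
    @Path("/hello")
    public class HelloResource {

        @GET
        @Produces(MediaType.TEXT_PLAIN)
        public String hello() {
            return "hello from Quarkus";
        }
    }

Running the application in dev mode and requesting /hello returns the plain-text response; the same class works unchanged when the application is later packaged as an Uber-JAR or compiled to a native image.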

The march toward native

In long-running applications, hot code is detected by HotSpot's hot-spot detection mechanism and compiled into machine code that the physical hardware can execute directly, so Java's execution efficiency depends largely on the quality of the code produced by the just-in-time compiler.

The HotSpot virtual machine includes two just-in-time compilers: the client compiler (C1), which compiles quickly but applies fewer optimizations, and the server compiler (C2), which takes longer to compile but produces better optimized code. Usually they cooperate with the interpreter under a tiered compilation mechanism to form the HotSpot virtual machine's execution subsystem.
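
As a rough illustration of this tiered pipeline, the sketch below is a tiny program whose hot method is first interpreted and then promoted to compiled code; running it with the standard -XX:+PrintCompilation flag prints the compilation events, including the tier at which each method was compiled. The class name and loop counts are arbitrary.

    // Run with: java -XX:+PrintCompilation HotLoop
    public class HotLoop {

        // A deliberately hot method: once its invocation and loop counters cross
        // HotSpot's thresholds it is compiled by C1 and later recompiled by C2.
        static long sum(int n) {
            long s = 0;
            for (int i = 0; i < n; i++) {
                s += i;
            }
            return s;
        }

        public static void main(String[] args) {
            long total = 0;
            for (int i = 0; i < 100_000; i++) {
                total += sum(10_000);
            }
            System.out.println(total);
        }
    }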

New generation of just-in-time compilers (Graal VM)

Since JDK 10, HotSpot has included a new just-in-time compiler: the Graal compiler, which, as the name suggests, comes from the GraalVM mentioned earlier. The Graal compiler was introduced with the goal of eventually replacing the C2 compiler.

C2 compiler issues

C2 has a long history, dating back to Cliff Click's doctoral work. It is a compiler written in C++ that, while it still works well, has become so complex that even Cliff Click himself is reluctant to maintain it.

The Graal compiler is itself written in Java, and it deliberately uses the same high-level intermediate representation as C2, the "sea of nodes" form, which makes it easier for Graal to absorb C2's strengths.

Graal appeared twenty years later than the C2 compiler and enjoys the advantages of a latecomer: while producing compiled code of comparable quality, its development efficiency and extensibility are significantly better than C2's. This means that optimization techniques proven effective in C2 can easily be ported to the Graal compiler, whereas optimizations that work well in Graal can be very difficult to implement in C2.

Graal compiler

In just a few years, Graal’s compilation performance quickly caught up with C2 and even began to surpass C2 in some tests.

Graal can perform more sophisticated optimizations than C2, such as partial escape analysis, and it can apply strategies such as aggressive speculative optimization and support for custom speculative assumptions more easily than C2.

The Graal compiler is still young and has not been tested enough, so it still carries the "experimental" label and needs to be activated with a switch parameter. This is reminiscent of JDK 1.3, when the newly released HotSpot virtual machine also had to be activated with a switch, and likewise went on to replace the Classic virtual machine.
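
On JDK builds that ship the experimental Graal JIT (JDK 10 and later, before it was removed from mainline OpenJDK builds), the switch in question is the experimental JVMCI compiler flag. A sketch of activating it looks like this, with app.jar standing in for any application:

    java -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler -jar app.jar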

The future of the Graal compiler is promising. As the latest engine for executing Java bytecode, its continuous improvement will inject faster and stronger driving force into both HotSpot and GraalVM.

Summary analysis of GraalVM

GraalVM: to improve efficiency, the JVM uses JIT (just-in-time) compilation to locally optimize the bytecode it would otherwise interpret, with the compiler generating natively executed code to improve application execution efficiency.

GraalVM is a new-generation, multi-language just-in-time compiler for the JVM developed by Oracle Labs, with excellent performance and multi-language interoperability. Compared with the Java HotSpot VM, Graal delivers a 2- to 5-fold performance improvement through inlining, escape analysis, speculative optimization, and other techniques.
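
Below is a small sketch of the kind of code these optimizations target, assuming nothing beyond the standard JDK: the temporary Point object never escapes its method, so after inlining, an escape-analysis-capable JIT (C2 or Graal) can replace it with its scalar fields and avoid the heap allocation entirely. Class and method names are illustrative.

    public class EscapeDemo {

        static final class Point {
            final int x;
            final int y;
            Point(int x, int y) { this.x = x; this.y = y; }
        }

        // The Point instance is used only inside this method, making it a
        // candidate for escape analysis and scalar replacement.
        static long distanceSquared(int x, int y) {
            Point p = new Point(x, y);
            return (long) p.x * p.x + (long) p.y * p.y;
        }

        public static void main(String[] args) {
            long acc = 0;
            for (int i = 0; i < 1_000_000; i++) {
                acc += distanceSquared(i, i + 1);
            }
            System.out.println(acc);
        }
    }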

GraalVM provides static compilation capabilities that optimize only the closed world of code visible at compile time; it cannot handle code that uses reflection, dynamic class loading, or dynamic proxies.

  • For our everyday Java applications to run properly under this model, the frameworks and class libraries they use need to be adapted.

  • This is a considerable amount of work given the number of libraries Java code depends on, and although GraalVM has been around for more than a year, large-scale Java applications migrating to the platform are still rare.
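
To make the closed-world limitation concrete, the sketch below performs the kind of reflective lookup that a compile-time analysis cannot resolve: on a normal JVM it just works, but in a GraalVM native image the reflectively loaded class must be registered ahead of time (for example in a reflection configuration file supplied to the native-image tool), otherwise the lookup fails at run time. The default class name here is only an example.

    import java.lang.reflect.Method;

    public class ReflectionDemo {
        public static void main(String[] args) throws Exception {
            // The class name arrives as data, so a closed-world, compile-time
            // analysis cannot know which class will be needed here.
            String className = args.length > 0 ? args[0] : "java.util.ArrayList";
            Class<?> clazz = Class.forName(className);
            Object list = clazz.getDeclaredConstructor().newInstance();
            Method add = clazz.getMethod("add", Object.class);
            add.invoke(list, "hello");
            System.out.println(list); // prints [hello]
        }
    }

This is exactly why frameworks and class libraries that lean heavily on reflection or dynamic proxies have to be adapted before an application can be compiled statically.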