This post was written by Shaw on the ScalaCool team blog.

Play! is an efficient Java and Scala Web application framework that can be used to develop “reactive” Web applications, while also integrating the components and APIs required for modern Web development. This article introduces Play! and the advantages of using the framework to build Web applications.

A reactive Web development framework


Image of play


Play! abandons the paradigm of traditional Java Web frameworks and instead embraces the concept of “Reactive” applications, designing for it from the ground up. This makes it possible for Play! to build Web applications that respond to user behavior in real time even under high load. As a full-stack “reactive framework”, Play! has the following characteristics:

  1. Responsive — at the user level, Play! responds quickly to user behavior

  2. Scalable — at the load level, Play! scales out horizontally

  3. Event-driven — the Play! HTTP server is implemented on an event model

Next, let's look at each of these main features of Play! in turn.

Evented Web Application Server

Before Play 2.6.x, the Play! HTTP server was implemented on top of Netty; the latest 2.6.x versions are based on Akka HTTP. Before introducing Play!'s server, let's take a look at the HTTP servers used by traditional Java Web frameworks.

At present, there are two main types of HTTP Server implementation models — “thread model” and “event model”.

THREADED SERVERS

Servers used in traditional Java Web frameworks, such as the popular Apache Tomcat, are implemented based on the threading model, which works as shown in the following figure:


Image of Thread


The “acceptor threads” accept HTTP requests from clients and then hand these requests off to the “request processing threads” for processing.

The downside of this model is that the “worker threads” (the “request processing threads” mentioned above) are limited in number, while the number of client requests is often greater than the number of worker threads. When that happens, requests that have not yet been picked up remain blocked and waiting, and this shows at the user level: if the wait is too long, patient users eventually see a request timeout message, while impatient users simply close the page.

Threads are also very resource-intensive. If a request is time-consuming, the worker thread handling it might spend its time like this:


Image of thread-handle


The green segments are the time the program is actually running, and the red segments are waiting time. Because I/O operations are relatively slow, most of this thread's working time is spent waiting, which wastes a great deal of resources. With many such threads, that waste is multiplied accordingly.
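To make that cost concrete, here is a small, hypothetical Scala sketch (not Play! code): a fixed pool of four “worker threads” serves requests that each block on simulated I/O, so most requests simply sit in a queue waiting for a free thread.

import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object ThreadedServerSketch extends App {
  // A small fixed pool stands in for the server's worker threads.
  val pool = Executors.newFixedThreadPool(4)
  implicit val workerPool: ExecutionContext = ExecutionContext.fromExecutorService(pool)

  // Each "request" blocks its worker thread while waiting on simulated I/O.
  def handleRequest(id: Int): Future[String] = Future {
    Thread.sleep(200) // the thread sits idle here, yet still occupies a pool slot
    s"response $id"
  }

  // With 100 requests and only 4 blocked workers, most requests queue up and wait.
  val all = Future.sequence((1 to 100).toList.map(handleRequest))
  Await.result(all, 1.minute)
  pool.shutdown()
}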

EVENTED SERVERS

The Play! HTTP server is implemented on top of Netty or Akka HTTP, both of which are asynchronous and non-blocking. Let's take a look at how Play!'s event-model server works:


Image of Event


We know that when a user sends a request, it often involves many actions, and Play! can split the request into events and process them asynchronously. For example, when a certain operation is handed off to the operating system, it may take some time to complete. As noted above, having a thread wait for that operation to finish before moving on to the next one wastes resources, so during this waiting time the event loop (the message thread) executes other events from the event queue. When the operation completes, an interrupt is emitted; this interrupt itself counts as an event and is added to the event queue, waiting to be executed. This asynchronous, non-blocking mode lets Play! handle heavy traffic with fewer resources, and at the user level it lets Play! respond quickly to user behavior.

In contrast to the “threading model”, let’s draw a similar graph to explain why the “event model” consumes fewer resources and handles more requests:


Image of Event handle


In the figure, the green segments are the execution time of events and the orange segments are “idle time”. Note that this is “idle time” rather than the “waiting time” described earlier: during idle time the event loop executes other events instead of waiting for a single event to finish. When an operation completes, it sends an interrupt, which in turn produces a new event that the event loop eventually executes. This is how the “event model” processes a request. As you can see, the absence of waiting time greatly improves execution efficiency and lets the system handle a large number of requests with fewer resources.
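To make the event-queue idea more concrete, here is a deliberately simplified toy event loop in Scala. It only illustrates the model described above, not how Netty or Akka HTTP are actually implemented: a single loop thread drains a queue of events, and completed “I/O” enqueues a follow-up event instead of blocking a thread.

import java.util.concurrent.ConcurrentLinkedQueue

object EventLoopSketch extends App {
  type Event = () => Unit

  // The event queue: completed work enqueues follow-up events here.
  val queue = new ConcurrentLinkedQueue[Event]()

  // Simulate non-blocking I/O: instead of waiting, register a continuation
  // that is enqueued as a new event once the "I/O" completes.
  def readCustomerList(onComplete: List[String] => Unit): Unit =
    new Thread(() => {             // stands in for the OS signalling completion
      Thread.sleep(100)
      queue.add(() => onComplete(List("alice", "bob")))
    }).start()

  // The initial event starts the I/O and returns immediately.
  queue.add(() => readCustomerList(customers => println(s"render page for $customers")))

  // A single event-loop thread drains the queue; it never blocks on I/O itself.
  val deadline = System.currentTimeMillis() + 1000
  while (System.currentTimeMillis() < deadline) {
    Option(queue.poll()).foreach(event => event())
  }
}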

Asynchronous and non-blocking

By redesigning and implementing its own HTTP server, Play! is able to process each request asynchronously. During development, Play!'s controllers handle requests asynchronously by default, so it is easy to write asynchronous, non-blocking code. Before Java 8, writing asynchronous non-blocking code usually required callbacks, and once the business logic grew complex and the callbacks multiplied, the fabled “callback hell” made the code extremely hard to read. Scala's Future greatly simplifies the handling of multiple callbacks and makes the code look much more elegant (how Play! uses Future to implement asynchronous logic will be covered in a later article). So writing asynchronous code in Play! with Scala can be very efficient.
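As a minimal sketch of what such asynchronous controller code can look like in Play! with Scala (the ReportService and its dailySummary method are made-up placeholders standing in for a slow, non-blocking lookup):

import javax.inject.Inject
import play.api.mvc._
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical service performing a slow, non-blocking lookup.
class ReportService {
  def dailySummary(): Future[String] = Future.successful("42 orders today")
}

class ReportController @Inject()(cc: ControllerComponents, reports: ReportService)
                                (implicit ec: ExecutionContext) extends AbstractController(cc) {

  // Action.async takes a Future[Result]: no thread is blocked while the lookup runs,
  // and the response is written out when the Future completes.
  def summary: Action[AnyContent] = Action.async {
    reports.dailySummary().map(text => Ok(text))
  }
}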

Stateless

The Play! framework abandons the Servlet/JSP notion of a Session: it provides no built-in way to bind objects to a server instance and stores no state on the server between HTTP requests; any state that is needed must be passed along with the requests themselves. The advantage is that the application scales out well at the load level. Let's compare stateful and stateless deployments.

Stateful deployment


Image of state


If we store a large amount of per-client “state information” in the session, then to prevent that state from being lost when a particular machine fails, it has to be shared between the nodes. A common approach is clustering, for example the clustering features of Tomcat or JBoss. But an overloaded system can then no longer be fixed simply by adding nodes: as the number of nodes grows, the session traffic between nodes grows as well, increasing the overall overhead. Stateful deployment therefore does not let the system scale well.

Stateless deployment


Image of stateless


As the figure shows, with a stateless deployment each node stores no state information such as sessions, and there is no shared state between nodes; they are independent of each other. When the load increases, we only need to add a node and balance the load at the front end to improve the system's capacity, so the system scales well. Of course, without a server-side session we still need somewhere to keep state: Play! offers a cookie-based client-side user session and “external caches” (which will be covered in a later article).
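For example, Play!'s cookie-based session can be used roughly like this; the controller and the session key below are illustrative:

import javax.inject.Inject
import play.api.mvc._

class SessionController @Inject()(cc: ControllerComponents) extends AbstractController(cc) {

  // Store the user name in Play!'s cookie-based session: the state travels
  // with the client, so any node can serve the next request.
  def login(name: String): Action[AnyContent] = Action {
    Ok(s"Welcome, $name").withSession("user" -> name)
  }

  // Read the state back from the cookie on a later request.
  def whoAmI: Action[AnyContent] = Action { request =>
    request.session.get("user") match {
      case Some(name) => Ok(s"You are $name")
      case None       => Unauthorized("Please log in first")
    }
  }
}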

ROR style

For many companies, it is important to get a product up and running quickly. ROR (Ruby on Rails) style frameworks are so efficient that many companies tend to use them instead of traditional Java frameworks when building applications quickly.


Image of play and java layer


The image above compares Play! with traditional Java EE frameworks; you can see that Play!'s architecture is clearer and more concise. Before Play!, traditional Java Web frameworks took longer than ROR-style frameworks to develop Web applications, for two main reasons:

1. Reliance on servlets

Traditional Java Web frameworks are built on servlets, and developers have to build applications that run inside a Servlet container. The consequence is that developers need to restart the Web server every time they modify the code in order to see the changes. If the project is small, the restart and compile time is acceptable, but on a large project most of the development time is wasted on restarting and recompiling.

The Play! framework, by contrast, uses a ClassLoader to dynamically reload classes when the source code is modified. This removes the need to restart the server after every change and makes development much more efficient.

2. Complex XML configuration files

Traditional Java Web frameworks require a large number of XML configuration files when developing a Web application. These files are troublesome to configure, and when many of them are scattered across different directories, the maintenance cost rises.

The Play! framework takes a page from ROR and favors convention over configuration. There is only one global configuration file, application.conf, and most other settings have sensible defaults.
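For illustration, a minimal application.conf might contain little more than the following; the keys shown here (the application secret and a default JDBC datasource) are standard Play! settings, and anything not listed falls back to a default:

# The application secret, used among other things to sign the session cookie
play.http.secret.key = "changeme"

# A default JDBC datasource (only needed if the application uses a database)
db.default.driver = org.h2.Driver
db.default.url = "jdbc:h2:mem:play"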

RESTful

Traditional Java Web frameworks use servlets to hide the HTTP protocol, which means a developer cannot intuitively see which action handles a given request. Play!, by contrast, exposes HTTP directly in its routes file. If we want to get a list of users, we can write this in the routes file:

GET      /customer/list      controllers.CustomerController.list

The /customer/list URL corresponds to the list method in CustomerController.

It’s a little bit more intuitive.
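The matching controller could look like the following sketch; the hard-coded data and the plain synchronous Action are for illustration only:

import javax.inject.Inject
import play.api.mvc._

class CustomerController @Inject()(cc: ControllerComponents) extends AbstractController(cc) {

  // Handles GET /customer/list as declared in the routes file above.
  def list: Action[AnyContent] = Action {
    // A hard-coded list stands in for a real data source.
    Ok(Seq("alice", "bob", "carol").mkString("\n"))
  }
}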

Strongly typed templates

Starting with Play! 2, the template engine fully embraces Scala: every template is compiled into a Scala function, which means template errors show up directly in the browser or console at compile time, instead of only after the application has been deployed and the page is requested.
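For example, a minimal template saved as app/views/customers.scala.html (the file name and parameter are illustrative) might look like this:

@* The parameter list below is type-checked at compile time *@
@(customers: Seq[String])

<ul>
  @for(customer <- customers) {
    <li>@customer</li>
  }
</ul>

Play! compiles the file into a function, views.html.customers, so a controller can render it with Ok(views.html.customers(Seq("alice", "bob"))), and a type error in the template fails the build instead of surfacing at runtime.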