This article was authorized for release on the NetEase Cloud Community by its author, Yue Meng.


1. Flink Architecture and Feature Analysis

Flink is a fairly old project, started in 2008, but it has only recently gained widespread attention. Flink is a native stream processing system with a high-level API. Like Spark, it also provides APIs for batch processing, but the fundamentals of the two engines are completely different: Flink treats batch processing as a special case of stream processing. In Flink, all data is viewed as a stream, which is a good abstraction because it is closer to the real world.

1.1 Basic Architecture

The basic architecture of Flink is described below. Like Spark, Flink is based on a master-slave architecture.

When a Flink cluster starts, one JobManager and one or more TaskManagers are launched first. The Client submits a task to the JobManager, which then dispatches it to the TaskManagers for execution; the TaskManagers report heartbeats and statistics back to the JobManager. Data is transferred between TaskManagers in the form of streams. All three components are independent JVM processes.

The Client is the process that submits jobs. It can run on any machine that can reach the JobManager. Once a job is submitted, the Client can either exit (for streaming tasks) or stay around to wait for results.

The JobManager is responsible for scheduling jobs and coordinating task checkpoints, similar to Storm's Nimbus. After receiving a job and its resources (such as JAR packages) from the Client, it generates an optimized execution plan and assigns it, task by task, to the TaskManagers for execution.

A TaskManager starts with a fixed number of slots. Each slot can run one Task, and a Task is a thread. The TaskManager receives the tasks to deploy from the JobManager; once deployment starts, it establishes Netty connections with its upstream operators to receive and process data.

As you can see, Flink's task scheduling model is multi-threaded: tasks from different jobs can run mixed together within one TaskManager process. The main components are the JobManager, the TaskManager, and the Client; their roles are described below.

JobManager

The JobManager is the coordinator of the Flink system. It receives Flink Jobs and schedules the execution of the tasks that compose them. It also collects Job status information and manages the TaskManagers in the Flink cluster. The JobManager is responsible for various management functions; the events it receives and processes mainly include:

RegisterTaskManager

At Flink cluster startup, the TaskManager registers itself with the JobManager; if registration succeeds, the JobManager replies to the TaskManager with an AcknowledgeRegistration message.

SubmitJob

A Flink program submits its Flink Job to the JobManager through the Client. The SubmitJob message describes the basic information of the Job in the form of a JobGraph.

CancelJob

Requests cancellation of a Flink Job. The CancelJob message carries the ID of the Job to cancel; if cancellation succeeds, the JobManager returns CancellationSuccess, otherwise it returns CancellationFailure.

UpdateTaskExecutionState

The TaskManager asks the JobManager to update the state of an ExecutionVertex in the ExecutionGraph; true is returned if the update succeeds.

RequestNextInputSplit

A Task running on a TaskManager requests the next InputSplit to process; the NextInputSplit is returned on success.

JobStatusChanged

The ExecutionGraph sends this message to the JobManager to report that the state of the Flink Job has changed, for example to RUNNING, CANCELING, or FINISHED.

TaskManager

The TaskManager is also an Actor. It is the Worker actually responsible for performing the computation, executing a set of Tasks of a Flink Job. Each TaskManager manages the resources of its node, such as memory, disk, and network, and reports its resource status to the JobManager at startup. The life of a TaskManager can be divided into two phases:

Registration phase

The TaskManager registers with the JobManager by sending a RegisterTaskManager message and waits for the JobManager to return AcknowledgeRegistration; only then does the TaskManager proceed with its initialization.

Operational phase

In this phase, the TaskManager can receive and process task-related messages, such as SubmitTask, CancelTask, and FailTask. If the TaskManager cannot connect to the JobManager, it loses contact with the JobManager and automatically falls back into the registration phase. Only after re-registering can it continue processing task-related messages.

Client

When a user submits a Flink program, a Client is created first. The Client preprocesses the user's Flink program and submits it to the Flink cluster for processing. To do so, the Client reads the JobManager address from the configuration of the submitted program, establishes a connection to the JobManager, and submits the Flink Job to it. The Client assembles the user's Flink application into a JobGraph and submits it in that form. A JobGraph is a DAG of JobVertex nodes representing a Flink Dataflow; it contains information about the Flink program such as the JobID, the Job name, configuration settings, and the set of JobVertex nodes.


1.2 YARN Layer Architecture

The YARN-level architecture is similar to Spark on YARN mode: in both, the Client submits the App to the RM, and the RM allocates the first Container to run the AM. Note that Flink's YARN mode resembles the cluster mode of Spark on YARN, where the driver runs as a thread inside the AM. In Flink on YARN mode, the JobManager, which schedules and dispatches tasks much like the driver, is likewise started inside a Container. The YARN AM and the Flink JobManager live in the same Container, so the AM knows the JobManager's address and can request Containers to start Flink TaskManagers. Once Flink is successfully running on the YARN cluster, the Flink YARN Client can submit Flink Jobs to the JobManager, which performs the subsequent mapping, scheduling, and computation.

1.3 Component Stack

Flink has a layered architecture in which each layer contains components that provide specific abstractions to serve the layers above.

Deployment layer

This layer covers Flink's deployment modes. Flink supports several: local, cluster (Standalone/YARN), and cloud (GCE/EC2). Its Standalone deployment mode is similar to Spark's; here we focus on Flink on YARN.

Runtime layer

The Runtime layer provides all the core implementations that support Flink's computation, such as distributed stream processing, the JobGraph-to-ExecutionGraph mapping, and job scheduling, and offers basic services to the API layer above.

API layer

The API layer implements the stream processing API for unbounded streams and the batch processing API for bounded data sets: stream-oriented processing corresponds to the DataStream API, and batch-oriented processing corresponds to the DataSet API.

Libraries layer

This layer can also be called Flink's application framework layer. Following the division of the API layer, the computing frameworks built on top of it for specific applications also split into stream-oriented and batch-oriented processing. Stream-oriented processing supports CEP (complex event processing) and SQL-like operations (Table-based relational operations); batch-oriented processing supports FlinkML (a machine learning library) and Gelly (graph processing).


One of Flink's most important features is that Batch and Streaming share the same processing engine: a batch application can run efficiently as a special case of a streaming application.

This raises a question: how do Batch and Streaming share one processing engine?

1.4 How Batch and Streaming Use the Same Processing Engine

The code below explains how Batch and Streaming use the same processing engine, starting from Flink's own test cases to tell the two apart.


Batch WordCount Examples
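A minimal sketch of a Batch WordCount in the Scala DataSet API (the inline sample input is a placeholder, not Flink's actual test data):

import org.apache.flink.api.scala._

object BatchWordCount {
  def main(args: Array[String]): Unit = {
    // Batch programs start from an ExecutionEnvironment ...
    val env = ExecutionEnvironment.getExecutionEnvironment

    // ... and read their source data as a DataSet.
    val text: DataSet[String] = env.fromElements("to be or not to be")

    val counts = text
      .flatMap { _.toLowerCase.split("\\W+").filter(_.nonEmpty) }
      .map { (_, 1) }
      .groupBy(0)
      .sum(1)

    // print() triggers execution of the batch program.
    counts.print()
  }
}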


Streaming WordCount Examples
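And a matching sketch of a Streaming WordCount in the Scala DataStream API (the socket host and port are placeholders):

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    // Streaming programs start from a StreamExecutionEnvironment ...
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // ... and read their source data as a DataStream.
    val text: DataStream[String] = env.socketTextStream("localhost", 9999)

    val counts = text
      .flatMap { _.toLowerCase.split("\\W+").filter(_.nonEmpty) }
      .map { (_, 1) }
      .keyBy(0)
      .timeWindow(Time.seconds(5))
      .sum(1)

    counts.print()

    // Unlike the batch case, a streaming job must be started explicitly.
    env.execute("Streaming WordCount")
  }
}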


Batch and Streaming use different execution environments: an ExecutionEnvironment reads its source data as a DataSet, while a StreamExecutionEnvironment reads its source data as a DataStream.

On the Batch side, the path from the Optimizer to the JobGraph constructs a LocalPlanExecutor for local execution; the remote executor is a RemotePlanExecutor.


The first step of the run() method is to obtain the JobGraph. This is a client-side operation: the client submits the JobGraph to the JobManager, where it is turned into an ExecutionGraph. The difference between Batch and Streaming lies in how the JobGraph is obtained.

If the FlinkPlan we initialized is a StreamingPlan, a StreamingJobGraphGenerator is first constructed to translate the optimized plan into a JobGraph; Batch adopts a different translation path.
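As a hedged sketch of that branching, modeled on the ClusterClient of Flink 1.x rather than quoted from it:

import org.apache.flink.optimizer.plan.{FlinkPlan, OptimizedPlan, StreamingPlan}
import org.apache.flink.optimizer.plantranslate.JobGraphGenerator
import org.apache.flink.runtime.jobgraph.JobGraph

def getJobGraph(plan: FlinkPlan): JobGraph = plan match {
  // Streaming: the StreamGraph (a StreamingPlan) produces its own JobGraph,
  // internally using the StreamingJobGraphGenerator.
  case streaming: StreamingPlan => streaming.getJobGraph
  // Batch: the OptimizedPlan is compiled into a JobGraph by the JobGraphGenerator.
  case batch: OptimizedPlan => new JobGraphGenerator().compileJobGraph(batch)
}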

In short, Batch and Streaming have two different ExecutionEnvironments, which translate their different APIs into different intermediate plans: the Batch API is compiled into an OptimizedPlan, while the Stream API is translated into a StreamGraph. The responsibility of the JobGraph is to unify the Batch and Stream representations.

1.5 Feature Analysis

High throughput & low latency


Flink’s stream processing engine requires minimal configuration to achieve high throughput and low latency. The following figure shows the performance of a distributed counting task, including the streaming data shuffle process.


Support for Event Time and out-of-order events

Flink supports windowing mechanisms for stream processing and Event Time semantics.


Event Time makes it easy to compute over events that arrive out of order or later than expected.
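For instance, a sketch of an event-time window over an out-of-order stream; the Event type, the sample elements, and the 10-second lateness bound are all assumptions for illustration:

import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

case class Event(key: String, timestamp: Long, value: Long)

object EventTimeExample {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

    val events: DataStream[Event] =
      env.fromElements(Event("a", 3000L, 2), Event("a", 1000L, 1)) // out of order

    events
      // Tolerate events arriving up to 10 seconds out of order.
      .assignTimestampsAndWatermarks(
        new BoundedOutOfOrdernessTimestampExtractor[Event](Time.seconds(10)) {
          override def extractTimestamp(e: Event): Long = e.timestamp
        })
      .keyBy(_.key)
      .timeWindow(Time.minutes(1))
      .sum("value")
      .print()

    env.execute("Event Time Example")
  }
}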

Exactly-once semantics for stateful computation

Streams can maintain custom state during computation.


Flink's checkpointing mechanism guarantees exactly-once semantics for that state even in the event of failures.
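Turning checkpointing on is a single call; a minimal sketch (the 10-second interval is an arbitrary choice):

import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Draw a consistent snapshot of all operator state every 10 seconds.
// On failure, the job rolls back to the latest completed snapshot,
// so each event affects the state exactly once.
env.enableCheckpointing(10000)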

Highly flexible streaming windows

Flink supports time windows, count windows, session windows, and data-driven custom windows.


Windows can be customized with flexible trigger conditions to support complex streaming computing patterns.
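As a sketch, the window types differ only in the assigner applied after keyBy; the stream of (word, count) pairs below is an assumption:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.EventTimeSessionWindows
import org.apache.flink.streaming.api.windowing.time.Time

val env = StreamExecutionEnvironment.getExecutionEnvironment

// A stand-in keyed stream of (word, count) pairs.
val events: DataStream[(String, Long)] = env.fromElements(("a", 1L), ("b", 2L))
val keyed = events.keyBy(0)

val byTime    = keyed.timeWindow(Time.minutes(1))    // time window
val byCount   = keyed.countWindow(100)               // count window
val bySession = keyed.window(                        // session window
  EventTimeSessionWindows.withGap(Time.minutes(10)))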

Continuous streaming model with backpressure

Streaming applications run as continuous (long-resident) operators.


Flink streaming has natural flow control at runtime: slow data sinks backpressure faster sources.

Fault tolerance

Flink's fault tolerance mechanism is based on Chandy-Lamport distributed snapshots.


This mechanism is very lightweight, allowing the system to have high throughput rates while still providing strong consistency.

One engine for both Batch and Streaming


Flink uses one common engine for both streaming and batch applications; a batch application can run efficiently as a special case of a streaming application.

Memory management

Flink implements its own memory management within the JVM.


Applications can scale beyond main memory limits and incur less garbage collection overhead.

Iteration and incremental iteration

Flink has specialized support for iterative computation (such as in machine learning and graph computation).


Incremental iterations can exploit dependencies in the computation to converge faster.
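A minimal sketch of a bulk iteration in the DataSet API; the trivial step function is a stand-in for a real algorithm such as PageRank:

import org.apache.flink.api.scala._

val env = ExecutionEnvironment.getExecutionEnvironment
val initial: DataSet[Double] = env.fromElements(0.0)

// Run the step function 10 times; each round feeds its result into the
// next. iterateDelta works similarly but re-processes only the elements
// that changed, which is what incremental iteration exploits.
val result = initial.iterate(10) { current => current.map(_ + 1.0) }

result.print()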

Application tuning


Batch programs are automatically optimized in certain scenarios, for example to avoid expensive operations (such as shuffles and sorts) and to cache intermediate data.

APIs and libraries

Stream processing applications

The DataStream API supports functional transformations on data streams, with custom state and flexible windows.

The example below counts word occurrences in a text data stream within a sliding window.

WindowWordCount in Flink's DataStream API:

case class Word(word: String, freq: Long)

val texts: DataStream[String] = ...

val counts = texts
  .flatMap { line => line.split("\\W+") }
  .map { token => Word(token, 1) }
  .keyBy("word")
  .timeWindow(Time.seconds(5), Time.seconds(1))
  .sum("freq")

Batch applications

Flink's DataSet API lets you write beautiful, type-safe, maintainable code in Java or Scala. It supports a wide range of data types beyond key/value pairs, along with a rich set of operators.

The example below shows one of the core loops of the PageRank algorithm in graph computation.

case class Page(pageId: Long, rank: Double)
case class Adjacency(id: Long, neighbors: Array[Long])

val result = initialRanks.iterate(30) { pages =>
  pages.join(adjacency).where("pageId").equalTo("id") {
    (page, adj, out: Collector[Page]) => {
      out.collect(Page(page.pageId, 0.15 / numPages))
      for (n <- adj.neighbors) {
        out.collect(Page(n, 0.85 * page.rank / adj.neighbors.length))
      }
    }
  }
  .groupBy("pageId").sum("rank")
}



Library ecosystem


The Flink stack provides a number of high-level APIs and libraries for different scenarios: machine learning, graph analysis, and relational data processing. These libraries are currently still in beta and under active development.

Extensive integration

Flink integrates with many projects in the open-source big data processing ecosystem.


Flink can run on YARN, work with HDFS, read stream data from Kafka, execute Hadoop program code, and connect to multiple data storage systems.
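For example, a hedged sketch of consuming a Kafka topic as a DataStream; the topic name, broker address, and connector class (FlinkKafkaConsumer09 here) are assumptions that vary by release:

import java.util.Properties
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09
import org.apache.flink.streaming.util.serialization.SimpleStringSchema

val env = StreamExecutionEnvironment.getExecutionEnvironment

val props = new Properties()
props.setProperty("bootstrap.servers", "localhost:9092") // placeholder broker
props.setProperty("group.id", "example-group")           // placeholder group

// Consume the (hypothetical) "events" topic as a stream of strings.
val stream: DataStream[String] =
  env.addSource(new FlinkKafkaConsumer09[String]("events", new SimpleStringSchema(), props))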

Deployability

Flink can be deployed independently of Hadoop, relying only on a Java environment, which makes deployment relatively simple.

