Author | Haohui Mai, Bill Liu, Naveen Cherukuri


Translator | Nuclear Coke

Uber disseminates data from various sources in real time to deliver a more seamless and enjoyable user experience. Specifically, Uber needs an estimated time of delivery (ETD) for an UberEATS order, a guided delivery route based on real-time traffic conditions, and the key impact metrics in between. Uber’s engineers have shared a detailed analysis of the technology that makes this possible.

The rapid growth of Uber’s business requires a data analytics infrastructure that can collect a wide range of insights from around the world at any given moment, from city-specific market conditions to estimates of global financial flows. Given that our Kafka infrastructure delivers over a trillion real-time messages a day, the platform must:

(1) be easy to use for all kinds of users, regardless of their technical background; (2) analyze real-time events in a scalable and efficient manner; and (3) provide rock-solid, sustained support for hundreds of critical tasks.

In response to this need, we built and open sourced the AthenaX project, our in-house streaming analytics platform, designed to make streaming analytics accessible to everyone.

AthenaX supports both our technical and non-technical customers, ensuring that they can run comprehensive, production-grade streaming analytics tasks using Structured Query Language (SQL). SQL makes event stream processing simple: the SQL query describes what data to analyze, while AthenaX determines how to analyze it (for example, locating the data or scaling the computation up).

In our practical experience, we found that AthenaX enables users to cut the time needed to build a streaming analytics application from weeks down to hours. In today’s post, we explore why we built AthenaX, outline its architecture, and detail the many contributions we have made to the open source community.

The evolution of Uber’s streaming analytics platform

To better provide actionable insights to our users, Uber must be able to measure application activity and the various external factors that affect it (such as traffic, weather, and major events). In 2013, we built our first-generation streaming analytics pipeline on top of Apache Storm. While impressive, the pipeline only computed a specific set of metrics; at a high level, the solution consumed live events, aggregated the results across multiple dimensions (such as geographic region and time range), and finally published the results on a web page.

As we continued to expand our products, we found that fast and effective streaming analytics was increasingly important and in demand. With UberEATS, real-time metrics such as customer satisfaction and sales help restaurants better understand how their business is performing and how customers are responding, and identify potential revenue. To compute these metrics, our engineers built multiple streaming analytics applications on top of Apache Storm and Apache Samza. More specifically, these applications project, filter, or join multiple Kafka topics and compute aggregate results using resources spread across hundreds or thousands of containers.

However, these solutions were far from ideal: users were either forced to implement, manage, and monitor their own streaming analytics applications, or were limited to fetching answers to a predefined set of questions.

AthenaX addresses this challenge by allowing users to build customized, production-ready streaming analytics using SQL. To meet the scale requirements of Uber’s business, AthenaX compiles and optimizes SQL queries into distributed streaming applications capable of processing millions of messages per second with just eight YARN containers. AthenaX also manages each application end to end, including continuously monitoring its health, automatically scaling it based on the size of its input data, and recovering it smoothly through failover in the event of a node or entire data center failure.

In the next section, we explore in detail how we built AthenaX’s powerful and flexible architecture.

Building streaming analytics applications with SQL

Figure 1: AthenaX takes streaming data and queries as input, computes the results, and then pushes them to various outputs.

Along the way, we gained experience that ultimately led to AthenaX, Uber’s next-generation streaming analytics platform. The key feature of AthenaX is that users can specify their streaming analytics using only SQL; AthenaX is then responsible for executing it efficiently. By compiling queries into reliable, efficient, distributed applications and managing the full life cycle of each application, AthenaX allows users to focus solely on their core business logic. As a result, users at all levels of technical expertise, regardless of scale, can run their own streaming analytics applications in a production environment in just a few hours.

As shown in Figure 1, an AthenaX task takes multiple data sources as input, performs the necessary processing and analysis, and generates output for many different types of endpoints. AthenaX’s workflow consists of the following steps:

  1. The user specifies a task in SQL and submits it to the AthenaX master.
  2. The AthenaX master validates the query and compiles it into a Flink task.
  3. The AthenaX master packages, deploys, and executes the task in the YARN cluster. The master is also responsible for restoring tasks in the event of a failure.
  4. The task starts processing the data and publishes the results to external systems (such as Kafka).

In our experience, SQL does a good job of specifying streaming applications. Take Restaurant Manager: in this use case, the following query counts the number of orders a restaurant received in the previous 15 minutes, as shown below:
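A minimal sketch of such a query in Flink SQL, assuming an ubereats_workflow table with illustrative restaurant_uuid and state columns (names are ours, not the exact production schema), might look like this:

    -- Count orders per restaurant over a 15-minute window that slides every minute.
    -- Table and column names (ubereats_workflow, restaurant_uuid, state) are
    -- illustrative assumptions.
    SELECT
      HOP_START(rowtime, INTERVAL '1' MINUTE, INTERVAL '15' MINUTE) AS window_start,
      restaurant_uuid,
      COUNT(*) AS total_orders
    FROM ubereats_workflow
    WHERE state = 'CREATED'  -- filter out irrelevant events
    GROUP BY
      HOP(rowtime, INTERVAL '1' MINUTE, INTERVAL '15' MINUTE),
      restaurant_uuid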

Basically, the query scans the entire UberEats_Workflow Kafka topic, filters out irrelevant events, and aggregates the number of events at 1-minute intervals within a 15-minute sliding window.

AthenaX also supports user-defined functions (UDFs) within queries, which further enriches its functionality. For example, the following query uses a UDF to show flights at a particular airport, converting longitude and latitude into an airport ID, as shown below:
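A hedged sketch of such a query, assuming an illustrative scalar UDF named AirportCode and a flights table (neither name comes from the original):

    -- AirportCode is an illustrative user-defined scalar function that maps
    -- (latitude, longitude) coordinates to an airport identifier.
    SELECT
      flight_id,
      AirportCode(lat, lon) AS airport
    FROM flights
    WHERE AirportCode(lat, lon) = 'SFO'  -- keep only flights at one airport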

Looking at a more complex example, calculating the potential revenue of a particular restaurant, Restaurant Manager produces the result as follows:
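One plausible shape for this query, assuming the workflow events join against an illustrative order_details stream keyed by order_uuid (all names here are assumptions):

    -- Join workflow events with their order details and sum up the value of
    -- each order per restaurant. Table and column names are illustrative.
    SELECT
      w.restaurant_uuid,
      w.order_uuid,
      SUM(d.item_price * d.quantity) AS potential_revenue
    FROM ubereats_workflow AS w
    JOIN order_details AS d
      ON w.order_uuid = d.order_uuid
    WHERE w.state = 'CREATED'
    GROUP BY
      w.restaurant_uuid,
      w.order_uuid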

This query joins real-time events with their order details to calculate the potential revenue.

Our experience shows that over 70% of streaming applications in production environments can be expressed in SQL. AthenaX applications can also provide different levels of data consistency guarantees: an AthenaX task can process real-time events at most once, at least once, or exactly once.

Next, we’ll explore the compilation workflow for AthenaX queries.

Compiling queries into distributed data flow programs

AthenaX uses Apache Flink to implement the classic Volcano approach to query compilation, turning each query into a distributed data flow program. As shown in Figure 2, the compilation workflow for Restaurant Manager is as follows:

  1. AthenaX parses the query and converts it into a logical plan (Figure 2 (a)). A logical plan is a directed acyclic graph (DAG) that describes the semantics of the query.
  2. AthenaX optimizes the logical plan (Figure 2 (b)). In this example, the optimization pushes the projection and filtering down to the stream scan task, minimizing the amount of data that needs to be joined (see the sketch after this list).
  3. The logical plan is translated into the corresponding physical plan. A physical plan is a DAG annotated with details such as data locality and parallelism, which describe how the query is to be executed on the physical machines. Using this information, the physical plan is mapped directly onto the final distributed data flow program (Figure 2 (c)).
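To make the pushdown in step 2 concrete, it can be pictured as a query rewrite. In the hedged sketch below (reusing the illustrative table names from the earlier examples), the optimizer effectively turns the first form into the second, so only relevant rows and columns ever reach the join:

    -- As written: the filter is applied after the join.
    SELECT w.restaurant_uuid, d.item_price
    FROM ubereats_workflow AS w
    JOIN order_details AS d ON w.order_uuid = d.order_uuid
    WHERE w.state = 'CREATED';

    -- After optimization: the filter and projection are pushed down to the
    -- stream scan, so the join only sees the rows and columns it needs.
    SELECT w.restaurant_uuid, d.item_price
    FROM (
      SELECT order_uuid, restaurant_uuid
      FROM ubereats_workflow
      WHERE state = 'CREATED'
    ) AS w
    JOIN order_details AS d ON w.order_uuid = d.order_uuid;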

Figure 2: The AthenaX compilation process consists of a series of DAGs, where each DAG describes the data flow of the query and each node describes a task to be performed as the data flows through it. Figures 2 (a), 2 (b), and 2 (c) show the original logical plan, the optimized logical plan, and the compiled data flow program in Flink (the physical plan is omitted for brevity, as it is almost identical to Figure 2 (c)).

After the compilation process is complete, AthenaX executes the compiled data flow program on top of a Flink cluster. These applications can process millions of messages per second in a production environment using only eight YARN containers. AthenaX’s processing power has the speed and scope to deliver the latest insights and enable a better user experience.

AthenaX in Uber’s production environment

Over its first six months in production, the current version of AthenaX has run more than 220 applications across multiple data centers, handling billions of messages a day. AthenaX powers a variety of platforms and products, including Michelangelo, Restaurant Manager for UberEATS, and UberPOOL.

We also implemented the following features to further extend the platform:

• Resource estimation and auto-scaling: AthenaX estimates the number of virtual cores and the amount of memory required based on the query and the throughput of its input data. We also observed significant differences between peak and off-peak workloads. To maximize cluster utilization, the AthenaX master continuously monitors the watermark progress and garbage collection statistics of each task and restarts it when necessary. Flink’s fault-tolerance model ensures that tasks still produce correct results in the meantime.

• Monitoring and automatic failure recovery: A significant portion of AthenaX tasks support critical building blocks in the pipeline and therefore require 99.99% availability. The AthenaX master continuously monitors the health of all AthenaX tasks and properly recovers them in the event of a node failure or even an entire data center failure.

Looking ahead: simpler streaming analytics

Uber’s streaming analytics team, the people behind AthenaX, pose for a photo. Back row: Bill Liu, Ning Li, Jessica Negara, Haohui Mai, Shuyi Chen, Haibo Wang, Xiang Fu and Heming Shou. Front row: Peter Huang, Rong Rong, Chinmay Soman, Naveen Cherukuri and Jing Fan.

By using SQL as an abstraction, AthenaX simplifies streaming analytics and enables users to bring large-scale streaming analytics applications into production quickly.

To help you easily build your own data streaming platform, we have open sourced AthenaX on GitHub and contributed a number of core features to the Apache Flink and Apache Calcite communities. For example, as part of the Flink 1.3 release, we contributed support for group windows and multiple complex data types; in addition, we are working on contributing a JDBC table sink in an upcoming release.