
Last week’s California self-driving disengagement report, which disclosed each company’s road-test mileage, naturally raises the next question:

How do you squeeze the most value out of this hard-won road-test data?

Every company on the list clearly understands the value of letting its autonomous driving system “learn” from valuable road data over and over again, and almost all of them have a closely guarded technology for doing so:

By recombining hundreds of thousands or even millions of real road measurements into a virtual environment, the automated driving system can keep testing as if it were on a real road (in fact, the computer cannot tell the difference);

At the same time, engineers can adjust the data on the fly, compare different models, or observe how vehicles behave in unusual events.

Generally speaking, this is one of the most effective ways to make full use of road-test data, and it is precisely where the value of a simulation training platform lies.

Today, Uber suddenly announced that it was opening up its visualization tools “to help build simulation test environments.”

A couple of years ago, when Uber still sat in the first tier of self-driving development and was a media darling, the blog post that first unveiled its web-based visualization platform drew plenty of press attention.

Of course, the underlying reason the visualization platform got attention wasn’t the technology’s complexity or sophistication, but rather that Uber was grappling with a problem baffling many self-driving companies:

How to further increase the value of road data through the way test environments are built.

Interestingly, a week before Uber’s August 2017 blog post, Google’s Waymo was first profiled by The Atlantic for its own simulation tool, Carcraft, which was described as a key factor in rapidly improving Waymo’s autonomous driving system.

According to Waymo, while just over 3 million miles were logged on real roads in 2016, more than 2.5 billion miles were logged in virtual environments.

It was this combination of real and virtual testing that led to the significant drop in takeovers recorded in Google’s California disengagement reports.

It was at that time, it seemed, that creating virtual test environments began to take off.

A week later, Uber wrote thousands of words showing off the brilliance of its data visualization tool.

To put it simply:

“With this easy-to-use set of tools, engineers and operators can quickly review, debug, and analyze all information gathered during real and virtual testing.”

To make it easier to understand, let’s take an example.

Unlike ordinary navigation maps, the high-precision maps required for autonomous driving contain far more detail: high-resolution scans of the ground, lane boundaries and lane types, turns, speed limits, crosswalks… covering almost all relevant geographic information.

The high-precision map team can then use the visualization tool, together with multi-source data sets, to examine and update map details at a given intersection.
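To make this concrete, a high-precision map layer can be thought of as a set of typed geographic features attached to a ground scan. Here is a minimal sketch in TypeScript; every name below is illustrative, not Uber’s actual schema:

```typescript
// Illustrative model of high-precision map features; all field and
// type names are hypothetical, not Uber's actual schema.
type LaneType = "driving" | "turn" | "bike" | "shoulder";

interface LaneBoundary {
  style: "solid" | "dashed" | "double";
  // Polyline of [longitude, latitude, altitude] points.
  polyline: [number, number, number][];
}

interface MapFeature {
  id: string;
  laneType?: LaneType;
  boundaries: LaneBoundary[];
  speedLimitKmh?: number;
  isCrosswalk?: boolean;
}

// An intersection tile bundles everything the map team would inspect
// and update through the visualization tool.
interface IntersectionTile {
  tileId: string;
  groundScanUrl: string; // high-resolution ground imagery
  features: MapFeature[];
}
```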

“We try to express complex road data by creating a visual metaphor system that provides realistic representations of environmental elements, such as ground imagery and lane markings, allowing engineers to anchor their understanding of the vehicle’s surroundings,” Uber explained in a blog post.

Of course, it’s not just about high-precision maps. The tools also help test and debug the autonomous driving system’s own perception, route planning, and decision-making.

Engineers can replay vehicle logs in a virtual environment, manipulate the camera in real time, and add or remove objects from the scene; this is exactly where Uber’s data visualization framework comes in.
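As a rough sketch of what such a log-replay session might look like (a hypothetical API for illustration, not Uber’s actual framework):

```typescript
// Hypothetical log-replay session; class and method names are
// illustrative, not the real API of Uber's framework.
interface SceneObject {
  id: string;
  kind: "pedestrian" | "vehicle" | "cyclist";
  position: [number, number];
}

class ReplaySession {
  private extras = new Map<string, SceneObject>();

  constructor(private logUrl: string) {}

  // Move the playhead to a timestamp (seconds into the log).
  seek(t: number): void {
    // e.g. fetch and decode the frames around t from this.logUrl
  }

  // Reposition the virtual camera while the log plays back.
  setCamera(
    position: [number, number, number],
    lookAt: [number, number, number]
  ): void {}

  // Add or remove objects to probe "what if" variants of the scene.
  addObject(obj: SceneObject): void {
    this.extras.set(obj.id, obj);
  }
  removeObject(id: string): void {
    this.extras.delete(id);
  }
}
```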

For example, engineers can compare the simulations generated by two versions of the self-driving software.

Likewise, the visualization tool lets engineers pause and replay specific travel intervals for closer inspection.
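Combining the two ideas, a regression check might replay the same interval through both software versions and flag the frames where their planned paths diverge. A minimal sketch, where the frame format is an assumption:

```typescript
// Illustrative A/B comparison over one replayed interval; the frame
// format below is an assumption, not a published schema.
interface FrameOutput {
  timestamp: number; // seconds into the log
  plannedPath: [number, number][]; // planner waypoints per frame
}

// Return the timestamps where the two versions' planned paths
// diverge by more than toleranceMeters at any waypoint.
function divergingFrames(
  runA: FrameOutput[],
  runB: FrameOutput[],
  toleranceMeters: number
): number[] {
  const out: number[] = [];
  const n = Math.min(runA.length, runB.length);
  for (let i = 0; i < n; i++) {
    const a = runA[i].plannedPath;
    const b = runB[i].plannedPath;
    const maxGap = a.reduce((m, p, j) => {
      const q = b[j] ?? p; // treat missing waypoints as no gap
      return Math.max(m, Math.hypot(p[0] - q[0], p[1] - q[1]));
    }, 0);
    if (maxGap > toleranceMeters) out.push(runA[i].timestamp);
  }
  return out;
}
```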

To handle a major road-affecting event such as a street demonstration, engineers can open the map on the platform, close off several main roads, and add crowds of pedestrians and erratic human drivers to test how the self-driving system reacts to a situation it has never encountered.
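A scenario like that could plausibly be expressed as declarative configuration loaded on top of the base map. A hedged sketch; none of these field names come from Uber’s tools:

```typescript
// Hypothetical declarative description of the demonstration scenario
// above; every field name here is invented for illustration.
const demonstrationScenario = {
  baseMapTile: "downtown-intersection-042", // illustrative tile id
  closedRoads: ["main_st_eastbound", "main_st_westbound", "5th_ave"],
  injectedAgents: [
    { kind: "pedestrian", count: 200, behavior: "crowd" },
    { kind: "vehicle", count: 12, behavior: "erratic" },
  ],
};
```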

In reality, visualization is just an intuitive, easy-to-understand way of presenting what the underlying technology is doing.

Essentially, the purpose of such tools is to take existing data, generate useful data, and ultimately improve artificial intelligence.

That is not to say no technology company has released similar tools before.

Many tech giants, including Nvidia and Intel, offer similar tools for creating virtual worlds, but those tools are far from perfect:

They are desktop-based, inflexible, non-standard, and produce files too large to share easily.

In that blog post two years ago, Uber answered these shortcomings point by point, listing the advantages of its own visualization tool in detail:

  • Fast iteration

On the web, features can be developed and deployed incrementally, quickly and easily. If users want the latest version of the product, they simply refresh the page in their browser rather than download and install a new application.

  • Flexible and easy to share

Since the web is hardware-agnostic, anyone, anywhere, on any operating system can work on the platform. On the web, an incident can be reported and diagnosed simply by clicking a URL.

  • Collaborative and personalized

Each team has unique visualization and data-generation needs, so each must be able to customize the platform to its own standards.

“HTML5 and JavaScript are tried and trusted tools for creating custom UIs on the fly, and they integrate easily into other infrastructure and task-management systems,” Uber stressed in the blog post.
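In that spirit, a team-specific panel can be a small piece of browser code that subscribes to a vehicle data stream. A minimal sketch, assuming a WebSocket endpoint and message shape that are purely illustrative:

```typescript
// Minimal browser-side panel showing live acceleration values.
// The endpoint and message format are assumptions for illustration,
// not part of Uber's published tooling.
const panel = document.createElement("pre");
document.body.appendChild(panel);

const socket = new WebSocket("wss://example.com/vehicle-stream");
socket.onmessage = (event: MessageEvent<string>) => {
  const frame = JSON.parse(event.data) as {
    timestamp: number;
    acceleration: number; // m/s^2
  };
  panel.textContent =
    `t=${frame.timestamp.toFixed(2)} s  a=${frame.acceleration.toFixed(2)} m/s^2`;
};
```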

Of course, beyond the flaws of existing tools, many technically strong self-driving companies are simply reluctant to talk about their visualization tools (or simulation platforms), let alone open them up.

Google, for example, has been working on autonomous driving for 11 years, but its virtual simulation tool Carcraft only came to light two years ago.

Tech startup Drive.ai also introduced four visualization tools (dashboard displays, 3D data visualization, annotated data sets, and interactive simulations) in 2018, but again only for internal engineers.

Until now, Uber had only allowed internal engineers to use the tool, and the last sentence of that blog post from two years ago could be read as a job posting:

“If you’re interested in ‘mapping’ self-driving technology, welcome to Uber’s data visualization team…”

But this particular move has led prominent tech media outlets like VentureBeat to praise Uber’s “open source contribution to the global ecosystem of self-driving development tools.”

“Together with Cruise, which opened up its own 2D/3D scene-rendering library last week, it’s a welcome move. It’s an unprecedented step in a world where self-driving secrets are closely guarded. It’s a small step, but hopefully it will encourage developers to create cool apps that ultimately lift the whole industry,” The Verge commented.

For now, the open-sourced tool is the same suite used by Uber’s Advanced Technologies Group, the R&D division responsible for developing its self-driving platform, and several self-driving companies, including Voyage and Applied Intuition, have already committed to using it.

“Being able to intuitively explore state information such as sensor data, predicted paths, tracked objects, and accelerations is invaluable to the triage process. At Voyage, we use this information to make data-driven decisions about engineering priorities,” Drew Gray, Voyage’s chief technology officer, said in a statement.
