I recently spent some time looking into Baidu’s autonomous driving platform, the Apollo project. Here is what I learned.

Apollo Program Introduction

Apollo is a software platform that Baidu launched on April 19, 2017 for partners in the automotive and autonomous driving industries. It aims to provide an open, complete and secure software platform that helps partners combine their vehicles and hardware systems to quickly build a complete autonomous driving system of their own. The program was named “Apollo” after the Apollo moon landing program.

For a feel of Apollo running on a real car, see: CES 2018 Baidu Apollo 2.0 self-driving car test ride in Sunnyvale, USA.

SAE Level

For autonomous driving, SAE International (Society of Automotive Engineers) released a six-level classification system in 2014, ranging from fully manual to fully automated. The six levels are described as follows:

  • Level 0 (No Automation): The human at the wheel steers, brakes, accelerates, and negotiates traffic.
  • Level 1 (Driver Assistance): Under certain conditions, the car controls either the steering or the vehicle speed, but not both simultaneously. The driver performs all other aspects of driving and has full responsibility for monitoring the road and taking over if the assistance system fails to act appropriately.
  • Level 2 (Partial Automation): The car can steer, accelerate, and brake in certain circumstances. Tactical maneuvers such as responding to traffic signals or changing lanes largely fall to the driver, as does scanning for hazards. The driver may have to keep a hand on the wheel as a proxy for paying attention.
  • Level 3 (Conditional Automation): In the right conditions, the car can manage most aspects of driving and prompts the driver to intervene when it encounters a scenario it cannot navigate. The driver must be available to take over at any time.
  • Level 4 (High Automation): The car can operate without human input or oversight, but only under select conditions defined by factors such as road type or geographic area. In a shared car restricted to a defined area, there may be no driver involvement at all; in a privately owned Level 4 car, the driver might manage all driving duties on surface streets and then become a passenger as the car enters a highway.
  • Level 5 (Full Automation): The driverless car can operate on any road and in any conditions a human driver could negotiate. The occupants only need to enter a destination.

Apollo’s official website is apollo.auto.

The Apollo website describes the program’s features as follows:

  • Open capabilities: Apollo is an open, complete and secure platform that helps partners in the automotive and autonomous driving industries combine vehicles and hardware systems to quickly build their own autonomous driving systems.
  • Shared resources, accelerated innovation: The Apollo open platform provides high-precision map services with leading technology, wide coverage and a high degree of automation; the world’s only open simulation engine backed by massive data; and the world’s largest volume of open data for end-to-end, deep-learning-based autonomous driving algorithms.
  • Sustained win-win: The Apollo open platform lets you develop, test and deploy autonomous vehicles faster. The more participants there are, the more driving data accumulates. Apollo matures faster than a closed system and benefits every participant more, and the platform gets better with your participation.

At present, its official website lists close to 100 partners.

Apollo’s published roadmap is as follows:

  • 2017-07: Autonomous driving capability in enclosed venues
  • 2017-12: Autonomous driving capability in simple urban road conditions
  • 2020-12: Fully autonomous driving on highways and ordinary city roads

Apollo 2.5, the latest release, is primarily aimed at L2 autonomous driving.

Detailed Apollo version evolution information is shown below:

Source code

The source code for the Apollo project is available at github.com/ApolloAuto. There are five open source projects in this path:

  • Apollo: Source code for the Apollo autonomous driving platform.
  • Apollo-platform: The runtime framework the Apollo project is built on, derived from the Robot Operating System (ROS). The source code currently distributed is based on ROS Indigo.
  • Apollo-DuerOS: An Apollo-related telematics product that includes several open source components. For DuerOS itself, see here: DuerOS.
  • Apollo-kernel: The Linux kernel of the Apollo project.
  • apolloauto.github.io: Apollo-related documentation, which can be browsed at apolloauto.github.io.

Compile and run

See apolloauto.github.io for information on how to compile and run the Apollo project.

Ubuntu and Docker environments are required to perform this task.

Once compiled, you can familiarize yourself with the environment on your computer through the project’s Dreamview feature, which is accessed through a browser and looks like this:

For more instructions on Dreamview, see here:
Dreamview Usage Table

Development

Development on the Apollo platform consists of the following steps:

  1. Understand the offline simulation engine Dreamview and the Apollo core software modules
    • See how algorithms work on cars
    • Start development immediately without using real cars or hardware
  2. Core module integration
    • The Localization module
    • The Perception module (which supports third-party solutions such as Mobileye ES4-chip based cameras for L2 development): processes the point cloud data from LiDAR and returns segmented object information upon request.
    • The Planning module: computes the fine-tuned local path and provides vehicle dynamic control information for the path segment being driven.
    • The Routing module: finds local path segments through the Navigator interface.
  3. HD maps. L4 autonomous driving requires high-definition maps. Since a self-driving car needs to reconstruct the 3D world inside its system, reference object coordinates play an important role in localizing the car against the map in the real world.
  4. Cloud-based online simulation drives the scenario engine and data center.
    • As a Baidu partner, you will be granted Docker credentials to submit new images and replay the algorithms you developed on the cloud.
    • Create and manage complex scenarios to simulate real-world driving experiences

Apollo and ROS

ROS stands for Robot Operating System. It includes a set of open source software libraries and tools for building robotics applications. Its official website is here: www.ros.org.

An ROS system consists of a series of individual nodes. These nodes communicate with each other through a publish/subscribe messaging model. For example, a sensor driver can be implemented as a node and then send sensor data in the form of published messages. This data can be received by multiple other nodes, such as filters, logging systems, and so on.

The nodes in a ROS system may reside on different hosts, such as an Arduino device that publishes messages, a laptop that subscribes to those messages, and an Android phone that monitors those messages.

A ROS system contains a Master node, which lets other nodes look each other up so they can communicate. Every node must register with the master before it can communicate with the other nodes, as shown below:

Those familiar with Android may find this similar to the ServiceManager in Binder.

Nodes communicate with each other by publishing and subscribing to Topics. For example, in a robotic system, a camera module on the robot captures image data. An image processing module on the robot needs that image data, and another module running on a PC needs it as well. The camera module can then publish the /image_data topic for the other two modules to subscribe to. Its structure is shown in the figure below:
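To make the publish/subscribe model concrete, here is a minimal roscpp sketch. The node name, topic name and message type are illustrative only (not taken from Apollo), and in a real system the publisher and subscriber would normally live in separate nodes:

// Minimal roscpp publish/subscribe sketch (illustrative only).
#include <ros/ros.h>
#include <std_msgs/String.h>

// Subscriber callback: invoked for every message published on the topic.
void ImageDataCallback(const std_msgs::String::ConstPtr& msg) {
  ROS_INFO("received: %s", msg->data.c_str());
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "demo_node");
  ros::NodeHandle nh;

  // Publisher side: advertise a topic, then publish messages on it.
  ros::Publisher pub = nh.advertise<std_msgs::String>("/image_data", 10);

  // Subscriber side: register a callback for the same topic.
  ros::Subscriber sub = nh.subscribe("/image_data", 10, ImageDataCallback);

  ros::Rate rate(1);
  while (ros::ok()) {
    std_msgs::String msg;
    msg.data = "hello";
    pub.publish(msg);
    ros::spinOnce();  // dispatch pending callbacks
    rate.sleep();
  }
  return 0;
}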

The Apollo project is based on ROS, but it has been transformed in the following three areas:

  1. Communication performance optimization
  2. Decentralized network topology
  3. Data Compatibility Extension

Communication performance optimization

Autonomous vehicles contain a large number of sensors, which may generate data at very high frequency, so the whole system requires very efficient data transmission. In a ROS system, data is copied from the publishing node to each subscribing node. With large payloads, this obviously hurts transmission efficiency. So the Apollo project’s first modification to ROS is to use shared memory to reduce data copies and improve communication performance, as shown below:
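As a rough illustration of the idea (a generic POSIX shared-memory sketch, not Apollo’s actual transport code), a producer can place a payload in a shared segment and a consumer can map that same segment instead of receiving its own copy:

// Conceptual POSIX shared-memory sketch: the producer writes into a shared
// segment and consumers map the same segment, so no per-subscriber copy of
// the payload is needed. Segment name and contents are illustrative.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

int main() {
  const char* name = "/demo_segment";
  const size_t size = 4096;

  // Producer: create the segment and write a message into it.
  int fd = shm_open(name, O_CREAT | O_RDWR, 0666);
  ftruncate(fd, size);
  void* addr = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  std::strcpy(static_cast<char*>(addr), "sensor frame #1");

  // Consumer (normally another process): map the same segment and read it.
  void* reader = mmap(nullptr, size, PROT_READ, MAP_SHARED, fd, 0);
  std::printf("consumer read: %s\n", static_cast<const char*>(reader));

  munmap(addr, size);
  munmap(reader, size);
  close(fd);
  shm_unlink(name);
  return 0;
}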

Decentralized network topology

As mentioned earlier, a ROS system contains a master node through which all other nodes find each other. Obviously, if this node fails, communication across the entire system is affected. Moreover, this structure also lacks a failure-recovery mechanism.

So Apollo’s second transformation of ROS is to remove this centralized network structure. Apollo uses the real-time publish-subscribe (RTPS) service discovery protocol to implement a complete P2P network topology. The whole communication process includes the following four steps:

RTPS can be found here:
Real-Time Publish-Subscribe

Data Compatibility Extension

The last major improvement Apollo made to ROS was a change to the data format.

In a ROS system, .msg description files define the message interfaces between modules. Unfortunately, once an interface is upgraded, different versions of a module are no longer compatible with each other.

Therefore, Apollo chose Google’s Protocol Buffers data format to solve this problem.

Protocol Buffers is a data description language developed by Google. Similar to XML, it can serialize structured data and be used for data storage and communication protocols. It is language- and platform-independent and highly extensible. Currently, C++, Java and Python are officially supported, and a large number of third-party extensions cover almost every other language.

Note: If you look at the source code of the Apollo project, you will see a number of folders named “proto” that contain data structures in the Protocol Buffers (protobuf) format.
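As a small illustration of how protobuf-described data is used from C++, assume a hypothetical file point.proto defining a message Point with two double fields x and y; the C++ classes generated by protoc would then be used roughly like this:

// Sketch of using a protobuf-generated message from C++.
// Assume a hypothetical point.proto containing:
//
//   syntax = "proto2";
//   package demo;
//   message Point {
//     optional double x = 1;
//     optional double y = 2;
//   }
//
#include <iostream>
#include <string>
#include "point.pb.h"  // generated header (assumed)

int main() {
  demo::Point p;
  p.set_x(1.0);
  p.set_y(2.0);

  // Serialize to a byte string for transport or storage.
  std::string bytes;
  p.SerializeToString(&bytes);

  // Parse it back on the receiving side. Fields added in a newer schema
  // version are simply ignored by older readers, which is what provides
  // the cross-version compatibility mentioned above.
  demo::Point q;
  q.ParseFromString(bytes);
  std::cout << q.x() << ", " << q.y() << std::endl;
  return 0;
}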

Hardware architecture

The required hardware on Apollo 2.5 is shown in the following table:

Peripherals include the following:

The hardware architecture is shown in the figure below:

Software architecture

The software architecture of the Apollo platform is shown below:

The core software modules running in Apollo include:

  • Perception: The Perception module identifies the world around the autonomous vehicle. There are two important sub-modules in the Perception module: obstacle detection and traffic light detection.
  • Prediction: Prediction module predicts the future movement trajectory of perceived obstacles.
  • Routing: The Routing module tells the autonomous vehicle how to reach its destination through a series of lanes or roads.
  • Planning: The Planning module plans the space-time trajectory of autonomous vehicles.
  • Control: The Control module executes the planned space-time trajectory by generating control commands such as throttle, brake, and steering.
  • CanBus: The interface through which control commands are passed to the vehicle hardware. It also passes chassis information back to the software system.
  • Hd-map: Provides information about the specific structure of the road.
  • Localization: This module uses various information sources such as GPS, LiDAR and IMU to estimate the location of the autonomous vehicle.

The interaction structure of these modules is shown below:

Each module runs as an independent CarOS-based ROS node. Each module node publishes and subscribes to certain topics: subscribed topics serve as data input, while published topics serve as data output.

For information on the system architecture of the Apollo platform, read “How to Understand Architecture and Workflow”.

From this document we can see:

  • The autonomous vehicle is controlled by the planning engine through the Controller Area Network (CAN) bus.
  • For computational efficiency, the Localization, Perception, and Planning modules work together as independent input and output sources via P2P communication.
  • Reading the configuration files in each module’s ${MODULE_NAME}/conf directory gives the basic information about which topics that module subscribes to and publishes.
  • Each module starts from its Init interface and begins by registering callbacks.
  • All of a module’s adapters are registered in AdapterManager::Init. A snippet of this function is shown below:
void AdapterManager::Init(const AdapterManagerConfig &configs) {
  if (Initialized()) {
    return;
  }
  instance()->initialized_ = true;
  if (configs.is_ros()) {
    instance()->node_handle_.reset(new ros::NodeHandle());
  }
  for (const auto &config : configs.config()) {
    switch (config.type()) {
      case AdapterConfig::POINT_CLOUD:
        EnablePointCloud(FLAGS_pointcloud_topic, config);
        break;
      case AdapterConfig::GPS:
        EnableGps(FLAGS_gps_topic, config);
        break;
      case AdapterConfig::IMU:
        EnableImu(FLAGS_imu_topic, config);
        break;
      case AdapterConfig::RAW_IMU:
        EnableRawImu(FLAGS_raw_imu_topic, config);
        break;
      case AdapterConfig::CHASSIS:
        EnableChassis(FLAGS_chassis_topic, config);
        break;
      case AdapterConfig::LOCALIZATION:
        EnableLocalization(FLAGS_localization_topic, config);
        break;
      case AdapterConfig::PERCEPTION_OBSTACLES:
        EnablePerceptionObstacles(FLAGS_perception_obstacle_topic, config);
        break;
      case AdapterConfig::TRAFFIC_LIGHT_DETECTION:
        EnableTrafficLightDetection(FLAGS_traffic_light_detection_topic, config);
        break;
      // ...

Here is some analysis of the main core modules in the system.

Core modules

The apollo/modules/ directory contains the source code for each module in the system.

Reading the source code, you can see that the classes of these core modules inherit from a common base class, ApolloApp, as shown below:

The structure of the ApolloApp class is shown below:

The main functions in this class are described as follows:

  • std::string Name() const: returns the module name.
  • apollo::common::Status Init(): the module’s initialization function, the first function called when the module starts.
  • apollo::common::Status Start(): the module’s start function.
  • int Spin(): the module’s entry point.
  • void Stop(): the module’s stop function.

The apollo_app.h header also defines a macro that makes it easy for each module to declare its main function:

#define APOLLO_MAIN(APP)                                        \
  int main(int argc, char **argv) {                             \
    google::InitGoogleLogging(argv[0]);                         \
    google::ParseCommandLineFlags(&argc, &argv, true);          \
    signal(SIGINT, apollo::common::apollo_app_sigint_handler);  \
    APP apollo_app_;                                            \
    ros::init(argc, argv, apollo_app_.Name());                  \
    apollo_app_.Spin();                                         \
    return 0;                                                   \
  }
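Putting the pieces together, a new module built on this base class would look roughly like the following sketch. The module name, namespace and header paths here are assumptions for illustration, not an actual Apollo module:

// Hypothetical "hello" module, sketched against the ApolloApp interface
// listed in the table above (not an actual Apollo module).
#include <string>
#include "modules/common/apollo_app.h"  // header path assumed

namespace apollo {
namespace hello {

class Hello : public apollo::common::ApolloApp {
 public:
  std::string Name() const override { return "hello"; }

  apollo::common::Status Init() override {
    // Register adapters / callbacks here.
    return apollo::common::Status::OK();
  }

  apollo::common::Status Start() override {
    // Start timers or worker threads here.
    return apollo::common::Status::OK();
  }

  void Stop() override {}
};

}  // namespace hello
}  // namespace apollo

// Expands into main(): init logging and flags, init ROS, then Spin().
APOLLO_MAIN(apollo::hello::Hello);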

The root directory of each module contains a readme.md file that describes the module. We can use this as an entry point to understand the implementation of the module.

Perception

Module introduction

The Perception module uses the front-facing camera and front radar to identify the Closest In-Path Vehicle (CIPV) and keep a safe distance from it. The submodule also predicts obstacle motion and position information (for example, heading and speed). Apollo 2.5 supports high-speed autonomous driving on highways without relying on any map. Deep network algorithms are used to process the image data, and the performance of the deep network will improve over time as more data is collected.

Module input:

  • Radar data
  • Image data
  • External parameters for radar sensor calibration (from YAML file)
  • External and internal parameters for front camera calibration (from YAML file)
  • Vehicle velocity and angular velocity

Module output:

  • 3D obstacle tracks with heading, speed and classification information
  • Lane marker information with fitted curve parameters, as well as spatial and semantic information

Module analysis

The Perception module needs to quickly extract two types of information from its inputs: roads and objects. The deep network is based on the YOLO algorithm [1][2].

Apollo 2.5 does not support roads with high curvature or without lane markings, including local roads and intersections. The perception module is based on visual detection using a deep network trained with limited data. So until a better network is released, drivers should drive carefully and always be ready to disengage autonomous driving by taking over and steering the wheel in the right direction.

  • Recommended roads
    • Roads with clear white lane lines on both sides
  • Roads not recommended
    • Roads with high curvature
    • Roads without lane line markings
    • Intersections
    • Butted or dashed lane lines
    • Public roads

For objects, they are divided into static objects and dynamic objects. Static objects include roads and traffic lights. Dynamic objects include motor vehicles, bicycles, pedestrians, animals, etc.

To keep the vehicle in the lane, a series of modules are needed, and the flow chart is as follows:

The Perception module registers a series of classes in its Init function to work properly after the module is started, with the following code:

void Perception::RegistAllOnboardClass() {
  /// regist sharedata
  RegisterFactoryLidarObjectData();
  RegisterFactoryRadarObjectData();
  RegisterFactoryCameraObjectData();
  RegisterFactoryCameraSharedData();
  RegisterFactoryCIPVObjectData();
  RegisterFactoryLaneSharedData();
  RegisterFactoryFusionSharedData();
  traffic_light::RegisterFactoryTLPreprocessingData();

  /// regist subnode
  RegisterFactoryLidarProcessSubnode();
  RegisterFactoryRadarProcessSubnode();
  RegisterFactoryCameraProcessSubnode();
  RegisterFactoryCIPVSubnode();
  RegisterFactoryLanePostProcessingSubnode();
  RegisterFactoryAsyncFusionSubnode();
  RegisterFactoryFusionSubnode();
  RegisterFactoryMotionService();
  lowcostvisualizer::RegisterFactoryVisualizationSubnode();
  traffic_light::RegisterFactoryTLPreprocessorSubnode();
  traffic_light::RegisterFactoryTLProcSubnode();
}

We can use this as an entry point to understand the logic of each sub-module.

Take RegisterFactoryLidarProcessSubnode as an example.

This function does not actually exist anywhere in the code; its definition is generated by a macro. The relevant code is as follows:

Lidar (also written LIDAR, LiDAR, or LADAR) stands for Light Detection And Ranging.

// /modules/perception/onboard/subnode.h
#define REGISTER_SUBNODE(name) REGISTER_CLASS(Subnode, name)

// /modules/perception/lib/base/registerer.h
#define REGISTER_CLASS(clazz, name)                                          \
  class ObjectFactory##name : public apollo::perception::ObjectFactory {     \
   public:                                                                    \
    virtual ~ObjectFactory##name() {}                                         \
    virtual perception::Any NewInstance() {                                   \
      return perception::Any(new name());                                     \
    }                                                                         \
  };                                                                          \
  inline void RegisterFactory##name() {                                       \
    perception::FactoryMap &map = perception::GlobalFactoryMap()[#clazz];     \
    if (map.find(#name) == map.end()) map[#name] = new ObjectFactory##name(); \
  }

The above macro is used in lidar_process_subnode.h.

REGISTER_SUBNODE(LidarProcessSubnode);

Expanding the macro therefore generates a class named ObjectFactoryLidarProcessSubnode, which inherits from apollo::perception::ObjectFactory, together with a function named RegisterFactoryLidarProcessSubnode.
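For clarity, hand-expanding REGISTER_SUBNODE(LidarProcessSubnode) according to the macro above yields roughly the following code:

// Hand-expanded form of REGISTER_SUBNODE(LidarProcessSubnode), i.e.
// REGISTER_CLASS(Subnode, LidarProcessSubnode), shown for illustration.
class ObjectFactoryLidarProcessSubnode
    : public apollo::perception::ObjectFactory {
 public:
  virtual ~ObjectFactoryLidarProcessSubnode() {}
  virtual perception::Any NewInstance() {
    return perception::Any(new LidarProcessSubnode());
  }
};

inline void RegisterFactoryLidarProcessSubnode() {
  perception::FactoryMap &map = perception::GlobalFactoryMap()["Subnode"];
  if (map.find("LidarProcessSubnode") == map.end())
    map["LidarProcessSubnode"] = new ObjectFactoryLidarProcessSubnode();
}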

Prediction

Module introduction

The Prediction module receives obstacle information from the Perception module, including position, heading, speed and acceleration, and generates predicted trajectories with probabilities for those obstacles.

Module input:

  • Obstacle information from the Perception module
  • Location information from the Localization module

Module output:

  • Predicted trajectories of obstacles

Module analysis

Three callbacks are registered in the Prediction module’s Init function to receive updates from other modules:

AdapterManager::AddLocalizationCallback(&Prediction::OnLocalization, this);
AdapterManager::AddPlanningCallback(&Prediction::OnPlanning, this);
AdapterManager::AddPerceptionObstaclesCallback(&Prediction::RunOnce, this);

The most important one here is the Prediction::RunOnce function. This function contains the main logic of the Prediction module, which fires when a new obstacle message is received.

There are three important sub-modules in the Prediction module.

The first is the Container, which stores the data retrieved from subscribed channels, including:

  • Perceived obstacle information
  • Vehicle position information
  • Vehicle Planning information

The second is the Evaluator, which predicts the heading and speed of a specified obstacle. There are three types of evaluators:

  • Cost evaluator: probability is calculated by a set of cost functions
  • MLP evaluator: probability is calculated using an MLP model
  • RNN evaluator: probability is calculated using an RNN model

Evaluators are managed by the EvaluatorManager class. The structure of the Evaluator classes is shown below:

The third important submodule in the Prediction module is Predictor. It is used to predict the trajectory of obstacles.

Different obstacles move differently, so there are many types of Predictor included in the implementation, as shown in the following figure.

Similarly, there is a PredictorManager to manage Predictor.
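The manager classes essentially map each obstacle type to a concrete implementation. The following is an illustrative-only sketch of that pattern; the class names and interfaces are hypothetical and heavily simplified, not Apollo’s actual EvaluatorManager/PredictorManager API:

// Illustrative sketch of a "manager selects an implementation per obstacle
// type" pattern. Names and types here are hypothetical.
#include <iostream>
#include <map>
#include <memory>
#include <string>

class Predictor {
 public:
  virtual ~Predictor() = default;
  virtual std::string Predict() const = 0;  // would return a trajectory
};

class LaneSequencePredictor : public Predictor {
 public:
  std::string Predict() const override { return "trajectory along lane sequence"; }
};

class FreeMovePredictor : public Predictor {
 public:
  std::string Predict() const override { return "free-move trajectory"; }
};

enum class ObstacleType { VEHICLE, PEDESTRIAN };

class PredictorManager {
 public:
  PredictorManager() {
    predictors_[ObstacleType::VEHICLE] = std::make_unique<LaneSequencePredictor>();
    predictors_[ObstacleType::PEDESTRIAN] = std::make_unique<FreeMovePredictor>();
  }
  const Predictor* GetPredictor(ObstacleType type) const {
    auto it = predictors_.find(type);
    return it == predictors_.end() ? nullptr : it->second.get();
  }

 private:
  std::map<ObstacleType, std::unique_ptr<Predictor>> predictors_;
};

int main() {
  PredictorManager manager;
  std::cout << manager.GetPredictor(ObstacleType::VEHICLE)->Predict() << std::endl;
  return 0;
}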

Routing

Module introduction

The Routing module generates navigation information based on the request.

Module input:

  • Map data
  • Requests, including: start and end locations

Module output:

  • Route navigation information

Module analysis

The internal structure of Routing module is shown as follows:

The input to the Routing module is map data and navigation requests, so its Init function revolves around this logic:

apollo::common::Status Routing::Init() {
  const auto routing_map_file = apollo::hdmap::RoutingMapFile();
  AINFO << "Use routing topology graph path: " << routing_map_file;
  navigator_ptr_.reset(new Navigator(routing_map_file));
  CHECK(common::util::GetProtoFromFile(FLAGS_routing_conf_file, &routing_conf_))
      << "Unable to load routing conf file: " + FLAGS_routing_conf_file;

  AINFO << "Conf file: " << FLAGS_routing_conf_file << " is loaded.";

  hdmap_ = apollo::hdmap::HDMapUtil::BaseMapPtr();
  CHECK(hdmap_) << "Failed to load map file:" << apollo::hdmap::BaseMapFile();

  AdapterManager::Init(FLAGS_routing_adapter_config_filename);
  AdapterManager::AddRoutingRequestCallback(&Routing::OnRoutingRequest, this);
  return apollo::common::Status::OK();
}

The code focuses on the following three areas:

  • apollo::hdmap::RoutingMapFile() contains the HD map data.
  • Navigator is responsible for navigation; it is not hard to see that this class is the core of the module.
  • Routing::OnRoutingRequest is the callback function that receives navigation requests.

In Routing::OnRoutingRequest, the main work is to search for a navigation path through Navigator::SearchRoute.

void Routing::OnRoutingRequest(const RoutingRequest& routing_request) {
  AINFO << "Get new routing request:" << routing_request.DebugString();
  RoutingResponse routing_response;
  apollo::common::monitor::MonitorLogBuffer buffer(&monitor_logger_);
  const auto& fixed_request = FillLaneInfoIfMissing(routing_request);
  if (!navigator_ptr_->SearchRoute(fixed_request, &routing_response)) {
    AERROR << "Failed to search route with navigator.";
    buffer.WARN("Routing failed! " + routing_response.status().msg());
    return;
  }
  buffer.INFO("Routing success!");
  AdapterManager::PublishRoutingResponse(routing_response);
  return;
}

Currently, navigation in Apollo 2.5 is based on the A* algorithm, an algorithm for finding the lowest-cost path between nodes in a graph. It combines the advantages of best-first search and Dijkstra’s algorithm: it is guaranteed to find an optimal path (with respect to the evaluation function) while improving efficiency through heuristic search.

While A* runs, multiple paths are explored. When an obstacle is encountered, the points along that path are marked as requiring no further exploration (the solid points in the figure), and exploration continues from the remaining hollow points until the optimal path is found.

The following figure dynamically describes the algorithm process of A* algorithm to find the target path.
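To complement the figure, here is a minimal grid-based A* sketch that illustrates the idea. It is for illustration only; Apollo’s router searches a lane-level topology graph built from the HD map rather than a plain grid:

// Minimal grid-based A* sketch. f = g (cost so far) + h (admissible
// estimate to the goal); cells blocked by an obstacle are never expanded.
#include <cstdio>
#include <cstdlib>
#include <queue>
#include <vector>

struct Node { int x, y; double f; };
struct Cmp { bool operator()(const Node& a, const Node& b) const { return a.f > b.f; } };

int main() {
  const int W = 8, H = 8;
  int grid[H][W] = {};                      // 1 = obstacle, 0 = free cell
  for (int y = 1; y < 7; ++y) grid[y][4] = 1;  // a wall to route around

  const int sx = 0, sy = 0, gx = 7, gy = 7;
  std::vector<double> g_cost(W * H, 1e18);
  auto heuristic = [&](int x, int y) { return std::abs(gx - x) + std::abs(gy - y); };

  std::priority_queue<Node, std::vector<Node>, Cmp> open;
  g_cost[sy * W + sx] = 0;
  open.push({sx, sy, static_cast<double>(heuristic(sx, sy))});

  const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
  while (!open.empty()) {
    Node cur = open.top();
    open.pop();
    if (cur.x == gx && cur.y == gy) break;  // goal reached
    for (int k = 0; k < 4; ++k) {
      int nx = cur.x + dx[k], ny = cur.y + dy[k];
      if (nx < 0 || ny < 0 || nx >= W || ny >= H || grid[ny][nx]) continue;
      double ng = g_cost[cur.y * W + cur.x] + 1;  // uniform step cost
      if (ng < g_cost[ny * W + nx]) {
        g_cost[ny * W + nx] = ng;
        open.push({nx, ny, ng + heuristic(nx, ny)});
      }
    }
  }
  std::printf("cost of best path: %.0f\n", g_cost[gy * W + gx]);
  return 0;
}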

Planning

Module introduction

The Planning module calculates a safe and comfortable trajectory for the control module to execute, based on localization information, vehicle state (position, speed, acceleration, chassis), the map, routing, perception and prediction.

The current system implementation includes four types of planners:

  • RTKReplayPlanner (since Apollo 1.0): the RTK replay planner loads a recorded trajectory at initialization and sends the appropriate trajectory segment based on the current system time and vehicle position.
  • EMPlanner (since Apollo 1.5; EM is short for Expectation Maximization): the EM planner calculates driving decisions and trajectories based on the map, routing and obstacles. A dynamic programming (DP) based approach first determines the rough path and speed profile, and a quadratic programming (QP) based approach then further optimizes them to obtain a smooth trajectory.
  • LatticePlanner: a lattice (grid) planner.
  • NaviPlanner: a planner based on real-time relative maps. It uses the vehicle’s FLU (Front-Left-Up) coordinate system to accomplish cruising, following, overtaking, approaching, lane changing and stopping tasks.

Module input:

  • RTK Replay planner:
    • Localization
    • Recorded RTK trajectories
  • EM planner:
    • Localization
    • Perception
    • Prediction
    • HD Map
    • Routing

Module output:

  • A collision-free and comfortable trajectory for the control module to execute

Module analysis

The Planning module registers all planners in its Init function and then decides which planner to use based on the configuration file. Currently, the EM planner is the one configured.

The Planning module sets up a timer in its Start function to run its periodic task:

Status Planning::Start() {
  timer_ = AdapterManager::CreateTimer(
      ros::Duration(1.0 / FLAGS_planning_loop_rate), &Planning::OnTimer, this);
  // ...

Planning::OnTimer mainly calls RunOnce(), which contains the core logic of the Planning module. Currently, the FLAGS_planning_loop_rate value is 10, meaning the Planning module runs at a frequency of 10 times per second.

In the Planning::Plan function, the trajectory is calculated by the configured planner and the result is published externally. The key code is as follows:

Status Planning::Plan(const double current_time_stamp,
                      const std::vector<TrajectoryPoint>& stitching_trajectory,
                      ADCTrajectory* trajectory_pb) {
  auto* ptr_debug = trajectory_pb->mutable_debug();
  if (FLAGS_enable_record_debug) {
    ptr_debug->mutable_planning_data()->mutable_init_point()->CopyFrom(
        stitching_trajectory.back());
  }

  auto status = planner_->Plan(stitching_trajectory.back(), frame_.get());

  ExportReferenceLineDebug(ptr_debug);

  const auto* best_ref_info = frame_->FindDriveReferenceLineInfo();
  if (!best_ref_info) {
    std::string msg("planner failed to make a driving plan");
    AERROR << msg;
    if (last_publishable_trajectory_) {
      last_publishable_trajectory_->Clear();
    }
    return Status(ErrorCode::PLANNING_ERROR, msg);
  }

  ptr_debug->MergeFrom(best_ref_info->debug());
  trajectory_pb->mutable_latency_stats()->MergeFrom(
      best_ref_info->latency_stats());
  // set right of way status
  trajectory_pb->set_right_of_way_status(best_ref_info->GetRightOfWayStatus());
  for (const auto& id : best_ref_info->TargetLaneId()) {
    trajectory_pb->add_lane_id()->CopyFrom(id);
  }

  best_ref_info->ExportDecision(trajectory_pb->mutable_decision());

  // ...

  last_publishable_trajectory_->PrependTrajectoryPoints(
      stitching_trajectory.begin(), stitching_trajectory.end() - 1);

  for (size_t i = 0; i < last_publishable_trajectory_->NumOfPoints(); ++i) {
    if (last_publishable_trajectory_->TrajectoryPointAt(i).relative_time() >
        FLAGS_trajectory_time_high_density_period) {
      break;
    }
    ADEBUG << last_publishable_trajectory_->TrajectoryPointAt(i)
                  .ShortDebugString();
  }

  last_publishable_trajectory_->PopulateTrajectoryProtobuf(trajectory_pb);

  best_ref_info->ExportEngageAdvice(trajectory_pb->mutable_engage_advice());

  return status;
}

Control

Module introduction

The Control module uses different control algorithms to provide a comfortable driving experience based on the planned trajectory and the current state of the car. The Control module can work in both normal mode and navigation mode.

Module input:

  • Planned trajectory
  • Vehicle state
  • Localization information
  • Dreamview AUTO mode change request

Module output:

  • Control commands (steering, throttle, brake) to chassis

Module analysis

The main logic of the Control module is also executed on a Timer. In the timer function Control::OnTimer, commands are generated and sent:

void Control::OnTimer(const ros::TimerEvent &) {
  double start_timestamp = Clock::NowInSeconds();

  if (FLAGS_is_control_test_mode && FLAGS_control_test_duration > 0 &&
      (start_timestamp - init_time_) > FLAGS_control_test_duration) {
    AERROR << "Control finished testing. exit";
    ros::shutdown();
  }

  ControlCommand control_command;

  Status status = ProduceControlCommand(&control_command);
  AERROR_IF(!status.ok()) << "Failed to produce control command:"
                          << status.error_message();

  double end_timestamp = Clock::NowInSeconds();

  if (pad_received_) {
    control_command.mutable_pad_msg()->CopyFrom(pad_msg_);
    pad_received_ = false;
  }

  const double time_diff_ms = (end_timestamp - start_timestamp) * 1000;
  control_command.mutable_latency_stats()->set_total_time_ms(time_diff_ms);
  control_command.mutable_latency_stats()->set_total_time_exceeded(
      time_diff_ms < control_conf_.control_period());
  ADEBUG << "control cycle time is: " << time_diff_ms << " ms.";
  status.Save(control_command.mutable_header()->mutable_status());

  SendCmd(&control_command);
}

The Control module has three built-in controllers, which are structured and described as follows:

  • LatController: LQR-based lateral controller that calculates the steering target. See Vehicle Dynamics and Control for details.
  • LonController: Longitudinal controller that calculates brake/throttle values.
  • MPCController: an MPC controller that combines the lateral and longitudinal controllers.
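To give a flavor of what the longitudinal control loop does, here is a toy speed-tracking PID sketch. It is purely illustrative and is not Apollo’s LonController, which is considerably more involved:

// Toy speed-tracking PID loop, included only to illustrate the kind of
// feedback control a longitudinal controller performs.
#include <cstdio>

struct Pid {
  double kp, ki, kd;
  double integral, prev_error;

  double Control(double error, double dt) {
    integral += error * dt;
    double derivative = (error - prev_error) / dt;
    prev_error = error;
    return kp * error + ki * integral + kd * derivative;
  }
};

int main() {
  Pid speed_pid{0.8, 0.1, 0.05, 0.0, 0.0};
  double speed = 0.0;            // current vehicle speed (m/s)
  const double target = 10.0;    // planned speed from the trajectory (m/s)
  const double dt = 0.1;         // control period (s)

  for (int step = 0; step < 50; ++step) {
    double cmd = speed_pid.Control(target - speed, dt);
    // Extremely simplified plant model: acceleration proportional to command.
    speed += cmd * dt;
    if (step % 10 == 0) std::printf("t=%.1fs speed=%.2f m/s\n", step * dt, speed);
  }
  return 0;
}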

Conclusion

This article has presented an investigation and analysis based on version 2.5 of the Apollo project.

As mentioned above, the current version 2.5 only targets L2 autonomous driving, while Baidu plans to achieve L3 autonomous driving by 2019 and L4 autonomous driving by 2021. The project can therefore be expected to develop very quickly over the coming years.

In addition, I had the opportunity to attend CES in Shanghai this year. Two self-driving models from Baidu were also on display at the exhibition:

One is a minibus.

The other is a small logistics vehicle.

I will continue to pay attention to the project in the future, and I will continue to share more information with you.

Resources and recommended readings

  • apolloauto.github.io
  • Apollo 1.0 Preliminary Guide
  • Baidu’s Open Source Self-Driving Platform Apollo in Detail
  • CES 2018 Baidu Apollo 2.0 self-driving car test ride in Sunnyvale, USA
  • Apollo Meetup 10/25/17
  • ROS Exploration and Practice in Autonomous Driving (PDF)
  • Apollo Capability Openness and Resource Openness (PDF)
  • End-to-End Autonomous Driving Scheme Based on Deep Learning (PDF)
  • ROS 101: Intro to the Robot Operating System
  • You Only Look Once: Unified, Real-Time Object Detection
  • YOLO9000: Better, Faster, Stronger