Introduction to TurtleBot

TurtleBot is familiar to anyone who has spent time with ROS. It is a low-cost, open-source personal robot kit that is officially supported by ROS and serves as the ROS standard platform. The TurtleBot series is designed to be easy to buy, build, and assemble, and it can be further developed by downloading the official SDK from the ROS Wiki, making it an essential entry-level mobile robot platform. Three generations of TurtleBot have been released so far. In this article we show how to implement lane detection in Gazebo, using the TurtleBot3 as the carrier.

Preparation

To make the TurtleBot3 drive itself, you first need to set up some environments. This test is based on ROS Noetic on Ubuntu 20.04.

1. Install TurtleBot3

Run the following commands (the repository URLs are abbreviated here as in the original; they all live under github.com/ROBOTIS-GIT):

cd ~/catkin_ws/src/
git clone github.com/ROBOTIS-GIT…
cd turtlebot3
git clone github.com/ROBOTIS-GIT…
git clone -b noetic-devel github.com/ROBOTIS-GIT…
git clone github.com/ROBOTIS-GIT…
sudo apt install ros-noetic-image-transport ros-noetic-cv-bridge ros-noetic-vision-opencv python3-opencv libopencv-dev ros-noetic-image-proc
cd ~/catkin_ws
catkin_make

When compilation finishes, we can try starting the simulated environment. Open a terminal and enter:

roslaunch turtlebot3_gazebo turtlebot3_autorace_2020.launch

If the scene shown in the picture below appears, the simulation environment has been installed successfully.

2. Camera calibration

Autonomous driving relies heavily on visual processing, so camera calibration plays an important role in successfully completing the mission. In the Gazebo simulation environment it is not necessary to calibrate the camera's intrinsic parameters, only its extrinsic parameters. Note: on a real robot, both the intrinsic and extrinsic parameters need to be calibrated carefully, following the proper procedure.

First, start the simulation environment:

roslaunch turtlebot3_gazebo turtlebot3_autorace_2020.launch

In a new terminal (terminator, a very handy terminal emulator, is recommended here), run:

roslaunch turtlebot3_autorace_camera intrinsic_camera_calibration.launch

In another new terminal, run:

roslaunch turtlebot3_autorace_camera extrinsic_camera_calibration.launch mode:=calibration

Then open rqt and choose Plugins > Visualization > Image View. Create two windows (repeat twice); in the left window select the /camera/image_extrinsic_calib/compressed topic and in the right window the /camera/image_projected_compensated topic, as shown in the figure below.

Then open another terminal, enter rosrun rqt_reconfigure rqt_reconfigure, and select /camera/image_projection and /camera/image_compensation_projection on the left side of the window. Adjust the parameters while observing the two rqt image windows, until the red wireframe in the left window lies inside the road and the right window shows a top view of the road, as shown in the figure below. The corresponding parameters are shown in the figure.

When the parameters are set, do not click Save. Instead, find the source files from which the parameters are loaded and modify them manually. In this article, both parameter files are located in the ~/catkin_ws/src/turtlebot3/turtlebot3_autorace_2020/turtlebot3_autorace_camera/calibration/extrinsic_calibration/ directory. Find them there and update the parameters: the compensation.yaml contents and the projection.yaml contents are shown in the figures.
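To make the extrinsic step less abstract, here is a small, illustrative Python/OpenCV sketch of what the projection stage amounts to: warping a trapezoidal region of the camera image into a rectangular top view. This is not the turtlebot3_autorace_camera code itself, and the corner coordinates are hypothetical placeholders for the values you tune in rqt_reconfigure.

# Minimal sketch of the "projection" idea: warp a trapezoid of road
# pixels into a rectangular bird's-eye view. Corner coordinates are
# hypothetical; in the real setup they correspond to the projection
# parameters adjusted in rqt_reconfigure.
import cv2
import numpy as np

def project_to_ground(image):
    h, w = image.shape[:2]
    # Trapezoid in the camera image assumed to cover the road (made-up values)
    src = np.float32([[w * 0.30, h * 0.55],   # top-left
                      [w * 0.70, h * 0.55],   # top-right
                      [w * 0.95, h * 0.95],   # bottom-right
                      [w * 0.05, h * 0.95]])  # bottom-left
    # Rectangle it should map to in the top view
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    homography = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, homography, (w, h))

if __name__ == "__main__":
    frame = cv2.imread("camera_frame.png")  # any saved camera frame
    if frame is not None:
        cv2.imwrite("projected.png", project_to_ground(frame))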

At this point, camera parameter adjustment in the Gazebo simulation environment is complete.

3. Lane detection

Once the camera is calibrated, the next step is lane detection. Note: lane detection expects a yellow lane on the left and a white lane on the right, so make sure the yellow lane is on the robot's left. Open a terminal and enter:

roslaunch turtlebot3_gazebo turtlebot3_autorace_2020.launch

In a new terminal, type:

roslaunch turtlebot3_autorace_camera intrinsic_camera_calibration.launch

In another new terminal, run:

roslaunch turtlebot3_autorace_camera extrinsic_camera_calibration.launch

roslaunch turtlebot3_autorace_detect detect_lane.launch mode:=calibration

In the rqt image view, select the /detect/image_lane/compressed topic to see the result of lane detection, as shown in the figure below. Select the /detect/image_yellow_lane_marker/compressed topic to see the filtered yellow line on the left, as shown below. This is not yet what we want — only the yellow lane on the left should be detected — so the parameters need adjusting. Select the /detect/image_white_lane_marker/compressed topic to see the white line on the right, as shown below. That is not what we want either; the parameters need adjusting.

In a new terminal, enter rosrun rqt_reconfigure rqt_reconfigure, select detect_lane on the left of the window, and adjust the parameters so that the yellow mask detects only the yellow lane and the white mask on the right becomes fuller. The final result is shown in the figure below, and the following figure shows the adjusted parameters. Then open the parameter source file and apply the values from the parameter list manually. In this article, the parameter file is located in ~/catkin_ws/src/turtlebot3/turtlebot3_autorace_2020/turtlebot3_autorace_detect/param/lane/; the lane.yaml file contents are shown in the figure.
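For intuition about what these parameters control, here is a minimal, illustrative Python/OpenCV sketch of the kind of HSV thresholding that detect_lane performs. It is not the node's actual code; the numeric bounds are hypothetical starting points standing in for the hue/saturation/lightness values you tune in rqt_reconfigure and save into lane.yaml.

# Illustrative HSV thresholding for the two lane colours.
# All numeric bounds are assumed starting points, not calibrated values.
import cv2
import numpy as np

def lane_masks(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Yellow lane: hue roughly 20-35 in OpenCV's 0-179 hue range (assumed)
    yellow_mask = cv2.inRange(hsv, np.array([20, 100, 100]), np.array([35, 255, 255]))
    # White lane: any hue, low saturation, high lightness/value (assumed)
    white_mask = cv2.inRange(hsv, np.array([0, 0, 180]), np.array([179, 70, 255]))
    return yellow_mask, white_mask

if __name__ == "__main__":
    frame = cv2.imread("projected.png")  # the ground-projected camera image
    if frame is not None:
        yellow, white = lane_masks(frame)
        cv2.imwrite("yellow_mask.png", yellow)
        cv2.imwrite("white_mask.png", white)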

Tips: Line filter calibration can be difficult in practice because of the physical environment, such as the brightness of the light in the room. Calibrate the hue values first, from low to high; then calibrate the saturation values from low to high; and finally the lightness values from low to high. However, there is an automatic adjustment in the source code, so calibrating the lightness value is pointless — just set it to 255. The parameters mentioned above are calibrated with reference to the HSV color model. At this point, lane detection calibration is complete and the simulation test can begin.
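The automatic adjustment mentioned above can be pictured as a simple feedback loop on the lightness threshold. The following is a hedged sketch of that idea, not the actual detect_lane implementation; the pixel-count targets and step size are made up for illustration.

# Sketch: nudge the lower lightness ("value") threshold up or down
# depending on how many pixels the white mask caught last frame.
def adjust_lightness_low(current_low, mask_pixel_count,
                         too_many=6000, too_few=1500, step=5):
    if mask_pixel_count > too_many:
        # Mask is catching too much; be stricter.
        return min(current_low + step, 250)
    if mask_pixel_count < too_few:
        # Mask is catching too little; be more permissive.
        return max(current_low - step, 0)
    return current_low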

4. Simulation

First close all terminals, then start the test:

roslaunch turtlebot3_gazebo turtlebot3_autorace_2020.launch

roslaunch turtlebot3_autorace_camera intrinsic_camera_calibration.launch

roslaunch turtlebot3_autorace_camera extrinsic_camera_calibration.launch

roslaunch turtlebot3_autorace_detect detect_lane.launch

roslaunch turtlebot3_autorace_driving turtlebot3_autorace_control_lane.launch
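Once detection is running, control_lane steers the robot along the detected lane. As a rough mental model (not the package's actual node), a minimal lane follower might look like the sketch below; the topic names (/detect/lane, /cmd_vel), message types, and gains are assumptions for illustration only.

#!/usr/bin/env python3
# Hedged sketch of a lane-following controller: read the detected lane
# centre, steer proportionally to its offset from the image centre, and
# publish velocity commands.
import rospy
from std_msgs.msg import Float64
from geometry_msgs.msg import Twist

IMAGE_CENTER = 500.0   # assumed horizontal centre of the projected image (px)
ANGULAR_GAIN = 0.0025  # hypothetical proportional gain
LINEAR_SPEED = 0.05    # m/s, kept low for the simulation

class SimpleLaneFollower:
    def __init__(self):
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/detect/lane", Float64, self.on_lane_center)

    def on_lane_center(self, msg):
        error = msg.data - IMAGE_CENTER  # positive -> lane centre is to the right
        cmd = Twist()
        cmd.linear.x = LINEAR_SPEED
        cmd.angular.z = -ANGULAR_GAIN * error
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("simple_lane_follower")
    SimpleLaneFollower()
    rospy.spin()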

5. Conclusion

Autonomous driving is a future trend as well as a hot, cutting-edge technology. TurtleBot3's autonomous driving provides a low-cost, low-speed platform and method for autonomous-driving research and teaching, giving us a basic understanding of the field. Amu Lab is committed to cutting-edge IT technology education and intelligent equipment, making robot research and development more efficient!