
How does an autonomous vehicle perceive its surroundings?

2022-05-15 01:16:18 · Pingyuan Jun 2088


A self-driving car needs to see and hear its environment. To do so it relies on a variety of sensors; the most common are the following:
Lidar: measures distance from the time difference between emitting a laser pulse and receiving its reflection, using the speed of light. Because the laser's wavelength is short, lidar can reconstruct the surface of a scanned object. But because it must continuously and actively emit pulses, its power consumption is high. Lidar comes in two types: mechanical units, which scan by rotating continuously through 360 degrees, and phased-array units, which steer the beam by shifting the phase of multiple emitters and can also achieve 360-degree coverage. The latter has a longer service life and is the direction of future development.
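The time-of-flight calculation described above is simple enough to sketch directly (an illustrative helper, not any real lidar driver API):

```python
# Lidar ranging by time of flight: the pulse travels to the target and back,
# so the one-way distance is half the round-trip time times the speed of light.
C = 299_792_458.0  # speed of light in m/s

def lidar_distance(t_emit: float, t_receive: float) -> float:
    """Distance (m) from emit/receive timestamps (s) of a single pulse."""
    return C * (t_receive - t_emit) / 2.0

# A round trip of 1 microsecond corresponds to roughly 150 m.
d = lidar_distance(0.0, 1e-6)
print(f"{d:.1f} m")
```

A mechanical lidar repeats this measurement thousands of times per rotation, tagging each return with the mirror angle to build a 3D point cloud.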
Millimeter-wave radar: emits millimeter-wavelength radio waves and ranges targets in a way similar in principle to ultrasonic sensors. Because the wavelength is longer than that of a laser, it cannot reconstruct the surface of an object.
Camera: the most commonly used sensor. It emits nothing and only receives optical signals, so it is passive. However, it performs poorly at night and in rain or snow, and a single camera generally cannot measure depth, although RGB-D and binocular (stereo) depth cameras can make up for this deficiency.
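How a binocular camera recovers depth can be illustrated with the standard stereo triangulation formula, Z = f·B/d, where f is the focal length in pixels, B the baseline between the two lenses, and d the disparity (pixel shift of the same point between the two images). The numbers below are made-up example values:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a calibrated stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 12 cm baseline, 20 px disparity -> 4.2 m
z = stereo_depth(700.0, 0.12, 20.0)
print(f"{z:.1f} m")
```

Note the inverse relationship: nearby objects produce large disparities and are measured precisely, while distant objects produce tiny disparities, which is why stereo depth degrades with range.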

Navigation and positioning

While driving, a car can combine an electronic map (for example, Baidu Maps) with a GPS receiver to determine its current location to meter-level accuracy. But meter-level accuracy sometimes cannot distinguish a main road from a service road, or the upper and lower decks of a viaduct, so decimeter- and even centimeter-level accuracy is also needed. The "Qianxun" positioning service can reach decimeter level. AMap's high-precision maps are built by photographing street views with cameras; when localizing, the car compares the images from its own sensors against the high-precision map and measures the differences, achieving higher accuracy. Detection and positioning equipment can also be installed along the roadside, letting the car exchange position information with the surrounding network over 5G; this is more accurate still, but the investment cost is high, since it amounts to infrastructure construction.
The most popular approach at present is SLAM (simultaneous localization and mapping): building a map and localizing within it at the same time. It is widely used today in robot vacuums and drones and can achieve high accuracy, down to the centimeter level.

Multi-sensor fusion

The remaining problem is integrating multiple information sources. An autonomous vehicle carries many kinds of sensors, often several of each kind, mounted at different positions and angles. How can their information be fused?
1: First, different sensors produce different kinds of information, so it must be transformed into a unified spatial representation before it can be processed together.
2: Second, the information is perceived from different positions and angles, so the images require 3D reconstruction, which uses feature points in the images to rebuild the scene in three dimensions. This is also complicated; I discussed 3D reconstruction in a previous article. Nowadays AI techniques can perform the feature matching needed for 3D spatial reconstruction.
3: Different sensors sample at different frequencies, so their timestamps must be calibrated and synchronized.
4: Even after spatial and temporal calibration are complete, the information from different sensors may still be contradictory; for example, the camera may report a wall ahead while the lidar reports a slope. This stems from differences in sensor accuracy. Fusing sensors of different accuracy is where Kalman filter techniques come in, which I will not detail here.
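The core idea behind point 4 can be sketched with the simplest one-dimensional case: the Kalman measurement update combines two noisy estimates of the same quantity, weighting each by how trustworthy (low-variance) it is. The sensor variances below are invented for illustration:

```python
def fuse(mu1: float, var1: float, mu2: float, var2: float):
    """Fuse two noisy estimates of the same quantity (1D Kalman update).

    Returns the precision-weighted mean and its (reduced) variance:
    the more accurate sensor dominates the result.
    """
    k = var1 / (var1 + var2)          # Kalman gain
    mu = mu1 + k * (mu2 - mu1)        # corrected estimate
    var = (1.0 - k) * var1            # fused variance, always <= min(var1, var2)
    return mu, var

# Camera estimates an obstacle at 10.0 m (variance 4.0, i.e. imprecise);
# lidar estimates 10.4 m (variance 0.04, i.e. precise).
mu, var = fuse(10.0, 4.0, 10.4, 0.04)
# The fused estimate lands close to the lidar's reading, with lower variance.
```

A full Kalman filter alternates this update step with a prediction step based on the vehicle's motion model, but the weighting principle is exactly the one shown here.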

In short, through the combination of the sensors above, an autonomous vehicle can perceive its surrounding environment. This information is then delivered to the central processor for computation, which makes decisions and controls the driving route and path planning.

This article draws on teacher Li Yongle's videos; many thanks!

Copyright notice
Author: Pingyuan Jun 2088. Please include a link to the original when reprinting; thank you.
