
Environmental Perception

AV Control Loop

[Figure: Autonomous vehicle control loop]

The first stage in the autonomous vehicle control loop is environmental perception, which, together with localization data, gives the system an understanding of the current situation. Different types of sensors - camera, radar, lidar, ultrasonic and so on - each provide some form of recognition result that is passed to the ADAS controller for processing. The processing leads to a decision (the planning stage), and that decision is then carried out (the control stage). Once the action has been executed, the system needs to verify that it produced the expected change in the environment, so the whole cycle starts again.
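As a rough illustration of this cycle, the sketch below walks through one perception-planning-control iteration. All class and function names here are illustrative stand-ins, not a real ADAS API, and the fusion and planning logic is reduced to placeholders.

```python
"""Minimal sketch of the perception -> planning -> control cycle.
All names are illustrative assumptions, not a real ADAS interface."""

from dataclasses import dataclass, field


@dataclass
class EnvironmentModel:
    ego_pose: tuple                               # (x, y, heading) from localization
    objects: list = field(default_factory=list)   # fused detections


def perceive(sensor_readings):
    """Combine per-sensor detections into one list (placeholder fusion)."""
    fused = []
    for reading in sensor_readings:
        fused.extend(reading)
    return fused


def plan(model):
    """Pick an action from the environment model (placeholder logic)."""
    return "brake" if model.objects else "keep_lane"


def control(action):
    """Apply the planned action to the actuators (stubbed as a print)."""
    print(f"executing: {action}")


def run_cycle(sensor_readings, ego_pose):
    model = EnvironmentModel(ego_pose=ego_pose, objects=perceive(sensor_readings))
    control(plan(model))
    # the next cycle re-senses the environment to check the effect of the action


if __name__ == "__main__":
    run_cycle([["pedestrian"], []], ego_pose=(0.0, 0.0, 0.0))
```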

It follows that if the environmental perception is flawed or incomplete, the ADAS controller may reach incorrect conclusions, which in turn lead to incorrect actions - in the worst case with severe consequences.

We also need to consider that the sensors must work under any condition: day and night; sun, fog, rain, snow, sleet; dry, wet or icy roads.

Sensor Fusion

A way to improve on the intelligence provided by the individual sensors is to combine their data, an approach generally referred to as sensor fusion. Examples include combining camera and lidar data, camera and radar, different kinds of radar, and so on.

The aim of the sensor processing is to provide as comprehensive an environmental model as possible. For the autonomous vehicle this ultimately means determining the "free" space: where the drivable area is and what limitations apply to it; which path to take within that area, essentially the geometry of all possible routes; which objects are moving on the drivable area or path; and other elements of the scene such as traffic lights, road markings and pedestrian direction (even where a pedestrian may be looking).
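One way to picture such an environmental model is as a plain data structure holding those elements. The sketch below is only an assumed layout to make the list above concrete; the field names and coordinate conventions are not a standard.

```python
"""Sketch of an environment model; field names are illustrative assumptions."""

from dataclasses import dataclass, field


@dataclass
class TrackedObject:
    kind: str               # "vehicle", "pedestrian", ...
    position: tuple         # (x, y) in vehicle coordinates, metres
    velocity: tuple         # (vx, vy) in m/s
    heading: float = 0.0    # e.g. a pedestrian's facing direction, radians


@dataclass
class EnvironmentModel:
    free_space: list                                       # polygon bounding the drivable area
    candidate_paths: list                                  # geometries of possible routes through it
    moving_objects: list = field(default_factory=list)     # TrackedObject instances
    traffic_lights: list = field(default_factory=list)     # state of each relevant light
    road_markings: list = field(default_factory=list)


model = EnvironmentModel(
    free_space=[(0, -3), (60, -3), (60, 3), (0, 3)],
    candidate_paths=[[(0, 0), (30, 0), (60, 1.5)]],
    moving_objects=[TrackedObject("pedestrian", (20.0, 2.5), (0.0, -1.0))],
)
```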

This is a constantly changing environment, so it is critical that data is provided in a timely manner. There are two main approaches to handling the sensor data: track-to-track fusion and raw sensor data fusion, described in the following sections.

 

Track-to-Track Fusion

[Figure: Track-to-track fusion]

In track-to-track fusion, stand-alone sensors process their data with predefined (tracking) algorithms, and each sensor's output to the system consists of a list of tracked objects. Fusion then essentially means combining these lists of tracked objects, from which conclusions can be drawn and passed to the system implementing the function. This solution puts less of a processing load on the central system, but it may also suffer from loss of information, since the only thing passed on is the tracked objects produced by each sensor's own algorithms.
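A minimal sketch of what "combining lists of tracked objects" can mean is shown below. It uses a simple nearest-neighbour association with a distance gate; real systems use far more sophisticated association and state estimation, and the track format here is an assumption for illustration only.

```python
"""Sketch of track-to-track fusion: merge two per-sensor object lists by
associating tracks that are close enough to be the same physical object."""

import math


def merge_track_lists(tracks_a, tracks_b, gate=2.0):
    """Tracks within `gate` metres are assumed to be the same object and
    their positions are averaged; unmatched tracks are kept as-is."""
    merged, used_b = [], set()
    for ta in tracks_a:
        best, best_d = None, gate
        for i, tb in enumerate(tracks_b):
            if i in used_b:
                continue
            d = math.dist(ta["pos"], tb["pos"])
            if d < best_d:
                best, best_d = i, d
        if best is None:
            merged.append(ta)                     # only sensor A saw this object
        else:
            tb = tracks_b[best]
            used_b.add(best)
            merged.append({"pos": tuple((a + b) / 2 for a, b in zip(ta["pos"], tb["pos"]))})
    merged.extend(tb for i, tb in enumerate(tracks_b) if i not in used_b)
    return merged


camera_tracks = [{"pos": (10.0, 1.2)}, {"pos": (25.0, -3.0)}]
radar_tracks = [{"pos": (10.4, 1.0)}, {"pos": (40.0, 0.0)}]
print(merge_track_lists(camera_tracks, radar_tracks))
```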

Raw Data Fusion

[Figure: Raw sensor data fusion]

In raw data fusion all sensor data is fed to the central system, which then handles both data fusion and tracking. This allows raw data from different sensors to be combined before any decision is made, so the appropriate action can be chosen based on the complete scenario. It is a more complete approach, but it also places much higher demands on latency, efficiency and processing power.
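A common low-level building block for this kind of fusion is projecting raw lidar points into the camera image so that point-level and pixel-level data can be combined before tracking. The sketch below shows that projection only; the camera intrinsics, the lidar-to-camera transform and the assumption that both sensors share the same axis convention are made-up illustrative values, not calibration data from any real setup.

```python
"""Sketch of one raw-data-fusion step: project raw lidar points into the
camera image plane. All calibration values are illustrative assumptions."""

import numpy as np

# assumed pinhole intrinsics and lidar-to-camera transform (identity rotation
# for the sketch, i.e. both sensors are taken to share the camera axis convention)
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                        # rotation lidar -> camera
t = np.array([0.0, -0.2, 0.1])       # translation lidar -> camera, metres


def project_lidar_to_image(points_lidar):
    """Return pixel coordinates for lidar points that lie in front of the camera."""
    pts_cam = points_lidar @ R.T + t          # transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]    # keep points with positive depth
    pix = (K @ pts_cam.T).T                   # perspective projection
    return pix[:, :2] / pix[:, 2:3]           # normalize by depth


points = np.array([[0.5, 0.0, 5.0], [-1.0, 0.3, 12.0], [0.0, 0.0, -2.0]])
print(project_lidar_to_image(points))
```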