Sensor Fusion's Role in Autonomous Vehicles

According to a recent CB Insights report, 46 corporations are working on autonomous vehicle (AV) technologies, and the AV market is expected to grow from $54B to $556B by 2026. Beyond its large market size, AVs carry a big promise to change society for the better. Technology is moving fast, and every company is trying to get a piece of this projected market and contribute to a safer and cleaner world.

What is Sensor Fusion?

Sensor fusion is the combining of data from different sensors to increase the quality of the outcome. For example, using both lidar and camera data to detect a pedestrian yields higher reliability than using a camera alone. Sensor fusion plays an important role in a multitude of today's technologies: computer vision, localisation, path planning and control, etc. All these technologies combined help an autonomous vehicle drive safely.

Tracking stationary and moving objects plays an important role in the autonomous vehicle ecosystem. The signals from multiple sensors are fused to determine an object's position, trajectory, and speed. Autonomous vehicles need multiple sensors because each sensor provides a different type of information about the tracked objects.

Types of Sensors used for Sensor Fusion

  • Lidar: Lidar sensors use pulses of infrared laser light to determine the distance to an object. A rotating unit mounted on top of the car sends out the pulses and measures the time they take to return. Lidar is a so-called 3D point cloud sensor, generating roughly 2 million points per second.

  • Radar: Radars emit radio waves to detect objects and measure their distance and relative speed. Radars have been in our vehicles for a long time, detecting vehicles in blind spots and helping to avoid collisions. They perform better on moving objects than on static ones.

  • Camera: The camera substitutes for the driver's vision. It is often used to understand the environment with artificial intelligence, classifying roads, pedestrians, signs, etc.

  • Ultrasonic sensors: These estimate the position of nearby static objects such as parked cars and are typically used while parking.

  • Odometry sensors: These determine the vehicle's speed by analysing the displacement of its wheels.
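The ranging sensors above (lidar, radar, ultrasonic) all rest on the same time-of-flight principle: distance is half the round-trip time multiplied by the wave's propagation speed. A minimal sketch, with illustrative timing values:

```python
# Time-of-flight ranging, the principle behind lidar, radar, and
# ultrasonic sensors: distance = (wave speed * round-trip time) / 2.
SPEED_OF_LIGHT = 299_792_458.0   # m/s (lidar, radar)
SPEED_OF_SOUND = 343.0           # m/s (ultrasonic, in air at 20 °C)

def tof_distance(round_trip_s, wave_speed):
    """Distance to the reflecting object, in metres."""
    return wave_speed * round_trip_s / 2.0

# A lidar return after 200 ns corresponds to ~30 m.
print(round(tof_distance(200e-9, SPEED_OF_LIGHT), 2))   # → 29.98
# An ultrasonic echo after 20 ms corresponds to ~3.4 m.
print(round(tof_distance(0.02, SPEED_OF_SOUND), 2))     # → 3.43
```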

What role does Sensor Fusion play in Autonomous Vehicles?

Sensor fusion plays a particularly vital role in adverse weather conditions such as heavy snow, fog, or rain, because different sensors perform differently under these conditions.

A camera, for instance, cannot detect objects as well in fog or at night as it can on a cloudless day. Radar, on the other hand, performs well in these situations and also provides an accurate distance to the object. Lidar works well only in good weather, whereas radar is a more robust technology that can also be used in adverse conditions; that said, lidar's accuracy is better than radar's. In short: each technology has its own pros and cons, and sensor fusion combines the best each has to offer.

Let's take an example: a radar signal tells the system that the car in the adjacent lane is driving at 65 km/h, but the lidar sensor estimates the speed of the same car at 60 km/h. In such cases of contradiction, a technique called filtering is used to fuse these data points with their history, and the system forms a conclusion. Estimating speed from multiple sensors is one example of sensor fusion; the same principle can be applied to measuring distance, detecting objects, and various other scenarios.
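A common filtering technique for exactly this situation is the Kalman filter, which weights each measurement by its uncertainty. A minimal one-dimensional sketch, where the prior and the noise variances are illustrative assumptions rather than real device specifications:

```python
# Minimal 1D Kalman-style fusion of two noisy speed measurements.
# Variances are illustrative assumptions, not real sensor specs.

def fuse(est, var, meas, meas_var):
    """Update an (estimate, variance) pair with one measurement."""
    k = var / (var + meas_var)      # Kalman gain: trust placed in the new data
    new_est = est + k * (meas - est)
    new_var = (1 - k) * var
    return new_est, new_var

# Prior from the track history: 62 km/h with variance 4.0 (assumed).
est, var = 62.0, 4.0
est, var = fuse(est, var, 65.0, 9.0)   # radar reading: 65 km/h, noisier
est, var = fuse(est, var, 60.0, 1.0)   # lidar reading: 60 km/h, more precise
print(round(est, 1))                   # → 60.8
```

Note how the fused estimate lands closer to the lidar reading, because the lidar was assigned the smaller variance.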

Combining all sensor data into one stream helps to eliminate blind spots. Secondly, in complex situations the combined data enables the system to react intelligently and thereby create a safer environment. Sensor fusion also helps to overcome the inaccuracies of any single sensor, also known as sensor noise. Sensor fusion is therefore an indispensable tool that promotes safety and reliability in an autonomous driving setting.


The main challenges for effective and efficient Sensor Fusion are:

  • Power consumption: A rise in the number of sensors used inevitably leads to an increase in power usage.

  • Coherency: The fusion algorithms must remain coherent even though different sensors operate at different frequencies.

  • Latency: The algorithms must be maximally efficient to enable the real-time, simultaneous use of many sensors.
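The coherency challenge largely comes down to aligning measurements taken at different rates. One simple, widely used approach (not necessarily the one Teraki uses; the stream layout here is an illustrative assumption) is nearest-timestamp matching within a tolerance:

```python
# Sketch: align a 10 Hz lidar stream with a 30 Hz camera stream by
# nearest-timestamp matching. Timestamps are in seconds.
from bisect import bisect_left

def nearest_match(lidar_ts, camera_ts, tolerance=0.05):
    """For each lidar timestamp, find the closest camera timestamp
    within `tolerance` seconds. camera_ts must be sorted."""
    pairs = []
    for t in lidar_ts:
        i = bisect_left(camera_ts, t)
        # The closest candidate is either just before or just after t.
        candidates = camera_ts[max(i - 1, 0):i + 1]
        best = min(candidates, key=lambda c: abs(c - t))
        if abs(best - t) <= tolerance:
            pairs.append((t, best))
    return pairs

lidar = [0.0, 0.1, 0.2]                                      # 10 Hz
camera = [0.0, 0.033, 0.066, 0.1, 0.133, 0.166, 0.2]         # 30 Hz
print(nearest_match(lidar, camera))
# → [(0.0, 0.0), (0.1, 0.1), (0.2, 0.2)]
```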

This is where Teraki brings in its expertise. Teraki’s intelligent and efficient data processing algorithms run in real-time on embedded devices and make Sensor Fusion highly reliable, cost-efficient and fast.

How is Teraki helping?

Teraki efficiently synchronizes depth information from lidar and camera sensors in real time. This makes sensor fusion very efficient and feasible even on low-spec hardware.

Teraki's state-of-the-art clustering algorithm is designed for lidar, time-of-flight, and RGBD sensors. It works as follows: an importance score is assigned to each cluster based on the distance of the cluster's centroid from the sensor.
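Teraki's exact scoring function is not public; purely as an illustration, a centroid-distance importance score could be sketched like this (the linear falloff and the `max_range` parameter are assumptions):

```python
# Sketch: score point-cloud clusters by centroid distance from the
# sensor (closer clusters get higher importance). The linear falloff
# and max_range are illustrative assumptions.
import numpy as np

def cluster_importance(clusters, max_range=100.0):
    """clusters: list of (N, 3) point arrays in sensor coordinates,
    with the sensor at the origin. Returns a score in [0, 1] each."""
    scores = []
    for pts in clusters:
        centroid = pts.mean(axis=0)
        dist = np.linalg.norm(centroid)
        scores.append(max(0.0, 1.0 - dist / max_range))
    return scores

near = np.array([[5.0, 0.0, 0.0], [5.5, 0.2, 0.1]])     # ~5 m away
far = np.array([[80.0, 1.0, 0.0], [81.0, -1.0, 0.2]])   # ~80 m away
print(cluster_importance([near, far]))
```

The nearby cluster scores close to 1, the distant one close to 0, so downstream stages can prioritise the objects that matter most for driving decisions.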

After clustering, a 3D bounding box is fitted over each cluster's points. The box is then mapped to the 2D image domain using the calibration matrices provided with the sensor setup.
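Mapping a 3D box into the 2D image follows the standard pinhole projection. A sketch with an illustrative intrinsic matrix K (a real pipeline would also apply the lidar-to-camera extrinsic calibration first):

```python
# Sketch: project the 8 corners of a 3D bounding box into the image
# plane with a pinhole intrinsic matrix K. The K values (focal length
# 700 px, principal point at 640, 360) are illustrative assumptions.
import numpy as np

K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

def project_box(corners):
    """corners: (8, 3) array in camera coordinates (z pointing forward).
    Returns (8, 2) pixel coordinates."""
    pts = (K @ corners.T).T             # homogeneous image coordinates
    return pts[:, :2] / pts[:, 2:3]     # perspective divide by depth

# Axis-aligned box, 2 m per side, centred 10 m in front of the camera.
x, y, z = np.meshgrid([-1, 1], [-1, 1], [9, 11], indexing="ij")
corners = np.stack([x, y, z], axis=-1).reshape(8, 3).astype(float)
uv = project_box(corners)
print(uv.round(1))
```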

Finally, 2D masks are passed on to Teraki’s Region of Interest (RoI) toolbox. This toolbox can process streams of data and apply differential compression rates to different masks based on a new compression scheme developed by Teraki.
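Teraki's compression scheme itself is proprietary, but the general idea of mask-based differential compression (spending bits on the region of interest and far fewer on the background) can be sketched as follows, using simple block averaging as a stand-in for a real codec:

```python
# Sketch: keep region-of-interest pixels at full resolution and
# block-average the background. Block averaging stands in for a real
# codec; Teraki's actual scheme is proprietary.
import numpy as np

def differential_compress(img, mask, block=4):
    """img: (H, W) array; mask: boolean (H, W), True marks the RoI.
    H and W must be multiples of `block` in this simple sketch."""
    h, w = img.shape
    coarse = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    background = np.kron(coarse, np.ones((block, block)))  # upsample blocks
    return np.where(mask, img, background)

img = np.arange(64, dtype=float).reshape(8, 8)
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True                       # RoI kept losslessly
out = differential_compress(img, mask)
print(out[0, 0])   # → 13.5 (background pixel replaced by its block average)
```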

This scheme, built on the synchronization of lidar and camera sensors, is a key differentiator when it comes to processing and streaming relevant, useful information in real time from a low-powered device to the cloud. It can only be achieved with algorithms that are lightweight enough to process and compress large streams of video and point cloud data simultaneously. Teraki developed and continuously improves this technology.

Due to the large volumes of image and video data collected by the cars, efficient compression methods are essential to store and process data at very low latency. Standard encoding algorithms such as JPEG and H.264 were originally designed for human consumption. Especially when higher data-rate efficiencies are required, these encoders lead to considerable degradation in the accuracy of machine learning algorithms.

At CES 2020, Teraki will showcase a real-time, low latency Sensor Fusion demo on a low-powered device. Stay tuned for more.

If you want to stay updated with the latest tech news and research on the connected vehicle and autonomous driving, feel free to sign up for our fortnightly newsletter.
