The anatomy of an autonomous vehicle

As leading innovators strive to improve the safety of mobility, cars are becoming safer through increasing levels of automation. Automotive safety has a long history, dating back to the introduction of the three-point seat belt in 1959, which automatically holds the driver in the seat through a collision. Over the following decades, the industry climbed the innovation ladder with active and passive safety features such as airbags, anti-lock braking, traction control, electronic stability control, safe exit assist, blind spot detection, reverse parking cameras, radar and ultrasonic sensors, and adaptive cruise control.

All of these advancements in safety have one thing in common: an accurate sensor, seamlessly integrated with the car, that assists the driver. Autonomous driving is a tremendous undertaking to realize the vision of crash-free mobility, and it can be achieved by bringing together everything the industry has learned about vehicle safety over the past decades.

THE LEVELS OF DRIVING AUTOMATION

SAE Driving Automation Levels

Current commercial deployments generally fall between level 2 and level 3 and rely on a sensor suite of cameras, a front-facing radar, and, in some cases, low-resolution lidar sensors. Behind this limited suite of sensors sits an equally basic processing stack.

For engineers looking to advance towards SAE level 4 and 5, there are a few key technical hurdles:

  • Accurate and robust perception and localization in all environments
  • Faster real-time decision making in diverse conditions
  • Reliable and cost-effective systems that can be commercialized in production volumes

As these are combinations of deeply complex technologies, the associated problems need to be approached from both a hardware and a software perspective.

Hardware: the sensors

To push the frontier of SAE level 4 and 5, autonomous vehicle makers are deploying more robust sensor suites. The most common suite includes lidar, radar, and cameras, supported by deterministic processors for efficient and accurate data processing. Because the vehicle needs both spatial perception and localization, the volume and quality of sensors for level 4 and 5 are dramatically higher than what is seen today on level 2 or 3 commercial vehicles.

Lidar

DESCRIPTION
Provides accurate depth and spatial information about the environment as a 3D point cloud. Lidar is an active sensor, meaning it works day or night, independent of most external conditions.
PROS
Highly accurate and precise depth (range) information, agnostic to lighting conditions (works day or night), mid-level resolution (much higher than radar, lower than camera).
CONS
High data rate, higher cost today, larger form factor.

Lidar is used on every level 4 or level 5 prototype vehicle due to the spatial awareness that lidar natively provides to the software (sometimes called “the driver”). Lidar provides accurate spatial information of the environment for up to ~250 m around the vehicle, allowing the software to augment camera and radar data to more quickly and accurately classify objects around the vehicle.

Lidar setup

A standard lidar integration includes four short-range lidar sensors around the edges of the vehicle. These short-range sensors have a 90° vertical field of view and can see dark objects at a distance of about 15 m. They are used to identify potential hazards immediately around the vehicle - small animals, boxes, cones, or curbs - items that are often difficult even for human drivers to manage. This setup places one sensor on the grille of the vehicle, two near the side-view mirrors, and one on the tailgate (and yes, they can withstand closure shock testing).

Lidar short range

The next set of lidar sensors is typically placed at an angle on the edges of the vehicle's roof. These are usually mid-range sensors that play the biggest role in mapping and localization. Angling the sensors is a more effective orientation for mapping, and with a 50-60 m range for dark objects, these sensors can effectively localize the vehicle in urban settings.

Lidar long range

Finally, on top of the vehicle sit the long-range lidar sensors. There are usually two of these for redundancy, and they can be either 360° or forward-facing sensors. They see dark objects at up to 200 m and are used to detect potential obstacles in front of the vehicle when it is traveling at high speed.
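
To put that 200 m figure in perspective, the sketch below estimates stopping distance at highway speeds; the reaction delay and deceleration values are assumptions chosen for illustration, not numbers from this article.

```python
# Rough illustration only: stopping distance at highway speeds vs. a ~200 m
# detection range. Reaction delay and deceleration are assumed values.

def stopping_distance_m(speed_kmh: float,
                        reaction_s: float = 0.5,    # assumed perception + actuation delay
                        decel_mps2: float = 5.0) -> float:
    """Distance covered during the reaction delay plus the braking distance."""
    v = speed_kmh / 3.6                             # km/h -> m/s
    return v * reaction_s + v ** 2 / (2 * decel_mps2)

for speed in (80, 100, 130):
    print(f"{speed} km/h -> ~{stopping_distance_m(speed):.0f} m to stop")
# 130 km/h comes out to roughly 150 m, comfortably inside a 200 m detection range
```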

Camera

DESCRIPTION
Provides high-resolution, accurate color and detailed information of the environment in a 2D array.
PROS
Inexpensive, easy to integrate (can be nearly hidden in the car), high resolution and full color, see the world in the same way as humans do.
CONS
Passive sensors that depend on light from external sources and are sensitive to variable lighting conditions, susceptible to adverse weather, and a 360º view requires computationally intensive image stitching.

Cameras form the traditional core of an autonomous vehicle perception stack. Level 4 or 5 vehicles have many cameras - sometimes 20 or more - placed around the vehicle, which are then aligned and calibrated to create an extremely high-definition 360º view. Cameras have been in use for decades and have a large digital image processing ecosystem to help power the algorithms that analyze camera data. Cameras, however, struggle with depth perception and see degraded performance in direct sun glare, at night, and in weather conditions like rain or snow.

Increasingly, we are seeing specialized cameras, such as infrared or thermal cameras, deployed on vehicles to address some of the scenarios where traditional cameras struggle.

Radar

DESCRIPTION
Provides low-resolution 3D point clouds with depth information about objects and the environment, using radio waves emitted as a frequency-modulated continuous wave (FMCW).
PROS
Inexpensive, robust, immune to adverse weather conditions, long range.
CONS
Low resolution, false negatives with stationary objects.

Over the past 15 years, radar has become commonplace in automotive applications. Radar is extremely robust and sees little to no performance degradation in rain and snow. For level 4 and level 5 autonomous vehicles, radar is placed 360º around the vehicle and used as a reliable redundant source for object detection. Though radar is a highly robust sensor, it lacks resolution and can occasionally return false negatives for critical obstacles. Radar sensors have traditionally struggled to reliably detect stationary objects and can confuse objects that sit in the same vertical plane. For example, if a car is parked under an overpass, the radar sensor will struggle to distinguish between the car and the overpass itself.
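
For context, the sketch below shows the standard range relation that frequency-modulated continuous-wave radars rely on; the chirp parameters are assumed values for a generic automotive radar, not specifications from this article.

```python
# Standard FMCW range relation: R = c * f_beat * T_chirp / (2 * B).
# Chirp length and bandwidth below are assumed values for a generic
# 77 GHz automotive radar.

C = 3.0e8  # speed of light, m/s

def fmcw_range_m(beat_freq_hz: float,
                 chirp_duration_s: float = 50e-6,   # assumed chirp length
                 bandwidth_hz: float = 1.0e9) -> float:
    """Target range implied by the measured beat frequency."""
    return C * beat_freq_hz * chirp_duration_s / (2 * bandwidth_hz)

print(f"{fmcw_range_m(10e6):.0f} m")  # a 10 MHz beat tone -> 75 m with these parameters
```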

All three of these above sensors have their pros and cons, but in combination they create a highly robust suite of sensors that enables level 4 and level 5 autonomous driving.

Given the number of sensors being deployed on these vehicles, the processing requirements are tremendous. Next let’s look at different ways companies are tackling this problem.

Hardware: the processors

Autonomous vehicles require advanced computer processing systems in order to handle the tremendous amount of data generated by the sensor suite. There are two main approaches for handling this data:

  • Centralized processing
  • Distributed (edge) processing

Centralized processing

DESCRIPTION
All data processing and decision making is done in a single central processing unit, to which all sensor data is sent. In this scenario, the sensor side consists only of sensors, with no intelligence at the edge.
PROS
The sensor side is small, low cost, and low power, as it is only sensing, and it typically carries lower functional safety requirements because no processing or decision-making is done there. More sensors can be deployed because they are low cost and have a small form factor.
CONS
Large communication bandwidth is required because the raw data from the sensors is transferred to the central processing unit, which also raises the possibility of electromagnetic interference. Central processing typically demands high-performance chipsets with high processing power and speed to handle all the incoming raw data, and processing all of that raw data is challenging for latency. Moreover, these chipsets are expensive, consume a lot of power, and generate considerable heat. Adding more sensors puts higher demands on the central processing unit, which can become the bottleneck in this scenario.

Distributed processing

DESCRIPTION
With distributed processing, the processing is done at the sensor level, and the relevant information from each sensor is then sent to a central unit where it is compiled and used for analysis or decision making. A sensor module with an application processor does the bulk of the data processing, and local decisions - for example, emergency braking - can be made at the edge.
In other cases, sensor modules send object data (or metadata, i.e. data that describes object characteristics and/or identifies objects) to a central fusion unit, where this information is combined with other inputs and a final decision is made.
PROS
A lower-bandwidth, cheaper interface between the sensor modules and the central processing unit, as object data is much smaller than the full raw data. The central processing unit only has to fuse the object data, which requires lower processing power (i.e. less energy) and runs faster, i.e. lower application latency. Extra sensors are also easier to add, because additional sensors do not drastically increase the central performance requirements.
CONS
The setup requires higher functional safety standards in the sensor module itself, since local pre-processing and possibly even decision-making is done at the sensor edge.

Companies have traditionally used centralized processing, combining sensor data streams at the central processing unit. However, the growing number of sensors in a vehicle leads to exponential data growth and severe challenges for central processing. This model cannot be sustained much longer, and distributed processing is becoming the preferred approach for vehicles.

This is illustrated in the next section, where we talk about sensor fusion and the challenge of processing data-intensive sensor signals simultaneously and in real time.
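
To make the bandwidth argument concrete before moving on, here is a rough comparison of the two architectures under assumed sensor counts and data rates; every figure below is illustrative rather than taken from this article.

```python
# Illustrative back-of-envelope comparison of the two architectures, under
# assumed sensor counts and data rates (not figures from this article).

SENSORS = {
    # name: (count, raw Mbps per sensor, object/meta-data Mbps per sensor)
    "camera": (8, 1500, 2.0),   # ~uncompressed 1080p at 30 fps vs. object lists
    "lidar":  (6,  250, 1.0),
    "radar":  (6,   15, 0.5),
}

raw_mbps = sum(n * raw for n, raw, _ in SENSORS.values())
object_mbps = sum(n * obj for n, _, obj in SENSORS.values())

print(f"centralized (raw data to one ECU): ~{raw_mbps / 1000:.1f} Gbps")
print(f"distributed (object data only):    ~{object_mbps:.0f} Mbps")
```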

Understanding sensor fusion

Sensor fusion is the process of merging multiple sensor inputs together in order to create a more robust dataset of the environment around the vehicle.

Data from multiple sensor types are algorithmically merged to obtain better results and make safer decisions than what is possible by relying on each sensor data stream independently. Sensor fusion supports quick and accurate calculations on a central compute unit.

To demonstrate the benefits of fusion and the capabilities of edge processing, we used software from Teraki to fuse an Ouster lidar sensor with an HD camera. Lidar detects the position of objects across a large field of view, from 0.25 m up to 200 m, with a high level of accuracy. While high-resolution 64- and 128-channel lidar can be used for object classification (recognizing what exactly an object is), the accuracy of that classification can be improved by fusing the lidar data with camera data: the camera's HD resolution and full color complement lidar's strength in measuring object depth.
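
To give a sense of the geometry involved, here is a minimal sketch of projecting lidar points into a camera image with a pinhole model; the intrinsics and extrinsics are placeholders that would normally come from calibration, and this is a generic illustration rather than Teraki's implementation.

```python
import numpy as np

# Minimal sketch: project lidar points into a camera image with a pinhole
# model. K, R, and t are placeholder values; in practice they come from
# calibration.

K = np.array([[1000.0,    0.0, 960.0],   # assumed camera intrinsics
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                            # assumed lidar-to-camera rotation
t = np.array([0.0, -0.2, 0.1])           # assumed lidar-to-camera translation (m)

def project_to_image(points_xyz: np.ndarray) -> np.ndarray:
    """Project Nx3 lidar points (lidar frame) to Nx2 pixel coordinates."""
    cam = points_xyz @ R.T + t           # lidar frame -> camera frame
    cam = cam[cam[:, 2] > 0]             # keep only points in front of the camera
    uvw = cam @ K.T                      # apply the pinhole intrinsics
    return uvw[:, :2] / uvw[:, 2:3]      # perspective divide -> pixels

points = np.random.uniform([-10, -2, 1], [10, 2, 50], size=(1000, 3))
print(project_to_image(points).shape)    # (N, 2) pixel locations to overlay
```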

Sensor fusion helps overcome the shortcomings of each sensor type, combining the high resolution of camera, depth information of lidar, and weather-immunity of radar.

As lidar captures the environment around the car as a 3D point cloud, Teraki's library installed on the ECU identifies moving objects as regions of interest, marks them with bounding boxes, and compresses the point cloud data accordingly. The detected region of interest is then passed to the camera pipeline, where it is overlaid onto the video data, and Teraki's ROI-based encoder uses the region of interest as an input to compress the video.

This process involves reducing the resolution of regions of low interest, such as sky and vegetation, while maintaining high resolution in the regions of interest, which can be moving objects like cars and pedestrians or static objects like street signs and traffic lights. What qualifies as ROI and non-ROI can be easily configured by the customer according to their use case. The reduced point cloud and video data can then either be processed locally or streamed to the cloud for cloud-based operation.
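
One generic way to picture this kind of ROI-based reduction is to keep full resolution inside the detected boxes and block-average everything else, as in the sketch below; the frame size, boxes, and reduction factor are assumptions, and this is not Teraki's actual encoder.

```python
import numpy as np

# Generic illustration of ROI-based reduction: keep full resolution inside the
# detected boxes and block-average everything else. Frame size, ROI boxes, and
# the reduction factor are assumed values.

def roi_reduce(frame, rois, factor=8):
    """Downsample the frame outside the (x, y, w, h) ROI boxes by `factor`."""
    h, w, c = frame.shape                # assumes h and w divisible by `factor`
    low = frame.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))
    out = np.repeat(np.repeat(low, factor, axis=0), factor, axis=1).astype(frame.dtype)
    for x, y, bw, bh in rois:            # paste original pixels back inside each ROI
        out[y:y + bh, x:x + bw] = frame[y:y + bh, x:x + bw]
    return out

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
reduced = roi_reduce(frame, rois=[(800, 400, 320, 240)])   # e.g. one detected car
```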

To see what the sensor fusion looks like in the real world, check out this video on sensor fusion.

Teraki software with Ouster lidar

Ouster lidar generates high-resolution point cloud data that maps the environment around the sensor. The high-density information produced by the lidar sensor (as high as 250 Mbps for 128-channel sensors) can be a challenge for rapid processing by the hardware available in the car. Combining this with the complex calculations needed for video processing, while making accurate decisions at near real-time latency, is a daunting challenge for autonomous driving use cases.
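
As a quick sanity check on that data rate, the arithmetic below shows how a 128-channel sensor reaches roughly 250 Mbps; the horizontal resolution, frame rate, and per-point payload are assumed values for illustration.

```python
# Back-of-envelope check on the ~250 Mbps figure for a 128-channel lidar.
# Horizontal resolution, frame rate, and bytes per point are assumed values.

channels        = 128
points_per_rev  = 2048      # assumed horizontal resolution
frames_per_s    = 10        # assumed rotation rate (Hz)
bytes_per_point = 12        # assumed payload: range, intensity, reflectivity, ...

points_per_s = channels * points_per_rev * frames_per_s     # ~2.6 M points/s
mbps = points_per_s * bytes_per_point * 8 / 1e6
print(f"~{points_per_s / 1e6:.1f} M points/s, ~{mbps:.0f} Mbps")  # ~250 Mbps
```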

Sensor Fusion

Teraki provides the software necessary to meet the demands of efficient processing, optimal power consumption (which in turn affects the driving range of the vehicle), and thermal management. Teraki's software also helps solve processing-related issues by reducing 3D point cloud data size by 10x without impacting data quality.

Sensor Fusion - Processing

Teraki reduces a 10K-point point cloud to 1/10th of its original size in 2.3 ms on a single CPU core, such as an A72 running at 2.0 GHz. The lightweight segmentation and ROI detection then take about 10.1 ms, and the reduction of HD video data by 4x on top of standard codecs takes a further 12.6 ms. This rapid reduction enables quick and accurate sensor fusion with ROI object detection at the edge - all completed in about 25 ms. The single-threaded solution is 150x faster than VoxelNet on a CPU.
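
Teraki's reduction algorithm itself is proprietary; as a point of reference, a common baseline for this kind of point cloud reduction is voxel-grid downsampling, sketched below with an assumed voxel size on a dense synthetic cloud.

```python
import numpy as np

# Common baseline technique (voxel-grid downsampling) with an assumed voxel
# size, run on a dense synthetic cloud standing in for a lidar sweep. Not
# Teraki's algorithm.

def voxel_downsample(points, voxel_size=0.5):
    """Replace all points falling in the same voxel with their centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    sums = np.zeros((inverse.max() + 1, 3))
    np.add.at(sums, inverse, points)                 # accumulate per voxel
    counts = np.bincount(inverse).reshape(-1, 1)
    return sums / counts

xy = np.random.uniform(-50, 50, size=(100_000, 2))   # flat "ground plane" cloud
cloud = np.column_stack([xy, np.random.uniform(0.0, 0.2, 100_000)])
reduced = voxel_downsample(cloud, voxel_size=1.0)
print(len(cloud), "->", len(reduced), "points")      # roughly a 10x reduction here
```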

The Teraki solution is flexible, allowing customers to train their own Region of Interest (ROI) or Target of Interest (TOI) models to run closer to their sensors. The edge-computing algorithms can be adapted to segment and classify objects in various scenarios (low-light situations, snow situations, etc.) and then be regularly retrained and deployed to the car via over-the-air (OTA) updates. Customers can use Teraki’s model training tools to collect tailored data for their model, reduce model complexity, and then distribute these simplified and more accurate models to their sensors.

Using this model for pre-processing at the edge, customers can improve their mapping, localization, and perception algorithms.

Smarter edge for series production

Autonomous vehicles are a tremendous engineering challenge. SAE level 4 and 5 vehicles require many high-resolution sensors, including cameras, lidar sensors, and radars, to safely navigate their operational design domain. The new high-resolution sensors on the market, such as Ouster’s new 128-channel sensors, produce significant amounts of data that can overwhelm the existing processing infrastructure.

Teraki’s embedded edge compute sensor fusion product helps to solve this challenge with embedded, intelligent selection of sensor information - accurately done at low power. It can detect and recognize the regions of interest in the environment around the car, pass only the relevant information to the central computing unit, and dramatically reduce the data latency of the vehicle’s onboard compute and storage by as much as a factor of ten. The result is a safer vehicle that makes better decisions - and a step closer to an autonomous future that’s safer for everyone.

Learn more

To learn more about Ouster’s high resolution sensors, please visit their website, or check out their blog.

To learn more about Teraki’s accurate and efficient edge computing solutions, please visit our website, or check out our blogs on Sensor Fusion and Lidar-Camera marriage.
