Blog | May 10, 2021

Types of Autonomy

Introduction

Today many players are aiming to deliver autonomous operations in various industries: cars, trucks, forklifts, robots, drones, etc. Current deployments generally sit at level 2 or 3 and rely on sensors such as cameras, short- and long-range radars, 360-degree lidars, etc. On top of this sensor suite runs an extensive software stack trained to reach increasing levels of autonomous operation.

We’ve studied the market and identified four main strategies that companies pursue to claim ‘autonomous’ operations. We've grouped and named them as follows:

1) Tele-operation-only

2) Map-based autonomy

3) Brute force autonomy

4) Perception-based autonomy

In this blog we’ll describe and analyze these four methods.

1. Tele-operation

Tele-operation - also called remote control - means the steering of an unmanned vehicle or machine by a human operator who is not in the vehicle. Typically, the remote operator or driver sits in a control room that can be thousands of kilometres away from the vehicle. Tele-operation has been around for a while, and it is now finding its place as a stepping stone towards autonomous driving. The AI models that enable vehicles to operate autonomously need a lot of training in real environments. For that training, many hours of safe operation are required to capture sufficient sensor data (video, radar, lidar, IMU). During this process the vehicle is not yet autonomous and needs a human guide - initially for 100% of the time. Later, as the autonomous AI capabilities grow, the human driver is only required to operate the vehicle part of the time.

Refer to our blog on teleoperation for more details.
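As a purely illustrative sketch of this data-capture phase - the sensor names, rates and record format below are hypothetical, not Teraki's stack - a logging loop during tele-operated driving could look like this:

```python
# Illustrative sketch: log synchronized sensor frames together with the remote
# operator's commands so they can later serve as training data for autonomy models.
# All names, rates and formats are hypothetical.
import time

SENSORS = ["camera_front", "radar_front", "lidar_top", "imu"]

def capture_frame(sensor_name):
    """Placeholder for reading one time-stamped frame from a sensor driver."""
    return {"sensor": sensor_name, "t": time.time(), "data": b"..."}

def read_operator_command():
    """Placeholder for the steering/throttle command sent by the remote operator."""
    return {"t": time.time(), "steer": 0.0, "throttle": 0.1}

log = []
for _ in range(10):                        # in practice this runs for many driving hours
    record = {s: capture_frame(s) for s in SENSORS}
    record["operator_cmd"] = read_operator_command()   # the human guide 'labels' the data
    log.append(record)
    time.sleep(0.05)                       # e.g. a 20 Hz logging rate
```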

Tele-operation is used today by several companies running robot operations. It can be a good starting point for training the software towards autonomous driving, but one needs a well-laid-out plan to get there. If a company does not have such a clear and credible path, this approach cannot be called ‘autonomous’ at all.

That's why, hereafter, we focus on the three remaining approaches to achieving autonomous operations, whose main characteristics are summarized in Table 1.

2. Map-based autonomy

The next generation of autonomous driving technology requires higher quality and more detailed map content to support sensor data and guarantee driver safety and comfort. Many companies have been developing digital maps for almost a decade in the form of ADAS maps and HD maps to support the different levels of driving automation.

This approach is fully based on an HD map, which makes it costly and time-consuming to roll out, as every area needs to be mapped digitally. The vehicle must continuously request updates from the cloud about its whereabouts, making it dependent on cloud and network availability. More fundamentally, it hinges on the concept that the digital map is leading - and, by definition, a map is outdated the minute after you have created it. New things may pop up (e.g. construction work, parked vehicles, pallets in a warehouse, parked bicycles on the sidewalk, etc.). In this approach, the reaction to object detection gets translated into ‘dumb stops’.
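The minimal sketch below - class and field names are invented for illustration, not taken from any specific product - shows how such map-led logic degenerates into a blanket stop whenever the map cannot explain a detection:

```python
# Minimal sketch of why a map-led stack tends toward 'dumb stops': anything the
# (possibly outdated) HD map cannot explain simply triggers a halt instead of
# being reasoned about. All class and field names are illustrative.
from dataclasses import dataclass

@dataclass
class HDMap:
    known_objects: set                      # object IDs the map was built with

    def contains(self, obj_id: str) -> bool:
        return obj_id in self.known_objects

def plan_step(hd_map: HDMap, detected_ids: list) -> str:
    """Return a driving command based primarily on the pre-built map."""
    for obj_id in detected_ids:
        if not hd_map.contains(obj_id):     # construction cone, pallet, parked bicycle...
            return "stop"                   # the 'dumb stop': no semantics, just halt
    return "follow_mapped_lane"

hd_map = HDMap(known_objects={"lane_42", "sign_17"})
print(plan_step(hd_map, ["lane_42", "pallet_in_aisle"]))   # -> "stop"
```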

Most importantly, this method will not be approved by homologation and certification bodies - such as TÜV - for L4 operations in public spaces. Even in non-public settings, such as indoor environments like warehouses, users have been disappointed by the lack of ability to react properly to ever-changing circumstances, such as pallets being placed in aisles and corridors.

3. Brute force autonomy

This approach is typically seen in R&D cars in the automotive space, with OEMs and Tier 1s. The premise is to collect all raw data from all sensors and process it in real time. As a typical L2+ car carries some 6-10 cameras, 1-5 radars and 1-2 lidars, and as these are high-resolution, high-frequency sensors, the amount of raw data to be processed is humongous. The CAPEX and OPEX costs are huge due to the generated data streams and the requirement to process all this data a) fast and b) accurately. Using this strategy, you end up needing a great deal of processing power - and then struggle to fit it into a car and cool it properly.
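To make the order of magnitude concrete, here is a back-of-the-envelope estimate; the sensor counts follow the text, while the per-sensor resolutions and bit rates are illustrative assumptions:

```python
# Back-of-the-envelope estimate of the raw data stream a brute-force stack must
# process. Sensor counts follow the text above; resolutions, frame rates and
# radar/lidar bit rates are illustrative assumptions, not measured values.

CAMERA_MBPS = 1920 * 1080 * 3 * 30 * 8 / 1e6   # ~1.5 Gbit/s per 1080p, 30 fps RGB camera
RADAR_MBPS  = 100                              # assumed high-resolution radar stream
LIDAR_MBPS  = 300                              # assumed 64-beam lidar stream

def raw_throughput_gbps(cameras, radars, lidars):
    total_mbps = cameras * CAMERA_MBPS + radars * RADAR_MBPS + lidars * LIDAR_MBPS
    return total_mbps / 1000

print(f"low end : {raw_throughput_gbps(6, 1, 1):.1f} Gbit/s")    # ~9 Gbit/s
print(f"high end: {raw_throughput_gbps(10, 5, 2):.1f} Gbit/s")   # ~16 Gbit/s
```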

If you want to process this within 100 ms and achieve reliable detection and safe AD decisions, you need considerable compute power (e.g. GPUs). This translates into high costs and high power consumption, which depletes the vehicle's battery and thereby reduces its range of operation. Refer to our blog on the 30% more energy required for this process.
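As a rough illustration of the range impact - the wattages below are placeholders, not the measured figures from the referenced blog - the share of battery energy diverted to compute can be estimated like this:

```python
# Purely illustrative arithmetic for how onboard compute eats into EV range.
# Both wattages are placeholder assumptions.
P_COMPUTE_W = 1500.0    # assumed draw of a brute-force compute platform
P_DRIVE_W   = 15000.0   # assumed average traction power in mixed driving

compute_share = P_COMPUTE_W / (P_COMPUTE_W + P_DRIVE_W)
print(f"share of battery energy spent on compute: {compute_share:.0%}")   # ~9%
```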

In summary, as a research & development approach it is an acceptable method for training the first L2+ models. However, it is not a scalable option for production cars.

4. Perception-based autonomy

Bridging the chasm between brute force and scalable production, without the limitations of the map-based approach, is “Perception-based” autonomy. In this strategy, intelligent edge AI performs a real-time and reliable selection of the relevant information from the various sensor streams. By only using the information that is relevant for a given use case, processing requirements are lowered and higher AI reliability is achieved. For more details, refer to our ROI/TOI blog.
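The snippet below is a minimal, hypothetical sketch of such ROI pre-selection on a single camera frame; the detector, box coordinates and names are placeholders, not Teraki's API:

```python
# Illustrative sketch of perception-based pre-selection: keep only the regions
# of interest (ROIs) from a frame instead of passing the full image downstream.
# The detector and box coordinates are placeholders.
import numpy as np

def detect_rois(frame):
    """Placeholder edge detector returning (x, y, w, h) boxes of relevant objects."""
    return [(600, 400, 128, 128), (1200, 500, 96, 96)]

def select_relevant(frame):
    """Crop the ROIs; only these pixels continue to fusion and path planning."""
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in detect_rois(frame)]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)        # one full camera frame
rois = select_relevant(frame)
kept = sum(r.size for r in rois) / frame.size
print(f"data kept after ROI selection: {kept:.1%}")       # ~1% of the raw pixels
```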

The relevant information from one sensor can then be fused with the relevant information from another sensor (e.g. camera with radar, camera with lidar, camera with IMU, etc.) to deliver accurate object information to the vehicle's path-planning scheme in under 50 ms. With this, vehicles can detect another vehicle or object and decide to overtake it. Moreover, this approach can run on low-power, ASIL-D grade chipsets, enabling mass production of safe L2+ applications.
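As a simplified illustration of such late fusion - the matching rule and data structures are assumptions, not a production design - camera ROIs can be paired with radar tracks before the fused objects are handed to the planner:

```python
# Simplified camera-radar late fusion: each relevant camera detection is matched
# to the radar track with the closest bearing, yielding a fused object list with
# range and speed for the path planner. Matching logic and fields are assumptions.
from dataclasses import dataclass

@dataclass
class CameraObject:
    bearing_deg: float      # direction of the ROI as seen by the camera
    label: str              # e.g. "vehicle"

@dataclass
class RadarTrack:
    bearing_deg: float
    range_m: float
    speed_mps: float

def fuse(camera_objs, radar_tracks, max_bearing_err=2.0):
    """Pair each camera object with the closest radar track in bearing."""
    fused = []
    for cam in camera_objs:
        best = min(radar_tracks, key=lambda t: abs(t.bearing_deg - cam.bearing_deg))
        if abs(best.bearing_deg - cam.bearing_deg) <= max_bearing_err:
            fused.append({"label": cam.label, "range_m": best.range_m,
                          "speed_mps": best.speed_mps})
    return fused            # e.g. input for the planner's overtake decision

print(fuse([CameraObject(5.1, "vehicle")], [RadarTrack(5.0, 42.0, 18.5)]))
```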

This strategy can still be enhanced by adding HD-map information, e.g. for navigating complex intersections or for indoor situations - but it does not rely on HD maps; it merely uses them as an additional aid.

TERAKI

Teraki has implemented the above-mentioned AI models and sensor fusion on production-grade chipsets (e.g. Arm Cortex-R52, Cortex-A53 and Cortex-A73, and Infineon AURIX TCx) multiple times.

The Teraki software stack helps to solve these challenges by pre-filtering and optimizing the amount of data to be transmitted and processed later, using Regions of Interest (ROI, objects within an image) and Times of Interest (TOI, events within a sequence of images). Customers typically want to analyze new data and refine and develop their L2+ application, for instance by focusing on situations where the application did not work as intended and retraining the underlying model to make it more robust.
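As a purely illustrative sketch of the TOI idea - the hard-braking threshold, window size and field names are invented, not Teraki's API - the snippet below keeps only short windows around events that are worth labeling and retraining on:

```python
# Illustrative Time-of-Interest (TOI) filter: instead of labeling an entire drive,
# keep only short windows around events (here: hard braking) where the L2+
# application may not have behaved as intended. Thresholds are invented.

def toi_windows(frames, decel_threshold=4.0, margin=10):
    """Return index ranges of frames surrounding hard-deceleration events."""
    windows = []
    for i, f in enumerate(frames):
        if f["decel_mps2"] >= decel_threshold:
            windows.append((max(0, i - margin), min(len(frames), i + margin)))
    return windows

drive_log = [{"decel_mps2": 0.2}] * 100 + [{"decel_mps2": 5.1}] + [{"decel_mps2": 0.3}] * 100
print(toi_windows(drive_log))    # only ~20 of 201 frames go to labeling and retraining
```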

Teraki allows not only focusing on TOIs, but also adapting the ROI and TOI models to continuously refine the application, without the need to go through the data-filtering and data-labeling loop all over again for each model version. Moreover, in each iteration loop Teraki preserves the important features needed for better machine-learning model accuracy.

One of the concrete benefits of applying the above techniques is that far better detection levels are achieved from radar-only data. Such increased detection levels make it possible to safely choose camera-radar fusion instead of camera-lidar fusion for L4 operations, effectively meaning that lidar can be replaced by far less expensive high-resolution radar. The effect this has on the cost per vehicle is impressive.

Overview

Autonomous vehicles are a tremendous engineering challenge. Level 4 vehicles require many high-resolution sensors - such as cameras, lidars and radars - to safely navigate their operational environments. The latest high-resolution sensors on the market produce significant amounts of data, which poses challenges for cost, latency and power when delivering real-time, 99%+ AI accuracy.

We've covered four ways to claim “autonomy” and described the pros and cons of each. Table 1 gives an overview of the main characteristics of each strategy. Perception-based autonomy leads the pack, as it is production-ready, low-cost and delivers certifiable L4 accuracies.

One of the companies in this production-ready space is Teraki. Teraki helps to overcome these challenges with embedded, intelligent selection of sensor information - done accurately on low-power hardware. It can detect and recognize the regions of interest in the environment around the car, pass only the relevant information to the central computing unit, and dramatically reduce data latency and the load on the vehicle's onboard compute power and storage capacity - by as much as a factor of ten. The result is a safer vehicle that makes 10%-30% better decisions - and a step closer to an autonomous future that's safer for everyone.

Get in touch

Feel free to contact us at info@teraki.com

