Train Your Own - Model

Introduction

Self-driving cars and robots rely on AI models to run (semi-)autonomously. Creating these AI models involves extensive training: before going into operation, the model is fed large amounts of annotated data so that machine learning can take place and the model learns to automate tasks at scale.

Nevertheless, most AD training processes today rely on extensive data collection, and most tech vendors agree that typically 96% of the collected data is not usable for training the specific models. This leads to highly increased processing costs for all systems in the automotive E/E architecture that fuse and further process the data. More importantly, this method does not allow higher resolution and frame rates to be allocated to the interesting and relevant events – the so-called “Times of Interest” (TOI). These TOI edge cases are the most interesting, and computing and storage resources should be allocated to them rather than to generic raw data collection, which is not useful for ML model training.

AI Model Training

Teraki’s state-of-the-art edge AI selects the relevant sensor data at the edge, supporting 10-20 times faster training of such models and resulting on average in 5x lower inference times. This allows most of the data-filtering scheme to be placed closer to the sensor and sensor fusion, and the 96% of unused data to be discarded. How this intelligent data selection is done at the edge can be read in our previous blog on this topic. For a large part it is based on detecting and selecting what is relevant information. The technique of selecting and zooming in on interesting information is called “Region of Interest”, or in short: ROI.
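As a conceptual illustration only (not Teraki’s actual implementation), selecting a region of interest from a camera frame can be thought of as keeping a small crop at full fidelity while the rest of the frame is reduced or discarded:

```python
import numpy as np

# Conceptual sketch: an ROI is a sub-region of a frame that a detector has
# flagged as relevant; only this crop needs high-resolution, high-rate
# processing, while the rest of the frame can be downsampled or dropped.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # a dummy camera frame

x, y, w, h = 640, 360, 200, 120                    # hypothetical detector output
roi = frame[y:y + h, x:x + w]                      # zoom in on the region of interest

print(roi.shape)  # (120, 200, 3)
```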

ROI-driven Use Cases

Let’s start from the point of view of automotive and Unmanned Vehicle market demands. What use cases are enabled by Teraki’s pre-processing edge software? What value does ROI bring to customers?

Here are a few examples of Teraki ROI-products and what they deliver:

  • ROI model training. ROI allows for more than 5x lower RAM/storage usage at the edge. This is relevant as edge RAM is expensive. For situations where customers need to transmit video from the vehicle to a central location (a.k.a. “Remote Control” or “Tele-operation”), the bandwidth is decreased by a factor of 5x whereas the video quality - e.g. measured on ML or on VMAF metrics - stays the same.
    For situations where ROI-based processing needs to happen in the SoC reading out radar, camera and LIDAR data, ROI determines on which regions to focus most of the LIDAR/radar data rates.

The Teraki ROI models can be trained to reach up to 5x lower inference times than standard models available in the market such as PSPNet, Fast CNN, MobileNet and SSD, or to preserve up to 20 mAP better accuracy at the same inference speed. References are available upon request.

The following will become available in Q1 2021:

  • ROI model training for ML-based perception. Here ROI delivers a 20% improvement in AI quality for our customers. Equivalently, it can deliver up to 5x improved RAM/storage capacity for the SoC design (references available in Q1 2021).

  • ROI pre-processed edge case data streams in the Teraki Platform. Early next year, Teraki will provide TOI reference data from automotive and robotics use cases to validate the AD perception stack. The resulting TOI data streams, consisting of synchronised sensor data (camera, LIDAR, radar, IMU) and optimized for specific situations such as lane departures and overtake scenarios, are obtained from Teraki’s own fleet of cars and robots deployed in the EU. With this data, customers can test the positive impact of Teraki ROI/TOI models on their AI-model training and improve the accuracy of their own AD stacks. Teraki enables customers to start their own perception models from these data streams and, ultimately, to capture more edge events at better accuracy per event, improving overall perception model robustness.

ROI: detecting both common and uncommon objects

Teraki’s “Region of Interest” is based on a ‘detector’: a state-of-the-art software stack with up to 5x lower latency that can run in cars, robots and drones. At the edge it captures the relevant information that is required for any given use case. Many frequently seen use cases are already covered by the existing and commonly used detectors in the Teraki Platform, e.g. cars, pedestrians, sky, vegetation, cyclists and people. However, we didn’t want our customers to be limited to the most frequently requested objects. We found that some of our customers have very specific and differing use case demands. Particularly in detection, there is a plethora of objects across a variety of verticals that customers are looking for. We could never train detectors for all these use cases, nor do we want to. We want our customers to be able to run their own detector, however specific or rare these items, characteristics or objects may be.

Custom trained detection

This kind of customization is rarely offered to an individual customer with a very specific use case. For that we developed a feature of ROI: “Train Your Own” (‘TYO’). With TYO, training your own AI models becomes simple and can be done without in-house data expertise.

Train Your Own (TYO): adding use case specific objects

To serve the demand of these scattered use cases, Teraki developed the TYO feature as part of its ROI product. “Train Your Own Model” is developed in line with customer demand: from crack detection to smoke & fluid detection, the customer can use Teraki’s Platform to get an ROI model. Without further coding, the customer can use the Teraki Platform REST API service layer to train and evaluate their own model by calling the services, or directly upload the video and label file on the web interface. With a click of a button, the trained and evaluated model is generated and visualized in the Platform’s dashboard, along with the KPIs that indicate its accuracy and reliability.

The main objective of TYO is to address the current gap in video model training and evaluation, specifically the lack of customization. Until now, the customer had no option of bringing their own model and training and evaluating that customized model on video data. With the TYO feature, created with next-gen technology, the customer can upload a video file in .mp4 or .avi format coupled with a separate label file in COCO format and generate accuracy KPIs for the trained model. Since the label file is separate from the video file, it is possible to modify the labels without re-uploading the video file.
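For illustration, a minimal COCO-style label file for a single annotated frame could look like the sketch below. The file names, IDs and the “crack” category are hypothetical examples; please refer to the Teraki Platform documentation for the exact fields it expects.

```python
import json

# Minimal sketch of a COCO-style label file for one annotated frame.
coco_labels = {
    "images": [
        {"id": 1, "file_name": "frame_000001.jpg", "width": 1920, "height": 1080}
    ],
    "categories": [
        {"id": 1, "name": "crack"}  # hypothetical object class
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [640, 360, 200, 120],  # [x, y, width, height] in pixels
            "area": 200 * 120,
            "iscrowd": 0,
        }
    ],
}

# The labels live in their own file, so they can be edited and re-uploaded
# without touching the video file itself.
with open("labels.json", "w") as f:
    json.dump(coco_labels, f, indent=2)
```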

How do you use TYO?

In the first step, the user uploads the video file and its corresponding label file by calling the API Files service, which returns a video file ID and a label file ID. In the second step, these two file IDs are used to register a Labelled Video in the API video service. In the third step, the user runs an RoI training that fine-tunes an existing model using the Labelled Video as the training set; this returns an ROI Training ID. Finally, the user runs an ROI evaluation on another Labelled Video to generate the RoI KPIs, as sketched below.
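As a rough sketch, the four steps could be scripted against the REST API as follows. The endpoint paths, field names and authentication header used here are assumptions for illustration only and may differ from the actual Teraki Platform API.

```python
import requests

BASE = "https://platform.teraki.example/api"        # hypothetical base URL
HEADERS = {"Authorization": "Bearer <YOUR_TOKEN>"}   # hypothetical auth scheme

# Step 1: upload the video and the COCO label file via the Files service.
with open("drive.mp4", "rb") as v:
    video_id = requests.post(f"{BASE}/files", headers=HEADERS,
                             files={"file": v}).json()["id"]
with open("labels.json", "rb") as l:
    label_id = requests.post(f"{BASE}/files", headers=HEADERS,
                             files={"file": l}).json()["id"]

# Step 2: register a Labelled Video from the two file IDs.
labelled_video_id = requests.post(
    f"{BASE}/videos", headers=HEADERS,
    json={"video_file_id": video_id, "label_file_id": label_id},
).json()["id"]

# Step 3: fine-tune an existing model on the Labelled Video (ROI training).
training_id = requests.post(
    f"{BASE}/roi/trainings", headers=HEADERS,
    json={"model_id": "<EXISTING_MODEL_ID>",
          "labelled_video_id": labelled_video_id},
).json()["id"]

# Step 4: evaluate on another Labelled Video to generate the RoI KPIs.
kpis = requests.post(
    f"{BASE}/roi/evaluations", headers=HEADERS,
    json={"training_id": training_id,
          "labelled_video_id": "<EVAL_VIDEO_ID>"},
).json()
print(kpis)
```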

Steps to TYO

For better insights and decision making, the KPIs ‘False Positive Rate’, ‘False Negative Rate’, ‘Area of RoI covered’ (in %) and ‘Intersection over Union’ (IoU) are generated in addition to the smart data reduction capabilities. This immediately gives customers the accuracy of the ROI model they trained, making it easy to decide whether to deploy it in operation or to retrain it until improved KPIs have been achieved.
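To make the IoU KPI concrete, here is a minimal, generic sketch of how Intersection over Union is computed for two axis-aligned boxes; this illustrates the metric itself, not Teraki’s implementation.

```python
def iou(box_a, box_b):
    """Intersection over Union for two boxes given as (x, y, width, height)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b

    # Size of the overlapping rectangle (zero if the boxes are disjoint).
    inter_w = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    inter_h = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = inter_w * inter_h

    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# Example: a predicted RoI vs. a labelled ground-truth box.
print(iou((640, 360, 200, 120), (650, 370, 200, 120)))  # ~0.77
```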

Features of TYO-module

In short, these are some of the main features of TYO:

  • ‘TYO’ helps to customize and train your own models for object segmentation and detection.

  • Used with the Teraki ‘Region of Interest’ (RoI) Encoder for data reduction at the edge.

  • If required, the models can be re-trained with additional labelled data.

  • Teraki Platform automatically scores the accuracy rates of these models.

Summary

Teraki’s ROI technique helps customers get better AI performance. As part of this, Teraki has made Train Your Own available. With TYO models, customers detect their objects better, so AI accuracy improves; the product improves, operational performance improves and better end results (higher model accuracy) are delivered to customers. Teraki is helping specialized AV, UV and drone companies that want to run better AI at the edge. The custom-trained ROI improves model accuracy, and this leads to better outcomes as processes become more efficient when detection quality goes up.

Get in touch

To find out how we can support your use cases and to get a quick overview of Teraki services, check out the Teraki Platform. We offer a free evaluation license. Feel free to contact us at info@teraki.com for any assistance or to train your own ‘detector’.
