We have been working hard on Teraki’s DevCenter in recent months, and it now sports a new, improved look and feel. DevCenter demonstrates Teraki’s data reduction capabilities for accurate and highly efficient edge analytics. It is the tool our customers use to model and improve their data collection campaigns by reducing data at the embedded level. With DevCenter’s latest updates, we have created a playground for experimenting with time series data and finding the optimal balance between data reduction and acceptable maximum deviation (accuracy) that best fits the use case.
Customers can now create Projects that include Signals and Sensors with over 100k data points per instance, and they can work with several models inside one project. Users can train and test multiple reduction models, each based on a different deviation setting, within the same project, and compare the results to find the model that best fits their use case. Along with the new look, a new project structure makes working with multiple models in one project more natural and intuitive.
New Structure of DevCenter
The new DevCenter is structured into 6 parts for an intuitive understanding of the process flow. These parts walk through the development of a reduction algorithm by training it on sample data while controlling the maximum deviation. Customers can easily explore the trade-off between reduction (efficiency) and accuracy (quality): they choose among various accuracy settings, i.e. the allowed maximum deviation per point, see the corresponding reduction result, and check whether it fits their use case. Most customers use the intrinsic sensor noise as the threshold for this allowed maximum deviation. We wrote a blog post on this: https://teraki.com/blog/making-lossy-compression-practically-lossless/.
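To make the trade-off concrete, here is a minimal sketch of one classic deviation-bounded reduction technique, a simple dead-band reducer. This is an illustrative example only, not Teraki’s actual algorithm; the function names and signal values are made up for the example. A sample is kept only when it moves by more than the allowed maximum deviation from the last kept sample, so a zero-order-hold reconstruction of the dropped points is guaranteed to stay within that deviation.

```python
# Illustrative sketch only: a simple dead-band reducer, NOT Teraki's algorithm.
# A sample is kept only when it deviates from the last kept value by more
# than max_dev, so every dropped point can be reconstructed (zero-order hold)
# within the allowed maximum deviation.

def deadband_reduce(samples, max_dev):
    kept = [(0, samples[0])]                  # always keep the first sample
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - kept[-1][1]) > max_dev:
            kept.append((i, x))
    return kept

def reconstruct(kept, n):
    # Zero-order hold: each dropped point takes the last kept value.
    out, last, k = [], kept[0][1], 0
    for i in range(n):
        if k < len(kept) and kept[k][0] == i:
            last = kept[k][1]
            k += 1
        out.append(last)
    return out

signal = [0.0, 0.02, 0.05, 0.5, 0.52, 0.49, 1.0, 1.01]
kept = deadband_reduce(signal, max_dev=0.1)
recon = reconstruct(kept, len(signal))
reduction = 1 - len(kept) / len(signal)       # fraction of points dropped
max_err = max(abs(a - b) for a, b in zip(signal, recon))
```

With the 0.1 deviation budget, only 3 of the 8 points survive, yet every reconstructed point stays within the budget. Tightening `max_dev` keeps more points (higher accuracy, less reduction); loosening it drops more (more reduction, larger deviation) — the same dial DevCenter exposes.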
Here are the 6 steps:
Upload: The first section of the project flow. Here, the user can upload multiple sensor recordings to Teraki’s DevCenter in the defined data format.
Configure: The uploaded data is described in the Configure section, where data and signal types are specified for training the reduction models.
Model: The Model section lets the user create multiple models, essentially by specifying a range of deviation rates. These settings are used in the subsequent Train and Test steps to identify the model that best balances the user’s needs in terms of compression and error rate.
Train: The Train section is where data reduction models are trained. A model can be selected and trained against a dataset, resulting in a reduction model tailored to signals like those in the training dataset.
Test: The Test section lets the user apply a trained model to existing and new datasets to evaluate the results of the applied compression.
Visualize: The data is visualized along with the most important parameters for choosing a specific model. This includes a graphical comparison of raw and reconstructed data over time, the total reduction percentage achieved (computed from the encoded payload size and the raw data size), the maximum deviation, and the root mean square error.
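The three numbers reported in the Visualize step are standard metrics and can be sketched in a few lines. This is an illustrative computation only; the signal values and payload sizes below are made-up example numbers, not DevCenter output.

```python
# Illustrative sketch of the metrics the Visualize step reports.
# All values here are made-up example data, not Teraki's API or output.
import math

raw = [1.0, 1.2, 1.1, 1.4, 1.3]
reconstructed = [1.0, 1.15, 1.15, 1.4, 1.3]

# Reduction percentage from payload sizes (example sizes, assumed).
raw_size_bytes = 1000
encoded_size_bytes = 150
reduction_pct = (1 - encoded_size_bytes / raw_size_bytes) * 100

# Maximum per-point deviation between raw and reconstructed data.
max_deviation = max(abs(r - x) for r, x in zip(raw, reconstructed))

# Root mean square error over the whole recording.
rmse = math.sqrt(sum((r - x) ** 2 for r, x in zip(raw, reconstructed)) / len(raw))
```

Here the encoded payload is 15% of the raw size, i.e. an 85% reduction, while the per-point deviation never exceeds 0.05. Comparing these numbers across models trained with different deviation settings is exactly how a user picks the model that fits the use case.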
Sample Use Case:
Enhancing Data Collection
The user captures the first sets of raw sensor data in the field, for example by driving. With this raw data, the user can develop and choose the most suitable reduction model: an optimal mix of sensor inaccuracy on one side and the requested compression performance on the other, so that all bandwidth, processing, latency and accuracy needs are met. The process starts by uploading the sample data to DevCenter, then training and evaluating different models on the dataset, and checking which model is most suitable based on the resulting encoded payload size and error tolerance. Once the best model is selected, Teraki provides its customers with a Software Development Kit (SDK) that includes the encoder and/or decoder libraries, in addition to the trained model, for an embedded system implementation, e.g. as direct pre-processing of sensor data in the car.