AkiSens
Adaptive AI-Based Real-Time Analysis of High-Frequency Sensor Data
Introduction
AkiSens is a joint research project of the Technical University of Munich and the IfTA GmbH, funded by the Bavarian Ministry of Economic Affairs, Regional Development and Energy in the context of the Bavarian Collaborative Research Program (BayVFP) and managed by the VDI/VDE Innovation + Technik GmbH.
The goal of this research project is to develop AI-based methods suitable for the real-time analysis of high-frequency sensor data.
Description
High-frequency environmental sensing is on the rise. More and more machines and devices, from industrial facilities and power plants to consumer devices, carry a large variety of integrated sensors that enable them to collect data about their environment and state of operation. High-frequency sensor signals in particular, as generated, for example, by laser sensors with sampling rates of several million samples per second, are valuable for the operation of the considered devices.
In order to extract meaningful information from the raw sensor data, sophisticated and adaptive algorithms are needed to process and interpret the data and to derive actions based on the analysis results. This adaptive sensor signal processing can be accomplished with AI- and machine-learning-based methods, enabling the device to detect and classify certain situations and to react accordingly. Neural Network-based approaches have recently dominated the leaderboards of many machine learning benchmarks, achieving state-of-the-art results.
However, due to their deeply stacked architectures, typical implementations of Neural Network models, including Deep Fully-Connected Neural Networks, Recurrent Neural Networks, Long Short-Term Memory networks, and Transformers, are not suitable for time-critical real-time processing of high-frequency sensor signals, as they are not capable of running several million inferences per second.
Thus, implementing AI methods in hardware and applying them to such high-frequency data remains challenging and requires further research.
Application
A prominent example considered in this research project is the processing of laser sensor data measured at two gears attached to the two ends of a turbine shaft.
While the turbine shaft and the attached gears rotate during operation, the two laser sensors detect the gears' positions, producing two square-wave signals, each sampled at 4 MHz.
During the turbine's operation, the shaft can be subject to torsional oscillations and vibrations, which, above a certain level, can damage the turbine. It is therefore desirable to measure the torque and the torsional oscillations with high accuracy using the two sensor signals.
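A twist of the shaft shifts the two gear signals relative to each other, so the torsion angle can, in principle, be estimated from the time offset between corresponding signal edges, and the torque then follows from the twist via the shaft's torsional stiffness. The following is a minimal NumPy sketch of this idea, not the project's actual method; the sampling rate, shaft frequency, and naive one-to-one edge pairing are illustrative assumptions:

```python
import numpy as np

def rising_edges(signal, fs):
    """Timestamps (seconds) of rising edges in a square signal."""
    idx = np.flatnonzero((signal[:-1] < 0.5) & (signal[1:] >= 0.5))
    return (idx + 1) / fs

def twist_angle(sig_a, sig_b, fs=4e6, f_shaft=50.0):
    """Estimate the shaft twist angle (radians) from the average time
    offset between corresponding edges of the two gear signals."""
    edges_a = rising_edges(sig_a, fs)
    edges_b = rising_edges(sig_b, fs)
    n = min(len(edges_a), len(edges_b))
    dt = np.mean(edges_b[:n] - edges_a[:n])  # average edge offset in seconds
    # During dt, the shaft rotates by 2*pi*f_shaft*dt; this rotation
    # offset between the two gear positions is the twist of the shaft.
    return 2 * np.pi * f_shaft * dt
```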
However, the shape of the sensor signals can vary drastically under the influence of shaft movements, material wear, and dirt particles, so that, for example, simple thresholding techniques are insufficient. Classic, deterministic signal processing methods alone are therefore not enough. AI-based methods can be exploited in this scenario to overcome these issues and adapt to changing operating conditions.
In order to meet the real-time requirement, the signal processing should run on an FPGA, which requires the considered models to be simple to implement and highly parallelizable.
Our Contributions
In the context of this research project, we develop and improve reservoir computing models as an adaptive AI method for high-frequency sensor signal processing on FPGAs. We optimize the developed models for implementation simplicity on the one hand and effectiveness on the other, aiming at a well-balanced compromise between throughput and accuracy.
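To make the model family concrete: an Echo State Network combines a fixed, randomly initialized recurrent reservoir with a trainable linear readout, so that training reduces to linear regression. Below is a minimal NumPy sketch of this structure; all hyperparameters and names are illustrative and do not reflect the project's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

class ESN:
    """Minimal Echo State Network: fixed random reservoir, trainable
    linear readout fitted with ridge regression."""

    def __init__(self, n_in, n_res=200, spectral_radius=0.9, leak=1.0):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale so the largest eigenvalue magnitude equals spectral_radius,
        # a common heuristic for obtaining the echo state property.
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W, self.leak = W, leak
        self.W_out = None

    def _states(self, U):
        """Run the input sequence U of shape (T, n_in) through the reservoir."""
        x = np.zeros(self.W.shape[0])
        X = np.empty((len(U), len(x)))
        for t, u in enumerate(U):
            x = (1 - self.leak) * x + self.leak * np.tanh(self.W_in @ u + self.W @ x)
            X[t] = x
        return X

    def fit(self, U, Y, ridge=1e-6):
        X = self._states(U)
        # Ridge-regularized least squares for the readout weights only.
        self.W_out = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)

    def predict(self, U):
        return self._states(U) @ self.W_out
```

Because only the linear readout is trained and the reservoir update is a fixed matrix-vector operation, models of this kind are comparatively simple to implement and to parallelize, which motivates their use on FPGAs.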
In our research, we address the following topics:
- Reservoir Computing and Echo State Networks as a feasible architecture for machine learning models with high inference rates
- Cellular Automata as simple reservoirs in Reservoir Computing models and the analysis of their dynamics when described as linear mappings over Galois fields and rings (see the sketch after this list)
- Implementation of such models on FPGAs
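As a small illustration of the linear-CA view mentioned in the second item: elementary rule 90 updates every cell to the XOR of its two neighbors, which is a linear map over the Galois field GF(2) and can therefore be written as multiplication by a circulant matrix. The sketch below only demonstrates this equivalence; the lattice size and example state are arbitrary:

```python
import numpy as np

def rule90_step(state):
    """One step of elementary CA rule 90: each cell becomes the XOR of
    its left and right neighbors (periodic boundary)."""
    return np.roll(state, 1) ^ np.roll(state, -1)

def rule90_matrix(n):
    """The same update as a circulant matrix A over GF(2):
    next_state = (A @ state) mod 2."""
    A = np.zeros((n, n), dtype=np.uint8)
    for i in range(n):
        A[i, (i - 1) % n] = 1  # left neighbor
        A[i, (i + 1) % n] = 1  # right neighbor
    return A

state = np.array([0, 0, 1, 0, 0, 1, 0, 1], dtype=np.uint8)
assert np.array_equal(rule90_step(state), (rule90_matrix(8) @ state) % 2)
```

Since the update is linear, properties such as cycle lengths and transients can be studied through the algebra of this matrix over GF(2), which is the kind of analysis referred to above.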
Open Student Work
Current Student Work
FPGA Implementations of RNNs: A Survey
Description
Field-programmable gate array (FPGA) implementations of recurrent neural networks (RNNs) are crucial because they provide high performance at low power consumption, making them ideal for real-time applications and embedded systems. Recent advances have shown that FPGAs can outperform traditional platforms such as GPUs in terms of energy efficiency while maintaining comparable accuracy.
In this seminar topic, your task is to introduce and summarize recent approaches for FPGA-based RNN accelerators. Furthermore, you should compile a comparison of the different implementations with respect to resource usage (lookup tables (LUTs), registers, digital signal processors (DSPs), power dissipation) and performance (predictions per second, real-time capability).
Outline:
- Literature Review: Get an overview of recent advances in FPGA implementations of RNNs
- Comparative Analysis: Summarize and compare the concepts of the most important implementations concerning resource usage and performance
- Scientific Writing: Compose your findings in a paper, resulting in a concise overview and comparison
- Presentation: Present your findings to the other members of the seminar
Prerequisites
- Be familiar with deep learning, especially recurrent neural network architectures
- Be familiar with FPGAs
Enhancement of Vehicle Control Systems using Time-Series-Prediction
Time Series Prediction, Machine Learning, Neural Networks
Description
Summary:
Current vehicle dynamics control systems regulate various vehicle state variables using classic PID control methods by comparing desired and actual states. The quality of such a controller can only be improved to a limited extent through parameter optimization, as the control is based solely on measured actual states. A conventional approach to solving this problem involves using a highly complex physical model to predict the future behavior of a signal based on known input parameters, thereby improving controller performance.
However, as such a model far exceeds the hardware limitations of a vehicle control unit, an alternative solution is to make the predictions with a machine learning model. This research aims to investigate the feasibility and quality of such machine learning predictions, and the resulting closed-loop control quality, using motorcycle traction control as an example.
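For reference, the classic loop described above reacts only to the error between desired and already-measured states, roughly as in this textbook PID sketch (gains, time step, and interface are illustrative, not the actual vehicle controller):

```python
class PID:
    """Textbook discrete PID controller acting on the measured error."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement  # desired vs. actual state
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

A learned time-series predictor would instead supply an estimate of the future signal, allowing the controller to act before the error materializes.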
Methodology:
The proposed methodology involves developing a time-series prediction approach, potentially utilizing sequence-to-sequence classification, e.g., to determine the road surface, tire types, loading conditions, and other parameters as input for the time-series prediction. To achieve this, various suitable model architectures (e.g., LSTM, GRU, Transformer, Reservoir Computing) will be identified in the literature, and appropriate signals and datasets will be selected from existing vehicle data. The models will then be verified open-loop in simulation, and the most suitable method and relevant data will be identified. If the simulation results are positive, the model will be implemented in a real-time hardware environment to test closed-loop performance.
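As a concrete starting point for the open-loop verification step, one of the named architectures (here an LSTM) could be set up roughly as follows. This is a hypothetical sketch: the class name, feature count, window length, and prediction horizon are assumptions for illustration, not the project's chosen configuration:

```python
import torch
import torch.nn as nn

class TimeSeriesPredictor(nn.Module):
    """Illustrative LSTM mapping a window of past samples (sensor
    values plus context features) to the next `horizon` samples."""

    def __init__(self, n_features, horizon, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):            # x: (batch, window, n_features)
        _, (h, _) = self.lstm(x)     # h: (num_layers, batch, hidden)
        return self.head(h[-1])      # (batch, horizon)

model = TimeSeriesPredictor(n_features=4, horizon=20)
window = torch.randn(8, 100, 4)      # batch of 8 windows, 100 time steps
prediction = model(window)           # shape: (8, 20)
```

Such a model can be trained and evaluated open-loop against recorded vehicle data before any closed-loop hardware test.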
Research Questions:
- Is predicting sensor signals possible using time-series prediction in an open-loop system?
- What data and model are necessary to enable robust prediction?
- Is control based on prediction possible in a closed-loop system?
Contact
Email: Florian.huelsmann@bmw.de
Completed Student Work
Publications
Preprints
J. Kantic, F. C. Legl, W. Stechele, and J. Hermann, "ReLiCADA - Reservoir Computing using Linear Cellular Automata Design Algorithm," arXiv preprint, 2023, eprint: arXiv:2308.11522, DOI: 10.48550/arXiv.2308.11522.