Advanced Seminar Embedded Systems and Internet of Things
It is mandatory to attend all lectures of our Advanced Seminar in person to complete the course successfully. Virtual attendance is not possible.
Application Process
Due to the high interest in our seminar topics, we use an application process to assign them.
If you are interested in one of the topics below, please send your application, together with your CV and your transcript of records, to seminar.esi.ei(at)tum.de. Express your interest and explain why you want that specific topic and why you think you are the most suitable candidate for it. This allows us to choose the best-suited applicant for each topic, maximizing the seminar's learning outcome and avoiding dropouts.
Additionally, you can indicate a second topic you would like to take, so that we can still find a topic for you if your primary choice is not available.
Note: We do not assign topics on a first-come, first-served basis. Even though we appreciate early interest, asking or applying early for a topic does not guarantee you a seat. Generally, we have 3-4 applicants per topic. Please consider carefully whether you are able to do the required work, as accepting you means we have to reject other students. Rest assured, email clients remember the people you have communicated with.
Kick-off meeting
This semester, the seminar will be conducted in person. This means that you must attend the on-campus classes and presentations listed on the Moodle page. Additionally, you can schedule weekly meetings with your supervisor via Zoom or on campus. Lecture materials and videos will be available on Moodle.
The kick-off meeting will take place on the 23rd of April at 9:45 on campus. We ask all selected participants to be present at the kick-off meeting. Please notify us in case you cannot make it; otherwise, we will assume that you are no longer interested and give your place to another applicant.
Topics
Title: Is the World Only Black and White? Analysis of Grayscale Image Object Detection for Autonomous Driving
Description: For the development of automated driving, cameras are important sensors for perceiving the environment. While RGB cameras are state of the art in multimedia applications, autonomous vehicles often use different color filter patterns. Although color images carry color information, their signal-to-noise ratio and low-light sensitivity are typically worse. We want to analyze whether pure grayscale ("black-and-white") images can be an alternative to RGB images in autonomous vehicles, increasing robustness while still performing well in perception algorithms, especially with respect to robustness against physical adversarial samples. Most research on grayscale image object detection is already several years old (e.g., [1]), but the scientific community has recently rediscovered this topic [2, 3].
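As a minimal illustration of the conversion in question, a grayscale value can be derived from an RGB pixel with the common ITU-R BT.601 luma weights (a sketch only; a native monochrome sensor would capture luminance directly and avoid demosaicing altogether):

```python
def rgb_to_gray(r: float, g: float, b: float) -> float:
    """Map one RGB pixel to a single luminance value using the
    ITU-R BT.601 luma weights (0.299, 0.587, 0.114) -- one common
    convention among several (e.g., BT.709 uses different weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure red, green, and blue map to noticeably different gray levels,
# so some of the color contrast survives the conversion.
red_level = rgb_to_gray(255, 0, 0)    # ~76.2
green_level = rgb_to_gray(0, 255, 0)  # ~149.7
blue_level = rgb_to_gray(0, 0, 255)   # ~29.1
```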
References: [1] J. Fasola and M. Veloso, “Real-time object detection using segmented and grayscale images,” in Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006, 2006, pp. 4088–4093. [2] M. Tasyurek and E. Gul, “A new deep learning approach based on grayscale conversion and DWT for object detection on adversarial attacked images,” J. Supercomput., vol. 79, no. 18, pp. 20383–20416, 2023. [3] X. Dai, X. Yuan, and X. Wei, “TIRNet: Object detection in thermal infrared images for autonomous driving,” Appl. Intell., vol. 51, no. 3, pp. 1244–1261, 2021.
Supervisor: Michael
Status: Free
Title: Processing Pipeline Attacks on Radar Sensors for Automated Driving
Description: For the development of automated driving, radar is an important sensor for perceiving the environment. The complexity of the data processing pipeline (from the environment to the software layer) [2] gives attackers many opportunities to manipulate data and disturb potentially safety-critical tasks of autonomous vehicles. Existing work [1, 3] often summarizes attacks on radar sensors without providing context on their level of operation, although such knowledge is necessary to understand both the attacks and possible countermeasures. As part of this work, attacks on radar sensors should be put into context by comparing, analyzing, and classifying them.
References: [1] M. A. Vu, W. C. Headley, and K. P. Heaslip, “A comparative overview of automotive radar spoofing countermeasures,” in 2022 IEEE International Conference on Cyber Security and Resilience (CSR), 2022. [2] S. Sun, A. P. Petropulu, and H. V. Poor, “MIMO radar for advanced driver-assistance systems and autonomous driving: Advantages and challenges,” IEEE Signal Process. Mag., vol. 37, no. 4, pp. 98–117, 2020. [3] Z. Sun, S. Balakrishnan, L. Su, A. Bhuyan, P. Wang, and C. Qiao, “Who is in control? Practical physical layer attack and defense for mmWave-based sensing in autonomous vehicles,” IEEE Trans. Inf. Forensics Secur., vol. 16, pp. 3199–3214, 2021.
Supervisor: Michael, Marco
Status: Free
Title: Secure Multiparty Computation for Collaborative Perception in Autonomous Vehicles
Description: Collaborative perception is a cutting-edge paradigm in autonomous systems designed to enhance the perception capabilities of individual vehicles through the exchange of perception data with other vehicles. However, this data sharing poses significant privacy risks for the vehicles involved. Secure Multiparty Computation (SMC) is a privacy-preserving technology that enables parties to compute functions jointly while keeping their individual inputs private and ensuring fairness.
This seminar aims to explore the potential integration of SMC within the framework of collaborative perception. We will focus on the requirements for a successful implementation and conduct a feasibility analysis of achieving collaborative perception in a privacy-respecting manner.
References: [1] T. Li, L. Lin, and S. Gong, "AutoMPC: Efficient Multi-Party Computation for Secure and Privacy-Preserving Cooperative Control of Connected Autonomous Vehicles," 2019.
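One classic building block behind SMC is additive secret sharing, which the following toy sketch illustrates (the modulus, party count, and "vehicle" scenario are illustrative assumptions; a real protocol would also need secure channels and protection against malicious parties):

```python
import random

Q = 2**31 - 1  # public modulus; a toy parameter, not production-grade

def share(secret: int, n: int = 3) -> list[int]:
    """Split a secret into n additive shares modulo Q.
    Any n-1 shares together reveal nothing about the secret."""
    shares = [random.randrange(Q) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine all shares to recover the secret."""
    return sum(shares) % Q

# Two (hypothetical) vehicles jointly compute the sum of their private
# detected-object counts: each secret-shares its input, and the
# compute parties simply add the shares they hold, locally.
a_shares = share(12)
b_shares = share(30)
sum_shares = [(x + y) % Q for x, y in zip(a_shares, b_shares)]
joint_count = reconstruct(sum_shares)  # 42, without revealing 12 or 30
```

Addition is "free" in this scheme because it happens share-by-share; multiplication is where real SMC protocols pay their communication cost.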
Supervisor: Marco
Status: Free
Title: Comparative Analysis of Radar and LiDAR Sensor Security for Sensor Fusion in Autonomous Vehicles
Description: This seminar will delve into the critical role of Radar and LiDAR sensors in achieving Level 5 autonomy in autonomous vehicles. We will explore the similarities and differences between these sensors, focusing on their spatial information, depth dimensions, and signal strength capabilities. Additionally, we will discuss the security implications of potential attacks on these sensors and whether such attacks can be translated between them. The seminar will also examine the strengths and weaknesses of fusing LiDAR and Radar data with other sensors, such as RGB cameras, for sensor fusion applications like collaborative perception.
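To make the fusion idea concrete, the following sketch shows inverse-variance weighting of two independent range measurements, the static one-dimensional special case of a Kalman update (the sensor names and numbers are illustrative assumptions; real pipelines fuse full object states, often with learned models as in [2]):

```python
def fuse(z1: float, var1: float, z2: float, var2: float):
    """Fuse two independent measurements of the same quantity by
    weighting each with the inverse of its variance. The fused
    estimate is always at least as certain as either input, which is
    also why a single spoofed sensor can drag the result off-target."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return z, var

# Hypothetical readings: LiDAR says 10.0 m (variance 0.01),
# radar says 10.4 m (variance 0.04); the fused range leans toward
# the more precise LiDAR measurement.
z, var = fuse(10.0, 0.01, 10.4, 0.04)  # ~10.08 m, variance 0.008
```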
References:
[1] M. A. Vu, W. C. Headley, and K. P. Heaslip, “A comparative overview of automotive radar spoofing countermeasures,”
in 2022 IEEE International Conference on Cyber Security and Resilience (CSR), 2022.
[2] R. Ravindran, M. J. Santora, and M. M. Jamali, "Camera, LiDAR, and Radar Sensor Fusion Based on Bayesian Neural Network (CLR-BNN)," IEEE Sensors Journal, vol. 22, no. 7, pp. 6964-6974, 1 April 2022, doi: 10.1109/JSEN.2022.3154980.
Supervisor: Marco, Michael
Status: Free
Title: Investigating Cost Representations and Planning Techniques for Multi-Dimensional Costs in IoT Environments
Description: The cost of actions can be a critical factor in reasoning and decision-making, allowing systems to compare different options and optimize outcomes. IoT devices often involve multiple cost dimensions, such as time, money, and resources. In this topic, the student should investigate existing methods for representing costs and explore planning techniques that take these costs into account. The research should include a review of general approaches to cost representation, such as semantic models, and analyze their applicability and adaptability to IoT environments, especially as a representation in the Web of Things Thing Description.
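One simple way to reason about multi-dimensional costs without collapsing them into a single number is Pareto dominance, sketched below (the cost dimensions and values are hypothetical; a real representation would tie units and semantics to a vocabulary, e.g. as an extension of a Thing Description):

```python
from dataclasses import dataclass

@dataclass
class Cost:
    """Multi-dimensional cost of an IoT action. The three dimensions
    here (time, energy, money) are illustrative assumptions."""
    time_s: float
    energy_j: float
    money_eur: float

    def dominates(self, other: "Cost") -> bool:
        """Pareto dominance: no worse in any dimension and strictly
        better in at least one. Unlike a weighted sum, this needs no
        exchange rate between, say, seconds and euros."""
        no_worse = (self.time_s <= other.time_s
                    and self.energy_j <= other.energy_j
                    and self.money_eur <= other.money_eur)
        strictly_better = (self.time_s < other.time_s
                           or self.energy_j < other.energy_j
                           or self.money_eur < other.money_eur)
        return no_worse and strictly_better

# Action a is cheaper and faster than b at equal energy, so a
# dominates b; a planner could prune b without weighing dimensions.
a = Cost(time_s=2.0, energy_j=5.0, money_eur=0.01)
b = Cost(time_s=3.0, energy_j=5.0, money_eur=0.02)
```

When neither action dominates the other, the planner is left with a Pareto front, and that is exactly where additional preference information (weights, priorities, constraints) has to come from.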
References:
- www.w3.org/TR/wot-thing-description11/
- https://link.springer.com/article/10.1007/s00170-020-05068-5
Supervisor: Roman
Status: Free
Title: Challenges and Advances in WiFi-TSN
Description: This topic aims to explore the integration of WiFi with Time-Sensitive Networking (TSN), highlighting the challenges, recent advances, and potential applications in industries requiring deterministic wireless communication. The student will perform the following tasks:
1. Find papers covering the latest advancements in WiFi-TSN.
2. Read and understand all the related work.
3. Identify the technical challenges in WiFi-TSN and their advances and solutions.
References:
1. www.mdpi.com/1424-8220/21/15/4954
2. https://ieeexplore.ieee.org/abstract/document/10034532
Supervisor: Rubi
Status: Free