Patients who have suffered an accident or stroke often must undergo extensive rehabilitation to regain the motor skills needed for an independent and self-determined life. In contrast to human physical therapists, robotic rehabilitation systems can tirelessly and precisely deliver intensive training over long periods of time while accurately measuring the patient's performance and improvement.
As a team of researchers at TUM, in collaboration with partners across Europe, we are developing the control of an upper-body exoskeleton for the ReHyb project, using shared control strategies that rely on model-based descriptions of the robotic system and data-driven system identification of the human. Our goal is to develop a patient-specific, assist-as-needed device for rehabilitation and activities of daily living.
Human Intention Estimation
With recent advances in robotic technologies, lightweight robots are becoming more accessible and are increasingly deployed in close proximity to humans. For humans and robots to cooperate effectively in previously unspecified contexts, the robotic partner needs the capacity to infer the human's intent during a task and adapt its behavior accordingly.
One common approach to intention estimation is the inverse reinforcement learning (IRL) framework. However, methods based on this approach rely on an intrinsic assumption that the observed agent behaves optimally, and therefore generalize poorly to suboptimal and learning agents.
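The optimality assumption can be made concrete with a toy sketch (a hypothetical example, not the method from any publication below): the learner models the demonstrator as Boltzmann-rational with respect to an unknown state reward and fits the reward by maximizing the likelihood of the observed actions. When the demonstrator really is (near-)optimal this recovers a sensible reward; for a suboptimal agent, the same assumption becomes the source of error.

```python
import numpy as np

# Toy IRL sketch (hypothetical example): a 5-state chain MDP where the
# demonstrator always moves right, towards a goal at the last state. The
# learner assumes the demonstrator is Boltzmann-rational w.r.t. an unknown
# state reward and recovers reward weights by maximum likelihood.

N_STATES, GAMMA, BETA = 5, 0.9, 5.0  # chain length, discount, rationality

def step(s, a):
    """Deterministic transitions: a=0 moves left, a=1 moves right."""
    return max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)

def q_values(reward):
    """Value iteration for a state-dependent reward."""
    V = np.zeros(N_STATES)
    for _ in range(60):
        Q = np.array([[reward[step(s, a)] + GAMMA * V[step(s, a)]
                       for a in (0, 1)] for s in range(N_STATES)])
        V = Q.max(axis=1)
    return Q

def log_likelihood(reward, demos):
    """Log-likelihood of demos under a Boltzmann-rational policy model."""
    Q = q_values(reward)
    logp = BETA * Q - np.log(np.exp(BETA * Q).sum(axis=1, keepdims=True))
    return sum(logp[s, a] for s, a in demos)

# Demonstrations: the agent moves right from every non-goal state.
demos = [(s, 1) for s in range(N_STATES - 1) for _ in range(3)]

# Recover reward weights by finite-difference gradient ascent.
w = np.zeros(N_STATES)
for _ in range(50):
    grad = np.zeros(N_STATES)
    for i in range(N_STATES):
        e = np.zeros(N_STATES)
        e[i] = 1e-4
        grad[i] = (log_likelihood(w + e, demos)
                   - log_likelihood(w - e, demos)) / 2e-4
    w += 0.1 * grad

print(np.argmax(w))  # state with the largest recovered reward
```

If the demonstrator were instead injured and only able to move right on some attempts, the same likelihood model would misattribute the failures to the reward function rather than to limited capability, which is precisely the failure mode motivating the work below.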
My research efforts center on generalizing IRL frameworks to suboptimal agents, such as humans with injuries or limited movement capabilities, by combining insights from control theory with data-driven learning methods.
This gives rise to a multitude of interesting research questions, such as:
Designing inverse reinforcement learning algorithms based on suboptimal demonstrations
Providing guaranteed bounds on predicted intent and convergence behavior
Adapting inferred cost functions for agents with time-varying control policies
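One way to relax the optimality assumption, in the spirit of the control Lyapunov approach in the publications below, is to assume only that demonstrations are stabilizing, i.e. that they drive the system toward a goal, and to certify this with a candidate Lyapunov function. The sketch below is a hypothetical toy example (linear dynamics, noise level, and the quadratic candidate are all assumptions, not the published method): it checks that V(x) = xᵀPx decreases along a noisy, suboptimal demonstration.

```python
import numpy as np

# Hypothetical sketch: certify a noisy, suboptimal demonstration as
# *stabilizing* (rather than optimal) with a quadratic Lyapunov candidate
# V(x) = x' P x that should decrease along the observed trajectory.

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [0.0, 0.8]])  # assumed stable closed-loop dynamics

# Simulated demonstration: converges to the origin, but not optimally.
traj = [np.array([1.0, -1.0])]
for _ in range(30):
    traj.append(A @ traj[-1] + 0.005 * rng.standard_normal(2))

# Candidate from the discrete Lyapunov equation P - A' P A = Q,
# solved via vectorisation: (I - kron(A', A')) vec(P) = vec(Q).
Q = np.eye(2)
P = np.linalg.solve(np.eye(4) - np.kron(A.T, A.T), Q.reshape(-1)).reshape(2, 2)

V = lambda x: float(x @ P @ x)
decreases = sum(V(traj[k + 1]) < V(traj[k]) for k in range(len(traj) - 1))
print(f"V decreases on {decreases} of {len(traj) - 1} steps")
```

A stability certificate of this kind holds for every stabilizing demonstration, optimal or not, which is what makes it a weaker and more realistic assumption than optimality for agents with impaired movement.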
Open theses (Bachelor / Master / IP / FP)
FP / MA: Learning Human Motion Models using Inverse Reinforcement Learning [PDF]
FP / MA: Model Learning and Action Estimation in Human-Exoskeleton Shared Control [PDF]
Please feel free to contact me via e-mail if any of the topics above interest you.
I am always looking for motivated students who are interested in my research, so if none of the above topics fits your specific interests or you have a proposal of your own, don't hesitate to contact me.
Please include your transcript of records, CV (if available) and your preferred starting date in your e-mail.
Publications
2023
Römer, R.; Lederer, A.; Tesfazgi, S.; Hirche, S.: Vision-Based Uncertainty-Aware Motion Planning Based on Probabilistic Semantic Segmentation. IEEE Robotics and Automation Letters 8 (11), 2023, 7825-7832
Tesfazgi, S.; Sangouard, R.; Endo, S.; Hirche, S.: Uncertainty-aware Automated Assessment of the Arm Impedance with Upper-limb Exoskeletons. Frontiers in Neurorobotics 17, 2023, 1167604
2022
Lederer, A.; Zhang, M.; Tesfazgi, S.; Hirche, S.: Networked Online Learning for Control of Safety-Critical Resource-Constrained Systems based on Gaussian Processes. Proceedings of the IEEE Conference on Control Technology and Applications, 2022
2021
Tesfazgi, S.; Lederer, A.; Hirche, S.: Inverse Reinforcement Learning: A Control Lyapunov Approach. Proceedings of the 60th Conference on Decision and Control (CDC), 2021
2020
Köpf, F.; Tesfazgi, S.; Flad, M.; Hohmann, S.: Deep Decentralized Reinforcement Learning for Cooperative Control. IFAC-PapersOnLine, 2020