Overview
The Human Activity Understanding research at LMT builds models of human interaction with the environment using Computer Vision, Sensor Fusion, and AI techniques. The insights gained from these models can be used to build technology that improves human well-being, comfort and convenience.
Current focus: Understanding Human-Object Interactions using Computer Vision and Machine Learning
Under this topic, we build models of human-object interactions using camera data (RGB and Depth) recorded in indoor environments. Depending on availability, wearable sensors (e.g. Inertial Measurement Units) or sensors installed in the environment (RFID, motion detectors, etc.) may be fused with the RGB-D data.
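As a rough illustration of the kind of multimodal fusion mentioned above, the sketch below shows a naive late-fusion scheme: per-frame RGB-D statistics and IMU window statistics are concatenated and scored by a linear classifier. The feature extractors, activity classes, and weights are placeholders for illustration only, not the models actually used in this research.

```python
# Minimal late-fusion sketch (hypothetical placeholders, not the group's pipeline).
import numpy as np

rng = np.random.default_rng(0)

def rgbd_features(rgb_frame, depth_frame):
    """Placeholder for a visual feature extractor (e.g. a CNN backbone)."""
    return np.concatenate([rgb_frame.mean(axis=(0, 1)),   # 3 colour-channel means
                           [depth_frame.mean()]])         # 1 mean depth value

def imu_features(accel, gyro):
    """Placeholder for wearable-sensor features (simple window statistics)."""
    return np.concatenate([accel.mean(axis=0), gyro.mean(axis=0)])  # 6 values

# Toy inputs: one RGB frame, one depth frame, and a short IMU window.
rgb = rng.random((48, 64, 3))
depth = rng.random((48, 64))
accel = rng.random((50, 3))
gyro = rng.random((50, 3))

# Late fusion: concatenate the per-modality descriptors into one vector.
fused = np.concatenate([rgbd_features(rgb, depth), imu_features(accel, gyro)])

# Score a few hypothetical activity classes with random linear weights.
classes = ["prepare meal", "set table", "clean up"]
weights = rng.random((len(classes), fused.size))
scores = weights @ fused
print("predicted activity:", classes[int(np.argmax(scores))])
```

In practice, the placeholder extractors would be replaced by learned models per modality, with fusion performed either at the feature level as shown here or at the decision level.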
HAU researchers at LMT
M.Sc. Constantin Patsch, M.Sc. Yuankai Wu, M.Sc. Marsil Zakour, Dr.-Ing. Rahul Chaudhari
Videos
Below are some impressions from our ongoing work on the 3D human activity simulator (Zakour, Marsil; Mellouli, Alaeddine; Chaudhari, Rahul: HOIsim: Synthesizing Realistic 3D Human-Object Interaction Data for Human Activity Recognition. 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), 2021).