November 11th, 10:30-11:30, in room 2026, Karlstr. 45.
Jiayi Wang:
Legged robots possess immense potential to inspect and traverse complex terrains such as disaster sites and industrial environments, freeing humans from dangerous tasks. When facing disturbances, legged robots must be able to replan their motions online. However, online motion planning is often prohibitively expensive for legged robots due to high dimensionality, non-linearity, and combinatorial complexity (determining the sequence of contacts). To overcome these issues, we focus on reducing the computational complexity of the legged locomotion planning problem. Specifically, to tackle the high dimensionality and non-linearity, we propose relaxing the accuracy of the dynamics constraint along the planning horizon and employing machine learning techniques to extract a value function from past experiences. For the combinatorial complexity, we propose leveraging machine learning and mixed-integer non-linear optimization to establish gait-pattern selection maps offline. The effectiveness of the proposed approaches has been demonstrated on the humanoid robot Talos and the quadruped robot ANYmal. Notably, our method enabled Talos to perform online locomotion planning on uneven terrain, giving the robot the capability to adapt its motions in response to unexpected environmental changes.
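The offline gait-pattern selection map can be pictured as a precomputed lookup: expensive optimization is run offline for each terrain class, and the online planner only queries the result. The sketch below is purely illustrative, not the talk's implementation: the gait names, terrain features, and the `surrogate_cost` function standing in for the mixed-integer non-linear program are all hypothetical.

```python
# Illustrative sketch (NOT the authors' method): build an offline "gait
# selection map" by scoring hypothetical candidate gaits on a discretized
# terrain grid, then query the map online in O(1) instead of re-solving
# a mixed-integer non-linear program.
from itertools import product

GAITS = ["walk", "trot", "pace"]  # hypothetical candidate gaits

def surrogate_cost(gait, slope, roughness):
    """Stand-in for the offline mixed-integer NLP: a cheap per-gait cost."""
    base = {"walk": 1.0, "trot": 0.6, "pace": 0.8}[gait]
    terrain_penalty = {"walk": 0.1, "trot": 0.5, "pace": 0.3}[gait]
    return base + terrain_penalty * (slope + roughness)

def build_gait_map(slopes, roughnesses):
    """Offline phase: store the cheapest gait for every terrain cell."""
    return {
        (s, r): min(GAITS, key=lambda g: surrogate_cost(g, s, r))
        for s, r in product(slopes, roughnesses)
    }

gait_map = build_gait_map(slopes=[0, 1, 2], roughnesses=[0, 1, 2])
# Online phase: snap measured terrain features to the nearest grid cell.
print(gait_map[(0, 0)])  # easy terrain -> fast gait ("trot" under this toy cost)
print(gait_map[(2, 2)])  # steep, rough terrain -> conservative gait ("walk")
```

In practice the map would be learned (e.g. a classifier trained on many offline solves) rather than an exhaustive grid, but the offline/online split is the same.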
Namiko Saito:
To realize robots that support various daily tasks, I focus on tool use. To use tools, it is essential to acquire the relationships among tools, objects, effects, and actions. In this presentation, I propose a deep learning model that acquires these relationships in its latent space and can generate actions according to them. With this model, a robot can detect the target effect of the task and the characteristics of the object to be manipulated, select the appropriate tool, and flexibly adjust its actions. The robot completed tasks even when the tools, objects, and effects were unknown.
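The core idea of selecting a tool and action from a desired effect can be illustrated with a toy retrieval analogue. This is a stand-in only, not the presented deep learning model: the tools, actions, effect axes, and stored "experience" below are all invented for illustration, and a simple nearest-neighbor lookup replaces the learned latent space.

```python
# Toy stand-in (NOT the presented model): encode (tool, action) -> effect
# pairs from hypothetical past experience, then pick the pair whose stored
# effect lies closest to the target effect - a retrieval analogue of
# reading tool-object-effect-action relationships off a latent space.
from math import dist

# Hypothetical experience: (tool, action) -> observed effect vector,
# with made-up effect axes (move-amount, spread).
experience = {
    ("spoon", "scoop"): (0.9, 0.1),
    ("knife", "cut"): (0.2, 0.9),
    ("spatula", "push"): (0.7, 0.4),
}

def select_tool_and_action(target_effect):
    """Pick the (tool, action) whose stored effect best matches the target."""
    return min(experience, key=lambda k: dist(experience[k], target_effect))

print(select_tool_and_action((0.8, 0.2)))  # -> ('spoon', 'scoop')
```

The deep model generalizes beyond this lookup: because relationships are encoded continuously in the latent space, it can interpolate to unseen tools, objects, and effects rather than only retrieving seen ones.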