Research Agenda at the ATARI Lab

The Applied and Theoretical Aspects of Robot Intelligence (ATARI) Lab envisions a future where humanoid robots autonomously perform complex tasks in dynamic and unstructured environments. Consider a scenario where a robot tidies a cluttered house each day, encountering different configurations and challenges. This vision requires the robot to:

  • Plan sequences of actions (O1) in novel situations.
  •   Learn to execute and adapt plans dynamically (O2) while accounting for execution failures.
  • Interact with diverse objects and surfaces (O3) reliably and effectively.

No existing framework offers the generality, adaptability, and robustness required for such tasks. At ATARI, we develop foundational principles and frameworks to endow robots with these capabilities by integrating learning-based and model-based approaches. From imitation learning, reinforcement learning, and foundation models to model predictive control and trajectory optimization, we aim to harness the strengths of both paradigms to advance the frontiers of robotics.

To find videos of our research, check out our YouTube Channel.
 

Planning (O1)

Discovering the Right Abstractions

The power of abstraction lies in its ability to represent complex systems concisely. In vision, convolutional filters revolutionized learning by embedding spatial hierarchies, while in language, transformers demonstrated the utility of attention mechanisms. In robotics, we hypothesize that contacts with the environment are the key abstraction for planning and control. Developing algorithms that leverage this abstraction will enable robots to reason about physical interactions more effectively.

Decomposing Complex, Long-Horizon Tasks

Tasks such as cooking, cleaning, or assembling objects involve multiple interdependent subtasks, each requiring precise execution and adaptability. For instance, cooking an omelette involves actions like retrieving ingredients, cracking eggs, and frying. ATARI’s research focuses on hierarchical task decomposition, where high-level planners define sub-goals, and low-level controllers execute them reliably. Additionally, we investigate real-time replanning to ensure responsiveness to unexpected changes.
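
As a concrete illustration of this pattern, the minimal sketch below pairs a hypothetical high-level planner, which decomposes a task into an ordered list of subgoals, with a toy low-level executor that fails stochastically and triggers retries. The task, subgoal names, and failure rate are invented for illustration; this is not our actual system.

    import random

    def high_level_plan(task):
        """Hypothetical planner: maps a task to an ordered list of subgoals."""
        plans = {"make_omelette": ["retrieve_ingredients", "crack_eggs", "fry"]}
        return list(plans[task])

    def low_level_execute(subgoal):
        """Hypothetical controller: succeeds stochastically to mimic failures."""
        return random.random() > 0.2  # 80% success rate, purely illustrative

    def run(task, max_attempts=5):
        for subgoal in high_level_plan(task):
            for _ in range(max_attempts):
                if low_level_execute(subgoal):
                    break  # subgoal achieved, move on to the next one
            else:
                return False  # repeated failure: abort or replan at a higher level
        return True

    print("task succeeded:", run("make_omelette"))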

Breaking Free from Rigid Contact Models

Many state-of-the-art robotic models rely on rigid contact assumptions, limiting their applicability in dynamic and deformable environments. While reinforcement learning benefits from domain randomization for robustness, the underlying simulations remain grounded in rigid models. ATARI is exploring hybrid formulations that integrate soft-contact dynamics and probabilistic representations, offering a more nuanced approach to modeling and planning in real-world scenarios.
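
One common soft-contact formulation replaces the rigid complementarity condition with a compliant penalty force that grows smoothly with penetration depth. The minimal sketch below shows such a spring-damper normal force; the stiffness and damping constants are illustrative, not fitted to any material.

    def soft_contact_force(penetration, penetration_rate, k=1e4, d=50.0):
        """Compliant normal contact force (spring-damper penalty model).

        penetration      : overlap of the bodies in meters; <= 0 means no contact
        penetration_rate : time derivative of the penetration in m/s
        k, d             : illustrative stiffness (N/m) and damping (N*s/m)
        """
        if penetration <= 0.0:
            return 0.0  # bodies separated: no contact force
        force = k * penetration + d * penetration_rate
        return max(force, 0.0)  # contact can only push, never pull

    # Example: 1 mm of penetration, closing at 0.1 m/s
    print(soft_contact_force(1e-3, 0.1))  # -> 15.0 N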

Planning in Novel Situations

Effective planning in unstructured environments requires searching vast, high-dimensional state spaces and generating feasible action sequences in real time. ATARI Lab investigates methods for efficient learning in large state spaces and algorithms that scale across diverse scenarios.

Optimal Control vs. Deep Reinforcement Learning

Optimal control and reinforcement learning each have distinct strengths—optimal control provides precision and interpretability, while reinforcement learning excels in robustness and scalability. However, neither fully satisfies the needs of general-purpose robots. At ATARI, we investigate hybrid approaches that leverage the strengths of both, integrating data-driven generalization with model-based rigor to push the boundaries of robot autonomy.
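
One hybrid pattern we find instructive is sampling-based model predictive control whose action proposals are centered on a learned policy prior, so that data-driven knowledge narrows the model-based search. The sketch below illustrates this on a toy double integrator; the dynamics, cost, and hand-tuned linear "policy" are stand-ins for learned components, not a definitive implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, horizon, n_samples = 0.1, 15, 256

    def dynamics(state, action):
        """Toy double integrator: state = [position, velocity]."""
        pos, vel = state
        return np.array([pos + vel * dt, vel + action * dt])

    def cost(state, action):
        return state[0] ** 2 + 0.1 * state[1] ** 2 + 0.01 * action ** 2

    def policy_prior(state):
        """Stand-in for a learned policy; here just a hand-tuned linear law."""
        return -1.0 * state[0] - 1.5 * state[1]

    def mpc_action(state):
        best_cost, best_first_action = np.inf, 0.0
        for _ in range(n_samples):
            s, total, first = state.copy(), 0.0, None
            for _ in range(horizon):
                a = policy_prior(s) + rng.normal(0.0, 0.5)  # explore around prior
                if first is None:
                    first = a
                total += cost(s, a)
                s = dynamics(s, a)
            if total < best_cost:
                best_cost, best_first_action = total, first
        return best_first_action

    state = np.array([1.0, 0.0])
    for _ in range(50):
        state = dynamics(state, mpc_action(state))
    print("final state:", state)  # should approach the origin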

Learning (O2)

The Role of Inductive Biases

Inductive biases shape how learning algorithms prioritize exploration and prune infeasible solutions, reducing sample complexity. In robotics, biases rooted in physics, geometry, or contact dynamics can accelerate learning while maintaining generality. ATARI explores the design of task-agnostic inductive biases that facilitate efficient learning without over-constraining exploration, enabling robots to handle a broader range of tasks.

Training Generalizable Policies

Robot policies must generalize across diverse embodiments and tasks. In imitation learning, a major bottleneck is the lack of large-scale, high-quality data akin to internet-scale datasets for language models. For reinforcement learning, challenges include defining generic goal representations and reward structures that accommodate diverse contact-rich tasks. At ATARI, we develop methods to learn from limited data, including meta-learning, self-supervised learning, and techniques for synthesizing realistic training data across varied robotic platforms.
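
As one example of learning from limited data, first-order meta-learning in the style of Reptile (Nichol et al., 2018) trains an initialization that adapts to a new task in a few gradient steps. The sketch below applies it to a toy family of 1-D linear regression tasks; the task family and hyperparameters are illustrative only.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_task():
        """Each 'task' is a random linear function y = a*x + b."""
        a, b = rng.uniform(-2, 2, size=2)
        return a, b

    def inner_sgd(params, a, b, steps=5, lr=0.02):
        """Adapt (w, c) to one task with a few gradient steps on squared error."""
        w, c = params
        for _ in range(steps):
            x = rng.uniform(-1, 1, size=10)
            y = a * x + b
            err = (w * x + c) - y
            w -= lr * np.mean(2 * err * x)  # d/dw of mean squared error
            c -= lr * np.mean(2 * err)      # d/dc of mean squared error
        return np.array([w, c])

    meta = np.zeros(2)  # meta-initialization (w, c)
    for _ in range(2000):
        a, b = sample_task()
        adapted = inner_sgd(meta.copy(), a, b)
        meta += 0.1 * (adapted - meta)  # Reptile: move toward adapted params

    print("meta-init:", meta)  # a point intended to adapt quickly to new tasks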

Representation Learning for Robotics

Compact, expressive representations of robotic data are essential for efficient learning and control. While CNNs revolutionized computer vision and transformers have become a standard for sequential data, the optimal representation architecture for robotics remains unclear. We explore Graph Neural Networks (GNNs) to capture relational structures, transformers for long-horizon planning, and innovative designs tailored to encapsulate contact dynamics. Identifying these representations is key to developing versatile and efficient policies.
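
To illustrate why relational structure matters, the sketch below implements a single message-passing step over a toy three-node scene graph (for example, gripper, cup, and table); the graph, features, and random weights are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # 3 nodes (e.g., gripper, cup, table) with 4 features each; edges as adjacency
    node_feats = rng.normal(size=(3, 4))
    adjacency = np.array([[0, 1, 0],
                          [1, 0, 1],
                          [0, 1, 0]], dtype=float)  # gripper-cup, cup-table

    W_self, W_neigh = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

    def message_passing_step(h, adj):
        """h' = ReLU(h @ W_self + mean-of-neighbors @ W_neigh)."""
        deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
        neighbor_mean = (adj @ h) / deg  # aggregate each node's neighbor features
        return np.maximum(h @ W_self + neighbor_mean @ W_neigh, 0.0)

    h = message_passing_step(node_feats, adjacency)
    print(h.shape)  # (3, 4): updated per-node embeddings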

Learning Safely in the Real World

Learning directly in the real world remains a formidable challenge due to the high sample complexity of state-of-the-art algorithms, coupled with safety considerations. Most approaches rely on simulation with massive parallelization to collect diverse data. We explore strategies for safe and efficient real-world learning, aiming to reduce sample complexity while ensuring reliable performance.
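
One widely studied ingredient is a safety filter, or shield, that screens each proposed action against a model-based constraint check before execution and substitutes a conservative fallback when the check fails. The sketch below shows the idea on a toy point mass; the dynamics, limits, and fallback are assumptions made for illustration.

    import numpy as np

    pos_limit = 1.0  # the robot must stay within |position| < 1.0
    dt = 0.1

    def predict_next(pos, vel, action):
        """Toy point-mass prediction used only for the safety check."""
        return pos + vel * dt + 0.5 * action * dt ** 2

    def safety_filter(pos, vel, proposed_action, fallback=-0.5):
        """Execute the proposal only if the predicted state stays in bounds."""
        if abs(predict_next(pos, vel, proposed_action)) < pos_limit:
            return proposed_action
        return fallback * np.sign(pos)  # brake back toward the safe region

    # A hypothetical exploring learner proposes a risky push near the boundary:
    print(safety_filter(pos=0.95, vel=0.5, proposed_action=2.0))  # -> -0.5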

Adaptation (O3)

Handling Uncertainty in Real-World Environments

The stochasticity of real-world environments introduces significant challenges. Simulators fail to capture the full spectrum of uncertainties robots face, from material properties to dynamic obstacles. At ATARI, we focus on enabling robots to detect out-of-distribution scenarios and respond reliably through robust policy adaptation and uncertainty-aware decision-making.
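
A simple and widely used signal for out-of-distribution detection is ensemble disagreement: models trained on resampled data agree near the training distribution and diverge away from it. The sketch below demonstrates this on toy 1-D regression; the data, models, and scale of the effect are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # Train 5 tiny linear models on noisy data from y = 3x, seen only on [0, 1]
    x_train = rng.uniform(0, 1, size=(100, 1))
    y_train = 3 * x_train + 0.1 * rng.normal(size=(100, 1))

    slopes = []
    for _ in range(5):
        idx = rng.integers(0, 100, size=100)  # bootstrap resample
        xb, yb = x_train[idx], y_train[idx]
        slopes.append(np.linalg.lstsq(xb, yb, rcond=None)[0].item())

    def disagreement(x):
        preds = np.array([s * x for s in slopes])
        return preds.std()

    print(disagreement(0.5))   # small: inside the training distribution
    print(disagreement(10.0))  # 20x larger: flags an out-of-distribution query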

Developing Adaptive and Robust Policies

Current reinforcement learning techniques rely heavily on domain randomization to handle variability, but this approach often falls short when encountering novel situations. Instead, we are investigating adaptive policies that adjust dynamically to new environments. Techniques like meta-reinforcement learning, online adaptation, and contextual policy optimization offer promising pathways for equipping robots with the ability to learn and adapt in real time.
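
A minimal form of online adaptation is to estimate a latent context variable from recent transitions and condition the controller on it. The sketch below adapts to an unknown mass online for a toy point-mass system; the dynamics, gains, and update rule are illustrative assumptions, not a specific method we endorse.

    true_mass, dt = 2.0, 0.05

    def step(vel, force, mass):
        return vel + (force / mass) * dt  # toy 1-D point mass

    def controller(vel, mass_estimate, target_vel=1.0):
        """Context-conditioned law: compensates for the estimated mass."""
        return mass_estimate * (target_vel - vel) / dt * 0.2

    vel, mass_est = 0.0, 1.0  # start with a wrong context estimate
    for _ in range(100):
        force = controller(vel, mass_est)
        new_vel = step(vel, force, true_mass)
        observed_accel = (new_vel - vel) / dt
        if abs(observed_accel) > 1e-6:
            mass_est += 0.5 * (force / observed_accel - mass_est)  # online update
        vel = new_vel

    print(f"estimated mass: {mass_est:.2f} (true: {true_mass})")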

Our Vision

ATARI Lab is committed to advancing robotics research to create robots capable of autonomous manipulation, adaptive planning, and seamless learning in unstructured environments. By addressing the fundamental challenges of planning, learning, and adaptation, we aim to develop the foundational frameworks for a future where humanoid robots are reliable collaborators in dynamic, human-centric settings.