Differentiable Optimization Everywhere:

Simulation, Estimation, Learning, and Control

Workshop at the Conference on Robot Learning (CoRL) 2024

November 9th, 2024 in Munich, Germany

A workshop on the latest and future advances in differentiable optimization for robotics

The workshop will take place in the Venus 2 room at TUM Garching.

For remote attendance, please fill out this form before Friday, November 8, 2024, 23:59 CET, and you will receive the link via e-mail on the morning of the workshop.

Differentiable optimization plays a key role in connecting machine-learning frameworks to model-based approaches. It enables backpropagation of gradient information through the solutions of optimization problems, such as those that arise when simulating a robot interacting with its environment.
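As a loose illustration (not taken from the workshop materials), the JAX sketch below differentiates a task loss with respect to a model parameter straight through an inner optimization solve; the inner problem, its cost, and all names are purely hypothetical.

import jax
import jax.numpy as jnp

def inner_solve(theta, v0, steps=50, lr=0.1):
    # Hypothetical inner problem: argmin_v 0.5*||v - theta||^2 + 0.1*||v||^2,
    # solved by unrolled gradient descent so JAX can trace through the solve.
    def cost(v):
        return 0.5 * jnp.sum((v - theta) ** 2) + 0.1 * jnp.sum(v ** 2)
    v = v0
    for _ in range(steps):
        v = v - lr * jax.grad(cost)(v)
    return v

def outer_loss(theta):
    v_star = inner_solve(theta, jnp.zeros(2))   # differentiable "solver" call
    target = jnp.array([1.0, -1.0])
    return jnp.sum((v_star - target) ** 2)      # task loss on the solution

# Gradient of the task loss w.r.t. the parameter, backpropagated through the solve.
print(jax.grad(outer_loss)(jnp.array([0.5, 0.5])))

In practice, mature libraries replace the unrolled loop with implicit differentiation of the solver's optimality conditions, but the end-to-end gradient flow is the same.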


The promise of differentiable simulation is that the development and dissemination of mature tools, similar to those developed for vision and language in deep learning (e.g., PyTorch, JAX, TensorFlow), will open the door to seamlessly complementing existing physical models with real-world data, or to using gradients from simulation or estimation to train control policies efficiently. However, computing well-behaved gradients in settings with physical interactions is challenging because of the inherent non-smoothness of contact dynamics. Similarly, non-smooth operations such as rasterization can make common sensors in robotics, such as cameras, non-differentiable by design.


The workshop is framed around three main subtopics:

Differentiable optimization and simulation
Algorithms and open-source frameworks for computing gradients of non-smooth problems.

Improving simulations from data
Methods and examples for improving simulation models with real-world measurements through backpropagation.

Applications in policy learning and control
Applications of differentiable optimization to improve the efficiency and performance of policy learning.


This workshop aims to bring together academic and industry researchers and practitioners to highlight the current challenges and developments in differentiable simulation and to discuss the field's practical applications, future directions, and limitations. Our primary target audience is researchers from both the robotics and learning communities, working on anything from model-based to end-to-end learned methods, with the goal of enabling cross-pollination between these two complementary approaches through differentiable optimization. Accordingly, we draw speakers and panelists from both fields, with an emphasis on industry speakers, to keep the discussions practically relevant and to foster communication between academia and industry.

Speakers

Kelsey Allen

Google DeepMind

Eric Heiden

Nvidia

Yunzhu Li

Columbia University

Hae-Won Park

Korea Advanced Institute of Science & Technology

Felix Petersen

Stanford University

Lin Shao

National University of Singapore

Yuval Tassa

Google DeepMind

Emo Todorov

University of Washington

Organizers

Bibit Bianchini

University of Pennsylvania

Justin Carpentier

INRIA Paris

Frederike Dümbgen

INRIA Paris

Quentin Le Lidec

INRIA Paris

Louis Montaut

INRIA Paris

Michael Posa

University of Pennsylvania

 

Sponsor