
Sparse Mixture of Experts-Enhanced Robot Learning for Foundation Models in Robotics

  • On-site
    • Ecully, Auvergne-Rhône-Alpes, France
  • €2,399 per month
  • LIRIS

Job description

Context & Motivation: Direct application of Foundation Models (FMs) to robotic learning faces significant challenges in scalability, generalization, and efficient adaptation [Luo et al., TPAMI 2025]. One primary limitation is the monolithic nature of existing architectures, which struggle to adapt dynamically to different environments, embodiments, and tasks. A promising solution is the Sparse Mixture of Experts (MoE) paradigm [Shazeer et al., 2017; Lepikhin et al., NeurIPS 2020; Ben Soltana et al., ICIP 2011], which introduces modular, adaptive learning to enhance scalability and specialization in robotic learning.

Objectives and Proposed Approach: This position, a 6-month fixed-term contract (CDD), aims to investigate Sparse MoE-enhanced Foundation Models for Robotics (MoE-FMR) [Fedus et al., JMLR 2022] that enable efficient, scalable, and transferable robotic skill learning. Sparse MoE architectures allow robots to learn task-specific and domain-specific knowledge by dynamically selecting specialized sub-models (experts) while keeping computation efficient. By leveraging a sparse gating mechanism, only the most relevant experts are activated per task, ensuring high efficiency and improved generalization across robotic scenarios (see the illustrative sketch below). This involves tackling the following challenges:

1) Sparse Modular Learning for Scalable Adaptation: How can we design efficient sparse MoE architectures that dynamically select relevant experts while minimizing computational overhead?

2) Multi-Task and Cross-Domain Generalization: How can MoE-FMR enable robots to efficiently transfer learned skills across tasks, environments, and robotic embodiments?
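To make the sparse gating idea concrete, here is a minimal, illustrative sketch of a top-k sparse MoE layer in PyTorch, in the spirit of Shazeer et al. (2017). The class name SparseMoELayer, the expert widths, and the choice of top_k = 2 are assumptions made purely for illustration and do not prescribe the project's actual MoE-FMR architecture.

```python
# Illustrative sketch only: a gating network scores all experts per input and
# only the k highest-scoring experts are evaluated, keeping computation sparse.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, num_experts)  # gating network: one score per expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim). Score every expert, keep only the top-k per input.
        scores = self.gate(x)                              # (batch, num_experts)
        top_vals, top_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(top_vals, dim=-1)              # renormalize over selected experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e               # inputs routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Example: route a batch of 4 state embeddings through the layer.
layer = SparseMoELayer(dim=32)
y = layer(torch.randn(4, 32))
print(y.shape)  # torch.Size([4, 32])
```

Because only the k selected experts are run for each input, the cost per input stays roughly constant as the total number of experts grows, which is what makes the approach attractive for scalable, multi-task robot learning.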
