SENSEI: Semantic Exploration Guided by Foundation Models for Learning Versatile World Models


Cansu Sancaktar*, Christian Gumbsch*, Andrii Zadaianchuk, Pavel Kolev & Georg Martius
Preprint, TAFM Workshop at RLC 2024; *equal contribution
Paper, Website, Video

TL;DR We propose SENSEI to equip model-based RL agents with intrinsic motivation for semantically meaningful exploration using VLMs.

intrinsic motivation, exploration, world models, vision language models, foundation models

Abstract

Exploring useful behavior is a keystone of reinforcement learning (RL). Existing approaches to intrinsic motivation, following general principles such as information gain, mostly uncover low-level interactions. In contrast, children’s play suggests that they engage in semantically meaningful high-level behavior by imitating or interacting with their caregivers. Recent work has focused on using foundation models to inject these semantic biases into exploration. However, these methods often rely on unrealistic assumptions, such as environments already embedded in language or access to high-level actions. To bridge this gap, we propose SEmaNtically Sensible ExploratIon (Sensei), a framework to equip model-based RL agents with intrinsic motivation for semantically meaningful behavior. To do so, we distill an intrinsic reward signal of interestingness from Vision Language Model (VLM) annotations. The agent learns to predict and maximize this intrinsic reward via a world model trained directly on image observations, low-level actions, and the distilled reward signal. We show that, in both robotic and video-game-like simulations, Sensei discovers a variety of meaningful behaviors. We believe Sensei provides a general tool for integrating feedback from foundation models into autonomous agents, a crucial research direction as openly available VLMs become more powerful.
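To make the distillation idea concrete, below is a minimal, hypothetical sketch (not the authors' code). It assumes the VLM is queried for pairwise preferences over image observations ("which of these two frames is more interesting?") and a small reward network is fit to those labels with a Bradley-Terry-style loss; the architecture, the `vlm_prefers_first` stub, and all hyperparameters are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Maps an image observation to a scalar 'interestingness' reward."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(          # tiny CNN stand-in for the real encoder
            nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, obs):                # obs: (batch, 3, H, W)
        return self.net(obs).squeeze(-1)   # -> (batch,) scalar rewards

def vlm_prefers_first(obs_a, obs_b):
    """Stand-in for a VLM query such as 'Which of these two frames shows
    more interesting behavior?'. Returns 1 if obs_a is preferred, else 0.
    Random labels here, purely so the sketch runs end to end."""
    return torch.randint(0, 2, (obs_a.shape[0],)).float()

reward_model = RewardModel()
opt = torch.optim.Adam(reward_model.parameters(), lr=3e-4)

for step in range(100):
    # Pairs of image observations, e.g. sampled from a replay buffer
    # (random tensors here, purely for illustration).
    obs_a = torch.rand(32, 3, 64, 64)
    obs_b = torch.rand(32, 3, 64, 64)
    labels = vlm_prefers_first(obs_a, obs_b)

    # Bradley-Terry-style preference loss: the preferred observation
    # should receive the higher scalar reward.
    logits = reward_model(obs_a) - reward_model(obs_b)
    loss = F.binary_cross_entropy_with_logits(logits, labels)

    opt.zero_grad()
    loss.backward()
    opt.step()

# The distilled reward_model can now label observations with intrinsic
# rewards, which a world model learns to predict and an exploration
# policy learns to maximize.
```

In this reading, the expensive VLM is queried only offline to produce annotations, while the cheap distilled reward network is evaluated on every observation during world-model training and exploration.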

Teaser video

More information