Four papers from our lab have been accepted to ICLR 2026:

Recurrent Action Transformer with Memory — A new architecture that integrates recurrent memory mechanisms into transformer-based offline reinforcement learning, addressing attention's quadratic complexity and improving performance on memory-dependent tasks.

ELMUR: External Layer Memory with Update/Rewrite for Long-Horizon RL — A transformer-based approach with structured external memory that extends effective horizons up to 100,000 times beyond the attention window, nearly doubling performance on real robotic manipulation tasks.

Unraveling the Complexity of Memory in RL Agents: an Approach for Classification and Evaluation — A systematic study proposing practical definitions and standardized evaluation methods for memory mechanisms in reinforcement learning agents.

Memory, Benchmark & Robots: A Benchmark for Solving Complex Tasks with Reinforcement Learning — A comprehensive benchmark framework, MIKASA, for evaluating memory capabilities in RL, including MIKASA-Robo, a suite of 32 robotic manipulation tasks.