Ten years ago… in Montreal in 2015, the RAM workshop brought together the burgeoning field covering the “interplay of reasoning, attention and memory”, just before Transformers were invented, but when many of the components needed to get there had just been published and were in place. The workshop included many speakers who are still prominent in pushing these directions today: Yoshua Bengio, Kyunghyun Cho, Jürgen Schmidhuber, Sainbayar Sukhbaatar, Ilya Sutskever, and more. See the historical website for more details.
Ten years later… we are hosting RAM 2 in the same location in Montreal, with a two-fold purpose. First, it serves as a retrospective and analysis of what has happened in the last 10 years; to this end, we are inviting presenters from the first workshop back, both to look back and to add their current perspectives. Second, and more importantly, we will bring the field together to discuss new trends and future directions for the next 10 years, further enabled by inviting new speakers, panelists and poster presenters to discuss these fresh ideas.
Why does this make sense? The RAM topic is as important as ever, and has gone on to dominate the field.
These new directions include:
R: New reasoning methods, including both token-based approaches and those that use continuous vectors, and how they combine with memory.
A: New attention methods that enable better reasoning and better use of short- and long-term memory.
M: Architectural changes to LLMs to improve memory and reasoning capabilities.
Overall, the workshop is most concerned with methods that explore the interplay between these three aspects.
Paper submissions will be hosted on OpenReview. We invite researchers and practitioners to submit their work to the COLM 2025 Workshop on Reasoning, Attention & Memory 2 (RAM2@COLM25).
Start Time | End Time | Event | Title | Speaker |
---|---|---|---|---|
9:20 AM | 9:40 AM | Opening remarks | | Jason Weston |
9:40 AM | 10:20 AM | Invited talk 1 | Insights from Memory Architectures for Building Intelligence | Sainbayar Sukhbaatar |
10:20 AM | 11:00 AM | Invited talk 2 | New Frontiers in AI Capability via Self-Improvement | Azalia Mirhoseini |
11:00 AM | 11:40 AM | Invited talk 3 | The 1991 Unnormalized Linear Transformer, Closely Related Neural Fast Weight Programmers of the 2020s, and the Future of Neural Networks Learning to Program other Neural Networks. | Juergen Schmidhuber |
11:40 AM | 12:00 PM | Panel 1 | Panel discussion: Memory and attention | |
12:00 PM | 12:20 PM | Paper talk 1 | Let’s Predict Sentence by Sentence | Hanseok Oh |
12:20 PM | 12:40 PM | Paper talk 2 | Style over Substance: Distilled Language Models Reason Via Stylistic Replication | Philip Lippmann |
12:40 PM | 2:00 PM | Lunch / break | | |
2:00 PM | 2:20 PM | Paper talk 3 | MEM1: Learning to Synergize Memory and Reasoning for Efficient Long-Horizon Agents | Paul Liang |
2:20 PM | 3:00 PM | Invited talk 4 | Retrospective on connecting MT with AI 10 years ago… | Kyunghyun Cho |
3:00 PM | 3:40 PM | Invited talk 5 | The Art of Artificial Reasoning for Small Language Models | Yejin Choi |
3:40 PM | 4:00 PM | Panel 2 | Panel discussion: Reasoning | |
4:00 PM | 4:20 PM | Paper talk 4 | Learning to Discover Abstractions for LLM Reasoning | Anikait Singh |
4:20 PM | 5:00 PM | Invited talk 6 | Turning advances in reasoning and capability into improved safety assurances | Yoshua Bengio |
5:00 PM | 5:20 PM | Panel 3 | Panel discussion: Future of LLMs | |
Email: ram2_pc [at] googlegroups.com