Audio note: this article contains 78 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.
This post covers work done by several researchers at, visitors to and collaborators of ARC, including Zihao Chen, George Robinson, David Matolcsi, Jacob Stavrianos, Jiawei Li and Michael Sklar. Thanks to Aryan Bhatt, Gabriel Wu, Jiawei Li, Lee Sharkey, Victor Lecomte and Zihao Chen for comments.
In the wake of recent debate about pragmatic versus ambitious visions for mechanistic interpretability, ARC is sharing some models we've been studying that, in spite of their tiny size, serve as challenging test cases for any ambitious interpretability vision. The models are RNNs and transformers trained to perform algorithmic tasks, and range in size from 8 to 1,408 parameters. The largest model that we believe we more-or-less fully understand has 32 parameters; the next largest model that we have put substantial effort into, but have failed to fully understand, has 432 parameters. The models are available at the AlgZoo GitHub repo.
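To give a concrete sense of what models at this scale look like, here is a minimal, hypothetical sketch of a tiny RNN for the "2nd argmax" task mentioned in the outline (reading a sequence of scalars and predicting the position of the second-largest one). The architecture, hidden size, and readout below are illustrative assumptions only; the actual models and weights are the ones in the AlgZoo repo.

```python
import torch
import torch.nn as nn

class TinySecondArgmaxRNN(nn.Module):
    """Hypothetical sketch: a tiny RNN that reads a sequence of scalars and
    outputs logits over positions, aiming to pick the second-largest entry.
    This is NOT the AlgZoo architecture; see the repo for the real models."""

    def __init__(self, hidden_size: int = 2, seq_len: int = 2):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, seq_len)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len) real-valued inputs
        h, _ = self.rnn(x.unsqueeze(-1))   # (batch, seq_len, hidden_size)
        return self.readout(h[:, -1])      # logits over sequence positions

model = TinySecondArgmaxRNN()
n_params = sum(p.numel() for p in model.parameters())
print(n_params)  # 16 parameters for this sketch -- the same order of magnitude as the smallest AlgZoo models
```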
We think that the "ambitious" side of the mechanistic interpretability community has historically underinvested in "fully understanding slightly complex [...]
---
Outline:
(03:09) Mechanistic estimates as explanations
(06:16) Case study: 2nd argmax RNNs
(08:30) Hidden size 2, sequence length 2
(14:47) Hidden size 4, sequence length 3
(16:13) Hidden size 16, sequence length 10
(19:52) Conclusion
The original text contained 20 footnotes which were omitted from this narration.
---
First published:
January 26th, 2026
Source:
https://www.lesswrong.com/posts/x8BbjZqooS4LFXS8Z/algzoo-uninterpreted-models-with-fewer-than-1-500-parameters
---
Narrated by TYPE III AUDIO.