
Route Sparse Autoencoder to Interpret Large Language Models


This paper introduces Route Sparse Autoencoder (RouteSAE), a novel framework designed to improve the interpretability of large language models (LLMs) by effectively extracting features across multiple layers. Traditional sparse autoencoders (SAEs) primarily focus on single-layer activations, failing to capture how features evolve through different depths of an LLM. RouteSAE addresses this by incorporating a routing mechanism that dynamically assigns weights to activations from various layers, creating a unified feature space. This approach leads to a higher number of interpretable features and improved interpretability scores compared to previous methods like TopK SAE and Crosscoder, while maintaining computational efficiency. The study demonstrates RouteSAE's ability to identify both low-level (e.g., "units of weight") and high-level (e.g., "more [X] than [Y]") features, enabling targeted manipulation of model behavior.
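To make the routing idea concrete, here is a minimal sketch of the mechanism the description outlines: a router assigns softmax weights to activations from several layers, the weighted mixture is encoded with a TopK sparsity constraint, and a decoder reconstructs the activation. This is a hypothetical illustration of the general design, not the authors' implementation; the class name, dimensions, and the choice of mean-pooled router input are all assumptions.

```python
import torch
import torch.nn as nn

class RouteSAESketch(nn.Module):
    """Hypothetical sketch of a routing sparse autoencoder.

    A router scores each layer's activation; the softmax-weighted
    mixture is encoded into a unified feature space with TopK sparsity.
    Details (router input, dimensions) are illustrative assumptions.
    """

    def __init__(self, d_model: int, n_layers: int, n_features: int, k: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_layers)    # one score per layer
        self.encoder = nn.Linear(d_model, n_features) # to feature space
        self.decoder = nn.Linear(n_features, d_model) # back to activations
        self.k = k                                    # features kept per token

    def forward(self, layer_acts: torch.Tensor):
        # layer_acts: (batch, n_layers, d_model) activations from several layers
        router_in = layer_acts.mean(dim=1)                       # pooled summary
        weights = torch.softmax(self.router(router_in), dim=-1)  # (batch, n_layers)
        mixed = (weights.unsqueeze(-1) * layer_acts).sum(dim=1)  # unified activation
        pre = self.encoder(mixed)
        # TopK sparsity: zero out all but the k largest pre-activations
        top = torch.topk(pre, self.k, dim=-1)
        codes = torch.zeros_like(pre).scatter(-1, top.indices, top.values)
        return self.decoder(codes), codes, weights
```

In this sketch, inspecting `weights` shows which layers a feature draws on, and `codes` gives the sparse feature activations that could be probed or clamped for the kind of targeted behavior manipulation described above.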


Source: May 2025 - Route Sparse Autoencoder to Interpret Large Language Models - https://arxiv.org/pdf/2503.08200
