
PA-LRP & absLRP
About this listen
We focus on two evolutions in explainable AI (XAI) that advance the explainability of deep neural networks, particularly Transformers, by improving Layer-Wise Relevance Propagation (LRP) methods. One source introduces Positional Attribution LRP (PA-LRP), a novel approach that addresses the oversight of positional encoding in prior LRP techniques, showing that it significantly enhances the faithfulness of explanations in areas like natural language processing and computer vision. The other source proposes Relative Absolute Magnitude Layer-Wise Relevance Propagation (absLRP) to overcome issues with conflicting relevance values and varying activation magnitudes in existing LRP rules, demonstrating its superior performance in generating clear, contrastive, and noise-free attribution maps for image classification. Both works also contribute new evaluation metrics to better assess the quality and reliability of these attribution-based explainability methods, aiming to foster more transparent and interpretable AI models.
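For readers unfamiliar with the underlying machinery, both papers build on the same idea: an output score is redistributed layer by layer back to the input, with each unit receiving relevance in proportion to its contribution. The following is a minimal sketch of the classic LRP-epsilon rule for a single linear layer, not code from either paper; function names, shapes, and the epsilon value are illustrative assumptions.

```python
import torch

def lrp_epsilon_linear(a, weight, bias, relevance_out, eps=1e-6):
    """Redistribute relevance from a linear layer's output back to its input.

    a:             input activations, shape (batch, in_features)
    weight, bias:  layer parameters, shapes (out_features, in_features) and (out_features,)
    relevance_out: relevance assigned to the layer output, shape (batch, out_features)
    """
    # Forward pre-activations z_j = sum_i a_i * w_ji + b_j;
    # epsilon stabilizes the division when z_j is close to zero.
    z = a @ weight.t() + bias
    z = z + eps * torch.sign(z)
    # Each input unit i receives relevance proportional to its contribution a_i * w_ji.
    s = relevance_out / z          # (batch, out_features)
    c = s @ weight                 # (batch, in_features)
    return a * c                   # relevance_in, approximately conserving total relevance
```

PA-LRP and absLRP each modify how rules like this one behave inside Transformers: PA-LRP by attributing relevance to positional encodings as well, absLRP by weighting contributions with relative absolute magnitudes to avoid conflicting relevance values.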
Sources:
1) June 2025 - https://arxiv.org/html/2506.02138v1 - Revisiting LRP: Positional Attribution as the Missing Ingredient for Transformer Explainability
2) December 2024 - https://arxiv.org/pdf/2412.09311 - Advancing Attribution-Based Neural Network Explainability through Relative Absolute Magnitude Layer-Wise Relevance Propagation and Multi-Component Evaluation
To help with context, the original 2024 AttnLRP paper was also given as a source:
3) June 2024 - https://arxiv.org/pdf/2402.05602 - AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers