Publications
* denotes equal contribution.
2024
- NeurIPS. MambaLRP: Explaining Selective State Space Sequence Models. Rezaei Jafari, F., Montavon, G., Müller, K-R., and Eberle, O.
  We propose MambaLRP, a novel algorithm within the LRP framework that ensures stable and reliable relevance propagation through the components of selective state space models. Our method is theoretically sound and achieves state-of-the-art explanation performance across a diverse range of models and datasets. Moreover, MambaLRP facilitates a deeper inspection of Mamba architectures, uncovering various biases and evaluating their significance. It also enables the analysis of previous speculations regarding the long-range capabilities of Mamba models. (A minimal sketch of the underlying LRP idea follows below.)
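For context, the sketch below shows the classic epsilon-LRP rule for a single linear layer, the kind of relevance-propagation step the LRP framework is built from. It illustrates the general mechanism only; it is not the MambaLRP algorithm, and the function name and toy values are illustrative assumptions.

```python
# Minimal epsilon-LRP rule for one linear layer y = W @ x + b.
# Illustrates the general LRP mechanism only; NOT the MambaLRP algorithm.
import numpy as np

def lrp_epsilon_linear(W, b, x, relevance_out, eps=1e-6):
    """Redistribute output relevance to the inputs of a linear layer.

    W: (out, in) weights, x: (in,) activations,
    relevance_out: (out,) relevance assigned to the layer's outputs.
    """
    z = W @ x + b                    # forward pre-activations
    z = z + eps * np.sign(z)         # stabilizer keeps the division well-behaved
    s = relevance_out / z            # per-output relevance "messages"
    return x * (W.T @ s)             # credit inputs proportionally to contribution

# Toy check: relevance is (approximately) conserved across the layer.
W = np.array([[1.0, -2.0], [0.5, 1.5]])
b = np.zeros(2)
x = np.array([2.0, 0.5])
R_out = np.array([1.0, 0.0])
R_in = lrp_epsilon_linear(W, b, x, R_out)
print(R_in, R_in.sum())              # the sum stays close to R_out.sum() = 1
```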
- arXiv. Towards Symbolic XAI – Explanation Through Human Understandable Logical Relationships Between Features. Schnake, T.*, Rezaei Jafari, F.*, Lederer, J., Xiong, P., Nakajima, S., Gugler, S., Montavon, G., and Müller, K-R.
  We propose a framework, called Symbolic XAI, that attributes relevance to symbolic queries expressing logical relationships between input features, thereby capturing the abstract reasoning behind a model's predictions. The methodology is built upon a simple yet general multi-order decomposition of model predictions. This decomposition can be specified using higher-order propagation-based relevance methods, such as GNN-LRP, or perturbation-based explanation methods commonly used in XAI. We demonstrate the effectiveness of our framework in natural language processing (NLP), vision, and quantum chemistry (QC), domains where abstract symbolic knowledge is abundant and of significant interest to users. The Symbolic XAI framework provides an understanding of the model's decision-making process that is both flexible for customization by the user and human-readable through logical formulas. (An illustrative sketch of a query attribution follows below.)
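As a rough, hypothetical illustration of attributing relevance to a logical query over features, the sketch below scores the conjunction of two feature subsets with a simple perturbation-based, inclusion-exclusion decomposition. The masking scheme and all names are assumptions made for exposition, not the paper's exact formulation.

```python
# Hypothetical perturbation-based relevance for the query "A AND B",
# where A and B are feature subsets; an illustrative assumption, not
# the paper's exact decomposition.
import numpy as np

def conjunction_relevance(model, x, baseline, subset_a, subset_b):
    """Relevance of the query 'subset_a AND subset_b' for model(x).

    Features outside a kept subset are replaced by `baseline` values
    (a common perturbation scheme). The inclusion-exclusion combination
    isolates the joint effect of the two subsets.
    """
    def masked_output(keep):
        z = baseline.copy()
        z[keep] = x[keep]            # reveal only the kept features
        return model(z)

    both = list(subset_a) + list(subset_b)
    return (masked_output(both) - masked_output(list(subset_a))
            - masked_output(list(subset_b)) + masked_output([]))

# Toy usage with a model that has a genuine interaction term x0 * x2:
model = lambda z: z[0] * z[2] + z[1]
x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
print(conjunction_relevance(model, x, baseline, [0], [2]))  # -> 3.0, the joint effect
```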
2022
- ECCV (Oral). Adaptive Token Sampling for Efficient Vision Transformers. Fayyaz, M.*, Abbasi Kouhpayegani, S.*, Rezaei Jafari, F.*, Sengupta, S., Sommerlade, E., Vaezi Joze, H., Pirsiavash, H., and Gall, J.
  We introduce a differentiable, parameter-free Adaptive Token Sampler (ATS) module that can be plugged into any existing vision transformer architecture. ATS empowers vision transformers by scoring and adaptively sampling significant tokens. As a result, the number of tokens is no longer constant and varies for each input image. By integrating ATS as an additional layer within existing transformer blocks, we convert them into much more efficient vision transformers with an adaptive number of tokens. Since ATS is parameter-free, it can be added to off-the-shelf pre-trained vision transformers as a plug-and-play module, reducing their GFLOPs without any additional training. Moreover, owing to its differentiable design, a vision transformer equipped with ATS can also be trained end to end. We evaluate the efficiency of our module on both image and video classification tasks by adding it to multiple state-of-the-art vision transformers. Our module reduces their computational cost (GFLOPs) by a factor of 2 while preserving their accuracy on the ImageNet, Kinetics-400, and Kinetics-600 datasets. (A simplified sketch of the sampling step follows below.)
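The following simplified sketch conveys the sampling idea: score tokens by the classification token's attention weighted by value-vector norms, then keep a variable number of tokens via inverse transform sampling over the score CDF. Shapes, the scoring rule, and the sampling details follow the abstract only loosely; treat this as an assumption-laden sketch rather than the exact ATS module.

```python
# Simplified token-sampling sketch in the spirit of ATS; shapes, the
# scoring rule, and the sampling details are illustrative assumptions.
import torch

def adaptive_token_sampling(attn, v, max_tokens):
    """attn: (N, N) attention of one head, row 0 = CLS token.
    v: (N, d) value vectors. Returns indices of kept tokens (CLS always kept).
    """
    # Significance of each non-CLS token: CLS attention times value magnitude.
    scores = attn[0, 1:] * v[1:].norm(dim=-1)
    scores = scores / scores.sum()

    # Inverse transform sampling on the score CDF: evenly spaced quantiles
    # hit high-score tokens more often; duplicate picks collapse, so the
    # number of kept tokens adapts to the input.
    cdf = torch.cumsum(scores, dim=0)
    quantiles = torch.linspace(0.0, 1.0, max_tokens).clamp(max=cdf[-1].item() - 1e-6)
    picks = torch.searchsorted(cdf, quantiles)
    kept = torch.unique(picks) + 1                 # shift past the CLS index
    return torch.cat([torch.zeros(1, dtype=torch.long), kept])  # retain CLS

# Toy usage: 8 tokens with 16-dim values, keep at most 5 (plus CLS).
attn = torch.softmax(torch.randn(8, 8), dim=-1)
v = torch.randn(8, 16)
print(adaptive_token_sampling(attn, v, max_tokens=5))
```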