Citation link (DOI)
10.26092/elib/5421

From Deep Neural Network Predictions Toward Understanding: Advances in Explainable AI for Complex Black-Box Models

Publication date
2025-11-26
Authors
Koenen, Niklas  
Supervisor
Wright, Marvin N.  
Reviewers
Wright, Marvin N.  
Bischl, Bernd
Abstract
Machine learning models, particularly deep neural networks (DNNs), have demonstrated an impressive ability to learn complex relationships and derive accurate predictions from high-dimensional, often multimodal data. Yet their decision-making processes remain hidden inside the "black box," creating a tension between predictive performance and reliable interpretability. Explainable AI (XAI) addresses this challenge through feature-based approaches that reveal which input features are decisive for a prediction.

The first part of this cumulative thesis focuses on feature attribution methods for DNNs. Although numerous methods exist, their properties are still insufficiently understood and accessible software frameworks are scarce. To address this gap, the R framework innsight is introduced, making these methods available to a broader statistical community. Another paper examines the disagreement problem, i.e., contradictory explanations produced by different methods, in controlled settings. The methods are then generalized to survival analysis, extending attribution techniques to quantify time-dependent effects more efficiently than existing alternatives. Their practical relevance is demonstrated in multimodal settings by applying Shapley-based techniques to explain DNNs for the diagnosis and early detection of cognitive impairment.

The second part investigates the interplay between XAI and generative AI. One paper extends a feature importance measure that relies on a generative model into a conditional variant; the other applies XAI methods to systematically evaluate the quality of generative models.

Overall, this thesis highlights two perspectives for advancing interpretability: the user-oriented implementation and methodological extension of feature attribution methods for DNNs, and the combination of XAI and generative AI into a promising symbiosis. Both contribute to breaking open the black-box nature of modern machine learning models.
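To illustrate the kind of feature attribution the thesis studies, the following is a minimal sketch of the Gradient x Input method in Python with PyTorch. The thesis' innsight framework implements such methods in R; the toy model and input below are purely illustrative assumptions, not the author's code.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))  # toy DNN
    x = torch.randn(1, 4, requires_grad=True)  # one input with 4 features

    out = model(x)                       # forward pass: the prediction
    out.backward()                       # gradient of the output w.r.t. the input
    attribution = (x * x.grad).detach()  # Gradient x Input relevance per feature
    print(attribution)

The resulting vector assigns each of the four input features a signed relevance for this single prediction, which is the basic output shared by the attribution methods discussed above.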
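The Shapley-based techniques mentioned for the medical use case rest on the same idea; the following toy, exact computation sketches it. Features absent from a coalition are replaced by a zero baseline here, and real methods (e.g., SHAP) approximate this sum for high-dimensional inputs; the function f and the input are hypothetical.

    from itertools import combinations
    from math import factorial

    def f(x):  # toy "model": a simple nonlinear function of 3 features
        return x[0] + 2 * x[1] * x[2]

    x, baseline, n = [1.0, 1.0, 1.0], [0.0, 0.0, 0.0], 3

    def value(coalition):  # model output with absent features set to the baseline
        return f([x[i] if i in coalition else baseline[i] for i in range(n)])

    for i in range(n):
        phi = 0.0
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += w * (value(set(S) | {i}) - value(set(S)))
        print(f"Shapley value of feature {i}: {phi:.3f}")

Feature 0 receives 1, features 1 and 2 split the interaction term equally, and the three values sum to the difference between the prediction and the baseline output, which is the defining property of Shapley-based explanations.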
Keywords
Interpretable Machine Learning (IML); Explainable Artificial Intelligence (XAI); Feature Attribution; Generative Modeling; Deep Neural Network (DNN)
Institution
Universität Bremen  
Department
Fachbereich 03: Mathematik/Informatik (FB 03)  
Document type
Dissertation
License
https://creativecommons.org/licenses/by/4.0/
Language
English
Files
Name: Dissertation_Koenen_SuUB.pdf
Size: 14.36 MB
Format: Adobe PDF
Checksum (MD5): cec75b6d9e182c1656d1a3382b480cab
