Development of Intrinsically Interpretable and Post-hoc Explainable Machine Learning Models

Lecturer: Prof. Dr. Patrick Zschech

Chair of Business Information Systems, esp. Intelligent Systems and Services
TU Dresden
Date: September 7 & 8, 2026
with classes from 9:00 a.m. to 4:00 p.m. each day
Room/Address: Georg Schumann-Bau (SCH/B37)
TU Dresden 
Seminar content: Machine learning (ML) models are increasingly used for prediction and decision-making, yet many high-performing approaches operate as "black boxes" and lack transparency, raising challenges for trust, accountability, and model validation. This course introduces methods for developing interpretable ML models as well as techniques for explaining complex ML models after training. Focusing on supervised learning with tabular data, it provides a structured overview of the model development process, including data preparation, model selection, and evaluation.
 
Building on this foundation, participants will explore the trade-off between predictive performance and interpretability and learn how different model classes position themselves along this spectrum.

A central part of the course is the study of model-agnostic, post-hoc explanation methods (e.g., SHAP, LIME) alongside intrinsically interpretable models, such as generalized additive models (GAMs) and their modern extensions, which provide transparency by design through structured, additive forms. Participants will explore different types of explanations and their respective strengths, limitations, and use cases, and critically assess how these approaches support model understanding, debugging, and communication. Hands-on sessions in Python enable participants to implement, analyze, and compare post-hoc explainable and intrinsically interpretable ML approaches on practical datasets.
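To give a flavor of the post-hoc workflow covered in the hands-on sessions, the sketch below trains an opaque ensemble model and then explains it with scikit-learn's permutation importance, a model-agnostic technique in the same spirit as SHAP and LIME (probe a trained model from the outside). The dataset and model choice are illustrative assumptions, not course material.

```python
# Sketch: train a "black-box" model, then explain it post hoc.
# Dataset and model choice are illustrative, not prescribed by the course.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A high-performing but opaque ensemble model
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Model-agnostic post-hoc explanation: shuffle each feature on the test
# set and measure the drop in accuracy this causes.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.4f}")
```

SHAP and LIME refine this idea by attributing individual predictions to features rather than reporting a single global score per feature.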
Structure:
Day 1 Morning:
  • Introduction and motivation: Interpretability and explainability in ML
  • Foundations of supervised ML with tabular data
  • Overview of model development workflow
  • Hands-on: Supervised learning pipeline in Python
Day 1 Afternoon:
  • Post-hoc explanation methods (e.g., LIME, SHAP)
  • Understanding and visualizing feature effects and model behavior
  • Hands-on: Applying explanation techniques to complex ML models
Day 2 Morning:
  • Intrinsically interpretable models (e.g., GAMs and modern extensions)
  • Hands-on: Development and comparison of intrinsically interpretable and post-hoc explainable models
Day 2 Afternoon:
  • Group work and discussion of results, trade-offs, and limitations
Prerequisites: Familiarity with Python is helpful for the hands-on sessions.
Certificate: Doctoral candidates from the Faculty of Business and Economics, TU Dresden, can earn a certificate according to § 9 of the doctoral regulations (PromO 2018):
Doctoral candidates of Business Administration: § 9 (1) Nr. 5 or 6
Doctoral candidates of Business Information Systems: § 9 (1) Nr. 6
Doctoral candidates of Economics: § 9 (1) Nr. 6

Doctoral candidates from other universities can earn a certificate as well.
Assignment: Participants will complete a practical assignment in which they develop a predictive ML model and apply both post-hoc explanation techniques and intrinsically interpretable approaches. The assignment includes implementation, evaluation, and critical discussion of model performance and interpretability/explainability. Code and results must be submitted (e.g., via Jupyter Notebooks).

Before the course, participants are asked to read:
  • Zschech, P., Weinzierl, S., & Kraus, M. (2026). Inherently Interpretable Machine Learning: A Contrasting Paradigm to Post-hoc Explainable AI. Business & Information Systems Engineering, 68(2), 445–463. https://doi.org/10.1007/s12599-025-00964-0

  • Kruschel, S., Hambauer, N., Weinzierl, S., Zilker, S., Kraus, M., & Zschech, P. (2026). Challenging the Performance-Interpretability Trade-Off: An Evaluation of Interpretable Machine Learning Models. Business & Information Systems Engineering, 68(1), 159–183. https://doi.org/10.1007/s12599-024-00922-2
Registration: Participation is limited (max. 15 participants).
To register, send an e-mail to Dr. Uta Schwarz: uta.schwarz@tu-dresden.de
Phone: +49 351 463-33141
