Attention Please: What Transformer Models Really Learn for Process Prediction

Title details

Käppel, Martin; Ackermann, Lars; Jablonski, Stefan; Härtl, Simon:
Attention Please: What Transformer Models Really Learn for Process Prediction.
In: Marrella, Andrea; Resinas, Manuel; Jans, Mieke; Rosemann, Michael (eds.): Business Process Management: 22nd International Conference, BPM 2024, Krakow, Poland, September 1–6, 2024, Proceedings. Cham: Springer, 2024, pp. 203–220. (Lecture Notes in Computer Science; 14940)
ISBN 978-3-031-70396-6
DOI: https://doi.org/10.1007/978-3-031-70396-6_12


Abstract

Predictive process monitoring aims to support the execution of a process at runtime with various predictions about the further evolution of a process instance. In recent years, a plethora of deep learning architectures have been established as state of the art for different prediction targets, among them the transformer architecture. The transformer architecture is equipped with a powerful attention mechanism that assigns attention scores to each input part, allowing the model to prioritize the most relevant information and thus produce more accurate and context-aware output. However, deep learning models largely represent a black box, i.e., their reasoning or decision-making process cannot be understood in detail. This paper examines whether the attention scores of a transformer-based next-activity prediction model can serve as an explanation for its decision-making. We find that attention scores in next-activity prediction models can serve as explainers and exploit this fact in two proposed graph-based explanation approaches. The gained insights could inspire future work on improving predictive business process models as well as enable neural-network-based mining of process models from event logs.
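The attention scores discussed in the abstract come from the transformer's scaled dot-product attention. As a minimal illustration (a generic NumPy sketch of that standard operation, not the authors' code; the toy activity-trace setup is hypothetical), each position in an input trace receives a softmax-normalized weight indicating how strongly the model attends to it:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return the attention output and the attention-score matrix.

    Q, K, V: arrays of shape (seq_len, d_k). Each row of the returned
    score matrix sums to 1 and distributes the model's focus over the
    input positions -- these are the scores examined for explainability.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # raw compatibility scores
    scores -= scores.max(axis=-1, keepdims=True)           # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax -> attention scores
    return weights @ V, weights

# Toy example: a trace of 3 "activities" embedded in 4 dimensions
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
output, attn = scaled_dot_product_attention(Q, K, V)
```

Inspecting `attn` row by row shows, for each predicted step, which preceding activities the model weighted most heavily, which is the kind of signal the paper evaluates as a candidate explanation.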

Further details

Publication type: Article in a book
Peer-reviewed: Yes
Keywords: Predictive Process Monitoring; Transformer; Attention Mechanism; Explainability
University institutions: Fakultäten > Fakultät für Mathematik, Physik und Informatik
Fakultäten > Fakultät für Mathematik, Physik und Informatik > Institut für Informatik
Fakultäten > Fakultät für Mathematik, Physik und Informatik > Institut für Informatik > Lehrstuhl Angewandte Informatik IV
Fakultäten > Fakultät für Mathematik, Physik und Informatik > Institut für Informatik > Lehrstuhl Angewandte Informatik IV > Lehrstuhl Angewandte Informatik IV - Univ.-Prof. Dr.-Ing. Stefan Jablonski
Title originated at UBT: Yes
DDC subject areas: 000 Computer science, information & general works > 004 Data processing, computer science
Deposited on: 14 Oct 2024 07:31
Last modified: 14 Oct 2024 07:31
URI: https://eref.uni-bayreuth.de/id/eprint/90660