Building Bridges for Better Machines: From Machine Ethics to Machine Explainability and Back

Title data

Speith, Timo:
Building Bridges for Better Machines: From Machine Ethics to Machine Explainability and Back.
Saarbrücken: Saarländische Universitäts- und Landesbibliothek, 2023
(Doctoral thesis, 2023, Saarland University)
DOI: https://doi.org/10.22028/D291-40450

Project information

Project financing: Deutsche Forschungsgemeinschaft
VolkswagenStiftung

Abstract in another language

Be it nursing robots in Japan, self-driving buses in Germany or automated hiring systems in the USA, complex artificial computing systems have become an indispensable part of our everyday lives. Two major challenges arise from this development: machine ethics and machine explainability. Machine ethics deals with behavioral constraints on systems to ensure restricted, morally acceptable behavior; machine explainability affords the means to satisfactorily explain the actions and decisions of systems so that human users can understand these systems and, thus, be assured of their socially beneficial effects.

Machine ethics and machine explainability prove particularly effective only in symbiosis. In this context, this thesis will demonstrate how machine ethics requires machine explainability and how machine explainability, in turn, includes machine ethics. We develop these two facets using examples from the scenarios above. Based on these examples, we argue for a specific view of machine ethics and suggest how it can be formalized in a theoretical framework.

In terms of machine explainability, we will outline how our proposed framework, by using an argumentation-based approach for decision making, can provide a foundation for machine explanations. Beyond the framework, we will also clarify the notion of machine explainability as a research area, charting its diverse and often confusing literature. To this end, we will outline what, exactly, machine explainability research aims to accomplish.

Finally, we will use all these considerations as a starting point for developing evaluation criteria for good explanations, such as comprehensibility, assessability, and fidelity. Evaluating our framework using these criteria shows that it is a promising approach that can be expected to outperform many other explainability approaches developed so far.

Further data

Item Type: Doctoral thesis
Institutions of the University: Faculties > Faculty of Cultural Studies > Department of Philosophy
Faculties > Faculty of Cultural Studies > Department of Philosophy > Chair Philosophy, Computer Science and Artificial Intelligence
Result of work at the UBT: No
DDC Subjects: 000 Computer Science, information, general works > 004 Computer science
100 Philosophy and psychology > 100 Philosophy
Date Deposited: 06 Dec 2023 06:50
Last Modified: 06 Dec 2023 06:51
URI: https://eref.uni-bayreuth.de/id/eprint/87982