
Evaluating Human-like Explanations for Robot Actions in Reinforcement Learning Scenarios

  • UNSW Sydney
  • Federation University Australia
  • Deakin University

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

14 Scopus citations

Abstract

Explainable artificial intelligence is a research field that aims to provide greater transparency for autonomous intelligent systems. Explainability has been used, particularly in reinforcement learning and robotic scenarios, to better understand the robot's decision-making process. Previous work, however, has largely focused on providing technical explanations that are better suited to AI practitioners than to non-expert end-users. In this work, we make use of human-like explanations built from the probability that an autonomous robot has of completing its goal after performing an action. These explanations are intended to be understood by people with little or no experience with artificial intelligence methods. This paper presents a user trial studying whether explanations that focus on an action's probability of succeeding at its goal constitute a suitable explanation for non-expert end-users. The results show that non-expert participants rate robot explanations based on the probability of success higher and with less variance than technical explanations generated from Q-values, and that they favor counterfactual explanations over standalone explanations.
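To make the contrast concrete, below is a minimal, illustrative sketch of the two explanation styles the abstract compares; it is not the authors' implementation. The names q_table and p_success, and the sentence templates, are assumptions: q_table stands in for an agent's learned action values, and p_success for its estimated probability of reaching the goal after taking an action in a state.

    # Illustrative sketch only (not the paper's code): Q-value-based vs.
    # probability-of-success explanations for a chosen robot action.
    # q_table, p_success, and the templates below are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions = 5, 3
    q_table = rng.normal(size=(n_states, n_actions))     # learned Q-values
    p_success = rng.uniform(size=(n_states, n_actions))  # estimated P(goal | s, a)

    def technical_explanation(state: int, action: int) -> str:
        # Q-value-style explanation, phrased for AI practitioners.
        return (f"I chose action {action} because its Q-value "
                f"{q_table[state, action]:.2f} is the highest in state {state}.")

    def human_like_explanation(state: int, action: int) -> str:
        # Probability-of-success explanation, phrased for non-experts.
        return (f"I chose this action because it gives me a "
                f"{100 * p_success[state, action]:.0f}% chance of reaching my goal.")

    def counterfactual_explanation(state: int, action: int, alt: int) -> str:
        # Counterfactual variant: contrast the chosen action with an alternative.
        return (f"I chose this action because it gives me a "
                f"{100 * p_success[state, action]:.0f}% chance of reaching my goal, "
                f"whereas the alternative would only give me a "
                f"{100 * p_success[state, alt]:.0f}% chance.")

    state = 2
    action = int(np.argmax(p_success[state]))       # best action by success estimate
    alt = int(np.argsort(p_success[state])[-2])     # second-best alternative
    print(technical_explanation(state, action))
    print(human_like_explanation(state, action))
    print(counterfactual_explanation(state, action, alt))

The counterfactual variant contrasts the chosen action with an alternative, mirroring the standalone vs. counterfactual explanation conditions evaluated in the user trial.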

Original language: English
Title of host publication: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 894-901
Number of pages: 8
ISBN (Electronic): 9781665479271
DOIs
State: Published - 2022

Publication series

Name: IEEE International Conference on Intelligent Robots and Systems
Volume: 2022-October
ISSN (Print): 2153-0858
ISSN (Electronic): 2153-0866

