TY - GEN
T1 - Evaluating Human-like Explanations for Robot Actions in Reinforcement Learning Scenarios
AU - Cruz, Francisco
AU - Young, Charlotte
AU - Dazeley, Richard
AU - Vamplew, Peter
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Explainable artificial intelligence is a research field that aims to provide more transparency for autonomous intelligent systems. Explainability has been used, particularly in reinforcement learning and robotic scenarios, to better understand the robot decision-making process. Previous work, however, has largely focused on providing technical explanations that are better understood by AI practitioners than by non-expert end-users. In this work, we make use of human-like explanations built from the probability that an autonomous robot will succeed in completing its goal after performing an action. These explanations are intended to be understood by people with little or no experience of artificial intelligence methods. This paper presents a user trial that studies whether explanations focused on the probability an action has of succeeding in its goal are suitable for non-expert end-users. The results show that non-expert participants rate robot explanations focused on the probability of success higher and with less variance than technical explanations generated from Q-values, and also favor counterfactual explanations over standalone explanations.
AB - Explainable artificial intelligence is a research field that aims to provide more transparency for autonomous intelligent systems. Explainability has been used, particularly in reinforcement learning and robotic scenarios, to better understand the robot decision-making process. Previous work, however, has largely focused on providing technical explanations that are better understood by AI practitioners than by non-expert end-users. In this work, we make use of human-like explanations built from the probability that an autonomous robot will succeed in completing its goal after performing an action. These explanations are intended to be understood by people with little or no experience of artificial intelligence methods. This paper presents a user trial that studies whether explanations focused on the probability an action has of succeeding in its goal are suitable for non-expert end-users. The results show that non-expert participants rate robot explanations focused on the probability of success higher and with less variance than technical explanations generated from Q-values, and also favor counterfactual explanations over standalone explanations.
UR - https://www.scopus.com/pages/publications/85144813388
U2 - 10.1109/IROS47612.2022.9981334
DO - 10.1109/IROS47612.2022.9981334
M3 - Conference contribution
AN - SCOPUS:85144813388
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 894
EP - 901
BT - 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2022
Y2 - 23 October 2022 through 27 October 2022
ER -