
An evaluation methodology for interactive reinforcement learning with simulated users

  • Adam Bignold
  • Francisco Cruz
  • Richard Dazeley
  • Peter Vamplew
  • Cameron Foale

Research output: Contribution to journal › Article › peer-review

18 Scopus citations

Abstract

Interactive reinforcement learning methods utilise an external information source to evaluate decisions and accelerate learning. Previous work has shown that human advice can significantly improve a learning agent's performance. When evaluating reinforcement learning algorithms, it is common to repeat experiments as parameters are altered or to obtain a sufficient sample size. Requiring human interaction every time an experiment is restarted is therefore undesirable, particularly when the expense of doing so can be considerable. Additionally, reusing the same people across experiments introduces bias, as they learn the behaviour of the agent and the dynamics of the environment. This paper presents a methodology for evaluating interactive reinforcement learning agents by employing simulated users. Simulated users allow human knowledge, bias, and interaction to be simulated. Their use allows the development and testing of reinforcement learning agents, and can provide indicative results of agent performance under defined human constraints. While simulated users are no replacement for actual humans, they offer an affordable and fast alternative for evaluating assisted agents. We introduce a method for performing a preliminary evaluation utilising simulated users to show how performance changes depending on the type of user assisting the agent. Moreover, we describe how human interaction may be simulated, and present an experiment illustrating the applicability of simulated users in evaluating agent performance when assisted by different types of trainers. Experimental results show that the use of this methodology allows for greater insight into the performance of interactive reinforcement learning agents when advised by different users. The use of simulated users with varying characteristics allows for evaluation of the impact of those characteristics on the behaviour of the learning agent.
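The idea described in the abstract can be sketched in code. The following is a minimal, illustrative sketch, not the authors' implementation: a simulated user is parameterised by hypothetical characteristics (`availability`, how often it responds, and `accuracy`, how often its advice is correct) and advises a tabular Q-learning agent, whose exploration step is overridden whenever advice is given. All class, function, and parameter names here are assumptions for illustration.

```python
import random

class SimulatedUser:
    """Stands in for a human trainer. The availability and accuracy
    parameters are assumed, illustrative characteristics."""
    def __init__(self, availability, accuracy, optimal_policy):
        self.availability = availability      # probability of giving advice
        self.accuracy = accuracy              # probability advice is correct
        self.optimal_policy = optimal_policy  # maps state -> best action

    def advise(self, state, actions):
        """Return an advised action, or None when the user stays silent."""
        if random.random() > self.availability:
            return None
        best = self.optimal_policy[state]
        if random.random() < self.accuracy:
            return best
        # Inaccurate advice: recommend some other action at random.
        return random.choice([a for a in actions if a != best])

def interactive_q_learning(env_step, states, actions, user,
                           episodes=100, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning in which the simulated user's advice, when
    available, overrides epsilon-greedy action selection."""
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        state, done = states[0], False
        while not done:
            advice = user.advise(state, actions)
            if advice is not None:
                action = advice
            elif random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            nxt, reward, done = env_step(state, action)
            target = reward if done else reward + gamma * max(
                q[(nxt, a)] for a in actions)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
    return q

# Toy 5-state corridor: moving 'right' from state 3 reaches the goal (state 4).
def env_step(state, action):
    nxt = min(state + 1, 4) if action == 'right' else max(state - 1, 0)
    return nxt, (1.0 if nxt == 4 else 0.0), nxt == 4

random.seed(0)
user = SimulatedUser(availability=0.8, accuracy=1.0,
                     optimal_policy={s: 'right' for s in range(5)})
q = interactive_q_learning(env_step, list(range(5)), ['left', 'right'], user)
```

Varying `availability` and `accuracy` while rerunning the experiment is the kind of evaluation the methodology enables: the same agent can be assessed under many trainer profiles without recruiting new human participants for each run.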

Original language: English
Article number: 13
Pages (from-to): 1-15
Number of pages: 15
Journal: Biomimetics
Volume: 6
Issue number: 1
DOIs
State: Published - 2021

Keywords

  • Interactive reinforcement learning
  • Methodology for simulated users
  • Reinforcement learning
  • Reward shaping
