Persistent rule-based interactive reinforcement learning

  • Adam Bignold
  • Francisco Cruz
  • Richard Dazeley
  • Peter Vamplew
  • Cameron Foale

  • Federation University Australia
  • Deakin University

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

Interactive reinforcement learning speeds up the learning process in autonomous agents by including a human trainer who provides extra information to the agent in real time. Current interactive reinforcement learning research has been limited to real-time interactions that offer user advice relevant only to the current state. Additionally, the information provided by each interaction is not retained; the agent discards it after a single use. In this work, we propose a persistent rule-based interactive reinforcement learning approach, i.e., a method for retaining and reusing provided knowledge that allows trainers to give general advice relevant to more than just the current state. Our experimental results show that persistent advice substantially improves the performance of the agent while reducing the number of interactions required from the trainer. Moreover, rule-based advice shows a similar performance impact to state-based advice, but with a substantially reduced interaction count.
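The core idea in the abstract — a rule base that retains trainer advice and reuses it across many states — can be illustrated with a minimal sketch. This is not the authors' implementation; the class, rule representation (a predicate paired with an advised action), and the Q-learning fallback are assumptions chosen for illustration.

```python
import random
from collections import defaultdict


class PersistentAdviceAgent:
    """Hypothetical sketch: a Q-learning agent that stores trainer advice
    as persistent rules (predicate -> action) and reuses them on every
    step, rather than discarding advice after a single use."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)   # Q-values keyed by (state, action)
        self.rules = []               # persistent rule base

    def add_rule(self, predicate, action):
        # The trainer gives general advice once; the rule is retained
        # and applies to every future state matching the predicate.
        self.rules.append((predicate, action))

    def act(self, state):
        # Consult the persistent rule base first.
        for predicate, action in self.rules:
            if predicate(state):
                return action
        # No rule applies: fall back to epsilon-greedy on Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning temporal-difference update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error


# Example: in a hypothetical 1-D corridor task with the goal at x = 9,
# a single rule ("move right whenever x < 9") covers many states at once,
# whereas state-based advice would require one interaction per state.
agent = PersistentAdviceAgent(actions=["left", "right"])
agent.add_rule(lambda s: s < 9, "right")
```

Because the rule generalises over states, the trainer's interaction count drops: one rule here replaces up to nine separate pieces of state-specific advice.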

Original language: English
Pages (from-to): 23411-23428
Number of pages: 18
Journal: Neural Computing and Applications
Volume: 35
Issue number: 32
DOIs
State: Published - Nov 2023

Keywords

  • Interactive reinforcement learning
  • Persistent advice
  • Reinforcement learning
  • Rule-based advice
