Self context-aware emotion perception on human-robot interaction

Research output: Contribution to journal › Conference article › peer-review

Abstract

Emotion recognition plays a crucial role in many domains of human-robot interaction. In long-term interactions with humans, robots need to respond continuously and accurately; however, mainstream emotion recognition methods mostly focus on short-term recognition and disregard the context in which emotions are perceived. Humans take such contextual information into account, and different contexts can lead to completely different emotional expressions. In this paper, we introduce the Self Context-Aware Model (SCAM), which employs a two-dimensional emotion coordinate system for anchoring and re-labeling distinct emotions. It also incorporates a distinctive information-retention structure and a contextual loss. This approach yields significant improvements across audio, video, and multimodal settings. In the auditory modality, accuracy rises from 63.10% to 72.46%; in the visual modality, from 77.03% to 80.82%; and in the multimodal setting, from 77.48% to 78.93%. In future work, we will validate the reliability and usability of SCAM on robots through psychology experiments.
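The abstract's "two-dimensional emotion coordinate system for anchoring and re-labeling distinct emotions" can be illustrated with a minimal sketch. The paper does not publish its exact coordinates or distance measure, so the anchor values below and the nearest-anchor re-labeling rule are illustrative assumptions on a valence-arousal plane:

```python
import math

# Hypothetical valence-arousal anchors for a few basic emotions;
# illustrative values on a [-1, 1] x [-1, 1] plane, not the
# coordinates used by SCAM itself.
ANCHORS = {
    "happy":   ( 0.8,  0.5),
    "angry":   (-0.6,  0.7),
    "sad":     (-0.7, -0.4),
    "neutral": ( 0.0,  0.0),
}

def relabel(valence: float, arousal: float) -> str:
    """Re-label a predicted (valence, arousal) point with the
    nearest anchored emotion (Euclidean distance)."""
    return min(
        ANCHORS,
        key=lambda e: math.dist(ANCHORS[e], (valence, arousal)),
    )

print(relabel(0.7, 0.4))    # falls nearest the "happy" anchor
print(relabel(-0.1, 0.05))  # falls nearest the "neutral" anchor
```

Anchoring categorical labels as points in a continuous plane lets nearby predictions be smoothed or re-labeled consistently over time, which is one way context can be folded into otherwise frame-by-frame emotion estimates.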

Original language: English
Publication: Australasian Conference on Robotics and Automation, ACRA
Status: Published - 2023
Event: 2023 Australasian Conference on Robotics and Automation, ACRA 2023 - Sydney, Australia
Duration: 4 Dec 2023 - 6 Dec 2023
