OCEAN: Towards Developing an Opportunistic Continuous Emotion Annotation Framework

Abstract

Emotion-aware video consumption services typically deploy a machine learning model to infer emotion automatically and adapt the service accordingly. The ground-truth labels to train such models are usually collected as emotion self-reports from users, recorded continuously (using additional devices) while they watch different videos. This continuous annotation induces additional cognitive workload and degrades the viewing experience. To overcome these challenges, we propose OCEAN (Opportunistic Continuous Emotion Annotation), a framework that collects emotion self-reports opportunistically. The key idea of OCEAN is to identify the moments when physiological responses change significantly and to use only those moments for self-report collection (or probing). We evaluate OCEAN using the CASE dataset, a publicly available dataset of continuous emotion annotations for different videos. Our preliminary results demonstrate that OCEAN reduces continuous annotation effort (a median of four probes and an average reduction of 89% in probes) while collecting ratings similar to continuous annotations.
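To make the probing idea concrete, here is a minimal sketch of how opportunistic probe moments might be selected from a physiological signal using a sliding-window z-score change detector. This is an illustrative assumption, not the method described in the paper: the function name, window length, threshold, and the synthetic signal are all hypothetical.

```python
import numpy as np

def select_probe_moments(signal, fs, window_s=5.0, threshold=4.0):
    """Return timestamps (seconds) where the signal changes sharply.

    signal    : 1-D array of physiological samples (e.g., skin conductance)
    fs        : sampling rate in Hz
    window_s  : sliding-window length used to estimate local statistics
    threshold : z-score magnitude treated as a "significant" change
    """
    win = max(1, int(window_s * fs))
    probes = []
    last_probe = -win  # enforce at least one window between consecutive probes
    for i in range(win, len(signal)):
        window = signal[i - win:i]
        mu, sigma = window.mean(), window.std()
        if sigma == 0:
            continue
        z = (signal[i] - mu) / sigma
        if abs(z) > threshold and i - last_probe >= win:
            probes.append(i / fs)  # convert sample index to seconds
            last_probe = i
    return probes

# Example on synthetic data: a flat signal with an abrupt shift at t = 30 s,
# which should yield a single probe moment near that point.
fs = 20.0
t = np.arange(0, 60, 1 / fs)
sig = np.where(t < 30, 1.0, 2.0) + 0.01 * np.random.randn(t.size)
print(select_probe_moments(sig, fs))
```

In an OCEAN-like pipeline, each returned timestamp would correspond to a moment at which the viewer is probed for a self-report instead of annotating continuously.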

Publication
In International Conference on Pervasive Computing and Communications (PerCom) - Work in Progress
