Chapter 5

Recognition of Acoustic Emotion

5.1. Introduction

In the past 15 years, research into emotions in speech has moved beyond merely analyzing the vocal manifestations of emotional states to developing automatic emotion classification systems. This shift stems from the emergence of the affective sciences, or affective computing [PIC 97], and their numerous potential practical applications.

Many applications of vocal emotion recognition involve human–machine interaction. A good example is dialog systems, where identifying a user’s emotional state makes it possible to adapt the dialog strategy. In a call center context, for instance, if the user shows signs of irritation or frustration with the automatic system, a strategy may be triggered that transfers the caller to a human operator [DEV 05, LEE 02]. However, emotion recognition is not limited to dialog systems. Some academic medical research [IST 03], for example, has focused on emotion recognition as a means of assisting elderly or hospitalized patients.

Similarly, this technology can be used for security, with applications such as crisis management and audio surveillance. In crisis management, for example, work has been undertaken to monitor the emotions of victims and rescue workers using collaborative search-and-rescue robots [LOO 07]. These robots are vital for crisis management, particularly in environments considered too dangerous for, or inaccessible to, emergency crews. In the field of security, ...
