Natural Language Dialog System Considering Speaker’s Emotion Calculated from Acoustic Features
URI | http://harp.lib.hiroshima-u.ac.jp/hiroshima-cu/metadata/12354
File | IWSDS2016mera.pdf (155.0 KB)
Open Date | 2017-12-25
Title | Natural Language Dialog System Considering Speaker’s Emotion Calculated from Acoustic Features
Author | Takumi Takahashi, Kazuya Mera, Tang Ba Nhat, Yoshiaki Kurosawa, Toshiyuki Takezawa
Subject | Interactive Voice Response system (IVR); Acoustic features; Emotion; Support Vector Machine (SVM); Artificial Intelligence Markup Language (AIML)
Abstract | With the development of Interactive Voice Response (IVR) systems, people can not only operate computer systems through task-oriented conversation but also enjoy non-task-oriented conversation with the computer. When an IVR system generates a response, it usually refers only to the verbal information of the user’s utterance. However, when a person gloomily says “I’m fine,” people respond not with “That’s wonderful” but with “Really?” or “Are you OK?” because they consider both verbal and non-verbal information such as tone of voice, facial expressions, and gestures. In this paper, we propose an intelligent IVR system that considers not only verbal but also non-verbal information. To estimate the speaker’s emotion (positive, negative, or neutral), 384 acoustic features extracted from the speaker’s utterance are fed to a machine-learning classifier (SVM). Artificial Intelligence Markup Language (AIML)-based response-generation rules are extended so that they can take the speaker’s emotion into account. In the experiment, subjects felt that the proposed dialog system was more likable and enjoyable and gave fewer machine-like reactions.
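The abstract describes a two-part pipeline: an SVM that maps 384 acoustic features per utterance to a positive/negative/neutral emotion label, and AIML-based response rules extended to take that label into account. The sketch below is only an illustration of that idea, not the authors’ implementation: it assumes the 384-dimensional feature vectors have already been extracted, substitutes scikit-learn’s SVC for whatever SVM toolkit the paper used, and stands in for the extended AIML rules with a hypothetical lookup table keyed on (utterance, emotion). All training data and rules shown here are made up.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical training data: one 384-dimensional acoustic feature vector
# per utterance, labeled positive / negative / neutral.
rng = np.random.default_rng(0)
X_train = rng.random((300, 384))
y_train = rng.choice(["positive", "negative", "neutral"], size=300)

# SVM emotion classifier over standardized acoustic features.
emotion_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
emotion_clf.fit(X_train, y_train)

# Simplified stand-in for the paper's emotion-aware AIML rules:
# the reply depends on both the recognized text and the estimated emotion.
RULES = {
    ("i'm fine", "positive"): "That's wonderful!",
    ("i'm fine", "negative"): "Really? Are you OK?",
    ("i'm fine", "neutral"): "Good to hear.",
}

def respond(recognized_text: str, acoustic_features: np.ndarray) -> str:
    """Reply using both verbal (text) and non-verbal (acoustic) information."""
    emotion = emotion_clf.predict(acoustic_features.reshape(1, -1))[0]
    return RULES.get((recognized_text.lower(), emotion), "I see.")

print(respond("I'm fine", rng.random(384)))
```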
Peer Reviewed | Yes
Journal Title | Lecture Notes in Electrical Engineering
Volume | 427
Start Page | 145
End Page | 157
Published Date | 2016-12-25
Publisher | Springer
ISSN | 1876-1100
ISBN | 978-981-10-2584-6; 978-981-10-2585-3
DOI | 10.1007/978-981-10-2585-3_11
Language | eng
NII Type | Conference Paper
Text Version | Author version
Rights | Copyright 2017 Springer. This is the author’s version of a work that was accepted for publication in the following source: Takumi Takahashi, Kazuya Mera, Tang Ba Nhat, Yoshiaki Kurosawa, Toshiyuki Takezawa (2017) Natural Language Dialog System Considering Speaker’s Emotion Calculated from Acoustic Features. In Kristiina Jokinen, Graham Wilcock (Eds.) Dialogues with Social Robots: Enablements, Analyses, and Evaluation, Lecture Notes in Electrical Engineering, volume 427, pp. 145-157. The final publication is available at Springer via http://dx.doi.org/10.1007/978-981-10-2585-3_11.
Note | This paper was originally accepted at the 7th International Workshop on Spoken Dialogue Systems (IWSDS 2016), Saariselkä, Finland, January 13-16, 2016.
Set | hiroshima-cu