Article
A System of Associated Intelligent Integration for Human State Estimation
Author(s)
Akihiro Matsufuji, Wei-Fen Hsieh, Eri Sato-Shimokawara and Toru Yamaguchi
DOI:10.17265/2159-5275/2019.03.003
Affiliation(s)
Department of Computer Science, Graduate School of Systems Design, Tokyo Metropolitan University, Hino, Tokyo 191-0065, Japan
ABSTRACT
We propose a learning architecture for integrating multi-modal information, e.g., vision and audio. In recent years, artificial intelligence (AI) has made major progress in key tasks such as language, vision, and speech recognition, and most studies focus on how AI can achieve human-like abilities. In particular, in the human-robot interaction research field, some researchers attempt to make robots converse with humans in daily life. The key challenges for making robots talk naturally in conversation are the need to consider multi-modal non-verbal information, as humans do, and to learn from a small amount of labeled multi-modal data. Previous multi-modal learning requires a large amount of labeled data, yet labeled multi-modal data are scarce and difficult to collect. In this research, we address these challenges by integrating single-modal classifiers, each trained on its own modality. Our architecture utilizes the resulting knowledge by using bi-directional associative memory. Furthermore, we conducted a conversation experiment to collect multi-modal non-verbal information. We verify our approach by comparing accuracies between our system and a conventional system trained directly on multi-modal information.
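To illustrate the association mechanism the abstract names, the following is a minimal sketch of a classical bi-directional associative memory (BAM) that stores pairs of patterns and recalls one from the other. The vector sizes and the idea of pairing a vision code with an audio code are illustrative assumptions for this sketch, not details taken from the paper's architecture.

```python
import numpy as np

def train_bam(x_patterns, y_patterns):
    """Build the BAM weight matrix from bipolar (+1/-1) pattern pairs."""
    W = np.zeros((x_patterns.shape[1], y_patterns.shape[1]))
    for x, y in zip(x_patterns, y_patterns):
        W += np.outer(x, y)  # Hebbian outer-product storage
    return W

def recall(W, x, steps=10):
    """Recall the associated pair by alternating updates until stable."""
    y = np.sign(x @ W)
    for _ in range(steps):
        x_new = np.sign(W @ y)
        y_new = np.sign(x_new @ W)
        if np.array_equal(y_new, y) and np.array_equal(x_new, x):
            break
        x, y = x_new, y_new
    return x, y

# Hypothetical example: associate a vision pattern with an audio pattern.
vision = np.array([[1, -1, 1, -1, 1, 1]])
audio = np.array([[1, 1, -1, -1]])
W = train_bam(vision, audio)
_, recalled_audio = recall(W, vision[0])
print(recalled_audio)  # recovers the stored audio pattern
```

In a multi-modal setting of the kind the abstract describes, the stored pairs could be outputs of separately trained single-modal classifiers, so that a pattern observed in one modality retrieves its associated pattern in the other without retraining on jointly labeled data.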
KEYWORDS
Multi-modal learning, bi-directional associative memory, non-verbal, human-robot interaction.