Holistic Multi-modal Memory Network for Movie Question Answering

Abstract

Answering questions from multi-modal context is a challenging problem, as it requires a deep integration of different data sources. Existing approaches employ only partial interactions among data sources within each attention hop. In this paper, we present the Holistic Multi-modal Memory Network (HMMN) framework, which fully considers the interactions between the different input sources (multi-modal context and question) in each hop. In addition, it takes answer choices into consideration during the context retrieval stage. The proposed framework therefore effectively integrates multi-modal context, question, and answer information, yielding more informative retrieved context for question answering. Our HMMN framework achieves state-of-the-art accuracy on the MovieQA dataset. Extensive ablation studies show the importance of holistic reasoning and the contributions of different attention strategies.
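The core idea of conditioning retrieval on question and answer information jointly across several attention hops can be sketched as follows. This is a minimal, hypothetical illustration of multi-hop attention with an answer-aware query, not the paper's actual HMMN architecture; all function names, dimensions, and the query-update rule are assumptions for exposition.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_hop_retrieval(context, question, answer, hops=2):
    """Toy sketch: attend over context vectors with a query that mixes
    question and answer information, refining the query each hop.

    context : (n, d) array of multi-modal context embeddings
    question, answer : (d,) embedding vectors
    Returns the final (d,) query after `hops` rounds of retrieval.
    """
    # The initial query combines question AND answer choice, so every
    # hop's retrieval is conditioned on both (a simplifying assumption).
    query = question + answer
    for _ in range(hops):
        scores = context @ query        # (n,) relevance of each context item
        weights = softmax(scores)       # attention distribution over context
        retrieved = weights @ context   # (d,) attended context summary
        query = query + retrieved       # fold retrieved context into the query
    return query

rng = np.random.default_rng(0)
ctx = rng.normal(size=(5, 8))
q = rng.normal(size=8)
a = rng.normal(size=8)
out = multi_hop_retrieval(ctx, q, a)
```

In a real answer-selection setting, one would run this retrieval once per candidate answer and score each candidate against its own retrieved context.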

Publication
IEEE Transactions on Image Processing

Link: https://arxiv.org/abs/1811.04595

Luu Anh Tuan
Assistant Professor

My research interests lie at the intersection of Artificial Intelligence and Natural Language Processing.