Multilayer and Multimodal Fusion of Deep Neural Networks for Video Classification

"Multilayer and Multimodal Fusion of Deep Neural Networks for Video Classification"
Xiaodong Yang (NVIDIA), Pavlo Molchanov (NVIDIA), Jan Kautz (NVIDIA), in ACM Multimedia, October 2016
Research Area: Computer Vision
Author(s): Xiaodong Yang (NVIDIA), Pavlo Molchanov (NVIDIA), Jan Kautz (NVIDIA)
Date: October 2016
Abstract: This paper presents a novel framework for combining multiple layers and modalities of deep neural networks for video classification. We first propose a multilayer strategy to simultaneously capture a variety of levels of abstraction and invariance in a network, where the convolutional and fully connected layers are effectively represented by the proposed feature aggregation methods. We further introduce a multimodal scheme that includes four highly complementary modalities to extract diverse static and dynamic cues at multiple temporal scales. In particular, to model long-term temporal information, we propose a new structure, FC-RNN, that effectively transforms pre-trained fully connected layers into recurrent layers. A robust boosting model is then introduced to optimize the fusion of multiple layers and modalities in a unified way. In extensive experiments, we achieve state-of-the-art results on two public benchmark datasets: UCF101 and HMDB51.
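
To make the FC-RNN idea concrete, the sketch below shows one possible PyTorch rendering of the transformation described in the abstract: a pre-trained fully connected layer y = f(Wx + b) becomes a recurrent layer h_t = f(W x_t + U h_{t-1} + b), where W and b are kept from pre-training and only the hidden-to-hidden matrix U is introduced anew. This is an illustrative reading of the abstract, not the authors' released code; all class and variable names here are assumptions.

```python
import torch
import torch.nn as nn


class FCRNN(nn.Module):
    """Illustrative sketch of FC-RNN (names are ours, not the authors').

    A pre-trained fully connected layer y = f(W x + b) is re-used as the
    input-to-hidden transform of a recurrent layer:

        h_t = f(W x_t + U h_{t-1} + b)

    Only the recurrent matrix U is newly introduced and trained from scratch.
    """

    def __init__(self, pretrained_fc: nn.Linear, nonlinearity=torch.relu):
        super().__init__()
        hidden = pretrained_fc.out_features
        self.fc = pretrained_fc                         # W, b from pre-training
        self.U = nn.Linear(hidden, hidden, bias=False)  # new recurrent weights
        self.f = nonlinearity                           # the FC layer's activation

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: (batch, time, in_features) frame-level CNN features
        batch, time, _ = x_seq.shape
        h = x_seq.new_zeros(batch, self.fc.out_features)
        outputs = []
        for t in range(time):
            h = self.f(self.fc(x_seq[:, t]) + self.U(h))
            outputs.append(h)
        return torch.stack(outputs, dim=1)  # (batch, time, hidden)


if __name__ == "__main__":
    fc = nn.Linear(4096, 4096)        # stands in for a pre-trained CNN fc layer
    rnn = FCRNN(fc)
    clips = torch.randn(2, 16, 4096)  # 2 clips x 16 frames x 4096-d features
    print(rnn(clips).shape)           # torch.Size([2, 16, 4096])
```

Because W and b start from the pre-trained FC weights, the layer retains the generalization of the source network, and only the recurrent matrix U has to be learned for the video task; this is the motivation the abstract gives for FC-RNN over training a full recurrent layer from scratch.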