Unsupervised Stylish Image Description Generation via Domain Layer Norm

Most existing work on image description focuses on generating expressive descriptions. The few works dedicated to generating stylish (e.g., romantic, lyric) descriptions suffer from limited style variation and content digression. To address these limitations, we propose a controllable stylish image description generation model. It learns to generate stylish image descriptions that are more closely related to the image content, and it can be trained on an arbitrary monolingual corpus without collecting new paired images and stylish descriptions. Moreover, it enables users to generate descriptions in various styles by plugging style-specific parameters into the existing model to incorporate new styles. We achieve this capability via a novel layer normalization design, which we refer to as the Domain Layer Norm (DLN). Extensive experimental validation and a user study on various stylish image description generation tasks demonstrate the competitive advantages of the proposed model.
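The core idea of the DLN, as the abstract describes it, is that the normalization itself is shared across styles while each style plugs in its own scale and shift parameters. A minimal sketch of that mechanism is below; the class name, parameter shapes, and `add_style` API are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class DomainLayerNorm:
    """Sketch of a domain-conditioned layer norm: the normalization
    statistics are computed the same way for every style, but each
    registered style (domain) owns its own gain/bias parameters."""

    def __init__(self, dim, eps=1e-5):
        self.dim = dim
        self.eps = eps
        self.style_params = {}  # style name -> (gamma, beta)

    def add_style(self, name):
        # A new style is added by plugging in a fresh parameter pair,
        # leaving the shared parts of the model untouched.
        self.style_params[name] = (np.ones(self.dim), np.zeros(self.dim))

    def __call__(self, x, style):
        gamma, beta = self.style_params[style]
        mu = x.mean(axis=-1, keepdims=True)
        var = x.var(axis=-1, keepdims=True)
        x_hat = (x - mu) / np.sqrt(var + self.eps)  # shared normalization
        return gamma * x_hat + beta                 # style-specific affine

dln = DomainLayerNorm(dim=4)
dln.add_style("factual")
dln.add_style("romantic")
x = np.array([[1.0, 2.0, 3.0, 4.0]])
y = dln(x, "romantic")
```

In this sketch the per-style `(gamma, beta)` pairs are the only parameters that would be trained on a style's monolingual corpus, which is what makes adding a new style cheap relative to retraining the whole model.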

Authors: 
Cheng-Kuan Chen (National Tsing Hua University)
Zhu-Feng Pan (National Tsing Hua University)
Min Sun (National Tsing Hua University)
Publication Date: 
Friday, February 1, 2019