
Visual Sentiment Analysis with Network in Network

Abstract


In modern society, visual content such as images and videos is increasingly becoming a new form of media for expressing users' opinions on the Internet. As a complement to textual sentiment analysis, visual sentiment analysis aims to provide more robust information for data analytics by extracting emotion and sentiment toward topics and events from images and videos. Inspired by recent work that applied deep convolutional neural networks (CNNs) to this challenging problem, we propose a framework for image sentiment analysis built on a novel deep neural network, Network in Network (NIN), which aims to improve the discriminability of local patches within receptive fields. We trained our network on a dataset of nearly half a million Flickr images and minimized the effect of noisy training data by fine-tuning the network in a progressive manner. Extensive experiments on manually labeled Twitter images show that the proposed architecture outperforms conventional CNNs and other traditional algorithms in visual sentiment analysis.
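The defining idea of Network in Network (Lin et al., 2014) is to replace plain convolutional layers with "mlpconv" layers, in which each convolution is followed by 1×1 convolutions that act as a small multilayer perceptron sliding over every receptive field, and to replace fully connected layers with global average pooling. The PyTorch sketch below illustrates such blocks stacked into a two-class (positive/negative) sentiment classifier; the layer sizes, block count, and the class name SentimentNIN are illustrative assumptions, not the exact configuration used in the paper.

```python
import torch.nn as nn


class MLPConvBlock(nn.Module):
    """One NIN 'mlpconv' block: a standard convolution followed by two 1x1
    convolutions, which behave like a small MLP applied to every local patch."""

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=1),  # 1x1 conv = per-patch MLP layer
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class SentimentNIN(nn.Module):
    """Stack of mlpconv blocks with global average pooling feeding a
    binary (positive/negative) sentiment output; sizes are illustrative."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            MLPConvBlock(3, 96, kernel_size=11, stride=4),
            nn.MaxPool2d(3, stride=2),
            MLPConvBlock(96, 256, kernel_size=5, padding=2),
            nn.MaxPool2d(3, stride=2),
            MLPConvBlock(256, num_classes, kernel_size=3, padding=1),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)  # global average pooling replaces fully connected layers

    def forward(self, x):
        x = self.features(x)
        return self.gap(x).flatten(1)  # (N, num_classes) scores
```

In this scheme the final mlpconv block emits one feature map per sentiment class, so global average pooling produces the class scores directly without any fully connected layers, which keeps the parameter count low and encourages the feature maps themselves to act as confidence maps.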

Table of Contents

Abstract
 1. Introduction
 2. Network in Network
 3. Overall Structure and Progressive Fine-Tuning
 4. Experiments
  4.1. Training on Flickr Dataset
  4.2. Twitter Test Dataset
  4.3. Transfer Learning
 5. Conclusions
 Acknowledgments
 References

Author Information

  • Zuhe Li, School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710072, China; School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou 450002, China
  • Yangyu Fan, School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710072, China
  • Yangyu Fan School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710072, China
  • Fengqin Wang, School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou 450002, China
