Visuals to text: a comprehensive review on automatic image captioning

Yue Ming, Nannan Hu, Chunxiao Fan, Fan Feng, Jiangwan Zhou, Hui Yu

Research output: Contribution to journal › Article › peer-review


Abstract

Image captioning refers to the automatic generation of descriptive text according to the visual content of an image. It is a technique that integrates multiple disciplines, including computer vision (CV), natural language processing (NLP), and artificial intelligence. In recent years, substantial research effort has been devoted to image caption generation, with impressive progress. To summarize these recent advances, we present a comprehensive review of image captioning, covering both traditional methods and recent deep learning-based techniques. Specifically, we first briefly review early traditional works based on retrieval and templates. We then focus on deep learning-based image captioning research, which we categorize into the encoder-decoder framework, attention mechanisms, and training strategies, according to model structure and training manner. After that, we summarize the publicly available datasets and evaluation metrics, including those proposed for specific requirements, and compare state-of-the-art methods on the MS COCO dataset. Finally, we discuss open challenges and future research directions.
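To give a flavor of the attention mechanism within the encoder-decoder framework mentioned above, the sketch below shows dot-product attention over image-region features, in pure Python. This is a minimal illustration under assumed toy data, not code from the reviewed paper; all names (`attend`, `features`, `hidden`) are hypothetical, and real captioning models use learned projections and neural encoders/decoders.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of raw scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(features, hidden):
    # Score each encoder region feature against the decoder hidden
    # state (dot product), normalize the scores into weights, and
    # return the attention weights plus the weighted-sum context.
    scores = [sum(f_d * h_d for f_d, h_d in zip(f, hidden)) for f in features]
    weights = softmax(scores)
    dim = len(features[0])
    context = [sum(w * f[d] for w, f in zip(weights, features))
               for d in range(dim)]
    return weights, context

# Toy "encoder output": 3 image regions with 2-dim features each,
# and a decoder state after emitting the previous word.
features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
hidden = [1.0, 0.0]
weights, context = attend(features, hidden)
```

At each decoding step the context vector is recomputed from the current hidden state, so the decoder can attend to different image regions for different words of the caption.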
Original language: English
Pages (from-to): 1339-1365
Number of pages: 27
Journal: IEEE/CAA Journal of Automatica Sinica
Volume: 9
Issue number: 8
DOIs
Publication status: Published - 3 Aug 2022

Keywords

  • Image captioning
  • Encoder-decoder framework
  • Attention mechanism
  • Training strategies
  • Artificial intelligence
  • Multi-modal understanding

