A review on the attention mechanism of deep learning

Zhaoyang Niu, Guoqiang Zhong*, Hui Yu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Attention has arguably become one of the most important concepts in the deep learning field. It is inspired by the human biological system, which tends to focus on the distinctive parts of a scene when processing large amounts of information. With the development of deep neural networks, the attention mechanism has been widely used in diverse application domains. This paper aims to give an overview of the state-of-the-art attention models proposed in recent years. Toward a better general understanding of attention mechanisms, we define a unified model that is suitable for most attention structures. Each step of the attention mechanism implemented in this model is described in detail. Furthermore, we classify existing attention models according to four criteria: the softness of attention, the form of the input feature, the input representation, and the output representation. In addition, we summarize the network architectures used in conjunction with the attention mechanism and describe some typical applications of it. Finally, we discuss the interpretability that attention brings to deep learning and present potential future trends.
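The unified model the abstract refers to decomposes attention into three generic steps: compute alignment scores between a query and a set of keys, normalize the scores into a distribution, and take the weighted sum of the values. A minimal NumPy sketch of that pipeline, using scaled dot-product scoring as one common choice (the function and variable names here are illustrative, not taken from the paper):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    """Generic attention: score -> normalize -> weighted sum.

    query:  (n_q, d_k) query vectors
    keys:   (n_k, d_k) key vectors
    values: (n_k, d_v) value vectors
    Returns the attended output (n_q, d_v) and the weights (n_q, n_k).
    """
    d_k = keys.shape[-1]
    scores = query @ keys.T / np.sqrt(d_k)  # step 1: alignment scores
    weights = softmax(scores, axis=-1)      # step 2: attention distribution
    output = weights @ values               # step 3: weighted sum of values
    return output, weights
```

Other scoring functions surveyed in the attention literature (additive, general/bilinear, location-based) slot into step 1 without changing the overall structure.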

Original language: English
Pages (from-to): 48-62
Number of pages: 15
Journal: Neurocomputing
Volume: 452
Early online date: 1 Apr 2021
Publication status: Published - 10 Sep 2021

Keywords

  • Attention mechanism
  • Computer vision applications
  • Convolutional Neural Network (CNN)
  • Deep learning
  • Encoder-decoder
  • Natural language processing applications
  • Recurrent Neural Network (RNN)
  • Unified attention model

