Visual saliency detection by integrating spatial position prior of object with background cues

Research output: Contribution to journal › Article › peer-review

  • Muwei Jian
  • Jing Wang
  • Professor Hui Yu
  • Guodong Wang
  • Xiangjing Meng
  • Lu Yang
  • Junyu Dong
  • Yilong Yin
In this paper, we propose an effective visual saliency-detection model based on the spatial position prior of attractive objects and sparse background features. First, since multi-orientation features are among the key visual stimuli the human visual system (HVS) uses to perceive object spatial information, the discrete wavelet frame transform (DWFT) is applied to extract directionality characteristics for calculating the centroid of remarkable objects in the original image. Second, the color contrast feature is used to represent the physical characteristics of salient objects. Third, in order to explore and utilize the background features of an input image, sparse dictionary learning is performed to statistically analyze and estimate the background feature map. Finally, three distinctive cues, namely the directional feature, the color contrast feature and the background feature, are combined to generate a final robust saliency map. Experimental results on three widely used image datasets show that our proposed method is effective and efficient, and is superior to other state-of-the-art saliency-detection models.
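The abstract does not specify how the three cue maps are fused; as a rough illustration only, a simple normalized weighted combination might look like the sketch below. The function names, the uniform default weights, and the inversion of the background map (so that regions well explained by the learned background dictionary are suppressed) are all assumptions, not the authors' method.

```python
import numpy as np

def normalize(m):
    """Scale a feature map to [0, 1]; a constant map becomes all zeros."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse_saliency(directional, color_contrast, background, weights=(1.0, 1.0, 1.0)):
    """Fuse three cue maps into one saliency map (illustrative sketch).

    `background` is assumed to score how well each pixel is explained by
    the background dictionary, so it is inverted before fusion.
    """
    d = normalize(directional)
    c = normalize(color_contrast)
    b = 1.0 - normalize(background)
    w1, w2, w3 = weights
    fused = (w1 * d + w2 * c + w3 * b) / (w1 + w2 + w3)
    return normalize(fused)
```

A linear fusion like this is only one common choice; multiplicative or learned fusion schemes are equally plausible given the text.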
Original language: English
Journal: Expert Systems with Applications
Publication status: Accepted for publication - 1 Nov 2020


  • Visual Saliency Detection by Integrating Spatial Position Prior_pp

    Rights statement: The embargo end date of 2050 is a temporary measure until the publication date is known. Once the publication date is known, the full text of this article will be made available shortly afterwards.

    Accepted author manuscript (Post-print), 1.64 MB, PDF document

    Due to publisher’s copyright restrictions, this document is not freely available to download from this website until: 1/01/50


ID: 23212043