Visual saliency detection via combining center prior and U-Net

Xiangwei Lu, Muwei Jian*, Xing Wang, Hui Yu, Junyu Dong, Kin Man Lam

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



At present, poor background suppression is a major problem in visual saliency detection. Although many mainstream saliency detection models can effectively locate salient objects, objects in the complicated backgrounds of some natural images are often mistaken for salient objects. This paper therefore proposes a center prior-based encoder-decoder network to improve background suppression, efficiently combining a traditional center prior-based method with the U-Net model. First, multi-scale group convolution replaces general convolution to highlight semantic features, and the high-level features at the bottom of U-Net are integrated and optimized with the center prior taken into account. Then, refinements are propagated throughout the whole network by upgrading the network structure, so that the optimized features are fully exploited. Because these changes to the U-Net architecture somewhat affect the stability of the network, branch network modules are adopted and adaptive parameters are defined to coordinate the relationships between the branch networks and keep the network structure well balanced. The method has been tested on four widely used databases and proven effective by comparing its results with those of seven other popular methods.
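The center prior mentioned in the abstract is a classic saliency heuristic: pixels near the image center are weighted more heavily than those near the borders, which helps suppress background clutter. As a rough illustration of the idea (not the paper's exact formulation — the function name, the Gaussian form, and the `sigma_scale` parameter below are illustrative assumptions), such a prior can be built as a 2D Gaussian map:

```python
import numpy as np

def center_prior_map(height, width, sigma_scale=0.5):
    """Build a Gaussian center-prior map: weights are close to 1
    near the image center and fall off toward the borders.
    Illustrative sketch only; the paper's actual prior may differ."""
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    sigma_y, sigma_x = sigma_scale * height, sigma_scale * width
    # Normalized squared distance from the image center
    d = ((ys - cy) / sigma_y) ** 2 + ((xs - cx) / sigma_x) ** 2
    return np.exp(-0.5 * d)

prior = center_prior_map(8, 8)
```

In a saliency pipeline, a map like this is typically multiplied element-wise with intermediate feature maps or a raw saliency map, down-weighting responses far from the center.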

Original language: English
Journal: Multimedia Systems
Early online date: 3 May 2022
Publication status: Early online - 3 May 2022


  • background suppression
  • center prior
  • loss function
  • saliency object detection
  • U-Net


