
Visual saliency detection via background features and object-location cues

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

In this paper, we propose a simple visual saliency detection model based on the spatial position of salient objects and background cues. First, the discrete wavelet frame transform (DWFT) is used to extract directionality characteristics for estimating the centroid of salient objects in the input image. Then, a colour contrast feature is computed to represent the physical characteristics of salient objects. Meanwhile, sparse dictionary learning is applied to obtain the background feature map. Finally, three typical cues, the directional feature, the colour contrast feature and the background feature, are fused to generate a reliable saliency map. Experimental results verify that the designed method is useful and effective.
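The abstract describes a three-cue fusion pipeline. The sketch below is a minimal illustration of that fusion idea, not the authors' implementation: a gradient-weighted centroid prior stands in for the DWFT-based object-location cue, and a boundary-colour distance stands in for the sparse-dictionary background model; function names and parameters are illustrative only.

# Illustrative three-cue saliency fusion (not the authors' code).
import numpy as np

def location_cue(gray):
    # Gaussian prior centred at the gradient-weighted centroid,
    # standing in for the DWFT-based object-location estimate.
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = mag.sum() + 1e-8
    cy, cx = (ys * mag).sum() / total, (xs * mag).sum() / total
    sigma = 0.25 * max(h, w)
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def colour_contrast_cue(img):
    # Global colour contrast: distance of each pixel from the mean image colour.
    mean_colour = img.reshape(-1, 3).mean(axis=0)
    d = np.linalg.norm(img - mean_colour, axis=2)
    return d / (d.max() + 1e-8)

def background_cue(img, border=10):
    # Distance from the mean boundary colour, a crude stand-in for the
    # learned background feature map (higher = less background-like).
    strips = (img[:border], img[-border:], img[:, :border], img[:, -border:])
    bg_colour = np.concatenate([s.reshape(-1, 3) for s in strips]).mean(axis=0)
    d = np.linalg.norm(img - bg_colour, axis=2)
    return d / (d.max() + 1e-8)

def saliency_map(img):
    # Fuse the three cues multiplicatively into a normalised saliency map.
    gray = img.mean(axis=2)
    s = location_cue(gray) * colour_contrast_cue(img) * background_cue(img)
    return s / (s.max() + 1e-8)

if __name__ == "__main__":
    # Toy example: a bright square on a dark background.
    img = np.zeros((128, 128, 3))
    img[40:90, 50:100] = [0.9, 0.2, 0.1]
    print(saliency_map(img).shape)  # (128, 128)

The multiplicative fusion here is only one plausible choice; the paper combines the three cues into a single map, and a weighted sum would be an equally reasonable stand-in.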
Original language: English
Title of host publication: Proceedings of the 2019 25th International Conference on Automation and Computing (ICAC)
Publisher: Institute of Electrical and Electronics Engineers
Number of pages: 4
ISBN (Electronic): 978-1-8613-7665-7
ISBN (Print): 978-1-7281-2518-3
DOIs
Publication status: Published - 11 Nov 2019
Event: 25th IEEE International Conference on Automation and Computing - Lancaster, United Kingdom
Duration: 5 Sep 2019 - 7 Sep 2019

Conference

Conference: 25th IEEE International Conference on Automation and Computing
Abbreviated title: ICAC'19
Country: United Kingdom
City: Lancaster
Period: 5/09/19 - 7/09/19

Documents

  • Visual saliency detection

    Rights statement: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    Accepted author manuscript (Post-print), 1.01 MB, PDF document


ID: 15356062