Integrating object proposal with attention networks for video saliency detection

Muwei Jian, Jiaojin Wang, Hui Yu*, Gai-Ge Wang

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review


    Abstract

    Video saliency detection is an active research topic in both information science and visual psychology. In this paper, we propose an efficient video saliency-detection model, based on integrating object proposals with attention networks, for capturing salient objects and human attention areas in the dynamic scenes of videos. In our algorithm, visual object features are first extracted from each individual video frame using real-time neural networks for object detection. Then, the spatial position information of each frame is used to screen out large background regions in the video, so as to reduce the influence of background noise. Finally, the results, with backgrounds removed, are further refined by spreading the visual cues through an adaptive weighting scheme into the later layers of a convolutional neural network. Experimental results, conducted on widely used databases for video saliency detection, verify that our proposed framework outperforms existing deep models.
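    The pipeline described above (per-frame object detection, background screening by spatial position, then weighted refinement) can be sketched in simplified form. Everything here is a hypothetical illustration, not the authors' implementation: `detect_objects` stands in for a real-time detector, the area threshold and center-distance prior are assumed simplifications of the paper's screening and adaptive-weighting stages.

    ```python
    import numpy as np

    def detect_objects(frame):
        # Hypothetical stand-in for a real-time object detector:
        # returns candidate boxes as (x, y, w, h) in pixel coordinates.
        h, w = frame.shape[:2]
        return [(w // 4, h // 4, w // 2, h // 2)]

    def screen_background(boxes, frame_shape, max_area_ratio=0.8):
        # Discard proposals that cover most of the frame -- such regions
        # are likely background, not salient objects (assumed threshold).
        fh, fw = frame_shape[:2]
        frame_area = fh * fw
        return [(x, y, w, h) for (x, y, w, h) in boxes
                if (w * h) / frame_area <= max_area_ratio]

    def saliency_map(frame, boxes, alpha=0.7):
        # Simplified adaptive weighting: blend a box-based prior with a
        # center-distance prior, standing in for the CNN refinement stage.
        fh, fw = frame.shape[:2]
        prior = np.zeros((fh, fw), dtype=np.float32)
        for (x, y, w, h) in boxes:
            prior[y:y + h, x:x + w] = 1.0
        ys, xs = np.mgrid[0:fh, 0:fw]
        center = 1.0 - np.hypot(ys - fh / 2, xs - fw / 2) / np.hypot(fh / 2, fw / 2)
        return alpha * prior + (1 - alpha) * center

    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    boxes = screen_background(detect_objects(frame), frame.shape)
    sal = saliency_map(frame, boxes)
    print(sal.shape)  # (240, 320)
    ```

    A real system would run the detector on every frame and feed the screened, weighted maps into later convolutional layers; this sketch only mirrors the control flow of the three stages.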
    Original language: English
    Pages (from-to): 819-830
    Journal: Information Sciences
    Volume: 576
    Early online date: 25 Aug 2021
    DOIs
    Publication status: Published - 1 Oct 2021

    Keywords

    • Video saliency detection
    • Saliency
    • Object proposal
    • Attention networks
    • Spatiotemporal features
