Abstract
In this work, we introduce a novel approach to saliency detection using a generative adversarial network guided by perceptual loss. Effective saliency detection with deep learning depends on many factors, among which the choice of loss function plays a pivotal role. Previous studies usually formulate loss functions based on pixel-level distances between predicted and ground-truth saliency maps. However, these formulations do not explicitly exploit the perceptual attributes of objects, such as their shapes and textures, which serve as critical indicators of saliency. To address this deficiency, we propose a loss function that capitalizes on perceptual features derived from the saliency map. Our approach has been rigorously evaluated on six benchmark datasets, demonstrating competitive performance against state-of-the-art methods in terms of both Mean Absolute Error (MAE) and F-measure. Remarkably, our experiments reveal consistent outcomes whether the perceptual loss is computed from grayscale saliency maps or from saliency-masked colour images, which underscores the importance of shape information as a perceptual saliency cue. The code is available at https://github.com/XiaoxuCai/PerGAN.
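As a concrete illustration of the idea, the sketch below shows one common way such a perceptual loss is computed: deep features of the predicted and ground-truth saliency maps are extracted with a frozen, ImageNet-pretrained network and compared in feature space. The VGG-16 backbone, the layer cut-off, and the channel replication here are assumptions for illustration only; the paper's exact formulation may differ.

```python
# Minimal sketch of a perceptual loss on saliency maps.
# Hypothetical choices: VGG-16 backbone, mid-level feature layer, MSE in
# feature space; the paper's actual configuration is not specified here.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

class PerceptualLoss(torch.nn.Module):
    def __init__(self, layer_index: int = 16):
        super().__init__()
        # Frozen VGG-16 convolutional features up to a mid-level layer.
        features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features
        self.extractor = torch.nn.Sequential(*list(features)[:layer_index]).eval()
        for p in self.extractor.parameters():
            p.requires_grad = False

    def forward(self, pred_map: torch.Tensor, gt_map: torch.Tensor) -> torch.Tensor:
        # Saliency maps are single-channel in [0, 1]; replicate to 3
        # channels so they match the RGB input the extractor expects.
        pred = pred_map.repeat(1, 3, 1, 1)
        gt = gt_map.repeat(1, 3, 1, 1)
        # Distance between perceptual features rather than raw pixels.
        return F.mse_loss(self.extractor(pred), self.extractor(gt))
```

The second setting the abstract compares, saliency-masked colour images, can be obtained by multiplying the input image by each map (e.g., `image * pred_map` and `image * gt_map`) before feature extraction, with no channel replication needed.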
| Original language | English |
|---|---|
| Article number | 119625 |
| Number of pages | 13 |
| Journal | Information Sciences |
| Volume | 654 |
| Early online date | 3 Nov 2023 |
| DOIs | |
| Publication status | Published - 1 Jan 2024 |
Keywords
- Deep learning
- Generative Adversarial Network
- Perceptual loss
- Saliency detection
- UKRI
- EPSRC
- EP/N025849/1