CSS-Net: a consistent segment selection network for audio-visual event localization

Fan Feng, Yue Ming, Nannan Hu, Hui Yu, Yuanan Liu

Research output: Contribution to journal › Article › peer-review


Audio-visual event (AVE) localization aims to localize the temporal boundaries of events that contain both visual and audio content, and to identify event categories in unconstrained videos. Existing work usually utilizes successive video segments for temporal modeling. However, ambient sounds or irrelevant visual targets in some segments often cause audio-visual semantic inconsistency, resulting in inaccurate global event modeling. To tackle this issue, we present a consistent segment selection network (CSS-Net). First, we propose a novel bidirectional guided co-attention (BGCA) block, containing two distinct attention paths, from audio to vision and from vision to audio, to focus on sound-related visual regions and event-related sound segments. Then, we propose a novel context-aware similarity measure (CASM) module to select semantically consistent visual and audio segments. A cross-correlation matrix is constructed from the correlation coefficients between visual and audio feature pairs at all time steps. By retaining highly correlated segments and discarding weakly correlated ones, the visual and audio features can learn global event semantics in videos. Finally, we propose a novel audio-visual contrastive loss to learn similar semantic representations for the visual and audio global features under the constraints of cosine and L2 similarities. Extensive experiments on the public AVE dataset demonstrate the effectiveness of our proposed CSS-Net. The localization accuracies reach 80.5% and 76.8% in the fully- and weakly-supervised settings, respectively, outperforming other state-of-the-art methods.
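A minimal sketch of the CASM-style selection idea described above: build a cross-correlation matrix from normalized per-segment visual and audio features, score each time step by the correlation of its aligned pair, and keep only the most consistent segments. The function name, the `keep_ratio` parameter, and the toy data are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def select_consistent_segments(visual, audio, keep_ratio=0.75):
    """Hypothetical sketch of cross-correlation-based segment selection.

    visual, audio: (T, D) arrays of per-segment features.
    Returns sorted indices of the most audio-visually consistent segments.
    """
    # L2-normalize features so dot products become correlation coefficients.
    v = visual / (np.linalg.norm(visual, axis=1, keepdims=True) + 1e-8)
    a = audio / (np.linalg.norm(audio, axis=1, keepdims=True) + 1e-8)
    # Cross-correlation matrix over all time-step pairs (T x T).
    cross_corr = v @ a.T
    # Per-segment consistency score: correlation of the aligned pair.
    scores = np.diag(cross_corr)
    # Keep the highly correlated segments, discard the weakly correlated ones.
    k = max(1, int(len(scores) * keep_ratio))
    keep = np.sort(np.argsort(scores)[::-1][:k])
    return keep

# Toy usage: audio mostly tracks the visual stream, except one segment.
rng = np.random.default_rng(0)
vis = rng.standard_normal((10, 128))
aud = vis + 0.1 * rng.standard_normal((10, 128))
aud[3] = rng.standard_normal(128)  # inject one semantically inconsistent segment
idx = select_consistent_segments(vis, aud, keep_ratio=0.8)
```

In this toy run the injected segment has near-zero aligned correlation while the rest correlate strongly, so it falls in the discarded fraction.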

Original language: English
Number of pages: 13
Journal: IEEE Transactions on Multimedia
Early online date: 26 Apr 2023
Publication status: Early online - 26 Apr 2023


Keywords
  • attention mechanism
  • audio-visual event localization
  • correlation
  • feature extraction
  • location awareness
  • multi-modal learning
  • semantics
  • task analysis
  • videos
  • visualization
