Explanation guided cross-modal social image clustering

Xiaoqiang Yan, Yiqiao Mao, Yangdong Ye, Hui Yu, Fei-Yue Wang

    Research output: Contribution to journal › Article › peer-review


    Abstract

    The integration of visual and semantic information has been shown to improve the accuracy of social image clustering. However, existing approaches are limited by the heterogeneity gap between the visual and semantic modalities, and their performance degrades significantly because the tags in the semantic modality are commonly sparse and incomplete. To address these problems, we propose a novel clustering framework that discovers reasonable categories in unlabeled social images under the guidance of human explanations. First, a novel Explanation Generation Model (EGM) is proposed to automatically enrich the textual information of sparse and incomplete tags based on an external lexical database encoding human knowledge. Then, a novel clustering algorithm called Group Constrained Information Maximization (GCIM) is proposed to learn image categories. In this algorithm, a new type of constraint, group-level side information, is defined to bridge the well-known heterogeneity gap between the visual and textual modalities. Finally, an interactive draw-and-merge optimization method is proposed to ensure an optimal solution. Extensive experiments on several social image datasets, including NUS-Wide, IAPRTC, MIRFlickr, ESP-Game and COCO, demonstrate the superiority of the proposed approach over state-of-the-art baselines.
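    The abstract only sketches the Explanation Generation Model (EGM) and does not name the lexical database it uses. As a purely illustrative sketch, the snippet below shows one way sparse image tags could be enriched with synonyms and hypernyms from a lexical database; WordNet (via NLTK) and the `enrich_tags` function with its parameters are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: enriching sparse image tags with a lexical
# database, in the spirit of the EGM described above. WordNet (via NLTK)
# and the function below are assumptions; the paper's actual model and
# database are not specified in this abstract.
from nltk.corpus import wordnet as wn  # first run: nltk.download("wordnet")

def enrich_tags(tags, max_extra_per_tag=5):
    """Expand a sparse tag list with WordNet synonyms and hypernyms."""
    enriched = list(tags)
    for tag in tags:
        related = set()
        for synset in wn.synsets(tag):
            # Synonyms: other lemmas in the same synset.
            related.update(lemma.name() for lemma in synset.lemmas())
            # Hypernyms: more general concepts, e.g. "dog" -> "canine".
            for hypernym in synset.hypernyms():
                related.update(lemma.name() for lemma in hypernym.lemmas())
        extras = sorted(w.replace("_", " ") for w in related if w != tag)
        enriched.extend(extras[:max_extra_per_tag])
    return enriched

# Example: a sparse, incomplete tag set typical of social images.
print(enrich_tags(["beach", "dog"]))
```

    In a full pipeline of the kind the abstract describes, such enriched tags would supply the textual modality consumed by the clustering step.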
    Original language: English
    Number of pages: 16
    Journal: Information Sciences
    Volume: 593
    Early online date: 10 Feb 2022
    DOIs
    Publication status: Published - 1 May 2022

    Keywords

    • UKRI
    • EPSRC
    • EP/N025849/1
