TY - JOUR
T1 - Shared-private information bottleneck method for cross-modal clustering
AU - Yan, Xiaoqiang
AU - Ye, Yangdong
AU - Mao, Yiqiao
AU - Yu, Hui
PY - 2019/3/12
Y1 - 2019/3/12
N2 - Recently, cross-modal analysis has drawn much attention due to the rapid growth and widespread emergence of multimodal data. It integrates multiple modalities to improve learning and generalization performance. However, most previous methods focus only on learning a common shared feature space for all modalities and ignore the private information hidden in each individual modality. To address this problem, we propose a novel shared-private information bottleneck (SPIB) method for cross-modal clustering. First, we devise a hybrid words model and a consensus clustering model to construct the shared information of multiple modalities, which capture the statistical correlation of low-level features and the semantic relations of the high-level clustering partitions, respectively. Second, the shared information of multiple modalities and the private information of individual modalities are maximally preserved through a unified information maximization function. Finally, the optimization of the SPIB function is performed by a sequential “draw-and-merge” procedure, which guarantees that the function converges to a local maximum. Moreover, to address the lack of tags in cross-modal social images, we also investigate the use of structured prior knowledge in the form of a knowledge graph to enrich the information in the semantic modality and design a novel semantic similarity measure for social images. Experimental results on four types of cross-modal datasets demonstrate that our method outperforms state-of-the-art approaches.
AB - Recently, cross-modal analysis has drawn much attention due to the rapid growth and widespread emergence of multimodal data. It integrates multiple modalities to improve learning and generalization performance. However, most previous methods focus only on learning a common shared feature space for all modalities and ignore the private information hidden in each individual modality. To address this problem, we propose a novel shared-private information bottleneck (SPIB) method for cross-modal clustering. First, we devise a hybrid words model and a consensus clustering model to construct the shared information of multiple modalities, which capture the statistical correlation of low-level features and the semantic relations of the high-level clustering partitions, respectively. Second, the shared information of multiple modalities and the private information of individual modalities are maximally preserved through a unified information maximization function. Finally, the optimization of the SPIB function is performed by a sequential “draw-and-merge” procedure, which guarantees that the function converges to a local maximum. Moreover, to address the lack of tags in cross-modal social images, we also investigate the use of structured prior knowledge in the form of a knowledge graph to enrich the information in the semantic modality and design a novel semantic similarity measure for social images. Experimental results on four types of cross-modal datasets demonstrate that our method outperforms state-of-the-art approaches.
U2 - 10.1109/ACCESS.2019.2904554
DO - 10.1109/ACCESS.2019.2904554
M3 - Article
SN - 2169-3536
VL - 7
SP - 36045
EP - 36056
JO - IEEE Access
JF - IEEE Access
ER -