A joint guidance-enhanced perceptual encoder and atrous separable pyramid-convolutions for image inpainting

Research output: Contribution to journal › Article

  • Yongle Zhang
  • Yingyu Wang
  • Junyu Dong
  • Lin Qi
  • Hao Fan
  • Xinghui Dong
  • Muwei Jian
  • Hui Yu
Satisfactory image inpainting requires visually exquisite details and semantically plausible structures. Encoder-decoder networks have shown their potential for this task but suffer from undesired local and global inconsistencies, such as blurry textures. To address this issue, we incorporate a perception operation into the encoder, which extracts features from the known areas of the input image to improve textural detail in the missing areas. We also propose an iterative guidance loss for the perception operation, which guides the perceptual encoding features toward the ground-truth encoding features. The guidance-enhanced perceptual encoding features are transferred to the decoder through skip connections, mutually reinforcing the performance of the entire encoder-decoder. Since the inpainting task involves feature representations at different levels, we further apply atrous separable parallel convolutions (i.e. atrous separable pyramid-convolutions, or ASPC) with different receptive fields to the last guidance-enhanced perceptual encoding feature, learning high-level semantic features with multi-scale information. Experiments on public databases show that the proposed method achieves promising results in terms of visual details and semantic structures.
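The ASPC module described above runs depthwise separable convolutions in parallel at several atrous (dilation) rates, so that one feature map is filtered with multiple effective receptive fields at once. The abstract does not give the exact kernel sizes, rates, or fusion step, so the NumPy sketch below is illustrative only: it assumes 3x3 depthwise kernels, dilation rates (1, 2, 4), a 1x1 pointwise convolution per branch, and channel-wise concatenation as the fusion.

```python
import numpy as np

def atrous_depthwise(x, kernels, rate):
    """Depthwise atrous (dilated) 3x3 convolution with 'same' padding.

    x: feature map of shape (C, H, W); kernels: one 3x3 filter per
    channel, shape (C, 3, 3). A dilation rate r spaces the kernel taps
    r pixels apart, enlarging the receptive field without extra weights.
    """
    C, H, W = x.shape
    pad = rate  # half-width of the dilated 3x3 kernel
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(3):
            for j in range(3):
                out[c] += kernels[c, i, j] * xp[c, i * rate:i * rate + H,
                                                j * rate:j * rate + W]
    return out

def pointwise(x, weights):
    """1x1 convolution mixing channels: (Cin, H, W) -> (Cout, H, W)."""
    return np.tensordot(weights, x, axes=([1], [0]))

def aspc(x, dw_kernels, pw_weights, rates=(1, 2, 4)):
    """Parallel atrous separable branches, one per dilation rate,
    concatenated along the channel axis (assumed fusion step)."""
    branches = [pointwise(atrous_depthwise(x, dw_kernels, r), pw_weights)
                for r in rates]
    return np.concatenate(branches, axis=0)
```

Each branch factorizes a full convolution into a per-channel spatial filter plus a channel-mixing 1x1 step, which is what makes the pyramid of dilation rates cheap enough to apply at the deepest encoder feature.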
Original language: English
Number of pages: 12
Early online date: 23 Jan 2020
Publication status: Early online - 23 Jan 2020


  • Joint_Image_Inpainting_Overleaf_2020

    Accepted author manuscript (Post-print), 8.1 MB, PDF document

    Due to publisher’s copyright restrictions, this document is not freely available to download from this website until: 23/01/21

    Licence: CC BY-NC-ND
