Kinect depth recovery via the cooperative profit random forest algorithm

Jianyuan Sun, Lin Qi, Xuguang Zhang, Junyu Dong, Hui Yu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Depth maps captured by Kinect usually contain missing depth data. In this paper, we propose a novel method to recover the missing depth data with the guidance of the depth information of neighbouring pixels. In the proposed framework, a self-taught mechanism and a cooperative profit random forest (CPRF) algorithm are combined to predict the missing depth data from the existing depth data and the corresponding RGB image. The proposed method overcomes a defect of traditional methods, which are prone to producing artifacts or blur at object edges. Experimental results on the Berkeley 3-D Object Dataset (B3DO) and the Middlebury benchmark dataset show that the proposed method outperforms existing methods in recovering missing depth data. In particular, it is effective at preserving the geometry of objects.
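The abstract describes predicting each missing depth value from the existing depth data and the aligned RGB image. As a rough illustration of that idea (not the paper's CPRF algorithm or its self-taught mechanism), the sketch below fills depth holes with an off-the-shelf random forest regressor trained on the pixels whose depth is known, using pixel position and colour as features; the function name and feature choice are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def recover_depth(depth, rgb, hole_value=0, n_estimators=50, seed=0):
    """Fill missing depth pixels by regressing depth from per-pixel
    position and RGB features, trained on the known-depth pixels.
    A plain RandomForestRegressor stands in for the paper's CPRF."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Feature vector per pixel: (row, col, R, G, B)
    feats = np.stack(
        [ys.ravel(), xs.ravel(),
         rgb[..., 0].ravel(), rgb[..., 1].ravel(), rgb[..., 2].ravel()],
        axis=1,
    ).astype(float)
    known = depth.ravel() != hole_value  # self-supervision: train on valid pixels
    rf = RandomForestRegressor(n_estimators=n_estimators, random_state=seed)
    rf.fit(feats[known], depth.ravel()[known])
    filled = depth.astype(float).ravel()
    filled[~known] = rf.predict(feats[~known])  # predict only the holes
    return filled.reshape(h, w)
```

On a synthetic depth ramp whose RGB image is correlated with depth, interior holes are recovered close to their true values; real Kinect data would of course need richer neighbourhood features, which is where the paper's contribution lies.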
Original language: English
Title of host publication: 2018 11th International Conference on Human System Interaction (HSI)
Publisher: IEEE
Pages: 57-62
ISBN (Electronic): 978-1-5386-5024-0
ISBN (Print): 978-1-5386-5025-7
DOIs
Publication status: Published - 13 Aug 2018
Event: 11th International Conference on Human System Interaction - Gdansk, Poland
Duration: 4 Jul 2018 - 6 Jul 2018

Conference

Conference: 11th International Conference on Human System Interaction
Abbreviated title: HSI 2018
Country/Territory: Poland
City: Gdansk
Period: 4/07/18 - 6/07/18

Keywords

  • RCUK
  • EPSRC
  • EP/N025849/1

