The depth maps captured by Kinect usually contain missing depth data. In this paper, we propose a novel method to recover the missing depth data under the guidance of the depth information of neighboring pixels. In the proposed framework, a self-taught mechanism and a cooperative profit random forest (CPRF) algorithm are combined to predict the missing depth data from the existing depth data and the corresponding RGB image. The proposed method overcomes the defects of traditional methods, which are prone to producing artifacts or blur at the edges of objects. Experimental results on the Berkeley 3-D Object Dataset (B3DO) and the Middlebury benchmark dataset show that the proposed method outperforms existing methods in recovering missing depth data. In particular, it performs well at preserving the geometry of objects.
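The abstract describes predicting missing depth values from the existing depth data and the aligned RGB image. The following is a minimal sketch of that general idea, using a plain scikit-learn `RandomForestRegressor` rather than the paper's CPRF algorithm (which is not specified here); the feature set of pixel coordinates plus RGB values, and the function name `fill_missing_depth`, are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fill_missing_depth(depth, rgb, missing_val=0, n_trees=50, seed=0):
    """Predict missing depth pixels from (y, x, R, G, B) features.

    NOTE: this is an illustrative stand-in for the paper's CPRF,
    not the actual algorithm.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # One feature row per pixel: image coordinates plus RGB color.
    feats = np.column_stack([ys.ravel(), xs.ravel(), rgb.reshape(-1, 3)])
    d = depth.ravel().astype(float)
    known = d != missing_val
    # Train on pixels with valid depth, predict the holes.
    rf = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
    rf.fit(feats[known], d[known])
    out = d.copy()
    out[~known] = rf.predict(feats[~known])
    return out.reshape(h, w)

# Toy example: a smooth depth ramp with a square hole of dropped pixels,
# mimicking the missing regions typical of Kinect depth maps.
h, w = 32, 32
depth = np.tile(np.linspace(1.0, 2.0, w), (h, 1))
rgb = np.stack([depth * 100] * 3, axis=-1).astype(np.uint8)
holed = depth.copy()
holed[10:20, 10:20] = 0  # simulated sensor dropout
filled = fill_missing_depth(holed, rgb)
```

Because the regressor sees valid pixels on all sides of the hole, its predictions interpolate the missing region while leaving the known depth values untouched.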
Title of host publication: 2018 11th International Conference on Human System Interaction (HSI)
Publication status: Published - 13 Aug 2018
Event: 11th International Conference on Human System Interaction (HSI 2018), Gdansk, Poland
Duration: 4 Jul 2018 → 6 Jul 2018