
Two-stage deep regression enhanced depth estimation from a single RGB image

Research output: Contribution to journal › Article › peer-review

Depth estimation plays a significant role in industrial applications, e.g. augmented reality, robotic mapping and autonomous driving. Traditional approaches to capturing depth, such as laser- or depth-sensor-based methods, are difficult to use in most scenarios because of high system cost and limited operating conditions. As an inexpensive and convenient alternative, computational models that estimate depth from a single RGB image offer a preferable route to depth prediction. Although the design of such models has been widely investigated, the majority suffer from low prediction accuracy because they rely solely on a one-stage regression strategy. Inspired by the theoretical and practical success of two-stage regression, we propose a two-stage deep regression model composed of two state-of-the-art network architectures: the fully convolutional residual network (FCRN) and the conditional generative adversarial network (cGAN). FCRN has been shown to possess strong predictive ability for depth estimation, but fine details in its depth maps remain incomplete. Accordingly, we improve the existing cGAN model to refine the FCRN-based depth prediction. Experimental results show that the proposed two-stage deep regression model outperforms existing state-of-the-art methods.
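
The pipeline described in the abstract, a coarse regression stage followed by an adversarial refinement stage, can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration: the module names (CoarseDepthNet, RefineGenerator) and the layer choices are hypothetical stand-ins, not the authors' FCRN or cGAN architectures, and the cGAN discriminator and training losses are omitted.

    import torch
    import torch.nn as nn

    class CoarseDepthNet(nn.Module):
        """Stage 1: stand-in for an FCRN-style encoder-decoder that
        regresses a coarse depth map from an RGB image (hypothetical
        simplification of the paper's first-stage network)."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            )

        def forward(self, rgb):
            return self.decoder(self.encoder(rgb))

    class RefineGenerator(nn.Module):
        """Stage 2: stand-in for a cGAN generator that refines the coarse
        depth map, conditioned on the RGB input (4-channel input:
        RGB + coarse depth)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, rgb, coarse):
            # Predict a residual correction and add it to the coarse map.
            return coarse + self.net(torch.cat([rgb, coarse], dim=1))

    rgb = torch.randn(1, 3, 128, 160)        # dummy RGB input
    coarse = CoarseDepthNet()(rgb)           # stage-1 regression
    refined = RefineGenerator()(rgb, coarse) # stage-2 refinement
    print(coarse.shape, refined.shape)       # both torch.Size([1, 1, 128, 160])

Writing the refinement stage as a residual correction conditioned on both the RGB input and the coarse prediction is one common way a cGAN generator can sharpen fine detail left incomplete by a first-stage regressor; the actual refinement design used in the paper may differ.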
Original language: English
Number of pages: 9
Journal: IEEE Transactions on Emerging Topics in Computing
Early online date: 28 Oct 2020
DOIs
Publication status: Early online - 28 Oct 2020

Documents

  • bare_jrnl_compsoc_pp

    Rights statement: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    Accepted author manuscript (Post-print), 1.46 MB, PDF document
