TY - JOUR
T1 - Enhancement of real-time grasp detection by cascaded deep convolutional neural networks
AU - Weng, Yaoqing
AU - Sun, Ying
AU - Jiang, Du
AU - Tao, Bo
AU - Yun, Juntong
AU - Liu, Ying
AU - Zhou, Dalin
PY - 2020/9/1
Y1 - 2020/9/1
N2 - Robot grasping is an active research topic in robotics. In relatively fixed industrial scenarios, robots can perform grasping tasks efficiently and reliably over long periods. In unstructured environments, however, objects are diverse, placed in random poses, and often stacked so that they occlude one another, which makes it difficult for the robot to recognize the grasp target and complicates grasp planning. We therefore propose an accurate, real-time robot grasp detection method based on convolutional neural networks. A cascaded two-stage convolutional neural network model is established that estimates grasp position and orientation in a coarse-to-fine manner. An R-FCN model extracts and screens candidate grasp regions and provides a rough angle estimate; to address the limited pose-detection accuracy of previous methods, an Angle-Net model is proposed to refine the estimate of the grasp angle. Tests on the Cornell dataset and online robot experiments show that the method quickly computes the optimal grasp point and pose for irregular objects with arbitrary poses and varied shapes, and that both detection accuracy and real-time performance are improved over previous methods.
AB - Robot grasping is an active research topic in robotics. In relatively fixed industrial scenarios, robots can perform grasping tasks efficiently and reliably over long periods. In unstructured environments, however, objects are diverse, placed in random poses, and often stacked so that they occlude one another, which makes it difficult for the robot to recognize the grasp target and complicates grasp planning. We therefore propose an accurate, real-time robot grasp detection method based on convolutional neural networks. A cascaded two-stage convolutional neural network model is established that estimates grasp position and orientation in a coarse-to-fine manner. An R-FCN model extracts and screens candidate grasp regions and provides a rough angle estimate; to address the limited pose-detection accuracy of previous methods, an Angle-Net model is proposed to refine the estimate of the grasp angle. Tests on the Cornell dataset and online robot experiments show that the method quickly computes the optimal grasp point and pose for irregular objects with arbitrary poses and varied shapes, and that both detection accuracy and real-time performance are improved over previous methods.
U2 - 10.1002/cpe.5976
DO - 10.1002/cpe.5976
M3 - Article
SN - 1532-0626
JO - Concurrency and Computation: Practice and Experience
JF - Concurrency and Computation: Practice and Experience
ER -