TY - JOUR
T1 - Target detection based on two-stream convolution neural network with self-powered sensors information
AU - Huang, Li
AU - Xiang, Zhao
AU - Yun, Juntong
AU - Sun, Ying
AU - Liu, Yuting
AU - Jiang, Du
AU - Ma, Hongjie
AU - Yu, Hui
N1 - Publisher Copyright:
IEEE
PY - 2022/11/16
Y1 - 2022/11/16
N2 - With the rapid development of artificial intelligence, neural networks are widely used in many fields. Target detection algorithms are mainly based on neural networks, but their accuracy depends strongly on scene complexity and texture. A target detection algorithm based on RGB-D images with self-powered sensor information is proposed, which lightens the target detection network model and fuses depth maps to overcome weak environmental illumination. This paper analyzes the network structures of YOLOv4 and MobileNet, compares the number of parameters of depthwise separable convolution with that of standard convolution, and combines the advantages of the YOLOv4 and MobileNetv3 networks. The backbone that extracts the three effective feature layers in YOLOv4 is replaced by the MobileNetv3 network for initial feature extraction, strengthening the feature extraction network, and the standard convolutions in the network are replaced by depthwise separable convolutions. The proposed method is compared with YOLOv4 and YOLOv4-MobileNetv3, and the experimental results show that the proposed network retains the original accuracy while its model size is about 23% of that of YOLOv4, its processing speed is about 42% higher than that of YOLOv4, and its detection accuracy still reaches 83% under poor lighting conditions.
AB - With the rapid development of artificial intelligence, neural networks are widely used in many fields. Target detection algorithms are mainly based on neural networks, but their accuracy depends strongly on scene complexity and texture. A target detection algorithm based on RGB-D images with self-powered sensor information is proposed, which lightens the target detection network model and fuses depth maps to overcome weak environmental illumination. This paper analyzes the network structures of YOLOv4 and MobileNet, compares the number of parameters of depthwise separable convolution with that of standard convolution, and combines the advantages of the YOLOv4 and MobileNetv3 networks. The backbone that extracts the three effective feature layers in YOLOv4 is replaced by the MobileNetv3 network for initial feature extraction, strengthening the feature extraction network, and the standard convolutions in the network are replaced by depthwise separable convolutions. The proposed method is compared with YOLOv4 and YOLOv4-MobileNetv3, and the experimental results show that the proposed network retains the original accuracy while its model size is about 23% of that of YOLOv4, its processing speed is about 42% higher than that of YOLOv4, and its detection accuracy still reaches 83% under poor lighting conditions.
KW - Depthwise separable convolution
KW - MobileNet
KW - Self-powered sensors information
KW - Target detection
KW - Two-stream convolutional neural network
KW - YOLOv4
UR - http://www.scopus.com/inward/record.url?scp=85142849876&partnerID=8YFLogxK
U2 - 10.1109/JSEN.2022.3220341
DO - 10.1109/JSEN.2022.3220341
M3 - Article
AN - SCOPUS:85142849876
SN - 1530-437X
JO - IEEE Sensors Journal
JF - IEEE Sensors Journal
ER -