Show simple item record

contributor author: Xiao-feng Yan
contributor author: Ke-xin Huo
contributor author: Xiao-huan Li
contributor author: Xin Tang
contributor author: Shao-hua Xu
date accessioned: 2023-11-27T23:32:13Z
date available: 2023-11-27T23:32:13Z
date issued: 2023-03-01
identifier other: JHTRCQ.0000858.pdf
identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4293643
description abstract: To improve vehicle detection accuracy in low light and to meet long-distance detection requirements on expressways, a radar–camera fusion vehicle detection method based on visual enhancement and feature weighting is proposed. First, at the radar–camera data layer, the spatial locations of potential targets measured by millimeter-wave radar are characterized, and the result is used to delimit long-distance target regions in the visual images. The images of these regions are then reconstructed, detected, and restored, improving visual detection accuracy for long-distance targets. Next, the radar and camera features are fused and modeled at the feature layer: because different layers contribute differently to detection, weight parameters for the feature maps are learned during model training, and the layers' features are fused according to these weights to enhance the target's feature information. A branch network is then added in which convolutional layers of different kernel sizes extract information at different receptive fields from the feature map; fusing the branch outputs yields a stronger image representation and improves detection accuracy in low light. Finally, combining the feature-weighting radar–camera framework with visual enhancement based on millimeter-wave radar spatial preprocessing, a radar–camera fusion detection network based on YOLOv4-tiny is designed and a verification system is built. The results show that (1) the average precision (AP) of the proposed algorithm in the low-light environment increased by 20% compared with YOLOv4 and by 5% compared with the radar–camera fusion algorithm RVNet; and (2) in tests of detection performance at different distances, for a target at 120 m the AP of the proposed algorithm is 73% higher than that of YOLOv4 and 63% higher than that of RVNet, extending the coverage distance and improving the low-light accuracy of vehicle detection in intelligent transportation systems (ITS).
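The two mechanisms the abstract describes, learned weights for fusing feature maps from different layers and a branch network whose parallel convolutions capture different receptive fields, can be sketched minimally in PyTorch. This is an illustrative sketch only: the class names, the softmax normalization of the weights, and the kernel sizes (1, 3, 5) are assumptions, not details from the paper, whose actual network is built on YOLOv4-tiny.

# Minimal sketch of the two fusion ideas in the abstract. Class names,
# the softmax normalization, and the kernel sizes are assumptions for
# illustration, not details taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFeatureFusion(nn.Module):
    """Fuse feature maps from several layers with weights learned in training."""
    def __init__(self, num_layers: int):
        super().__init__()
        # One trainable scalar weight per contributing feature map.
        self.weights = nn.Parameter(torch.ones(num_layers))

    def forward(self, feature_maps: list) -> torch.Tensor:
        # Normalize so the fused map stays on a comparable scale.
        w = F.softmax(self.weights, dim=0)
        return sum(wi * f for wi, f in zip(w, feature_maps))

class MultiReceptiveFieldBranch(nn.Module):
    """Parallel convolutions of different kernel sizes, outputs fused by summation."""
    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2)
            for k in (1, 3, 5)  # hypothetical small/medium/large receptive fields
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return sum(branch(x) for branch in self.branches)

# Example: fuse three 64-channel feature maps of the same spatial size,
# then pass the result through the multi-receptive-field branch.
fusion = WeightedFeatureFusion(num_layers=3)
branch = MultiReceptiveFieldBranch(channels=64)
maps = [torch.randn(1, 64, 40, 40) for _ in range(3)]
out = branch(fusion(maps))  # shape: (1, 64, 40, 40)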
publisher: ASCE
title: Radar–Camera Fusion Vehicle Detection Based on Feature Weighting and Visual Enhancement
type: Journal Article
journal volume: 17
journal issue: 1
journal title: Journal of Highway and Transportation Research and Development (English Edition)
identifier doi: 10.1061/JHTRCQ.0000858
journal firstpage: 72
journal lastpage: 81
page: 10
tree: Journal of Highway and Transportation Research and Development (English Edition); 2023; Volume 017; Issue 001
contenttype: Fulltext

