YaBeSH Engineering and Technology Library

    YE&T Library • ASCE • Journal of Highway and Transportation Research and Development (English Edition)

    Archive

    Radar–Camera Fusion Vehicle Detection Based on Feature Weighting and Visual Enhancement

    Source: Journal of Highway and Transportation Research and Development (English Edition), 2023, Volume 17, Issue 1, page 72
    Authors: Xiao-feng Yan, Ke-xin Huo, Xiao-huan Li, Xin Tang, Shao-hua Xu
    DOI: 10.1061/JHTRCQ.0000858
    Publisher: ASCE
    Abstract: To improve vehicle detection accuracy under low-light conditions and meet the long-distance detection requirements of expressways, a radar–camera fusion vehicle detection method based on visual enhancement and feature weighting is proposed. First, at the radar–camera data layer, the spatial locations of potential targets are characterized from millimeter-wave radar returns, and the characterization result is used to delimit long-distance target regions in the visual images. The images of these regions are then reconstructed, detected, and restored to improve visual detection accuracy for long-distance targets. Next, the radar–camera feature layers are fused and modeled. Because different layers contribute differently to detection, weight parameters for the feature maps are learned during training, and the features of the layers are fused according to these weights to strengthen the target's feature information. A branch network is then added: convolutional layers with kernels of different sizes extract information at different receptive fields, and the branch outputs are fused to obtain a stronger image representation, which improves detection accuracy in low light. Finally, combining the feature-weighting radar–camera framework with visual enhancement based on millimeter-wave radar spatial preprocessing, a radar–camera fusion detection network based on YOLOv4-tiny is designed and a verification system is built.
The results show that (1) in the low-light environment, the average precision (AP) of the proposed algorithm increased by 20% compared with YOLOv4, and by 5% compared with the radar–camera fusion algorithm RVNet; and (2) in tests of detection performance at different distances, when detecting a target at 120 m, the AP of the proposed algorithm is 73% higher than that of YOLOv4 and 63% higher than that of RVNet, improving the coverage distance and low-light detection accuracy of vehicle detection in intelligent transportation systems (ITS).
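The feature-weighting idea described in the abstract — combining feature maps from different layers with learned weights rather than summing them equally — can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the toy "camera" and "radar" maps, and the fixed example weights are all assumptions standing in for weights that would be learned during training.

```python
def weighted_feature_fusion(feature_maps, weights):
    """Fuse same-sized 2-D feature maps as a weighted sum.

    feature_maps: list of H x W grids (lists of lists of floats)
    weights: one scalar per map; in the paper's setting these
             would be obtained through model training
    """
    if len(feature_maps) != len(weights):
        raise ValueError("need exactly one weight per feature map")
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for fmap, wt in zip(feature_maps, weights):
        for i in range(h):
            for j in range(w):
                fused[i][j] += wt * fmap[i][j]
    return fused

# Toy example: a "camera" feature map and a "radar" feature map,
# with the radar branch given a lower (hypothetical) weight.
camera = [[1.0, 2.0], [3.0, 4.0]]
radar = [[0.0, 1.0], [1.0, 0.0]]
fused = weighted_feature_fusion([camera, radar], [0.7, 0.3])
```

In a real network the weighted sum would run over multi-channel convolutional feature maps, and the branch outputs at different receptive fields would be fused the same way before detection.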


    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4293643
    Collections
    • Journal of Highway and Transportation Research and Development (English Edition)

    Full item record

    contributor author: Xiao-feng Yan
    contributor author: Ke-xin Huo
    contributor author: Xiao-huan Li
    contributor author: Xin Tang
    contributor author: Shao-hua Xu
    date accessioned: 2023-11-27T23:32:13Z
    date available: 2023-11-27T23:32:13Z
    date issued: 2023-03-01
    identifier other: JHTRCQ.0000858.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4293643
    publisher: ASCE
    title: Radar–Camera Fusion Vehicle Detection Based on Feature Weighting and Visual Enhancement
    type: Journal Article
    journal volume: 17
    journal issue: 1
    journal title: Journal of Highway and Transportation Research and Development (English Edition)
    identifier doi: 10.1061/JHTRCQ.0000858
    journal first page: 72
    journal last page: 81
    pages: 10
    tree: Journal of Highway and Transportation Research and Development (English Edition), 2023, Volume 17, Issue 1
    content type: Fulltext
    DSpace software copyright © 2002-2015 DuraSpace
    The "DSpace" digital library software was localized into Persian by YaBeSH for Iranian libraries | Contact YaBeSH
     