YaBeSH Engineering and Technology Library



    Improved Nighttime Vehicle Detection Using the Cross-Domain Image Translation

    Source: Journal of Transportation Engineering, Part A: Systems, 2024, Volume 150, Issue 8, Page 04024043-1
    Authors: Feng Guo, Yihao Deng, Honglei Chang, Huayang Yu
    DOI: 10.1061/JTEPBS.TEENG-8341
    Publisher: American Society of Civil Engineers
    Abstract: Accurate detection of vehicles at nighttime is essential for transportation monitoring and management. However, annotating nighttime vehicle data is challenging, and vehicle features differ significantly between day and night, introducing difficulties in nighttime detection using pretrained models trained on daytime data. In this study, the nighttime vehicle detection performance is improved by employing a patchwise contrastive learning technique to enhance the representation of informative features for various traffic instances. An object detection network with reduced computational complexity and hyperparameters is utilized to conduct vehicle detection at night. Extensive experiments have been performed using images acquired from a section of Jingshi Road in Jinan, China. The impacts of learning rates and crop sizes are discussed. Three commonly adopted indicators, including mean average precision (mAP), precision, and recall, have been used to evaluate the training performance of the adopted FreeAnchor detector. Experimental results indicate that using a crop size of 320 and a learning rate of 2e-4, the developed generative adversarial network (GAN) achieves the best performance in image translation. Moreover, with a ratio of 60% real images to 40% fake images in model training, the FreeAnchor detector achieves the highest mAP of 96.6%. Visualized results for both image translation and nighttime vehicle detection demonstrate improved performance, underscoring the effectiveness of the proposed framework. This study paves the way for leveraging GAN-based networks to assist in vehicle detection under nighttime conditions.
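    The abstract points to two concrete mechanisms: a patchwise contrastive objective used during day-to-night image translation, and a detector trained on a roughly 60% real / 40% GAN-translated image mix. As an illustration only, and not the authors' implementation, the following PyTorch-style sketch shows what a patchwise InfoNCE loss and the real/fake mixing step could look like; the function names, temperature value, and sampling details are assumptions, since the paper's code and exact architecture are not part of this record.

        import random
        import torch
        import torch.nn.functional as F

        def patch_nce_loss(feat_fake, feat_real, temperature=0.07):
            # Patchwise contrastive (InfoNCE) loss: row i of feat_real is the
            # positive for row i of feat_fake; every other row is a negative.
            # Both inputs are (N, C) feature vectors sampled from matching
            # spatial locations of the translated (night-style) image and the
            # source (daytime) image.
            feat_fake = F.normalize(feat_fake, dim=1)
            feat_real = F.normalize(feat_real, dim=1)
            logits = feat_fake @ feat_real.t() / temperature  # (N, N) similarities
            targets = torch.arange(feat_fake.size(0), device=logits.device)
            return F.cross_entropy(logits, targets)           # diagonal = positives

        def mix_training_images(real_night, fake_night, real_ratio=0.6, seed=0):
            # Blend real night images with GAN-translated ("fake") night images
            # at a fixed ratio; the abstract reports 60% real / 40% fake as the
            # best-performing split for the detector.
            rng = random.Random(seed)
            n_fake = int(round(len(real_night) * (1.0 - real_ratio) / real_ratio))
            mixed = list(real_night) + rng.sample(list(fake_night), min(n_fake, len(fake_night)))
            rng.shuffle(mixed)
            return mixed

    In setups of this kind the contrastive loss is typically computed on features sampled from several encoder layers and added to the adversarial loss, while the crop size (320) and learning rate (2e-4) quoted above govern the GAN training loop rather than this snippet; how the study combines these pieces is described in the full paper, not in this record.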


    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4298305
    Collections
    • Journal of Transportation Engineering, Part A: Systems

    Full item record

    contributor author: Feng Guo
    contributor author: Yihao Deng
    contributor author: Honglei Chang
    contributor author: Huayang Yu
    date accessioned: 2024-12-24T10:06:16Z
    date available: 2024-12-24T10:06:16Z
    date copyright: 2024-08-01
    date issued: 2024
    identifier other: JTEPBS.TEENG-8341.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4298305
    publisher: American Society of Civil Engineers
    title: Improved Nighttime Vehicle Detection Using the Cross-Domain Image Translation
    type: Journal Article
    journal volume: 150
    journal issue: 8
    journal title: Journal of Transportation Engineering, Part A: Systems
    identifier doi: 10.1061/JTEPBS.TEENG-8341
    journal firstpage: 04024043-1
    journal lastpage: 04024043-10
    pages: 10
    tree: Journal of Transportation Engineering, Part A: Systems; 2024; Volume 150; Issue 8
    contenttype: Fulltext