YaBeSH Engineering and Technology Library


    Reliable Autonomous Driving Environment Perception: Uncertainty Quantification of Semantic Segmentation

    Source: Journal of Transportation Engineering, Part A: Systems, 2025, Volume 151, Issue 3, Page 04024117-1
    Authors: Rui Wang, Tengkun Yang, Ci Liang, Mengying Wang, Yusheng Ci
    DOI: 10.1061/JTEPBS.TEENG-8660
    Publisher: American Society of Civil Engineers
    Abstract: Despite the impressive achievements of computer vision technologies such as semantic segmentation, their application in safety-critical areas such as autonomous driving presents substantial challenges, particularly in ensuring the safety of the intended functionality (SOTIF). It is well recognized that a lack of confidence estimation, or overconfidence, in a model's predictions hinders its applicability and dependability in critical sectors. Leveraging the expressive ability of Dempster–Shafer theory to model uncertain information, we propose EviSeg, an uncertainty estimation approach for semantic segmentation models grounded in the evidential classifier framework. Specifically, we first transform the fully convolutional neural networks used for semantic segmentation via pixelwise classification into an evidential model. Subsequently, the outputs of the penultimate convolutional layer and the parameters of the final convolutional layer of a conventionally trained semantic segmentation model constitute a raw evidence pool. Reasoning from this evidence pool, we quantify the predictive uncertainty with the conflict metric. The proposed method does not affect model performance because it requires no changes to the model architecture or training objective. We use the CamVid urban road scene data set and the Nighttime Driving data set for our experimental analysis. These experiments demonstrate that, compared with the baseline methods, our approach not only provides competitive performance but also significantly improves computational efficiency. Our study directly contributes to improving the safety and reliability of connected and automated vehicles (CAVs). Such a contribution is crucial for reducing CAV accidents caused by environment perception issues and for improving the SOTIF of CAVs.
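
    The following is a minimal sketch, in PyTorch, of the general idea the abstract describes: treat the penultimate-layer activations and the final convolutional layer's parameters of an already trained segmentation network as an evidence pool, and derive a per-pixel conflict-style score as an uncertainty estimate. It is not the authors' EviSeg implementation; the attribute names (model.backbone, model.classifier), the softplus evidence transform, and the exact conflict formula are illustrative assumptions, not details taken from the paper.

    ```python
    # Minimal sketch of evidence-pool-based uncertainty for a trained segmentation
    # network. NOT the authors' EviSeg code; attribute names and the exact
    # conflict formula are illustrative assumptions.
    import torch
    import torch.nn.functional as F


    def conflict_uncertainty(model, image):
        """Per-pixel conflict-style uncertainty for one image tensor of shape (C, H, W)."""
        model.eval()
        with torch.no_grad():
            # Evidence pool, part 1: penultimate-layer activations (assumed to be
            # exposed as `model.backbone`), shape (1, D, H, W).
            feats = model.backbone(image.unsqueeze(0))

            # Evidence pool, part 2: parameters of the final 1x1 classification
            # convolution (assumed to be exposed as `model.classifier`).
            w = model.classifier.weight.squeeze(-1).squeeze(-1)   # (K, D)
            b = model.classifier.bias                             # (K,)

            # Per-class logits reconstructed from the evidence pool, then mapped
            # to non-negative per-class evidence (softplus is one common choice).
            logits = torch.einsum("kd,bdhw->bkhw", w, feats) + b.view(1, -1, 1, 1)
            evidence = F.softplus(logits)                          # (1, K, H, W)

            # Normalize evidence into per-class masses, leaving residual mass
            # for "unknown" (the +1 in the denominator).
            masses = evidence / (evidence.sum(dim=1, keepdim=True) + 1.0)

            # Conflict-style score: mass assigned jointly to pairs of mutually
            # exclusive classes, i.e. the sum of m_i * m_j over i != j.
            sum_m = masses.sum(dim=1)
            sum_sq = (masses ** 2).sum(dim=1)
            conflict = sum_m ** 2 - sum_sq                         # (1, H, W)

        return conflict.squeeze(0)
    ```

    Because everything is computed from a frozen, conventionally trained model, the sketch mirrors the abstract's claim that no change to the model architecture or training objective is needed.
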


    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4303815
    Collections
    • Journal of Transportation Engineering, Part A: Systems

    Full item record

    contributor author: Rui Wang
    contributor author: Tengkun Yang
    contributor author: Ci Liang
    contributor author: Mengying Wang
    contributor author: Yusheng Ci
    date accessioned: 2025-04-20T10:00:11Z
    date available: 2025-04-20T10:00:11Z
    date copyright: 12/18/2024
    date issued: 2025
    identifier other: JTEPBS.TEENG-8660.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4303815
    publisher: American Society of Civil Engineers
    title: Reliable Autonomous Driving Environment Perception: Uncertainty Quantification of Semantic Segmentation
    type: Journal Article
    journal volume: 151
    journal issue: 3
    journal title: Journal of Transportation Engineering, Part A: Systems
    identifier doi: 10.1061/JTEPBS.TEENG-8660
    journal firstpage: 04024117-1
    journal lastpage: 04024117-10
    pages: 10
    tree: Journal of Transportation Engineering, Part A: Systems; 2025; Volume 151; Issue 3
    content type: Fulltext