YaBeSH Engineering and Technology Library



    Driver Emotion Recognition Involving Multimodal Signals: Electrophysiological Response, Nasal-Tip Temperature, and Vehicle Behavior

    Source: Journal of Transportation Engineering, Part A: Systems, 2024, Volume 150, Issue 1, Page 04023125-1
    Author: Jie Ni, Wanying Xie, Yiping Liu, Jike Zhang, Yugu Wan, Huimin Ge
    DOI: 10.1061/JTEPBS.TEENG-7802
    Publisher: ASCE
    Abstract: Accurate driver emotion recognition is one of the key challenges in building an intelligent vehicle safety-assistance system. This paper presents a driving simulator study of driver emotion recognition. Using the car-following scene as an example, multimodal parameters are collected for drivers in five emotional states (neutral, joy, fear, sadness, and anger) through an emotion-induction experiment and a simulated driving experiment. Wavelet denoising and baseline-removal (debase) processing are used to reduce the influence of signal noise and of individual differences between drivers. Statistical-domain and time-frequency-domain features of the electrophysiological response signals, nasal-tip temperature signals, and vehicle behavior signals are analyzed. Factor analysis is used to extract and reduce the feature parameters, and driver emotion recognition models are established with machine learning methods such as random forest (RF), K-nearest neighbor (KNN), and extreme gradient boosting (XGBoost). Verification and comparison across single modalities, modality combinations, and learning methods show that the RF model trained on the combined features of all three modalities achieves the best recognition performance. The results provide a theoretical basis for driver emotion recognition in intelligent vehicles and support both the development of human-computer interaction (HCI) systems for intelligent vehicles and the improvement of road traffic safety.
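    The processing chain the abstract describes (wavelet denoising, factor-analysis feature reduction, then an RF classifier) can be illustrated with a short Python sketch. This is a minimal, hypothetical example, not the authors' code: the wavelet ('db4'), window length, factor count, and synthetic data below are illustrative assumptions, not values from the paper.

        # Minimal sketch of the described pipeline (assumed parameters):
        # wavelet denoising -> factor-analysis reduction -> random forest classifier.
        import numpy as np
        import pywt
        from sklearn.decomposition import FactorAnalysis
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        def wavelet_denoise(signal, wavelet="db4", level=3):
            # Soft-threshold denoising with the universal threshold.
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
            thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
            coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(signal)]

        # Synthetic stand-in data: 200 windows of a 128-sample signal, each labeled
        # with one of five emotions (neutral, joy, fear, sadness, anger).
        rng = np.random.default_rng(0)
        X_raw = rng.normal(size=(200, 128))
        y = rng.integers(0, 5, size=200)

        X_denoised = np.apply_along_axis(wavelet_denoise, 1, X_raw)
        X_reduced = FactorAnalysis(n_components=10, random_state=0).fit_transform(X_denoised)

        X_tr, X_te, y_tr, y_te = train_test_split(X_reduced, y, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))

    In the study's setup, the same reduced features would also be fed to KNN and XGBoost models so that modalities, modality combinations, and methods can be compared.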
    • Download: (2.025 MB)
    • Price: 5000 Rial


    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/4296886
    Collections
    • Journal of Transportation Engineering, Part A: Systems

    Full item record

    contributor author: Jie Ni
    contributor author: Wanying Xie
    contributor author: Yiping Liu
    contributor author: Jike Zhang
    contributor author: Yugu Wan
    contributor author: Huimin Ge
    date accessioned: 2024-04-27T22:32:17Z
    date available: 2024-04-27T22:32:17Z
    date issued: 2024/01/01
    identifier other: 10.1061-JTEPBS.TEENG-7802.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4296886
    publisher: ASCE
    title: Driver Emotion Recognition Involving Multimodal Signals: Electrophysiological Response, Nasal-Tip Temperature, and Vehicle Behavior
    type: Journal Article
    journal volume: 150
    journal issue: 1
    journal title: Journal of Transportation Engineering, Part A: Systems
    identifier doi: 10.1061/JTEPBS.TEENG-7802
    journal first page: 04023125-1
    journal last page: 04023125-11
    pages: 11
    tree: Journal of Transportation Engineering, Part A: Systems; 2024; Volume 150; Issue 1
    contenttype: Fulltext
    DSpace software copyright © 2002-2015 DuraSpace
    "DSpace" digital library software localized into Persian by Yabesh for Iranian libraries | Contact Yabesh
     