YaBeSH Engineering and Technology Library


    Design Knowledge as Attention Emphasizer in Large Language Model-Based Sentiment Analysis

    Source: Journal of Computing and Information Science in Engineering, 2024, Volume 25, Issue 2, page 21007-1
    Authors: Han, Yi; Moghaddam, Mohsen
    DOI: 10.1115/1.4067212
    Publisher: The American Society of Mechanical Engineers (ASME)
    Abstract: Aspect-based sentiment analysis (ABSA) enables a systematic identification of user opinions on particular aspects, thus improving the idea creation process in the initial stages of a product/service design. Large language models (LLMs) such as T5 and GPT have proven powerful in ABSA tasks due to their inherent attention mechanism. However, some key limitations remain. First, existing research mainly focuses on relatively simpler ABSA tasks such as aspect-based sentiment analysis, while the task of extracting aspects, opinions, and sentiment in a unified model remains largely unaddressed. Second, current ABSA tasks overlook implicit opinions and sentiments. Third, most attention-based LLMs use position encoding in a linear projected manner or through split-position relations in word distance schemes, which could lead to relation biases during the training process. This paper incorporates domain knowledge into LLMs by introducing a new position encoding strategy for the transformer model. This paper addresses these gaps by (1) introducing the ACOSI (aspect, category, opinion, sentiment, implicit indicator) analysis task, developing a unified model capable of extracting all five types of labels in the ACOSI analysis task simultaneously in a generative manner; (2) designing a new position encoding method in the attention-based model; and (3) introducing a new benchmark based on ROUGE score that incorporates design domain knowledge inside. The numerical experiments on manually labeled data from three major e-Commerce retail stores for apparel and footwear products showcase the domain knowledge inserted transformer method’s performance, scalability, and potential.
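    The abstract's first contribution is a unified model that generates all five ACOSI labels (aspect, category, opinion, sentiment, implicit indicator) in one pass. As a rough illustration of how such generative output could be handled downstream, here is a minimal Python sketch that parses a serialized quintuple; the pipe-separated format and every name in it are assumptions for illustration, not the paper's actual serialization scheme.

```python
from dataclasses import dataclass

@dataclass
class ACOSIQuintuple:
    """One ACOSI label set: aspect, category, opinion, sentiment, implicit indicator."""
    aspect: str
    category: str
    opinion: str
    sentiment: str   # e.g., "positive" / "negative" / "neutral"
    implicit: bool   # True if the opinion is implied rather than stated outright

def parse_acosi(generated: str) -> list:
    """Parse a generative model's output string into quintuples.

    Assumes one quintuple per line in the form
    "aspect | category | opinion | sentiment | explicit/implicit"
    (a hypothetical format, not the paper's).
    """
    quintuples = []
    for line in generated.strip().splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) != 5:
            continue  # skip malformed generations
        aspect, category, opinion, sentiment, indicator = parts
        quintuples.append(ACOSIQuintuple(
            aspect, category, opinion, sentiment,
            implicit=(indicator.lower() == "implicit"),
        ))
    return quintuples

# Toy example of a generated answer for a footwear review sentence.
print(parse_acosi("sole | comfort | feels great to walk on | positive | implicit"))
```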
    • Download: (682.4Kb)
    • Price: 5000 Rial
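    The abstract's second contribution is a new position encoding that lets design knowledge act as an attention emphasizer. The sketch below shows the general mechanism of biasing attention toward flagged domain tokens: an additive bonus on the attention logits before the softmax. The additive-bias form, the weight of 2.0, and all names here are illustrative assumptions, not the paper's actual encoding strategy.

```python
import math

def emphasized_attention(q, keys, domain_flags, bias=2.0):
    """Scaled dot-product attention weights with an additive logit bonus
    for positions flagged as design-domain terms, so the softmax shifts
    probability mass toward them. Illustrative only."""
    d = len(q)
    logits = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    logits = [s + bias * flag for s, flag in zip(logits, domain_flags)]
    exp = [math.exp(s) for s in logits]
    z = sum(exp)
    return [e / z for e in exp]

# Toy example: the third token is a design-domain term and gets emphasized.
q = [0.1, 0.3]
keys = [[0.2, 0.1], [0.0, 0.4], [0.1, 0.2]]
print(emphasized_attention(q, keys, domain_flags=[0, 0, 1]))
```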


    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4306580
    Collections
    • Journal of Computing and Information Science in Engineering
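    The abstract's third contribution is a benchmark based on ROUGE score that incorporates design domain knowledge. For reference, this is a minimal from-scratch ROUGE-1 F1 computation (unigram overlap between a generated phrase and a reference phrase); the paper's domain-knowledge weighting is not reproduced here.

```python
from collections import Counter

def rouge_1_f1(candidate: str, reference: str) -> float:
    """Standard ROUGE-1 F1: unigram overlap between candidate and reference."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    overlap = sum((Counter(cand) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(rouge_1_f1("feels great to walk on", "great to walk on"))  # ~0.889
```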

    Full item record

    contributor author: Han, Yi
    contributor author: Moghaddam, Mohsen
    date accessioned: 2025-04-21T10:37:43Z
    date available: 2025-04-21T10:37:43Z
    date copyright: 2024-12-13
    date issued: 2024
    identifier issn: 1530-9827
    identifier other: jcise_25_2_021007.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4306580
    publisher: The American Society of Mechanical Engineers (ASME)
    title: Design Knowledge as Attention Emphasizer in Large Language Model-Based Sentiment Analysis
    type: Journal Paper
    journal volume: 25
    journal issue: 2
    journal title: Journal of Computing and Information Science in Engineering
    identifier doi: 10.1115/1.4067212
    journal firstpage: 21007-1
    journal lastpage: 21007-12
    pages: 12
    tree: Journal of Computing and Information Science in Engineering; 2024; Volume 25; Issue 2
    contenttype: Fulltext
    DSpace software copyright © 2002-2015 DuraSpace
    The "DSpace" digital library software has been localized into Persian by Yabesh for Iranian libraries | Contact Yabesh
     