YaBeSH Engineering and Technology Library

    When Crowdsourcing Fails: A Study of Expertise on Crowdsourced Design Evaluation

    Source: Journal of Mechanical Design, 2015, Vol. 137, Issue 3, Page 031101
    Author: Burnap, Alex; Ren, Yi; Gerth, Richard; Papazoglou, Giannis; Gonzalez, Richard; Papalambros, Panos Y.
    DOI: 10.1115/1.4029065
    Publisher: The American Society of Mechanical Engineers (ASME)
    Abstract: Crowdsourced evaluation is a promising method of evaluating engineering design attributes that require human input. The challenge is to correctly estimate scores using a massive and diverse crowd, particularly when only a small subset of evaluators has the expertise to give correct evaluations. Since averaging evaluations across all evaluators will result in an inaccurate crowd evaluation, this paper benchmarks a crowd consensus model that aims to identify experts such that their evaluations may be given more weight. Simulation results indicate this crowd consensus model outperforms averaging when it correctly identifies experts in the crowd, under the assumption that only experts have consistent evaluations. However, empirical results from a real human crowd indicate this assumption may not hold even on a simple engineering design evaluation task, as clusters of consistently wrong evaluators are shown to exist along with the cluster of experts. This suggests that both averaging evaluations and a crowd consensus model that relies only on evaluations may not be adequate for engineering design tasks, accordingly calling for further research into methods of finding experts within the crowd.
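The abstract contrasts simple averaging with a crowd consensus model that upweights evaluators it believes to be experts. As a rough, hypothetical sketch only (not the paper's actual model, which is not specified here), the Python snippet below simulates a crowd containing a small expert cluster, a consistently biased cluster, and high-noise evaluators, then compares plain averaging against a simple consistency-weighted consensus; all crowd sizes, noise levels, and the weighting heuristic itself are assumptions made for illustration.

```python
# Hypothetical sketch (not the paper's model): plain averaging versus an
# iterative consistency-weighted consensus on synthetic crowd evaluations.
# All crowd sizes, noise levels, and offsets below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_items = 20
true_scores = rng.uniform(0.0, 10.0, size=n_items)   # ground-truth design scores

def simulate_crowd(n_experts=5, n_biased=10, n_noisy=35):
    """Experts: unbiased, low noise. Biased: low noise but systematically offset
    (the 'consistently wrong' cluster). Noisy: unbiased but high variance."""
    rows  = [true_scores + rng.normal(0, 0.3, n_items) for _ in range(n_experts)]
    rows += [true_scores + 3.0 + rng.normal(0, 0.3, n_items) for _ in range(n_biased)]
    rows += [true_scores + rng.normal(0, 3.0, n_items) for _ in range(n_noisy)]
    return np.vstack(rows)                            # shape: (n_evaluators, n_items)

def consistency_consensus(evals, n_iter=20):
    """Weight each evaluator by the inverse variance of their deviation from the
    current consensus estimate, then re-estimate (a simple EM-flavoured heuristic)."""
    estimate = evals.mean(axis=0)
    for _ in range(n_iter):
        resid_var = ((evals - estimate) ** 2).mean(axis=1) + 1e-6
        weights = 1.0 / resid_var
        weights /= weights.sum()
        estimate = weights @ evals
    return estimate, weights

evals = simulate_crowd()
avg_estimate = evals.mean(axis=0)
consensus_estimate, weights = consistency_consensus(evals)

rmse = lambda est: float(np.sqrt(((est - true_scores) ** 2).mean()))
print(f"RMSE, plain averaging      : {rmse(avg_estimate):.3f}")
print(f"RMSE, consistency weighting: {rmse(consensus_estimate):.3f}")
print("highest-weighted evaluators:", np.argsort(weights)[::-1][:5])
```

With the parameters assumed above, the consistency heuristic tends to latch onto the expert cluster and beat plain averaging; making the consistently biased cluster much larger than the expert cluster (e.g. raising n_biased while shrinking n_experts) can make the same heuristic lock onto the consistently wrong evaluators instead, mirroring the empirical failure mode the abstract reports.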

    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/158788
    Collections
    • Journal of Mechanical Design

    Full item record

    contributor author: Burnap, Alex
    contributor author: Ren, Yi
    contributor author: Gerth, Richard
    contributor author: Papazoglou, Giannis
    contributor author: Gonzalez, Richard
    contributor author: Papalambros, Panos Y.
    date accessioned: 2017-05-09T01:20:47Z
    date available: 2017-05-09T01:20:47Z
    date issued: 2015
    identifier issn: 1050-0472
    identifier other: md_137_03_031101.pdf
    identifier uri: http://yetl.yabesh.ir/yetl/handle/yetl/158788
    description abstract: Crowdsourced evaluation is a promising method of evaluating engineering design attributes that require human input. The challenge is to correctly estimate scores using a massive and diverse crowd, particularly when only a small subset of evaluators has the expertise to give correct evaluations. Since averaging evaluations across all evaluators will result in an inaccurate crowd evaluation, this paper benchmarks a crowd consensus model that aims to identify experts such that their evaluations may be given more weight. Simulation results indicate this crowd consensus model outperforms averaging when it correctly identifies experts in the crowd, under the assumption that only experts have consistent evaluations. However, empirical results from a real human crowd indicate this assumption may not hold even on a simple engineering design evaluation task, as clusters of consistently wrong evaluators are shown to exist along with the cluster of experts. This suggests that both averaging evaluations and a crowd consensus model that relies only on evaluations may not be adequate for engineering design tasks, accordingly calling for further research into methods of finding experts within the crowd.
    publisher: The American Society of Mechanical Engineers (ASME)
    title: When Crowdsourcing Fails: A Study of Expertise on Crowdsourced Design Evaluation
    type: Journal Paper
    journal volume: 137
    journal issue: 3
    journal title: Journal of Mechanical Design
    identifier doi: 10.1115/1.4029065
    journal first page: 31101
    journal last page: 31101
    identifier eissn: 1528-9001
    tree: Journal of Mechanical Design; 2015; Volume 137; Issue 003
    contenttype: Fulltext