YaBeSH Engineering and Technology Library



    Estimation of State Transition Probabilities in Asynchronous Vector Markov Processes

    Source: Journal of Dynamic Systems, Measurement, and Control; 2012; Volume 134, Issue 6; Page 061003
    Authors: Waleed A. Farahat, H. Harry Asada
    DOI: 10.1115/1.4006087
    Publisher: The American Society of Mechanical Engineers (ASME)
    Abstract: Vector Markov processes (also known as population Markov processes) are an important class of stochastic processes that have been used to model a wide range of technological, biological, and socioeconomic systems. The dynamics of vector Markov processes are fully characterized, in a stochastic sense, by the state transition probability matrix P. In most applications, P has to be estimated based on either incomplete or aggregated process observations. Here, in contrast to established methods for estimation given aggregate data, we develop Bayesian formulations for estimating P from asynchronous aggregate (longitudinal) observations of the population dynamics. Such observations are common, for example, in the study of aggregate biological cell population dynamics via flow cytometry. We derive the Bayesian formulation and show that computing estimates via exact marginalization is, in general, computationally expensive. Consequently, we rely on Markov chain Monte Carlo (MCMC) sampling approaches to estimate the posterior distributions efficiently. By explicitly integrating problem constraints into these sampling schemes, significant efficiencies are attained. We illustrate the algorithm via simulation examples and show that the Bayesian estimation schemes can attain significant advantages over point-estimate schemes such as maximum likelihood.
    Keyword(s): Flow (Dynamics), Algorithms, Markov Processes, Probability
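    The abstract concerns population Markov processes observed only in aggregate (state counts, not individual trajectories). A minimal simulation sketch of that setting is below; the 3-state transition matrix P and all parameters are hypothetical, and this illustrates only the data-generating process, not the paper's Bayesian estimation algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 3-state transition matrix P (rows sum to 1).
    # The paper's goal is to estimate such a P from aggregate observations.
    P = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.1, 0.2, 0.7]])

    def step_population(counts, P, rng):
        """Advance the population one step: each of the counts[i]
        individuals in state i moves to state j with probability P[i, j]."""
        new_counts = np.zeros_like(counts)
        for i, n in enumerate(counts):
            new_counts += rng.multinomial(n, P[i])
        return new_counts

    # Aggregate (count-only) observations, as in flow cytometry:
    # we see how many individuals occupy each state, never who moved where.
    counts = np.array([1000, 0, 0])
    history = [counts]
    for _ in range(50):
        counts = step_population(counts, P, rng)
        history.append(counts)

    # Empirical state distribution after many steps approximates the
    # stationary distribution pi satisfying pi = pi P.
    pi = history[-1] / history[-1].sum()
    ```

    Because individual transitions are unobserved, the transition counts needed for a simple maximum-likelihood or conjugate Dirichlet estimate of each row of P are latent, which is what motivates the marginalization and MCMC machinery described in the abstract.
    
    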

    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/148428
    Collections
    • Journal of Dynamic Systems, Measurement, and Control

    Full item record

    contributor author: Waleed A. Farahat
    contributor author: H. Harry Asada
    date accessioned: 2017-05-09T00:48:59Z
    date available: 2017-05-09T00:48:59Z
    date copyright: November, 2012
    date issued: 2012
    identifier issn: 0022-0434
    identifier other: JDSMAA-926036#061003_1.pdf
    identifier uri: http://yetl.yabesh.ir/yetl/handle/yetl/148428
    publisher: The American Society of Mechanical Engineers (ASME)
    title: Estimation of State Transition Probabilities in Asynchronous Vector Markov Processes
    type: Journal Paper
    journal volume: 134
    journal issue: 6
    journal title: Journal of Dynamic Systems, Measurement, and Control
    identifier doi: 10.1115/1.4006087
    journal first page: 061003
    identifier eissn: 1528-9028
    keywords: Flow (Dynamics); Algorithms; Markov Processes; Probability
    tree: Journal of Dynamic Systems, Measurement, and Control; 2012; Volume 134, Issue 6
    content type: Fulltext
    DSpace software copyright © 2002-2015 DuraSpace
    The "DSpace" digital library software, localized into Persian by YaBeSH for Iranian libraries | Contact YaBeSH