YaBeSH Engineering and Technology Library



    Learning Algorithms for Neural Networks Based on Quasi-Newton Methods With Self-Scaling

    Source: Journal of Dynamic Systems, Measurement, and Control; 1993; Volume 115; Issue 1; Page 38
    Author: H. S. M. Beigi, C. J. Li
    DOI: 10.1115/1.2897405
    Publisher: The American Society of Mechanical Engineers (ASME)
    Abstract: Previous studies have suggested that, for moderate-sized neural networks, the use of classical Quasi-Newton methods yields the best convergence properties among all the state-of-the-art methods [1]. This paper describes a set of even better learning algorithms based on a class of Quasi-Newton optimization techniques called Self-Scaling Variable Metric (SSVM) methods. One of the characteristics of SSVM methods is that they provide a set of search directions which are invariant under the scaling of the objective function. Using an XOR benchmark and an encoder benchmark, simulations with the SSVM algorithms for the learning of general feedforward neural networks were carried out to study their performance. It is shown that the SSVM methods reduce the number of iterations required for convergence by 40 to 60 percent compared with classical Quasi-Newton methods, which, in general, converge two to three orders of magnitude faster than steepest descent techniques.
    Keywords: Algorithms, Artificial neural networks, Feedforward control, Engineering simulation, Optimization
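
    The core idea in the abstract, rescaling the variable-metric (inverse Hessian) approximation so that the search directions become invariant under scaling of the objective, can be illustrated with a short sketch. The following is a minimal illustration, not the authors' implementation: a self-scaled BFGS update (one member of the SSVM family, using the Oren-Luenberger scaling factor gamma = s'y / y'Hy) applied to the XOR benchmark mentioned in the abstract, with a small 2-2-1 sigmoid network. The network size, line-search constants, and stopping tolerance are illustrative assumptions.

    import numpy as np

    # XOR benchmark: four patterns, one binary target each
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    T = np.array([[0.], [1.], [1.], [0.]])

    def unpack(w):
        # 2-2-1 network: W1 (2x2), b1 (2), W2 (2x1), b2 (1)
        return w[:4].reshape(2, 2), w[4:6], w[6:8].reshape(2, 1), w[8:]

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def loss_and_grad(w):
        W1, b1, W2, b2 = unpack(w)
        h = sigmoid(X @ W1 + b1)          # hidden-layer activations
        y = sigmoid(h @ W2 + b2)          # network outputs
        e = y - T
        loss = 0.5 * np.sum(e ** 2)
        # Backpropagate the squared-error loss
        d2 = e * y * (1 - y)              # output-layer delta
        d1 = (d2 @ W2.T) * h * (1 - h)    # hidden-layer delta
        grad = np.concatenate([(X.T @ d1).ravel(), d1.sum(0),
                               (h.T @ d2).ravel(), d2.sum(0)])
        return loss, grad

    def train_ssvm_bfgs(w, iters=200, tol=1e-6):
        n = w.size
        H = np.eye(n)                     # inverse-Hessian approximation
        loss, g = loss_and_grad(w)
        for _ in range(iters):
            if np.linalg.norm(g) < tol:
                break
            d = -H @ g                    # quasi-Newton search direction
            # Backtracking (Armijo) line search
            a = 1.0
            while a > 1e-8 and loss_and_grad(w + a * d)[0] > loss + 1e-4 * a * (g @ d):
                a *= 0.5
            w_new = w + a * d
            loss_new, g_new = loss_and_grad(w_new)
            s, yk = w_new - w, g_new - g
            sy = s @ yk
            if sy > 1e-12:
                # Self-scaling: rescale H so the resulting search directions
                # are invariant under scaling of the objective function.
                gamma = sy / (yk @ H @ yk)
                H *= gamma
                # Standard BFGS update of the (scaled) inverse Hessian
                rho = 1.0 / sy
                V = np.eye(n) - rho * np.outer(s, yk)
                H = V @ H @ V.T + rho * np.outer(s, s)
            w, loss, g = w_new, loss_new, g_new
        return w, loss

    rng = np.random.default_rng(0)
    w_opt, final_loss = train_ssvm_bfgs(rng.normal(scale=0.5, size=9))
    print("final XOR loss:", final_loss)

    Setting gamma = 1 in the sketch recovers the classical BFGS update, which gives a direct way to compare iteration counts in the spirit of the experiments described in the abstract.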
    • Download: PDF (643.8 KB)
    • Price: 5000 Rial


    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/111702
    Collections
    • Journal of Dynamic Systems, Measurement, and Control

    Full item record

    contributor author: H. S. M. Beigi
    contributor author: C. J. Li
    date accessioned: 2017-05-08T23:40:55Z
    date available: 2017-05-08T23:40:55Z
    date copyright: March, 1993
    date issued: 1993
    identifier issn: 0022-0434
    identifier other: JDSMAA-26191#38_1.pdf
    identifier uri: http://yetl.yabesh.ir/yetl/handle/yetl/111702
    description abstract: Previous studies have suggested that, for moderate-sized neural networks, the use of classical Quasi-Newton methods yields the best convergence properties among all the state-of-the-art methods [1]. This paper describes a set of even better learning algorithms based on a class of Quasi-Newton optimization techniques called Self-Scaling Variable Metric (SSVM) methods. One of the characteristics of SSVM methods is that they provide a set of search directions which are invariant under the scaling of the objective function. Using an XOR benchmark and an encoder benchmark, simulations with the SSVM algorithms for the learning of general feedforward neural networks were carried out to study their performance. It is shown that the SSVM methods reduce the number of iterations required for convergence by 40 to 60 percent compared with classical Quasi-Newton methods, which, in general, converge two to three orders of magnitude faster than steepest descent techniques.
    publisher: The American Society of Mechanical Engineers (ASME)
    title: Learning Algorithms for Neural Networks Based on Quasi-Newton Methods With Self-Scaling
    type: Journal Paper
    journal volume: 115
    journal issue: 1
    journal title: Journal of Dynamic Systems, Measurement, and Control
    identifier doi: 10.1115/1.2897405
    journal first page: 38
    journal last page: 43
    identifier eissn: 1528-9028
    keywords: Algorithms
    keywords: Artificial neural networks
    keywords: Feedforward control
    keywords: Engineering simulation
    keywords: Optimization
    tree: Journal of Dynamic Systems, Measurement, and Control; 1993; Volume 115; Issue 1
    contenttype: Fulltext
    DSpace software copyright © 2002-2015  DuraSpace
    The "DSpace" digital library software was localized into Persian by Yabesh for Iranian libraries | Contact Yabesh