contributor author: Romero, David A.
contributor author: Marin, Veronica E.
contributor author: Amon, Cristina H.
date accessioned: 2017-05-09T01:20:44Z
date available: 2017-05-09T01:20:44Z
date issued: 2015
identifier issn: 1050-0472
identifier other: md_137_01_011402.pdf
identifier uri: http://yetl.yabesh.ir/yetl/handle/yetl/158770
description abstract: Metamodels, or surrogate models, have been proposed in the literature to reduce the resources (time/cost) invested in the design and optimization of engineering systems whose behavior is modeled using complex computer codes, in an area commonly known as simulation-based design optimization. Following the seminal paper of Sacks et al. (1989, “Design and Analysis of Computer Experiments,” Stat. Sci., 4(4), pp. 409–435), researchers have developed the field of design and analysis of computer experiments (DACE), focusing on different aspects of the problem such as experimental design, approximation methods, model fitting, model validation, and metamodeling-based optimization methods. Among these, model validation remains a key issue, as the reliability and trustworthiness of the results depend greatly on the quality of approximation of the metamodel. Typically, model validation involves calculating prediction errors of the metamodel using a data set different from the one used to build the model. Due to the high cost associated with computer experiments with simulation codes, validation approaches that do not require additional data points (samples) are preferable. However, it is documented that methods based on resampling, e.g., cross validation (CV), can exhibit oscillatory behavior during sequential/adaptive sampling and model refinement, making it difficult to quantify the approximation capabilities of the metamodels and/or to define rational stopping criteria for the metamodel refinement process. In this work, we present the results of a simulation experiment conducted to study the evolution of several error metrics during sequential model refinement, to estimate prediction errors, and to define proper stopping criteria without requiring additional samples beyond those used to build the metamodels. Our results show that it is possible to accurately estimate the predictive performance of Kriging metamodels without additional samples, and that leave-one-out CV errors perform poorly in this context. Based on our findings, we propose guidelines for choosing the sample size of computer experiments that use a sequential/adaptive model refinement paradigm. We also propose a stopping criterion for sequential model refinement that does not require additional samples.
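The abstract contrasts leave-one-out CV errors with the true predictive performance of Kriging metamodels under sequential refinement. The minimal Python sketch below (an illustration, not the authors' method) shows how such a study is typically set up: a Kriging (Gaussian process) model is refit as samples are added one at a time, the leave-one-out CV root-mean-square error is tracked, and refinement stops when the metric stabilizes. The simulator function, the random infill rule, and the tolerance tol are hypothetical placeholders; the paper's adaptive sampling criterion and its proposed stopping rule are not reproduced here.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Hypothetical stand-in for an expensive simulation code (1D for clarity).
    def simulator(x):
        return np.sin(8.0 * x) + 0.5 * x

    def loo_cv_rmse(X, y):
        # Leave-one-out CV RMSE: refit the Kriging model n times,
        # each time predicting the single held-out sample.
        errors = []
        for i in range(len(X)):
            mask = np.arange(len(X)) != i
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1),
                                          normalize_y=True)
            gp.fit(X[mask], y[mask])
            errors.append(gp.predict(X[i:i + 1])[0] - y[i])
        return float(np.sqrt(np.mean(np.square(errors))))

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(5, 1))   # initial experimental design
    y = simulator(X).ravel()

    tol, prev = 1e-3, None
    for step in range(30):               # sequential refinement loop
        # Placeholder infill rule; an adaptive criterion would go here.
        x_new = rng.uniform(0, 1, size=(1, 1))
        X = np.vstack([X, x_new])
        y = np.append(y, simulator(x_new).ravel())
        e = loo_cv_rmse(X, y)
        print(f"n={len(X):3d}  LOO-CV RMSE={e:.4f}")
        # Naive stopping rule on metric stabilization, for illustration only.
        if prev is not None and abs(e - prev) < tol:
            break
        prev = e

Tracking this LOO-CV trace against errors on an independent test set is one way to observe the oscillatory behavior the paper reports, which motivates its sample-size guidelines and alternative stopping criterion.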
publisher: The American Society of Mechanical Engineers (ASME)
title: Error Metrics and the Sequential Refinement of Kriging Metamodels
type: Journal Paper
journal volume: 137
journal issue: 1
journal title: Journal of Mechanical Design
identifier doi: 10.1115/1.4028883
journal firstpage: 11402
journal lastpage: 11402
identifier eissn: 1528-9001
tree: Journal of Mechanical Design; 2015; volume 137; issue 1
contenttype: Fulltext

