How Diverse Initial Samples Help and Hurt Bayesian Optimizers
Source: Journal of Mechanical Design, 2023, volume 145, issue 11, page 111703-1
DOI: 10.1115/1.4063006
Publisher: The American Society of Mechanical Engineers (ASME)
Abstract: Design researchers have struggled to produce quantitative predictions for exactly why and when diversity might help or hinder design search efforts. This paper addresses that problem by studying one ubiquitously used search strategy—Bayesian optimization (BO)—on a 2D test problem with modifiable convexity and difficulty. Specifically, we test how providing diverse versus non-diverse initial samples to BO affects its performance during search and introduce a fast ranked-determinantal point process method for computing diverse sets, which we need to detect sets of highly diverse or non-diverse initial samples. We initially found, to our surprise, that diversity did not appear to affect BO, neither helping nor hurting the optimizer’s convergence. However, follow-on experiments illuminated a key trade-off. Non-diverse initial samples hastened posterior convergence for the underlying model hyper-parameters—a model building advantage. In contrast, diverse initial samples accelerated exploring the function itself—a space exploration advantage. Both advantages help BO, but in different ways, and the initial sample diversity directly modulates how BO trades those advantages. Indeed, we show that fixing the BO hyper-parameters removes the model building advantage, causing diverse initial samples to always outperform models trained with non-diverse samples. These findings shed light on why, at least for BO-type optimizers, the use of diversity has mixed effects and caution against the ubiquitous use of space-filling initializations in BO. To the extent that humans use explore-exploit search strategies similar to BO, our results provide a testable conjecture for why and when diversity may affect human-subject or design team experiments.
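The paper's fast ranked-determinantal point process method is not reproduced in this record; as an illustrative sketch of the underlying idea only, the snippet below greedily selects a subset of candidate points that approximately maximizes the determinant of an RBF-kernel submatrix (a common DPP MAP-style diversity heuristic). The function name, the greedy strategy, and the `length_scale` value are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def greedy_diverse_subset(X, k, length_scale=0.5):
    """Greedily pick k row indices of X that approximately maximize the
    log-determinant of the RBF kernel submatrix (a diversity heuristic)."""
    # Pairwise squared distances and RBF kernel matrix.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2.0 * length_scale ** 2))
    # Start from the first point (RBF diagonal is all ones, so the
    # starting choice is arbitrary in this sketch).
    selected = [0]
    for _ in range(k - 1):
        best, best_gain = None, -np.inf
        for i in range(len(X)):
            if i in selected:
                continue
            idx = selected + [i]
            # log|K_S| of the candidate subset; near-duplicate points
            # make the submatrix nearly singular, so they score poorly.
            gain = np.linalg.slogdet(K[np.ix_(idx, idx)])[1]
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected
```

For example, given two near-duplicate points and three spread-out corners of the unit square, the greedy rule avoids keeping both near-duplicates, which is the behavior needed to construct the "highly diverse" initial sample sets the abstract describes.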
contributor author | Kamrah, Eesh | |
contributor author | Ghoreishi, Seyede Fatemeh | |
contributor author | Ding, Zijian “Jason” | |
contributor author | Chan, Joel | |
contributor author | Fuge, Mark | |
date accessioned | 2023-11-29T19:29:17Z | |
date available | 2023-11-29T19:29:17Z | |
date copyright | 2023-08-29 | |
date issued | 2023-08-29 | |
identifier issn | 1050-0472 | |
identifier other | md_145_11_111703.pdf | |
identifier uri | http://yetl.yabesh.ir/yetl1/handle/yetl/4294798 | |
publisher | The American Society of Mechanical Engineers (ASME) | |
title | How Diverse Initial Samples Help and Hurt Bayesian Optimizers | |
type | Journal Paper | |
journal volume | 145 | |
journal issue | 11 | |
journal title | Journal of Mechanical Design | |
identifier doi | 10.1115/1.4063006 | |
journal firstpage | 111703-1 | |
journal lastpage | 111703-11 | |
page | 11 | |
tree | Journal of Mechanical Design, 2023, volume 145, issue 11 | |
contenttype | Fulltext |