| description abstract | Machine learning (ML) solutions are rapidly changing the landscape of many fields, including structural engineering. Despite their promising performance, these approaches are usually demonstrated only as proofs of concept in structural engineering and are rarely deployed in real-world applications. This paper illustrates the challenges of developing ML models suitable for deployment, with a focus on generalizability and explainability. Among various pitfalls, the paper discusses the impact of model overfitting, underfitting, and underspecification; non-representativeness of training data; variable omission bias; and possible shortcomings of conventional cross-validation and of feature importance–based explainability for correlated random variables. Two illustrative examples specific to structural engineering highlight the importance of rigorous model validation through adaptive sampling, careful physics-informed feature selection, and consideration of both model complexity and generalizability. | |