Evaluation Model for Crack Detection with Deep Learning: Improved Confusion Matrix Based on Linear Features
Source: Journal of Construction Engineering and Management, 2025, Volume 151, Issue 3, page 04024210-1
Author: Ching-Lung Fan
DOI: 10.1061/JCEMD4.COENG-14976
Publisher: American Society of Civil Engineers
Abstract: Damage due to cracking can be detected through either manual visual methods or machine vision techniques for early prevention and maintenance. In recent years, image-based deep learning methods have emerged as potent tools for automatic crack detection. In this study, five deep learning object detection algorithms—Faster R-CNN, single-shot detector (SSD), You Only Look Once (YOLO) v3 and v8, and RetinaNet—were systematically compared, and the results were analyzed. Object detection involves the generation of bounding boxes of various sizes for objects of interest. Because cracks are thin and small and thus difficult to capture in a single bounding box, redundant detections are common, and they compromise the accuracy and consistency of the model. Therefore, an improved confusion matrix based on linear features was employed in this study to evaluate the crack detection performance of the five object detection algorithms. In the evaluation experiments, SSD achieved an overall accuracy of 90.6% on visible atmospherically resistant index (VARI) images, indicating effective concrete crack detection performance. Notably, SSD excels in cases involving small cracks and data imbalance, demonstrating a high level of model stability. This comparative analysis of the performances of different deep learning algorithms in crack detection contributes to the formulation of methods for automatic damage detection.
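The abstract reports an overall accuracy figure derived from a confusion matrix. As a minimal sketch only (the paper's improved, linear-feature-based matrix is not described in this record, and the counts below are illustrative, not from the study), overall accuracy for binary crack/non-crack classification can be computed as:

```python
def overall_accuracy(tp: int, fp: int, fn: int, tn: int) -> float:
    """Fraction of all samples classified correctly (crack vs. non-crack).

    tp/fp/fn/tn are the four cells of a standard binary confusion matrix;
    the paper's linear-feature variant would populate these differently,
    but the accuracy formula itself is unchanged.
    """
    total = tp + fp + fn + tn
    return (tp + tn) / total if total else 0.0

# Illustrative counts only (hypothetical, chosen to give ~90% accuracy)
acc = overall_accuracy(tp=450, fp=50, fn=47, tn=453)
print(f"overall accuracy = {acc:.1%}")  # prints "overall accuracy = 90.3%"
```

This is the standard (TP + TN) / total definition; the study's contribution lies in how the matrix cells are counted for thin, linear cracks rather than in the accuracy formula itself.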
contributor author | Ching-Lung Fan | |
date accessioned | 2025-04-20T10:20:38Z | |
date available | 2025-04-20T10:20:38Z | |
date copyright | 2024-12-24 | |
date issued | 2025 | |
identifier other | JCEMD4.COENG-14976.pdf | |
identifier uri | http://yetl.yabesh.ir/yetl1/handle/yetl/4304518 | |
publisher | American Society of Civil Engineers | |
title | Evaluation Model for Crack Detection with Deep Learning: Improved Confusion Matrix Based on Linear Features | |
type | Journal Article | |
journal volume | 151 | |
journal issue | 3 | |
journal title | Journal of Construction Engineering and Management | |
identifier doi | 10.1061/JCEMD4.COENG-14976 | |
journal firstpage | 04024210-1 | |
journal lastpage | 04024210-16 | |
page | 16 | |
tree | Journal of Construction Engineering and Management; 2025; Volume 151; Issue 3 | |
contenttype | Fulltext |