Abstract

Crack inspection is a crucial but labor-intensive maintenance task for in-service bridges. Recently, the development of the fully convolutional network (FCN) has enabled pixel-wise semantic segmentation, which is promising as a means of automatic crack detection. However, the demand for numerous training images with pixel-wise labels poses challenges. In this study, a benchmark data set called the bridge crack library (BCL) was established, containing 11,000 pixel-wise labeled images at 256×256 resolution: 5,769 nonsteel crack images, 2,036 steel crack images, 3,195 noise images, and their labels. It is aimed at crack detection on multiple structural materials, including masonry, concrete, and steel. The raw images were collected by multiple cameras from more than 50 in-service bridges over a period of 2 years, covering numerous crack forms and noise motifs in different scenarios. Quality control measures were carried out during the raw image collection, subimage cropping, and subimage annotation steps. The established BCL was used to train three deep neural networks (DNNs) for applicability validation. The results indicate that the BCL can be used to effectively train DNNs for crack detection and can serve as a benchmark data set for performance evaluation of DNN models.
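As a brief illustration of how such a pixel-wise benchmark is typically consumed for performance evaluation, the sketch below computes intersection-over-union (IoU), a standard metric for comparing a predicted binary crack mask against a pixel-wise ground-truth label. This is a minimal, hypothetical example; the function and the toy masks are not part of the BCL itself.

```python
# Minimal sketch: pixel-wise IoU for binary crack masks.
# Illustrative only; BCL provides 256x256 images with pixel-wise
# labels, and a real evaluation would iterate over the whole set.

def iou(pred, label):
    """Intersection-over-union of two binary masks given as flat 0/1 lists."""
    inter = sum(p & l for p, l in zip(pred, label))
    union = sum(p | l for p, l in zip(pred, label))
    return inter / union if union else 1.0  # two empty masks count as a perfect match

# Toy 4-pixel example standing in for a 256*256 = 65,536-pixel mask.
pred = [1, 1, 0, 0]
label = [1, 0, 1, 0]
print(iou(pred, label))  # 1 overlapping pixel / 3 pixels in the union
```

Segmentation benchmarks like BCL are usually reported with pixel-level metrics of this kind (IoU, precision, recall) rather than image-level accuracy, since cracks occupy only a small fraction of each image.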
