Abstract

Image segmentation is an essential step in vision sensing and image processing. It enables understanding of object classes, spatial locations, and extents in a scene, which can support a wide range of construction applications such as progress monitoring, safety management, and productivity analysis. The recent ground-breaking achievements of deep learning-based approaches for semantic segmentation come at the cost of expensive large-scale training datasets annotated at the pixel level. Although building information modeling (BIM) has been leveraged to alleviate labeling costs using automatically generated, color-coded images as semantic labels, the differences between BIM models and real-world scenes make it difficult to apply networks trained on BIM-generated labels to real images, and reducing those differences takes nontrivial effort. To address these problems, this paper proposes a weakly supervised segmentation approach that uses inexpensive image-level labels. The boundary information missing from image-level labels is compensated by BIM-extracted object information. The proposed method consists of three modules: (1) detect initial object locations from image-level labels; (2) extract object information from BIM as prior knowledge; and (3) incorporate the prior knowledge into the network to enhance the detected object locations. Three extensive experiments are designed to evaluate the effectiveness of the proposed method. Results show that the proposed method substantially improves the detected object areas by using prior knowledge of target objects from BIM and outperforms state-of-the-art weakly supervised methods.
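The first module, detecting initial object locations from image-level labels, is commonly realized in the weakly supervised segmentation literature with class activation maps (CAM). The following is a minimal NumPy sketch of that general technique, not the paper's exact implementation; all function names, thresholds, and array shapes are illustrative assumptions:

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Compute a class activation map (CAM) from the last conv layer.

    features   : (C, H, W) feature maps from the final conv layer
    fc_weights : (K, C) weights of a global-average-pooling classifier
    class_idx  : target class index, known from the image-level label
    """
    # Weighted sum over channels highlights regions that drove the class score.
    cam = np.tensordot(fc_weights[class_idx], features, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)        # keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize to [0, 1]
    return cam

def seed_mask(cam, threshold=0.3):
    """Threshold the CAM into a coarse object-location seed (hypothetical cutoff)."""
    return cam >= threshold
```

In such pipelines the thresholded seed is typically sparse and misses object boundaries, which is exactly the gap the paper's BIM-extracted prior knowledge (modules 2 and 3) is intended to fill.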
