Automatic extraction of building roofs using LIDAR data and multispectral imagery

https://doi.org/10.1016/j.isprsjprs.2013.05.006

Abstract

Automatic 3D extraction of building roofs from remotely sensed data is important for many applications including city modelling. This paper proposes a new method for automatic 3D roof extraction through an effective integration of LIDAR (Light Detection And Ranging) data and multispectral orthoimagery. Using the ground height from a DEM (Digital Elevation Model), the raw LIDAR points are separated into two groups. The first group contains the ground points that are exploited to constitute a ‘ground mask’. The second group contains the non-ground points which are segmented using an innovative image line guided segmentation technique to extract the roof planes. The image lines are extracted from the grey-scale version of the orthoimage and then classified into several classes such as ‘ground’, ‘tree’, ‘roof edge’ and ‘roof ridge’ using the ground mask and colour and texture information from the orthoimagery. During segmentation of the non-ground LIDAR points, the lines from the latter two classes are used as baselines to locate the nearby LIDAR points of the neighbouring planes. For each plane a robust seed region is thereby defined using the nearby non-ground LIDAR points of a baseline and this region is iteratively grown to extract the complete roof plane. Finally, a newly proposed rule-based procedure is applied to remove planes constructed on trees. Experimental results show that the proposed method can successfully remove vegetation and so offers high extraction rates.

Introduction

Up-to-date 3D building models are important for many GIS (Geographic Information System) applications such as urban planning, disaster management and automatic city planning (Gröger and Plümer, 2012). Therefore, 3D building reconstruction has been an area of active research within the photogrammetric, remote sensing and computer vision communities for the last two decades. Building reconstruction entails the extraction of 3D building information, including the corners, edges and planes of building facades and roofs, from remotely sensed data such as aerial imagery and LIDAR (Light Detection And Ranging) data. The facades and roofs are then reconstructed using the available information. Although the problem is well understood and accurate modelling results are delivered in many cases, the major drawback is that the current level of automation remains comparatively low (Cheng et al., 2011).

Three-dimensional building roof reconstruction from aerial imagery alone lacks automation, partly because of shadows, occlusions and poor contrast. The introduction of LIDAR has offered a favourable option for improving the level of automation in 3D reconstruction compared with image-based reconstruction alone. However, the quality of building roofs reconstructed from LIDAR data is restricted by the ground resolution of the LIDAR data, which is still generally lower than that of aerial imagery. The two data sources are therefore considered complementary for the automatic 3D reconstruction of building roofs. Nevertheless, the question of how to optimally integrate data from two sources with such dissimilar characteristics remains open, and relatively few approaches have thus far been published.

Different approaches for building roof reconstruction have been reported in the literature. In the model driven approach, also known as the parametric approach, a predefined catalogue of roof forms (e.g., flat, saddle) is prescribed and the model that best fits the data is chosen. An advantage of this approach is that the final roof shape is always topologically correct. The disadvantage, however, is that complex roof shapes cannot be reconstructed if they are not in the input catalogue. In addition, the level of detail in the reconstructed building is compromised, as the input models usually consist of rectangular footprints. In the data driven approach, also known as the generic approach (Lafarge et al., 2010) or polyhedral approach (Satari et al., 2012), the roof is reconstructed from planar patches derived from segmentation algorithms. The challenge here is to identify neighbouring planar segments and their relationships, for example, coplanar patches, intersection lines or step edges between neighbouring planes. The main advantage of this approach is that polyhedral buildings of arbitrary shape may be reconstructed (Rottensteiner, 2003). The main drawback of data driven methods is their susceptibility to the incompleteness and inaccuracy of the input data, for example, low contrast and shadow in images and low point density in LIDAR data. Therefore, some roof features such as small dormer windows and chimneys cannot be represented if the resolution of the input data is low. Moreover, if a roof is assumed to be a combination of a set of 2D planar faces, a building with a curved roof structure cannot be reconstructed. Nonetheless, in the presence of high density LIDAR and image data, curved surfaces can be well approximated (Dorninger and Pfeifer, 2008). The structural approach, also known as the global strategy (Lafarge et al., 2010) or hybrid approach (Satari et al., 2012), exhibits both model and data driven characteristics. For example, Satari et al. (2012) applied the data driven approach to reconstruct the cardinal planes and the model driven approach to reconstruct dormers.
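
To make the data driven idea concrete, the following minimal Python sketch fits a single planar patch to a set of 3D points with a simple RANSAC loop. It is an illustration of generic plane segmentation, not the segmentation technique proposed in this paper; the distance threshold and iteration count are illustrative assumptions.

```python
import numpy as np

def fit_plane_ransac(points, dist_thresh=0.15, n_iter=500, rng=None):
    """Fit one planar patch to an (N, 3) array of LIDAR points.

    Returns the plane as (normal, d) with normal . p + d = 0, together with
    the indices of the inlier points. Thresholds are illustrative only.
    """
    rng = rng or np.random.default_rng(0)
    best_inliers = np.array([], dtype=int)
    best_plane = None
    for _ in range(n_iter):
        # Pick three distinct points and derive the plane they span.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Points closer than dist_thresh to the plane are inliers.
        dist = np.abs(points @ normal + d)
        inliers = np.flatnonzero(dist < dist_thresh)
        if len(inliers) > len(best_inliers):
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers
```

In a full data driven pipeline, the inliers of each accepted plane are removed from the point set and the search is repeated, after which neighbouring planes are intersected to recover ridge lines and step edges.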

The research reported in this paper concentrates on the 3D extraction of roof planes. A new data driven approach is proposed for automatic 3D roof extraction through an effective integration of LIDAR data and multispectral imagery. The LIDAR data is divided into two groups: ground and non-ground points. The ground points are used to generate a ‘ground mask’. The non-ground points are iteratively segmented to extract the roof planes. The structural image lines are classified into several classes (‘ground’, ‘tree’, ‘roof edge’ and ‘roof ridge’) using the ground mask, colour orthoimagery and image texture information. In an iterative procedure, the non-ground LIDAR points near a long roof edge or ridge line (known as the baseline) are used to obtain a roof plane. Finally, a newly proposed rule-based procedure is applied to remove planes constructed on trees. Promising experimental results for the 3D extraction of building roofs have been obtained for two test data sets.
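
The two sketches below outline, in Python, the flavour of the first two stages: separating ground from non-ground LIDAR points using a DEM height, and gathering the non-ground points near a classified baseline as a candidate seed region. The function names, the DEM lookup, the height threshold and the buffer width are all illustrative assumptions; the paper's own ground-mask generation and iterative region growing (Section 3) involve additional steps.

```python
import numpy as np

def split_ground_points(lidar_xyz, dem_height, height_thresh=1.0):
    """Separate raw LIDAR points into ground and non-ground groups.

    lidar_xyz     : (N, 3) array of x, y, z coordinates.
    dem_height    : callable returning the DEM ground height at (x, y).
    height_thresh : points more than this many metres above the DEM are
                    treated as non-ground (illustrative value).
    """
    ground_z = np.array([dem_height(x, y) for x, y, _ in lidar_xyz])
    above = lidar_xyz[:, 2] - ground_z
    return lidar_xyz[above <= height_thresh], lidar_xyz[above > height_thresh]

def seed_points_near_baseline(non_ground, p_start, p_end, buffer_m=1.5):
    """Collect non-ground points within a planimetric buffer of a baseline.

    The baseline is a classified 'roof edge' or 'roof ridge' image line
    given by its 2D end points in the LIDAR coordinate frame; buffer_m is
    an illustrative buffer width in metres.
    """
    a, b = np.asarray(p_start, float), np.asarray(p_end, float)
    ab = b - a
    t = np.clip(((non_ground[:, :2] - a) @ ab) / (ab @ ab), 0.0, 1.0)
    closest = a + t[:, None] * ab                 # nearest point on segment
    dist = np.linalg.norm(non_ground[:, :2] - closest, axis=1)
    return non_ground[dist <= buffer_m]
```

In the actual method the seed region is then fitted with a plane and iteratively grown by adding compatible neighbouring points, which is only hinted at here.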

Note that the initial version of this method was introduced in Awrangjeb et al. (2012a), where the preliminary idea was briefly presented without any objective evaluation of the extracted roof planes. This paper not only presents full details of the approach and the objective evaluation results, but also proposes a new rule-based procedure in order to remove trees.

The rest of the paper is organised as follows: Section 2 presents a review of the prominent data driven methods for 3D building roof extraction. Section 3 details the proposed extraction algorithm. Section 4 presents the results for two test data sets, discusses the sensitivity of two algorithmic parameters and compares the results of the proposed technique with those of existing data driven techniques. Concluding remarks are then provided in Section 5.

Literature review

The 3D reconstruction of building roofs comprises two important steps (Rottensteiner et al., 2004). The detection step is a classification task and delivers regions of interest in the form of 2D lines or positions of the building boundary. The reconstruction step constructs the 3D models within the regions of interest using the available information from the sensor data. The detection step significantly reduces the search space for the reconstruction step. In this section, a review of some of the prominent data driven methods for 3D building roof extraction is presented.

Proposed extraction procedure

Fig. 1 shows an overview of the proposed building roof extraction procedure. The input data consists of raw LIDAR data and multispectral or colour orthoimagery. In the detection step (top dashed rectangle in Fig. 1), the LIDAR points on the buildings and trees are separated as non-ground points. The primary building mask, known as the ‘ground mask’ (Awrangjeb et al., 2010b), is generated using the LIDAR points on the ground. The NDVI (Normalised Difference Vegetation Index) is calculated for each pixel of the multispectral orthoimage.
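
For reference, the NDVI is the standard band ratio (NIR − R) / (NIR + R) computed per pixel. The short Python sketch below shows this computation; the vegetation threshold in the usage comment is an illustrative assumption, not a value taken from the paper.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Per-pixel NDVI from the near-infrared and red bands of the orthoimage.

    nir, red : 2D arrays of the same shape.
    eps guards against division by zero on dark pixels.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Illustrative usage: flag likely vegetation pixels.
# veg_mask = ndvi(nir_band, red_band) > 0.3
```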

Performance study

In the performance study conducted to assess the proposed approach, two data sets from two different areas were employed. The objective evaluation followed a previously proposed automatic and threshold-free evaluation system (Awrangjeb et al., 2010b, Awrangjeb et al., 2010c).
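
Object-based evaluations of this kind commonly report completeness, correctness and quality derived from counts of true positive (TP), false negative (FN) and false positive (FP) objects. The sketch below shows these standard measures only; it does not reproduce the matching rules of the cited threshold-free evaluation system.

```python
def completeness(tp, fn):
    """Fraction of reference objects that were detected: TP / (TP + FN)."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def correctness(tp, fp):
    """Fraction of detected objects that are correct: TP / (TP + FP)."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def quality(tp, fn, fp):
    """Combined measure: TP / (TP + FN + FP)."""
    return tp / (tp + fn + fp) if (tp + fn + fp) else 0.0
```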

Conclusion and future work

This paper has presented a new method for automatic 3D roof extraction through an effective integration of LIDAR data and aerial orthoimagery. Like any existing method, the proposed roof extraction method uses a number of algorithmic parameters, the majority of which are either adopted from the existing literature or directly related to the input data. An empirical study has been conducted in order to examine the sensitivity of the remaining parameters. It is shown that, in terms of object-based evaluation, the proposed method successfully removes vegetation and offers high extraction rates.

Acknowledgments

Dr. Awrangjeb is the recipient of the Discovery Early Career Researcher Award by the Australian Research Council (Project Number DE120101778). The authors would like to thank Ergon Energy (www.ergon.com.au) for providing the data sets.

References (31)

  • M. Awrangjeb et al., Building detection in complex scenes through effective separation of buildings from trees, Photogrammetric Engineering and Remote Sensing (2012)
  • Barista, 2011. The Barista Software....
  • L. Chen et al., Reconstruction of building models with curvilinear boundaries from laser scanner and aerial imagery, Lecture Notes in Computer Science (2006)
  • L. Cheng et al., 3D building model reconstruction from multi-view aerial imagery and LIDAR data, Photogrammetric Engineering and Remote Sensing (2011)
  • P. Dorninger et al., A comprehensive automated 3D approach for building extraction, reconstruction, and regularization from airborne laser scanning point clouds, Sensors (2008)