Article

Reconstruction of Three-Dimensional (3D) Indoor Interiors with Multiple Stories via Comprehensive Segmentation

1 School of Resource and Environmental Sciences, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
2 Collaborative Innovation Centre of Geospatial Technology, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
* Authors to whom correspondence should be addressed.
Remote Sens. 2018, 10(8), 1281; https://doi.org/10.3390/rs10081281
Submission received: 3 July 2018 / Revised: 1 August 2018 / Accepted: 9 August 2018 / Published: 14 August 2018
(This article belongs to the Special Issue 3D Modelling from Point Clouds: Algorithms and Methods)

Abstract

The fast and stable reconstruction of building interiors from scanned point clouds has recently attracted considerable research interest. However, reconstructing long corridors and connected areas across multiple floors has emerged as a substantial challenge. This paper presents a comprehensive segmentation method for reconstructing a three-dimensional (3D) indoor structure with multiple stories. With this method, the over-segmentation that usually occurs when reconstructing long corridors in a complex indoor environment is overcome by morphologically eroding the floor space to segment rooms and by overlapping the segmented room-space with cells partitioned via extracted wall lines. Such segmentation ensures both the integrity of the room-space partitions and the geometric regularity of the rooms. For spaces across floors in a multistory building, a peak-nadir-peak strategy in the distribution of points along the z-axis is proposed to extract connected areas across multiple floors. A series of experimental tests using seven real-world 3D scans and eight synthetic models of indoor environments shows the effectiveness and feasibility of the proposed method.


1. Introduction

Three-dimensional (3D) indoor reconstruction has received increasing attention in recent years [1,2,3]. In the Architecture, Engineering, and Construction (AEC) domain, blueprints and as-built Building Information Modeling (BIM) have become must-have tools throughout a facility’s life cycle [1,4]. However, the built structure may differ significantly from what was proposed in the original plan [1,5,6], and blueprints of facilities may be unavailable [7]. Consequently, reconstructing precise 3D models of indoor interiors has emerged as a challenge.
The creation of a 3D model of indoor interiors entails large amounts of time and human resources [8]. To accelerate data acquisition and improve the accuracy of reconstructed models, many research groups have developed various sensor-based surveying technologies [5,9]. Laser-scanning technologies have advanced significantly [4,10,11,12] and can rapidly capture the details of a complex indoor structure’s geometry; thus, they show promise for certain applications [8]. The high-quality reconstruction of a watertight mesh model from scanned point clouds [13] has drawn increasing attention in recent years in many areas, such as computer graphics and non-manifold repair [1,13,14,15,16].
Various types of input point clouds are suitable for surface reconstruction [17], such as those acquired from aerial Light Detection and Ranging (LiDAR) scans, consumer-level color and depth (RGB-D) cameras, mobile laser scanning (MLS), and terrestrial laser scanning (TLS). Aerial LiDAR was designed for large-scale outdoor environments and may not be suitable for indoor reconstruction. RGB-D cameras (e.g., Microsoft Kinect) are affordable to the general public [18], although their depth information is noisy, possibly distorted, and can have large gaps. MLS sensors (e.g., Zeb-Revo) give the indoor environment good coverage; however, their precision and density are not as high as those of TLS sensors. TLS sensors (e.g., Faro) have good precision and range, but they occasionally suffer from dynamic occlusion [19], i.e., moving objects, resulting in losses of completeness and quality. MLS and RGB-D datasets [20,21,22,23,24] are tested in this study.
Reconstructing indoor interiors from these scanned point clouds is still at an early stage, and the procedure is complicated by restrictions in the data and the complexity of indoor environments, which may exhibit high levels of clutter and occlusion [14]. Despite recent research efforts, a satisfactory solution for indoor interior reconstruction remains undeveloped [14,15]. Various planar-detection methods have been proposed to rebuild interiors [4,25,26,27,28,29,30,31,32]. However, these plane detection-based methods are not robust to missing data, and indoor point clouds exhibit high levels of missing and noisy data because of windows and other highly reflective surfaces [15]. More recently, the focus has shifted to floor map segmentation to address missing data [33,34,35,36,37,38]. These methods treat the indoor-reconstruction problem as a floor map-reconstruction issue and target room segmentation, but fail to consider wall-shape detection and reconstruction. Spatial partitioning along the wall direction, integrated with labeling partitioned cells via a graph-cut operation, was proposed to resolve these issues [1,14,21,39,40,41]. However, detecting long corridors by graph cut leads to over-segmentation [1,21,40]. Thus, reconstructing complete long corridors is still a challenge for these methods. Furthermore, the cited works did not consider the connected area across multiple floors. Although scholars [1,33,42] have applied their methods to multistory datasets and other researchers [26,43] have tested their algorithms on stairs, none of these studies have explored the connected area between two floors.
The present study proposes a comprehensive segmentation method for reconstructing the indoor interiors of a multistory building. The rooms and corridors in each story are segmented by overlapping the segmented room-space, created by a morphological erosion method, with cells partitioned by extracted wall lines. The space across multiple floors is extracted by the peak-nadir-peak strategy in a histogram that describes the distribution of points along the z-axis: the ceiling and floor planes appear as two peaks, and the connected area appears as a nadir between them. The raw point cloud serves as the input, and a watertight model is produced as the output indoor model.
The remainder of this paper is organized as follows. Related works are described in Section 2. The proposed indoor-reconstruction method is described in Section 3. Experimental results for seven real-world datasets and eight synthetic datasets are presented in Section 4. The evaluation results are shown in Section 5, and the conclusions are presented in Section 6.

2. Related Works

Previous studies that addressed indoor reconstruction via 3D laser-scanning point clouds may be classified into three categories: (1) plane detection-based methods, (2) floor map segmentation-based methods, and (3) cell decomposition-based methods.

2.1. Plane Detection-Based Methods

Surface reconstruction has been a popular research topic for decades [1,44]. Indoor interior reconstruction is closely related to outdoor reconstruction [45], which has been more thoroughly studied [1,14], although the addressed issues are different. Early research on indoor reconstruction focused on detecting planes to reconstruct indoor environments, similar to methods of outdoor reconstruction.
A Random Sample Consensus (RANSAC)-based algorithm to extract various shapes from raw point clouds was proposed by Schnabel et al. [25]. Schnabel’s algorithm is weakened by highly variable point density and strong anisotropy [1]. Furthermore, the algorithm is not robust to outliers and noise points because of the uncertainty of randomly sampling a minimum subset of three points [2,46].
Principal component analysis and model fitting were proposed to reconstruct interior planes, addressing floor planes, wall planes, and stair planes [26]. Interior planes are successively extracted by region-growing segmentation and a least-squares fitting algorithm [47] based on RANSAC and alpha-shape-based algorithms. However, this method cannot rebuild complete wall or ceiling planes when data are missing, and it cannot model a watertight structure. Moreover, it suffers from the same weaknesses as Schnabel’s method with respect to missing and noisy data.
A method for wall detection and reconstruction via supervised learning [4,31,32] was proposed to reconstruct the missing portions of walls. This method first labels wall surfaces as cluttered, occupied, or empty areas by studying the relationship between the scanner points and the wall points. Then, a supervised learning method is used to distinguish walls from clutter. Although this method can export a watertight model, indoor structures are restricted to very simple shapes in this type of research.

2.2. Floor Map Segmentation-Based Methods

The above methods have weaknesses associated with missing and noisy data, and certain works can only utilize occlusion-free data. Moreover, rebuilding an indoor watertight model is difficult in certain cases [26,27]. In recent approaches, the focus has shifted towards segmenting the floor map into individual rooms to resolve issues with missing ceiling and floor data [14] and then reconstructing indoor models from the segmented maps.
Floor maps can be segmented by various methods, such as the morphological method [35], the k-medoids clustering method [34], the spectral clustering method [48], and the properties of Delaunay triangulation [33]. However, some approaches [33,34] require viewpoints or a priori knowledge of the number of rooms.
A two-step room-segmentation and wall-/ceiling-detail reconstruction method [34] was proposed to reconstruct indoor environments. The room segmentation is formulated by k-medoids clustering, whereas wall-/ceiling-detail reconstruction is determined by an “offset map”. The proposed approach provides a new algorithm for room segmentation and reconstruction; however, the binary visibility vector is not robust for sparse point clouds. Furthermore, the output model is restricted to a simple rectangular model.
Bormann et al. [35] introduced four methods for room segmentation, and Mielle et al. [37] proposed a novel ripple-segmentation method for floor map segmentation. Both studies focused on floor map segmentation and not a 3D model.

2.3. Cell Decomposition-Based Methods

All of the floor map segmentation-based methods tend to project raw data onto the x-y plane and ignore wall information. Although the room segmentation results may be good, indoor walls are difficult to reconstruct accurately because of high levels of missing and noisy data at the boundaries. Furthermore, certain works [4,27,28,29,31,32,34] can only address buildings under a restrictive Manhattan world assumption, i.e., exactly three orthogonal directions: one for floors and two for walls [1]. Although this assumption may hold true for many indoor environments, many architectural elements of real-world buildings deviate from the strong Manhattan world assumption.
Unlike floor map segmentation-based methods, a space partitioning and cell labeling method based on a graph-cut operation was proposed to address these issues [1,14,21,39,40,41]. The space is first partitioned into a cell decomposition [48,49] via extracted wall lines, and then the label of each cell is confirmed by an energy minimization approach. However, this method has a drawback regarding the complete reconstruction of long corridors [1,21,40]. After the floor space is partitioned via extended wall lines, long corridors are segmented into several sections. Thus, the labeling algorithm separates these regions by implausible walls that are not part of the building’s true walls [21,40].

2.4. Summary

The cited research neglected the reconstruction of spaces across floors in a multistory building. Although a few works [26,43] reconstructed the planes of stairs, the reconstruction of a complete connected area across floors in a multistory building was not mentioned.
The above methods show that the reconstruction of 3D indoor models of a multistory building has been far from satisfactory, and the prominent deficiency of these methods lies in the reconstruction of long corridors and connected areas across floors. Labeling the partitioned cells of a long corridor is difficult with an energy minimization approach resolved by a graph-cut operation. Because of the presence of decomposed cells, over-segmentation occurs along extended wall lines, particularly in long corridor areas. To resolve this issue, research has relied on the implicit assumption that each room, including long corridors and other large rooms, is scanned from exactly one position [21,40]. However, these approaches fail in many situations commonly found in the real world. In this study, a comprehensive segmentation method is proposed to overcome this prominent deficiency, and room-space segmentation is separated from geometric space partitioning. Long corridor reconstruction is resolved by overlapping segmented rooms created by the morphological erosion method with space-partitioned cells. Such an integration of vector and raster data preserves the integrity of the room-space and the geometric regularity of the rooms.
In addition, watertight models can be reconstructed in this paper without a priori knowledge of viewpoints, and the method is not restricted to the strong Manhattan world assumption, thus making the reconstructed model more reliable and faithful. The proposed method relies on several assumptions.
(1) The ceilings and floors are horizontal, and the vertical planes are parallel to the gravity vector [1,42].
(2) Each step in the stair area shares the same height, width, and length [26].
(3) The door is located between a pair of parallel walls.
(4) The input point cloud should be relatively homogeneous in terms of density; otherwise, the method will break during line fitting and room-space segmentation.

3. Materials and Methods

3.1. Overview

This paper focuses on room and corridor segmentation, so clear definitions of rooms and corridors are required [12,23]. In the Oxford dictionary, a room is defined as a part or division of a building enclosed by walls, a floor, and a ceiling, and a corridor can be considered a special room. Moreover, a corridor tends to traverse an entire building for convenient access to rooms [12]. In this study, a corridor is considered a special room that is connected to more than three rooms and is located in the center of the connected map. Rooms are connected by doors in walls, rather than by openings in walls; two rooms that are separated by an opening in the wall are considered to be one room. In this paper, a door is lower than the height of the wall in which it is contained, whereas an opening extends the entire height of the wall and reaches the ceiling [39].
In this paper, the geometric structure of a building interior is organized into floors [11,50,51] and the connected areas between floors. Furthermore, each floor can be deconstructed into rooms and corridors [10,11,12]. Each room, which connects other rooms or corridors via doors, is enclosed by a floor, a ceiling, and walls. The proposed method uses raw point clouds as inputs and consists of two main steps, as depicted in Figure 1: comprehensive segmentation and indoor reconstruction. Comprehensive segmentation contains two parts: story segmentation and room segmentation. The input data in this section were captured by an MLS device with a Zeb-Revo sensor [23].
  • Story segmentation: The original data are partitioned into multiple stories and the connected areas across floors with a histogram that describes the distribution of points along the z-axis.
  • Room segmentation: The points in each story are split into several slices. Then, cells are partitioned by extended wall line segments, which are extracted by a region-growing line extraction algorithm followed by a line fusion algorithm. The iterative reweighted least-squares (IRLS) algorithm is used for line fitting during region growing. Meanwhile, the room-space in each story is segmented by the morphological erosion method after projection onto a horizontal plane, and the connections between rooms are cut off by projecting the offset space below the ceiling height. Finally, the rooms in each story are segmented by overlapping the segmented room-space with the cell decomposition. The corridor is labeled via a connection analysis between the rooms on each floor: a room that connects more than three rooms and is located in the center of the connected map is labeled a corridor.
  • Indoor reconstruction: The height of each story and connected area is extracted via the histogram from the previous step. To obtain an accurate height for each room in each story, the heights of the ceiling and floor in each room are recalculated via the ceiling and floor planes, which are extracted by the RANSAC method in each room. The door locations in each room or corridor are determined by a horizontal connectivity analysis. Door reconstruction follows the work of [34]. The planes on stairs are extracted by a region-growing plane-extraction method and rebuilt by an arithmetic progression calculation of the height, length, and width of one stair. All of the story and stair models are merged with the connected-area model through the models’ coordinates. The final model is rebuilt after deleting shared areas through a union operator applied to the merged models.

3.2. Story Segmentation

Walls are assumed to be vertical and perpendicular to the floor and ceiling, although arbitrary horizontal orientations are allowed [1,42]. A histogram that describes the distribution of points along the z-axis is created, as shown in Figure 2a. The bin size has to be specified manually; a default value of 5–10 cm is suggested. Scanning a horizontal structure creates a high number of points sharing the same height [1]. Moreover, the connected areas across floors in a multistory building are sandwiched between the ceiling of the first floor and the floor of the second floor. Hence, a horizontal structure is visible as a peak in the point distribution along the gravity vector, and a connected area appears as a nadir between two peaks. A connected area is thus extracted at a nadir between two peaks whose gap is below a threshold. The threshold is determined by the thickness of the floor slab; a default value of 0.2–1 m is suggested. The partitioning result is shown in Figure 2b. One color represents one floor or one connected area across floors, such as the yellow piece in Figure 2b. Then, room segmentation is performed for each story.
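The peak-nadir-peak strategy reduces to a few lines of histogram analysis. The following Python sketch is an illustration under stated assumptions, not the authors' implementation: the function name is hypothetical, the 5 cm bin size and 1 m slab-gap threshold are the defaults suggested above, and the cloud is simply split at the detected nadir heights.

import numpy as np

def split_stories(points, bin_size=0.05, max_slab_gap=1.0):
    # Split an (N, 3) point cloud into height slabs at the nadirs
    # between close histogram peaks (hypothetical helper).
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + bin_size, bin_size)
    hist, _ = np.histogram(z, bins=edges)

    # Horizontal structures (floors/ceilings) appear as strong local peaks.
    thresh = hist.mean() + 2.0 * hist.std()
    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] >= thresh
             and hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]]

    # A nadir between two peaks closer than the slab threshold marks the
    # connected area sandwiched between a ceiling and the next floor.
    cuts = []
    for a, b in zip(peaks, peaks[1:]):
        if (b - a) * bin_size <= max_slab_gap:
            nadir = a + int(np.argmin(hist[a:b + 1]))
            cuts.append(edges[nadir])

    bounds = [z.min()] + cuts + [z.max() + bin_size]
    return [np.where((z >= lo) & (z < hi))[0]
            for lo, hi in zip(bounds[:-1], bounds[1:])]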

3.3. Room Segmentation

This section contains four steps to segment each story from the previous section: cell decomposition, room-space segmentation, overlap analysis, and corridor detection.

3.3.1. Cell Decomposition

In this step, the floor plane in each story is partitioned into decomposed cells by the extracted wall lines. The partitioned cells determine the geometric shape of the rooms, and the extracted wall lines determine the shape of the partitioned cells. Thus, all of the wall lines in the indoor environment should be detected completely and accurately. However, detecting complete wall lines directly from the original point cloud is difficult because indoor environments exhibit extremely high levels of clutter and occlusion [1,15]. Moreover, certain portions of walls are not sampled because the laser scanner’s line of sight is occluded by clutter [1,36]. To ensure that almost all wall lines in the building can be detected, the points in each story are first split into several horizontal slices [1,42,52,53] that share the same floor plan structure, as shown in Figure 3a. Then, the wall segments in all slices are extracted and projected onto a horizontal plane.
The identification is split into four steps: (1) slicing: the points are sliced into a set of pieces; (2) line extraction: a region-growing line extraction method and an IRLS line-fitting algorithm are proposed to extract segment hypotheses that represent the wall directions by extending a previous work [46]; (3) line projection and fusion: the extracted segments are projected onto the horizontal plane and merged by a line fusion algorithm; and (4) cell decomposition: the plane space is partitioned into a two-dimensional (2D) cell decomposition by lines extended from the extracted segments.
(1) Slicing: Each story is first split into several horizontal slices, as shown in Figure 3a. In this dataset, each story is split into ten slices. The number of slices is influenced by the height and density of the input point cloud.
(2) Line extraction: The points along every linear wall are separated by the region-growing method [54,55]. In this study, an initial seed point is selected in the area with the smallest curvature. The k-nearest neighbors (kNN) points that satisfy $n_p \cdot n_s > \cos(\theta_{th})$ are added to the current region, and the kNN points that satisfy $r_p < r_{th}$ are added to the list of potential seed points; growth then continues from the points in this list. The process is applied iteratively until all of the points are segmented and grouped. Here, $n_p$ is the normal of the current seed and $n_s$ is the normal of its neighbor; $\theta_{th}$ is a smoothness threshold, specified as the angle between the normals of the current seed and its neighbors; $r_p$ is the residual of a point in the list of potential seed points; and $r_{th}$ is a residual threshold, specified as a percentile of the sorted residuals.
Then, an IRLS algorithm [46] that uses an M-estimator is proposed for line fitting in each separated region. For a point cloud $P = \{p_1, \ldots, p_n\} \subset \mathbb{R}^3$, the line-fitting problem can be considered as fitting the points to a line. Least-squares line fitting is known to suffer from outliers: the standard least-squares (LS) algorithm minimizes $\sum_i dis(P_i, Seg)^2$, where $dis(P_i, Seg)$ is the distance of the $i$th point to the segment [56]. Therefore, even a single outlier can cause the result to deviate from the ground-truth value. The M-estimator, however, is robust to outliers. With the M-estimator, the line-fitting problem becomes the following IRLS problem:
$$\min \sum_{i=1}^{N} w\big(dis(P_i, Seg)\big) \, dis(P_i, Seg)^2$$
where $w(dis(P_i, Seg))$ is calculated by the Welsch weight function [57], recalculated after each iteration and used in the next iteration:
$$w(dis) = \exp\!\left(-\frac{dis^2}{k_{Wu}^2}\right), \quad k_{Wu} = 2.985$$
The distance $dis(P_i, Seg)$ is calculated by
$$dis(P_i, Seg) = (x_i - \bar{x}) \cdot n, \quad \|n\| = 1$$
where $n$ is the unit normal of the line and $\bar{x}$ is the mean of the point cloud.
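As a concrete illustration of the IRLS loop with the Welsch weight, consider the following Python sketch; it is a reading of the formulation above, not the authors' code, and the function name and iteration count are assumptions. It fits a 2D line to projected wall points, recomputing the weights after every iteration; in practice, the residuals are often divided by a robust scale estimate before weighting.

import numpy as np

def irls_fit_line(points_2d, k_wu=2.985, n_iter=20):
    # Fit a 2D line by IRLS with the Welsch weight function shown above
    # (hypothetical helper).
    pts = np.asarray(points_2d, dtype=float)
    w = np.ones(len(pts))
    for _ in range(n_iter):
        mean = np.average(pts, axis=0, weights=w)   # weighted centroid
        centered = pts - mean
        cov = (centered * w[:, None]).T @ centered  # weighted covariance
        _, eigvecs = np.linalg.eigh(cov)
        normal = eigvecs[:, 0]    # eigenvector of the smaller eigenvalue
        dis = centered @ normal   # signed point-to-line distances
        w = np.exp(-(dis / k_wu) ** 2)              # Welsch weight
    return mean, normal           # a point on the line and its unit normal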
(3) Line projection and fusion: The extracted segments in each slice are then projected onto a horizontal plane, as shown in Figure 3b. However, the projected segments contain considerable clutter because of the complex indoor environment. Furthermore, several segments are nearly coincident or collinear after the projection, as shown in Figure 3b. Line fusion is performed to reduce repeated wall lines and to obtain more accurate line segments.
In this research, the wall segments are first projected onto the x-y plane and sorted by length. Segments shorter than a length threshold are deleted. The longest projected segment is added to the final dataset. Then, each projected segment is compared with the segments in the final dataset; if it cannot be merged with any of them, it is added to the final dataset. The working details are shown in Algorithm 1, and the fusion result is shown in Figure 3c. Two preconditions are imposed here: if the angular difference between two segments is smaller than a given threshold, the segments are considered parallel; and if the distance between two parallel lines is smaller than a given threshold, the segments are considered collinear. A sketch of these two predicates follows Algorithm 1.
Algorithm 1 Line Fusion
Input: Seg: projected line segments sorted by length
min_distance: minimum distance between two segments
min_length: minimum length of segments
Initialize: seg_final; // the output segment set
add the longest segment Seg_1 into seg_final
for k = 1 to size(Seg)
 if length(Seg_k) < min_length delete Seg_k;
end for
for k = 1 to size(Seg)
 for m = 1 to size(seg_final)
  if Seg_k is parallel to seg_final_m
   if Seg_k is collinear with seg_final_m
    if a point in Seg_k is inside seg_final_m
     get the bounding box of Seg_k and seg_final_m;
     update seg_final_m through the IRLS algorithm from the points extracted along Seg_k and seg_final_m;
     break;
    else
     continue;
    end if
   end if
  end if
 end for
 if m ≥ size(seg_final)
  add Seg_k into seg_final;
 else
  continue;
 end if
end for
Return seg_final
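The two preconditions in Algorithm 1 reduce to a pair of geometric predicates. A possible Python reading follows; the threshold values are assumptions, not taken from the paper.

import numpy as np

def is_parallel(d1, d2, ang_thresh_deg=5.0):
    # Two unit direction vectors are parallel if their angular difference
    # is below the threshold; orientation (sign) is ignored.
    cos_a = abs(np.clip(np.dot(d1, d2), -1.0, 1.0))
    return np.degrees(np.arccos(cos_a)) < ang_thresh_deg

def is_collinear(p1, d1, p2, dist_thresh=0.1):
    # Two parallel 2D lines are collinear if the perpendicular distance
    # from a point of one to the other line is below the threshold.
    n = np.array([-d1[1], d1[0]])   # 2D normal of the first line
    return abs(np.dot(np.asarray(p2) - np.asarray(p1), n)) < dist_thresh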
(4) Cell decomposition: The wall segments created in the previous step are extended to lines that cross the floor plane. Then, the floor plane is partitioned by the extended lines via the CGAL [58] arrangement data structure and split into a 2D cell decomposition, as shown in Figure 3d.

3.3.2. Room-Space Segmentation

In this step, the floor plane is segmented into a room-space by using a morphological erosion method after projecting the points of one story onto a horizontal plane.
A binary image is created after the points in each story are projected onto a horizontal plane. The projection image is presented in Figure 4a: a pixel is colored black if it contains no points and gray if it contains at least one point. The size of each pixel is set to 25 mm in this dataset; it is determined by the thickness of the walls and the size and density of the point cloud. We suggest that the pixel size should be less than 1/5 of the wall thickness. Then, each pixel with at least one point inside is labeled 1, and each pixel with no points inside is labeled 0. The nonblack pixels in the projected image are colored white and the black pixels remain black, as shown in Figure 4b. The white pixels in the binary image represent the accessible areas inside the building, while the black pixels indicate inaccessible areas, such as walls and outer areas.
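Building the occupancy image is straightforward. The following sketch rasterizes one story's points onto the x-y plane; the 25 mm default matches the pixel size used for this dataset, and the helper name is hypothetical.

import numpy as np

def project_to_binary_image(points, pixel_size=0.025):
    # Rasterize a story's points onto the x-y plane: 1 = occupied, 0 = empty.
    xy = np.asarray(points)[:, :2]
    origin = xy.min(axis=0)
    idx = np.floor((xy - origin) / pixel_size).astype(int)
    img = np.zeros(tuple(idx.max(axis=0) + 1), dtype=np.uint8)
    img[idx[:, 0], idx[:, 1]] = 1   # any pixel containing at least one point
    return img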
The rooms, including long corridors, are segmented by a morphological erosion method. The algorithm is inspired by the work of [35,59]. The morphological erosion method has two important parameters: the room area’s lower limit (lower threshold) and upper limit (upper threshold). The lower threshold represents the smallest room size in the data, while the upper threshold represents the largest room size.
A small value for the upper threshold will lead to over-segmentation, especially in a long corridor, as shown in Figure 5a,b, because a long corridor, which usually traverses the entire building [11,12], tends to occupy a large space in the floor map. Furthermore, if the size of the largest room is far larger than that of the smallest room, a large upper threshold will lead to under-segmentation in certain adjacent domains, as shown in Figure 5c,d. Given these problems, room-space segmentation is error-prone and unreliable. If no connection exists between rooms, under-segmentation becomes rare, and the room segmentation results become more stable.
By definition, a room is enclosed by walls, ceilings, and floors, while doorways break the closure of a room. Thus, the doorways in each room that lead to other rooms or corridors should be closed to obtain a better result. An offset space, defined by an interval below the ceiling height, is introduced, as shown in Figure 6 and Figure 7. The points above this interval along the z-axis are projected onto the horizontal plane as boundaries, as shown in Figure 8a. Vertical walls are as high as the ceiling, whereas most clutter, windows, and doorways are lower than the ceiling height, as shown in Figure 6. Moreover, this method can easily discern an open door from an opening in a wall, as illustrated in Figure 7.
The accessible pixels in the maps (white pixels in Figure 8a) are iteratively eroded by one pixel. Then, connectivity analysis is performed to verify whether any areas are separated after erosion. If a separated area has a size between the lower and upper thresholds after connectivity analysis, all of the pixels in this area are labeled as an individual area, as shown in Figure 8b. This procedure repeats until all of the remaining areas are smaller than the lower threshold. If one area is surrounded by another, the area is merged into the surrounding region, as shown in Figure 8b. Based on the labeled areas in Figure 8c, the unlabeled area in the accessible regions is extended by a wavefront propagation algorithm [60], as shown in Figure 8d. The working details are shown in Algorithm 2. The thresholds are chosen according to the room sizes in the data: the upper threshold is approximately the size of the largest room, while the lower threshold is approximately the size of the smallest room. The two thresholds are set to 20 and 70 m² in the current dataset.
Algorithm 2 Room-Space Segmentation Method
Input: binary_image: binary image generated by projecting the point clouds of one floor
max_erode: maximum number of erosion iterations
lower_threshold: lower limit of room area
higher_threshold: higher limit of room area
cell_size: size of one cell
Initialize: labels; // the zero set, which has the same size as binary_image
Outmap; // segmentation result
count = 0;
for i = 1 to max_erode
 binary_image = erode(binary_image, strel('disk', 1));
 regions = edge detection on binary_image;
 for each α ∈ regions
  room_area = cell_size * cell_size * area of α;
  if room_area > lower_threshold && room_area < higher_threshold
   labels in α = count;
   count++;
  end if
 end for
end for // labels shown in Figure 8b
for i = 1 to unique(labels)
 if labels_i is surrounded by labels_j
  labels_i = j;
 end if
end for // labels shown in Figure 8c
Outmap = wavefront algorithm on labels; // Outmap shown in Figure 8d
Return Outmap
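For readers who prefer runnable code over pseudocode, the following Python sketch reimplements the core of Algorithm 2 with scipy.ndimage. It is an approximation under stated simplifications: the surrounded-region merge step is omitted, and a nearest-seed assignment via a distance transform stands in for the wavefront propagation of [60].

import numpy as np
from scipy import ndimage

def segment_rooms(binary_image, cell_size=0.025, lower=20.0, upper=70.0,
                  max_erode=100):
    # Iterative-erosion room segmentation, loosely following Algorithm 2.
    labels = np.zeros(binary_image.shape, dtype=int)
    img = binary_image.astype(bool)
    count = 1
    for _ in range(max_erode):
        img = ndimage.binary_erosion(img)
        comps, n = ndimage.label(img)
        for c in range(1, n + 1):
            mask = comps == c
            area = mask.sum() * cell_size ** 2   # area in square meters
            if lower < area < upper and labels[mask].max() == 0:
                labels[mask] = count             # freeze this room seed
                count += 1
        if not img.any():
            break
    # Grow the frozen seeds back over the accessible area: each unlabeled
    # pixel takes the label of its nearest seed.
    _, (ri, ci) = ndimage.distance_transform_edt(labels == 0,
                                                 return_indices=True)
    out = labels[ri, ci]
    out[~binary_image.astype(bool)] = 0          # keep walls unlabeled
    return out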

3.3.3. Overlap Analysis

The cells in the floor plane were partitioned in Section 3.3.1, and the room-space was segmented in Section 3.3.2. The aim of this section is to create a floor map by overlapping the partitioned cells with the segmented room-space.
The result of overlapping the cells with the segmented room-space is shown in Figure 9a. Then, a random point set is created in the space, as described in Figure 9b. However, in some special cases, no points fall inside a small cell. To handle such cases, a center point set containing the center point of each cell is added to the random point set. Each point extracts its label from the room-space segmentation results. The value of each cell is then determined from the inside points by applying two rules sequentially:
  • Rule 1: The number of points with the same label in the cell is calculated. Then, the cell is assigned a label based on the label that occurs with the highest frequency in this cell.
  • Rule 2: If a labeled cell is surrounded by the same labeled cells, this cell will be labeled with the same label.
The results are visualized in Figure 9c. Then, the cells with the same label are merged, and the cells labeled 0 are deleted. The final map is shown in Figure 9d.
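Rule 1 is a simple majority vote over the points sampled inside each cell. A minimal sketch follows (hypothetical helper, with label 0 standing for unsegmented space):

from collections import Counter

def label_cell(point_labels):
    # Rule 1: assign the label that occurs most often among the points
    # sampled inside the cell; label 0 (unsegmented space) never wins.
    votes = Counter(l for l in point_labels if l != 0)
    return votes.most_common(1)[0][0] if votes else 0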

3.3.4. Corridor Detection

The door locations are detected between pairs of parallel walls [34,42] within a threshold distance along the normal of the wall planes. This threshold is determined by the thickness of the wall; a default value is 0.5 m. If two rooms are joined by a door, these two rooms are connected. Then, a graph of connected rooms is created by linking the room nodes that have a connected relationship, as shown in Figure 10. The red points are boundary nodes, and the green points are center nodes. A room that is connected to more than three rooms and is located in the center of the graph is labeled a corridor.
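The connectivity test lends itself to a small graph sketch. The following fragment uses networkx, an assumed dependency not used in the paper, and takes closeness centrality as one possible proxy for "located in the center of the graph".

import networkx as nx

def find_corridors(door_pairs):
    # Rooms connected to more than three rooms, ranked by closeness
    # centrality as a stand-in for being central in the connected map.
    g = nx.Graph(door_pairs)   # edges: (room_i, room_j) joined by a door
    centrality = nx.closeness_centrality(g)
    candidates = [r for r in g.nodes if g.degree(r) > 3]
    return sorted(candidates, key=centrality.get, reverse=True)

For example, find_corridors([(1, 2), (2, 3), (2, 4), (2, 5)]) returns [2], the hub room.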

3.4. Indoor Reconstruction

3.4.1. Story Reconstruction

The height of the ceiling and floor in each room is generated by horizontal plane fitting via the RANSAC algorithm [5,9,34]. The mesh geometry model of one story is created from the floor map by constrained Delaunay triangulation (CDT) [60,61,62,63], as shown in Figure 11; the model is colored and displayed with Google Sketchup [64].

3.4.2. Connected Area Reconstruction

The ceiling height and floor height of the connected area are extracted from the histogram in Section 3.2. Then, Delaunay triangulation (DT) is employed to reconstruct the connected areas.

3.4.3. Stair Reconstruction

Typically, all of the steps in the connected area share the same length, width, and height [26]. Thus, stair models can be reconstructed by step-plane fitting, step-attribute extraction, and model reconstruction. The points in a stair area are first extracted by vertical extrusion from a connected area. The extracted connected area is shown in Figure 12a. Then, the step planes are extracted by using an NDT-RANSAC plane-filter method and region-growing plane-extraction method. Finally, the length, width, and height of each stair and the number of steps are obtained by using an arithmetic progression calculation in the stair area.
(1) Stair-plane extraction: Many non-step planes, such as walls, ceilings, and floors, are present in stair areas. Thus, a coarse search for step planes is proposed to limit the influence of these surfaces. The planes in a stair area are extracted from the point clouds by our previous NDT-RANSAC algorithm [46] to filter out the non-step surfaces. The extracted non-step surfaces are shown in Figure 12b, and the filtered result is shown in Figure 12c. Then, the region-growing method [55] is performed on the filtered points to extract the planes on the stairs, as shown in Figure 12d.
(2) Stair-attribute extraction and model reconstruction: The points on the steps after manually removing non-stair surfaces using CloudCompare [65] are shown in Figure 12e. Then, the length, width, and height of each stair are obtained by an arithmetic progression calculation. The recovered stair model is shown in Figure 12f.
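The arithmetic progression calculation can be made concrete with a short sketch: given the heights of the extracted step planes, the common difference of the progression is the riser height, and the number of terms is the step count. The helper below is hypothetical and assumes equal risers, as stated above; the same calculation applies to step length and width along the stair's horizontal axes.

import numpy as np

def stair_attributes(step_plane_heights):
    # Estimate riser height and step count from step-plane heights that
    # form an arithmetic progression (equal risers).
    z = np.sort(np.asarray(step_plane_heights, dtype=float))
    riser = np.median(np.diff(z))   # robust to a missed or split plane
    n_steps = int(round((z[-1] - z[0]) / riser)) + 1
    return riser, n_steps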

3.4.4. Merging Process

The story models from the previous step are merged into the connected area model according to their coordinates. In the story-segmentation section, the story and connected area in this building were separated by a histogram-based method. The upper surface of the connected area shares the same height as the floor of the second floor, while the lower surface of the connected area shares the same height as the ceiling of the first floor. Although each room in the story model has different floor and ceiling heights, the height of the entire story was restricted by the extracted height from the histogram. Thus, no gap existed between the connected area models and the story models.
However, some shared areas exist between the connected area and the floor and ceiling after joining all the models together by their coordinates. Then, a union operator is applied between the surface in the story models and the connected area model to delete the shared area. The final results are colored and displayed by using Google Sketchup, as shown in Figure 13.

4. Experimental Test

4.1. Input Data

The proposed method was tested on seven real and eight synthetic datasets of indoor scenes. The algorithm was implemented with the Computational Geometry Algorithms Library (CGAL), CloudCompare, and MATLAB. All of the experiments were performed on a 3.60 GHz Intel Core i7-4790 processor with 12 GB of RAM.
Real dataset: Figure 14a illustrates the seven real building model datasets, and their statistics are shown in Table 1. Dataset-1 was captured by an MLS device from [24]. Dataset-2 and -3 were obtained from [23]: Dataset-2 was captured by an MLS device with a Zeb-Revo sensor, and Dataset-3 was acquired with a Zeb-1 sensor. Dataset-4 and -5 were provided by [20,22] and were captured by RGB-D sensors. Dataset-6 and -7 were acquired by RGB-D sensors from [21]. Clutter and occlusion were present in these datasets. Dataset-1, -2, and -3 were captured by an MLS device; the density of these point clouds was moderate, and their accuracy was much better than that of Dataset-4 to -7, which were obtained by RGB-D sensors. Dataset-3, -4, and -5 were acquired in multistory buildings. The clutter in Dataset-3 was low, while the clutter and occlusion were moderate in Dataset-4 and -5, especially on their second floors.
Synthetic dataset: Eight synthetic datasets were created with Google Sketchup to evaluate the method. The point clouds were sampled from an exported mesh, and a small amount of uniform noise (5 cm) was added after sampling. Figure 15a illustrates these synthetic datasets of indoor scenes, and their statistics are shown in Table 1. All of the synthetic data were acquired in a multistory building with long corridors, except for Synthetic Data-5. Synthetic Data-1 tested common buildings with straight corridors that traverse the entire building and connected areas across several floors. Synthetic Data-2 tested common buildings with L-shaped corridors. Synthetic Data-3 was designed to evaluate the performance on a multistory building with a ring-shaped corridor. Synthetic Data-4 tested the reconstruction of indoor interiors with round rooms and curving walls. Synthetic Data-5 was a large-scale indoor environment with more than fifty rooms that had different ceiling and floor heights; moreover, the wall thickness in certain rooms varied, which inhibited wall-line extraction and room segmentation. Synthetic Data-6, -7, and -8 shared the same indoor structures, but their scanning conditions differed. Synthetic Data-6 tested the reconstruction of a building with arbitrary orientations about the z-axis; none of its wall lines were restricted to the x-axis or y-axis. In Synthetic Data-7, abundant furniture was present in the building, especially at the corners of walls; many regions were missing in the corners, which inhibited indoor wall-line extraction. In Synthetic Data-8, many areas were removed from the scanning data to test the effect of missing data, as shown in row 8 in Figure 15b. The removed areas on the first floor of Synthetic Data-8 were located inside a room, and the removed areas on the second floor were located in the area sandwiched between rooms.
Quantitative evaluations on the reconstruction results were conducted by using five metrics: IoU (intersection over union), DDP (Euclidean distance deviation between corner points), ADR (area deviation between rooms), completeness and correctness.
The IoU metric is defined as the ratio between the area of intersection and the area of union when the segmented map is overlaid on the ground truth. The DDP metric represents the deviations between selected corner points in the created floor map and the reference data; this measure indicates the robustness against over- or under-segmentation [36]. The ADR metric represents the deviation in area between rooms at the same location in the created floor map and the ground-truth data.
$$IoU = \frac{Area\ of\ Intersection}{Area\ of\ Union}$$
$$DDP = dis(P_m, P_a)$$
$$ADR = Area_m - Area_a$$
$$Completeness = \frac{TP}{TP + FN}$$
$$Correctness = \frac{TP}{TP + FP}$$
where $P_m$ denotes the selected corner points in the ground truth; $P_a$ represents the same points from the proposed method; $Area_m$ is the area of the room in the reference map; $Area_a$ is the area of the same room in the reconstructed floor map; TP represents true positives, i.e., the number of rooms, walls, or doors that were detected in both the reconstructed building and the ground truth; FP represents false positives, i.e., the number of detected rooms, walls, or doors that were not found in the ground truth; and FN represents false negatives, i.e., the number of undetected ground-truth rooms, walls, or doors.
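These metrics are straightforward to compute. The sketch below uses shapely polygons for the area terms; the dependency and helper names are illustrative assumptions, not part of the paper's pipeline.

from shapely.geometry import Polygon

def room_metrics(poly_truth, poly_model):
    # IoU and ADR for one room footprint (reference vs. reconstructed).
    inter = poly_truth.intersection(poly_model).area
    union = poly_truth.union(poly_model).area
    return {"IoU": inter / union,
            "ADR": poly_truth.area - poly_model.area}

def completeness(tp, fn):
    return tp / (tp + fn)

def correctness(tp, fp):
    return tp / (tp + fp)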
The ground-truth floor plans in Dataset-6 and -7 were obtained from [21]. The synthetic data were sampled from an exported mesh, which was built from a designed floor plan with Google Sketchup. Thus, we could use a ground-truth floor plan and 3D model. The parameters used in this study are included in Table S1 in the supplementary file.

4.2. Real Dataset Results

The reconstruction results of the real dataset are shown in Figure 14. The parameters for the real datasets are included in Table S2 in the supplementary file.

4.3. Synthetic Dataset Results

Eight synthetic point clouds were created to verify the feasibility of the proposed method via a more statistical method. The reconstructed results for the synthetic data are shown in Figure 15. The parameters for the synthetic datasets are included in Table S3 in the supplementary file.

5. Discussion

5.1. Real Dataset Evaluation

The quality of the reconstructed results in the real datasets was evaluated through two steps: general evaluation and floor map evaluation.

5.1.1. General Evaluation

Evaluating the reconstructed results for real data is difficult without ground-truth data; the evaluation of the real data is therefore shown in Table 2. In these tests, a corridor is considered to be one room, and only doors that connect to rooms are considered.
As Table 2 shows, all of the indoor rooms and corridors in the real datasets were detected in both the raw data and the reconstructed models, except for Dataset-5 and -6. A false room and an under-segmented room both occurred on the second floor of Dataset-5, as shown in the red and purple boxes in Figure 16. The room in the purple box exhibited extremely high levels of clutter, and heavy occlusion from walls and other structures prevented the doorway from being closed. Moreover, this room was very small (2.5 m²), while its connected room was the largest room (more than 50 m²). These two factors caused the under-segmentation of the room in the purple box. The error in the red box mainly occurred because of missing data in the connected area between the corridor and this room. Two rooms were merged in Dataset-6 because two adjacent regions separated by an opening in the wall (not a door) are considered to be one room, as shown in the purple region in row 6 in Figure 15c.
The number of reconstructed doors was the same as that in the real model, except for Dataset-5 and -6. The false room-segmentation results led to the wrong number of detected doors: if two adjacent regions are merged, the door between the two rooms cannot be detected. Furthermore, a door between two adjacent regions may be falsely added because of heavy noise in the wall, as shown in the green box in Figure 16.
These results showed that the proposed method was robust for indoor interior reconstruction of real-world datasets. However, the test on Dataset-5 showed that room segmentation may fail in very small rooms with high levels of clutter. The test on Dataset-6 indicated that the door-detection method encounters difficulties when doors are located in under-segmented regions.

5.1.2. Floor Map Evaluation

We used the same dataset as [21] to evaluate the room segmentation method and compare the results with related works. The results and a comparison with the state of the art are shown in Table 3. The results of the Voronoi method [35], Ochmann et al. [40], and Mura et al. [14] were referenced. The completeness and correctness metrics were calculated from the number of detected rooms in the reconstructed building against the ground truth. The calculated IoU metrics for each room and corridor are shown in Figure 17.
As Table 3 shows, all of the indoor rooms and corridors were detected by the proposed method. Moreover, the mean IoU was more than 95%, so the detected rooms and the ground truth were almost consistent. According to [21], the method of Ochmann et al., which segments rooms by graph cut, tended to over-segment corridor areas in both datasets, although its results for detecting true walls were good. Labeling the partitioned cells in the corridor area via energy minimization, resolved by a graph-cut operation, was difficult because of the presence of implausible walls. As Table 3 shows, the completeness and correctness of Mura et al. were high, while the room areas were more prone to errors, such as in Dataset-6, possibly because the method of Mura et al. encodes the environment into six types of structural paths, and some cases in Dataset-6 challenged this approach.
As Table 3 shows, the proposed method had better room-segmentation results when compared to the Voronoi Method [35], Ochmann et al. [40], and Mura et al. [14]. As Figure 17 shows, the IoUs of all the corridors were more than 94%, which indicates that under- or over-segmentation in long corridors was overcome by using the proposed method. This performance could be explained as follows: (i) the projected offset space cut off the connection between rooms and improved the precision of room-space segmentation, and (ii) overlapping the room-space segmentation results with partitioned cells through wall lines ensured the precision of boundaries in each room. Furthermore, the methods of Ochmann et al. and Mura et al. relied on viewpoints, while our method did not require prior knowledge regarding viewpoints.

5.2. Synthetic Dataset Evaluation

The quality of the reconstructed results for the synthetic datasets was evaluated via a comparison between the results and ground-truth data. This evaluation contained two steps: floor map evaluation and wall evaluation.

5.2.1. Floor Map Evaluation

The quality of the floor map results was evaluated by comparing the floor map of the reconstructed model and those of the ground-truth planes. Quantitative evaluations on a 2D plane were conducted by using three metrics: IoU, DDP, and ADR.
Table 4 lists the calculated evaluation metrics for the eight synthetic datasets. The calculated IoU metric for each room and corridor is shown in Figure 18, the DDP metrics are shown in Figure 19, and the ADR metrics are shown in Figure 20. The IoU, DDP, and ADR metrics for each room and corridor in Synthetic Data-4 are not shown in Figure 18, Figure 19 and Figure 20, as this paper addresses indoor interiors with perpendicular walls, not curved walls.
As Table 4 shows, all of the indoor rooms and corridors were reconstructed by the proposed method. This finding shows that the method is robust in terms of detecting room and corridor numbers. The IoU of each floor was more than 95%, except for Synthetic Data-4 and the second floor of Synthetic Data-8, which shows that the reconstructed rooms and corridors were almost consistent with the ground truth. The results demonstrate the effectiveness of the proposed method for room-space segmentation; under- and over-segmentation were overcome in this study. Moreover, the calculated DDPs in most rooms were under 15 cm, and some values were below 5 cm, showing that corresponding points in the floor map and ground-truth data were extremely close. The absolute values of the ADR on each floor were below 2 m², except for Synthetic Data-4 and the second floor of Synthetic Data-8, which shows that the reconstructed areas in the floor plan were similar to the ground-truth areas. All of the experiments showed the robustness and capability of the proposed method for floor map reconstruction.
The IoUs of all the corridors, including straight corridors (Synthetic Data-1), ring-shaped corridors (Synthetic Data-3, -5, and -6), and L-shaped corridors (Synthetic Data-2), were above 95%, except for the second floor of Synthetic Data-4, which indicates that under- and over-segmentation in long corridors were overcome by the proposed method. The detected corridors were nearly consistent with the ground-truth data. The DDPs of all the corridors were less than 15 cm, while the added uniform noise was 5 cm. Furthermore, the absolute deviation of the area of each corridor was less than 1.5 m², except for Synthetic Data-4. All of the experiments showed the robustness of the proposed method for corridor detection and reconstruction.
According to the evaluation of Synthetic Data-4 in Table 4, curved walls and round rooms could not be correctly reconstructed, as shown in row 4 in Figure 15c. Instead, a curve was detected as a set of lines along the curve because our method is designed for wall-line extraction and considers a curve to be a set of lines. Thus, the proposed method can only address indoor interiors with perpendicular walls, not curved walls. According to the evaluation of Synthetic Data-6, the proposed method was robust for floor map reconstruction with vertical walls parallel to the gravity vector. However, clutter and occlusion, especially in corners, affected the level of detail in the reconstruction without altering the coarse structure, according to the evaluation of the first floor of Synthetic Data-7 in Table 4. A small amount of clutter and occlusion slightly affects small details; with more clutter, fewer details may be reconstructed, although the coarse structure can still be correctly recovered, as shown in row 7 in Figure 15c. For partially scanned rooms, some walls were not sampled at all, especially in the corners in Synthetic Data-7; this missing data may have hampered the extraction of wall lines, so some walls could not be detected. According to the performance on the first floor of Synthetic Data-8 in Table 4, missing data inside rooms could be handled by the proposed method because the labels of the cells in a room are determined by the overlap, and interior non-value cells surrounded by cells with the same label are labeled by their neighbors’ label. However, the missing data in the wall areas hampered the reconstructed floor map, as shown in the evaluation of the second floor of Synthetic Data-8: the missing data in the walls prevented the extraction of certain walls, so the floor map of the second floor of Synthetic Data-8 could not be adequately reconstructed, as shown in row 8 in Figure 15c.
According to Figure 18, the IoU of room 31 in Synthetic Data-8 was below 75%, and the ADR was approximately 7 m². This is because many points (almost 30%) in this room and along its walls had been removed, and the missing data hampered the reconstructed result, as shown in the dark green region in row 8 in Figure 15c. The same is true for the other rooms in Synthetic Data-8. According to Figure 18, the IoU of room 7 in Synthetic Data-2 was below 88%. Furthermore, the DDP of room 7 was higher than 0.5 m, as shown in Figure 19. Room 7 was a small room with an area of 1.52 m² and a certain amount of clutter along the wall. Small-room reconstruction is influenced by implausible extracted wall lines, which demonstrates that the proposed method must be improved for small rooms. The same applies to rooms 40 and 41 in Synthetic Data-5.

5.2.2. Wall Evaluation

The quality of the wall reconstruction results was evaluated by comparing the created walls with the ground-truth data. Quantitative evaluations of the walls were conducted by using two metrics: completeness and correctness. The completeness and correctness metrics of walls were calculated from the number of detected walls in the reconstructed building against the ground truth. The completeness and correctness metrics of doors were calculated from the number of detected doors in the reconstructed building against the ground truth.
The results of the assessments of walls are presented in Table 5.
As shown in Table 5, the correctness of the wall and door numbers indicates that the reconstructed walls and doors could be detected in both the reference data and the reconstructed models. Moreover, the completeness and correctness of the wall and door numbers were more than 0.82, except for the second floor of Synthetic Data-7. These experiments show the stability and capability of the proposed method for indoor wall reconstruction.
However, certain reconstructed walls were not found in the ground-truth data, as shown in Figure 21, because certain noise points in the corner influenced the results of the wall line extraction.

5.3. Limitations

This paper presents a method to reconstruct multistory interiors. The model handles constructions under the weak Manhattan world assumption, in which ceilings and floors are horizontal and vertical planes are parallel to the gravity vector. However, some buildings in the real world contain non-vertical walls and inclined floors, such as lofts or attics (mansards); our approach fails in such cases.
Locating connected areas and stories in point clouds (Section 3.2) depends on the number of points detected on horizontal structures. For large-scale or multifunctional buildings, such horizontal structures require a very large number of sample points; in such cases, the proposed method may achieve only low accuracy during story segmentation.
The label of each room is determined by the offset space and the morphological erosion method. Heavy occlusion along walls and other structures may prevent the connections between rooms from being cut off through the offset space. This case occurs in very small rooms, which tend to be storage rooms with abundant clutter, such as the second floor of Dataset-5 and the first floor of Synthetic Data-2. Thus, this method exhibits limitations in terms of reconstructing very small 3D indoor rooms.
Finally, this paper presents a comprehensive segmentation method to reconstruct indoor interiors. The output is a reconstructed mesh model. In terms of BIM standards, many elements, such as walls, floors, and other elements, are represented by volumetric solids, rather than surfaces. The reconstruction of these elements and clash detection in reconstructed buildings were not examined in this research.

6. Conclusions

Current methods of reconstructing 3D indoor interiors focus on room-space in each individual story and show obvious defects in terms of reconstructing long corridors and connected spaces across floors. To eliminate such deficiencies, this paper presented a comprehensive segmentation method for the reconstruction of 3D indoor interiors that includes multiple stories, long corridors, and connected areas across floors. The proposed approach overcomes the over-segmentation of graph-cut operations when reconstructing long corridors and reconstructs connected areas across multiple floors by removing shared surfaces.
The proposed method was tested on different datasets, including seven real building models and eight synthetic models. The experiments on the real models showed that the proposed method reconstructs indoor interiors without viewpoint information, which is essential for other methods. The experiments on the synthetic models with ground-truth data showed that the proposed method outputs accurate 3D models, with overall IoUs reaching 95% and almost all corridor IoUs above 95%. These findings show that the proposed method is appropriate for reconstructing long corridors. The experiments demonstrated the robustness of the proposed method.
However, this method can only address indoor interiors with vertical walls and horizontal floors, and it has difficulty reconstructing very small rooms. The reconstruction of lofts and attics will be considered in future work. We will also extend our method to reconstruct volumetric solid models rather than surfaces.

Supplementary Materials

The following are available online at https://www.mdpi.com/2072-4292/10/8/1281/s1. The input parameters in this paper are shown in Table S1. The parameters for the real datasets are shown in Table S2. The parameters for the synthetic datasets are shown in Table S3.

Author Contributions

Conceptualization, L.L., F.S., F.Y. and S.Y.; Data curation, F.L.; Investigation, X.Z.; Methodology, L.L., F.S., F.Y., H.Z., D.L., X.Z., F.L., Y.L. and S.Y.; Resources, Y.L.; Validation, F.S. and F.Y.; Writing—original draft, F.S.; Writing—review & editing, L.L.

Funding

This research was funded by the National Natural Science Fund of China (41471325, 41671381, 41531177), Scientific and Technological Leading Talent Fund of National Administration of Surveying, Mapping and Geo-information (2014), The National Key R&D Program of China (2016YFF0201300, 2017YFB0503500), Hubei Provincial Natural Science Fund (2017CFA050) and Wuhan ‘Yellow Crane Excellence’ (Science and Technology) program (2014).

Acknowledgments

The authors acknowledge the ISPRS WG IV/5 for the acquisition of the 3D point clouds, and thank Axel Wendt [21], Angel Chang [22] for help in accessing and processing the data, and Satoshi Ikehata [34] for their assistance with this paper.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Oesau, S.; Lafarge, F.; Alliez, P. Indoor scene reconstruction using feature sensitive primitive extraction and graph-cut. ISPRS J. Photogramm. Remote Sens. 2014, 90, 68–82.
  2. Diakité, A.A.; Zlatanova, S. Spatial subdivision of complex indoor environments for 3D indoor navigation. Int. J. Geogr. Inf. Sci. 2018, 32, 213–235.
  3. Zeng, L.; Kang, Z. Automatic recognition of indoor navigation elements from kinect point clouds. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W7, 431–437.
  4. Xiong, X.; Adan, A.; Akinci, B.; Huber, D. Automatic creation of semantically rich 3D building models from laser scanner data. Autom. Constr. 2013, 31, 325–337.
  5. Jung, J.; Hong, S.; Yoon, S.; Kim, J.; Heo, J. Automated 3D Wireframe Modeling of Indoor Structures from Point Clouds Using Constrained Least-Squares Adjustment for As-Built BIM. J. Comput. Civ. Eng. 2015, 30.
  6. Staats, B.R.; Diakité, A.A.; Voûte, R.L.; Zlatanova, S. Automatic generation of indoor navigable space using a point cloud and its scanner trajectory. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, IV-2/W4, 393–400.
  7. Tang, P.; Huber, D.; Akinci, B.; Lipman, R.; Lytle, A. Automatic reconstruction of as-built building information models from laser-scanned point clouds: A review of related techniques. Autom. Constr. 2010, 19, 829–843.
  8. Hong, S.; Jung, J.; Kim, S.; Cho, H.; Lee, J.; Heo, J. Semi-automated approach to indoor mapping for 3D as-built building information modeling. Comput. Environ. Urban Syst. 2015, 51, 34–46.
  9. Jung, J.; Hong, S.; Jeong, S.; Kim, S.; Cho, H.; Hong, S.; Heo, J. Productive modeling for development of as-built BIM of existing indoor structures. Autom. Constr. 2014, 42, 68–77.
  10. Khoshelham, K.; Vilariño, L.D. 3D Modelling of Interior Spaces: Learning the Language of Indoor Architecture. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, XL-5, 321–326.
  11. Becker, S.; Peter, M.; Fritsch, D.; Philipp, D.; Baier, P.; Dibak, C. Combined Grammar for the Modeling of Building Interiors. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, II-4/W1, 1–6.
  12. Becker, S.; Peter, M.; Fritsch, D. Grammar-Supported 3D Indoor Reconstruction from Point Clouds for As-Built BIM. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, II-3/W4, 17–24.
  13. Hornung, A.; Kobbelt, L. Robust reconstruction of watertight 3D models from non-uniformly sampled point clouds without normal information. In Proceedings of the Eurographics Symposium on Geometry Processing, Cagliari, Italy, 26–28 June 2006.
  14. Mura, C.; Mattausch, O.; Pajarola, R. Piecewise-planar reconstruction of multi-room interiors with arbitrary wall arrangements. In Proceedings of the Pacific Conference on Computer Graphics and Applications, Okinawa, Japan, 11–14 October 2016.
  15. Mura, C.; Mattausch, O.; Villanueva, A.J.; Gobbetti, E.; Pajarola, R. Automatic room detection and reconstruction in cluttered indoor environments with complex room layouts. Comput. Graph. 2014, 44, 20–32.
  16. Mura, C.; Mattausch, O.; Villanueva, A.J.; Gobbetti, E.; Pajarola, R. Robust Reconstruction of Interior Building Structures with Multiple Rooms under Clutter and Occlusions. In Proceedings of the International Conference on Computer-Aided Design and Computer Graphics, Guangzhou, China, 16–18 November 2013.
  17. Musialski, P.; Wonka, P.; Aliaga, D.G.; Wimmer, M.; Van Gool, L.; Purgathofer, W. A Survey of Urban Reconstruction. Comput. Graph. Forum 2013, 32, 146–177.
  18. Chen, K.; Lai, Y.K.; Hu, S.M. 3D indoor scene modeling from RGB-D data: A survey. Comput. Vis. Media 2015, 1, 267–278.
  19. Chen, C.; Yang, B. Dynamic occlusion detection and inpainting of in situ captured terrestrial laser scanning point clouds sequence. ISPRS J. Photogramm. Remote Sens. 2016, 119, 90–107.
  20. Matterport3D Datasets. Available online: https://niessner.github.io/Matterport/ (accessed on 30 May 2018).
  21. Ambruş, R.; Claici, S.; Wendt, A. Automatic Room Segmentation from Unstructured 3-D Data of Indoor Environments. IEEE Robot. Autom. Lett. 2017, 2, 749–756.
  22. Chang, A.; Dai, A.; Funkhouser, T.; Halber, M.; Niessner, M.; Savva, M.; Song, S.; Zeng, A.; Zhang, Y. Matterport3D: Learning from RGB-D Data in Indoor Environments. In Proceedings of the International Conference on 3D Vision, Qingdao, China, 10 October 2017.
  23. Khoshelham, K.; Vilariño, L.D.; Peter, M.; Kang, Z.; Acharya, D. The ISPRS benchmark on indoor modelling. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W7, 367–372.
  24. Liu, M.; Colas, F.; Siegwart, R. Challenging data sets for point cloud registration algorithms. Int. J. Robot. Res. 2012, 31, 1705–1711.
  25. Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for Point-Cloud Shape Detection. Comput. Graph. Forum 2010, 26, 214–226.
  26. Sanchez, V.; Zakhor, A. Planar 3D modeling of building interiors from point cloud data. In Proceedings of the IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012.
  27. Budroni, A.; Boehm, J. Automated 3D Reconstruction of Interiors from Point Clouds. Int. J. Archit. Comput. 2010, 8, 55–73.
  28. Budroni, A.; Böhm, J. Automatic 3D Modelling of Indoor Manhattan-World Scenes from Laser Data. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, 38, 115–120.
  29. Budroni, A.; Böhm, J. Toward automatic reconstruction of interiors from laser data. Carbon Lett. 2010, 11, 127–130.
  30. Budroni, A. Automatic model reconstruction of indoor Manhattan-world scenes from dense laser range data. Laryngoscope 2014, 120.
  31. Adan, A.; Huber, D. 3D Reconstruction of Interior Wall Surfaces under Occlusion and Clutter. In Proceedings of the International Conference on 3D Imaging, Modeling, Processing, Hangzhou, China, 16–19 May 2011.
  32. Adán, A.; Quintana, B.; Vázquez, A.S.; Olivares, A.; Parra, E.; Prieto, S. Towards the automatic scanning of indoors with robots. Sensors 2015, 15, 11551–11574.
  33. Turner, E.; Cheng, P.; Zakhor, A. Fast, Automated, Scalable Generation of Textured 3D Models of Indoor Environments. IEEE J. Sel. Top. Signal Process. 2015, 9, 409–421.
  34. Ikehata, S.; Yang, H.; Furukawa, Y. Structured Indoor Modeling. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015.
  35. Bormann, R.; Jordan, F.; Li, W.; Hampp, J.; Hägele, M. Room segmentation: Survey, implementation, and analysis. In Proceedings of the IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16–21 May 2016.
  36. Jung, J.; Stachniss, C.; Kim, C. Automatic Room Segmentation of 3D Laser Data Using Morphological Processing. ISPRS Int. J. Geo-Inf. 2017, 6, 206.
  37. Mielle, M.; Magnusson, M.; Lilienthal, A.J. A method to segment maps from different modalities using free space layout—MAORIS: MAp of RIpples Segmentation. arXiv 2017, arXiv:1709.09899.
  38. Ochmann, S.; Vock, R.; Wessel, R.; Tamke, M.; Klein, R. Automatic generation of structural building descriptions from 3D point cloud scans. In Proceedings of the International Conference on Computer Graphics Theory and Applications, Lisbon, Portugal, 5–8 January 2014.
  39. Wang, R.; Xie, L.; Chen, D. Modeling Indoor Spaces Using Decomposition and Reconstruction of Structural Elements. Photogramm. Eng. Remote Sens. 2017, 83, 827–841.
  40. Ochmann, S.; Vock, R.; Wessel, R.; Klein, R. Automatic reconstruction of parametric building models from indoor point clouds. Comput. Graph. 2016, 54, 94–103.
  41. Capobianco, R.; Gemignani, G.; Bloisi, D.D.; Nardi, D.; Iocchi, L. Automatic Extraction of Structural Representations of Environments. In Proceedings of the International Conference on Intelligent Autonomous Systems, Padua, Italy, 15–19 July 2014; pp. 721–733.
  42. Xiao, J.; Furukawa, Y. Reconstructing the world’s museums. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012.
  43. Digne, J.; Cohen-Steiner, D.; Alliez, P.; Goes, F.D.; Desbrun, M. Feature-Preserving Surface Reconstruction and Simplification from Defect-Laden Point Sets. J. Math. Imaging Vis. 2014, 48, 369–382.
  44. Pulli, K.; Duchamp, T.; Hoppe, H.; McDonald, J.; Shapiro, L.; Stuetzle, W. Robust Meshes from Multiple Range Maps. In Proceedings of the International Conference on Recent Advances in 3-D Digital Imaging and Modeling, Ottawa, ON, Canada, 12–15 May 1997.
  45. Yang, B.; Dong, Z.; Liang, F.; Liu, Y. Automatic registration of large-scale urban scene point clouds based on semantic feature points. ISPRS J. Photogramm. Remote Sens. 2016, 113, 43–58.
  46. Li, L.; Yang, F.; Zhu, H.; Li, D.; Li, Y.; Tang, L. An Improved RANSAC for 3D Point Cloud Plane Segmentation Based on Normal Distribution Transformation Cells. Remote Sens. 2017, 9, 433.
  47. Edelsbrunner, H. Alpha Shapes—A Survey. Tessellations Sci. 2010, 27, 1–25.
  48. Brunskill, E.; Kollar, T.; Roy, N. Topological mapping using spectral clustering and classification. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007.
  49. Becker, S. Generation and application of rules for quality dependent façade reconstruction. ISPRS J. Photogramm. Remote Sens. 2009, 64, 640–653.
  50. Truong-Hong, L.; Laefer, D.F. Tunneling Appropriate Computational Models from Laser Scanning Data. In Proceedings of the 39th IABSE Symposium—Engineering the Future, Vancouver, BC, Canada, 21–23 September 2017.
  51. Dehbi, Y.; Plümer, L. Learning grammar rules of building parts from precise models and noisy observations. ISPRS J. Photogramm. Remote Sens. 2011, 66, 166–176.
  52. Lin, Y.; Wang, C.; Cheng, J.; Chen, B.; Jia, F.; Chen, Z.; Li, J. Line segment extraction for large scale unorganized point clouds. ISPRS J. Photogramm. Remote Sens. 2015, 102, 172–183.
  53. Zolanvari, S.M.I.; Laefer, D.F. Slicing Method for curved façade and window extraction from point clouds. ISPRS J. Photogramm. Remote Sens. 2016, 119, 334–346.
  54. Truong-Hong, L.; Laefer, D.F. Quantitative evaluation strategies for urban 3D model generation from remote sensing data. Comput. Graph. 2015, 49, 82–91.
  55. Rabbani, T.; Heuvel, F.A.V.D.; Vosselman, G. Segmentation of point clouds using smoothness constraint. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 248–253.
  56. Aftab, K.; Hartley, R. Convergence of Iteratively Re-weighted Least Squares to Robust M-Estimators. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 5–9 January 2015.
  57. Zhang, Z. Parameter estimation techniques: A tutorial with application to conic fitting. Image Vis. Comput. 1997, 15, 59–76.
  58. CGAL. Available online: https://www.cgal.org/ (accessed on 30 May 2018).
  59. Fabrizi, E.; Saffiotti, A. Augmenting topology-based maps with geometric information. Robot. Auton. Syst. 2002, 40, 91–97.
  60. Truong-Hong, L.; Laefer, D.F.; Hinks, T.; Carr, H. Flying Voxel Method with Delaunay Triangulation Criterion for Façade/Feature Detection for Computation. J. Comput. Civ. Eng. 2012, 26, 691–707.
  61. Fitzgerald, M.; Truong-Hong, L.; Laefer, D.F. Processing of Terrestrial Laser Scanning Point Cloud Data for Computational Modelling of Building Facades. Recent Pat. Comput. Sci. 2011, 4, 16–29.
  62. Boulaassal, H.; Landes, T.; Grussenmeyer, P. Automatic extraction of planar clusters and their contours on building façades recorded by terrestrial laser scanner. Int. J. Archit. Comput. 2009, 7, 1–20.
  63. Pu, S.; Vosselman, G. Knowledge based reconstruction of building models from terrestrial laser scanning data. ISPRS J. Photogramm. Remote Sens. 2009, 64, 575–584.
  64. Google Sketchup. Available online: https://www.sketchup.com/ (accessed on 30 May 2018).
  65. CloudCompare. Available online: https://www.cloudcompare.org/ (accessed on 30 May 2018).
Figure 1. Workflow of the proposed method. (1) Story segmentation. (2) Room segmentation and corridor detection. (3) Stairs reconstruction. (4) Indoor reconstruction.
Figure 2. Result of story segmentation. (a) Point distribution along the z-axis. (1) The floor of the first floor. (2) The ceiling of the first floor. (3) The floor of the second floor. (4) The ceiling of the second floor. (b) Result of story segmentation (one color per floor).
Figure 3. Results of space partitioning. (a) Split slices. (b) Projected segments. (c) Remaining segments after applying the line-fusion algorithm. (d) Two-dimensional (2D) cell decomposition.
Figure 4. Image after projecting points in each story onto a horizontal plane. (a) Projected image (a pixel is colored gray if it contains at least one point and black otherwise). (b) Binary image (a pixel is colored white if it contains at least one point and black otherwise).
Figure 5. Results of room-space segmentation with different upper thresholds. (a–d) Results when the upper threshold is 5, 25, 50, and 100, respectively, in this dataset.
Figure 6. Offset space for closing doorways.
Figure 7. Offset space for discerning an open door from an opening in a wall.
Figure 8. Room-space segmentation. (a) Binary image after cutting off connections between rooms via projecting the offset space onto the x-y plane. (b) Labeled regions. (c) Results after merging. (d) Final results after the wavefront algorithm.
Figure 9. Result of the overlap analysis. (a) Overlap of the room-space segmentation result with cell decomposition. (b) Created random points and center points. (c) Labeled cells with the room-space segmentation result. (d) Final results after deleting cells with a null value.
Figure 10. Graph of connected rooms. The red points are boundary nodes, and the green points are center nodes.
Figure 11. Results of indoor reconstruction. (a) Story model with a roof. (b) Story model without a roof.
Figure 12. Results of stair reconstruction. (a) Points in a stair area. (b) Results of non-step surface extraction via NDT-RANSAC. (c) Points in a stair area after filtering. (d) Planes extracted by the region-growing method in a stair area. (e) Planes in a stair area after removing non-step surfaces. (f) Stair reconstruction results.
Figure 13. Final results of indoor reconstruction. The basic view of the entire building is displayed in row 1, the zoomed view of the area in the dashed box in row 2, and the camera view from the colored points in row 3.
Figure 14. Results of indoor reconstruction. (a) Original data (certain datasets have the ceilings removed for clarity). (b) Top view of data for one story. (c) Reconstructed floor map of one story. (d) Indoor-reconstruction models of a single story, colored in Google Sketchup. (e) Reconstruction of a multistory building.
Figure 15. Synthetic data reconstruction results. (a) Original data (certain datasets have the ceilings removed for clarity). (b) Top view of data for one story. (c) Reconstructed floor map of one story. (d) Indoor-reconstruction models of one story, colored in Google Sketchup. (e) Reconstruction of a multistory building.
Figure 16. Comparison between a top view of the original data and the room segmentation results for the second floor of Dataset-5. (a) Top view of the original data. (b) Room segmentation results.
Figure 17. Intersection over union (IoU) of rooms and corridors (red points represent corridors). (a) IoU of each room and corridor in Dataset-6. (b) IoU of each room and corridor in Dataset-7.
Figure 18. IoU of rooms and corridors. Each line represents the IoUs for a different dataset.
Figure 19. Euclidean distance deviation between corner points (DDP) of rooms and corridors. Each line represents the DDPs for a different dataset.
Figure 20. Area deviation between rooms (ADR) of rooms and corridors. Each line represents the ADRs for a different dataset.
Figure 21. False positives (FP) example. (a) Reconstructed walls. (b) Ground-truth walls.
Table 1. Descriptions of the datasets.

Test Sites | Rooms | Windows | Doors | Clutter | Points | Relative Accuracy
Dataset-1 | 5 | - | 5 | Moderate | 16,425,000 | -
Dataset-2 | 24 | 21 | 51 | Low | 33,600,000 | 2–3 cm
Dataset-3 | 7 | - | 14 | Moderate | 13,900,000 | 2–3 cm
Dataset-4 | 20 | - | 16 | High | 69,606,121 | -
Dataset-5 | 31 | - | 28 | Moderate | 97,327,138 | -
Dataset-6 | 9 | - | 8 | Moderate | 4,661,877 | -
Dataset-7 | 6 | - | 5 | Moderate | 4,581,111 | -
Synthetic Data-1 | 24 | - | 23 | Low | 10,000,000 | 5 cm
Synthetic Data-2 | 18 | - | 17 | Moderate | 40,000,000 | 5 cm
Synthetic Data-3 | 37 | - | 36 | Moderate | 52,584,961 | 5 cm
Synthetic Data-4 | 16 | - | 18 | Moderate | 101,125,484 | 5 cm
Synthetic Data-5 | 53 | - | 85 | Moderate | 41,667,402 | 5 cm
Synthetic Data-6 | 40 | - | 65 | Moderate | 34,604,929 | 5 cm
Synthetic Data-7 | 40 | - | 65 | Moderate | 29,830,257 | 5 cm
Synthetic Data-8 | 40 | - | 65 | Moderate | 28,532,285 | 5 cm
Table 2. Evaluation metrics.

Real Data | Floor | Room & Corridor Number | Detected Room & Corridor Number | Door Number | Detected Door Number
Dataset-1 | Overall | 5 | 5 | 4 | 4
Dataset-2 | First Floor | 15 | 15 | 14 | 14
Dataset-2 | Second Floor | 9 | 9 | 8 | 8
Dataset-2 | Overall | 24 | 24 | 22 | 22
Dataset-3 | Overall | 7 | 7 | 6 | 6
Dataset-4 | First Floor | 8 | 8 | 6 | 6
Dataset-4 | Second Floor | 12 | 12 | 10 | 10
Dataset-4 | Overall | 20 | 20 | 16 | 16
Dataset-5 | First Floor | 9 | 9 | 8 | 11
Dataset-5 | Second Floor | 9 | 8 | 7 | 10
Dataset-5 | Third Floor | 14 | 14 | 13 | 13
Dataset-5 | Overall | 31 | 31 | 28 | 34
Dataset-6 | Overall | 9 | 8 | 8 | 7
Dataset-7 | Overall | 6 | 6 | 5 | 5
Table 3. Results and comparison with the state of the art (Com. = completeness, Cor. = correctness).

Real Data | Voronoi Method [35] (Com. / Cor. / IoU) | Ochmann et al. [40] (Com. / Cor. / IoU) | Mura et al. [14] (Com. / Cor. / IoU) | Our Method (Com. / Cor. / IoU)
Dataset-6 | 1 / 0.9 / 0.71 | 0.8 / 0.8 / 0.74 | 0.9 / 1 / 0.75 | 1 / 1 / 0.955
Dataset-7 | 0.75 / 1 / 0.77 | 0.6 / 1 / 0.7 | 1 / 1 / 0.9 | 1 / 1 / 0.955
Table 4. Evaluation metrics (mean ± standard deviation).

Synthetic Data | Floor | Room & Corridor Number | Detected Room & Corridor Number | IoU (%) | DDP (cm) | ADR (m²)
Synthetic Data-1 | First Floor | 12 | 12 | 96.15 ± 0.99 | 5.10 ± 1.12 | −0.12 ± 0.11
Synthetic Data-1 | Second Floor | 12 | 12 | 96.39 ± 1.52 | 6.26 ± 1.98 | −0.16 ± 0.15
Synthetic Data-1 | Overall | 24 | 24 | 96.27 ± 1.29 | 5.68 ± 1.71 | −0.14 ± 0.13
Synthetic Data-2 | First Floor | 9 | 9 | 96.74 ± 3.24 | 10.29 ± 3.70 | −0.18 ± 0.25
Synthetic Data-2 | Second Floor | 5 | 5 | 97.26 ± 1.31 | 9.89 ± 3.19 | −0.19 ± 0.30
Synthetic Data-2 | Third Floor | 4 | 4 | 96.85 ± 1.62 | 10.16 ± 2.64 | −0.19 ± 0.18
Synthetic Data-2 | Overall | 18 | 18 | 96.91 ± 2.71 | 10.4 ± 3.35 | −0.19 ± 0.21
Synthetic Data-3 | First Floor | 22 | 22 | 96.69 ± 0.72 | 4.29 ± 0.53 | 0.03 ± 0.07
Synthetic Data-3 | Second Floor | 15 | 15 | 98.23 ± 0.24 | 2.89 ± 0.21 | 0.77 ± 0.20
Synthetic Data-3 | Overall | 37 | 37 | 97.31 ± 0.46 | 3.73 ± 0.33 | 0.04 ± 0.04
Synthetic Data-4 | First Floor | 8 | 8 | 89.87 ± 2.05 | 37.37 ± 4.62 | 2.29 ± 1.01
Synthetic Data-4 | Second Floor | 8 | 8 | 84.07 ± 2.61 | 45.74 ± 5.52 | 4.51 ± 0.70
Synthetic Data-4 | Overall | 16 | 16 | 86.64 ± 1.78 | 41.37 ± 3.61 | 3.40 ± 1.14
Synthetic Data-5 | Overall | 53 | 53 | 94.87 ± 0.58 | 9.15 ± 1.16 | −0.03 ± 0.18
Synthetic Data-6 | First Floor | 27 | 27 | 98.95 ± 0.21 | 10.77 ± 0.82 | −0.86 ± 0.16
Synthetic Data-6 | Second Floor | 13 | 13 | 98.43 ± 0.42 | 4.89 ± 0.91 | 0.45 ± 0.15
Synthetic Data-6 | Overall | 40 | 40 | 98.78 ± 0.16 | 8.55 ± 0.65 | −0.43 ± 0.15
Synthetic Data-7 | First Floor | 27 | 27 | 93.37 ± 0.71 | 16.58 ± 1.25 | −0.85 ± 0.24
Synthetic Data-7 | Second Floor | 13 | 13 | 98.78 ± 0.16 | 4.89 ± 0.91 | 0.45 ± 0.15
Synthetic Data-7 | Overall | 40 | 40 | 95.01 ± 0.61 | 12.17 ± 0.95 | −0.43 ± 0.20
Synthetic Data-8 | First Floor | 27 | 27 | 98.95 ± 0.21 | 10.77 ± 0.82 | −0.86 ± 0.16
Synthetic Data-8 | Second Floor | 13 | 13 | 90.71 ± 3.44 | 39.51 ± 11.31 | 1.55 ± 0.71
Synthetic Data-8 | Overall | 40 | 40 | 96.27 ± 1.28 | 21.73 ± 4.59 | −0.08 ± 0.31
Table 5. Evaluation metrics.

Synthetic Data | Floor | Correctness on Wall | Completeness on Wall | Correctness on Door | Completeness on Door
Synthetic Data-1 | First Floor | 1 | 1 | 1 | 1
Synthetic Data-1 | Second Floor | 1 | 1 | 1 | 1
Synthetic Data-1 | Overall | 1 | 1 | 1 | 1
Synthetic Data-2 | First Floor | 1 | 0.87 | 1 | 0.92
Synthetic Data-2 | Second Floor | 1 | 1 | 1 | 1
Synthetic Data-2 | Third Floor | 1 | 1 | 1 | 1
Synthetic Data-2 | Overall | 1 | 0.94 | 1 | 0.97
Synthetic Data-3 | First Floor | 1 | 1 | 1 | 1
Synthetic Data-3 | Second Floor | 1 | 1 | 1 | 1
Synthetic Data-3 | Overall | 1 | 1 | 1 | 1
Synthetic Data-4 | First Floor | - | - | 1 | 1
Synthetic Data-4 | Second Floor | - | - | 1 | 1
Synthetic Data-4 | Overall | - | - | 1 | 1
Synthetic Data-5 | Overall | 0.97 | 0.93 | 0.85 | 0.82
Synthetic Data-6 | First Floor | 0.92 | 0.91 | 0.98 | 0.85
Synthetic Data-6 | Second Floor | 0.97 | 0.91 | 0.95 | 0.95
Synthetic Data-6 | Overall | 0.94 | 0.91 | 0.97 | 0.88
Synthetic Data-7 | First Floor | 0.85 | 0.54 | 0.93 | 0.83
Synthetic Data-7 | Second Floor | 0.97 | 0.91 | 0.95 | 0.95
Synthetic Data-7 | Overall | 0.90 | 0.66 | 0.93 | 0.86
Synthetic Data-8 | First Floor | 0.92 | 0.91 | 0.98 | 0.85
Synthetic Data-8 | Second Floor | 0.83 | 0.81 | 1 | 0.95
Synthetic Data-8 | Overall | 0.89 | 0.88 | 0.97 | 0.88
