Review

Registration of Laser Scanning Point Clouds: A Review

1 Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, Nanjing University, Nanjing 210093, China
2 Collaborative Innovation Center for the South Sea Studies, Nanjing University, Nanjing 210093, China
3 Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing University, Nanjing 210093, China
4 School of Geographic and Oceanographic Sciences, Nanjing University, Nanjing 210093, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(5), 1641; https://doi.org/10.3390/s18051641
Submission received: 21 March 2018 / Revised: 9 May 2018 / Accepted: 16 May 2018 / Published: 21 May 2018
(This article belongs to the Section Remote Sensors)

Abstract: The integration of multi-platform, multi-angle, and multi-temporal LiDAR data has become important for geospatial data applications. This paper presents a comprehensive review of LiDAR data registration in the fields of photogrammetry and remote sensing. At present, a coarse-to-fine strategy is commonly used for LiDAR point cloud registration: a coarse registration method first provides a good initial position, from which registration is then refined using a fine registration method. Following this coarse-to-fine framework, this paper reviews current registration methods and their methodologies, and identifies important differences between them. The lack of standard data sets and unified evaluation systems is identified as a factor limiting objective comparison of different methods. The paper also describes the most commonly used point cloud registration error analysis methods. Finally, avenues for future work on LiDAR data registration in terms of applications, data, and technology are discussed. In particular, there is a need to address registration of multi-angle and multi-scale data from various newly available types of LiDAR hardware, which will play an important role in diverse applications such as forest resource surveys, urban energy use, cultural heritage protection, and unmanned vehicles.

1. Introduction

Rapid acquisition of spatial information has become important in the development of “geospatial big data”, facilitating the application of these data in social management and scientific research. Data obtained by remote sensing methods have gradually been extended from 2D to 3D paradigms and are widely used in fields such as geographical monitoring, resource investigation, environmental monitoring, change detection, water surveys, and disaster assessment. For most current applications, increasing attention is being paid to large-scale, multi-dimensional, comprehensive acquisition of geospatial data. However, it is difficult to meet all these requirements with a single sensor due to limitations of collection range, scanning time, acquisition perspective, and acquisition accuracy. Integration of multi-platform, multi-angle, and multi-temporal remote sensing data is therefore important for geospatial data applications.
Registration is a core technique in the integration of multi-platform, multi-angle, and multi-temporal remote sensing data. Early registration techniques mainly focused on 2D image registration. Research in this field began in the 1970s [1], initially for military purposes, but was gradually extended to remote sensing, medicine, computer vision, and other fields. Image registration methods include intensity-based and feature-based methods. In general, feature-based methods tend to yield better registration because they consider more contextual image information. Smallest univalue segment assimilating nucleus (SUSAN) [2], scale-invariant feature transform (SIFT) [3], maximally stable extremal regions (MSER) [4], and speeded up robust features (SURF) [5] are widely used operators for extracting the image features needed for registration. Registration methods have developed differently in different fields because each field has its own requirements and characteristics. Several researchers have reviewed registration methods and proposed mature image registration frameworks [6,7,8,9].
As an advanced active remote sensing technology, LiDAR obtains 3D point clouds of the target object; LiDAR registration is thus 3D rather than 2D. In 1987, the quaternion method was first proposed to estimate transformations between 3D point sets [10]. Subsequently, Besl and McKay proposed the classical iterative closest point (ICP) algorithm for point cloud registration [11]. This method was continuously improved in subsequent research, eventually becoming a comprehensive fine registration method [12]. Early LiDAR point cloud registration was mostly used in industrial fields, with point clouds obtained by close-range laser scanning systems, and registration objects were mostly single-target, small-scale, dense point clouds. Over the last two decades, LiDAR systems have been widely used in earth surface research, such as for forest parameter estimation [13], building reconstruction [14,15], natural disaster monitoring [16], and solar energy potential estimation [17]. LiDAR registration has thus also become a key area of research in photogrammetry and remote sensing.
There are valuable reviews in the literature of range image and point cloud registration techniques in the fields of computer vision and mobile robotics [12,18,19]. Salvi et al. [18] survey different pre-2007 techniques for both pair-wise and multi-view range image registration and provide an overview framework, with techniques ranging from coarse to fine. Tam et al. [19] provide a better understanding of registration from the perspective of data fitting and also consider non-rigid registration. Pomerleau et al. [12] focus on the ICP variants developed over the last twenty years and their use cases in mobile robotics applications.
Compared with the computer vision and industrial fields, the objects scanned by LiDAR systems in photogrammetry and remote sensing are mainly larger-scale geospatial features that cover complex and diverse geographical entities and exhibit distinct spatial stratification. The resulting LiDAR point clouds are thus multi-level, with large range, high noise, and low point density; these are the major factors distinguishing the point cloud registration algorithms used in photogrammetry and remote sensing from those used in computer vision. This paper reviews existing laser scanning point cloud registration methods mainly in photogrammetry and remote sensing, and can thus be regarded as extending the overview of point cloud registration methods to these fields. To render the description of registration methods more comprehensive, some registration methods from other fields are also mentioned.

2. Brief Presentation of LiDAR Technology and Research

LiDAR has developed rapidly in the past 30 years and the LiDAR sensor can now be mounted on various platforms, including airborne, vehicle, tripod, and satellite platforms. Different platforms have distinct traits and applications, as shown in Table 1.
We performed a search of peer-reviewed journal publications in the Scopus database as of 2016 (using the search statement: TITLE-ABS-KEY (“LiDAR” OR “terrestrial laser scan *” OR “mobile laser scan *” OR “airborne laser scan *” OR “space-based laser scan *”) AND NOT TITLE-ABS-KEY (“aerosol”)), and statistically analyzed the results.
Figure 1a summarizes the number of published articles and reviews discussing LiDAR for each year from 2000 through 2016; the number of publications shows an overall upward trend, indicating the increasing importance of this research. Given the rapid development and significant application of LiDAR, it is useful to summarize current research. A tag cloud map was therefore created based on frequency of occurrence (Figure 1b) [20]. The tag cloud map shows that, from a technology perspective, most papers deal with research and construction of algorithms for LiDAR data, such as classification, segmentation, information extraction, reconstruction, and biomass estimation.
Figure 2 shows the proportion of different literature types, with the most common being articles and conference papers; review papers are relatively rare. Available review papers cover forest resource investigations [13,21], land cover classification [22], geological hazard assessment [16,23], building model reconstruction [14,24], road extraction [25], snow depth measurement [26], cryosphere studies [27], and sea ice and ice sheet monitoring [28,29], but no reviews have been published in international peer-reviewed journals discussing registration of laser scanning data in photogrammetry and remote sensing.
We refined our search across publications related to registration (using the search statement: (TITLE-ABS-KEY(“LiDAR” OR “terrestrial laser scan *” OR “mobile laser scan *” OR “airborne laser scan *” OR “space-based laser scan *”) AND NOT TITLE-ABS-KEY(“aerosol”)) AND (TITLE (“registrat *”) OR KEY(“registrat *”))). Annual variations in the percentage of registration-related publications on LiDAR are shown by the red polyline in Figure 1a. This shows that the proportion of registration-related publications generally increased year by year; as researchers expand the range and depth of LiDAR applications, they also continue data-registration research, with the aim of improving the accuracy and efficiency of LiDAR applications by integrating multi-source data.
In total, 501 papers related to LiDAR registration were published in different journals between 2000 and 2016; Figure 3 shows the 14 journals or conferences publishing more than five papers on LiDAR registration. It can be noted that the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences published the most papers related to LiDAR registration, followed by mostly authoritative photogrammetric and remote sensing journals such as the ISPRS Journal of Photogrammetry and Remote Sensing, Remote Sensing, and IEEE Transactions on Geoscience and Remote Sensing. The number of publications in the journal Sensors, which publishes research on the science and technology of sensors and biosensors, is also significant.
In the present paper, we review publications from international peer-reviewed journals on LiDAR registration in the fields of photogrammetry and remote sensing. In order to render the review more comprehensive, we also include important research papers and conference papers that have significant reference value. Section 3 of this paper introduces the classification of LiDAR registration methods and briefly describes the research status of same-platform and different-platform LiDAR registration. In Section 4, we discuss in detail current methods used for LiDAR registration based on the principles of the methods used. Section 5 briefly introduces current methods used for evaluating the accuracy of LiDAR registration. Section 6 compares different LiDAR registration techniques. Section 7 and Section 8 respectively present discussions and conclusions.

3. Classification of LiDAR Registration Methods

Depending on which platform originally generated the point cloud requiring registration, LiDAR point cloud registration can be divided into same-platform registration and registration between different platforms. In earth surface research, there are four main LiDAR systems, i.e., space-based laser scanning (SLS), airborne laser scanning (ALS), mobile laser scanning (MLS), and terrestrial laser scanning (TLS), divided according to the mounted platform (as shown in Table 1). Same-platform registration mainly includes multi-station TLS registration and ALS strip adjustment. For LiDAR registration between different platforms, research mainly focuses on ALS-MLS and ALS-TLS registration.
TLS can obtain accurate location information from global navigation satellite system (GNSS) receivers at the same time the target point clouds are acquired; this location information can be used directly to integrate the different point clouds [30]. However, it is not easy to use GNSS to obtain the exact position of the control point, and even slight instrumental deviations can produce large errors [31]. Furthermore, the spatial accuracy of GNSS in urban areas and forests is limited and prone to loss of signal lock, reducing reliability. Another solution is therefore to place standard targets in the scene while acquiring the point clouds of the target object, and then use these targets to stitch together adjacent-station point clouds. However, in most cases, due to the lack of GNSS and standard targets, one must rely on the point clouds themselves for registration.
Due to scanning height and field-of-view limitations, each flight path for ALS data acquisition can only cover a limited ground width. When large areas must be scanned, many flight paths are required, and a certain degree of overlap between paths must be maintained. Since ALS is an integrated system, there are a number of potential systematic errors for points on the flight paths, including laser ranging errors, sensor mounting errors, and position and orientation system (POS) errors [32,33,34]. In order to eliminate differences between point clouds on different flight paths, adjustment between flight paths is necessary. At present, either data-driven methods or sensor system-driven methods can be used to accomplish this adjustment [35]. Data-driven methods use geometric features extracted from the point clouds to calculate the rotation matrix and translation vector [34,36,37]. However, since systematic errors are not linearly distributed, it is difficult to achieve high-precision flight path adjustment using data-driven methods [38], and most researchers opt for the sensor system-driven method, which uses the LiDAR positioning equation as an adjustment model [39,40,41,42]. Both data-driven and sensor system-driven methods require determination of the control unit (point, line, or surface feature), which is used to assess and correct differences between the flight paths [36].
As noted above, registration between different platforms mainly concerns ALS-MLS and ALS-TLS registration. SLS data are characterized by large footprints and sparse distribution and are mainly used for forest resource surveys and for monitoring of sea ice and land ice; integrating SLS data with LiDAR data acquired from other platforms is of little practical value. There has therefore been no research on registration between SLS data and LiDAR data from other platforms.
The data obtained by different LiDAR platforms are heterogeneous in three respects: (1) different perspectives: data collected by SLS and ALS systems are from a top view, while data collected by MLS or TLS systems are from a side view; (2) different spatial resolutions: the resolution of ALS data is generally at the meter scale, while MLS and TLS data are at the centimeter scale, with TLS being more precise; (3) different content of focus: ALS data cover general surface features, while MLS data cover the features on both sides of the vehicle trajectory. Due to the heterogeneity and discreteness of point cloud data, it is very difficult to automatically register two or more point clouds from different platforms. Although there is great potential for automatic registration of point clouds under feature guidance, significant challenges remain, including how to obtain conjugate features that can guide point cloud registration despite this heterogeneity, and how to then perform high-quality 3D registration using these conjugate features.
Point cloud registration studies frequently apply a coarse-to-fine registration strategy [43,44,45,46,47]. This strategy is not widely adopted for registration between different platforms, but is used for same-platform registration, as described above. In coarse registration, the initial parameters of the rigid body transformation between two point clouds are mainly estimated using feature-based methods. In fine registration, the main objective is to achieve maximum overlap of the two point clouds, mainly using iterative approximation methods, random sample consensus methods, normal distribution transform methods, or methods with auxiliary data. We describe the methods adopted in this coarse-to-fine strategy further in Section 4.

4. Registration Techniques for LiDAR Data

Based on the coarse-to-fine registration strategy, this section presents an overview of four feature-based coarse registration methods and four fine registration methods. The fine registration methods are iterative approximation methods, random sample consensus methods, normal distribution transform methods, and methods with auxiliary data.

4.1. Coarse Registration Methods

Feature-based coarse registration mainly refers to registration based on point, line, and surface features, which remain invariant over a certain period of time and are widely used for coarse registration [48]. In LiDAR data registration, these features may include building corners, contours, road networks, roof patches, and similar site features [49,50,51]. Since this method uses feature primitives rather than directly registering the point clouds, identification of appropriate registration primitives is critical for registration accuracy. In practice, discrepancies within LiDAR data arising from the use of different platforms (such as different perspectives, different resolutions, and the discretization of point cloud data) make it difficult to locate conjugate features of the objects to be registered [52], and LiDAR registration using conjugate point, line, and surface features remains an area of active research. Here, we classify feature-based methods into four classes: point-based methods, line-based methods, surface-based methods, and others.
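Once conjugate features have been identified, the rigid transformation between the two clouds follows in closed form. The sketch below is purely illustrative (function and variable names are ours, not from the cited papers) and uses the SVD-based Kabsch solution to recover rotation and translation from matched feature points:

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form least-squares rigid transform mapping src -> dst.

    src, dst: (N, 3) arrays of matched feature points (e.g. conjugate
    building corners). Uses the SVD-based Kabsch solution.
    """
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # enforce det(R) = +1
    return R, dc - R @ sc

# Three non-collinear matched features already fix the transform;
# four are used here for a well-conditioned example.
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0., 0., 1.]])
t_true = np.array([5., -2., 0.3])
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

In practice the matched features are noisy and contain outliers, which is why this closed-form step is usually embedded in the robust estimation loops discussed below.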

4.1.1. Point-Based Methods

Points are the most widely used features in feature-based LiDAR registration [53]. Extraction of feature points is very important in this method, and the extraction result directly affects the registration accuracy of the point clouds. Compared with natural environments and objects, artificial objects are often more geometrically regular, and the accuracy of their geometric feature information is relatively high [54]. Building corners, traffic signs, and road signs are therefore commonly used feature points.
Alternatively, feature points can be extracted using point feature operators, such as point feature histograms [55], spin images of points [56], or scale-invariant feature transform features [57]. Feature points extracted using such operators are also referred to as keypoints. A good feature operator should be resistant to noise and invariant under rotation and translation of the point clouds [3]. There are numerous 3D keypoint operators, including local surface patches (LSP) [58], intrinsic shape signatures (ISS) [59], keypoint quality (KPQ) [60], heat kernel signature (HKS) [61], Laplace-Beltrami scale-space (LBSS) [62], mesh Difference-of-Gaussians (DoG) [63], and 3D Harris [64]. Tombari et al. [65] and Hänsch et al. [66] survey these 3D operators and compare their performance. As shown in Table 2, point feature, point domain feature, or spin image feature descriptors are commonly utilized for LiDAR registration.
Registration based on point features still has problems relating to noise sensitivity, low robustness, and large time complexity, and it remains difficult to achieve high precision. Several methodological improvements have recently been proposed to address poor computational efficiency and robustness, including feature point extraction using 3D operators within point cloud registration algorithms [72,73,74]. In addition, many studies focus on extracting geometric features by constructing the neighborhood topology of points and then optimizing the point cloud registration process based on these neighborhood features [75]. This approach improves registration accuracy and shows high robustness to noise [66,76]. Aiger et al. proposed a method for global point cloud registration based on 4-point congruent sets (4PCS) [77]. This method exploits the fact that the ratios defined by the intersecting line segments of four coplanar points remain invariant under affine transformation, and it does not require calculation of complex geometric characteristics. The method has high efficiency and good noise resistance [78], and can achieve automated marker-less registration [79]. Theiler et al. [79] improved 4PCS using 3D keypoints, such as 3D DoG and 3D Harris keypoints.
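The 4PCS invariant can be illustrated with a short sketch (names are ours, not the formulation of [77]): for a coplanar base a, b, c, d whose segments ab and cd intersect at a point e, the ratios r1 = |a-e|/|a-b| and r2 = |c-e|/|c-d| are unchanged by an affine (and hence by any rigid) transform, so congruent bases can be matched without complex descriptors:

```python
import numpy as np

def intersection_ratios(a, b, c, d):
    """Affine-invariant intersection ratios of a coplanar 4-point set.

    Solves a + r1*(b - a) = c + r2*(d - c) in a least-squares sense
    (3 equations, 2 unknowns) for the intersection point of the two
    segments, and returns (r1, r2).
    """
    A = np.column_stack([b - a, c - d])
    r1, r2 = np.linalg.lstsq(A, c - a, rcond=None)[0]
    return r1, r2

a, b = np.array([0., 0., 0.]), np.array([4., 0., 0.])
c, d = np.array([1., -1., 0.]), np.array([1., 3., 0.])
print(intersection_ratios(a, b, c, d))   # segments cross at (1, 0, 0)

# The same ratios after a rigid transform of all four points.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.],
              [np.sin(theta),  np.cos(theta), 0.],
              [0., 0., 1.]])
t = np.array([2., 1., -3.])
print(intersection_ratios(R @ a + t, R @ b + t, R @ c + t, R @ d + t))
```

In the full algorithm, these ratio pairs are used to efficiently enumerate candidate congruent bases in the second cloud before verifying the implied transform.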
It should be noted that there are differences in the point cloud information obtained when different LiDAR systems scan the same geographical entities from different perspectives. Even with the same acquisition device, multiple measurements are required to obtain complete target point clouds, especially with TLS equipment, due to the obscuration of objects and limited acquisition range. As a result, the areas of overlap between top-view and side-view LiDAR point clouds are small, as are those between multi-station TLS point clouds, and extraction of point features is difficult; the point-based registration method therefore cannot be applied in such cases [36].

4.1.2. Line-Based Methods

Lines have stronger geometric topologies and constraints relative to points and permit higher registration accuracy [80,81]. Line features, such as road networks and building contours, are common in large-range 3D point cloud scenes and can be used for LiDAR registration (as shown in Table 3). Buildings are the largest and most important geographical entities in urban spaces, and building contours have been widely used in building model reconstruction and LiDAR point cloud registration [51,76,82]. In addition, roads, which are also important elements of urban space, have been extracted based on their unique linear and regular characteristics [83,84,85] and combined with building contours to achieve registration of point cloud data [46].
Due to the prominence of line features and their ease of extraction, point cloud data registration based on line features has relatively high accuracy and precision. In addition, compared with surface features (described next), the number of line features required during the registration process is relatively small [86]. However, because the completeness and precision of extracted line features are limited, only coarse registration can be achieved.

4.1.3. Surface-Based Methods

Surface features contain more information than line or point features and are less affected by noise. They can therefore be used for automatic registration of LiDAR point clouds. In urban spaces, surfaces are an important element of the ground object structure. LiDAR devices on different platforms can obtain a large amount of ground point cloud data and more precise registration can be achieved by making the best use of these surface features. The extraction accuracy of surface features and their distribution in the point cloud scene directly affect the final registration result. Many researchers have used the least squares method, random sample consensus algorithm, and principal component analysis method for surface fitting, allowing surface features to be obtained in the point cloud scene. The extracted surfaces are mainly ground, roofs, and building facades.
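As a minimal sketch of the principal component analysis approach to surface fitting mentioned above (function names and test values are illustrative, not from any cited method), the normal of a roughly planar patch is the eigenvector of the patch covariance with the smallest eigenvalue:

```python
import numpy as np

def fit_plane_pca(points):
    """Fit a plane to a point patch by principal component analysis.

    Returns (centroid, unit normal): the normal is the eigenvector of
    the patch covariance with the smallest eigenvalue, i.e. the
    direction of least spread.
    """
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    normal = eigvecs[:, 0]                   # smallest-variance axis
    return centroid, normal

# Noisy samples from a hypothetical roof plane z = 0.1x + 0.2y + 3.
rng = np.random.default_rng(0)
xy = rng.uniform(-5, 5, size=(500, 2))
z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 3 + rng.normal(0, 0.01, 500)
pts = np.column_stack([xy, z])
centroid, normal = fit_plane_pca(pts)
true_n = np.array([-0.1, -0.2, 1.0])
true_n /= np.linalg.norm(true_n)
print(abs(normal @ true_n))   # close to 1 (up to the sign of n)
```

Random sample consensus fitting differs mainly in that the plane is estimated repeatedly from minimal samples so that points off the surface do not bias the fit.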
As shown in Table 4, most researchers use the least squares method when performing point cloud registration based on surface features. The method is used to minimize the distance between corresponding surface features of different LiDAR point clouds [89]. When using the least squares method for registration of 3D surfaces, it is necessary to take full account of the randomness of the local surface-normal vector [90]. The accuracy of this method is sufficient for ground deformation monitoring [89]. Some researchers have also implemented point cloud registration by locating conjugate surface features [91,92,93].
Despite the higher accuracy of registration based on surface features, the requirements for point cloud segmentation and the fitting algorithm are high, because surface features must be extracted before registration. In addition, the 3D point cloud scene to be registered must contain numerous surface features [98]; otherwise, it is difficult to guarantee registration accuracy.

4.1.4. Other Feature-Based Methods

Although many studies have used point, line, and surface features to obtain high-accuracy LiDAR point cloud registration, some difficulties remain for large-scale urban 3D point clouds. For example, point-based methods are extremely susceptible to the influence of point density and noise. Most line-based methods are only applicable where building contours are easy to extract, making them difficult to apply in suburbs with fewer buildings. Surface-based methods have high requirements for overlapping areas, and at least three pairs of surfaces must be present in the clouds to be registered [48].
Given these problems, some researchers have considered using a combination of point, line, and surface features to construct a joint transformation model [101], or to find conjugate spatial curves [102], in order to calculate point cloud registration parameters. Alternatively, other registration features, such as circles, spheres, and cylinders, can be used to calculate the registration parameters between different point clouds [103]. However, due to the relatively high extraction requirements, these methods are difficult to extend to general situations and are not widely used [104]. On the other hand, based on the results of feature extraction, if urban 3D point clouds can be classified by semantic analysis [105] and the corresponding relationship between classified surface objects can then be identified, relatively good point cloud registration can be achieved [106].

4.2. Fine Registration Methods

4.2.1. Iterative Approximation Methods

In current point cloud registration research, the iterative approximation method mainly refers to the ICP algorithm and its series of improved variants. The ICP algorithm builds on the quaternion method, which uses a unit 4D vector encoding the rotation axis and angle [10,107]. The advantage of this method is that it solves the rigid body transformation directly through a rigorous mathematical process, without the need for an initial estimate of location. Besl and McKay first proposed the ICP method for registration of 3D data [11]. This method assumes a good estimate of the initial location; a number of points are selected from the point set to be registered, and the corresponding points in the reference point set are then identified. The transformation is obtained by minimizing the distance between these pairs of points. The closest point set is then recalculated according to a rigorous solution process, and the iterations are repeated until the objective function value remains constant, at which point the registration result is obtained. This method does not fully consider the effect of noise on the accuracy of registration results; however, the effect of noise can be reduced by weighting the least squares distance to improve registration accuracy [108,109,110]. In the computer vision field, in order to speed up the registration process and avoid locally optimal results, several studies register point clouds by matching feature-substitute point pairs, including invariant features such as curvature and spherical harmonics [111], surfaces [112], and angular-invariant features [113]. Such ICP registration methods in computer vision are reviewed in [12,114,115,116].
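The select-match-minimize-iterate loop described above can be sketched as follows. This is a bare point-to-point ICP for illustration only (not any specific published variant; an SVD step stands in for the quaternion solution, and brute-force nearest-neighbor search is used for clarity):

```python
import numpy as np

def icp(src, ref, iters=50, tol=1e-8):
    """Minimal point-to-point ICP moving src onto ref.

    As the text notes, a reasonably good initial alignment is assumed.
    Each iteration pairs every source point with its nearest reference
    point and solves the rigid update in closed form.
    """
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # Closest-point search (brute force for clarity).
        d2 = ((cur[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
        matched = ref[d2.argmin(axis=1)]
        # Closed-form rigid update minimising summed pair distances.
        mc, rc = cur.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((cur - mc).T @ (matched - rc))
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        cur = (cur - mc) @ R.T + rc
        err = np.mean(np.linalg.norm(cur - matched, axis=1))
        if abs(prev_err - err) < tol:     # objective stopped changing
            break
        prev_err = err
    return cur

rng = np.random.default_rng(1)
ref = rng.uniform(0, 10, size=(300, 3))
theta = 0.05                              # small initial misalignment
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1.0]])
src = ref @ Rz.T + np.array([0.2, -0.1, 0.05])
print(np.mean(np.linalg.norm(icp(src, ref) - ref, axis=1)))
```

The practical variants discussed next replace the brute-force search with spatial indexing (e.g. k-d trees), weight or reject dubious pairs, and substitute point-to-surface or point-to-projection distances.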
The development of LiDAR technology has greatly promoted the application and development of ICP algorithms in remote sensing and mapping. In the ICP process, it is important to identify the point closest to a known location; three search strategies are used, i.e., point-to-point, point-to-surface, and point-to-projection [90,117,118,119]. However, since the LiDAR device uses discrete laser pulses to measure the distance to a ground target, the target point clouds are in practice a dense set of sampling points and do not reflect all details of the target object, especially at the target boundary. Furthermore, due to differences between acquisition devices, angles, and methods, there is no one-to-one correspondence between the point sets of LiDAR point clouds from different platforms, and point clouds are easily affected by noise. Registration accuracy is often not ideal and the calculation process is complicated; direct use of the ordinary ICP algorithm therefore often leads to anisotropic and inhomogeneous localization errors. In addition, the ICP algorithm requires that the initial locations of the point clouds should not differ significantly; otherwise, the algorithm will converge to locally optimal solutions. Many researchers have therefore attempted to improve the ICP algorithm, using strategies mainly focused on finding other registration features, optimizing the algorithm, and selecting appropriate data-management methods, as shown in Table 5.
Because the ICP algorithm performs point cloud registration based on an iterative process, it is slow at finding corresponding points between two point clouds and is less efficient when registering large-scale, high-density point cloud scenes. However, the ICP concept is used in some registration algorithms for specific surface objects. For example, Bucksch and Khoshelham proposed a registration method based on a tree skeleton line for TLS data from different stations [127]. Optimal conversion parameters were obtained by minimizing the distance between points in input point clouds and the skeleton line in reference point clouds.

4.2.2. Random Sample Consensus Methods

Random sample consensus (RANSAC) methods were proposed by Fischler and Bolles [128], and have been widely used in 2D and 3D data processing; they have also been studied for use in image registration [129,130,131,132]. With the development of LiDAR technology and its application in geography, RANSAC methods have been used for point cloud data preprocessing and segmentation in numerous studies [70,133,134]; their application to point cloud registration has in fact become an important area of research [135]. RANSAC methods involve three steps. First, a number of control points are randomly selected from point cloud data and used to calculate the conversion relationship. Second, the conversion relationship is used to eliminate external points from point cloud data, and the point cloud data registration degree is then calculated. Finally, an iterative transformation is used to find the data set with maximum registration degree, and this is then used to calculate conversion parameters [129]. The process is similar to that of the ICP algorithm, but can avoid iteration over entire point clouds. In combination with the SIFT operator, RANSAC can effectively solve the problem of 3D point cloud data registration without local features, while improving registration efficiency [136,137].
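The three steps above can be sketched for the common case in which putative correspondences are already available, for instance from matched keypoints (an illustrative sketch with our own names and thresholds, not the formulation of any cited paper):

```python
import numpy as np

def ransac_registration(src, dst, n_iter=200, thresh=0.1, seed=0):
    """RANSAC over putative correspondences (src[i] <-> dst[i]).

    Repeatedly fits a rigid transform to 3 random pairs, counts how
    many correspondences it brings within `thresh`, and keeps the
    transform with the largest consensus set, avoiding iteration
    over the entire point clouds.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        pick = rng.choice(len(src), size=3, replace=False)
        R, t = _rigid(src[pick], dst[pick])
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the full consensus set for the final parameters.
    return _rigid(src[best_inliers], dst[best_inliers])

def _rigid(a, b):
    """Closed-form (SVD) rigid transform taking a onto b."""
    ac, bc = a.mean(axis=0), b.mean(axis=0)
    U, _, Vt = np.linalg.svd((a - ac).T @ (b - bc))
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, bc - R @ ac

# 80 true matches plus 20 gross outliers among the correspondences.
rng = np.random.default_rng(2)
src = rng.uniform(0, 10, size=(100, 3))
th = 0.4
R_true = np.array([[np.cos(th), -np.sin(th), 0.],
                   [np.sin(th),  np.cos(th), 0.],
                   [0., 0., 1.]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true
dst[80:] += rng.uniform(3, 6, size=(20, 3))   # corrupt 20 matches
R, t = ransac_registration(src, dst)
print(np.allclose(R, R_true, atol=1e-6), np.allclose(t, t_true, atol=1e-6))
```

The random minimal sampling is what makes the method robust: any sample containing an outlier produces a transform with a small consensus set and is simply discarded.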

4.2.3. Normal Distribution Transform Methods

Normal Distribution Transform (NDT) methods were first proposed in 2D space [138] and then gradually extended to point cloud data registration in the fields of robotics and photogrammetry [139,140,141].
Applications of NDT are common in mobile robotics, mainly because the robot can obtain the positional relation between two scans through the rangefinder while measuring data. With a direct initial transformation, the NDT algorithm can quickly and simply achieve fine registration of point clouds. The main idea of the method is to convert the point cloud data in a 3D grid into a continuously differentiable probability distribution function: the distribution of the 3D point position measurements within each grid cell is represented by a normal distribution. The match between the normal distributions of the two point cloud data sets is then optimized using Newton's method, which makes use of the gradient and Hessian matrix of the score function, to achieve point cloud registration [139]. A key step in the NDT algorithm is building grids for the point clouds, but grid size is difficult to determine. Using different grid sizes to organize the point clouds has therefore become an effective way to establish grids for 3D point clouds [142,143,144].
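The core data structure of NDT, the per-cell normal distribution, can be sketched as follows. This is a simplified 2D pure-Python illustration with hypothetical names; real implementations work in 3D, regularize the covariance more carefully, and maximize the score over the transform parameters with Newton's method.

```python
import math
from collections import defaultdict

def build_ndt_grid(points, cell=1.0):
    """Bin 2D points into square cells; per cell store the mean and 2x2 covariance."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(math.floor(x / cell), math.floor(y / cell))].append((x, y))
    grid = {}
    for key, pts in cells.items():
        n = len(pts)
        if n < 3:          # too few samples for a stable covariance
            continue
        mx = sum(p[0] for p in pts) / n
        my = sum(p[1] for p in pts) / n
        cxx = sum((p[0] - mx)**2 for p in pts) / n + 1e-6   # jitter keeps C invertible
        cyy = sum((p[1] - my)**2 for p in pts) / n + 1e-6
        cxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
        grid[key] = ((mx, my), (cxx, cxy, cyy))
    return grid

def ndt_score(points, grid, cell=1.0):
    """Sum of per-point Gaussian scores exp(-0.5 * d^T C^-1 d); higher = better fit."""
    score = 0.0
    for x, y in points:
        g = grid.get((math.floor(x / cell), math.floor(y / cell)))
        if g is None:
            continue
        (mx, my), (cxx, cxy, cyy) = g
        det = cxx * cyy - cxy * cxy
        dx, dy = x - mx, y - my
        # Mahalanobis distance via the explicit 2x2 inverse.
        m = (cyy*dx*dx - 2*cxy*dx*dy + cxx*dy*dy) / det
        score += math.exp(-0.5 * m)
    return score
```

Registration then searches for the rotation and translation of the input cloud that maximizes `ndt_score` against the reference grid.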
Since laser scanners used in photogrammetry do not directly measure the positional relationship between two scan stations and thus cannot provide an initial transformation, to date there has been little research on applications of the NDT algorithm in photogrammetry. Ripperda and Brenner showed that if the laser scanner is set up approximately upright for each scan, LiDAR point clouds can be sliced parallel to the ground and 2D NDT can be applied to the sliced clouds; this was the first application of the NDT algorithm to TLS point cloud registration [145]. However, as the method is inherently 2D and not extended to 3D space, its wider promotion and application remain challenging. Magnusson et al. developed a 3D NDT algorithm by replacing the 2D rotation matrix with a 3D rotation matrix [146].
The NDT algorithm has fast computational speed and high precision. It is especially suitable for processing large-scale, high-volume point cloud data, but its requirements for the initial positions of the point clouds remain high. When using the NDT algorithm for point cloud registration, a coarse-to-fine registration strategy is therefore used: in the initial registration process, feature-based methods, which do not have strict requirements for the initial positioning of point clouds, are used to obtain a coarse registration, after which the NDT algorithm is used to achieve fine registration. However, there is still a lack of applied research using this approach in large-scale, complex geographical environments.

4.2.4. Methods with Auxiliary Data

In the process of acquiring point cloud data, under certain conditions LiDAR equipment can simultaneously obtain target image data and the GNSS coordinates of the measurement device. In particular, when using TLS to obtain point clouds, standard targets are generally used to quickly stitch multi-station point clouds. Images, GNSS data, and standard targets can therefore effectively assist registration.
In image-assisted point cloud registration, images are generally used to extract features, including 2D SIFT features [147] and conjugate corner features [35]. These features are described, screened, matched, and mapped to 3D space to find conjugate features [52,148]; the point cloud conversion parameters are then calculated [52,149]. Compared with discrete LiDAR point clouds, images have rich, spatially continuous spectral information, so textural features are evident [150], and features extracted from images have higher reliability and robustness.
GNSS can accurately obtain the coordinates of ground targets. Some LiDAR equipment therefore also records the spatial location of the platform center while acquiring 3D point clouds. Initial global registration of airborne and MLS data can be performed using this GNSS information [30,151]; however, in complex urban areas, occlusion by buildings can cause GNSS signal lockout, reducing the registration accuracy of point clouds [152].
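The GNSS-based initial alignment amounts to shifting each scan from its sensor-relative frame into a common global frame using the recorded platform position. A minimal sketch (with an illustrative function name, and ignoring the heading/attitude correction that real ALS/MLS systems take from an IMU):

```python
def to_global(points, platform_xyz):
    """Shift sensor-relative (x, y, z) coordinates by the GNSS-recorded
    platform position to obtain a coarse global alignment that fine
    registration (e.g., ICP or NDT) can then refine."""
    px, py, pz = platform_xyz
    return [(x + px, y + py, z + pz) for x, y, z in points]
```

After this shift, two scans of the same object taken from different platform positions agree to within the GNSS error, which is usually a sufficient starting point for fine registration.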
Standard targets are widely used in multi-station TLS point cloud stitching, where a special standard target serves as a conjugate (corresponding) feature for registration. While LiDAR equipment scans the objects, these standard targets can be placed at appropriate locations in the scan area; it must be ensured that at least three standard targets are shared between adjacent scanning stations. After the standard target information has been obtained, automatic registration can be performed using associated LiDAR registration processing software, such as Cyclone. In forests, a dense tree canopy can significantly reduce GNSS positioning accuracy and even block the reception of GNSS signals. Registration of 3D point clouds based only on standard target information has a narrow range of applications due to the challenges posed by complex scanning scenes and difficulties in placing targets.
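With at least three conjugate targets, the transform between stations has a closed-form solution. The pure-Python sketch below is our own simplified variant: it assumes both scans are approximately leveled, as is common for tripod-mounted TLS, so registration reduces to a heading rotation plus a 3D translation; the general 3D case uses Horn's quaternion solution [10].

```python
import math

def register_targets(src, dst):
    """Closed-form transform from >=3 conjugate target centers, assuming both
    scans are approximately leveled (2D heading rotation + 3D translation)."""
    if len(src) < 3:
        raise ValueError("need at least three conjugate targets")
    n = len(src)
    cs = [sum(p[i] for p in src) / n for i in range(3)]   # source centroid
    cd = [sum(p[i] for p in dst) / n for i in range(3)]   # target centroid
    # Heading angle from dot/cross products of centered horizontal coordinates.
    sxx = sum((s[0]-cs[0])*(d[0]-cd[0]) + (s[1]-cs[1])*(d[1]-cd[1])
              for s, d in zip(src, dst))
    sxy = sum((s[0]-cs[0])*(d[1]-cd[1]) - (s[1]-cs[1])*(d[0]-cd[0])
              for s, d in zip(src, dst))
    th = math.atan2(sxy, sxx)
    c, s_ = math.cos(th), math.sin(th)
    t = (cd[0] - (c*cs[0] - s_*cs[1]),
         cd[1] - (s_*cs[0] + c*cs[1]),
         cd[2] - cs[2])
    def transform(p):
        return (c*p[0] - s_*p[1] + t[0], s_*p[0] + c*p[1] + t[1], p[2] + t[2])
    return transform
```

The returned function can then be applied to every point of the source station's cloud.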

5. Error Analysis Methods

Point cloud registration error analysis is mainly performed to determine the degree of registration between different point clouds in a common area. Because point clouds are discrete, registration accuracy is generally obtained by calculating the offset distance between the model point clouds and the registration point clouds after transformation. Specifically, the offset distance can be classified as either point-to-point or point-to-surface distance. Quantitative evaluation of LiDAR data registration results, whether across platforms or on the same platform, is of great significance for automatic registration theory and for the algorithmic implementation of 3D laser point cloud registration. Registration results may differ significantly between scenarios and ranges due to algorithm complexity and differences in applicability. For example, in small-scale digital archiving of cultural heritage sites, registration accuracy should be within the range of centimeters or millimeters. In contrast, in large-scale geographical applications, due to the complexity and diversity of surface morphology, as well as constraints on the performance of the acquisition platform, the required registration accuracy is lower (generally at the decimeter level). After performing point cloud registration using a specific method, it is therefore necessary to perform error analysis on the registration results in order to select the most suitable registration method. There are three main approaches to such error analysis:
(1)
Comparison with existing registration methods. At present, most registration methods are improvements developed from relatively mature existing methods. An important step is therefore to compare the results obtained using the original and modified methods. This approach is widely used with the ICP algorithm and its variants. After point cloud registration using the ICP algorithm and improved algorithms, parameters such as the average, maximum, and minimum offset distances and the standard deviation between the model point clouds and the transformed registration point clouds can be obtained, and the performance of the different registration methods can be analyzed. Bae and Lichti employed traditional and improved ICP algorithms for registration of TLS point clouds [153]. They calculated the mean offset distance and standard deviation between points and corresponding surfaces after transformation of the registration point clouds and found that the mean offset distance and standard deviation of the traditional ICP algorithm were 2.24 m and 2.55 m, respectively, while the improved ICP algorithm yielded 0.12 m and 1.50 m, respectively. Registration accuracy was thus significantly improved by the modified algorithm. In point cloud registration using an ICP-type algorithm, the time efficiency of registration is also an important reference index; when analyzing registration results, the time complexity of different methods must therefore be quantitatively evaluated as well.
(2)
Error analysis based on reference points. The range of 3D point clouds of a geographical scene obtained by LiDAR is generally large, especially for ALS data; calculating the offset distance of every registered point would therefore involve a large volume of computation. Computational complexity can be effectively reduced by selecting reference points from the point clouds and calculating the offset distances between them. Before scanning, Yang et al. manually placed objects in the scanning scene, and information on these objects was obtained in LiDAR point clouds from different TLS stations [48]. By calculating the offset distances between the objects, the registration accuracy of their method was evaluated and compared with that of the method proposed by Dold and Brenner [93].
(3)
Error analysis based on common points. When an ALS system is used to obtain surface 3D point clouds, it is difficult to set reference targets in the scanning scene, so the previous approach is harder to apply to ALS registration results. Instead, common point clouds, such as ground points from both ALS and MLS, can be selected from the LiDAR point clouds, and the offset distance between the point clouds can then be calculated based on these common points [46,67]. Geographical scenes are unique and complex: no two scenes share exactly the same geographical landscape, and the scene at a given location also changes over time. As a result, when validating the scientific soundness and reliability of a proposed method, most researchers focus on specific scenarios and specific objects rather than natural geographical scenes.
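The offset-distance statistics used throughout these analyses (mean, maximum, minimum, and standard deviation of closest-point distances) are straightforward to compute. A pure-Python sketch of the point-to-point variant with hypothetical names; the point-to-surface variant would measure the distance to a locally fitted plane instead:

```python
import math, statistics

def offset_statistics(registered, model):
    """Point-to-point offset statistics: for each registered point, the distance
    to its closest model point; report mean, max, min, and standard deviation."""
    dists = [min(math.dist(p, q) for q in model) for p in registered]
    return {
        "mean": statistics.mean(dists),
        "max": max(dists),
        "min": min(dists),
        "std": statistics.stdev(dists) if len(dists) > 1 else 0.0,
    }
```

For large ALS clouds the inner closest-point search should of course use a spatial index rather than the brute-force scan shown here.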
It is difficult to evaluate the advantages and disadvantages of different methods because the evaluation indices used by different authors are not consistent. In order to quantitatively compare different methods, it is therefore necessary for authoritative organizations to establish standard data sets for point cloud registration together with a set of comprehensive evaluation indices. The International Society for Photogrammetry and Remote Sensing (ISPRS) has established standard data sets for 3D building reconstruction and surface-cover classification, which has helped standardize research in these fields.

6. Comparison of Different Point Cloud Registration Methods

Geographical scenes contain a large number of features, especially in urban space, where widely distributed buildings, roads, and transport facilities provide many point, line, and surface features. Such features can be used to quickly achieve registration between different point clouds. Feature-based registration methods are usually suited to coarse registration, which provides a good initial position for fine registration and effectively reduces the computational demands of point cloud registration. A key process in this feature-based approach is feature extraction, which directly affects final registration accuracy. Although existing feature-based methods can achieve good results by searching for conjugate point, line, or surface features, it remains difficult to use them for large-scale LiDAR point cloud registration, because it is difficult to guarantee that extracted features are evenly distributed over the global extent. Since point clouds are irregular and discrete, point-based feature methods are more sensitive to point cloud density and noise than line-based or surface-based methods. At present, most line-feature methods use lines obtained from building point clouds to calculate conversion parameters, which is relatively difficult in areas with few buildings. Surface-based feature methods require large overlap between different LiDAR point clouds in order to locate conjugate surface features. In addition, most feature-based registration studies use local features of point clouds, and there is little research on the use of global features. Global features characterize the overall properties of point clouds, while local features only represent neighborhood characteristics. The feature-based approach must thus maintain a balance between feature distinctiveness, method stability, and time efficiency [154].
Most existing feature-based registration methods can only be used to achieve initial registration. In contrast, the iterative approximation method, the random sample consensus method, and the normal distribution transform model are widely used for fine registration. Because LiDAR point cloud registration using the ICP algorithm proceeds through iteration, requirements for the initial position of the point clouds are relatively high; when the initial position is poor, it is difficult to obtain a globally optimal solution. The ICP algorithm also requires high point cloud density; when density is low, registration errors may occur in the search for the closest point. In addition, the time complexity of the ICP algorithm is generally high. Selecting effective features can help speed up convergence and reduce registration time [75]. However, even effective features cannot entirely avoid potential errors when locating the closest point. Such problems have been discussed at the target level [155] and at the level of local features of a computed point [153].
An important process in RANSAC-based registration is the continuous filtering of registration features during point cloud registration, with the optimal registration features used to solve for the conversion parameters. RANSAC has been widely used for point cloud registration and can achieve good results even when overlapping areas are small. However, the method requires iterative sampling and calculation of point cloud consistency, and the number of iterations significantly influences registration speed and accuracy. If the number of iterations is too high, convergence is relatively slow; if it is too low, the samples selected may be poor, making it difficult to obtain the desired registration results.
Although the NDT algorithm has been widely used for 2D registration, it is rarely used in 3D LiDAR research. The NDT algorithm does not require knowledge of corresponding point relationships or extraction of registration features from the point clouds; consequently, its computational efficiency and registration precision are high. However, the method has a significant drawback in that the cost function is not continuous. Since the method first divides the point clouds into grids and then calculates a Gaussian distribution within each grid cell, the discontinuous cost function cannot guarantee high-precision conversion parameters. If the grid size is too large, final registration accuracy is difficult to guarantee; if it is too small, the probability distribution function in each voxel cannot accurately characterize the surface features. A multi-scale method could effectively solve the problem of determining the grid cell scale. In image-assisted point cloud registration, most registration methods remain feature-based. The use of GNSS and target data depends on the known locations of auxiliary data; the principle is simple and easy to implement, but the degree of automation is often not high. Table 6 compares these point cloud registration methods.
Additionally, we compare the applications and performance of the different point cloud registration methods in Table 7. The registration methods are classified according to the coarse-to-fine strategy, their applications are characterized by experimental environment, and their performance is compared based on the deviation between the reference and registered data; the experimental environment, experimental data, and deviation of each method are listed as reported in the corresponding paper.

7. Further Developments in LiDAR Data Registration

7.1. LiDAR Data Registration for Full Space

In order to obtain multi-phase, comprehensive 3D spatial information on the Earth's surface, and even on other celestial bodies, LiDAR systems would need to be mountable on various platforms and able to acquire geospatial data whenever needed. The registration of LiDAR data will therefore develop in two directions: micro-refinement and macro-globalization.
In micro-refinement, the development of LiDAR data registration will include indoor/outdoor and ground/underground registration. Registration can be extended from exterior spaces to interior spaces, and can even integrate both. With the maturation of airborne and vehicle-borne LiDAR detection technology, acquisition of 3D geographical information is becoming increasingly common. More recently, with the development of miniaturized, mobile ground LiDAR scanners, fast scans of indoor spaces have become possible. Global detection of outdoor and indoor 3D space can be achieved through point cloud data registration, which can improve the management of small indoor spaces. It will also become possible to obtain point cloud data of the full human living space by integrating point cloud data from underground and underwater spaces.
At present, indoor LiDAR data registration mainly relies on walls, ceilings, and floors [156], or on key point descriptors [157]. Algorithms have sought to improve not only accuracy but also efficiency, so as to meet the requirements of building reconstruction and real-time acquisition of building information. Meanwhile, large-scale outdoor LiDAR data registration has attracted considerable attention, so that LiDAR data sets of different scales can be combined for applications such as object extraction [22], change detection [158], and scene reconstruction [159]. Although indoor and outdoor data registration serve somewhat different applications, the combination of indoor and outdoor scenes constitutes the full space in which we live. The development of indoor/outdoor registration may therefore lie in integrated indoor-outdoor registration, in which the full space is not only the registration target but also provides the constraints for adjusting the results of the indoor and outdoor scenes.
Regarding macro-globalization, LiDAR data registration will continue to expand from regional to global space, and even to interplanetary space. The development of point cloud integration technology, together with the gradual maturation of point cloud acquisition from space-borne stereo imagery, can overcome the large-footprint and wide-spacing limitations of space-borne point cloud data. Consequently, registration of satellite point cloud data with airborne, vehicle-borne, and other multi-platform LiDAR data becomes possible. By obtaining high-precision 3D detailed features, we also gain the ability to solve large-scale problems and enhance resource assessment applications, such as macro-scale forest resource surveys. In addition, with the advancement of planetary exploration, the development of point cloud registration methods that can be used beyond Earth will aid analysis of the spatial distribution of landscapes on other planets, moons, and asteroids.

7.2. New Types of LiDAR Data Registration

At present, the LiDAR systems used in most areas are small-footprint, discrete-return systems. Because the signal received by these systems consists of discrete single- or multiple-pulse echo information, their ability to characterize the vertical structure and physical characteristics of ground surface objects is limited, restricting their application in some fields. With the development of improved LiDAR sensors, full-waveform LiDAR systems have emerged. Full-waveform LiDAR adds all-digital waveform recording technology to traditional LiDAR, allowing real-time recording of all or part of the reflected laser echo waveform. Mallet and Bretar [160] reviewed four aspects of full-waveform LiDAR: system introduction, processing methods, quantitative analysis, and applications.
Full-waveform LiDAR systems have been mounted on satellites, aircraft, cars, and other platforms. The point cloud information obtained contains all-digital waveform data on ground objects, so richer quantitative parameters can be obtained by performing laser signal processing and information mining directly on the waveform. A key element of processing is waveform decomposition, using methods such as Gaussian decomposition, deconvolution, and empirical models [161,162]. Relative to discrete LiDAR point clouds, full-waveform LiDAR has a stronger ability to describe object structure and has been widely used in the study of forests and urban areas. Forest studies include the estimation of forest parameters [163,164,165] and the modeling of forested areas [166,167]. In urban space, the use of full-waveform LiDAR to study the distribution and structure of urban elements is still uncommon, mainly because multi-pulse signals only form when the laser beam reaches the edge of a building. A small number of studies have focused on the distinction between different materials and the classification of different ground objects [168,169,170].
At present, there are few studies on the registration of full-waveform LiDAR; only two related studies were found in the Scopus database using the search terms “registrat *”, “full-waveform” and “LiDAR”. Although the 3D spatial distribution of full-waveform LiDAR point clouds is similar to that of traditional discrete LiDAR point clouds, further study is needed to make full use of the all-digital waveform data and to achieve more accurate registration. Such registration would take full advantage of the characteristics of full-waveform LiDAR point clouds, including their high density, strong stratification, higher coordinate accuracy, and richer features, further accelerating the application of full-waveform LiDAR to forests and urban space.
The development of new hardware, including surface-scan, line-scan, active/passive laser, and femtosecond LiDAR, also presents opportunities for LiDAR data registration. The dual-band LiDAR developed in recent years superimposes near-infrared band detection on the blue-green band; it not only measures 3D information but can simultaneously obtain water depth and underwater terrain information, overcoming the inability of the infrared band to effectively penetrate water. The emerging planar-array LiDAR offers large grid density and rapid long-distance measurement, overcoming the limitation that conventional LiDAR cannot image long-range dynamic targets; however, issues of low resolution and poor signal-to-noise ratio remain. The development of multi-spectral/hyperspectral LiDAR makes it possible to obtain rich terrain spectral information while detecting 3D surface information.

7.3. Technical Development of LiDAR Data Registration

The presently-used methods of point cloud data registration are mainly coarse and fine registration methods. These two approaches will likely remain in wide use, and registration accuracy will continue to improve. With the increasing ability to obtain point cloud data from different complex environments, it becomes necessary to test the sensitivity, robustness, and accuracy of different registration methods on data of differing complexity. Furthermore, as point cloud registration moves toward large-scale scenes, great attention must be paid to the efficiency of point cloud registration for specific engineering applications. Current methods of improving registration efficiency focus mainly on point cloud storage and indexing, including the use of octrees, quadtrees, and R-trees, and on the development of registration rules or extraction features. Data mining is an integral part of the knowledge discovery framework, which uses algorithms to search for hidden information in large volumes of data and eventually construct a knowledge model. Data-mining techniques have been applied to change detection based on remote sensing imagery, to object classification, and to other research areas. If spatial data-mining technology were applied to the point cloud registration process, making effective use of the information hidden in the data, registration efficiency could be greatly improved while also increasing registration accuracy. Such studies may become common in future.
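The storage-and-indexing idea can be illustrated with the simplest such structure, a uniform grid (spatial hash); octrees and R-trees refine the same principle with adaptive cells. This pure-Python sketch (an illustrative class of our own) restricts a closest-point query to the cells around the query point:

```python
import math
from collections import defaultdict

class GridIndex:
    """Uniform-grid spatial index for 2D points: closest-point queries inspect
    only the query cell and its neighbors instead of the whole cloud."""
    def __init__(self, points, cell=1.0):
        self.cell = cell
        self.cells = defaultdict(list)
        for p in points:
            self.cells[self._key(p)].append(p)

    def _key(self, p):
        return (math.floor(p[0] / self.cell), math.floor(p[1] / self.cell))

    def nearest(self, p, max_rings=5):
        """Search outward ring by ring; stop once a ring beyond the best
        distance cannot contain a closer point. Returns None if nothing
        is found within max_rings."""
        kx, ky = self._key(p)
        best, best_d = None, float("inf")
        for r in range(max_rings + 1):
            # Cells in ring r are at least (r-1)*cell away from the query.
            if best is not None and (r - 1) * self.cell > best_d:
                break
            for dx in range(-r, r + 1):
                for dy in range(-r, r + 1):
                    if max(abs(dx), abs(dy)) != r:
                        continue          # only the ring, not the filled square
                    for q in self.cells.get((kx + dx, ky + dy), ()):
                        d = math.dist(p, q)
                        if d < best_d:
                            best, best_d = q, d
        return best
```

Against a brute-force scan over n points, each query now touches only the handful of points in nearby cells, which is what makes ICP-style closest-point loops tractable on large scenes.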

8. Summary

With improvements in spatial data acquisition capabilities, multi-platform and multi-angle data have attracted increasing attention and have been widely used in various fields. The application of integrated multi-platform, multi-angle LiDAR data in urban spaces, forest areas, and polar environments has become an important area of research. This paper has presented a comprehensive review of LiDAR data registration from the perspective of photogrammetry and remote sensing, addressing a gap in the literature. LiDAR equipment can be used to obtain a wide range of 3D surface information, but because the geographical environment is relatively complex and subject to rapid change, point clouds are very susceptible to noise. Given this, and the discrete character of point clouds, the point cloud registration process is relatively complex; consequently, most research has adopted a coarse-to-fine registration strategy, achieving good registration outcomes.
In this paper, we focused on this coarse-to-fine strategy and categorized existing registration methods into two major categories, namely coarse and fine LiDAR data registration methods. Based on the feature used, coarse LiDAR data registration methods can be classified into point-based, line-based, surface-based, and other methods. For fine registration, iterative approximation methods, random sample consensus methods, normal distribution transform methods, and methods using auxiliary data are extensively used. Classification by method allows an in-depth understanding of the principles and characteristics of each approach, so that an appropriate registration method can be selected for a given data source; it is also helpful for understanding whether a selected method is universal. Through an effective combination of initial and fine registration, high-quality point cloud registration can be achieved. With improvements in LiDAR equipment and the expanding scope of data access, point cloud data volumes have increased dramatically. In large-scale data registration, the data structure and storage of point clouds must therefore be considered. In particular, when using a feature-based approach for point cloud registration, favorable features should be selected to facilitate more efficient registration, and using entire LiDAR point clouds as input to iterative approximation and random sample consensus methods should be avoided.
Although LiDAR point cloud registration technology is relatively mature, there is still a need for an objective evaluation system to provide quantitative analysis of different methods and to promote high-quality registration methods. The establishment of standard data sets, evaluation indicators, and automatic evaluation platforms by relevant authoritative international organizations in the fields of photogrammetry and remote sensing will promote further research into point cloud registration. To improve point cloud computing efficiency, LiDAR point clouds should be registered at large scale, and the effectiveness and reliability of registration methods verified, which will help promote the application of LiDAR data to solving practical problems.

Author Contributions

L.C. and Y.C. proposed the major framework for the review. S.C. and X.L. organized the literature and wrote the manuscript; H.X. and Y.W. assisted in collation of the literature. Y.C. and M.L. assisted with refining the framework design and manuscript writing.

Funding

This work is supported by the National Natural Science Foundation of China (Grant No. 41622109, 41501456, 41371017).

Acknowledgments

Sincere thanks are given for the comments and contributions of anonymous reviewers and members of the editorial team.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Anuta, P.E. Spatial registration of multispectral and multitemporal digital imagery using fast Fourier transform techniques. IEEE Trans. Geosci. Electron. 1970, 8, 353–368.
  2. Smith, S.M.; Brady, J.M. SUSAN—A new approach to low level image processing. Int. J. Comput. Vis. 1997, 23, 45–78.
  3. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  4. Matas, J.; Chum, O.; Urban, M.; Pajdla, T. Robust wide-baseline stereo from maximally stable extremal regions. Image Vis. Comput. 2004, 22, 761–767.
  5. Bay, H.; Tuytelaars, T.; Van Gool, L. Surf: Speeded up robust features. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417.
  6. Brown, L.G. A survey of image registration techniques. ACM Comput. Surv. 1992, 24, 325–376.
  7. Fonseca, L.M.; Manjunath, B.S. Registration techniques for multisensor remotely sensed imagery. Photogramm. Eng. Remote Sens. 1996, 62, 1049–1056.
  8. Zitova, B.; Flusser, J. Image registration methods: A survey. Image Vis. Comput. 2003, 21, 977–1000.
  9. Oliveira, F.P.; Tavares, J.M.R. Medical image registration: A review. Comput. Method Biomech. 2014, 17, 73–93.
  10. Horn, B.K. Closed-form solution of absolute orientation using unit quaternions. JOSA A 1987, 4, 629–642.
  11. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
  12. Pomerleau, F.; Colas, F.; Siegwart, R. A Review of Point Cloud Registration Algorithms for Mobile Robotics. Found. Trends Robot. 2015, 4, 1–104.
  13. Torabzadeh, H.; Morsdorf, F.; Schaepman, M.E. Fusion of imaging spectroscopy and airborne laser scanning data for characterization of forest ecosystems—A review. ISPRS J. Photogramm. 2014, 97, 25–35.
  14. Tomljenovic, I.; Höfle, B.; Tiede, D.; Blaschke, T. Building extraction from airborne laser scanning data: An analysis of the state of the art. Remote Sens. 2015, 7, 3826–3862.
  15. Wang, Y.; Cheng, L.; Chen, Y.; Wu, Y.; Li, M. Building point detection from vehicle-borne LiDAR data based on voxel group and horizontal hollow analysis. Remote Sens. 2016, 8, 419.
  16. Dong, L.; Shan, J. A comprehensive review of earthquake-induced building damage detection with remote sensing techniques. ISPRS J. Photogramm. 2013, 84, 85–99.
  17. Cheng, L.; Xu, H.; Li, S.; Chen, Y.; Zhang, F.; Li, M. Use of LiDAR for calculating solar irradiance on roofs and façades of buildings at city scale: Methodology, validation, and analysis. ISPRS J. Photogramm. 2018, 138, 12–29.
  18. Salvi, J.; Matabosch, C.; Fofi, D.; Forest, J. A review of recent range image registration methods with accuracy evaluation. Image Vis. Comput. 2007, 25, 578–596.
  19. Tam, G.K.; Cheng, Z.; Lai, Y.; Langbein, F.C.; Liu, Y.; Marshall, D.; Martin, R.R.; Sun, X.; Rosin, P.L. Registration of 3D point clouds and meshes: A survey from rigid to nonrigid. IEEE Trans. Vis. Comput. Graph. 2013, 19, 1199–1217.
  20. Chen, C. Searching for intellectual turning points: Progressive knowledge domain visualization. Proc. Natl. Acad. Sci. USA 2004, 101, 5303–5310.
  21. Wulder, M.A.; White, J.C.; Nelson, R.F.; Naesset, E.; Orka, H.O.; Coops, N.C.; Hilker, T.; Bater, C.W.; Gobakken, T. LiDAR sampling for large-area forest characterization: A review. Remote Sens. Environ. 2012, 121, 196–209.
  22. Yan, W.Y.; Shaker, A.; El-Ashmawy, N. Urban land cover classification using airborne LiDAR data: A review. Remote Sens. Environ. 2015, 158, 295–310.
  23. Jaboyedoff, M.; Oppikofer, T.; Abellán, A.; Derron, M.; Loye, A.; Metzger, R.; Pedrazzini, A. Use of LiDAR in landslide investigations: A review. Nat. Hazards 2012, 61, 5–28.
  24. Groeger, G.; Pluemer, L. CityGML—Interoperable semantic 3D city models. ISPRS J. Photogramm. 2012, 71, 12–33.
  25. Quackenbush, L.J.; Im, I.; Zuo, Y. Road extraction: A review of LiDAR-focused studies. In Remote Sensing of Natural Resources; Wang, G., Weng, Q., Eds.; CRC Press: Boca Raton, FL, USA, 2013.
  26. Deems, J.S.; Painter, T.H.; Finnegan, D.C. LiDAR measurement of snow depth: A review. J. Glaciol. 2013, 59, 467–479.
  27. Bhardwaj, A.; Sam, L.; Bhardwaj, A.; Martín-Torres, F.J. LiDAR remote sensing of the cryosphere: Present applications and future prospects. Remote Sens. Environ. 2016, 177, 125–143.
  28. Wang, X.; Cheng, X.; Gong, P.; Huang, H.; Li, Z.; Li, X. Earth science applications of ICESat/GLAS: A review. Int. J. Remote Sens. 2011, 32, 8837–8864.
  29. Khan, S.A.; Aschwanden, A.; Bjørk, A.A.; Wahr, J.; Kjeldsen, K.K.; Kjær, K.H. Greenland ice sheet mass balance: A review. Rep. Prog. Phys. 2015, 78, 46801.
  30. Hohenthal, J.; Alho, P.; Hyyppä, J.; Hyyppä, H. Laser scanning applications in fluvial studies. Prog. Phys. Geogr. Earth Environ. 2011, 35, 782–809.
  31. Large, A.R.; Heritage, G.L.; Charlton, M.E. Laser scanning: The future. Laser Scan. Environ. Sci. 2009, 262–271.
  32. Baltsavias, E.P. Airborne laser scanning: Basic relations and formulas. ISPRS J. Photogramm. 1999, 54, 199–214. [Google Scholar] [CrossRef]
  33. Filin, S. Recovery of systematic biases in laser altimetry data using natural surfaces. Photogramm. Eng. Remote Sens. 2003, 69, 1235–1242. [Google Scholar] [CrossRef]
  34. Habib, A.; Kersting, A.P.; Bang, K.I.; Lee, D. Alternative methodologies for the internal quality control of parallel LiDAR strips. IEEE Trans. Geosci. Remote Sens. 2010, 48, 221–236. [Google Scholar] [CrossRef]
  35. Zhang, Y.; Xiong, X.; Zheng, M.; Huang, X. LiDAR Strip Adjustment Using Multifeatures Matched with Aerial Images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 976–987. [Google Scholar] [CrossRef]
  36. Lee, J.; Yu, K.; Kim, Y.; Habib, A.F. Adjustment of discrepancies between LiDAR data strips using linear features. IEEE Geosci. Remote Sens. Lett. 2007, 4, 475–479. [Google Scholar] [CrossRef]
  37. Rentsch, M.; Krzystek, P. LiDAR strip adjustment with automatically reconstructed roof shapes. Photogramm. Rec. 2012, 27, 272–292. [Google Scholar] [CrossRef]
  38. Maas, H. Methods for measuring height and planimetry discrepancies in airborne laserscanner data. Photogramm. Eng. Remote Sens. 2002, 68, 933–940. [Google Scholar]
  39. Habib, A.; Bang, K.I.; Kersting, A.P.; Lee, D. Error budget of LiDAR systems and quality control of the derived data. Photogramm. Eng. Remote Sens. 2009, 75, 1093–1108. [Google Scholar] [CrossRef]
  40. Hebel, M.; Stilla, U. Simultaneous calibration of ALS systems and alignment of multiview LiDAR scans of urban areas. IEEE Trans. Geosci. Remote Sens. 2012, 50, 2364–2379. [Google Scholar] [CrossRef]
  41. Kumari, P.; Carter, W.E.; Shrestha, R.L. Adjustment of systematic errors in ALS data through surface matching. Adv. Space Res. 2011, 47, 1851–1864. [Google Scholar] [CrossRef]
  42. Skaloud, J.; Lichti, D. Rigorous approach to bore-sight self-calibration in airborne laser scanning. ISPRS J. Photogramm. 2006, 61, 47–59. [Google Scholar] [CrossRef]
  43. Theiler, P.W.; Schindler, K. Automatic registration of terrestrial laser scanner point clouds using natural planar surfaces. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 3, 173–178. [Google Scholar] [CrossRef]
  44. Guo, Y.; Bennamoun, M.; Sohel, F.; Lu, M.; Wan, J. 3D object recognition in cluttered scenes with local surface features: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 2270–2287. [Google Scholar]
  45. Restrepo, M.I.; Ulusoy, A.O.; Mundy, J.L. Evaluation of feature-based 3-D registration of probabilistic volumetric scenes. ISPRS J. Photogramm. 2014, 98, 1–18. [Google Scholar] [CrossRef]
  46. Cheng, L.; Wu, Y.; Tong, L.; Chen, Y.; Li, M. Hierarchical Registration Method for Airborne and Vehicle LiDAR Point Cloud. Remote Sens. 2015, 7, 13921–13944. [Google Scholar] [CrossRef]
  47. Zhang, W.; Chen, Y.; Wang, H.; Chen, M.; Wang, X.; Yan, G. Efficient registration of terrestrial LiDAR scans using a coarse-to-fine strategy for forestry applications. Agric. For. Meteorol. 2016, 225, 8–23. [Google Scholar] [CrossRef]
  48. Yang, B.; Dong, Z.; Liang, F.; Liu, Y. Automatic registration of large-scale urban scene point clouds based on semantic feature points. ISPRS J. Photogramm. 2016, 113, 43–58. [Google Scholar] [CrossRef]
  49. Cheng, L.; Tong, L.; Li, M.; Liu, Y. Semi-automatic registration of airborne and terrestrial laser scanning data using building corner matching with boundaries as reliability check. Remote Sens. 2013, 5, 6260–6283. [Google Scholar] [CrossRef]
  50. Clode, S.; Rottensteiner, F.; Kootsookos, P.; Zelniker, E. Detection and vectorization of roads from LiDAR data. Photogramm. Eng. Remote Sens. 2007, 73, 517–535. [Google Scholar] [CrossRef]
  51. Zhang, K.; Yan, J.; Chen, S. Automatic construction of building footprints from airborne LiDAR data. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2523–2533. [Google Scholar] [CrossRef]
  52. Wendt, A. A concept for feature based data registration by simultaneous consideration of laser scanner data and photogrammetric images. ISPRS J. Photogramm. 2007, 62, 122–134. [Google Scholar] [CrossRef]
  53. Weinmann, M. Point Cloud Registration. In Reconstruction and Analysis of 3D Scenes: From Irregularly Distributed 3D Points to Object Classes; Weinmann, M., Ed.; Springer International Publishing: Cham, Switzerland, 2016; pp. 55–110. [Google Scholar]
  54. Rönnholm, P.; Haggrén, H. Registration of laser scanning point clouds and aerial images using either artificial or natural tie features. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 3, 63–68. [Google Scholar] [CrossRef]
  55. Rusu, R.B.; Blodow, N.; Beetz, M. Fast Point Feature Histograms (FPFH) for 3D Registration; IEEE: Piscataway, NJ, USA, 2009; pp. 3212–3217. [Google Scholar]
  56. Johnson, A.E.; Hebert, M. Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 433–449. [Google Scholar] [CrossRef]
  57. Barnea, S.; Filin, S. Registration of terrestrial laser scans via image based features. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2007, 36, 26–31. [Google Scholar]
  58. Chen, H.; Bhanu, B. 3D free-form object recognition in range images using local surface patches. Pattern Recognit. Lett. 2007, 28, 1252–1262. [Google Scholar] [CrossRef]
  59. Zhong, Y. Intrinsic Shape Signatures: A Shape Descriptor for 3D Object Recognition; IEEE: Piscataway, NJ, USA, 2009; pp. 689–696. [Google Scholar]
  60. Mian, A.; Bennamoun, M.; Owens, R. On the repeatability and quality of keypoints for local feature-based 3d object retrieval from cluttered scenes. Int. J. Comput. Vis. 2010, 89, 348–361. [Google Scholar] [CrossRef]
  61. Sun, J.; Ovsjanikov, M.; Guibas, L. A concise and provably informative multi-scale signature based on heat diffusion. Comput. Graph. Forum 2009, 28, 1383–1392. [Google Scholar]
  62. Unnikrishnan, R.; Hebert, M. Multi-Scale Interest Regions from Unorganized Point Clouds; IEEE: Piscataway, NJ, USA, 2008; pp. 1–8. [Google Scholar]
  63. Zaharescu, A.; Boyer, E.; Varanasi, K.; Horaud, R. Surface Feature Detection and Description with Applications to Mesh Matching; IEEE: Piscataway, NJ, USA, 2009; pp. 373–380. [Google Scholar]
  64. Sipiran, I.; Bustos, B. Harris 3D: A robust extension of the Harris operator for interest point detection on 3D meshes. Visual Comput. 2011, 27, 963. [Google Scholar] [CrossRef]
  65. Tombari, F.; Salti, S.; Di Stefano, L. Performance evaluation of 3D keypoint detectors. Int. J. Comput. Vis. 2013, 102, 198–220. [Google Scholar] [CrossRef]
  66. Hänsch, R.; Weber, T.; Hellwich, O. Comparison of 3D interest point detectors and descriptors for point cloud fusion. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 57. [Google Scholar] [CrossRef]
  67. Cheng, L.; Tong, L.; Wu, Y.; Chen, Y.; Li, M. Shiftable leading point method for high accuracy registration of airborne and terrestrial LiDAR data. Remote Sens. 2015, 7, 1915–1936. [Google Scholar] [CrossRef]
  68. Weber, T.; Hänsch, R.; Hellwich, O. Automatic registration of unordered point clouds acquired by Kinect sensors using an overlap heuristic. ISPRS J. Photogramm. 2015, 102, 96–109. [Google Scholar] [CrossRef]
  69. Barnea, S.; Filin, S. Keypoint based autonomous registration of terrestrial laser point-clouds. ISPRS J. Photogramm. 2008, 63, 19–35. [Google Scholar] [CrossRef]
  70. Eo, Y.D.; Pyeon, M.W.; Kim, S.W.; Kim, J.R.; Han, D.Y. Coregistration of terrestrial LiDAR points by adaptive scale-invariant feature transformation with constrained geometry. Autom. Constr. 2012, 25, 49–58. [Google Scholar] [CrossRef]
  71. He, Y.; Mei, Y. An efficient registration algorithm based on spin image for LiDAR 3D point cloud models. Neurocomputing 2015, 151, 354–363. [Google Scholar] [CrossRef]
  72. Wang, Z.; Brenner, C. Point based registration of terrestrial laser data using intensity and geometry features. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 583–590. [Google Scholar]
  73. Kang, Z.; Li, J.; Zhang, L.; Zhao, Q.; Zlatanova, S. Automatic registration of terrestrial laser scanning point clouds using panoramic reflectance images. Sensors 2009, 9, 2621–2646. [Google Scholar] [CrossRef] [PubMed]
  74. Lv, F.; Ren, K. Automatic registration of airborne LiDAR point cloud data and optical imagery depth map based on line and points features. Infrared Phys. Technol. 2015, 71, 457–463. [Google Scholar] [CrossRef]
  75. Gressin, A.; Mallet, C.; Demantké, J.; David, N. Towards 3D LiDAR point cloud registration improvement using optimal neighborhood knowledge. ISPRS J. Photogramm. 2013, 79, 240–251. [Google Scholar] [CrossRef]
  76. Yang, B.; Chen, C. Automatic registration of UAV-borne sequent images and LiDAR data. ISPRS J. Photogramm. 2015, 101, 262–274. [Google Scholar] [CrossRef]
  77. Aiger, D.; Mitra, N.J.; Cohen-Or, D. 4-Points Congruent Sets for Robust Pairwise Surface Registration; ACM: New York, NY, USA, 2008; p. 85. [Google Scholar]
  78. Corsini, M.; Dellepiane, M.; Ganovelli, F.; Gherardi, R.; Fusiello, A.; Scopigno, R. Fully automatic registration of image sets on approximate geometry. Int. J. Comput. Vis. 2013, 102, 91–111. [Google Scholar] [CrossRef] [Green Version]
  79. Theiler, P.W.; Wegner, J.D.; Schindler, K. Keypoint-based 4-Points Congruent Sets—Automated marker-less registration of laser scans. ISPRS J. Photogramm. 2014, 96, 149–163. [Google Scholar] [CrossRef]
  80. Habib, A.; Ghanma, M.; Morgan, M.; Al-Ruzouq, R. Photogrammetric and LiDAR data registration using linear features. Photogramm. Eng. Remote Sens. 2005, 71, 699–707. [Google Scholar] [CrossRef]
  81. Al-Durgham, K.; Habib, A. Association-Matrix-Based Sample Consensus Approach for Automated Registration of Terrestrial Laser Scans Using Linear Features. Photogramm. Eng. Remote Sens. 2014, 80, 1029–1039. [Google Scholar] [CrossRef]
  82. Cheng, L.; Gong, J.; Li, M.; Liu, Y. 3D Building Model Reconstruction from Multi-view Aerial Imagery and LiDAR Data. Photogramm. Eng. Remote Sens. 2011, 77, 125–139. [Google Scholar] [CrossRef]
  83. Matkan, A.A.; Hajeb, M.; Sadeghian, S. Road extraction from LiDAR data using support vector machine classification. Photogramm. Eng. Remote Sens. 2014, 80, 409–422. [Google Scholar] [CrossRef]
  84. Yang, B.; Fang, L.; Li, J. Semi-automated extraction and delineation of 3D roads of street scene from mobile laser scanning point clouds. ISPRS J. Photogramm. 2013, 79, 80–93. [Google Scholar] [CrossRef]
  85. Hu, X.; Li, Y.; Shan, J.; Zhang, J.; Zhang, Y. Road centerline extraction in complex urban scenes from lidar data based on multiple features. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7448–7456. [Google Scholar]
  86. Fangning, H.; Ayman, H. A Closed-Form Solution for Coarse Registration of Point Clouds Using Linear Features. J. Surv. Eng. 2016. [Google Scholar] [CrossRef]
  87. Hansen, W.; Gross, H.; Thoennessen, U. Line-based registration of terrestrial and aerial LiDAR data. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 311, 161–166. [Google Scholar]
  88. Yang, B.; Zang, Y.; Dong, Z.; Huang, R. An automated method to register airborne and terrestrial laser scanning point clouds. ISPRS J. Photogramm. 2015, 109, 62–76. [Google Scholar] [CrossRef]
  89. Monserrat, O.; Crosetto, M. Deformation measurement using terrestrial laser scanning data and least squares 3D surface matching. ISPRS J. Photogramm. 2008, 63, 142–154. [Google Scholar] [CrossRef]
  90. Grant, D.; Bethel, J.; Crawford, M. Point-to-plane registration of terrestrial laser scans. ISPRS J. Photogramm. 2012, 72, 16–26. [Google Scholar] [CrossRef]
  91. Dold, C.; Brenner, C. Automatic matching of terrestrial scan data as a basis for the generation of detailed 3D city models. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35, 1091–1096. [Google Scholar]
  92. Von Hansen, W. Robust automatic marker-free registration of terrestrial scan data. Proc. Photogramm. Comput. Vis. 2006, 36, 105–110. [Google Scholar]
  93. Dold, C.; Brenner, C. Registration of terrestrial laser scanning data using planar patches and image data. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 78–83. [Google Scholar]
  94. Gruen, A.; Akca, D. Least squares 3D surface and curve matching. ISPRS J. Photogramm. 2005, 59, 151–174. [Google Scholar] [CrossRef]
  95. Akca, D. Matching of 3D surfaces and their intensities. ISPRS J. Photogramm. 2007, 62, 112–121. [Google Scholar] [CrossRef]
  96. Akca, D. Co-registration of surfaces by 3D least squares matching. Photogramm. Eng. Remote Sens. 2010, 76, 307–318. [Google Scholar] [CrossRef]
  97. Ge, X.; Wunderlich, T. Surface-based matching of 3D point clouds with variable coordinates in source and target system. ISPRS J. Photogramm. 2016, 111, 1–12. [Google Scholar] [CrossRef]
  98. Brenner, C.; Dold, C.; Ripperda, N. Coarse orientation of terrestrial laser scans in urban environments. ISPRS J. Photogramm. 2008, 63, 4–18. [Google Scholar] [CrossRef]
  99. Zhang, D.; Huang, T.; Li, G.; Jiang, M. Robust algorithm for registration of building point clouds using planar patches. J. Surv. Eng. 2011, 138, 31–36. [Google Scholar] [CrossRef]
  100. Wu, H.; Scaioni, M.; Li, H.; Li, N.; Lu, M.; Liu, C. Feature-constrained registration of building point clouds acquired by terrestrial and airborne laser scanners. J. Appl. Remote Sens. 2014, 8, 83587. [Google Scholar] [CrossRef]
  101. Jaw, J.J.; Chuang, T.Y. Feature-Based Registration of Terrestrial and Aerial LiDAR Point Clouds towards Complete 3D Scene. In Proceedings of the 29th Asian Conference on Remote Sensing, Colombo, Sri Lanka, 10–14 November 2008; pp. 1295–1300. [Google Scholar]
  102. Yang, B.; Zang, Y. Automated registration of dense terrestrial laser-scanning point clouds using curves. ISPRS J. Photogramm. 2014, 95, 109–121. [Google Scholar] [CrossRef]
  103. Rabbani, T.; Dijkman, S.; van den Heuvel, F.; Vosselman, G. An integrated approach for modelling and global registration of point clouds. ISPRS J. Photogramm. 2007, 61, 355–370. [Google Scholar] [CrossRef]
  104. Franaszek, M.; Cheok, G.S.; Witzgall, C. Fast automatic registration of range images from 3D imaging systems using sphere targets. Autom. Constr. 2009, 18, 265–274. [Google Scholar] [CrossRef]
  105. Weinmann, M.; Jutzi, B.; Hinz, S.; Mallet, C. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers. ISPRS J. Photogramm. 2015, 105, 286–304. [Google Scholar] [CrossRef]
  106. Yu, F.; Xiao, J.; Funkhouser, T. Semantic alignment of LiDAR data at city scale. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  107. Faugeras, O.D.; Hebert, M. The representation, recognition, and locating of 3-D objects. Int. J. Robot. Res. 1986, 5, 27–52. [Google Scholar] [CrossRef]
  108. Bergevin, R.; Soucy, M.; Gagnon, H.; Laurendeau, D. Towards a general multi-view registration technique. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 540–547. [Google Scholar] [CrossRef]
  109. Dorai, C.; Weng, J.; Jain, A.K. Optimal registration of object views using range data. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 1131–1138. [Google Scholar] [CrossRef]
  110. Eggert, D.W.; Fitzgibbon, A.W.; Fisher, R.B. Simultaneous registration of multiple range views for use in reverse engineering of CAD models. Comput. Vis. Image Underst. 1998, 69, 253–272. [Google Scholar] [CrossRef]
  111. Sharp, G.C.; Lee, S.W.; Wehe, D.K. ICP registration using invariant features. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 90–102. [Google Scholar] [CrossRef]
  112. Yamany, S.M.; Farag, A.A. Surface signatures: An orientation independent free-form surface representation scheme for the purpose of objects registration and matching. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 1105–1120. [Google Scholar] [CrossRef]
  113. Jiang, J.; Cheng, J.; Chen, X. Registration for 3-D point cloud using angular-invariant feature. Neurocomputing 2009, 72, 3839–3844. [Google Scholar] [CrossRef]
  114. Campbell, R.J.; Flynn, P.J. A survey of free-form object representation and recognition techniques. Comput. Vis. Image Underst. 2001, 81, 166–210. [Google Scholar] [CrossRef]
  115. Liu, Y. Automatic registration of overlapping 3D point clouds using closest points. Image Vis. Comput. 2006, 24, 762–781. [Google Scholar] [CrossRef]
  116. Díez, Y.; Roure, F.; Lladó, X.; Salvi, J. A qualitative review on 3D coarse registration methods. ACM Comput. Surv. 2015, 47, 45. [Google Scholar] [CrossRef]
  117. Blais, G.; Levine, M.D. Registering multiview range data to create 3D computer objects. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 820–824. [Google Scholar] [CrossRef]
  118. Park, S.; Subbarao, M. An accurate and fast point-to-plane registration technique. Pattern Recognit. Lett. 2003, 24, 2967–2976. [Google Scholar] [CrossRef]
  119. Mitra, N.J.; Gelfand, N.; Pottmann, H.; Guibas, L. Registration of point cloud data from a geometric optimization perspective. In Proceedings of the 2004 Eurographics/ACM Siggraph Symposium on Geometry Processing, Nice, France, 8–10 July 2004; pp. 22–31. [Google Scholar]
  120. Haralick, R.M.; Joo, H.; Lee, C.; Zhuang, X.; Vaidya, V.G.; Kim, M.B. Pose estimation from corresponding point data. IEEE Trans. Syst. Man Cybern. 1989, 19, 1426–1446. [Google Scholar] [CrossRef]
  121. Chen, Y.; Medioni, G. Object modelling by registration of multiple range images. Image Vis. Comput. 1992, 10, 145–155. [Google Scholar] [CrossRef]
  122. Maier-Hein, L.; Franz, A.M.; Dos Santos, T.R.; Schmidt, M.; Fangerau, M.; Meinzer, H.; Fitzpatrick, J.M. Convergent iterative closest-point algorithm to accomodate anisotropic and inhomogenous localization error. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1520–1532. [Google Scholar] [CrossRef] [PubMed]
  123. Zhang, X.; Glennie, C. Change detection from differential airborne LiDAR using a weighted anisotropic iterative closest point algorithm. IEEE J-STARS 2015, 8, 3338–3346. [Google Scholar] [CrossRef]
  124. Elseberg, J.; Borrmann, D.; Nüchter, A. One billion points in the cloud—An octree for efficient processing of 3D laser scans. ISPRS J. Photogramm. 2013, 76, 76–88. [Google Scholar] [CrossRef]
  125. Gong, J.; Zhu, Q.; Zhong, R.; Zhang, Y.; Xie, X. An efficient point cloud management method based on a 3D R-tree. Photogramm. Eng. Remote Sens. 2012, 78, 373–381. [Google Scholar] [CrossRef]
  126. Wu, H.; Guan, X.; Gong, J. ParaStream: A parallel streaming Delaunay triangulation algorithm for LiDAR points on multicore architectures. Comput. Geosci. 2011, 37, 1355–1363. [Google Scholar] [CrossRef]
  127. Bucksch, A.; Khoshelham, K. Localized registration of point clouds of botanic trees. IEEE Geosci. Remote Sens. Lett. 2013, 10, 631–635. [Google Scholar] [CrossRef]
  128. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  129. Chen, C.; Hung, Y.; Cheng, J. RANSAC-based DARCES: A new approach to fast automatic registration of partially overlapping range images. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 1229–1234. [Google Scholar] [CrossRef]
  130. Kim, T.; Im, Y. Automatic satellite image registration by combination of matching and random sample consensus. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1111–1117. [Google Scholar]
  131. Yang, J.; Huang, Q.; Wu, B.; Chen, J. A Remote Sensing Imagery Automatic Feature Registration Method Based on Mean-Shift; IEEE: Piscataway, NJ, USA, 2012; pp. 2364–2367. [Google Scholar]
  132. Wang, X.; Li, Y.; Wei, H.; Liu, F. An ASIFT-Based Local Registration Method for Satellite Imagery. Remote Sens. 2015, 7, 7044–7061. [Google Scholar] [CrossRef]
  133. Tarsha-Kurdi, F.; Landes, T.; Grussenmeyer, P. Extended RANSAC algorithm for automatic detection of building roof planes from LiDAR data. Photogramm. J. Finl. 2008, 21, 97–109. [Google Scholar]
  134. Xu, B.; Jiang, W.; Shan, J.; Zhang, J.; Li, L. Investigation on the weighted ransac approaches for building roof plane segmentation from LiDAR point clouds. Remote Sens. 2016, 8, 5. [Google Scholar] [CrossRef]
  135. Fontanelli, D.; Ricciato, L.; Soatto, S. A Fast Ransac-Based Registration Algorithm for Accurate Localization in Unknown Environments Using LiDAR Measurements; IEEE: Piscataway, NJ, USA, 2007; pp. 597–602. [Google Scholar]
  136. Weinmann, M.; Weinmann, M.; Hinz, S.; Jutzi, B. Fast and automatic image-based registration of TLS data. ISPRS J. Photogramm. 2011, 66, S62–S70. [Google Scholar] [CrossRef]
  137. Al-Durgham, K.; Habib, A.; Kwak, E. RANSAC approach for automated registration of terrestrial laser scans using linear features. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 2, 13–18. [Google Scholar] [CrossRef]
  138. Biber, P.; Straßer, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), Las Vegas, NV, USA, 27–31 October 2003. [Google Scholar]
  139. Takeuchi, E.; Tsubouchi, T. A 3-D Scan Matching Using Improved 3-D Normal Distributions Transform for Mobile Robotic Mapping. In Proceedings of the International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 3068–3073. [Google Scholar]
  140. Stoyanov, T.; Magnusson, M.; Lilienthal, A.J. Point Set Registration through Minimization of the L2 Distance between 3D-Ndt Models. In Proceedings of the IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 5196–5201. [Google Scholar]
  141. Das, A.; Waslander, S.L. Scan registration using segmented region growing NDT. Int. J. Robot. Res. 2014, 33, 1645–1663. [Google Scholar] [CrossRef]
  142. Ulaş, C.; Temeltaş, H. 3D multi-layered normal distribution transform for fast and long range scan matching. J. Intell. Robot. Syst. 2013, 71, 85–108. [Google Scholar] [CrossRef]
  143. Hong, H.; Lee, B.H. Key-layered normal distributions transform for point cloud registration. Electron. Lett. 2015, 51, 1986–1988. [Google Scholar] [CrossRef]
  144. Miao, Y.; Liu, Y.; Ma, H.; Jin, H. The Pose Estimation of Mobile Robot Based on Improved Point Cloud Registration. J. Intell. Robot. Syst. 2016, 13. [Google Scholar] [CrossRef]
  145. Ripperda, N.; Brenner, C. Marker-free registration of terrestrial laser scans using the normal distribution transform. Proc. ISPRS Work. Group 2005, 36, 86–91. [Google Scholar]
  146. Magnusson, M.; Lilienthal, A.; Duckett, T. Scan registration for autonomous mining vehicles using 3D-NDT. J. Field Robot. 2007, 24, 803–827. [Google Scholar] [CrossRef] [Green Version]
  147. Yang, M.Y.; Cao, Y.; McDonald, J. Fusion of camera images and laser scans for wide baseline 3D scene alignment in urban environments. ISPRS J. Photogramm. 2011, 66, S52–S61. [Google Scholar] [CrossRef]
  148. Sedaghat, A.; Ebadi, H. Remote sensing image matching based on adaptive binning SIFT descriptor. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5283–5293. [Google Scholar] [CrossRef]
  149. Han, J.; Perng, N.; Chen, H. LiDAR point cloud registration by image detection technique. IEEE Geosci. Remote Sens. Lett. 2013, 10, 746–750. [Google Scholar] [CrossRef]
  150. Avbelj, J.; Iwaszczuk, D.; Mueller, R.; Reinartz, P.; Stilla, U. Coregistration refinement of hyperspectral images and DSM: An object-based approach using spectral information. ISPRS J. Photogramm. 2015, 100, 23–34. [Google Scholar] [CrossRef] [Green Version]
  151. Abayowa, B.O.; Yilmaz, A.; Hardie, R.C. Automatic registration of optical aerial imagery to a LiDAR point cloud for generation of city models. ISPRS J. Photogramm. 2015, 106, 68–81. [Google Scholar] [CrossRef]
  152. Gobakken, T.; Næsset, E. Assessing effects of positioning errors and sample plot size on biophysical stand properties derived from airborne laser scanner data. Can. J. For. Res. 2009, 39, 1036–1052. [Google Scholar] [CrossRef]
  153. Bae, K.; Lichti, D.D. A method for automated registration of unorganised point clouds. ISPRS J. Photogramm. 2008, 63, 36–54. [Google Scholar] [CrossRef]
  154. Yang, J.; Cao, Z.; Zhang, Q. A fast and robust local descriptor for 3D point cloud registration. Inf. Sci. 2016, 346, 163–179. [Google Scholar] [CrossRef]
  155. Douillard, B.; Quadros, A.; Morton, P.; Underwood, J.P.; De Deuge, M.; Hugosson, S.; Hallström, M.; Bailey, T. Scan Segments Matching for Pairwise 3D Alignment. In Proceedings of the IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 3033–3040. [Google Scholar]
  156. Sanchez, J.; Denis, F.; Checchin, P.; Dupont, F.; Trassoudaine, L. Global Registration of 3D LiDAR Point Clouds Based on Scene Features: Application to Structured Environments. Remote Sens. 2017, 9, 1014. [Google Scholar] [CrossRef]
  157. Bueno, M.; Gonzalez-Jorge, H.; Martinez-Sanchez, J.; Lorenzo, H. Automatic point cloud coarse registration using geometric keypoint descriptors for indoor scenes. Autom. Constr. 2017, 81, 134–148. [Google Scholar] [CrossRef]
  158. Xu, H.; Cheng, L.; Li, M.; Chen, Y.; Zhong, L. Using Octrees to Detect Changes to Buildings and Trees in the Urban Environment from Airborne LiDAR Data. Remote Sens. 2015, 7, 9682–9704. [Google Scholar] [CrossRef]
  159. Lin, H.; Gao, J.; Zhou, Y.; Lu, G.; Ye, M.; Zhang, C.; Liu, L.; Yang, R. Semantic Decomposition and Reconstruction of Residential Scenes from LiDAR Data. ACM Trans. Graph. 2013, 32, 66. [Google Scholar] [CrossRef]
  160. Mallet, C.; Bretar, F. Full-waveform topographic LiDAR: State-of-the-art. ISPRS J. Photogramm. 2009, 64, 1–16. [Google Scholar] [CrossRef]
  161. Hartzell, P.J.; Glennie, C.L.; Finnegan, D.C. Empirical waveform decomposition and radiometric calibration of a terrestrial full-waveform laser scanner. IEEE Trans. Geosci. Remote Sens. 2015, 53, 162–172. [Google Scholar] [CrossRef]
  162. Wagner, W.; Ullrich, A.; Ducic, V.; Melzer, T.; Studnicka, N. Gaussian decomposition and calibration of a novel small-footprint full-waveform digitising airborne laser scanner. ISPRS J. Photogramm. 2006, 60, 100–112. [Google Scholar] [CrossRef]
  163. Kimes, D.S.; Ranson, K.J.; Sun, G.; Blair, J.B. Predicting LiDAR measured forest vertical structure from multi-angle spectral data. Remote Sens. Environ. 2006, 100, 503–511. [Google Scholar] [CrossRef]
  164. Yao, W.; Krzystek, P.; Heurich, M. Tree species classification and estimation of stem volume and DBH based on single tree extraction by exploiting airborne full-waveform LiDAR data. Remote Sens. Environ. 2012, 123, 368–380. [Google Scholar] [CrossRef]
  165. Cao, L.; Coops, N.C.; Innes, J.L.; Dai, J.; Ruan, H.; She, G. Tree species classification in subtropical forests using small-footprint full-waveform LiDAR data. Int. J. Appl. Earth Obs. 2016, 49, 39–51. [Google Scholar] [CrossRef]
  166. Sun, G.; Ranson, K.J. Modeling LiDAR returns from forest canopies. IEEE Trans. Geosci. Remote Sens. 2000, 38, 2617–2626. [Google Scholar]
  167. Koetz, B.; Morsdorf, F.; Sun, G.; Ranson, K.J.; Itten, K.; Allgower, B. Inversion of a LiDAR waveform model for forest biophysical parameter estimation. IEEE Geosci. Remote Sens. Lett. 2006, 3, 49–53. [Google Scholar] [CrossRef]
  168. Jutzi, B.; Stilla, U. Laser pulse analysis for reconstruction and classification of urban objects. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2003, 34, 151–156. [Google Scholar]
  169. Abed, F.M.; Mills, J.P.; Miller, P.E. Calibrated Full-Waveform Airborne Laser Scanning for 3D Object Segmentation. Remote Sens. 2014, 6, 4109–4132. [Google Scholar] [CrossRef]
  170. Azadbakht, M.; Fraser, C.S.; Zhang, C. Separability of Targets in Urban Areas Using Features from Full-Waveform LiDAR Data. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 5367–5370. [Google Scholar]
Figure 1. (a) Publication statistics; and (b) cloud map of high-frequency terms used in LiDAR-related publications (2000–2016). (Source: Scopus Database).
Figure 2. Number of different types of publications on LiDAR (2000–2016). (Source: Scopus database).
Figure 3. LiDAR registration-related publications in different journals (2000–2016). (Source: Scopus Database)
Table 1. Comparison of LiDAR systems mounted on different platforms.
Platforms | System Abbreviation | Scanning Perspective | Scanning Range | Point Cloud Density | Application Areas
Airborne | ALS | Top view | Surface shape | Relatively sparse | Terrain mapping, forest surveys, 3D urban areas
Vehicle | MLS | Side view | Stripe shape | Dense | Road mapping, 3D urban areas
Tripod | TLS | Side view | Point shape | Dense | Deformation monitoring, reverse engineering
Satellite | SLS | Top view | Surface shape | Large spot size, low density | Forestry surveys, atmospheric measurements, snow monitoring
Table 2. Point-based registration methods for point clouds.
Feature TypeMethodsTest ObjectsData Platform
Point featureProjection density [49]BuildingsALS, TLS
Movable guidance point registration [67]BuildingsALS, TLS
Geometric shape constraint [48]Urban scenesTLS
Point domain featureNormal vector angle histogram [55]Urban scenes, Indoor scenesTLS
Minimum Euclidean distance of point pairs [68]Indoor scenesTLS
Rotated image feature3D Euclidean distance of point pairs [69]Urban scenesTLS
SIFT operator [70]BuildingsTLS
kd-tree [71]Urban scenesTLS
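Several of the point-based methods in Table 2 establish correspondences by matching local feature descriptors (e.g. the SIFT operator [70], with a kd-tree accelerating the search [71]). A minimal sketch of one common correspondence test, mutual nearest-neighbour matching in descriptor space, using SciPy's `cKDTree`; the function name and descriptor layout are illustrative, not taken from the cited papers:

```python
import numpy as np
from scipy.spatial import cKDTree

def mutual_nn_matches(desc_a, desc_b):
    """Pair rows of two (N, d) descriptor arrays by mutual nearest neighbour.

    A pair (i, j) is kept only if desc_b[j] is the closest descriptor to
    desc_a[i] AND desc_a[i] is the closest descriptor to desc_b[j]."""
    ab = cKDTree(desc_b).query(desc_a)[1]  # nearest b-index for each a
    ba = cKDTree(desc_a).query(desc_b)[1]  # nearest a-index for each b
    return [(i, j) for i, j in enumerate(ab) if ba[j] == i]
```

The mutual test rejects many of the one-sided mismatches that arise in repetitive urban scenes, at the cost of discarding some true matches.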
Table 3. Line-based registration methods for point clouds.

| Feature Type | Methods | Test Objects | Data Platform |
|---|---|---|---|
| ALS, MLS | Line feature translation, rotation quantity [87] | Urban scenes | ALS, TLS |
| | Laplacian matrix decomposition [88] | Urban scenes | ALS, TLS |
| | Point cloud segmentation based on TIN [36] | Urban scenes | ALS |
| Combination of building contours and road networks | Road networks used for coarse registration, building contours used for fine registration [46] | Urban scenes | |
Table 4. Surface-based registration methods for point clouds.

| Feature Type | Methods | Test Objects | Data Platforms |
|---|---|---|---|
| Least squares surface | Euclidean distance of the corresponding surface [94] | Individual objects | TLS |
| | Combined with intensity information [95] | Individual objects, indoor scenes | TLS |
| | 3D similarity transformation model [96] | Small plateau | ALS, images |
| | Stochastic model [97] | Individual objects | TLS |
| Conjugate surface | Three pairs of conjugate surface features [98] | Urban scenes | TLS |
| | Rodriguez matrix [99] | Buildings | TLS |
| | 2D similarity transformation and simple vertical shift [100] | Buildings | ALS, TLS |
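The least-squares surface methods in Table 4 begin by fitting planar or smooth patches to the raw points before matching them. A generic sketch of the simplest case, a total-least-squares plane fit via SVD (not the specific adjustment models of [94,95,96,97]):

```python
import numpy as np

def fit_plane_lsq(points):
    """Total-least-squares plane through an (N, 3) point array.

    Returns (centroid, unit normal). The plane passes through the centroid,
    and the normal is the right singular vector belonging to the smallest
    singular value, i.e. the direction of least spread in the points."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)   # rows of vt are orthonormal
    return c, vt[-1]
```

Minimizing orthogonal (not vertical) distances is what makes this a total-least-squares fit, which is appropriate for terrestrial scans where surfaces may be steep or vertical.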
Table 5. Improved methods based on ICP.

| Improvement Strategy | Advantages | Methods |
|---|---|---|
| Find other registration features | Effectively reduces noise interference | Variation of geometric curvature of a point, variation of its normal vector, and the normal vector angle [120] |
| | | Distance from a point to the tangent plane of the closest point in the model [121] |
| | | Angle between a point and the direction of its k adjacent points [113] |
| | | Point-to-plane method using a general least squares adjustment model [90] |
| Optimize the registration algorithm | Directly improves algorithm efficiency | Weighted analysis of anisotropic and inhomogeneous registration properties [122] |
| | | Weight matrix in three principal directions calculated from the covariance matrix [123] |
| Select an appropriate data management method | Quickly and efficiently stores and manages discrete LiDAR point clouds | Octree [124] |
| | | 3D R-tree [125] |
| | | Quad-tree [113] |
| | | kd-tree [126] |
Table 6. Comparison of various point cloud registration methods.

| Methods | Main Idea | Advantages | Problems |
|---|---|---|---|
| Feature-based methods | A "feature extraction, feature matching, registration" pipeline: extracted features guide the point cloud registration | High precision; robust and reliable results | Requires the target to have significant features; precision and quality of extracted features are difficult to guarantee |
| Iterative approximation methods | Euclidean distances between point clouds are continually reduced by iteration | High precision; mostly used for fine registration | Requires a large overlap area; high requirements on the initial position; prone to local optima |
| Random sample consensus methods | Registration parameters are calculated from the smallest sample set | High efficiency; strong anti-noise capability | Number of iterations required for convergence is difficult to determine |
| Normal distribution transform methods | Construct voxels, generate a point cloud distribution model, and determine the optimal matching relationship | Relatively high efficiency; no need for a good initial position | Requires point clouds with large overlapping areas |
| Methods using auxiliary data: image-assisted | Extract the same-named features in the image, then apply feature matching | Simple principle; mostly used in global registration | Image data availability is poor, and the quality of extracted features is difficult to ensure |
| Methods using auxiliary data: GNSS-assisted | GNSS data assist the point cloud coordinate transformation | Simple principle; mostly used in global registration | Limited by the accuracy of GNSS data and by signal lockout |
| Methods using auxiliary data: standard target-assisted | Calculate point cloud conversion parameters from standard target information | Simple principle; easy to operate | Less automated; not suitable for complex scenes |
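The random sample consensus row of Table 6 can be made concrete: registration parameters are repeatedly estimated from the smallest sample set (three point pairs suffice for a 3D rigid transform) and scored by the size of their consensus set. A minimal sketch over putative correspondences; the function names, threshold, and iteration count are illustrative:

```python
import numpy as np

def rigid_from_pairs(P, Q):
    """Rigid transform (R, t) with Q ~= R @ P + t, via Kabsch/SVD."""
    mp, mq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - mp).T @ (Q - mq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, mq - R @ mp

def ransac_registration(P, Q, n_iter=200, thresh=0.05, rng=None):
    """P[i] <-> Q[i] are putative (possibly wrong) correspondences."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(n_iter):
        s = rng.choice(len(P), 3, replace=False)   # smallest sample set
        R, t = rigid_from_pairs(P[s], Q[s])
        resid = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():     # keep the largest consensus
            best_inliers = inliers
    R, t = rigid_from_pairs(P[best_inliers], Q[best_inliers])  # final refit
    return R, t, best_inliers
```

The refit on the full consensus set at the end is what gives the method its anti-noise capability: gross mismatches never enter the final estimate.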
Table 7. The applications and performance of different registration methods.

| Category | Method | Experimental Environment | Experimental Data | Deviation (m) |
|---|---|---|---|---|
| Point-based | Projection density [49] | Outdoors, urban scene, campus of Nanjing University, China, covering 1000 × 1000 m² | ALS: density = 11 points/m²; accuracy (h) = 0.30 m; accuracy (v) = 0.15 m. TLS: density = 25 points/m² | 0.50; 0.44 (h) ¹; 0.15 (v) |
| | Geometric shape constraint [48] | Outdoors, open park, covering 1450 × 650 × 65 m³ | TLS: density = 442 points/m²; 40% overlap | 0.068 |
| | | Outdoors, uptown, covering 600 × 400 × 30 m³ | TLS: density = 326 points/m²; 20% overlap | 0.072 |
| | | Outdoors, subway station, covering 300 × 450 × 10 m³ | TLS: density = 673 points/m²; 50% overlap | 0.069 |
| | Movable guidance point [67] | Outdoors, urban building, campus of Nanjing University, China, covering 400 × 1600 m² | ALS: density = 1 point/m²; accuracy (h) = 0.3 m; accuracy (v) = 0.2 m. TLS: at 50 m, density = 100 points/m²; accuracy (h) = 6 mm; accuracy (v) = 4 mm | 0.26 |
| | 3D distance of point pairs [69] | Outdoors, urban scene, courtyard-like square with manmade objects | TLS: angular resolution = 0.12°; 7 scans, each containing 2.25 million points | |
| | | Outdoors, open park area with little structure | TLS: angular resolution = 0.12°; 7 scans, each containing 2.25 million points | |
| | SIFT operator [70] | Outdoors, building object, covering 27 × 12 × 18 m³ | TLS: angular resolution (h) = 0.0015°; angular resolution (v) = 0.0015° | 0.02 |
| Line-based | Laplacian matrix decomposition [88] | Outdoors, urban scene, covering 800 × 15,000 m² | ALS: density = 8 points/m². TLS: density = 12 points/m² | 0.37 |
| | | Outdoors, urban scene, covering 11,000 × 12,000 m² | ALS: density = 5 points/m². TLS: density = 20 points/m² | 0.70 |
| | TIN-based [36] | Outdoors, urban scene | ALS: density = 2.24 points/m²; accuracy (h) = 0.5 m; accuracy (v) = 0.15 m | 0.007 (x) ²; 0.004 (y); 0.004 (z) |
| | Road networks & building contours [46] | Outdoors, urban scene, Olympic sports center, Nanjing, China, covering 4000 × 4000 m² | ALS: density = 4 points/m²; accuracy (h) = 0.30 m; accuracy (v) = 0.15 m. MLS: 360° scanning scope, surveying range 2–300 m, point frequency 200,000 points/s | 0.68 (h); 0.41 (v) |
| Surface-based | Rodriguez matrix [99] | Outdoors, building | TLS: scanning interval roughly 2 cm; 4 stations positioned about 25 m from the house | 0.0223 (x); 0.0030 (y); 0.0206 (z) |
| | | Outdoors, substation | TLS: scanning interval roughly 2 cm; distance less than 50 m between the two stations | |
| Other feature-based | Conjugate spatial curves [102] | Indoors, No. 159 cave in the Dunhuang Mogao Grottoes | TLS: average point spacing 1 mm; about 35–60% overlap between scans; 76 scans totaling 17.5 million points | 0.003 |
| | Fitting of simple objects [103] | Indoors, industrial site, room about 8 × 4.5 × 4 m³ | TLS: 4 scans, each consisting of 1 million points | |
| | Object detectors [106] | Outdoors, urban scenes, streets of New York, Paris, Rome, and San Francisco | MLS: each data set contains 300–500 M points representing 50–100 city blocks covering 2–4 km² | |
| Iterative approximation | Point-to-plane [90] | Individual object, the Neil Armstrong statue at Purdue University | TLS: 8 scans, positioned 5–10 m from the statue | 0.0025 |
| Random sample consensus | SIFT features [136] | Outdoors, urban scene, the Holzmarkt district of Hanover | TLS: angular resolution = 0.12°; an expected measurement accuracy of 12 mm | 0.015 |
| | Iterative closest projected point [137] | Outdoors, the Ronald McDonald house in Calgary, Canada | TLS: 6 scans; average overlap roughly 70% | |
| Normal distribution transform | 2D NDT [145] | Outdoors, urban scene, a street in Hannover | TLS: 4 scans, each taking about 4 min and yielding approximately 2,250,000 points | 0.42 |
| | 3D NDT [146] | Outdoors, 3 mine data sets collected in the Kvarntorp mine, south of Örebro, Sweden | TLS: 2 scans of the end section of a tunnel from the same pose, differing only in resolution. TLS: 2 scans taken approximately 4 m apart, each containing around 27,500 points. TLS: 65 scans, each containing around 95,000 points | |

¹ h (v) represents the deviation in the horizontal (vertical) direction. ² x (y, z) represents the deviation in the x (y, z) direction.
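The per-axis and horizontal/vertical deviations reported in Table 7 are computed from check points after registration. A small sketch of such an error summary; the function name and the use of mean absolute deviations are illustrative, since individual studies may instead report RMSE or maximum errors:

```python
import numpy as np

def registration_deviations(ref, reg):
    """Summarise registration error at check points.

    ref, reg: (N, 3) arrays of reference and registered coordinates.
    Returns mean |dx|, |dy|, |dz| (z doubles as the vertical deviation),
    the mean horizontal deviation h, and the overall 3D RMSE."""
    d = reg - ref
    return {
        "x": float(np.abs(d[:, 0]).mean()),
        "y": float(np.abs(d[:, 1]).mean()),
        "z": float(np.abs(d[:, 2]).mean()),                   # vertical
        "h": float(np.linalg.norm(d[:, :2], axis=1).mean()),  # horizontal
        "rmse": float(np.sqrt((d ** 2).sum(axis=1).mean())),
    }
```

A shared set of such statistics over standard data sets would make the deviations in Table 7 directly comparable, which is exactly the unified evaluation system the review finds lacking.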

Share and Cite

MDPI and ACS Style

Cheng, L.; Chen, S.; Liu, X.; Xu, H.; Wu, Y.; Li, M.; Chen, Y. Registration of Laser Scanning Point Clouds: A Review. Sensors 2018, 18, 1641. https://doi.org/10.3390/s18051641
