Technical Note

Utilization of a Terrestrial Laser Scanner for the Calibration of Mobile Mapping Systems

1 School of Civil and Environmental Engineering, Yonsei University, Seodaemun-gu, Seoul 03722, Korea
2 Department of Computer Science, Yonsei University, Seodaemun-gu, Seoul 03722, Korea
* Author to whom correspondence should be addressed.
Sensors 2017, 17(3), 474; https://doi.org/10.3390/s17030474
Submission received: 19 December 2016 / Revised: 23 February 2017 / Accepted: 24 February 2017 / Published: 27 February 2017
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)

Abstract:
This paper proposes a practical calibration solution for estimating the boresight and lever-arm parameters of the sensors mounted on a Mobile Mapping System (MMS). On the MMS devised for the calibration experiment, three network video cameras, one mobile laser scanner, and one Global Navigation Satellite System (GNSS)/Inertial Navigation System (INS) were mounted. The proposed calibration solves the geometric relationships among the three sensors, treating the GNSS/INS as one unit sensor. Instead of conventionally surveyed 3-dimensional (3D) ground control features, our solution uses the point cloud generated by a 3D terrestrial laser scanner. With the terrestrial laser scanner, accurate and precise reference data could be produced: plane features corresponding to the sparse mobile laser scanning data could be determined with high precision, and corresponding point features could be extracted from the dense terrestrial laser scanning data and the images captured by the video cameras. The boresight and lever-arm parameters were calculated based on the least squares approach, and precisions of 0.1 degrees and 10 mm were achieved for the boresight and lever-arm, respectively.

1. Introduction

With the increasing demand for 3-dimensional (3D) geospatial information in various fields such as civil engineering and construction [1,2], environmental monitoring [3,4], and disaster management [5,6], a number of devices and algorithms for 3D reconstruction have been developed and utilized. In general, all 3D mapping techniques can be classified into range-based techniques using a 3D laser scanner, also called Light Detection and Ranging (LiDAR), and image-based 3D reconstruction techniques based on the principles of computer vision and photogrammetry [7,8]. With those techniques, the 3D information (X, Y, Z) of observed objects is represented by a point cloud. A 3D laser scanner directly measures the 3D coordinates of objects with extremely high accuracy and resolution, but is financially prohibitive. Alternatively, image-based 3D reconstruction methods have been developed and applied to reduce the cost of acquiring point clouds [9,10]. Using corresponding features in overlapping images, the 3D coordinates of objects are calculated. However, image-based 3D reconstruction suffers from noise and low accuracy and is highly affected by the captured scene: if there are no features in the captured space, or the space is too dark or too bright, the 3D reconstruction fails and many noisy points occur. In this regard, the 3D laser scanner is preferred by engineers who require geospatial data of high resolution and accuracy [11,12].
According to their scanning geometry, laser scanners can be classified into terrestrial laser scanners and mobile laser scanners. Terrestrial laser scanners spherically scan the surrounding space about two freely rotating axes from a fixed position and generate more accurate, precise, and dense information than mobile laser scanners [13]. With their high performance, terrestrial laser scanners have been widely applied to fields requiring highly accurate and dense 3D information, such as construction sites [14,15]. On the other hand, mobile laser scanners mounted on a moving vehicle rapidly rotate or oscillate horizontally at a certain fixed vertical angle. In general, mobile laser scanners with high scan rates are designed to be mounted on vehicles such as automobiles and drones. As the platform carrying the mobile laser scanner moves, a point cloud is generated with respect to the trajectory of the platform. The trajectory information can be obtained from a navigation sensor such as a Global Navigation Satellite System (GNSS)/Inertial Navigation System (INS), and the point cloud is formed based on the geometric relationship between the navigation sensor and the laser scanner [13,16]. Moreover, by integrating the mobile laser scanner with a camera, color information can be added to the generated point cloud.
A system combining multiple sensors with a navigation sensor on a moving vehicle is called a Mobile Mapping System (MMS). In the early 1990s, MMSs combining a code-only GNSS, stereo digital cameras, and supplementary dead-reckoning sensors were developed and utilized in applications based on the image-based 3D reconstruction technique [17,18]. As the accuracy of laser measurement and navigation sensors has improved, mobile laser scanners have become one of the main components of any MMS. In particular, as near real-time and periodic 3D mapping is required for autonomous driving systems, laser-based MMSs have been developed and widely utilized to generate high-quality 3D geospatial information about urban environments [19,20,21,22].
To integrate the datasets captured by each sensor mounted on the MMS into a unified single coordinate system, calibration, the process of estimating the orientation (boresight) and position (lever-arm) parameters, is required with reference datasets [16,23,24]. When the boresight and lever-arm parameters defining the geometric relationship between each sensing dataset and the GNSS/INS data are determined, georeferenced data can be generated. However, even after precise calibration, the sensor mounting of an MMS can be disturbed during operation, so the boresight and lever-arm parameters drift and errors that deteriorate the accuracy of the georeferenced data accumulate. Accordingly, for the stable operation of multiple sensors, precise calibration must be conducted periodically.
In general, the calibration process is performed based on observation models and constraints that define the geometric relationship between the observed object in the real world and in the sensing data. For example, in the case of camera calibration, a calibration model is generally designed based on a collinearity equation with the Exterior Orientation Parameters (EOPs) and Interior Orientation Parameters (IOPs) [25,26]. To configure the constraints for the camera calibration, a checkerboard with a repetitive black-and-white (BW) pattern of accurately known spacing is generally utilized [27,28]. For example, computer vision libraries such as OpenCV [29] and the Matlab Toolbox [30] provide camera calibration tools based on the checkerboard approach. With the calibrated parameters, the correction of lens distortion, which is pronounced in fisheye or wide-angle lens images, and geometric analyses such as visual odometry can be performed [31]. Furthermore, ground control features whose ground coordinates are known can be used as geometric constraints. For example, since the checkerboard-based calibration method is difficult to apply to an airborne system due to the long distance between the sensor and the ground, Chiang et al. [32] used ground control points to calibrate the time offset of the camera shutter and to estimate the trajectory of the camera mounted on an airborne vehicle.
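As an illustration of the checkerboard approach mentioned above, the following minimal sketch estimates the camera matrix and lens distortion with OpenCV's calibrateCamera; the image folder, pattern size, and square spacing are illustrative assumptions, not values from this paper's setup.

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)      # inner corners per row/column (assumed board layout)
SQUARE_MM = 30.0      # grid spacing of the checkerboard squares

# 3D coordinates of the board corners in the board plane (Z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        # Refine the detected corners to sub-pixel precision.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Least-squares estimation of focal length, principal point, and distortion.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS [px]:", rms)
```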
While point-based calibration techniques using a checkerboard or ground control points can be a practical solution for the calibration of camera systems, it is difficult to extract accurate corner or edge points from the sparse point clouds generated by the mobile laser scanner due to its low accuracy and resolution. Alternatively, the line, plane, and cylindrical features, which can be defined by mathematical equations, have been widely applied for the calibration of laser scanners [33,34,35]. For example, the plane features can be precisely extracted from a sparse point cloud using a RANdom SAmple Consensus (RANSAC) algorithm even if there exist noisy points in the point cloud [36]. With the orthogonality constraints of multiple planes installed in the laser scanning view, the boresight parameters can be estimated with least-squares adjustment [37,38,39,40]. Furthermore, based on the least-squares adjustment, the boresight and lever-arm parameters of the MMS can be stochastically calculated using the plane features as geometric constraints. Filin [41] applied the least-squares adjustment with the plane features to calibrate an airborne laser scanning system. Glennie [42] performed the boresight calibration of a mobile laser scanner with the plane features captured in a kinematic mode.
Obviously, the precision of the calibrated parameters is directly affected by the accuracy and geometry of the ground control features, so the construction of accurate and dense ground control features for the calibration is important. To collect the ground control features, a total station or a laser tracker, laser-based equipment that obtains the 3D coordinates of a point target with sub-millimeter accuracy, is generally utilized [43,44,45]. Even though the observation accuracy of the total station is significantly high, the point positioning techniques are labor-intensive and it is difficult for general users to obtain a large number of accurate control point coordinates. Moreover, since different types of features are required for the calibration of each sensor on the MMS, it is difficult to build a common ground control dataset.
In this paper, we have devised a method for utilizing the terrestrial laser scanner to simultaneously calibrate the camera and mobile laser scanner mounted on the MMS. On our MMS, devised for conducting the calibration experiment, three network video cameras, one mobile laser scanner, and one GNSS/INS were mounted. The devised MMS calibration process can be largely divided into two steps. In the first step, to construct the dataset of ground control features, the terrestrial laser scanning data is accurately georeferenced. In the second step, point and plane features are extracted from the georeferenced terrestrial laser scanning data and matched with the features extracted from the mobile laser scanning data and the captured images. Before the boresight and lever-arm calibration of the devised MMS, the camera calibration to estimate the camera IOPs was conducted separately using the checkerboard approach. The calibration parameters of each sensor and their precisions were calculated based on the least-squares adjustment.

2. Methodology

2.1. Overview

On the MMS, three network video cameras, a mobile laser scanner, and a GNSS/INS were mounted and combined with a steel-welded frame fixing each sensor. The MMS was designed for two purposes: (1) generation of a point cloud including color information of the scanned areas; and (2) 3D mapping of objects extracted from images.
Figure 1 illustrates the MMS that was developed and applied in the experiments verifying the proposed calibration approach. To utilize the terrestrial laser scanning data as reference data for the calibration, its post-processing was conducted in two steps: (1) registration, which merges multiple point clouds into a common point cloud, and (2) georeferencing, which converts the relative coordinate system of the point clouds into an absolute coordinate system.
Before the boresight and lever-arm calibration of the MMS, the camera calibration of each camera sensor was conducted to define the accurate geometry of the cameras by estimating IOPs. The camera calibration algorithm was designed by the collinearity equation including the IOPs, and a checkerboard was utilized to conduct the camera calibration.
After processing the terrestrial laser scanning data and estimating the camera IOPs, the boresight and lever-arm calibration was conducted. To conduct this calibration, reference features were extracted from the datasets of the video cameras, the mobile laser scanner, and the terrestrial laser scanner. Point features were extracted for the calibration of the camera sensors and the plane features were extracted for the calibration of the mobile laser scanner.
Based on the parameters estimated from the calibration process, the integration of the multiple sensors mounted on the MMS was conducted. Moreover, since each sensor samples data in its own time frame, time synchronization among the sensors was performed based on time-dependent linear interpolation of the position and orientation of the MMS platform, as sketched below. The overall process of our sensor calibration and integration method for the MMS is depicted in Figure 2.
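A minimal sketch of this time-dependent interpolation, assuming the 100 Hz GNSS/INS stream is dense enough that component-wise linear interpolation of the attitude angles is acceptable (no angle wrap-around handling):

```python
import numpy as np

def interpolate_pose(t_query, t_nav, positions, angles):
    """Linearly interpolate the GNSS/INS position (N x 3) and orientation
    (N x 3, roll/pitch/heading) to a sensor timestamp t_query."""
    pos = np.array([np.interp(t_query, t_nav, positions[:, i]) for i in range(3)])
    ang = np.array([np.interp(t_query, t_nav, angles[:, i]) for i in range(3)])
    return pos, ang

# Example: platform pose at the exposure time of one video frame.
t_nav = np.array([0.00, 0.01, 0.02])                            # 100 Hz epochs [s]
positions = np.array([[0.0, 0, 0], [0.2, 0, 0], [0.4, 0, 0]])
angles = np.array([[0.0, 0, 90.0], [0, 0, 90.1], [0, 0, 90.2]])
pos, ang = interpolate_pose(0.015, t_nav, positions, angles)
```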

2.2. Camera Calibration

The camera sensor captures an image by collecting the rays reflected from targets. When the camera sensor receives rays through its lens, the geometry between the coordinates of an observed target and its image can be represented by the collinearity equation including the parameters of the image coordinates ($x_i, y_i$), focal length ($c$), principal point ($x_p, y_p$), lens distortion ($\Delta x_i, \Delta y_i$), camera position ($x_c, y_c, z_c$), camera orientation ($m_{11}$–$m_{33}$), and object position ($x_o, y_o, z_o$). The collinearity equation can be represented by Equation (1) [25]:
$$x_i = x_p + x_n + \Delta x_i, \qquad y_i = y_p + y_n + \Delta y_i \tag{1}$$
where $x_n$ and $y_n$ are calculated by Equation (2), and the camera orientation parameters ($m_{11}$–$m_{33}$) are calculated from the rotation angles ($\omega, \varphi, \kappa$) by Equation (3):
$$x_n = -c\,\frac{m_{11}(x_o - x_c) + m_{12}(y_o - y_c) + m_{13}(z_o - z_c)}{m_{31}(x_o - x_c) + m_{32}(y_o - y_c) + m_{33}(z_o - z_c)}, \qquad y_n = -c\,\frac{m_{21}(x_o - x_c) + m_{22}(y_o - y_c) + m_{23}(z_o - z_c)}{m_{31}(x_o - x_c) + m_{32}(y_o - y_c) + m_{33}(z_o - z_c)} \tag{2}$$
$$\begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} = \begin{bmatrix} \cos\varphi\cos\kappa & \cos\omega\sin\kappa + \sin\omega\sin\varphi\cos\kappa & \sin\omega\sin\kappa - \cos\omega\sin\varphi\cos\kappa \\ -\cos\varphi\sin\kappa & \cos\omega\cos\kappa - \sin\omega\sin\varphi\sin\kappa & \sin\omega\cos\kappa + \cos\omega\sin\varphi\sin\kappa \\ \sin\varphi & -\sin\omega\cos\varphi & \cos\omega\cos\varphi \end{bmatrix} \tag{3}$$
Moreover, the lens distortion is generally modelled by radial distortion and tangential distortion and can be represented by Equation (4):
$$\begin{aligned} \Delta x_i &= x_n\,(A_1 r_n^2 + A_2 r_n^4 + A_3 r_n^6) + B_1(r_n^2 + 2x_n^2) + 2B_2 x_n y_n \\ \Delta y_i &= y_n\,(A_1 r_n^2 + A_2 r_n^4 + A_3 r_n^6) + 2B_1 x_n y_n + B_2(r_n^2 + 2y_n^2) \end{aligned} \tag{4}$$
where $r_n = \sqrt{x_n^2 + y_n^2}$; $A_1$, $A_2$, and $A_3$ are the radial distortion parameters; and $B_1$ and $B_2$ are the tangential distortion parameters.
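To make the camera model concrete, the sketch below evaluates Equations (1)–(4) for a single object point. It follows the sign conventions of the equations as reconstructed above, which vary between photogrammetric implementations, so treat it as illustrative rather than definitive.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix of Equation (3) from (omega, phi, kappa) in radians."""
    so, co = np.sin(omega), np.cos(omega)
    sp, cp = np.sin(phi), np.cos(phi)
    sk, ck = np.sin(kappa), np.cos(kappa)
    return np.array([
        [ cp * ck,  co * sk + so * sp * ck,  so * sk - co * sp * ck],
        [-cp * sk,  co * ck - so * sp * sk,  so * ck + co * sp * sk],
        [ sp,      -so * cp,                 co * cp]])

def project(obj, cam_pos, angles, c, xp, yp, A, B):
    """Image coordinates of an object point via Equations (1), (2), and (4)."""
    m = rotation_matrix(*angles)
    d = m @ (np.asarray(obj, float) - np.asarray(cam_pos, float))
    xn, yn = -c * d[0] / d[2], -c * d[1] / d[2]          # Equation (2)
    rn2 = xn * xn + yn * yn
    radial = A[0] * rn2 + A[1] * rn2**2 + A[2] * rn2**3  # Equation (4)
    dx = xn * radial + B[0] * (rn2 + 2 * xn * xn) + 2 * B[1] * xn * yn
    dy = yn * radial + 2 * B[0] * xn * yn + B[1] * (rn2 + 2 * yn * yn)
    return xp + xn + dx, yp + yn + dy                    # Equation (1)
```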
In general, the IOPs in the collinearity equation can be obtained from the camera specifications, but geometric errors exist in the image measurement system. To define the mathematical relationship among the sensors mounted on the MMS, accurate IOPs must be calculated by means of camera calibration [44]. In this paper, a checkerboard with 30 mm spacing was used for the camera calibration. Fifteen images from different positions and angles were captured with each camera sensor to achieve sufficient geometry to resolve the correlations among the parameters [46].

2.3. Registration and Georeferencing of Terrestrial Laser Scanning Data

To utilize the terrestrial laser scanning data as reference data for the MMS calibration, the registration and the georeferencing of the scanning data were conducted sequentially. Both processes estimate rotation and translation parameters with respect to reference data using corresponding features or points, and the algorithms are similar to those of the boresight and lever-arm calibration. In the registration case, one of the point clouds observed at the multiple stations is set as the reference. For the georeferencing, on the other hand, points in an absolute coordinate system are utilized. In this paper, the Geodetic Reference System (GRS) 80 Korean Transverse Mercator (TM) coordinate system, which is an official legal system of South Korea [47], was applied as the absolute coordinate system. The origin is 127°00′ east longitude and 38°00′ north latitude, the scale factor is 1, the false northing is 600,000 m, and the false easting is 200,000 m.
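For reference, these projection parameters match, to our understanding, the Korea 2000 / Central Belt 2010 definition published as EPSG:5186, so coordinates can be converted with a standard projection library such as pyproj (a hedged sketch, not part of the original processing chain):

```python
from pyproj import Transformer

# GRS80, lon_0 = 127 E, lat_0 = 38 N, k = 1, FE = 200,000 m, FN = 600,000 m
# correspond to EPSG:5186 (Korea 2000 / Central Belt 2010).
tm = Transformer.from_crs("EPSG:4326", "EPSG:5186", always_xy=True)
easting, northing = tm.transform(127.0, 38.0)   # lon, lat in degrees
print(easting, northing)                        # ~200,000 and ~600,000 at the origin
```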
The registration can be categorized into target-based registration using artificial targets and target-free registration based on minimizing the locational discrepancy among point clouds. In general, target-based registration using paper, paddle, or sphere targets is applied for applications requiring high accuracy, whereas target-free registration carries uncertainty depending on the shape and quality of the observed point clouds [48]. In particular, Becerik-Gerber et al. [49] have demonstrated that sphere targets yield the highest registration precision. For this reason, sphere targets with diameters of 145 mm were utilized. Figure 3 illustrates these sphere targets.
As shown in Figure 3b, the sphere targets can be detected in a point cloud and fitted to a mathematical model. Through the fitted model, the center point of the sphere can be calculated and used as the control point to transform multiple point clouds. The registration targets must remain fixed during scanning, and a sufficient number of targets must be installed in the overlapping scan areas. Paper and paddle targets can also be used for the registration; however, since the quality of the observed point cloud is affected by the incidence angle, the geometry between the targets and the scanner should be designed carefully [49,50]. Moreover, target-free registration, also called the Iterative Closest Point (ICP) method, can additionally be applied to improve the precision of the registration, but the approach may introduce uncertainty into the registration results [4,51]. Occlusions and insufficient geometric constraints in the point clouds might cause errors in the registration process.
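Sphere fitting of the kind used for the targets can be initialized with a closed-form algebraic least-squares fit; the sketch below is one common formulation (a geometric refinement step can follow), not necessarily the exact routine used by the scanner software:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit to an N x 3 point array.

    Rewrites |p - c|^2 = r^2 as the linear system
    2*p . c + (r^2 - |c|^2) = |p|^2 and solves for the center c and radius r.
    """
    p = np.asarray(points, float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# The fitted center serves as a control point for the registration; checking
# the fitted radius against the known target size (145 mm diameter here)
# helps reject false detections.
```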
After the registration process, the georeferencing process is conducted to convert the relative coordinate system of the point cloud into an absolute coordinate system. For the georeferencing, control points with known absolute 3D positions are required. The static GNSS technique using Trimble's R8 instrument, which can obtain millimeter-level positioning accuracy, was applied to obtain the control points in this paper. The network adjustment based on the base stations managed by the Korean National Geographic Information Institute was conducted for the post-processing of the GNSS observations [52]. Figure 4 shows the static GNSS observation conducted for the georeferencing of the terrestrial laser scanning data.

2.4. Mobile Mapping System Calibration

For the sensor integration of the MMS, the mathematical models with accurate parameters to transform each observation dataset into another sensor system or an absolute coordinate system must be defined; Figure 5 describes the conceptual model of the MMS [33,43,53,54,55].
As shown in Figure 5, the geometry among the object point (A), sensor frame (S), body frame (B), and map frame (L) can be defined mathematically by rotation and translation relationships. The body frame provides the relative coordinate system in which the multiple sensors are combined, and the coordinate system of the body frame is transformed into the map frame with the position and orientation information observed by the GNSS/INS. The mathematical model for the geometric relationship can be defined by Equation (5):
$$r_a^L = r_B^L(t) + M_B^L(t)\left(M_S^B\, r_a^S + r_S^B\right) \tag{5}$$
where $r_a^L$ is the coordinate of A in the map frame, $t$ is the observation time, $r_B^L(t)$ is the position of the body frame in the map frame, $M_B^L(t)$ is the rotation matrix from the body frame to the map frame, $M_S^B$ is the rotation matrix from the sensor frame to the body frame, $r_a^S$ is the position of A in the sensor frame, and $r_S^B$ is the position of the sensor in the body frame.
In addition, when an object point (A) is projected onto the image in the camera frame (C), the geometric relationship includes the scale parameter (λa) for A. Accordingly, the mathematical model for the camera sensor can be defined by Equation (6):
$$r_a^L = r_B^L(t) + M_B^L(t)\left(\lambda_a M_C^B\, r_a^C + r_C^B\right) \tag{6}$$
where $M_C^B$ is the rotation matrix from the camera frame to the body frame, $r_a^C$ is the position of A in the camera frame, and $r_C^B$ is the position of the camera in the body frame. Each sensor in the body frame has individual parameters ($M_S^B$, $r_S^B$, $M_C^B$, $r_C^B$) for transforming data in the sensor frame into the body frame. Moreover, the MMS has common parameters ($r_B^L(t)$, $M_B^L(t)$) for transforming data in the body frame into the map frame. The position and scale parameters of the object point ($\lambda_a$, $r_a^S$, $r_a^C$) are determined for every observed point.
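Once the boresight and lever-arm parameters are known, Equation (5) reduces to a two-stage rigid-body transform; a direct sketch (the naming is ours, not from the paper):

```python
import numpy as np

def sensor_to_map(r_s, M_sb, r_sb, M_bl_t, r_bl_t):
    """Equation (5): laser point from the sensor frame to the map frame.

    r_s    : point in the sensor frame
    M_sb   : boresight rotation, sensor -> body
    r_sb   : lever-arm, sensor origin expressed in the body frame
    M_bl_t : GNSS/INS rotation at time t, body -> map
    r_bl_t : GNSS/INS position at time t, body origin in the map frame
    """
    # Equation (6) for the camera differs only by the per-point scale lambda_a.
    return r_bl_t + M_bl_t @ (M_sb @ np.asarray(r_s, float) + r_sb)
```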

2.5. Adjustment Model

To estimate the optimized parameters for each sensor frame, the least squares adjustment is widely applied [16,19,25,53,55,56,57]. The least squares approach is designed around mathematically defined models called observation equations. For the computation, the observation equation is represented in matrix form as Equation (7):
$$y = A\xi + e \tag{7}$$
where $y$ is the observation vector, $A$ is the design matrix, $\xi$ is the unknown parameter vector $(x_1, x_2, \ldots, x_n)$, and $e$ is the random error vector. The random errors are assumed to follow the normal distribution $e \sim (0, \sigma_0^2 P^{-1})$, where $\sigma_0^2$ is the variance component used as a scale and $P$ is the weight matrix, the inverse of the variance-covariance matrix. The weight is inversely proportional to the observation variance, and the covariance between uncorrelated observations is zero. A high variance indicates that the observation has a large error and requires a large correction. When a measurement system contains observations of different precisions, the weight matrix is controlled by the observation variances. In this paper, the weight matrix for image points was set as the identity matrix under the assumption that the observations have identical precisions. Furthermore, the weight matrix is adapted when features such as lines or planes in images or point clouds are utilized as observations [24,58]. For the plane features extracted from the point clouds, the precisions of the points in the normal direction of the plane were set to one and those in the other directions were set to zero. By this approach, the similarities of the plane features in pairwise datasets can be measured, and the optimized parameters that maximize the similarity and minimize the discrepancy can be estimated. Moreover, when the precisions of the control features are expected to differ, the weight matrix should reflect them: in this paper, for the plane features extracted from the laser scanning data, the weights were set to the inverses of the squared plane-fitting errors.
From Equation (7), the least squares solution is designed to minimize the random error vector and find the most probable value of unknown parameters. The most probable value can be represented by Equation (8):
$$\hat{\xi} = (A^T P A)^{-1} A^T P\, y \tag{8}$$
While Equations (7) and (8) deal with the observation model which consists of linear equations, the calibration models of image and laser sensors are nonlinear. For this reason, the nonlinear systems have to be linearized with the first-order Taylor series approximation of the observation equations and the equations can be modified by Equations (9)–(11):
$$y - F(\xi_o) = J\,\Delta\xi + e \tag{9}$$
$$\Delta\hat{\xi} = (J^T P J)^{-1} J^T P\,\big(y - F(\xi_o)\big) \tag{10}$$
$$\xi_{new} = \xi_o + \Delta\hat{\xi} \tag{11}$$
where $\xi_o$ is the approximate parameter vector before the correction, $\Delta\xi$ is the correction vector of the unknown parameters, $\xi_{new}$ is the updated parameter vector after the correction, $F$ is the nonlinear observation model evaluated at $\xi_o$, and $J$ is the Jacobian matrix, which contains the linearized equations of the nonlinear observation model and is configured as Equation (12):
$$J = \begin{bmatrix} \dfrac{\partial F_1(\xi_o)}{\partial x_1} & \dfrac{\partial F_1(\xi_o)}{\partial x_2} & \cdots & \dfrac{\partial F_1(\xi_o)}{\partial x_n} \\ \dfrac{\partial F_2(\xi_o)}{\partial x_1} & \dfrac{\partial F_2(\xi_o)}{\partial x_2} & \cdots & \dfrac{\partial F_2(\xi_o)}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial F_m(\xi_o)}{\partial x_1} & \dfrac{\partial F_m(\xi_o)}{\partial x_2} & \cdots & \dfrac{\partial F_m(\xi_o)}{\partial x_n} \end{bmatrix} \tag{12}$$
For the boresight and lever-arm calibration of an MMS, the mathematical models of the geometric relationships converting the point coordinates in each sensor frame into the map frame were used as $F$, from which $J$ was derived; $\xi$ consisted of the boresight and lever-arm parameters. Meanwhile, for the camera calibration, the collinearity equation was used as $F$ and $\xi$ consisted of the camera IOPs. The adjustment process is conducted iteratively until $\Delta\xi$ converges to nearly zero. With the assumption that the observation errors follow the normal distribution, the uncertainty of the adjusted parameters is also derived based on the law of error propagation. The dispersion of the adjusted parameters can be calculated by Equation (13):
$$D\{\Delta\hat{\xi}\} = \hat{\sigma}_0^2\,(J^T P J)^{-1} \tag{13}$$
where $\hat{\sigma}_0^2$ is calculated by Equation (14):
$$\hat{\sigma}_0^2 = \frac{\tilde{e}^T P\, \tilde{e}}{n - m} \tag{14}$$
where $n$ is the number of observations, $m$ is the number of unknown parameters, and $\tilde{e}$ is the residual vector.
With Equations (9)–(14), the least squares adjustment can be conducted formulaically, and the camera IOPs and the boresight and lever-arm parameters of the image and laser sensors in the MMS could be estimated. In the practical application of the least squares adjustment, the theoretical minimum number of features is determined by the number of unknown parameters: to estimate the boresight and lever-arm parameters of each sensor, at least three point features or four plane features are required. Moreover, the geometry of the features is critical. The points must not lie on a single line, and the planes must not be parallel or coincident.
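A generic sketch of this iterative adjustment (Gauss-Newton) under the assumptions above; the actual calibration supplies Equation (5), Equation (6), or the collinearity equation as the model F:

```python
import numpy as np

def gauss_newton(F, jacobian, y, xi0, P=None, tol=1e-10, max_iter=50):
    """Iterative least squares adjustment of Equations (9)-(14).

    F        : callable mapping parameters to predicted observations
    jacobian : callable returning J at the current parameters
    y        : observation vector
    xi0      : approximate initial parameter vector
    P        : weight matrix (identity if None)
    Returns the adjusted parameters and their dispersion D (Equation (13)).
    """
    xi = np.asarray(xi0, float)
    P = np.eye(len(y)) if P is None else P
    for _ in range(max_iter):
        J = jacobian(xi)
        dy = y - F(xi)
        N = J.T @ P @ J
        dxi = np.linalg.solve(N, J.T @ P @ dy)    # Equation (10)
        xi = xi + dxi                             # Equation (11)
        if np.linalg.norm(dxi) < tol:
            break
    e = y - F(xi)
    sigma02 = (e @ P @ e) / (len(y) - len(xi))    # Equation (14)
    return xi, sigma02 * np.linalg.inv(N)         # Equation (13)
```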

2.6. Feature Extraction

As reference data for the boresight and lever-arm calibration, point and plane features were extracted from the images and point clouds. For the point feature extraction, the Harris corner detection algorithm was applied [59]. At a corner point in an image, the pixel values change markedly in all directions. At a certain point ($x_i, y_i$), the variation $E(x_i, y_i)$ of the pixel values $I(x_i, y_i)$ for a shift ($\Delta x, \Delta y$) within a window of size $w$ can be represented by Equation (15):
$$E(x_i, y_i) = \sum_{\Delta x = -w}^{w} \sum_{\Delta y = -w}^{w} \big[ I(x_i + \Delta x,\, y_i + \Delta y) - I(x_i, y_i) \big]^2 \tag{15}$$
By the Taylor expansion, Equation (15) can be approximated and arranged in matrix form with a symmetric matrix $M$ as Equation (16):
$$E(x_i, y_i) \approx \begin{bmatrix} \Delta x & \Delta y \end{bmatrix} \left( \sum_{\Delta x = -w}^{w} \sum_{\Delta y = -w}^{w} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \right) \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} \Delta x & \Delta y \end{bmatrix} M \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} \tag{16}$$
where $I_x$ and $I_y$ are the gradients of the pixel values in the x and y directions, respectively. Then, the score $R$ for determining corner points can be calculated by Equation (17):
$$R = \det(M) - k\,\big(\mathrm{trace}(M)\big)^2 \tag{17}$$
where $k$ is a constant controlling the relative influence of $\det(M)$ and $\mathrm{trace}(M)$. Pixels whose windows yield $R$ higher than a certain threshold are classified as corner points in the image.
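A compact sketch of this score, computed from the gradient products of Equation (16); OpenCV's cv2.cornerHarris provides the same measure, and the window size and threshold below are illustrative:

```python
import cv2
import numpy as np

def harris_response(gray, k=0.04, window=5):
    """Harris corner score R of Equation (17) for every pixel."""
    Ix = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # gradient in x
    Iy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # gradient in y
    # Window sums of the entries of M in Equation (16).
    Sxx = cv2.boxFilter(Ix * Ix, -1, (window, window))
    Syy = cv2.boxFilter(Iy * Iy, -1, (window, window))
    Sxy = cv2.boxFilter(Ix * Iy, -1, (window, window))
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# Corner candidates: pixels above a fraction of the maximum response, e.g.
# R = harris_response(gray); corners = R > 0.01 * R.max()
```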
For extracting plane features from the mobile laser scanning data, the RANSAC scheme was applied. The RANSAC scheme determines model parameters by an iterative process of hypothesis and verification. In the hypothesis step, sample points are randomly drawn from the dataset and form a plane model. Then, in the verification step, the points within a distance criterion are classified as inliers, and the Root Mean Squared Error (RMSE) between the plane model and the inliers is calculated as the score for adopting the best plane model. For the best results, the number of iterations ($k$) is determined by Equation (18):
$$k = \frac{\log(1 - p)}{\log(1 - w^n)} \tag{18}$$
where $p$ is the probability that the best model is returned within $k$ iterations, $w$ is the probability that a point belongs to the best model, and $n$ is the number of sample points. The applied plane model is defined by Equation (19):
$$ax + by + cz + 1 = 0 \tag{19}$$
where $a$, $b$, and $c$ are the parameters of the plane model, and $x$, $y$, and $z$ are the 3D coordinates of a point. Figure 6 shows examples of the point and plane features extracted from each sensing dataset. As shown in Figure 6b, the point cloud obtained by the mobile laser scanner is too sparse to extract point features. Therefore, plane features were extracted and applied to the boresight and lever-arm calibration of the mobile laser scanner.
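A minimal RANSAC plane extraction following Equations (18) and (19); the inlier ratio, distance threshold, and the normal-form parameterization (equivalent to Equation (19) up to scale) are our assumptions:

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.01, p=0.99, w=0.5):
    """Extract the dominant plane from an N x 3 point array.

    The iteration count k follows Equation (18) with n = 3 sample points;
    the plane is kept in normal form n . x + d = 0, a rescaling of
    Equation (19).
    """
    n = 3
    k = int(np.ceil(np.log(1 - p) / np.log(1 - w ** n)))
    best_inliers = np.zeros(len(points), bool)
    for _ in range(k):
        sample = points[np.random.choice(len(points), n, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample, redraw
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # A final least-squares refit to all inliers gives the reported plane.
    return best_inliers
```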

3. Data Preparation

3.1. Sensors

To obtain continuous multi-view stereo images, three AXIS F1005-E network video cameras were combined [60]. The effective sensor size of the camera is 1/2.8″, and the size of the captured image is 1920 × 1200 pixels. The maximum frame rate of 60 fps is sufficient for compensating the image motion blur that occurs when the sensor platform moves fast. With a focal length of 10.5 mm, the camera achieves a 113° horizontal FOV and a 62° vertical FOV. Table 1 summarizes the specifications of the AXIS F1005-E (AXIS Communications AB, Emdalavägen Lund, Sweden).
For 3D laser scanning of target objects, the Velodyne HDL 32-E (Velodyne, Morgan Hill, CA, USA) was adopted [61]. The mobile laser scanner has 32 rotating channels using a Class 1 laser, which is safe for general users under all conditions [62]. The maximum measurement range is up to 100 m, and the rotation rate of the laser scanner varies from 5 to 20 Hz. The laser positioning accuracy is ±2 cm. Its horizontal FOV and angular resolution are 360° and 0.1–0.4°, respectively. However, its vertical FOV is only −30.67° to +10.67°, and its vertical angular resolution is 1.33°. Table 2 summarizes the specifications of the HDL 32-E.
Since the image and laser scanning observations are conducted on a fast-moving vehicle, a GNSS/INS that can provide highly precise and dense navigation information is essential; the OxTS survey+ was adopted [63]. The positioning and orientation accuracies of the GNSS/INS are 0.01 m and 0.1 degrees, respectively, and the output rate is 100 Hz. Table 3 summarizes the specifications of the OxTS survey+.
To obtain the reference data for the boresight and lever-arm calibration of the developed MMS, FARO’s Focus 3D was adopted [64]. Its maximum measurement range and scan rate are 120 m and 976,000 points/sec, respectively. The ranging error and noise are only ±2 mm and 0.6 mm, respectively. Moreover, the horizontal and vertical FOVs are 360° and 305°, respectively. Table 4 summarizes the specifications of the Focus 3D.
With the terrestrial laser scanner, dense and precise 3D information of target objects can be constructed rapidly and effectively. However, since the terrestrial laser scanner must remain fixed while scanning, it is not suitable for an MMS. Furthermore, the terrestrial laser scanner can only observe objects in its line of sight, so occlusions might occur in the observed point cloud. To maximize the coverage of the laser scanning and minimize occlusion, multiple scans and registration processes have to be conducted. For the registration of each scanned dataset, common targets must be placed in the areas overlapped by the multiple point clouds. In this paper, artificial sphere targets were utilized for the registration. The georeferencing process followed, to transform the coordinate system of the registered point cloud into an absolute coordinate system. For the georeferencing process, static GNSS observation, which can ensure 3.5 mm accuracy, was conducted using the Trimble R8 GNSS receiver [65].

3.2. Datasets

For the camera calibration, a checkerboard with 30 × 30 mm squares was utilized [66,67]. Figure 7 shows the checkerboard and the extracted reference points.
As shown in Figure 7a, the reference points could be detected from the checkerboard in the captured images. Moreover, as shown in Figure 7b–d, a sufficient number of points was utilized to achieve the geometry needed to reduce the correlation between the camera calibration parameters. The checker points form a virtual grid with 30 mm spacing, and the actual camera IOPs and virtual camera EOPs can be estimated by the least squares adjustment together with their precisions.
After the camera calibration of each camera sensor was completed, the boresight and lever-arm calibration was conducted using the terrestrial laser scanning data. Due to the limited coverage of a single scan, terrestrial laser scanning was repeated at 15 stations. Moreover, the registration and georeferencing processes were performed so the scanning data could serve as the reference data for the boresight and lever-arm calibration of the MMS. Figure 8 shows the distribution of the scanning stations, the targets for the registration and georeferencing, and the processed point cloud registered into the Korean TM coordinate system.
As shown in Figure 8, the registration targets were well distributed with respect to the locations of the scanning stations, and a sufficient number of georeferencing targets was installed to cover the scanned area. The point spacing of the point cloud at the calibration site was about 10 mm per 1 m of distance from the terrestrial laser scanning station.
At the calibration site, the BW targets whose diameters were 15 cm were installed to extract the reference point and plane features from the images and the point cloud. Figure 9 describes the datasets observed by the developed MMS.
As shown in Figure 9a–c, there was a significant amount of distortion in each image. Furthermore, compared with the terrestrial laser scanning data (Figure 8b), the point cloud observed by the mobile HDL 32-E was notably sparse, because the vertical FOV of the mobile laser scanner is relatively narrow and its vertical angular resolution relatively coarse.
From the image and point cloud data obtained from each sensor, the reference features were collected for the MMS calibration. The absolute coordinates of the center points of the BW targets were extracted from each image and the reference point cloud. To extract the image coordinates of the targets, the Harris corner detection algorithm was applied [59]. Then, the absolute 3D positions of the targets were extracted from the terrestrial laser scanning data using Scene v5.3, the software provided with the FARO Focus 3D [68]. To extract the plane features from each laser scanning dataset, the RANSAC scheme was applied.
For the experiments, 35 point features and 14 plane features were extracted. The plane fitting errors were less than 5 mm for the terrestrial laser scanning data and less than 1 cm for the mobile laser scanning data. Among 49 extracted features, 25 points and 10 planes were used as the ground control features to estimate the boresight and lever-arm parameters and the others were used as the independent check features for the external check of the calibration. For the network video camera sensors, the locational errors between the coordinates of features projected onto an image and the coordinates extracted from the image were calculated. For the mobile laser scanner, the discrepancies between the mathematically defined planes and the points transformed based on the estimated parameters were calculated. Figure 10 shows the distribution of the extracted features.

4. Calibration Results

4.1. Camera Calibration Results

The camera calibration of each camera sensor was conducted using the images capturing the reference checkerboard. The initial focal length for the iterative least squares approach was set according to the specification provided by the manufacturer and refined by minimizing the projection residuals of the reference points together with the estimated principal points and lens distortion parameters. Table 5 summarizes the initial values and the camera calibration results, showing the differences between the initial values and the calibrated parameters. These differences can influence the boresight and lever-arm calibration results. For example, since an error in the estimated focal length translates into an error in the image projection depth, it might cause an error in the lever-arm calibration results along the direction of the boresight, while errors in the principal points might cause errors in the direction normal to the boresight. The lens distortion parameters were also calculated and used for the rectification of the distorted images.
Figure 11 shows the image rectified with the calibrated parameters.
Along with the calibrated values, the precision of each parameter could be quantified by its a posteriori standard deviation. Since the focal length and principal points can have a significant effect on the results of the boresight and lever-arm calibration, the precision of the camera calibration should be checked beforehand. As a result of the camera calibration, the precisions of the calibrated focal lengths were below 0.001 mm and the precisions of the principal points were below 0.5 pixels. The residuals of the calibration also quantify the precision of the calibrated camera model: after the camera calibration, the RMS of the projection residuals was reduced to 0.42 pixels.

4.2. Boresight and Lever-Arm Calibration Result

With the calibrated IOPs of each camera sensor and the features extracted from each sensing data and the reference point cloud, the boresight and lever-arm calibration of the camera and laser scanner of the MMS was conducted. Table 6 and Table 7 summarize the results of the boresight and lever-arm calibration.
As shown in Table 6, the precision of the calibrated boresight was about 0.1 degrees and the precision of the calibrated lever-arm was about 10 mm. As shown in Table 7, while the differences between the coordinates of the ground features projected onto the images and the coordinates directly extracted from the images were below 1 pixel, the discrepancy between the plane model and the points transformed using the estimated parameters was 12 mm.
Furthermore, to analyze the influence of the plane geometry on the parameter estimation, the number of control planes was varied and every combination was applied to the MMS calibration. Table 8 summarizes the results of applying the controlled datasets.
As shown in Table 8, when more than 9 control planes were used for the MMS calibration, precisions of 15 mm could be achieved for the lever-arm parameters. However, when fewer planes were used, the uncertainty of the calibration increased and the precisions of the estimated parameters degraded. In particular, even with eight control planes, the success rate was only 69%. Meanwhile, with just four control planes, 10% of the combinations could still achieve a precision of 15 mm. This result implies that not only the number of control features but also their geometry must be carefully considered when designing the calibration site. Figure 12 illustrates examples of good and bad control plane geometries for precise calibration.
Using the control features represented in Figure 12a,c, precisions of 8 mm and 12 mm, respectively, could be achieved for the estimated lever-arm parameters. On the other hand, as shown in Figure 12b, the orientations of the control planes also play an important role in the calibration: since there was no ceiling or floor plane, the iterative least squares adjustment could not estimate appropriate parameters.
With the boresight and lever-arm parameters, the relative coordinate systems of the sensing data were transformed into the single common coordinate system of the body frame. In Figure 13, the arrows represent the position and orientation of each sensor in the body frame.
The data in the body frame can be transformed into the absolute coordinate system using the GNSS/INS data. Furthermore, with the parameters estimated from the calibration processes, the point cloud observed from the mobile laser scanner can be projected onto the captured images. Figure 14 represents the result of the transformation of the mobile scanning data into the Korean TM coordinate system. Figure 15 shows the result of the back projection of the mobile laser scanning data onto each image.

5. Discussion and Future Work

In this paper, an MMS combining a GNSS/INS, cameras, and a mobile laser scanner was developed, and its boresight and lever-arm calibration was conducted. The calibration approach based on the least squares adjustment using point and plane features has been widely applied and continuously analyzed in existing research on sensor calibration. However, it is difficult to collect proper reference data for the adjustment. In this regard, the utilization of the terrestrial laser scanner can be an alternative solution for efficiently obtaining a reference dataset. Compared with the total station and laser tracker, which are generally used for collecting accurate positioning data, the terrestrial laser scanner can obtain a dense and precise point cloud and reference features for the MMS calibration.
Through the calibration parameters and GNSS/INS observation, the multi-sensor integration was conducted successfully and the point clouds observed by the mobile laser scanner were georeferenced into the absolute coordinate system or accurately projected onto the time-synchronized image (Figure 16).
The dataset continuously collected from the moving platform can be represented by two information formats: (1) the point cloud representing the 3D shape and color information of the observed objects (Figure 17); and (2) the 3D positional information of the objects extracted from the continuous images (Figure 18).
As shown in Figure 17, the point cloud observed from the MMS was directly georeferenced and could represent the 3D shapes of objects. However, the point cloud generated from the MMS was too sparse to extract accurate road facility information. In particular, when objects were far from the MMS, the vertical point density of the generated point cloud became lower. For this reason, our research team alternatively designed a road facility mapping scheme based on image processing techniques. As shown in Figure 18, the road facilities can be extracted from images and assigned absolute 3D coordinates. Our research team expects that the collected road facility information can be used as basic data for the operation of autonomous cars. However, since GNSS signal blockage in urban areas with high and dense buildings causes significant locational biases in the point cloud observed by the designed MMS, a proper Simultaneous Localization and Mapping (SLAM) technique should be developed and applied to improve the stability and accuracy of the observations.

6. Conclusions

In this paper, an MMS combining network video cameras, a mobile laser scanner, and a GNSS/INS was developed, and an effective procedure for the MMS calibration was proposed. By deriving the reference features from terrestrial laser scanning data, precisions of approximately 0.1 degrees and 10 mm could be achieved for the boresight and lever-arm calibration, respectively.
The main advantages of applying the terrestrial laser scanner to the MMS calibration problem are efficiency and maintenance. The mechanical analysis of each sensor is impossible for general users, so a calibration process is required for the operation of an MMS. However, observing the accurate coordinates of the reference features is difficult and labor-intensive. In this regard, the application of the terrestrial laser scanner can significantly reduce the work time for the MMS calibration. Moreover, when MMS users want to recalibrate, the reference data constructed in the past can be applied to the new calibration process. For this reason, the calibration approach applying the terrestrial laser scanner can be a practical solution for general MMS users.

Acknowledgments

This research was supported by a grant [MPSS-SD-2015-41] through the Disaster and Safety Management Institute funded by the Ministry of Public Safety and Security of the Korean government.

Author Contributions

Seunghwan Hong and Hong-Gyoo Sohn were the main directors of this research. Seunghwan Hong programmed the boresight and lever-arm calibration software and collected point and plane features for the calibration, and Kwangyong Lim programmed the basic software to obtain the data from the developed MMS. Ilsuk Park and Yoonjo Choi conducted terrestrial laser scanning and operation of the developed MMS. Jisang Lee performed the registration and georeferencing of the terrestrial laser scanning data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Slattery, K.T.; Slattery, D.K.; Peterson, J.P. Road construction earthwork volume calculation using three-dimensional laser scanning. J. Surv. Eng. 2011, 138, 96–99. [Google Scholar] [CrossRef]
  2. Gonzalez-Jorge, H.; Solla, M.; Armesto, J.; Arias, P. Novel method to determine laser scanner accuracy for applications in civil engineering. Opt. Appl. 2012, 42, 43–53. [Google Scholar]
  3. Bitenc, M.; Lindenbergh, R.; Khoshelham, K.; Van Waarden, A.P. Evaluation of a lidar land-based mobile mapping system for monitoring sandy coasts. Remote Sens. 2011, 3, 1472–1491. [Google Scholar] [CrossRef]
  4. Cho, H.; Hong, S.; Kim, S.; Park, H.; Park, I.; Sohn, H.-G. Application of a terrestrial lidar system for elevation mapping in terra nova bay, antarctica. Sensors 2015, 15, 23514. [Google Scholar] [CrossRef] [PubMed]
  5. Pellenz, J.; Lang, D.; Neuhaus, F.; Paulus, D. Real-Time 3D mapping of rough terrain: A field report from disaster city. In Proceedings of the 2010 IEEE Safety Security and Rescue Robotics, Bremen, Germany, 26–30 July 2010; pp. 1–6.
  6. Gong, J. Mobile lidar data collection and analysis for post-sandy disaster recovery. In Proceedings of the 2013 International Workshop of Computing in Civil Engineering, Los Angeles, CA, USA, 23–25 June 2013.
  7. Tang, P.; Huber, D.; Akinci, B.; Lipman, R.; Lytle, A. Automatic reconstruction of as-built building information models from laser-scanned point clouds: A review of related techniques. Autom. Constr. 2010, 19, 829–843. [Google Scholar] [CrossRef]
  8. Klein, L.; Li, N.; Becerik-Gerber, B. Imaged-based verification of as-built documentation of operational buildings. Autom. Constr. 2012, 21, 161–171. [Google Scholar] [CrossRef]
  9. Snavely, N.; Seitz, S.M.; Szeliski, R. Photo Tourism: Exploring Photo Collections in 3D. ACM trans. Graph. 2006, 25, 835–846. [Google Scholar] [CrossRef]
  10. De Reu, J.; Plets, G.; Verhoeven, G.; De Smedt, P.; Bats, M.; Cherretté, B.; De Maeyer, W.; Deconynck, J.; Herremans, D.; Laloo, P. Towards a three-dimensional cost-effective registration of the archaeological heritage. J. Archaeol. Sci. 2013, 40, 1108–1121. [Google Scholar] [CrossRef]
  11. Golparvar-Fard, M.; Bohn, J.; Teizer, J.; Savarese, S.; Peña-Mora, F. Evaluation of image-based modeling and laser scanning accuracy for emerging automated performance monitoring techniques. Autom. Constr. 2011, 20, 1143–1155. [Google Scholar] [CrossRef]
  12. Dai, F.; Rashidi, A.; Brilakis, I.; Vela, P. Comparison of image-based and time-of-flight-based technologies for three-dimensional reconstruction of infrastructure. J. Constr. Eng. Manag. 2013, 139, 69–79. [Google Scholar] [CrossRef]
  13. Puttonen, E.; Lehtomäki, M.; Kaartinen, H.; Zhu, L.; Kukko, A.; Jaakkola, A. Improved sampling for terrestrial and mobile laser scanner point cloud data. Remote Sens. 2013, 5, 1754–1773. [Google Scholar] [CrossRef]
  14. Olsen, M.J.; Kuester, F.; Chang, B.J.; Hutchinson, T.C. Terrestrial laser scanning-based structural damage assessment. J. Comput. Civil Eng. 2009, 24, 264–272. [Google Scholar] [CrossRef]
  15. Gordon, S.J.; Lichti, D.D. Modeling terrestrial laser scanner data for precise structural deformation measurement. J. Surv. Eng. 2007, 133, 72–80. [Google Scholar] [CrossRef]
  16. Schwarz, K.P.; El-Sheimy, N. Mobile mapping systems–state of the art and future trends. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35, 10. [Google Scholar]
  17. Novak, K. The ohio state university highway mapping system: The stereo vision system component. In Proceedings of the 47th Annual Meeting of The Institute of Navigation, Williamsburg, VA, USA, 10–12 June 1991; pp. 121–124.
  18. Grejner-Brzezinska, D.A. Mobile mapping technology: Ten years later (part one). Surv. Land Inf. Syst. 2001, 61, 75–92. [Google Scholar]
  19. Madeira, S.; Gonçalves, J.A.; Bastos, L. Sensor integration in a low cost land mobile mapping system. Sensors 2012, 12, 2935–2953. [Google Scholar] [CrossRef] [PubMed]
  20. Sairam, N.; Nagarajan, S.; Ornitz, S. Development of mobile mapping system for 3d road asset inventory. Sensors 2016, 16, 367. [Google Scholar] [CrossRef] [PubMed]
  21. Jaakkola, A.; Hyyppä, J.; Kukko, A.; Yu, X.; Kaartinen, H.; Lehtomäki, M.; Lin, Y. A low-cost multi-sensoral mobile mapping system and its feasibility for tree measurements. ISPRS J. Photogramm. Remote Sens. 2010, 65, 514–522. [Google Scholar] [CrossRef]
  22. Da Silva, J.F.C.; Camargo, P.d.O.; Gallis, R. Development of a low-cost mobile mapping system: A south american experience. Photogramm. Record 2003, 18, 5–26. [Google Scholar] [CrossRef]
  23. Habib, A.; Bang, K.I.; Kersting, A.P.; Chow, J. Alternative methodologies for lidar system calibration. Remote Sens. 2010, 2, 874–907. [Google Scholar] [CrossRef]
  24. Chan, T.O.; Lichti, D.D.; Glennie, C.L. Multi-feature based boresight self-calibration of a terrestrial mobile mapping system. ISPRS J. Photogramm. Remote Sens. 2013, 82, 112–124. [Google Scholar] [CrossRef]
  25. Habib, A.F.; Morgan, M.F. Automatic calibration of low-cost digital cameras. Opt. Eng. 2003, 42, 948–955. [Google Scholar]
  26. van den Heuvel, F.A. Estimation of interior orientation parameters from constraints on line measurements in a single image. Int. Arch. Photogramm. Remote Sens. 1999, 32, W11. [Google Scholar]
  27. Sturm, P.F.; Maybank, S.J. On plane-based camera calibration: A general algorithm, singularities, applications. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, USA, 23–25 June 1999.
  28. Zhang, Z. Flexible camera calibration by viewing a plane from unknown orientations. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 666–673.
  29. Bradski, G.; Kaehler, A. Learning Opencv: Computer Vision with the Opencv Library; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2008. [Google Scholar]
  30. Blanchet, G.; Charbit, M. Digital Signal and Image Processing Using Matlab, Volume 2: Advances and Applications: The Deterministic Case; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  31. Cain, C.; Leonessa, A. Validation of underwater sensor package using feature based slam. Sensors 2016, 16, 380. [Google Scholar] [CrossRef] [PubMed]
  32. Chiang, K.-W.; Tsai, M.-L.; Naser, E.-S.; Habib, A.; Chu, C.-H. New calibration method using low cost mem imus to verify the performance of uav-borne mms payloads. Sensors 2015, 15, 6560–6585. [Google Scholar] [CrossRef] [PubMed]
  33. Chan, T.O.; Lichti, D.D. Automatic in situ calibration of a spinning beam lidar system in static and kinematic modes. Remote Sens. 2015, 7, 10480–10500. [Google Scholar] [CrossRef]
  34. Chan, T.; Lichti, D.D.; Belton, D. Temporal analysis and automatic calibration of the velodyne hdl-32e lidar system. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 2, 61–66. [Google Scholar] [CrossRef]
  35. Glennie, C.; Lichti, D.D. Static calibration and analysis of the velodyne hdl-64e s2 for high accuracy mobile scanning. Remote Sens. 2010, 2, 1610–1624. [Google Scholar] [CrossRef]
  36. Yang, M.Y.; Förstner, W. Plane detection in point cloud data. In Proceedings of the 2nd International Conference on Machine Control Guidance, Bonn, Germany, 9–11 March 2010; pp. 95–104.
  37. Le Scouarnec, R.; Touzé, T.; Lacambre, J.-B.; Seube, N. A new reliable boresight calibration method for mobile laser scanning applications. In Proceedings of the European Calibration and Orientation Workshop, Castelldefels, Spain, 12–14 February 2014. [CrossRef]
Figure 1. Configuration of mobile mapping system: network video cameras (F: front, L: left, R: right), mobile laser scanner, and GNSS/INS.
Figure 2. Sensor calibration and integration scheme of mobile mapping system.
Figure 3. (a) Sphere target used for registration of terrestrial laser scanning data; (b) sphere target detected in a point cloud (the green sphere is a fitted sphere model).
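Sphere targets like the one in Figure 3 are typically localized by least-squares fitting of a sphere model to the scanner returns. The following is a minimal sketch of an algebraic sphere fit in Python (NumPy), shown as a reading aid rather than the authors' implementation; segmenting the target points out of the full scene is assumed to happen upstream.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit.

    Expanding |p - c|^2 = r^2 gives a linear system in the center c
    and the term d = r^2 - |c|^2:
        2*x*cx + 2*y*cy + 2*z*cz + d = x^2 + y^2 + z^2
    points: (N, 3) array of returns already segmented from the target.
    """
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Synthetic check: noisy returns on a 0.1 m sphere centered at (1, 2, 0.5)
rng = np.random.default_rng(0)
d = rng.normal(size=(500, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 0.5]) + 0.1 * d + rng.normal(scale=0.001, size=(500, 3))
print(fit_sphere(pts))  # center near (1, 2, 0.5), radius near 0.1
```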
Figure 4. Static GNSS observation conducted for georeferencing of terrestrial laser scanning data.
Figure 5. Conceptual model of mobile mapping system.
Figure 6. Example of reference feature extraction: (a) point feature extracted from image; (b) plane feature extracted from mobile laser scanning data.
Figure 7. (a) Checkerboard and extracted reference points; (b) reference points for calibration of CAM(F); (c) reference points for calibration of CAM(L); (d) reference points for calibration of CAM(R).
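The reference points in Figure 7 come from automatic checkerboard corner extraction. As a minimal sketch of that step, the OpenCV calls below are a stand-in toolchain, not the one used in the paper; the board size and file name are assumptions.

```python
import cv2

# Inner-corner grid of the board; (9, 6) is an assumed example size.
PATTERN = (9, 6)

img = cv2.imread("checkerboard_frame.png")  # hypothetical captured frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

found, corners = cv2.findChessboardCorners(gray, PATTERN)
if found:
    # Refine detections to sub-pixel precision before feeding them
    # to the camera calibration adjustment.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
```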
Figure 8. Terrestrial laser scanning data: (a) distribution of scanning stations, registration targets, and georeferencing targets; (b) registered and georeferenced point cloud of calibration site.
Figure 9. Datasets observed by the image and laser sensors of the MMS: (a) CAM(F) image; (b) CAM(L) image; (c) CAM(R) image; (d) mobile laser scanning data (the origin of the coordinate system is the center of the sensor).
Figure 10. Distribution of reference features for boresight and lever-arm calibration: (a) point features; (b) plane features.
Figure 11. Result of image rectification with calibrated IOPs: (a) original image; (b) rectified image.
Figure 12. Example of good and bad geometries of control planes: (a) successful case with eight planes; (b) failed case with eight planes; (c) successful case with four planes; (d) failed case with four planes.
Figure 13. Boresight and lever-arm calibration results (for CAM(F), CAM(L), and CAM(R), the direction vector in the sensor frame is [0 0 1]; for the mobile laser scanner it is [1 0 0]): (a) 2-dimensional view; (b) 3D view.
Figure 14. Georeferenced mobile laser scanning data.
Figure 15. Projection of mobile laser scanning data into network video camera images: (a) CAM(F); (b) CAM(R); (c) CAM(L).
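Overlays like Figure 15 require projecting each georeferenced laser point back into a camera frame. The sketch below covers only the final pinhole step, assuming the point has already been transformed into the camera frame with the calibrated boresight and lever-arm; lens distortion is omitted, and the pixel pitch and principal point are illustrative placeholders, not the calibrated values.

```python
import numpy as np

def project_to_pixels(points_cam, f_mm=2.8, pixel_pitch_mm=0.003,
                      cx=960.0, cy=600.0):
    """Pinhole projection of camera-frame points (N, 3) to pixels.

    points_cam must already be in the camera frame; z > 0 means the
    point is in front of the lens. f_mm matches the nominal focal
    length in Table 1, but pixel_pitch_mm, cx, and cy are assumed
    defaults for a 1920 x 1200 sensor, not calibration results.
    """
    f_px = f_mm / pixel_pitch_mm
    z = points_cam[:, 2]
    u = cx + f_px * points_cam[:, 0] / z
    v = cy + f_px * points_cam[:, 1] / z
    return np.stack([u, v], axis=1)
```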
Figure 16. Operation of the developed MMS under driving conditions: (a) georeferenced point cloud (red: points observed by the MMS; black: point cloud observed with the terrestrial laser scanner and georeferenced with GNSS); (b) point cloud projected onto an image.
Figure 17. Point cloud generated by developed MMS: (a) point cloud projected on aerial orthophoto; (b,c) point cloud including color information.
Figure 18. Example of 3D road sign mapping (white boxes: faces of people and registration numbers of cars are masked for privacy; red box: road sign; green dots: points indicating road signs; blue dots: points projected onto the image).
Table 1. Specifications of the network video camera.
Model: AXIS F1005-E
Effective sensor size: 1/2.8″
Focal length: 2.8 mm
Field of view: 113° horizontal, 62° vertical
Image size: 1920 × 1200 pixels
Frame rate: 60 fps
Table 2. Specifications of the mobile laser scanner.
Model: HDL-32E
Number of channels: 32
Measurement range: up to 100 m
Rotation rate: 5–20 Hz
Accuracy: ±2 cm
Field of view: 360° horizontal, −30.67° to +10.67° vertical
Angular resolution: 0.1–0.4° horizontal, 1.33° vertical
Laser: Class 1, 903 nm wavelength
Table 3. Specifications of the GNSS/INS unit.
Model: OxTS Survey+
Position accuracy: up to 0.01 m
Velocity accuracy: 0.1 km/h
Roll/pitch accuracy: 0.03°
Heading accuracy: 0.1°
Output rate: 100 Hz
Size: 234 × 120 × 88 mm
Table 4. Specifications of the terrestrial laser scanner.
Model: FARO Focus3D
Type: Amplitude-Modulated Continuous Wave (AMCW)
Measurement range: up to 120 m
Scan rate: 976,000 points/s
Ranging error: ±2 mm
Ranging noise: 0.6 mm *
Field of view: 360° horizontal, 305° vertical
Size: 240 × 200 × 100 mm
* At 10 m, raw data, at 90% reflectance.
Table 5. Camera calibration results (precision: 1σ).
Focal length (mm), initial 2.5: CAM(F) 2.47 ± 0.0054; CAM(L) 2.44 ± 0.0069; CAM(R) 2.52 ± 0.0085
Principal point xp (pixel), initial 0: CAM(F) −27.41 ± 0.70; CAM(L) 0.47 ± 0.87; CAM(R) −49.65 ± 0.95
Principal point yp (pixel), initial 0: CAM(F) 33.05 ± 0.96; CAM(L) 59.03 ± 0.87; CAM(R) 18.14 ± 0.87
Radial distortion A1 (×10^−1, unitless), initial 0: CAM(F) −3.33 ± 0.017; CAM(L) −3.39 ± 0.021; CAM(R) −3.57 ± 0.03
Radial distortion A2 (×10^−1, unitless), initial 0: CAM(F) 1.29 ± 0.014; CAM(L) 1.43 ± 0.021; CAM(R) 1.65 ± 0.03
Radial distortion A3 (×10^−2, unitless), initial 0: CAM(F) −2.47 ± 0.045; CAM(L) −3.13 ± 0.078; CAM(R) −4.15 ± 0.15
Decentering distortion B1 (×10^−4, unitless), initial 0: CAM(F) −5.16 ± 0.73; CAM(L) −8.59 ± 1.00; CAM(R) −3.51 ± 1.21
Decentering distortion B2 (×10^−4, unitless), initial 0: CAM(F) 1.98 ± 1.50; CAM(L) 3.29 ± 1.51; CAM(R) −5.71 ± 2.35
Projection residuals, x-direction (pixel): CAM(F) 0.47; CAM(L) 0.36; CAM(R) 0.50
Projection residuals, y-direction (pixel): CAM(F) 0.36; CAM(L) 0.32; CAM(R) 0.42
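Table 5's A1–A3 and B1–B2 are radial and decentering distortion coefficients. As a reading aid, here is a minimal sketch of a Brown-style correction; this is an assumed, conventional formulation rather than the paper's exact model, and the sign of the correction depends on the convention under which the coefficients were estimated.

```python
import numpy as np

def correct_distortion(x, y, A, B):
    """Brown-style distortion correction for image coordinates.

    x, y: measured coordinates relative to the principal point.
    A = (A1, A2, A3): radial terms; B = (B1, B2): decentering terms.
        dx = x*(A1*r^2 + A2*r^4 + A3*r^6) + B1*(r^2 + 2x^2) + 2*B2*x*y
        dy = y*(A1*r^2 + A2*r^4 + A3*r^6) + B2*(r^2 + 2y^2) + 2*B1*x*y
    """
    r2 = x * x + y * y
    radial = A[0] * r2 + A[1] * r2**2 + A[2] * r2**3
    dx = x * radial + B[0] * (r2 + 2 * x * x) + 2 * B[1] * x * y
    dy = y * radial + B[1] * (r2 + 2 * y * y) + 2 * B[0] * x * y
    # Sign convention is an assumption; flip if the coefficients model
    # the correction rather than the distortion itself.
    return x - dx, y - dy
```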
Table 6. Boresight and lever-arm calibration results (precision: 1σ).
CAM(F): xT 776.82 ± 6.25 mm; yT 1776.02 ± 5.72 mm; zT 383.23 ± 4.12 mm; ω −81.8879 ± 0.0712°; φ −0.0425 ± 0.0621°; κ 179.2535 ± 0.0815°
CAM(L): xT 478.36 ± 6.13 mm; yT 1650.72 ± 5.61 mm; zT 340.34 ± 4.42 mm; ω −87.2623 ± 0.0725°; φ −0.7647 ± 0.0303°; κ −179.8742 ± 0.0853°
CAM(R): xT 1060.64 ± 7.50 mm; yT 1656.25 ± 8.32 mm; zT 424.36 ± 5.75 mm; ω −83.0442 ± 0.0641°; φ −0.2962 ± 0.0446°; κ 175.9012 ± 0.0671°
Mobile laser scanner: xT 793.87 ± 1.26 mm; yT 1120.07 ± 1.34 mm; zT 892.54 ± 7.38 mm; ω −0.2845 ± 0.0642°; φ 5.2074 ± 0.0534°; κ 88.2112 ± 0.0113°
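The boresight angles (ω, φ, κ) and lever-arm offsets (xT, yT, zT) in Table 6 enter point georeferencing as a chain of rigid-body transformations. The sketch below illustrates that chain under assumed conventions: the rotation order R = Rz·Ry·Rx and the frame definitions are assumptions, and a real system must follow the INS manufacturer's convention and handle time synchronization.

```python
import numpy as np

def rot_xyz(omega, phi, kappa):
    """R = Rz(kappa) @ Ry(phi) @ Rx(omega), angles in radians."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def georeference(p_sensor, boresight_deg, lever_arm_m, R_ins, t_ins):
    """Sensor frame -> body frame -> mapping frame.

    p_body  = R_boresight @ p_sensor + lever_arm
    p_world = R_ins @ p_body + t_ins
    R_ins and t_ins come from the GNSS/INS trajectory at the point's
    timestamp; trajectory interpolation is omitted here.
    """
    R_b = rot_xyz(*np.radians(boresight_deg))
    return R_ins @ (R_b @ p_sensor + lever_arm_m) + t_ins
```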
Table 7. Projection residual and external check result of boresight and lever-arm calibration.
CAM(F) (pixel): projection residual mean 0.00, RMSE * 0.44; external check mean 0.23, RMSE 0.65
CAM(L) (pixel): projection residual mean 0.00, RMSE 0.65; external check mean 0.15, RMSE 0.85
CAM(R) (pixel): projection residual mean 0.00, RMSE 0.62; external check mean 0.35, RMSE 0.77
Mobile laser scanner (mm): projection residual mean 11.58, RMSE 12.99; external check mean 10.97, RMSE 11.58
* RMSE: Root Mean Square Error.
Table 8. Precision of estimated parameters and external check result according to the number of planes.
3 planes: 0/120 precise cases * (0%); no precision or external check values available
4 planes: 22/210 (10%); precision ** xT 5.53, yT 3.92, zT 11.08; external check mean 15.71, RMSE 15.86
5 planes: 76/252 (30%); precision xT 1.15, yT 1.79, zT 9.13; external check mean 11.41, RMSE 12.16
6 planes: 97/210 (46%); precision xT 3.15, yT 1.93, zT 9.48; external check mean 9.33, RMSE 11.65
7 planes: 66/120 (55%); precision xT 0.99, yT 1.18, zT 7.03; external check mean 8.22, RMSE 9.01
8 planes: 31/45 (69%); precision xT 1.31, yT 1.48, zT 7.53; external check mean 10.24, RMSE 11.28
9 planes: 10/10 (100%); precision xT 1.49, yT 1.64, zT 7.84; external check mean 9.21, RMSE 9.78
10 planes: 1/1 (100%); precision xT 1.26, yT 1.34, zT 7.38; external check mean 10.97, RMSE 11.58
All precision and external check values are in mm. * Precise cases are plane combinations whose estimated lever-arm parameter precisions are better than 15 mm. ** Reported values are from the case with the highest precision among the estimated parameters.
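The trend in Table 8, like the good and bad geometries in Figure 12, reflects how well the control-plane normals span 3D space: nearly coplanar normals leave the lever-arm translation weakly determined. The following is a minimal sketch of a heuristic pre-check under that interpretation; the conditioning threshold is illustrative, not a value from the paper.

```python
import numpy as np

def planes_well_conditioned(normals, max_condition=50.0):
    """Heuristic geometry check on control-plane normals.

    If the unit normals are nearly coplanar (e.g., vertical walls only,
    no ground plane), N^T N is ill-conditioned and the translation
    (lever-arm) part of the adjustment is weakly determined.
    """
    N = np.asarray(normals, dtype=float)
    N = N / np.linalg.norm(N, axis=1, keepdims=True)
    return np.linalg.cond(N.T @ N) < max_condition

# Nearly coplanar normals (bad) vs. three walls plus the ground (good)
bad = [[1, 0, 0], [0, 1, 0], [0.7, 0.7, 0.01], [-1, 0.1, 0.02]]
good = [[1, 0, 0], [0, 1, 0], [0.7, 0.7, 0.1], [0, 0, 1]]
print(planes_well_conditioned(bad), planes_well_conditioned(good))  # False True
```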
