Technical Note

Bore-Sight Calibration of Multiple Laser Range Finders for Kinematic 3D Laser Scanning Systems

Jaehoon Jung, Jeonghyun Kim, Sanghyun Yoon, Sangmin Kim, Hyoungsig Cho, Changjae Kim and Joon Heo *
1 School of Civil and Environmental Engineering, College of Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-749, Korea
2 Department of Civil and Environmental Engineering, College of Engineering, Myongji University, 116 Myongji-ro, Cheoin-gu, Yongin, Gyeonggi-do 449-728, Korea
* Author to whom correspondence should be addressed.
Sensors 2015, 15(5), 10292-10314; https://doi.org/10.3390/s150510292
Submission received: 20 February 2015 / Revised: 17 April 2015 / Accepted: 27 April 2015 / Published: 4 May 2015
(This article belongs to the Section Physical Sensors)

Abstract:
The Simultaneous Localization and Mapping (SLAM) technique has been used for autonomous navigation of mobile systems; now, its applications have been extended to 3D data acquisition of indoor environments. In order to reconstruct 3D scenes of indoor space, the kinematic 3D laser scanning system, developed herein, carries three laser range finders (LRFs): one is mounted horizontally for system-position correction and the other two are mounted vertically to collect 3D point-cloud data of the surrounding environment along the system’s trajectory. However, the kinematic laser scanning results can be impaired by errors resulting from sensor misalignment. In the present study, the bore-sight calibration of multiple LRF sensors was performed using a specially designed double-deck calibration facility, which is composed of two half-circle-shaped aluminum frames. Moreover, in order to automatically achieve point-to-point correspondences between a scan point and the target center, a V-shaped target was designed as well. The bore-sight calibration parameters were estimated by a constrained least squares method, which iteratively minimizes the weighted sum of squares of residuals while constraining some highly-correlated parameters. The calibration performance was analyzed by means of a correlation matrix. After calibration, the visual inspection of mapped data and residual calculation confirmed the effectiveness of the proposed calibration approach.

1. Introduction

Three-dimensional (3D) reconstruction of building structures is an important task, as it allows for representing and documenting the current status of building components and maintenance work [1,2,3,4]. To facilitate 3D data acquisition of existing structures, laser scanners, which are fast, simple to use, and highly accurate, are widely employed [5,6]. However, conventional static laser scanning has limited operability, particularly in indoor environments, due to the presence of clutter and occlusions: to scan complex indoor spaces without loss of information, surveyors need to relocate the scanner many times, which incurs extra work for registering the separate point-cloud data sets [7]. Alternatively, a mobile kinematic mapping system that uses the Simultaneous Localization and Mapping (SLAM) technique has been considered. Typically, SLAM has been employed for autonomous navigation of mobile systems; now, its applications have been extended to 3D data acquisition in indoor environments.
One of the common methods for acquiring dense 3D data is the use of a 2D Laser Range Finder (LRF). A mobile system equipped with an LRF sensor scans the surrounding environment on the 2D plane and uses that information to localize its position. While moving, the system maintains a consistent estimate of its trajectory. 3D data can then be obtained by scanning vertical profiles in the direction perpendicular to the trajectory and registering all scans in the same coordinate system. The necessary hardware can be obtained by adding vertically mounted LRF sensors to a mobile platform [8,9]. In this case, misalignment of the multiple sensors causes a systematic contradiction between them, which negatively impacts the utility of sensor fusion algorithms [10].
Calibration is the process of estimating the parameters that must be applied to correct actual measurements to their true values [11]. These parameters together constitute a calibration model, which can be used to correct systematic instrumental errors [12]. Typically, the parameters of geometric sensors can be decomposed into intrinsic and extrinsic parameters. The intrinsic parameters control how the sensor functions and samples the scene. The extrinsic parameters determine the position and orientation of the sensor relative to a reference coordinate system [13]. According to the parameter types, calibration can likewise be divided into intrinsic and extrinsic forms. Intrinsic calibration refers to the process of matching the magnitude of the output of a sensor to the magnitude of the input property within a specified accuracy and precision. Extrinsic calibration refers to the process of finding the location of the sensor with respect to some other reference frame; this is typically required in multi-sensor fusion, where the data from different sensors have to be registered in a single reference frame [10,14]. Thus far, extrinsic calibration of multiple sensors in mobile systems has been studied mainly for determining the relative transformation between an LRF and a rigidly-attached camera [15,16,17,18]. However, there have been very few works on calibration of multiple LRF sensors [19]. Underwood et al. [10] proposed a multi-LRF calibration framework that minimizes sensor misalignment, but the process was designed for an outdoor mobile mapping system with known motion information from high-performance, but also high-cost, Inertial Measurement Unit (IMU) and Global Positioning System (GPS)-based navigation: a 3D geometric primitive (a single vertical pole) was detected while the vehicle drove around the test site and was used as the calibration input. For calibration without motion information, Antone and Friedman [20] presented an automated procedure that uses a specially-designed pyramid target to calibrate the 3D location and orientation of an LRF with respect to a fixed 3D reference frame; the method, however, was designed only for single-LRF calibration. The multi-LRF calibration proposed by Choi et al. [19] estimates the relative poses of 2D LRF sensors using scan data on two orthogonal planes, but it requires coinciding or perpendicular scan-line inputs for calibration, which need further verification.
In this research, a kinematic laser scanning system is introduced for 3D indoor point-cloud data acquisition. A new calibration facility and a mathematical model are also presented, and the calibration for all LRF sensors involved is conducted and evaluated. This research assumes that the intrinsic sensor calibration is completed, and focuses on the extrinsic calibration of multiple LRF sensors to identify the rigid transformation from each sensor frame to the platform body frame. The calibration process entails the following: (1) a mathematical adjustment model is developed to estimate the influencing systematic factors; (2) a double-deck calibration facility, which is specifically designed for multiple LRF sensors, is introduced; (3) the calibration facility is scanned by the developed scanning system, and a total station with high accuracy is used for the coordinate measurements; (4) the calibrated parameters are estimated using a constrained least squares method; and finally (5) these parameters are analyzed and evaluated by means of a correlation matrix of parameters, residual calculations, and visual inspection of the produced point cloud data.

2. System Description and Mathematical Model

2.1. Kinematic 3D Laser Scanning System

Figure 1 shows the kinematic 3D laser scanning system developed in this research. It measures approximately 35 cm (length) × 35 cm (width) × 78 cm (height). The platform carries three 2D LRFs (UTM-30LX, Hokuyo, Osaka, Japan). Each provides a 270° scanning range with an angular resolution of 0.25° and can measure distances up to 60 m, though without guaranteed reliability at the far end of that range. One full scan cycle takes 25 ms, giving a 40 Hz measurement frequency. Data transfer to the host is realized through a USB 2.0 interface or an Ethernet cable. The measurement accuracy, according to the manufacturer [21], is ±30 mm at 0.1 to 10 m range and ±50 mm at 10 to 30 m range. The precision of repeated measurements (standard deviation) is less than 10 mm at 0.1 to 10 m range and less than 30 mm at 10 to 30 m range on a white sheet [22]. In this research, one LRF is mounted horizontally to update the system's location, and the other two are mounted vertically to reconstruct the 3D scenes. In addition, a laptop computer is used for storing the data of each sensor, and an Ethernet hub is used for connecting the three LRFs with the laptop computer.
Figure 1. Kinematic 3D laser scanning system developed in this research.

2.2. Coordinate System

In almost all mapping scenarios, the sensors can sample only from a small region of the larger area to be mapped. The complete map is built by physically moving the sensors through the environments and registering the information into a single coordinate frame; this usually necessitates the transformation of several coordinate frames regardless of the sensors or mapping algorithm [10]. In the present study, three coordinate frames—the sensor, body, and global frames—were adopted to model the locations of the point acquisitions with respect to the location of the kinematic scanning system:
  • The sensor frame (s) of the 2D LRF provides point information by means of a distance and an angle, $\mathbf{p} = [\phi\ \ \rho\ \ 0]^T$, which is then transformed to Cartesian coordinates as $\mathbf{p}^s = [\rho\cos\phi\ \ \rho\sin\phi\ \ 0]^T$. The sensor frame is defined by its alignment on the platform body frame.
  • The body frame (b) of the developed system is right-handed. Its origin is fixed to the center point of the mobile system, and the x-axis points in the direction of the platform's forward movement. Each sensor is located with respect to the body frame by the lever-arm (three constant translation) and bore-sight (three rotation) parameters, given by $T_b = [x_b\ \ y_b\ \ z_b]^T$ and $R_b = [\omega_b\ \ \varphi_b\ \ \kappa_b]^T$. In the present study, three different lever-arm and bore-sight parameter groups are used to define the middle-horizontal, left-vertical, and right-vertical LRF sensors.
  • The global frame (g) is fixed to an arbitrary point on the Earth and is used to represent the stationary environment in which the platform moves. In the present study, the origin of the global frame was fixed to the point from which the platform starts to move. The movement of the system with respect to the global frame is given by the three constant translation parameters $[x_g\ \ y_g\ \ z_g]^T$ and the three rotation parameters $[\omega_g\ \ \varphi_g\ \ \kappa_g]^T$. In its practical implementation, however, the movement of the system (X) is limited to 2D space and is thus represented by two translation parameters on the x-y plane and one rotation parameter about the z-axis: $X = [x_g\ \ y_g\ \ \kappa_g]^T$.
Figure 2 shows the configurations of the three coordinate systems. The three 2D LRFs used in the developed kinematic scanning system are mounted to point in three different directions. The middle LRF is mounted horizontally to sweep the x-y plane of the body frame and to correct the system’s position. The other two LRFs are mounted vertically (on the y-z plane of the body frame) to scan the profiles of the surrounding environments. 3D point clouds are obtained by registering those vertical profiles on the system’s trajectory in the global frame. Since the maximum field of view of the Hokuyo UTM-30LX is limited to 270°, two sensors are needed to capture the complete scan (360°) without a servo motor.
Figure 2. Configurations of sensor, body, and global frames of the developed system.
By combining the two coordinate transformations, a point $\mathbf{p}^{s_i} = [x\ \ y\ \ 0]^T$ on the 2D sensor frame, in which $i$ refers to the middle-horizontal (h), left-vertical (l), or right-vertical (r) scanner, can be transformed to a point $\mathbf{p}^g = [x\ \ y\ \ z]^T$ in the 3D global frame. At time t, the transformation model is given by:

$$\mathbf{p}^g(t) = R_b^g(t)\left[R_{s_i}^b\,\mathbf{p}^{s_i}(t) + T_{b_i}\right] + T^g(t) \qquad (1)$$

where $R_{s_i}^b = R_{\kappa_i} R_{\varphi_i} R_{\omega_i}$ is given by:

$$R_{\kappa_i} = \begin{bmatrix} \cos\kappa_i & -\sin\kappa_i & 0 \\ \sin\kappa_i & \cos\kappa_i & 0 \\ 0 & 0 & 1 \end{bmatrix},\quad R_{\varphi_i} = \begin{bmatrix} \cos\varphi_i & 0 & \sin\varphi_i \\ 0 & 1 & 0 \\ -\sin\varphi_i & 0 & \cos\varphi_i \end{bmatrix},\quad R_{\omega_i} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\omega_i & -\sin\omega_i \\ 0 & \sin\omega_i & \cos\omega_i \end{bmatrix} \qquad (2)$$

and $T_{b_i} = [x_{b_i}\ \ y_{b_i}\ \ z_{b_i}]^T$; both are determined from the $i$th mounted sensor position in the body frame, whereas $R_b^g(t) = \begin{bmatrix} \cos\kappa(t) & -\sin\kappa(t) & 0 \\ \sin\kappa(t) & \cos\kappa(t) & 0 \\ 0 & 0 & 1 \end{bmatrix}$ and $T^g(t) = [x(t)\ \ y(t)\ \ 0]^T$ are determined by the SLAM solution. Because the movement of the developed system is limited to the x-y plane, only two translation parameters (x(t), y(t)) and one rotation parameter (κ(t)) are considered. These equations are required whenever sensory information from a mobile system is mapped onto the global frame.
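To make the transformation chain concrete, the following sketch (our own Python/NumPy illustration of Equations (1) and (2), not the authors' implementation; all function names and the placeholder values are ours) converts a raw LRF return into the global frame:

```python
import numpy as np

def polar_to_sensor(rho, phi):
    """Raw LRF return (range rho, beam angle phi) to Cartesian
    coordinates in the 2D sensor frame (z = 0)."""
    return np.array([rho * np.cos(phi), rho * np.sin(phi), 0.0])

def rot_sensor_to_body(omega, phi, kappa):
    """Bore-sight rotation R_s^b = R_kappa R_phi R_omega, Equation (2)."""
    Rk = np.array([[np.cos(kappa), -np.sin(kappa), 0.0],
                   [np.sin(kappa),  np.cos(kappa), 0.0],
                   [0.0, 0.0, 1.0]])
    Rp = np.array([[ np.cos(phi), 0.0, np.sin(phi)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(phi), 0.0, np.cos(phi)]])
    Ro = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(omega), -np.sin(omega)],
                   [0.0, np.sin(omega),  np.cos(omega)]])
    return Rk @ Rp @ Ro

def sensor_to_global(p_s, lever_arm, boresight, pose2d):
    """Equation (1): p_g = R_b^g(t) [R_s^b p_s + T_b] + T_g(t),
    where pose2d = (x, y, kappa) is the planar SLAM pose at time t."""
    x, y, kappa = pose2d
    R_bg = np.array([[np.cos(kappa), -np.sin(kappa), 0.0],
                     [np.sin(kappa),  np.cos(kappa), 0.0],
                     [0.0, 0.0, 1.0]])
    return (R_bg @ (rot_sensor_to_body(*boresight) @ p_s + np.asarray(lever_arm))
            + np.array([x, y, 0.0]))

# Example: a return from the left-vertical scanner, using that sensor's
# initial bore-sight approximation (0, 90, 0) degrees from Table 4;
# the lever-arm values below are placeholders only.
p_g = sensor_to_global(polar_to_sensor(2.0, np.radians(45.0)),
                       lever_arm=(0.1, 0.05, 0.5),
                       boresight=np.radians((0.0, 90.0, 0.0)),
                       pose2d=(0.0, 0.0, 0.0))
```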
For multi-sensor systems in SLAM, one of the primary sources of mapping error is sensor misalignment. Therefore, sensors must be carefully calibrated with respect to the body frame to enable the mobile system to accurately localize its position and map surrounding environments [20]. In Figure 2, the alignment errors in the body frame can be decomposed into lever-arm (xb, yb, zb) and bore-sight (ωb, φb, κb) errors. Compared with the lever-arm errors, whose impact is constant regardless of the range, the bore-sight errors are more substantial because they accumulate with increasing range [23]. For example, a bore-sight error of only 1° can result in measurement errors of over 0.25 m at a distance of 15 m [24]. In most system installations, the lever-arms can be determined separately by physical means such as a surveying instrument or a design drawing [25,26,27], but the bore-sight can only be calculated indirectly (estimated by least squares calibration) [28]. Moreover, high correlation between the lever-arm and bore-sight parameters can lead to failure of the least squares calibration [27]; accordingly, the adjustment model in the present research assumes that the lever-arm parameters are known from the design drawing of the sensor stand (Figure 3), and therefore focuses only on the bore-sight calibration.
Figure 3. (a) Lever-arm parameters and (b) design drawing of the sensor stand.

2.3. Calibration Facility

For carrying out the bore-sight calibration, a 3D calibration facility was established as shown in Figure 4. In practice, the LRF's large beam uncertainty at long range makes it difficult to calibrate the LRF sensors in a large test site. Moreover, due to the LRF's finite angular resolution, the calibration targets should be placed within a small enough radius to produce a sufficient number of samples on each target surface for reliable estimation [20]. Multi-LRF calibration in a small area was also demonstrated by Choi et al. [19], whose calibration area was approximately 2 m × 2 m. In the present study, the calibration facility is composed of two 1 m-radius aluminum frames, each hanging 16 targets (32 targets in total). In addition, the half-circle shape was specifically designed, as shown in Figure 5, to avoid partial occlusions of the targets and to allow for a good network geometry of the target array. The latter is especially important, because weak network geometry of targets leads to correlation between calibration parameters in the adjustment [12,27]. The horizontal frame is supported by three tripods, and the vertical frame is wedged up using two screws on each side.
Figure 4. Establishment of the double-deck calibration facility.
Figure 5. (a) Some targets are partially occluded (red dots) in the line-of-sight of the LRF sensor; and (b) no occlusions are detected using the half-circle frame.

2.4. System Equations

Bore-sight errors usually are computed by least squares using a set of target points captured in the scanning areas [23]. In order to represent the full observation models of nonlinear transformations between the sensor and the global frame, Equation (1) is expanded as:
$$\begin{aligned}
F_x = {}& x_g + \cos\kappa_g\left(\cos\varphi_b\cos\kappa_b\,x_s - \cos\varphi_b\sin\kappa_b\,y_s + x_b + v_{off}\right) \\
& - \sin\kappa_g\left(\cos\omega_b\sin\kappa_b\,x_s + \sin\omega_b\sin\varphi_b\cos\kappa_b\,x_s + \cos\omega_b\cos\kappa_b\,y_s - \sin\omega_b\sin\varphi_b\sin\kappa_b\,y_s + y_b\right) \\
F_y = {}& y_g + \sin\kappa_g\left(\cos\varphi_b\cos\kappa_b\,x_s - \cos\varphi_b\sin\kappa_b\,y_s + x_b + v_{off}\right) \\
& + \cos\kappa_g\left(\cos\omega_b\sin\kappa_b\,x_s + \sin\omega_b\sin\varphi_b\cos\kappa_b\,x_s + \cos\omega_b\cos\kappa_b\,y_s - \sin\omega_b\sin\varphi_b\sin\kappa_b\,y_s + y_b\right) \\
F_z = {}& z_g + \sin\omega_b\sin\kappa_b\,x_s - \cos\omega_b\sin\varphi_b\cos\kappa_b\,x_s + \sin\omega_b\cos\kappa_b\,y_s + \cos\omega_b\sin\varphi_b\sin\kappa_b\,y_s + z_b
\end{aligned} \qquad (3)$$
where $[x_s\ \ y_s]^T$ indicates a scanned point measured by an LRF in the sensor frame. The coordinate transformation from the sensor to the body frame is defined by the lever-arm $[x_b\ \ y_b\ \ z_b]^T$ and bore-sight $[\omega_b\ \ \varphi_b\ \ \kappa_b]^T$ parameters. In order to determine the system's position $[x_g\ \ y_g\ \ z_g]^T$ and orientation $\kappa_g$ in the global frame, a prism target (set up on the origin of the body frame) and two reflective sheet targets (attached behind the vertical scanners' stand) were measured by a total station. As shown in Figure 6, the system's position was determined by subtracting the offset (0.080 m) from the z value of the prism's coordinates, and the system's orientation was derived from the azimuth of the baseline connecting the two sheet targets' centers.
Figure 6. Prism and sheet targets for total station measurement.
Additionally, two offset parameters, hoff for horizontal scans and voff for vertical scans, should be determined prior to the adjustment. Calibration presumes the availability of targets with known coordinates determined with high precision [12]. hoff and voff indicate the offsets between the transformed 2D scan lines and the target point in the 3D global frame. Figure 7 shows an example of horizontal scanning, in which hoff represents the offset between the scan line and the center line: the former consists of the point returns from the target surface, and the latter passes through the target point, dividing the target into halves. Likewise, voff is determined for the two vertical scanners. Note that voff is assumed to take the same value for both vertical scanners once the system is properly leveled and aligned with the calibration facility; the calibration facility and the alignment process are explained in more detail in Section 2.3 and Section 3.1, respectively.
Figure 7. Conceptual figure of the horizontal target offset.
After the lever-arm ($x_b$, $y_b$, $z_b$) and the two offset ($h_{off}$, $v_{off}$) parameters are determined, only the bore-sight parameters ($\omega_b$, $\varphi_b$, $\kappa_b$) remain to be estimated. The final adjustment model, which is designed to determine the nine bore-sight parameters of the three LRF sensors, is formed as:
$$\begin{bmatrix}
\frac{\partial F_{x_h}}{\partial \omega_h} & \frac{\partial F_{x_h}}{\partial \varphi_h} & \frac{\partial F_{x_h}}{\partial \kappa_h} & 0 & 0 & 0 & 0 & 0 & 0 \\
\frac{\partial F_{y_h}}{\partial \omega_h} & \frac{\partial F_{y_h}}{\partial \varphi_h} & \frac{\partial F_{y_h}}{\partial \kappa_h} & 0 & 0 & 0 & 0 & 0 & 0 \\
\frac{\partial F_{z_h}}{\partial \omega_h} & \frac{\partial F_{z_h}}{\partial \varphi_h} & \frac{\partial F_{z_h}}{\partial \kappa_h} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{\partial F_{x_l}}{\partial \omega_l} & \frac{\partial F_{x_l}}{\partial \varphi_l} & \frac{\partial F_{x_l}}{\partial \kappa_l} & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{\partial F_{y_l}}{\partial \omega_l} & \frac{\partial F_{y_l}}{\partial \varphi_l} & \frac{\partial F_{y_l}}{\partial \kappa_l} & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{\partial F_{z_l}}{\partial \omega_l} & \frac{\partial F_{z_l}}{\partial \varphi_l} & \frac{\partial F_{z_l}}{\partial \kappa_l} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \frac{\partial F_{x_r}}{\partial \omega_r} & \frac{\partial F_{x_r}}{\partial \varphi_r} & \frac{\partial F_{x_r}}{\partial \kappa_r} \\
0 & 0 & 0 & 0 & 0 & 0 & \frac{\partial F_{y_r}}{\partial \omega_r} & \frac{\partial F_{y_r}}{\partial \varphi_r} & \frac{\partial F_{y_r}}{\partial \kappa_r} \\
0 & 0 & 0 & 0 & 0 & 0 & \frac{\partial F_{z_r}}{\partial \omega_r} & \frac{\partial F_{z_r}}{\partial \varphi_r} & \frac{\partial F_{z_r}}{\partial \kappa_r}
\end{bmatrix}
\begin{bmatrix} \hat{\delta}_{\omega_h} \\ \hat{\delta}_{\varphi_h} \\ \hat{\delta}_{\kappa_h} \\ \hat{\delta}_{\omega_l} \\ \hat{\delta}_{\varphi_l} \\ \hat{\delta}_{\kappa_l} \\ \hat{\delta}_{\omega_r} \\ \hat{\delta}_{\varphi_r} \\ \hat{\delta}_{\kappa_r} \end{bmatrix}
+ \begin{bmatrix} u_{x_h} \\ u_{y_h} \\ u_{z_h} \\ u_{x_l} \\ u_{y_l} \\ u_{z_l} \\ u_{x_r} \\ u_{y_r} \\ u_{z_r} \end{bmatrix}
= \begin{bmatrix} \hat{\upsilon}_{x_h} \\ \hat{\upsilon}_{y_h} \\ \hat{\upsilon}_{z_h} \\ \hat{\upsilon}_{x_l} \\ \hat{\upsilon}_{y_l} \\ \hat{\upsilon}_{z_l} \\ \hat{\upsilon}_{x_r} \\ \hat{\upsilon}_{y_r} \\ \hat{\upsilon}_{z_r} \end{bmatrix} \qquad (4)$$
where the Jacobian matrix consists of the partial derivatives of each observation group taken with respect to the $i$th sensor's bore-sight parameter set $[\omega_b\ \ \varphi_b\ \ \kappa_b]_i^T$, $\hat{\delta}$ represents the vector of corrections to the approximate values of the bore-sight parameter set, $u$ represents the mis-closure vector (the calculated minus the observed values), and $\hat{\upsilon}$ represents the estimated residuals [29].
The correct use of the adjustment requires that some steps be taken to preclude the estimated parameters from being highly correlated and thus indeterminable [12]. For this reason, the constrained least squares method is employed to fix some correlated parameters in the adjustment. To formulate the matrix expression, the normal matrix and its matching constraints matrix are formed. In this procedure, the constraint equations border the normal equations derived from Equation (4) as:
$$\begin{bmatrix} J^T W J & Z^T \\ Z & 0 \end{bmatrix} \begin{bmatrix} \hat{\xi} \\ \hat{\lambda} \end{bmatrix} = \begin{bmatrix} J^T W \tau \\ \tau_c \end{bmatrix} \qquad (5)$$
where $J$ is the Jacobian matrix that contains all of the coefficients of the linearized observations in Equation (4), $\tau$ is the vector of observed minus computed values, and $W$ is the weight matrix. The observation equations for the constraints are included in the normal matrix as additional rows $Z$ and columns $Z^T$, and their constants are added to the constants vector as additional rows $\tau_c$. $\hat{\xi}$ is the correction for the calibration parameters, and $\hat{\lambda}$ is a vector of Lagrange multipliers [30]. The process is repeated until the corrections become sufficiently small.
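As a sketch of how Equation (5) can be solved in practice (our own Python/NumPy illustration, not the authors' code; `linearize` is a hypothetical re-linearization routine that the paper does not define):

```python
import numpy as np

def constrained_ls_step(J, W, tau, Z, tau_c):
    """Solve the bordered normal equations of Equation (5):
    [[J^T W J, Z^T], [Z, 0]] [xi; lam] = [J^T W tau; tau_c]."""
    n = J.shape[1]                 # number of calibration parameters
    c = Z.shape[0]                 # number of constraint equations
    A = np.block([[J.T @ W @ J, Z.T],
                  [Z, np.zeros((c, c))]])
    b = np.concatenate([J.T @ W @ tau, tau_c])
    sol = np.linalg.solve(A, b)
    return sol[:n]                 # corrections; sol[n:] are the Lagrange multipliers

# Fixing a parameter, e.g. enforcing no correction to omega_h (Section 4),
# amounts to a constraint row with a single 1 in that parameter's column:
# Z = np.eye(9)[[0]]; tau_c = np.zeros(1)
# The adjustment then iterates until the corrections are negligible:
# while np.max(np.abs(xi)) > 1e-5:
#     J, tau = linearize(params)   # hypothetical re-linearization step
#     xi = constrained_ls_step(J, W, tau, Z, tau_c)
#     params += xi
```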

3. Calibration Process

3.1. Calibration Facility Setup

After the assembly of the calibration facility, the aluminum frames may still be tilted and misaligned with the scanning system. For accurate calibration, therefore, their horizontal and vertical alignments with the scanning system should be ensured. In the experiment, the alignment was conducted using the temporary coordinate system shown in Figure 8 and the total station measurement targets shown in Figure 9.
Figure 8. Temporary coordinate system for alignment.
Initially, the baseline connecting the two control points a and b of the horizontal frame was chosen as the x-axis of the temporary coordinate system (hence their y and z values are 0 m), and the origin was centered on the baseline. The y-axis was orthogonal to the x-axis, pointing forward, and the z-axis was perpendicular to the x-y plane, completing the right-handed system. Points ①, ②, ③ on the vertical aluminum frame (Figure 9a) were used to vertically adjust its orientation to correspond with the x-z plane. Points ④ and ⑤, the two reflective sheet targets attached behind the vertical scanners' stand (Figure 9b), were used to align the scanning system with respect to the vertical aluminum frame. Point ⑥, the prism target on the platform base (Figure 9b), was used to determine the origin of the body frame with respect to the temporary coordinate system. The scanner positions ($S_h$, $S_l$, $S_r$) with respect to the temporary coordinate system were used to determine the two offset values $h_{off}$ and $v_{off}$.
Figure 9. Total station measurement targets and LRF sensors (a) on the aluminum frames and (b) on the platform base.
After the temporary coordinate system was defined, the scanning system and the horizontal aluminum frame were levelled using a bull's eye level, and the vertical aluminum frame was vertically aligned by checking that the y coordinates of points ①, ②, ③ were the same, which makes the scan lines orthogonal to the targets' surface. Then, the scanning system was aligned with respect to the vertical aluminum frame by checking that the y coordinates of points ④ and ⑤ were the same, which makes the vertical offset $v_{off}$ constant for every vertical target. All of the points were measured by a total station, and the alignment results are listed in Table 1. Since the temporary coordinate system was defined by the two control points a and b, their coordinates were fixed. The y values of points ①, ②, ③ were all −0.001 m, indicating that the vertical frame's mis-alignment with the x-z plane was less than 0.001 m. Likewise, the identical y values of points ④ and ⑤ (−0.015 m) indicated that the scanning system's mis-alignment with the vertical aluminum frame was also less than 0.001 m.
After the alignment, the two offset values $h_{off}$ and $v_{off}$ were determined. The origin of the body frame was calculated by subtracting the prism offset value (0.080 m) from the total station measurement of the prism target center (⑥). Then, the locations of each scanner were derived, as indicated in Table 1, from the design drawing of the sensor stand (Figure 3). In the temporary coordinate system, because the horizontal offset $h_{off}$ lay along the z-axis, it was equal to the difference between the z value of $S_h$ (0.045 m) and that of the horizontal targets (0 m), whereas the vertical offset $v_{off}$, which lay along the y-axis, was derived by subtracting half the thickness of the vertical aluminum frame (0.010 m) from the y values of $S_l$ and $S_r$ (both 0.050 m). Accordingly, $h_{off}$ and $v_{off}$ were determined to be 0.045 m and 0.040 m, respectively. Since the present study assumes that the lever-arm parameters are known, nine bore-sight parameters (three for each LRF sensor) finally remain for the adjustment process.
Table 1. Alignment results with respect to the temporary coordinate system (unit: m).

| Point | x | y | z |
|---|---|---|---|
| a | 1.030 (fixed) | 0.000 (fixed) | 0.000 (fixed) |
| b | −1.030 (fixed) | 0.000 (fixed) | 0.000 (fixed) |
| ① | - | −0.001 | - |
| ② | - | −0.001 | - |
| ③ | - | −0.001 | - |
| ④ | 0.139 | −0.015 | −0.009 |
| ⑤ | −1.141 | −0.015 | −0.008 |
| ⑥ | −0.001 | −0.149 | 0.030 |
| $S_h$ | −0.001 | −0.018 | 0.045 |
| $S_l$ | 0.139 | 0.050 | −0.009 |
| $S_r$ | −0.141 | 0.050 | −0.008 |

| Offset | Value |
|---|---|
| $h_{off}$ | 0.045 |
| $v_{off}$ | 0.040 |
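In compact form, the two offsets follow directly from the surveyed values in Table 1 (here $t$ denotes the thickness of the vertical aluminum frame, so $t/2 = 0.010$ m):

$$h_{off} = z_{S_h} - z_{\text{horizontal targets}} = 0.045\,\text{m} - 0.000\,\text{m} = 0.045\,\text{m}$$
$$v_{off} = y_{S_l} - \tfrac{t}{2} = 0.050\,\text{m} - 0.010\,\text{m} = 0.040\,\text{m}$$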
Having redundancy in the adjustment model is crucial for quality assurance purposes when calibrating the scanner. However, since the scanned points are irregularly positioned on the target, it is rather difficult to achieve point-to-point correspondences between a scan point and the target center [31]. Alternatively, in order to automatically identify the target center from a scan line, a new aluminum target was specially designed, as shown in Figure 10.
Figure 10. Target design for automated detection of target center.
The target is composed of two panels adjoined at an angle of 30°. The size of each panel was chosen to ensure that at least seven scanned points are available within a distance of one meter. The joint between the two panels was hollowed so that no laser beam is returned from this area: this allows a separate scan line to be acquired on each panel, as well as the estimation of the coordinates of their intersecting point, as sketched below. By applying the known offset, the intersecting point can be matched to the target center. During scanning, all targets are kept stationary in the scanner's field of view for a few minutes while a series of several hundred scans is obtained.
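As an illustration of this intersection step (a minimal sketch in Python/NumPy; the line-fitting choice and all names are ours, not the authors' implementation):

```python
import numpy as np

def fit_line(points):
    """Total least squares line fit a*x + b*y = c: the unit normal (a, b)
    is the direction of least variance of the centered returns."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                   # singular vector of the smallest singular value
    return normal, normal @ centroid  # (a, b), c

def target_center_estimate(left_panel, right_panel):
    """Intersect the two fitted panel lines (each an Nx2 array of returns).
    The 30 degree panel angle guarantees the lines are not parallel, and the
    hollowed joint means each panel yields a clean, separate scan line."""
    n1, c1 = fit_line(left_panel)
    n2, c2 = fit_line(right_panel)
    return np.linalg.solve(np.vstack([n1, n2]), np.array([c1, c2]))
```

The intersecting point is then shifted by the known offset to recover the target center, as described above.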

3.2. Determination of Target Coordinates

A prerequisite for calibration is the determination of the coordinates of the target centers using a high-accuracy independent measurement technique [12]. In the present research, both the horizontal and vertical target arrays were first surveyed with a TOPCON GPT-9000A total station (angular accuracy of 0.3 to 1.5 mgon depending on the minimum reading; 1 mgon equals 3.24″), and the target centers were then calculated using least squares adjustment. Though the total station offers a non-prism mode, its distance accuracy can be influenced by surface reflection and steep incidence angles to the target; therefore, the target centers' coordinates were instead obtained by manual reading of azimuth measurements. In Figure 11, points a and b indicate the locations of the two total station setups in the global frame. In the experiment, the position of a defined the origin of the global frame, and the position of b was measured by the total station at a.
Figure 11. 3D triangulation for target detection using a total station.
In the adjustment, the two horizontal angle ($h_a$ and $h_b$) and two vertical angle ($v_a$ and $v_b$) measurements from stations a and b, along with the additional distance measurement ($d$) of the baseline (obtained by measuring tape to provide redundancy for the adjustment), were adjusted simultaneously using the least squares method. The observation equations for the distance ($F_d$) and the horizontal ($F_h$) and vertical ($F_v$) azimuth measurements are given by:
$$F_d = \sqrt{(x_t - x_i)^2 + (y_t - y_i)^2} \qquad (6)$$
$$F_h = \tan^{-1}\left(\frac{x_t - x_i}{y_t - y_i}\right) \qquad (7)$$
$$F_v = \tan^{-1}\left(\frac{z_t - z_i}{\sqrt{(x_t - x_i)^2 + (y_t - y_i)^2}}\right) \qquad (8)$$
where $x_i$, $y_i$, $z_i$ are the fixed coordinates of the $i$th station (a or b), and $x_t$, $y_t$, $z_t$ are the coordinates of the target center to be adjusted. The Jacobian matrix ($J$), which contains the partial derivatives of the observation equations with respect to the target center coordinates, is formed as:
$$\begin{bmatrix}
\frac{\partial F_d}{\partial x_t} & \frac{\partial F_d}{\partial y_t} & 0 \\
\frac{\partial F_{h_a}}{\partial x_t} & \frac{\partial F_{h_a}}{\partial y_t} & 0 \\
\frac{\partial F_{h_b}}{\partial x_t} & \frac{\partial F_{h_b}}{\partial y_t} & 0 \\
\frac{\partial F_{v_a}}{\partial x_t} & \frac{\partial F_{v_a}}{\partial y_t} & \frac{\partial F_{v_a}}{\partial z_t} \\
\frac{\partial F_{v_b}}{\partial x_t} & \frac{\partial F_{v_b}}{\partial y_t} & \frac{\partial F_{v_b}}{\partial z_t}
\end{bmatrix}
\begin{bmatrix} \hat{\delta}_{x_t} \\ \hat{\delta}_{y_t} \\ \hat{\delta}_{z_t} \end{bmatrix}
+ \begin{bmatrix} u_d \\ u_{h_a} \\ u_{h_b} \\ u_{v_a} \\ u_{v_b} \end{bmatrix}
= \begin{bmatrix} \hat{\upsilon}_d \\ \hat{\upsilon}_{h_a} \\ \hat{\upsilon}_{h_b} \\ \hat{\upsilon}_{v_a} \\ \hat{\upsilon}_{v_b} \end{bmatrix} \qquad (9)$$
where δ ^ indicates the vector of the corrections to the approximate values for the target coordinates, u represents the calculated minus observed values, and υ ^ represents the estimated residuals. In the unweighted least squares adjustment, the solution of the adjustment model and its covariance matrix can be obtained as:
$$\hat{\delta} = (J^T J)^{-1} J^T u \qquad (10)$$
$$S^2 = S_0^2\,(J^T J)^{-1} \qquad (11)$$
where S 0 2 is the reference variance. The standard deviations are derived from the square root of the diagonal elements of the covariance matrix [30].
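To make the triangulation adjustment concrete, here is a minimal Gauss-Newton sketch of Equations (6)-(10) (our own Python/NumPy illustration with a numerical Jacobian; all names are ours). For simplicity it treats a horizontal distance, a horizontal azimuth, and a vertical angle per station, whereas in the paper only the tape-measured baseline distance supplements the four angle observations; it also uses the sign convention u = observed − computed, so corrections are added:

```python
import numpy as np

def obs_model(t, s):
    """Equations (6)-(8) for target t = (xt, yt, zt) seen from station
    s = (xi, yi, zi): horizontal distance, horizontal azimuth (measured
    from the +y axis), and vertical angle."""
    dx, dy, dz = t - s
    hd = np.hypot(dx, dy)
    return np.array([hd, np.arctan2(dx, dy), np.arctan2(dz, hd)])

def numeric_jacobian(f, t, eps=1e-7):
    """Forward-difference Jacobian of f with respect to the target coordinates."""
    f0 = f(t)
    cols = []
    for k in range(3):
        dt = np.zeros(3)
        dt[k] = eps
        cols.append((f(t + dt) - f0) / eps)
    return np.column_stack(cols)

def adjust_target(t0, stations, observed, n_iter=10):
    """Iterative unweighted least squares, Equation (10):
    delta = (J^T J)^{-1} J^T u, with u = observed - computed."""
    t = np.asarray(t0, dtype=float)
    f = lambda p: np.concatenate([obs_model(p, s) for s in stations])
    for _ in range(n_iter):
        u = observed - f(t)
        J = numeric_jacobian(f, t)
        t = t + np.linalg.solve(J.T @ J, J.T @ u)
    return t

# Usage: stations a and b as arrays of (x, y, z); 'observed' stacks the
# three observations per station in the same order as obs_model returns.
# t_hat = adjust_target([1.0, 1.0, 1.0],
#                       [np.array([0., 0., 0.]), np.array([2.06, 0., 0.])],
#                       observed)
```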
Figure 12 shows the total station surveying. Point a indicates the first total station setup as well as the origin of the global frame. Point b was set up to provide redundant measurements for the network adjustment: its 2D position and orientation with respect to the origin a were determined using the prism target. The dashed line ab indicates the baseline. In the experiment, some targets could not be measured by the total station due to steep incidence angles; thus, the number of targets (16 on each frame) was chosen in consideration of the viewing angles from each total station setup. Subsequently, the position of the scanning system in the global frame was measured using a prism target on the platform base, as shown in Figure 9b.
The adjusted positions and their estimated standard deviations are listed in Table 2 for the horizontal targets and in Table 3 for the vertical targets.
Figure 12. Surveying of calibration targets by a total station.
Figure 12. Surveying of calibration targets by a total station.
Sensors 15 10292 g012
Table 2. Positions and standard deviations of the horizontal targets.

| No. | x (m) | y (m) | z (m) | σx (mm) | σy (mm) | σz (mm) |
|---|---|---|---|---|---|---|
| 1 | 0.348 | 1.432 | 1.188 | 0.102 | 0.367 | 0.116 |
| 2 | 0.472 | 1.541 | 1.189 | 0.069 | 0.236 | 0.076 |
| 3 | 0.630 | 1.634 | 1.187 | 0.070 | 0.218 | 0.074 |
| 4 | 0.781 | 1.688 | 1.188 | 0.099 | 0.279 | 0.097 |
| 5 | 0.939 | 1.715 | 1.189 | 0.067 | 0.167 | 0.060 |
| 6 | 1.125 | 1.715 | 1.190 | 0.091 | 0.202 | 0.074 |
| 7 | 1.283 | 1.688 | 1.194 | 0.193 | 0.379 | 0.141 |
| 8 | 1.434 | 1.635 | 1.197 | 0.161 | 0.284 | 0.108 |
| 9 | 1.596 | 1.543 | 1.197 | 0.068 | 0.106 | 0.042 |
| 10 | 1.722 | 1.445 | 1.195 | 0.154 | 0.215 | 0.088 |
| 11 | 1.829 | 1.326 | 1.195 | 0.180 | 0.227 | 0.096 |
| 12 | 1.930 | 1.168 | 1.194 | 0.133 | 0.149 | 0.066 |
| 13 | 1.992 | 1.020 | 1.193 | 0.115 | 0.118 | 0.054 |
| 14 | 2.030 | 0.866 | 1.193 | 0.246 | 0.230 | 0.108 |
| 15 | 2.037 | 0.680 | 1.192 | 0.226 | 0.192 | 0.094 |
| 16 | 2.015 | 0.516 | 1.191 | 0.220 | 0.172 | 0.087 |
Table 3. Positions and standard deviations of the vertical targets.

| No. | x (m) | y (m) | z (m) | σx (mm) | σy (mm) | σz (mm) |
|---|---|---|---|---|---|---|
| 1 | 0.234 | 1.226 | 1.444 | 0.041 | 0.150 | 0.051 |
| 2 | 0.284 | 1.201 | 1.602 | 0.096 | 0.322 | 0.166 |
| 3 | 0.367 | 1.155 | 1.759 | 0.129 | 0.377 | 0.269 |
| 4 | 0.461 | 1.104 | 1.877 | 0.133 | 0.333 | 0.290 |
| 5 | 0.571 | 1.042 | 1.976 | 0.045 | 0.095 | 0.096 |
| 6 | 0.716 | 0.965 | 2.066 | 0.032 | 0.058 | 0.066 |
| 7 | 0.849 | 0.891 | 2.117 | 0.088 | 0.142 | 0.176 |
| 8 | 0.988 | 0.816 | 2.143 | 0.162 | 0.241 | 0.313 |
| 9 | 1.152 | 0.726 | 2.143 | 0.252 | 0.354 | 0.469 |
| 10 | 1.291 | 0.650 | 2.116 | 0.329 | 0.443 | 0.580 |
| 11 | 1.423 | 0.577 | 2.065 | 0.449 | 0.569 | 0.726 |
| 12 | 1.568 | 0.498 | 1.976 | 0.564 | 0.645 | 0.780 |
| 13 | 1.679 | 0.437 | 1.877 | 0.575 | 0.590 | 0.663 |
| 14 | 1.773 | 0.386 | 1.759 | 0.489 | 0.448 | 0.446 |
| 15 | 1.857 | 0.340 | 1.603 | 0.643 | 0.524 | 0.411 |
| 16 | 1.906 | 0.311 | 1.446 | 0.165 | 0.124 | 0.069 |
In the tables, the target number 1 indicates the first target from the left on each aluminum frame in Figure 12. Overall, the standard deviations were calculated to be less than 0.4 mm for the horizontal targets and 0.8 mm for the vertical targets.

4. Experimental Results

The calibration facility was designed as a self-assembled structure; thus, it can be set up in any open place. Figure 13 shows the point-cloud data acquired (100 scans) by the developed scanning system. The diamond marks represent the two total station setups, where the blue mark indicates the reference station defining the origin of the global frame. The origin of the platform body frame is represented by the large black point, and the large green, blue, and red points indicate the locations of the left-vertical, horizontal, and right-vertical LRF sensors, respectively. The point-cloud groups are also displayed, with individual colors corresponding to each sensor. Whereas the horizontal sensor scanned all 16 horizontal targets, the two vertical sensors scanned only 12 targets each (red and green), due to their limited scanning range.
Figure 14 shows the point clouds of the horizontal targets on the x-y plane (100 scans). Each target center is identified by dotted lines connected to the scanner origin. In order to take only the point returns from the target surfaces, the scanned points beyond the range of θ ± 2.5° (where θ is the direction angle from the scanner origin to each target center, as defined by the dashed line) were filtered out. Each target center was then calculated from the two intersecting scan lines on a target surface. Owing to the high standard deviations of the Hokuyo sensors (10 mm at 0.1 to 10 m range), some intersecting points were calculated far from their original target centers, particularly for the scans on the right side (possibly due to sensor imperfections), which later led to significant errors in the adjustment. This was prevented by discarding the intersecting points beyond the range of θ ± 0.25° (the hollowed part of the target in Figure 10). In Figure 14, the remaining intersecting points are indicated by the '+' symbol.
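The two angular gates can be expressed compactly; the sketch below (our own Python/NumPy illustration; the names and the explicit angle wrapping are ours) keeps only returns within θ ± 2.5° of a target direction and rejects intersection estimates outside θ ± 0.25°:

```python
import numpy as np

def wrap_deg(angle_rad):
    """Wrap an angle difference to (-180, 180] degrees."""
    return np.degrees(np.angle(np.exp(1j * angle_rad)))

def gate_returns(points, origin, theta, half_width_deg=2.5):
    """Keep only returns whose direction angle from the scanner origin
    lies within theta +/- half_width_deg (the theta +/- 2.5 deg gate)."""
    ang = np.arctan2(points[:, 1] - origin[1], points[:, 0] - origin[0])
    return points[np.abs(wrap_deg(ang - theta)) <= half_width_deg]

def accept_intersection(p_int, origin, theta, tol_deg=0.25):
    """Reject intersection estimates outside theta +/- 0.25 deg,
    i.e., outside the hollowed joint region of the target."""
    ang = np.arctan2(p_int[1] - origin[1], p_int[0] - origin[0])
    return abs(wrap_deg(ang - theta)) <= tol_deg
```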
Figure 13. Point-cloud data acquisitions by the developed scanning system.
As noted earlier, because the lever-arm parameters were assumed known from the design drawing of the sensor stand, the bore-sight angles were the main concern of the multiple-scanner calibration. Once the design matrix was formed from the measurements by the total station and the scanning system, the calibrated parameters were computed using Equations (3)–(5). The residuals were then computed by substituting the estimated values for the observed values. In practice, a large number of target observations (from 1000 scan lines) are collected to provide high redundancy in the adjustment and to further improve the parameter estimates. To initialize the iterative solution, the initial values for the bore-sight parameters were derived from the design drawing (Figure 3) and are listed in Table 4. The convergence criterion for all of the parameters was set to $10^{-5}$. In the present study, two calibrations, without and with the constraints, were conducted and compared by correlation analysis, because the correlations among estimated parameters are considered a good indicator of adjustment quality [27].
Figure 14. Estimation of the horizontal target centers by laser scanning.
Figure 15a shows the correlation matrix of the calibration parameters without the constraints, where each color represents a different range of correlation coefficient magnitude: bright colors indicate high correlations, and dark colors low correlations. The horizontal scanner's parameters, particularly ω h and φ h, were highly correlated (about 0.723), which can cause failure of the adjustment. This high correlation can be explained by the weak network geometry of the horizontal targets: indeed, as shown in Table 2, the variation of their z values was very low (less than 10 mm). However, this configuration was inevitable, because the z values of the horizontal targets were leveled precisely for detection by the horizontal scanner. The two vertical scanners also showed correlations, albeit small, between ω l and κ l (0.278) and between ω r and κ r (0.219).
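For reference, the correlation coefficients visualized in Figure 15 can be derived from the cofactor matrix of the adjustment; a minimal sketch (our own, assuming the Jacobian J and weight matrix W of Equation (5)):

```python
import numpy as np

def parameter_correlations(J, W):
    """Correlation matrix r_ij = q_ij / sqrt(q_ii * q_jj) of the
    estimated parameters, computed from the cofactor matrix
    Q = (J^T W J)^{-1} of the least squares adjustment."""
    Q = np.linalg.inv(J.T @ W @ J)
    s = np.sqrt(np.diag(Q))
    return Q / np.outer(s, s)
```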
In order to achieve de-correlation between parameters, the constrained least squares approach was employed. Assuming the horizontal scanner to be set at a fixed flat level on the sensor stand, the calibration of parameter ω h was excluded from the adjustment by enforcing the condition Δ ω h = 0 . Likewise, the correlated parameters κ l and κ r in each vertical scanner were removed from the adjustment. Consequently, the correlation matrix (Figure 15b) showed sufficient de-correlation among the remaining six parameters.
The initial approximations and the calibrated parameters without and with the constraints are listed in Table 4. In the calibration without constraints, the adjustment diverged and was terminated by force after 10 iterations. Compared with the initial approximations, the most noticeable change was found in ω h, which showed a large offset of −144.902°. This was physically impossible given the horizontal scanner's position on the platform base (Figure 2 and Figure 6); the result was therefore considered an adjustment failure due to the high correlation between the horizontal sensor's parameters. With the constraints, the calibration converged in four iterations, with the largest changes in parameter ω l (−1.018°) for the left and ω r (−1.017°) for the right vertical scanner.
Figure 15. Correlation matrix of the calibrated parameters (a) without and (b) with the constraints.
Table 4. Calibration parameters (unit: °).

| Calibration Parameter | Initial Approximation | Calibration without Constraints | Calibration with Constraints |
|---|---|---|---|
| ω h | 0.000 | −144.902 | 0.000 |
| φ h | 0.000 | −0.100 | −0.148 |
| κ h | −90.000 | −90.625 | −90.274 |
| ω l | 0.000 | −1.018 | −1.018 |
| φ l | 90.000 | 90.004 | 90.004 |
| κ l | 0.000 | −0.001 | 0.000 |
| ω r | 0.000 | −1.194 | −1.017 |
| φ r | 90.000 | 89.969 | 89.965 |
| κ r | −180.000 | −179.180 | −180.000 |
| Convergence | - | No | Yes |
Figure 16 shows a comparison of the mapped data (a cross-section view of the corridor with 30 scans) before and after the constrained calibration. The green points represent the scanned points with the initial approximations, and show that the profile was tilted to the right due to the rotation errors in ω l and ω r . By contrast, the red points, which were acquired with the calibrated parameters resulting from the constrained adjustment, show that the tilt problem was resolved after calibration.
Finally, Figure 17 shows the estimated mean residuals before and after the constrained calibration. The red line indicates the mean residuals estimated with the initial approximations, and the blue line indicates those estimated with the calibrated parameters resulting from the constrained adjustment. The considerable reductions achieved through the calibration procedure are clearly evident in the plotted results, particularly for the two vertical scanners. In contrast, the residual $\hat{\upsilon}_{z_l}$, which is related to the constrained calibration parameter κ l, increased slightly, since the corrections were optimized for the other, non-constrained parameters in the adjustment.
Figure 16. Cross-section view of the point-cloud data acquisition before (green) and after the constrained calibration (red).
Figure 17. Estimated mean residuals before and after the constrained calibration.

5. Conclusions

A kinematic 3D laser scanning system was developed and calibrated using a specially designed double-deck calibration facility. The employed calibration approach stemmed from the bore-sight self-calibration approach used in photogrammetry. The double-deck calibration facility was designed for multi-LRF calibration, and its two-panel target allows for automated detection of the intersection of two scan lines, resulting in high redundancy and a more rigorous calibration. Further, in order to achieve sufficient de-correlation between the parameters, a constrained least squares adjustment was applied. The experimental results demonstrated that the calibration improved the point-cloud data acquired by kinematic scanning, in terms of both visual inspection and the measurement residuals after adjustment.
The proposed calibration facility and methodology are not limited to the present scanning system. For the purpose of 3D mapping, several studies have used the same multi-LRF configuration: one sensor mounted horizontally on the mobile system for navigation, and the others mounted vertically to map the surrounding environment [8,9]. The proposed calibration facility and methodology can be effectively applied to calibrate such configurations. Moreover, the calibration facility was designed so that its vertical frame's slope can be adjusted using two screws on each side, allowing the calibration of LRF sensors inclined at various angles as well. The proposed constrained least squares calibration, under the predefined assumptions, can simply be used to handle estimation problems such as highly correlated parameters.
Nevertheless, the proposed constraint approach is not the ideal way to reduce strong correlations among parameters, because it cannot adjust the constrained values. Moreover, in the present study, calibration of the intrinsic and lever-arm parameters was not considered, because such augmentation can result in additional correlation among the parameters. To achieve sufficient de-correlation, future work should include the design of a new calibration facility ensuring better network geometry in a 3D context, as well as the testing of various target types. Additionally, the influences of target characteristics such as surface brightness, color, and material on the calibration also need to be taken into consideration.

Acknowledgments

This research was supported by a grant (11 High-tech G11) from the Architecture & Urban Development Research Program funded by the Korean Ministry of Land, Infrastructure and Transport. Also, the authors would like to thank Burcin Becerik-Gerber and Vineet R. Kamat for insightful comments that improved the quality of this manuscript.

Author Contributions

Jaehoon Jung developed the system, mathematical model, and wrote the paper under the supervision of Joon Heo. Jeonghyun Kim, Sanghyun Yoon, Sangmin Kim, and Hyoungsig Cho contributed to the calibration facility setup and data collection, and Changjae Kim contributed to the verification of the method.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bosché, F.; Guenet, E. Automating surface flatness control using terrestrial laser scanning and building information models. Autom. Constr. 2014, 44, 212–226. [Google Scholar] [CrossRef]
  2. Jung, J.; Hong, S.; Jeong, S.; Kim, S.; Cho, H.; Hong, S.; Heo, J. Productive modeling for development of as-built BIM of existing indoor structures. Autom. Constr. 2014, 42, 68–77. [Google Scholar] [CrossRef]
  3. Heo, J.; Jeong, S.; Park, H.-K.; Jung, J.; Han, S.; Hong, S.; Sohn, H.-G. Productive high-complexity 3D city modeling with point clouds collected from terrestrial lidar. Comput. Environ. Urban Syst. 2013, 41, 26–38. [Google Scholar] [CrossRef]
  4. Son, H.; Kim, C.; Kim, C. 3D reconstruction of as-built industrial instrumentation models from laser-scan data and a 3D CAD database based on prior knowledge. Autom. Constr. 2015, 49, 193–200. [Google Scholar] [CrossRef]
  5. Hong, S.; Jung, J.; Kim, S.; Cho, H.; Lee, J.; Heo, J. Semi-automated approach to indoor mapping for 3D as-built building information modeling. Comput. Environ. Urban Syst. 2015, 51, 34–46. [Google Scholar] [CrossRef]
  6. Han, S.; Cho, H.; Kim, S.; Jung, J.; Heo, J. Automated and efficient method for extraction of tunnel cross sections using terrestrial laser scanned data. J. Comput. Civil Eng. 2012, 27, 274–281. [Google Scholar] [CrossRef]
  7. Yang, B.; Zang, Y. Automated registration of dense terrestrial laser-scanning point clouds using curves. ISPRS J. Photogramm. Remote Sens. 2014, 95, 109–121. [Google Scholar] [CrossRef]
  8. Yan, R.J.; Wu, J.; Lee, J.Y.; Han, C.S. 3D point cloud map construction based on line segments with two mutually perpendicular laser sensors. In Proceedings of the International Conference on Control, Automation and Systems (ICCAS), Kimdaejung Convention Center, Gwangju, Korea, 20–23 October 2013; pp. 1114–1116.
  9. Wang, Y.K.; Huo, J.; Wang, X.S. A real-time robotic indoor 3D mapping system using dual 2D laser range finders. In Proceedings of the Chinese Control Conference (CCC), Nanjing, China, 28–30 July 2014; pp. 8542–8546.
  10. Underwood, J.P.; Hill, A.; Peynot, T.; Scheding, S.J. Error modeling and calibration of exteroceptive sensors for accurate mapping applications. J. Field Robot. 2010, 27, 2–20. [Google Scholar] [CrossRef]
  11. Abbas, M.A.; Setan, H.; Majid, Z.; Chong, A.K.; Idris, K.M.; Aspuri, A. Calibration and accuracy assessment of Leica ScanStation C10 terrestrial laser scanner. In Developments in Multidimensional Spatial Data Models; Springer: Berlin, Germany, 2013; pp. 33–47. [Google Scholar]
  12. Reshetyuk, Y. Self-Calibration and Direct Georeferencing in Terrestrial Laser Scanning. Infrastructure, Geodesy. Ph.D. Thesis, Royal Institute of Technology (KTH), Stockholm, Sweden, 2009. [Google Scholar]
  13. Zhang, Q.; Pless, R. Extrinsic calibration of a camera and laser range finder (improves camera calibration). In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 28 September–2 October 2004; Volume 3, pp. 2301–2306.
  14. Weingarten, J. Feature-Based 3D SLAM; Swiss Federal Institute of Technology Lausanne: Vaud, Switzerland, 2006. [Google Scholar]
  15. Li, G.; Liu, Y.; Dong, L.; Cai, X.; Zhou, D. An algorithm for extrinsic parameters calibration of a camera and a laser range finder using line features. In Proceedings of the International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 3854–3859.
  16. Scaramuzza, D.; Harati, A.; Siegwart, R. Extrinsic self calibration of a camera and a 3d laser range finder from natural scenes. In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), San Diego, CA, USA, 29 October–2 November 2007; pp. 4164–4169.
  17. Vasconcelos, F.; Barreto, J.P.; Nunes, U. A minimal solution for the extrinsic calibration of a camera and a laser-rangefinder. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2097–2107. [Google Scholar] [CrossRef] [PubMed]
  18. Zhou, L. A new minimal solution for the extrinsic calibration of a 2D lidar and a camera using three plane-line correspondences. IEEE Sens. J. 2014, 14, 442–454. [Google Scholar] [CrossRef]
  19. Choi, D.G.; Bok, Y.; Kim, J.S.; Kweon, I.S. Extrinsic calibration of 2D laser sensors. In Proceedings of the International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 3027–3033.
  20. Antone, M.E.; Friedman, Y. Fully automated laser range calibration. In Proceedings of the British Machine Vision Conference (BMVC), University of Warwick, Coventry, UK, 10–13 September 2007; pp. 1–10.
  21. Hokuyo Automatic Co., Ltd. Scanning Laser Range Finder UTM-30LX/LN Specification; Hokuyo Automatic Co., Ltd.: Osaka, Japan, 2009. [Google Scholar]
  22. Demski, P.; Mikulski, M.; Koteras, R. Characterization of Hokuyo UTM-30LX laser range finder for an autonomous mobile robot. Adv. Technol. Intell. Syst. 2013, 440, 143–153. [Google Scholar]
  23. Chan, T.O.; Lichti, D.D.; Glennie, C.L. Multi-feature based boresight self-calibration of a terrestrial mobile mapping system. ISPRS J. Photogramm. Remote Sens. 2013, 82, 112–124. [Google Scholar] [CrossRef]
  24. Maddern, W.; Harrison, A.; Newman, P. Lost in translation (and rotation): Rapid extrinsic calibration for 2D and 3D lidars. In Proceedings of the International Conference on Robotics and Automation (ICRA), RiverCentre, Saint Paul, MN, USA, 14–18 May 2012; pp. 3096–3102.
  25. Vallet, J.; Skaloud, J. Development and experiences with a fully-digital handheld mapping system operated from a helicopter. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. Istanb. 2004, 35, 791–796. [Google Scholar]
  26. Bender, D.; Schikora, M.; Sturm, J.; Cremers, D. A graph based bundle adjustment for ins-camera calibration. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-1/W2, 39–44. [Google Scholar]
  27. Skaloud, J.; Lichti, D. Rigorous approach to bore-sight self-calibration in airborne laser scanning. ISPRS J. Photogramm. Remote Sens. 2006, 61, 47–59. [Google Scholar] [CrossRef]
  28. Chan, T.O. Feature-Based Boresight Self-Calibration of a Mobile Mapping System. Master's Thesis, University of Calgary, Calgary, AB, Canada, 2011. [Google Scholar]
  29. Lichti, D.D. Error modelling, calibration and analysis of an AM-CW terrestrial laser scanner system. ISPRS J. Photogramm. Remote Sens. 2007, 61, 307–324. [Google Scholar] [CrossRef]
  30. Wolf, P.R.; Ghilani, C.D. Adjustment Computations: Statistics and Least Squares in Surveying and GIS; John Wiley & Sons: New York, NY, USA, 1997. [Google Scholar]
  31. Chow, J.C.; Lichti, D.D.; Glennie, C.; Hartzell, P. Improvements to and comparison of static terrestrial lidar self-calibration methods. Sensors 2013, 13, 7224–7249. [Google Scholar] [CrossRef] [PubMed]
