Line segment extraction for large scale unorganized point clouds

https://doi.org/10.1016/j.isprsjprs.2014.12.027

Abstract

Line segment detection in images is a well-investigated topic, but it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces, and the line segments that occur where pairs of planes intersect carry important information about the geometric content of a point cloud, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that accurately extracts plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely a point set near a straight linear structure, is extracted simultaneously. Each 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for the line segment, making it more reliable and accurate. We demonstrate our method on point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices, and we also demonstrate the application of 3D line-support regions and their LSHP structures to urban scene abstraction.

Introduction

Benefiting from advances in sensor technology for both airborne and ground-based mobile laser scanning, dense point clouds have become increasingly common, and the need for new approaches to process them has become increasingly important. As a common feature of man-made objects, straight linear structures play an important role in a variety of applications, such as road extraction (Yang et al., 2013), building outline extraction (Baillard et al., 1999), localization (Borges et al., 2010), city model building (Lafarge and Mallet, 2012), calibration (Moghadam et al., 2013), line-based visualization (Chen and Wang, 2011), and more. This paper emphasizes straight line segment extraction from point clouds, whereas most of the existing work concentrates on 2D line segment detection in a single image (Ballard, 1981, Burns et al., 1986, Von Gioi et al., 2010) and 3D line segment reconstruction from multi-view images (Baillard et al., 1999, Woo et al., 2009, Jain et al., 2010). Only a few papers consider point clouds (Lu et al., 2008, Moghadam et al., 2013).

A large number of dense point clouds have been acquired by current scanners; the RIEGL VMX-450 scanner, for example, can yield 1.1 million range measurements per second. One of the biggest challenges is therefore to find an efficient way to process such voluminous data. Unorganized point clouds lack normal vector and connectivity information, making the problem even more challenging.

Our method is designed to cope with line segment extraction for large-scale unorganized point clouds from the real world. A line segment here is defined as the intersection of two half-planes. To extract the line segment, we take into account the point region near the straight linear structure. Such a region is designated as a “3D line-support region.” The word “3D” distinguishes this region from the concept of a “line-support region,” which has proved to be a robust descriptor for extracting line segments in images.
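
For illustration, the definition above can be made concrete as a small data structure. The following is a minimal sketch assuming NumPy; the field names and types are ours, not the authors' implementation.

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class HalfPlane:
    # One of the two planar patches whose intersection produces the segment.
    normal: np.ndarray   # unit normal of the supporting plane, shape (3,)
    width: float         # how far the patch extends away from the segment


@dataclass
class LSHP:
    # Line-Segment-Half-Planes structure (illustrative, not the paper's code):
    # a line segment plus the two half-planes attached to it.
    p0: np.ndarray        # segment start point, shape (3,)
    p1: np.ndarray        # segment end point, shape (3,)
    left: HalfPlane
    right: HalfPlane
    support: np.ndarray   # indices of the points in the 3D line-support region

    def direction(self) -> np.ndarray:
        d = self.p1 - self.p0
        return d / np.linalg.norm(d)
```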

The key idea of our method is to first convert a point cloud into a collection of shaded images by non-photorealistic rendering with different viewpoints; then the LSD algorithm (Von Gioi et al., 2010) is applied to these images to extract the 2D line-support regions. These 2D line-support regions are then back-projected into the original point cloud as 3D line-support regions, with each region containing roughly one line segment. Next, to maintain accuracy, each 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure. Finally, the 3D line-support regions and their LSHP structures are refined as the output.
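
For illustration, the pipeline can be summarized in code. The sketch below is schematic only: the helper functions (`render_shaded_view`, `detect_2d_line_support_regions`, `back_project`, `fit_lshp`, `merge_regions`) are assumed placeholders for the stages described above, not the authors' implementation.

```python
import numpy as np

# All helpers below are assumed placeholders for the stages described in the
# text; their names and signatures are illustrative, not the authors' code.

def render_shaded_view(points, cam):
    """Non-photorealistic shaded rendering of the points from one viewpoint;
    returns an image and a pixel -> point-index map."""
    raise NotImplementedError

def detect_2d_line_support_regions(image):
    """2D line-support regions, e.g. from the LSD detector (Von Gioi et al., 2010)."""
    raise NotImplementedError

def back_project(region_2d, pixel_to_point):
    """Collect the 3D points whose pixels fall inside a 2D line-support region."""
    raise NotImplementedError

def fit_lshp(region_3d, points):
    """Fit a Line-Segment-Half-Planes structure to one 3D line-support region."""
    raise NotImplementedError

def merge_regions(candidates):
    """Refine: merge structures from different views that share a line segment."""
    raise NotImplementedError

def extract_line_segments(points: np.ndarray, viewpoints) -> list:
    """Schematic outline of the overall pipeline."""
    regions_3d = []
    for cam in viewpoints:
        image, pixel_to_point = render_shaded_view(points, cam)          # step 1
        for region_2d in detect_2d_line_support_regions(image):          # step 2
            regions_3d.append(back_project(region_2d, pixel_to_point))   # step 3
    candidates = [fit_lshp(r, points) for r in regions_3d]               # step 4
    return merge_regions(candidates)                                     # step 5
```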

Fig. 1 presents a result of our method. Given an unorganized 3D raw scan point cloud as the input (Fig. 1(a)), our method extracts the 3D line-support regions and LSHP structures as the output, where the line segments are drawn in black and the attached half-planes are represented by colored 3D rectangles (Fig. 1(b)). As a result, the LSHP structure provides an abstraction of the point cloud, and the vegetation in the input is filtered out.

Section snippets

2D line segment detection for a single image

Image line segment detection has been studied over several decades. The traditional methods combine the Canny edge detector (Canny, 1986) and the Hough transform (Ballard, 1981); they are generally slow and produce a significant number of false detections. Recently, an efficient line segment detector with false detection control (designated LSD) was presented by Von Gioi et al. (2010). LSD follows the method proposed by Burns et al. (1986): first, the image is partitioned into a collection of line-support regions by grouping connected pixels that share similar gradient orientations.
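
The grouping step can be illustrated by a simplified, Burns-style region-growing sketch. This is not the LSD implementation (which additionally orders seeds by gradient magnitude, approximates each region by a rectangle, and validates it with an a-contrario NFA test); the thresholds `angle_tol`, `grad_thresh`, and `min_size` are illustrative.

```python
import numpy as np

def line_support_regions(gray, angle_tol=np.deg2rad(22.5),
                         grad_thresh=5.0, min_size=20):
    """Group 8-connected pixels with similar gradient orientation into regions
    (simplified sketch of the Burns-style grouping that LSD builds on)."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    angle = np.arctan2(gx, -gy)          # level-line angle, as in LSD

    h, w = gray.shape
    visited = mag < grad_thresh          # weak-gradient pixels are excluded
    regions = []
    for sy in range(h):
        for sx in range(w):
            if visited[sy, sx]:
                continue
            ref = angle[sy, sx]
            region, stack = [], [(sy, sx)]
            visited[sy, sx] = True
            while stack:
                y, x = stack.pop()
                region.append((y, x))
                for ny in range(max(0, y - 1), min(h, y + 2)):
                    for nx in range(max(0, x - 1), min(w, x + 2)):
                        if visited[ny, nx]:
                            continue
                        # Compare orientations with wrap-around in [-pi, pi].
                        diff = np.angle(np.exp(1j * (angle[ny, nx] - ref)))
                        if abs(diff) < angle_tol:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
            if len(region) >= min_size:
                regions.append(region)
    return regions
```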

Overview

This section first introduces the concepts of 3D line-support regions and Line-Segment-Half-Planes (LSHP) structures, and then provides an overview of our approach.

3D line-support region extraction

2D line-support region extraction in images has already been well investigated. The traditional methods group the edge pixels with similar gradient directions into line segments. However, although there are several methods that can extract sharp feature points in 3D space, there is not, to the best of our knowledge, a method to group these feature points into line segments, especially for scenes containing vegetation and other non-manifold objects.

To cope with this problem, we take the 2D line-support regions detected in the rendered images and back-project them into the point cloud to obtain the 3D line-support regions.
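
The back-projection relies on bookkeeping done at rendering time. The sketch below assumes a simple pinhole projection with a per-pixel nearest-point index map (a plain z-buffer); the actual rendering in the paper is a non-photorealistic shading of the points, so this is only an illustration of the pixel-to-point mapping, not the paper's renderer.

```python
import numpy as np

def render_index_map(points, K, R, t, image_shape):
    """Project points with a pinhole model (K, R, t) and keep, per pixel,
    the index of the nearest point (a simple z-buffer)."""
    h, w = image_shape
    cam = (R @ points.T + t.reshape(3, 1)).T          # points in camera frame
    z = cam[:, 2]
    uvw = (K @ cam.T).T
    denom = np.where(np.abs(uvw[:, 2]) > 1e-9, uvw[:, 2], 1e-9)
    u = np.round(uvw[:, 0] / denom).astype(int)
    v = np.round(uvw[:, 1] / denom).astype(int)
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    index_map = -np.ones((h, w), dtype=int)
    depth = np.full((h, w), np.inf)
    for i in np.flatnonzero(valid):
        if z[i] < depth[v[i], u[i]]:
            depth[v[i], u[i]] = z[i]
            index_map[v[i], u[i]] = i                 # pixel -> 3D point index
    return index_map

def back_project_region(region_pixels, index_map):
    """Collect the indices of the 3D points whose pixels fall inside a
    2D line-support region (given as (row, col) pairs)."""
    hits = [index_map[r, c] for r, c in region_pixels]
    return np.unique([i for i in hits if i >= 0])
```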

LSHP modeling

Each 3D line-support region is fitted by an LSHP structure. The LSHP structure is used to validate the 3D line-support region and provide a geometric constraint for line segments.

It is difficult to directly fit the LSHP structure to a 3D line-support region in 3D space. Instead, we can take advantage of the corresponding 2D line-support regions to ascertain an optimal projection direction for the 3D line-support region (Fig. 6(a)). By projecting the 3D line-support region along this direction, the fitting reduces to a 2D problem.
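
One way to realize this project-then-fit idea is sketched below. It is an illustration under our own assumptions (PCA for the line direction, a small RANSAC loop for the two half-plane cross-sections), not the paper's exact procedure; `n_iters` and `tol` are placeholders.

```python
import numpy as np

def fit_lshp_sketch(region: np.ndarray, n_iters: int = 200, tol: float = 0.03):
    """Illustrative LSHP fitting for one 3D line-support region (N x 3 array).

    1. Estimate the line direction as the dominant PCA axis of the region.
    2. Project the points onto the plane perpendicular to that direction;
       the two half-planes then appear as two 2D half-lines meeting at the
       cross-section of the line segment.
    3. Fit the first 2D line by RANSAC, assign its inliers to one half-plane,
       and leave the remaining points to the second half-plane.
    """
    centred = region - region.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    d = vt[0]                       # estimated 3D line direction
    u, v = vt[1], vt[2]             # basis of the cross-section plane
    pts2d = np.stack([centred @ u, centred @ v], axis=1)

    def ransac_line(pts):
        best_inliers = np.zeros(len(pts), dtype=bool)
        rng = np.random.default_rng(0)
        for _ in range(n_iters):
            a, b = pts[rng.choice(len(pts), 2, replace=False)]
            n = np.array([-(b - a)[1], (b - a)[0]])
            norm = np.linalg.norm(n)
            if norm < 1e-9:
                continue
            n = n / norm
            inliers = np.abs((pts - a) @ n) < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return best_inliers

    side1 = ransac_line(pts2d)      # points supporting the first half-plane
    side2 = ~side1                  # remaining points -> second half-plane
    # Segment endpoints: extreme points of the region along the line direction.
    proj = region @ d
    p0, p1 = region[proj.argmin()], region[proj.argmax()]
    return p0, p1, side1, side2
```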

Refinement of 3D line-support regions and LSHP structures

Because the 3D line-support regions are obtained from multi-view images, there are many overlaps. It is necessary to combine the 3D line-support regions and LSHP structures that share the same line segment. At the same time, the boundaries of the 3D line-support regions also need to be refined in 3D space.
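
A simplified version of such a merge is sketched below: two segments are treated as the same plane-intersection line if their directions are nearly parallel, they lie close to a common line, and their extents overlap. The criterion and the greedy grouping are our own illustration, not the paper's refinement procedure, and the thresholds are placeholders.

```python
import numpy as np

def same_segment(p0, p1, q0, q1, angle_tol=np.deg2rad(5), dist_tol=0.05):
    """Decide whether two 3D segments (p0-p1 and q0-q1) likely describe the
    same plane-intersection line (illustrative thresholds)."""
    d1 = (p1 - p0) / np.linalg.norm(p1 - p0)
    d2 = (q1 - q0) / np.linalg.norm(q1 - q0)
    if np.arccos(np.clip(abs(d1 @ d2), 0.0, 1.0)) > angle_tol:
        return False
    # Distance from q's endpoints to the infinite line through p0 along d1.
    def dist_to_line(x):
        r = x - p0
        return np.linalg.norm(r - (r @ d1) * d1)
    if max(dist_to_line(q0), dist_to_line(q1)) > dist_tol:
        return False
    # Require the projections onto d1 to overlap (or nearly touch).
    a0, a1 = sorted([0.0, (p1 - p0) @ d1])
    b0, b1 = sorted([(q0 - p0) @ d1, (q1 - p0) @ d1])
    return b0 <= a1 + dist_tol and a0 <= b1 + dist_tol

def merge_segments(segments):
    """Greedy union of segments that pass the test above; each group is
    replaced by the span of its members along the common direction."""
    merged, used = [], [False] * len(segments)
    for i, (p0, p1) in enumerate(segments):
        if used[i]:
            continue
        group = [p0, p1]
        for j in range(i + 1, len(segments)):
            if not used[j] and same_segment(p0, p1, *segments[j]):
                used[j] = True
                group.extend(segments[j])
        d = (p1 - p0) / np.linalg.norm(p1 - p0)
        t = np.array([(g - p0) @ d for g in group])
        merged.append((p0 + t.min() * d, p0 + t.max() * d))
    return merged
```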

Environment

The method was evaluated using typical street-scene LiDAR point clouds acquired by a RIEGL VMX-450 MLS system; the average density of the point clouds used is approximately 2500 points/m². We also tested our method on the classical “sharp sphere” point cloud data.

Information on the test point clouds is summarized in Table 1. The first column of the table gives the name of the data, the second column gives the figure number under which the point cloud appears in this paper, and the third column gives the number of points.

Application

In this section, we illustrate the effectiveness of 3D line-support regions and their associated LSHP structures in urban scene abstraction.

Today, with current scanning devices in widespread use, large-scale dense point clouds of outdoor environments can be acquired efficiently. However, fully modeling an entire scene is time-consuming with current technology, and for other applications, such as browsing the data over the Internet, bandwidth is the major constraint.

Conclusion and future work

In this paper, we have proposed an effective line segment extraction method that is capable of accurately extracting line segments from large-scale unorganized raw scan point clouds. The 3D line-support regions are also extracted at the same time. These 3D line-support regions are fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for line segments, making the line segments more reliable and accurate.

The proposed method was tested on raw scan point clouds of large-scale, complex, real-world scenes.

Acknowledgments

This work was supported by an NSFC Grant (Project No. 61371144) and an NSERC Discovery Grant. The authors would like to thank the anonymous reviewers for their valuable comments.

References (32)

  • T. Chen et al. 3D line segment detection for unorganized point clouds from multi-view stereo.
  • J. Daniels et al. Robust smooth feature extraction from point clouds.
  • A. Desolneux et al. Meaningful alignments. Int. J. Comput. Vis. (2000).
  • E. Eisemann et al. Flash photography enhancement via intrinsic relighting. ACM Trans. Graph. (2004).
  • S. Heuel et al. Matching, reconstructing and grouping 3D lines from multiple views using uncertain projective geometry.
  • A. Jain et al. Exploiting global connectivity constraints for reconstruction of 3D line segments from images.
