Recent progresses on real-time 3D shape measurement using digital fringe projection techniques
Introduction
With recent advancements in digital technology, real-time 3D shape measurement plays an increasingly important role in numerous fields, including manufacturing, medical sciences, computer sciences, homeland security, and entertainment.
Over the past few years, real-time 3D shape measurement technologies have been advancing rapidly. For real-time 3D shape measurement, the 3D shapes have to be acquired rapidly, processed quickly, and displayed in real time. Time-of-flight (TOF) is one of the techniques that is used commercially. For this technique, a single camera is used to measure the time delay of modulated light from an active emitter. Based on TOF techniques, Canesta (http://www.canesta.com) and 3DV Systems (http://www.3dvsystems.com/) have developed real-time 3D range scanning cameras that allow the data to be acquired and processed in real time. However, the achieved depth accuracy is usually not high due to the fundamental limitations of this technique. For example, Canesta achieved a depth uncertainty of 0.3–1.5 cm, depending on the measurement conditions [1].
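The depth computation behind direct TOF sensing reduces to converting a measured round-trip delay into distance via z = c·t/2. A minimal sketch (the function name and the 10 ns example are illustrative, not any vendor's API):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_s):
    """Depth in meters from a measured round-trip delay in seconds (z = c*t/2)."""
    return C * round_trip_s / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m of depth, which illustrates
# why centimeter-level accuracy demands sub-100-picosecond timing resolution.
```

The scale of the numbers involved is what limits the achievable depth accuracy noted above: small timing jitter translates directly into centimeter-level depth uncertainty.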
Stereo vision is another technique that is extensively used for 3D shape measurement. For this technique, two cameras viewing from different angles acquire a pair of images at the same time. By finding corresponding pairs of points in the two images, 3D information can be extracted. The data acquisition speed is fast (as fast as the cameras allow). However, this technique hinges on detecting corresponding pairs in the two camera images, which is difficult for objects without strong surface texture. In addition, because finding correspondences is a fundamentally difficult problem, it is very difficult for stereo systems to reach real-time 3D shape reconstruction.
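Once a correspondence is found, depth follows from triangulation; for a rectified stereo pair the standard relation is z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity between the matched pixels. A minimal sketch (all parameter values are illustrative):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from a rectified stereo match via triangulation: z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("matched points must have positive disparity")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: f = 1000 px, baseline B = 0.1 m, disparity d = 50 px
# give a depth of 2.0 m. Finding d reliably is the hard part on
# texture-poor surfaces, which is the weakness discussed above.
```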
Spacetime stereo is another technique that has potential for high-speed 3D shape measurement [2], [3], [4]. To resolve the correspondence problem, a projector projects a sequence of active patterns for assistance. In a short period of time, a number of structured patterns are projected and captured by the cameras. The correspondences between the two cameras' pixels are identified based on the actively projected structured patterns. Because the active patterns allow stereo matching to be done rapidly, this technique has the potential to achieve real-time 3D shape measurement. However, it has some drawbacks: (1) for any measurement point, the projector and both cameras must be able to "see" it, so only the region where all three devices overlap can be measured, which is much smaller than any single device's field of view; (2) because stereo matching is utilized, it is very difficult for this technique to reach pixel-level resolution.
A structured light system is similar to a stereo technique in that it only utilizes two devices for 3D shape measurement. It replaces one camera of a stereo system with a projector to project structured patterns [5], which are encoded with codewords through certain codification strategies. From the captured structured patterns, the codewords can be decoded. If the codewords are unique, the correspondence between the projector sensor and the camera sensor is uniquely identified, and 3D information can be calculated through triangulation. To reach high-speed 3D shape measurement, the structured patterns must be switched rapidly and captured in a short period of time. Rusinkiewicz and Levoy developed a real-time 3D shape measurement system based on the stripe boundary code [6], [7]. The 3D data acquisition speed is 15 frames/s, because capturing each structured pattern takes a finite exposure time, and four patterns are required to reconstruct one 3D model. Most structured light systems use binary patterns, where only 0s and 1s are used for codification. The advantages of a binary method are: (1) simple, since the coding and decoding algorithms are very simple; (2) fast, since the simple processing algorithm permits very fast processing speeds; and (3) robust, since only two levels are used, making it very robust to noise. However, it is very difficult for this technique to reach pixel-level spatial resolution at very high speed, because the stripe width must be larger than one projector pixel.
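A common binary codification uses Gray codes, in which neighboring stripes differ in exactly one bit, so a single thresholding error shifts the decoded stripe index by at most one. Assuming each captured pattern has already been thresholded to a 0/1 bit per pixel (a preprocessing step not shown here), the decoding step can be sketched as:

```python
def decode_gray(bits):
    """Decode a Gray-coded bit list (most significant bit first) into a stripe index.

    bits: 0/1 values obtained for one pixel across the N captured binary patterns.
    """
    value = 0
    bit = 0
    for g in bits:
        bit ^= g                    # successive XOR converts Gray code to binary
        value = (value << 1) | bit  # accumulate the binary stripe index
    return value

# Three patterns encode 8 stripes: Gray code 010 -> stripe index 3.
```

With N patterns, 2^N stripes can be distinguished, which is why the stripe width, and hence the spatial resolution, is tied to the number of patterns that can be projected and captured in the available time.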
To increase the spatial resolution without reducing the measurement speed, multiple-level codification strategies were proposed at the sacrifice of increased sensitivity to noise. Pan et al. developed a color N-ary method for 3D shape measurement [8]. This system can measure neutral-color objects very well. However, because color patterns are used, the measurement accuracy is affected by the surface color. In the extreme case, an N-ary pattern becomes a trapezoidal-shaped pattern when the graylevel increment is 1. Huang et al. developed such an algorithm, called the trapezoidal phase-shifting algorithm [9]. This algorithm reaches pixel-level spatial resolution at high measurement speed, since only three patterns are required. However, it requires that the projector and the camera be in focus to alleviate errors induced by image blurring or defocus, although it is less sensitive to this problem because the errors cancel out to a certain degree due to the special design of the patterns. Triangular-shaped phase-shifting methods were also proposed for high-speed 3D shape measurement [10]. Jia et al. have successfully demonstrated that this technique can measure smooth objects at very high speed. However, this technique suffers if the projector or camera is not in focus. Moreover, only two images are used, which is not sufficient to solve for the so-called "phase" uniquely, so the method also relies on neighboring-pixel information to ensure a successful measurement. Guan et al. proposed a composite method for real-time 3D shape measurement [11]. In this method, fringes of different frequencies and orientations are encoded into a single grayscale image, and the phases at the different frequencies are obtained through demodulation. They have successfully demonstrated that this algorithm performs the measurement well. However, the data quality is not very high since only 8-bit fringe images were used.
When binary, multiple-level, trapezoidal, or triangular structured patterns are blurred to a certain degree, they all become sinusoidal in shape. Therefore, utilizing sinusoidal stripe patterns directly is a natural choice. The technique that uses a projector to project sinusoidal patterns is called the digital fringe projection technique, and if a phase-shifting algorithm is adopted, it is called the digital fringe projection and phase-shifting technique. This method is essentially a special structured light technique whose structured patterns are composed of sinusoidal stripes, called fringe images. To reach high-speed measurement, a small number of fringe images are recorded and used for 3D shape measurement. In this paper, we mainly focus on real-time 3D shape measurement using the digital fringe projection technique, and especially on our recent explorations.
The paper is organized as follows. Section 2 explains the principle of real-time 3D shape measurement technique that uses a digital fringe projection and phase-shifting method. Section 3 details the real-time 3D shape measurement technique that we developed over the past few years. Section 4 presents the most recent advancements on this technology. Section 5 addresses some challenging tasks for the existing technologies, and Section 6 summarizes the paper.
Digital fringe projection system
Fig. 1 shows a typical digital fringe projection system. A computer generates digital fringe patterns composed of vertical straight stripes and sends them to a digital video projector. The projector projects the fringe images onto the object, whose surface profile deforms the vertical stripes. A camera captures the distorted fringe images into the computer, and the computer then analyzes the fringe images to obtain 3D shape information.
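A widely used variant of the phase-shifting analysis applied to such fringe images is the three-step algorithm with 2π/3 phase shifts: given three images I1 = I' + I''cos(φ − 2π/3), I2 = I' + I''cos(φ), I3 = I' + I''cos(φ + 2π/3), the wrapped phase at each pixel is φ = arctan[√3(I1 − I3) / (2I2 − I1 − I3)]. A per-pixel sketch (vectorized with NumPy; the synthetic test pixel is illustrative):

```python
import math

import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase (in (-pi, pi]) from three fringe images shifted by 2*pi/3.

    Assumes I1 = I' + I''cos(phi - 2pi/3), I2 = I' + I''cos(phi),
    I3 = I' + I''cos(phi + 2pi/3) at every pixel.
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthesize one pixel with known phase phi = 0.5 rad and recover it.
phi = 0.5
i1 = 0.6 + 0.3 * math.cos(phi - 2 * math.pi / 3)
i2 = 0.6 + 0.3 * math.cos(phi)
i3 = 0.6 + 0.3 * math.cos(phi + 2 * math.pi / 3)
# three_step_phase(i1, i2, i3) recovers phi = 0.5
```

Note that the recovered phase is wrapped into (−π, π]; a phase-unwrapping step is needed before converting phase to depth.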
Real-time 3D shape measurement technique
As introduced in Section 1, to perform real-time 3D shape measurement, 2D fringe images must be captured rapidly, 3D shape must be reconstructed quickly, and the reconstructed 3D geometries must be displayed instantaneously. Hence, three tasks (acquisition, reconstruction, and display) must be completed simultaneously and quickly.
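The three tasks can run as a concurrent pipeline, with acquisition, reconstruction, and display each in its own thread passing frames through queues. The sketch below is a toy model of that structure only; the stage names and the string "frames" are placeholders, not a real camera or rendering API:

```python
import queue
import threading

def acquire(out_q):
    # Stand-in for a camera loop: push three "fringe image sets".
    for frame_id in range(3):
        out_q.put(f"fringe-{frame_id}")
    out_q.put(None)  # sentinel: acquisition finished

def reconstruct(in_q, out_q):
    # Stand-in for phase analysis and triangulation.
    while (frames := in_q.get()) is not None:
        out_q.put(frames.replace("fringe", "mesh"))
    out_q.put(None)  # propagate the sentinel downstream

def display(in_q, shown):
    # Stand-in for real-time rendering: record what would be drawn.
    while (mesh := in_q.get()) is not None:
        shown.append(mesh)

raw_q, mesh_q, shown = queue.Queue(), queue.Queue(), []
threads = [
    threading.Thread(target=acquire, args=(raw_q,)),
    threading.Thread(target=reconstruct, args=(raw_q, mesh_q)),
    threading.Thread(target=display, args=(mesh_q, shown)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# shown now holds "mesh-0", "mesh-1", "mesh-2"
```

The point of the structure is that the three stages overlap in time: while one 3D frame is being displayed, the next is being reconstructed and a third is being acquired, so the slowest stage, not the sum of all three, sets the frame rate.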
Recent progresses
After developing the first real-time 3D shape measurement system using a digital fringe projection and phase-shifting algorithm [45], we, as well as Prof. Huang's group at Stony Brook, have made tremendous progress on improving this technology. The vast majority of our efforts have been devoted to reducing the errors caused by motion, extending the measurement range, and acquiring color texture simultaneously. This section summarizes this most recent progress.
Challenges
To extend their applications to other fields and to enhance their performance, real-time 3D shape measurement techniques have to increase their measurement speed, range, and capabilities. Miniaturizing the real-time 3D shape measurement system will also be essential to bringing this technology to ordinary consumers. The challenges are huge, but the future is promising. In this section, we address the paramount challenges facing real-time 3D shape measurement techniques.
Conclusions
Real-time 3D shape measurement is increasingly important, with its applications expanding rapidly into diverse areas. Rapid progress has been made over the past few years, but tremendous challenges remain. This paper has presented the real-time 3D shape measurement techniques that we have developed over the past few years, explained the most recent efforts towards advancing this technology further, and addressed the challenges that we are facing or will encounter.
Acknowledgments
I thank Prof. Peisen S. Huang and Mr. Hong Guo for allowing me to access their most recent data shown in Fig. 7. This work is sponsored by Iowa State University.
References (66)
- et al. Depth object recovery using radial basis functions. Opt Commun (1999)
- et al. Flexible and accurate implementation of a binocular structured light system. Opt Lasers Eng (2008)
- et al. Absolute fringe order calculation using optimised multi-frequency selection in full-field profilometry. Opt Lasers Eng (2005)
- et al. Novel 3-D video for quantification of facial movement. Otolaryngol Head Neck Surg (2008)
- Hsu S, Acharya S, Rafii A, New R. Performance of a time-of-flight range camera for intelligent vehicle safety...
- et al. Spacetime stereo: a unifying framework for depth from triangulation. IEEE Trans Pattern Anal Mach Intell (2005)
- et al. Spacetime stereo: shape recovery for dynamic scenes
- et al. Spacetime faces: high-resolution capture for modeling and animation. ACM Trans Graph (2004)
- et al. Pattern codification strategies in structured light systems. Pattern Recognit (2004)
- Hall-Holt O, Rusinkiewicz S. Stripe boundary codes for real-time structured-light range scanning of moving objects. In:...