Abstract
Recent studies have demonstrated that images constructed from lidar reflectance information are more robust to lighting changes in outdoor environments than traditional passive stereo camera imagery. Moreover, for visual navigation methods originally developed using stereo vision, such as visual odometry (VO) and visual teach and repeat (VT&R), scanning lidar can serve as a direct replacement for the passive sensor. The resulting systems retain the efficiency of sparse, appearance-based techniques while overcoming the dependence on adequate and consistent lighting that traditional cameras require. However, owing to the scanning nature of the lidar and to timing assumptions made in previous implementations, data acquired during continuous vehicle motion suffer from geometric motion distortion, which can result in poor metric VO estimates even over short distances (e.g., 5–10 m). This paper revisits the measurement-timing assumption made in previous systems and proposes a frame-to-frame VO estimation framework built on a novel pose interpolation scheme that explicitly accounts for the exact acquisition time of each feature measurement. We present promising preliminary results of the new method using data generated from a lidar simulator and experimental data collected with a real scanning laser rangefinder in a planetary analogue environment.
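To make the pose interpolation idea concrete: because a scanning lidar acquires each feature at a slightly different time during continuous motion, the sensor pose at each feature's timestamp can be approximated by interpolating between the poses at two frame boundaries. The sketch below is an illustrative example only (not the authors' implementation, whose details are in the paper): a constant-velocity interpolation on SO(3) × R³, with the rotation interpolated through the matrix exponential/logarithm and the translation interpolated linearly. All function names are our own.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector (so(3) hat operator)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Rotation matrix from an axis-angle vector (Rodrigues formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-10:
        return np.eye(3) + hat(w)  # first-order approximation near identity
    A = hat(w / theta)
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * (A @ A)

def so3_log(R):
    """Axis-angle vector from a rotation matrix (inverse of so3_exp)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-10:
        return np.zeros(3)
    return theta / (2.0 * np.sin(theta)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def interpolate_pose(t, t0, t1, R0, p0, R1, p1):
    """Pose at feature time t, assuming constant velocity between the
    frame poses (R0, p0) at time t0 and (R1, p1) at time t1."""
    alpha = (t - t0) / (t1 - t0)
    dR = so3_log(R1 @ R0.T)            # relative rotation over the interval
    R = so3_exp(alpha * dR) @ R0       # fraction of that rotation
    p = (1.0 - alpha) * p0 + alpha * p1  # linear position interpolation
    return R, p
```

In a frame-to-frame VO pipeline along these lines, each feature measurement would be predicted from the interpolated pose at its own acquisition timestamp, rather than from a single pose assumed for the whole scan.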
Acknowledgments
We would like to extend our deepest thanks to the staff of Ethier Sand and Gravel in Sudbury, Ontario, Canada for allowing us to conduct our field tests on their grounds. We also wish to thank Dr. James O'Neill of Autonosys for his help in preparing the lidar sensor for our field tests. From the Autonomous Space Robotics Laboratory, we would like to acknowledge Colin McManus for being instrumental in gathering the data used in this paper, Andrew Lambert for his help in preparing the GPS payload, Paul Furgale and Chi Hay Tong for their work on the GPU SURF algorithm, Goran Basic for designing and assembling the Autonosys payload mount, and Keith Leung for providing onsite photography for the field tests. Finally, we thank the Natural Sciences and Engineering Research Council of Canada, the Canada Foundation for Innovation, Defence R&D Canada at Suffield (particularly Jack Collier), the Canadian Space Agency, and MDA Space Missions (particularly Cameron Ower, Raja Mukherji, and Joseph Bakambu) for providing the financial and in-kind support necessary to conduct this research.
© 2014 Springer-Verlag Berlin Heidelberg
Cite this chapter
Dong, H., Barfoot, T.D. (2014). Lighting-Invariant Visual Odometry using Lidar Intensity Imagery and Pose Interpolation. In: Yoshida, K., Tadokoro, S. (eds) Field and Service Robotics. Springer Tracts in Advanced Robotics, vol 92. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40686-7_22
Print ISBN: 978-3-642-40685-0
Online ISBN: 978-3-642-40686-7