Abstract
We propose a novel method for vision-based simultaneous localization and mapping (vSLAM) using a biologically inspired vision sensor that mimics the human retina. The sensor consists of a 128×128 array of asynchronously operating pixels, which independently emit events upon temporal illumination changes. Such a representation generates small amounts of data with high temporal precision; however, most classic computer vision algorithms need to be reworked, as they require full RGB(-D) images at fixed frame rates. Our vSLAM algorithm operates on individual pixel events and generates high-quality 2D environmental maps with precise robot localization. We evaluate our method against a state-of-the-art marker-based external tracking system and demonstrate real-time performance on standard computing hardware.
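The event representation described in the abstract can be sketched as follows. This is an illustrative model only, not the authors' implementation: the `Event` fields and the frame-driven `events_from_intensity` helper are assumptions for illustration — real temporal-contrast hardware compares each pixel against the log intensity at that pixel's last event, asynchronously and without frames.

```python
import math
from dataclasses import dataclass


@dataclass
class Event:
    x: int         # pixel column (0..127 for a 128x128 sensor)
    y: int         # pixel row
    t: float       # timestamp in seconds (microsecond-scale in hardware)
    polarity: int  # +1 for brighter, -1 for darker


def events_from_intensity(prev, curr, t, threshold=0.15):
    """Emit an event wherever log-intensity changed by more than `threshold`.

    A simplified, frame-driven approximation of an asynchronous
    temporal-contrast sensor; pixels with sub-threshold change stay silent,
    which is why event streams are sparse compared to full frames.
    """
    events = []
    for y, row in enumerate(curr):
        for x, i_new in enumerate(row):
            d = math.log(i_new + 1e-6) - math.log(prev[y][x] + 1e-6)
            if abs(d) > threshold:
                events.append(Event(x, y, t, 1 if d > 0 else -1))
    return events
```

A downstream event-based vSLAM pipeline would then consume each `Event` individually, updating pose and map estimates per event rather than per frame.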
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Weikersdorfer, D., Hoffmann, R., Conradt, J. (2013). Simultaneous Localization and Mapping for Event-Based Vision Systems. In: Chen, M., Leibe, B., Neumann, B. (eds) Computer Vision Systems. ICVS 2013. Lecture Notes in Computer Science, vol 7963. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39402-7_14
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-39401-0
Online ISBN: 978-3-642-39402-7