An Integrated Framework for Robust Human-Robot Interaction

Mohan Sridharan
ISBN13: 9781466626720 | ISBN10: 1466626720 | EISBN13: 9781466627031
DOI: 10.4018/978-1-4666-2672-0.ch016
Cite Chapter

MLA

Sridharan, Mohan. "An Integrated Framework for Robust Human-Robot Interaction." Robotic Vision: Technologies for Machine Learning and Vision Applications, edited by Jose Garcia-Rodriguez and Miguel A. Cazorla Quevedo, IGI Global, 2013, pp. 281-301. https://doi.org/10.4018/978-1-4666-2672-0.ch016

APA

Sridharan, M. (2013). An Integrated Framework for Robust Human-Robot Interaction. In J. Garcia-Rodriguez & M. Cazorla Quevedo (Eds.), Robotic Vision: Technologies for Machine Learning and Vision Applications (pp. 281-301). IGI Global. https://doi.org/10.4018/978-1-4666-2672-0.ch016

Chicago

Sridharan, Mohan. "An Integrated Framework for Robust Human-Robot Interaction." In Robotic Vision: Technologies for Machine Learning and Vision Applications, edited by Jose Garcia-Rodriguez and Miguel A. Cazorla Quevedo, 281-301. Hershey, PA: IGI Global, 2013. https://doi.org/10.4018/978-1-4666-2672-0.ch016


Abstract

Developments in sensor technology and sensory input processing algorithms have enabled the use of mobile robots in real-world domains. As robots are increasingly deployed to interact with humans in our homes and offices, they need the ability to operate autonomously based on sensory cues and high-level feedback from non-expert human participants. Towards this objective, this chapter describes an integrated framework that jointly addresses the learning, adaptation, and interaction challenges associated with robust human-robot interaction in real-world application domains. The novel probabilistic framework consists of: (a) a bootstrap learning algorithm that enables a robot to learn layered graphical models of environmental objects and adapt to unforeseen dynamic changes; (b) a hierarchical planning algorithm based on partially observable Markov decision processes (POMDPs) that enables the robot to reliably and efficiently tailor learning, sensing, and processing to the task at hand; and (c) an augmented reinforcement learning algorithm that enables the robot to acquire limited high-level feedback from non-expert human participants and merge that feedback with the information extracted from sensory cues. Instances of these algorithms are implemented and evaluated on mobile robots and in simulated domains, using vision as the primary source of information in conjunction with range data and simple verbal inputs. Furthermore, a strategy is outlined to integrate these components to achieve robust human-robot interaction in real-world application domains.
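The hierarchical planning component builds on the standard POMDP machinery of maintaining a belief distribution over hidden states and revising it after each sensing action and observation. The snippet below is a minimal sketch of that textbook building block, b'(s') ∝ O(s', a, o) Σ_s T(s, a, s') b(s), applied to a toy visual-search setting; it is not the chapter's hierarchical algorithm, and the state names, transition matrices, and observation probabilities are illustrative assumptions.

```python
import numpy as np

# Illustrative toy model (assumed, not from the chapter): the robot must decide
# whether a target object is in the "left" or "right" part of the scene.
states = ["target-left", "target-right"]          # hidden states
actions = ["look-left", "look-right"]             # sensing actions
observations = ["seen", "not-seen"]               # processed visual cues

# T[a][s, s']: sensing actions do not move the target, so transitions are identity.
T = {a: np.eye(len(states)) for a in actions}

# O[a][s', o]: probability of each observation after taking action a in state s'.
# Looking at the correct side detects the target with probability 0.8 (assumed).
O = {
    "look-left":  np.array([[0.8, 0.2],    # target-left
                            [0.1, 0.9]]),  # target-right
    "look-right": np.array([[0.1, 0.9],
                            [0.8, 0.2]]),
}

def belief_update(belief, action, obs_idx):
    """Standard POMDP belief update: b'(s') ∝ O(s', a, o) * sum_s T(s, a, s') b(s)."""
    predicted = T[action].T @ belief              # prediction step
    updated = O[action][:, obs_idx] * predicted   # correction with the observation
    return updated / updated.sum()                # normalize to a probability vector

# Example: start with a uniform belief, look left, and fail to see the target.
belief = np.array([0.5, 0.5])
belief = belief_update(belief, "look-left", observations.index("not-seen"))
print(belief)  # belief shifts toward "target-right"
```

In the chapter's framework this flat update is presumably the primitive that is reused at each level of the planning hierarchy, with higher-level beliefs constraining which lower-level sensing and processing actions are worth considering; the sketch above only shows the single-level case.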
