Paper
Reinforcement learning for robot control
18 February 2002
William D. Smart, Leslie Pack Kaelbling
Proceedings Volume 4573, Mobile Robots XVI (2002); https://doi.org/10.1117/12.457434
Event: Intelligent Systems and Advanced Manufacturing, 2001, Boston, MA, United States
Abstract
Writing control code for mobile robots can be a very time-consuming process. Even for apparently simple tasks, it is often difficult to specify in detail how the robot should accomplish them. Robot control code is typically full of magic numbers that must be painstakingly set for each environment in which the robot must operate. The idea of having a robot learn how to accomplish a task, rather than being told explicitly, is an appealing one. It seems easier and much more intuitive for the programmer to specify what the robot should be doing, and to let it learn the fine details of how to do it. In this paper, we describe JAQL, a framework for efficient learning on mobile robots, and present the results of using it to learn control policies for simple tasks.
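The abstract frames the approach as reward-driven learning: the programmer specifies what the robot should do (a reward signal), and the learning algorithm works out how to do it. As a rough illustration only, and not a reconstruction of the paper's JAQL framework (which uses value-function approximation over continuous sensor spaces), the sketch below shows a minimal tabular Q-learning loop; the action names and the observe/step calls are hypothetical placeholders.

```python
# Illustrative sketch only: tabular Q-learning on a toy discretized control
# task. Not the JAQL algorithm from the paper; all names are hypothetical.
import random
from collections import defaultdict

ALPHA = 0.1     # learning rate
GAMMA = 0.99    # discount factor
EPSILON = 0.1   # exploration probability

ACTIONS = ["forward", "left", "right", "stop"]   # hypothetical action set
Q = defaultdict(float)                           # (state, action) -> value estimate


def choose_action(state):
    """Epsilon-greedy selection over the discrete action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


def q_update(state, action, reward, next_state):
    """One-step Q-learning backup: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])


# In a robot control loop, the programmer supplies only the reward ("what"),
# while the table above accumulates the policy ("how"):
#   state = observe()                      # hypothetical sensing call
#   action = choose_action(state)
#   next_state, reward = step(action)      # hypothetical actuation + reward
#   q_update(state, action, reward, next_state)
```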
© (2002) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
William D. Smart and Leslie Pack Kaelbling "Reinforcement learning for robot control", Proc. SPIE 4573, Mobile Robots XVI, (18 February 2002); https://doi.org/10.1117/12.457434
CITATIONS
Cited by 12 scholarly publications.
KEYWORDS
Mobile robots, Line width roughness, Control systems, Space robots, Machine learning, Computer science, Control systems design