Trajectory planning of a robot using learning algorithms

Tsoularis, A., Kambhampati, C. and Warwick, K. (1992) Trajectory planning of a robot using learning algorithms. In: First International Conference on Intelligent Systems Engineering, 1992. IEE, pp. 13-16. ISBN 0852965494

Full text not archived in this repository.

It is advisable to refer to the publisher's version if you intend to cite from this work.

Abstract/Summary

The authors consider the problem of a robot manipulator operating in a noisy workspace. The manipulator is required to move from an initial position P(i) to a final position P(f). P(i) is assumed to be completely defined. However, P(f) is obtained by a sensing operation and is assumed to be fixed but unknown. The authors' approach to this problem involves the use of three learning algorithms: the discretized linear reward-penalty (DLR-P) automaton, the linear reward-penalty (LR-P) automaton and a nonlinear reinforcement scheme. An automaton is placed at each joint of the robot and, acting as a decision maker, plans the trajectory based on noisy measurements of P(f).
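The full text is not archived here, so the sketch below is not the authors' implementation. It illustrates a generic linear reward-penalty (LR-P) automaton of the kind named in the abstract: a probability distribution over a discrete action set (for example, candidate displacements of one joint) that is nudged toward actions rewarded by the noisy environment and away from penalized ones. All class and parameter names, the step sizes, and the toy penalty probabilities are assumptions for illustration only.

    import random

    class LinearRewardPenaltyAutomaton:
        """Minimal sketch of a linear reward-penalty (LR-P) learning automaton."""

        def __init__(self, num_actions, reward_step=0.05, penalty_step=0.05):
            self.r = num_actions
            self.a = reward_step    # step size used on reward
            self.b = penalty_step   # step size used on penalty
            self.p = [1.0 / num_actions] * num_actions  # start from a uniform distribution

        def choose_action(self):
            # Sample an action index according to the current probability vector.
            u, cum = random.random(), 0.0
            for i, pi in enumerate(self.p):
                cum += pi
                if u <= cum:
                    return i
            return self.r - 1

        def update(self, chosen, penalty):
            # Standard LR-P update: on reward, move probability mass toward the
            # chosen action; on penalty, move it toward the remaining actions.
            if not penalty:
                for j in range(self.r):
                    if j == chosen:
                        self.p[j] += self.a * (1.0 - self.p[j])
                    else:
                        self.p[j] *= (1.0 - self.a)
            else:
                for j in range(self.r):
                    if j == chosen:
                        self.p[j] *= (1.0 - self.b)
                    else:
                        self.p[j] = self.b / (self.r - 1) + (1.0 - self.b) * self.p[j]

    if __name__ == "__main__":
        # Toy demonstration: fixed but unknown penalty probability per action,
        # standing in for noisy feedback about progress toward P(f); action 2 is best.
        penalty_probs = [0.7, 0.5, 0.2]
        automaton = LinearRewardPenaltyAutomaton(num_actions=3)
        for _ in range(5000):
            action = automaton.choose_action()
            automaton.update(action, penalty=random.random() < penalty_probs[action])
        print([round(pi, 3) for pi in automaton.p])

In the setting described by the abstract, one such automaton would sit at each joint and the environment response would come from noisy measurements of P(f); the discretized (DLR-P) and nonlinear schemes differ only in how the probability vector is updated.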

Item Type: Book or Report Section
Refereed: Yes
Divisions: Science
ID Code: 21730
Uncontrolled Keywords: decision maker, discretized linear reward-penalty, final position, initial position, learning algorithms, learning automata, linear reward-penalty, noisy measurements, noisy workspace, nonlinear reinforcement, path planning, robot manipulator, sensing operation
Publisher: IEE
