
Application of Actor-Critic Method to Mobile Robot Using State Representation Based on Probability Distributions


Author:
Manabu Gouko
Abstract
In this study, I applied an actor-critic learning method to a mobile robot that uses a state representation based on distances between probability distributions. This state representation, proposed in previous work, is insensitive to environmental changes, i.e., the sensor signals map to the same state even under certain environmental changes. The method, a reinforcement learning algorithm, can handle continuous state and action spaces. In simulation, I verified that the mobile robot can learn a wall-following task, and then confirmed that the trained robot can still achieve the task when its sensors are artificially altered.
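
As a rough, hypothetical illustration of the actor-critic scheme the abstract describes (a TD-error-driven critic paired with a stochastic policy over a continuous action), the Python sketch below trains on a toy one-dimensional task. The toy environment, the radial-basis-function features, the Gaussian policy, and all parameter values are illustrative assumptions; in particular, this does not implement the paper's state representation based on distances between probability distributions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task standing in for the wall-following robot:
# state s in [0, 1] is a normalized distance reading; the agent picks
# a continuous action a (steering) and is rewarded for keeping s near 0.5.
def step(s, a):
    s_next = np.clip(s + 0.1 * a + 0.01 * rng.normal(), 0.0, 1.0)
    reward = -abs(s_next - 0.5)
    return s_next, reward

def features(s, centers=np.linspace(0.0, 1.0, 10), width=0.1):
    """Gaussian radial basis features over the continuous state (assumption)."""
    return np.exp(-0.5 * ((s - centers) / width) ** 2)

n = 10
w_critic = np.zeros(n)   # linear state-value weights: V(s) = w_critic . phi(s)
w_mu = np.zeros(n)       # actor mean weights: mu(s) = w_mu . phi(s)
sigma = 0.3              # fixed exploration noise (assumption)
alpha_c, alpha_a, gamma = 0.1, 0.01, 0.95

s = rng.uniform()
for t in range(20000):
    phi = features(s)
    mu = w_mu @ phi
    a = mu + sigma * rng.normal()  # sample from the Gaussian policy
    s_next, r = step(s, a)
    # The TD error drives both the critic and the actor updates.
    delta = r + gamma * (w_critic @ features(s_next)) - w_critic @ phi
    w_critic += alpha_c * delta * phi
    # Policy-gradient step for the Gaussian mean:
    # grad log pi(a|s) w.r.t. w_mu = (a - mu) / sigma^2 * phi(s)
    w_mu += alpha_a * delta * (a - mu) / sigma**2 * phi
    s = s_next

print("learned mu at s=0.2 and s=0.8:", w_mu @ features(0.2), w_mu @ features(0.8))

After training, the learned mean action steers the state toward 0.5 from either side, which is the continuous-state, continuous-action behavior an actor-critic learner of this general form produces.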
Keywords
Reinforcement Learning; Actor-Critic Method; State Representation; Mobile Robot
Pages
191–195