Paper

Moratorium Effect on Estimation Values in Simple Reinforcement Learning


Authors:
Katsuhiro Honda; Akira Notsu; Yuki Tezuka
Abstract
In this paper, we introduce a low-priority cut-in (moratorium) into chain-form reinforcement learning, which we previously proposed as Simple Reinforcement Learning for agents with small memory. Learning in the real world is difficult because the effectively infinite number of states and actions demands large amounts of memory and learning time. To address this problem, state-action pairs with better estimated values are categorized as "GOOD" during the learning process. In addition, the ordering of the estimated values is rearranged, because the sequence itself is regarded as important information. However, the method is heavily affected by the action policy: if an agent tends to explore many states, its memory overflows with low-value data. The low-priority cut-in (moratorium) therefore enhances the method to resolve this problem. We conducted simulations to observe the influence of our methods, and the results show a positive effect on learning.
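
Illustration only: the abstract does not give the concrete data structures, so the following Python sketch is one possible reading of the idea, not the authors' implementation. It keeps a small, fixed-capacity list of state-action estimates ordered by value, categorizes entries above a threshold as "GOOD", and lets newly explored low-value entries enter only at the low-priority end (the moratorium), so that heavy exploration does not flush good entries out of memory. The class and function names, the capacity and threshold values, and the toy chain task are all assumptions made for this sketch.

import random


class MoratoriumMemory:
    """Small fixed-capacity store of (state, action) estimates, kept best-first.

    Entries whose value reaches `good_threshold` are categorized as "GOOD" and
    inserted in value order near the front; everything else enters at the back
    of the sequence (the "moratorium") and is evicted first when memory is full.
    Capacity, threshold, and list layout are illustrative assumptions.
    """

    def __init__(self, capacity=12, good_threshold=0.3):
        self.capacity = capacity
        self.good_threshold = good_threshold
        self.entries = []  # list of [(state, action), value], best value first

    def value(self, state, action):
        for key, v in self.entries:
            if key == (state, action):
                return v
        return 0.0  # unseen pairs default to zero

    def update(self, state, action, new_value):
        key = (state, action)
        self.entries = [e for e in self.entries if e[0] != key]
        if new_value >= self.good_threshold:
            # "GOOD" entry: keep it in value order near the front.
            pos = 0
            while pos < len(self.entries) and self.entries[pos][1] > new_value:
                pos += 1
            self.entries.insert(pos, [key, new_value])
        else:
            # Moratorium: a low-value entry waits at the low-priority end.
            self.entries.append([key, new_value])
        if len(self.entries) > self.capacity:
            self.entries.pop()  # evict the lowest-priority entry


def run_chain(episodes=200, n_states=8, alpha=0.3, gamma=0.9, eps=0.2):
    """Q-learning on a toy chain task (move right to reach the goal state)."""
    mem = MoratoriumMemory()
    actions = (0, 1)  # 0: move left, 1: move right
    for _ in range(episodes):
        s = 0
        for _ in range(4 * n_states):  # cap steps per episode
            if random.random() < eps:
                a = random.choice(actions)  # epsilon-greedy exploration
            else:
                a = max(actions, key=lambda act: mem.value(s, act))
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0
            best_next = max(mem.value(s_next, act) for act in actions)
            td_error = r + gamma * best_next - mem.value(s, a)
            mem.update(s, a, mem.value(s, a) + alpha * td_error)
            s = s_next
            if s == n_states - 1:
                break
    return mem


if __name__ == "__main__":
    memory = run_chain()
    for (state, action), value in memory.entries:
        print(state, action, round(value, 3))

In this sketch, entries on the rewarded path accumulate value and stay near the front of the bounded memory, while exploratory, low-value entries sit at the back and are the first to be evicted.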
Keywords
Reinforcement Learning; Q-learning; State-action Set Categorization
Pages
112–119
DOI
10.5963/IJCSAI0303004