The Asymptotic Equipartition Property in Reinforcement Learning and its Relation to Return Maximization

URI http://harp.lib.hiroshima-u.ac.jp/hiroshima-cu/metadata/7044
File
Title
The Asymptotic Equipartition Property in Reinforcement Learning and its Relation to Return Maximization
Author
Name IWATA Kazunori
Reading イワタ カズノリ
Alternate name 岩田 一貴
Name IKEDA Kazushi
Reading イケダ カズシ
Alternate name
Name SAKAI Hideaki
Reading サカイ ヒデアキ
Alternate name
Subject
Reinforcement learning
Markov decision process
Information theory
Asymptotic equipartition property
Stochastic complexity
Return maximization
Abstract

We discuss an important property of empirical sequences in reinforcement learning, called the asymptotic equipartition property. It states that, when the number of time steps is sufficiently large, the typical set of empirical sequences has probability nearly one, all elements of the typical set are nearly equiprobable, and the number of elements in the typical set is an exponential function of the sum of conditional entropies. We refer to this sum as the stochastic complexity. Using this property, we show that return maximization depends on two factors: the stochastic complexity and a quantity that depends on the parameters of the environment. Here, return maximization means that the best sequences in terms of expected return have probability one. We also examine the sensitivity of the stochastic complexity, which serves as a qualitative guide for tuning the parameters of the action-selection strategy, and give a sufficient condition for return maximization in probability.
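As a rough sketch of the property stated above, in standard typical-set notation (the symbols T, A_\epsilon^{(T)}, p, and C_T are our own assumptions, not necessarily the paper's): let T be the number of time steps, A_\epsilon^{(T)} the typical set of empirical sequences, p(x) the probability of a sequence x, and C_T the stochastic complexity (the sum of conditional entropies). The three claims then read, for any \epsilon > 0 and all sufficiently large T,

\Pr\bigl( A_\epsilon^{(T)} \bigr) > 1 - \epsilon,
2^{-(C_T + T\epsilon)} \le p(x) \le 2^{-(C_T - T\epsilon)} \quad \text{for every } x \in A_\epsilon^{(T)},
(1 - \epsilon)\, 2^{\,C_T - T\epsilon} \le \bigl| A_\epsilon^{(T)} \bigr| \le 2^{\,C_T + T\epsilon}.

When the process is i.i.d. with per-step entropy H, the stochastic complexity reduces to C_T = TH and these bounds recover the classical asymptotic equipartition property.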

Description Peer Reviewed
Journal Title
Neural Networks
Volume
19
Issue
1
Spage
62
Epage
75
Published Date
2006-01
Publisher
Elsevier
ISSN
0893-6080
NCID
AA10680676
Language
eng
NIIType
Journal Article
Text Version
Author version
Rights
Copyright © 2006 Elsevier Ltd. All rights reserved.
Relation URL
Old URI
Set
hiroshima-cu