Title
Advice-Based Exploration in Model-Based Reinforcement Learning
Abstract
Convergence to an optimal policy using model-based reinforcement learning can require significant exploration of the environment. In some settings such exploration is costly or even impossible, for example when no simulator is available or the state space is prohibitively large. In this paper we examine the use of advice to guide the search for an optimal policy. To this end we propose a rich language for providing advice to a reinforcement learning agent. Unlike constraints, which can eliminate optimal policies, advice guides exploration while preserving the guarantee of convergence to an optimal policy. Experimental results on deterministic grid worlds demonstrate that good advice can reduce the amount of exploration required to learn a satisficing or optimal policy, while remaining robust to incomplete or misleading advice.
Year
2018
Venue
Canadian Conference on AI
Field
Convergence (routing), Satisficing, Computer science, Markov decision process, Robustness (computer science), Linear temporal logic, Artificial intelligence, Grid, Machine learning, Reinforcement learning
DocType
Conference
Citations
0
PageRank
0.34
References
14
Authors
4