Abstract
---
Convergence to an optimal policy using model-based reinforcement learning can require significant exploration of the environment. In some settings, such exploration is costly or even infeasible: for example, when no simulator is available or the state space is prohibitively large. In this paper we examine the use of advice to guide the search for an optimal policy. To this end, we propose a rich language for providing advice to a reinforcement learning agent. Unlike constraints, which can eliminate optimal policies, advice guides exploration while preserving the guarantee of convergence to an optimal policy. Experimental results on deterministic grid worlds demonstrate the potential for good advice to reduce the amount of exploration required to learn a satisficing or optimal policy, while maintaining robustness in the face of incomplete or misleading advice.
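The abstract does not spell out the advice mechanism, but the core idea, biasing exploration toward advice-consistent actions without ever ruling any action out, can be illustrated with a small sketch. Everything below (the Manhattan-distance heuristic, the `bias` parameter, the function names) is an illustrative assumption for a deterministic grid world, not the paper's actual algorithm or its advice language (which, per the field tags, involves linear temporal logic).

```python
# Hypothetical sketch: advice-biased exploration. Actions endorsed by the
# advice are sampled more often, but every action keeps nonzero probability,
# so the underlying learner's convergence guarantees are unaffected.
import random

ACTIONS = ["up", "down", "left", "right"]
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def advice_endorses(state, action, landmark):
    """Assumed advice heuristic: endorse actions that reduce the Manhattan
    distance to a landmark cell (e.g., 'head toward the key')."""
    x, y = state
    dx, dy = MOVES[action]
    before = abs(landmark[0] - x) + abs(landmark[1] - y)
    after = abs(landmark[0] - (x + dx)) + abs(landmark[1] - (y + dy))
    return after < before

def pick_exploratory_action(state, landmark, bias=0.8):
    """With probability `bias`, sample among advice-endorsed actions;
    otherwise sample uniformly, so no action is ever eliminated."""
    endorsed = [a for a in ACTIONS if advice_endorses(state, a, landmark)]
    if endorsed and random.random() < bias:
        return random.choice(endorsed)
    return random.choice(ACTIONS)

if __name__ == "__main__":
    # Usage: explore a 5x5 grid from (0, 0) with advice pointing at (4, 4).
    state, landmark = (0, 0), (4, 4)
    for _ in range(5):
        a = pick_exploratory_action(state, landmark)
        dx, dy = MOVES[a]
        state = (min(max(state[0] + dx, 0), 4), min(max(state[1] + dy, 0), 4))
        print(a, state)
```

Because the uniform fallback keeps all actions reachable, misleading advice only slows learning rather than blocking the optimal policy, which matches the robustness claim in the abstract.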
Year | Venue | Field |
---|---|---|
2018 | Canadian Conference on AI | Convergence (routing), Satisficing, Computer science, Markov decision process, Robustness (computer science), Linear temporal logic, Artificial intelligence, Grid, Machine learning, Reinforcement learning
DocType | Citations | PageRank
---|---|---
Conference | 0 | 0.34
References | Authors
---|---
14 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Rodrigo Toro Icarte | 1 | 0 | 0.34 |
Toryn Qwyllyn Klassen | 2 | 5 | 3.15 |
Richard Anthony Valenzano | 3 | 39 | 6.62 |
Sheila A. McIlraith | 4 | 4577 | 491.08