Automatic Curriculum Graph Generation for Reinforcement Learning Agents
M. Svetlik, M. Leonetti, J. Sinapov, R. Shah, N. Walker, and P. Stone, “Automatic Curriculum Graph Generation for Reinforcement Learning Agents,” in Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), 2017, doi: 10.1609/aaai.v31i1.10933.
Abstract
In recent years, research has shown that transfer learning methods can be leveraged to construct curricula that sequence a series of simpler tasks such that performance on a final target task is improved. A major limitation of existing approaches is that such curricula are handcrafted by humans who are typically domain experts. To address this limitation, we introduce a method to generate a curriculum based on task descriptors and a novel metric of transfer potential. Our method automatically generates a curriculum as a directed acyclic graph (as opposed to a linear sequence as done in existing work). Experiments in both discrete and continuous domains show that our method produces curricula that improve the agent’s learning performance when compared to the baseline condition of learning on the target task from scratch.
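To make the curriculum-as-a-graph idea concrete, the sketch below (Python) shows one way such a directed acyclic curriculum could be assembled. It is an illustration, not the authors' algorithm: the pairwise transfer_potential function, the threshold, and the assumption that tasks are listed in order of increasing complexity all stand in for the paper's task-descriptor-based analysis.

    # Minimal sketch (assumed interface, not the paper's implementation): keep an
    # edge from a source task to a harder task only when a hypothetical pairwise
    # transfer-potential score exceeds a threshold, yielding a curriculum DAG that
    # ends at the target task.
    from collections import defaultdict

    def build_curriculum_dag(tasks, target, transfer_potential, threshold=0.0):
        """Return adjacency lists {task: [successor, ...]} for a curriculum DAG.

        tasks              -- task identifiers, assumed ordered by increasing complexity
        target             -- the final target task (assumed last in `tasks`)
        transfer_potential -- callable (source, dest) -> float, a hypothetical score
        threshold          -- minimum potential for an edge to be kept
        """
        edges = defaultdict(list)
        for i, src in enumerate(tasks):
            for dst in tasks[i + 1:]:          # only point "forward", so no cycles
                if transfer_potential(src, dst) > threshold:
                    edges[src].append(dst)
        # Make sure every source task eventually feeds into the target task.
        for src in tasks[:-1]:
            if not edges[src]:
                edges[src].append(target)
        return dict(edges)

    def topological_order(edges, tasks):
        """Order in which an agent would train on the curriculum (Kahn's algorithm)."""
        indegree = {t: 0 for t in tasks}
        for dsts in edges.values():
            for d in dsts:
                indegree[d] += 1
        frontier = [t for t in tasks if indegree[t] == 0]
        order = []
        while frontier:
            t = frontier.pop()
            order.append(t)
            for d in edges.get(t, []):
                indegree[d] -= 1
                if indegree[d] == 0:
                    frontier.append(d)
        return order

    if __name__ == "__main__":
        # Toy gridworld-style tasks of increasing size; the scores are invented
        # purely for illustration.
        tasks = ["5x5_empty", "5x5_one_enemy", "10x10_one_enemy", "10x10_target"]
        scores = {("5x5_empty", "5x5_one_enemy"): 0.6,
                  ("5x5_empty", "10x10_one_enemy"): 0.2,
                  ("5x5_one_enemy", "10x10_one_enemy"): 0.7,
                  ("10x10_one_enemy", "10x10_target"): 0.9}
        dag = build_curriculum_dag(tasks, "10x10_target",
                                   lambda s, d: scores.get((s, d), 0.0),
                                   threshold=0.3)
        print(dag)
        print(topological_order(dag, tasks))

Training then proceeds on the source tasks in a topological order of the graph before the target task, with value functions or policies transferred along the edges; the graph form lets independent source tasks sit on separate branches rather than being forced into a single linear sequence.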
BibTeX Entry
@inproceedings{svetlik2017aaai,
  author    = {Svetlik, Maxwell and Leonetti, Matteo and Sinapov, Jivko and Shah, Rishi and Walker, Nick and Stone, Peter},
  title     = {Automatic Curriculum Graph Generation for Reinforcement Learning Agents},
  booktitle = {Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)},
  publisher = {Association for the Advancement of Artificial Intelligence},
  location  = {San Francisco},
  month     = feb,
  year      = {2017},
  doi       = {10.1609/aaai.v31i1.10933},
  keywords  = {curriculum learning; reinforcement learning},
  wwwtype   = {conference},
  wwwpdf    = {https://www.cs.utexas.edu/%7Epstone/Papers/bib2html-links/AAAI17-Svetlik.pdf},
  wwwposter = {https://doi.org/10.5281/zenodo.3244636}
}