Efficient Multi-Task Learning via Iterated Single-Task Transfer
We investigate whether repeatedly performing single-task transfer RL, fine-tuning a policy from one task to the next, can compete with multi-task RL (MTRL).
We describe a method for finding near-optimal sequences of transfers in this setting, and use it to show that performing the best such sequence of transfers is competitive with other MTRL methods on the MetaWorld MT10 benchmark. [Semantic Scholar URL]
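To make the idea of choosing a transfer sequence concrete, here is a minimal sketch. It assumes we already have estimated pairwise transfer benefits (the toy `SCORES` table below is invented for illustration, not taken from the paper or MT10) and simply searches task orderings for the one maximizing cumulative benefit. The paper's actual method for finding near-optimal sequences is not specified here; this is only an illustrative brute-force baseline.

```python
from itertools import permutations

# Hypothetical pairwise transfer scores: SCORES[(a, b)] is the benefit of
# fine-tuning a policy trained on task a onto task b (toy numbers).
SCORES = {
    ("reach", "push"): 0.8,
    ("reach", "pick-place"): 0.3,
    ("push", "pick-place"): 0.7,
    ("push", "reach"): 0.5,
    ("pick-place", "reach"): 0.2,
    ("pick-place", "push"): 0.4,
}

def sequence_score(seq):
    """Total transfer benefit accumulated by visiting tasks in this order."""
    return sum(SCORES[(a, b)] for a, b in zip(seq, seq[1:]))

def best_transfer_sequence(tasks):
    """Exhaustive search over task orderings for the best transfer chain."""
    return max(permutations(tasks), key=sequence_score)

best = best_transfer_sequence(["reach", "push", "pick-place"])
print(best)  # -> ('reach', 'push', 'pick-place')
```

Exhaustive search is only feasible for small task sets like MT10's ten tasks; a heuristic or greedy search would be needed at larger scale.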
This page is a stub. Please see the links above for more information.