Online learning of shaping rewards in reinforcement learning. Grzes, Kudenko. Neural Networks 2010.


  1. When you don't have a potential function defined a priori, is it possible to learn one while interacting with the environment and still do better?
  2. Two cases are considered: model-free (SARSA) as well as model-based (R-Max)
  3. The shaping approach in the model-free case is to use a lower-resolution gridding of the state space for the shaping function, which can be learned more rapidly than the ground-level values
    1. For the model-based case, the "free space assumption" is used <not sure what that means – Manhattan distance for navigation tasks?>
    2. In both cases, they use an abstracted task to do the shaping
  4. Extends Grzes, M., & Kudenko, D. (2008a). Multigrid reinforcement learning with reward shaping. In Proceedings of the 18th International Conference on Artificial Neural Networks (ICANN), LNCS
  5. Shaping can be useful because value function approximation can be used for the shaping function, and even if that approximation is unsafe, it still will not influence the final policy (potential-based shaping is policy-invariant)
  6. Ng's work on shaping was for model-free methods; John's paper extended it to model-based, R-Max-style settings
    1. John's paper showed that if the shaping function is admissible, the algorithm is PAC-MDP
  7. The "free space assumption" is used for real-time learning of heuristics (à la LRTA*)
    1. “The free space assumption deals with initial uncertainty assuming that all actions in all states are unblocked.”
  8. “In the automatic shaping approach (Marthi, 2007) an abstract MDP is formulated and solved. In the initial phase of learning, the abstract MDP model is built and, after a defined number of episodes, the abstract MDP is solved exactly and its value function used as the potential function for ground states. In this paper, we propose an algorithm which applies a multigrid strategy…”
    1. The method used here (for the model-free part), though, is itself model-free, doesn't increase computational cost, and requires only minimal domain knowledge <knowledge of how to aggregate states is needed, although they may just group states together based on the transition function>
  9. Here, the shaping function is simply a value function, but each state is mapped to a set of states, and the value for that state is maintained over the entire set (generalization to improve the learning rate); see the first sketch after these notes
  10. On to the model-based part
  11. In effect, R-Max uses the heuristic that each state can lead directly to the goal state.  Instead, 1/(1-γ) can be replaced by a tighter (although still admissible) heuristic value, which still maintains the PAC-MDP guarantees; see the second sketch after these notes
    1. This is what John's paper is about
    2. The algorithm is PAC-MDP if the heuristic is admissible (i.e., an optimistic upper bound on the true value)
  12. The free space assumption says that if actions may fail in the actual domain, ignore that possibility (for example, ignore walls in a navigation task)
  13. <I don’t really see how the model-based part of this paper contributes anything significant beyond John’s paper>
  14. Good source of references
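
A minimal sketch of the model-free idea (items 3 and 9), assuming a gridworld environment; this is not the authors' code, and the aggregation factor, learning rates, and action set are my own assumptions. The shaping potential is itself a value function, but maintained over a coarser aggregation of the state space and updated online alongside Q:

```python
import random
from collections import defaultdict

GAMMA = 0.99
ALPHA = 0.1        # learning rate for the ground-level Q (assumed)
ALPHA_PHI = 0.2    # learning rate for the abstract-level value function (assumed)
EPSILON = 0.1
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # assumed 4-connected moves
CELL = 4           # assumed aggregation factor: 4x4 ground cells per abstract state

Q = defaultdict(float)      # Q[(state, action)] over ground states
V_abs = defaultdict(float)  # value function over abstract (aggregated) states


def aggregate(state):
    """Map a ground state (x, y) to its coarse-grid abstract state."""
    x, y = state
    return (x // CELL, y // CELL)


def potential(state):
    """Shaping potential Phi(s): the learned value of the abstract state containing s."""
    return V_abs[aggregate(state)]


def epsilon_greedy(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


def sarsa_shaped_update(s, a, r, s_next, a_next, done):
    # Potential-based shaping term F = gamma * Phi(s') - Phi(s) (Ng et al. form),
    # which leaves the optimal policy unchanged even when Phi is inaccurate.
    F = (0.0 if done else GAMMA * potential(s_next)) - potential(s)
    target = r + F + (0.0 if done else GAMMA * Q[(s_next, a_next)])
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

    # Update the abstract value function from the same experience, so the
    # potential sharpens online as learning proceeds.
    z, z_next = aggregate(s), aggregate(s_next)
    v_target = r + (0.0 if done else GAMMA * V_abs[z_next])
    V_abs[z] += ALPHA_PHI * (v_target - V_abs[z])
```

Because the abstract value table has far fewer entries than Q, it becomes informative after only a few episodes and then guides the ground-level learner through the shaping term, without adding an extra planning step.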
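For the model-based part (items 11 and 12), a hedged sketch of replacing R-Max's default optimistic value with a tighter, still-admissible heuristic; the function names and the assumption that reward is bounded by R_MAX and obtainable only at the goal are mine, not the paper's:

```python
GAMMA = 0.95
R_MAX = 1.0  # assumed reward bound; reward taken to be obtainable only at the goal


def free_space_heuristic(state, goal):
    """Admissible upper bound on V*(state) under the free space assumption:
    pretend no action is ever blocked, so the goal is reachable in
    Manhattan-distance steps, giving V*(s) <= gamma**d * R_MAX / (1 - gamma)."""
    d = abs(state[0] - goal[0]) + abs(state[1] - goal[1])
    return (GAMMA ** d) * R_MAX / (1.0 - GAMMA)


def optimistic_q_init(states, actions, goal):
    """Initialize unknown state-action values with the heuristic instead of the
    loose default R_MAX / (1 - gamma). Since Q*(s, a) <= V*(s) <= h(s), the
    heuristic stays admissible and the PAC-MDP argument for the R-Max family
    still applies."""
    return {(s, a): free_space_heuristic(s, goal) for s in states for a in actions}


if __name__ == "__main__":
    # Toy usage: a 5x5 grid with the goal in the far corner.
    states = [(x, y) for x in range(5) for y in range(5)]
    actions = ["up", "down", "left", "right"]
    q0 = optimistic_q_init(states, actions, goal=(4, 4))
    print(q0[((0, 0), "up")])  # far from the goal: much tighter than R_MAX / (1 - GAMMA)
```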