
Where do Rewards Come From? Singh, Lewis, Barto. CogSci 2009.

  1. Considers rewards from an evolutionary perspective – where do they come from? (The general assumption in CS is simply that they are specified a priori.)
    1. Considers the optimal reward function, given a fitness function and a distribution over environments
  2. “Novel results from computational experiments show how traditional notions of extrinsically and intrinsically motivated behaviors may emerge from such optimal reward functions.”
    1. Rewards <functions?> are the result of search as opposed to hand-crafting
  3. The reward function doesn’t need to look anything like the fitness function, but it should guide behavior better than using the fitness function directly as the reward
  4. In this paper, rewards are taken to attach to things animals want to do because doing those things leads to fitness, but fitness is what ultimately matters
  5. In RL, agents ultimately don’t act directly according to the reward function, but rather according to the value function learned from it (see the value-function definition after these notes)
  6. Mentions incentive motivation (McClure, Daw, & Montague, 2003)
  7. Instead of the standard RL model where the agent interacts with a single environment, the claim here is that there are two environments: one internal to the agent (mental/physical) and one external to it
    1. But they still consider rewards to be part of the internal environment <so it seems we can consider the classical RL model WLOG>
  8. Psychologists distinguish intrinsic from extrinsic motivation, where intrinsic motivation causes animals to explore in the absence of any external (extrinsic) source of reward
    1. Here they adopt similar definitions, with intrinsic reward being whatever leads to analogs of intrinsic motivation in animals, and extrinsic reward corresponding to the standard reward function as defined in RL
  9. Criteria for a framework for reward:
    1. Formally well-defined and computationally feasible
    2. Makes minimal changes to existing RL framework (to maintain generalizability)
    3. Is not agent-specific (does not depend on whether the agent is model-free or model-based)
    4. Doesn’t care about the specific search process <seems to me the same as the above>
    5. “Framework derives rewards that capture both intrinsic and extrinsic motivational processes”
  10. Cites some earlier work (some by the same authors as this paper) as similar but not satisfying these criteria, and mentions Ackley and Littman’s work as being pretty close (it evolves a secondary reward function, but lacks a theoretical basis)
  11. Ultimately, agents are evaluated according to expected fitness (resulting from a <deterministic?> fitness function, and a distribution over environments)
  12. In this model, an RL agent learns to behave according to reward as usual, but the entire history of interaction while learning is then passed to a fitness function, which produces a real-valued evaluation of the history
    1. The goal is to find a reward function that optimizes the expected fitness over the distribution of environments <and histories?> (a formal statement of this objective is sketched after these notes)
  13. Sets up experiments and claims the emergence of interesting reward functions, which also capture regularities across environments
    1. Experiments are in 6×6 gridworlds, which may contain a small number of types of items (food, water)
  14. The learning agent is an ε-greedy Q-learner (a sketch of the full setup, with reward search wrapped around a Q-learning agent, appears after these notes)
  15. Reward functions that give reward only for items that directly increase fitness actually don’t do well
    1. Richer reward functions that also reward actions which increase fitness only indirectly lead to better performance
  16. In some domains, the resulting reward function brings about behavior resembling what intrinsic motivation is expected to produce in animals
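
Aside on item 5: the standard definition makes this concrete. This is textbook RL notation, not something specific to the paper: behavior is driven by a value function that aggregates reward over time, rather than by the one-step reward alone.

```latex
% Standard state-value function of a policy \pi (textbook RL notation, not from the paper):
% the agent acts greedily with respect to V^{\pi} (or the action-value function Q^{\pi}),
% which accumulates discounted reward, rather than with respect to the one-step reward alone.
V^{\pi}(s) \;=\; \mathbb{E}_{\pi}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t}\, r_{t+1} \;\middle|\; s_{0}=s \right]
```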
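
Aside on items 11–12: a compact statement of the objective, in my own notation (the symbol names below are mine; the paper’s exact formalism may differ in its details).

```latex
% Symbols (my labels, not necessarily the paper's):
%   \mathcal{R} = space of candidate reward functions
%   \mathcal{E} = distribution over environments
%   A(r)        = the learning agent equipped with reward function r
%   h           = full interaction history generated while A(r) learns in environment e
%   F(h)        = real-valued fitness of that history
r^{*} \;=\; \arg\max_{r \in \mathcal{R}} \;
  \mathbb{E}_{e \sim \mathcal{E}} \,
  \mathbb{E}_{h \sim \langle A(r),\, e \rangle}\!\left[ F(h) \right]
```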
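
Aside on items 13–15: a minimal, hedged sketch of how such a reward search could be wired up, assuming a toy 6×6 gridworld, a reward parameterized by one weight per event type, and plain random search for the outer loop. The item dynamics, the fitness definition (count of food eaten), and all constants are my simplifications, not the paper’s actual experimental setup.

```python
# Hedged sketch of the reward-search setup described in items 11-15: an outer search
# over candidate reward functions, each scored by the expected fitness of an
# epsilon-greedy Q-learning agent trained with that candidate reward. The gridworld,
# the reward parameterization (one weight per event type), the fitness definition
# (food eaten), and all constants are my simplifications, not the paper's setup.
import random
from itertools import product

GRID = 6                                         # 6x6 gridworld, as in the notes
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]     # right, left, down, up

def make_env(rng):
    """Sample an environment: random distinct locations for one food and one water item."""
    food, water = rng.sample(list(product(range(GRID), repeat=2)), 2)
    return {"food": food, "water": water}

def step(pos, action, env, rng):
    """Move the agent (clipped to the grid); report which item, if any, was consumed.
    Consumed items respawn at a random cell."""
    nr = min(max(pos[0] + action[0], 0), GRID - 1)
    nc = min(max(pos[1] + action[1], 0), GRID - 1)
    event = None
    for item in ("food", "water"):
        if (nr, nc) == env[item]:
            event = item
            env[item] = (rng.randrange(GRID), rng.randrange(GRID))
    return (nr, nc), event

def run_agent(reward_weights, env, rng, steps=1000, alpha=0.1, gamma=0.95, eps=0.1):
    """Inner loop: epsilon-greedy Q-learning driven only by the candidate reward.
    Returns the fitness of the produced history; here simply the amount of food eaten."""
    Q, pos, fitness = {}, (0, 0), 0
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q.get((pos, i), 0.0))
        new_pos, event = step(pos, ACTIONS[a], env, rng)
        reward = reward_weights.get(event, 0.0)   # candidate reward, chosen by the outer search
        fitness += 1 if event == "food" else 0    # fitness only "sees" food, never the reward
        best_next = max(Q.get((new_pos, i), 0.0) for i in range(len(ACTIONS)))
        Q[(pos, a)] = Q.get((pos, a), 0.0) + alpha * (reward + gamma * best_next - Q.get((pos, a), 0.0))
        pos = new_pos
    return fitness

def expected_fitness(reward_weights, rng, n_envs=10):
    """Score a candidate reward function by its average fitness over sampled environments."""
    return sum(run_agent(reward_weights, make_env(rng), rng) for _ in range(n_envs)) / n_envs

def search_rewards(n_candidates=20, seed=0):
    """Outer loop: naive random search over per-event reward weights.
    The paper searches its reward space more systematically; random search stands in here."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_candidates):
        candidate = {"food": rng.uniform(-1, 1),   # reward for eating food
                     "water": rng.uniform(-1, 1),  # reward for drinking (no direct fitness effect)
                     None: rng.uniform(-0.2, 0.0)} # per-step reward when nothing is consumed
        score = expected_fitness(candidate, rng)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    weights, score = search_rewards()
    print("best reward weights:", weights, "expected fitness:", score)
```

The structural point is that the inner learner only ever sees the candidate reward, while fitness is computed from the resulting history and consumed only by the outer search, which mirrors the separation the notes describe.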