The Cross Entropy Method for Fast Policy Search. Mannor, Rubinstein, Gat. ICML 2003


  1. According to Path Integral Policy Improvement with Covariance Matrix Adaptation, this is the first paper to use cross-entropy methods for policy search.
  2. Survey of policy gradient methods at:
    1. Experiments with Infinite-Horizon, Policy-Gradient Estimation.  Baxter, Bartlett, Weaver.  JAIR 2001
  3. Connection between stochastic optimization and Q-Learning made in:
    1. Asynchronous Stochastic Approximation and Q-learning.  Tsitsiklis.  Machine Learning 1994.
  4. Standard stochastic optimization is slow because of constraints on the way its updates can be performed
  5. The approach here (cross-entropy), on the other hand, is designed to be fast and works differently from standard stochastic optimization
  6. For background on Cross-Ent optimization, see:
    1. A Tutorial on the Cross-Entropy Method.  de Boer, Kroese, Mannor, Rubinstein.  Annals of Operations Research 2005
    2. The Cross-Entropy Method for Combinatorial and Continuous Optimization.  Rubinstein.  Methodology and Computing in Applied Probability 1999.
  7. “Good” samples in cross ent are usually referred to as elites
  8. They claim the method is generally robust as long as sample sizes are large and the initial sampling distribution is spread out; some additional smoothing may also be used (converging too fast seems to be an issue that comes up in some of the literature)
  9. A variant is proposed that is more adaptive in its choice of generation size and of how many samples are treated as elites
  10. In this paper, cross ent is used in a finite MDP to generate a stochastic policy
  11. Here they care about finding a global policy, and they update the policy at every state based on the history observed after that state was visited during a trajectory (see the first sketch after this list)
  12. In section 4 they discuss parameterized policies (as opposed to ones that just encode p(a|s) in a matrix)
  13. When using cross entropy this way (the normal way of parameterizing a policy), mu(a|s, theta) doesn’t have to be differentiable w.r.t. theta, which is not true in policy gradient algorithms (see the second sketch after this list)
  14. The algorithm is tested empirically on a gridworld as well as on inventory control (in that problem, the policy specifies the stock level at which each item is re-ordered)
  15. They say that other optimization methods could be used, but many are sensitive to sampling error (by which I think they mean noise); gradient methods, simulated annealing, and guided local search all have this problem
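
To make notes 7, 8, 10, and 11 concrete, here is a minimal sketch of cross-entropy search over a tabular stochastic policy in a finite MDP. It is not the paper's code: it picks elite trajectories by total return rather than using the per-state performance thresholds the paper describes, and the environment interface (reset/step returning an integer state, a reward, and a done flag) plus all hyperparameters are illustrative assumptions on my part.

import numpy as np

def ce_tabular_policy_search(env, n_states, n_actions,
                             n_trajectories=100, elite_frac=0.1,
                             n_iters=50, horizon=200, smoothing=0.7,
                             seed=0):
    rng = np.random.default_rng(seed)
    # Stochastic policy stored as a matrix of action probabilities p(a|s).
    policy = np.full((n_states, n_actions), 1.0 / n_actions)

    for _ in range(n_iters):
        returns, histories = [], []
        for _ in range(n_trajectories):
            s = env.reset()                  # assumed to return a state index
            total, hist = 0.0, []
            for _ in range(horizon):
                a = rng.choice(n_actions, p=policy[s])
                hist.append((s, a))
                s, r, done = env.step(a)     # assumed (next_state, reward, done)
                total += r
                if done:
                    break
            returns.append(total)
            histories.append(hist)

        # Keep the best-scoring trajectories (the "elites" of note 7).
        n_elite = max(1, int(elite_frac * n_trajectories))
        elite_idx = np.argsort(returns)[-n_elite:]

        # Re-estimate p(a|s) from state-action counts along elite trajectories.
        counts = np.zeros_like(policy)
        for i in elite_idx:
            for s, a in histories[i]:
                counts[s, a] += 1.0
        visited = counts.sum(axis=1) > 0
        new_policy = policy.copy()
        new_policy[visited] = counts[visited] / counts[visited].sum(axis=1, keepdims=True)

        # Smooth the update to avoid converging too fast (note 8).
        policy = smoothing * new_policy + (1.0 - smoothing) * policy

    return policy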
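
For notes 12 and 13, a similarly hedged sketch of cross-entropy search directly over policy parameters theta: candidates are sampled from a diagonal Gaussian, scored by a black-box return estimate, and the Gaussian is refit to the elites. No derivative of the policy with respect to theta is ever needed, which is the point of note 13. The evaluate_return function is a hypothetical stand-in for averaging the return of a few rollouts under mu(a|s, theta).

import numpy as np

def ce_parameter_search(evaluate_return, dim,
                        pop_size=50, elite_frac=0.2, n_iters=100,
                        init_std=1.0, min_std=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)
    std = np.full(dim, init_std)
    n_elite = max(1, int(elite_frac * pop_size))

    for _ in range(n_iters):
        # Sample candidate parameter vectors and score each by estimated return.
        thetas = mean + std * rng.standard_normal((pop_size, dim))
        scores = np.array([evaluate_return(t) for t in thetas])

        # Refit the Gaussian to the elite candidates; no gradients involved.
        elites = thetas[np.argsort(scores)[-n_elite:]]
        mean = elites.mean(axis=0)
        std = np.maximum(elites.std(axis=0), min_std)

    return mean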