## Cross-Entropy Randomized Motion Planning. Marin Kobilarov. RSS 11

This looked familiar. I already saw this talk online a while back:

https://aresearch.wordpress.com/2012/03/26/motion-planning-and-optimal-control-of-robotic-systems-marin-kobilarov-talk/

1. Mentions balancing exploration and exploitation in motion planning, citing:
1. Warren B. Powell. Approximate Dynamic Programming: Solving the Curses of Dimensionality. Wiley Series in Probability and Statistics, 2007.
2. M. Rickert, O. Brock, and A. Knoll. Balancing exploration and exploitation in motion planning. In Proc. IEEE Int. Conf. Robotics and Automation (ICRA), pages 2812–2817, 2008. doi: 10.1109/ROBOT.2008.4543636.
2. Citations given for CE are:
1. Reuven Y. Rubinstein and Dirk P. Kroese. The cross-entropy method: a unified approach to combinatorial optimization. Springer, 2004.
2. Dirk Kroese, Sergey Porotsky, and Reuven Rubinstein. The cross-entropy method for continuous multi-extremal optimization. Methodology and Computing in Applied Probability, 8:383–407, 2006. ISSN 1387-5841.
3. “The CE method is widely applicable and is used to successfully solve complex combinatorial problems such as the minimum graph cut or the traveling salesman problems”
4. “The scheme is general and converges to an optimum assuming that enough feasible trajectories can be sampled. As with most randomized methods there is no guarantee that this would be the global optimum but with high likelihood the global optimum can be approached if enough samples are generated”
1. This is contrary to what I remember reading somewhere else (that it may not converge), but I’m not sure
5. These authors relate CE to Monte Carlo (not MCTS) methods, by doing importance sampling. Citation for the Monte Carlo method:
1. Reuven Y. Rubinstein and Dirk P. Kroese. Simulation and the Monte Carlo Method. Wiley, 2008.
6. When doing importance sampling you sample from another distribution to estimate an integral. The assumption is that you can’t sample from the distribution you care about, so you draw samples from a distribution that is easy to sample from.
7. The best sampling distribution is the one that minimizes the variance of the estimate of the integral you are trying to compute.
8. A good way to approximate it is to pick, from a tractable parametric family, the distribution that minimizes the KL divergence to that optimal (zero-variance) distribution
9. There is a good explanation of CE along with its relationship to rare-event probabilities, but I am hungry right now and should re-read it in the future.
10. Says the initial distribution doesn’t have to be very good; it just needs good coverage of the space (these claims are a little hand-wavy without citations, so I’m not sure what to make of them)
11. He discusses doing uniform sampling for the first generation, which is how my implementation does it as well
12. Distribution parameter update is done according to EM
13. Also says it is good to inject noise into the samples to prevent premature convergence
14. Looks like the approach here is continuous-time and searches for parameters of optimal controllers (at least the first domain looks like LQR)
1. Searching trajectories through the state space, as opposed to policies?
15. “The Dubins car fails the small-time local controllability (STLC) test which creates interesting non-trivial control problems”
1. Cites LaValle
16. For the Dubins car example he uses an encoding that specifies which action to take and for how long (so each primitive is 2D: an action plus a variable duration)
17. Also does a helicopter domain – the description of the dynamics is pretty detailed
18. “The paper addresses the motion planning problem through a randomized optimization in the continuous space of feasible trajectories”
19. Motions in these domains are planned by selecting sequences of states, which are then stitched together using inverse kinematics.
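Items 6–8 are the standard importance-sampling argument; writing it out in standard notation (not the paper's):

$$\mathbb{E}_p[f(X)] = \int f(x)\,p(x)\,dx = \int \frac{f(x)\,p(x)}{q(x)}\,q(x)\,dx \approx \frac{1}{N}\sum_{i=1}^{N} \frac{f(x_i)\,p(x_i)}{q(x_i)}, \qquad x_i \sim q.$$

The zero-variance choice is $q^*(x) \propto |f(x)|\,p(x)$, which cannot be sampled directly (it depends on the very integral being estimated), so CE instead picks the member $q_\theta$ of a tractable family that minimizes $D_{\mathrm{KL}}(q^* \,\|\, q_\theta)$.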
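The CE loop described in items 10–13 (uniform first generation, parameter update fit to the elite samples, injected noise against premature convergence) can be sketched as a generic continuous optimizer. This is a minimal sketch assuming a single diagonal Gaussian rather than the paper's trajectory-space distributions; `cross_entropy_minimize` and all parameter names are hypothetical, not from the paper:

```python
import numpy as np

def cross_entropy_minimize(cost, low, high, pop=200, elite_frac=0.1,
                           iters=50, noise=0.1, seed=0):
    """Minimize `cost` over the box [low, high] with a Gaussian CE loop."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    dim = low.size
    # First generation: uniform samples, for broad coverage of the space.
    samples = rng.uniform(low, high, size=(pop, dim))
    n_elite = max(2, int(elite_frac * pop))
    for it in range(iters):
        costs = np.array([cost(x) for x in samples])
        elite = samples[np.argsort(costs)[:n_elite]]
        # Parameter update: maximum-likelihood fit of a Gaussian to the
        # elite set (the EM update reduces to this for a single Gaussian).
        mu = elite.mean(axis=0)
        sigma = elite.std(axis=0)
        # Inject decaying noise so the distribution does not collapse early.
        sigma = sigma + noise * max(0.0, 1.0 - it / iters)
        samples = np.clip(rng.normal(mu, sigma, size=(pop, dim)), low, high)
    return mu, cost(mu)
```

With a Gaussian mixture instead of a single Gaussian, the update in item 12 becomes a genuine EM fit to the elite samples; the rest of the loop is unchanged.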
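The encoding in item 16 (a trajectory as a sequence of action/duration pairs) can be illustrated by rolling out simple Dubins-car kinematics. `rollout_dubins`, the constant speed, and the Euler step size are my assumptions for the sketch, not the paper's setup:

```python
import numpy as np

def rollout_dubins(x0, primitives, v=1.0, dt=0.01):
    """Integrate Dubins-car kinematics (x' = v cos th, y' = v sin th,
    th' = omega) through a list of (omega, duration) primitives."""
    x, y, th = x0
    traj = [(x, y, th)]
    for omega, duration in primitives:
        for _ in range(int(duration / dt)):  # forward Euler steps
            x += v * np.cos(th) * dt
            y += v * np.sin(th) * dt
            th += omega * dt
            traj.append((x, y, th))
    return np.array(traj)
```

A candidate trajectory is then just the flattened vector of `2k` numbers for `k` primitives, which is exactly the kind of finite-dimensional decision vector the CE sampler can draw from.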