## Motion Planning and Optimal Control of Robotic Systems (talk by Marin Kobilarov)

A read-through of the actual paper is here: http://wp.me/pglXz-kD

1. Seems to be strongly motivated by real problems, with lots of examples on real robots or realistic simulations
2. Continuous vs. discrete mechanics (one involves integration, the other summation; continuous Euler-Lagrange equations vs. discrete Euler-Lagrange equations)
1. Discrete mechanics works with discrete trajectories, snapshots of the state across time, as opposed to continuous-time trajectories
2. Gives an example of the continuous and discrete dynamics in modelling pendulum motion
1. In this case, he demonstrates that the Runge-Kutta approximation dissipates energy over time, so the simulated pendulum stops swinging, while the discrete-mechanics integrator conserves it
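The pendulum comparison can be sketched in a few lines. This is a minimal illustration, not the speaker's code: symplectic Euler stands in here as the simplest discrete Euler-Lagrange update, next to a classical RK4 step.

```python
import math

def energy(theta, omega, g=9.81, l=1.0):
    # total energy of a unit-mass pendulum (kinetic + potential)
    return 0.5 * (l * omega) ** 2 - g * l * math.cos(theta)

def rk4_step(theta, omega, h, g=9.81, l=1.0):
    # classical 4th-order Runge-Kutta on the state (theta, omega)
    def f(th, om):
        return om, -(g / l) * math.sin(th)
    k1 = f(theta, omega)
    k2 = f(theta + 0.5 * h * k1[0], omega + 0.5 * h * k1[1])
    k3 = f(theta + 0.5 * h * k2[0], omega + 0.5 * h * k2[1])
    k4 = f(theta + h * k3[0], omega + h * k3[1])
    theta += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    omega += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return theta, omega

def variational_step(theta, omega, h, g=9.81, l=1.0):
    # simplest discrete Euler-Lagrange (symplectic Euler) update:
    # velocity first, then configuration; energy stays bounded
    omega = omega - h * (g / l) * math.sin(theta)
    theta = theta + h * omega
    return theta, omega
```

Running both for many steps and plotting `energy` over time shows the qualitative difference the talk describes: the variational update's energy oscillates but stays bounded, while the RK energy drifts.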
3. He is interested in discrete geometric mechanics
1. Uses vector fields, flow maps, and covariant differentials (?)
2. Transformations make the space linear
4. Gives the example of what he calls the snakeboard (a skateboard whose axles rotate) and shows that RK integration is very poor compared to the discrete mechanical equations
5. Computational costs of these methods are very low
6. Discrete Geometric Optimal Control.
1. Equations for this are very simple, and very similar to the equations for the original dynamics.
2. The difference is the Lagrangian
7. When working with nonlinear systems, one must use iterative methods, such as Newton-type root finding
1. Usually takes only a few (ten or so) iterations, so it can be done in real time
2. Regularity of the Lagrangian for control means there is an optimal control solution
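As a sketch of the Newton-type iteration mentioned above (with a hypothetical scalar residual standing in for the discrete Euler-Lagrange conditions, not the talk's actual equations):

```python
import math

def newton(f, df, x0, tol=1e-10, max_iter=20):
    # basic Newton root finding; converges quadratically near a regular root
    x = x0
    for i in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, i
        x -= fx / df(x)
    return x, max_iter

# hypothetical scalar residual: solve cos(x) - x = 0
root, iters = newton(lambda x: math.cos(x) - x,
                     lambda x: -math.sin(x) - 1.0, 1.0)
```

Consistent with the "ten or so iterations" remark, this converges to full precision in a handful of steps.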
8. Standard methods don't deal well with collisions or other nonsmooth dynamics
1. This is still an area of active research
9. The state-of-the-art methods for planning in nonsmooth domains (such as with obstacles) are RRTs (rapidly-exploring random trees) or PRMs (probabilistic roadmaps)
1. A drawback is that these don't really converge to the optimum; they are good only at finding some path to the goal
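A minimal RRT sketch makes the point concrete (hypothetical 2D world with one slab obstacle, not the speaker's implementation): the tree finds *a* feasible path, with no optimality guarantee.

```python
import random, math

def rrt(start, goal, is_free, iters=5000, step=0.1, goal_tol=0.15):
    # minimal 2D RRT: steer the nearest tree node toward random samples;
    # returns a (generally suboptimal) path to the goal, or None
    nodes = [start]
    parent = {0: None}
    random.seed(1)  # fixed seed for reproducibility
    for _ in range(iters):
        sample = goal if random.random() < 0.1 else (random.random(), random.random())
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        t = min(1.0, step / d) if d > 0 else 0.0
        new = (nx + t * (sample[0] - nx), ny + t * (sample[1] - ny))
        if not is_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# hypothetical obstacle: a vertical slab with a gap near the top
path = rrt((0.05, 0.05), (0.95, 0.95),
           lambda p: not (0.4 < p[0] < 0.6 and p[1] < 0.7))
```

The returned path zig-zags through the random samples; RRT*-style rewiring (as in the SCE-RRT* work mentioned later) is what adds asymptotic optimality.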
10. Proposes random sampling in trajectory space (adaptive sampling and stochastic optimization, using the Cross-Entropy method from rare-event estimation)
11. Trajectories can be parameterized in two ways: by piecewise constant control, or by specific states visited.
1. For control sequences, the trajectory is computed by discrete mechanics; for state parameterization, optimal control is needed between the states
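The control-sequence parameterization amounts to rolling a discrete dynamics map forward through the control values. A sketch, with a hypothetical double integrator standing in for the actual robot dynamics:

```python
def rollout(x0, controls, step, h):
    # piecewise-constant control parameterization: apply each control u
    # for one timestep h via a discrete dynamics map `step`
    traj = [x0]
    for u in controls:
        traj.append(step(traj[-1], u, h))
    return traj

# hypothetical discrete map: a double integrator, state = (position, velocity)
def double_integrator_step(x, u, h):
    pos, vel = x
    return (pos + h * vel, vel + h * u)

traj = rollout((0.0, 0.0), [1.0, 1.0, -1.0], double_integrator_step, 0.5)
```

The trajectory is then a deterministic function of the control sequence, so optimizing over trajectories reduces to optimizing over the finite-dimensional control vector.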
12. Reformulates the problem as optimization: instead of single points, try to find small regions around those points
13. Tries to estimate the volume of low-cost trajectories via MCMC
14. When estimating densities, one can work with mixtures of Gaussians, for example
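The cross-entropy idea: sample trajectory parameters from a density, keep the low-cost "elite" samples, and refit the density to them so it concentrates on low-cost regions. A minimal sketch with a single diagonal Gaussian (the talk also mentions Gaussian mixtures) and a hypothetical quadratic cost standing in for trajectory cost:

```python
import random

def cross_entropy_minimize(cost, dim, iters=50, n=100, elite_frac=0.1):
    # Cross-Entropy method: sample from a Gaussian, refit mean/std to the
    # elite (lowest-cost) samples, repeat until the density concentrates
    mu = [0.0] * dim
    sigma = [1.0] * dim
    for _ in range(iters):
        samples = [[random.gauss(m, s) for m, s in zip(mu, sigma)]
                   for _ in range(n)]
        samples.sort(key=cost)
        elite = samples[: max(1, int(elite_frac * n))]
        mu = [sum(x[i] for x in elite) / len(elite) for i in range(dim)]
        sigma = [max(1e-3, (sum((x[i] - mu[i]) ** 2 for x in elite)
                            / len(elite)) ** 0.5) for i in range(dim)]
    return mu

random.seed(0)
# hypothetical cost: a quadratic bowl with minimum at (2, 2, 2)
best = cross_entropy_minimize(lambda u: sum((ui - 2.0) ** 2 for ui in u), dim=3)
```

In the trajectory-space setting, each sample would be a control sequence fed through the discrete dynamics, and the refit density is what gives the rare-event / volume-estimation interpretation.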
15. SCE-RRT* algorithm
16. Combining sensing and control: how do you plan and try to gain as much information as possible about the environment within a restricted amount of time?
1. Implemented this on a model helicopter for SLAM-style learning/planning. Planned to move to full-sized helicopters later.
17. Conclusions:
1. Discrete mechanics leads to provably good numerical algorithms
2. Trajectory optimization should exploit state-space geometry
3. <missed the rest>
18. Future Work:
1. Numerical geometric dynamics and control (robust real-time numerics)
2. Planning and control under uncertainty, and robustness
3. Info-theoretic bounds on performance/efficiency/complexity
4. He cares about applications to autonomous robots (surface, sea, air, space)