Read-through of the actual paper is here: http://wp.me/pglXz-kD

- Seems to be strongly motivated by real problems; lots of examples on real robots or in realistic simulations
- Continuous vs. discrete mechanics (one involves integration, the other summation; Euler-Lagrange vs. discrete Euler-Lagrange equations)
- Discrete mechanics uses discrete trajectories built from snapshots across time, as opposed to continuous-time trajectories
- Gives an example of the continuous and discrete dynamics in modelling pendulum motion
- In this case, demonstrates that the Runge-Kutta approximation dissipates energy over time, so the simulated pendulum stops moving, while the discrete (variational) integrator doesn't
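
A toy illustration of the energy-drift point (my own sketch, not code from the talk): explicit Euler stands in for a generic non-geometric one-step method, and symplectic Euler is the simplest variational (discrete-mechanics) integrator. Note that explicit Euler actually *gains* energy on the pendulum where RK4 dissipates it, but the contrast is the same: the non-geometric method drifts while the discrete-mechanics one keeps energy bounded.

```python
import math

# Simple pendulum with unit mass/length/gravity: H(q, p) = p^2/2 - cos(q).
# Explicit Euler (non-geometric) drifts in energy over long runs, while
# symplectic Euler (the simplest variational integrator) stays bounded.

def energy(q, p):
    return 0.5 * p * p - math.cos(q)

def explicit_euler(q, p, h):
    return q + h * p, p - h * math.sin(q)

def symplectic_euler(q, p, h):
    p_new = p - h * math.sin(q)   # "kick" the momentum first...
    return q + h * p_new, p_new   # ...then "drift" with the new momentum

h, steps = 0.1, 5000
qe, pe = 1.0, 0.0   # explicit-Euler state: initial angle 1 rad, at rest
qs, ps = 1.0, 0.0   # symplectic-Euler state, same initial condition
e0 = energy(1.0, 0.0)
for _ in range(steps):
    qe, pe = explicit_euler(qe, pe, h)
    qs, ps = symplectic_euler(qs, ps, h)

print("energy drift, explicit euler:", abs(energy(qe, pe) - e0))
print("energy drift, symplectic    :", abs(energy(qs, ps) - e0))
```

The symplectic drift stays small and bounded for arbitrarily long runs, while the explicit-Euler "pendulum" blows up instead of oscillating.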

- He is interested in discrete geometric mechanics
- Uses vector fields, flow maps, and covariant differentials (?)
- Transformations make the space linear

- Gives an example of what he calls the snakeboard (a skateboard whose axles rotate) and shows that RK integration is very poor compared to the discrete mechanical equations
- Computational costs of these methods are very low
- Discrete Geometric Optimal Control.
- Equations for this are very simple, and very similar to the equations for the original dynamics.
- The difference is the Lagrangian

- When working with nonlinear systems, must use iterative methods, like Newton-type root finding
- Usually takes only a few (ten or so) iterations, so it can be done in real time
- Regularity of the Lagrangian (for control) means an optimal control solution exists
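
A minimal sketch of the Newton-type iteration meant here (my own illustration; the real solver targets the discrete Euler-Lagrange/optimality conditions, but the mechanism is the same scalar idea): quadratic convergence is why a handful of iterations usually suffices.

```python
import math

def newton(f, df, x0, tol=1e-10, max_iter=20):
    """Newton's method: iterate x <- x - f(x)/f'(x) until |f(x)| < tol."""
    x = x0
    for i in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, i       # converged; i iterations were needed
        x -= fx / df(x)
    return x, max_iter

# Toy residual: find the root of cos(x) near pi/2, starting from x0 = 1.0.
root, iters = newton(math.cos, lambda x: -math.sin(x), x0=1.0)
print(root, iters)
```

Even from a fairly distant initial guess, convergence to 10 decimal places takes only a few iterations, which is what makes real-time use plausible.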

- Standard methods don't deal well with collisions or other nonsmooth dynamics
- This is still an area of active research

- The state-of-the-art methods for planning in nonsmooth domains (such as with obstacles) are RRTs (rapidly exploring random trees) or PRMs (probabilistic roadmaps)
- A drawback is that these don't really converge to the optimum; they are only good at finding some path to the goal
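
A bare-bones 2D RRT sketch (my own illustration: unit square, point robot, no obstacles) showing why these planners find *some* path rather than an optimal one: the tree just grows toward random samples until it happens to reach the goal region.

```python
import math
import random

def rrt(start, goal, step=0.05, goal_tol=0.05, max_iters=5000, seed=0):
    """Grow a tree from start; return a path when a node lands near goal."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        sample = (rng.random(), rng.random())
        # nearest existing node to the random sample
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        # step a fixed distance from the nearest node toward the sample
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d)
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # walk back up the tree to extract the path
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

path = rrt((0.1, 0.1), (0.9, 0.9))
```

The returned path is feasible but typically jagged and far from shortest, which is exactly the drawback noted above (RRT* variants add rewiring to recover asymptotic optimality).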

**Proposes random sampling in trajectory space (adaptive sampling and stochastic optimization; Cross-Entropy for rare-event estimation)**
- Trajectories can be parameterized in two ways: by piecewise-constant controls, or by the specific states visited
- For control-sequence parameterization, the trajectory follows from discrete mechanics; for state parameterization, optimal control is needed (to connect the states)

- Reformulates the problem as an optimization: instead of individual points, try to find small low-cost regions around those points
- Tries to estimate the volume of low-cost trajectories via MCMC
- When estimating densities, can work with mixtures of Gaussians, for example
- SCE-RRT* algorithm
- Combining sensing and control: how do you plan so as to gain as much information as possible about the environment within a restricted amount of time?
- Implemented this on a model helicopter for SLAM-style learning/planning; plans to move to full-sized helicopters later
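
A toy sketch of the cross-entropy trajectory-sampling idea above (my own illustration, not the paper's code; the "trajectory" here is just the running sum of controls toward a target state): sample control sequences from a Gaussian, keep the lowest-cost "elite" fraction, refit the Gaussian to the elites, and repeat.

```python
import random
import statistics

def cost(u):
    # toy trajectory cost: reach x = 1.0 after summing the controls,
    # plus a small control-effort penalty
    x = sum(u)
    return (x - 1.0) ** 2 + 0.01 * sum(ui * ui for ui in u)

def cross_entropy(horizon=5, n_samples=200, n_elite=20, iters=30, seed=0):
    rng = random.Random(seed)
    mu = [0.0] * horizon      # per-timestep Gaussian mean
    sigma = [1.0] * horizon   # per-timestep Gaussian std dev
    for _ in range(iters):
        samples = [[rng.gauss(mu[t], sigma[t]) for t in range(horizon)]
                   for _ in range(n_samples)]
        samples.sort(key=cost)
        elites = samples[:n_elite]          # keep lowest-cost fraction
        for t in range(horizon):            # refit Gaussian to the elites
            col = [e[t] for e in elites]
            mu[t] = statistics.fmean(col)
            sigma[t] = statistics.pstdev(col) + 1e-6  # keep some spread
    return mu

u = cross_entropy()
```

The sampling distribution concentrates on the low-cost region of trajectory space; in the paper this same elite-refit loop runs over full dynamical trajectories (via discrete mechanics), with mixtures of Gaussians instead of a single Gaussian.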

- Conclusions:
- Discrete mechanics leads to provably good numerical algorithms
- Trajectory optimization should exploit state-space geometry
- <missed the rest>

- Future Work:
- Numerical geometric dynamics and control (robust real-time numerics)
- Planning and control; uncertainty and robustness
- Information-theoretic bounds on performance/efficiency/complexity
- He cares about applications to autonomous robots (surface, sea, air, space)