- Approach is concerned with dynamic walking on uneven terrain
- A common approach is to formulate the walking problem as a linear system, which can then be solved with a number of standard methods. This is problematic because it reduces energy efficiency, overly constrains the types of gaits that can be found, and requires constant compensation for what the body naturally does (which is nonlinear), introducing other forms of complexity
- Dynamic walkers exploit, rather than overcome, the inherent nonlinearities involved in legged locomotion
- The idea is to introduce only small corrections into the trajectories that naturally occur in the system
- Discusses motivation based on studies of how animals walk and how humans learn to walk
- The cost function is linear in the actions, though it can also be piecewise linear
- Solutions are nonstationary and finite-horizon
- They set up the Bellman equation as a linear program so that it can handle actions with hundreds to tens of thousands of dimensions (I'm not yet sure how this differs from the linear programming that can be done in the vanilla setting)
- A previous paper uses a similar approach and solves a domain with a 20,000-dimensional action space over 8,760 time steps (a year). That's crazy.
- They then introduce a pure-exploitation algorithm called SPAR-Storage, which constructs a concave piecewise-linear approximation of the value function
- In the limit, the approximation matches the true value function at the optimum
- All pieces begin with zero slope and zero value and are iteratively improved
- Even for very large storage state spaces (tens of thousands of energy levels), the algorithm converged in about 100 iterations. On the other hand, the algorithm is sensitive to the size of the rest of the state vector (only a handful of dimensions is practical), although they present some ideas for improving this
- There is a convergence proof that I am not reading carefully right now
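The "vanilla setting" LP mentioned above is worth spelling out. A minimal sketch, on a small made-up MDP (the transition matrix `P`, rewards `r`, and discount `gamma` are illustrative, not from the paper): the optimal value function solves the LP that minimizes the sum of values subject to the Bellman inequalities, one constraint per state-action pair.

```python
import numpy as np
from scipy.optimize import linprog

# Toy MDP: 3 states, 2 actions. P[a, s] is the next-state distribution
# for taking action a in state s; r[s, a] is the immediate reward.
gamma = 0.9
P = np.array([
    [[0.8, 0.2, 0.0],
     [0.0, 0.8, 0.2],
     [0.2, 0.0, 0.8]],
    [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5]],
])
r = np.array([[1.0, 0.5],
              [0.0, 1.0],
              [2.0, 0.0]])
n_actions, n_states, _ = P.shape

# LP: minimize sum_s V(s)  subject to  V(s) >= r(s,a) + gamma * P[a,s] @ V
# for every (s, a). Rearranged for linprog's A_ub @ V <= b_ub form:
#   (gamma * P[a,s] - e_s) @ V <= -r(s,a)
c = np.ones(n_states)
A_ub, b_ub = [], []
for a in range(n_actions):
    for s in range(n_states):
        row = gamma * P[a, s].copy()
        row[s] -= 1.0
        A_ub.append(row)
        b_ub.append(-r[s, a])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n_states)
V = res.x  # optimal value function
```

The point of the paper, as I read it, is that this formulation stops being practical when the action space is huge, which is where the structured value-function approximation comes in.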
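The SPAR-style update described in the notes above can be sketched roughly as follows. This is a hedged reconstruction, not the paper's implementation: the value function over storage levels is represented by its slopes (marginal values, which start at zero), a sampled slope is smoothed into the current estimate, and a projection restores concavity by averaging any slopes that now violate monotonicity.

```python
import numpy as np

def spar_update(slopes, level, sample_slope, stepsize=0.1):
    """One SPAR-style update of a concave piecewise-linear value function.

    slopes[i] is the estimated marginal value of the (i+1)-th unit of
    storage; concavity of the value function means slopes must be
    non-increasing. All slopes start at zero, per the paper's setup.
    """
    slopes = slopes.copy()
    # Smooth the sampled slope into the estimate at the observed level.
    slopes[level] = (1 - stepsize) * slopes[level] + stepsize * sample_slope

    # Projection: restore slopes[i] >= slopes[i+1] by averaging the
    # violating run of slopes. A violation can appear to the left
    # (if the update pushed the slope up) ...
    i = level
    while i > 0 and slopes[i - 1] < slopes[i]:
        slopes[i - 1:level + 1] = slopes[i - 1:level + 1].mean()
        i -= 1
    # ... or to the right (if the update pushed it down).
    j = level
    while j < len(slopes) - 1 and slopes[j + 1] > slopes[j]:
        slopes[level:j + 2] = slopes[level:j + 2].mean()
        j += 1
    return slopes
```

The averaging step conserves the total value while enforcing concavity, which is what lets the approximation and the true value function agree at the optimum in the limit.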
