 Originally in motion planning, the goal was just to find a collision-free path in C-space. If there were dynamic constraints, handling them was a secondary concern: the path found was post-processed to respect those constraints. Doing this, though, is difficult (the path may not actually be executable) and generally produces low-quality, jerky trajectories.
 “This paper addresses the implementation of a planning method and its application to a motion planning benchmark with severe underactuation, significant drift, high dimensionality, discrete system changes that occur at boundary conditions, and finally a system which is not reducible to a system with lower order dynamics.”
 Two goals of this paper:
 Development of online planners that provide stability and completeness guarantees
 Development of offline planners that can be used interactively in prototyping
 There are a number of criteria that make motion planning hard:
 Drift: When a system cannot instantaneously stop. “From a planning perspective, systems with drift are challenging since the shortest path cost between two states frequently disagrees with this metric”
 Underactuation: When the dimension of the control space is less than the dimension of the state space. Can be the result of nonholonomic or other dynamic constraints. “Analyzing the shape and dimensionality of the reachable space in the presence of underactuation and kinematic constraints can be quite challenging”
 Discrete system changes: discontinuities in the dynamic constraints or state variables as the system evolves
 High-dimensional systems: In particular, this is problematic in dynamic systems where state parameters interact in complicated ways
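 To make the drift and discrete-change criteria concrete, here is a toy illustration of my own (not from the paper): a 1D second-order system with state (position, velocity) and bounded acceleration as control, so it cannot stop instantaneously (drift), plus an elastic bounce off a wall at x = 0 (a discrete state change at a boundary, as in Koules).

```python
# Hypothetical toy system (my own, for illustration): second-order
# dynamics with drift plus a discrete change at a boundary condition.

def step(state, accel, dt=0.05, max_accel=1.0):
    """Integrate one step; bounce elastically off the wall at x = 0."""
    x, v = state
    a = max(-max_accel, min(max_accel, accel))  # bounded actuation
    v = v + a * dt
    x = x + v * dt
    if x < 0.0:          # boundary condition: discrete system change
        x, v = -x, -v    # elastic bounce flips the velocity sign
    return (x, v)

# Drift in action: even braking as hard as possible, a state moving
# toward the wall cannot stop in time and the bounce still happens.
state = (0.2, -1.0)               # near the wall, moving toward it
for _ in range(10):
    state = step(state, accel=1.0)  # maximum braking the whole time
```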
 The simulation domain is the computer game of Koules, in which objects push and bounce off each other. It is a second-order dynamic system
 The planning algorithm used is called PDST-Explore; it was previously introduced but is expanded here
 The planner is black-box, uses libraries of trajectories, and its search is guided by a space-partition data structure
 “A single execution of PDST-Explore is used to find a feasible path that kills one Koule [part of the solution to the full problem]. Multiple executions of the planner can be used to construct a sequence of paths that combine to be a solution for a given initial state. However, it is possible that no solution is reachable from the final state of a partial solution generated by one execution of PDST-Explore.” They have special machinery to take care of that situation
 The algorithm performs random walks to optimize coverage of the state space, constructing a tree whose root corresponds to the query state
 The way state-space decomposition and knownness estimation are performed is similar to “Informed and Probabilistically Complete Search for Motion Planning under Differential Constraints,” Bekris, Kavraki, AAAI 2008 (workshop?). Basically, samples are put in the tree, and as each sample is put in a leaf, that leaf is bisected immediately
 Every trajectory must be contained only within one cell.
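 A minimal sketch of the bisect-on-insert idea above, in 1D for brevity (my own reconstruction, not the paper's code; a real planner would store path segments clipped to cells rather than bare points):

```python
# Sketch: inserting a sample finds its leaf cell and immediately
# bisects that leaf, so cell volume shrinks where sampling is dense.

class Cell:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.left = self.right = None   # children after bisection

    @property
    def volume(self):
        return self.hi - self.lo

    def locate(self, x):
        """Descend to the leaf cell containing x."""
        cell = self
        while cell.left is not None:
            mid = (cell.lo + cell.hi) / 2
            cell = cell.left if x < mid else cell.right
        return cell

    def insert(self, x):
        """Insert a sample: find its leaf and bisect it immediately."""
        leaf = self.locate(x)
        mid = (leaf.lo + leaf.hi) / 2
        leaf.left, leaf.right = Cell(leaf.lo, mid), Cell(mid, leaf.hi)

root = Cell(0.0, 1.0)
for x in (0.1, 0.12, 0.125):     # clustered samples
    root.insert(x)
print(root.locate(0.12).volume)  # small cell where sampling is dense
print(root.locate(0.9).volume)   # large cell in an unexplored region
```

 The resulting cell volumes act as a knownness estimate: dense regions end up in small cells, sparse regions in big ones.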
 A controller is used: the trajectories are generated in state space, and the controller then tries to find actions that realize that sequence of states. I thought this is specifically not what they are doing? I must be misunderstanding something. The way the trajectories are sampled has a fair amount of domain knowledge cooked in
 There is a backtracking mechanism to deal with partial solutions from which no full solution is reachable
 Although the cost of the algorithm is O(N log N) in the number of trajectories, costs look linear in practice because the simulator dominates the cost of actually planning
Fast Tree-Based Exploration of State Space for Robots with Dynamics. Ladd, Kavraki. Algorithmic Foundations of Robotics, 2005.
 A longer version of the paper above, covering the same algorithm
 The planner represents samples as path segments rather than individual states, and uses non-uniform subdivisions of the state space to explore properly
 Gets rid of problems related to sample placement, metrics, and coverage estimation
 Tested on kinodynamic point, differential drive, and blimp-like robots
 There are a few families of dynamics planning problems in the literature:
 Exact: Produce optimal paths, but the problem is PSPACE-hard, so these methods are intractable in practice
 Path Conversion: Transform the problem into some other space and solve that easier problem, these methods tend to produce paths of low quality
 Sampling Based
 Roadmap (PRM): Global sampling and local planners build a graph in the space. The local planner can be problematic: it may not be computable for all domains, and is computationally expensive in most of those where it does exist. PRMs are probabilistically complete
 Tree-based (RRT, EST): These are cheaper than PRMs, but may also leverage local planners to connect trees grown from different locations
 There are many exploration planners that work to ensure good coverage of the state space. They are used in bidirectional planning and the Probabilistic Roadmap of Trees algorithm, but most instances of exploration planners are expensive
 Generally these work by measuring which states need to be explored and then growing the tree in that direction
 PDST-Explore works differently, and is designed for domains with second-order dynamics
 Tree-based planners are sensitive to the method that determines proximity
 “In particular, the drift present in second-order systems invalidates inherent assumptions in the way the metric is used”
 The goal of the algorithm is to keep the good qualities of tree-based planners (such as speed) while addressing the shortcomings. In order to do so, the algorithm:

 Uses path segment samples instead of state space samples
 Does not leverage metrics
 Has a greedy deterministic path segment selection method
 Samples put into the tree have to be collision-free
 I’m not understanding something fully because the tree decomposes the state space, but samples in the tree are paths?
 I think the tree decomposition only really serves to select which point is expanded next. All the expansions and next states are
 On a successful (collision-free) expansion, the tree further decomposes the space, changing priorities. On failed expansions, the priority is adjusted manually (penalized)
 Density/knownness of a cell is determined by the # of samples in the cell and the volume of the cell
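 A plausible reconstruction of the greedy, deterministic selection and penalization described above (the exact scoring and update rules are my assumptions, not quoted from the paper): score each sample by priority / cell volume, always expand the minimum-score sample, then penalize its priority so others get a turn.

```python
# Sketch of greedy selection with penalization. Low score = rarely
# tried sample (low priority) in a poorly covered (large) cell.
# The doubling penalty is my assumption; the paper's rule may differ.

def select_next(samples):
    """samples: list of dicts with 'priority' and 'cell_volume'.
    Returns the index of the sample to expand next."""
    scores = [s['priority'] / s['cell_volume'] for s in samples]
    i = min(range(len(samples)), key=scores.__getitem__)
    samples[i]['priority'] *= 2   # penalize so it is not picked forever
    return i

samples = [
    {'priority': 1, 'cell_volume': 0.5},    # big, barely explored cell
    {'priority': 1, 'cell_volume': 0.125},  # small, well-covered cell
]
print(select_next(samples))  # -> 0: the sample in the larger cell wins
```

 Note the selection is deterministic: no metric or nearest-neighbor query is involved, only priorities and cell volumes from the subdivision.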
 They run some experiments on extremely complex 2D navigation domains (real mazes) with a differential-drive robot