Not familiar with this (RL) journal, but as a survey (looking for references) it's probably fine. The references seem to be from top people, so it's useful as a source of references to mine, at the very least.

- Framed from the perspective of working on a high-dimensional problem that can't be tackled all at once in its entirety
- Abstraction as dimension reduction. Abstraction is defined here through subtasks, for which it is fairly easy to determine that certain features are unrelated to the task
- Options
- Hierarchies of Abstract Machines / HAM
- Two forms of optimality when working with subtasks:
- Recursive optimality: each subtask optimally achieves its own goal state
- Hierarchical optimality: each subtask is optimal with respect to the overall goal
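A minimal sketch of the options framework mentioned above (an option as a triple of initiation set, intra-option policy, and termination condition); all names and the line-world environment are illustrative assumptions, not from the survey:

```python
import random
from dataclasses import dataclass
from typing import Callable, Set

State = int
Action = int

@dataclass
class Option:
    initiation_set: Set[State]             # states where the option may start
    policy: Callable[[State], Action]      # intra-option policy pi(s) -> a
    termination: Callable[[State], float]  # beta(s): prob. of stopping in s

def run_option(option: Option, state: State,
               step: Callable[[State, Action], State],
               max_steps: int = 100, seed: int = 0) -> State:
    """Run the option from `state` using an environment transition
    step(s, a) -> s' until beta fires (or max_steps is reached)."""
    rng = random.Random(seed)
    assert state in option.initiation_set
    for _ in range(max_steps):
        state = step(state, option.policy(state))
        if rng.random() < option.termination(state):
            break
    return state

# Usage: a "walk right until state 5" option on an integer line world.
go_right = Option(
    initiation_set=set(range(5)),
    policy=lambda s: 1,
    termination=lambda s: 1.0 if s >= 5 else 0.0,
)
print(run_option(go_right, 0, step=lambda s, a: s + a))  # -> 5
```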

- Although recursive optimality yields poorer results, it has the advantage of making planning easier, whereas hierarchical optimality leaves the problem difficulty unchanged
- MAXQ and follow-up work, read:
- Andre, D. & Russell, S. J. (2002). State abstraction for programmable reinforcement learning agents, AAAI/IAAI, pp. 119–125.
- Marthi, B., Russell, S. J. & Andre, D. (2006). A compact, hierarchical Q-function decomposition, UAI, AUAI Press.
- <Stuff from Andy’s students is all over this paper>
- Still, more papers deal with solving subtasks than with finding them
- Existing algorithms for finding subtasks fall under these categories:
- Subgoals as states needed to complete a task (high visit frequency, perhaps only on successful trajectories; reward gradient)
- Subgoals as states that provide access to other regions (related to graph cuts, for example based on flow)
- Subtasks based on factored state space
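The first category (subgoals as frequently visited states on successful trajectories) can be sketched in a few lines; the function name and the exclusion of start/goal states are my own assumptions:

```python
from collections import Counter

def frequent_subgoals(successful_trajectories, top_k=3, exclude=()):
    """Candidate subgoals: states visited by many successful trajectories.
    Counting each state once per trajectory keeps loops from dominating."""
    counts = Counter()
    for traj in successful_trajectories:
        for s in set(traj):
            counts[s] += 1
    for s in exclude:  # start/goal states are trivially frequent
        counts.pop(s, None)
    return [s for s, _ in counts.most_common(top_k)]

# Three successful trajectories that all pass through "doorway" states 2 and 5.
trajs = [[0, 1, 2, 5, 8], [0, 3, 2, 5, 7, 8], [0, 1, 2, 5, 8]]
print(frequent_subgoals(trajs, top_k=2, exclude=(0, 8)))  # -> [2, 5]
```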

- References for a number of papers that do state abstraction by merging states.
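State abstraction by merging can be sketched as grouping states that agree on the task-relevant features (connecting back to the factored-state-space and dimension-reduction views above); everything below is an illustrative assumption, not the method of any cited paper:

```python
from collections import defaultdict

def merge_states(states, relevant_features):
    """Abstract a factored state space: concrete states that agree on the
    task-relevant features collapse into one abstract state."""
    abstract = defaultdict(list)
    for s in states:
        key = tuple(s[f] for f in relevant_features)
        abstract[key].append(s)
    return dict(abstract)

# For a navigation subtask only (x, y) matter; "color" is irrelevant,
# so the first two concrete states merge into one abstract state.
states = [
    {"x": 0, "y": 0, "color": "red"},
    {"x": 0, "y": 0, "color": "blue"},
    {"x": 1, "y": 0, "color": "red"},
]
merged = merge_states(states, relevant_features=("x", "y"))
print(len(merged))  # -> 2 abstract states
```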