- Attempt “… to learn the partially ordered structure inherent in human everyday activities from observations by exploiting variability in the data.”
- Learns a full joint probability distribution over the actions that make up a task, their partial ordering, and their parameters
- Can be used for classification, but also for inferring which actions are relevant to a task, which objects are used, whether the task was done correctly, or what is typical for an individual
- Uses synthetic data as well as the TUM Kitchen and CMU MMAC datasets
- Uses Bayesian Logic Networks (another paper with overlapping authors takes the same approach)
- Common alternate approaches are HMMs, conditional random fields (CRFs), or suffix trees
- But these are most effective when the ordering of the subtasks is fairly fixed
- Also, the Markov assumption of HMMs doesn't really hold for the way the data is often represented; capturing the dependencies may require the full history
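To illustrate the Markov-assumption problem, a toy sketch (hypothetical action names, not the paper's data): a first-order model conditions only on the previous action, so a constraint like "rinse must happen at some point before dry" becomes invisible whenever another action intervenes.

```python
from collections import Counter, defaultdict

# Toy demonstrations: "rinse" is always required before "dry",
# but other actions may come between them (long-range dependency).
seqs = [
    ["rinse", "scrub", "dry"],
    ["rinse", "dry"],
]

# First-order (Markov) transition counts: next action given current only.
trans = defaultdict(Counter)
for s in seqs:
    for cur, nxt in zip(s, s[1:]):
        trans[cur][nxt] += 1

# The transition "scrub" -> "dry" looks perfectly fine in isolation,
# so a first-order model would also accept the invalid sequence
# ["scrub", "dry"] (no rinse): the requirement that "rinse" occurred
# earlier is outside the one-step history.
print(trans["scrub"]["dry"])
```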
- Also some other approaches for representing partial ordering
- Key distinction (their claim): "All these approaches focus only on the ordering among atomic action entities, while our system learns a distribution over the order as well as the action parameters." — i.e., prior work models only the ordering of atomic actions, whereas this system additionally learns a distribution over the actions' parameters
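For intuition about recovering a partial order from demonstrations, here is a crude deterministic sketch (an intersection of pairwise precedence constraints, with made-up action names), not the paper's probabilistic BLN formulation:

```python
from itertools import combinations

def learn_partial_order(sequences):
    """Keep edge (a, b) only if a precedes b in every sequence that
    contains both -- a hard-constraint toy version of order learning."""
    actions = {a for seq in sequences for a in seq}
    order = set()
    for a, b in combinations(sorted(actions), 2):
        for first, second in ((a, b), (b, a)):
            shared = [s for s in sequences if first in s and second in s]
            if shared and all(s.index(first) < s.index(second) for s in shared):
                order.add((first, second))
    return order

# Two demonstrations of the same task with two middle steps swapped:
demos = [
    ["take_bowl", "crack_egg", "stir", "pour"],
    ["take_bowl", "stir", "crack_egg", "pour"],
]
order = learn_partial_order(demos)
print(sorted(order))
```

Because "crack_egg" and "stir" appear in both orders across the demonstrations, no edge is learned between them, while "take_bowl" and "pour" keep their fixed positions.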
- The literature on partially ordered plans requires lots of prior information and has mostly been applied to synthetic problems
- Working off the TUM dataset, they had to resort to bagging and data-noisification techniques to get enough samples
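A minimal sketch of what bagging plus noisification could look like on action sequences (the parameters and the adjacent-swap noise model are my assumptions; the paper's exact augmentation scheme is not given in these notes):

```python
import random

def augment(sequences, n_bags=10, frac=0.8, swap_prob=0.2, seed=0):
    """Bootstrap-style bagging with light ordering noise (hypothetical).

    Each bag is a random subsample of the demonstrations; within each
    sampled sequence, adjacent actions are occasionally swapped to
    simulate variability in execution order.
    """
    rng = random.Random(seed)
    bags = []
    for _ in range(n_bags):
        bag = []
        k = max(1, int(frac * len(sequences)))
        for seq in rng.sample(sequences, k):
            seq = list(seq)
            for i in range(len(seq) - 1):
                if rng.random() < swap_prob:
                    seq[i], seq[i + 1] = seq[i + 1], seq[i]
            bag.append(seq)
        bags.append(bag)
    return bags

demos = [["take_bowl", "stir", "pour"], ["take_bowl", "crack_egg", "pour"]]
bags = augment(demos, n_bags=3)
print(len(bags), [len(b) for b in bags])
```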
- Needs annotation / data labeling
- Learns, for example, that after making brownies (the CMU MMAC dataset) some people like to put the frying pan in the sink, and others on the stove
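The frying-pan observation amounts to a per-person conditional distribution over an action parameter (where the pan ends up). A toy count-based sketch, with entirely made-up subjects and counts:

```python
from collections import Counter, defaultdict

# Hypothetical observations of where each subject puts the frying pan
# after baking -- illustrative only, not the CMU MMAC data.
obs = [
    ("subject_1", "sink"), ("subject_1", "sink"), ("subject_1", "stove"),
    ("subject_2", "stove"), ("subject_2", "stove"),
]

counts = defaultdict(Counter)
for person, place in obs:
    counts[person][place] += 1

def p_place(person, place):
    """Maximum-likelihood estimate of P(place | person) from counts."""
    c = counts[person]
    return c[place] / sum(c.values())

print(round(p_place("subject_1", "sink"), 2))  # 0.67
print(p_place("subject_2", "stove"))           # 1.0
```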
- Performance of this approach is much better than that of CRFs, and it is more robust to variability in how people undertake tasks