- Proposes automated probabilistic models of everyday activities (AM-EvA)
- Works at differing levels of abstraction, starting at joint poses and trajectories
- Integrates different kinds of action models into one framework, together with a-priori knowledge about actions

- <Seems like their motivation is a bit different as> they are interested in seeing whether activities degrade or change (e.g., forgetting what you were doing due to dementia)
- They also have “kitchen” activities recorded, but these are table-setting tasks, including getting items from cabinets; there is also a skeleton model
- No head-cam, it seems
- But data is very nicely annotated

- Data comes from a multi-cam setup alone; tracks 51 DOF (via Bayesian particle filters)
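As a refresher on the tracking machinery (this is not their 51-DOF implementation — just a toy 1D bootstrap filter with made-up noise values), the predict/weight/resample loop they scale up to full-body poses looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation, obs_noise=0.5, proc_noise=0.2):
    """One predict-weight-resample cycle of a bootstrap particle filter (1D toy)."""
    # Predict: propagate particles through a random-walk motion model
    particles = particles + rng.normal(0.0, proc_noise, size=particles.shape)
    # Weight: Gaussian observation likelihood
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_noise) ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a joint angle drifting from 0 toward 1
particles = rng.normal(0.0, 1.0, size=500)
weights = np.full(500, 1.0 / 500)
for obs in [0.1, 0.3, 0.5, 0.7, 0.9]:
    particles, weights = particle_filter_step(particles, weights, obs)
print(round(particles.mean(), 2))  # posterior mean trails the drifting observation
```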
- Mentions Gaussian Process Dynamical Models for embedding human motion data into latent space <need to check this out>
- This can “…serve as a starting point for the (unsupervised) segmentation of trajectories into meaningful fragments, whose sequential ordering in turn provides the input for a further interpretation of the overall action sequence in a (discrete) time-series model. Such an unsupervised approach will group motions mainly with respect to their kinematic or dynamic properties.”
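To make the quoted idea concrete, here is a crude stand-in for that unsupervised kinematic segmentation — thresholding speed rather than embedding with a GPDM, with invented labels and data — just to show a trajectory splitting into contiguous fragments whose sequence could feed a discrete time-series model:

```python
import numpy as np

def segment_by_speed(trajectory, threshold=0.05):
    """Split a 1D position trajectory into contiguous 'moving' vs 'still' fragments.

    Returns a list of (label, start_index, end_index) tuples.
    """
    speed = np.abs(np.diff(trajectory))
    labels = np.where(speed > threshold, "moving", "still")
    fragments = []
    start = 0
    for i in range(1, len(labels)):
        if labels[i] != labels[start]:
            fragments.append((str(labels[start]), start, i))
            start = i
    fragments.append((str(labels[start]), start, len(labels)))
    return fragments

# Still, then a reach, then still again
traj = np.concatenate([np.zeros(5), np.linspace(0, 1, 5), np.ones(5)])
print(segment_by_speed(traj))  # → [('still', 0, 5), ('moving', 5, 9), ('still', 9, 14)]
```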

- Models here are generative, so can be used to predict, and even assign probabilities to motion sequences
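As a toy illustration of the generative angle (a first-order Markov chain over action labels — far simpler than their models, and the labels and demos here are invented), the fitted model can score a motion sequence, so an unusual ordering gets a lower probability:

```python
import math
from collections import defaultdict

def fit_markov(sequences, smoothing=1.0):
    """Estimate first-order transition probabilities over action labels.

    Laplace smoothing keeps unseen transitions from getting probability zero.
    """
    counts = defaultdict(lambda: defaultdict(float))
    states = set()
    for seq in sequences:
        states.update(seq)
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for a in states:
        total = sum(counts[a].values()) + smoothing * len(states)
        probs[a] = {b: (counts[a][b] + smoothing) / total for b in states}
    return probs

def log_prob(seq, probs):
    """Log-probability of a label sequence under the fitted chain."""
    return sum(math.log(probs[a][b]) for a, b in zip(seq, seq[1:]))

demos = [["reach", "grasp", "carry", "place"]] * 5
model = fit_markov(demos)
typical = log_prob(["reach", "grasp", "carry", "place"], model)
odd = log_prob(["reach", "place", "grasp", "carry"], model)
print(typical > odd)  # → True: the demonstrated ordering scores higher
```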
- Discuss action segmentation, and then that combined with hierarchical segmentation
- Mentions that external descriptions of a behavior can also be incorporated <although it’s not clear how>. They took data from ehow, parsed it, and then ran it against their segmented hierarchical data to try to infer what the person was doing, although it seems primarily speculative at this point, as they write “We will explore this direction of research in the near future.”
- Although doing the same activity multiple times will result in variability, some of the behavior will remain the same; they seek to find those regularities.
- Their AM-EvA framework supports two types of statistical relational models:
- Bayesian logic networks (BLN)
- Markov logic networks
- Both of these allow for representation of “meta-model of probability distributions, i.e. a template for the construction of a concrete probability distribution that can be represented as a graphical model… [either 1 or 2 immediately above]”
- Bayesian logic networks are a little more restrictive, but in turn easier to use, so that’s what the authors go with

- BLN is a Bayesian network that does not allow constructs that conflict with provided logic
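A minimal sketch of that constraint mechanism, over a made-up two-variable world: worlds that violate the hard rule get probability zero and the rest is renormalized. A real BLN defines the distribution via a relational template rather than an explicit table, but the effect of the logical constraints is the same:

```python
from itertools import product

# Toy joint over (action, object_in_hand) as an explicit table
actions = ["pour", "stir"]
objects = ["cup", "nothing"]
joint = {w: 0.25 for w in product(actions, objects)}

def violates(world):
    action, obj = world
    # Hypothetical hard rule: pouring requires holding something
    return action == "pour" and obj == "nothing"

# Zero out logically impossible worlds, then renormalize
constrained = {w: (0.0 if violates(w) else p) for w, p in joint.items()}
z = sum(constrained.values())
constrained = {w: p / z for w, p in constrained.items()}
print(round(constrained[("pour", "cup")], 3))  # → 0.333
```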
- <This stuff is pretty slick. Right now I don’t think we are interested in discretizing behavior into segments, but if we do it in the end, this is a neat option to check out>
- Because everything is segmented and hierarchically labeled, the corpus can be queried for specific behaviors (such as episodes where something was carried with both hands), and it will pull them out
- Can also be queried in other ways for “action-related concepts”, such as where someone was standing while performing a certain action. This can be used, among other things, for user identification
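The kinds of queries in the two bullets above can be sketched over a hypothetical annotation schema (field names and values invented here; the real corpus is richer):

```python
# Each annotated segment records the action label, hands used, and actor location
segments = [
    {"action": "carry", "hands": ["left", "right"], "location": "counter"},
    {"action": "carry", "hands": ["right"], "location": "table"},
    {"action": "open_cabinet", "hands": ["left"], "location": "cabinet"},
    {"action": "carry", "hands": ["left", "right"], "location": "table"},
]

# Query 1: episodes where something was carried with both hands
both_hands = [s for s in segments if s["action"] == "carry" and len(s["hands"]) == 2]
print([s["location"] for s in both_hands])  # → ['counter', 'table']

# Query 2: where was the actor standing during 'open_cabinet'?
print({s["location"] for s in segments if s["action"] == "open_cabinet"})  # → {'cabinet'}
```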
- Can give probability estimates of what higher-order behavior is going on, given some lower-order behavior
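That inference is, at heart, a Bayesian posterior over activities given an observed low-level action; a hand-rolled sketch with invented activities and numbers:

```python
# Prior over high-level activities and the likelihood of observing the
# low-level action "open_cabinet" under each (all numbers hypothetical)
prior = {"set_table": 0.5, "cook": 0.3, "clean": 0.2}
likelihood = {"set_table": 0.6, "cook": 0.3, "clean": 0.1}  # P(open_cabinet | activity)

# Bayes: posterior ∝ prior × likelihood, normalized by the evidence
evidence = sum(prior[a] * likelihood[a] for a in prior)
posterior = {a: prior[a] * likelihood[a] / evidence for a in prior}
print(max(posterior, key=posterior.get))  # → set_table
```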