Learning Slow Features for Behaviour Analysis. Zafeiriou, Nicolaou, Zafeiriou, Nikitidis, Pantic. ICCV 2013.

  1. Discusses deterministic as well as probabilistic SFA <I don’t know anything about the latter>
  2. Derive a version of deterministic SFA “… that is able to identify linear projections that extract the common slowest varying features of two or more sequences.”
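For my own reference, single-sequence deterministic linear SFA can be sketched in a few lines: whiten the data, then take the directions along which the temporal differences vary least. This is my reconstruction of the standard algorithm, not the paper's multi-sequence formulation; all names are mine.

```python
import numpy as np

def linear_sfa(X, n_slow=1):
    """Sketch of deterministic linear SFA.
    X: (T, D) time series.
    Returns projection matrix W (D, n_slow) and slow features Y (T, n_slow)."""
    # Center the data
    Xc = X - X.mean(axis=0)
    # Whiten via an eigendecomposition of the covariance
    cov = Xc.T @ Xc / len(Xc)
    evals, evecs = np.linalg.eigh(cov)
    keep = evals > 1e-10                       # drop null directions
    S = evecs[:, keep] / np.sqrt(evals[keep])  # whitening matrix
    Z = Xc @ S                                 # whitened signal, unit covariance
    # Finite differences approximate the time derivative
    dZ = np.diff(Z, axis=0)
    dcov = dZ.T @ dZ / len(dZ)
    # Slowest directions = eigenvectors of dcov with the smallest eigenvalues
    devals, devecs = np.linalg.eigh(dcov)      # eigh sorts ascending
    W = S @ devecs[:, :n_slow]
    return W, Xc @ W
```

On a linear mixture of a slow and a fast sinusoid, the first slow feature recovers the slow source (up to sign).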
  3. Also derive an EM algorithm for inference in probabilistic SFA, and extend it to handle multiple data sequences
  4. This EM-SFA algorithm “… can be combined with dynamic time warping techniques for robust sequence time-alignment.”
  5. Used on face videos
  6. <A number of good references in here – a few I still haven’t read>
  7. Seems like the probabilistic version of SFA is derived via its equivalence to an ML estimate of a linear generative model, specifically “… a Gaussian linear dynamical system with diagonal covariances over the latent space.” <Need to look into that more>
  8. Their version of probabilistic SFA is novel because it is based on EM, as opposed to other ML approaches.  This allows for modeling latent distributions that aren’t restricted by diagonal covariance matrices.
    1. <There is a hack in the math in the standard ML probabilistic SFA that maps variance to 0, which “… essentially reduces the model to a deterministic one…”>
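As I understand the generative model in question (details assumed, names mine), the latent dimensions evolve as independent Gaussian AR(1) processes — the “diagonal covariances” part — and slowness corresponds to an autoregressive coefficient near 1. A hedged sketch of sampling from such a model:

```python
import numpy as np

def sample_lds(T, W, lam, obs_noise=0.1, rng=None):
    """Sketch of a Gaussian linear dynamical system with diagonal latent
    covariances (my reading of the model behind probabilistic SFA).
    lam: per-dimension AR(1) coefficients in (0, 1); slow dims have lam near 1.
    Innovation variance 1 - lam**2 gives each latent unit stationary variance."""
    rng = np.random.default_rng(rng)
    K = len(lam)
    X = np.zeros((T, K))
    for t in range(1, T):
        # Each latent dimension is an independent AR(1) process
        X[t] = lam * X[t - 1] + np.sqrt(1 - lam**2) * rng.standard_normal(K)
    # Linear observation model with isotropic Gaussian noise
    Y = X @ W.T + obs_noise * rng.standard_normal((T, W.shape[0]))
    return X, Y
```

A latent dimension with lam = 0.99 changes much less per step than one with lam = 0.1, which is the sense in which the coefficient encodes slowness.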
  9. Facial expressions are described in terms of action units (AUs); they claim this is the first unsupervised AU method based on temporal data. Other unsupervised approaches are based on “…global structures…” <I assume that means global properties of still frames>
  10. <Not taking notes on particulars of math for probabilistic SFA, both ML and EM versions>
  11. When there is more than one data sequence, their extension of SFA is designed to find the slowly varying features common to all the sequences
  12. <Naturally> the assumption in multi-sequence SFA is that the sequences share a common latent space
    1. Math for both deterministic and probabilistic multi-sequence SFA
  13. <The time warping part / aligning sequences of different length is probably not relevant>
  14. The video data they work from is a public database of 400 annotated videos, each with a constrained structure of neutral, onset, apex, and offset.
  15. Not so clearly described, but it seems they don’t operate on raw video data; they work from 68 facial points tracked in 2D
  16. Compare performance of EM-SFA, SFA, and traditional linear dynamical systems (LDS).
    1. Seems like each is run on a subset of facial data, either mouth, eyes, or brows
  17. Use slowest feature of SFA as a classifier for when the expression is being performed (when its value goes from negative to positive, and back to negative)
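That decision rule is simple enough to sketch: the expression is treated as active while the slowest feature is positive (sign convention assumed; helper names are mine).

```python
import numpy as np

def onset_offset(y):
    """Hypothetical helper for the sign-change rule: the expression is 'on'
    while the slowest feature y is positive. Returns the frame indices where
    y crosses from negative to positive (onsets) and back (offsets)."""
    active = np.asarray(y) > 0
    d = np.diff(active.astype(int))        # +1 at onsets, -1 at offsets
    onsets = np.where(d == 1)[0] + 1
    offsets = np.where(d == -1)[0] + 1
    return onsets, offsets
```

For a trace that dips negative, rises through the apex, and falls back, this yields one onset and one offset frame.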
  18. EM-SFA outperforms SFA and LDS
  19. <Did a quick read, but I don’t see where they have experimental results for the multi-sequence version.  If it is indeed missing, that is strange, because they already have the math and the data to test it.>
  20. The temporal alignment algorithm is used to match up videos from the same category so that the neutral, apex, etc… frames are synchronized
