Slow Feature Analysis: Unsupervised Learning of Invariances. Wiskott, Sejnowski. Neural Computation 2002.


  1. Pick out features that vary slowly from time-series data
  2. “It is based on a nonlinear expansion of the input signal and application of PCA to this expanded signal and its time derivative.”
    1. Sounds a little like the first part is SVMish
  3. It can be applied hierarchically to extract features
  4. It is used to model the visual system
  5. Can learn many different things (translation, rotation, contrast, etc.) based on the training set
  6. Doesn’t need a large training corpus
  7. “Performance degrades if the network is trained to learn multiple invariances simultaneously.”
  8. It has been common in ANN systems to build invariances directly into the representations
  9. Another method is to map different representations to each other in an invariant way (such as translation or size), but this needs to be set up in advance
  10. The approach here is different from the two previous approaches listed because it is based on learning invariances from temporal input
    1. The idea is that our perception of the environment varies slowly, while at a low level changes occur quickly; this attempts to capture that phenomenon
    2. “…a slowly-varying representation can be considered to be of higher abstraction level than a quickly varying one.”
  11. “It is important to note here that the input-output function computes the output signal instantaneously, only on the basis of the current input.”
    1. <This seems like it must then throw out a great deal of useful information – basically it’s saying context doesn’t matter>
  12. When dealing with a moving object, being able to identify the object and its location allows the motion to be described in a way that changes slowly (certainly compared to individual pixels, which change drastically as the object moves across them)
    1. Commonly, object identity changes slowly, but its position may change much more quickly
  13. The formal problem (collected in the LaTeX block after this item):
    1. Given I-dimensional vector inputs x(t)
    2. Find an input-output function g such that y(t) = g(x(t)) has dimension J
    3. Such that Δ(y_j) = ⟨(y_j')²⟩, the time-averaged squared derivative of each output component, is minimal
    4. Under the additional constraints that:
      1. ⟨y_j⟩ = 0 (zero mean)
      2. ⟨y_j^2⟩ = 1 (unit variance)
      3. ⟨y_j y_k⟩ = 0 for all k < j (decorrelation), where ⟨f⟩ denotes the time average 1/(t_1 – t_0) \int_{t_0}^{t_1} f(t)dt
    5. 13.3 means output variation should be minimal
    6. 13.4.1 and 13.4.2 add constraints so the output cannot collapse to a trivial constant solution
    7. 13.4.3 ensures each signal component does not reproduce another
    8. It says there is an order induced such that Δ(y_j) <= Δ(y_k) if j < k. <how does this fall out of the constraints?>
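To keep the notation in one place, here is my transcription of the optimization problem from 13.1–13.4 above (a paraphrase of the paper's statement, not a quote):

```latex
% Given an I-dimensional input x(t), find g: R^I -> R^J with y(t) = g(x(t)) such that, for each j,
\min_{g_j}\; \Delta(y_j) := \langle \dot{y}_j^2 \rangle
\quad\text{subject to}\quad
\langle y_j \rangle = 0,\qquad
\langle y_j^2 \rangle = 1,\qquad
\langle y_j y_k \rangle = 0 \;\;\text{for all } k < j,
% where the time average is
\langle f \rangle := \frac{1}{t_1 - t_0}\int_{t_0}^{t_1} f(t)\,dt .
```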
  14. “This learning problem is an optimization problem of variational calculus and in general is difficult to solve.”  If, however, each g_j is a linear combination of a fixed set of nonlinear functions, the problem becomes easier to solve
  15. The paper then gives the algorithm that performs the optimization
  16. All output components y_j are linear combinations of the same nonlinear basis functions; only the weights change
  17. Ah, there is a brief mention of the similarity to SVMs here.
  18. Ah, they mention that if the nonlinearly expanded signal (the basis functions applied to x) is normalized to zero mean and identity covariance, then the constraints are satisfied iff the weight vectors are orthonormal
  19. The solution to this problem is an eigendecomposition problem
  20. The algorithm is then refined to normalize the input signals
  21. Propose using degree 1 or 2 basis functions (linear or quadratic)
  22. And then normalize the output so the results are zero-mean and have identity covariance
    1. They call this sphering or whitening, and the matrix that accomplishes this can be arrived at by PCA
  23. Then do PCA again, this time on the time derivative of the whitened signal
    1. The J eigenvectors with the smallest eigenvalues give the normalized weight vectors
    2. This produces the output function (a code sketch of the whole pipeline appears after item 25 below)
  24. When testing, inputs must be normalized in the same manner as the data in the training set
  25. “For practical reasons, SVD is used in steps 4 and 5 [22 and 23] instead of PCA.  SVD is preferable for analyzing degenerate data in which some eigenvalues are very close to zero, which are then discarded in step 4 [22].  The nonlinear expansion sometimes leads to degenerate data, since it produces a highly redundant representation where some components may have a linear relationship… [more stuff of the like, and numeric weirdness ].”
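To make items 21–25 concrete, here is a minimal numpy sketch of the pipeline as I understand it: quadratic expansion, sphering/whitening, then taking the directions of smallest variance of the time-differenced whitened signal, with SVD doing the eigen work as in the quote above. All function and variable names are mine, and the toy data at the end is only an illustration, not the paper's experiment.

```python
import numpy as np

def quadratic_expansion(x):
    """Degree-2 polynomial expansion (item 21): all x_i plus all products x_i * x_j."""
    n, d = x.shape
    rows, cols = np.triu_indices(d)
    quad = (x[:, :, None] * x[:, None, :])[:, rows, cols]
    return np.hstack([x, quad])

def sfa(x, n_outputs):
    """Minimal SFA sketch; returns slow outputs for x and a transform for new data."""
    z = quadratic_expansion(x)
    z_mean = z.mean(axis=0)
    z0 = z - z_mean                                      # zero mean (item 22)
    # Sphering/whitening via SVD (items 22 and 25); near-zero singular values are dropped.
    _, s, vt = np.linalg.svd(z0, full_matrices=False)
    keep = s > 1e-7 * s[0]
    whiten = vt[keep].T * (np.sqrt(len(z0)) / s[keep])   # z0 @ whiten has ~identity covariance
    w = z0 @ whiten
    # SVD/PCA on the time derivative (finite difference) of the whitened signal (item 23).
    dw = np.diff(w, axis=0)
    _, s2, vt2 = np.linalg.svd(dw, full_matrices=False)
    slow_dirs = vt2[-n_outputs:][::-1]                   # smallest singular values = slowest (23.1)
    def transform(x_new):
        # Test data gets exactly the training normalization (item 24).
        z_new = quadratic_expansion(x_new) - z_mean
        return (z_new @ whiten) @ slow_dirs.T
    return transform(x), transform

# Toy usage: the slow source sin(t) is only recoverable after a quadratic expansion,
# since x[:, 0] mixes it with cos(11t)^2 and x[:, 1] carries cos(11t).
t = np.linspace(0, 2 * np.pi, 512)
x = np.c_[np.sin(t) + np.cos(11 * t) ** 2, np.cos(11 * t)]
y, transform = sfa(x, n_outputs=1)   # y[:, 0] should be roughly proportional to sin(t)
```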
  26. <I was wondering where the ANN part of this comes up; now it is explained> They consider the basis functions to make up the hidden layer, and the weights on the basis functions to be the weights between the hidden and output layers.
    1. <They actually propose two different ANNs that would do the job.  This explanation is a bit interesting as it unifies a couple of approaches, but on the other hand the ANN treatment is inelegant, doesn’t convey what is going on well, and has nothing to do with the actual implementation.>
  27. “It is useful to measure the invariance of signals not by the value of Δ directly but by a measure that has a more intuitive interpretation.”  The measure they propose is then defined
  28. <How do they define Δ exactly though?  Is it based on integration?  I hope not…  Oh, I suppose we are probably working in discrete time, so we don’t need/can’t do more sophisticated methods of integration; see the note after this item>
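Answering my own question as best I can: as I read the paper, Δ is just the time-averaged squared derivative (same ⟨·⟩ as in item 13), and in discrete time the derivative becomes a finite difference:

```latex
\Delta(y_j) := \langle \dot{y}_j^2 \rangle ,
\qquad
\dot{y}_j(t) \approx \frac{y_j(t + \Delta t) - y_j(t)}{\Delta t}
\quad \text{(discrete time)} .
```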
  29. Going back to the point about the different possible ANNs that do this (26.1), they say the type of network depends on what basis functions are used <So again, nice to see the connection to ANNs, but the connection isn’t really useful beyond the conceptual level and shouldn’t be stressed too much>
  30. One example application of SFA covers two things:
    1. “… learning response behavior of complex cells based on simple cell responses… “
    2. “…estimation of disparity and motion.”
  31. Then there will be a more sophisticated example that requires chaining 3 SFAs (see the layered sketch after this item)
    1. This results in translation invariance
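For what it’s worth, the MDP (Modular toolkit for Data Processing) library ships an SFA node, and chaining stages like this is roughly a one-liner. I am writing the node names and call signatures from memory, so treat them as assumptions to verify; the layer sizes below are made up and are not the paper’s.

```python
import numpy as np
import mdp  # Modular toolkit for Data Processing; provides SFANode (API from memory)

# Three chained stages in the spirit of item 31: each stage does a quadratic
# expansion followed by SFA, and its slow outputs feed the next stage.
flow = mdp.Flow([
    mdp.nodes.PolynomialExpansionNode(2), mdp.nodes.SFANode(output_dim=16),
    mdp.nodes.PolynomialExpansionNode(2), mdp.nodes.SFANode(output_dim=8),
    mdp.nodes.PolynomialExpansionNode(2), mdp.nodes.SFANode(output_dim=2),
])

x = np.random.randn(512, 9)   # placeholder training sequence (time x sensors), not the paper's data
flow.train(x)                 # trains each SFA stage in turn
slow_features = flow(x)       # outputs of the final SFA stage
```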
  32. These are related to problems our visual system may deal with, but no claims here are made about biological plausibility
  33. Implementation was done in Mathematica <Says something interesting about the authors.>
  34. The first 2 examples use 5 monocular simple cells modeled by Gabor functions.
  35. Data size 512 points
  36. <The input dimension and size of corpus seem pretty small, but the paper is from 2002 so I should probably give them a break.>
  37. Δt is a fixed amount
  38. “The amplitude and phase modulation signals are low-pass filtered Gaussian white noise normalized to zero mean and unit variance”
    1. <Not sure what this means in the context of the experiment; my guess at how such a signal could be generated is sketched below>
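My guess at what that means operationally (not sure this matches the paper’s filtering): draw Gaussian white noise, low-pass it, here with a crude moving-average filter, then renormalize to zero mean and unit variance.

```python
import numpy as np

def slow_noise(n_samples, window=32, seed=None):
    """Low-pass filtered Gaussian white noise, normalized to zero mean and unit variance."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples + window)
    kernel = np.ones(window) / window                 # crude moving-average low-pass filter
    slow = np.convolve(white, kernel, mode="valid")[:n_samples]
    return (slow - slow.mean()) / slow.std()

# e.g. hypothetical amplitude and phase modulation signals for the Gabor-modeled simple cells
amplitude = slow_noise(512, seed=0)
phase = slow_noise(512, seed=1)
```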
  39. The experiment is set up so that 1 of the 5 simple cells has a different orientation and phase than the others, making it independent; it is designed as a distractor that SFA should ignore
  40. <I can’t make any sense out of the graphs they have>
  41. <I also have to admit to not caring so much about this particular application>
  42. In this experiment, the results are said to be good because a degree-2 poly captures the slow features well.  The third example is supposed to be harder.
  43. In example 3 they have a few polynomial layers, leading to a sparse higher-dimensional polynomial <Can’t you just do this with one layer and sparsify it in the same way there?>
  44. “The algorithm can extract not only slowly varying features but also rarely varying ones.”
  45. Then they move on to a model of a 1D retina with 65 sensors and 2 SFA layers
  46. <why is everything low-pass filtered?>
  47. They had to clip the values of the outputs of each SFA layer, which was needed to prevent significant errors from extrapolation (a sketch of this kind of clipping is below).  Aside from that, overfitting wasn’t a problem
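I read the clipping as bounding each layer’s outputs to roughly the range seen during training, so the next layer’s quadratic expansion is never fed badly extrapolated values; the exact bounds used in the paper may well differ. A minimal sketch:

```python
import numpy as np

def fit_clip_bounds(train_outputs, margin=0.1):
    """Record per-component training ranges, widened a little by `margin`."""
    lo, hi = train_outputs.min(axis=0), train_outputs.max(axis=0)
    pad = margin * (hi - lo)
    return lo - pad, hi + pad

def clip_outputs(outputs, bounds):
    """Clip one SFA layer's outputs before feeding them to the next layer."""
    lo, hi = bounds
    return np.clip(outputs, lo, hi)
```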
  48. <Really, no idea what the graphs are depicting>
  49. Different parts of the network can provide translation invariance, “what” information, or “where” information
  50. <I’m mostly skipping over about half of this paper – mostly in-depth discussion of results>
  51. <Ok, picking back up at the conclusion>
  52. “SFA is somewhat unusual in that directions of minimal variance rather than maximal variance are extracted.”  They argue that without normalization this would be expected to pick up noise, but because of the normalization the signal ends up varying less than the noise does <?>, so the noise is discarded.
    1. Slowly varying noise, though, is susceptible to being picked up; in this case low-pass filtering may help
  53. <It would be interesting instead to try to extract maximal information (in the information-theoretic sense) instead of minimizing variance.  Apparently some connections like this have been made with the algorithm (Shaw, 2003; Creutzig and Sprekeler, 2008), but using an information-theoretic criterion seems like it makes the most sense.  I think that any approach that focuses simply on variation and not information (such as PCA) has some potential pitfalls – variance (small or large) doesn’t matter by itself; information does.  Sometimes variation carries information, but sometimes it does not.>
  54. Although many invariances are found, they say they failed to find a similarity measure in one of the experiments
  55. Actually here the point is made that doing an information-theoretic optimization does not really change the algorithm (the objectives are left almost the same)
    1. The information theoretic approach is difficult in the case of continuous inputs/outputs
    2. <Ok this makes me feel better>