Kernel-based Extraction of Slow Features: Complex Cells Learn Disparity and Translation Invariance from Natural Images. Bray, Martinez. NIPS 2002.


  1. <An on-line method.  See point #13>
  2. Basic point is that vanilla SFA does not scale well to increasing dimension
  3. Here they use a <different?> objective function “…that maximizes output variance over a long period whilst minimizing variance over a shorter period.”
    1. This work shows how to do this via the kernel trick
  4. “This leads to an efficient method that maps to an architecture that could be biologically implemented either by Sigma-Pi neurons, or fixed RBF networks (as described for SFA[…]).”
  5. “We demonstrate that using this method to extract features that vary slowly in natural images leads to the development of both the complex-cell properties of translation invariance and disparity coding simultaneously.”
  6. Yes, the approach here uses a different objective function: it minimizes the short-term variance of the output relative to its long-term variance (a rough numerical sketch of this criterion appears after this list)
  7. Instead of explicitly expanding the input data with a series of basis functions, the method uses the kernel trick over the data corpus to achieve a similar result while keeping computational costs in check (at least compared with what an explicit expansion of the raw input space would require; a polynomial expansion of image data, for example, would be very large)
  8. “We assume the solution w can be written as an expansion in terms of mapped training data: w = Σ_{i=1}^{l} α_i Φ(x_i)”
  9. From this the objective function can be rewritten in terms of the kernel, as opposed to an explicit basis-function expansion (the resulting generalized eigenproblem is sketched after this list)
  10. Sparsification is needed because otherwise the l×l kernel-matrix operations quickly become computationally problematic as well
    1. They subsample the original data set, and call this a basis set, or BS
  11. Some quantities are computed only between elements of the basis set (such as estimating covariances and the eigen decomposition), while others are computed between the basis set and the entire corpus <seems like these are the long and short term averages?>
  12. To select their basis set they use a greedy method that minimizes the least-squares error between data points and their representation in terms of the basis set <I read this as reconstruction error in the kernel feature space; a sketch appears after this list>
  13. “The complete online algorithm requires minimal memory, making it ideal for very large data sets.  The implementation estimates long- and short-term kernel means online using exponential time averages parameterised using half-lives of Λs, Λl (as in […]).  Likewise, the covariance matrices … are updated online at each time step … there is therefore no need to explicitly compute or store kernel matrices.” <sketched after this list>
  14. Empirical results come from stereo 128×128 greyscale images; the images were translated by a pixel at random
  15. <Seems like they trained 20 8×8 simple cells by some other algorithm, and then used those as an underlying preprocessing stage? Each simple cell “maximises a nonlinear measure of temporal correlation (TRS) between the present and previous output…” Resulting simple cells are similar to Gabor weight vectors>
  16. <Yeah> “The complex cells received input from these 20 types of simple cells when processing both the left and right eye images.  Complex cells had a spatial receptive field of 4×4; each cell therefore received 320 simple cell inputs (2×4×4×20)…” <input assembly sketched after this list>
  17. They claim translation invariance <but it's hard to know how much is due to SFA and how much falls out from the simple cells below>
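
A rough numerical sketch of the criterion from points 3, 6 and 10, using simple non-overlapping windows in place of the paper's exponential time averages; the window lengths and function names are my own:

```python
import numpy as np

def windowed_variance(y, window):
    """Average variance of y over non-overlapping windows of the given length
    (a crude stand-in for the paper's exponential time averages)."""
    n = len(y) // window
    chunks = np.asarray(y[:n * window]).reshape(n, window)
    return np.mean(np.var(chunks, axis=1))

def slowness_score(y, short_window=5, long_window=500):
    """Ratio of long-term to short-term variance of an output signal y;
    a larger ratio means the feature varies more slowly."""
    return windowed_variance(y, long_window) / windowed_variance(y, short_window)
```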
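
Point 8's expansion w = Σ_{i=1}^{l} α_i Φ(x_i) means each output is y_t = Σ_i α_i k(x_i, x_t), so the variance-ratio objective from point 6 turns into a generalized eigenproblem in α (point 9). A batch sketch under assumptions of my own: an RBF kernel, running-mean centring, no sparsification, and made-up hyperparameters:

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, Y, gamma=0.1):
    """Gaussian RBF kernel matrix between rows of X and rows of Y
    (the kernel choice here is an assumption)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def timescale_cov(K, halflife):
    """Covariance of the kernel rows about an exponential running mean with
    the given half-life -- a stand-in for the paper's time averages."""
    decay = 0.5 ** (1.0 / halflife)
    mean = K[0].copy()
    devs = np.empty_like(K)
    for t, k_t in enumerate(K):
        mean = decay * mean + (1 - decay) * k_t
        devs[t] = k_t - mean
    return devs.T @ devs / len(K)

def kernel_slow_features(X, halflife_short=5, halflife_long=500,
                         gamma=0.1, n_features=5, reg=1e-6):
    """Maximize long-term over short-term output variance in alpha-space.
    The paper's exact estimators and sparsified basis set are omitted."""
    K = rbf_kernel(X, X, gamma)                       # l x l kernel matrix (dense, no basis set)
    A = timescale_cov(K, halflife_long)               # variance about a slowly moving mean
    B = timescale_cov(K, halflife_short) + reg * np.eye(len(X))  # fast fluctuations, regularized
    vals, vecs = eigh(A, B)                           # generalized eigenproblem A a = lambda B a
    alpha = vecs[:, ::-1][:, :n_features]             # directions maximizing a'Aa / a'Ba
    project = lambda X_new: rbf_kernel(X_new, X, gamma) @ alpha
    return alpha, project
```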
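
A sketch of the greedy basis-set selection from point 12, assuming the least-squares criterion is a reconstruction error in the kernel feature space (the tolerance and kernel are illustrative):

```python
import numpy as np

def rbf(X, Y, gamma=0.1):
    """Same RBF kernel as in the previous sketch."""
    return np.exp(-gamma * ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))

def greedy_basis(X, kernel=rbf, tol=1e-2, max_basis=200):
    """Keep a point in the basis set (BS) only if its feature-space image is
    poorly reconstructed, in the least-squares sense, by the current basis."""
    basis = [0]
    for t in range(1, len(X)):
        B = X[basis]
        Kbb = kernel(B, B)                                   # basis x basis kernel matrix
        kbt = kernel(B, X[t:t + 1]).ravel()                  # basis vs candidate point
        ktt = kernel(X[t:t + 1], X[t:t + 1]).item()
        coeffs = np.linalg.solve(Kbb + 1e-8 * np.eye(len(basis)), kbt)
        residual = ktt - kbt @ coeffs                        # squared reconstruction error
        if residual > tol and len(basis) < max_basis:
            basis.append(t)
    return basis
```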
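
A sketch of the online bookkeeping quoted in point 13: exponential time averages of the basis-set kernel vector at two half-lives, plus a running covariance, so nothing the size of the full corpus is ever stored. The class and variable names are mine, not the paper's:

```python
import numpy as np

def halflife_decay(halflife):
    """Decay factor so that an exponential average's weight halves every `halflife` steps."""
    return 0.5 ** (1.0 / halflife)

class OnlineKernelStats:
    """Running long/short-term kernel means and a covariance about the long-term mean."""

    def __init__(self, n_basis, halflife_short, halflife_long):
        self.ds = halflife_decay(halflife_short)   # fast timescale (half-life Λs)
        self.dl = halflife_decay(halflife_long)    # slow timescale (half-life Λl)
        self.mean_short = np.zeros(n_basis)
        self.mean_long = np.zeros(n_basis)
        self.cov = np.zeros((n_basis, n_basis))

    def update(self, k_t):
        """k_t: kernel evaluations between the basis set and the current input."""
        self.mean_short = self.ds * self.mean_short + (1 - self.ds) * k_t
        self.mean_long = self.dl * self.mean_long + (1 - self.dl) * k_t
        d = k_t - self.mean_long
        self.cov = self.dl * self.cov + (1 - self.dl) * np.outer(d, d)
```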
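
For the architecture quoted in point 16, a small sketch of how one complex cell's 320-dimensional input (2×4×4×20) could be assembled from simple-cell responses; the array layout is my assumption:

```python
import numpy as np

def complex_cell_input(simple_left, simple_right):
    """Concatenate the responses of 20 simple-cell types over a 4x4 spatial
    receptive field, for the left and right eye, into one 320-dimensional
    complex-cell input vector (2 x 4 x 4 x 20 = 320)."""
    assert simple_left.shape == simple_right.shape == (4, 4, 20)
    x = np.concatenate([simple_left.ravel(), simple_right.ravel()])
    assert x.shape == (320,)
    return x
```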