Past-future Information Bottleneck in Dynamical Systems. Creutzig, Globerson, Tishby. Physical Review E, 2009.


  1. “Biological sensory systems need to encode the most relevant incoming information and transmit this information successfully under noisy conditions in real time. Extracting the most predictive information from incoming temporal signals is crucial for two reasons. First, predictive adaptive coding is a natural implementation of redundancy reduction of data, thus making efficient use of scarce resources and therefore called efficient coding. Second, it is only information about the future that can be behaviorally relevant.”
  2. This is examined from an information-theoretic perspective
  3. Compressing a signal is source coding. Transmitting it over a noisy channel and decoding it at the other end is channel coding. Doing both together is source-channel coding.
    1. “We are here interested in joint lossy source-channel coding in time.”
  4. It can also be framed from a learning-theory perspective, as building a learning system of a specified complexity
  5. The goal is to isolate components of the data stream that are most predictive of future data
  6. Based on the information bottleneck (IB) method
  7. Related to work on Predictive Coding and the Slowness Principle, except that here the concern is predicting the entire future rather than just one step ahead.
  8. Results build on IB for the case where the variables are jointly Gaussian
  9. “Our results show that as the trade-off parameter increases, the compressed state space goes through a series of structural phase transitions, gradually increasing its dimension. Thus, for example, to obtain little information about the future, it turns out one can use a one-dimensional scalar state space. As more information is required about the future, the dimension of the required state space increases up to its maximum. The structure and location of the phase transitions are related to the eigenvalues of the so-called Hankel matrix.” <See the sketches after this list for the standard critical-β expression and a toy Hankel computation.>
  10. <There is a lot of math in this paper, and the paper is short. I’m not reading the math carefully, so this whole summary probably won’t be long.>
  11. There is a β in the IB Lagrangian that controls the trade-off between accuracy and compression. The curve traced out as β varies is called the information curve. <The Lagrangian itself is written out after this list.>
  12. Assume dynamics are linear
  13. They apply the approach to a spring-mass system as an example. <A toy Hankel-singular-value computation for such a system is sketched after this list.>
  14. <Seems like in order to use the approach the system has to be fully linear, with all coefficients known. Nice math, but it doesn’t seem very generally applicable.>
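
For reference, here is the Lagrangian referred to in item 11, written out as I understand it from the standard IB formulation (my paraphrase of the usual form, not copied from this paper). The compressed state T is chosen by minimizing, over encodings p(t | x_past),

\mathcal{L}\big[p(t \mid x_{\text{past}})\big] \;=\; I(X_{\text{past}}; T) \;-\; \beta\, I(T; X_{\text{future}})

Small β favors compressing the past; large β favors keeping information that predicts the future; sweeping β traces out the information curve mentioned above.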
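
On the phase transitions in item 9: in the Gaussian IB that this work builds on (this is my recollection of the Gaussian IB result, not something checked against this paper), the optimal T is a noisy linear projection of the input, and the eigenvector of \Sigma_{x \mid y} \Sigma_x^{-1} with eigenvalue \lambda_i only enters that projection once β crosses a critical value

\beta_i^{c} \;=\; \frac{1}{1 - \lambda_i}

Each crossing adds one dimension to the compressed state, which is the kind of structural phase transition described in the quote; in the dynamical setting here, the relevant eigenvalues are tied to the Hankel matrix.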
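
To get a concrete feel for the spring-mass example in item 13, here is a small computation of my own (not code from the paper; the system parameters and variable names are illustrative assumptions) that finds the Hankel singular values of a damped spring-mass system. In balanced-truncation terms, these values measure how strongly each state direction couples past inputs to future outputs, which is the kind of quantity the paper’s dimension transitions are tied to.

# Toy sketch (not from the paper): Hankel singular values of a damped
# spring-mass system  m*x'' + c*x' + k*x = u,  observing position x.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

m, c, k = 1.0, 0.5, 2.0           # mass, damping, stiffness (illustrative values)
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])  # state = [position, velocity]
B = np.array([[0.0],
              [1.0 / m]])         # input: applied force
C = np.array([[1.0, 0.0]])        # output: position

# Controllability and observability Gramians:
#   A Wc + Wc A^T + B B^T = 0   and   A^T Wo + Wo A + C^T C = 0
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values are the square roots of the eigenvalues of Wc @ Wo
hsv = np.sort(np.sqrt(np.linalg.eigvals(Wc @ Wo).real))[::-1]
print("Hankel singular values:", hsv)

The rough intuition, matching item 9: the larger the gap between these values, the more of the system’s predictive structure a one-dimensional compressed state can already capture, and the later (in β) a second dimension becomes worth adding.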