Dimension reduction and its application to model-based exploration in continuous spaces. Nouri, Littman. Machine Learning 2010


Just a note that I read this paper while I didn’t have the ability to take notes, so the notes here are pretty sparse.  If I go down the road of dimensionality reduction / feature selection in RL I should cite this, but the Parr and Ng papers on L1 regularization in TD learning are probably more relevant at the moment.

  1. Addresses exploration in high-dimensional domains
  2. Uses a continuous measure of knownness <C-KWIK?> to drive exploration smoothly, which is closer in spirit to MBIE.  This is better than the standard binary known/unknown labels that R-Max uses
  3. Learns with KNN over RBF kernels, and learns an RBF bandwidth (σ) for each component of the output vector independently, which allows for more compact representations
  4. Performance degrades very slowly as a function of problem size; generalization and accuracy are very good compared to methods that do not attempt dimension reduction
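Items 2 and 3 can be sketched together. This is a minimal illustrative sketch, not the paper's actual algorithm: KNN prediction with RBF weights and a separate bandwidth per output dimension, plus a continuous knownness derived from the same kernel weights that smoothly blends an optimistic Vmax into the backup (MBIE-like) instead of R-Max's binary known/unknown switch. All function names and parameter values here are my assumptions.

```python
import numpy as np

def knn_rbf_predict(X, Y, query, k=5, sigma=None):
    """Predict each output dimension as a kernel-weighted average of the
    k nearest stored transitions, with a separate bandwidth sigma[d] per
    output dimension (per-component sigma, as in the paper; the values
    used here are illustrative).  Also return a continuous 'knownness'
    in [0, 1]: high when neighbors are close, low in unexplored regions."""
    if sigma is None:
        sigma = np.ones(Y.shape[1])
    d2 = np.sum((X - query) ** 2, axis=1)       # squared distances to all data
    idx = np.argsort(d2)[:k]                    # k nearest neighbors
    pred = np.empty(Y.shape[1])
    for dim in range(Y.shape[1]):
        w = np.exp(-d2[idx] / (2.0 * sigma[dim] ** 2))
        pred[dim] = np.sum(w * Y[idx, dim]) / (np.sum(w) + 1e-12)
    # Knownness: mean kernel weight under a reference bandwidth.
    knownness = float(np.mean(np.exp(-d2[idx] / (2.0 * np.mean(sigma) ** 2))))
    return pred, knownness

def optimistic_backup(r, v_next, knownness, vmax, gamma=0.95):
    """Blend the model-based estimate with an optimistic Vmax in
    proportion to (1 - knownness), giving a smooth exploration drive
    rather than a binary known/unknown cutoff."""
    return knownness * (r + gamma * v_next) + (1.0 - knownness) * vmax
```

With knownness = 1 the backup reduces to the ordinary Bellman estimate, and with knownness = 0 it returns the fully optimistic Vmax, so exploration pressure decays continuously as data accumulates around a state-action.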