An Intelligent Battery Controller Using Bias-Corrected Q-Learning. Lee, Powell. AAAI 2012


  1. Deals with applying Q-Learning to energy storage and retrieval for renewable energy.
  2. The Q-Learning setup is discrete-state with 2 actions, but has extremely high variance
    1. This variance means QL can diverge for millions of iterations
  3. The max operator introduces bias when operating over noisy data.  Aside from the citations I am familiar with on this topic, there are results from other fields as well.  It looks like there is some research on how to correct this issue that I am not familiar with (a small demonstration of the max-bias itself is sketched after this list)
  4. The introduction of bias is particularly problematic when gamma ~= 1, which occurs in the cases they consider due to very short time steps (need for high gamma/long look ahead seems to be a recurring theme in work from this lab)
  5. They introduce a method to correct the bias introduced by the max operator
  6. They:
    1. illustrate the minimal condition to cause max-bias to occur in q-learning
    2. produce an algorithm that prevents this from occurring
    3. Show empirical results
  7. Q-Learning step size has a strong impact on bias.  A large step size causes faster convergence when there is no noise, but a smaller step size reduces bias.
  8. To correct the bias, there is one additional term in the normal QL update that is the expectation of the bias (a rough sketch of this style of update appears after this list)
    1. (although what goes into computing that term is a bit complex)
    2. There is an assumption that the reward distribution is the same for all actions
  9. “In most cases, the bias correction term overestimates the actual max-induced bias and puts negative bias on the stochastic sample.  However, this is a much milder form of bias.  In a maximization problem, positive bias takes many more iterations to be removed than negative bias because QL propagates positive bias due to the max operator over Q values”
  10. In empirical results, they run for 3 million samples, although a significant amount of change occurs by 1/2 million.  Bias-corrected still overestimates value compared to truth (even though at the beginning the estimate is lower than truth), but error is much smaller than vanilla QL
  11. They compute ground truth by doing value iteration on a transition matrix estimated via Monte Carlo sampling (a sketch of this kind of computation appears after this list)
  12. They also run another, more complex experiment where computing the true value function is more difficult; they simply determine when QL converges and then report the error of the converged states (I would also like to see the rate, or % of converged states, for each).  Even here, though, the true value function is only estimated
  13. In the last experiment there is no comparison to true value
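
A minimal demonstration (my own, not from the paper) of the max-induced bias from items 3 and 6: even when every action has the same true value, taking the max over noisy per-action estimates yields an expected value well above the truth.

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 1.0    # every action has the same true Q-value
n_actions = 10      # number of actions the max ranges over
n_samples = 5       # noisy samples averaged per action estimate
n_trials = 10_000   # Monte Carlo repetitions

max_estimates = []
for _ in range(n_trials):
    # Each action's value is estimated from a handful of noisy samples.
    estimates = rng.normal(true_value, 1.0, size=(n_actions, n_samples)).mean(axis=1)
    max_estimates.append(estimates.max())

print("max of true values:        ", true_value)
print("mean of max over estimates:", np.mean(max_estimates))  # noticeably above 1.0
```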
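Item 8 only says the correction term is the expectation of the max-induced bias; the paper's exact construction is more involved than I can reproduce from these notes. The following is just a rough sketch under my own simplifying assumptions: track a running variance for each Q(s', a), treat the estimates as independent and roughly Gaussian, and estimate the expected max-bias by Monte Carlo from those means and variances. The class name, the variance bookkeeping, and the n_bias_samples parameter are all mine, not the paper's.

```python
import numpy as np

class BiasCorrectedQL:
    """Tabular Q-learning with a crude, illustrative max-bias correction."""

    def __init__(self, n_states, n_actions, gamma=0.99, alpha=0.1,
                 n_bias_samples=100, seed=0):
        self.Q = np.zeros((n_states, n_actions))
        self.var = np.ones((n_states, n_actions))     # running variance proxy per (s, a)
        self.counts = np.zeros((n_states, n_actions))
        self.gamma, self.alpha = gamma, alpha
        self.n_bias_samples = n_bias_samples
        self.rng = np.random.default_rng(seed)

    def _max_bias(self, s):
        # Estimate E[max_a Qhat(s, a)] - max_a Q(s, a) by sampling, assuming
        # independent Gaussian estimation noise with the tracked variances.
        draws = self.rng.normal(self.Q[s], np.sqrt(self.var[s]),
                                size=(self.n_bias_samples, self.Q.shape[1]))
        return draws.max(axis=1).mean() - self.Q[s].max()

    def update(self, s, a, r, s_next):
        # Standard QL target, minus an estimate of the expected max-induced bias.
        target = r + self.gamma * (self.Q[s_next].max() - self._max_bias(s_next))
        td_error = target - self.Q[s, a]
        # Crude running variance of the targets, used only for the bias estimate.
        self.counts[s, a] += 1
        self.var[s, a] += (td_error ** 2 - self.var[s, a]) / self.counts[s, a]
        self.Q[s, a] += self.alpha * td_error
```

The point is only the shape of the update: the bootstrapped target uses max_a Q(s', a) minus an estimate of how much that max overshoots, which is consistent with the quote in item 9 about the correction tending to overestimate and leave a milder negative bias.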
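And for item 11, a minimal sketch of what "ground truth via value iteration on a Monte-Carlo-estimated transition matrix" could look like; the sampling scheme, reward representation, and function names here are my assumptions, not the paper's.

```python
import numpy as np

def estimate_transition_matrix(sample_step, n_states, n_actions,
                               n_samples=10_000, seed=0):
    """Estimate P[s, a, s'] by Monte Carlo sampling of an environment step function."""
    rng = np.random.default_rng(seed)
    counts = np.zeros((n_states, n_actions, n_states))
    for s in range(n_states):
        for a in range(n_actions):
            for _ in range(n_samples):
                counts[s, a, sample_step(s, a, rng)] += 1
    return counts / counts.sum(axis=2, keepdims=True)

def value_iteration(P, R, gamma=0.99, tol=1e-8):
    """P[s, a, s'] = transition probabilities, R[s, a] = expected reward;
    returns the optimal state value function."""
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V)      # shape (n_states, n_actions)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```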