Efficient Backprop. LeCun, Bottou, Orr, Muller. Neural Networks: Tricks of the Trade. 1998


  1. Backprop can be slow for multilayer nets whose error surfaces are non-convex, high-dimensional, very bumpy, or full of plateaus.  It may not even converge.
  2. Instead of processing the whole dataset as one batch, the noise introduced by following the gradient of individual samples (stochastic/online learning) can be helpful
    1. Tends to be faster (especially cases where there is redundancy in the data)
    2. Converges to better solutions (the noise from individual samples helps jump out of local minima)
    3. Can be used for tracking <nonstationary data>
    4. Better suited to large datasets
  3. Advantages for batch:
    1. Mathematical properties of convergence better understood (stochasticity from individual samples has less of an impact when all the data is considered together, so convergence is easier to analyze)
    2. Some methods (such as conj gradient) only work in batch
  4. Annealing schedule for learning rate helps convergence, but is an additional parameter that has to be controlled
  5. Can try to get the best of both worlds with mini-batches
    1. Also, trying to eliminate the impact of noise on the final setting of the weights probably isn’t that important; overtraining becomes more of a concern before then
  6. Methods of bagging or shuffling the data help when not running in batch mode.  It is important to keep successive samples spread across the range of inputs, so that training doesn't overfit a small region of the input space where samples from one part of the corpus happen to be concentrated (a minimal mini-batch sketch follows this item)
    1. Proposes an informal boosting-like method (present hard examples more often), although this can lead to overfitting the hard-to-fit samples
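A minimal sketch of points 5–6, not from the paper itself: shuffled mini-batch SGD, where the data order is re-randomized every epoch so successive batches stay spread across the corpus. The function names and the toy least-squares problem are my own stand-ins.

```python
import numpy as np

def sgd_minibatch(X, y, w, grad_fn, lr=0.1, batch_size=32, epochs=20, seed=0):
    """Shuffled mini-batch SGD; grad_fn(w, Xb, yb) is a hypothetical gradient callback."""
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        order = rng.permutation(len(X))             # reshuffle every epoch so successive
        for start in range(0, len(X), batch_size):  # batches stay spread over the inputs
            idx = order[start:start + batch_size]
            w = w - lr * grad_fn(w, X[idx], y[idx])
    return w

# Toy usage: least-squares regression, gradient of 0.5*||Xb w - yb||^2 / |batch|
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = X @ np.arange(1.0, 6.0) + 0.1 * rng.normal(size=1000)
grad = lambda w, Xb, yb: Xb.T @ (Xb @ w - yb) / len(Xb)
print(sgd_minibatch(X, y, np.zeros(5), grad).round(2))   # should land near [1 2 3 4 5]
```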
  7. Inputs should be normalized: the average value of each input variable should be zero and the variables should have the same variance; decorrelating them on top of that amounts to whitening (sketch below this item)
    1. The exception is inputs known to be less important: their variance can be scaled down so the network pays less attention to them
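A minimal numpy sketch of point 7 (my own function names, not the paper's): center each input variable, scale to unit variance, and optionally decorrelate via PCA whitening.

```python
import numpy as np

def standardize(X, eps=1e-8):
    """Zero mean and unit variance per input variable (columns of X)."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + eps)

def whiten(X, eps=1e-8):
    """PCA whitening: decorrelate the inputs and equalize their variances."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    eigval, eigvec = np.linalg.eigh(cov)
    return Xc @ eigvec / np.sqrt(eigval + eps)

# Correlated toy inputs; after whitening the covariance is ~identity.
X = np.random.default_rng(0).normal(size=(500, 3)) @ np.array(
    [[2.0, 0.5, 0.0], [0.0, 1.0, 0.3], [0.0, 0.0, 0.1]])
print(np.round(np.cov(whiten(X), rowvar=False), 2))
```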
  8. Sigmoids are good nonlinear functions because they keep activations roughly normalized as they propagate through the network: symmetric sigmoids produce outputs that are on average close to 0
  9. Tanh is better than the logistic function (the logistic isn't zero-centered), although nowadays simple ReLUs are preferred
    1. Gives recommended constants for the tanh, f(x) = 1.7159 tanh(2x/3), s.t. activations also keep a variance near 1 as they pass through the network (sketch below)
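The recommended sigmoid from the paper is f(x) = 1.7159 tanh(2x/3); a quick sketch (the demo code is mine) checking that unit inputs map to unit outputs and that the gain near the origin is about 1.

```python
import numpy as np

def scaled_tanh(x):
    """The paper's recommended sigmoid: f(x) = 1.7159 * tanh(2x/3)."""
    return 1.7159 * np.tanh(2.0 * x / 3.0)

print(scaled_tanh(np.array([-1.0, 1.0])))        # ~[-1, 1]: unit inputs map to unit outputs
print(1.7159 * 2.0 / 3.0)                        # effective gain at the origin, ~1.14
z = np.random.default_rng(0).normal(size=100_000)
print(scaled_tanh(z).std())                      # output scale stays on the order of 1
```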
  10. One problem with symmetric sigmoids is that the error surface can be very flat near the origin, so it isn't good to initialize with very small weights.  The surface is also flat far from the origin (saturation); adding a small linear term, e.g. f(x) = tanh(x) + ax, can help get rid of the plateaus
  11. For classification, it isn't good to use target values at the sigmoid's asymptotes (e.g. +/- 1 for a plain tanh), as it leads to instabilities.
    1. Because the sigmoids only have these values asymptotically, using these goal outputs will make the weights grow larger and larger.  This then makes the gradients larger
    2. Also, when the outputs end up producing values close to +/-1, there is no measure of confidence
    3. Target values should instead be set at the point of the sigmoid's maximum second derivative (a quick numeric check follows this item)
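A quick numerical check (mine, not the paper's text) that for the recommended sigmoid the second derivative peaks where the output is about ±1, which is why ±1 make good targets rather than the asymptotes ±1.7159.

```python
import numpy as np

f = lambda x: 1.7159 * np.tanh(2.0 * x / 3.0)
x = np.linspace(0.0, 4.0, 4001)
d2 = np.gradient(np.gradient(f(x), x), x)        # numerical second derivative
i = np.argmax(np.abs(d2[5:-5])) + 5              # skip np.gradient's edge points
print(x[i], f(x[i]))                             # ~0.99 and ~0.99: curvature peaks near output 1
```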
  12. Intermediate-sized weights should be used for initialization, such that the sigmoid is activated in its linear region
    1. If the weights are too small the gradients will also be too small; if they are too large the sigmoids saturate and the gradients again vanish
  13. Getting everything right requires that the training set be normalized, the sigmoid be chosen properly, and that weights be set correctly
    1. An equation is given for the standard deviation the weights should be drawn with: sigma_w = 1/sqrt(m), where m is the fan-in of the unit (sketch below)
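A minimal sketch of points 12–13, assuming the paper's prescription of zero-mean weights with standard deviation 1/sqrt(m) for fan-in m (the function name and the layer sizes are mine).

```python
import numpy as np

def init_weights(fan_in, fan_out, rng):
    """Zero-mean weights with std = 1/sqrt(fan_in), so normalized inputs give
    unit-variance weighted sums that land in the sigmoid's linear region."""
    return rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=(fan_in, fan_out))

rng = np.random.default_rng(0)
W = init_weights(256, 128, rng)
x = rng.normal(size=(1000, 256))     # normalized inputs (zero mean, unit variance)
print((x @ W).std())                 # ~1: pre-activations have roughly unit variance
```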
  14. Another issue is choosing the learning rates
    1. Many adaptive methods for setting the learning rates only work in batch mode, because in the online case things jump around constantly
    2. This is discussed more later in the paper but one idea is to use independent update values for each weight so that they converge more or less at the same rate (one way to do this is by looking at second derivatives)
    3. Learning rates at lower layers should be larger than those at higher layers, because the second derivative of the cost function wrt weights in lower layers is usually smaller than for weights in higher layers (a small per-layer sketch follows this item)
    4. For convolutional nets (shared weights), the learning rate of a weight should be proportional to sqrt(# connections sharing that weight)
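A minimal sketch of point 14.3 only: give each layer its own learning rate, larger for the lower layer. The layer names and the factor of 3 are arbitrary illustrations, not values from the paper.

```python
import numpy as np

# Hypothetical two-layer parameters and gradients, one array per layer.
weights = {"layer1": np.zeros((5, 4)), "layer2": np.zeros((4, 1))}
grads   = {"layer1": np.ones((5, 4)),  "layer2": np.ones((4, 1))}

base_lr = 0.01
layer_lrs = {"layer1": 3.0 * base_lr,    # lower layer: flatter curvature -> larger step
             "layer2": 1.0 * base_lr}    # higher layer: steeper curvature -> smaller step

weights = {k: w - layer_lrs[k] * grads[k] for k, w in weights.items()}
print(weights["layer1"][0, 0], weights["layer2"][0, 0])   # -0.03 vs -0.01
```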
  15. Mentions one possible rule for adaptive learning rates, but the idea is that it is large away from the optimum and becomes small near it
  16. Some theory about learning rates: the optimal rate can be computed exactly if the surface can be approximated by a quadratic (in 1-D, eta_opt = 1/(d^2E/dw^2)).  If it isn't exactly quadratic you can use the same rule as an approximation, but then need to iterate on it
  17. The Hessian is a measure of the curvature of the error surface.
  18. Near a minimum the gradient-descent update amounts to multiplying the weight-error vector by (I - eta*H); if that matrix always shrinks a vector (all eigenvalues of magnitude < 1, i.e. eta < 2/lambda_max) the updates converge (a small numeric sketch follows this item)
    1. Goal then is to have different learning rates across different eigendirections, based on its eigenvalue
    2. If weights are coupled, H must first be rotated s.t. it is diagonal (making the coord axes line up w/ eigendirections)
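A small numeric sketch of points 16–18 on a 2-D quadratic (my example, not the paper's): gradient descent scales each eigendirection by (1 - eta*lambda_i) per step, so it converges iff eta < 2/lambda_max.

```python
import numpy as np

H = np.array([[3.0, 1.0],
              [1.0, 2.0]])              # Hessian of the quadratic E(w) = 0.5 * w^T H w
lam = np.linalg.eigvalsh(H)
print(lam, 2.0 / lam.max())             # eigenvalues and the critical rate 2/lambda_max

def run_gd(eta, steps=200):
    w = np.array([1.0, 1.0])
    for _ in range(steps):
        w = w - eta * (H @ w)           # update map is (I - eta*H): each eigendirection
    return np.linalg.norm(w)            # is scaled by (1 - eta*lambda_i) per step

print(run_gd(0.9 * 2.0 / lam.max()))    # below the critical rate: converges toward 0
print(run_gd(1.1 * 2.0 / lam.max()))    # above it: blows up
```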
  19. Now going back to justifications for some tricks
  20. Subtract the means from input variables, because a nonzero mean creates a very large eigenvalue in the Hessian, which makes convergence very slow.  Likewise, inputs that aren't scaled to similar variances will slow learning as well
  21. Decorrelating the input variables makes the trick of using a different learning rate per weight optimal
  22. Whitening the signal can help make the energy surface around the optimum spherical, which helps convergence (quick check below)
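A quick check of points 20–22 (my example): with a large nonzero input mean, the input correlation matrix (roughly what drives the first-layer Hessian for a linear unit) gets one huge eigenvalue; subtracting the mean removes it.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4)) + 5.0                          # inputs with a big nonzero mean
corr = X.T @ X / len(X)                                       # second-moment (correlation) matrix
corr_centered = (X - X.mean(0)).T @ (X - X.mean(0)) / len(X)
print(np.linalg.eigvalsh(corr)[-1])                           # ~101: one huge eigenvalue from the mean
print(np.linalg.eigvalsh(corr_centered)[-1])                  # ~1: the spread collapses after centering
```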
  23. Talks about Newton updates <but I think this isn't used in practice? so I'm not taking notes>
  24. Newton's method and gradient descent are different, but if you whiten (so the Hessian is close to the identity) they become the same <?>
  25. Conj Gradient:
    1. O(N)
    2. Doesn’t use explicit Hessian
    3. Tries “…to find descent directions that try to minimally spoil the result achieved in the previous iterations”
    4. Uses a line search <?>.  Ah, given a descent direction, just minimize the cost along that line
    5. Only batch
    6. Conjugate directions are orthogonal in the space where the Hessian is the identity matrix (i.e. p_k^T H p_(k-1) = 0)
    7. Good line search method is critical
    8. Can be seen as a principled way to choose the momentum term: the new direction is the gradient plus a multiple of the previous direction (a minimal sketch follows this item)
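A minimal sketch of point 25 for the quadratic case (linear conjugate gradient with an exact line search); on a real network the loss isn't quadratic, so the line search and the beta rule have to be done approximately.

```python
import numpy as np

def conjugate_gradient(H, b, steps=None):
    """Minimize 0.5*w^T H w - b^T w (H symmetric positive definite) by linear CG.
    Each new direction is H-conjugate to the previous ones, so it doesn't spoil
    the progress already made along them; alpha is an exact line search."""
    w = np.zeros_like(b)
    r = b - H @ w                     # negative gradient
    p = r.copy()                      # first direction: steepest descent
    for _ in range(steps or len(b)):
        Hp = H @ p
        alpha = (r @ r) / (p @ Hp)    # exact line search along p
        w = w + alpha * p
        r_new = r - alpha * Hp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p          # gradient plus a multiple of the old direction
        r = r_new                     # (the momentum-like term from the notes)
    return w

H = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(H, b), np.linalg.solve(H, b))   # agree after n = 2 steps
```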
  26. Quasi-Newton (BFGS)
    1. Iteratively computes estimate of inverse Hessian
    2. O(N^2) – memory as well, so only applicable to small networks <this is what Hessian free fixes right?>
    3. Reqs line search
    4. Batch only (a usage sketch follows this item)
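In practice one usually calls a library rather than hand-rolling BFGS; a usage sketch with scipy on a stand-in objective (Rosenbrock). L-BFGS, not covered in the paper, is the standard limited-memory variant that avoids the O(N^2) storage.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.zeros(10)

# Full BFGS keeps a dense N x N inverse-Hessian estimate: fine only for small N.
print(minimize(rosen, x0, jac=rosen_der, method="BFGS").x.round(3))

# Limited-memory variant stores only a few vectors instead of the full matrix.
print(minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B").x.round(3))
# Each prints the minimizer it found (the Rosenbrock global optimum is the all-ones vector).
```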
  27. <I think much of the rest of the paper is less relevant now than when it was written back then so skimming>
  28. Tricks for computing the Hessian
  29. A large spread of eigenvalues in the Hessian (ill-conditioning) causes problems during training; it comes from:
    1. Non-zero mean inputs
    2. Wide variation of 2nd derivatives
    3. Correlation between state vars
  30. Also, across layers: the error surface is typically quite flat wrt first-layer weights but quite steep wrt last-layer weights
  31. “From our experience we know that a carefully tuned stochastic gradient descent is hard to beat on large classification problems.”
  32. There are methods for estimating the principal eigenvalues/eigenvectors of the Hessian w/o actually forming the Hessian (e.g. the power method with finite-difference Hessian-vector products; sketch below)
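A sketch of point 32 (the code and names are mine; the general idea is the paper's): power iteration using a finite-difference Hessian-vector product Hv ~ (grad(w + eps*v) - grad(w))/eps, so the Hessian is never formed explicitly.

```python
import numpy as np

def hessian_vector_product(grad_fn, w, v, eps=1e-5):
    """Finite-difference H v ~ (grad(w + eps*v) - grad(w)) / eps; no explicit Hessian."""
    return (grad_fn(w + eps * v) - grad_fn(w)) / eps

def top_eigenvalue(grad_fn, w, iters=100, seed=0):
    """Power iteration on the implicit Hessian to estimate its largest eigenvalue."""
    v = np.random.default_rng(seed).normal(size=w.shape)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        Hv = hessian_vector_product(grad_fn, w, v)
        lam = np.linalg.norm(Hv)
        v = Hv / lam
    return lam

# Example on a quadratic so the answer is known: E(w) = 0.5 * w^T A w, grad = A w.
A = np.diag([1.0, 2.0, 10.0])
grad = lambda w: A @ w
print(top_eigenvalue(grad, np.ones(3)))   # ~10, the largest eigenvalue of A
```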

One thought on “Efficient Backprop. LeCun, Bottou, Orr, Muller. Neural Networks: Tricks of the Trade. 1998”

  1. andrew says:

    We used this as a reference in the Galvanize neural networks class. Thanks for the summary!
