WordPress lost my draft of this, so I am just quickly summarizing my previous summary

- Basic idea is that most optimization algorithms that assume Lipschitz smoothness also assume the Lipschitz constant is known
- This paper proposes an algorithm that does not require the constant a priori, yet still achieves minimax-optimal regret
- Regret is of order L^{d/(d+2)} T^{(d+1)/(d+2)} for Lipschitz constant L, time horizon T, and dimension d
- But this holds only when T > max( L^d, (0.15 L^{2/(d+2)} / max(d, 2))^d )
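As an aside (my own sketch, not anything from the paper), that horizon condition is easy to check numerically for concrete values of L and d:

```python
# Hypothetical helper, not from the paper: checks whether the horizon T
# satisfies T > max(L^d, (0.15 * L^{2/(d+2)} / max(d, 2))^d),
# the condition under which the stated regret bound applies.
def horizon_large_enough(T, L, d):
    threshold = max(L ** d, (0.15 * L ** (2 / (d + 2)) / max(d, 2)) ** d)
    return T > threshold

# Example: with L = 10 and d = 2 the binding term is L^d = 100.
print(horizon_large_enough(1000, 10, 2))  # True
print(horizon_large_enough(50, 10, 2))    # False
```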

- The paper cites a number of other optimization papers as related work, including Kleinberg’s and DIRECT. They state that the latter is empirically good, but its guarantees hold only in the limit.
- Assume L is selected from a set of possible values. Algorithms that simply assume the largest L will find the correct solution, but the paper says these methods have poor regret (especially with respect to T)
- The algorithm proposed here proceeds in two stages:
- Initially it uses uniform exploration to get a rough estimate of the Lipschitz constant. The estimate may of course be wrong, but it is good enough to start making intelligent decisions.
- It then finds the optimal region using a standard exploration-exploitation strategy
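To make the two stages concrete, here is a minimal sketch in Python for the 1-D case. Everything here (the slope-based estimator, the gap filter, the grid-size heuristic, the use of UCB1 in stage two) is my own illustrative assumption, not the paper's actual algorithm.

```python
import math
import random

def two_stage_maximize(f, T, explore_frac=0.2, seed=0):
    """Sketch of the two-stage idea for maximizing a noisy function on [0, 1].

    Stage 1: uniform random exploration; crudely estimate the Lipschitz
    constant from slopes between consecutive sampled points.
    Stage 2: discretize [0, 1] at a resolution driven by that estimate,
    then spend the remaining budget running UCB1 over the grid points.
    """
    rng = random.Random(seed)
    n1 = max(2, int(explore_frac * T))

    # Stage 1: uniform exploration.
    xs = sorted(rng.random() for _ in range(n1))
    ys = [f(x) for x in xs]
    # Estimate L from observed slopes, skipping tiny gaps where
    # observation noise would dominate the slope (crude guard).
    L_hat = max(
        (abs(y2 - y1) / (x2 - x1)
         for (x1, y1), (x2, y2) in zip(zip(xs, ys), zip(xs[1:], ys[1:]))
         if x2 - x1 >= 1e-3),
        default=1.0,
    )

    # Stage 2: heuristic arm count balancing discretization error
    # (~ L_hat / k per pull) against bandit regret (~ sqrt(k * n2)).
    n2 = T - n1
    k = max(2, min(n2, round((L_hat ** 2 * n2) ** (1 / 3))))
    arms = [(i + 0.5) / k for i in range(k)]
    counts, means = [0] * k, [0.0] * k
    for t in range(n2):
        if t < k:
            i = t  # pull each arm once before applying UCB
        else:
            i = max(range(k), key=lambda j: means[j]
                    + math.sqrt(2 * math.log(t) / counts[j]))
        y = f(arms[i])
        counts[i] += 1
        means[i] += (y - means[i]) / counts[i]
    return arms[max(range(k), key=lambda j: means[j])]

# Example: a tent function peaking at x = 0.7, plus small Gaussian noise.
noise = random.Random(1)
def noisy_tent(x):
    return 1 - abs(x - 0.7) + 0.01 * noise.gauss(0, 1)

print(two_stage_maximize(noisy_tent, T=2000))  # should land near 0.7
```

Note that the stage-1 estimate only needs to put the grid resolution in the right ballpark; stage 2's bandit strategy then does the fine-grained work, which matches the paper's point that a rough estimate suffices.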

- They say this exploration method may be useful in other areas, but leave it at that.