## Adaptive Submodularity: A New Approach to Active Learning and Stochastic Optimization. Golovin, Krause, Ray. Talk (NIPS 2010)

From http://videolectures.net/nipsworkshops2010_golovin_asm/

1. Basic idea is to consider submodular functions that are stochastic and unfold over time.  So instead of making all decisions in one shot (before execution begins), it is better to wait and see how the first decision unfolds before making the second, etc.
1. Gives the example of influence in social networks – if you want people to plug a product, you give it to a few people for free, and watch how influence spreads from the first person before deciding on the second
2. The setting described here is a generalization of standard submodularity
3. The analysis here allows many results from standard submodular optimization to carry over to the adaptive setting
4. Instead of maximizing the marginal benefit, maximize the expected marginal benefit given the current state – but only for the first selection; the observed outcome of that selection then becomes part of the state conditioning the next choice
5. Gets the standard 1-1/e approximation bound
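Items 4–5 describe the adaptive greedy policy: repeatedly pick the item with the largest expected marginal gain conditioned on everything observed so far. A minimal sketch, where `expected_marginal_gain` and `observe` are hypothetical callables standing in for a concrete problem:

```python
def adaptive_greedy(items, k, expected_marginal_gain, observe):
    """Adaptive greedy policy sketch (hypothetical interface).

    expected_marginal_gain(item, observations) returns the expected gain
    of `item` conditioned on the outcomes seen so far; observe(item)
    reveals the item's stochastic outcome after it is selected.
    """
    observations = {}
    for _ in range(k):
        remaining = [i for i in items if i not in observations]
        if not remaining:
            break
        # Pick the item with the largest conditional expected marginal gain.
        best = max(remaining, key=lambda i: expected_marginal_gain(i, observations))
        # Observe its outcome; this conditions every later choice.
        observations[best] = observe(best)
    return observations
```

The only difference from non-adaptive greedy is that the gain is re-evaluated against the growing set of observed outcomes instead of a fixed ground state.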
6. Because adaptive, can now do things like “take the minimal number of actions needed to achieve X”, which you couldn’t do with standard submodularity (there, you can only say “maximize the benefit of N choices”)
1. This is a log-approximation ln(n)+1 for stochastic set cover
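The “minimal number of actions to achieve X” objective in items 6–6.1 is adaptive stochastic min-cost cover: the same greedy rule, but with a stopping condition on achieved coverage instead of a fixed budget. A sketch with hypothetical `coverage`, `expected_gain`, and `observe` callables:

```python
def adaptive_min_cost_cover(items, target, coverage, expected_gain, observe):
    """Greedy sketch for adaptive stochastic min-cost cover: keep picking
    the item with the largest conditional expected gain until the observed
    coverage reaches `target` (hypothetical interface)."""
    observations = {}
    while coverage(observations) < target:
        remaining = [i for i in items if i not in observations]
        if not remaining:
            break  # target unreachable with the remaining items
        best = max(remaining, key=lambda i: expected_gain(i, observations))
        observations[best] = observe(best)
    return observations
```

The ln(n)+1 guarantee quoted in the talk is for this stopping-rule formulation of stochastic set cover.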
7. Another application: optimal decision trees
1. “Generalized binary search” which is equivalent to max info gain
2. This problem is adaptive submodular <I thought things like xor in decision trees aren’t submodular, but maybe because we are talking w.r.t. information gain it is?>
3. Gives bounds for this approximation
4. “Results require that tests are exact (no noise)!”
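Generalized binary search (item 7.1) picks the test whose outcomes most evenly split the hypotheses still consistent with the observations, which in the noiseless, uniform-prior case matches maximizing info gain. A toy sketch, assuming hypotheses are dicts mapping each test to its deterministic outcome and `run_test` is a hypothetical oracle for the true outcome:

```python
from collections import Counter

def generalized_binary_search(hypotheses, tests, run_test):
    """Noiseless GBS sketch: shrink the version space until one
    hypothesis remains (assumes tests suffice to distinguish them)."""
    version_space = list(hypotheses)
    used = []
    while len(version_space) > 1:
        def worst_case_remaining(t):
            # Size of the largest outcome class; smaller = more even split.
            counts = Counter(h[t] for h in version_space)
            return max(counts.values())
        t = min((t for t in tests if t not in used), key=worst_case_remaining)
        used.append(t)
        outcome = run_test(t)
        # Keep only hypotheses consistent with the observed outcome.
        version_space = [h for h in version_space if h[t] == outcome]
    return version_space[0], used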
8. If trying to do decision trees in noisy setting, can do a Bayesian version
1. But generalized binary search, maximizing info gain, and maximizing value of info aren’t adaptive submodular in the noisy setting.  In noisy settings they can require exponentially more tests than optimal <Should really read this paper>
9. The trick is to transform the noisy problem into a noiseless one by making the noise outcome part of the hypothesis
1. Only need to distinguish between noisy hypotheses that lead to different decisions
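A sketch of the transformation in items 9–9.1, with hypothetical names: fold each possible noise realization into the hypothesis so every expanded hypothesis is deterministic, then group the expanded hypotheses by the decision they induce, since only hypotheses in different groups ever need to be told apart:

```python
from itertools import product

def expand_noisy_hypotheses(hypotheses, noisy_tests, outcomes, decision):
    """Fold noise into the hypothesis: each (hypothesis, noise realization)
    pair becomes one deterministic 'expanded' hypothesis, then expanded
    hypotheses are grouped by the decision they lead to."""
    expanded = []
    for h in hypotheses:
        # Enumerate every possible outcome vector for the noisy tests.
        for noise in product(outcomes, repeat=len(noisy_tests)):
            full = dict(h)
            full.update(zip(noisy_tests, noise))
            expanded.append(full)
    groups = {}
    for full in expanded:
        groups.setdefault(decision(full), []).append(full)
    return groups
```

The resulting noiseless instance distinguishes groups rather than individual noisy hypotheses, which is what makes the adaptive submodular machinery applicable again.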
10. Gives bounds for this Bayesian active learning in noisy domains
11. Gives an example of testing different psychological/economic models of how people perform decision making
1. For this setting, it turns out many common approaches actually do much worse than random