Submodular Meets Spectral: Greedy Algorithms for Subset Selection, Sparse Approximation, and Dictionary Selection. Das, Kempe. ICML 2011


  1. How do you select a subset of k variables from a large set in order to get the best linear prediction of another variable?
    1. To be clear, it's about choosing dimensions of the data (feature subsets), as opposed to selecting a subset of whole data points
  2. Related to feature selection and sparse approximation
  3. Analyzes performance of commonly used, empirically effective greedy heuristics
    1. Analysis based on submodularity and spectral analysis
  4. Introduces the “… submodularity ratio as a key quantity to help understand why greedy algorithms perform well even when the variables are highly correlated.”
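As best I can reconstruct it (a sketch, not a verbatim quote from the paper), the submodularity ratio of a monotone set function f (here f is the R2 objective discussed below) with respect to a set U and sparsity k is

\[
\gamma_{U,k} \;=\; \min_{L \subseteq U,\; S \cap L = \emptyset,\; |S| \le k} \;\frac{\sum_{x \in S} \bigl(f(L \cup \{x\}) - f(L)\bigr)}{f(L \cup S) - f(L)},
\]

i.e., how much the sum of individual marginal gains can exceed the joint gain from adding all of S at once. If I remember right, f is submodular exactly when this ratio is at least 1 for every U and k.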
  5. Get best approximation guarantees in terms of both submodularity ratio as well as “… smallest k-sparse eigenvalue of the covariance matrix.”
  6. Also get better bounds on the dictionary selection problem <not sure what that is>
  7. Test on real-world as well as synthetic data
    1. Results show submodularity ratio is a better predictor of performance of greedy algorithms than other spectral parameters
  8. Commonly, after subset selection is performed, the goal is to minimize MSE or maximize the squared multiple correlation R2 <what's that? sounds neat>. Here they focus on the latter
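Answering my own aside, as I understand it: assuming all variables are normalized to zero mean and unit variance, the squared multiple correlation of a target Z against a selected set S is

\[
R^2_{Z,S} \;=\; \mathbf{b}_S^{\top} C_S^{-1}\, \mathbf{b}_S,
\]

where C_S is the covariance submatrix of the selected variables and \mathbf{b}_S is the vector of covariances between Z and each selected variable. It is the fraction of Var(Z) captured by the best linear predictor built from S.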
  9. The selection criterion is based on covariance matrices
  10. Maximizing R2 is “… equivalent to the problem of sparse approximation over dictionary vectors…”
  11. The formulation is similar to that of sparse recovery, except that sparse recovery assumes the data actually is k-sparse; in many cases that assumption doesn’t hold and the data is dense
  12. The problem as specified is NP-Hard, so heuristics are the only way to go
  13. Common heuristics are greedy methods, or those based on convex relaxation.  Here they focus on the former, as the methods are generally simpler and require less tuning
  14. Of the greedy methods, most common are Forward Regression and Orthogonal Matching Pursuit
  15. Previous theoretical work on these greedy methods has been lacking in applicability
  16. “We prove that whenever the submodularity ratio is bounded away from 0, the R2 objective is ‘reasonably close’ to submodular, and Forward Regression gives a constant-factor approximation.”
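If I'm reading the theorem right, the constant factor for Forward Regression is 1 - e^{-\gamma}, i.e.

\[
R^2(S_{\mathrm{FR}}) \;\ge\; \bigl(1 - e^{-\gamma}\bigr)\, R^2(S_{\mathrm{OPT}}),
\]

with \gamma the submodularity ratio at the relevant set and sparsity; when the objective is actually submodular this recovers the familiar 1 - 1/e guarantee.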
  17. Although they note that purely spectral conditions are problematic in this context (“… greedy algorithms perform very well, even for near-singular input matrices.”), the submodularity ratio is related to spectral analysis:
    1. The submodularity ratio “… is always lower-bounded by the smallest k-sparse eigenvalue of the covariance matrix [but can be much larger].”
  18. They also get bounds for the two greedy methods when the lowest k-sparse eigenvalue is non-zero
  19. In comparison between performance as related to submodularity ratio vs spectral analysis “… while the input covariance matrices are close to singular, the submodularity ratio actually turns out to be significantly larger.  Thus our theoretical results can begin to explain why, in many instances, greedy algorithms perform well in spite of the fact that the data may have high correlations.”
  20. R2 describes the fraction of the variance of the target variable (Z in the paper’s notation) that is explained by the selected input variables X
  21. Forward Regression is the natural greedy method: the variable added at each step is the one that most increases R2 at that step (a sketch is below)
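To make that concrete, here is a minimal sketch of Forward Regression as I understand it, working directly from the covariance matrix of the (normalized) inputs and their covariances with the target; the names and structure are mine, not the authors'. OMP differs in that it picks the variable most correlated with the current residual rather than the one that most increases R2.

```python
# Minimal sketch of Forward Regression for subset selection (not the authors' code).
# Assumes all variables are standardized to zero mean and unit variance.
import numpy as np

def r_squared(C, b, S):
    """R^2 of the best linear prediction of the target from the variables in S.

    C : (n, n) covariance matrix of the input variables.
    b : (n,)  covariances between each input variable and the target.
    S : list of selected indices.
    """
    if not S:
        return 0.0
    C_S = C[np.ix_(S, S)]
    b_S = b[S]
    # For normalized variables, R^2 = b_S^T C_S^{-1} b_S; solve() avoids an explicit inverse.
    return float(b_S @ np.linalg.solve(C_S, b_S))

def forward_regression(C, b, k):
    """Greedily add, for k steps, the variable whose inclusion most increases R^2."""
    n = C.shape[0]
    selected = []
    for _ in range(k):
        candidates = [i for i in range(n) if i not in selected]
        best = max(candidates, key=lambda i: r_squared(C, b, selected + [i]))
        selected.append(best)
    return selected
```

At the toy scale of the paper's experiments (n < 30 features, k < 8 selected) this naive version is plenty fast; for larger problems the per-step solves could be updated incrementally.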
  22. They give a bound on how much the R2 value of a set can differ from the sum of the R2 values of its individual elements, used in the proof of Forward Regression’s performance
  23. <Skipping actual proofs, onto empirical results>
  24. “Because several of the spectral parameters (as well as the optimum solution) are NP-hard to compute, we restrict our experiments to data sets with n<30 features, from which k<8 are to be selected.  We stress that the greedy algorithms themselves are very efficient.”
  25. Data sets are World Bank development indicators, house prices in Boston w/relevant features, and then a synthetic set
  26. Compared to optimal (exhaustive) results, the greedy algorithms perform extremely well
  27. The greedy methods work better than Lasso
  28. The bound based on the submodularity ratio is much better than that of the spectral parameters
    1. While there is a fair amount of looseness between the theoretical bounds and actual performance (the theory is pessimistic), the gap is not absurdly large
    2. They also include an extension to the theory that drastically tightens the bounds <why not put this in the theoretical section in the first place, as opposed to with the empirical results?>
  29. The real-world data sets are nearly singular