- LPPs are “linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set.”
- An alternative to PCA

- “When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the LPP are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator <whatever that is; oh, it's the Laplacian on general Riemannian manifolds, not just Euclidean space> on the manifold.”
- LPP shares many of the data-representation properties of nonlinear methods like Laplacian Eigenmaps or LLE, but LPP is linear and therefore defined everywhere, not solely on the data points used for training
- Because LPP preserves local structure, nearest-neighbor search in the low-dimensional space found by LPP may give results similar to NN search in the raw high-D space, so NN search can be done more cheaply
- Fact that LPP is linear makes it “fast and suitable for practical application.” Other similar methods are nonlinear and don’t have this property
- “LPP may be conducted in the original space or in the reproducing kernel Hilbert space (RKHS) into which data points are mapped. This gives rise to kernel LPP.”
- LPP is a linear approximation of nonlinear Laplacian Eigenmap
- <LPP, SFA, a number of other methods are unified in: http://link.springer.com/chapter/10.1007%2F978-3-662-44851-9_30>
- Construct the adjacency graph by either an epsilon-ball or k-NN rule; choose edge weights either as unit (0/1) or by some other method (like a Gaussian bump / heat kernel)
- Then solve a generalized eigenvector problem to obtain the projection vectors
- “While the Laplace Beltrami for a manifold is generated by the Riemannian metric, for a graph it comes from the adjacency relation.”
- <skipping the math>
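The graph-construction and eigenproblem steps above can be sketched in code. This is a minimal sketch, not the paper's reference implementation: it assumes samples as rows (the paper uses columns), a k-NN graph with heat-kernel weights, and a small ridge term added for numerical stability; the function name `lpp` and all parameter defaults are my own.

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=2, k=5, t=1.0):
    """Locality Preserving Projections sketch.
    X: (n_samples, n_features). Returns projection matrix of
    shape (n_features, n_components)."""
    n = X.shape[0]
    # pairwise squared Euclidean distances
    sq = np.sum(X**2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0)
    # k-nearest-neighbor adjacency with heat-kernel (Gaussian bump) weights
    W = np.zeros((n, n))
    idx = np.argsort(D2, axis=1)[:, 1:k + 1]  # skip self (column 0)
    for i in range(n):
        W[i, idx[i]] = np.exp(-D2[i, idx[i]] / t)
    W = np.maximum(W, W.T)          # symmetrize the graph
    Dd = np.diag(W.sum(axis=1))     # degree matrix
    L = Dd - W                      # graph Laplacian
    # generalized eigenproblem: X^T L X a = lambda X^T D X a
    A_mat = X.T @ L @ X
    B_mat = X.T @ Dd @ X + 1e-9 * np.eye(X.shape[1])  # ridge for stability
    vals, vecs = eigh(A_mat, B_mat)
    # eigenvectors of the smallest eigenvalues preserve locality best
    return vecs[:, :n_components]

# toy usage on random 5-D data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
A = lpp(X, n_components=2)
Y = X @ A  # low-dimensional embedding, shape (100, 2)
```

Because the result is just a linear map `A`, any new point `x` can be embedded as `x @ A`, which is the "defined everywhere" property noted above.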
- **Kernel LPP** generalizes LPP to the nonlinear case
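The kernel variant mentioned above can be sketched by replacing the data matrix with an RBF Gram matrix and solving the eigenproblem for expansion coefficients. This is an assumption-laden sketch (a common kernel-LPP formulation, `K L K alpha = lambda K D K alpha`; exact details vary by reference), and the function name `kernel_lpp` and parameters are my own.

```python
import numpy as np
from scipy.linalg import eigh

def kernel_lpp(X, n_components=2, k=5, gamma=0.5, t=1.0):
    """Kernel LPP sketch: LPP carried out in the RKHS of an RBF kernel.
    Returns (alphas, Y): expansion coefficients and the embedding of
    the training points."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0)
    K = np.exp(-gamma * D2)  # RBF Gram matrix of the mapped points
    # same k-NN heat-kernel graph as plain LPP
    W = np.zeros((n, n))
    idx = np.argsort(D2, axis=1)[:, 1:k + 1]
    for i in range(n):
        W[i, idx[i]] = np.exp(-D2[i, idx[i]] / t)
    W = np.maximum(W, W.T)
    Dd = np.diag(W.sum(axis=1))
    L = Dd - W
    # generalized eigenproblem in coefficient space
    A_mat = K @ L @ K
    B_mat = K @ Dd @ K + 1e-9 * np.eye(n)  # ridge for stability
    vals, alphas = eigh(A_mat, B_mat)
    alphas = alphas[:, :n_components]
    Y = K @ alphas  # embedding of the training points
    return alphas, Y

# toy usage
rng = np.random.default_rng(1)
Xk = rng.normal(size=(60, 4))
alphas, Yk = kernel_lpp(Xk, n_components=2)
```

Unlike plain LPP, embedding a new point here requires evaluating the kernel against all training points, so the cheap out-of-sample property is partly lost.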