LPPs are “linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set.”

An alternative to PCA

“When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the LPP are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold.” (The Laplace–Beltrami operator is the generalization of the Laplacian from Euclidean space to general Riemannian manifolds.)
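In practice the projection is computed from a neighborhood graph on the data rather than from the manifold operator directly: build an affinity matrix W, form the graph Laplacian L = D − W, and solve the generalized eigenproblem X L Xᵀ a = λ X D Xᵀ a for the smallest eigenvalues. A minimal sketch of that recipe (my reconstruction, not the authors' code; the function name `lpp`, the k-NN graph, the heat-kernel weights, and the regularization constant are all assumptions):

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=2, k=5, t=1.0):
    """X: (n_samples, d) data matrix. Returns a (d, n_components) projection matrix."""
    n = X.shape[0]
    # pairwise squared Euclidean distances
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # k-nearest-neighbor affinity graph with heat-kernel weights, symmetrized
    idx = np.argsort(sq, axis=1)[:, 1:k + 1]   # skip column 0 (the point itself)
    W = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    W[rows, idx.ravel()] = np.exp(-sq[rows, idx.ravel()] / t)
    W = np.maximum(W, W.T)
    D = np.diag(W.sum(axis=1))
    L = D - W                                  # graph Laplacian
    A_mat = X.T @ L @ X                        # objective: minimize a' X L X' a
    B_mat = X.T @ D @ X                        # constraint: a' X D X' a = 1
    B_mat += 1e-9 * np.eye(B_mat.shape[0])     # small ridge for numerical stability
    vals, vecs = eigh(A_mat, B_mat)            # generalized symmetric eigenproblem
    return vecs[:, :n_components]              # smallest eigenvalues preserve locality
```

Usage is just a matrix product: `Z = X_new @ lpp(X_train)` embeds any points, trained on or not.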

LPP shares many of the data representation properties of nonlinear methods like Laplacian Eigenmaps or LLE, but LPP is linear, so the learned map is defined everywhere in the ambient space, not solely on the data points used for training

Because LPP preserves local structure, nearest-neighbor search in the low dimensional space found by LPP may give results similar to search in the raw high-D space, so NN search can be done more cheaply
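A toy illustration of the two points above: once any linear map A is learned (a random stand-in here, not an actual LPP solution), a previously unseen query is embedded by one matrix product, and NN search runs over m-dimensional vectors instead of d-dimensional ones:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 50, 3, 200
A = rng.standard_normal((d, m))   # stand-in projection matrix (hypothetical)
X = rng.standard_normal((n, d))   # database points
q = rng.standard_normal(d)        # unseen query: no retraining needed

Z = X @ A                         # embed the database once
zq = q @ A                        # embed the query with the same linear map
nn = np.argmin(((Z - zq) ** 2).sum(axis=1))   # NN search in the cheap m-dim space
```

The per-query cost drops from O(n·d) to O(n·m) after the one-time embedding.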

The fact that LPP is linear makes it “fast and suitable for practical application.” Other similar methods (Laplacian Eigenmaps, LLE) are nonlinear and don’t have this property

“LPP may be conducted in the original space or in the reproducing kernel Hilbert space (RKHS) into which data points are mapped. This gives rise to kernel LPP.”
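A hedged sketch of the kernel variant. I am assuming, by analogy with linear LPP, that kernel LPP replaces the data matrix with a kernel matrix K and solves K L K α = λ K D K α; the RBF kernel, parameter names, and function name are my choices, not the paper's:

```python
import numpy as np
from scipy.linalg import eigh

def kernel_lpp(X, n_components=2, k=5, gamma=1.0):
    """Embed training data via kernel LPP with an RBF kernel (assumed formulation)."""
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                    # RBF kernel matrix (maps into an RKHS)
    # same locality graph as linear LPP: k-NN with heat-kernel weights
    idx = np.argsort(sq, axis=1)[:, 1:k + 1]
    W = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    W[rows, idx.ravel()] = np.exp(-sq[rows, idx.ravel()])
    W = np.maximum(W, W.T)
    D = np.diag(W.sum(axis=1))
    L = D - W
    A = K @ L @ K                              # assumed kernelized objective
    B = K @ D @ K + 1e-9 * np.eye(n)           # ridge keeps B positive definite
    vals, alphas = eigh(A, B)
    return K @ alphas[:, :n_components]        # embedding of the training points
```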

LPP can be viewed as a linear approximation of the nonlinear Laplacian Eigenmaps