- Looks to find community structure in social networks
- “Community structure represents the latent social context of user actions.”
- The algorithm proposed, MetaFac (MetaGraph Factorization) has 3 components:
- Metagraph, a relational hypergraph representation for modeling multirelational and multi-dimensional social data
- an efficient factorization method for community extraction on a given metagraph
- an on-line method to handle time-varying relations through incremental metagraph factorization.

- Tested on large datasets from Digg
- Social network data is complex: it is high-dimensional and time-varying
- “All these [previous] works restrict themselves to pairwise relations between entities (e.g. user-user or user-paper). In rich online social media, networked data consists of multiple coevolving dimensions, e.g. users, tags, feeds, comments, etc. Collapsing such multi-way networks into pairwise networks results in the loss of valuable information, and the analysis of temporal correlation among multi-dimensions is difficult.”
- For multi-dimensional network analysis, can use either:
- Tensor-based analysis
- Multi-graph mining (joint factorization over two or more matrices). Its application is domain-specific

- The *order* of a tensor is the number of *modes* or *ways* it has.
- This is different from the *dimension*: a mode has a dimension, which is the number of elements in that mode

- Use chi (or bold italic X) to denote a tensor
- A *mode-d matricization* or *unfolding* is the process of reordering an *M*-way array into a matrix <2D? Don’t understand this exactly>
- The inverse operation is a *folding*
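To answer my own <2D?> question above: yes, the result is always a 2-D matrix, with the mode-*d* fibers as columns. A quick numpy sketch to convince myself (helper names are mine, not the paper's):

```python
import numpy as np

# A 3-way tensor (order 3) with mode dimensions 2 x 3 x 4.
X = np.arange(24).reshape(2, 3, 4)

def unfold(tensor, mode):
    """Mode-d matricization: move mode `mode` to the front, then flatten
    all remaining modes into columns -> always a 2-D matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse operation: reshape back, then move the mode into place."""
    full_shape = (shape[mode],) + tuple(s for i, s in enumerate(shape) if i != mode)
    return np.moveaxis(matrix.reshape(full_shape), 0, mode)

X1 = unfold(X, 1)                                # shape (3, 8): 3 rows, 2*4 columns
assert X1.shape == (3, 8)
assert np.array_equal(fold(X1, 1, X.shape), X)   # folding inverts unfolding
```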

- A *vectorization* linearizes the tensor into a vector
- The *mode-d product* is the product of a tensor with a (2-D) matrix along mode *d*
- A *mode-d accumulation* sums entries across all modes except mode *d*, producing a vector of length *I_d*
- Like unfolding, this can also be done across multiple dimensions, with a *mode-(c,d) accumulation*
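A numpy sketch of the mode-*d* product and accumulation (helper names mine):

```python
import numpy as np

X = np.arange(24).reshape(2, 3, 4)   # order-3 tensor, I_0=2, I_1=3, I_2=4

def mode_product(tensor, matrix, mode):
    """Mode-d product: multiply `matrix` (J x I_d) along mode d,
    replacing that mode's dimension I_d with J."""
    return np.moveaxis(np.tensordot(matrix, tensor, axes=(1, mode)), 0, mode)

A = np.ones((5, 3))                  # J=5 matrix acting on mode 1 (I_1=3)
Y = mode_product(X, A, 1)
assert Y.shape == (2, 5, 4)

def mode_accumulation(tensor, mode):
    """Mode-d accumulation: sum over every mode except d -> vector of length I_d."""
    axes = tuple(i for i in range(tensor.ndim) if i != mode)
    return tensor.sum(axis=axes)

v = mode_accumulation(X, 1)
assert v.shape == (3,)
```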

- “*Tensor decomposition* or *factorization* is a form of higher-order principal component analysis. It decomposes a tensor into a core tensor multiplied by a matrix along each mode.”
- “A special case of tensor decomposition is referred as CP or PARAFAC decomposition”
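Sketch of a CP/PARAFAC-style reconstruction, where (as I understand it) the core tensor reduces to a superdiagonal with entries *z_k*:

```python
import numpy as np

K = 2                                  # number of components (latent communities)
I, J, L = 4, 5, 6
rng = np.random.default_rng(0)
U1, U2, U3 = rng.random((I, K)), rng.random((J, K)), rng.random((L, K))
z = rng.random(K)                      # superdiagonal core: entries z_k

# CP reconstruction: X[i,j,l] = sum_k z_k * U1[i,k] * U2[j,k] * U3[l,k]
X = np.einsum('k,ik,jk,lk->ijl', z, U1, U2, U3)
assert X.shape == (I, J, L)
```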

- The problem of finding latent community structure in social networks is a 3-part problem:
- “how to represent multi-relational social data”
- “how to reveal the latent communities consistently across multiple relations”
- “how to track communities over time”

- “We introduce metagraph, a relational hypergraph for representing multi-relational and multi-dimensional social data. We use a metagraph to configure the relational context specific to the system features – this is the key to making our community analysis adaptable…”
- There are 3 important concepts:
- *facet* – a set of objects or entities of the same type: “a user facet is a set of users, a story facet is a set of stories, etc”
- *relation* – interactions among facets: “digging” (liking) a story involves two facets (binary – user, story), making a comment involves 3 facets (user, story, comment). A facet may be implicit
- *multi-relational hypergraph* (*metagraph*) – describes “the combination of relations and facets in a social media context. A hypergraph is a graph where edges, called hyperedges, connect to any number of vertices. The idea is to use an M-way hyperedge to represent the interactions of M facets: each facet as a vertex and each relation as a hyperedge on a hypergraph. A metagraph defines a particular structure of interactions among facets, not among facet elements.”
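One way I'd encode this (facet and relation names are illustrative, not the paper's exact Digg schema):

```python
# A metagraph relates facet *types*, not individual users or stories:
# vertices are facets, hyperedges are relations over facets.
facets = {"user", "story", "comment", "keyword"}

# Each relation (hyperedge) is the tuple of facets it connects.
relations = {
    "digg":    ("user", "story"),             # binary relation
    "comment": ("user", "story", "comment"),  # 3-way relation
    "tag":     ("user", "story", "keyword"),  # 3-way relation
}

# Every relation must only mention declared facets.
assert all(set(fs) <= facets for fs in relations.values())
```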

- In their metagraph/hypergraph, the set of nodes is the set of all facets, and the edges are the set of all relations.
- “We formalize the community discovery problem as latent space extraction from multi-relational social data represented by a metagraph.” The goal is to find clusters of users that interact (stochastically) in communities in some way.
- If considering two objects *i*, *j*, the relationship between *i* and *j* through community *k* is described in terms of the probability of *k* interacting with *i* (*p*_{*k*→*i*}) and of *k* interacting with *j* (*p*_{*k*→*j*}). Given this, the interaction between *i* and *j* is *x*_{*i*,*j*} = *p*_{*k*} *p*_{*k*→*i*} *p*_{*k*→*j*}
- Because they work with hypergraphs, an arbitrary number of objects can be added <although I think the analysis is always for one community?>
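A small numpy sketch of this generative view (aggregating over all *k* is my reading, per the aside above):

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 3, 5                                 # K communities, N objects
p_k = rng.dirichlet(np.ones(K))             # p(k): prior over communities
p_k_to = rng.dirichlet(np.ones(N), size=K)  # p_k_to[k, i] = p(k -> i)

# Interaction between objects i and j through a single community k:
#   x_ij = p(k) * p(k -> i) * p(k -> j)
k, i, j = 0, 1, 2
x_ij_k = p_k[k] * p_k_to[k, i] * p_k_to[k, j]

# Summing over communities gives the expected pairwise interaction matrix.
X = np.einsum('k,ki,kj->ij', p_k, p_k_to, p_k_to)
assert np.isclose(X[i, j], (p_k * p_k_to[:, i] * p_k_to[:, j]).sum())
```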

- Two problems are formalized:
- Metagraph Factorization (MF)
- Given a metagraph (hypergraph), and a set of observed data tensors defined on that graph
- “…find a nonnegative core tensor [z] and factors {U^(q)}_{q \in V} for corresponding facets V={v^{(q)}}” <I don’t know what this means>

- The second problem is the same metagraph factorization problem, but for time-evolving / nonstationary data. This basically adds a time index to the above problem (now looking for a core tensor [*z*_*t*])
- They approach the problem from the perspective of optimization
- Start with a simple example of a 2-way hypergraph (just a normal graph) with 3 vertices (facets).
- The data set “… corresponding to the hyperedges are two second-order data tensors (i.e. matrices) {*Χ*^(*a*), *Χ*^(*b*)}”. They are made of facets <1,2> and <2,3>, with facet 2 being shared by both tensors.
- The goal is to extract community structure by “… finding a nonnegative core tensor [*z*] and factors {*U*^(1), *U*^(2), *U*^(3)}” corresponding to the three facets. The aim is to be able to describe the 2 matrices that make up the corpus as follows:
- *Χ*^(*a*) [made of facets <1,2>] = [*z*] *U*^(1) *U*^(2)
- *Χ*^(*b*) [made of facets <2,3>] = [*z*] *U*^(2) *U*^(3)

- [*z*] and *U*^(2) are shared by both. The “… length of [z] is determined by the number of latent spaces (communities) to be extracted.”
- Because we are working in terms of distributions, the KL-divergence (they denote it *D*(•||•)) is a reasonable measure of cost. In this example, the resulting cost is: *D*(*Χ*^(*a*) || [*z*] *U*^(1) *U*^(2)) + *D*(*Χ*^(*b*) || [*z*] *U*^(2) *U*^(3))
- Each *D* in this equation corresponds to one hyperedge; “… each tensor product corresponds to how all facets are incident to a hyperedge and the summation corresponds to all hyperedges on the graph.”
- There is one additional constraint in the cost function: each column of the *U*s must sum to 1. This is based on the assumption that the probability of a relation occurring in one community is independent of other entities in the community
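To make the cost concrete, a numpy sketch of the two-matrix example (random stand-ins for the observed data; I treat [*z*] as a superdiagonal core and use the generalized KL divergence for nonnegative arrays):

```python
import numpy as np

rng = np.random.default_rng(2)
K = 4                                   # number of communities
n1, n2, n3 = 6, 7, 8                    # sizes of facets 1, 2, 3

def col_normalized(n, k):
    """Random nonnegative factor with columns summing to 1 (the constraint)."""
    U = rng.random((n, k))
    return U / U.sum(axis=0, keepdims=True)

U1, U2, U3 = col_normalized(n1, K), col_normalized(n2, K), col_normalized(n3, K)
z = rng.random(K)

# Reconstructions of the two observed matrices, sharing z and U2:
Xa_hat = np.einsum('k,ik,jk->ij', z, U1, U2)   # X^(a) over facets <1,2>
Xb_hat = np.einsum('k,ik,jk->ij', z, U2, U3)   # X^(b) over facets <2,3>

def kl(X, Y):
    """Generalized KL divergence D(X || Y) for nonnegative arrays."""
    return np.sum(X * np.log(X / Y) - X + Y)

Xa = rng.random((n1, n2))               # stand-ins for the observed data tensors
Xb = rng.random((n2, n3))
cost = kl(Xa, Xa_hat) + kl(Xb, Xb_hat)  # one KL term per hyperedge
assert cost >= 0
```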

- Not going over the algorithm exactly, but the optimization is nonconvex, so the solution found is a local optimum
- The algorithm is iterative – EM-based (as it’s a local method and the problem is nonconvex). It’s a generalization of an algorithm “… for solving the single nonnegative matrix factorization problem.”
- The algorithm is computationally expensive so they propose a modification that makes it cheaper
- There is the version that’s based on nonstationarity <not paying much attention because not relevant to the setting I consider>.
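Since they say the method generalizes single nonnegative matrix factorization, here is the classic multiplicative update for single-matrix KL-NMF (Lee–Seung) as a reference point; this is the base case, not the paper's exact metagraph updates:

```python
import numpy as np

def gkl(X, Y):
    """Generalized KL divergence for nonnegative arrays."""
    return np.sum(X * np.log(X / Y) - X + Y)

def kl_nmf(X, K, iters=200, seed=0):
    """Multiplicative updates for X ~ W @ H under generalized KL.
    MetaFac's EM-style algorithm generalizes this pattern to many
    coupled data tensors sharing factors on a metagraph."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, K)) + 0.1
    H = rng.random((K, m)) + 0.1
    for _ in range(iters):
        R = X / (W @ H)                          # elementwise ratio X / X_hat
        W *= (R @ H.T) / H.sum(axis=1)           # update W holding H fixed
        R = X / (W @ H)
        H *= (W.T @ R) / W.sum(axis=0)[:, None]  # update H holding W fixed
    return W, H

rng = np.random.default_rng(3)
X = rng.random((10, 8)) + 0.1
W1, H1 = kl_nmf(X, K=3, iters=1)
W, H = kl_nmf(X, K=3, iters=200)         # same seed => same initialization
assert gkl(X, W @ H) <= gkl(X, W1 @ H1)  # updates never increase the cost
```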
- <Ok on to the experiments>
- Data set consists of different types of relations (contact, reply, comment, etc), but by far the largest relation type is the “digg” which has 1.15 million tuples
- A limitation of the approach is that the number of communities must be specified by user up front (like K-means)
- <The community separation makes a lot of sense for K=2 communities, but with K=4 there are already some spurious results, for example one cluster is clearly politically oriented, but another also has politics and tech mixed together, and then there is yet another cluster that is also tech. For K=12 the clusters don’t seem to make much sense>
- They then use the method to make predictions (such as who will digg what)
- The MFT (nonstationary model) outperforms MF (stationary model) slightly. They seem to outperform other methods by a pretty large margin (2 to 3 times as good, if I understand the metric)
- The tests they do aren’t suited to a number of other approaches that only consider pairwise interactions, so those approaches have to be tweaked (checking for many pairwise interactions).

- Run scalability evaluations on synthetic data – scales to very large data and is linear