
The Link Prediction Problem for Social Networks

David Liben-Nowell Jon Kleinberg

ABSTRACT
Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link prediction problem, and develop approaches to link prediction based on measures of the proximity of nodes in a network. Experiments on large co-authorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures.

General Terms
Algorithms

Keywords
Social networks, link analysis, link prediction

Categories and Subject Descriptors


H.2.8 [Database Applications]: Data Mining; J.4 [Social and Behavioral Sciences]: Sociology; G.2.2 [Graph Theory]: Network Problems

1. INTRODUCTION

As part of the recent surge of research on large, complex networks and their properties, a considerable amount of attention has been devoted to the computational analysis of social networks: structures whose nodes represent people or other entities embedded in a social context, and whose edges represent interaction, collaboration, or influence between entities. Natural examples of social networks include the set of all scientists in a particular discipline, with edges
Laboratory for Computer Science, Massachusetts Institute of Technology. Email: [email protected]. Supported in part by an NSF Graduate Research Fellowship. Department of Computer Science, Cornell University. Email: [email protected]. Supported in part by a David and Lucile Packard Foundation Fellowship and NSF ITR Grant IIS0081334.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. CIKM'03, November 3-8, 2003, New Orleans, Louisiana, USA. Copyright 2003 ACM 1-58113-723-0/03/0011 ...$5.00.

joining pairs who have co-authored papers; the set of all employees in a large company, with edges joining pairs working on a common project; or a collection of business leaders, with edges joining pairs who have served together on a corporate board of directors. The availability of large, detailed datasets encoding such networks has stimulated extensive study of their properties and the identification of recurring structural features. (For a thorough recent survey, see [11].)

Social networks are highly dynamic objects; they grow and change quickly over time through the addition of new edges, signifying the appearance of new interactions in the underlying social structure. Understanding the mechanisms by which they evolve is a fundamental question that is still not well understood, and it forms the motivation for our work here. We define and study a basic computational problem underlying social network evolution, the link prediction problem: Given a snapshot of a social network at time t, we seek to accurately predict the edges that will be added to the network during the interval from time t to a given future time t′. In effect, the link prediction problem asks: to what extent can the evolution of a social network be modeled using features intrinsic to the network itself?

Consider a co-authorship network among scientists, for example. There are many reasons, exogenous to the network, why two scientists who have never written a paper together will do so in the next few years: for example, they may happen to become geographically close when one of them changes institutions. Such collaborations can be hard to predict. But one also senses that a large number of new collaborations are hinted at by the topology of the network: two scientists who are close in the network will have colleagues in common, and will travel in similar circles; this suggests that they themselves are more likely to collaborate in the near future. Our goal is to make this intuitive notion precise, and to understand which measures of proximity in a network lead to the most accurate link predictions. We find that a number of proximity measures lead to predictions that outperform chance by factors of 40 to 50, indicating that the network topology does indeed contain latent information from which to infer future interactions. Moreover, certain fairly subtle measures, involving infinite sums over paths in the network, often outperform more direct measures, such as shortest-path distances and numbers of shared neighbors.

We believe that a primary contribution of the present paper is in the area of network evolution models. While there has been a proliferation of such models in recent years (again see [11]), they have generally been evaluated only by asking whether they reproduce certain global structural features observed in real networks. As a result, it has been difficult to evaluate and compare different approaches on a principled footing. Link prediction, on the other hand, offers a natural basis for such evaluations: a network model is useful to the extent that it can support meaningful inferences from observed network data. One sees a related approach in recent work of Newman [10], who considers the correlation between certain network growth models and data on the appearance of edges of co-authorship networks. Concurrently with the present work, Popescul and Ungar [13] have also investigated a related formulation of the link prediction problem.

In addition to its role as a basic question in social network evolution, the link prediction problem could be relevant to a number of interesting current applications of social networks. Increasingly, for example, researchers in AI and data mining have argued that a large organization, such as a company, can benefit from the interactions within the informal social network among its members; these serve to supplement the official hierarchy imposed by the organization itself [8, 14]. Effective methods for link prediction could be used to analyze such a social network, and suggest promising interactions that have not yet been utilized within the organization. In a different vein, research in security has recently begun to emphasize the role of social network analysis, largely motivated by the problem of monitoring terrorist networks; link prediction in this context allows one to conjecture that particular individuals are interacting even though their interaction has not been directly observed.

              training period                      Core
              auths.    papers    edges            auths.    |Eold|    |Enew|
astro-ph      5343      5816      41852            1561      6178      5751
cond-mat      5469      6700      19881            1253      1899      1150
gr-qc         2122      3287      5724             486       519       400
hep-ph        5414      10254     17806            1790      6654      3294
hep-th        5241      9498      15842            1438      2311      1576

Figure 1: ArXiv sections from which networks were constructed: astrophysics, condensed matter, general relativity/quantum cosmology, and high energy physics (phenomenology and theory).

2. DATA AND EXPERIMENTAL SETUP

We model a social network as a graph G = ⟨V, E⟩ in which each edge e ∈ E represents an interaction between its endpoints at a particular time t(e). We record multiple interactions by parallel edges with different time-stamps. For times t < t′, let G[t, t′] denote the subgraph of G restricted to edges with time-stamps between t and t′. To formulate the link prediction problem, we choose a training interval [t0, t′0] and a test interval [t1, t′1] where t′0 < t1, and give an algorithm access to the network G[t0, t′0]; it must then output a list of edges, not present in G[t0, t′0], that are predicted to appear in the network G[t1, t′1].

For our experiments, we use co-authorship networks G obtained from papers found in five sections of the physics e-Print arXiv, www.arxiv.org. (See Figure 1.) Occasional syntactic anomalies were handled heuristically, and authors were identified by first initial and last name; this appears to introduce only a small amount of error due to ambiguous identifiers. Our training interval is the period [1994, 1996], and the test interval is [1997, 1999]. Denote the training interval subgraph G[1994, 1996] by Gcollab := ⟨A, Eold⟩, and let Enew denote the set of edges ⟨u, v⟩ where u and v co-author a paper during the test interval but not the training interval; these are the new interactions we are seeking to predict.

In evaluating link prediction methods, we focus on links between authors who have each written at least a minimum number of papers: we define the set Core to be all nodes incident to at least κtrain edges in the training interval and at least κtest edges in the test interval, where κtrain and κtest are both set to 3. Each link predictor p outputs a ranked list Lp of pairs in A × A − Eold; these are predicted new collaborations, in decreasing order of confidence. Define E∗new := Enew ∩ (Core × Core) and n := |E∗new|. Our performance measure for predictor p is then determined as follows: from the ranked list Lp, we take the first n pairs in Core × Core, and determine the size of the intersection of this set of pairs with the set E∗new.
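To make this protocol concrete, the following sketch (in Python, with illustrative names; it is not the authors' implementation) computes the performance measure from a predictor's ranked list, the Core set, and the test-interval collaborations:

    def evaluate_predictor(ranked_pairs, core, e_new):
        # ranked_pairs: candidate pairs (u, v) from A x A - Eold, in decreasing
        # order of confidence; core: the set of Core authors; e_new: the set of
        # new collaborations in the test interval, as frozenset({u, v}) pairs.
        e_new_star = {e for e in e_new if all(a in core for a in e)}
        n = len(e_new_star)

        # take the first n predicted pairs whose endpoints are both in Core
        top_pairs = []
        for u, v in ranked_pairs:
            if u in core and v in core:
                top_pairs.append(frozenset((u, v)))
                if len(top_pairs) == n:
                    break

        correct = sum(1 for p in top_pairs if p in e_new_star)
        return correct, n  # report correct/n, or its ratio to the random baseline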

3. METHODS FOR LINK PREDICTION

In this section, we survey an array of methods for link prediction. Each assigns a connection weight score(x, y) to pairs of nodes, producing a ranked list in decreasing order of score(x, y). A predictor can thus be viewed as computing a measure of proximity or similarity between nodes x and y, relative to the network topology. These predictors are adapted from techniques used in graph theory and social network analysis, and many must be modified from their original purposes to measure node-to-node similarity.

Perhaps the most basic approach is to rank pairs by the length of the shortest path between them in Gcollab. Such a measure follows the notion that collaboration networks are small worlds, in which individuals are related through short chains [11]. (We predict a random subset of pairs at distance two in Gcollab; distance-one pairs are edges in the training set Eold.)

Methods based on node neighborhoods. For a node x, let Γ(x) be the set of neighbors of x in Gcollab. Several approaches are based on the idea that two nodes x and y are more likely to form a link if Γ(x) and Γ(y) have large overlap; this follows the natural intuition that such node pairs represent authors with many colleagues in common, and hence are more likely to come into contact themselves [6].

Common neighbors. One can directly use this idea by setting score(x, y) := |Γ(x) ∩ Γ(y)|, the number of common neighbors of x and y. In collaboration networks, Newman [10] has verified a correlation between the number of common neighbors of x and y at time t, and the probability that they will collaborate in the future.

Jaccard's coefficient and Adamic/Adar. The Jaccard coefficient, commonly used in information retrieval [15], measures the number of features that both x and y have, compared to the number of features that either x or y has. Taking features to be neighbors in Gcollab, this leads to score(x, y) := |Γ(x) ∩ Γ(y)| / |Γ(x) ∪ Γ(y)|. Adamic and Adar [1] consider a related measure, in the context of deciding when two personal home pages are strongly related. They compute features of the pages, and define the similarity between two pages to be Σz : feature z shared by x, y 1/log(frequency(z)). This refines the simple counting of common features by weighting rarer features more heavily. This suggests the measure score(x, y) := Σz ∈ Γ(x) ∩ Γ(y) 1/log|Γ(z)|.

Preferential attachment has received considerable attention as a model of network growth [11]. The basic premise is that the probability that a new edge involves node x is proportional to |Γ(x)|. Barabasi et al. [2] and Newman [10] have further proposed, on the basis of empirical evidence, that the probability of co-authorship of x and y is correlated with the product of the number of collaborators of x and y, corresponding to the measure score(x, y) := |Γ(x)| · |Γ(y)|.
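These neighborhood-based scores are straightforward to compute from an adjacency structure. Below is a minimal Python sketch (illustrative only, not the authors' code), assuming the training graph is stored as a dict mapping each node to its set of neighbors, so that parallel edges play no role:

    import math

    def common_neighbors(adj, x, y):
        return len(adj[x] & adj[y])

    def jaccard(adj, x, y):
        union = adj[x] | adj[y]
        return len(adj[x] & adj[y]) / len(union) if union else 0.0

    def adamic_adar(adj, x, y):
        # each shared neighbor z contributes 1 / log |Gamma(z)|; a common
        # neighbor always has degree >= 2, so the logarithm is positive
        return sum(1.0 / math.log(len(adj[z])) for z in adj[x] & adj[y])

    def preferential_attachment(adj, x, y):
        return len(adj[x]) * len(adj[y])

    # toy graph with edges a-b, a-c, b-c, c-d
    adj = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'}, 'd': {'c'}}
    print(common_neighbors(adj, 'a', 'd'), round(adamic_adar(adj, 'a', 'd'), 2))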
Methods based on the ensemble of all paths. A number of methods refine the notion of shortest-path distance by implicitly considering the ensemble of all paths between two nodes. Katz [7] defines a measure that directly sums over this collection of paths, exponentially damped by length to count short paths more heavily. This leads to the measure score(x, y) := Σℓ=1..∞ β^ℓ · |paths⟨ℓ⟩x,y|, where paths⟨ℓ⟩x,y is the set of all length-ℓ paths from x to y. One can verify that the matrix of scores is given by (I − βM)^−1 − I, where M is the adjacency matrix of the graph. We consider both weighted Katz, in which paths⟨1⟩x,y is the number of parallel edges between x and y, and unweighted Katz, in which parallel edges are ignored.
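For a graph small enough to store as a dense matrix, the closed form above yields all Katz scores at once. A minimal numpy sketch (illustrative only); note that β must be smaller than the reciprocal of the largest eigenvalue of M for the series to converge:

    import numpy as np

    def katz_scores(M, beta=0.005):
        # score matrix (I - beta*M)^(-1) - I; entry (x, y) sums beta^l times
        # the number of length-l paths from x to y, over all l >= 1
        n = M.shape[0]
        return np.linalg.inv(np.eye(n) - beta * M) - np.eye(n)

    # toy 4-node path graph a-b-c-d (unweighted adjacency matrix)
    M = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(katz_scores(M, beta=0.05)[0, 3])  # Katz score between the two endpoints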
Hitting time, PageRank, and variants. A random walk on Gcollab starts at a node x, and iteratively moves to a neighbor of x chosen uniformly at random. The hitting time Hx,y from x to y is the expected number of steps required for a random walk starting at x to reach y. We also consider the symmetric commute time Cx,y := Hx,y + Hy,x. Both of these measures serve as natural proximity measures, and hence (negated) can be used as score(x, y). One difficulty with hitting time is that Hx,y is quite small whenever y is a node with a large stationary probability πy, regardless of the identity of x. Thus we also consider normalized measures, Hx,y · πy or (Hx,y · πy + Hy,x · πx). Another difficulty with these measures is their sensitive dependence on parts of the graph far away from x and y, even when x and y are connected by very short paths. A way of counteracting this is to allow the random walk from x to y to periodically reset, returning to x with a fixed probability α at each step; in this way, distant parts of the graph will almost never be explored. Random resets form the basis of the PageRank measure for Web pages [3], and we can adapt it for link prediction as follows: define the rooted PageRank measure to be the stationary probability of y in a random walk that returns to x with probability α at each step, moving to a random neighbor with probability 1 − α.
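Rooted PageRank can be approximated by a short power iteration that, at every step, sends a fraction α of the probability mass back to the root. The following Python sketch is illustrative only (it assumes every node has at least one neighbor) and is not the authors' implementation:

    import numpy as np

    def rooted_pagerank(adj, root, alpha=0.15, iters=100):
        nodes = list(adj)
        idx = {v: i for i, v in enumerate(nodes)}
        p = np.zeros(len(nodes))
        p[idx[root]] = 1.0                      # all mass starts at the root
        for _ in range(iters):
            nxt = np.zeros_like(p)
            for v in nodes:
                share = (1.0 - alpha) * p[idx[v]] / len(adj[v])
                for w in adj[v]:                # move to a uniformly random neighbor
                    nxt[idx[w]] += share
            nxt[idx[root]] += alpha * p.sum()   # reset to the root with probability alpha
            p = nxt
        return {v: p[idx[v]] for v in nodes}

    adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
    print(rooted_pagerank(adj, 'a')['c'])       # score('a', 'c')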
SimRank [5] is a fixed point of the following recursive definition: two nodes are similar insofar as they are joined to similar neighbors. Numerically, we define score(x, x) := 1 and, for some γ ∈ [0, 1],

score(x, y) := γ · (Σa ∈ Γ(x) Σb ∈ Γ(y) score(a, b)) / (|Γ(x)| · |Γ(y)|).

SimRank can be interpreted in terms of a random walk on Gcollab: it is the expected value of γ^ℓ, where ℓ is a random variable giving the time at which random walks started from x and y first meet.
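The SimRank recursion is typically evaluated by fixed-point iteration: start from the identity scores and repeatedly apply the definition until the values stabilize. A small illustrative Python sketch (not the authors' code):

    def simrank(adj, gamma=0.8, iters=10):
        nodes = list(adj)
        # initialize: score(x, x) = 1, score(x, y) = 0 otherwise
        score = {(x, y): 1.0 if x == y else 0.0 for x in nodes for y in nodes}
        for _ in range(iters):
            new = {}
            for x in nodes:
                for y in nodes:
                    if x == y:
                        new[(x, y)] = 1.0
                    elif adj[x] and adj[y]:
                        total = sum(score[(a, b)] for a in adj[x] for b in adj[y])
                        new[(x, y)] = gamma * total / (len(adj[x]) * len(adj[y]))
                    else:
                        new[(x, y)] = 0.0
            score = new
        return score

    adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
    print(simrank(adj)[('a', 'c')])  # walks from 'a' and 'c' meet after one step: gamma^1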
Higher-level approaches. We now discuss three meta-approaches that can be used in conjunction with any of the above methods.

Low-rank approximation. All our link prediction methods can be formulated in terms of the adjacency matrix M. For example, common neighbors of two nodes can be computed as the inner product between the two corresponding rows of M. A common technique when analyzing a large matrix M is to choose a relatively small number k and compute the rank-k matrix Mk that best approximates M under any of a number of standard matrix norms. This can be done efficiently using the singular value decomposition, and it forms the core of methods like latent semantic analysis [4]. Intuitively, this can be viewed as a type of noise-reduction technique that preserves most of the structure in the matrix. We consider three applications of low-rank approximation: (i) the Katz measure, using Mk rather than M in the underlying formula; (ii) common neighbors, using inner products of rows in Mk rather than M; and, most simply of all, (iii) defining score(x, y) to be the (x, y) entry in the matrix Mk.

Unseen bigrams. Link prediction is akin to the problem of estimating frequencies of unseen bigrams in language modeling: pairs of words that co-occur in a test corpus, but not in the corresponding training corpus (see, e.g., [9]). Following ideas in that literature, we can improve score(x, y) using values of score(z, y) for nodes z that are similar to x. Suppose we have values score(x, y) computed under one of the measures above. Let Sx denote the δ nodes most related to x under score(x, ·), for a parameter δ > 0. We then define enhanced scores in terms of these nodes: score∗(x, y) := |{z : z ∈ Γ(y) ∩ Sx}| or score∗wtd(x, y) := Σz ∈ Γ(y) ∩ Sx score(x, z).

Clustering. We can also try to improve the quality of a predictor by deleting the more tenuous edges in Gcollab through a clustering procedure, and then running the predictor on the resulting cleaned-up subgraph. Specifically, consider a measure computing values for score(x, y). We compute score(u, v) for all edges in Eold, and delete the (1 − ρ) fraction of these edges for which the score is lowest. We now re-compute score(x, y) for all pairs x, y on this subgraph.
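For the low-rank meta-approach, the rank-k approximation itself is only a few lines using numpy's singular value decomposition. The sketch below (illustrative only) shows the matrix-entry variant (iii); the other two variants substitute Mk for M in the common-neighbors inner product or in the Katz formula:

    import numpy as np

    def low_rank_approximation(M, k):
        # best rank-k approximation of M under the standard matrix norms,
        # obtained by truncating the singular value decomposition
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k, :]

    M = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    Mk = low_rank_approximation(M, k=2)
    print(Mk[0, 3])  # score(x, y) under the matrix-entry variant (iii)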

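Similarly, the unseen-bigram enhancement can be layered on top of any base predictor. The sketch below (Python, illustrative only; base_score is assumed to be a dict of precomputed scores keyed by node pairs) implements the unweighted and weighted variants defined above:

    def top_delta(base_score, adj, x, delta):
        # S_x: the delta nodes most related to x under the base score
        others = [z for z in adj if z != x]
        others.sort(key=lambda z: base_score.get((x, z), 0.0), reverse=True)
        return set(others[:delta])

    def unseen_bigram_score(base_score, adj, x, y, delta=8, weighted=False):
        s_x = top_delta(base_score, adj, x, delta)
        if weighted:
            return sum(base_score.get((x, z), 0.0) for z in adj[y] & s_x)
        return len(adj[y] & s_x)   # number of y's neighbors among S_x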
4. RESULTS AND DISCUSSION

In Figure 2, we show each predictor's performance on each arXiv section, in terms of the factor improvement over random predictions. (Many collaborations form for reasons outside the scope of the network, so improvement over random is arguably more meaningful here than raw performance.) A number of methods significantly outperform random, suggesting that the network topology alone does contain useful information; the Katz measure and its variants perform consistently well, and some of the very simple measures (e.g., common neighbors and the Adamic/Adar measure) also perform well. At the same time, there is clearly much room for improvement in performance on this task, and finding ways to take better advantage of the information in the training data is an interesting open question. Another issue is to improve the efficiency of the proximity-based methods on very large networks; fast algorithms for approximating the distribution of node-to-node distances may be one approach [12].

The fact that collaboration networks form a small world (i.e., there are short paths connecting almost all pairs of scientists [11]) is normally viewed as vital to the scientific community. In our context, though, this implies that there are often very short (and very tenuous) paths between two scientists in unrelated disciplines; this suggests why the basic graph distance predictor is not competitive with most of the other approaches studied. Our most successful link predictors can be viewed as using measures of proximity that are robust to the few edges that result from rare collaborations between fields.

Performance of the low-rank approximation methods tends to be best at an intermediate rank, but on gr-qc they perform best at rank 1. This suggests a sense in which the collaborations in gr-qc have a much simpler structure. One also observes the apparent importance of node degree in the hep-ph collaborations: the preferential attachment predictor does uncharacteristically well on this dataset, outperforming the basic graph distance predictor.

Certain of the methods show high overlap in the predictions they make; one such cluster of methods is Katz, low-rank inner product, and Adamic/Adar. It would be interesting to understand the generality of these overlap phenomena, especially since some of the large overlaps (such as the one just mentioned) do not seem to follow obviously from the definitions of the measures.

Given the low performance of the predictors on astro-ph (and the fact that none beats simple ranking by common neighbors), it is an interesting challenge to formalize a sense in which it is a difficult dataset. By running our predictors on some other datasets, we have discovered that performance swells dramatically as the topical focus of the dataset widens. In a narrow field, almost anyone can collaborate with anyone else, and new collaborations are largely random. It would be interesting to make precise a sense in which such new collaborations are simply not predictable from the training data.

Acknowledgements. We thank Tommi Jaakkola, Lillian Lee, Frank McSherry, and Grant Wang for helpful discussions, and Paul Ginsparg for generously providing arXiv bibliographic data.

predictor | astro-ph | cond-mat | gr-qc | hep-ph | hep-th
probability that a random prediction is correct | 0.475% | 0.147% | 0.341% | 0.207% | 0.153%
graph distance (all distance-two pairs) | 9.6 | 25.3 | 21.4 | 12.2 | 29.2
common neighbors | 18.0 | 41.1 | 27.2 | 27.0 | 47.2
preferential attachment | 4.7 | 6.1 | 7.6 | 15.2 | 7.5
Adamic/Adar | 16.8 | 54.8 | 30.1 | 33.3 | 50.5
Jaccard | 16.4 | 42.3 | 19.9 | 27.7 | 41.7
SimRank, γ = 0.8 | 14.6 | 39.3 | 22.8 | 26.1 | 41.7
hitting time | 6.5 | 23.8 | 25.0 | 3.8 | 13.4
hitting time, normed by stationary distribution | 5.3 | 23.8 | 11.0 | 11.3 | 21.3
commute time | 5.2 | 15.5 | 33.1 | 17.1 | 23.4
commute time, normed by stationary distribution | 5.3 | 16.1 | 11.0 | 11.3 | 16.3
rooted PageRank, α = 0.01 | 10.8 | 28.0 | 33.1 | 18.7 | 29.2
rooted PageRank, α = 0.05 | 13.8 | 39.9 | 35.3 | 24.6 | 41.3
rooted PageRank, α = 0.15 | 16.6 | 41.1 | 27.2 | 27.6 | 42.6
rooted PageRank, α = 0.30 | 17.1 | 42.3 | 25.0 | 29.9 | 46.8
rooted PageRank, α = 0.50 | 16.8 | 41.1 | 24.3 | 30.7 | 46.8
Katz (weighted), β = 0.05 | 3.0 | 21.4 | 19.9 | 2.4 | 12.9
Katz (weighted), β = 0.005 | 13.4 | 54.8 | 30.1 | 24.0 | 52.2
Katz (weighted), β = 0.0005 | 14.5 | 54.2 | 30.1 | 32.6 | 51.8
Katz (unweighted), β = 0.05 | 10.9 | 41.7 | 37.5 | 18.7 | 48.0
Katz (unweighted), β = 0.005 | 16.8 | 41.7 | 37.5 | 24.2 | 49.7
Katz (unweighted), β = 0.0005 | 16.8 | 41.7 | 37.5 | 24.9 | 49.7
low-rank approximation (inner product), rank = 1024 | 15.2 | 54.2 | 29.4 | 34.9 | 50.1
low-rank approximation (inner product), rank = 256 | 14.6 | 47.1 | 29.4 | 32.4 | 47.2
low-rank approximation (inner product), rank = 64 | 13.0 | 44.7 | 27.2 | 30.8 | 47.6
low-rank approximation (inner product), rank = 16 | 10.1 | 21.4 | 31.6 | 27.9 | 35.5
low-rank approximation (inner product), rank = 4 | 8.8 | 15.5 | 42.6 | 19.6 | 23.0
low-rank approximation (inner product), rank = 1 | 6.9 | 6.0 | 44.9 | 17.7 | 14.6
low-rank approximation (matrix entry), rank = 1024 | 8.2 | 16.7 | 6.6 | 18.6 | 21.7
low-rank approximation (matrix entry), rank = 256 | 15.4 | 36.3 | 8.1 | 26.2 | 37.6
low-rank approximation (matrix entry), rank = 64 | 13.8 | 46.5 | 16.9 | 28.1 | 40.9
low-rank approximation (matrix entry), rank = 16 | 9.1 | 21.4 | 26.5 | 23.1 | 34.2
low-rank approximation (matrix entry), rank = 4 | 8.8 | 15.5 | 39.7 | 20.0 | 22.5
low-rank approximation (matrix entry), rank = 1 | 6.9 | 6.0 | 44.9 | 17.7 | 14.6
low-rank approximation (Katz, β = 0.005), rank = 1024 | 11.4 | 27.4 | 30.1 | 27.1 | 32.1
low-rank approximation (Katz, β = 0.005), rank = 256 | 15.4 | 42.3 | 11.0 | 34.3 | 38.8
low-rank approximation (Katz, β = 0.005), rank = 64 | 13.1 | 45.3 | 19.1 | 32.3 | 41.3
low-rank approximation (Katz, β = 0.005), rank = 16 | 9.2 | 21.4 | 27.2 | 24.9 | 35.1
low-rank approximation (Katz, β = 0.005), rank = 4 | 7.0 | 15.5 | 41.2 | 19.7 | 23.0
low-rank approximation (Katz, β = 0.005), rank = 1 | 0.4 | 6.0 | 44.9 | 17.7 | 14.6
unseen bigrams (weighted), common neighbors, δ = 8 | 13.5 | 36.9 | 30.1 | 15.6 | 47.2
unseen bigrams (weighted), common neighbors, δ = 16 | 13.4 | 39.9 | 39.0 | 18.6 | 48.8
unseen bigrams (weighted), Katz (β = 0.005), δ = 8 | 16.9 | 38.1 | 25.0 | 24.2 | 51.3
unseen bigrams (weighted), Katz (β = 0.005), δ = 16 | 16.5 | 39.9 | 35.3 | 24.8 | 50.9
unseen bigrams (unweighted), common neighbors, δ = 8 | 14.2 | 40.5 | 27.9 | 22.3 | 39.7
unseen bigrams (unweighted), common neighbors, δ = 16 | 15.3 | 39.3 | 42.6 | 22.1 | 42.6
unseen bigrams (unweighted), Katz (β = 0.005), δ = 8 | 13.1 | 36.9 | 32.4 | 21.7 | 38.0
unseen bigrams (unweighted), Katz (β = 0.005), δ = 16 | 10.3 | 29.8 | 41.9 | 12.2 | 38.0
clustering (Katz, β1 = 0.001, β2 = 0.1), ρ = 0.10 | 7.4 | 37.5 | 47.1 | 33.0 | 38.0
clustering (Katz, β1 = 0.001, β2 = 0.1), ρ = 0.15 | 12.0 | 46.5 | 47.1 | 21.1 | 44.2
clustering (Katz, β1 = 0.001, β2 = 0.1), ρ = 0.20 | 4.6 | 34.5 | 19.9 | 21.2 | 35.9
clustering (Katz, β1 = 0.001, β2 = 0.1), ρ = 0.25 | 3.3 | 27.4 | 20.6 | 19.5 | 17.5

Figure 2: Performance of link predictors on the task defined in Section 2. For each predictor and each arXiv section, the given number specifies the factor improvement over random prediction. Italicized entries have performance at least as good as the graph distance predictor; bold entries are at least as good as the common neighbors predictor.

5. REFERENCES

[1] L. Adamic, E. Adar. Friends and neighbors on the web. Soc. Networks, 25(3), 2003.
[2] A. Barabasi, H. Jeong, Z. Neda, E. Ravasz, A. Schubert, T. Vicsek. Evolution of the social network of scientific collaboration. Physica A, 311(3-4), 2002.
[3] S. Brin, L. Page. The anatomy of a large-scale hypertextual Web search engine. Comput. Networks ISDN, 1998.
[4] S. Deerwester, S. Dumais, G. Furnas, T. Landauer, R. Harshman. Indexing by latent semantic analysis. J. Am. Soc. Inform. Sci., 41(6), 1990.
[5] G. Jeh, J. Widom. SimRank: A measure of structural-context similarity. In KDD, 2002.
[6] E. Jin, M. Girvan, M. Newman. The structure of growing social networks. Phys. Rev. E, 64(046132), 2001.
[7] L. Katz. A new status index derived from sociometric analysis. Psychometrika, 18(1), March 1953.
[8] H. Kautz, B. Selman, M. Shah. ReferralWeb: Combining social networks and collaborative filtering. CACM, 1997.
[9] L. Lee. Measures of distributional similarity. In ACL, 1999.
[10] M. Newman. Clustering and preferential attachment in growing networks. Phys. Rev. E, 64(025102), 2001.
[11] M. Newman. The structure and function of complex networks. SIAM Review, 45:167-256, 2003.
[12] C. Palmer, P. Gibbons, C. Faloutsos. ANF: A fast and scalable tool for data mining in massive graphs. In KDD, 2002.
[13] A. Popescul, L. Ungar. Statistical relational learning for link prediction. In Workshop on Learning Statistical Models from Relational Data, IJCAI, 2003.
[14] P. Raghavan. Social networks: From the web to the enterprise. IEEE Internet Comp., Jan/Feb 2002.
[15] G. Salton, M. McGill. Introduction to Modern Information Retrieval. McGraw-Hill, 1983.
