Publications
Preprints
Manuscripts in preparation or under review
Ryan A. Rossi, David F. Gleich, Assefaw H. Gebremedhin, and Md. Mostofa Ali Patwary. A fast parallel maximum clique algorithm for large sparse graphs and temporal strong components. arXiv, cs.SI:1302.6256, 2013. [ bib | http ]
Austin Benson, David F. Gleich, and James Demmel. Direct tall-and-skinny QR factorizations in MapReduce architectures. arXiv, cs.DC:1301.1071, 2013. [ bib | http ]
David F. Gleich and Ryan A. Rossi. A dynamical system for PageRank with time-dependent teleportation. arXiv, cs.SI:1211.4266, 2012. [ bib | http ]
In press
Just waiting on the final publication details for these.
David F. Gleich, Chen Greif, and James M. Varah. The power and Arnoldi methods in an algebra of circulants. Numerical Linear Algebra with Applications, Accepted. [ bib | DOI | local ]
Circulant matrices play a central role in a recently proposed formulation of three-way data computations. In this setting, a three-way table corresponds to a matrix where each "scalar" is a vector of parameters defining a circulant. This interpretation provides many generalizations of results from matrix or vector-space algebra. We derive the power and Arnoldi methods in this algebra. In the course of our derivation, we define inner products, norms, and other notions. These extensions are straightforward in an algebraic sense, but the implications are dramatically different from the standard matrix case. For example, a matrix of circulants has a polynomial number of eigenvalues in its dimension, although these can all be represented by a carefully chosen canonical set of eigenvalues and vectors. These results and algorithms are closely related to standard decoupling techniques on block-circulant matrices using the fast Fourier transform.
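The decoupling mentioned at the end of the abstract is easy to demonstrate numerically: the FFT along the parameter axis turns a matrix of circulants into independent ordinary matrices, whose spectra together give the polynomial number of eigenvalues. A toy sketch (dimensions and random data are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 8
A = rng.random((n, n, k))   # an n x n matrix whose "scalars" are circulants with k parameters

# The FFT along the parameter axis block-diagonalizes every circulant, so the
# eigenvalues of the matrix of circulants are exactly the eigenvalues of the k
# ordinary n x n matrices A_hat[:, :, j] -- a polynomial (n * k) number of them.
A_hat = np.fft.fft(A, axis=2)
eigs = [np.linalg.eigvals(A_hat[:, :, j]) for j in range(k)]
n_eigs = sum(len(e) for e in eigs)   # the canonical set: n * k eigenvalues
```

A power or Arnoldi iteration in the algebra likewise reduces to k independent ordinary iterations in the Fourier domain, followed by an inverse FFT.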
Journal publications
Mohsen Bayati, David F. Gleich, Amin Saberi, and Ying Wang. Message-passing algorithms for sparse network alignment. ACM Trans. Knowl. Discov. Data, 7(1):3:1-3:31, March 2013. [ bib | DOI | local ]
David F. Gleich and Art B. Owen. Moment based estimation of stochastic Kronecker graph parameters. Internet Mathematics, 8(3):232-256, August 2012. [ bib | DOI | local ]
Stochastic Kronecker graphs supply a parsimonious model for large sparse real-world graphs. They can specify the distribution of a large random graph using only three or four parameters. Those parameters have, however, proved difficult to choose in specific applications. This article looks at method-of-moments estimators that are computationally much simpler than maximum likelihood. The estimators are fast, and in our examples, they typically yield Kronecker parameters with expected feature counts closer to a given graph than we get from KronFit. The improvement is especially prominent for the number of triangles in the graph.
Francesco Bonchi, Pooya Esfandiar, David F. Gleich, Chen Greif, and Laks V.S. Lakshmanan. Fast matrix computations for pairwise and columnwise commute times and Katz scores. Internet Mathematics, 8(1-2):73-112, 2012. [ bib | DOI | local ]
We explore methods for approximating the commute time and Katz score between a pair of nodes. These methods are based on the approach of matrices, moments, and quadrature developed in the numerical linear algebra community. They rely on the Lanczos process and provide upper and lower bounds on an estimate of the pairwise scores. We also explore methods to approximate the commute times and Katz scores from a node to all other nodes in the graph. Here, our approach for the commute times is based on a variation of the conjugate gradient algorithm, and it provides an estimate of all the diagonals of the inverse of a matrix. Our technique for the Katz scores is based on exploiting an empirical localization property of the Katz matrix. We adapt algorithms used for personalized PageRank computing to these Katz scores and theoretically show that this approach is convergent. We evaluate these methods on 17 real-world graphs ranging in size from 1000 to 1,000,000 nodes. Our results show that our pairwise commute-time method and columnwise Katz algorithm both have attractive theoretical properties and empirical performance.
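For orientation, the pairwise commute time has a closed form through the pseudoinverse of the graph Laplacian; a dense sketch of that definition follows (fine for toy graphs, and not the paper's Lanczos/quadrature machinery, which avoids forming any inverse):

```python
import numpy as np

A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])          # a 4-node undirected toy graph
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
L_pinv = np.linalg.pinv(L)                # dense pseudoinverse; fine at this scale
vol = A.sum()                             # 2 * (number of edges)

def commute_time(i, j):
    # commute time = vol(G) * effective resistance between i and j
    e = np.zeros(A.shape[0]); e[i] += 1.0; e[j] -= 1.0
    return vol * (e @ L_pinv @ e)
```

The quantity is symmetric in its arguments and zero on the diagonal, which makes a handy sanity check for any approximation scheme.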
Paul G. Constantine, David F. Gleich, and Gianluca Iaccarino. A factorization of the spectral Galerkin system for parameterized matrix equations: derivation and applications. SIAM Journal on Scientific Computing, 33(5):2995-3009, 2011. [ bib | DOI | local ]
Recent work has explored solver strategies for the linear system of equations arising from a spectral Galerkin approximation of the solution of PDEs with parameterized (or stochastic) inputs. We consider the related problem of a matrix equation whose matrix and right-hand side depend on a set of parameters (e.g., a PDE with stochastic inputs semidiscretized in space) and examine the linear system arising from a similar Galerkin approximation of the solution. We derive a useful factorization of this system of equations, which yields bounds on the eigenvalues, clues to preconditioning, and a flexible implementation method for a wide array of problems. We complement this analysis with (i) a numerical study of preconditioners on a standard elliptic PDE test problem and (ii) a fluids application using existing CFD codes; the MATLAB codes used in the numerical studies are available online.
David F. Gleich, Ying Wang, Xiangrui Meng, Farnaz Ronaghi, Margot Gerritsen, and Amin Saberi. Some computational tools for digital archive and metadata maintenance. BIT Numerical Mathematics, 51:127-154, 2011. [ bib | DOI | local ]
Computational tools are a mainstay of current search and recommendation technology. But modern digital archives are astonishingly diverse collections of older digitized material and newer born-digital content. Finding interesting material in these archives is still challenging. The material often lacks the annotation, or metadata, that helps people find the most interesting items. We describe four computational tools we developed to aid in the processing and maintenance of large digital archives. The first is an improvement to a graph layout algorithm for graphs with hundreds of thousands of nodes. The second is a new algorithm for matching databases with links among the objects, also known as a network alignment problem. The third is an optimization heuristic to disambiguate a set of geographic references in a book. And the fourth is a technique to automatically generate a title from a description.
Paul G. Constantine and David F. Gleich. Random alpha PageRank. Internet Mathematics, 6(2):189-236, September 2010. [ bib | DOI | local | http ]
We suggest a revision to the PageRank random surfer model that considers the influence of a population of random surfers on the PageRank vector. In the revised model, each member of the population has its own teleportation parameter chosen from a probability distribution, and consequently, the ranking vector is random. We propose three algorithms for computing the statistics of the random ranking vector based respectively on (i) random sampling, (ii) paths along the links of the underlying graph, and (iii) quadrature formulas. We find that the expectation of the random ranking vector produces similar rankings to its deterministic analogue, but the standard deviation gives uncorrelated information (under a Kendall-tau metric) with myriad potential uses. We examine applications of this model to web spam.
David F. Gleich, Andrew P. Gray, Chen Greif, and Tracy Lau. An inner-outer iteration for PageRank. SIAM Journal on Scientific Computing, 32(1):349-371, February 2010. [ bib | DOI | local ]
We present a new iterative scheme for PageRank computation. The algorithm is applied to the linear system formulation of the problem, using inner-outer stationary iterations. It is simple, can be easily implemented and parallelized, and requires minimal storage overhead. Our convergence analysis shows that the algorithm is effective for a crude inner tolerance and is not sensitive to the choice of the parameters involved. The same idea can be used as a preconditioning technique for nonstationary schemes. Numerical examples featuring matrices of dimensions exceeding 100,000,000 in sequential and parallel environments demonstrate the merits of our technique. Our code is available online for viewing and testing, along with several large scale examples.
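The scheme itself fits in a few lines. A dense-matrix sketch of the inner-outer idea (parameter values illustrative; the paper targets large sparse problems and includes the convergence analysis):

```python
import numpy as np

def inner_outer_pagerank(P, v, alpha=0.85, beta=0.5, tol=1e-10, inner_tol=1e-2):
    """Inner-outer iteration for the linear system (I - alpha P) x = (1 - alpha) v.
    The inner parameter beta < alpha gives a better-conditioned inner system,
    and a crude inner tolerance suffices."""
    x = v.copy()
    while np.linalg.norm(alpha * (P @ x) + (1 - alpha) * v - x, 1) >= tol:
        f = (alpha - beta) * (P @ x) + (1 - alpha) * v   # outer right-hand side
        y = f + beta * (P @ x)                           # at least one inner step
        while np.linalg.norm(f + beta * (P @ y) - y, 1) >= inner_tol:
            y = f + beta * (P @ y)                       # stationary inner iteration
        x = y
    return x

P = np.array([[0., 0.5, 1.],
              [0.5, 0., 0.],
              [0.5, 0.5, 0.]])   # a tiny column-stochastic link matrix
v = np.ones(3) / 3
x = inner_outer_pagerank(P, v)
```

Note that if the inner solve is satisfied immediately, the update degenerates to an ordinary power-method step, so the outer residual always decreases.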
Paul G. Constantine, David F. Gleich, and Gianluca Iaccarino. Spectral methods for parameterized matrix equations. SIAM Journal on Matrix Analysis and Applications, 31(5):2681-2699, 2010. [ bib | DOI | local ]
We apply polynomial approximation methods—known in the numerical PDEs context as spectral methods—to approximate the vector-valued function that satisfies a linear system of equations where the matrix and the right-hand side depend on a parameter. We derive both an interpolatory pseudospectral method and a residual-minimizing Galerkin method, and we show how each can be interpreted as solving a truncated infinite system of equations; the difference between the two methods lies in where the truncation occurs. Using classical theory, we derive asymptotic error estimates related to the region of analyticity of the solution, and we present a practical residual error estimate. We verify the results with two numerical examples.
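A pseudospectral method in this setting amounts to solving the system at quadrature nodes and combining the solutions. A toy sketch with a hypothetical 2x2 parameterized matrix and Gauss-Legendre points (the matrix, right-hand side, and node count are all illustrative):

```python
import numpy as np

def A(s):   # a hypothetical parameterized matrix, symmetric positive definite on [-1, 1]
    return np.array([[2.0 + s, 1.0], [1.0, 3.0 + s]])

b = np.array([1.0, 1.0])
pts, wts = np.polynomial.legendre.leggauss(8)    # Gauss-Legendre nodes on [-1, 1]
X = np.array([np.linalg.solve(A(s), b) for s in pts])
mean_x = (wts[:, None] * X).sum(axis=0) / 2.0    # E[x(s)] for s ~ Uniform[-1, 1]
```

From the same solves one can build the full interpolating polynomial of x(s), not just its mean; the Galerkin variant instead minimizes a residual over the polynomial space.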
Qiqi Wang, David F. Gleich, Amin Saberi, Nasrollah Etemadi, and Parviz Moin. A Monte Carlo method for solving unsteady adjoint equations. Journal of Computational Physics, 227(12):6184-6205, June 2008. [ bib | DOI | local ]
Traditionally, solving the adjoint equation for unsteady problems involves solving a large, structured linear system. This paper presents a variation on this technique and uses a Monte Carlo linear solver. The Monte Carlo solver yields a forward-time algorithm for solving unsteady adjoint equations. When applied to computing the adjoint associated with Burgers' equation, the Monte Carlo approach is faster for a large class of problems while preserving sufficient accuracy.
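As background, a Monte Carlo linear solver estimates the Neumann series of x = Hx + b by random walks carrying importance weights. A small illustrative sketch (not the paper's exact scheme; the matrix and walk counts are made up for demonstration):

```python
import numpy as np

def monte_carlo_solve(H, b, walks_per_row=1000, max_steps=25, rng=None):
    """Monte Carlo estimate of the solution of x = H x + b, i.e. the Neumann
    series sum_m H^m b (requires spectral radius of H below 1). Walks follow
    transitions proportional to |H| and carry multiplicative weights."""
    rng = rng or np.random.default_rng(0)
    n = H.shape[0]
    p = np.abs(H) / np.abs(H).sum(axis=1, keepdims=True)   # walk transitions
    x = np.zeros(n)
    for i in range(n):
        total = 0.0
        for _ in range(walks_per_row):
            state, w = i, 1.0
            total += b[state]                       # the m = 0 term
            for _ in range(max_steps):              # truncate the series
                nxt = rng.choice(n, p=p[state])
                w *= H[state, nxt] / p[state, nxt]  # importance weight
                state = nxt
                total += w * b[state]
        x[i] = total / walks_per_row
    return x

H = np.full((3, 3), 0.1)                            # spectral radius 0.3 < 1
b = np.array([1.0, 2.0, 3.0])
x_mc = monte_carlo_solve(H, b)
```

The estimate is unbiased up to the series truncation, and the walks for different entries are independent, which is what makes the approach parallel and forward-in-time friendly.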
David F. Gleich and Marzia Polito. Approximating personalized PageRank with minimal use of webgraph data. Internet Mathematics, 3(3):257-294, December 2007. [ bib | DOI | local ]
In this paper, we consider the problem of calculating fast and accurate approximations to the personalized PageRank score of a webpage. We focus on techniques to improve speed by limiting the amount of web graph data we need to access. Our algorithms provide both the approximation to the personalized PageRank score as well as guidance in using only the necessary information—and therefore sensibly reduce not only the computational cost of the algorithm but also the memory and memory bandwidth requirements. We report experiments with these algorithms on web graphs of up to 118 million pages and prove a theoretical approximation bound for all. Finally, we propose a local, personalized web-search system for a future client system using our algorithms.
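The "limit graph access" strategy is easiest to see in a Gauss-Southwell ("push") style of iteration, which only reads the out-links of the node currently holding the largest residual. A dense toy sketch in that spirit (related to, but not identical to, the paper's algorithms):

```python
import numpy as np

def ppr_push(P, seed, alpha=0.85, eps=1e-10):
    """Gauss-Southwell sweep for (I - alpha P) x = (1 - alpha) e_seed.
    Each step accesses only one column of P (one node's out-links), which is
    how the amount of web graph data read can be limited."""
    n = P.shape[0]
    x = np.zeros(n)
    r = np.zeros(n); r[seed] = 1.0 - alpha      # initial residual
    while True:
        i = int(np.argmax(r))
        if r[i] < eps:
            return x
        ri = r[i]
        x[i] += ri
        r[i] = 0.0
        r += alpha * ri * P[:, i]               # push mass along i's links

P = np.array([[0., 0.5, 1.],
              [0.5, 0., 0.],
              [0.5, 0.5, 0.]])   # a tiny column-stochastic link matrix
x = ppr_push(P, seed=0)
```

Each push removes a (1 - alpha) fraction of the residual mass it touches, so the residual 1-norm decays geometrically and the loop terminates.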
Refereed conference publications
Reid Andersen, David F. Gleich, and Vahab Mirrokni. Overlapping clusters for distributed computation. In Proceedings of the fifth ACM international conference on Web search and data mining, WSDM '12, pages 273-282, New York, NY, USA, 2012. ACM. [ bib | DOI | local | http ]
Paul G. Constantine and David F. Gleich. Distinguishing signal from noise in an SVD of simulation data. In Proceedings of the IEEE Conference on Acoustics, Speech, and Signal Processing, pages 5333-5336, 2012. [ bib | DOI | local ]
David F. Gleich and C. Seshadhri. Vertex neighborhoods, low conductance cuts, and good seeds for local community methods. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '12, pages 597-605, 2012. [ bib | DOI | local ]
Arif Khan, David F. Gleich, Mahantesh Halappanavar, and Alex Pothen. A multicore algorithm for network alignment via approximate matching. In Proceedings of the 2012 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis, SC '12, pages 64:1-64:11, Los Alamitos, CA, USA, 2012. IEEE Computer Society Press. [ bib | .pdf ]
Ryan A. Rossi and David F. Gleich. Dynamic PageRank using evolving teleportation. In Anthony Bonato and Jeannette Janssen, editors, Algorithms and Models for the Web Graph, volume 7323 of Lecture Notes in Computer Science, pages 126-137. Springer Berlin Heidelberg, 2012. [ bib | DOI | local ]
The importance of nodes in a network constantly fluctuates based on changes in the network structure as well as changes in external interest. We propose an evolving teleportation adaptation of the PageRank method to capture how changes in external interest influence the importance of a node. This framework seamlessly generalizes PageRank because the importance of a node will converge to the PageRank values if the external influence stops changing. We demonstrate the effectiveness of the evolving teleportation on the Wikipedia graph and the Twitter social network. The external interest is given by the number of hourly visitors to each page and the number of monthly tweets for each user.
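A minimal sketch of the underlying dynamical system, x'(t) = (1 - alpha) v(t) + alpha P x(t) - x(t), integrated with forward Euler; with a static v it relaxes to ordinary PageRank, which is the convergence property the abstract describes (matrix and step size are illustrative):

```python
import numpy as np

def evolving_pagerank(P, v_of_t, alpha=0.85, h=0.1, T=100.0):
    """Forward-Euler integration of x'(t) = (1-alpha) v(t) + alpha P x(t) - x(t).
    With a static teleportation vector this relaxes to the PageRank vector."""
    n = P.shape[0]
    x = np.ones(n) / n
    for k in range(int(T / h)):
        x = x + h * ((1 - alpha) * v_of_t(k * h) + alpha * (P @ x) - x)
    return x

P = np.array([[0., 0.5, 1.],
              [0.5, 0., 0.],
              [0.5, 0.5, 0.]])              # a tiny column-stochastic link matrix
v = np.ones(3) / 3
x = evolving_pagerank(P, lambda t: v)       # static v as a sanity check
```

Feeding a time-varying v_of_t (e.g., hourly page-view counts, normalized) gives the evolving-teleportation scores.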
Reid Andersen, David F. Gleich, and Vahab S. Mirrokni. Overlapping clusters for distributed computation. In Poster proceedings of the SIAM Workshop on Combinatorial and Scientific Computing (CSC), 2011. Poster. [ bib | local ]
Paul G. Constantine and David F. Gleich. Tall and skinny QR factorizations in MapReduce architectures. In Proceedings of the second international workshop on MapReduce and its applications, MapReduce '11, pages 43-50, New York, NY, USA, 2011. ACM. [ bib | DOI | local ]
David F. Gleich and Lek-Heng Lim. Rank aggregation via nuclear norm minimization. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '11, pages 60-68, New York, NY, USA, 2011. ACM. [ bib | DOI | local ]
The process of rank aggregation is intimately intertwined with the structure of skew-symmetric matrices. We apply recent advances in the theory and algorithms of matrix completion to skew-symmetric matrices. This combination of ideas produces a new method for ranking a set of items. The essence of our idea is that a rank aggregation describes a partially filled skew-symmetric matrix. We extend an algorithm for matrix completion to handle skew-symmetric data and use that to extract ranks for each item. Our algorithm applies to both pairwise comparison and rating data. Because it is based on matrix completion, it is robust to both noise and incomplete data. We show a formal recovery result for the noiseless case and present a detailed study of the algorithm on synthetic data and Netflix ratings.
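The "rank aggregation describes a partially filled skew-symmetric matrix" idea is concrete even in the fully observed, noiseless case, where row means of the margin matrix recover the scores; the paper's matrix-completion machinery is what handles missing and noisy entries. A toy sketch (scores are made up):

```python
import numpy as np

s = np.array([3.0, 1.0, 2.0, 0.0])     # hidden item scores
Y = s[:, None] - s[None, :]            # skew-symmetric pairwise margins, Y = s 1^T - 1 s^T

# With full data, row means recover the scores up to a constant shift;
# matrix completion extends this to partially observed, noisy Y.
s_hat = Y.mean(axis=1)
ranking = np.argsort(-s_hat)           # best item first
```

Here Y has rank 2, which is exactly the low-rank structure the skew-symmetric completion algorithm exploits.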
David F. Gleich, Paul G. Constantine, Abraham Flaxman, and Asela Gunawardana. Tracking the random surfer: empirically measured teleportation parameters in PageRank. In WWW '10: Proceedings of the 19th international conference on World wide web, pages 381-390, April 2010. [ bib | DOI | local ]
PageRank computes the importance of each node in a directed graph under a random surfer model governed by a teleportation parameter. Commonly denoted alpha, this parameter models the probability of following an edge inside the graph or, when the graph comes from a network of web pages and links, clicking a link on a web page. We empirically measure the teleportation parameter based on browser toolbar logs and a click trail analysis. For a particular user or machine, such analysis produces a value of alpha. We find that these values nicely fit a Beta distribution with mean edge-following probability between 0.3 and 0.7, depending on the site. Using these distributions, we compute PageRank scores where PageRank is computed with respect to a distribution as the teleportation parameter, rather than a constant teleportation parameter. These new metrics are evaluated on the graph of pages in Wikipedia.
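Computing PageRank with respect to a distribution over the teleportation parameter can be done by simple sampling. A dense sketch (the Beta(2, 2) shape and the tiny graph are illustrative, not the fitted values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0., 0.5, 1.],
              [0.5, 0., 0.],
              [0.5, 0.5, 0.]])            # a tiny column-stochastic link matrix
n = P.shape[0]
v = np.ones(n) / n

# Sample the edge-following probability from a Beta distribution and solve one
# PageRank linear system per sample.
samples = rng.beta(2.0, 2.0, size=500)
X = np.array([np.linalg.solve(np.eye(n) - a * P, (1 - a) * v) for a in samples])
expected_pr = X.mean(axis=0)    # expectation of the random PageRank vector
std_pr = X.std(axis=0)          # per-node standard deviation
```

Each sampled vector sums to one, so the expectation is itself a probability vector; the standard deviation is the extra signal the random-alpha model provides.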
Pooya Esfandiar, Francesco Bonchi, David F. Gleich, Chen Greif, Laks V. S. Lakshmanan, and Byung-Won On. Fast Katz and commuters: Efficient approximation of social relatedness over large networks. In Algorithms and Models for the Web Graph, 2010. [ bib | DOI | local ]
Motivated by social network data mining problems such as link prediction and collaborative filtering, significant research effort has been devoted to computing topological measures including the Katz score and the commute time. Existing approaches typically approximate all pairwise relationships simultaneously. In this paper, we are interested in computing two quantities: the score for a single pair of nodes, and the top-k nodes with the best scores from a given source node. For the pairwise problem, we apply an iterative algorithm that computes upper and lower bounds for the measures we seek. This algorithm exploits a relationship between the Lanczos process and a quadrature rule. For the top-k problem, we propose an algorithm that only accesses a small portion of the graph and is related to techniques used in personalized PageRank computing. To test the scalability and accuracy of our algorithms we experiment with three real-world networks and find that these algorithms run in milliseconds to seconds without any preprocessing.
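To make the Katz score concrete: it is the weighted count of walks between two nodes, K[i, j] = sum over m of alpha^m (A^m)[i, j]. A naive truncated-series sketch (shown only to define the quantity; the paper's contribution is bounding it cheaply via Lanczos and quadrature):

```python
import numpy as np

def katz_pairwise(A, i, j, alpha=None, terms=200):
    """Truncated Neumann-series estimate of the Katz score
    K[i, j] = sum_{m >= 1} alpha^m (A^m)[i, j], valid for alpha < 1 / ||A||_2."""
    if alpha is None:
        alpha = 0.9 / np.linalg.norm(A, 2)
    term = np.zeros(A.shape[0]); term[j] = 1.0
    score = 0.0
    for _ in range(terms):
        term = alpha * (A @ term)    # accumulates alpha^m A^m e_j
        score += term[i]
    return score

A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])   # a triangle
s = katz_pairwise(A, 0, 1)
```

The series converges geometrically when alpha times the spectral radius is below one, so a few hundred terms are ample on this toy graph.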
Mohsen Bayati, Margot Gerritsen, David F. Gleich, Amin Saberi, and Ying Wang. Algorithms for large, sparse network alignment problems. In Proceedings of the 9th IEEE International Conference on Data Mining, pages 705-710, December 2009. [ bib | DOI | arXiv | local ]
We propose a new distributed algorithm for sparse variants of the network alignment problem that occurs in a variety of data mining areas including systems biology, database matching, and computer vision. Our algorithm uses a belief propagation heuristic and provides near optimal solutions for an NP-hard combinatorial optimization problem. We show that our algorithm is faster and outperforms or nearly ties existing algorithms on synthetic problems, a problem in bioinformatics, and a problem in ontology matching. We also provide a unified framework for studying and comparing all network alignment solvers.
Paul G. Constantine and David F. Gleich. Using polynomial chaos to compute the influence of multiple random surfers in the PageRank model. In Anthony Bonato and Fan Chung Graham, editors, Proceedings of the 5th Workshop on Algorithms and Models for the Web Graph (WAW2007), volume 4863 of Lecture Notes in Computer Science, pages 82-95. Springer, 2007. [ bib | DOI | local ]
The PageRank equation computes the importance of pages in a web graph relative to a single random surfer with a constant teleportation coefficient. To be globally relevant, the teleportation coefficient should account for the influence of all users. Therefore, we correct the PageRank formulation by modeling the teleportation coefficient as a random variable distributed according to user behavior. With this correction, the PageRank values themselves become random. We present two methods to quantify the uncertainty in the random PageRank: a Monte Carlo sampling algorithm and an algorithm based on the truncated polynomial chaos expansion of the random quantities. With each of these methods, we compute the expectation and standard deviation of the PageRanks. Our statistical analysis shows that the standard deviation of the PageRanks is uncorrelated with the PageRank vector.
David F. Gleich and Leonid Zhukov. Scalable computing with power-law graphs: Experience with parallel PageRank. In SuperComputing 2005, November 2005. Poster. [ bib | local | .pdf ]
Dennis Decoste, David F. Gleich, Tejaswi Kasturi, Sathiya Keerthi, Omid Madani, Seung-Taek Park, David M. Pennock, Corey Porter, Sumit Sanghai, Farial Shahnaz, and Leonid Zhukov. Recommender systems research at Yahoo! Research Labs. In Beyond Personalization, San Diego, CA, January 2005. Position Statement. [ bib | local ]
We describe some of the ongoing projects at Yahoo! Research Labs that involve recommender systems. We discuss recommender-system problems and solutions relevant to Yahoo!'s business.
David F. Gleich, Leonid Zhukov, Matthew Rasmussen, and Kevin Lang. The World of Music: SDP embedding of high dimensional data. In Information Visualization 2005, 2005. Interactive Poster. [ bib | local | .pdf ]
In this paper we investigate the use of Semidefinite Programming (SDP) optimization for high dimensional data layout and graph visualization. We developed a set of interactive visualization tools and used them on music artist ratings data from Yahoo!. The computed layout preserves a natural grouping of the artists and provides visual assistance for browsing large music collections.
David F. Gleich and Leonid Zhukov. An SVD based term suggestion and ranking system. In ICDM '04: Proceedings of the Fourth IEEE International Conference on Data Mining (ICDM'04), pages 391-394, Brighton, UK, November 2004. IEEE Computer Society. [ bib | DOI | local ]
In this paper, we consider the application of the singular value decomposition (SVD) to a search term suggestion system in a pay-for-performance search market. We propose a novel positive and negative refinement method based on orthogonal subspace projections. We demonstrate that SVD subspace-based methods: 1) expand coverage by reordering the results, and 2) enhance the clustered structure of the data. The numerical experiments reported in this paper were performed on Overture's pay-for-performance search market data.
Technical reports
David F. Gleich, Peter Glynn, Gene H. Golub, and Chen Greif. Three results on the PageRank vector: eigenstructure, sensitivity, and the derivative. In Andreas Frommer, Michael W. Mahoney, and Daniel B. Szyld, editors, Web Information Retrieval and Linear Algebra Algorithms, number 07071 in Dagstuhl Seminar Proceedings. Internationales Begegnungs- und Forschungszentrum fuer Informatik (IBFI), Schloss Dagstuhl, Germany, 2007. [ bib | local | http ]
The three results on the PageRank vector are preliminary but shed light on the eigenstructure of a PageRank-modified Markov chain and on what happens when the teleportation parameter changes. Computations with the derivative of the PageRank vector with respect to the teleportation parameter show predictive ability and identify an interesting set of pages from Wikipedia.
David F. Gleich. Hierarchical directed spectral graph partitioning. Information Networks, Final Project, Stanford University, 2006. [ bib | local | .pdf ]
In this report, we examine the generalization of the Laplacian of a graph due to Fan Chung. We show that Fan Chung’s generalization reduces to examining one particular symmetrization of the adjacency matrix for a directed graph. From this result, the directed Cheeger bounds trivially follow. Additionally, we implement and examine the benefits of directed hierarchical spectral clustering empirically on a dataset from Wikipedia. Finally, we examine a set of competing heuristic methods on the same dataset.
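On a graph whose walk has a uniform stationary distribution, Chung's construction reduces to the symmetrization (P + P^T)/2, as the report notes. A toy sketch of the general formula on a directed 3-cycle:

```python
import numpy as np

P = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])      # row-stochastic walk on a directed 3-cycle

# stationary distribution of the walk (left eigenvector of P for eigenvalue 1)
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi = pi / pi.sum()

# Chung's directed Laplacian: L = I - (D P D^{-1} + D^{-1} P^T D) / 2, D = diag(pi)^{1/2}
D, Dinv = np.diag(pi ** 0.5), np.diag(pi ** -0.5)
L = np.eye(3) - (D @ P @ Dinv + Dinv @ P.T @ D) / 2.0
```

The result is symmetric positive semidefinite, so the usual spectral clustering and Cheeger-style machinery applies directly.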
David F. Gleich, Matthew Rasmussen, Kevin Lang, and Leonid Zhukov. The world of music: User ratings; spectral and spherical embeddings; map projections. Online report, 2006. [ bib | local | .pdf ]
In this paper we present an algorithm for layout and visualization of music collections based on similarities between musical artists. The core of the algorithm consists of a non-linear low dimensional embedding of a similarity graph constrained to the surface of a hyper-sphere. This approach effectively uses additional dimensions in the embedding. We derive the algorithm using a simple energy minimization procedure and show the relationships to several well known eigenvector based methods. We also describe a method for constructing a similarity graph from user ratings, as well as procedures for mapping the layout from the hyper-sphere to a 2d display. We demonstrate our techniques on Yahoo! Music user ratings data and a MusicMatch artist similarity graph.
David F. Gleich. Finite calculus: A tutorial for solving nasty sums. Combinatorics, Final Paper, Stanford University, 2005. [ bib | local ]
Kevin Andrew and David F. Gleich. MTF, BIT, and COMB: A guide to deterministic and randomized online algorithms for the list access problem. Advanced Algorithms, Harvey Mudd College, Final Project, 2004. [ bib | local ]
In this survey, we discuss two randomized online algorithms for the list access problem. First, we review competitive analysis and show that the MTF algorithm is 2-competitive using a potential function. Then, we introduce randomized competitive analysis and the associated adversary models. We show that the randomized BIT algorithm is 7/4-competitive using a potential function argument. We then introduce the pairwise property and the TIMESTAMP algorithm to show that the COMB algorithm, a combination of the BIT and TIMESTAMP algorithms, is 8/5-competitive. COMB is the best known randomized algorithm for the list access problem.
David F. Gleich, Leonid Zhukov, and Pavel Berkhin. Fast parallel PageRank: A linear system approach. Technical Report YRL-2004-038, Yahoo! Research Labs, 2004. [ bib | local | .pdf ]
In this paper we investigate the convergence of iterative stationary and Krylov subspace methods for the PageRank linear system, including the convergence dependency on teleportation. We demonstrate that linear system iterations converge faster than the simple power method and are less sensitive to the changes in teleportation. In order to perform this study we developed a framework for parallel PageRank computing. We describe the details of the parallel implementation and provide experimental results obtained on a 70-node Beowulf cluster.
Leonid Zhukov and David F. Gleich. Topic identification in soft clustering using PCA and ICA. Online report, Yahoo! Research Labs, 2004. [ bib | local ]
Many applications can benefit from soft clustering, where each datum is assigned to multiple clusters with membership weights that sum to one. In this paper we present a comparison of principal component analysis (PCA) and independent component analysis (ICA) when used for soft clustering. We provide a short mathematical background for these methods and demonstrate their application to a sponsored links search listings dataset. We present examples of the soft clusters generated by both methods and compare the results.
Erin Bodine, David F. Gleich, Cathy Kurata, Jordan Kwan, Lesley Ward, and Daniel Fain. Three methods for improving relevance in web search. Clinic Report, Harvey Mudd College, May 9, 2003. 102 pages. Includes fully documented program code on accompanying CD. [ bib | local ]
The 2002–2003 Overture clinic project evaluated and implemented three different methods for improving relevance ordering in web search. The three methods were bottom up micro information unit (MIU) analysis, top down MIU analysis, and proximity scoring. We ran these three methods on the top 200 web pages returned for each of 58 queries by an already existing algorithmic search engine. We used two metrics, precision and relevance ordering, to evaluate the results. Precision measures how relevant a web page is for a given query, while relevance ordering measures how well-ordered the returned results are. We evaluated the precision of each method and of the algorithmic search engine by hand. For relevance ordering, we recruited other humans to compare pages and used their decisions to generate an ideal ranking for each query. The results of each of our methods and of the algorithmic search engine were then compared to this ideal ranking vector using Kendall's Tau. Our bottom up MIU analysis method achieved the highest precision score of 0.78 out of 1.00. In addition, bottom up MIU analysis received the second highest correlation coefficient (or relevance ordering score) of 0.107, while the algorithmic search engine received the highest correlation coefficient of 0.121. Interestingly, our proximity scoring method received high relevance ordering scores when the algorithmic search engine received low relevance ordering scores.
David F. Gleich. Machine learning in computer chess: Genetic programming and KRK. Independent Study Report, Harvey Mudd College, 2003. [ bib | local ]
In this paper, I describe genetic programming as a machine learning paradigm and evaluate its results in attempting to learn basic chess rules. Genetic programming exploits a simulation of Darwinian evolution to construct programs. When applied to the King-Rook-King (KRK) chess endgame problem, genetic programming shows promising results in spite of a lack of significant chess knowledge.
Other reviewed publications
These usually underwent some form of review
David F. Gleich. Review of: Numerical algorithms for personalized search in self-organizing information networks by Sep Kamvar, Princeton Univ. Press, 2010, 160pp., ISBN13: 978-0-691-14503-7. Linear Algebra and its Applications, 435(4):908-909, 2011. [ bib | DOI | local ]
Ph.D. Theses
Just one, thankfully
David F. Gleich. Models and Algorithms for PageRank Sensitivity. PhD thesis, Stanford University, September 2009. [ bib | local | .pdf ]
The PageRank model helps evaluate the relative importance of nodes in a large graph, such as the graph of links on the world wide web. An important piece of the PageRank model is the teleportation parameter α. We explore the interaction between α and PageRank through the lens of sensitivity analysis. Writing the PageRank vector as a function of α allows us to take a derivative, which is a simple sensitivity measure. As an alternative approach, we apply techniques from the field of uncertainty quantification. Regarding α as a random variable produces a new PageRank model in which each PageRank value is a random variable. We explore the standard deviation of these variables to get another measure of PageRank sensitivity. One interpretation of this new model shows that it corrects a small oversight in the original PageRank formulation. Both of the above techniques require solving multiple PageRank problems, and thus a robust PageRank solver is needed. We discuss an inner-outer iteration for this purpose. The method is low-memory, simple to implement, and has excellent performance for a range of teleportation parameters. We show empirical results with these techniques on graphs with over 2 billion edges.