PageRank Algorithm

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 5901 Experts worldwide ranked by ideXlab platform

Roberto Tempo - One of the best experts on this subject based on the ideXlab platform.

  • A Web Aggregation Approach for Distributed Randomized PageRank Algorithms
    arXiv: Systems and Control, 2012
    Co-Authors: Hideaki Ishii, Roberto Tempo, Erwei Bai
    Abstract:

    The PageRank algorithm employed at Google assigns a measure of importance to each web page for ranking in search results. In our recent papers, we have proposed a distributed randomized approach for this algorithm, where web pages are treated as agents computing their own PageRank by communicating with linked pages. This paper builds upon this approach to reduce the computation and communication loads of the algorithms. In particular, we develop a method to systematically aggregate the web pages into groups by exploiting the sparsity inherent in the web. For each group, an aggregated PageRank value is computed, which can then be distributed among the group members. We provide a distributed update scheme for the aggregated PageRank along with an analysis of its convergence properties. The method is especially motivated by results on singular perturbation techniques for large-scale Markov chains and multi-agent consensus.

  • A Web Aggregation Approach for Distributed Randomized PageRank Algorithms
    IEEE Transactions on Automatic Control, 2012
    Co-Authors: Hideaki Ishii, Roberto Tempo
    Abstract:

    The PageRank algorithm employed at Google assigns a measure of importance to each web page for ranking in search results. In our recent papers, we have proposed a distributed randomized approach for this algorithm, where web pages are treated as agents computing their own PageRank by communicating with linked pages. This paper builds upon this approach to reduce the computation and communication loads of the algorithms. In particular, we develop a method to systematically aggregate the web pages into groups by exploiting the sparsity inherent in the web. For each group, an aggregated PageRank value is computed, which can then be distributed among the group members. We provide a distributed update scheme for the aggregated PageRank along with an analysis of its convergence properties. The method is especially motivated by results on singular perturbation techniques for large-scale Markov chains and multi-agent consensus. A numerical example is provided to illustrate the level of reduction in computation while keeping the error in rankings small.
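The aggregation idea can be illustrated on a toy graph: compute the exact PageRank vector, then sum it within groups to obtain the aggregated values. This is a minimal sketch with an arbitrary two-group split, not the paper's sparsity-driven grouping or its distributed update scheme.

```python
import numpy as np

# Toy web of 6 pages split into 2 groups (an illustrative assumption; the
# paper's grouping exploits web sparsity, which this sketch does not model).
links = {0: [1, 2], 1: [0], 2: [0, 3], 3: [4, 5], 4: [3], 5: [3]}
groups = [[0, 1, 2], [3, 4, 5]]
n, m = 6, 0.15  # number of pages, teleportation parameter

# Column-stochastic link matrix, then the Google matrix.
A = np.zeros((n, n))
for page, outs in links.items():
    for q in outs:
        A[q, page] = 1.0 / len(outs)
M = (1 - m) * A + m / n * np.ones((n, n))

# Power iteration for the exact PageRank vector.
x = np.ones(n) / n
for _ in range(200):
    x = M @ x

# Aggregated PageRank: the total importance of each group.
agg = [x[g].sum() for g in groups]
print(agg)  # the two group values sum to 1
```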

  • Distributed Randomized Algorithms for the PageRank Computation
    IEEE Transactions on Automatic Control, 2010
    Co-Authors: Hideaki Ishii, Roberto Tempo
    Abstract:

    In the search engine of Google, the PageRank algorithm plays a crucial role in ranking the search results. The algorithm quantifies the importance of each web page based on the link structure of the web. We first provide an overview of the original problem setup. Then, we propose several distributed randomized schemes for the computation of the PageRank, where the pages can locally update their values by communicating with those connected by links. The main objective of the paper is to show that these schemes asymptotically converge in the mean-square sense to the true PageRank values. A detailed discussion of the close relations to multi-agent consensus problems is also given.
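The quantity these schemes compute can be illustrated with a simple randomized baseline: a Monte Carlo random-surfer simulation whose visit frequencies converge to the PageRank values. The four-page web below is hypothetical, and this is only a sketch of randomized PageRank computation, not the paper's distributed update law.

```python
import random
from collections import Counter

# Monte Carlo random-surfer estimate of PageRank on a toy four-page web.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
pages = list(links)
m = 0.15          # teleportation probability
steps = 200_000

random.seed(0)
visits = Counter()
page = random.choice(pages)
for _ in range(steps):
    if random.random() < m or not links[page]:
        page = random.choice(pages)        # teleport to a random page
    else:
        page = random.choice(links[page])  # follow a random outlink
    visits[page] += 1

# Visit frequencies approximate the PageRank values.
pagerank = {p: visits[p] / steps for p in pages}
print(pagerank)
```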

Hideaki Ishii - One of the best experts on this subject based on the ideXlab platform.

  • A Web Aggregation Approach for Distributed Randomized PageRank Algorithms
    arXiv: Systems and Control, 2012
    Co-Authors: Hideaki Ishii, Roberto Tempo, Erwei Bai
    Abstract:

    The PageRank algorithm employed at Google assigns a measure of importance to each web page for ranking in search results. In our recent papers, we have proposed a distributed randomized approach for this algorithm, where web pages are treated as agents computing their own PageRank by communicating with linked pages. This paper builds upon this approach to reduce the computation and communication loads of the algorithms. In particular, we develop a method to systematically aggregate the web pages into groups by exploiting the sparsity inherent in the web. For each group, an aggregated PageRank value is computed, which can then be distributed among the group members. We provide a distributed update scheme for the aggregated PageRank along with an analysis of its convergence properties. The method is especially motivated by results on singular perturbation techniques for large-scale Markov chains and multi-agent consensus.

  • A Web Aggregation Approach for Distributed Randomized PageRank Algorithms
    IEEE Transactions on Automatic Control, 2012
    Co-Authors: Hideaki Ishii, Roberto Tempo
    Abstract:

    The PageRank algorithm employed at Google assigns a measure of importance to each web page for ranking in search results. In our recent papers, we have proposed a distributed randomized approach for this algorithm, where web pages are treated as agents computing their own PageRank by communicating with linked pages. This paper builds upon this approach to reduce the computation and communication loads of the algorithms. In particular, we develop a method to systematically aggregate the web pages into groups by exploiting the sparsity inherent in the web. For each group, an aggregated PageRank value is computed, which can then be distributed among the group members. We provide a distributed update scheme for the aggregated PageRank along with an analysis of its convergence properties. The method is especially motivated by results on singular perturbation techniques for large-scale Markov chains and multi-agent consensus. A numerical example is provided to illustrate the level of reduction in computation while keeping the error in rankings small.

  • Distributed Randomized Algorithms for the PageRank Computation
    IEEE Transactions on Automatic Control, 2010
    Co-Authors: Hideaki Ishii, Roberto Tempo
    Abstract:

    In the search engine of Google, the PageRank algorithm plays a crucial role in ranking the search results. The algorithm quantifies the importance of each web page based on the link structure of the web. We first provide an overview of the original problem setup. Then, we propose several distributed randomized schemes for the computation of the PageRank, where the pages can locally update their values by communicating with those connected by links. The main objective of the paper is to show that these schemes asymptotically converge in the mean-square sense to the true PageRank values. A detailed discussion of the close relations to multi-agent consensus problems is also given.

G. C. H. E. Croon - One of the best experts on this subject based on the ideXlab platform.

  • The PageRank Algorithm as a Method to Optimize Swarm Behavior Through Local Analysis
    Swarm Intelligence, 2019
    Co-Authors: M. Coppola, J. Guo, E. Gill, G. C. H. E. Croon
    Abstract:

    This work proposes PageRank as a tool to evaluate and optimize the global performance of a swarm based on the analysis of the local behavior of a single robot. PageRank is a graph centrality measure that assesses the importance of nodes based on how likely they are to be reached when traversing a graph. We relate this, using a microscopic model, to a random robot in a swarm that transitions through local states by executing local actions. The PageRank centrality then becomes a measure of how likely it is, given a local policy, for a robot in the swarm to visit each local state. This is used to optimize a stochastic policy such that the robot is most likely to reach the local states that are “desirable,” based on the swarm’s global goal. The optimization is performed by an evolutionary algorithm, whereby the fitness function maximizes the PageRank score of these local states. The calculation of the PageRank score only scales with the size of the local state space and demands much less computation than swarm simulations would. The approach is applied to a consensus task, a pattern formation task, and an aggregation task. For each task, when all robots in the swarm execute the evolved policy, the swarm significantly outperforms a swarm that uses the baseline policy. When compared to globally optimized policies, the final performance achieved by the swarm is also shown to be comparable. As this new approach is based on a local model, it natively produces controllers that are flexible and robust to global parameters such as the number of robots in the swarm, the environment, and the initial conditions. Furthermore, as the wall-clock time to evaluate the fitness function does not scale with the size of the swarm, it is possible to optimize for larger swarms at no additional computational expense.
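The core loop described above — score a local policy by the PageRank mass it places on desirable states, then search for a better policy — can be sketched as follows. The four-state model is a hypothetical stand-in for a robot's local state space, and plain random search stands in for the evolutionary algorithm; neither comes from the paper's tasks.

```python
import numpy as np

# A stochastic local policy induces a transition matrix over local states:
# P[s, s'] = probability that a robot in state s moves to state s'.
def policy_matrix(theta):
    # theta: unnormalized action preferences, one row per state (assumed form).
    P = np.abs(theta)
    return P / P.sum(axis=1, keepdims=True)

def pagerank_fitness(P, desirable, alpha=0.85, iters=100):
    n = P.shape[0]
    G = alpha * P + (1 - alpha) / n  # Google matrix (row-stochastic)
    x = np.ones(n) / n
    for _ in range(iters):
        x = x @ G                    # power iteration on the left
    return x[desirable].sum()        # PageRank mass on desirable states

rng = np.random.default_rng(0)
best_theta, best_f = None, -1.0
for _ in range(200):                 # crude random search standing in
    theta = rng.random((4, 4))       # for the evolutionary algorithm
    f = pagerank_fitness(policy_matrix(theta), desirable=[3])
    if f > best_f:
        best_theta, best_f = theta, f
print(best_f)  # fitness of the best policy found
```

The fitness evaluation only involves a 4x4 matrix iteration, which is the point of the approach: no swarm simulation is needed in the optimization loop.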

Huafeng Xie - One of the best experts on this subject based on the ideXlab platform.

  • Ranking Scientific Publications Using a Model of Network Traffic
    Journal of Statistical Mechanics: Theory and Experiment, 2007
    Co-Authors: D Walker, Huafeng Xie, Koonkiu Yan, Sergei Maslov
    Abstract:

    To account for the strong ageing characteristics of citation networks, we modify the PageRank algorithm by initially distributing random surfers exponentially with age, in favour of more recent publications. The output of this algorithm, which we call CiteRank, is interpreted as approximate traffic to individual publications in a simple model of how researchers find new information. We optimize the parameters of our algorithm to achieve the best performance. The results are compared for two rather different citation networks: all American Physical Society publications between 1893 and 2003 and the set of high-energy physics theory (hep-th) preprints. Despite major differences between these two networks, we find that their optimal parameters for the CiteRank algorithm are remarkably similar. The advantages and performance of CiteRank over more conventional methods of ranking publications are discussed.
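The age-biased modification can be sketched directly from the description: random surfers start at papers with probability decaying exponentially in age, and a fraction of them hop onward along references. The toy network, decay time, and hop probability below are illustrative assumptions, not the optimized parameters from the paper.

```python
import numpy as np

# CiteRank-style sketch on a toy citation network (all values illustrative).
ages = np.array([10.0, 5.0, 2.0, 0.5])  # years since publication
cites = {1: [0], 2: [0, 1], 3: [1, 2]}  # paper -> the papers it cites
tau, alpha, n = 2.6, 0.5, len(ages)     # decay time, hop probability

rho = np.exp(-ages / tau)
rho /= rho.sum()                        # age-biased start distribution

A = np.zeros((n, n))
for p, refs in cites.items():
    for r in refs:
        A[r, p] = 1.0 / len(refs)       # surfers follow references

# Total traffic: surfers injected by rho, each hopping onward w.p. alpha.
x = rho.copy()
for _ in range(100):
    x = rho + alpha * (A @ x)
print(x)  # traffic estimates for the four papers
```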

  • Finding Scientific Gems with Google's PageRank Algorithm
    Journal of Informetrics, 2007
    Co-Authors: P Chen, Huafeng Xie, Sergei Maslov, S Redner
    Abstract:

    We apply the Google PageRank algorithm to assess the relative importance of all publications in the Physical Review family of journals from 1893 to 2003. While the Google number and the number of citations for each publication are positively correlated, outliers from this linear relation identify some exceptional papers or “gems” that are universally familiar to physicists.
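The outlier idea can be sketched on a toy citation network: compute PageRank, compare it with raw citation counts, and flag a paper whose rank is high relative to its citations. The network and the ratio criterion below are illustrative assumptions, not the paper's exact analysis.

```python
import numpy as np

# Flag a "gem": a paper whose PageRank is unusually high relative to its
# raw citation count (toy network; the criterion is illustrative).
cites = {1: [0], 3: [1], 4: [1], 5: [1], 6: [1]}  # paper -> its references
n = 7
A = np.zeros((n, n))
indeg = np.zeros(n)
for p, refs in cites.items():
    for r in refs:
        A[r, p] = 1.0 / len(refs)
        indeg[r] += 1

d = 0.5  # damping; citation networks are often analyzed with d = 0.5
x = np.ones(n) / n
for _ in range(100):
    x = d * (A @ x) + (1 - d) / n  # dangling mass is simply dropped here

# Paper 0 has a single citation, but it comes from the network's hub,
# so its PageRank is large relative to its citation count.
ratio = x / (indeg + 1)
print(int(np.argmax(ratio)))  # -> 0
```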

  • Ranking Scientific Publications Using a Simple Model of Network Traffic
    arXiv: Physics and Society, 2006
    Co-Authors: D Walker, Huafeng Xie, Koonkiu Yan, Sergei Maslov
    Abstract:

    To account for the strong aging characteristics of citation networks, we modify Google's PageRank algorithm by initially distributing random surfers exponentially with age, in favor of more recent publications. The output of this algorithm, which we call CiteRank, is interpreted as approximate traffic to individual publications in a simple model of how researchers find new information. We develop an analytical understanding of traffic flow in terms of an RPA-like model and optimize the parameters of our algorithm to achieve the best performance. The results are compared for two rather different citation networks: all American Physical Society publications and the set of high-energy physics theory (hep-th) preprints. Despite major differences between these two networks, we find that their optimal parameters for the CiteRank algorithm are remarkably similar.

Sergei Maslov - One of the best experts on this subject based on the ideXlab platform.

  • Ranking Scientific Publications Using a Model of Network Traffic
    Journal of Statistical Mechanics: Theory and Experiment, 2007
    Co-Authors: D Walker, Huafeng Xie, Koonkiu Yan, Sergei Maslov
    Abstract:

    To account for the strong ageing characteristics of citation networks, we modify the PageRank algorithm by initially distributing random surfers exponentially with age, in favour of more recent publications. The output of this algorithm, which we call CiteRank, is interpreted as approximate traffic to individual publications in a simple model of how researchers find new information. We optimize the parameters of our algorithm to achieve the best performance. The results are compared for two rather different citation networks: all American Physical Society publications between 1893 and 2003 and the set of high-energy physics theory (hep-th) preprints. Despite major differences between these two networks, we find that their optimal parameters for the CiteRank algorithm are remarkably similar. The advantages and performance of CiteRank over more conventional methods of ranking publications are discussed.

  • Finding Scientific Gems with Google's PageRank Algorithm
    Journal of Informetrics, 2007
    Co-Authors: P Chen, Huafeng Xie, Sergei Maslov, S Redner
    Abstract:

    We apply the Google PageRank algorithm to assess the relative importance of all publications in the Physical Review family of journals from 1893 to 2003. While the Google number and the number of citations for each publication are positively correlated, outliers from this linear relation identify some exceptional papers or “gems” that are universally familiar to physicists.

  • Ranking Scientific Publications Using a Simple Model of Network Traffic
    arXiv: Physics and Society, 2006
    Co-Authors: D Walker, Huafeng Xie, Koonkiu Yan, Sergei Maslov
    Abstract:

    To account for the strong aging characteristics of citation networks, we modify Google's PageRank algorithm by initially distributing random surfers exponentially with age, in favor of more recent publications. The output of this algorithm, which we call CiteRank, is interpreted as approximate traffic to individual publications in a simple model of how researchers find new information. We develop an analytical understanding of traffic flow in terms of an RPA-like model and optimize the parameters of our algorithm to achieve the best performance. The results are compared for two rather different citation networks: all American Physical Society publications and the set of high-energy physics theory (hep-th) preprints. Despite major differences between these two networks, we find that their optimal parameters for the CiteRank algorithm are remarkably similar.

  • Finding Scientific Gems with Google
    arXiv: Data Analysis Statistics and Probability, 2006
    Co-Authors: P Chen, Sergei Maslov, H Xie, S Redner
    Abstract:

    We apply the Google PageRank algorithm to assess the relative importance of all publications in the Physical Review family of journals from 1893 to 2003. While the Google number and the number of citations for each publication are positively correlated, outliers from this linear relation identify some exceptional papers or "gems" that are universally familiar to physicists.