Distributed Computation

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 360 experts worldwide, ranked by the ideXlab platform

Aquinas Hobor - One of the best experts on this subject based on the ideXlab platform.

  • On Power Splitting Games in Distributed Computation: The Case of Bitcoin Pooled Mining
    Proceedings of the Computer Security Foundations Workshop, 2015
    Co-Authors: Loi Luu, Inian Parameshwaran, Ratul Saha, Prateek Saxena, Aquinas Hobor
    Abstract:

    Several new services incentivize clients to compete in solving large computation tasks in exchange for financial rewards. This model of competitive distributed computation enables every user connected to the Internet to participate in a game in which he splits his computational power among a set of competing pools; we call this a computational power splitting game. We formally model this game and show its utility in analyzing the security of pool protocols that dictate how financial rewards are shared among the members of a pool. As a case study, we analyze the Bitcoin cryptocurrency, which attracts computing power roughly equivalent to billions of desktop machines, over 70% of which is organized into public pools. We show that existing pool reward sharing protocols are insecure in our game-theoretic analysis under an attack strategy called the "block withholding attack". This attack is a topic of debate: it was initially thought to be ill-incentivized in today's pool protocols, i.e., to cause a net loss to the attacker, and was later argued to be always profitable. Our analysis shows that the attack is always well-incentivized in the long run, but may not be so for a short duration. This implies that existing pool protocols are insecure, and if the attack is conducted systematically, Bitcoin pools could lose millions of dollars' worth of revenue within months. The equilibrium state is a mixed strategy: in equilibrium, all clients are incentivized to attack probabilistically to maximize their payoffs rather than participate honestly. As a result, the Bitcoin network is incentivized to waste a part of its resources simply to compete.
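The long-run incentive claim can be illustrated with a simplified steady-state model. This is a sketch in the spirit of the abstract, not the paper's formal game; the power fractions `alpha` and `beta` below are hypothetical.

```python
def attacker_reward(alpha, beta, x):
    """Long-run expected reward of a miner with power fraction alpha who
    diverts a fraction x of it into block withholding against a pool of
    honest power beta (simplified sketch, not the paper's full game).

    Withheld power never finds a valid block, so effective network power
    shrinks to 1 - x*alpha, inflating everyone else's win rate.  The
    attacker still earns pool shares in proportion to the partial
    proofs of work it submits."""
    infiltrating = x * alpha
    effective_power = 1.0 - infiltrating
    solo_income = (1.0 - x) * alpha / effective_power
    pool_income = beta / effective_power
    attacker_pool_share = infiltrating / (beta + infiltrating)
    return solo_income + pool_income * attacker_pool_share

# Hypothetical numbers: a 20%-power attacker against a 30%-power pool.
honest = attacker_reward(0.20, 0.30, 0.0)      # x = 0: mine honestly
attacking = attacker_reward(0.20, 0.30, 0.2)   # divert 20% of own power
```

With these illustrative numbers the diverting strategy pays slightly more than honest mining, matching the abstract's claim that withholding is well-incentivized in the long run even though the withheld power itself earns nothing directly.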

  • On Power Splitting Games in Distributed Computation: The Case of Bitcoin Pooled Mining
    IACR Cryptology ePrint Archive, 2015
    Co-Authors: Loi Luu, Inian Parameshwaran, Ratul Saha, Prateek Saxena, Aquinas Hobor
    Abstract:

    Several new services incentivize clients to compete in solving large computation tasks in exchange for financial rewards. This model of competitive distributed computation enables every user connected to the Internet to participate in a game in which he splits his computational power among a set of competing pools; we call this a computational power splitting game. We formally model this game and show its utility in analyzing the security of pool protocols that dictate how financial rewards are shared among the members of a pool. As a case study, we analyze the Bitcoin cryptocurrency, which attracts computing power roughly equivalent to billions of desktop machines, over 70% of which is organized into public pools. We show that existing pool reward sharing protocols are insecure in our game-theoretic analysis under an attack strategy called the “block withholding attack”. This attack is a topic of debate: it was initially thought to be ill-incentivized in today’s pool protocols, i.e., to cause a net loss to the attacker, and was later argued to be always profitable. Our analysis shows that the attack is always well-incentivized in the long run, but may not be so for a short duration. This implies that existing pool protocols are insecure, and if the attack is conducted systematically, Bitcoin pools could lose millions of dollars’ worth of revenue within months. The equilibrium state is a mixed strategy: in equilibrium, all clients are incentivized to attack probabilistically to maximize their payoffs rather than participate honestly. As a result, a part of the Bitcoin network is incentivized to waste resources competing for a higher selfish reward.

Peter Robinson - One of the best experts on this subject based on the ideXlab platform.

  • Distributed Computation of Large-Scale Graph Problems
    Symposium on Discrete Algorithms, 2015
    Co-Authors: Hartmut Klauck, Gopal Pandurangan, Danupon Nanongkai, Peter Robinson
    Abstract:

    Motivated by the increasing need for fast distributed processing of large-scale graphs such as the Web graph and various social networks, we study a number of fundamental graph problems in the message-passing model, where we have k machines that jointly perform computation on an arbitrary n-node (typically, n ≫ k) input graph. The graph is assumed to be randomly partitioned among the k ≥ 2 machines (a common implementation in many real-world systems). The communication is point-to-point, and the goal is to minimize the time complexity, i.e., the number of communication rounds, of solving various fundamental graph problems. We present lower bounds that quantify the fundamental time limitations of distributively solving graph problems. We first show a lower bound of Ω(n/k) rounds for computing a spanning tree (ST) of the input graph. This result also implies the same bound for other fundamental problems such as computing a minimum spanning tree (MST), a breadth-first search tree (BFS), and a shortest paths tree (SPT). We also show an Ω(n/k²) lower bound for connectivity, ST verification, and other related problems. Our lower bounds develop and use new bounds in random-partition communication complexity. To complement our lower bounds, we also give algorithms for various fundamental graph problems, e.g., PageRank, MST, connectivity, ST verification, shortest paths, cuts, spanners, covering problems, densest subgraph, subgraph isomorphism, finding triangles, etc. We show that problems such as PageRank, MST, connectivity, and graph covering can be solved in Õ(n/k) time (the Õ notation hides polylog(n) factors and an additive polylog(n) term); this shows that one can achieve almost linear (in k) speedup. For shortest paths, we present algorithms that run in Õ(n/√k) time (for a (1 + ε)-factor approximation) and in Õ(n/k) time (for an O(log n)-factor approximation), respectively.
    Our results are a step towards understanding the complexity of distributively solving large-scale graph problems.
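As a toy illustration of the k-machine model (not one of the paper's algorithms), the sketch below randomly partitions the vertices among machines and runs round-based min-label propagation for connectivity, counting rounds and cross-machine messages. All parameters are made up.

```python
import random

def random_partition(n, k, seed=0):
    """Assign each of n vertices to one of k machines uniformly at
    random, i.e. the random-partition assumption of the k-machine model."""
    rng = random.Random(seed)
    return {v: rng.randrange(k) for v in range(n)}

def connectivity_rounds(n, edges, home):
    """Toy round-based connectivity via min-label propagation: each
    round, every vertex adopts the minimum label among itself and its
    neighbours.  Labels sent across a machine boundary count as
    messages; rounds are counted until no label changes."""
    label = list(range(n))
    rounds = 0
    cross_messages = 0
    changed = True
    while changed:
        changed = False
        rounds += 1
        nxt = label[:]
        for u, v in edges:
            if home[u] != home[v]:
                cross_messages += 2  # one label in each direction
            m = min(label[u], label[v])
            if m < nxt[u]:
                nxt[u] = m
                changed = True
            if m < nxt[v]:
                nxt[v] = m
                changed = True
        label = nxt
    return label, rounds, cross_messages
```

On a path graph this naive scheme needs a number of rounds proportional to the diameter, which is exactly what makes genuinely sublinear-round algorithms, and the matching Ω(n/k)-style lower bounds above, interesting.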

  • Distributed Computation of Large-Scale Graph Problems
    arXiv: Distributed Parallel and Cluster Computing, 2013
    Co-Authors: Hartmut Klauck, Gopal Pandurangan, Danupon Nanongkai, Peter Robinson
    Abstract:

    Motivated by the increasing need for fast distributed processing of large-scale graphs such as the Web graph and various social networks, we study a message-passing distributed computing model for graph processing and present lower bounds and algorithms for several graph problems. This work is inspired by recent large-scale graph processing systems (e.g., Pregel and Giraph), which are designed based on the message-passing model of distributed computing. Our model consists of a point-to-point communication network of $k$ machines interconnected by bandwidth-restricted links. Communicating data between the machines is the costly operation (as opposed to local computation). The network is used to process an arbitrary $n$-node input graph (typically $n \gg k > 1$) that is randomly partitioned among the $k$ machines (a common implementation in many real-world systems). Our goal is to study fundamental complexity bounds for solving graph problems in this model. We present techniques for obtaining lower bounds on the distributed time complexity. Our lower bounds develop and use new bounds in random-partition communication complexity. We first show a lower bound of $\Omega(n/k)$ rounds for computing a spanning tree (ST) of the input graph. This result also implies the same bound for other fundamental problems such as computing a minimum spanning tree (MST). We also show an $\Omega(n/k^2)$ lower bound for connectivity, ST verification, and other related problems. We give algorithms for various fundamental graph problems in our model. We show that problems such as PageRank, MST, connectivity, and graph covering can be solved in $\tilde{O}(n/k)$ time, whereas for shortest paths, we present algorithms that run in $\tilde{O}(n/\sqrt{k})$ time (for a $(1+\epsilon)$-factor approximation) and in $\tilde{O}(n/k)$ time (for an $O(\log n)$-factor approximation), respectively.

Loi Luu - One of the best experts on this subject based on the ideXlab platform.

  • On Power Splitting Games in Distributed Computation: The Case of Bitcoin Pooled Mining
    Proceedings of the Computer Security Foundations Workshop, 2015
    Co-Authors: Loi Luu, Inian Parameshwaran, Ratul Saha, Prateek Saxena, Aquinas Hobor
    Abstract:

    Several new services incentivize clients to compete in solving large computation tasks in exchange for financial rewards. This model of competitive distributed computation enables every user connected to the Internet to participate in a game in which he splits his computational power among a set of competing pools; we call this a computational power splitting game. We formally model this game and show its utility in analyzing the security of pool protocols that dictate how financial rewards are shared among the members of a pool. As a case study, we analyze the Bitcoin cryptocurrency, which attracts computing power roughly equivalent to billions of desktop machines, over 70% of which is organized into public pools. We show that existing pool reward sharing protocols are insecure in our game-theoretic analysis under an attack strategy called the "block withholding attack". This attack is a topic of debate: it was initially thought to be ill-incentivized in today's pool protocols, i.e., to cause a net loss to the attacker, and was later argued to be always profitable. Our analysis shows that the attack is always well-incentivized in the long run, but may not be so for a short duration. This implies that existing pool protocols are insecure, and if the attack is conducted systematically, Bitcoin pools could lose millions of dollars' worth of revenue within months. The equilibrium state is a mixed strategy: in equilibrium, all clients are incentivized to attack probabilistically to maximize their payoffs rather than participate honestly. As a result, the Bitcoin network is incentivized to waste a part of its resources simply to compete.

  • On Power Splitting Games in Distributed Computation: The Case of Bitcoin Pooled Mining
    IACR Cryptology ePrint Archive, 2015
    Co-Authors: Loi Luu, Inian Parameshwaran, Ratul Saha, Prateek Saxena, Aquinas Hobor
    Abstract:

    Several new services incentivize clients to compete in solving large computation tasks in exchange for financial rewards. This model of competitive distributed computation enables every user connected to the Internet to participate in a game in which he splits his computational power among a set of competing pools; we call this a computational power splitting game. We formally model this game and show its utility in analyzing the security of pool protocols that dictate how financial rewards are shared among the members of a pool. As a case study, we analyze the Bitcoin cryptocurrency, which attracts computing power roughly equivalent to billions of desktop machines, over 70% of which is organized into public pools. We show that existing pool reward sharing protocols are insecure in our game-theoretic analysis under an attack strategy called the “block withholding attack”. This attack is a topic of debate: it was initially thought to be ill-incentivized in today’s pool protocols, i.e., to cause a net loss to the attacker, and was later argued to be always profitable. Our analysis shows that the attack is always well-incentivized in the long run, but may not be so for a short duration. This implies that existing pool protocols are insecure, and if the attack is conducted systematically, Bitcoin pools could lose millions of dollars’ worth of revenue within months. The equilibrium state is a mixed strategy: in equilibrium, all clients are incentivized to attack probabilistically to maximize their payoffs rather than participate honestly. As a result, a part of the Bitcoin network is incentivized to waste resources competing for a higher selfish reward.

Hartmut Klauck - One of the best experts on this subject based on the ideXlab platform.

  • Distributed Computation of Large-Scale Graph Problems
    Symposium on Discrete Algorithms, 2015
    Co-Authors: Hartmut Klauck, Gopal Pandurangan, Danupon Nanongkai, Peter Robinson
    Abstract:

    Motivated by the increasing need for fast distributed processing of large-scale graphs such as the Web graph and various social networks, we study a number of fundamental graph problems in the message-passing model, where we have k machines that jointly perform computation on an arbitrary n-node (typically, n ≫ k) input graph. The graph is assumed to be randomly partitioned among the k ≥ 2 machines (a common implementation in many real-world systems). The communication is point-to-point, and the goal is to minimize the time complexity, i.e., the number of communication rounds, of solving various fundamental graph problems. We present lower bounds that quantify the fundamental time limitations of distributively solving graph problems. We first show a lower bound of Ω(n/k) rounds for computing a spanning tree (ST) of the input graph. This result also implies the same bound for other fundamental problems such as computing a minimum spanning tree (MST), a breadth-first search tree (BFS), and a shortest paths tree (SPT). We also show an Ω(n/k²) lower bound for connectivity, ST verification, and other related problems. Our lower bounds develop and use new bounds in random-partition communication complexity. To complement our lower bounds, we also give algorithms for various fundamental graph problems, e.g., PageRank, MST, connectivity, ST verification, shortest paths, cuts, spanners, covering problems, densest subgraph, subgraph isomorphism, finding triangles, etc. We show that problems such as PageRank, MST, connectivity, and graph covering can be solved in Õ(n/k) time (the Õ notation hides polylog(n) factors and an additive polylog(n) term); this shows that one can achieve almost linear (in k) speedup. For shortest paths, we present algorithms that run in Õ(n/√k) time (for a (1 + ε)-factor approximation) and in Õ(n/k) time (for an O(log n)-factor approximation), respectively.
    Our results are a step towards understanding the complexity of distributively solving large-scale graph problems.

  • Distributed Computation of Large-Scale Graph Problems
    arXiv: Distributed Parallel and Cluster Computing, 2013
    Co-Authors: Hartmut Klauck, Gopal Pandurangan, Danupon Nanongkai, Peter Robinson
    Abstract:

    Motivated by the increasing need for fast distributed processing of large-scale graphs such as the Web graph and various social networks, we study a message-passing distributed computing model for graph processing and present lower bounds and algorithms for several graph problems. This work is inspired by recent large-scale graph processing systems (e.g., Pregel and Giraph), which are designed based on the message-passing model of distributed computing. Our model consists of a point-to-point communication network of $k$ machines interconnected by bandwidth-restricted links. Communicating data between the machines is the costly operation (as opposed to local computation). The network is used to process an arbitrary $n$-node input graph (typically $n \gg k > 1$) that is randomly partitioned among the $k$ machines (a common implementation in many real-world systems). Our goal is to study fundamental complexity bounds for solving graph problems in this model. We present techniques for obtaining lower bounds on the distributed time complexity. Our lower bounds develop and use new bounds in random-partition communication complexity. We first show a lower bound of $\Omega(n/k)$ rounds for computing a spanning tree (ST) of the input graph. This result also implies the same bound for other fundamental problems such as computing a minimum spanning tree (MST). We also show an $\Omega(n/k^2)$ lower bound for connectivity, ST verification, and other related problems. We give algorithms for various fundamental graph problems in our model. We show that problems such as PageRank, MST, connectivity, and graph covering can be solved in $\tilde{O}(n/k)$ time, whereas for shortest paths, we present algorithms that run in $\tilde{O}(n/\sqrt{k})$ time (for a $(1+\epsilon)$-factor approximation) and in $\tilde{O}(n/k)$ time (for an $O(\log n)$-factor approximation), respectively.

Aditya Karnik - One of the best experts on this subject based on the ideXlab platform.

  • Time and Energy Complexity of Distributed Computation of a Class of Functions in Wireless Sensor Networks
    IEEE Transactions on Mobile Computing, 2008
    Co-Authors: N Khude, Anurag Kumar, Aditya Karnik
    Abstract:

    We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples; our discussion is couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous, and the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√(n/log n)) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞.
    In particular, we show that the computation time for these algorithms scales as Θ(√(n/log n)), Θ(n), and Θ(√(n log n)), respectively, whereas the energy expended scales as Θ(n), Θ(√(n/log n)), and Θ(√(n log n)), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence our results can be viewed as providing bounds for the performance with practical distributed schedulers.
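A minimal sketch of the tree algorithm's convergecast step (the tree and values below are hypothetical, and interference is abstracted into a one-level-per-slot contention-free schedule):

```python
def convergecast_max(parent, values):
    """Compute the max over a rooted tree by convergecast: leaves report
    first, and an internal node forwards the max of its subtree once all
    of its children have reported.  Under a contention-free schedule
    that lets one tree level transmit per slot, time = tree depth in
    slots and energy = one transmission per non-root node (toy
    accounting, not the paper's interference model)."""
    n = len(values)
    children = [[] for _ in range(n)]
    for v in range(1, n):          # node 0 is the sink / operator station
        children[parent[v]].append(v)

    def depth(v):
        return 1 + max((depth(c) for c in children[v]), default=0)

    def subtree_max(v):
        return max([values[v]] + [subtree_max(c) for c in children[v]])

    slots = depth(0) - 1           # the sink itself does not transmit
    transmissions = n - 1
    return subtree_max(0), slots, transmissions
```

In this accounting a shallow, well-balanced tree is what drives the favorable Θ(√(n/log n)) time scaling, while the energy cost is tied to the one-transmission-per-node convergecast.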

  • Time and Energy Complexity of Distributed Computation in Wireless Sensor Networks
    International Conference on Computer Communications, 2005
    Co-Authors: N Khude, Anurag Kumar, Aditya Karnik
    Abstract:

    We consider a scenario where a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing the maximum value of the measurements and communicating it to an operator station. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous; at each sampling instant each sensor measures a value, and then the sensors collaboratively compute and deliver the maximum of these values to the operator station. Computation algorithms differ in the messages they need to exchange, and our formulation focuses on the problem of scheduling the message exchanges. We do not exploit techniques such as source compression or block coding of the computations. For this problem, we study the computation time and energy expenditure for one-time maximum computation, and also the pipeline throughput. We show that, for an optimal algorithm, the computation time, energy expenditure, and achievable rate of computation scale as Θ(√(n/log n)), Θ(n), and Θ(1/log n), respectively, asymptotically (in probability) as the number of sensors n → ∞. We also analyze the performance of three specific computation algorithms, namely, the tree algorithm, multihop transmission, and the Ripple algorithm, and obtain scaling laws for the computation time and energy expenditure as n → ∞. Simulation results are provided to show that our analysis indeed captures the correct scaling; the simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized scheduler, and hence our results can be viewed as providing bounds for the performance with a distributed scheduler.
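To make the orders of growth concrete, the three time scalings can be tabulated numerically (constants are dropped, so only the relative ordering is meaningful):

```python
import math

def scaling_laws(n):
    """Leading-order computation-time terms from the abstract, with
    constants ignored: optimal/tree time ~ sqrt(n / log n), multihop
    time ~ n, Ripple time ~ sqrt(n * log n)."""
    return (math.sqrt(n / math.log(n)),   # optimal / tree algorithm
            n,                            # multihop transmission
            math.sqrt(n * math.log(n)))   # Ripple algorithm

tree_t, multihop_t, ripple_t = scaling_laws(10**6)
```

Already at n = 10⁶ the tree-style bound sits an order of magnitude below the Ripple bound, which in turn is far below the linear multihop time.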