Matrix Computation

The Experts below are selected from a list of 72,027 Experts worldwide, ranked by the ideXlab platform.

Aditya Ramamoorthy - One of the best experts on this subject based on the ideXlab platform.

  • Coded Sparse Matrix Computation Schemes That Leverage Partial Stragglers
    International Symposium on Information Theory, 2021
    Co-Authors: Anindya Bijoy Das, Aditya Ramamoorthy
    Abstract:

    Coded matrix computation utilizes concepts from erasure coding to mitigate the effect of slow worker nodes (stragglers) in the distributed setting. While this is useful, there are issues with applying, e.g., MDS codes in a straightforward manner to this problem. Several practical scenarios involve sparse matrices. MDS codes typically require dense linear combinations of submatrices of the original matrices, which destroy their inherent sparsity; this leads to significantly higher worker computation times. Moreover, treating slow nodes as erasures ignores the potentially useful partial computations they perform. In this work we present schemes that leverage partial computation by stragglers while constraining the level of coding required to generate the encoded submatrices. This significantly reduces worker computation time compared to previous approaches and improves numerical stability in the decoding process. Exhaustive numerical experiments support our findings.
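
A toy sketch (ours, not the authors' scheme) of the sparsity problem described above: a (3, 2) MDS-style code over two block rows of a sparse matrix produces a dense-coded parity block that loses the sparsity, yet any two of the three worker results decode the full product. All parameters are illustrative.

```python
# Dense MDS-style encoding destroys sparsity; partial results still decode.
import numpy as np

rng = np.random.default_rng(0)

A = np.diag([1.0, 2.0, 3.0, 4.0])   # very sparse: 4 nonzeros in 16 entries
x = rng.standard_normal(4)
A0, A1 = A[:2], A[2:]               # split row-wise into k = 2 submatrices

# Worker assignments: two systematic blocks plus one dense-coded parity.
coded = [A0, A1, 0.7 * A0 + 1.3 * A1]

print("nnz of A0:", np.count_nonzero(A0))             # 2
print("nnz of parity:", np.count_nonzero(coded[2]))   # 4: sparsity destroyed

# Each worker multiplies its block by x; any two results suffice.
results = [B @ x for B in coded]

# Suppose worker 1 straggles: recover A1 @ x from workers 0 and 2.
A1x = (results[2] - 0.7 * results[0]) / 1.3
assert np.allclose(A1x, A1 @ x)
```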

  • Random Convolutional Coding for Robust and Straggler-Resilient Distributed Matrix Computation
    arXiv: Information Theory, 2019
    Co-Authors: Anindya Bijoy Das, Aditya Ramamoorthy, Namrata Vaswani
    Abstract:

    Distributed matrix computations (matrix-vector and matrix-matrix multiplications) are at the heart of several tasks within the machine learning pipeline. However, distributed clusters are well recognized to suffer from the problem of stragglers (slow or failed nodes). Prior work in this area has presented straggler mitigation strategies based on polynomial evaluation/interpolation. However, such approaches suffer from numerical problems (blow-up of round-off errors) owing to the high condition numbers of the corresponding Vandermonde matrices. In this work, we introduce a novel solution approach that relies on embedding distributed matrix computations into the structure of a convolutional code. This simple innovation allows us to develop a provably numerically robust and efficient (fast) solution for distributed matrix-vector and matrix-matrix multiplication.
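
The conditioning issue that motivates this work is easy to reproduce. The snippet below (our illustration, not from the paper) measures the condition number of real Vandermonde matrices as the number of evaluation points grows.

```python
# Vandermonde condition numbers blow up with the number of workers,
# which is what makes polynomial-interpolation decoding unstable.
import numpy as np

for n in (5, 10, 20, 30):
    pts = np.linspace(-1.0, 1.0, n)          # n distinct evaluation points
    V = np.vander(pts, increasing=True)      # n x n Vandermonde matrix
    print(f"n = {n:2d}: cond(V) = {np.linalg.cond(V):.2e}")
# The growth is roughly exponential in n; the convolutional-code embedding
# avoids this Vandermonde structure altogether.
```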

  • Universally Decodable Matrices for Distributed Matrix-Vector Multiplication
    International Symposium on Information Theory, 2019
    Co-Authors: Aditya Ramamoorthy, Li Tang, Pascal O Vontobel
    Abstract:

    Coded computation is an emerging research area that leverages concepts from erasure coding to mitigate the effect of stragglers (slow nodes) in distributed computation clusters, especially for matrix computation problems. In this work, we present a class of distributed matrix-vector multiplication schemes that are based on codes in the Rosenbloom-Tsfasman metric and universally decodable matrices. Our schemes take into account the inherent computation order within a worker node. In particular, they allow us to effectively leverage partial computations performed by stragglers (a feature that many prior works lack). An additional main contribution of our work is a companion-matrix-based embedding of these codes that allows us to obtain sparse and numerically stable schemes for the problem at hand. Experimental results confirm the effectiveness of our techniques.
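
To see why a companion-matrix embedding can help, consider the toy construction below (illustrative only; the paper's embedding is more general): replacing scalar evaluation points by powers of the companion matrix of x^m - 1 yields encoding matrices that are permutations, hence maximally sparse and perfectly conditioned.

```python
# Powers of the companion matrix of x^4 - 1 are permutation matrices.
import numpy as np

m = 4
C = np.zeros((m, m))        # companion matrix of x^4 - 1: a cyclic shift
C[1:, :-1] = np.eye(m - 1)
C[0, -1] = 1.0

for i in range(m):
    Ci = np.linalg.matrix_power(C, i)
    print(f"C^{i}: nnz = {np.count_nonzero(Ci)}, cond = {np.linalg.cond(Ci):.1f}")
# Every power has exactly m nonzeros and condition number 1.0, unlike the
# dense, ill-conditioned powers underlying a scalar Vandermonde encoding.
```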

Zheng Zhang - One of the best experts on this subject based on the ideXlab platform.

  • MadLINQ: Large-Scale Distributed Matrix Computation for the Cloud
    European Conference on Computer Systems, 2012
    Co-Authors: Zhengping Qian, Xiuwei Chen, Nanxi Kang, Mingcheng Chen, Thomas Moscibroda, Zheng Zhang
    Abstract:

    The computation core of many data-intensive applications can be best expressed as matrix computations. The MadLINQ project addresses the following two important research problems: the need for a highly scalable, efficient and fault-tolerant matrix computation system that is also easy to program, and the seamless integration of such specialized execution engines in a general-purpose data-parallel computing system. MadLINQ exposes a unified programming model to both matrix algorithm and application developers. Matrix algorithms are expressed as sequential programs operating on tiles (i.e., sub-matrices). For application developers, MadLINQ provides a distributed matrix computation library for .NET languages. Via the LINQ technology, MadLINQ also seamlessly integrates with DryadLINQ, a data-parallel computing system focusing on relational algebra. The system automatically handles the parallelization and distributed execution of programs on a large cluster. It outperforms current state-of-the-art systems by employing two key techniques, both of which are enabled by the matrix abstraction: exploiting extra parallelism using fine-grained pipelining, and efficient on-demand failure recovery using a distributed fault-tolerant execution engine. We describe the design and implementation of MadLINQ and evaluate system performance using several real-world applications.
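
The tile programming model is easy to picture in Python (MadLINQ itself exposes .NET/LINQ APIs, so this sketch is only an analogy): the algorithm author writes a sequential triple loop over tiles, and the engine treats each tile update as a DAG node that can be scheduled in parallel, pipelined, and re-executed on failure.

```python
# Tiled matrix multiply: each tile update is an independently schedulable task.
import numpy as np

def tiled_matmul(A, B, t):
    """Multiply A (m x k) by B (k x n) using t x t tiles."""
    m, k = A.shape
    _, n = B.shape
    C = np.zeros((m, n))
    for i in range(0, m, t):
        for j in range(0, n, t):
            for p in range(0, k, t):
                # One tile task: consumers of C[i:i+t, j:j+t] can start as
                # soon as all contributing tile products are done.
                C[i:i+t, j:j+t] += A[i:i+t, p:p+t] @ B[p:p+t, j:j+t]
    return C

rng = np.random.default_rng(1)
A, B = rng.standard_normal((6, 4)), rng.standard_normal((4, 8))
assert np.allclose(tiled_matmul(A, B, 2), A @ B)
```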

Joerg Kliewer - One of the best experts on this subject based on the ideXlab platform.

  • Distributed and Private Coded Matrix Computation with Flexible Communication Load
    International Symposium on Information Theory, 2019
    Co-Authors: Malihe Aliasgari, Osvaldo Simeone, Joerg Kliewer
    Abstract:

    Tensor operations, such as matrix multiplication, are central to large-scale machine learning applications. These operations can be carried out on a distributed computing platform with a master server at the user side and multiple workers in the cloud operating in parallel. For distributed platforms, it has recently been shown that coding over the input data matrices can reduce the computational delay, yielding a trade-off between recovery threshold and communication load. In this work, we impose an additional security constraint on the data matrices and assume that workers can collude to eavesdrop on the content of these data matrices. Specifically, we introduce a novel class of secure codes, referred to as secure generalized PolyDot codes, that generalizes previously published non-secure versions of these codes for matrix multiplication. These codes extend the state of the art by allowing a flexible trade-off between recovery threshold and communication load for a fixed maximum number of colluding workers.
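
A toy example of the underlying padding idea (a minimal sketch, not the secure generalized PolyDot construction): mask the matrix with a uniformly random term over a prime field, so a single worker's share reveals nothing about the data, while the master still interpolates the product from a subset of workers. The field size, evaluation points, and T = 1 collusion level are illustrative assumptions.

```python
# One-time-pad-style masking: share_i = A + R*pt_i mod p hides A from any
# single worker; two of three results decode A @ x by interpolation.
import numpy as np

p = 65521                               # a prime field size
rng = np.random.default_rng(2)

A = rng.integers(0, p, (2, 2))          # secret matrix
x = rng.integers(0, p, (2, 1))          # public input vector
R = rng.integers(0, p, (2, 2))          # uniform random mask

pts = [1, 2, 3]                         # one evaluation point per worker
shares = [(A + R * pt) % p for pt in pts]
ys = [(s @ x) % p for s in shares]      # worker i returns share_i @ x

# y(pt) = A@x + (R@x)*pt is degree 1 in pt, so two results interpolate it
# at pt = 0: from the workers at pts 1 and 2, A@x = 2*y(1) - y(2) mod p.
Ax = (2 * ys[0] - ys[1]) % p
assert np.array_equal(Ax, (A @ x) % p)
```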

  • Distributed and Private Coded Matrix Computation with Flexible Communication Load
    arXiv: Information Theory, 2019
    Co-Authors: Malihe Aliasgari, Osvaldo Simeone, Joerg Kliewer
    Abstract:

    Tensor operations, such as matrix multiplication, are central to large-scale machine learning applications. For user-driven tasks, these operations can be carried out on a distributed computing platform with a master server at the user side and multiple workers in the cloud operating in parallel. For distributed platforms, it has recently been shown that coding over the input data matrices can reduce the computational delay, yielding a trade-off between recovery threshold and communication load. In this paper we impose an additional security constraint on the data matrices and assume that workers can collude to eavesdrop on the content of these data matrices. Specifically, we introduce a novel class of secure codes, referred to as secure generalized PolyDot codes, that generalizes previously published non-secure versions of these codes for matrix multiplication. These codes extend the state of the art by allowing a flexible trade-off between recovery threshold and communication load for a fixed maximum number of colluding workers.

Anindya Bijoy Das - One of the best experts on this subject based on the ideXlab platform.

  • Coded Sparse Matrix Computation Schemes That Leverage Partial Stragglers
    International Symposium on Information Theory, 2021
    Co-Authors: Anindya Bijoy Das, Aditya Ramamoorthy
    Abstract:

    Coded matrix computation utilizes concepts from erasure coding to mitigate the effect of slow worker nodes (stragglers) in the distributed setting. While this is useful, there are issues with applying, e.g., MDS codes in a straightforward manner to this problem. Several practical scenarios involve sparse matrices. MDS codes typically require dense linear combinations of submatrices of the original matrices, which destroy their inherent sparsity; this leads to significantly higher worker computation times. Moreover, treating slow nodes as erasures ignores the potentially useful partial computations they perform. In this work we present schemes that leverage partial computation by stragglers while constraining the level of coding required to generate the encoded submatrices. This significantly reduces worker computation time compared to previous approaches and improves numerical stability in the decoding process. Exhaustive numerical experiments support our findings.

  • Random Convolutional Coding for Robust and Straggler-Resilient Distributed Matrix Computation
    arXiv: Information Theory, 2019
    Co-Authors: Anindya Bijoy Das, Aditya Ramamoorthy, Namrata Vaswani
    Abstract:

    Distributed matrix computations (matrix-vector and matrix-matrix multiplications) are at the heart of several tasks within the machine learning pipeline. However, distributed clusters are well recognized to suffer from the problem of stragglers (slow or failed nodes). Prior work in this area has presented straggler mitigation strategies based on polynomial evaluation/interpolation. However, such approaches suffer from numerical problems (blow-up of round-off errors) owing to the high condition numbers of the corresponding Vandermonde matrices. In this work, we introduce a novel solution approach that relies on embedding distributed matrix computations into the structure of a convolutional code. This simple innovation allows us to develop a provably numerically robust and efficient (fast) solution for distributed matrix-vector and matrix-matrix multiplication.

Yasuyuki Sugaya - One of the best experts on this subject based on the ideXlab platform.

  • Compact Fundamental Matrix Computation
    IPSJ Transactions on Computer Vision and Applications, 2010
    Co-Authors: Kenichi Kanatani, Yasuyuki Sugaya
    Abstract:

    A very compact algorithm is presented for fundamental matrix computation from point correspondences over two images. The computation is based on the maximum likelihood (ML) principle, minimizing the reprojection error. The rank constraint is incorporated by the EFNS procedure. Although our algorithm produces the same solution as all existing ML-based methods, it is probably the most practical of all, being small and simple. Numerical experiments confirm that our algorithm behaves as expected.
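
For context, the sketch below implements the classical eight-point baseline with an SVD-based rank-2 projection, which is the usual starting point for ML methods like the one described here; the paper's algorithm instead minimizes the reprojection error and imposes the rank constraint through EFNS. Coordinate normalization is omitted for brevity.

```python
# Eight-point algorithm: algebraic least squares plus a rank-2 projection.
import numpy as np

def eight_point(x1, x2):
    """Estimate F from N >= 8 correspondences x1[i] <-> x2[i], each (u, v)."""
    A = np.zeros((len(x1), 9))
    for i, ((u1, v1), (u2, v2)) in enumerate(zip(x1, x2)):
        # One row per correspondence: the epipolar constraint
        # x2^T F x1 = 0 is linear in the 9 entries of F.
        A[i] = [u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1]
    # Right singular vector for the smallest singular value of A.
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # Enforce det(F) = 0 by zeroing the smallest singular value of F.
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt
```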

  • High Accuracy Fundamental Matrix Computation and Its Performance Evaluation
    IEICE Transactions on Information and Systems, 2007
    Co-Authors: Kenichi Kanatani, Yasuyuki Sugaya
    Abstract:

    We compare the convergence performance of different numerical schemes for computing the fundamental matrix from point correspondences over two images. First, we state the problem and the associated KCR lower bound. Then, we describe the algorithms of three well-known methods: FNS, HEIV, and renormalization. We also introduce Gauss-Newton iterations as a new method for fundamental matrix computation. For initial values, we test random choice, least squares, and Taubin's method. Experiments using simulated and real images reveal different characteristics of each method. Overall, FNS exhibits the best convergence properties.

  • Performance Evaluation of Iterative Geometric Fitting Algorithms
    Computational Statistics & Data Analysis, 2007
    Co-Authors: Kenichi Kanatani, Yasuyuki Sugaya
    Abstract:

    The convergence performance of typical numerical schemes for geometric fitting in computer vision applications is compared. First, the problem and the associated KCR lower bound are stated. Then, three well-known fitting algorithms are described: FNS, HEIV, and renormalization. To these, we add a special variant of Gauss-Newton iterations. For initialization of the iterations, random choice, least squares, and Taubin's method are tested. Simulations are conducted for fundamental matrix computation and ellipse fitting, revealing different characteristics of each method.
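
Taubin's method, used above as an initializer, reduces to a single generalized eigenvalue problem. The sketch below applies it to conic (ellipse) fitting; the construction is standard, with data normalization omitted for brevity.

```python
# Taubin conic fit: minimize theta^T M theta / theta^T T theta, where M is
# the data scatter and T the scatter of the carrier gradients.
import numpy as np
from scipy.linalg import eig

def taubin_conic_fit(x, y):
    """Fit a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to points (x, y)."""
    one, zero = np.ones_like(x), np.zeros_like(x)
    xi = np.stack([x*x, x*y, y*y, x, y, one], axis=1)   # carrier vectors
    M = xi.T @ xi / len(x)                              # data scatter
    dx = np.stack([2*x, y, zero, one, zero, zero], axis=1)
    dy = np.stack([zero, x, 2*y, zero, one, zero], axis=1)
    T = (dx.T @ dx + dy.T @ dy) / len(x)                # gradient scatter
    # Generalized eigenvector of (M, T) with the smallest finite
    # eigenvalue (T is singular, so infinite eigenvalues are filtered out).
    w, V = eig(M, T)
    w = np.where(np.isfinite(w.real), np.abs(w.real), np.inf)
    return V[:, np.argmin(w)].real

t = np.linspace(0.0, 2.0 * np.pi, 50)
rng = np.random.default_rng(3)
theta = taubin_conic_fit(np.cos(t) + 0.01 * rng.standard_normal(50),
                         np.sin(t) + 0.01 * rng.standard_normal(50))
print(theta / theta[0])   # approx (1, 0, 1, 0, 0, -1) for the unit circle
```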