Linear Algebra

The Experts below are selected from a list of 64,968 Experts worldwide, ranked by the ideXlab platform.

Lars Eldén - One of the best experts on this subject based on the ideXlab platform.

  • Numerical Linear Algebra in data mining
    Acta Numerica, 2006
    Co-Authors: Lars Eldén
    Abstract:

    Ideas and algorithms from numerical Linear Algebra are important in several areas of data mining. We give an overview of Linear Algebra methods in text mining (information retrieval), pattern recognition (classification of handwritten digits), and PageRank computations for web search engines. The emphasis is on rank reduction as a method of extracting information from a data matrix, low-rank approximation of matrices using the singular value decomposition and clustering, and on eigenvalue methods for network analysis.
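
    To make the rank-reduction idea concrete, here is a minimal sketch of low-rank approximation with the truncated SVD. It is an illustration, not code from the paper; NumPy, the sizes of the matrix A, and the target rank k are all assumptions.

    import numpy as np

    # Term-document style matrix: rows = terms, columns = documents.
    rng = np.random.default_rng(0)
    A = rng.random((100, 50))

    # Truncated SVD: keep only the k largest singular triplets.
    k = 5
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # By the Eckart-Young theorem, A_k is the best rank-k approximation
    # of A; the 2-norm error equals the first discarded singular value.
    assert np.isclose(np.linalg.norm(A - A_k, 2), s[k])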

Chris Jermaine - One of the best experts on this subject based on the ideXlab platform.

  • scalable Linear Algebra on a relational database system
    IEEE Transactions on Knowledge and Data Engineering, 2019
    Co-Authors: Michael Gubanov, Luis L Perez, Chris Jermaine
    Abstract:

    As data analytics has become an important application for modern data management systems, a new category of data management system has recently appeared: the scalable Linear Algebra system. In this paper, we argue that a parallel or distributed database system is actually an excellent platform upon which to build such functionality. Most relational systems already have support for cost-based optimization—which is vital to scaling Linear Algebra computations—and it is well-known how to make relational systems scale. We show that by making just a few changes to a parallel/distributed relational database system, such a system can be a competitive platform for scalable Linear Algebra. Taken together, our results should at least raise the possibility that brand new systems designed from the ground up to support scalable Linear Algebra are not absolutely necessary, and that such systems could instead be built on top of existing relational technology. Our results also suggest that if scalable Linear Algebra is to be added to a modern dataflow platform such as Spark, it should be added on top of the system's more structured (relational) data abstractions, rather than being constructed directly on top of the system's raw dataflow operators.

  • scalable Linear Algebra on a relational database system
    International Conference on Management of Data, 2018
    Co-Authors: Michael Gubanov, Luis L Perez, Chris Jermaine
    Abstract:

    Scalable Linear Algebra is important for analytics and machine learning (including deep learning). In this paper, we argue that a parallel or distributed database system is actually an excellent platform upon which to build such functionality. Most relational systems already have support for cost-based optimization, which is vital to scaling Linear Algebra computations, and it is well-known how to make relational systems scale. We show that by making just a few changes to a parallel/distributed relational database system, such a system can be a competitive platform for scalable Linear Algebra. Our results suggest that brand new systems supporting scalable Linear Algebra are not absolutely necessary, and that such systems could instead be built on top of existing relational technology.
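
    As a sketch of the core idea in the two entries above (an illustration, not the authors' system): a matrix stored as a (row, col, val) relation can be multiplied with a join plus a group-by aggregate. Here pandas stands in for the relational engine; the helper name to_relation and all sizes are assumptions.

    import numpy as np
    import pandas as pd

    def to_relation(M):
        # Store a matrix as a (row, col, val) relation.
        r, c = np.nonzero(M)
        return pd.DataFrame({"row": r, "col": c, "val": M[r, c]})

    A, B = np.random.rand(3, 4), np.random.rand(4, 2)
    ra, rb = to_relation(A), to_relation(B)

    # C = A @ B as a join on A.col = B.row, then aggregation over k.
    joined = ra.merge(rb, left_on="col", right_on="row", suffixes=("_a", "_b"))
    joined["val"] = joined["val_a"] * joined["val_b"]
    C = joined.groupby(["row_a", "col_b"])["val"].sum().unstack().to_numpy()

    assert np.allclose(C, A @ B)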

Jack Dongarra - One of the best experts on this subject based on the ideXlab platform.

  • accelerating gpu kernels for dense Linear Algebra
    IEEE International Conference on High Performance Computing Data and Analytics, 2010
    Co-Authors: Rajib Nath, Stanimire Tomov, Jack Dongarra
    Abstract:

    Implementations of the Basic Linear Algebra Subprograms (BLAS) interface are a major building block of dense Linear Algebra (DLA) libraries, and therefore have to be highly optimized. We present some techniques and implementations that significantly accelerate the corresponding routines from currently available libraries for GPUs. In particular, Pointer Redirecting - a set of GPU-specific optimization techniques - allows us to easily remove performance oscillations associated with problem dimensions not divisible by fixed blocking sizes. For example, applied to the matrix-matrix multiplication routines, depending on the hardware configuration and routine parameters, this can yield algorithms that are two times faster. Similarly, the matrix-vector multiplication can be accelerated more than two times in both single and double precision arithmetic. Additionally, GPU-specific acceleration techniques are applied to develop new kernels (e.g. syrk, symv) that are up to 20× faster than the currently available kernels. We present these kernels and also show their acceleration effect on higher-level dense Linear Algebra routines. The accelerated kernels are now freely available through the MAGMA BLAS library.
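
    A rough sketch of the pointer-redirecting idea (an illustration, not MAGMA's actual CUDA code): when the problem size is not divisible by the blocking size, reads past the matrix edge are redirected (clamped) to the last valid row instead of padding the matrix, and the surplus results are simply never written back. NumPy and all names and sizes here are assumptions.

    import numpy as np

    def matvec_redirect(A, x, nb=4):
        # y = A @ x over a "launch grid" rounded up to a multiple of nb.
        m, n = A.shape
        y = np.zeros(m)
        grid_m = ((m + nb - 1) // nb) * nb   # full blocks only, as on a GPU
        for i in range(grid_m):
            i_read = min(i, m - 1)           # redirected (clamped) read
            acc = float(A[i_read, :] @ x)    # every "thread" does full work
            if i < m:                        # guarded write drops overflow
                y[i] = acc
        return y

    A = np.random.rand(7, 5)                 # 7 is not divisible by nb = 4
    x = np.random.rand(5)
    assert np.allclose(matvec_redirect(A, x), A @ x)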

  • Numerical Linear Algebra algorithms and software
    Journal of Computational and Applied Mathematics, 2000
    Co-Authors: Jack Dongarra, Victor Eijkhout
    Abstract:

    The increasing availability of advanced-architecture computers has a significant effect on all spheres of scientific computation, including algorithm research and software development in numerical Linear Algebra. Linear Algebra – in particular, the solution of linear systems of equations – lies at the heart of most calculations in scientific computing. This paper discusses some of the recent developments in Linear Algebra designed to exploit these advanced-architecture computers. We discuss two broad classes of algorithms: those for dense matrices and those for sparse matrices.
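
    A small illustration of the two classes the paper surveys (a sketch, with NumPy and SciPy standing in for the dense and sparse libraries): the same tridiagonal system solved once through a dense LU factorization and once through a sparse direct solver that exploits the zero structure.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import spsolve

    n = 1000
    b = np.ones(n)

    # Tridiagonal operator stored sparsely (CSR format).
    A_sparse = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

    # Dense path: LAPACK-style LU factorization on the full n x n array.
    x_dense = np.linalg.solve(A_sparse.toarray(), b)

    # Sparse path: direct solver that never touches the zeros.
    x_sparse = spsolve(A_sparse, b)

    assert np.allclose(x_dense, x_sparse)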

  • automatically tuned Linear Algebra software
    Conference on High Performance Computing (Supercomputing), 1998
    Co-Authors: Clint R Whaley, Jack Dongarra
    Abstract:

    This paper describes an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. The production of such software for machines ranging from desktop workstations to embedded processors can be a tedious and time-consuming process. The work described here can help in automating much of this process. We concentrate our efforts on the widely used Linear Algebra kernels called the Basic Linear Algebra Subroutines (BLAS). In particular, the work presented here is for general matrix multiply, DGEMM. However, much of the technology and approach developed here can be applied to the other Level 3 BLAS, and the general strategy can have an impact on basic Linear Algebra operations in general and may be extended to other important kernel operations.
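
    A toy version of the empirical-tuning approach (a sketch, not ATLAS itself): parameterize a blocked matrix multiply by its blocking factor, time each candidate on the target machine, and keep the fastest. NumPy and the candidate block sizes are assumptions.

    import time
    import numpy as np

    def blocked_gemm(A, B, nb):
        # Naive cache-blocked matrix multiply with blocking factor nb.
        n = A.shape[0]
        C = np.zeros((n, n))
        for i in range(0, n, nb):
            for j in range(0, n, nb):
                for k in range(0, n, nb):
                    C[i:i+nb, j:j+nb] += A[i:i+nb, k:k+nb] @ B[k:k+nb, j:j+nb]
        return C

    # Install-time style search: time each candidate and keep the best.
    n = 256
    A, B = np.random.rand(n, n), np.random.rand(n, n)
    timings = {}
    for nb in (16, 32, 64, 128):
        t0 = time.perf_counter()
        blocked_gemm(A, B, nb)
        timings[nb] = time.perf_counter() - t0
    print("selected blocking factor:", min(timings, key=timings.get))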

Markus Puschel - One of the best experts on this subject based on the ideXlab platform.

  • a basic Linear Algebra compiler
    Symposium on Code Generation and Optimization, 2014
    Co-Authors: Daniele G Spampinato, Markus Puschel
    Abstract:

    Many applications in media processing, control, graphics, and other domains require efficient small-scale Linear Algebra computations. However, most existing high-performance libraries for Linear Algebra, such as ATLAS or Intel MKL, are more geared towards large-scale problems (matrix sizes in the hundreds and larger) and towards specific interfaces (e.g., BLAS). In this paper we present LGen: a compiler for small-scale, basic Linear Algebra computations. The input to LGen is a fixed-size Linear Algebra expression; the output is a corresponding C function optionally including intrinsics to efficiently use SIMD vector extensions. LGen generates code using two levels of mathematical domain-specific languages (DSLs). The DSLs are used to perform tiling, loop fusion, and vectorization at a high level of abstraction, before the final code is generated. In addition, search is used to select among alternative generated implementations. We show benchmarks of code generated by LGen against Intel MKL and IPP as well as against alternative generators, such as the C++ template-based Eigen and the BTO compiler. The achieved speed-up is typically about a factor of two to three.
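
    To show the flavor of fixed-size code generation (a simplified sketch, not LGen's actual output or DSL), a generator can take known sizes, fully unroll a matrix-vector product, and emit a specialized C function; the emitted function name is illustrative.

    def gen_fixed_matvec(m, n):
        # Emit C for y = A @ x with A of fixed size m x n, fully unrolled.
        lines = [f"void matvec_{m}x{n}(const float *A, const float *x, float *y) {{"]
        for i in range(m):
            terms = " + ".join(f"A[{i * n + j}] * x[{j}]" for j in range(n))
            lines.append(f"    y[{i}] = {terms};")
        lines.append("}")
        return "\n".join(lines)

    print(gen_fixed_matvec(3, 3))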