Factorization

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 192,216 Experts worldwide ranked by ideXlab platform

Zhouwang Yang - One of the best experts on this subject based on the ideXlab platform.

  • an algorithm for low rank matrix Factorization and its applications
    Neurocomputing, 2018
    Co-Authors: Baiyu Chen, Zi Yang, Zhouwang Yang
    Abstract:

    This paper proposes a fast and effective algorithm for low-rank matrix factorization. Low-rank matrix factorization has many applications, and numerous algorithms have been developed to solve it. However, many of these algorithms do not use the rank directly; instead, they minimize a nuclear norm via the Singular Value Decomposition (SVD), which is computationally expensive. In addition, these algorithms often fix the dimension of the factorized matrix, so one must first find an optimal dimension in order to obtain a solution. Unfortunately, the optimal dimension is unknown in many practical problems, such as matrix completion and recommender systems. It is therefore desirable to have a faster algorithm that can also estimate the optimal dimension. In this paper, we use the Hidden Matrix Factorized Augmented Lagrangian Method to solve low-rank matrix factorizations, and we add a mechanism that dynamically estimates and adjusts the optimal dimension while the algorithm runs. Moreover, in the era of Big Data, large sparse data sets are increasingly common, and for such highly sparse data our algorithm has the potential to outperform alternatives. We apply it to practical problems such as Low-Rank Representation (LRR) and constrained matrix completion. In numerical experiments, it performs well on both synthetic and real-world data.
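    The abstract does not spell out the Hidden Matrix Factorized Augmented Lagrangian Method itself. As a generic point of comparison, a minimal low-rank factorization by alternating least squares (a standard baseline, not the authors' algorithm; all names and parameters below are illustrative) can be sketched as:

    ```python
    import numpy as np

    def als_low_rank(X, r, iters=50, lam=1e-3):
        """Approximate X (m x n) as U @ V with U (m x r), V (r x n)
        by alternating ridge-regularized least squares."""
        m, n = X.shape
        rng = np.random.default_rng(0)
        U = rng.standard_normal((m, r))
        V = rng.standard_normal((r, n))
        I = lam * np.eye(r)
        for _ in range(iters):
            # Fix U, solve (U^T U + lam*I) V = U^T X for V
            V = np.linalg.solve(U.T @ U + I, U.T @ X)
            # Fix V, solve (V V^T + lam*I) U^T = V X^T for U
            U = np.linalg.solve(V @ V.T + I, V @ X.T).T
        return U, V

    # Recover a matrix of known rank 3
    rng = np.random.default_rng(1)
    X = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 30))
    U, V = als_low_rank(X, r=3)
    rel_err = np.linalg.norm(X - U @ V) / np.linalg.norm(X)
    ```

    Note that this sketch fixes the rank r in advance; the abstract's point is precisely that estimating the dimension dynamically avoids that guesswork.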

Jack Dongarra - One of the best experts on this subject based on the ideXlab platform.

  • Updating incomplete Factorization preconditioners for model order reduction
    Numerical Algorithms, 2016
    Co-Authors: Hartwig Anzt, Jens Saak, Edmond Chow, Jack Dongarra
    Abstract:

    When solving a sequence of related linear systems by iterative methods, it is common to reuse the preconditioner for several systems, and then to recompute the preconditioner when the matrix has changed significantly. Rather than recomputing the preconditioner from scratch, it is potentially more efficient to update the previous preconditioner. Unfortunately, it is not always known how to update a preconditioner, for example, when the preconditioner is an incomplete Factorization. A recently proposed iterative algorithm for computing incomplete Factorizations, however, is able to exploit an initial guess, unlike existing algorithms for incomplete Factorizations. By treating a previous Factorization as an initial guess to this algorithm, an incomplete Factorization may thus be updated. We use a sequence of problems from model order reduction. Experimental results using an optimized GPU implementation show that updating a previous Factorization can be inexpensive and effective, making solving sequences of linear systems a potential niche problem for the iterative incomplete Factorization algorithm.
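    The iterative incomplete-factorization algorithm referred to here treats each kept nonzero of L and U as a fixed point of the equations (LU)_{ij} = A_{ij} on a prescribed sparsity pattern, which is what lets it start from an initial guess. A minimal dense sketch of such ILU(0) sweeps (illustrative only, not the optimized GPU implementation from the paper):

    ```python
    import numpy as np

    def iterative_ilu0(A, L0=None, U0=None, sweeps=5):
        """Fixed-point sweeps for ILU(0): each nonzero of L and U is
        updated from (L U)_{ij} = A_{ij} restricted to A's sparsity
        pattern. L0, U0, if given, act as the initial guess, e.g. the
        factorization of a nearby matrix in the sequence."""
        n = A.shape[0]
        pattern = A != 0                        # ILU(0) keeps A's pattern
        L = np.eye(n) if L0 is None else L0.copy()
        U = np.triu(A).astype(float) if U0 is None else U0.copy()
        for _ in range(sweeps):
            for i in range(n):
                for j in range(n):
                    if not pattern[i, j]:
                        continue
                    if i > j:   # strictly lower triangle: update L
                        L[i, j] = (A[i, j] - L[i, :j] @ U[:j, j]) / U[j, j]
                    else:       # upper triangle and diagonal: update U
                        U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        return L, U

    # Tridiagonal test matrix: for tridiagonal A, ILU(0) is the exact LU
    n = 6
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L, U = iterative_ilu0(A)
    ```

    With the sequential row-by-row ordering above, the sweep behaves like Gauss-Seidel and converges quickly; a fine-grained parallel variant updates entries concurrently and typically needs more sweeps. Passing a previous L, U as L0, U0 is the updating idea described in the abstract.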

  • Updating incomplete Factorization preconditioners for model order reduction
    Numerical Algorithms, 2016
    Co-Authors: Hartwig Anzt, Jens Saak, Edmond Chow, Jack Dongarra
    Abstract:

    © 2016 Springer Science+Business Media New York When solving a sequence of related linear systems by iterative methods, it is common to reuse the preconditioner for several systems, and then to recompute the preconditioner when the matrix has changed significantly. Rather than recomputing the preconditioner from scratch, it is potentially more efficient to update the previous preconditioner. Unfortunately, it is not always known how to update a preconditioner, for example, when the preconditioner is an incomplete Factorization. A recently proposed iterative algorithm for computing incomplete Factorizations, however, is able to exploit an initial guess, unlike existing algorithms for incomplete Factorizations. By treating a previous Factorization as an initial guess to this algorithm, an incomplete Factorization may thus be updated. We use a sequence of problems from model order reduction. Experimental results using an optimized GPU implementation show that updating a previous Factorization can be inexpensive and effective, making solving sequences of linear systems a potential niche problem for the iterative incomplete Factorization algorithm.

Pauli Miettinen - One of the best experts on this subject based on the ideXlab platform.

  • Clustering Boolean tensors
    Data Mining and Knowledge Discovery, 2015
    Co-Authors: Saskia Metzler, Pauli Miettinen
    Abstract:

    Graphs that evolve over time, such as friendship networks, are an example of data naturally represented as binary tensors. Just as the adjacency matrix of a graph can be analysed using a matrix factorization, the tensor can be analysed by factorizing it. Unfortunately, tensor factorizations are computationally hard problems, often significantly harder than their matrix counterparts. In the case of Boolean tensor factorizations, where the input tensor and all the factors are required to be binary and Boolean algebra is used, much of that hardness comes from the possibility of overlapping components. Yet in many applications we are perfectly happy to partition at least one of the modes. For instance, in time-evolving friendship networks, groups of friends may overlap, but the time points at which the network was captured are always distinct. In this paper we investigate the consequences this partitioning has on the computational complexity of Boolean tensor factorizations and present a new algorithm for the resulting clustering problem. This algorithm can alternatively be seen as a particularly regularized clustering algorithm that can handle extremely high-dimensional observations. We analyse our algorithm with the goal of maximizing similarity and argue that this is more meaningful than minimizing dissimilarity. As a by-product we obtain a PTAS and an efficient 0.828-approximation algorithm for rank-1 binary factorizations. Our algorithm for Boolean tensor clustering achieves high scalability, high similarity, and good generalization to unseen data on both synthetic and real-world data sets.
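    To make "maximizing the similarity" concrete for the rank-1 binary case mentioned above: the goal is to pick binary vectors u, v whose Boolean outer product agrees with the data on as many entries as possible. The brute-force sketch below illustrates the objective only; it is exponential in the number of rows and is not the paper's PTAS or 0.828-approximation algorithm:

    ```python
    import numpy as np
    from itertools import product

    def best_rank1_boolean(X):
        """Exhaustively find binary u, v maximizing the number of entries
        where the Boolean outer product u v^T agrees with the binary
        matrix X. For tiny matrices only."""
        m, n = X.shape
        best_u, best_v, best_score = None, None, -1
        for bits in product([0, 1], repeat=m):
            u = np.array(bits)
            v = np.zeros(n, dtype=int)
            score = 0
            for j in range(n):
                agree0 = int((X[:, j] == 0).sum())  # agreements if v_j = 0
                agree1 = int((X[:, j] == u).sum())  # agreements if v_j = 1
                v[j] = int(agree1 > agree0)
                score += max(agree0, agree1)
            if score > best_score:
                best_u, best_v, best_score = u, v, score
        return best_u, best_v, best_score

    # A matrix that is exactly rank-1 in the Boolean sense
    X = np.array([[1, 1, 0],
                  [1, 1, 0],
                  [0, 0, 0]])
    u, v, score = best_rank1_boolean(X)
    ```

    Here the similarity (number of agreeing entries) is maximized rather than the number of disagreements minimized, mirroring the distinction the abstract draws.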

  • ICDM - Boolean Tensor Factorizations
    2011 IEEE 11th International Conference on Data Mining, 2011
    Co-Authors: Pauli Miettinen
    Abstract:

    Tensors are multi-way generalizations of matrices, and, like matrices, they can be factorized, that is, represented (approximately) as a product of factors. These factors are typically either all matrices or a mixture of matrices and tensors. With the widespread adoption of matrix factorization techniques in data mining, tensor factorizations have also started to gain attention. In this paper we study Boolean tensor factorizations. We assume that the data is binary multi-way data, and we want to factorize it into binary factors using Boolean arithmetic (i.e., defining 1 + 1 = 1). Boolean tensor factorizations are therefore a natural generalization of Boolean matrix factorizations. We study the theory of Boolean tensor factorizations and show that at least some of the benefits Boolean matrix factorizations have over normal matrix factorizations carry over to tensor data. We also present algorithms for Boolean variants of the CP and Tucker decompositions, the two most common types of tensor factorization. Through experiments on synthetic and real-world data, we show that Boolean tensor factorizations are a viable alternative when the data is naturally binary.
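    The Boolean arithmetic in question changes only the addition: since 1 + 1 = 1, overlapping components saturate instead of summing. A minimal sketch of reconstructing a 3-way binary tensor from given Boolean CP factor matrices (illustrative; the paper's algorithms compute the factors, which are simply assumed here):

    ```python
    import numpy as np

    def boolean_cp(A, B, C):
        """Reconstruct a binary 3-way tensor from Boolean CP factors
        A (I x R), B (J x R), C (K x R): entry (i, j, k) is the Boolean
        sum (OR) over r of A[i,r] AND B[j,r] AND C[k,r]."""
        T = np.einsum('ir,jr,kr->ijk', A, B, C)  # ordinary CP sum over r
        return (T > 0).astype(int)               # clip: Boolean 1 + 1 = 1

    # Two overlapping rank-1 components
    A = np.array([[1, 0], [1, 1]])
    B = np.array([[1, 1], [0, 1]])
    C = np.array([[1, 0], [1, 1]])
    T = boolean_cp(A, B, C)
    ```

    At entry (1, 0, 1) both components contribute, so the ordinary CP sum would give 2, while the Boolean reconstruction saturates at 1.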

Ludmil Katzarkov - One of the best experts on this subject based on the ideXlab platform.

  • Resolutions in Factorization categories
    Advances in Mathematics, 2016
    Co-Authors: Matthew Ballard, Dragos Deliu, David Favero, M. Umut Isik, Ludmil Katzarkov
    Abstract:

    Building upon ideas of Eisenbud, Buchweitz, Positselski, and others, we introduce the notion of a factorization category. We then develop some essential tools for working with factorization categories, including constructions of resolutions of factorizations from resolutions of their components, and derived functors. Using these resolutions, we lift fully-faithfulness and equivalence statements from derived categories of Abelian categories to derived categories of factorizations. Immediate geometric consequences include a realization of the derived category of a projective hypersurface as matrix factorizations over a noncommutative algebra, and a recovery of a theorem of Baranovsky and Pecharich.
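    For context, Eisenbud's classical notion that the factorization category generalizes can be stated as follows (the standard definition, not taken from this paper):

    ```latex
    % Eisenbud's matrix factorization of an element f of a commutative
    % ring S: a pair (A, B) of square matrices of the same size over S with
    \[
      A B \;=\; B A \;=\; f \cdot \mathrm{Id}.
    \]
    % Equivalently: a pair of free S-modules P_0, P_1 with maps
    % d_0 : P_0 \to P_1 and d_1 : P_1 \to P_0 such that
    \[
      d_1 \circ d_0 = f \cdot \mathrm{id}_{P_0},
      \qquad
      d_0 \circ d_1 = f \cdot \mathrm{id}_{P_1}.
    \]
    ```

    A factorization category abstracts the second formulation, allowing the components to live in more general Abelian categories.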

  • A category of kernels for equivariant Factorizations and its implications for Hodge theory
    Publications mathématiques de l'IHÉS, 2014
    Co-Authors: Matthew Ballard, David Favero, Ludmil Katzarkov
    Abstract:

    We provide a factorization model for the continuous internal Hom, in the homotopy category of k-linear dg-categories, between dg-categories of equivariant factorizations. This motivates a notion, similar to that of Kuznetsov, which we call the extended Hochschild cohomology algebra of the category of equivariant factorizations. In some cases of geometric interest, extended Hochschild cohomology contains Hochschild cohomology as a subalgebra and Hochschild homology as a homogeneous component. We use our factorization model for the internal Hom to calculate the extended Hochschild cohomology for equivariant factorizations on affine space. Combining the computation of extended Hochschild cohomology with the Hochschild-Kostant-Rosenberg isomorphism and a theorem of Orlov recovers and extends Griffiths’ classical description of the primitive cohomology of a smooth, complex projective hypersurface in terms of homogeneous pieces of the Jacobian algebra. In the process, the primitive cohomology is identified with the fixed subspace of the cohomological endomorphism associated to an interesting endofunctor of the bounded derived category of coherent sheaves on the hypersurface. We also demonstrate how to understand the whole Jacobian algebra as morphisms between kernels of endofunctors of the derived category. Finally, we present a bootstrap method for producing algebraic cycles in categories of equivariant factorizations. As proof of concept, we show how this reproves the Hodge conjecture for all self-products of a particular K3 surface closely related to the Fermat cubic fourfold.

  • Resolutions in Factorization categories
    arXiv: Category Theory, 2012
    Co-Authors: Matthew Ballard, Dragos Deliu, David Favero, M. Umut Isik, Ludmil Katzarkov
    Abstract:

    Generalizing Eisenbud's matrix Factorizations, we define Factorization categories. Following work of Positselski, we define their associated derived categories. We construct specific resolutions of Factorizations built from a choice of resolutions of their components. We use these resolutions to lift fully-faithfulness statements from derived categories of Abelian categories to derived categories of Factorizations and to construct a spectral sequence computing the morphism spaces in the derived categories of Factorizations from Ext-groups of their components in the underlying Abelian category.

Baiyu Chen - One of the best experts on this subject based on the ideXlab platform.

  • an algorithm for low rank matrix Factorization and its applications
    Neurocomputing, 2018
    Co-Authors: Baiyu Chen, Zi Yang, Zhouwang Yang
    Abstract:

    This paper proposes a fast and effective algorithm for low-rank matrix factorization. Low-rank matrix factorization has many applications, and numerous algorithms have been developed to solve it. However, many of these algorithms do not use the rank directly; instead, they minimize a nuclear norm via the Singular Value Decomposition (SVD), which is computationally expensive. In addition, these algorithms often fix the dimension of the factorized matrix, so one must first find an optimal dimension in order to obtain a solution. Unfortunately, the optimal dimension is unknown in many practical problems, such as matrix completion and recommender systems. It is therefore desirable to have a faster algorithm that can also estimate the optimal dimension. In this paper, we use the Hidden Matrix Factorized Augmented Lagrangian Method to solve low-rank matrix factorizations, and we add a mechanism that dynamically estimates and adjusts the optimal dimension while the algorithm runs. Moreover, in the era of Big Data, large sparse data sets are increasingly common, and for such highly sparse data our algorithm has the potential to outperform alternatives. We apply it to practical problems such as Low-Rank Representation (LRR) and constrained matrix completion. In numerical experiments, it performs well on both synthetic and real-world data.