Submatrix

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 6435 Experts worldwide ranked by the ideXlab platform

Oren Weimann - One of the best experts on this subject based on the ideXlab platform.

  • Submatrix maximum queries in Monge and partial Monge matrices are equivalent to predecessor search
    ACM Transactions on Algorithms, 2020
    Co-Authors: Pawel Gawrychowski, Shay Mozes, Oren Weimann
    Abstract:

    We present an optimal data structure for submatrix maximum queries in n × n Monge matrices. Our result is a two-way reduction showing that the problem is equivalent to the classical predecessor problem in a universe of polynomial size. This gives a data structure of O(n) space that answers submatrix maximum queries in O(log log n) time, as well as a matching lower bound, showing that O(log log n) query time is optimal for any data structure of size O(n polylog(n)). Our result settles the problem, improving on the O(log^2 n) query time in SODA'12, and on the O(log n) query time in ICALP'14. In addition, we show that partial Monge matrices can be handled in the same bounds as full Monge matrices. In both previous results, partial Monge matrices incurred additional inverse-Ackermann factors.
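
The Monge property these abstracts rely on is just a quadrangle inequality, and a brute-force query is a useful correctness baseline when experimenting with such structures. Below is a minimal sketch (illustrative only: the function names are ours, and it does not reproduce the paper's O(log log n) data structure):

```python
def is_monge(M):
    """Check the Monge condition: for all adjacent rows and columns,
    M[i][j] + M[i+1][j+1] <= M[i][j+1] + M[i+1][j].
    (For maximum queries the literature often uses inverse Monge
    matrices, where the inequality is reversed.)"""
    n, m = len(M), len(M[0])
    return all(M[i][j] + M[i + 1][j + 1] <= M[i][j + 1] + M[i + 1][j]
               for i in range(n - 1) for j in range(m - 1))

def submatrix_max(M, r1, r2, c1, c2):
    """Naive submatrix maximum over rows r1..r2 and columns c1..c2
    (inclusive).  Takes time proportional to the submatrix area; the
    structures in the paper answer the same query in O(log log n)
    after O(n)-space preprocessing."""
    return max(M[i][j] for i in range(r1, r2 + 1) for j in range(c1, c2 + 1))
```

For instance, M[i][j] = (i − j)² is Monge, since (i−j−1)² + (i−j+1)² − 2(i−j)² = 2 ≥ 0 for every i, j.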

  • Submatrix maximum queries in Monge matrices are equivalent to predecessor search
    International Colloquium on Automata Languages and Programming, 2015
    Co-Authors: Pawel Gawrychowski, Shay Mozes, Oren Weimann
    Abstract:

    We present an optimal data structure for submatrix maximum queries in \(n\times n\) Monge matrices. Our result is a two-way reduction showing that the problem is equivalent to the classical predecessor problem in a universe of polynomial size. This gives a data structure of \(O(n)\) space that answers submatrix maximum queries in \(O(\log \log n)\) time, as well as a matching lower bound, showing that \(O(\log \log n)\) query time is optimal for any data structure of size \(O(n\,\mathrm{polylog}(n))\). Our result settles the problem, improving on the \(O(\log^2 n)\) query time in SODA'12, and on the \(O(\log n)\) query time in ICALP'14.

  • Submatrix maximum queries in Monge matrices are equivalent to predecessor search
    arXiv: Data Structures and Algorithms, 2015
    Co-Authors: Pawel Gawrychowski, Shay Mozes, Oren Weimann
    Abstract:

    We present an optimal data structure for submatrix maximum queries in n × n Monge matrices. Our result is a two-way reduction showing that the problem is equivalent to the classical predecessor problem in a universe of polynomial size. This gives a data structure of O(n) space that answers submatrix maximum queries in O(log log n) time. It also gives a matching lower bound, showing that O(log log n) query time is optimal for any data structure of size O(n polylog(n)). Our result concludes a line of improvements that started in SODA'12 with O(log^2 n) query time and continued in ICALP'14 with O(log n) query time. Finally, we show that partial Monge matrices can be handled in the same bounds as full Monge matrices. In both previous results, partial Monge matrices incurred additional inverse-Ackermann factors.

  • Improved submatrix maximum queries in Monge matrices
    International Colloquium on Automata Languages and Programming, 2014
    Co-Authors: Pawel Gawrychowski, Shay Mozes, Oren Weimann
    Abstract:

    We present efficient data structures for submatrix maximum queries in Monge matrices and Monge partial matrices. For n × n Monge matrices, we give a data structure that requires O(n) space and answers submatrix maximum queries in O(log n) time. The best previous data structure [Kaplan et al., SODA'12] required O(n log n) space and O(log^2 n) query time. We also give an alternative data structure with constant query time and O(n^(1+ε)) construction time and space for any fixed ε < 1. For n × n partial Monge matrices we obtain a data structure with O(n) space and O(log n · α(n)) query time. The data structure of Kaplan et al. required O(n log n · α(n)) space and O(log^2 n) query time. Our improvements are enabled by a technique for exploiting the structure of the upper envelope of Monge matrices to efficiently report column maxima in skewed rectangular Monge matrices. We hope this technique will be useful in obtaining faster search algorithms in Monge partial matrices. In addition, we give a linear upper bound on the number of breakpoints in the upper envelope of a Monge partial matrix. This shows that the inverse Ackermann α(n) factor in the analysis of the data structure of Kaplan et al. is superfluous.
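
To see why the Monge structure helps with column maxima (the subroutine the envelope technique above accelerates), note that in an inverse Monge matrix the row index of each column's topmost maximum is nondecreasing from left to right. A classical divide-and-conquer (a simplification in the spirit of SMAWK-style algorithms, not the paper's method; the naming is ours) exploits this monotonicity:

```python
def column_maxima(M):
    """Row index of the (topmost) maximum in each column of an inverse
    Monge matrix.  Total monotonicity lets each recursive call restrict
    its row range to [r1, r2], giving O((n + m) log m) time overall;
    SMAWK and the paper's envelope technique are faster still."""
    n, m = len(M), len(M[0])
    arg = [0] * m

    def solve(c1, c2, r1, r2):
        if c1 > c2:
            return
        mid = (c1 + c2) // 2
        # Topmost maximum of the middle column within the allowed rows.
        best = max(range(r1, r2 + 1), key=lambda i: (M[i][mid], -i))
        arg[mid] = best
        solve(c1, mid - 1, r1, best)   # maxima to the left lie at or above
        solve(mid + 1, c2, best, r2)   # maxima to the right lie at or below

    solve(0, m - 1, 0, n - 1)
    return arg
```

For example, M[i][j] = −(i − j)² is inverse Monge, and its column maxima march down the diagonal until the rows run out.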

  • Improved submatrix maximum queries in Monge matrices
    arXiv: Data Structures and Algorithms, 2013
    Co-Authors: Pawel Gawrychowski, Shay Mozes, Oren Weimann
    Abstract:

    We present efficient data structures for submatrix maximum queries in Monge matrices and Monge partial matrices. For $n\times n$ Monge matrices, we give a data structure that requires $O(n)$ space and answers submatrix maximum queries in $O(\log n)$ time. The best previous data structure [Kaplan et al., SODA'12] required $O(n \log n)$ space and $O(\log^2 n)$ query time. We also give an alternative data structure with constant query time and $O(n^{1+\varepsilon})$ construction time and space for any fixed $\varepsilon<1$. For $n\times n$ {\em partial} Monge matrices we obtain a data structure with $O(n)$ space and $O(\log n \cdot \alpha(n))$ query time. The data structure of Kaplan et al. required $O(n \log n \cdot \alpha(n))$ space and $O(\log^2 n)$ query time. Our improvements are enabled by a technique for exploiting the structure of the upper envelope of Monge matrices to efficiently report column maxima in skewed rectangular Monge matrices. We hope this technique can be useful in obtaining faster search algorithms in Monge partial matrices. In addition, we give a linear upper bound on the number of breakpoints in the upper envelope of a Monge partial matrix. This shows that the inverse Ackermann $\alpha(n)$ term in the analysis of the data structure of Kaplan et al. is superfluous.

George N. Karystinos - One of the best experts on this subject based on the ideXlab platform.

  • The sparse principal component of a constant-rank matrix
    2016
    Co-Authors: Megasthenis Asteris, Dimitris S. Papailiopoulos, George N. Karystinos
    Abstract:

    The computation of the sparse principal component of a matrix is equivalent to the identification of its principal submatrix with the largest maximum eigenvalue. Finding this optimal submatrix is what renders the problem NP-hard. In this work, we prove that, if the matrix is positive semidefinite and its rank is constant, then its sparse principal component is polynomially computable. Our proof utilizes the auxiliary unit vector technique that has been recently developed to identify problems that are polynomially solvable. Moreover, we use this technique to design an algorithm which, for any sparsity value, computes the sparse principal component with complexity O(N^(D+1)), where N and D are the matrix size and rank, respectively. Our algorithm is fully parallelizable and memory efficient.
    Index Terms: eigenvalues and eigenfunctions, feature extraction, information processing, machine learning algorithms, principal component analysis, signal processing algorithms.
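
The equivalence in the first sentence can be made concrete: the value of the optimal sparsity-k principal component equals the largest eigenvalue over all k × k principal submatrices. A brute-force sketch (our own helpers, exponential in k unlike the paper's O(N^(D+1)) algorithm; power iteration is only adequate for small, well-behaved PSD examples):

```python
from itertools import combinations

def largest_eigenvalue(S, iters=500):
    """Largest eigenvalue of a small symmetric PSD matrix via power
    iteration (a proper eigensolver should be used in practice)."""
    n = len(S)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(S[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        if norm == 0.0:
            return 0.0
        v = [x / norm for x in w]
        lam = norm  # ||S v|| converges to the top eigenvalue for unit v
    return lam

def sparse_pc_value(S, k):
    """Optimal value of the sparsity-k principal component of PSD S:
    the maximum, over all k-element supports, of the largest eigenvalue
    of the corresponding principal submatrix."""
    best = 0.0
    for idx in combinations(range(len(S)), k):
        sub = [[S[i][j] for j in idx] for i in idx]
        best = max(best, largest_eigenvalue(sub))
    return best
```

Exhausting all supports is exactly what makes the general problem NP-hard; the paper's contribution is that constant rank D collapses this search to polynomial time.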

  • The sparse principal component of a constant-rank matrix
    IEEE Transactions on Information Theory, 2014
    Co-Authors: Megasthenis Asteris, Dimitris S. Papailiopoulos, George N. Karystinos
    Abstract:

    The computation of the sparse principal component of a matrix is equivalent to the identification of its principal submatrix with the largest maximum eigenvalue. Finding this optimal submatrix is what renders the problem NP-hard. In this paper, we prove that, if the matrix is positive semidefinite and its rank is constant, then its sparse principal component is polynomially computable. Our proof utilizes the auxiliary unit vector technique that has been recently developed to identify problems that are polynomially solvable. In addition, we use this technique to design an algorithm which, for any sparsity value, computes the sparse principal component with complexity O(N^(D+1)), where N and D are the matrix size and rank, respectively. Our algorithm is fully parallelizable and memory efficient.

  • The sparse principal component of a constant-rank matrix
    arXiv: Information Theory, 2013
    Co-Authors: Megasthenis Asteris, Dimitris S. Papailiopoulos, George N. Karystinos
    Abstract:

    The computation of the sparse principal component of a matrix is equivalent to the identification of its principal submatrix with the largest maximum eigenvalue. Finding this optimal submatrix is what renders the problem ${\mathcal{NP}}$-hard. In this work, we prove that, if the matrix is positive semidefinite and its rank is constant, then its sparse principal component is polynomially computable. Our proof utilizes the auxiliary unit vector technique that has been recently developed to identify problems that are polynomially solvable. Moreover, we use this technique to design an algorithm which, for any sparsity value, computes the sparse principal component with complexity ${\mathcal O}\left(N^{D+1}\right)$, where $N$ and $D$ are the matrix size and rank, respectively. Our algorithm is fully parallelizable and memory efficient.

Pawel Gawrychowski - One of the best experts on this subject based on the ideXlab platform.

  • Submatrix maximum queries in Monge and partial Monge matrices are equivalent to predecessor search
    ACM Transactions on Algorithms, 2020
    Co-Authors: Pawel Gawrychowski, Shay Mozes, Oren Weimann
    Abstract:

    We present an optimal data structure for submatrix maximum queries in n × n Monge matrices. Our result is a two-way reduction showing that the problem is equivalent to the classical predecessor problem in a universe of polynomial size. This gives a data structure of O(n) space that answers submatrix maximum queries in O(log log n) time, as well as a matching lower bound, showing that O(log log n) query time is optimal for any data structure of size O(n polylog(n)). Our result settles the problem, improving on the O(log^2 n) query time in SODA'12, and on the O(log n) query time in ICALP'14. In addition, we show that partial Monge matrices can be handled in the same bounds as full Monge matrices. In both previous results, partial Monge matrices incurred additional inverse-Ackermann factors.

  • Submatrix maximum queries in Monge matrices are equivalent to predecessor search
    International Colloquium on Automata Languages and Programming, 2015
    Co-Authors: Pawel Gawrychowski, Shay Mozes, Oren Weimann
    Abstract:

    We present an optimal data structure for submatrix maximum queries in \(n\times n\) Monge matrices. Our result is a two-way reduction showing that the problem is equivalent to the classical predecessor problem in a universe of polynomial size. This gives a data structure of \(O(n)\) space that answers submatrix maximum queries in \(O(\log \log n)\) time, as well as a matching lower bound, showing that \(O(\log \log n)\) query time is optimal for any data structure of size \(O(n\,\mathrm{polylog}(n))\). Our result settles the problem, improving on the \(O(\log^2 n)\) query time in SODA'12, and on the \(O(\log n)\) query time in ICALP'14.

  • Submatrix maximum queries in Monge matrices are equivalent to predecessor search
    arXiv: Data Structures and Algorithms, 2015
    Co-Authors: Pawel Gawrychowski, Shay Mozes, Oren Weimann
    Abstract:

    We present an optimal data structure for submatrix maximum queries in n × n Monge matrices. Our result is a two-way reduction showing that the problem is equivalent to the classical predecessor problem in a universe of polynomial size. This gives a data structure of O(n) space that answers submatrix maximum queries in O(log log n) time. It also gives a matching lower bound, showing that O(log log n) query time is optimal for any data structure of size O(n polylog(n)). Our result concludes a line of improvements that started in SODA'12 with O(log^2 n) query time and continued in ICALP'14 with O(log n) query time. Finally, we show that partial Monge matrices can be handled in the same bounds as full Monge matrices. In both previous results, partial Monge matrices incurred additional inverse-Ackermann factors.

  • Improved submatrix maximum queries in Monge matrices
    International Colloquium on Automata Languages and Programming, 2014
    Co-Authors: Pawel Gawrychowski, Shay Mozes, Oren Weimann
    Abstract:

    We present efficient data structures for submatrix maximum queries in Monge matrices and Monge partial matrices. For n × n Monge matrices, we give a data structure that requires O(n) space and answers submatrix maximum queries in O(log n) time. The best previous data structure [Kaplan et al., SODA'12] required O(n log n) space and O(log^2 n) query time. We also give an alternative data structure with constant query time and O(n^(1+ε)) construction time and space for any fixed ε < 1. For n × n partial Monge matrices we obtain a data structure with O(n) space and O(log n · α(n)) query time. The data structure of Kaplan et al. required O(n log n · α(n)) space and O(log^2 n) query time. Our improvements are enabled by a technique for exploiting the structure of the upper envelope of Monge matrices to efficiently report column maxima in skewed rectangular Monge matrices. We hope this technique will be useful in obtaining faster search algorithms in Monge partial matrices. In addition, we give a linear upper bound on the number of breakpoints in the upper envelope of a Monge partial matrix. This shows that the inverse Ackermann α(n) factor in the analysis of the data structure of Kaplan et al. is superfluous.

  • Improved submatrix maximum queries in Monge matrices
    arXiv: Data Structures and Algorithms, 2013
    Co-Authors: Pawel Gawrychowski, Shay Mozes, Oren Weimann
    Abstract:

    We present efficient data structures for submatrix maximum queries in Monge matrices and Monge partial matrices. For $n\times n$ Monge matrices, we give a data structure that requires $O(n)$ space and answers submatrix maximum queries in $O(\log n)$ time. The best previous data structure [Kaplan et al., SODA'12] required $O(n \log n)$ space and $O(\log^2 n)$ query time. We also give an alternative data structure with constant query time and $O(n^{1+\varepsilon})$ construction time and space for any fixed $\varepsilon<1$. For $n\times n$ {\em partial} Monge matrices we obtain a data structure with $O(n)$ space and $O(\log n \cdot \alpha(n))$ query time. The data structure of Kaplan et al. required $O(n \log n \cdot \alpha(n))$ space and $O(\log^2 n)$ query time. Our improvements are enabled by a technique for exploiting the structure of the upper envelope of Monge matrices to efficiently report column maxima in skewed rectangular Monge matrices. We hope this technique can be useful in obtaining faster search algorithms in Monge partial matrices. In addition, we give a linear upper bound on the number of breakpoints in the upper envelope of a Monge partial matrix. This shows that the inverse Ackermann $\alpha(n)$ term in the analysis of the data structure of Kaplan et al. is superfluous.

Zbigniew Kolakowski - One of the best experts on this subject based on the ideXlab platform.

  • Non-linear stability and load-carrying capacity of thin-walled laminated columns in aspects of coupled buckling and coupled stiffness submatrix
    Composite Structures, 2018
    Co-Authors: Andrzej Teter, Radoslaw J Mania, Zbigniew Kolakowski
    Abstract:

    To assess the load-carrying capacity of compressed thin-walled plate structures, the coupled buckling phenomenon of compressed columns was analyzed. The columns had open cross-sections and were made of a coupled laminate. The selected configuration of laminate layers enables different types of coupling between the membrane and bending states, which is described by the coupling stiffness submatrix B. The elements of the ABD stiffness matrix were determined using classical laminate plate theory (CLPT). The main aim of the work is to estimate the influence of selected elements of the submatrix B on the buckling, postbuckling behaviour and load-carrying capacity of the analyzed thin-walled structures. The problem was solved by applying Koiter's theory. Detailed computations were performed for a uniformly compressed lip channel and a top-hat channel. The dimensions of both columns were chosen so that a strong coupling effect among different buckling modes could be observed. Two laminate configurations were considered, differing in the values of the stiffness reduction coefficients.

  • Effect of selected elements of the coupling stiffness submatrix on the load-carrying capacity of hybrid columns under compression
    Composite Structures, 2017
    Co-Authors: Andrzej Teter, Radoslaw J Mania, Zbigniew Kolakowski
    Abstract:

    Abstract The problems of a multi-mode buckling approach which is based on Koiter’s theory of a hybrid column are presented in this paper. An interaction of global buckling modes with the local ones is discussed. There are many different local and global buckling modes. Their selected combinations are dangerous and cause a reduction in the load-carrying capacity. All walls of the hybrid column were plane and made of many layers. The outer layers were thermal barriers and made of a TiC ceramic layer or an AL-TiC-type FGM. The inner layers were composed of aluminium layers and a few carbon-epoxy laminate layers. The classical laminate theory is used to define the ABD matrix which described the relations between applied loads and the associated deformations. The layup configuration of the hybrid column is general, so the coupling Submatrix B is non-trivial. This Submatrix has a significant impact on the value of local buckling load, whereas its effect on the value of global buckling load can be neglected. The main topic discussed in this paper is whether and how individual elements of the B Submatrix can change the load-carrying capacity of a hybrid column. A detailed discussion is conducted for simple supported columns with opened cross-sections subjected to mechanical loads only. Thermal effects are neglected.

Megasthenis Asteris - One of the best experts on this subject based on the ideXlab platform.

  • The sparse principal component of a constant-rank matrix
    2016
    Co-Authors: Megasthenis Asteris, Dimitris S. Papailiopoulos, George N. Karystinos
    Abstract:

    The computation of the sparse principal component of a matrix is equivalent to the identification of its principal submatrix with the largest maximum eigenvalue. Finding this optimal submatrix is what renders the problem NP-hard. In this work, we prove that, if the matrix is positive semidefinite and its rank is constant, then its sparse principal component is polynomially computable. Our proof utilizes the auxiliary unit vector technique that has been recently developed to identify problems that are polynomially solvable. Moreover, we use this technique to design an algorithm which, for any sparsity value, computes the sparse principal component with complexity O(N^(D+1)), where N and D are the matrix size and rank, respectively. Our algorithm is fully parallelizable and memory efficient.
    Index Terms: eigenvalues and eigenfunctions, feature extraction, information processing, machine learning algorithms, principal component analysis, signal processing algorithms.

  • The sparse principal component of a constant-rank matrix
    IEEE Transactions on Information Theory, 2014
    Co-Authors: Megasthenis Asteris, Dimitris S. Papailiopoulos, George N. Karystinos
    Abstract:

    The computation of the sparse principal component of a matrix is equivalent to the identification of its principal submatrix with the largest maximum eigenvalue. Finding this optimal submatrix is what renders the problem NP-hard. In this paper, we prove that, if the matrix is positive semidefinite and its rank is constant, then its sparse principal component is polynomially computable. Our proof utilizes the auxiliary unit vector technique that has been recently developed to identify problems that are polynomially solvable. In addition, we use this technique to design an algorithm which, for any sparsity value, computes the sparse principal component with complexity O(N^(D+1)), where N and D are the matrix size and rank, respectively. Our algorithm is fully parallelizable and memory efficient.

  • The sparse principal component of a constant-rank matrix
    arXiv: Information Theory, 2013
    Co-Authors: Megasthenis Asteris, Dimitris S. Papailiopoulos, George N. Karystinos
    Abstract:

    The computation of the sparse principal component of a matrix is equivalent to the identification of its principal submatrix with the largest maximum eigenvalue. Finding this optimal submatrix is what renders the problem ${\mathcal{NP}}$-hard. In this work, we prove that, if the matrix is positive semidefinite and its rank is constant, then its sparse principal component is polynomially computable. Our proof utilizes the auxiliary unit vector technique that has been recently developed to identify problems that are polynomially solvable. Moreover, we use this technique to design an algorithm which, for any sparsity value, computes the sparse principal component with complexity ${\mathcal O}\left(N^{D+1}\right)$, where $N$ and $D$ are the matrix size and rank, respectively. Our algorithm is fully parallelizable and memory efficient.