Orthogonal Tensor

The Experts below are selected from a list of 7356 Experts worldwide ranked by ideXlab platform

Qiuqi Ruan - One of the best experts on this subject based on the ideXlab platform.

  • 3D Facial Expression Recognition Using Orthogonal Tensor Marginal Fisher Analysis on Geometric Maps
    International Conference on Wavelet Analysis and Pattern Recognition, 2017
    Co-Authors: Qiuqi Ruan
    Abstract:

    In this paper, a novel 3D facial expression recognition algorithm based on Orthogonal Tensor Marginal Fisher Analysis (OTMFA) of geometric maps is proposed. To describe expressions in detail while keeping computation simple, five kinds of 2D geometric maps are extracted: a Depth Map (DM), three Normal Maps (NOMs) and a Shape Index Map (SIM). These maps are treated as second-order tensors and used to learn low-dimensional tensor subspaces with OTMFA, which preserves the manifold structure of expressions while reducing dimensionality. Features are then extracted by projecting the geometric maps into these tensor subspaces. Finally, a multi-class SVM classifier is trained and tested for each map, and their contributions are combined by the sum rule for the final decision. The effectiveness of the proposed method is verified on the BU-3DFE database, where it achieves average accuracies of 88.32% and 84.27%, comparable to state-of-the-art methods.
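
    The core operation above — projecting a second-order tensor (a geometric map) into a low-dimensional tensor subspace — can be sketched as a bilinear projection. This is a minimal illustration, not the authors' implementation: the orthonormal factors U and V stand in for the learned OTMFA projections and are random here.

```python
import numpy as np

# Hypothetical sketch: project a 2nd-order tensor (e.g. a depth map)
# into a low-dimensional tensor subspace via mode-wise orthonormal
# matrices.  U and V are random stand-ins for the learned projections.
rng = np.random.default_rng(0)

h, w = 32, 32          # size of a geometric map
d1, d2 = 8, 8          # target subspace dimensions per mode

X = rng.standard_normal((h, w))                     # one geometric map
U, _ = np.linalg.qr(rng.standard_normal((h, d1)))   # orthonormal columns
V, _ = np.linalg.qr(rng.standard_normal((w, d2)))

# Bilinear (mode-1 and mode-2) projection: Y = U^T X V
Y = U.T @ X @ V
feature = Y.ravel()    # low-dimensional feature, e.g. fed to an SVM

print(X.shape, "->", Y.shape)   # (32, 32) -> (8, 8)
```

    The dimensionality drops from h*w to d1*d2 while the two-mode structure of the map is respected, which is the point of treating the maps as tensors rather than flattened vectors.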

  • Orthogonal Tensor Rank-One Differential Graph Preserving Projections with Its Application to Facial Expression Recognition
    Neurocomputing, 2012
    Co-Authors: Shuai Liu, Qiuqi Ruan, Yi Jin
    Abstract:

    In this paper, a new tensor dimensionality reduction algorithm is proposed based on a graph-preserving criterion and tensor rank-one projections. A novel, effective and convergent orthogonalization process is derived from a differential-form objective function. The resulting set of orthogonal rank-one basis tensors preserves the intra-class local manifolds while enhancing the inter-class margins. The algorithm is evaluated by applying it to the recognition of the basic facial expressions.
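
    A rank-one tensor projection maps a matrix to a scalar by taking its inner product with a rank-one basis tensor u_k v_k^T. The sketch below illustrates only that feature-extraction step, under the assumption of orthonormal factor sets; in the paper the bases are learned from the graph-preserving objective, whereas here they are random.

```python
import numpy as np

# Hypothetical sketch of rank-one tensor projections: feature k is the
# inner product <X, u_k v_k^T> = u_k^T X v_k.  The u_k (resp. v_k) are
# made mutually orthonormal with QR, mimicking the orthogonality
# constraint; they are random stand-ins, not learned bases.
rng = np.random.default_rng(1)

h, w, K = 24, 24, 10
X = rng.standard_normal((h, w))            # input matrix (2nd-order tensor)

U, _ = np.linalg.qr(rng.standard_normal((h, K)))   # orthonormal u_1..u_K
V, _ = np.linalg.qr(rng.standard_normal((w, K)))   # orthonormal v_1..v_K

features = np.array([U[:, k] @ X @ V[:, k] for k in range(K)])
print(features.shape)   # (10,)
```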

  • Orthogonal Tensor Neighborhood Preserving Embedding for Facial Expression Recognition
    Pattern Recognition, 2011
    Co-Authors: Shuai Liu, Qiuqi Ruan
    Abstract:

    In this paper a generalized tensor subspace model is distilled from existing tensor dimensionality reduction algorithms. Within this model, we investigate the orthogonality of the bases of the high-order tensor subspace and propose the Orthogonal Tensor Neighborhood Preserving Embedding (OTNPE) algorithm. We evaluate the algorithm on facial expression recognition, using both the raw gray-level pixels as 2nd-order tensors and Gabor features of facial expression images encoded as 3rd-order tensors. The experiments show the excellent performance of the algorithm for dimensionality reduction of tensor-formed data, especially when the data lie on a smooth and compact manifold embedded in a high-dimensional tensor space.

  • An Orthogonal Tensor Rank-One Discriminative Graph Embedding Method for Facial Expression Recognition
    4th IET International Conference on Wireless Mobile & Multimedia Networks (ICWMMN 2011), 2011
    Co-Authors: Shuai Liu, Qiuqi Ruan, Zhan Wang
    Abstract:

    In this paper a new tensor dimensionality reduction algorithm is proposed based on graph embedding and orthogonal tensor rank-one decomposition. In the algorithm, both the intra-class local manifold structure and the inter-class margins are enhanced by projecting the original tensors onto a group of orthogonal rank-one tensors, and a novel and effective orthogonalization process is given. In the experiments the algorithm is applied to facial expression recognition and achieves excellent results.

  • Orthogonal Tensor Marginal Fisher Analysis with Application to Facial Expression Recognition
    IEEE 10th International Conference on Signal Processing Proceedings, 2010
    Co-Authors: Shuai Liu, Qiuqi Ruan
    Abstract:

    A new tensor dimensionality reduction algorithm, Orthogonal Tensor Marginal Fisher Analysis (OTMFA), is proposed in this paper. It finds a set of orthonormal transformation matrices based on Tensor Marginal Fisher Analysis (TMFA). Because the orthonormal transformation matrices do not distort the metric of the original tensor space, the manifold structure of the input tensors can be better preserved. The experimental results show the effectiveness of the proposed algorithm for facial expression recognition.

Shuai Liu - One of the best experts on this subject based on the ideXlab platform.

Andrzej Cichocki - One of the best experts on this subject based on the ideXlab platform.

  • Non-Orthogonal Tensor Diagonalization
    Signal Processing, 2017
    Co-Authors: Petr Tichavsky, Anh Huy Phan, Andrzej Cichocki
    Abstract:

    Tensor diagonalization means transforming a given tensor to an exactly or nearly diagonal form by multiplying it by non-orthogonal invertible matrices along selected dimensions. It is linked to approximate joint diagonalization (AJD) of a set of matrices. In this paper, we derive (1) a new algorithm for symmetric AJD, called two-sided symmetric diagonalization of an order-three tensor; (2) a similar algorithm for non-symmetric AJD, also called two-sided diagonalization of an order-three tensor; and (3) an algorithm for three-sided diagonalization of order-three or order-four tensors. The latter two algorithms may serve for canonical polyadic (CP) tensor decomposition, and in certain scenarios they can outperform traditional CP decomposition methods. Finally, we propose (4) similar algorithms for tensor block diagonalization, which is related to tensor block-term decomposition. The proposed algorithms can either outperform existing block-term decomposition algorithms or produce good initial points for them.
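
    The two-sided setting can be illustrated with a toy construction. The sketch below is not the authors' algorithm: it builds an order-3 tensor whose frontal slices share a common congruence factor A, T_k = A D_k A^T, and simply verifies that multiplying by the non-orthogonal matrix W = A^{-1} along the first two modes diagonalizes every slice. A real AJD method would have to estimate W from T alone.

```python
import numpy as np

# Illustration of two-sided symmetric tensor diagonalization (not the
# paper's algorithm): the diagonalizer is known by construction.
rng = np.random.default_rng(2)
n, K = 5, 4

A = rng.standard_normal((n, n))                 # non-orthogonal mixing matrix
D = [np.diag(rng.standard_normal(n)) for _ in range(K)]
T = np.stack([A @ Dk @ A.T for Dk in D], axis=2)   # frontal slices A D_k A^T

W = np.linalg.inv(A)                            # the (here known) diagonalizer
diag_slices = np.stack([W @ T[:, :, k] @ W.T for k in range(K)], axis=2)

# off-diagonal residual of every transformed slice
off = diag_slices - np.stack(
    [np.diag(np.diag(diag_slices[:, :, k])) for k in range(K)], axis=2)
print(np.abs(off).max())   # numerically zero: all slices are diagonal
```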

  • Non-Orthogonal Tensor Diagonalization: A Tool for Block Tensor Decompositions
    arXiv: Numerical Analysis, 2014
    Co-Authors: Petr Tichavsky, Anh Huy Phan, Andrzej Cichocki
    Abstract:

    This paper presents algorithms for non-orthogonal tensor diagonalization, which can be used for block tensor decomposition. The diagonalization can be performed along two or more tensor dimensions simultaneously. The method seeks one diagonalizing matrix of determinant 1 for each mode such that together they convert a given tensor into one meeting a block-revealing condition. A perturbation analysis of the algorithm is provided, showing how small changes in the tensor translate into small changes in the diagonalization outcome. The algorithm has a low computational complexity, comparable to that of the fastest available canonical polyadic decomposition algorithms; for example, diagonalizing an order-3 tensor of size N x N x N costs O(N^4) per iteration. If the tensor has a different shape, a Tucker compression should be applied prior to the diagonalization. The algorithm can be applied to cumulant-based independent subspace decomposition, or to tensor deconvolution and feature extraction under the convolutive model.

  • Non-Orthogonal Tensor Diagonalization
    arXiv: Numerical Analysis, 2014
    Co-Authors: Petr Tichavsky, Anh Huy Phan, Andrzej Cichocki
    Abstract:

    Tensor diagonalization means transforming a given tensor to an exactly or nearly diagonal form by multiplying it by non-orthogonal invertible matrices along selected dimensions. It is a generalization of approximate joint diagonalization (AJD) of a set of matrices. In particular, we derive (1) a new algorithm for symmetric AJD, called two-sided symmetric diagonalization of an order-three tensor; (2) a similar algorithm for non-symmetric AJD, also called general two-sided diagonalization of an order-3 tensor; and (3) an algorithm for three-sided diagonalization of order-3 or order-4 tensors. The latter two algorithms may serve for canonical polyadic (CP) tensor decomposition, and they can outperform other CP decomposition methods in computational speed, provided the tensor rank does not exceed the tensor multilinear rank. Finally, we propose (4) similar algorithms for tensor block diagonalization, which is related to the tensor block-term decomposition.

  • Multilinear Subspace Regression: An Orthogonal Tensor Decomposition Approach
    Neural Information Processing Systems, 2011
    Co-Authors: Qibin Zhao, Cesar F Caiafa, Danilo P Mandic, Liqing Zhang, Tonio Ball, Andreas Schulzebonhage, Andrzej Cichocki
    Abstract:

    A multilinear subspace regression model based on so-called latent-variable decomposition is introduced. Unlike standard regression methods, which typically employ matrix (2D) data representations followed by vector subspace transformations, the proposed approach uses tensor subspace transformations to model latent variables common to both the independent and dependent data. The approach maximizes the correlation between the derived latent variables and is shown to be suitable for predicting multidimensional dependent data from multidimensional independent data. To estimate the latent variables, we introduce an algorithm based on the Multilinear Singular Value Decomposition (MSVD) of a specially defined cross-covariance tensor. We then show that this framework also unifies the existing Partial Least Squares (PLS) and N-way PLS regression algorithms. Simulations on benchmark synthetic data confirm the advantages of the proposed approach in predictive ability and robustness, especially for small sample sizes. Its potential is further illustrated on a real-world task: decoding human intracranial electrocorticogram (ECoG) from a simultaneously recorded scalp electroencephalogram (EEG).
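
    The latent-variable idea can be sketched in the reduced 2-D (matrix) case: the paper applies a multilinear SVD to a cross-covariance tensor, whereas the toy below takes the ordinary SVD of the cross-covariance matrix between independent data X and dependent data Y and uses the leading singular vectors as latent directions, as in PLS. All names and sizes are illustrative.

```python
import numpy as np

# Hedged 2-D analogue of cross-covariance-based latent variable
# extraction (the paper's method works on a cross-covariance *tensor*).
rng = np.random.default_rng(3)
N, p, q = 100, 12, 6

X = rng.standard_normal((N, p))                  # independent data
B = rng.standard_normal((p, q))
Y = X @ B + 0.1 * rng.standard_normal((N, q))    # dependent data

C = X.T @ Y / N                  # cross-covariance matrix
U, s, Vt = np.linalg.svd(C)
w, c = U[:, 0], Vt[0]            # leading latent directions

t, u = X @ w, Y @ c              # paired latent scores
corr = np.corrcoef(t, u)[0, 1]   # should be large by construction
print(round(abs(corr), 2))
```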

Yan Huang - One of the best experts on this subject based on the ideXlab platform.

  • Dynamic Texture Recognition via Orthogonal Tensor Dictionary Learning
    International Conference on Computer Vision, 2015
    Co-Authors: Yuhui Quan, Yan Huang
    Abstract:

    Dynamic textures (DTs) are video sequences with stationary properties that exhibit repetitive patterns over space and time. This paper investigates a sparse-coding-based approach to characterizing local DT patterns for recognition. Owing to the high dimensionality of DT sequences, existing dictionary learning algorithms are unsuitable for this purpose because of their high computational cost and poor scalability. To overcome these obstacles, we propose a structured tensor dictionary learning method for sparse coding, which learns a dictionary structured by orthogonality and separability. The proposed method is very fast and scales to high-dimensional data much better than existing methods. Based on it, a DT descriptor is developed that offers better adaptivity, discriminability and scalability than existing approaches, as demonstrated by experiments on multiple datasets.
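
    A short sketch of why an orthogonal, separable dictionary makes sparse coding cheap: with orthogonal factors A and B the dictionary is the Kronecker product of B and A, and the sparse code of a patch X is just the transform coefficients A^T X B followed by hard thresholding, with no iterative pursuit. A and B below are random orthogonal matrices, not the learned dictionary of the paper.

```python
import numpy as np

# Hypothetical sketch: sparse coding under an orthogonal separable
# dictionary reduces to one analysis transform plus thresholding.
rng = np.random.default_rng(4)
m = 8

A, _ = np.linalg.qr(rng.standard_normal((m, m)))   # orthogonal factor
B, _ = np.linalg.qr(rng.standard_normal((m, m)))   # orthogonal factor
X = rng.standard_normal((m, m))                    # one space-time patch

coef = A.T @ X @ B                                 # analysis transform
thresh = 1.0
code = np.where(np.abs(coef) > thresh, coef, 0.0)  # hard thresholding

X_hat = A @ code @ B.T                             # synthesis from sparse code
print(np.count_nonzero(code), "nonzeros")
```

    Because the transform is orthogonal, the reconstruction error equals the energy of the discarded coefficients, which makes the speed/fidelity trade-off explicit.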

Michael Hayes - One of the best experts on this subject based on the ideXlab platform.

  • Unsheared Triads and Extended Polar Decompositions of the Deformation Gradient
    International Journal of Non-linear Mechanics, 2001
    Co-Authors: Philippe Boulanger, Michael Hayes
    Abstract:

    In this paper, the concept of unsheared triads of material line elements at a point X is introduced. We find that there is an infinity of unsheared triads. More precisely, it is shown that, in general, for any given unsheared pair at X, a unique third material line element at X may be found such that the three material line elements form an unsheared triad. Special cases are analyzed in detail. A link between unsheared triads and new decompositions of the deformation gradient is exhibited. These decompositions generalize the classical polar decomposition F = RU = VR of the deformation gradient F, in which R is a proper orthogonal tensor and U, V are positive-definite and symmetric. Associated with any unsheared (oblique) triad is a new decomposition F = QG = HQ, in which Q is a proper orthogonal tensor, while G and H are no longer symmetric but have three positive eigenvalues and three linearly independent right eigenvectors. Because there is an infinity of unsheared triads, there is an infinity of such decompositions; we call them "extended polar decompositions". Several examples of unsheared triads and extended polar decompositions are presented.
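
    The classical polar decomposition F = RU = VR that these "extended" decompositions generalize can be computed from an SVD. The sketch below does exactly that for a random deformation gradient with positive determinant; it illustrates the baseline decomposition only, not the extended ones of the paper.

```python
import numpy as np

# Classical polar decomposition F = R U = V R via the SVD:
# R is proper orthogonal; U, V are symmetric positive-definite.
rng = np.random.default_rng(5)

F = rng.standard_normal((3, 3))
if np.linalg.det(F) < 0:        # ensure det F > 0, as for a deformation gradient
    F[:, 0] = -F[:, 0]

W, s, Zt = np.linalg.svd(F)     # F = W diag(s) Zt
R = W @ Zt                      # rotation (det R = +1 since det F > 0)
U = Zt.T @ np.diag(s) @ Zt      # right stretch tensor
V = W @ np.diag(s) @ W.T        # left stretch tensor

print(np.allclose(F, R @ U), np.allclose(F, V @ R))   # True True
```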