Matrix Factorization

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 59592 Experts worldwide ranked by ideXlab platform

Erkki Oja - One of the best experts on this subject based on the ideXlab platform.

  • Quadratic nonnegative Matrix Factorization
    Pattern Recognition, 2012
    Co-Authors: Zhirong Yang, Erkki Oja
    Abstract:

    In Nonnegative Matrix Factorization (NMF), a nonnegative Matrix is approximated by a product of lower-rank factorizing matrices. Most NMF methods assume that each factorizing Matrix appears only once in the approximation, so the approximation is linear in the factorizing matrices. We present a new class of approximate NMF methods, called Quadratic Nonnegative Matrix Factorization (QNMF), in which some factorizing matrices occur twice in the approximation. We demonstrate QNMF solutions to four pattern recognition problems: graph partitioning, two-way clustering, estimation of hidden Markov chains, and graph matching. We derive multiplicative algorithms that monotonically decrease the approximation error under a variety of measures. We also present extensions in which one of the factorizing matrices is constrained to be orthogonal or stochastic. Empirical studies show that for certain application scenarios, QNMF is more advantageous than other existing nonnegative Matrix Factorization methods.
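A minimal sketch of the quadratic idea is symmetric NMF, where the same factor W appears twice in the approximation A ≈ W Wᵀ (useful, e.g., for graph partitioning on an affinity matrix). The damped multiplicative update below is a standard symmetric-NMF rule, not the paper's exact algorithm; function and parameter names are illustrative.

```python
import numpy as np

def symmetric_nmf(A, rank, n_iter=200, eps=1e-9, seed=0):
    """Approximate a nonnegative symmetric matrix A by W @ W.T,
    one instance of quadratic NMF: the factor W occurs twice.
    Uses the damped multiplicative update
        W <- W * (1/2 + 1/2 * (A W) / (W W^T W)),
    which keeps W nonnegative throughout (sketch only; the paper's
    monotonic updates differ in detail).
    """
    rng = np.random.default_rng(seed)
    W = rng.random((A.shape[0], rank)) + 0.1   # strictly positive init
    for _ in range(n_iter):
        numer = A @ W
        denom = W @ (W.T @ W) + eps            # eps avoids division by zero
        W = W * (0.5 + 0.5 * numer / denom)    # multiplicative, stays >= 0
    return W
```

Because the update is multiplicative with a positive factor, nonnegativity of W never has to be enforced by projection.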

Karthik Subbian - One of the best experts on this subject based on the ideXlab platform.

  • Dynamic Matrix Factorization: A State Space Approach
    International Conference on Acoustics Speech and Signal Processing, 2012
    Co-Authors: John Z Sun, Kush R Varshney, Karthik Subbian
    Abstract:

    Matrix Factorization from a small number of observed entries has recently garnered much attention as the key ingredient of successful recommendation systems. One unresolved problem in this area is how to adapt current methods to handle changing user preferences over time. Recent proposals to address this issue are heuristic in nature and do not fully exploit the time-dependent structure of the problem. As a principled and general temporal formulation, we propose a dynamical state space model of Matrix Factorization. Our proposal builds upon probabilistic Matrix Factorization, a Bayesian model with Gaussian priors. We utilize results in state tracking, i.e. the Kalman filter, to provide accurate recommendations in the presence of both process and measurement noise. We show how system parameters can be learned via expectation-maximization and provide comparisons to current published techniques.
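One way to picture the state-space formulation is to treat a user's latent factor as a hidden state that drifts over time and is observed only through noisy ratings. The single Kalman step below assumes a random-walk transition and a rating model r = v·u + noise; this parameterization and the names are illustrative, not the paper's exact model.

```python
import numpy as np

def kalman_mf_step(u, P, v, r, Q, R):
    """One Kalman filter step for a user's latent factor u (sketch).
    Assumed state model:  u_t = u_{t-1} + w,   w ~ N(0, Q)  (random walk)
    Observation model:    r_t = v_t . u_t + e, e ~ N(0, R)  (noisy rating)
    v is the (known) item factor, P the state covariance.
    """
    # Predict: preferences drift without deterministic dynamics.
    u_pred = u
    P_pred = P + Q
    # Update with the observed rating.
    innov = r - v @ u_pred          # innovation (prediction error)
    S = v @ P_pred @ v + R          # scalar innovation variance
    K = P_pred @ v / S              # Kalman gain (vector)
    u_new = u_pred + K * innov
    P_new = P_pred - np.outer(K, v) @ P_pred
    return u_new, P_new
```

Each step pulls the user factor toward the observed rating by an amount weighted by the gain K, balancing process noise Q against measurement noise R.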

  • Dynamic Matrix Factorization: A State Space Approach
    arXiv: Learning, 2011
    Co-Authors: John Z Sun, Kush R Varshney, Karthik Subbian
    Abstract:

    Matrix Factorization from a small number of observed entries has recently garnered much attention as the key ingredient of successful recommendation systems. One unresolved problem in this area is how to adapt current methods to handle changing user preferences over time. Recent proposals to address this issue are heuristic in nature and do not fully exploit the time-dependent structure of the problem. As a principled and general temporal formulation, we propose a dynamical state space model of Matrix Factorization. Our proposal builds upon probabilistic Matrix Factorization, a Bayesian model with Gaussian priors. We utilize results in state tracking, such as the Kalman filter, to provide accurate recommendations in the presence of both process and measurement noise. We show how system parameters can be learned via expectation-maximization and provide comparisons to current published techniques.

Xiaoyuan Jing - One of the best experts on this subject based on the ideXlab platform.

  • Supervised Discrete Matrix Factorization Hashing for Cross-Modal Retrieval
    International Conference on Cloud Computing, 2018
    Co-Authors: Yujian Feng, Jun Zhou, He Huang, Xiwei Dong, Xiaoyuan Jing
    Abstract:

    Cross-modal hashing has attracted extensive attention due to its small storage cost and favorable retrieval efficiency. Matrix Factorization-based methods are an important class of cross-modal hashing methods. Most existing Matrix Factorization hashing methods map heterogeneous cross-modal data into a low-dimensional common Hamming space and then adopt a relaxation-and-quantization strategy to obtain an approximate hash-code solution. However, this process introduces uncontrollable quantization error, which may degrade retrieval performance. In this paper, we propose a Supervised Discrete Matrix Factorization Hashing (SDMFH) approach for cross-modal retrieval, which learns a modality-specific latent semantic space for each modality based on Matrix Factorization. The semantic spaces are required to reconstruct the similarity affinity Matrix well, so that label consistency across modalities is fully considered. Furthermore, the binary hash codes are learned directly with a discrete cyclic coordinate descent algorithm, which effectively reduces the quantization error. Experiments on the widely used Wiki and NUS-WIDE datasets demonstrate that the proposed SDMFH approach outperforms state-of-the-art related methods.
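The key trick is learning binary codes directly, without the usual relax-then-round step. The sketch below shows the generic discrete cyclic coordinate descent (DCC) idea on a simple surrogate objective ||X − BC||² with codes B ∈ {−1,+1}: each sweep solves for one bit column in closed form. The objective and names are illustrative, not SDMFH's exact formulation.

```python
import numpy as np

def dcc_binary_codes(X, C, n_sweeps=5):
    """Learn binary codes B in {-1,+1}^(n x k) minimizing ||X - B C||_F^2
    with C (k x d) held fixed, via discrete cyclic coordinate descent:
    each bit column has a closed-form optimum, so no relaxation or
    rounding is needed (illustrative surrogate, not the paper's loss).
    """
    k = C.shape[0]
    B = np.sign(X @ C.T)            # warm start from the real-valued fit
    B[B == 0] = 1
    for _ in range(n_sweeps):
        for j in range(k):
            # Residual with bit column j's contribution removed.
            Xt = X - B @ C + np.outer(B[:, j], C[j])
            # Optimal j-th bit column: maximize b^T (Xt c_j) over {-1,+1}^n.
            b = np.sign(Xt @ C[j])
            b[b == 0] = 1
            B[:, j] = b
    return B
```

Since every column update is an exact minimizer over that column, the objective never increases, so quantization error is controlled rather than introduced after the fact.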

  • CCIS - Supervised Discrete Matrix Factorization Hashing For Cross-Modal Retrieval
    2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS), 2018
    Co-Authors: Yujian Feng, Jun Zhou, Xiwei Dong, Xiaoyuan Jing
    Abstract:

    Cross-modal hashing has attracted extensive attention due to its small storage cost and favorable retrieval efficiency. Matrix Factorization-based methods are an important class of cross-modal hashing methods. Most existing Matrix Factorization hashing methods map heterogeneous cross-modal data into a low-dimensional common Hamming space and then adopt a relaxation-and-quantization strategy to obtain an approximate hash-code solution. However, this process introduces uncontrollable quantization error, which may degrade retrieval performance. In this paper, we propose a Supervised Discrete Matrix Factorization Hashing (SDMFH) approach for cross-modal retrieval, which learns a modality-specific latent semantic space for each modality based on Matrix Factorization. The semantic spaces are required to reconstruct the similarity affinity Matrix well, so that label consistency across modalities is fully considered. Furthermore, the binary hash codes are learned directly with a discrete cyclic coordinate descent algorithm, which effectively reduces the quantization error. Experiments on the widely used Wiki and NUS-WIDE datasets demonstrate that the proposed SDMFH approach outperforms state-of-the-art related methods.

Qiang Yang - One of the best experts on this subject based on the ideXlab platform.

  • Privacy Threats Against Federated Matrix Factorization.
    arXiv: Cryptography and Security, 2020
    Co-Authors: Dashan Gao, Ben Tan, Vincent W. Zheng, Qiang Yang
    Abstract:

    Matrix Factorization has been very successful in practical recommendation applications and e-commerce. Due to data shortage and stringent regulations, it can be hard for a single company to collect sufficient data to build a performant recommender system. Federated learning offers a way to bridge data silos and build machine learning models without compromising privacy and security: participants sharing common users or items collaboratively build a model over the data of all participants. Some works have explored the application of federated learning to recommender systems and the privacy issues in collaborative filtering systems, but the privacy threats in federated Matrix Factorization have not been studied. In this paper, we categorize federated Matrix Factorization into three types based on the partition of the feature space and analyze privacy threats against each type of federated Matrix Factorization model. We also discuss privacy-preserving approaches. As far as we are aware, this is the first study of the privacy threats of Matrix Factorization in the federated learning framework.

  • Secure Federated Matrix Factorization
    IEEE Intelligent Systems, 2020
    Co-Authors: Di Chai, Leye Wang, Kai Chen, Qiang Yang
    Abstract:

    To protect user privacy and meet legal regulations, federated (machine) learning has attracted broad interest in recent years. The key principle of federated learning is to train a machine learning model without access to any user's raw private data. In this paper, we propose a secure Matrix Factorization framework under the federated learning setting, called FedMF. First, we design a user-level distributed Matrix Factorization framework in which the model can be learned while each user uploads only gradient information (instead of raw preference data) to the server. Although gradient information seems secure, we prove that it can still leak users' raw data. To this end, we enhance the distributed Matrix Factorization framework with homomorphic encryption. We implement a prototype of FedMF and test it on a real movie-rating dataset. The results verify the feasibility of FedMF. We also discuss the challenges of applying FedMF in practice as future research.
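The user-level setup can be sketched as follows: the user keeps its own factor and computes gradients with respect to the item factors locally, uploading only those gradients. The code below is a FedMF-style sketch under an assumed squared-error objective; function and parameter names are illustrative. Even here the privacy gap is visible: a gradient exists only for rated items, so the plaintext upload already reveals the user's rated-item set, which motivates the homomorphic-encryption layer.

```python
import numpy as np

def user_local_update(ratings, u, V, lr=0.01, reg=0.0):
    """One local step of user-level distributed MF (FedMF-style sketch).
    The user holds its factor u privately; for each rated item it
    computes the gradient w.r.t. that item's factor, and only these
    gradients would be uploaded to the server.
    Leak (in plaintext): the keys of item_grads reveal exactly which
    items the user rated -- hence encryption in the full framework.
    """
    item_grads = {}
    u_grad = np.zeros_like(u)
    for i, r in ratings.items():                     # {item_id: rating}
        err = r - u @ V[i]
        item_grads[i] = -2 * err * u + 2 * reg * V[i]  # d loss / d V[i]
        u_grad += -2 * err * V[i] + 2 * reg * u        # d loss / d u
    u_new = u - lr * u_grad                          # user factor stays local
    return u_new, item_grads
```

In the full protocol the server would aggregate encrypted item gradients from many users, so no single user's rated-item set is exposed.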

  • General Functional Matrix Factorization Using Gradient Boosting
    International Conference on Machine Learning, 2013
    Co-Authors: Tianqi Chen, Qiang Yang
    Abstract:

    Matrix Factorization is among the most successful techniques for collaborative filtering. One challenge of collaborative filtering is how to utilize available auxiliary information to improve prediction accuracy. In this paper, we study the problem of utilizing auxiliary information as features of Factorization and propose formalizing the problem as general functional Matrix Factorization, whose model includes conventional Matrix Factorization models as special cases. Moreover, we propose a gradient boosting based algorithm to efficiently solve the optimization problem. Finally, we give two concrete algorithms for efficient feature-function construction for two specific tasks. Our method can construct more suitable feature functions by searching an infinite functional space based on training data and thus can yield better prediction accuracy. The experimental results demonstrate that the proposed method outperforms the baseline methods on three real-world datasets.
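To ground the claim that the general functional model subsumes conventional Matrix Factorization, the sketch below shows that special case: every "feature function" degenerates to a per-user / per-item constant, fit by plain SGD. This is an illustrative baseline, not the paper's boosted algorithm.

```python
import numpy as np

def mf_sgd(ratings, n_users, n_items, rank=4, lr=0.02, reg=0.05,
           n_epochs=300, seed=0):
    """Conventional matrix factorization fit by SGD -- the special case
    of general functional MF where each feature function is a constant
    latent vector per user/item (sketch; the boosted variant instead
    grows feature functions greedily).
    """
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    for _ in range(n_epochs):
        for u, i, r in ratings:             # list of (user, item, rating)
            err = r - U[u] @ V[i]           # prediction error on one rating
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V
```

The boosted variant would replace the fixed latent vectors with a sum of learned feature functions, added one at a time to reduce the residual, in the spirit of gradient boosting.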