Independent Factor

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 440,085 Experts worldwide, ranked by the ideXlab platform

Patrice Aknin - One of the best experts on this subject based on the ideXlab platform.

  • Partially supervised Independent Factor Analysis using soft labels elicited from multiple experts: application to railway track circuit diagnosis
    Soft Computing, 2012
    Co-Authors: Zohra L. Cherfi, Etienne Côme, Latifa Oukhellou, Thierry Denœux, Patrice Aknin
    Abstract:

    Using a statistical model in a diagnosis task generally requires a large amount of labeled data. When ground truth information is not available, or is too expensive or difficult to collect, one has to rely on expert knowledge. In this paper, it is proposed to use partial information from domain experts expressed as belief functions. Expert opinions are combined in this framework and used with measurement data to estimate the parameters of a statistical model using a variant of the EM algorithm. The particular application investigated here concerns the diagnosis of railway track circuits. A noiseless Independent Factor Analysis model is postulated, assuming the observed variables extracted from railway track inspection signals to be generated by a linear mixture of Independent latent variables linked to the system component states. Usually, learning with this statistical model is performed in an unsupervised way using unlabeled examples only. Here, it is proposed to handle this learning process in a soft-supervised way, using imperfect information on the system component states. Fusing partially reliable information about cluster membership is shown to significantly improve classification results.
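
The fusion step described above, combining the model's posterior class probabilities with partially reliable expert labels inside the E-step of an EM variant, can be sketched for a plain one-dimensional Gaussian mixture. This is a minimal illustration under assumed names (`e_step_soft`, a per-sample "plausibility" matrix), not the paper's actual algorithm:

```python
import numpy as np

def e_step_soft(X, means, sigmas, weights, plaus):
    """E-step of a 1-D Gaussian-mixture EM in which each sample carries
    expert 'plausibilities' over the K classes (an all-ones row means
    unlabeled). Responsibilities are the product of the model posterior
    and the expert plausibility, renormalized per sample."""
    K = len(weights)
    # component likelihoods, shape (n, K)
    lik = np.stack([
        weights[k] * np.exp(-0.5 * ((X - means[k]) / sigmas[k]) ** 2)
        / (sigmas[k] * np.sqrt(2 * np.pi))
        for k in range(K)
    ], axis=1)
    resp = lik * plaus                        # fuse data evidence with expert opinion
    resp /= resp.sum(axis=1, keepdims=True)   # renormalize per sample
    return resp
```

An all-ones plausibility row leaves a sample effectively unlabeled, so the fully unsupervised E-step is recovered as a special case.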

  • Fault diagnosis of a railway device using semi-supervised Independent Factor analysis with mixing constraints
    Pattern Analysis and Applications, 2012
    Co-Authors: Etienne Côme, Latifa Oukhellou, Thierry Denoeux, Patrice Aknin
    Abstract:

    Independent Factor analysis (IFA) defines a generative model for observed data that are assumed to be linear mixtures of some unknown non-Gaussian, mutually Independent latent variables (also called sources or Independent components). The probability density function of each individual latent variable is modelled by a mixture of Gaussians (MOG). Learning in the context of this model is usually performed within an unsupervised framework in which only unlabelled samples are used. Both the mixing matrix and the parameters of latent variable densities are learned from observed data. This paper investigates the possibility of estimating an IFA model in a noiseless setting when two kinds of prior information are incorporated, namely constraints on the mixing process and partial knowledge on the cluster membership of some training samples. Semi-supervised or partially supervised learning frameworks can thus be handled. The investigation of these two kinds of prior information was motivated by a real-world application concerning the fault diagnosis of railway track circuits. Results from both this application and simulated data are provided to demonstrate the capacity of our approach to enhance estimation accuracy and remove the indeterminacy commonly encountered in unsupervised IFA, such as source permutations.
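
The generative model this abstract describes, independent sources with mixture-of-Gaussians densities observed through a noiseless linear mixing matrix, can be sampled as follows. The function name and parameter layout are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ifa(n, A, mog_params):
    """Draw n samples from a noiseless IFA generative model: each
    latent source s_j follows its own 1-D mixture of Gaussians, and
    the observations are the linear mixture x = A @ s.
    mog_params[j] = (weights, means, stds) for source j."""
    d = len(mog_params)
    S = np.empty((n, d))
    for j, (w, mu, sd) in enumerate(mog_params):
        comp = rng.choice(len(w), size=n, p=w)    # pick a mixture component
        S[:, j] = rng.normal(np.asarray(mu)[comp], np.asarray(sd)[comp])
    return S @ np.asarray(A).T, S                 # observations, latent sources
```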

  • Fault diagnosis of a railway device using semi-supervised Independent Factor analysis with mixing constraints
    Pattern Analysis and Applications, 2011
    Co-Authors: Etienne Côme, Latifa Oukhellou, Thierry Denœux, Patrice Aknin
    Abstract:

    Independent Factor analysis (IFA) defines a generative model for observed data that are assumed to be linear mixtures of some unknown non-Gaussian, mutually Independent latent variables (also called sources or Independent components). The probability density function of each individual latent variable is modelled by a mixture of Gaussians. Learning in the context of this model is usually performed within an unsupervised framework in which only unlabelled samples are used. Both the mixing matrix and the parameters of latent variable densities are learned from the observed data. This paper investigates the possibility of estimating an IFA model in a noiseless setting when two kinds of prior information are incorporated, namely constraints on the mixing process and partial knowledge on the cluster membership of some training samples. Semi-supervised or partially supervised learning frameworks can thus be handled. The investigation of these two kinds of prior information was motivated by a real-world application concerning the fault diagnosis of railway track circuits. Results from both this application and simulated data are provided to demonstrate the capacity of our approach to enhance estimation accuracy and remove the indeterminacy commonly encountered in unsupervised IFA, such as source permutations.

  • ICMLA (2) - Semi-supervised Feature Extraction Using Independent Factor Analysis
    2011 10th International Conference on Machine Learning and Applications and Workshops, 2011
    Co-Authors: Latifa Oukhellou, Etienne Côme, Patrice Aknin, Thierry Denoeux
    Abstract:

    Efficient dimensionality reduction can involve generative latent variable models such as probabilistic principal component analysis (PPCA) or Independent component analysis (ICA). Such models aim to extract a reduced set of variables (latent variables) from the original ones. In most cases, the learning of these models occurs within an unsupervised framework where only unlabeled samples are used. In this paper, we investigate the possibility of estimating an Independent Factor analysis model (IFA), and thus projecting original data onto a lower dimensional space, when prior knowledge on the cluster membership of some training samples is incorporated. We propose to allow this model to learn within a semi-supervised framework in which the likelihood of both labeled and unlabeled samples is maximized by a generalized expectation-maximization (GEM) algorithm. Experimental results with real data sets are provided to demonstrate the ability of our approach to find a low-dimensional manifold with good explanatory power.
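
A minimal sketch of the objective a semi-supervised (G)EM procedure would maximize, shown for a plain one-dimensional Gaussian mixture rather than the full IFA model: labeled samples contribute their joint log-density, unlabeled ones their marginal. Function names are illustrative:

```python
import numpy as np

def gauss_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def semi_sup_loglik(X, y, weights, means, sigmas):
    """Semi-supervised log-likelihood: labeled samples (y[i] >= 0)
    contribute the joint log p(x_i, y_i); unlabeled samples
    (y[i] == -1) contribute the marginal log p(x_i)."""
    K = len(weights)
    joint = np.stack([weights[k] * gauss_pdf(X, means[k], sigmas[k])
                      for k in range(K)], axis=1)   # p(x_i, z_i = k)
    ll = 0.0
    for i in range(len(X)):
        ll += np.log(joint[i, y[i]] if y[i] >= 0 else joint[i].sum())
    return ll
```

A GEM iteration only needs to increase (not fully maximize) this quantity at each M-step.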

  • Semi-supervised feature extraction using Independent Factor analysis
    2010
    Co-Authors: Latifa Oukhellou, Etienne Côme, Patrice Aknin, Thierry Denoeux
    Abstract:

    Dimensionality reduction can be efficiently achieved by generative latent variable models such as probabilistic principal component analysis (PPCA) or Independent component analysis (ICA), aiming to extract a reduced set of variables (latent variables) from the original ones. In most cases, the learning of these methods is achieved within the unsupervised framework where only unlabeled samples are used. In this paper we investigate the possibility of estimating an Independent Factor analysis (IFA) model, and thus projecting original data onto a lower dimensional space, when prior knowledge on the cluster membership of some training samples is incorporated. In the basic IFA model, latent variables are only recovered from their linear observed mixtures (original features). Both the mapping matrix (assumed to be linear) and the latent variable densities (assumed to be mutually Independent and generated according to mixtures of Gaussians) are learned from observed data. We propose to learn this model within a semi-supervised framework where the likelihood of both labeled and unlabeled samples is maximized by a generalized expectation-maximization (GEM) algorithm. Experimental results on real data sets are provided to demonstrate the ability of our approach to find a low-dimensional manifold with good explanatory power.

Etienne Côme - One of the best experts on this subject based on the ideXlab platform.

  • Partially supervised Independent Factor Analysis using soft labels elicited from multiple experts: application to railway track circuit diagnosis
    Soft Computing, 2012
    Co-Authors: Zohra L. Cherfi, Etienne Côme, Latifa Oukhellou, Thierry Denœux, Patrice Aknin
    Abstract:

    Using a statistical model in a diagnosis task generally requires a large amount of labeled data. When ground truth information is not available, or is too expensive or difficult to collect, one has to rely on expert knowledge. In this paper, it is proposed to use partial information from domain experts expressed as belief functions. Expert opinions are combined in this framework and used with measurement data to estimate the parameters of a statistical model using a variant of the EM algorithm. The particular application investigated here concerns the diagnosis of railway track circuits. A noiseless Independent Factor Analysis model is postulated, assuming the observed variables extracted from railway track inspection signals to be generated by a linear mixture of Independent latent variables linked to the system component states. Usually, learning with this statistical model is performed in an unsupervised way using unlabeled examples only. Here, it is proposed to handle this learning process in a soft-supervised way, using imperfect information on the system component states. Fusing partially reliable information about cluster membership is shown to significantly improve classification results.

  • Fault diagnosis of a railway device using semi-supervised Independent Factor analysis with mixing constraints
    Pattern Analysis and Applications, 2012
    Co-Authors: Etienne Côme, Latifa Oukhellou, Thierry Denoeux, Patrice Aknin
    Abstract:

    Independent Factor analysis (IFA) defines a generative model for observed data that are assumed to be linear mixtures of some unknown non-Gaussian, mutually Independent latent variables (also called sources or Independent components). The probability density function of each individual latent variable is modelled by a mixture of Gaussians (MOG). Learning in the context of this model is usually performed within an unsupervised framework in which only unlabelled samples are used. Both the mixing matrix and the parameters of latent variable densities are learned from observed data. This paper investigates the possibility of estimating an IFA model in a noiseless setting when two kinds of prior information are incorporated, namely constraints on the mixing process and partial knowledge on the cluster membership of some training samples. Semi-supervised or partially supervised learning frameworks can thus be handled. The investigation of these two kinds of prior information was motivated by a real-world application concerning the fault diagnosis of railway track circuits. Results from both this application and simulated data are provided to demonstrate the capacity of our approach to enhance estimation accuracy and remove the indeterminacy commonly encountered in unsupervised IFA, such as source permutations.

  • Fault diagnosis of a railway device using semi-supervised Independent Factor analysis with mixing constraints
    Pattern Analysis and Applications, 2011
    Co-Authors: Etienne Côme, Latifa Oukhellou, Thierry Denœux, Patrice Aknin
    Abstract:

    Independent Factor analysis (IFA) defines a generative model for observed data that are assumed to be linear mixtures of some unknown non-Gaussian, mutually Independent latent variables (also called sources or Independent components). The probability density function of each individual latent variable is modelled by a mixture of Gaussians. Learning in the context of this model is usually performed within an unsupervised framework in which only unlabelled samples are used. Both the mixing matrix and the parameters of latent variable densities are learned from the observed data. This paper investigates the possibility of estimating an IFA model in a noiseless setting when two kinds of prior information are incorporated, namely constraints on the mixing process and partial knowledge on the cluster membership of some training samples. Semi-supervised or partially supervised learning frameworks can thus be handled. The investigation of these two kinds of prior information was motivated by a real-world application concerning the fault diagnosis of railway track circuits. Results from both this application and simulated data are provided to demonstrate the capacity of our approach to enhance estimation accuracy and remove the indeterminacy commonly encountered in unsupervised IFA, such as source permutations.

  • ICMLA (2) - Semi-supervised Feature Extraction Using Independent Factor Analysis
    2011 10th International Conference on Machine Learning and Applications and Workshops, 2011
    Co-Authors: Latifa Oukhellou, Etienne Côme, Patrice Aknin, Thierry Denoeux
    Abstract:

    Efficient dimensionality reduction can involve generative latent variable models such as probabilistic principal component analysis (PPCA) or Independent component analysis (ICA). Such models aim to extract a reduced set of variables (latent variables) from the original ones. In most cases, the learning of these models occurs within an unsupervised framework where only unlabeled samples are used. In this paper, we investigate the possibility of estimating an Independent Factor analysis model (IFA), and thus projecting original data onto a lower dimensional space, when prior knowledge on the cluster membership of some training samples is incorporated. We propose to allow this model to learn within a semi-supervised framework in which the likelihood of both labeled and unlabeled samples is maximized by a generalized expectation-maximization (GEM) algorithm. Experimental results with real data sets are provided to demonstrate the ability of our approach to find a low-dimensional manifold with good explanatory power.

  • Semi-supervised feature extraction using Independent Factor analysis
    2010
    Co-Authors: Latifa Oukhellou, Etienne Côme, Patrice Aknin, Thierry Denoeux
    Abstract:

    Dimensionality reduction can be efficiently achieved by generative latent variable models such as probabilistic principal component analysis (PPCA) or Independent component analysis (ICA), aiming to extract a reduced set of variables (latent variables) from the original ones. In most cases, the learning of these methods is achieved within the unsupervised framework where only unlabeled samples are used. In this paper we investigate the possibility of estimating an Independent Factor analysis (IFA) model, and thus projecting original data onto a lower dimensional space, when prior knowledge on the cluster membership of some training samples is incorporated. In the basic IFA model, latent variables are only recovered from their linear observed mixtures (original features). Both the mapping matrix (assumed to be linear) and the latent variable densities (assumed to be mutually Independent and generated according to mixtures of Gaussians) are learned from observed data. We propose to learn this model within a semi-supervised framework where the likelihood of both labeled and unlabeled samples is maximized by a generalized expectation-maximization (GEM) algorithm. Experimental results on real data sets are provided to demonstrate the ability of our approach to find a low-dimensional manifold with good explanatory power.

Thierry Denoeux - One of the best experts on this subject based on the ideXlab platform.

  • Fault diagnosis of a railway device using semi-supervised Independent Factor analysis with mixing constraints
    Pattern Analysis and Applications, 2012
    Co-Authors: Etienne Côme, Latifa Oukhellou, Thierry Denoeux, Patrice Aknin
    Abstract:

    Independent Factor analysis (IFA) defines a generative model for observed data that are assumed to be linear mixtures of some unknown non-Gaussian, mutually Independent latent variables (also called sources or Independent components). The probability density function of each individual latent variable is modelled by a mixture of Gaussians (MOG). Learning in the context of this model is usually performed within an unsupervised framework in which only unlabelled samples are used. Both the mixing matrix and the parameters of latent variable densities are learned from observed data. This paper investigates the possibility of estimating an IFA model in a noiseless setting when two kinds of prior information are incorporated, namely constraints on the mixing process and partial knowledge on the cluster membership of some training samples. Semi-supervised or partially supervised learning frameworks can thus be handled. The investigation of these two kinds of prior information was motivated by a real-world application concerning the fault diagnosis of railway track circuits. Results from both this application and simulated data are provided to demonstrate the capacity of our approach to enhance estimation accuracy and remove the indeterminacy commonly encountered in unsupervised IFA, such as source permutations.

  • ICMLA (2) - Semi-supervised Feature Extraction Using Independent Factor Analysis
    2011 10th International Conference on Machine Learning and Applications and Workshops, 2011
    Co-Authors: Latifa Oukhellou, Etienne Côme, Patrice Aknin, Thierry Denoeux
    Abstract:

    Efficient dimensionality reduction can involve generative latent variable models such as probabilistic principal component analysis (PPCA) or Independent component analysis (ICA). Such models aim to extract a reduced set of variables (latent variables) from the original ones. In most cases, the learning of these models occurs within an unsupervised framework where only unlabeled samples are used. In this paper, we investigate the possibility of estimating an Independent Factor analysis model (IFA), and thus projecting original data onto a lower dimensional space, when prior knowledge on the cluster membership of some training samples is incorporated. We propose to allow this model to learn within a semi-supervised framework in which the likelihood of both labeled and unlabeled samples is maximized by a generalized expectation-maximization (GEM) algorithm. Experimental results with real data sets are provided to demonstrate the ability of our approach to find a low-dimensional manifold with good explanatory power.

  • Semi-supervised feature extraction using Independent Factor analysis
    2010
    Co-Authors: Latifa Oukhellou, Etienne Côme, Patrice Aknin, Thierry Denoeux
    Abstract:

    Dimensionality reduction can be efficiently achieved by generative latent variable models such as probabilistic principal component analysis (PPCA) or Independent component analysis (ICA), aiming to extract a reduced set of variables (latent variables) from the original ones. In most cases, the learning of these methods is achieved within the unsupervised framework where only unlabeled samples are used. In this paper we investigate the possibility of estimating an Independent Factor analysis (IFA) model, and thus projecting original data onto a lower dimensional space, when prior knowledge on the cluster membership of some training samples is incorporated. In the basic IFA model, latent variables are only recovered from their linear observed mixtures (original features). Both the mapping matrix (assumed to be linear) and the latent variable densities (assumed to be mutually Independent and generated according to mixtures of Gaussians) are learned from observed data. We propose to learn this model within a semi-supervised framework where the likelihood of both labeled and unlabeled samples is maximized by a generalized expectation-maximization (GEM) algorithm. Experimental results on real data sets are provided to demonstrate the ability of our approach to find a low-dimensional manifold with good explanatory power.

  • Noiseless Independent Factor Analysis with Mixing Constraints in a Semi-supervised Framework. Application to Railway Device Fault Diagnosis
    2009
    Co-Authors: Etienne Côme, Latifa Oukhellou, Thierry Denoeux, Patrice Aknin
    Abstract:

    In Independent Factor Analysis (IFA), latent components (or sources) are recovered from only their linear observed mixtures. Both the mixing process and the source densities (assumed to be generated according to mixtures of Gaussians) are learned from observed data. This paper investigates the possibility of estimating the IFA model when two kinds of prior knowledge are incorporated: constraints on the mixing process and partial knowledge on the cluster membership of some examples. Semi-supervised or partially supervised learning frameworks can thus be handled. These two proposals were initially motivated by a real-world application that concerns the fault diagnosis of a railway device. Results on this application are provided to demonstrate its ability to enhance estimation accuracy and remove the indeterminacy commonly encountered in unsupervised IFA, such as source permutations. Keywords: Independent Factor Analysis, mixing constraints, semi-supervised learning, diagnosis, railway device

  • Noiseless Independent Factor Analysis with mixing constraints in a semi-supervised framework. Application to railway device fault diagnosis.
    2009
    Co-Authors: Etienne Côme, Latifa Oukhellou, Patrice Aknin, Thierry Denoeux
    Abstract:

    In Independent Factor Analysis (IFA), latent components (or sources) are recovered from only their linear observed mixtures. Both the mixing process and the source densities (that are assumed to be generated according to mixtures of Gaussians) are learned from observed data. This paper investigates the possibility of estimating the IFA model in its noiseless setting when two kinds of prior information are incorporated: constraints on the mixing process and partial knowledge on the cluster membership of some examples. Semi-supervised or partially supervised learning frameworks can thus be handled. These two proposals have been initially motivated by a real-world application that concerns fault diagnosis of a railway device. Results from this application are provided to demonstrate the ability of our approach to enhance estimation accuracy and remove indeterminacy commonly encountered in unsupervised IFA such as source permutations.
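
One simple way to encode the kind of mixing constraint these abstracts mention is a known sparsity pattern on the mixing matrix: entries known a priori to be zero (for example, a sensor that does not observe a given source) are reset after each unconstrained update. A toy sketch with hypothetical names; the paper's actual constraints may be richer:

```python
import numpy as np

def project_mixing(A, support):
    """Project an unconstrained mixing-matrix update onto a known
    support pattern: entries where support is False are forced to
    zero, leaving the rest untouched."""
    return np.where(support, np.asarray(A, dtype=float), 0.0)
```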

Latifa Oukhellou - One of the best experts on this subject based on the ideXlab platform.

  • Partially supervised Independent Factor Analysis using soft labels elicited from multiple experts: application to railway track circuit diagnosis
    Soft Computing, 2012
    Co-Authors: Zohra L. Cherfi, Etienne Côme, Latifa Oukhellou, Thierry Denœux, Patrice Aknin
    Abstract:

    Using a statistical model in a diagnosis task generally requires a large amount of labeled data. When ground truth information is not available, or is too expensive or difficult to collect, one has to rely on expert knowledge. In this paper, it is proposed to use partial information from domain experts expressed as belief functions. Expert opinions are combined in this framework and used with measurement data to estimate the parameters of a statistical model using a variant of the EM algorithm. The particular application investigated here concerns the diagnosis of railway track circuits. A noiseless Independent Factor Analysis model is postulated, assuming the observed variables extracted from railway track inspection signals to be generated by a linear mixture of Independent latent variables linked to the system component states. Usually, learning with this statistical model is performed in an unsupervised way using unlabeled examples only. Here, it is proposed to handle this learning process in a soft-supervised way, using imperfect information on the system component states. Fusing partially reliable information about cluster membership is shown to significantly improve classification results.

  • Fault diagnosis of a railway device using semi-supervised Independent Factor analysis with mixing constraints
    Pattern Analysis and Applications, 2012
    Co-Authors: Etienne Côme, Latifa Oukhellou, Thierry Denoeux, Patrice Aknin
    Abstract:

    Independent Factor analysis (IFA) defines a generative model for observed data that are assumed to be linear mixtures of some unknown non-Gaussian, mutually Independent latent variables (also called sources or Independent components). The probability density function of each individual latent variable is modelled by a mixture of Gaussians (MOG). Learning in the context of this model is usually performed within an unsupervised framework in which only unlabelled samples are used. Both the mixing matrix and the parameters of latent variable densities are learned from observed data. This paper investigates the possibility of estimating an IFA model in a noiseless setting when two kinds of prior information are incorporated, namely constraints on the mixing process and partial knowledge on the cluster membership of some training samples. Semi-supervised or partially supervised learning frameworks can thus be handled. The investigation of these two kinds of prior information was motivated by a real-world application concerning the fault diagnosis of railway track circuits. Results from both this application and simulated data are provided to demonstrate the capacity of our approach to enhance estimation accuracy and remove the indeterminacy commonly encountered in unsupervised IFA, such as source permutations.

  • Fault diagnosis of a railway device using semi-supervised Independent Factor analysis with mixing constraints
    Pattern Analysis and Applications, 2011
    Co-Authors: Etienne Côme, Latifa Oukhellou, Thierry Denœux, Patrice Aknin
    Abstract:

    Independent Factor analysis (IFA) defines a generative model for observed data that are assumed to be linear mixtures of some unknown non-Gaussian, mutually Independent latent variables (also called sources or Independent components). The probability density function of each individual latent variable is modelled by a mixture of Gaussians. Learning in the context of this model is usually performed within an unsupervised framework in which only unlabelled samples are used. Both the mixing matrix and the parameters of latent variable densities are learned from the observed data. This paper investigates the possibility of estimating an IFA model in a noiseless setting when two kinds of prior information are incorporated, namely constraints on the mixing process and partial knowledge on the cluster membership of some training samples. Semi-supervised or partially supervised learning frameworks can thus be handled. The investigation of these two kinds of prior information was motivated by a real-world application concerning the fault diagnosis of railway track circuits. Results from both this application and simulated data are provided to demonstrate the capacity of our approach to enhance estimation accuracy and remove the indeterminacy commonly encountered in unsupervised IFA, such as source permutations.

  • ICMLA (2) - Semi-supervised Feature Extraction Using Independent Factor Analysis
    2011 10th International Conference on Machine Learning and Applications and Workshops, 2011
    Co-Authors: Latifa Oukhellou, Etienne Côme, Patrice Aknin, Thierry Denoeux
    Abstract:

    Efficient dimensionality reduction can involve generative latent variable models such as probabilistic principal component analysis (PPCA) or Independent component analysis (ICA). Such models aim to extract a reduced set of variables (latent variables) from the original ones. In most cases, the learning of these models occurs within an unsupervised framework where only unlabeled samples are used. In this paper, we investigate the possibility of estimating an Independent Factor analysis model (IFA), and thus projecting original data onto a lower dimensional space, when prior knowledge on the cluster membership of some training samples is incorporated. We propose to allow this model to learn within a semi-supervised framework in which the likelihood of both labeled and unlabeled samples is maximized by a generalized expectation-maximization (GEM) algorithm. Experimental results with real data sets are provided to demonstrate the ability of our approach to find a low-dimensional manifold with good explanatory power.

  • Semi-supervised feature extraction using Independent Factor analysis
    2010
    Co-Authors: Latifa Oukhellou, Etienne Côme, Patrice Aknin, Thierry Denoeux
    Abstract:

    Dimensionality reduction can be efficiently achieved by generative latent variable models such as probabilistic principal component analysis (PPCA) or Independent component analysis (ICA), aiming to extract a reduced set of variables (latent variables) from the original ones. In most cases, the learning of these methods is achieved within the unsupervised framework where only unlabeled samples are used. In this paper we investigate the possibility of estimating an Independent Factor analysis (IFA) model, and thus projecting original data onto a lower dimensional space, when prior knowledge on the cluster membership of some training samples is incorporated. In the basic IFA model, latent variables are only recovered from their linear observed mixtures (original features). Both the mapping matrix (assumed to be linear) and the latent variable densities (assumed to be mutually Independent and generated according to mixtures of Gaussians) are learned from observed data. We propose to learn this model within a semi-supervised framework where the likelihood of both labeled and unlabeled samples is maximized by a generalized expectation-maximization (GEM) algorithm. Experimental results on real data sets are provided to demonstrate the ability of our approach to find a low-dimensional manifold with good explanatory power.

Cinzia Viroli - One of the best experts on this subject based on the ideXlab platform.

  • The Independent Factor analysis approach to latent variable modelling
    Statistics, 2010
    Co-Authors: Angela Montanari, Cinzia Viroli
    Abstract:

    Independent Factor analysis (IFA) has recently been proposed in the signal processing literature as a way to model a set of observed variables through linear combinations of latent Independent variables and a noise term. A peculiarity of the method is that it defines a probability density function for the latent variables by mixtures of Gaussians. The aim of this paper is to cast the method into a more rigorous statistical framework and to propose some developments. In the first part, we present the IFA model in its population version, address identifiability issues and draw some parallels between the IFA model and the ordinary Factor analysis (FA) one. Then we show that the IFA model may be reinterpreted as an Independent component analysis-based rotation of an ordinary FA solution. We also give evidence that the IFA model represents a special case of mixture of Factor analysers. In the second part, we address inferential issues, also deriving the standard errors for the model parameter estimates and pro...
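
In the notation the abstract alludes to, a plausible rendering of the IFA population model (the paper's exact symbols may differ) is:

```latex
x = \Lambda f + u, \qquad
f_j \sim \sum_{k=1}^{K_j} w_{jk}\,\mathcal{N}(\mu_{jk}, \sigma_{jk}^{2}),
\qquad j = 1, \dots, q,
```

with the factors $f_1, \dots, f_q$ mutually independent and $u$ a Gaussian noise term. Ordinary Factor analysis is recovered when each $f_j$ collapses to a single standard Gaussian component, which is the parallel the abstract draws.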

  • Independent Factor discriminant analysis
    Computational Statistics & Data Analysis, 2008
    Co-Authors: Angela Montanari, Daniela Giovanna Calo, Cinzia Viroli
    Abstract:

    In the general classification context, the Bayes decision rule requires estimating the class conditional probability density functions. A mixture model for the observed variables, derived by assuming that the data have been generated by an Independent Factor model, is proposed. Independent Factor analysis is in fact a generative latent variable model whose structure closely resembles the one of the ordinary Factor model, but it assumes that the latent variables are mutually Independent and not necessarily Gaussian. The method therefore provides a dimension reduction together with a semiparametric estimate of the class conditional probability density functions. This density approximation is plugged into the classic Bayes rule and its performance is evaluated on both real and simulated data.
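
The plug-in Bayes rule the abstract describes picks the class maximizing prior times estimated class-conditional density. A minimal sketch, with the density estimates left as arbitrary callables (in the paper they would be IFA-based mixture estimates):

```python
import numpy as np

def bayes_classify(x, densities, priors):
    """Plug-in Bayes decision rule: assign x to the class whose
    prior * class-conditional density estimate is largest."""
    scores = [p * f(x) for f, p in zip(densities, priors)]
    return int(np.argmax(scores))
```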

  • Fitting the Independent Factor analysis model using the MCMC algorithm
    Journal of Statistical Computation and Simulation, 2007
    Co-Authors: Cinzia Viroli
    Abstract:

    Independent Factor analysis is a recent and novel latent variable model in which the Factors are supposed to be mutually Independent and not necessarily Gaussian distributed. The Factors are modeled by Gaussian mixtures, which are flexible enough to approximate any probability density function. The model can be estimated quite effectively by the EM algorithm when the number of Factors is not too high. However, the computational burden needed to fit the model grows rapidly with the number of Factors and the number of terms in the mixtures involved. In any but the simplest cases, other estimation procedures have to be employed. In this work, an MCMC approach based on the Gibbs sampler is proposed. Its estimation performance is compared with that of the ordinary EM algorithm on real and simulated data.
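
The flavor of Gibbs sampling the abstract evaluates can be illustrated on a much simpler model: a two-component one-dimensional Gaussian mixture with unit component variances and weak normal priors on the means. This toy stand-in alternates between sampling component assignments and sampling the means; it is not the paper's sampler for the full IFA model:

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_mog(X, n_iter=300):
    """Toy Gibbs sampler: two-component 1-D Gaussian mixture with unit
    variances, equal mixing weights, and N(0, 10^2) priors on the means."""
    mu = np.array([-1.0, 1.0])                     # initial means
    draws = []
    for _ in range(n_iter):
        # 1) sample assignments z_i given the current means
        logp = -0.5 * (X[:, None] - mu[None, :]) ** 2
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = (rng.random(len(X)) < p[:, 1]).astype(int)
        # 2) sample each mean given its assigned points (conjugate update)
        for k in (0, 1):
            xs = X[z == k]
            prec = len(xs) + 1.0 / 100.0           # data + prior precision
            mu[k] = rng.normal(xs.sum() / prec, np.sqrt(1.0 / prec))
        draws.append(np.sort(mu))                  # sort to sidestep label switching
    return np.array(draws)
```

Sorting each stored draw is a crude fix for the label-switching indeterminacy that also appears in unsupervised IFA.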

  • Choosing the Number of Factors in Independent Factor Analysis Model
    2005
    Co-Authors: Cinzia Viroli
    Abstract:

    Independent Factor Analysis (IFA) has recently been proposed in the signal processing literature as a way to model a set of observed variables through linear combinations of hidden Independent ones plus a noise term. Despite the peculiarity of its origin, the method can be framed within the latent variable model domain, and some parallels with ordinary Factor Analysis can be drawn. If no prior information on the latent structure is available, a relevant issue concerns the correct specification of the model. In this work, some methods to detect the number of significant latent variables are investigated. Moreover, since the method defines a probability density function for the latent variables by mixtures of Gaussians, the correct number of mixture components must also be determined. This issue is treated according to two main approaches. The first amounts to carrying out a likelihood ratio test. The other is based on a penalized form of the likelihood, which leads to the so-called information criteria. Some simulations and empirical results on real data sets are finally presented.
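
The penalized-likelihood route the abstract mentions can be sketched with BIC, one common member of the information-criterion family (the paper may consider others): each candidate model is scored by its log-likelihood penalized by parameter count, and the lowest score wins.

```python
import numpy as np

def bic(loglik, n_params, n_samples):
    """Bayesian information criterion: penalized log-likelihood,
    lower is better."""
    return -2.0 * loglik + n_params * np.log(n_samples)

def choose_model(candidates, n_samples):
    """candidates: list of (loglik, n_params) for each candidate
    number of factors/components; returns the index of the model
    with minimal BIC."""
    scores = [bic(ll, k, n_samples) for ll, k in candidates]
    return int(np.argmin(scores))
```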

  • GfKl - Model-based Density Estimation by Independent Factor Analysis
    From Data and Information Analysis to Knowledge Engineering, 1
    Co-Authors: Daniela Giovanna Calo, Angela Montanari, Cinzia Viroli
    Abstract:

    In this paper we propose a model-based density estimation method rooted in Independent Factor Analysis (IFA). IFA is in fact a generative latent variable model whose structure closely resembles that of an ordinary Factor model, but which assumes that the latent variables are mutually Independent and distributed according to Gaussian mixtures. Under these assumptions, the observed data density can itself be modelled as a mixture of Gaussian distributions. The number of free parameters is controlled through the dimension of the latent Factor space. The model is proved to be a special case of mixture of Factor analyzers which is less parameterized than the original proposal by McLachlan and Peel (2000). We illustrate the use of IFA density estimation for supervised classification on both real and simulated data.