Regression Problem

14,000,000 Leading Edge Experts on the ideXlab platform


The experts below are selected from a list of 12,651 experts worldwide, ranked by the ideXlab platform.

Alex Fornito - One of the best experts on this subject based on the ideXlab platform.

  • Identifying and removing widespread signal deflections from fMRI data: Rethinking the global signal regression problem
    NeuroImage, 2020
    Co-Authors: Kevin M. Aquino, Ben D. Fulcher, Linden Parkes, Kristina Sabaroedin, Alex Fornito
    Abstract:

    One of the most controversial procedures in the analysis of resting-state functional magnetic resonance imaging (rsfMRI) data is global signal regression (GSR): the removal, via linear regression, of the mean signal averaged over the entire brain. On one hand, the global mean signal contains variance associated with respiratory, scanner-, and motion-related artifacts, and its removal via GSR improves various quality-control metrics, enhances the anatomical specificity of functional-connectivity patterns, and can increase the behavioral variance explained by such patterns. On the other hand, GSR alters the distribution of regional signal correlations in the brain, can induce artifactual anticorrelations, may remove real neural signal, and can distort case-control comparisons of functional-connectivity measures. Global signal fluctuations can be identified visually from a matrix of colour-coded signal intensities, called a carpet plot, in which rows represent voxels and columns represent time. Prior to GSR, large, periodic bands of coherent signal changes that affect most of the brain are often apparent; after GSR, these apparently global changes are greatly diminished. Here, using three independent datasets, we show that reordering carpet plots to emphasize cluster structure in the data reveals a greater diversity of spatially widespread signal deflections (WSDs) than previously thought. Their precise form varies across time and participants, and GSR is only effective in removing specific kinds of WSDs. We present an alternative, iterative correction method, called Diffuse Cluster Estimation and Regression (DiCER), which identifies representative signals associated with large clusters of coherent voxels. DiCER is more effective than GSR at removing diverse WSDs as visualized in carpet plots, reduces correlations between functional connectivity and head-motion estimates, reduces inter-individual variability in global correlation structure, and results in comparable or improved identification of canonical functional-connectivity networks. Using task-fMRI data across 47 contrasts from 7 tasks in the Human Connectome Project, we also present evidence that DiCER is more successful than GSR in preserving the spatial structure of expected task-related activation patterns. Our findings indicate that care must be exercised when examining WSDs (and their possible removal) in rsfMRI data, and that DiCER is a viable alternative to GSR for removing anatomically widespread and temporally coherent signals. All code for implementing DiCER and replicating our results is available at https://github.com/BMHLab/DiCER.
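As a concrete illustration of the GSR step described in this abstract, the sketch below regresses the global mean signal out of a voxels-by-time data matrix via ordinary least squares. This is a minimal, hedged illustration on synthetic data, not the authors' pipeline; the function name and the toy dimensions are invented for the example.

```python
import numpy as np

def global_signal_regression(data):
    """Remove the global mean signal from each voxel time series via
    linear regression. `data` is a (voxels, time) array; returns the
    residuals after regressing out the (demeaned) global signal."""
    gs = data.mean(axis=0)                # global signal: mean over voxels
    gs = gs - gs.mean()                   # demean the regressor
    betas = data @ gs / (gs @ gs)         # per-voxel regression coefficients
    return data - np.outer(betas, gs)     # residual time series

# Toy example: a shared fluctuation plus voxel-specific noise
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)
data = 0.8 * shared + 0.2 * rng.standard_normal((50, 200))
cleaned = global_signal_regression(data)
```

By construction, every residual time series is exactly orthogonal to the demeaned global signal, which is why, after GSR, apparently global bands in a carpet plot are strongly attenuated.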

  • Identifying and removing widespread signal deflections from fMRI data: Rethinking the global signal regression problem.
    bioRxiv, 2019
    Co-Authors: Kevin M. Aquino, Ben D. Fulcher, Linden Parkes, Kristina Sabaroedin, Alex Fornito
    Abstract:

    One of the most controversial procedures in the analysis of resting-state functional magnetic resonance imaging (rsfMRI) data is global signal regression (GSR): the removal, via linear regression, of the mean signal averaged over the entire brain, from voxel-wise or regional time series. On one hand, the global mean signal contains variance associated with respiratory, scanner-, and motion-related artifacts. Its removal via GSR improves various quality-control metrics, enhances the anatomical specificity of functional-connectivity patterns, and can increase the behavioural variance explained by such patterns. On the other hand, GSR alters the distribution of regional signal correlations in the brain, can induce artifactual anticorrelations, may remove real neural signal, and can distort case-control comparisons of functional-connectivity measures. Global signal fluctuations can be identified by visualizing a matrix of colour-coded signal intensities, called a carpet plot, in which rows represent voxels and columns represent time. Prior to GSR, large, periodic bands of coherent signal changes that affect most of the brain are often apparent; after GSR, these apparent global changes are greatly diminished. Here, using three independent datasets, we show that reordering carpet plots to emphasize cluster structure in the data reveals a greater diversity of spatially widespread signal deflections (WSDs) than previously thought. Their precise form varies across time and participants, and GSR is only effective in removing specific kinds of WSDs. We present an alternative, iterative correction method, called Diffuse Cluster Estimation and Regression (DiCER), which identifies representative signals associated with large clusters of coherent voxels. DiCER is more effective than GSR at removing diverse WSDs as visualized in carpet plots, reduces correlations between functional connectivity and head-motion estimates, reduces inter-individual variability in global correlation structure, and results in comparable or improved identification of canonical functional-connectivity networks. All code for implementing DiCER and replicating our results is available at https://github.com/BMHLab/DiCER.
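The published DiCER implementation is at the repository linked above. Purely as a hedged sketch of the general idea — iteratively finding a large cluster of coherent voxels and regressing out a representative signal — one might write something like the following. The function name, the crude seed-based clustering, the thresholds, and the synthetic data are all invented for illustration and are not the published algorithm.

```python
import numpy as np

def iterative_cluster_regression(data, r_thresh=0.5, min_frac=0.2, max_iter=5):
    """Toy DiCER-style loop: find a large cluster of mutually coherent
    voxels, regress its mean signal out of every voxel time series,
    and repeat until no widespread cluster remains."""
    X = data - data.mean(axis=1, keepdims=True)       # demean each voxel
    for _ in range(max_iter):
        seed = X.mean(axis=0)                         # crude seed: current global mean
        r = (X @ seed) / (X.shape[1] * X.std(axis=1) * seed.std() + 1e-12)
        cluster = r > r_thresh                        # voxels coherent with the seed
        if cluster.mean() < min_frac:                 # no widespread deflection left
            break
        reg = X[cluster].mean(axis=0)                 # representative cluster signal
        X -= np.outer(X @ reg / (reg @ reg), reg)     # regress it out everywhere
    return X

# Synthetic example: a deflection shared by 60% of voxels, plus noise
rng = np.random.default_rng(1)
artifact = rng.standard_normal(150)
data = 0.3 * rng.standard_normal((100, 150))
data[:60] += artifact
cleaned = iterative_cluster_regression(data)
```

Unlike plain GSR, the regressor here is estimated from a data-driven cluster of coherent voxels, so a deflection that affects only part of the brain can still be identified and removed.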

Rama Chellappa - One of the best experts on this subject based on the ideXlab platform.

  • Analysis of Sparse Regularization Based Robust Regression Approaches
    IEEE Transactions on Signal Processing, 2013
    Co-Authors: Kaushik Mitra, Ashok Veeraraghavan, Rama Chellappa
    Abstract:

    Regression in the presence of outliers is an inherently combinatorial problem. However, compressive sensing theory suggests that certain combinatorial optimization problems can be solved exactly using polynomial-time algorithms. Motivated by this connection, several research groups have proposed polynomial-time algorithms for robust regression. In this paper, we specifically address the traditional robust regression problem, in which the number of observations exceeds the number of unknown regression parameters and the structure of the regressor matrix is determined by the training dataset (and hence may not satisfy properties such as the Restricted Isometry Property or incoherence). We derive the precise conditions under which the sparse-regularization (l0- and l1-norm) approaches solve the robust regression problem. We show that the smallest principal angle between the regressor subspace and all k-dimensional outlier subspaces is the fundamental quantity that determines the performance of these algorithms, and in terms of this angle we provide an estimate of the number of outliers the sparse-regularization approaches can handle. We then empirically evaluate the sparse (l1-norm) regularization approach against traditional robust regression algorithms to identify accurate and efficient algorithms for high-dimensional regression problems.
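The sparse-regularization idea analyzed in this abstract can be sketched as follows: model the observations as y = Xb + e + noise with a sparse outlier vector e, and minimize a least-squares term plus an l1 penalty on e. The toy implementation below (alternating a least-squares update of b with soft-thresholding of e; all names, the penalty value, and the synthetic data are invented for the example) illustrates the approach and is not the authors' code.

```python
import numpy as np

def robust_regression_l1(X, y, lam=1.0, n_iter=100):
    """Alternating minimization of 0.5*||y - X b - e||^2 + lam*||e||_1:
    a least-squares update of b, then soft-thresholding of the residual
    to update the sparse outlier vector e."""
    e = np.zeros_like(y)
    for _ in range(n_iter):
        b, *_ = np.linalg.lstsq(X, y - e, rcond=None)   # update b given e
        r = y - X @ b
        e = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)  # soft threshold
    return b, e

# Toy data: 100 observations, 3 parameters, 10 gross outliers
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
b_true = np.array([1.0, -2.0, 0.5])
y = X @ b_true + 0.05 * rng.standard_normal(100)
y[:10] += 8.0                                           # gross outliers
b_hat, e_hat = robust_regression_l1(X, y, lam=1.0)
```

On this toy problem, the recovered e is nonzero exactly on the corrupted observations, and b is estimated accurately despite them; how many outliers such schemes tolerate is governed by the principal-angle quantity analyzed in the paper.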

  • ICASSP - Robust regression using sparse learning for high dimensional parameter estimation problems
    2010 IEEE International Conference on Acoustics Speech and Signal Processing, 2010
    Co-Authors: Kaushik Mitra, Ashok Veeraraghavan, Rama Chellappa
    Abstract:

    Algorithms such as Least Median of Squares (LMedS) and Random Sample Consensus (RANSAC) have been very successful for low-dimensional robust regression problems. However, the combinatorial nature of these algorithms makes them practically unusable for high-dimensional applications. In this paper, we introduce algorithms whose time complexity is cubic in the dimension of the problem, making them computationally efficient for high-dimensional problems. We formulate the robust regression problem by projecting the dependent variable onto the null space of the independent variables; this projection receives significant contributions only from the outliers. We then identify the outliers using sparse representation/learning-based algorithms. Under certain conditions that follow from the theory of sparse representation, these polynomial-time algorithms can accurately solve the robust regression problem, which is, in general, a combinatorial problem. We present experimental results that demonstrate the efficacy of the proposed algorithms. We also analyze the intrinsic parameter space of robust regression and identify an efficient and accurate class of algorithms for different operating conditions. An application to facial age estimation is presented.
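The null-space formulation described above can be sketched concretely: multiplying y = Xb + e by a projector onto the orthogonal complement of the column space of X annihilates Xb and leaves a compressive-sensing-style problem in the sparse outlier vector e, which a simple iterative soft-thresholding (ISTA) loop can recover. The function name, thresholds, and synthetic data below are invented for illustration; this is a sketch of the idea, not the authors' algorithm.

```python
import numpy as np

def outliers_via_nullspace(X, y, lam=0.1, n_iter=500):
    """Project y onto the orthogonal complement of col(X); the result
    P @ y = P @ e (+ projected noise) depends only on the sparse outlier
    vector e, recovered here by iterative soft-thresholding (ISTA)."""
    n = X.shape[0]
    P = np.eye(n) - X @ np.linalg.pinv(X)   # projector annihilating X @ b
    z = P @ y
    e = np.zeros(n)
    for _ in range(n_iter):                 # ISTA, step size 1 (||P|| = 1)
        e = e + P @ (z - P @ e)             # gradient step on 0.5*||z - P e||^2
        e = np.sign(e) * np.maximum(np.abs(e) - lam, 0.0)
    return e

# Synthetic data: 3 regression parameters, 100 observations, 10 gross outliers
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 3))
b_true = np.array([1.0, -2.0, 0.5])
y = X @ b_true + 0.05 * rng.standard_normal(100)
y[:10] += 8.0
e_hat = outliers_via_nullspace(X, y)
inliers = np.abs(e_hat) < 1.0               # refit on the identified inliers
b_hat, *_ = np.linalg.lstsq(X[inliers], y[inliers], rcond=None)
```

Once the outliers are identified from e, an ordinary least-squares refit on the remaining observations yields the final parameter estimate.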

Stephane Girard - One of the best experts on this subject based on the ideXlab platform.

  • On the regularization of Sliced Inverse Regression
    2010
    Co-Authors: Stephane Girard
    Abstract:

    Sliced Inverse Regression (SIR) is an effective method for dimension reduction in high-dimensional regression problems. The original method, however, requires the inversion of the predictors' covariance matrix. In the case of collinearity among the predictors, or when the sample size is small relative to the dimension, the inversion is not possible and a regularization technique has to be used. Our approach is based on an interpretation of the SIR axes as solutions of an inverse regression problem. A prior distribution is then introduced on the unknown parameters of the inverse regression problem in order to regularize their estimation [3]. We show that some existing SIR regularizations fit within our framework, which permits a global understanding of these methods [2]. Three new priors are proposed, leading to new regularizations of the SIR method, and compared on simulated data. An application to the estimation of Mars surface physical properties from hyperspectral images [1] is provided. -- References: [1] C. Bernard-Michel, S. Douté, M. Fauvel, L. Gardes & S. Girard. "Retrieval of Mars surface physical properties from OMEGA hyperspectral images using Regularized Sliced Inverse Regression", Journal of Geophysical Research - Planets, 114, E06005, 2009. [2] C. Bernard-Michel, L. Gardes & S. Girard. "A Note on Sliced Inverse Regression with Regularizations", Biometrics, 64, 982--986, 2008. [3] C. Bernard-Michel, L. Gardes & S. Girard. "Gaussian Regularized Sliced Inverse Regression", Statistics and Computing, 19, 85--98, 2009.

  • Gaussian Regularized Sliced Inverse Regression
    Statistics and Computing, 2009
    Co-Authors: Caroline Bernard-Michel, Laurent Gardes, Stephane Girard
    Abstract:

    Sliced Inverse Regression (SIR) is an effective method for dimension reduction in high-dimensional regression problems. The original method, however, requires the inversion of the predictors' covariance matrix. In the case of collinearity among the predictors, or when the sample size is small relative to the dimension, the inversion is not possible and a regularization technique has to be used. Our approach is based on a Fisher Lecture given by R.D. Cook in which it is shown that the SIR axes can be interpreted as solutions of an inverse regression problem. In this paper, a Gaussian prior distribution is introduced on the unknown parameters of the inverse regression problem in order to regularize their estimation. We show that some existing SIR regularizations fit within our framework, which permits a global understanding of these methods. Three new priors are proposed, leading to new regularizations of the SIR method. A comparison on simulated data is provided.
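As a hedged sketch of the regularization discussed in these abstracts (not the authors' implementation), the code below replaces the inverse of the predictors' covariance matrix with (Sigma + tau*I)^{-1}, the Tikhonov/ridge form corresponding to one simple Gaussian prior. The function name, the toy single-index model, and the value of tau are invented for the example.

```python
import numpy as np

def regularized_sir(X, y, n_slices=10, tau=1e-2, n_dirs=1):
    """Sliced Inverse Regression with a ridge-regularized covariance:
    slice the response, form the between-slice covariance of predictor
    slice means, and solve the generalized eigenproblem with
    (Sigma + tau * I) in place of Sigma."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / n
    order = np.argsort(y)                         # sort observations by response
    M = np.zeros((p, p))                          # between-slice covariance
    for idx in np.array_split(order, n_slices):
        m = Xc[idx].mean(axis=0)                  # slice mean of predictors
        M += (len(idx) / n) * np.outer(m, m)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sigma + tau * np.eye(p), M))
    top = np.argsort(evals.real)[::-1][:n_dirs]
    return evecs[:, top].real                     # estimated e.d.r. directions

# Toy single-index model: y depends on X only through one direction b
rng = np.random.default_rng(3)
X = rng.standard_normal((500, 5))
b = np.array([3.0, 1.0, 0.0, 0.0, 0.0]) / np.sqrt(10.0)
y = np.exp(X @ b) + 0.1 * rng.standard_normal(500)
d = regularized_sir(X, y)[:, 0]
```

The recovered direction d is close (up to sign) to the true index b; when Sigma is singular or ill-conditioned, the tau*I term keeps the solve well-posed, which is the point of the regularization.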

  • Regularization methods for Sliced Inverse Regression
    2008
    Co-Authors: Caroline Bernard-Michel, Laurent Gardes, Stephane Girard
    Abstract:

    Sliced Inverse Regression (SIR) is an effective method for dimension reduction in high-dimensional regression problems. The original method, however, requires the inversion of the predictors' covariance matrix. In the case of collinearity among the predictors, or when the sample size is small relative to the dimension, the inversion is not possible and a regularization technique has to be used. Our approach is based on a Fisher Lecture given by R.D. Cook in which it is shown that the SIR axes can be interpreted as solutions of an inverse regression problem. We propose to introduce a Gaussian prior distribution on the unknown parameters of the inverse regression problem in order to regularize their estimation. We show that some existing SIR regularizations fit within our framework, which permits a global understanding of these methods. Three new priors are proposed, leading to new regularizations of the SIR method. A comparison on simulated data, as well as an application to the estimation of Mars surface physical properties from hyperspectral images, is provided.

Kevin M. Aquino - One of the best experts on this subject based on the ideXlab platform.

  • Identifying and removing widespread signal deflections from fMRI data: Rethinking the global signal regression problem
    NeuroImage, 2020
    Co-Authors: Kevin M. Aquino, Ben D. Fulcher, Linden Parkes, Kristina Sabaroedin, Alex Fornito

  • Identifying and removing widespread signal deflections from fMRI data: Rethinking the global signal regression problem.
    bioRxiv, 2019
    Co-Authors: Kevin M. Aquino, Ben D. Fulcher, Linden Parkes, Kristina Sabaroedin, Alex Fornito

Kaushik Mitra - One of the best experts on this subject based on the ideXlab platform.

  • Analysis of Sparse Regularization Based Robust Regression Approaches
    IEEE Transactions on Signal Processing, 2013
    Co-Authors: Kaushik Mitra, Ashok Veeraraghavan, Rama Chellappa

  • ICASSP - Robust regression using sparse learning for high dimensional parameter estimation problems
    2010 IEEE International Conference on Acoustics Speech and Signal Processing, 2010
    Co-Authors: Kaushik Mitra, Ashok Veeraraghavan, Rama Chellappa