Linear Models



The experts below are selected from a list of 312 experts worldwide, ranked by the ideXlab platform.

Ryohei Fujimaki - One of the best experts on this subject based on the ideXlab platform.

  • NIPS - Partition-wise Linear Models
    2014
    Co-Authors: Hidekazu Oiwa, Ryohei Fujimaki
    Abstract:

    Region-specific linear models are widely used in practical applications because of their non-linear but highly interpretable model representations. One of the key challenges in their use is non-convexity in simultaneous optimization of regions and region-specific models. This paper proposes novel convex region-specific linear models, which we refer to as partition-wise linear models. Our key ideas are 1) assigning linear models not to regions but to partitions (region-specifiers) and representing region-specific linear models by linear combinations of partition-specific models, and 2) optimizing regions via partition selection from a large number of given partition candidates by means of convex structured regularizations. In addition to providing initialization-free, globally optimal solutions, our convex formulation makes it possible to derive a generalization bound and to use such advanced optimization techniques as proximal methods and decomposition of the proximal maps for sparsity-inducing regularizations. Experimental results demonstrate that our partition-wise linear models perform better than, or are at least competitive with, state-of-the-art region-specific or locally linear models.
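
The construction in the abstract can be made concrete with a minimal, hedged sketch (not the authors' code): candidate partitions are axis-aligned thresholds, each partition carries its own linear model, the prediction is the activeness-weighted sum of those models, and a group-lasso proximal step performs the convex partition selection. The data, the candidate set, and all names are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of a partition-wise linear model (illustrative, not the
# authors' code). Each candidate partition p has activeness a_p(x) = 1[x_j >= t]
# and its own linear model; the prediction is the activeness-weighted sum of
# the partition models, and a group-lasso proximal step zeroes out whole
# partitions, i.e. performs convex partition selection.
rng = np.random.default_rng(0)
n, d = 200, 2
X = rng.normal(size=(n, d))
y = np.where(X[:, 0] >= 0, X @ [2.0, 0.0], X @ [0.0, -1.0]) + 0.1 * rng.normal(size=n)

# Candidate partitions: an always-active "global" one plus thresholds per feature.
parts = [(None, 0.0)] + [(j, t) for j in range(d) for t in (-0.5, 0.0, 0.5)]
A = np.stack([np.ones(n) if j is None else (X[:, j] >= t).astype(float)
              for j, t in parts])                      # (P, n) activeness matrix
W = np.zeros((len(parts), d))                          # one linear model per partition

lam, step = 0.02, 0.05
for _ in range(5000):
    r = np.einsum('pn,pk,nk->n', A, W, X) - y          # residuals
    W -= step * np.einsum('pn,n,nk->pk', A, r, X) / n  # gradient step
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    W *= np.clip(1 - step * lam / np.maximum(norms, 1e-12), 0.0, None)  # group-lasso prox

mse = np.mean((np.einsum('pn,pk,nk->n', A, W, X) - y) ** 2)
```

The region-specific model for any input is recovered as the sum of its active partition models, which is what makes the fitted predictor piecewise linear while the training problem stays convex.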

  • Partition-wise Linear Models
    arXiv: Machine Learning, 2014
    Co-Authors: Hidekazu Oiwa, Ryohei Fujimaki

Hidekazu Oiwa - One of the best experts on this subject based on the ideXlab platform.

  • NIPS - Partition-wise Linear Models
    2014
    Co-Authors: Hidekazu Oiwa, Ryohei Fujimaki
    Abstract:

    Region-specific linear models are widely used in practical applications because of their non-linear but highly interpretable model representations. One of the key challenges in their use is non-convexity in simultaneous optimization of regions and region-specific models. This paper proposes novel convex region-specific linear models, which we refer to as partition-wise linear models. Our key ideas are 1) assigning linear models not to regions but to partitions (region-specifiers) and representing region-specific linear models by linear combinations of partition-specific models, and 2) optimizing regions via partition selection from a large number of given partition candidates by means of convex structured regularizations. In addition to providing initialization-free, globally optimal solutions, our convex formulation makes it possible to derive a generalization bound and to use such advanced optimization techniques as proximal methods and decomposition of the proximal maps for sparsity-inducing regularizations. Experimental results demonstrate that our partition-wise linear models perform better than, or are at least competitive with, state-of-the-art region-specific or locally linear models.

  • Partition-wise Linear Models
    arXiv: Machine Learning, 2014
    Co-Authors: Hidekazu Oiwa, Ryohei Fujimaki

Narayanaswamy Balakrishnan - One of the best experts on this subject based on the ideXlab platform.

  • Robust likelihood inference for regression parameters in partially Linear Models
    Computational Statistics & Data Analysis, 2011
    Co-Authors: Chung-wei Shen, Tsung-shan Tsou, Narayanaswamy Balakrishnan
    Abstract:

    A robust likelihood approach is proposed for inference about regression parameters in partially linear models. More specifically, normality is adopted as the working model and is properly corrected to accomplish the objective. Knowledge about the true underlying random mechanism is not required for the proposed method. Simulations and illustrative examples demonstrate the usefulness of the proposed robust likelihood method, even in irregular situations caused by the components of the nonparametric smooth function in partially linear models.
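
The model under study can be sketched as follows. This is a hedged illustration of a partially linear model fitted by plain backfitting under a normal working likelihood (i.e. least squares); the authors' robust correction of that working likelihood is not reproduced, and all names and data are illustrative.

```python
import numpy as np

# Hedged sketch of the partially linear model y = X @ beta + g(t) + noise,
# fitted by backfitting under a normal working likelihood: alternate an OLS
# step for the linear part with a kernel-smoothing step for the nonparametric
# component g. Illustrative only; not the paper's robust likelihood method.
rng = np.random.default_rng(1)
n = 300
t = rng.uniform(0.0, 1.0, n)
X = rng.normal(size=(n, 2))
beta_true = np.array([1.5, -2.0])
y = X @ beta_true + np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=n)

def kernel_smooth(t, r, h=0.05):
    """Nadaraya-Watson estimate of E[r | t] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    return (w @ r) / w.sum(axis=1)

beta, g = np.zeros(2), np.zeros(n)
for _ in range(20):                                   # backfitting iterations
    beta, *_ = np.linalg.lstsq(X, y - g, rcond=None)  # linear part given g
    g = kernel_smooth(t, y - X @ beta)                # smooth part given beta
```

Because the covariates are independent of the smoothing variable here, the backfitting iterations settle quickly on estimates of the regression parameters; the paper's concern is making inference about those parameters robust to misspecification of the working normal model.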

Alessandro Rinaldo - One of the best experts on this subject based on the ideXlab platform.

  • Maximum likelihood estimation in log-linear models
    Annals of Statistics, 2012
    Co-Authors: Stephen E Fienberg, Alessandro Rinaldo
    Abstract:

    In this article, we combine results from the theory of linear exponential families, polyhedral geometry, and algebraic geometry to provide analytic and geometric characterizations of log-linear models and maximum likelihood estimation. Geometric and combinatorial conditions for the existence of the maximum likelihood estimate (MLE) of the cell mean vector of a contingency table are given for general log-linear models under conditional Poisson sampling. It is shown that any log-linear model can be generalized to an extended exponential family of distributions parametrized, in a mean-value sense, by points of a polyhedron. Such a parametrization is continuous and, with respect to this extended family, the MLE always exists and is unique. In addition, the set of cell mean vectors forms a subset of a toric variety consisting of non-negative points satisfying a certain system of polynomial equations. These results are of theoretical and practical importance for estimation and model selection.
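
The simplest instance of the MLE problem the abstract characterizes can be sketched concretely. The example below fits the independence log-linear model for a two-way table by iterative proportional fitting (IPF); the MLE of the cell means exists when the sufficient margins are positive, and a zero margin caused by sampling zeros is the classic nonexistence case the paper generalizes. This is standard textbook IPF, not the authors' extended-family machinery, and the table is illustrative.

```python
import numpy as np

# Hedged sketch: MLE of the cell means of a two-way contingency table under
# the independence log-linear model, computed by iterative proportional
# fitting (IPF). A zero row or column margin would make the MLE nonexistent,
# which is the phenomenon the paper characterizes for general log-linear
# models. Illustrative textbook IPF only.
table = np.array([[10.0, 4.0, 6.0],
                  [ 3.0, 9.0, 8.0]])

mu = np.ones_like(table)                  # start from the uniform table
for _ in range(50):                       # match row margins, then column margins
    mu *= table.sum(axis=1, keepdims=True) / mu.sum(axis=1, keepdims=True)
    mu *= table.sum(axis=0, keepdims=True) / mu.sum(axis=0, keepdims=True)

# For independence, IPF reproduces the closed form mu[i, j] = row_i * col_j / N.
closed = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
```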

  • Maximum likelihood estimation in log-linear models
    arXiv: Statistics Theory, 2011
    Co-Authors: Stephen E Fienberg, Alessandro Rinaldo
    Abstract:

    We study maximum likelihood estimation in log-linear models under conditional Poisson sampling schemes. We derive necessary and sufficient conditions for the existence of the maximum likelihood estimator (MLE) of the model parameters and investigate estimability of the natural and mean-value parameters under a nonexistent MLE. Our conditions focus on the role of sampling zeros in the observed table. We situate our results within the framework of extended exponential families, and we exploit the geometric properties of log-linear models. We propose algorithms for extended maximum likelihood estimation that improve and correct the existing algorithms for log-linear model analysis.

Ronald Christensen - One of the best experts on this subject based on the ideXlab platform.

  • Multivariate Linear Models: Applications
    Springer Texts in Statistics, 2019
    Co-Authors: Ronald Christensen
    Abstract:

    This chapter applies the results of Chap. 9 to the one-sample, two-sample, and one-way ANOVA problems. A major tool in MANOVA is profile analysis, which is analogous to performing a split-plot analysis. Profile analysis leads us to the consideration of generalized multivariate linear models (growth curve models, GMANOVA models). Finally, we consider testing whether a subset of the dependent variables actually provides us with additional information over and above the variables not considered in the subset. In Chap. 12, testing for additional information is seen as an important tool in linear discriminant analysis.
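
The profile-analysis tool mentioned above can be sketched in its simplest, one-sample form: flatness of a p-variate mean profile is tested by applying a successive-difference contrast matrix C and computing Hotelling's T^2 on the transformed data. The data below are simulated and illustrative; the chapter's two-sample and growth-curve (GMANOVA) extensions follow the same pattern.

```python
import numpy as np

# Hedged sketch of one-sample profile analysis: test H0: C mu = 0 (a flat
# profile) via Hotelling's T^2 on the contrast-transformed responses.
# Illustrative data, not from the chapter.
rng = np.random.default_rng(2)
n, p = 40, 4
mu = np.array([1.0, 1.5, 2.0, 2.5])       # a deliberately non-flat profile
Y = mu + rng.normal(size=(n, p))

C = np.diff(np.eye(p), axis=0)            # (p-1) x p successive-difference contrasts
Z = Y @ C.T                               # differences between adjacent responses
zbar = Z.mean(axis=0)
S = np.cov(Z, rowvar=False)
T2 = n * zbar @ np.linalg.solve(S, zbar)  # Hotelling's T^2 for H0: C mu = 0
F = (n - p + 1) / ((n - 1) * (p - 1)) * T2  # F_{p-1, n-p+1} reference distribution
```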

  • Multivariate Linear Models
    Springer Texts in Statistics, 2001
    Co-Authors: Ronald Christensen
    Abstract:

    Chapters I, II, and III examine topics in multivariate analysis. Specifically, they discuss multivariate linear models, discriminant analysis, principal components, and factor analysis. The basic ideas behind these subjects are closely related to linear model theory. Multivariate linear models are simply linear models with more than one dependent variable. Discriminant analysis is closely related to both Mahalanobis’s distance (cf. Christensen, 1987, Section XIII.1) and multivariate one-way analysis of variance. Principal components are user-constructed variables which are best linear predictors (cf. Christensen, 1987, Section VI.3) of the original data. Factor analysis has ties to both multivariate linear models and principal components.
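
The "best linear predictors" remark about principal components can be illustrated with a small hedged sketch: the rank-k reconstruction from the top k principal components minimizes mean squared reconstruction error over all rank-k linear reconstructions (the Eckart-Young property), so any other k-dimensional projection does no better. The data are simulated and illustrative.

```python
import numpy as np

# Hedged sketch: principal components give the best rank-k linear
# reconstruction of (centered) data, so PCA's reconstruction error is no
# larger than that of any other k-dimensional projection.
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))
Xc = X - X.mean(axis=0)                       # center the data

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
recon = (Xc @ Vt[:k].T) @ Vt[:k]              # rank-k PCA reconstruction
pca_err = np.mean((Xc - recon) ** 2)

Q, _ = np.linalg.qr(rng.normal(size=(5, k)))  # some other k-dim projection
other_err = np.mean((Xc - (Xc @ Q) @ Q.T) ** 2)
```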

  • Log-Linear Models and Logistic Regression
    1997
    Co-Authors: Ronald Christensen
    Abstract:

    The primary focus here is on log-linear models for contingency tables, but in this second edition, greater emphasis has been placed on logistic regression. The book explores topics such as logistic discrimination and generalized linear models, and builds upon the relationships between these basic models for continuous data and the analogous log-linear and logistic regression models for discrete data. It also carefully examines the differences in model interpretations and evaluations that occur due to the discrete nature of the data. Sample commands are given for analyses in SAS, BMDP, and GLIM, while numerous data sets from fields as diverse as engineering, education, sociology, and medicine are used to illustrate procedures and provide exercises. Throughout the book, the treatment is designed for students with prior knowledge of analysis of variance and regression.
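
One of the basic models the book connects to log-linear models can be sketched directly: logistic regression fitted by Newton's method (iteratively reweighted least squares). The data below are simulated and illustrative; the book's own worked examples use statistical packages rather than Python.

```python
import numpy as np

# Hedged sketch of logistic regression fitted by Newton's method (IRLS).
# Illustrative only: simulated data, standard GLM fitting, not code from
# the book.
rng = np.random.default_rng(4)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([-0.5, 1.0, 2.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

beta = np.zeros(3)
for _ in range(25):                       # Newton / IRLS iterations
    mu = 1.0 / (1.0 + np.exp(-X @ beta))  # fitted probabilities
    W = mu * (1.0 - mu)                   # GLM working weights
    H = X.T @ (W[:, None] * X)            # Fisher information
    beta = beta + np.linalg.solve(H, X.T @ (y - mu))
```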

  • Log-Linear Models
    1994
    Co-Authors: Ronald Christensen
    Abstract:

    This book examines log-linear models for contingency tables. It uses previous knowledge of analysis of variance and regression to motivate and explicate the use of log-linear models. It is a textbook primarily directed at advanced Master's degree students in statistics, but can be used at both higher and lower levels. Outlines for introductory, intermediate, and advanced courses are given in the preface. All the fundamental statistics for analyzing data using log-linear models are given.