Candidate Model

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 360 Experts worldwide ranked by ideXlab platform

Maiza Bekara - One of the best experts on this subject based on the ideXlab platform.

  • a criterion for Model selection in the presence of incomplete data based on kullback s symmetric divergence
    Signal Processing, 2005
    Co-Authors: Abd Krim Seghouane, Maiza Bekara, Gilles Fleury
    Abstract:

    A criterion is proposed for Model selection in the presence of incomplete data. Its construction is based on the motivations provided for the recently developed KIC criterion and for the PDIO (predictive divergence for incomplete observation Models) criterion. The proposed criterion serves as an asymptotically unbiased estimator of the complete-data Kullback-Leibler symmetric divergence between a Candidate Model and the generating Model. It is therefore a natural extension of KIC to settings where the observed data are incomplete, and it reduces to KIC when no data are missing. The proposed criterion differs from PDIO in both its goodness-of-fit term and its complexity term, but differs from AICcd (where "cd" stands for "complete data") only in its complexity term. Unlike AIC, KIC, and PDIO, this criterion can be evaluated using only complete-data tools, readily available through the EM and SEM algorithms. The performance of the proposed criterion relative to other well-known criteria is examined in a simulation study.
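Assuming KIC takes the form −2 log L̂ + 3k (a penalty of 3 per parameter where AIC charges 2; this form comes from the KIC literature, not from the abstract), criterion-based selection can be sketched as:

```python
import numpy as np

def gaussian_loglik(y, yhat):
    """Gaussian log-likelihood at the ML variance estimate."""
    n = len(y)
    sigma2 = float(np.sum((y - yhat) ** 2)) / n
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)

def aic(loglik, k):
    return -2.0 * loglik + 2.0 * k

def kic(loglik, k):
    """Kullback information criterion: penalty 3k instead of AIC's 2k (assumed form)."""
    return -2.0 * loglik + 3.0 * k

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=50)   # true model is degree 1

scores = {}
for degree in (1, 2, 5):
    coeffs = np.polyfit(x, y, degree)
    yhat = np.polyval(coeffs, x)
    k = degree + 2                 # polynomial coefficients plus noise variance
    ll = gaussian_loglik(y, yhat)
    scores[degree] = (aic(ll, k), kic(ll, k))

best_kic = min(scores, key=lambda d: scores[d][1])
print(best_kic)                    # the low-order true model should usually win
```

The heavier penalty makes KIC more conservative than AIC about adding parameters, which is the behavior the symmetric-divergence motivation is meant to buy.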

  • a small sample Model selection criterion based on kullback s symmetric divergence
    IEEE Transactions on Signal Processing, 2004
    Co-Authors: Abd‐krim Seghouane, Maiza Bekara
    Abstract:

    The Kullback information criterion (KIC) is a recently developed tool for statistical Model selection. KIC serves as an asymptotically unbiased estimator of a variant (within a constant) of the Kullback symmetric divergence, also known as the J-divergence, between the generating Model and the fitted Candidate Model. In this paper, a bias correction to KIC is derived for linear regression Models. The correction is of particular use when the sample size is small or when the number of fitted parameters is a moderate to large fraction of the sample size. For linear regression Models, the corrected criterion, called KICc, is an exactly unbiased estimator of the variant of the Kullback symmetric divergence, assuming that the true Model is correctly specified or overfitted. Furthermore, when applied to polynomial regression and autoregressive time-series Modeling, KICc is found to estimate the Model order more accurately than other asymptotically efficient methods. Finally, KICc is tested on real data for forecasting foreign currency exchange rates; the results compare favorably with those of classical techniques.
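The paper's exact small-sample correction is not reproduced here, but the mechanics of autoregressive order selection with a KIC-type criterion can be sketched as follows (the 3(p + 1) penalty is the asymptotic KIC form, an assumption on our part; the least-squares fit stands in for full ML):

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares fit of an AR(p) model; returns the residual variance."""
    n = len(y)
    X = np.column_stack([y[p - i - 1 : n - i - 1] for i in range(p)])
    target = y[p:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coef
    return float(np.mean(resid ** 2))

def kic_ar(y, p):
    """KIC-style score for an AR(p) fit: m log(sigma^2) + 3(p + 1).

    Note the effective sample m shrinks with p; a careful comparison
    would condition all fits on the same max-lag sample.
    """
    m = len(y) - p
    return m * np.log(fit_ar(y, p)) + 3.0 * (p + 1)

rng = np.random.default_rng(1)
n = 400
y = np.zeros(n)
for t in range(2, n):              # simulate a stable AR(2) process
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

best = min(range(1, 7), key=lambda p: kic_ar(y, p))
print(best)                        # the true order, 2, should usually be selected
```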

Xindong Zhao - One of the best experts on this subject based on the ideXlab platform.

  • on time series Model selection involving many Candidate arma Models
    Computational Statistics & Data Analysis, 2007
    Co-Authors: Guoqi Qian, Xindong Zhao
    Abstract:

    We study how to perform Model selection for time series data where millions of Candidate ARMA Models may be eligible for selection. We propose a feasible computing method based on the Gibbs sampler. With this method, Model selection is performed through a random sample generation algorithm, and, given a Model of fixed dimension, parameter estimation is carried out by maximum likelihood. Our method takes into account several computing difficulties encountered in estimating ARMA Models, and under some regularity conditions it selects the best Candidate Model with probability approaching 1 in the limit. We then propose several empirical rules for implementing the method in applications. Finally, a simulation study and an example Modelling China's Consumer Price Index (CPI) data are presented for illustration and verification.
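The appeal of a Gibbs-style search is that it visits Models in proportion to their criterion support instead of enumerating millions of Candidates. A minimal sketch of that idea, using lag-inclusion indicators for a pure AR model and BIC-based conditionals (the ARMA machinery and the paper's exact sampler are not reproduced; everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def bic_subset(y, lags, pmax=6):
    """BIC of a linear AR model restricted to the given lag subset (LS fit)."""
    n = len(y) - pmax
    target = y[pmax:]
    if lags:
        X = np.column_stack([y[pmax - l : len(y) - l] for l in lags])
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ coef
    else:
        resid = target
    sigma2 = float(np.mean(resid ** 2))
    return n * np.log(sigma2) + len(lags) * np.log(n)

n = 500
y = np.zeros(n)
for t in range(2, n):              # simulate an AR(2) process
    y[t] = 0.5 * y[t - 1] - 0.4 * y[t - 2] + rng.normal()

# Gibbs-style scan: each lag indicator is resampled from the conditional
# implied by weighting models as exp(-BIC / 2).
include = {l: False for l in range(1, 7)}
for sweep in range(20):
    for l in range(1, 7):
        with_l = sorted(k for k, v in include.items() if v or k == l)
        without_l = sorted(k for k, v in include.items() if v and k != l)
        delta = bic_subset(y, with_l) - bic_subset(y, without_l)
        p_incl = 1.0 / (1.0 + np.exp(0.5 * delta))
        include[l] = rng.random() < p_incl

selected = sorted(l for l, v in include.items() if v)
print(selected)                    # lags 1 and 2 should be retained
```

The random scan concentrates on high-support subsets, which is what makes searches over huge Candidate spaces feasible.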

Philip M. Novack-gottshall - One of the best experts on this subject based on the ideXlab platform.

  • Three-Model Model-selection support data files for Kope and Waynesville Formation samples, stratigraphic section, member, and formation aggregates
    2016
    Co-Authors: Philip M. Novack-gottshall
    Abstract:

    The file is in comma-separated value (.csv) format. The first five columns give the Paleobiology Database collection identification number, the scale of the sample (hand sample, stratigraphic section, etc.), and the stratigraphic/section names. Columns 6–14 list sample size (S, species richness) and values for eight disparity statistics (with NA designating cases where a statistic could not be calculated because there were fewer than four unique life habits in the sample); see text for descriptions and abbreviations of the statistics. The remaining columns list the classification-tree support each sample has for each Candidate Model considered, and the last column identifies which Model has the best support among those Candidates. emp3-Modelfits.csv lists Model support for the tree trained on the 50%, 90%, and 100% training data.

  • Two-Model Model-selection support data files for Kope and Waynesville Formation samples, stratigraphic section, member, and formation aggregates
    2016
    Co-Authors: Philip M. Novack-gottshall
    Abstract:

    The file is in comma-separated value (.csv) format. The first five columns give the Paleobiology Database collection identification number, the scale of the sample (hand sample, stratigraphic section, etc.), and the stratigraphic/section names. Columns 6–14 list sample size (S, species richness) and values for eight disparity statistics (with NA designating cases where a statistic could not be calculated because there were fewer than four unique life habits in the sample); see text for descriptions and abbreviations of the statistics. The remaining columns list the classification-tree support each sample has for each Candidate Model considered, and the last column identifies which Model has the best support among those Candidates. emp2-Modelfits.csv lists Model support using the classification tree trained on the 50% and 100%-strength training data sets; emp3-Modelfits.csv lists Model support for the tree trained on the 50%, 90%, and 100% training data.
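Reading such a file and recomputing the best-supported Model from the per-Model support columns can be sketched as follows; the column names and values below are hypothetical stand-ins, not the actual headers or contents of the published files:

```python
import csv
import io

# Toy stand-in for a Model-fits file; headers are invented for illustration.
toy = """collection_no,scale,section,member,formation,S,stat1,stat2,ModelA,ModelB,best_Model
1001,hand sample,SecA,M1,Kope,12,0.4,NA,0.7,0.3,ModelA
1002,section,SecB,M2,Waynesville,8,0.2,0.9,0.1,0.9,ModelB
"""

rows = list(csv.DictReader(io.StringIO(toy)))
for row in rows:
    # Recompute the best-supported Model from the per-Model support columns;
    # it should agree with the file's own last column.
    support = {m: float(row[m]) for m in ("ModelA", "ModelB")}
    best = max(support, key=support.get)
    assert best == row["best_Model"]
    print(row["collection_no"], best)
```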

Yun Zhang - One of the best experts on this subject based on the ideXlab platform.

  • drug target mining and analysis of the chinese tree shrew for pharmacological testing
    PLOS ONE, 2014
    Co-Authors: Feng Zhao, Yanjie Wang, Yun Zhang
    Abstract:

    The discovery of new drugs requires the development of improved animal Models for drug testing. The Chinese tree shrew is considered a realistic Candidate Model. To assess the potential of the Chinese tree shrew for pharmacological testing, we performed drug target prediction and analysis at the genomic and transcriptomic scales. Using our pipeline, 3,482 proteins were predicted to be drug targets. Of these predicted targets, 446 and 1,049 proteins with the highest rank and total scores, respectively, included homologs of targets for cancer chemotherapy, depression, age-related decline, and cardiovascular disease. Based on comparative analyses, more than half of the drug target proteins identified from the tree shrew genome showed higher similarity to human targets than the corresponding mouse proteins do. Target validation also demonstrated that the constitutive expression of the proteinase-activated receptors of tree shrew platelets is similar to that of human platelets but differs from that of mouse platelets. We developed an effective pipeline and search strategy for drug target prediction and for the evaluation of Model-based target identification for drug testing. This work provides useful information for future studies of the Chinese tree shrew as a source of novel targets for drug discovery research.
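The comparative step described above (keeping predicted targets whose tree shrew protein is closer to the human target than the mouse homolog is, then ranking them) can be sketched as follows; the protein names and identity scores are invented for illustration, not drawn from the paper's pipeline:

```python
# Hypothetical similarity table: identity of each species' homolog to the
# human drug target. All names and numbers are illustrative.
predicted = {
    "tsProtein1": {"shrew_vs_human": 0.95, "mouse_vs_human": 0.88},
    "tsProtein2": {"shrew_vs_human": 0.72, "mouse_vs_human": 0.80},
    "tsProtein3": {"shrew_vs_human": 0.91, "mouse_vs_human": 0.85},
}

# Keep targets where the tree shrew homolog beats the mouse homolog
# in similarity to the human protein.
closer_than_mouse = [name for name, s in predicted.items()
                     if s["shrew_vs_human"] > s["mouse_vs_human"]]

# Rank survivors by similarity to the human target.
ranked = sorted(closer_than_mouse,
                key=lambda n: predicted[n]["shrew_vs_human"], reverse=True)
print(ranked)
```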

Gilles Fleury - One of the best experts on this subject based on the ideXlab platform.

  • a criterion for Model selection in the presence of incomplete data based on kullback s symmetric divergence
    Signal Processing, 2005
    Co-Authors: Abd Krim Seghouane, Maiza Bekara, Gilles Fleury
    Abstract:

    A criterion is proposed for Model selection in the presence of incomplete data. Its construction is based on the motivations provided for the recently developed KIC criterion and for the PDIO (predictive divergence for incomplete observation Models) criterion. The proposed criterion serves as an asymptotically unbiased estimator of the complete-data Kullback-Leibler symmetric divergence between a Candidate Model and the generating Model. It is therefore a natural extension of KIC to settings where the observed data are incomplete, and it reduces to KIC when no data are missing. The proposed criterion differs from PDIO in both its goodness-of-fit term and its complexity term, but differs from AICcd (where "cd" stands for "complete data") only in its complexity term. Unlike AIC, KIC, and PDIO, this criterion can be evaluated using only complete-data tools, readily available through the EM and SEM algorithms. The performance of the proposed criterion relative to other well-known criteria is examined in a simulation study.

  • Model selection via worst case criterion for nonlinear bounded error estimation
    IEEE Instrumentation & Measurement Magazine, 2000
    Co-Authors: S Brahimbelhouari, Michel Kieffer, Gilles Fleury, Luc Jaulin, Eric Walter
    Abstract:

    In this paper, the problem of Model selection for measurement purposes is studied, and a new selection procedure in a deterministic framework is proposed. The problem of nonlinear bounded-error estimation is viewed as a set inversion procedure. Because each Candidate Model structure leads to a specific set of admissible values of the measurement vector, a worst-case criterion is used to select the optimal Model. The selection procedure is applied to a real measurement problem: groove dimensioning using Remote Field Eddy Current (RFEC) inspection.
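A minimal sketch of the worst-case selection idea, using a grid search as a stand-in for proper set inversion on a toy linear-versus-constant comparison (the Models, bounds, and data are illustrative, not the RFEC application):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 9)
y = 2.0 * x + 0.5                  # toy "measurements"
eps = 0.2                          # error bound on each measurement

def feasible_params(model, grid):
    """Parameters whose predictions stay within +/- eps of every datum
    (a crude grid stand-in for set inversion)."""
    return [p for p in grid if np.all(np.abs(model(x, p) - y) <= eps)]

def worst_case(model, params):
    """Worst-case spread of predictions over the feasible parameter set."""
    preds = np.array([model(x, p) for p in params])
    return float(np.max(preds.max(axis=0) - preds.min(axis=0)))

linear = lambda x, p: p[0] * x + p[1]
constant = lambda x, p: np.full_like(x, p[0])

grid2 = [(a, b) for a in np.linspace(0, 4, 41) for b in np.linspace(-1, 2, 31)]
grid1 = [(c,) for c in np.linspace(-1, 4, 101)]

feas_lin = feasible_params(linear, grid2)
feas_con = feasible_params(constant, grid1)

# The constant Model admits no parameters at all for sloped data, so the
# linear structure is selected; its worst-case spread is bounded by 2 * eps.
print(len(feas_con), len(feas_lin) > 0)
```

A Model whose admissible set is empty is rejected outright; among the survivors, the one with the smallest worst-case spread is preferred.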