Density Estimate

The experts below are selected from a list of 321 experts worldwide, ranked by the ideXlab platform.

C. J. Harris - One of the best experts on this subject based on the ideXlab platform.

  • Probability Density Estimation With Tunable Kernels Using Orthogonal Forward Regression
    IEEE Transactions on Systems Man and Cybernetics Part B (Cybernetics), 2010
    Co-Authors: S. Chen, X. Hong, C. J. Harris
    Abstract:

    A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely, its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to enforce the nonnegativity and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. At the same time, it does not optimize all the model parameters together and thus avoids the high-dimensional, ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct a very compact yet accurate density estimate.
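
    As a rough illustration of the final weight-updating step, the sketch below fits nonnegative kernel mixing weights that sum to one with a Lee-Seung-style multiplicative update; the paper's exact MNQP iteration differs in detail, and the function names and data here are illustrative assumptions.

    ```python
    import numpy as np

    def gaussian_responses(X, centers, sigma):
        """Kernel response matrix Phi[i, j] = N(X[i]; centers[j], sigma^2 I)."""
        d = X.shape[1]
        sq = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-sq / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2) ** (d / 2)

    def multiplicative_weight_update(Phi, target, n_iter=500):
        """Fit w >= 0 with sum(w) = 1 by minimizing ||Phi w - target||^2,
        using a Lee-Seung-style multiplicative update plus renormalization
        (a stand-in for the paper's exact MNQP iteration). Weights of
        redundant kernels decay toward zero, pruning the model."""
        m = Phi.shape[1]
        w = np.full(m, 1.0 / m)
        num = Phi.T @ target                         # nonnegative: target >= 0
        for _ in range(n_iter):
            w *= num / (Phi.T @ (Phi @ w) + 1e-12)   # preserves w >= 0
            w /= w.sum()                             # unity constraint
        return w
    ```

    With `Phi` holding the selected kernels' responses at the training points and `target` a pilot density estimate (for example a Parzen window estimate) evaluated there, kernels whose weights collapse to near zero can be dropped, mirroring the model-size reduction the abstract describes.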

  • Probability Density Function Estimation Using Orthogonal Forward Regression
    2007 International Joint Conference on Neural Networks, 2007
    Co-Authors: S. Chen, X. Hong, C. J. Harris
    Abstract:

    Using the classical Parzen window estimate as the target function, kernel density estimation is formulated as a regression problem, and the orthogonal forward regression technique is adopted to construct sparse kernel density estimates. The proposed algorithm incrementally minimises a leave-one-out test error score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights are finally updated using the multiplicative nonnegative quadratic programming algorithm, which can reduce the model size further. Except for the kernel width, the proposed algorithm has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Two examples demonstrate the ability of this regression-based approach to construct a sparse kernel density estimate with accuracy comparable to that of the full-sample optimised Parzen window density estimate.
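
    A minimal sketch of the regression formulation, with plain greedy forward selection standing in for the paper's orthogonal forward regression and leave-one-out machinery (function names, the stopping rule, and the unconstrained least-squares fit are simplifying assumptions):

    ```python
    import numpy as np

    def parzen_target(X, sigma):
        """Parzen window density estimate, evaluated at each training point."""
        d = X.shape[1]
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
        K = np.exp(-sq / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2) ** (d / 2)
        return K.mean(axis=1)

    def greedy_sparse_kde(X, sigma, max_kernels=10):
        """Greedily pick kernel centers whose combination best fits the
        Parzen target in squared error (the paper instead orthogonalises
        candidates and scores them by a leave-one-out criterion)."""
        t = parzen_target(X, sigma)
        d = X.shape[1]
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
        Phi = np.exp(-sq / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2) ** (d / 2)
        selected = []
        for _ in range(max_kernels):
            best_j, best_err = None, np.inf
            for j in range(X.shape[0]):
                if j in selected:
                    continue
                cols = Phi[:, selected + [j]]
                w, *_ = np.linalg.lstsq(cols, t, rcond=None)  # unconstrained fit
                err = np.sum((cols @ w - t) ** 2)
                if err < best_err:
                    best_j, best_err = j, err
            selected.append(best_j)
        return selected
    ```

    In the paper the resulting weights are then passed through the MNQP update to restore nonnegativity and normalisation, as in the sketch after the first abstract above.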

  • Sparse kernel density construction using orthogonal forward regression with leave-one-out test score and local regularization
    IEEE Transactions on Systems Man and Cybernetics Part B (Cybernetics), 2004
    Co-Authors: S. Chen, X. Hong, C. J. Harris
    Abstract:

    This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic: the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user must specify a critical algorithm parameter. Several examples demonstrate the ability of the proposed algorithm to construct a very sparse kernel density estimate with accuracy comparable to that of the full-sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
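
    The leave-one-out ingredient can be isolated: for a Parzen window estimate, the leave-one-out density at a training point has a closed form, so a LOO score can rank candidates without refitting. The sketch below uses a LOO log-likelihood to pick a kernel width; the candidate grid and names are assumptions, and the paper applies the LOO idea inside the forward regression rather than to the bandwidth alone.

    ```python
    import numpy as np

    def loo_log_likelihood(X, sigma):
        """Leave-one-out log-likelihood of a Parzen window estimate:
        f_{-i}(X_i) = (1 / (n - 1)) * sum_{j != i} K_sigma(X_i - X_j)."""
        n, d = X.shape
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
        K = np.exp(-sq / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2) ** (d / 2)
        np.fill_diagonal(K, 0.0)            # leave each point out of its own sum
        f_loo = K.sum(axis=1) / (n - 1)
        return np.log(f_loo + 1e-300).sum()

    X = np.random.default_rng(0).normal(size=(300, 2))
    widths = [0.1, 0.2, 0.4, 0.8]
    best = max(widths, key=lambda s: loo_log_likelihood(X, s))
    ```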

S. Chen - One of the best experts on this subject based on the ideXlab platform.

  • Probability Density Estimation With Tunable Kernels Using Orthogonal Forward Regression
    IEEE Transactions on Systems Man and Cybernetics Part B (Cybernetics), 2010
    Co-Authors: S. Chen, X. Hong, C. J. Harris
    Abstract: as listed under C. J. Harris above.

  • Probability Density Function Estimation Using Orthogonal Forward Regression
    2007 International Joint Conference on Neural Networks, 2007
    Co-Authors: S. Chen, X. Hong, C. J. Harris
    Abstract: as listed under C. J. Harris above.

  • Sparse kernel density construction using orthogonal forward regression with leave-one-out test score and local regularization
    IEEE Transactions on Systems Man and Cybernetics Part B (Cybernetics), 2004
    Co-Authors: S. Chen, X. Hong, C. J. Harris
    Abstract: as listed under C. J. Harris above.

Luc Devroye - One of the best experts on this subject based on the ideXlab platform.

  • Choosing a Density Estimate
    Combinatorial Methods in Density Estimation, 2020
    Co-Authors: Luc Devroye, Gábor Lugosi
    Abstract:

    Consider the following simple situation: $g_n$ and $f_n$ are two density estimates, and we must select the better one, that is, $\arg\min(\int |f_n - f|, \int |g_n - f|)$. More precisely, given the sample $X_1, \ldots, X_n$ distributed according to density $f$, we are asked to construct a density estimate $\varphi_n$ such that $$\int |\varphi_n - f| \approx \min\left(\int |f_n - f|, \int |g_n - f|\right).$$ This simple problem turns out to be surprisingly difficult, even if the estimates $f_n$ and $g_n$ are fixed densities, not depending on the data.
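
    The chapter's starting point can be made concrete with the Scheffé-set comparison: form $A = \{x : f_n(x) > g_n(x)\}$ and prefer the estimate whose probability mass on $A$ is closer to the empirical frequency of $A$ on data not used to build the estimates. A minimal one-dimensional sketch under that reading (the grid-based integration and all names are assumptions):

    ```python
    import numpy as np

    def choose_density(f_n, g_n, holdout, grid):
        """Scheffe-set selection between two univariate density estimates.
        f_n, g_n: vectorised callables returning density values.
        holdout: sample points not used to build either estimate.
        grid: fine 1-D grid covering the support, for numerical integration."""
        fa, ga = f_n(grid), g_n(grid)
        A = fa > ga                                # the Scheffe set {f_n > g_n}
        dx = grid[1] - grid[0]
        int_f = (fa * A).sum() * dx                # integral of f_n over A
        int_g = (ga * A).sum() * dx                # integral of g_n over A
        mu = (f_n(holdout) > g_n(holdout)).mean()  # empirical measure of A
        return f_n if abs(int_f - mu) < abs(int_g - mu) else g_n
    ```

    Up to an empirical deviation term over $A$, the winner's $L_1$ error is within a constant factor of the better of the two, which is what makes the seemingly difficult selection problem tractable.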

  • Weighted k-nearest neighbor density estimates
    Lectures on the Nearest Neighbor Method, 2020
    Co-Authors: Gérard Biau, Luc Devroye
    Abstract:

    There are different ways to weight or smooth the k-nearest neighbor density estimate. Some key ideas are surveyed in this chapter, and for some of them consistency theorems are stated.
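
    One simple member of this family averages standard k-NN density estimates over several values of k with nonnegative weights summing to one; the weighting schemes analysed in the chapter may differ, and the names below are illustrative.

    ```python
    import numpy as np
    from math import gamma, pi

    def unit_ball_volume(d):
        """Volume V_d of the unit ball in R^d."""
        return pi ** (d / 2) / gamma(d / 2 + 1)

    def weighted_knn_density(x, X, ks, weights):
        """Convex combination of standard k-NN density estimates
        f_k(x) = k / (n * V_d * R_k(x)^d), where R_k(x) is the distance
        from x to its k-th nearest sample point."""
        n, d = X.shape
        dists = np.sort(np.linalg.norm(X - x, axis=1))
        vd = unit_ball_volume(d)
        est = [k / (n * vd * dists[k - 1] ** d) for k in ks]
        return float(np.dot(weights, est))

    # Example: average the k = 5, 10, 20 estimates with equal weights.
    X = np.random.default_rng(1).normal(size=(1000, 2))
    print(weighted_knn_density(np.zeros(2), X, ks=[5, 10, 20],
                               weights=[1 / 3, 1 / 3, 1 / 3]))
    ```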

  • Estimation of a Density Using Real and Artificial Data
    IEEE Transactions on Information Theory, 2013
    Co-Authors: Luc Devroye, Tina Felber, Michael Kohler
    Abstract:

    Let $X, X_1, X_2, \ldots$ be independent and identically distributed $\mathbb{R}^d$-valued random variables and let $m: \mathbb{R}^d \to \mathbb{R}$ be a measurable function such that a density $f$ of $Y = m(X)$ exists. Given a sample of the distribution of $(X, Y)$ and additional independent observations of $X$, we are interested in estimating $f$. We apply a regression estimate to the sample of $(X, Y)$ and use this estimate to generate additional artificial observations of $Y$. Using these artificial observations together with the real observations of $Y$, we construct a density estimate of $f$ as a convex combination of two kernel density estimates. It is shown that if the bandwidths satisfy the usual conditions and if, in addition, the supremum norm error of the regression estimate converges to zero almost surely faster than the bandwidth of the kernel density estimate applied to the artificial data, then the convex combination of the two density estimates is $L_1$-consistent. The performance of the estimate for finite sample size is illustrated on simulated data, and the usefulness of the procedure is demonstrated by applying it to a density estimation problem in a simulation model.
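
    A minimal sketch of the construction under stated assumptions: fit a regression estimate on the $(X, Y)$ pairs, push the extra $X$ observations through it to obtain artificial $Y$ values, and blend the two kernel density estimates. The choice of regressor, bandwidths, and mixing weight below are illustrative, not the paper's tuned values.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)

    def m(x):
        """Unknown target function (assumed here for illustration)."""
        return np.sin(3 * x[:, 0]) + x[:, 0] ** 2

    # Real data: n pairs (X, Y); extra unlabeled observations of X.
    X = rng.uniform(-1, 1, size=(200, 1))
    Y = m(X)
    X_extra = rng.uniform(-1, 1, size=(2000, 1))

    # Regression estimate of m, then artificial observations of Y.
    reg = KNeighborsRegressor(n_neighbors=10).fit(X, Y)
    Y_art = reg.predict(X_extra)

    # Convex combination of two kernel density estimates of f (density of Y).
    kde_real, kde_art = gaussian_kde(Y), gaussian_kde(Y_art)
    lam = 0.5                                      # mixing weight (assumed)
    f_hat = lambda y: lam * kde_real(y) + (1 - lam) * kde_art(y)
    print(f_hat(np.linspace(-1, 2, 5)))
    ```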

  • A weighted k-nearest neighbor density estimate for geometric inference
    Electronic Journal of Statistics, 2011
    Co-Authors: Gérard Biau, Luc Devroye, Frédéric Chazal, David Cohen-Steiner, Carlos Rodriguez
    Abstract:

    Motivated by a broad range of potential applications in topological and geometric inference, we introduce a weighted version of the k-nearest neighbor density estimate. Various pointwise consistency results of this estimate are established. We present a general central limit theorem under the lightest possible conditions. In addition, a strong approximation result is obtained and the choice of the optimal set of weights is discussed. In particular, the classical k-nearest neighbor estimate is not optimal in a sense described in the manuscript. The proposed method has been implemented to recover level sets in both simulated and real-life data.
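
    The level-set application mentioned at the end can be illustrated with any pointwise-consistent density estimate: evaluate the estimate on a grid and keep the region where it exceeds a threshold $t$, approximating $\{f \ge t\}$. The sketch plugs in the plain (unweighted) k-NN estimate; the sample, grid, and threshold are assumptions.

    ```python
    import numpy as np
    from math import gamma, pi

    def knn_density(points, X, k):
        """Plain k-NN density estimate evaluated at each row of `points`."""
        n, d = X.shape
        vd = pi ** (d / 2) / gamma(d / 2 + 1)      # volume of the unit ball
        out = np.empty(len(points))
        for i, x in enumerate(points):
            rk = np.sort(np.linalg.norm(X - x, axis=1))[k - 1]
            out[i] = k / (n * vd * rk ** d)
        return out

    # Recover the level set {f >= t} of a two-cluster sample on a grid.
    rng = np.random.default_rng(2)
    X = np.r_[rng.normal([0, 0], 0.3, (500, 2)),
              rng.normal([2, 1], 0.3, (500, 2))]
    xs, ys = np.meshgrid(np.linspace(-1, 3, 80), np.linspace(-1, 2, 60))
    grid = np.c_[xs.ravel(), ys.ravel()]
    level_set = knn_density(grid, X, k=25) >= 0.05  # boolean mask over the grid
    ```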

  • On the Hilbert kernel Density Estimate
    Statistics & Probability Letters, 1999
    Co-Authors: Luc Devroye, Adam Krzyżak
    Abstract:

    Let $X$ be an $\mathbb{R}^d$-valued random variable with unknown density $f$. Let $X_1, \ldots, X_n$ be i.i.d. random variables drawn from $f$. We study the pointwise convergence of a new class of density estimates, of which the most striking member is the Hilbert kernel estimate $$f_n(x) = \frac{1}{n \log n} \sum_{i=1}^{n} \frac{1}{V_d \, \|x - X_i\|^d},$$ where $V_d$ is the volume of the unit ball in $\mathbb{R}^d$. This is particularly interesting as this density estimate is basically of the format of the kernel estimate (except for the $\log n$ factor in front), and yet it does not have a smoothing parameter.
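
    A direct transcription of the estimate above (the display formula is reconstructed from the definition in Devroye and Krzyżak's paper; variable names are ours):

    ```python
    import numpy as np
    from math import gamma, pi, log

    def hilbert_kernel_estimate(x, X):
        """Hilbert kernel density estimate
        f_n(x) = (1 / (n log n)) * sum_i 1 / (V_d * ||x - X_i||^d).
        There is no bandwidth: the only 'smoothing' is the log n factor,
        and the estimate blows up at the data points themselves."""
        n, d = X.shape
        vd = pi ** (d / 2) / gamma(d / 2 + 1)      # volume of the unit ball
        r = np.linalg.norm(X - x, axis=1)
        return np.sum(1.0 / (vd * r ** d)) / (n * log(n))

    X = np.random.default_rng(3).normal(size=(5000, 2))
    print(hilbert_kernel_estimate(np.zeros(2), X))  # ~ N(0, I) density at 0
    ```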

X. Hong - One of the best experts on this subject based on the ideXlab platform.

  • Probability Density Estimation With Tunable Kernels Using Orthogonal Forward Regression
    IEEE Transactions on Systems Man and Cybernetics Part B (Cybernetics), 2010
    Co-Authors: S. Chen, X. Hong, C. J. Harris
    Abstract: as listed under C. J. Harris above.

  • Probability Density Function Estimation Using Orthogonal Forward Regression
    2007 International Joint Conference on Neural Networks, 2007
    Co-Authors: S. Chen, X. Hong, C. J. Harris
    Abstract: as listed under C. J. Harris above.

  • Sparse kernel density construction using orthogonal forward regression with leave-one-out test score and local regularization
    IEEE Transactions on Systems Man and Cybernetics Part B (Cybernetics), 2004
    Co-Authors: S. Chen, X. Hong, C. J. Harris
    Abstract: as listed under C. J. Harris above.

Alexander Johannes Smola - One of the best experts on this subject based on the ideXlab platform.

  • The kernel mutual information
    2004
    Co-Authors: Arthur Gretton, Ralf Herbrich, Alexander Johannes Smola
    Abstract:

    We introduce a new contrast function, the kernel mutual information (KMI), to measure the degree of independence of continuous random variables. This contrast function provides an approximate upper bound on the mutual information, as measured near independence, and is based on a kernel density estimate of the mutual information between a discretised approximation of the continuous random variables. We show that Bach and Jordan's kernel generalised variance (KGV) is also an upper bound on the same kernel density estimate, but is looser. Finally, we suggest that the addition of a regularising term in the KGV causes it to approach the KMI, which motivates the introduction of this regularisation.
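
    The quantity being bounded can be sketched directly: estimate the joint density of $(x, y)$ with a kernel density estimate, discretise both axes into cells, and compute the mutual information of the resulting discrete distribution. This illustrates the discretised mutual information that the KMI and KGV bound, not the KMI contrast function itself; grid sizes and names are assumptions.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def discretised_mi_from_kde(x, y, bins=40):
        """Mutual information of a binned approximation of (x, y), with the
        joint cell probabilities taken from a 2-D kernel density estimate."""
        kde = gaussian_kde(np.vstack([x, y]))
        gx = np.linspace(x.min(), x.max(), bins)
        gy = np.linspace(y.min(), y.max(), bins)
        XX, YY = np.meshgrid(gx, gy)
        p = kde(np.vstack([XX.ravel(), YY.ravel()])).reshape(bins, bins)
        p /= p.sum()                        # joint cell probabilities
        px, py = p.sum(axis=0), p.sum(axis=1)
        mask = p > 0
        return float((p * np.log(p / np.outer(py, px)))[mask].sum())

    rng = np.random.default_rng(4)
    x = rng.normal(size=2000)
    y = 0.8 * x + 0.6 * rng.normal(size=2000)   # dependent pair: MI > 0
    print(discretised_mi_from_kde(x, y))
    ```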

  • The kernel mutual information
    2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), 2003
    Co-Authors: Arthur Gretton, Ralf Herbrich, Alexander Johannes Smola
    Abstract:

    We introduce a new contrast function, the kernel mutual information (KMI), to measure the degree of independence of continuous random variables. This contrast function provides an approximate upper bound on the mutual information, as measured near independence, and is based on a kernel density estimate of the mutual information between a discretised approximation of the continuous random variables. We show that the kernel generalised variance (KGV) of F. Bach and M. Jordan (JMLR, vol. 3, pp. 1-48, 2002) is also an upper bound on the same kernel density estimate, but is looser. Finally, we suggest that the addition of a regularising term in the KGV causes it to approach the KMI, which motivates the introduction of this regularisation.