Normalization Term

The experts below are selected from a list of 13,878 experts worldwide, ranked by the ideXlab platform.

T.L. Ainsworth - One of the best experts on this subject based on the ideXlab platform.

  • Polarimetric SAR characterization of man-made structures in urban areas using normalized circular-pol correlation coefficients
    Remote Sensing of Environment, 2008
    Co-Authors: T.L. Ainsworth, Dale L. Schuler
    Abstract:

    Polarimetric Synthetic Aperture Radar (SAR) backscatter from man-made structures in urban areas is quite different from backscatter from predominantly natural areas. Backscatter from natural areas is often reflection symmetric, i.e., characterized by near-zero values for covariance matrix off-diagonal terms of the form $\langle S_{HV} S_{HH}^{*} \rangle$, $\langle S_{HV} S_{VV}^{*} \rangle$ and their conjugates. A new approach is proposed to detect scattering from non-reflection-symmetric structures using circular-pol (RR-LL) correlation coefficients, $|\rho|$. This method creates a normalization term, $|\rho_0|$, and then forms the ratio $|\rho|/|\rho_0|$. The normalization term $|\rho_0|$ contains the same diagonal terms of the covariance matrix, but the $\langle S_{HV} S_{HH}^{*} \rangle$ and $\langle S_{HV} S_{VV}^{*} \rangle$ off-diagonal terms and their conjugates are purposely set to zero. The ratio $|\rho|/|\rho_0|$ is rewritten as a product of separable helicity ($\tau$) and orientation angle ($\theta$) dependencies. The mathematical form of the $\tau$ dependence is a resonant singularity, or pole, term. This pole significantly enhances returns from man-made, high-helicity, non-reflection-symmetric structures, whose values of $\tau$ lie near the resonance at $\tau = \pm 1$. Natural scatterers possess very strong RR/LL symmetry ($\tau \approx 0$), so their pole response is correspondingly weak. The dependence of $|\rho|/|\rho_0|$ on the orientation angle $\theta$ is known from previous studies to be useful for measuring urban building alignments (relative to the azimuth direction) and for measuring surface topography. The ratio $|\rho|/|\rho_0|$ suppresses much of the unneeded image detail due to backscatter variations from natural areas of different surface roughness. This image simplification further facilitates detection of localized man-made targets.
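
    To make the construction above concrete, here is a minimal NumPy sketch (not code from the paper; the function name and the synthetic covariance matrix are illustrative assumptions). It uses the standard linear-to-circular relations $S_{RR} = (S_{HH} - S_{VV})/2 + i S_{HV}$ and $S_{LL} = (S_{VV} - S_{HH})/2 + i S_{HV}$ to form $|\rho|$ and $|\rho_0|$ directly from a lexicographic covariance matrix $C = \langle k k^{H} \rangle$ with $k = [S_{HH}, S_{HV}, S_{VV}]^{T}$:

    ```python
    import numpy as np

    def circular_pol_ratio(C):
        """|rho| / |rho_0| from a 3x3 covariance matrix C = <k k^H>,
        with k = [S_HH, S_HV, S_VV]^T in the linear basis."""
        a = np.array([0.5, 1j, -0.5])    # S_RR =  0.5*S_HH + i*S_HV - 0.5*S_VV
        b = np.array([-0.5, 1j, 0.5])    # S_LL = -0.5*S_HH + i*S_HV + 0.5*S_VV

        def rho(M):
            num = a @ M @ b.conj()       # <S_RR S_LL*>
            den = np.sqrt((a @ M @ a.conj()).real * (b @ M @ b.conj()).real)
            return abs(num) / den

        # Normalization term |rho_0|: same diagonal terms, but the
        # <S_HV S_HH*> and <S_HV S_VV*> entries (and conjugates) set to zero.
        C0 = C.copy()
        C0[0, 1] = C0[1, 0] = C0[1, 2] = C0[2, 1] = 0.0
        return rho(C) / rho(C0)

    # Reflection-symmetric (natural-area) example: the HV cross terms are
    # already zero, so |rho|/|rho_0| = 1 and no enhancement occurs.
    C_nat = np.array([[1.0, 0.0, 0.3],
                      [0.0, 0.2, 0.0],
                      [0.3, 0.0, 0.8]], dtype=complex)
    print(circular_pol_ratio(C_nat))    # -> 1.0
    ```

    Because $\rho$ and $\rho_0$ share the same diagonal terms, the ratio cancels overall backscatter strength and roughness variations, which is the image simplification the abstract describes.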

  • IGARSS - Polarimetric SAR Detection of Man-Made Structures Using Normalized Circular-pol Correlation Coefficients
    2006 IEEE International Symposium on Geoscience and Remote Sensing, 2006
    Co-Authors: Dale L. Schuler, J.S. Lee, T.L. Ainsworth
    Abstract:

    Polarimetric synthetic aperture radar (SAR) backscatter from man-made structures is often quite different from backscatter from predominantly natural areas. Backscatter from natural areas is often characterized by near-zero values for linear-basis covariance matrix off-diagonal terms of the form $\langle S_{HV} S_{HH}^{*} \rangle$ and $\langle S_{HV} S_{VV}^{*} \rangle$. A new approach is proposed to detect man-made structures using circular-pol RR-LL correlation coefficients. This method uses a normalization term, which enhances the return from man-made structures and eliminates most of the unnecessary detail in the backscatter from natural areas.

Kenji Yamanishi - One of the best experts on this subject based on the ideXlab platform.

  • Efficient Computation of Normalized Maximum Likelihood Codes for Gaussian Mixture Models With Its Applications to Clustering
    IEEE Transactions on Information Theory, 2013
    Co-Authors: So Hirai, Kenji Yamanishi
    Abstract:

    This paper addresses the issue of estimating, from a given data sequence, the number of mixture components for a Gaussian mixture model (GMM). Our approach is to compute the normalized maximum likelihood (NML) code length for the data sequence relative to a GMM, then to find the mixture size that attains the minimum of the NML, on the basis of the minimum description length (MDL) principle. For finite domains, Kontkanen and Myllymaki proposed a method for efficient computation of the NML code length for specific models; for general model classes over infinite domains, however, it has remained open how to compute the NML code length efficiently. We first propose a general method for calculating the NML code length for a general exponential family. Then we apply it to the efficient computation of the NML code length for a GMM. The key idea is to restrict the data domain, in combination with the technique of employing a generating function for computing the normalization term for a GMM. We use artificial datasets to demonstrate empirically that our estimate of the mixture size converges to the true one significantly faster than other criteria.
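
    The selection loop the abstract describes can be sketched as follows. This is a hedged illustration, not the paper's algorithm: scikit-learn's BIC is used as a stand-in complexity penalty (the paper minimizes the NML code length, whose complexity term requires the generating-function computation), and the data set and size range are invented:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Illustrative data: two well-separated Gaussian clusters.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
                   rng.normal(5.0, 1.0, (200, 2))])

    # MDL-style selection: minimize (fit + complexity) over the mixture size.
    best_k, best_score = None, np.inf
    for k in range(1, 7):
        gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
        score = gmm.bic(X)              # stand-in for the NML code length
        if score < best_score:
            best_k, best_score = k, score
    print("selected mixture size:", best_k)   # 2 for this data
    ```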

  • Normalized Maximum Likelihood Coding for Exponential Family with Its Applications to Optimal Clustering
    arXiv: Learning, 2012
    Co-Authors: So Hirai, Kenji Yamanishi
    Abstract:

    We are concerned with the issue of how to calculate the normalized maximum likelihood (NML) code length. There is a problem that the normalization term of the NML code length may diverge when the data domain is continuous and unbounded, while a straightforward computation of it is highly expensive when the data domain is finite. Previous work has investigated how to calculate the NML code length for specific types of distributions. We first propose a general method for computing the NML code length for the exponential family. Then we focus specifically on the Gaussian mixture model (GMM) and propose a new efficient method for computing its NML code length, developed by generalizing Rissanen's renormalizing technique. We then apply this method to the clustering problem, in which a clustering structure is modeled using a GMM and the main task is to estimate the optimal number of clusters on the basis of the NML code length. We demonstrate using artificial data sets the superiority of NML-based clustering over other criteria such as AIC and BIC in terms of the data size required to achieve a high accuracy rate.
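
    For reference, the NML code length and its normalization term can be written from the standard definitions (our notation, not reproduced from the paper), with $\hat\theta(\cdot)$ the maximum likelihood estimator:

    ```latex
    p_{\mathrm{NML}}(x^n) = \frac{p_{\hat\theta(x^n)}(x^n)}{\mathcal{C}_n},
    \qquad
    \mathcal{C}_n = \int p_{\hat\theta(y^n)}(y^n)\, dy^n,
    \qquad
    -\log p_{\mathrm{NML}}(x^n)
        = -\log p_{\hat\theta(x^n)}(x^n) + \log \mathcal{C}_n .
    ```

    Over an unbounded continuous domain, $\mathcal{C}_n$ already diverges for a single Gaussian (translating the data translates the fitted mean, so the maximized likelihood does not decay), which is the divergence the abstract refers to and the reason for restricting the data domain.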

Michael Wagner - One of the best experts on this subject based on the ideXlab platform.

  • Similarity normalization for speaker verification by fuzzy fusion
    Pattern Recognition, 2000
    Co-Authors: Tuan D. Pham, Michael Wagner
    Abstract:

    Similarity or likelihood normalization techniques are important for speaker verification systems, as they help to alleviate variations in the speech signals. In conventional normalization, the a priori probabilities of the cohort speakers are assumed to be equal. From this standpoint, we apply the theory of fuzzy measures and the fuzzy integral to combine the likelihood values of the cohort speakers, relaxing the assumption of equal a priori probabilities. This approach replaces the conventional normalization term by the fuzzy integral, which acts as a non-linear fusion of the similarity measures of an utterance assigned to the cohort speakers. We illustrate the performance of the proposed approach by testing the speaker verification system with both the conventional and the fuzzy algorithms on the commercial speech corpus TI46. The results, in terms of equal error rates, show that the speaker verification system using the fuzzy integral is more flexible and performs more favorably than the conventional normalization method.
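
    As an illustration of the fusion step, here is a small sketch of the Sugeno fuzzy integral over a $\lambda$-fuzzy measure, one standard realization of the fuzzy integral named in the abstract; the cohort scores and fuzzy densities are invented for the example:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def sugeno_lambda(g):
        """Solve prod(1 + lam*g_i) = 1 + lam for the lambda-measure
        parameter (lam > -1, lam != 0; lam = 0 iff sum(g) == 1)."""
        f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
        s = g.sum()
        if np.isclose(s, 1.0):
            return 0.0
        return brentq(f, 1e-6, 1e6) if s < 1.0 else brentq(f, -1 + 1e-9, -1e-6)

    def sugeno_integral(scores, g):
        """max_i min(h_(i), g(A_i)) with scores sorted in decreasing order."""
        lam = sugeno_lambda(g)
        order = np.argsort(scores)[::-1]
        h, gs = scores[order], g[order]
        G = np.empty_like(gs)
        G[0] = gs[0]
        for i in range(1, len(gs)):
            G[i] = gs[i] + G[i - 1] + lam * gs[i] * G[i - 1]
        return np.max(np.minimum(h, G))

    scores = np.array([0.72, 0.55, 0.90, 0.31])   # cohort similarity scores
    dens   = np.array([0.30, 0.20, 0.25, 0.15])   # fuzzy densities (sum < 1)
    print(sugeno_integral(scores, dens))   # fuzzy-fusion normalization term
    print(scores.mean())                   # conventional equal-weight cohort mean
    ```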

  • Speaker Verification with Fuzzy Fusion and Genetic Optimization
    Journal of Advanced Computational Intelligence and Intelligent Informatics, 1999
    Co-Authors: Tuan D. Pham, Michael Wagner
    Abstract:

    Most speaker verification systems are based on similarity or likelihood normalization techniques, as these help to better cope with speaker variability. In conventional normalization, the a priori probabilities of the cohort speakers are assumed to be equal. From this standpoint, we apply the fuzzy integral and genetic algorithms to combine the likelihood values of the cohort speakers, relaxing the assumption of equal a priori probabilities. This approach replaces the conventional normalization term by the fuzzy integral, which acts as a non-linear fusion of the similarity measures of an utterance assigned to the cohort speakers. Furthermore, genetic algorithms are applied to find optimal fuzzy densities, which are very important for the fuzzy fusion. We illustrate the performance of the proposed approach by testing the speaker verification system with both the conventional and the proposed algorithms on the commercial speech corpus TI46. The results, in terms of equal error rates, show that the speaker verification system using the fuzzy integral is more favorable than the conventional normalization method.
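
    The density optimization can likewise be sketched. The loop below is a deliberately simplified mutation-and-selection stand-in for the paper's genetic algorithm, and `eer` is a hypothetical user-supplied function returning the verifier's equal error rate for a given density vector:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def evolve_densities(eer, n_cohort=4, pop=20, gens=50, sigma=0.05):
        """Minimize eer(d) over fuzzy densities d in (0, 1)^n_cohort by
        mutation + truncation selection (simplified GA stand-in)."""
        P = rng.uniform(0.05, 0.5, (pop, n_cohort))       # initial population
        for _ in range(gens):
            kids = np.clip(P + rng.normal(0, sigma, P.shape), 1e-3, 0.999)
            both = np.vstack([P, kids])
            fit = np.array([eer(d) for d in both])        # lower EER is fitter
            P = both[np.argsort(fit)[:pop]]               # keep the best
        return P[0]
    ```

    Here `eer(d)` would score a held-out trial set with the fuzzy-integral normalization driven by densities `d`; the paper instead uses a full genetic algorithm with crossover for this search.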

Song Liu - One of the best experts on this subject based on the ideXlab platform.

  • Estimating Density Models with Truncation Boundaries
    arXiv: Machine Learning, 2019
    Co-Authors: Song Liu, Takafumi Kanamori, Daniel J. Williams
    Abstract:

    Truncated densities are probability density functions defined on truncated domains. They share the same parametric form as their non-truncated counterparts up to a normalization term. Since the computation of this normalization term is usually infeasible, unnormalized models are used for parameter estimation. Score matching (SM) is a powerful tool for fitting parameters in unnormalized models. However, it cannot be straightforwardly applied here, as the boundary conditions used to derive a tractable objective are usually not satisfied by truncated distributions. In this paper, we study parameter estimation for truncated probability densities using generalized SM. The choice of the weight function in generalized SM is critical for obtaining a computationally tractable and statistically preferable estimator, even for complicated boundaries. For the weight function, we use the distance function, defined as the distance from a point in the domain to the boundary of the domain. We show the consistency of the proposed method as well as its link with the minimum Stein discrepancy estimator. The usefulness of our method is demonstrated by numerical experiments and real-world experiments.
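
    Written out from the abstract's description (our notation; the standard integration-by-parts form of weighted score matching, not quoted from the paper), the objective with weight $g(x) = \operatorname{dist}(x, \partial V)$ on the truncated domain $V$ is:

    ```latex
    J(\theta) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\Big[
        \tfrac{1}{2}\, g(x)\, \lVert \nabla_x \log \tilde p_\theta(x) \rVert^2
        + \nabla_x g(x) \cdot \nabla_x \log \tilde p_\theta(x)
        + g(x)\, \Delta_x \log \tilde p_\theta(x) \Big].
    ```

    Only the unnormalized model $\tilde p_\theta$ appears, since the normalization term is constant in $x$ and drops out of $\nabla_x \log$; and because the distance weight $g$ vanishes on $\partial V$, the integration by parts behind this form holds without the boundary conditions that plain score matching requires.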

  • NeurIPS - Fisher Efficient Inference of Intractable Models
    2019
    Co-Authors: Song Liu, Takafumi Kanamori, Wittawat Jitkrittum, Yu Chen
    Abstract:

    The maximum likelihood estimator (MLE) has many good properties. For example, the asymptotic variance of the MLE solution attains the asymptotic Cramér-Rao lower bound (efficiency bound), which is the minimum possible variance for an unbiased estimator. However, obtaining such an MLE solution requires calculating the likelihood function, which may not be tractable due to the normalization term of the density model. In this paper, we derive a Discriminative Likelihood Estimator (DLE) from the Kullback-Leibler divergence minimization criterion, implemented via density ratio estimation and a Stein operator. We study the problem of model inference using DLE. We prove its consistency and show that the asymptotic variance of its solution can attain the efficiency bound under mild regularity conditions. We also propose a dual formulation of DLE which can be easily optimized. Numerical studies validate our asymptotic theorems, and we give an example where DLE successfully estimates an intractable model constructed using a pre-trained deep neural network.
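
    The way the Stein operator removes the normalization term is worth one displayed identity (the standard Langevin-Stein identity, stated in our notation, not quoted from the paper): for $p_\theta(x) = \tilde p_\theta(x) / Z(\theta)$ and suitable test functions $f$,

    ```latex
    \mathbb{E}_{x \sim p_\theta}\big[ \nabla_x \log \tilde p_\theta(x) \cdot f(x)
        + \nabla_x \cdot f(x) \big] = 0 ,
    ```

    because $\nabla_x \log p_\theta = \nabla_x \log \tilde p_\theta$; the intractable $Z(\theta)$ never has to be evaluated.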

  • Direct learning of sparse changes in Markov networks by density ratio estimation
    Neural computation, 2014
    Co-Authors: Song Liu, John Quinn, Michael U. Gutmann, Taiji Suzuki, Masashi Sugiyama
    Abstract:

    We propose a new method for detecting changes in Markov network structure between two sets of samples. Instead of naively fitting two Markov network models separately to the two data sets and computing their difference, we directly learn the network structure change by estimating the ratio of the two Markov network models. This density-ratio formulation naturally allows us to introduce sparsity in the network structure change, which greatly enhances interpretability. Furthermore, the computation of the normalization term, a critical bottleneck of the naive approach, is substantially mitigated. We also give the dual formulation of the optimization problem, which further reduces the computation cost for large-scale Markov networks. Through experiments, we demonstrate the usefulness of our method.
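
    The normalization point can be made explicit (a standard density-ratio argument consistent with the abstract; the pairwise parameterization is our illustration). For pairwise Markov networks $p(x; \theta) \propto \exp\big(\sum_{u \le v} \theta_{uv}^{\top} f(x_u, x_v)\big)$, the ratio model and its normalization term are

    ```latex
    r(x; \alpha) \propto \exp\!\Big( \sum_{u \le v} \alpha_{uv}^{\top} f(x_u, x_v) \Big),
    \qquad \alpha_{uv} = \theta^{p}_{uv} - \theta^{q}_{uv},
    \qquad
    N(\alpha) = \mathbb{E}_{x \sim q}\Big[ \exp \sum_{u \le v}
        \alpha_{uv}^{\top} f(x_u, x_v) \Big]
      \approx \frac{1}{n_q} \sum_{i=1}^{n_q} \exp \sum_{u \le v}
        \alpha_{uv}^{\top} f\big(x^{(i)}_u, x^{(i)}_v\big),
    ```

    so the normalization is an expectation under the denominator distribution, estimated by a plain sample average over its samples rather than a sum over all configurations of the network.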
