Bayesian Interpretation - Explore the Science & Experts | ideXlab

Bayesian Interpretation

The Experts below are selected from a list of 324 Experts worldwide ranked by the ideXlab platform

Zeungnam Bien – 1st expert on this subject based on the ideXlab platform

  • Agglomerative Fuzzy Clustering based on Bayesian Interpretation
    2007 IEEE International Conference on Information Reuse and Integration, 2007
    Co-Authors: Zeungnam Bien

    Abstract:

    This paper presents iterative Bayesian fuzzy clustering (IBFC), which incorporates integrated adaptive fuzzy clustering (IAFC) into Bayesian decision theory, and finally derives agglomerative IBFC from its Bayesian Interpretation. IAFC performs a vigilance test so that outliers can be excluded from the learning procedure; however, the test has lacked a theoretical justification. We show that the decision and vigilance test of IBFC follow the Bayesian minimum-risk classification rule within the framework of Bayesian decision theory. Based on this Interpretation, we then propose agglomerative IBFC, which can cluster data with complex structure. Tests on synthetic data show an outstanding success rate, and tests on benchmark data show that the proposed method outperforms several existing methods.
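
    The minimum-risk decision-plus-rejection idea described in the abstract can be sketched generically. The cluster models, priors, outlier parameters, and values below are illustrative assumptions, not the paper's actual IBFC formulation: under 0-1 loss, minimum-risk classification reduces to picking the maximum-posterior hypothesis, and adding a flat "outlier" hypothesis plays the role of a vigilance test.

    ```python
    import numpy as np

    def min_risk_assign(x, means, sigmas, priors,
                        outlier_prior=0.05, outlier_density=0.01):
        """Minimum-risk assignment under 0-1 loss: choose the hypothesis
        (one of the clusters, or 'outlier') with maximal posterior score.
        The outlier hypothesis uses a flat density, so points far from
        every cluster are rejected - a vigilance-test-like behaviour.
        All parameter values are illustrative."""
        # Likelihood of x under 1-D Gaussian cluster models
        lik = np.exp(-0.5 * ((x - means) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
        # Unnormalized posterior scores: clusters, then the outlier hypothesis
        scores = np.append(lik * priors * (1 - outlier_prior),
                           outlier_density * outlier_prior)
        k = int(np.argmax(scores))
        return k if k < len(means) else None  # None = rejected as outlier

    means = np.array([0.0, 5.0])
    sigmas = np.array([1.0, 1.0])
    priors = np.array([0.5, 0.5])

    print(min_risk_assign(0.2, means, sigmas, priors))   # → 0 (near cluster 0)
    print(min_risk_assign(20.0, means, sigmas, priors))  # → None (outlier)
    ```

    The rejection threshold is implicit in the outlier density: raising `outlier_density` makes the test stricter, analogously to raising a vigilance parameter.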

  • Bayesian Interpretation of Adaptive Fuzzy Neural Network Model
    2006 IEEE International Conference on Fuzzy Systems, 2006
    Co-Authors: Zeungnam Bien

    Abstract:

    This paper presents a Bayesian Interpretation of improved integrated adaptive fuzzy clustering (IAFC), one of the adaptive fuzzy neural network models, and suggests an upper bound on the vigilance parameter, which provides a guideline for endowing IAFC with flexibility within the framework of a minimum-risk classifier. In addition, we propose off-line and on-line learning strategies for IAFC. The proposed techniques are applied to a facial expression recognition system covering neutral, happy, sad, and angry expressions. We show empirically that the proposed methods outperform conventional IAFC.

Steve Renals – 2nd expert on this subject based on the ideXlab platform

  • Hierarchical Bayesian language models for conversational speech recognition
    IEEE Transactions on Audio Speech and Language Processing, 2010
    Co-Authors: Songfang Huang, Steve Renals

    Abstract:

    Traditional n-gram language models are widely used in state-of-the-art large-vocabulary speech recognition systems. These simple models suffer from limitations such as the overfitting of maximum-likelihood estimation and the lack of rich contextual knowledge sources. In this paper, we exploit a hierarchical Bayesian Interpretation for language modeling, based on a nonparametric prior called the Pitman-Yor process. This offers a principled approach to language model smoothing that embeds the power-law distribution of natural language. Experiments on the recognition of conversational speech in multiparty meetings demonstrate that hierarchical Bayesian language models achieve significant reductions in perplexity and word error rate.
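
    The Pitman-Yor predictive probability behind this smoothing can be sketched for a single context. The code below uses the common one-table-per-word-type approximation, which recovers an interpolated Kneser-Ney form; the discount, strength, counts, and vocabulary size are illustrative assumptions, and a full hierarchical model (with seating-arrangement sampling across context levels) is omitted:

    ```python
    from collections import Counter

    def pitman_yor_prob(word, counts, backoff, d=0.75, theta=0.5):
        """Predictive probability under a single-level Pitman-Yor prior,
        approximating one table per observed word type.
        d = discount (drives power-law behaviour), theta = strength."""
        total = sum(counts.values())
        types = len(counts)  # number of distinct words seen in this context
        c = counts.get(word, 0)
        discounted = max(c - d, 0.0) / (theta + total)
        # Mass released by discounting is redistributed via the back-off model
        backoff_mass = (theta + d * types) / (theta + total)
        return discounted + backoff_mass * backoff(word)

    counts = Counter({"the": 3, "a": 1})          # hypothetical context counts
    uniform = lambda w: 1.0 / 10_000              # hypothetical base vocabulary
    p_seen = pitman_yor_prob("the", counts, uniform)
    p_unseen = pitman_yor_prob("cat", counts, uniform)
    ```

    The discount `d` subtracts probability mass from every observed count and routes it to the back-off distribution, which is what produces the heavy power-law tail over word frequencies.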

Dennis Mcnevin – 3rd expert on this subject based on the ideXlab platform

  • Response to: Biedermann & Hicks (2019), Commentary on “Dennis McNevin, Bayesian Interpretation of discrete class characteristics, Forensic Science International, 292 (2018) 125–130”
    Forensic Science International, 2019
    Co-Authors: Dennis Mcnevin

    Abstract:

    This letter is a response to the commentary by Biedermann & Hicks (2019) on “Dennis McNevin, Bayesian Interpretation of discrete class characteristics, Forensic Science International, 292 (2018) 125–130”.

  • Bayesian Interpretation of discrete class characteristics
    Forensic Science International, 2018
    Co-Authors: Dennis Mcnevin

    Abstract:

    Bayesian Interpretation of forensic evidence has become dominated by the likelihood ratio (LR), with a large LR generally considered favourable to the prosecution hypothesis, H_P, over the defence hypothesis, H_D. However, the LR simply quantifies by how much the prior odds of H_P relative to H_D have been improved by the forensic evidence to give the posterior odds. Because the prior odds are mostly neglected, the posterior odds are largely unknown, regardless of the LR used to improve them. In fact, we show that the posterior odds will only favour H_P when the LR is at least as large as the number of things that could possibly be the source of that evidence, all being equally able to contribute. This restriction severely limits the value of evidence to the prosecution when only a single, discrete class characteristic is used to match a subset of these things to the evidence. The limitation can be overcome by examining more than one individual characteristic, as long as they are independent of each other, as the genotypes at multiple loci combined for DNA evidence are. We present a criterion for determining how many such characteristics are required. Finally, we conclude that a frequentist Interpretation is inappropriate as a measure of the strength of forensic evidence precisely because it only estimates the denominator of the LR.
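
    The underlying arithmetic is Bayes' rule on odds: with N equally likely potential sources, the prior odds of H_P are 1/(N-1), so the posterior odds exceed 1 only when the LR exceeds roughly N. The functions and numbers below are an illustrative sketch of this reasoning, not the paper's exact criterion:

    ```python
    import math

    def posterior_odds(lr, n_sources):
        """Posterior odds of H_P vs H_D: prior odds of 1/(N - 1) for
        N equally likely potential sources, multiplied by the LR."""
        return lr / (n_sources - 1)

    def characteristics_needed(lr_each, n_sources):
        """Minimum number of independent characteristics, each contributing
        a likelihood-ratio factor lr_each, for the combined posterior odds
        to exceed 1 (i.e. to favour H_P)."""
        return math.ceil(math.log(n_sources - 1) / math.log(lr_each))

    # An LR of 100 sounds strong, but against 10,001 possible sources the
    # posterior odds are only 0.01: H_D is still favoured.
    print(posterior_odds(100, 10_001))        # → 0.01
    # With 5,001 possible sources and independent characteristics each
    # worth an LR of 10, four characteristics are needed.
    print(characteristics_needed(10, 5_001))  # → 4
    ```

    Independence is what lets the individual LR factors multiply, which is why combining multiple loci works for DNA evidence.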