Frequentist Interpretation

The Experts below are selected from a list of 243 Experts worldwide ranked by the ideXlab platform

P I Fierens - One of the best experts on this subject based on the ideXlab platform.

  • An extension of chaotic probability models to real-valued variables
    International Journal of Approximate Reasoning, 2009
    Co-Authors: P I Fierens
    Abstract:

    In a recent series of papers, Fine and colleagues [P.I. Fierens, T.L. Fine, Towards a Frequentist Interpretation of sets of measures, in: G. de Cooman, T.L. Fine, T. Seidenfeld (Eds.), Proceedings of the Second International Symposium on Imprecise Probabilities and Their Applications, Shaker Publishing, 2001; P.I. Fierens, T.L. Fine, Towards a chaotic probability model for Frequentist probability, in: J. Bernard, T. Seidenfeld, M. Zaffalon (Eds.), Proceedings of the Third International Symposium on Imprecise Probabilities and Their Applications, Carleton Scientific, 2003; L.C. Rego, T.L. Fine, Estimation of chaotic probabilities, in: Proceedings of the Fourth International Symposium on Imprecise Probabilities and Their Applications, 2005] have presented the first steps towards a Frequentist understanding of sets of measures as imprecise probability models which have been called chaotic models. Simulation of the chaotic variables is an integral part of the theory. Previous models, however, dealt only with sets of probability measures on finite algebras, that is, probability measures which can be related to variables with a finite number of possible values. In this paper, an extension of chaotic models is proposed in order to deal with the more general case of real-valued variables. This extension is based on the introduction of real-valued test functions which generalize binary-valued choices in the previous work.
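
The test-function idea above can be illustrated concretely: instead of tracking relative frequencies of binary events along a selected subsequence, one tracks empirical averages of real-valued test functions. The sketch below is only a minimal illustration of that shift, under assumed names (empirical_averages, f_indicator, f_identity) and a trivial selection rule; it is not the construction of the paper.

```python
import numpy as np

def empirical_averages(data, test_functions, selector):
    """Empirical averages of real-valued test functions along a
    rule-selected subsequence (hedged sketch; not the paper's exact scheme).

    data           : 1-D array of real-valued observations
    test_functions : list of callables f: R -> R generalising binary choices
    selector       : callable deciding, from the past data[:k], whether to
                     include observation data[k] (a causal selection rule)
    """
    picked = np.asarray([x for k, x in enumerate(data) if selector(data[:k])])
    if picked.size == 0:
        return None  # the rule selected no observations
    return {f.__name__: float(np.mean([f(x) for x in picked]))
            for f in test_functions}

# Illustrative use with made-up test functions and a trivial selection rule.
def f_indicator(x): return float(x > 0.5)   # recovers a binary-valued choice
def f_identity(x):  return x                # a genuinely real-valued test

rng = np.random.default_rng(0)
data = rng.uniform(size=1000)
print(empirical_averages(data, [f_indicator, f_identity],
                         selector=lambda past: len(past) % 2 == 0))
```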

  • Towards a chaotic probability model for Frequentist probability
    2003
    Co-Authors: Terrence L Fine, P I Fierens
    Abstract:

    We adopt the same mathematical model of a set M of probability measures as is central to the theory of coherent imprecise probability. However, we endow this model with an objective, Frequentist Interpretation in place of a behavioral subjective one. We seek to use M to model stable physical sources of time series data that have highly irregular behavior and not to model states of belief or knowledge that are assuredly imprecise. The approach we present in this paper is to understand a set of measures model M not as a traditional compound hypothesis, in which one of the measures in M is a true description, but rather as one in which none of the individual measures in M provides an adequate description of the potential behavior of the physical source as actualized in the form of a long time series. We provide an instrumental construction of random process measures consistent with M and the highly irregular physical phenomena we intend to model by M. This construction provides us with the basic tools for simulation of our models. We present a method to estimate M from data which studies any given data sequence by analyzing it into subsequences selected by a set of computable rules. We prove results that help us to choose an adequate set of rules and evaluate the performance of the estimator.
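
As a rough illustration of the estimation step described above (analyzing a data sequence into subsequences selected by computable rules and computing relative frequencies along each), the following sketch uses hypothetical rule names and toy data; it is not the authors' estimator or their choice of rule family.

```python
import numpy as np
from collections import Counter

def empirical_measures(sequence, rules):
    """Analyze a finite-alphabet data sequence into subsequences chosen by
    causal selection rules and return one empirical measure per rule.
    A hedged sketch of the estimation idea; the rules here are illustrative,
    not the paper's construction.
    """
    measures = {}
    for name, rule in rules.items():
        sub = [x for k, x in enumerate(sequence) if rule(sequence[:k])]
        if sub:
            counts = Counter(sub)
            n = len(sub)
            measures[name] = {sym: c / n for sym, c in counts.items()}
    return measures  # the spread of these measures is the estimate of M

# Toy usage: a binary sequence and two simple computable selection rules.
rng = np.random.default_rng(1)
seq = rng.integers(0, 2, size=2000).tolist()
rules = {
    "every_step": lambda past: True,
    "after_a_one": lambda past: len(past) > 0 and past[-1] == 1,
}
print(empirical_measures(seq, rules))
```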

  • Towards a Frequentist Interpretation of sets of measures
    International Symposium on Imprecise Probabilities and Their Applications, 2001
    Co-Authors: P I Fierens, Terrence L Fine
    Abstract:

    We explore an objective, Frequentist-related Interpretation for a set of measures M such as would determine upper and lower envelopes; M also specifies the classical Frequentist concept of a compound hypothesis. However, in contrast to the compound hypothesis case, in which there is a true measure µθ0 ∈ M that is assumed either unknown or randomly selected, we do not believe that any single measure is the true description for the random phenomena in question. Rather, it is the whole set M itself that is the appropriate imprecise probabilistic description. Envelope models have hitherto been used almost exclusively in subjective settings to model the uncertainty or strength of belief of individuals or groups. Our interest in these imprecise probability representations is as mathematical models for those objective Frequentist phenomena of engineering and scientific significance where what is known may be substantial, but relative frequencies, nonetheless, lack (statistical) stability. A full probabilistic methodology needs not only an appropriate mathematical probability concept, enriched by such notions as expectation and conditioning, but also an interpretive component to identify data that is typical of the model and an estimation component to enable inference to the model from data and background knowledge. Our starting point is this first task of determining typicality. Kolmogorov complexity is used as the key non-probabilistic idea to enable us to create simulation data from an envelope model in an attempt to identify “typical” sequences. First steps in finite sequence Frequentist modeling will also be taken towards inference of the set M from finite Frequentist data and then applied to data on vowel production from an Internet message source.
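
The simulation side of the programme can be caricatured as follows: a time series is generated by letting some selection mechanism pick a measure from M at each step, so that relative frequencies over the whole sequence need not stabilize. The sketch below assumes a binary alphabet and a simple deterministic switching pattern; the paper's actual construction relies on Kolmogorov-complexity-based typicality rather than this toy mechanism.

```python
import numpy as np

def simulate_from_set(measures, choose, n, seed=0):
    """Simulate a binary time series from a set of measures M.
    At each step a measure from M is chosen by `choose` (which may depend
    on the history) and one outcome is drawn from it.  A hedged sketch of
    the kind of instrumental simulation alluded to in the abstract.
    """
    rng = np.random.default_rng(seed)
    history = []
    for t in range(n):
        p = measures[choose(t, history)]   # P(X_t = 1) under the chosen measure
        history.append(int(rng.random() < p))
    return history

# Two extreme measures and a switching pattern: relative frequencies over the
# whole sequence drift and need not converge, unlike in the i.i.d. case.
M = [0.2, 0.8]
seq = simulate_from_set(M, choose=lambda t, h: (t // 500) % 2, n=5000)
print(np.mean(seq))
```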

T. K. Dijkstra - One of the best experts on this subject based on the ideXlab platform.

Mieczyslaw A. Klopotek - One of the best experts on this subject based on the ideXlab platform.

  • Identification and Interpretation of Belief Structure in Dempster-Shafer Theory.
    arXiv: Artificial Intelligence, 2017
    Co-Authors: Mieczyslaw A. Klopotek
    Abstract:

    The Mathematical Theory of Evidence, also called Dempster-Shafer Theory (DST), is known as a foundation for reasoning when knowledge is expressed at various levels of detail. Though much research effort has been committed to this theory since its foundation, many questions remain open. One of the most important open questions is the relationship between frequencies and the Mathematical Theory of Evidence. The theory is criticized for leaving frequencies outside (or aside of) its framework. The seriousness of this criticism is obvious: (1) no experiment can be run to compare the performance of DST-based models of real-world processes against real-world data, and (2) data cannot serve as a foundation for constructing an appropriate belief model. In this paper we develop a Frequentist Interpretation of DST that refutes the above argument against it. An immediate consequence is the possibility of developing algorithms that automatically acquire DST belief models from data. We propose three such algorithms for various classes of belief model structures: tree-structured belief networks, poly-tree belief networks, and general belief networks.
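
To make the frequentist reading of DST concrete, the sketch below estimates a basic mass assignment from frequencies of set-valued observations and evaluates the standard belief and plausibility functions. It is a minimal, assumed illustration of the general idea; it does not reproduce Klopotek's acquisition algorithms for tree, poly-tree, or general belief networks.

```python
def belief(mass, A):
    """Bel(A) = sum of masses of focal sets contained in A."""
    return sum(m for B, m in mass.items() if set(B) <= set(A))

def plausibility(mass, A):
    """Pl(A) = sum of masses of focal sets intersecting A."""
    return sum(m for B, m in mass.items() if set(B) & set(A))

def mass_from_frequencies(observations):
    """Estimate a basic mass assignment from frequencies of set-valued
    observations -- a hedged illustration of the frequentist reading of DST
    argued for in the paper, not Klopotek's own acquisition algorithms."""
    n = len(observations)
    mass = {}
    for obs in observations:
        key = frozenset(obs)
        mass[key] = mass.get(key, 0.0) + 1.0 / n
    return mass

# Toy usage: imprecise (set-valued) readings over the frame {'a', 'b', 'c'}.
obs = [{'a'}, {'a', 'b'}, {'b'}, {'a', 'b'}, {'c'}, {'a'}]
m = mass_from_frequencies(obs)
print(belief(m, {'a', 'b'}), plausibility(m, {'a', 'b'}))
```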

Nils-olav Skeie - One of the best experts on this subject based on the ideXlab platform.

  • Analysing uncertainty in parameter estimation and prediction for grey-box building thermal behaviour models
    Energy and Buildings, 2020
    Co-Authors: Ole Magnus Brastein, A. Ghaderi, C.f. Pfeiffer, Nils-olav Skeie
    Abstract:

    The potential reduction in energy consumption for space heating in buildings realised by the use of predictive control systems directly depends on the prediction accuracy of the building thermal behaviour model. Hence, model calibration methods that allow improved prediction accuracy for specific buildings have received significant scientific interest. An extension of this work is the potential use of calibrated models to estimate the thermal properties of an existing building, using measurements collected from the actual building, rather than relying on building specifications. Simplified thermal network models, often expressed as grey-box Resistor-Capacitor circuit analogue models, have been successfully applied in the prediction setting. However, the use of such models as soft sensors for the thermal properties of a building requires an assumption of physical Interpretation of the estimated parameters. The parameters of these models are estimated under the effects of both epistemic and aleatoric uncertainty, in the model structure and the calibration data. This uncertainty is propagated to the estimated parameters. Depending on the model structure and the dynamic information content in the data, the parameters may not be identifiable, thus resulting in ambiguous point estimates. In this paper, the Profile Likelihood method, typical of a Frequentist Interpretation of parameter estimation, is used to diagnose parameter identifiability by projecting the likelihood function onto each parameter. If a Bayesian framework is used, treating the parameters as random variables with a probability distribution in the parameter space, projections of the posterior distribution can be studied by using the Profile Posterior method. The latter results in projections that are similar to the marginal distributions obtained by the popular Markov Chain Monte Carlo method. The different approaches are applied and compared for five experimental cases based on observed data. Ambiguity of the estimated parameters is resolved by the application of a prior distribution derived from a priori knowledge, or by appropriate modification of the model structure. The posterior predictive distribution of the model output predictions is shown to be mostly unaffected by the parameter non-identifiability.
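
A minimal sketch of the profile likelihood diagnostic mentioned above, applied to a toy single-resistor, single-capacitor (1R1C) grey-box thermal model: for each fixed value of the thermal resistance R the remaining parameter (the capacitance C) is re-optimized, and the profiled log-likelihood over R reveals whether R is identifiable from the data. Model structure, parameter values and units are illustrative assumptions, not the authors' experimental cases.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate_1r1c(R, C, T_out, Q_heat, T0, dt=600.0):
    """Forward-Euler simulation of a single-resistor, single-capacitor
    (1R1C) building thermal model: C dT/dt = (T_out - T)/R + Q_heat."""
    T = np.empty(len(T_out))
    T[0] = T0
    for k in range(1, len(T_out)):
        dT = ((T_out[k-1] - T[k-1]) / R + Q_heat[k-1]) / C
        T[k] = T[k-1] + dt * dT
    return T

def profile_likelihood_R(R_grid, data, T_out, Q_heat, T0):
    """Profile the (Gaussian, equal-variance) log-likelihood over R by
    re-optimising C at every fixed R.  A hedged sketch of the profile
    likelihood diagnostic, not the authors' code; model, units and bounds
    are illustrative."""
    profile = []
    for R in R_grid:
        sse = lambda C: np.sum((data - simulate_1r1c(R, C, T_out, Q_heat, T0))**2)
        res = minimize_scalar(sse, bounds=(1e3, 1e8), method="bounded")
        profile.append(-0.5 * res.fun)   # log-likelihood up to a constant
    return np.array(profile)

# Toy usage with synthetic data generated from known parameters.
rng = np.random.default_rng(0)
n = 500
T_out = 5.0 + 3.0 * np.sin(np.linspace(0, 6 * np.pi, n))
Q_heat = np.full(n, 1500.0)                       # heating power in W
true = simulate_1r1c(R=0.005, C=2e7, T_out=T_out, Q_heat=Q_heat, T0=20.0)
data = true + rng.normal(scale=0.1, size=n)
R_grid = np.linspace(0.002, 0.01, 9)
print(profile_likelihood_R(R_grid, data, T_out, Q_heat, T0=20.0))
```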

Terrence L Fine - One of the best experts on this subject based on the ideXlab platform.

  • ISIPTA - Estimation of Chaotic Probabilities
    2005
    Co-Authors: Leandro Chaves Rêgo, Terrence L Fine
    Abstract:

    A Chaotic Probability model is a usual set of probability measures, M, the totality of which is endowed with an objective, Frequentist Interpretation as opposed to being viewed as a statistical compound hypothesis or an imprecise behavioral subjective one. In the prior work of Fierens and Fine, given finite time series data, the estimation of the Chaotic Probability model is based on the analysis of a set of relative frequencies of events taken along a set of subsequences selected by a set of rules. Fierens and Fine proved the existence of families of causal subsequence selection rules that can make M visible, but they did not provide a methodology for finding such a family. This paper provides a universal methodology for finding a family of subsequences that can make M visible, such that relative frequencies taken along such subsequences are provably close enough to a measure in M with high probability.
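
Operationally, "making M visible" means that relative frequencies taken along well-chosen subsequences land close to measures in M. The sketch below checks this by computing, for each selection rule, the total-variation distance from the rule's empirical measure to the nearest member of M; the rules and measures are illustrative assumptions, and the universal rule-construction methodology of the paper is not reproduced.

```python
import numpy as np

def tv_distance(p, q, alphabet):
    """Total-variation distance between two measures on a finite alphabet."""
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in alphabet)

def visibility_report(sequence, rules, M, alphabet):
    """For each selection rule, compute the empirical measure along its
    subsequence and its distance to the closest measure in M.  A hedged
    sketch of what 'making M visible' means operationally, not the
    universal rule-construction methodology of the paper."""
    report = {}
    for name, rule in rules.items():
        sub = [x for k, x in enumerate(sequence) if rule(sequence[:k])]
        if not sub:
            continue
        emp = {a: sub.count(a) / len(sub) for a in alphabet}
        report[name] = min(tv_distance(emp, mu, alphabet) for mu in M)
    return report

# Toy usage: two candidate measures on {0, 1} and two simple causal rules.
M = [{0: 0.8, 1: 0.2}, {0: 0.3, 1: 0.7}]
rng = np.random.default_rng(2)
seq = rng.integers(0, 2, size=3000).tolist()
rules = {"all": lambda past: True,
         "odd_times": lambda past: len(past) % 2 == 1}
print(visibility_report(seq, rules, M, alphabet=[0, 1]))
```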

  • Towards a chaotic probability model for Frequentist probability
    2003
    Co-Authors: Terrence L Fine, P I Fierens
    Abstract:

    We adopt the same mathematical model of a set M of probability measures as is central to the theory of coherent imprecise probability. However, we endow this model with an objective, Frequentist Interpretation in place of a behavioral subjective one. We seek to use M to model stable physical sources of time series data that have highly irregular behavior and not to model states of belief or knowledge that are assuredly imprecise. The approach we present in this paper is to understand a set of measures model M not as a traditional compound hypothesis, in which one of the measures in M is a true description, but rather as one in which none of the individual measures in M provides an adequate description of the potential behavior of the physical source as actualized in the form of a long time series. We provide an instrumental construction of random process measures consistent with M and the highly irregular physical phenomena we intend to model by M. This construction provides us with the basic tools for simulation of our models. We present a method to estimate M from data which studies any given data sequence by analyzing it into subsequences selected by a set of computable rules. We prove results that help us to choose an adequate set of rules and evaluate the performance of the estimator.

  • Towards a Frequentist Interpretation of sets of measures
    International Symposium on Imprecise Probabilities and Their Applications, 2001
    Co-Authors: P I Fierens, Terrence L Fine
    Abstract:

    We explore an objective, Frequentist-related Interpretation for a set of measures M such as would determine upper and lower envelopes; M also specifies the classical Frequentist concept of a compound hypothesis. However, in contrast to the compound hypothesis case, in which there is a true measure µθ0 ∈ M that is assumed either unknown or randomly selected, we do not believe that any single measure is the true description for the random phenomena in question. Rather, it is the whole set M itself that is the appropriate imprecise probabilistic description. Envelope models have hitherto been used almost exclusively in subjective settings to model the uncertainty or strength of belief of individuals or groups. Our interest in these imprecise probability representations is as mathematical models for those objective Frequentist phenomena of engineering and scientific significance where what is known may be substantial, but relative frequencies, nonetheless, lack (statistical) stability. A full probabilistic methodology needs not only an appropriate mathematical probability concept, enriched by such notions as expectation and conditioning, but also an interpretive component to identify data that is typical of the model and an estimation component to enable inference to the model from data and background knowledge. Our starting point is this first task of determining typicality. Kolmogorov complexity is used as the key non-probabilistic idea to enable us to create simulation data from an envelope model in an attempt to identify “typical” sequences. First steps in finite sequence Frequentist modeling will also be taken towards inference of the set M from finite Frequentist data and then applied to data on vowel production from an Internet message source.
