True Probability

The Experts below are selected from a list of 249 Experts worldwide, ranked by the ideXlab platform

Michael Emmett Brady - One of the best experts on this subject based on the ideXlab platform.

  • Keynes Rejected the Concepts of Probabilistic Truth, True Expected Values, True Expectations, True Probability Distributions, and True Probabilities: Probability Begins and Ends with Probability (Keynes, 1921)
    Social Science Research Network, 2020
    Co-Authors: Michael Emmett Brady
    Abstract:

    Ramsey’s many confusions and errors about Keynes’s Logical Theory of Probability all stemmed from (a) his failure to read more than the first four chapters of Keynes’s A Treatise on Probability (1921), (b) his gross ignorance of Boole’s 1854 logical theory of Probability, on which Keynes had built Parts II, III, IV, and V of A Treatise on Probability, (c) his complete ignorance of real-world decision making under time constraint in financial markets (bond, money, stock, and commodity futures markets), government, industry, and business, and (d) his complete ignorance of the role that intuition and perception play in tournament chess competition under time constraint, a role taught to J. M. Keynes by his father, J. N. Keynes, a rated chess master who played first board for Cambridge University in the late 1870s and early 1880s. Keynes simply generalized the important role of intuition and perception in over-the-board (OTB) tournament chess under time constraint to real-world decision making under time constraint, where information is missing, unavailable, incomplete, unclear, or conflicting. Keynes applied his logical theory of Probability successfully in 1913 in Indian Currency and Finance and in 1919 in The Economic Consequences of the Peace, and he was a millionaire with vast first-hand knowledge, expertise, and experience of how decision making under time constraint is done in financial markets (real estate, bonds, money, stocks, commodity futures), government, industry, and business (only Alan Greenspan had comparable hands-on knowledge, experience, and expertise; his 2004 paper on the role of uncertainty in the making of monetary policy from 1987 to 2004 stunned the economics profession). Academia was strictly a part-time sideline for Keynes. Against this backdrop, an 18-year-old boy appeared at Cambridge University who had absolutely NO real-world knowledge of what Keynes had mastered. His name was Frank Ramsey. Keynes’s theory was a theory of real-world decision making. For just one example, Keynes realized that in the real world of non-additive and nonlinear Probability, with applications involving interval-valued Probability and decision weights, like his own conventional coefficient c, Ramsey’s Dutch Book argument simply did not apply. However, in the academic world of additive Probability, where academicians served to provide the “…forces of banking and finance…” with a variety of “…pretty, polite techniques, made for a well paneled Board Room and a nicely regulated market…”, Keynes saw that there was a place for Ramsey, where he would be dominant. Ramsey was a very keen and sharp thinker who would have a great career publishing articles and books in academia. Keynes also liked those who challenged him intellectually, even if they were quite wrong. After Ramsey published an error-filled paper in the 1922 Cambridge Magazine challenging Keynes, a paper that was clearly never refereed, a myth arose that Ramsey had single-handedly confronted Keynes in person, one on one, and shown him that his logical theory of Probability was full of logical, philosophical, and epistemological holes. According to Misak (2020), Ramsey “…shook Keynes’s confidence in his newly published Probability theory…”. Supposedly, Keynes then quickly agreed with Ramsey and adopted Ramsey’s subjective, additive, linear theory of Probability in 1931.
    The problem here is that Misak is completely confused, since, at best, Ramsey’s theory, which is additive and linear and deals with mere degrees of belief, is a SPECIAL case of Keynes’s general theory, which is non-additive and nonlinear and deals with degrees of RATIONAL belief. Keynes was never shaken by Ramsey’s theory, as he was conversant with Borel’s earlier work, which also used a betting-quotient approach as a foundation for subjective Probability. Anyone who has read Parts II, III, IV, and V of A Treatise on Probability can avoid concluding that Keynes was “shaken” by Ramsey, since Keynes had been applying his theory successfully in the real world since 1909. Unfortunately, this myth has been given new life in C. Misak’s (2020, p. xxvi) biography of Ramsey, where it is asserted that Ramsey easily demolished Keynes’s logical theory of Probability and convinced Keynes himself to repudiate his supposedly incomprehensible, strange, unfathomable, and mysterious beliefs in “non-numerical” probabilities. Of course, it would have been quite impossible for Keynes, the only economist and philosopher of the 20th and 21st centuries to have mastered Boole’s approach, to have accepted Ramsey’s position, as it directly contradicted that of Boole, who, according to Bertrand Russell, was the greatest mathematical logician who ever lived. Since Keynes’s work in Parts II-V of A Treatise on Probability is directly based on Boole’s theory of interval-valued Probability, which defined lower and upper probabilities in chapters 16-21 of the 1854 The Laws of Thought, probabilities that are non-additive and nonlinear, and which Keynes applied successfully in the period from 1912 to 1921, Keynes actually took Ramsey’s approach with a grain of salt. Ramsey had made a major advance in strengthening the theoretical and logical foundations of additive and linear Probability with his betting-quotient, Dutch Book approach to degrees of belief. However, that theory had nothing to offer decision makers operating in a world of nonlinear and non-additive Probability and degrees of rational belief. For example, Ramsey’s approach is applicable only to rated Correspondence Chess (postal, CC) and never to rated Over-the-Board (OTB) tournament chess.
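The Dutch Book argument referred to above applies to agents who post additive, point-valued betting quotients; the abstract's point is that interval-valued, non-additive probabilities of the Boole-Keynes kind are not such quotients, so the construction gets no grip on them. Below is a minimal sketch of the additive case only; the function name and the numbers are hypothetical illustrations, not anything from the cited papers.

```python
def sure_loss(quotients, stake=1.0):
    """Dutch Book check for point-valued betting quotients on a mutually
    exclusive, exhaustive partition of outcomes. A unit bet on every cell
    costs sum(quotients) * stake and pays back exactly 1 * stake whichever
    outcome occurs, so any deviation of the sum from 1 can be converted into
    a guaranteed loss for the agent (the bookmaker buys the bets from the
    agent if the sum is below 1 and sells them to the agent if it is above 1)."""
    return abs(sum(quotients) - 1.0) * stake

# Hypothetical agent whose quotients on an event and its negation are not
# additive: 0.6 + 0.5 = 1.1, so a bookmaker locks in ~0.1 per unit staked.
print(sure_loss([0.6, 0.5]))   # ~0.1
```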

  • Keynes's Logical, Objective Relation of Probability, P(a/h) = α, Where α Is a Degree of Rational Belief, Has Nothing to Do with Truth: Both Orthodox and Heterodox Economists Fail to Realize That There Is No Such Thing as a True Probability, True Expectation o…
    Social Science Research Network, 2019
    Co-Authors: Michael Emmett Brady
    Abstract:

    Keynes’s logical, objective relation of Probability, P(a/h) = α, where α is a degree of rational belief, has nothing to do with truth or falsehood. Probability is not truth. The belief that a Probability, expectation, or expected value can be true (or false) involves the same kind of error pointed out by George Box regarding statistical models: “all models are wrong, but some are useful.” Orthodox rational expectations theorists appear to believe that there are True rational expected values, True statistical models, and True, objective probabilities that rational decision makers can know. There is no support in any theory of Probability for such beliefs. Heterodox economists are no better and make the same error. Keynes, in the conclusion to chapter 26 of A Treatise on Probability, pointed this out and emphasized that rational decision making has nothing to do with truth. Probability has nothing to do with truth. The idea that there are True, objective probabilities, or that probabilities can be correct, right, and valid, is an oxymoron.
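The notation used throughout these abstracts can be written compactly. The first display is the relation stated above; the second is the interval-valued, non-additive form attributed to Boole and Keynes in the neighbouring abstracts, included here only as an illustration of why no single "true" numerical probability need exist.

```latex
% Probability as a logical relation: alpha is a degree of rational belief in
% the conclusion a, relative to the evidence h; it is not a truth value of a.
\[ P(a/h) = \alpha, \qquad 0 \le \alpha \le 1. \]
% Non-additive, interval-valued (Boole--Keynes) case: only lower and upper
% bounds are available, so no unique numerical probability is defined.
\[ \underline{P}(a/h) \;\le\; P(a/h) \;\le\; \overline{P}(a/h). \]
```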

  • The Rational Expectations Hypothesis Can Never, Ever Be Shown to Be True, False, Correct, Incorrect, Right, Wrong, or Proven/Not Proven: Ignoring Haavelmo's Warning about Confusing Deduction (Mathematics, Logic) with Induction (Probability, Statistics, Econometric…)
    Social Science Research Network, 2019
    Co-Authors: Michael Emmett Brady
    Abstract:

    Advocates and opponents of the rational expectations hypothesis have both confused statistics with mathematics (logic). Only in mathematics can one prove something to be true, false, correct, incorrect, right, or wrong. The best one can do in statistics is, by the use of proper scoring rules, establish that one forecaster (model) is better, more accurate, or more reliable relative to another forecaster (model). A best forecast can never be a True forecast. Definitions of the rational expectations hypothesis are confused, ambiguous, hazy, and unclear. Practically none of them involve a correct use of statistics and Probability. The one technical definition, given by J. Muth in a 1961 article in Econometrica, is an intellectual mess due to Muth’s own misunderstandings of what the terms subjective theory of Probability and objective theory of Probability mean. Subjective theories of Probability involve degrees of belief. Objective theories of Probability involve empirical, relative frequencies based on the direct observation of outcomes in a very large number of repeated experiments carried out under identical conditions. An estimate of a subjective Probability can never be an estimate of an objective Probability. Muth’s definition was that “…for a given information set, the subjective probabilities (distributions) were distributed around a unique, objective Probability distribution.” This definition is simply impossible and would be rejected under all of the existing theories of Probability (limiting (relative) frequency, subjective, logical, propensity, classical). The rational expectations hypothesis appears to mean no more than the claim that all producers and consumers, being rational, make use of all available information, so that they all have a complete information set. However, this definition is not substantially different from the very similar one used by Jeremy Bentham in his disputes with Adam Smith over the nature of Probability in 1787 in his revised The Principles of Morals and Legislation. Bentham argued that all men love money at all times, that all men calculate, especially outcomes involving pecuniary concerns, and that all the calculated probabilities, like all of the outcomes, were linear, exact, definite, and precise, so that all rational people could calculate their maximal utility. Smith, on the other hand, argued that Probability was inexact, imprecise, and nonlinear, because risk was not proportional to return. Keynes’s views on decision making and Probability are very similar to Adam Smith’s.
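The claim above, that statistics can only rank forecasters rather than certify a forecast as "true", is usually operationalized with a proper scoring rule. The Brier score, the mean squared error of probability forecasts against realized binary outcomes, is one such rule; the sketch below uses made-up forecasts to show how it orders two forecasters without saying anything about truth.

```python
import numpy as np

def brier_score(forecasts, outcomes):
    """Brier score: mean squared difference between probability forecasts and
    realized binary outcomes. A strictly proper scoring rule; lower is better."""
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((forecasts - outcomes) ** 2))

# Hypothetical outcomes and two hypothetical forecasters.
outcomes     = [1, 0, 1, 1, 0, 0, 1, 0]
forecaster_a = [0.9, 0.2, 0.7, 0.8, 0.1, 0.3, 0.6, 0.2]   # sharper forecasts
forecaster_b = [0.6, 0.5, 0.5, 0.6, 0.4, 0.5, 0.5, 0.4]   # hedged near 0.5

print(brier_score(forecaster_a, outcomes))   # 0.06  -> relatively better
print(brier_score(forecaster_b, outcomes))   # 0.205 -> relatively worse
```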

Michael A. Proschan - One of the best experts on this subject based on the ideXlab platform.

  • Internal pilot studies I: type I error rate of the naive t-test.
    Statistics in Medicine, 1999
    Co-Authors: Janet Wittes, Oliver Schabenberger, David M. Zucker, Erica Brittain, Michael A. Proschan
    Abstract:

    When sample size is recalculated using unblinded interim data, use of the usual t-test at the end of a study may lead to an elevated type I error rate. This paper describes a numerical quadrature investigation to calculate the True Probability of rejection as a function of the time of the recalculation, the magnitude of the detectable treatment effect, and the ratio of the guessed to the True variance. We consider both 'restricted' designs, those that require final sample size at least as large as the originally calculated size, and 'unrestricted' designs, those that permit smaller final sample sizes than originally calculated. Our results indicate that the bias in the type I error rate is often negligible, especially in restricted designs. Some sets of parameters, however, induce non-trivial bias in the unrestricted design.
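The abstract above computes the true rejection probability by numerical quadrature; the same quantity can also be approximated by simulation. The sketch below is not the authors' method, only a hedged Monte Carlo illustration of the design they describe: an unblinded interim variance estimate drives a sample-size recalculation, after which the naive two-sample t-test is applied as if no recalculation had occurred. All parameter names and values are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def naive_t_type1(n_planned=50, pilot_frac=0.5, true_sd=1.0, delta=0.5,
                  alpha=0.05, power=0.8, restricted=True, n_sims=20000):
    """Monte Carlo estimate of the type I error of the naive t-test when the
    per-group sample size is recalculated from an unblinded internal pilot
    (data are generated under the null hypothesis of equal means)."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    n_pilot = int(n_planned * pilot_frac)
    rejections = 0
    for _ in range(n_sims):
        x1 = rng.normal(0.0, true_sd, n_pilot)          # unblinded pilot data
        y1 = rng.normal(0.0, true_sd, n_pilot)
        s2 = 0.5 * (np.var(x1, ddof=1) + np.var(y1, ddof=1))
        n_new = int(np.ceil(2.0 * s2 * z ** 2 / delta ** 2))
        if restricted:                                   # never shrink below plan
            n_new = max(n_new, n_planned)
        n_new = max(n_new, n_pilot)
        x = np.concatenate([x1, rng.normal(0.0, true_sd, n_new - n_pilot)])
        y = np.concatenate([y1, rng.normal(0.0, true_sd, n_new - n_pilot)])
        _, p = stats.ttest_ind(x, y)                     # naive final analysis
        rejections += p < alpha
    return rejections / n_sims

print(naive_t_type1(restricted=True))    # typically close to the nominal 0.05
print(naive_t_type1(restricted=False))   # can drift further from 0.05
```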

Janet Wittes - One of the best experts on this subject based on the ideXlab platform.

  • Internal pilot studies I: type I error rate of the naive t-test.
    Statistics in Medicine, 1999
    Co-Authors: Janet Wittes, Oliver Schabenberger, David M. Zucker, Erica Brittain, Michael A. Proschan
    Abstract:

    When sample size is recalculated using unblinded interim data, use of the usual t-test at the end of a study may lead to an elevated type I error rate. This paper describes a numerical quadrature investigation to calculate the True Probability of rejection as a function of the time of the recalculation, the magnitude of the detectable treatment effect, and the ratio of the guessed to the True variance. We consider both 'restricted' designs, those that require final sample size at least as large as the originally calculated size, and 'unrestricted' designs, those that permit smaller final sample sizes than originally calculated. Our results indicate that the bias in the type I error rate is often negligible, especially in restricted designs. Some sets of parameters, however, induce non-trivial bias in the unrestricted design.

Qi Wang - One of the best experts on this subject based on the ideXlab platform.

  • Multivariate group analyses for functional neuroimaging: conceptual and experimental advances
    HAL CCSD, 2020
    Co-Authors: Qi Wang
    Abstract:

    In functional neuroimaging experiments, participants perform a set of tasks while their brain activity is recorded, e.g. with electroencephalography (EEG), magnetoencephalography (MEG) or functional magnetic resonance imaging (fMRI). Analysing data from a group of participants, often called group-level analysis, aims at identifying traits in the data that relate to the tasks performed by the participants and that are invariant within the population. This allows an understanding of the functional organization of the brain in healthy subjects and of its dysfunctions in pathological populations. While group-level analyses for classical univariate statistical inference schemes, such as the general linear model, have been heavily studied, many questions remain open for group-level strategies based on multivariate machine learning methods. This thesis therefore focuses on multivariate group-level analysis of functional neuroimaging and makes four contributions. The first contribution is a comparison of the results provided by two classifier-based multivariate group-level strategies: (i) the standard one, in which the performances of within-subject models are aggregated in a hierarchical analysis, and (ii) the scheme we denote inter-subject pattern analysis, in which a population-level predictive model is estimated directly from data recorded on multiple subjects. An extensive set of experiments is conducted both on a large number of artificial datasets, in which we parametrically control the size of the multivariate effect and the amount of inter-individual variability, and on two real fMRI datasets. Our results show that the two strategies can provide different results and that inter-subject analysis both offers a greater ability to detect small multivariate effects and facilitates the interpretation of the obtained results, at a comparable computational cost. We then survey the methods that have been proposed to improve inter-subject pattern analysis, a hard task given the largely heterogeneous vocabulary employed in the literature on this topic. Our second contribution consists in first introducing a unifying formalization of this framework, which we cast as a multi-source transductive transfer learning problem, and then reviewing more than 500 related papers to offer a first comprehensive view of the existing literature in which inter-subject pattern analysis was used in task-based functional neuroimaging experiments. Our third contribution is an experimental study that examines the well-foundedness of our multi-source transductive transfer formalization of inter-subject pattern analysis. With fMRI and MEG data recorded from numerous subjects, we demonstrate that between-subject variability impairs the generalization ability of classical machine learning algorithms and that a standard multi-source transductive learning strategy improves their generalization performance. Based on these promising results, we further investigate the use of two more advanced machine learning methods to deal with the multi-source problem. The fourth contribution of this thesis is a new multivariate group-level analysis method for functional neuroimaging datasets. Our method is based on optimal transport, which leverages the geometrical properties of multivariate brain patterns to overcome the inter-individual differences that affect traditional group-level analyses.
    We extend the concept of the Wasserstein barycenter, originally designed to average Probability measures, so that it applies to arbitrary data that do not necessarily fulfill the properties of a True Probability measure. To this end, we introduce a new algorithm that estimates such a barycenter, and we provide an experimental study on artificial and real functional MRI data.
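The abstract does not spell out the extended algorithm, so the sketch below only shows the standard ingredient it starts from: an entropic-regularized Wasserstein barycenter of histograms on a shared support, computed with iterative Bregman projections. Note that this baseline does require its inputs to be true probability vectors (non-negative, summing to one); the function name, grid, and regularization value are illustrative assumptions.

```python
import numpy as np

def sinkhorn_barycenter(hists, cost, reg=1e-2, weights=None, n_iter=500):
    """Entropic Wasserstein barycenter of histograms defined on a common
    support, via iterative Bregman projections. Inputs must be true
    probability vectors: non-negative entries that sum to one."""
    P = np.asarray(hists, dtype=float)                 # shape (k, n)
    k, n = P.shape
    w = np.full(k, 1.0 / k) if weights is None else np.asarray(weights, float)
    K = np.exp(-np.asarray(cost, dtype=float) / reg)   # Gibbs kernel, (n, n)
    U = np.ones((k, n))
    for _ in range(n_iter):
        V = P / (U @ K)                # match each histogram's own marginal
        R = U * (V @ K.T)              # current barycenter-side marginals
        b = np.prod(R ** w[:, None], axis=0)           # weighted geometric mean
        U *= b / R                     # impose the common barycenter marginal
    return b

# Toy usage: the barycenter of two separated bumps on [0, 1] sits in between.
x = np.linspace(0.0, 1.0, 60)
cost = (x[:, None] - x[None, :]) ** 2
h1 = np.exp(-(x - 0.25) ** 2 / 0.005); h1 /= h1.sum()
h2 = np.exp(-(x - 0.75) ** 2 / 0.005); h2 /= h2.sum()
bary = sinkhorn_barycenter([h1, h2], cost)
print(round(bary.sum(), 4), x[np.argmax(bary)])        # total mass ~1, peak near 0.5
```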

Kay Teschke - One of the best experts on this subject based on the ideXlab platform.

  • Personal privacy and public health: potential impacts of privacy legislation on health research in Canada
    Canadian Journal of Public Health-revue Canadienne De Sante Publique, 2008
    Co-Authors: M. A. Harris, Adrian R Levy, Kay Teschke
    Abstract:

    Despite variation in Canadian privacy laws between provinces and territories, increasing legislative protection of personal privacy has imposed restrictions on health research across the country. The effects of these restrictions on patient recruitment include increased study costs and durations and decreased participation rates. Low participation rates can jeopardize the validity of research findings and the accuracy of measures of association by introducing non-response, or participation, bias. We constructed simulations to assess potential effects of non-response bias on the accuracy of measures of association in a hypothetical case-control study. Small biases that alter the Probability of selecting an exposed case can lead to dramatic inflation or attrition of the odds ratio (OR) in case-control studies. ORs are more unstable and subject to error when the True Probability of selecting an exposed case is greater, such that strong positive associations are subject to error even at low levels of bias. Well-powered, population-based epidemiological research is a cornerstone of public health. Therefore, when weighing the benefits of protecting personal privacy, the benefits of valid and robust health research must also be considered. Options might include special legislative treatment of health research, or the use of an “opt-out” (vs. the current “opt-in”) construct for consent in confidential research. Key words: privacy; legislation; public health; epidemiology
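The simulations described above are not reproduced here, but the mechanism they probe has a simple closed form: under differential selection, the observed odds ratio equals the true odds ratio multiplied by the ratio of the cell-selection probabilities. The sketch below, with entirely hypothetical parameter values, shows how a modest change in the relative probability of selecting an exposed case inflates or attenuates a strong true association.

```python
def biased_odds_ratio(true_or, exposure_in_controls=0.2,
                      n_cases=500, n_controls=500,
                      sel_exposed_case=1.0, sel_other=1.0):
    """Large-sample (expected-count) odds ratio in a case-control study when
    the exposed-case cell is selected with relative probability
    `sel_exposed_case` and every other cell with `sel_other`. In general,
    OR_observed = OR_true * (s_a * s_d) / (s_b * s_c) for cell-selection
    probabilities s_a..s_d."""
    odds_ctrl = exposure_in_controls / (1.0 - exposure_in_controls)
    p_case = true_or * odds_ctrl / (1.0 + true_or * odds_ctrl)
    # Expected 2x2 counts before selection: a, b among cases; c, d among controls.
    a, b = n_cases * p_case, n_cases * (1.0 - p_case)
    c, d = n_controls * exposure_in_controls, n_controls * (1.0 - exposure_in_controls)
    a *= sel_exposed_case                     # differential recruitment of exposed cases
    b, c, d = b * sel_other, c * sel_other, d * sel_other
    return (a * d) / (b * c)

print(biased_odds_ratio(4.0))                           # 4.0  (no bias)
print(biased_odds_ratio(4.0, sel_exposed_case=0.8))     # 3.2  (attenuated)
print(biased_odds_ratio(4.0, sel_exposed_case=1.2))     # 4.8  (inflated)
```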
