Endpoints

The experts below are selected from a list of 509,760 experts worldwide, ranked by the ideXlab platform.

Stuart G. Baker - One of the best experts on this subject based on the ideXlab platform.

  • Surrogate Endpoint Analysis: An Exercise in Extrapolation
    Journal of the National Cancer Institute, 2012
    Co-Authors: Stuart G. Baker, Barnett S. Kramer
    Abstract:

    Surrogate endpoints offer the hope of smaller or shorter cancer trials. It is, however, important to realize that they come at the cost of an unverifiable extrapolation that could lead to misleading conclusions. With cancer prevention, the focus is on hypothesis testing in small surrogate endpoint trials before deciding whether to proceed to a large prevention trial. However, it is not generally appreciated that a small surrogate endpoint trial is highly sensitive to a deviation from the key Prentice criterion needed for the hypothesis-testing extrapolation. With cancer treatment, the focus is on estimation using historical trials with both surrogate and true endpoints to predict the treatment effect based on the surrogate endpoint in a new trial. Successively leaving out one historical trial and computing the predicted treatment effect in the left-out trial yields a standard error multiplier that summarizes the increased uncertainty in estimation extrapolation. If this increased uncertainty is acceptable, three additional extrapolation issues (biological mechanism, treatment following observation of the surrogate endpoint, and side effects following observation of the surrogate endpoint) need to be considered. In summary, when using surrogate endpoint analyses, an appreciation of the problems of extrapolation is crucial.
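    The leave-one-trial-out computation described above can be sketched numerically. This is an illustrative reconstruction under simple assumptions (a straight-line regression of true-endpoint treatment effects on surrogate-endpoint treatment effects), not Baker and Kramer's exact estimator; the function name and inputs are hypothetical.

    ```python
    # Hypothetical sketch of the leave-one-trial-out idea: regress true-endpoint
    # treatment effects on surrogate-endpoint effects, leaving each historical
    # trial out in turn, and compare the spread of the leave-one-out prediction
    # errors to the in-sample residual spread.
    import numpy as np

    def loo_se_multiplier(surrogate_effects, true_effects):
        s = np.asarray(surrogate_effects, dtype=float)
        t = np.asarray(true_effects, dtype=float)
        n = len(s)
        loo_errors = []
        for i in range(n):
            mask = np.arange(n) != i
            slope, intercept = np.polyfit(s[mask], t[mask], 1)
            loo_errors.append(t[i] - (intercept + slope * s[i]))
        slope, intercept = np.polyfit(s, t, 1)
        in_sample = t - (intercept + slope * s)
        # A ratio above 1 quantifies the extra uncertainty from extrapolating
        # to a new trial rather than fitting within the historical data.
        return np.std(loo_errors, ddof=1) / np.std(in_sample, ddof=1)
    ```

    Multipliers well above 1 signal that predictions for a new trial are considerably less certain than the historical fit alone suggests.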

  • Surrogate Endpoints: Wishful Thinking or Reality?
    Journal of the National Cancer Institute, 2006
    Co-Authors: Stuart G. Baker
    Abstract:

    More than 100 years ago, the noted French mathematician Henri Poincaré quoted the following remark about the assumption of a normal distribution: “Everybody firmly believes in it because the mathematicians imagine it is a fact of observation, and observers that it is a theory of mathematics” (1). A similar generalization could be made about methods to validate surrogate endpoints: biostatisticians believe that the methods they propose are useful because clinicians adopt them, and clinicians believe that the methods proposed by biostatisticians are useful because they have the “imprimatur” of mathematical statistics. Given this state of affairs, a critical examination of methods to validate surrogate endpoints is needed. However, before delving further it is necessary to precisely state the role of surrogate endpoints. The purpose of a surrogate endpoint is to draw conclusions about the effect of an intervention on the true endpoint without having to observe the true endpoint. If this purpose could be achieved, clinical research would be greatly accelerated. Unfortunately it is a tall order, and many proposed surrogate endpoints have subsequently been shown to have led to incorrect conclusions about the effect of intervention on the true endpoints (2). Therefore, before a surrogate endpoint can be used with confidence, it must be validated. Part of the controversy with the use of surrogate endpoints is that there is no agreed-upon definition of a validated surrogate endpoint. Essentially, validation of a surrogate endpoint consists of whatever the investigators think will make them and others feel confident about the use of the surrogate endpoint in a future trial. Validation measures must ensure that this confidence is grounded more in reality than wishful thinking.

  • A simple meta-analytic approach for using a binary surrogate endpoint to predict the effect of intervention on true endpoint
    Biostatistics, 2005
    Co-Authors: Stuart G. Baker
    Abstract:

    SUMMARY: A surrogate endpoint is an endpoint that is obtained sooner, at lower cost, or less invasively than the true endpoint for a health outcome, and is used to draw conclusions about the effect of intervention on the true endpoint. In the proposed approach, each previous trial with surrogate and true endpoints contributes an estimated predicted effect of intervention on the true endpoint in the trial of interest, based on the surrogate endpoint in the trial of interest. These predicted quantities are combined in a simple random-effects meta-analysis to estimate the predicted effect of intervention on the true endpoint in the trial of interest. Validation involves comparing the average prediction error of this approach with (i) the average prediction error of a standard meta-analysis using only true endpoints in the other trials and (ii) the average clinically meaningful difference in true endpoints implicit in the trials. Validation is illustrated using data from multiple randomized trials of patients with advanced colorectal cancer, in which the surrogate endpoint was tumor response and the true endpoint was median survival time.
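    The random-effects combination step can be illustrated with a standard DerSimonian-Laird estimator. This is a generic sketch under hypothetical inputs (per-trial predicted effects and their variances), not necessarily the exact estimator used in the paper.

    ```python
    # Generic random-effects meta-analysis (DerSimonian-Laird): combine
    # per-trial predicted treatment effects, allowing for between-trial
    # heterogeneity.  Inputs are illustrative, not data from the paper.
    import numpy as np

    def random_effects_estimate(effects, variances):
        y = np.asarray(effects, dtype=float)
        v = np.asarray(variances, dtype=float)
        w = 1.0 / v                          # fixed-effect (inverse-variance) weights
        fixed = np.sum(w * y) / np.sum(w)
        # DerSimonian-Laird estimate of the between-trial variance tau^2
        q = np.sum(w * (y - fixed) ** 2)
        df = len(y) - 1
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)
        w_star = 1.0 / (v + tau2)            # random-effects weights
        est = np.sum(w_star * y) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        return est, se
    ```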

Hasan Jilaihawi - One of the best experts on this subject based on the ideXlab platform.

  • Clinical trial principles and endpoint definitions for paravalvular leaks in surgical prosthesis
    European Heart Journal, 2018
    Co-Authors: Carlos E Ruiz, Greg Fontana, Rebecca T Hahn, Jeffrey S. Borer, Vladimir Jelnin, Reda Ibrahim, Gino Gerosa, Donald E Cutlip, Alain Berrebi, Hasan Jilaihawi
    Abstract:

    The VARC (Valve Academic Research Consortium) for transcatheter aortic valve replacement set the standard for selecting appropriate clinical endpoints reflecting the safety and effectiveness of transcatheter devices, and for defining single and composite clinical endpoints for clinical trials. No such standardization exists for closure of paravalvular leaks (PVL) around circumferentially sutured surgical valves. This document seeks to provide core principles, appropriate clinical endpoints, and endpoint definitions to be used in clinical trials of PVL closure devices. The PVL Academic Research Consortium met to review evidence and make recommendations for assessment of disease severity, data collection, and updated endpoint definitions. A 5-class grading scheme to evaluate PVL was developed in concordance with VARC recommendations. Unresolved issues in the field are outlined. The current PVL Academic Research Consortium provides recommendations for assessment of disease severity, data collection, and endpoint definitions. Future research in the field is warranted.

  • Clinical trial principles and endpoint definitions for paravalvular leaks in surgical prosthesis: an expert statement
    Journal of the American College of Cardiology, 2017
    Co-Authors: Carlos E Ruiz, Greg Fontana, Rebecca T Hahn, Jeffrey S. Borer, Vladimir Jelnin, Reda Ibrahim, Gino Gerosa, Donald E Cutlip, Alain Berrebi, Hasan Jilaihawi
    Abstract:

    The VARC (Valve Academic Research Consortium) for transcatheter aortic valve replacement set the standard for selecting appropriate clinical endpoints reflecting the safety and effectiveness of transcatheter devices, and for defining single and composite clinical endpoints for clinical trials. No such standardization exists for closure of paravalvular leaks (PVL) around circumferentially sutured surgical valves. This document seeks to provide core principles, appropriate clinical endpoints, and endpoint definitions to be used in clinical trials of PVL closure devices. The PVL Academic Research Consortium met to review evidence and make recommendations for assessment of disease severity, data collection, and updated endpoint definitions. A 5-class grading scheme to evaluate PVL was developed in concordance with VARC recommendations. Unresolved issues in the field are outlined. The current PVL Academic Research Consortium provides recommendations for assessment of disease severity, data collection, and endpoint definitions. Future research in the field is warranted.

Maribel Acosta - One of the best experts on this subject based on the ideXlab platform.

  • SHEPHERD: a shipping-based query processor to enhance SPARQL endpoint performance
    International Semantic Web Conference, 2014
    Co-Authors: Maribel Acosta, Maria-Esther Vidal, Fabian Flöck, Simon Castillo, Carlos Buil-Aranda, Andreas Harth
    Abstract:

    Recent studies reveal that publicly available SPARQL endpoints exhibit significant limitations in supporting real-world applications. In order for this querying infrastructure to reach its full potential, more flexible client-server architectures capable of deciding appropriate shipping plans are needed. Shipping plans indicate how the execution of query operators is distributed between the client and the server. We propose SHEPHERD, a SPARQL client-server query processor tailored to reduce SPARQL endpoint workload and generate shipping plans where costly operators are placed at the client site. We evaluated SHEPHERD on a variety of public SPARQL endpoints and SPARQL queries. Experimental results suggest that SHEPHERD can enhance endpoint performance while shifting workload from the endpoint to the client.
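    The shipping-plan idea, deciding for each operator whether it runs at the endpoint (server) or at the client, can be caricatured with a toy cost model. The names, costs, and greedy rule below are invented for illustration and are not SHEPHERD's actual algorithm.

    ```python
    # Toy illustration of a shipping plan: each query operator is placed at the
    # server only if it is cheap there and the server's load budget allows it;
    # otherwise the operator is shipped to the client, sparing the endpoint.
    # Operator names and the cost model are hypothetical.
    def plan_shipping(operators, server_budget):
        """operators: list of (name, server_cost, client_cost) tuples."""
        plan, server_load = {}, 0.0
        # Consider the most server-expensive operators first.
        for name, s_cost, c_cost in sorted(operators, key=lambda op: -op[1]):
            if server_load + s_cost <= server_budget and s_cost <= c_cost:
                plan[name] = "server"
                server_load += s_cost
            else:
                plan[name] = "client"
        return plan
    ```

    Under this sketch, a costly `OPTIONAL` would typically end up at the client, while cheap filters and joins stay at the endpoint.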

  • ANAPSID: an adaptive query processing engine for SPARQL endpoints
    International Semantic Web Conference, 2011
    Co-Authors: Maribel Acosta, Maria-Esther Vidal, Tomas Lampo, Julio Castillo, Edna Ruckhaus
    Abstract:

    Following the design rules of Linked Data, the number of available SPARQL endpoints that support remote query processing is quickly growing; however, because of the lack of adaptivity, query executions may frequently be unsuccessful. First, fixed plans identified following the traditional optimize-then-execute paradigm may time out as a consequence of endpoint availability. Second, because blocking operators are usually implemented, endpoint query engines are not able to incrementally produce results, and may become blocked if data sources stop sending data. We present ANAPSID, an adaptive query engine for SPARQL endpoints that adapts query execution schedulers to data availability and run-time conditions. ANAPSID provides physical SPARQL operators that detect when a source becomes blocked or data traffic is bursty, and opportunistically produce results as quickly as data arrives from the sources. Additionally, ANAPSID operators implement main-memory replacement policies to move previously computed matches to secondary memory, avoiding duplicates. We compared ANAPSID's performance with that of RDF stores and endpoints, and observed that ANAPSID speeds up execution time, in some cases by more than an order of magnitude.
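    A classic building block behind such non-blocking, incremental operators is the symmetric hash join, which emits joined bindings as soon as matching tuples arrive from either source, so a stalled endpoint does not block output driven by the other. The sketch below is a generic illustration of that technique, not ANAPSID's actual operator.

    ```python
    # Simplified symmetric hash join: each incoming binding is inserted into
    # its source's hash table and immediately probed against the other
    # source's table, so results stream out as data arrives.
    from collections import defaultdict

    class SymmetricHashJoin:
        def __init__(self, join_var):
            self.join_var = join_var
            # One hash table per source, keyed by the join variable's value.
            self.tables = (defaultdict(list), defaultdict(list))

        def probe(self, source, binding):
            """Insert a binding from source 0 or 1; yield any joined results."""
            key = binding[self.join_var]
            self.tables[source][key].append(binding)
            for other in self.tables[1 - source][key]:
                yield {**other, **binding}
    ```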

Marc Buyse - One of the best experts on this subject based on the ideXlab platform.

  • Endpoints and surrogate Endpoints in colorectal cancer: a review of recent developments.
    Current opinion in oncology, 2008
    Co-Authors: Pascal Piedbois, Marc Buyse
    Abstract:

    PURPOSE OF REVIEW: The purpose of this review is to discuss recently published work on endpoints for early and advanced colorectal cancer, as well as the statistical approaches used to validate surrogate endpoints. RECENT FINDINGS: Most attempts to validate surrogate endpoints have estimated the correlation between the surrogate and the true endpoint, and between the treatment effects on these endpoints. The correlation approach has made it possible to validate disease-free survival and progression-free survival as acceptable surrogates for overall survival in early and advanced disease, respectively. SUMMARY: The search for surrogate endpoints will intensify over the coming years. In parallel, efforts to standardize the endpoints, to extend them, or both will improve the reliability and relevance of clinical trial results.

  • Endpoints in adjuvant treatment trials: a systematic review of the literature in colon cancer and proposed definitions for future trials
    Journal of the National Cancer Institute, 2007
    Co-Authors: Cornelis J. A. Punt, Marc Buyse, Claus-Henning Köhne, Peter Hohenberger, Roberto Labianca, Hans J. Schmoll, Lars Påhlman, A. Sobrero, Jean-Yves Douillard
    Abstract:

    Disease-free survival is increasingly being used as the primary endpoint of most trials testing adjuvant treatments in cancer. Other frequently used endpoints include overall survival, recurrence-free survival, and time to recurrence. These endpoints are often defined differently in different trials in the same type of cancer, leading to a lack of comparability among trials. In this Commentary, we used adjuvant studies in colon cancer as a model to address this issue. In a systematic review of the literature, we identified 52 studies of adjuvant treatment in colon cancer published in 1997-2006 that used eight other endpoints in addition to overall survival. Both the definition of these endpoints and the starting point for measuring time to the events that constituted them varied widely. A panel of experts on clinical research in colorectal cancer then reached consensus on the definition of each endpoint. Disease-free survival, defined as the time from randomization to any event irrespective of cause, was considered to be the most informative endpoint for assessing the effect of treatment and therefore the most relevant to clinical practice. The proposed guidelines may add to the quality and cross-comparability of future studies of adjuvant treatments for cancer.
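    The consensus definition, time from randomization to the first event of any kind irrespective of cause, is simple to state operationally. The sketch below uses an invented record layout to illustrate it, with event-free patients censored at last follow-up.

    ```python
    # Illustration of the consensus disease-free survival definition: time from
    # randomization to the first event of any kind, irrespective of cause.
    # Patients with no event are censored at their last follow-up visit.
    # The record layout (day offsets, cause labels) is hypothetical.
    def disease_free_survival(randomization_day, events, last_followup_day):
        """events: list of (day, cause) tuples, e.g. recurrence, second cancer, death."""
        event_days = [day for day, _cause in events if day >= randomization_day]
        if event_days:
            return min(event_days) - randomization_day, True   # event observed
        return last_followup_day - randomization_day, False    # censored
    ```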

  • The validation of surrogate endpoints in meta-analyses of randomized experiments
    Biostatistics, 2000
    Co-Authors: Marc Buyse, Geert Molenberghs, Tomasz Burzykowski, Didier Renard, Helena Geys
    Abstract:

    The validation of surrogate endpoints has been studied by Prentice (1989). He presented a definition as well as a set of criteria, which are equivalent only if the surrogate and true endpoints are binary. Freedman et al. (1992) supplemented these criteria with the so-called 'proportion explained'. Buyse and Molenberghs (1998) proposed replacing the proportion explained by two quantities: (1) the relative effect, linking the effect of treatment on both endpoints, and (2) an individual-level measure of agreement between both endpoints. The latter quantity carries over when data are available on several randomized trials, while the former can be extended to a trial-level measure of agreement between the effects of treatment on both endpoints. This approach suggests a new method for the validation of surrogate endpoints, and naturally leads to the prediction of the effect of treatment upon the true endpoint, given its observed effect upon the surrogate endpoint. These ideas are illustrated using data from two sets of multicenter trials: one comparing chemotherapy regimens for patients with advanced ovarian cancer, the other comparing interferon-alpha with placebo for patients with age-related macular degeneration.

  • Criteria for the validation of surrogate endpoints in randomized experiments
    Biometrics, 1998
    Co-Authors: Marc Buyse, Geert Molenberghs
    Abstract:

    The validation of surrogate endpoints has been studied by Prentice (1989, Statistics in Medicine 8, 431-440) and Freedman, Graubard, and Schatzkin (1992, Statistics in Medicine 11, 167-178). We extended their proposals to the cases where the surrogate and the final endpoints are both binary or normally distributed. Letting T and S be random variables that denote the true and surrogate endpoints, respectively, and Z be an indicator variable for treatment, Prentice's criteria are fulfilled if Z has a significant effect on T and on S, if S has a significant effect on T, and if Z has no effect on T given S. Freedman relaxed the latter criterion by estimating PE, the proportion of the effect of Z on T that is explained by S, and by requiring that the lower confidence limit of PE be larger than some proportion, say 0.5 or 0.75. This condition can only be verified if the treatment has a massively significant effect on the true endpoint, a rare situation. We argue that two other quantities must be considered in the validation of a surrogate endpoint: RE, the effect of Z on T relative to that of Z on S, and γ_Z, the association between S and T after adjustment for Z. A surrogate is said to be perfect at the individual level when there is a perfect association between the surrogate and the final endpoint after adjustment for treatment. A surrogate is said to be perfect at the population level if RE is 1. A perfect surrogate fulfills both conditions, in which case S and T are identical up to a deterministic transformation. Fieller's theorem is used for the estimation of PE, RE, and their respective confidence intervals. Logistic regression models and the global odds ratio model studied by Dale (1986, Biometrics 42, 909-917) are used for binary endpoints. Linear models are employed for continuous endpoints. In order to be of practical value, the validation of surrogate endpoints is shown to require large numbers of observations.
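    For continuous endpoints, PE and RE reduce to ratios of ordinary least-squares coefficients, which can be sketched directly. This illustration omits the Fieller-based confidence intervals discussed above, and the function name and data layout are hypothetical.

    ```python
    # Numerical sketch of Freedman's PE and Buyse-Molenberghs' RE for
    # continuous endpoints, via ordinary least squares.
    # Z is the treatment indicator, S the surrogate, T the true endpoint.
    import numpy as np

    def pe_and_re(z, s, t):
        z, s, t = (np.asarray(a, dtype=float) for a in (z, s, t))
        ones = np.ones_like(z)
        # beta: effect of Z on T;  alpha: effect of Z on S
        beta = np.linalg.lstsq(np.column_stack([ones, z]), t, rcond=None)[0][1]
        alpha = np.linalg.lstsq(np.column_stack([ones, z]), s, rcond=None)[0][1]
        # beta_s: residual effect of Z on T after adjusting for S
        beta_s = np.linalg.lstsq(np.column_stack([ones, z, s]), t, rcond=None)[0][1]
        pe = 1.0 - beta_s / beta      # proportion of the effect explained by S
        re = beta / alpha             # relative effect (population level)
        return pe, re
    ```

    When S fully mediates the treatment effect (T a deterministic function of S), PE approaches 1 and RE equals the scaling between the two effects.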

Eugenio Baraldi - One of the best experts on this subject based on the ideXlab platform.

  • Endpoints in respiratory diseases.
    European journal of clinical pharmacology, 2010
    Co-Authors: Fernando Maria De Benedictis, Roberto Guidi, Silvia Carraro, Eugenio Baraldi
    Abstract:

    A wide range of outcome measures, or endpoints, has been used in clinical trials to assess the effects of treatments in paediatric respiratory diseases. This can make it difficult to compare treatment outcomes from different trials and also to understand whether new treatments offer a real clinical benefit for patients. Clinical trials in respiratory diseases evaluate three types of endpoints: subjective, objective, and health-related outcomes. The ideal endpoint in a clinical trial needs to be accurate, precise, and reliable. Ideally, the endpoint would also be measurable with minimal risk and across all ages, easy to perform, and inexpensive. As for any other disease, endpoints for respiratory diseases must be viewed in the context of the important distinction between clinical endpoints and surrogate endpoints. The association between surrogate endpoints and clinical endpoints must be clearly defined for any disease in order for them to be meaningful as outcome measures. The most common endpoints used in paediatric trials in respiratory diseases are discussed. For practical purposes, diseases have been separated into acute (bronchiolitis, acute viral wheeze, acute asthma, and croup) and chronic (asthma and cystic fibrosis). Further development of endpoints will enable clinical trials in children with respiratory diseases with the main objective of improving prognosis and safety.
