Survival Distribution

The Experts below are selected from a list of 109776 Experts worldwide ranked by ideXlab platform

James M. Robins - One of the best experts on this subject based on the ideXlab platform.

  • Wiley StatsRef: Statistics Reference Online - Inverse Probability Weighting in Survival Analysis
    Wiley StatsRef: Statistics Reference Online, 2014
    Co-Authors: Andrea Rotnitzky, James M. Robins
    Abstract:

    Survival studies usually incorporate high-dimensional covariate data, but interest may focus on low-dimensional characteristics of the Survival Distribution. Augmented inverse probability weighted (AIPW) estimation provides a robust approach that is largely insensitive to model misspecification. The method was originally introduced to deal with coarsened (incompletely observed) data and is discussed here both with and without the assumption of random coarsening.
    Keywords: coarsening; missing data; dimensionality; Cox proportional hazards model; marginal Survival function; competing risks; robustness; model misspecification

  • Inverse Probability Weighting in Survival Analysis
    Encyclopedia of Biostatistics, 2005
    Co-Authors: Andrea Rotnitzky, James M. Robins
    Abstract:

    Survival studies usually incorporate high-dimensional covariate data, but interest may focus on low-dimensional characteristics of the Survival Distribution. Augmented inverse probability weighted (AIPW) estimation provides a robust approach that is largely insensitive to model misspecification. The method was originally introduced to deal with coarsened (incompletely observed) data and is discussed here both with and without the assumption of random coarsening.
    Keywords: coarsening; missing data; dimensionality; Cox proportional hazards model; marginal Survival function; competing risks; robustness; model misspecification
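
    The two entries above concern inverse probability weighting (and its augmented, doubly protective AIPW variant) for Survival data coarsened by censoring. As a rough illustration of the unaugmented idea only, the Python sketch below reweights the observed indicator I(X > t) by a Kaplan-Meier estimate of the censoring Distribution, assuming completely random censoring; the augmentation term that confers robustness to model misspecification is omitted, and the function names and toy data are illustrative, not from the articles.

        import numpy as np

        def km_curve(times, events, eval_times):
            # Kaplan-Meier estimate of P(T > t) at each t in eval_times;
            # `events` is 1 for an observed event, 0 for a censored observation.
            surv, step_t, step_s = 1.0, [0.0], [1.0]
            for t in np.sort(np.unique(times[events == 1])):
                at_risk = np.sum(times >= t)
                d = np.sum((times == t) & (events == 1))
                surv *= 1.0 - d / at_risk
                step_t.append(t)
                step_s.append(surv)
            idx = np.searchsorted(step_t, eval_times, side="right") - 1
            return np.asarray(step_s)[idx]

        def ipw_survival(obs_times, event_ind, t):
            # Unaugmented IPW estimate of S(t) = P(T > t): the observed indicator
            # I(X > t) is reweighted by the censoring survival K(t) = P(C > t),
            # itself estimated by Kaplan-Meier with censoring treated as the event.
            K_t = km_curve(obs_times, 1 - event_ind, np.array([t]))[0]
            return np.mean(obs_times > t) / K_t

        # toy data: exponential event times, independent exponential censoring
        rng = np.random.default_rng(0)
        T = rng.exponential(10.0, size=2000)
        C = rng.exponential(15.0, size=2000)
        X, delta = np.minimum(T, C), (T <= C).astype(int)
        print(ipw_survival(X, delta, t=5.0), np.exp(-5.0 / 10.0))  # estimate vs. truth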

  • Nonparametric Locally Efficient Estimation of the Treatment Specific Survival Distribution with Right Censored Data and Covariates in Observational Studies
    Statistical Models in Epidemiology the Environment and Clinical Trials, 2000
    Co-Authors: Alan E. Hubbard, Mark J. Van Der Laan, James M. Robins
    Abstract:

    In many observational studies one is concerned with comparing treatment-specific Survival Distributions in the presence of confounding factors and censoring. In this paper we develop locally efficient point and interval estimators of these Survival Distributions which adjust for confounding by using an estimate of the propensity score and concurrently allow for dependent censoring. The proposed methodology is an application of the general methodology for the construction of locally efficient estimators presented in Robins (1993) and Robins and Rotnitzky (1992). The practical performance of the methods is assessed with a simulation study.
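
    A minimal sketch of one ingredient of the approach above: confounding adjustment through a working propensity-score model combined with an inverse-probability-of-treatment weighted Kaplan-Meier curve per arm. It assumes non-informative censoring, so the dependent-censoring adjustment and the local-efficiency augmentation described in the paper are not represented; the function names and toy data are illustrative.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def weighted_km(times, events, weights, eval_t):
            # Kaplan-Meier estimate at eval_t with subject weights replacing counts.
            surv = 1.0
            for t in np.sort(np.unique(times[events == 1])):
                if t > eval_t:
                    break
                at_risk = weights[times >= t].sum()
                d = weights[(times == t) & (events == 1)].sum()
                surv *= 1.0 - d / at_risk
            return surv

        def iptw_treatment_survival(X, A, times, events, eval_t):
            # Propensity score from a working logistic model, then treatment-specific
            # survival curves from an inverse-probability-of-treatment weighted KM.
            ps = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]
            w = np.where(A == 1, 1.0 / ps, 1.0 / (1.0 - ps))
            s1 = weighted_km(times[A == 1], events[A == 1], w[A == 1], eval_t)
            s0 = weighted_km(times[A == 0], events[A == 0], w[A == 0], eval_t)
            return s1, s0

        # toy data with a single confounder affecting both treatment and survival
        rng = np.random.default_rng(1)
        Z = rng.normal(size=(3000, 1))
        A = rng.binomial(1, 1.0 / (1.0 + np.exp(-Z[:, 0])))
        T = rng.exponential(np.exp(0.5 * A - 0.7 * Z[:, 0]))
        C = rng.exponential(2.0, size=3000)
        times, events = np.minimum(T, C), (T <= C).astype(int)
        print(iptw_treatment_survival(Z, A, times, events, eval_t=1.0))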

Anastasios A. Tsiatis - One of the best experts on this subject based on the ideXlab platform.

  • Modeling Survival Distribution as a Function of Time to Treatment Discontinuation: A Dynamic Treatment Regime Approach
    Biometrics, 2018
    Co-Authors: Shu Yang, Anastasios A. Tsiatis, Michael A Blazing
    Abstract:

    We consider estimating the effect that discontinuing a beneficial treatment will have on the Distribution of a time-to-event clinical outcome, and in particular assessing whether there is a period of time over which the beneficial effect may continue after discontinuation. There are two major challenges. The first is to make a distinction between mandatory discontinuation, where treatment has to be terminated by necessity, and optional discontinuation, which is decided by the preference of the patient or physician. The innovation in this article is to cast the intervention in the form of a dynamic regime “terminate treatment optionally at time v unless a mandatory treatment-terminating event occurs prior to v” and to consider estimating the Distribution of time to event as a function of the treatment regime v. The second challenge arises from biases associated with the nonrandom assignment of treatment regimes: because optional treatment discontinuation is naturally left to the patient and physician, time to discontinuation may depend on the patient's disease status. To address this issue, we develop dynamic-regime Marginal Structural Models and use inverse probability of treatment weighting to estimate the impact of time to treatment discontinuation on a time-to-event outcome, compared to the effect of not discontinuing treatment. We illustrate our methods using the IMPROVE-IT data on cardiovascular disease.
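
    The article's estimator is built from inverse probability of treatment weighting over a continuum of regimes v. The discrete-time sketch below only illustrates how regime consistency and cumulative weights might be constructed for a single v, using a pooled logistic working model for the hazard of optional discontinuation. All column names, the single covariate x, and the simplified handling of the interval in which a mandatory stop occurs are assumptions for illustration, not the paper's specification.

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        def regime_consistency_weights(long_df, v):
            # `long_df`: one row per subject-interval while still on treatment, with
            # illustrative columns id, interval (1, 2, ...), a covariate x,
            # optional_stop (optionally discontinued at the end of this interval) and
            # mandatory_stop (a mandatory treatment-terminating event ended it).
            # Pooled logistic working model for the hazard of optional discontinuation.
            model = LogisticRegression(max_iter=1000)
            model.fit(long_df[["interval", "x"]], long_df["optional_stop"])
            p_stop = model.predict_proba(long_df[["interval", "x"]])[:, 1]

            records = []
            for sid, g in long_df.assign(p_stop=p_stop).groupby("id", sort=False):
                w, consistent = 1.0, True
                for _, row in g.iterrows():
                    if row["mandatory_stop"] == 1:
                        break                                 # mandatory stop before v: still consistent
                    if row["interval"] < v:
                        if row["optional_stop"] == 1:
                            consistent = False                # stopped earlier than the regime allows
                            break
                        w *= 1.0 / (1.0 - row["p_stop"])      # stayed on treatment, as the regime requires
                    else:
                        consistent = row["optional_stop"] == 1  # regime says stop at v
                        w *= 1.0 / row["p_stop"]
                        break
                records.append((sid, consistent, w))
            return pd.DataFrame(records, columns=["id", "consistent_with_v", "weight"])

    Subjects flagged as inconsistent with v would be artificially censored at that point, and the weights of the remaining subjects would enter a weighted analysis of the time-to-event outcome under regime v.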

  • Doubly-Robust Estimators of Treatment-Specific Survival Distributions in Observational Studies with Stratified Sampling
    Biometrics, 2013
    Co-Authors: Xiaofei Bai, Anastasios A. Tsiatis, Sean M. O'brien
    Abstract:

    Observational studies are frequently conducted to compare the effects of two treatments on Survival. For such studies we must be concerned about confounding; that is, there are covariates that affect both the treatment assignment and the Survival Distribution. With confounding, the usual treatment-specific Kaplan-Meier estimator might be a biased estimator of the underlying treatment-specific Survival Distribution. This article has two aims. The first aim is to use semiparametric theory to derive a doubly robust estimator of the treatment-specific Survival Distribution in cases where it is believed that all the potential confounders are captured. In cases where not all potential confounders have been captured, one may conduct a substudy using a stratified sampling scheme to capture additional covariates that may account for confounding. The second aim is to derive a doubly robust estimator of the treatment-specific Survival Distributions, and its variance estimator, under such a stratified sampling scheme. Simulation studies are conducted to show consistency and double robustness. These estimators are then applied to data from the ASCERT study that motivated this research.
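
    A compact sketch of the double-robustness idea behind the first aim above, stripped of the Survival-specific machinery: if censoring is ignored, I(T > t) is a fully observed binary outcome and the AIPW estimator combines a propensity model with an outcome regression, remaining consistent if either working model is correct. The censoring adjustment and the stratified-sampling extension in the paper are not shown; function and variable names are illustrative.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def aipw_survival(X, A, T, a, t):
            # Doubly robust (AIPW) estimate of the treatment-specific survival
            # probability P(T > t) under treatment a, assuming no censoring so the
            # binary outcome I(T > t) is observed for everyone.
            Y = (T > t).astype(int)
            ps1 = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]
            pi_a = ps1 if a == 1 else 1.0 - ps1                  # propensity model
            outcome = LogisticRegression(max_iter=1000).fit(X[A == a], Y[A == a])
            m_a = outcome.predict_proba(X)[:, 1]                 # outcome model P(T > t | A = a, X)
            ind = (A == a).astype(float)
            # consistent if either the propensity model or the outcome model is correct
            return np.mean(ind * Y / pi_a - (ind - pi_a) / pi_a * m_a)

        # toy usage with a single confounder
        rng = np.random.default_rng(2)
        Z = rng.normal(size=(4000, 1))
        A = rng.binomial(1, 1.0 / (1.0 + np.exp(-Z[:, 0])))
        T = rng.exponential(np.exp(0.8 * A - 0.5 * Z[:, 0]))
        print(aipw_survival(Z, A, T, a=1, t=1.0), aipw_survival(Z, A, T, a=0, t=1.0))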

  • Semiparametric efficient estimation of Survival Distributions in two-stage randomisation designs in clinical trials with censored data
    Biometrika, 2006
    Co-Authors: Abdus S. Wahed, Anastasios A. Tsiatis
    Abstract:

    Two-stage randomisation designs are useful in the evaluation of combination therapies where patients are initially randomised to an induction therapy and then, depending upon their response and consent, are randomised to a maintenance therapy. In this paper we derive the best regular asymptotically linear estimator for the Survival Distribution and related quantities of treatment regimes. We propose an estimator which is easily computable and is more efficient than existing estimators. Large-sample properties of the proposed estimator are derived and comparisons with other estimators are made using simulation. Copyright 2006, Oxford University Press.

  • Optimal Estimator for the Survival Distribution and Related Quantities for Treatment Policies in Two-Stage Randomization Designs in Clinical Trials
    Biometrics, 2004
    Co-Authors: Abdus S. Wahed, Anastasios A. Tsiatis
    Abstract:

    Two-stage designs are common in therapeutic clinical trials such as cancer or AIDS treatments. In a two-stage design, patients are initially treated with one induction (primary) therapy and then, depending upon their response and consent, are treated with a maintenance therapy, sometimes to intensify the effect of the first-stage therapy. The goal is to compare different combinations of primary and maintenance (intensification) therapies to find the combination that is most beneficial. To achieve this goal, patients are initially randomized to one of several induction therapies and then, if they are eligible for the second-stage randomization, are offered randomization to one of several maintenance therapies. In practice, the analysis is usually conducted in two separate stages, which does not directly address the major objective of finding the best combination. Recently, Lunceford et al. (2002, Biometrics 58, 48-57) introduced ad hoc estimators for the Survival Distribution and mean restricted Survival time under different treatment policies. These estimators are consistent but not efficient, and do not include information from auxiliary covariates. In this study we derive estimators that are easy to compute and are more efficient than previous estimators. We also show how to improve efficiency further by taking into account additional information from auxiliary variables. Large-sample properties of these estimators are derived and comparisons with other estimators are made using simulation. We apply our estimators to a leukemia clinical trial data set that motivated this study.
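
    The two Wahed and Tsiatis entries above improve on simple weighting schemes for treatment policies in two-stage randomization designs. For orientation only, the sketch below implements the basic inverse-probability-of-second-randomization weighting (in the spirit of the Lunceford et al. estimators mentioned above) for one policy, assuming no censoring and no use of auxiliary covariates; it is roughly the kind of consistent-but-inefficient estimator these papers refine. All names and the toy data are illustrative.

        import numpy as np

        def policy_survival(T, R, Z2, p_maintenance, t):
            # Weighted estimate of P(T > t) under the policy "induction A1, then
            # maintenance B1 for responders", computed among patients randomized to
            # A1 and assuming no censoring. Non-responders get weight 1; responders
            # randomized to B1 get weight 1 / p_maintenance; responders randomized
            # to the other maintenance arm get weight 0.
            Q = (1 - R) + R * Z2 / p_maintenance
            return np.mean(Q * (T > t))

        # toy data for the A1-randomized patients: R = responded and consented to the
        # second randomization, Z2 = assigned to maintenance B1 (given R == 1)
        rng = np.random.default_rng(3)
        n = 5000
        R = rng.binomial(1, 0.6, n)
        Z2 = rng.binomial(1, 0.5, n) * R
        T = rng.exponential(np.where((R == 1) & (Z2 == 1), 14.0, 10.0))
        print(policy_survival(T, R, Z2, p_maintenance=0.5, t=8.0))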

  • A Consistent Estimator for the Distribution of Quality Adjusted Survival Time
    Biometrika, 1997
    Co-Authors: Hongwei Zhao, Anastasios A. Tsiatis
    Abstract:

    Quality-adjusted Survival analysis is a new approach to therapy evaluation in clinical trials. It has received much attention recently because of its ability to take patients' quality of life into consideration. In this paper, we present a method that enables us to calculate the Survival Distribution of quality-adjusted lifetime. Using martingale theory for counting processes, we show that our estimator is consistent and asymptotically normally distributed, and that its asymptotic variance estimate can be obtained analytically. Simulation experiments are conducted to compare our estimator with the true underlying Distribution for two cases that are of practical importance.
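
    On the quality-adjusted scale, ordinary censoring becomes informative, which is why weighting is needed. The sketch below shows the basic inverse-probability-of-censoring-weighting device under the assumption that censoring is independent of the survival and quality processes: a simple weighted estimate of the quality-adjusted-lifetime Distribution that uses only subjects with observed deaths, each reweighted by the censoring survival at their death time. This illustrates the weighting idea rather than reproducing the paper's exact construction; names and the toy data are illustrative.

        import numpy as np

        def censoring_km(obs_times, death_ind, at_times):
            # Kaplan-Meier estimate of the censoring survival K(t) = P(C > t).
            K = np.ones_like(at_times, dtype=float)
            for c in np.sort(np.unique(obs_times[death_ind == 0])):
                at_risk = np.sum(obs_times >= c)
                d = np.sum((obs_times == c) & (death_ind == 0))
                K[at_times >= c] *= 1.0 - d / at_risk
            return K

        def qal_survival(obs_times, death_ind, qal, x):
            # IPCW ("simple weighted") estimate of P(QAL > x): only subjects whose
            # death is observed have a fully known quality-adjusted lifetime, and
            # each contributes with weight 1 / K(observed death time).
            n = len(obs_times)
            K_at_T = censoring_km(obs_times, death_ind, obs_times)
            d = death_ind == 1
            return np.sum((qal[d] > x) / K_at_T[d]) / n

        # toy data: constant quality weight q per subject, so QAL = q * lifetime
        rng = np.random.default_rng(4)
        n = 4000
        T = rng.exponential(10.0, n)
        C = rng.exponential(15.0, n)
        q = rng.uniform(0.5, 1.0, n)
        X, delta = np.minimum(T, C), (T <= C).astype(int)
        print(qal_survival(X, delta, q * X, x=5.0))  # q * X equals the true QAL when delta == 1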

Sin-ho Jung - One of the best experts on this subject based on the ideXlab platform.

  • Statistical Methods for Conditional Survival Analysis.
    Journal of biopharmaceutical statistics, 2017
    Co-Authors: Sin-ho Jung, Ho Yun Lee, Shein-chung Chow
    Abstract:

    We investigate the Survival Distribution of patients who have survived beyond a certain time point, which is called a conditional Survival Distribution. In this paper, we show that one-sample estimation, two-sample comparison and regression analysis of conditional Survival Distributions can be conducted using the regular methods for unconditional Survival Distributions that are provided by standard statistical software, such as SAS and SPSS. We conduct extensive simulations to evaluate the finite-sample properties of these conditional Survival analysis methods. We illustrate the methods with real clinical data.
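
    As a small illustration of the point that conditional Survival Distributions can be handled with standard unconditional tools, the sketch below (using the Python lifelines package, assumed available, rather than SAS or SPSS) estimates P(T > t | T > t0) both as a ratio of the ordinary Kaplan-Meier curve and by refitting the Kaplan-Meier estimator on the subjects still at risk at t0 with delayed entry.

        import numpy as np
        from lifelines import KaplanMeierFitter

        def conditional_survival(durations, events, t0, t):
            # P(T > t | T > t0), computed two equivalent ways with standard tools:
            # (a) the ratio S(t) / S(t0) of the ordinary Kaplan-Meier curve, and
            # (b) an ordinary Kaplan-Meier fit restricted to subjects still at risk
            #     at t0, treating t0 as a delayed-entry (left-truncation) time.
            kmf = KaplanMeierFitter().fit(durations, event_observed=events)
            ratio = (kmf.survival_function_at_times(t).iloc[0]
                     / kmf.survival_function_at_times(t0).iloc[0])
            keep = durations > t0
            kmf2 = KaplanMeierFitter().fit(durations[keep], event_observed=events[keep],
                                           entry=np.full(keep.sum(), t0))
            return ratio, kmf2.survival_function_at_times(t).iloc[0]

        # toy data
        rng = np.random.default_rng(5)
        T = 10.0 * rng.weibull(1.5, 3000)
        C = rng.uniform(0, 20, 3000)
        X, delta = np.minimum(T, C), (T <= C).astype(int)
        print(conditional_survival(X, delta, t0=5.0, t=10.0))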

  • Sample Size Calculation for the Weighted Rank Statistics with Paired Survival Data
    Statistics in Medicine, 2008
    Co-Authors: Sin-ho Jung
    Abstract:

    This paper introduces a sample size calculation method for the weighted rank test statistics with paired two-sample Survival data. Our sample size formula requires specification of the joint Survival and censoring Distributions. For modelling the Distribution of the paired Survival variables, we may use a paired exponential Survival Distribution that is specified by the marginal hazard rates and a measure of dependency. In most trials randomizing paired subjects, the subjects of each pair are accrued and censored at the same time over an accrual period and an additional follow-up period, so that the paired subjects share a common censoring time. Under these practical settings, the design parameters include the type I and type II error probabilities, the marginal hazard rates under the alternative hypothesis, the correlation coefficient, the accrual period (or accrual rate) and the follow-up period. If pilot data are available, we may estimate the Survival Distributions from them, while the censoring Distribution is specified from the accrual pattern and follow-up period planned for the new study. Through simulations, the formula is shown to provide accurate sample sizes under practical settings. Real studies are used to demonstrate the proposed method. Copyright © 2008 John Wiley & Sons, Ltd.
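
    The paper gives a closed-form sample size formula; the sketch below only illustrates how the stated design parameters (marginal hazard rates, a dependence parameter, and accrual and follow-up periods with a common censoring time per pair) can be turned into an empirical power estimate by simulation. The dependence is induced by a Gaussian copula and the test is a simple pair-stratified comparison of which pair member fails first; both are stand-ins chosen for brevity, not the paper's paired exponential model or weighted rank statistics.

        import numpy as np
        from scipy.stats import norm

        def simulate_pair_power(n_pairs, lam1, lam2, rho, accrual, follow_up,
                                alpha=0.05, n_sim=2000, seed=0):
            # Empirical power of a sign test on which member of each pair fails
            # first (equivalent to a pair-stratified logrank), for paired
            # exponential margins linked by a Gaussian copula and a common
            # administrative censoring time from uniform accrual plus follow-up.
            rng = np.random.default_rng(seed)
            rejections = 0
            for _ in range(n_sim):
                z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n_pairs)
                u = norm.cdf(z)
                t1, t2 = -np.log(u[:, 0]) / lam1, -np.log(u[:, 1]) / lam2
                entry = rng.uniform(0, accrual, n_pairs)
                c = accrual + follow_up - entry          # common censoring time per pair
                x1, d1 = np.minimum(t1, c), (t1 <= c)
                x2, d2 = np.minimum(t2, c), (t2 <= c)
                # informative pairs: the earlier observed time is an uncensored event
                first_is_1 = x1 < x2
                first_event = np.where(first_is_1, d1, d2)
                m = first_event.sum()
                if m == 0:
                    continue
                s = (first_event & first_is_1).sum()     # first failure in arm 1
                z_stat = (s - m / 2) / np.sqrt(m / 4)
                if abs(z_stat) > norm.ppf(1 - alpha / 2):
                    rejections += 1
            return rejections / n_sim

        # e.g. 150 pairs, hazard ratio 1.5, moderate dependence, 2-year accrual, 1-year follow-up
        print(simulate_pair_power(150, lam1=0.3, lam2=0.45, rho=0.4, accrual=2.0, follow_up=1.0))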

Paula Diehr - One of the best experts on this subject based on the ideXlab platform.

  • Longitudinal Data with Follow-up Truncated by Death: Match the Analysis Method to Research Aims
    Statistical Science, 2009
    Co-Authors: Brenda F Kurland, Laura Lee Johnson, Brian L Egleston, Paula Diehr
    Abstract:

    Diverse analysis approaches have been proposed to distinguish data missing due to death from nonresponse, and to summarize trajectories of longitudinal data truncated by death. We demonstrate how these analysis approaches arise from factorizations of the joint Distribution of the longitudinal data and the Survival information. Models are illustrated using cognitive functioning data for older adults. Unconditional models assume either that deaths do not occur or that deaths are independent of the longitudinal response, or they average the unconditional longitudinal response over the Survival Distribution. Unconditional models, such as random effects models fit to unbalanced data, may implicitly impute data beyond the time of death. Fully conditional models stratify the longitudinal response trajectory by time of death, and are effective for describing individual trajectories in terms of either aging (age, or years from baseline) or dying (years from death). Causal models (principal stratification) as currently applied are fully conditional models, since group differences at one timepoint are described for a cohort that will survive past a later timepoint. Partly conditional models summarize the longitudinal response in the dynamic cohort of survivors; they are serial cross-sectional snapshots of the response, reflecting the average response in survivors at a given timepoint rather than individual trajectories. Joint models of Survival and longitudinal response describe the evolving health status of the entire cohort. Researchers using longitudinal data should consider which method of accommodating deaths is consistent with their research aims, and choose analysis methods accordingly.
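
    A toy sketch contrasting two of the summaries distinguished above for a made-up longitudinal score truncated by death (all names and data-generating choices are hypothetical): a partly conditional summary is a cross-sectional mean among the survivors at each timepoint, while a fully conditional summary stratifies trajectories by time of death.

        import numpy as np
        import pandas as pd

        # toy long-format data: id, time (years from baseline), score (only while
        # alive), death_time (years from baseline)
        rng = np.random.default_rng(6)
        n, times = 500, np.arange(0, 6)
        death_time = rng.exponential(8.0, n)
        rows = []
        for i in range(n):
            for t in times:
                if t < death_time[i]:
                    # score declines with time and declines faster as death approaches
                    score = 30 - 0.5 * t - 5.0 / max(death_time[i] - t, 0.5) + rng.normal(0, 1)
                    rows.append((i, t, score, death_time[i]))
        df = pd.DataFrame(rows, columns=["id", "time", "score", "death_time"])

        # partly conditional summary: mean score among the survivors at each timepoint
        partly_conditional = df.groupby("time")["score"].mean()

        # fully conditional summary: mean trajectories stratified by (rounded) year of death
        df["death_stratum"] = np.minimum(np.floor(df["death_time"]), 6)
        fully_conditional = df.pivot_table(index="time", columns="death_stratum",
                                           values="score", aggfunc="mean")
        print(partly_conditional)
        print(fully_conditional.round(1))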