Intermediate Variable

The Experts below are selected from a list of 46,563 Experts worldwide, ranked by the ideXlab platform.

Wanfang Shen - One of the best experts on this subject based on the ideXlab platform.

  • Two-stage DEA models with undesirable input-Intermediate-outputs
    Omega-international Journal of Management Science, 2015
    Co-Authors: Zhongbao Zhou, Wanfang Shen
    Abstract:

    Data envelopment analysis (DEA) is a non-parametric approach for measuring the relative efficiencies of peer decision making units (DMUs). Many studies have examined DEA efficiencies of two-stage systems, where all the outputs from the first stage are the only inputs to the second stage. Although single-stage DEA models with undesirable input-outputs have been extensively studied, there is still a lack of systematic investigation of two-stage DEA with undesirable Variables. For instance, depending on its operating model, even whether an Intermediate Variable is desirable or undesirable can be questionable for a particular two-stage system. Furthermore, most of the existing studies on two-stage systems focus on the case where only the final outputs are undesirable. In this work, we systematically examine two-stage DEA models with undesirable input-Intermediate-outputs. In particular, we utilize free-disposal axioms to construct the production possibility sets (PPS) and the corresponding DEA models with undesirable Variables. The proposed models are then used to illustrate some theoretical perspectives using data on China's listed banks.
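
Two-stage DEA models like the one above are built from linear programs. As a reference point only, here is a minimal sketch of the classical single-stage, input-oriented CCR envelopment model solved with SciPy; the data, and the reduction to a single stage without undesirable Variables, are illustrative assumptions rather than the authors' formulation.

```python
# A minimal sketch of the classical single-stage, input-oriented CCR DEA
# model, solved as a linear program with SciPy. The data and the reduction
# to one stage without undesirable Variables are illustrative assumptions,
# not the authors' two-stage formulation.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0],   # inputs: rows = input types, columns = DMUs
              [5.0, 4.0, 6.0]])
Y = np.array([[1.0, 2.0, 1.5]])  # outputs: rows = output types, columns = DMUs

def ccr_efficiency(o, X, Y):
    """Input-oriented CCR efficiency of DMU `o` (envelopment form)."""
    m, n = X.shape                      # m inputs, n DMUs
    s = Y.shape[0]                      # s outputs
    c = np.r_[1.0, np.zeros(n)]         # decision variables: [theta, lambda_1..lambda_n]
    # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[:, [o]], X]
    b_in = np.zeros(m)
    # Outputs: -sum_j lambda_j * y_rj <= -y_ro  (produce at least DMU o's outputs)
    A_out = np.c_[np.zeros((s, 1)), -Y]
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")
```

Each DMU's efficiency is the smallest factor theta by which its inputs can be contracted while a nonnegative combination of all DMUs still produces at least its outputs; the authors' two-stage formulations extend this idea to stages linked by Intermediate Variables, with the production possibility set modified by free-disposal axioms for the undesirable Variables.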

Kyle Steenland - One of the best experts on this subject based on the ideXlab platform.

  • 0363 Marginal structural models to account for time-varying confounding Variables and the healthy worker survivor effect, for a mini-symposium on dynamics of exposure and disease organised by Vermeulen
    Occupational and Environmental Medicine, 2014
    Co-Authors: Kyle Steenland
    Abstract:

    Objectives: Marginal structural models (MSMs) in longitudinal studies are needed when time-varying confounders are themselves predicted by previous exposure and are Intermediate Variables on the pathway between exposure and disease. The epidemiologist is left with the unenviable choice of adjusting or not adjusting for the confounder/Intermediate Variable. An example would be whether aspirin decreases cardiovascular mortality, where the confounder/Intermediate Variable is cardiovascular morbidity. Methods: MSMs use inverse-probability weights based on an ‘exposure’ model that assesses the probability that each subject received their own exposure history, given their confounder history up to time t, with the follow-up period divided into T (t=1 to T) categories. These weights are then used in standard regression models (e.g., pooled logistic regression models across the T categories) relating exposure to disease. Their use creates a pseudo-population in which time-varying confounding is eliminated. Results: Empirical results show that standard methods of controlling for time-varying confounders can result in bias towards the null compared to MSMs. A recent simulation study showed that MSMs lead to unbiased results under a variety of assumptions. Conclusions: Some studies have used somewhat different but related methods (“g-estimation”) to account for the healthy worker survivor effect, where employment status is a time-varying confounder that predicts future exposure and may predict disease, but may also act as an Intermediate Variable because prior exposure may cause illness that results in leaving employment. Here we will present an overview of MSMs and the related g-estimation models.
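
The weighting step described above can be sketched in a few lines. The following is a simplified, single-time-point illustration with simulated data; a real MSM analysis multiplies such weights over all follow-up intervals and fits a pooled logistic regression across person-periods. All variable names and the data-generating model are assumptions made for the example.

```python
# Simplified inverse-probability-weighting sketch for a marginal structural
# model: one time point, one measured confounder, simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
confounder = rng.normal(size=n)                          # e.g. prior morbidity
exposure = rng.binomial(1, 1 / (1 + np.exp(-confounder)))
outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * exposure + confounder))))

# 'Exposure' model: probability of the exposure each subject actually
# received, given the measured confounder.
ps_fit = sm.Logit(exposure, sm.add_constant(confounder)).fit(disp=False)
p_exposed = ps_fit.predict()                             # P(exposure = 1 | confounder)
p_received = np.where(exposure == 1, p_exposed, 1 - p_exposed)

# Stabilised weights: marginal probability of the received exposure in the
# numerator, conditional probability in the denominator.
p_marginal = np.where(exposure == 1, exposure.mean(), 1 - exposure.mean())
weights = p_marginal / p_received

# Weighted ("pseudo-population") outcome model: outcome on exposure alone.
msm = sm.GLM(outcome, sm.add_constant(exposure),
             family=sm.families.Binomial(),
             freq_weights=weights).fit()
print("weighted log-odds ratio for exposure:", msm.params[1])
```

In the weighted pseudo-population the exposure is no longer associated with the measured confounder, so the coefficient on exposure can be read as a marginal, population-level effect.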

Yasutaka Chiba - One of the best experts on this subject based on the ideXlab platform.

  • Bounds on controlled direct effects under monotonic assumptions about mediators and confounders.
    Biometrical journal. Biometrische Zeitschrift, 2010
    Co-Authors: Yasutaka Chiba
    Abstract:

    Adjusting for Intermediate Variables is a common analytic strategy for estimating a direct effect. Even if the total effect is unconfounded, the direct effect is not identified when unmeasured Variables affect the Intermediate and outcome Variables. Some researchers have therefore presented bounds on controlled direct effects via linear programming, applying a monotonic assumption about the treatment and Intermediate Variables and a no-interaction assumption to derive narrower bounds. Here, we improve their bounds without using linear programming and hence derive a bound under the monotonic assumption about the Intermediate Variable only. To improve the bounds further, we introduce a monotonic assumption about confounders. Whereas previous studies assumed that the outcome is a binary Variable, we do not make that assumption. The proposed bounds are illustrated using two examples from randomized trials. (A notational sketch of the controlled direct effect appears after the last abstract in this list.)

  • Bias Analysis for The Principal Stratum Direct Effect in The Presence of Confounded Intermediate Variables
    Journal of biometrics & biostatistics, 2010
    Co-Authors: Yasutaka Chiba
    Abstract:

    In epidemiological and clinical research, investigators often want to estimate the direct effect of a treatment on an outcome, i.e., the effect that is not relayed by Intermediate Variables. Even if the total effect is unconfounded, the direct effect is not identified when unmeasured Variables affect the Intermediate and outcome Variables. This article focuses on the principal stratum direct effect (PSDE) of a randomized treatment, which is the difference between expectations of potential outcomes within latent subgroups of subjects for whom the Intermediate Variable would be constant regardless of the randomized treatment assignment (see the notation sketch after this list). Unfortunately, the PSDE generally cannot be estimated without bias in the absence of untestable conditions, even if monotonicity is assumed. Thus, we propose bounds and a simple method of sensitivity analysis for the PSDE under a monotonicity assumption. To develop them, we introduce sensitivity parameters, defined as the difference in potential outcomes, at the same value of the Intermediate Variable, between subjects assigned to the treatment group and those assigned to the control group. Investigators can use the proposed method without complex computer programming. The method is illustrated using a randomized trial for coronary heart disease.

  • Estimating the principal stratum direct effect when the total effects are consistent between two standard populations
    Statistics & Probability Letters, 2010
    Co-Authors: Yasutaka Chiba
    Abstract:

    Adjusting for an Intermediate Variable is a common analytic strategy for estimating a direct effect. Even if the total effect is unconfounded, the direct effect is not identified when unmeasured Variables affect the Intermediate and outcome Variables. This paper focuses on the application of the principal stratification approach for estimating the direct effect of a randomized treatment. The approach is used to evaluate the direct effect of treatment as the difference between the expectations of potential outcomes within latent subgroups of subjects for which the Intermediate Variable would be constant, regardless of the randomized treatment assignment. To derive an estimator of the direct effect in cases in which the treatment and Intermediate Variables are dichotomous, we assume that the total effects are consistent between two standard populations. This assumption implies that the total effects are equal between two subpopulations with the same treatment assignment and a different Intermediate behavior, or the total effects are equal between two subpopulations with a different treatment assignment and the same Intermediate behavior. We show that the direct effect corresponds to the standard intention-to-treat effect under this assumption.
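
The three abstracts above reason about direct effects when the Intermediate Variable may be confounded. As the compact reference promised there, here is a hedged sketch in standard counterfactual notation (the papers' own symbols may differ) of the controlled direct effect, the principal stratum direct effect, and one plausible reading of the "consistent total effects" assumption used in the last abstract.

```latex
% Notation (an assumption; the papers' own symbols may differ):
%   Z = randomised treatment, M = intermediate variable,
%   M(z) = potential intermediate under assignment z,
%   Y(z) = potential outcome under assignment z,
%   Y(z,m) = potential outcome under assignment z with the intermediate set to m.

% Controlled direct effect at level m of the intermediate:
\[
  \mathrm{CDE}(m) = E\bigl[Y(1,m)\bigr] - E\bigl[Y(0,m)\bigr].
\]

% Principal stratum direct effect: the treatment effect within the latent
% stratum whose intermediate would take the same value m under either
% assignment, so that none of the effect is relayed through M:
\[
  \mathrm{PSDE}(m) = E\bigl[\,Y(1) - Y(0) \mid M(0) = M(1) = m\,\bigr].
\]

% One plausible reading of the "consistent total effects" assumption for a
% dichotomous intermediate (the exact statement in the paper may differ):
\[
  E\bigl[\,Y(1) - Y(0) \mid Z = z,\, M = 1\,\bigr]
  = E\bigl[\,Y(1) - Y(0) \mid Z = z,\, M = 0\,\bigr],
  \qquad z \in \{0, 1\},
\]
% or, in the alternative reading, equality across the two treatment arms at a
% fixed level of M; under such an assumption the last abstract's claim is that
% the PSDE reduces to the intention-to-treat contrast E[Y | Z=1] - E[Y | Z=0].
```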

Qingyu Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Supply chain collaboration: Impact on collaborative advantage and firm performance
    Journal of Operations Management, 2011
    Co-Authors: Qingyu Zhang
    Abstract:

    Facing uncertain environments, firms have strived to achieve greater supply chain collaboration to leverage the resources and knowledge of their suppliers and customers. The objective of the study is to uncover the nature of supply chain collaboration and explore its impact on firm performance based on a paradigm of collaborative advantage. Reliable and valid instruments of these constructs were developed through rigorous empirical analysis. Data were collected through a Web survey of U.S. manufacturing firms in various industries. The statistical methods used include confirmatory factor analysis and structural equation modeling (i.e., LISREL). The results indicate that supply chain collaboration improves collaborative advantage and indeed has a bottom-line influence on firm performance, and that collaborative advantage is an Intermediate Variable that enables supply chain partners to achieve synergies and create superior performance. A further analysis of the moderation effect of firm size reveals that collaborative advantage completely mediates the relationship between supply chain collaboration and firm performance for small firms, while it partially mediates the relationship for medium and large firms.
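
The mediation claim above can be illustrated with a far simpler tool than LISREL. The sketch below uses ordinary least squares on simulated data to separate the total, direct, and indirect (through the mediator) effects; the variable names, coefficients, and data are illustrative assumptions, not the study's measures.

```python
# A minimal mediation sketch with ordinary least squares on simulated data,
# standing in for the LISREL structural equation models the study actually
# used. Variable names, coefficients, and data are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
collaboration = rng.normal(size=n)                     # supply chain collaboration
advantage = 0.6 * collaboration + rng.normal(size=n)   # collaborative advantage (mediator)
performance = 0.5 * advantage + 0.1 * collaboration + rng.normal(size=n)

# Total effect: performance on collaboration alone.
total = sm.OLS(performance, sm.add_constant(collaboration)).fit()
# Path a: mediator on collaboration.
path_a = sm.OLS(advantage, sm.add_constant(collaboration)).fit()
# Direct effect (and path b): performance on collaboration and the mediator.
direct = sm.OLS(performance,
                sm.add_constant(np.column_stack([collaboration, advantage]))).fit()

indirect = path_a.params[1] * direct.params[2]         # a * b
print("total effect:   ", total.params[1])
print("direct effect:  ", direct.params[1])
print("indirect effect:", indirect)
# Complete mediation: the direct effect shrinks to about zero once the
# mediator is included; partial mediation: both paths remain nonzero.
```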

Sandra A. Slaughter - One of the best experts on this subject based on the ideXlab platform.

  • Software development practices, software complexity, and software maintenance performance: A field study
    Management Science, 1998
    Co-Authors: Rajiv D. Banker, Gordon B. Davis, Sandra A. Slaughter
    Abstract:

    Software maintenance claims a large proportion of organizational resources. It is thought that many maintenance problems derive from inadequate software design and development practices. Poor design choices can result in complex software that is costly to support and difficult to change. However, it is difficult to assess the actual maintenance performance effects of software development practices because their impact is realized over the software life cycle. To estimate the impact of development activities in a more practical time frame, this research develops a two-stage model in which software complexity is a key Intermediate Variable that links design and development decisions to their downstream effects on software maintenance. The research analyzes data collected from a national mass merchandising retailer on 29 software enhancement projects and 23 software applications in a large IBM COBOL environment. Results indicate that the use of a code generator in development is associated with increased software complexity and software enhancement project effort. The use of packaged software is associated with decreased software complexity and software enhancement effort. These results suggest an important link between software development practices and maintenance performance.
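
A minimal way to picture the two-stage model described above: stage one regresses software complexity on development practices, and stage two regresses enhancement effort on complexity, the Intermediate Variable. The sketch below uses simulated data with hypothetical variable names (code_generator, packaged_sw); it illustrates the structure of the analysis, not the study's actual estimation.

```python
# A minimal sketch of the two-stage structure: development practices predict
# software complexity (stage 1), and complexity predicts enhancement effort
# (stage 2). Variable names and simulated data are hypothetical stand-ins
# for the study's field data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 60
code_generator = rng.binomial(1, 0.4, size=n).astype(float)  # 1 = code generator used
packaged_sw = rng.binomial(1, 0.3, size=n).astype(float)     # 1 = packaged software used
complexity = 1.0 + 0.8 * code_generator - 0.6 * packaged_sw + rng.normal(scale=0.5, size=n)
effort = 10.0 + 4.0 * complexity + rng.normal(scale=2.0, size=n)

# Stage 1: complexity as a function of development practices.
stage1 = sm.OLS(complexity,
                sm.add_constant(np.column_stack([code_generator, packaged_sw]))).fit()
# Stage 2: enhancement effort as a function of the Intermediate Variable.
stage2 = sm.OLS(effort, sm.add_constant(complexity)).fit()

print("stage 1 (complexity ~ practices):", stage1.params)
print("stage 2 (effort ~ complexity):   ", stage2.params)
```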