Steady State Simulation

The Experts below are selected from a list of 68,568 Experts worldwide, ranked by the ideXlab platform

James R. Wilson - One of the best experts on this subject based on the ideXlab platform.

  • Optimal Linear Combinations of Overlapping Variance Estimators for Steady-State Simulation
    Advancing the Frontiers of Simulation, 2009
    Co-Authors: Tuba Aktaran-kalayci, David Goldsman, Christos Alexopoulos, James R. Wilson
    Abstract:

    To estimate the variance parameter (i.e., the sum of covariances at all lags) of a Steady-State Simulation output process, we formulate an optimal linear combination of overlapping variance estimators (OLCOVE). Each variance estimator is computed from the same data set using one of the following methods: (i) overlapping batch means (OBM); or (ii) standardized time series (STS) applied to overlapping batches separately and then averaged over all such batches. Each estimator’s batch size is a fixed real multiple (at least unity) of a base batch size, appropriately rounded. The overall sample size is a fixed integral multiple of the base batch size. Exploiting the control-variates method, we assign OLCOVE coefficients so as to yield a minimum-variance estimator. We establish asymptotic properties of the bias and variance of OLCOVEs computed from OBM or STS variance estimators as the base batch size increases. Finally, we use OLCOVEs to construct confidence intervals for both the mean and the variance parameter of the target process. An experimental performance evaluation revealed the potential benefits of using OLCOVEs for Steady-State Simulation analysis.
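
    As a rough illustration of the ingredients above, the sketch below (Python with NumPy, which is an assumption on my part, not something stated in the paper) computes the overlapping-batch-means (OBM) estimator of the variance parameter and then forms a linear combination of OBM estimators obtained at batch sizes that are fixed multiples of a base batch size. The optimal OLCOVE weights are derived in the paper via the control-variates method; here the weights are just placeholders supplied by the caller.

    import numpy as np

    def obm_variance(y, m):
        """Overlapping-batch-means estimator of the variance parameter
        (the sum of covariances at all lags) for batch size m."""
        y = np.asarray(y, dtype=float)
        n = len(y)
        ybar = y.mean()
        c = np.concatenate(([0.0], np.cumsum(y)))
        bm = (c[m:] - c[:-m]) / m          # all n - m + 1 overlapping batch means
        return n * m * np.sum((bm - ybar) ** 2) / ((n - m + 1) * (n - m))

    def linear_combination(y, base_m, multiples, weights):
        """Illustrative linear combination of OBM estimators at batch sizes that
        are fixed real multiples (at least unity) of a base batch size.
        The paper's OLCOVE weights minimise variance; `weights` here is a stand-in."""
        ests = [obm_variance(y, int(round(k * base_m))) for k in multiples]
        return float(np.dot(weights, ests))

    For instance, linear_combination(y, 256, (1, 1.5, 2), (0.5, 0.3, 0.2)) combines OBM estimators computed at batch sizes 256, 384 and 512 from the same data set.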

  • SBatch: A spaced batch means procedure for Steady-State Simulation analysis
    Journal of Simulation, 2008
    Co-Authors: Emily K. Lada, Natalie M. Steiger, James R. Wilson
    Abstract:

    We discuss SBatch, a simplified procedure for Steady-State Simulation analysis that is based on spaced batch means, incorporating many advantages of its predecessors ASAP3 and WASSP while avoiding many of their disadvantages. SBatch is a sequential procedure designed to produce a confidence-interval (CI) estimator for the Steady-State mean response that satisfies user-specified precision and coverage-probability requirements. First SBatch determines a batch size and an interbatch spacer size such that beyond the initial spacer, the spaced batch means approximately form a stationary first-order autoregressive process whose lag-one correlation does not significantly exceed 0.8. Next SBatch delivers a correlation-adjusted CI based on the sample variance and lag-one correlation of the spaced batch means as well as the grand mean of all the individual observations beyond the initial spacer. In an experimental evaluation on a broad range of test problems, SBatch compared favourably with ASAP3 and WASSP.
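
    A minimal sketch of the spaced-batch-means idea, assuming Python with NumPy and SciPy (my choice of tools, not the authors'): form batch means separated by spacers, estimate their lag-one correlation, and widen the usual batch-means CI by the AR(1) factor (1 + phi)/(1 - phi). SBatch's actual rules for choosing the batch size, spacer size and warm-up period are considerably more elaborate than this.

    import numpy as np
    from scipy import stats

    def spaced_batch_means(y, m, s, warmup):
        """Batch means of size m separated by spacers of size s,
        taken after an initial warm-up spacer of length `warmup`."""
        y = np.asarray(y, dtype=float)[warmup:]
        means, i = [], 0
        while i + m <= len(y):
            means.append(y[i:i + m].mean())
            i += m + s
        return np.array(means)

    def corr_adjusted_ci(y, m, s, warmup, alpha=0.05):
        """Correlation-adjusted CI in the spirit of SBatch: inflate the usual
        batch-means variance by (1 + phi)/(1 - phi), where phi is the lag-one
        correlation of the spaced batch means (illustrative only)."""
        bm = spaced_batch_means(y, m, s, warmup)
        k = len(bm)
        phi = np.corrcoef(bm[:-1], bm[1:])[0, 1]
        var = bm.var(ddof=1) / k * (1 + phi) / (1 - phi)
        h = stats.t.ppf(1 - alpha / 2, k - 1) * np.sqrt(var)
        grand = np.asarray(y, dtype=float)[warmup:].mean()
        return grand - h, grand + h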

  • Performance evaluation of recent procedures for Steady-State Simulation analysis
    IIE Transactions, 2006
    Co-Authors: Emily K. Lada, Natalie M. Steiger, James R. Wilson
    Abstract:

    The performance of the batch-means procedure ASAP3 and the spectral procedure WASSP is evaluated on test problems with characteristics typical of practical applications of Steady-State Simulation analysis procedures. ASAP3 and WASSP are sequential procedures designed to produce a confidence-interval estimator for the mean response that satisfies user-specified half-length and coverage-probability requirements. ASAP3 is based on an inverse Cornish-Fisher expansion for the classical batch-means t-ratio, whereas WASSP is based on a wavelet estimator of the batch-means power spectrum. Regarding closeness of the empirical coverage probability and average half-length of the delivered confidence intervals to their respective nominal levels, both procedures compared favorably with the Law-Carson procedure and the original ASAP algorithm. Regarding the average sample sizes required for decreasing levels of maximum confidence-interval half-length, ASAP3 and WASSP exhibited reasonable efficiency in the test problems.

  • A wavelet-based spectral procedure for Steady-State Simulation analysis
    European Journal of Operational Research, 2006
    Co-Authors: Emily K. Lada, James R. Wilson
    Abstract:

    We develop WASSP, a wavelet-based spectral method for Steady-State Simulation analysis. First WASSP determines a batch size and a warm-up period beyond which the computed batch means form an approximately stationary Gaussian process. Next WASSP computes the discrete wavelet transform of the bias-corrected log-smoothed-periodogram of the batch means, using a soft-thresholding scheme to denoise the estimated wavelet coefficients. Then taking the inverse discrete wavelet transform of the thresholded wavelet coefficients, WASSP computes estimators of the batch means log-spectrum and the Steady-State variance parameter (i.e., the sum of covariances at all lags) for the original (unbatched) process. Finally by combining the latter estimator with the batch means grand average, WASSP provides a sequential procedure for constructing a confidence interval on the Steady-State mean that satisfies user-specified requirements concerning absolute or relative precision as well as coverage probability. An experimental performance evaluation demonstrates WASSP’s effectiveness compared with other Simulation analysis methods.
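
    The following sketch, assuming Python with NumPy and PyWavelets (libraries not mentioned in the paper), shows only the central mechanism: take the log of the batch-means periodogram, soft-threshold its discrete wavelet transform, and invert the transform to obtain a smoothed log-spectrum. WASSP's bias corrections, thresholding rules, and the resulting variance-parameter and confidence-interval estimators are omitted.

    import numpy as np
    import pywt

    def denoised_log_spectrum(batch_means, wavelet="db4", level=4):
        """Soft-threshold the wavelet coefficients of the log-periodogram of the
        batch means and return the reconstructed (smoothed) log-spectrum."""
        x = np.asarray(batch_means, dtype=float) - np.mean(batch_means)
        per = np.abs(np.fft.rfft(x)) ** 2 / len(x)       # raw periodogram
        logp = np.log(per[1:] + 1e-12)                   # drop the zero frequency
        coeffs = pywt.wavedec(logp, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale from finest level
        thr = sigma * np.sqrt(2 * np.log(len(logp)))     # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[:len(logp)]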

  • Stochastics and Statistics: A wavelet-based spectral procedure for Steady-State Simulation analysis
    2006
    Co-Authors: Emily K. Lada, James R. Wilson
    Abstract:

    We develop WASSP, a wavelet-based spectral method for Steady-State Simulation analysis. First WASSP determines a batch size and a warm-up period beyond which the computed batch means form an approximately stationary Gaussian process. Next WASSP computes the discrete wavelet transform of the bias-corrected log-smoothed-periodogram of the batch means, using a soft-thresholding scheme to denoise the estimated wavelet coefficients. Then taking the inverse discrete wavelet transform of the thresholded wavelet coefficients, WASSP computes estimators of the batch means log-spectrum and the Steady-State variance parameter (i.e., the sum of covariances at all lags) for the original (unbatched) process. Finally by combining the latter estimator with the batch means grand average, WASSP provides a sequential procedure for constructing a confidence interval on the Steady-State mean that satisfies user-specified requirements concerning absolute or relative precision as well as coverage probability. An experimental performance evaluation demonstrates WASSP’s effectiveness compared with other Simulation analysis methods.

Krzysztof Pawlikowski - One of the best experts on this subject based on the ideXlab platform.

  • Some effects of transient deletion on sequential Steady-State Simulation
    Simulation Modelling Practice and Theory, 2010
    Co-Authors: Donald C. Mcnickle, Gregory Ewing, Krzysztof Pawlikowski
    Abstract:

    In discrete event Steady-State Simulation, deleting the initial transient phase of the Simulation is usually recommended in order to reduce bias in the results. Various heuristics and tests have been proposed to determine how many observations to delete. The plummeting cost of Simulation, combined with uncertainties about the overall reliability of transient methods, suggests revisiting the notion that deletion is essential. We consider this in a framework of sequential Simulation, where the Simulation is run until a pre-specified accuracy of the results is reached. Our results show that for run lengths required for commonly used levels of accuracy, there is no substantial difference in point or interval estimates of means due to deleting the initial transient for the models we consider. However, in sequential Simulation, deleting the initial transient turns out to have considerable value in reducing the risk that the Simulation stops too early, thus ensuring that the accuracy of the final results is closer to that specified by the decision-maker.
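
    To make the experimental set-up concrete, here is a hedged Python sketch (NumPy and SciPy assumed; the function name, batch size, and stopping constants are mine, not the authors') of a sequential run of an M/M/1 queue that stops once a naive batch-means CI reaches a specified relative precision, with or without deletion of the first batch as an initial-transient heuristic.

    import numpy as np
    from scipy import stats

    def sequential_mm1_mean(rho, delete_warmup, rel_prec=0.05, batch=1000,
                            alpha=0.05, max_batches=5000, seed=1):
        """Sequentially estimate the Steady-State mean wait in an M/M/1 queue
        (service rate 1, traffic intensity rho, true mean rho/(1-rho)); stop when
        the relative half-width of a batch-means CI falls below rel_prec."""
        rng = np.random.default_rng(seed)
        w, s_prev, waits = 0.0, 0.0, []
        for _ in range(max_batches):
            a = rng.exponential(1.0 / rho, batch)     # interarrival times
            s = rng.exponential(1.0, batch)           # service times
            chunk = np.empty(batch)
            for i in range(batch):                    # Lindley recursion
                w = max(0.0, w + s_prev - a[i])
                chunk[i] = w
                s_prev = s[i]
            waits.append(chunk)
            y = np.concatenate(waits)
            if delete_warmup:
                y = y[batch:]                         # drop the first batch
            k = len(y) // batch
            if k < 10:
                continue
            bm = y[:k * batch].reshape(k, batch).mean(axis=1)
            half = stats.t.ppf(1 - alpha / 2, k - 1) * bm.std(ddof=1) / np.sqrt(k)
            if half <= rel_prec * abs(bm.mean()):
                return bm.mean(), half, k * batch
        raise RuntimeError("requested precision not reached")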

  • VALUETOOLS - Detecting the duration of initial transient in Steady State Simulation of arbitrary performance measures
    Proceedings of the 2nd International ICST Conference on Performance Evaluation Methodologies and Tools, 2007
    Co-Authors: Mirko Eickhoff, Don Mcnickle, Krzysztof Pawlikowski
    Abstract:

    The issue of the initial transient phase in Steady State Simulation has been widely discussed in the Simulation literature. Many methods have been proposed for deciding the duration of this phase of the Simulation, to determine a valid truncation point for the transient portion of the output data. However, practically all these methods can only be used in Simulations aimed at estimation of mean values. In this paper, we show that analyses of performance measures which do not represent mean values require different solutions, as the rate of convergence to Steady State is different for mean values than, for example, for quantiles. We describe and present additional results for a new method of determining the duration of the initial transient phase which can be applied in the analysis of Steady State quantiles and probability distributions. The method appears robust and applicable in the analysis of arbitrary performance measures.
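
    A small illustration, in Python with NumPy (an assumption, not taken from the paper), of why mean-oriented truncation rules can mislead for other measures: track running estimates of the mean and of an upper quantile of the output series and compare how quickly each settles.

    import numpy as np

    def running_mean_and_quantile(y, q=0.95, step=500):
        """Return running estimates of the mean and the q-quantile computed on
        growing prefixes of the output series; the quantile typically needs a
        longer run to stabilise than the mean (illustrative only)."""
        y = np.asarray(y, dtype=float)
        idx = np.arange(step, len(y) + 1, step)
        means = np.array([y[:i].mean() for i in idx])
        quants = np.array([np.quantile(y[:i], q) for i in idx])
        return idx, means, quants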

  • Distributed Steady-State Simulation of telecommunication networks with self-similar teletraffic
    Simulation Modelling Practice and Theory, 2005
    Co-Authors: Hae-duck J. Jeong, Donald C. Mcnickle, Jongsuk Ruth Lee, Krzysztof Pawlikowski
    Abstract:

    Recent measurement studies of teletraffic data in modern telecommunication networks have shown that self-similar processes may provide better models of teletraffic than Poisson processes. If this is not taken into account, it can lead to inaccurate conclusions about the performance of telecommunication networks. We show how self-similar arrival processes influence the run-length of a distributed Steady-State Simulation of queueing systems in telecommunication networks. For this purpose, the Simulation run-lengths of SSM/M/1/∞ queueing systems, analysed by the method of batch means to estimate Steady-State mean waiting times, are compared with the results obtained from Simulations of M/M/1/∞ queueing systems when a single processor and multiple processors are used. We also investigate the speedup of stochastic Simulation of SSM/M/1/∞ queueing systems on multiple processors under a scenario of distributed stochastic Simulation known as MRIP (Multiple Replications In Parallel), in a local area network (LAN) environment running the Solaris operating system. We show that, assuming self-similar inter-event processes (i.e., SSM/M/1/∞ queueing systems), many more observations are required to obtain the final Simulation results with a required precision as the value of the Hurst parameter H increases than when assuming Poisson models exhibiting short-range dependence (i.e., M/M/1/∞ queueing systems), both on a single processor and on multiple processors. Our results also show that the time needed to collect large numbers of observations under the MRIP scenario is clearly reduced as traffic intensity and the value of the Hurst parameter increase, and as the number of engaged processors increases from one to four. In particular, the value of H influences the speedup much more than the traffic intensity or the number of engaged processors.
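
    The MRIP scenario itself is easy to sketch: several processors each run an independent replication and a central analyser pools the results. The Python sketch below (NumPy and the standard multiprocessing module, both my assumptions) uses plain M/M/1 replications; generating genuinely self-similar (SSM) arrival streams, as studied in the paper, is not attempted here.

    import numpy as np
    from multiprocessing import Pool

    def one_replication(args):
        """One independent replication: mean waiting time of an M/M/1 queue.
        (A self-similar SSM/M/1 arrival stream would replace the exponential
        interarrival times in the paper's setting.)"""
        seed, rho, n = args
        rng = np.random.default_rng(seed)
        a = rng.exponential(1.0 / rho, n)
        s = rng.exponential(1.0, n)
        w = np.empty(n)
        w[0] = 0.0
        for i in range(1, n):                         # Lindley recursion
            w[i] = max(0.0, w[i - 1] + s[i - 1] - a[i])
        return w.mean()

    if __name__ == "__main__":
        # MRIP idea: each processor runs its own replication; the replication
        # means are pooled by a central analyser.
        reps = [(seed, 0.8, 200_000) for seed in range(4)]
        with Pool(4) as pool:
            means = pool.map(one_replication, reps)
        print(np.mean(means), np.std(means, ddof=1) / np.sqrt(len(means)))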

  • Coverage of confidence intervals in sequential Steady-State Simulation
    Simulation Practice and Theory, 1998
    Co-Authors: Krzysztof Pawlikowski, Donald C. Mcnickle, Gregory Ewing
    Abstract:

    Stochastic discrete-event Simulation has become one of the most-used tools for performance evaluation in science and engineering. But no innovation can replace the responsibility of simulators for obtaining credible results from their Simulation experiments. In this paper we address the problem of the statistical correctness of Simulation output data analysis, in the context of sequential Steady-State stochastic Simulation, conducted for studying the long-run behavior of stable systems. Such Simulations are stopped as soon as the relative precision of estimates, defined as the relative half-width of confidence intervals at a specified confidence level, reaches the required level. We formulate basic rules for the proper experimental analysis of the coverage of Steady-State interval estimators. Our main argument is that such an analysis should be done sequentially. The numerical results of our coverage analysis of the method of non-overlapping batch means and spectral analysis are presented, and compared with those obtained by the traditional, non-sequential approach. Two scenarios for stochastic Simulation are considered: traditional sequential Simulation on a single processor, and fast concurrent Simulation based on multiple replications in parallel (MRIP), with multiple processors cooperating in the production of output data.
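
    The coverage analysis described above can be mimicked with a short Python harness (NumPy and SciPy assumed; function names and constants are mine): simulate a test process with a known Steady-State mean, run a CI procedure many times on independent runs, and record the fraction of delivered intervals that cover the true mean. Any sequential rule could be substituted for the deliberately simple procedure shown here.

    import numpy as np
    from scipy import stats

    def ar1(phi, n, rng):
        """Stationary AR(1) with mean 0, a standard test process whose
        Steady-State mean is known exactly."""
        y = np.empty(n)
        y[0] = rng.normal(0.0, 1.0 / np.sqrt(1 - phi ** 2))
        for i in range(1, n):
            y[i] = phi * y[i - 1] + rng.normal()
        return y

    def naive_ci(rng, phi=0.9, n=100_000, m=1000, alpha=0.05):
        """A simple batch-means CI used as the procedure under study."""
        y = ar1(phi, n, rng)
        k = n // m
        bm = y.reshape(k, m).mean(axis=1)
        h = stats.t.ppf(1 - alpha / 2, k - 1) * bm.std(ddof=1) / np.sqrt(k)
        return bm.mean() - h, bm.mean() + h

    def empirical_coverage(procedure, true_mean=0.0, runs=200, seed=0):
        """Fraction of independent runs whose delivered CI covers the known mean;
        the paper argues this analysis should itself be carried out sequentially."""
        rng = np.random.default_rng(seed)
        hits = sum(lo <= true_mean <= hi for lo, hi in
                   (procedure(rng) for _ in range(runs)))
        return hits / runs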

  • Steady State Simulation of queueing processes: survey of problems and solutions
    ACM Computing Surveys, 1990
    Co-Authors: Krzysztof Pawlikowski
    Abstract:

    For years computer-based stochastic Simulation has been a commonly used tool in the performance evaluation of various systems. Unfortunately, the results of Simulation studies quite often have little credibility, since they are presented without regard to their random nature and the need for proper statistical analysis of Simulation output data. This paper discusses the main factors that can affect the accuracy of stochastic Simulations designed to give insight into the Steady-State behavior of queuing processes. The problems of correctly starting and stopping such Simulation experiments to obtain the required statistical accuracy of the results are addressed. In this survey of possible solutions, the emphasis is put on possible applications of sequential analysis of output data, which adaptively decides whether to continue a Simulation experiment until the required accuracy of the results is reached. A suitable solution for deciding upon the starting point of a Steady-State analysis and two techniques for obtaining the final Simulation results to a required level of accuracy are presented, together with pseudocode implementations.
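
    For the question of where to start the Steady-State analysis, one family of truncation heuristics picks the deletion point that minimises an estimate of the mean squared error of the truncated sample mean. The Python/NumPy sketch below is an MSER-style rule offered purely as an illustration; it is not the specific solution presented in the survey.

    import numpy as np

    def mser_truncation(y, max_frac=0.5):
        """Choose the deletion point d that minimises the estimated variance of
        the sample mean of the retained observations, searching over the first
        half of the run (illustrative MSER-style heuristic)."""
        y = np.asarray(y, dtype=float)
        n = len(y)
        best_d, best_val = 0, np.inf
        for d in range(int(max_frac * n)):
            tail = y[d:]
            val = tail.var(ddof=0) / len(tail)    # proxy for Var of truncated mean
            if val < best_val:
                best_d, best_val = d, val
        return best_d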

David Goldsman - One of the best experts on this subject based on the ideXlab platform.

  • Steady-State Simulation with Replication-Dependent Initial Transients: Analysis and Examples
    INFORMS Journal on Computing, 2013
    Co-Authors: Nilay Tanık Argon, Christos Alexopoulos, Sigrún Andradóttir, David Goldsman
    Abstract:

    The replicated batch means (RBM) method for Steady-State Simulation output analysis generalizes both the independent replications (IR) and batch means (BM) methods. We analyze the performance of RBM in situations where the underlying stochastic process possesses an additive initial transient. Our analysis differs from prior work in that the initial transient is stochastic, and hence the sample paths of the transient process may be replication dependent, and possibly also correlated across replications. We provide asymptotic expressions for the mean and variance of the RBM estimators of the Steady-State mean and variance parameter of the stochastic process being simulated. We then use our results to study the performance of RBM as a function of the number of replications, initialization method for the replications, and decay rate of the associated initialization bias. Our results provide guidance on when IR, BM, or a combination thereof is the best choice, and also on effective choices of initial states for the replications.
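
    A bare-bones replicated-batch-means computation, in Python with NumPy (my assumption), showing how RBM reduces to batch means with one replication and to independent replications when the batch size equals the run length; the paper's treatment of replication-dependent transients is not reproduced here.

    import numpy as np

    def rbm_estimators(reps, m):
        """`reps` is a list of r output series (one per replication); each is cut
        into nonoverlapping batches of size m. The grand mean of all r*b batch
        means estimates the Steady-State mean, and m times their pooled sample
        variance estimates the variance parameter."""
        bms = []
        for y in reps:
            y = np.asarray(y, dtype=float)
            k = len(y) // m
            bms.append(y[:k * m].reshape(k, m).mean(axis=1))
        bm = np.concatenate(bms)
        return bm.mean(), m * bm.var(ddof=1)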

  • Optimal Linear Combinations of Overlapping Variance Estimators for Steady-State Simulation
    Advancing the Frontiers of Simulation, 2009
    Co-Authors: Tuba Aktaran-kalayci, David Goldsman, Christos Alexopoulos, James R. Wilson
    Abstract:

    To estimate the variance parameter (i.e., the sum of covariances at all lags) of a Steady-State Simulation output process, we formulate an optimal linear combination of overlapping variance estimators (OLCOVE). Each variance estimator is computed from the same data set using one of the following methods: (i) overlapping batch means (OBM); or (ii) standardized time series (STS) applied to overlapping batches separately and then averaged over all such batches. Each estimator’s batch size is a fixed real multiple (at least unity) of a base batch size, appropriately rounded. The overall sample size is a fixed integral multiple of the base batch size. Exploiting the control-variates method, we assign OLCOVE coefficients so as to yield a minimum-variance estimator. We establish asymptotic properties of the bias and variance of OLCOVEs computed from OBM or STS variance estimators as the base batch size increases. Finally, we use OLCOVEs to construct confidence intervals for both the mean and the variance parameter of the target process. An experimental performance evaluation revealed the potential benefits of using OLCOVEs for Steady-State Simulation analysis.

  • An improved standardized time series Durbin-Watson variance estimator for Steady-State Simulation
    Operations Research Letters, 2009
    Co-Authors: Demet Batur, David Goldsman, Seong Hee Kim
    Abstract:

    We discuss an improved jackknifed Durbin-Watson estimator for the variance parameter from a Steady-State Simulation. The estimator is based on a combination of standardized time series area and Cramer-von Mises estimators. Various examples demonstrate its efficiency in terms of bias and variance compared to other estimators.
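
    For orientation, the sketch below (Python/NumPy, an assumption of mine) computes the classical unweighted standardized-time-series area estimator of the variance parameter for a single batch; weighted area, Cramer-von Mises, and jackknifed Durbin-Watson variants build on the same standardized time series, with details as in the paper.

    import numpy as np

    def sts_area_estimator(batch):
        """Unweighted area estimator for one batch Y_1..Y_m: square the weighted
        average of the standardized time series evaluated at k/m, k = 1..m."""
        y = np.asarray(batch, dtype=float)
        m = len(y)
        ybar = y.mean()
        k = np.arange(1, m + 1)
        partial_means = np.cumsum(y) / k
        # sigma * T(k/m) = k * (ybar - mean of first k observations) / sqrt(m)
        sts = k * (ybar - partial_means) / np.sqrt(m)
        area = np.sqrt(12.0) / m * np.sum(sts)
        return area ** 2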

  • ASAP3: a batch means procedure for Steady-State Simulation analysis
    ACM Transactions on Modeling and Computer Simulation, 2005
    Co-Authors: Natalie M. Steiger, Christos Alexopoulos, James R. Wilson, Emily K. Lada, Jeffrey A. Joines, David Goldsman
    Abstract:

    We introduce ASAP3, a refinement of the batch means algorithms ASAP and ASAP2, that delivers point and confidence-interval estimators for the expected response of a Steady-State Simulation. ASAP3 is a sequential procedure designed to produce a confidence-interval estimator that satisfies user-specified requirements on absolute or relative precision as well as coverage probability. ASAP3 operates as follows: the batch size is progressively increased until the batch means pass the Shapiro-Wilk test for multivariate normality; and then ASAP3 fits a first-order autoregressive (AR(1)) time series model to the batch means. If necessary, the batch size is further increased until the autoregressive parameter in the AR(1) model does not significantly exceed 0.8. Next, ASAP3 computes the terms of an inverse Cornish-Fisher expansion for the classical batch means t-ratio based on the AR(1) parameter estimates; and finally ASAP3 delivers a correlation-adjusted confidence interval based on this expansion. Regarding not only conformance to the precision and coverage-probability requirements but also the mean and variance of the half-length of the delivered confidence interval, ASAP3 compared favorably to other batch means procedures (namely, ABATCH, ASAP, ASAP2, and LBATCH) in an extensive experimental performance evaluation.
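
    A schematic of ASAP3's control flow, assuming Python with NumPy and SciPy (my choice of tools): grow the batch size until the batch means look normal and the fitted AR(1) parameter does not exceed 0.8, then deliver a correlation-adjusted interval. ASAP3 actually tests multivariate normality on groups of batch means and delivers its interval via an inverse Cornish-Fisher expansion, rather than the simple (1 + phi)/(1 - phi) inflation used here.

    import numpy as np
    from scipy import stats

    def asap3_style_ci(y, m0=256, alpha=0.05, phi_max=0.8):
        """Double the batch size until the batch means pass a Shapiro-Wilk test
        and the AR(1) parameter estimate is at most phi_max, then return a
        correlation-adjusted CI for the Steady-State mean (illustrative only)."""
        y = np.asarray(y, dtype=float)
        m = m0
        while True:
            k = len(y) // m
            if k < 8:
                raise ValueError("not enough data for this batch size")
            bm = y[:k * m].reshape(k, m).mean(axis=1)
            _, p_normal = stats.shapiro(bm)
            phi = np.corrcoef(bm[:-1], bm[1:])[0, 1]
            if p_normal > 0.05 and phi <= phi_max:
                var = bm.var(ddof=1) / k * (1 + phi) / (1 - phi)
                h = stats.t.ppf(1 - alpha / 2, k - 1) * np.sqrt(var)
                return bm.mean() - h, bm.mean() + h
            m *= 2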

  • Ranking and Selection for Steady-State Simulation: Procedures and Perspectives
    INFORMS Journal on Computing, 2002
    Co-Authors: David Goldsman, William S. Marshall, Seong Hee Kim, Barry L. Nelson
    Abstract:

    We present and evaluate three ranking-and-selection procedures for use in Steady-State Simulation experiments when the goal is to find which among a finite number of alternative systems has the largest or smallest long-run average performance. All three procedures extend existing methods for independent and identically normally distributed observations to general stationary output processes, and all procedures are sequential. We also provide our thoughts about the evaluation of Simulation design and analysis procedures, and illustrate these concepts in our evaluation of the new procedures.
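
    In its simplest form, the selection goal can be illustrated in a few lines of Python/NumPy (assumed): estimate each system's long-run mean from batch means and pick the largest. The procedures in the paper add sequential sampling and screening rules that control the probability of correct selection for general stationary output; none of that machinery appears in this sketch.

    import numpy as np

    def pick_largest_mean(systems_output, m):
        """Naive selection: return the index of the system with the largest
        batch-means estimate of its long-run average performance."""
        estimates = []
        for y in systems_output:                      # one output series per system
            y = np.asarray(y, dtype=float)
            k = len(y) // m
            estimates.append(y[:k * m].reshape(k, m).mean(axis=1).mean())
        return int(np.argmax(estimates)), estimates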

Emily K. Lada - One of the best experts on this subject based on the ideXlab platform.

  • SBatch: A spaced batch means procedure for Steady-State Simulation analysis
    Journal of Simulation, 2008
    Co-Authors: Emily K. Lada, Natalie M. Steiger, James R. Wilson
    Abstract:

    We discuss SBatch, a simplified procedure for Steady-State Simulation analysis that is based on spaced batch means, incorporating many advantages of its predecessors ASAP3 and WASSP while avoiding many of their disadvantages. SBatch is a sequential procedure designed to produce a confidence-interval (CI) estimator for the Steady-State mean response that satisfies user-specified precision and coverage-probability requirements. First SBatch determines a batch size and an interbatch spacer size such that beyond the initial spacer, the spaced batch means approximately form a stationary first-order autoregressive process whose lag-one correlation does not significantly exceed 0.8. Next SBatch delivers a correlation-adjusted CI based on the sample variance and lag-one correlation of the spaced batch means as well as the grand mean of all the individual observations beyond the initial spacer. In an experimental evaluation on a broad range of test problems, SBatch compared favourably with ASAP3 and WASSP.

  • Performance evaluation of recent procedures for Steady-State Simulation analysis
    IIE Transactions, 2006
    Co-Authors: Emily K. Lada, Natalie M. Steiger, James R. Wilson
    Abstract:

    The performance of the batch-means procedure ASAP3 and the spectral procedure WASSP is evaluated on test problems with characteristics typical of practical applications of Steady-State Simulation analysis procedures. ASAP3 and WASSP are sequential procedures designed to produce a confidence-interval estimator for the mean response that satisfies user-specified half-length and coverage-probability requirements. ASAP3 is based on an inverse Cornish-Fisher expansion for the classical batch-means t-ratio, whereas WASSP is based on a wavelet estimator of the batch-means power spectrum. Regarding closeness of the empirical coverage probability and average half-length of the delivered confidence intervals to their respective nominal levels, both procedures compared favorably with the Law-Carson procedure and the original ASAP algorithm. Regarding the average sample sizes required for decreasing levels of maximum confidence-interval half-length, ASAP3 and WASSP exhibited reasonable efficiency in the test problems.

  • A wavelet-based spectral procedure for Steady-State Simulation analysis
    European Journal of Operational Research, 2006
    Co-Authors: Emily K. Lada, James R. Wilson
    Abstract:

    We develop WASSP, a wavelet-based spectral method for Steady-State Simulation analysis. First WASSP determines a batch size and a warm-up period beyond which the computed batch means form an approximately stationary Gaussian process. Next WASSP computes the discrete wavelet transform of the bias-corrected log-smoothed-periodogram of the batch means, using a soft-thresholding scheme to denoise the estimated wavelet coefficients. Then taking the inverse discrete wavelet transform of the thresholded wavelet coefficients, WASSP computes estimators of the batch means log-spectrum and the Steady-State variance parameter (i.e., the sum of covariances at all lags) for the original (unbatched) process. Finally by combining the latter estimator with the batch means grand average, WASSP provides a sequential procedure for constructing a confidence interval on the Steady-State mean that satisfies user-specified requirements concerning absolute or relative precision as well as coverage probability. An experimental performance evaluation demonstrates WASSP’s effectiveness compared with other Simulation analysis methods.

  • Stochastics and Statistics: A wavelet-based spectral procedure for Steady-State Simulation analysis
    2006
    Co-Authors: Emily K. Lada, James R. Wilson
    Abstract:

    We develop WASSP, a wavelet-based spectral method for Steady-State Simulation analysis. First WASSP determines a batch size and a warm-up period beyond which the computed batch means form an approximately stationary Gaussian process. Next WASSP computes the discrete wavelet transform of the bias-corrected log-smoothed-periodogram of the batch means, using a soft-thresholding scheme to denoise the estimated wavelet coefficients. Then taking the inverse discrete wavelet transform of the thresholded wavelet coefficients, WASSP computes estimators of the batch means log-spectrum and the Steady-State variance parameter (i.e., the sum of covariances at all lags) for the original (unbatched) process. Finally by combining the latter estimator with the batch means grand average, WASSP provides a sequential procedure for constructing a confidence interval on the Steady-State mean that satisfies user-specified requirements concerning absolute or relative precision as well as coverage probability. An experimental performance evaluation demonstrates WASSP’s effectiveness compared with other Simulation analysis methods.

  • ASAP3: a batch means procedure for Steady-State Simulation analysis
    ACM Transactions on Modeling and Computer Simulation, 2005
    Co-Authors: Natalie M. Steiger, Christos Alexopoulos, James R. Wilson, Emily K. Lada, Jeffrey A. Joines, David Goldsman
    Abstract:

    We introduce ASAP3, a refinement of the batch means algorithms ASAP and ASAP2, that delivers point and confidence-interval estimators for the expected response of a Steady-State Simulation. ASAP3 is a sequential procedure designed to produce a confidence-interval estimator that satisfies user-specified requirements on absolute or relative precision as well as coverage probability. ASAP3 operates as follows: the batch size is progressively increased until the batch means pass the Shapiro-Wilk test for multivariate normality; and then ASAP3 fits a first-order autoregressive (AR(1)) time series model to the batch means. If necessary, the batch size is further increased until the autoregressive parameter in the AR(1) model does not significantly exceed 0.8. Next, ASAP3 computes the terms of an inverse Cornish-Fisher expansion for the classical batch means t-ratio based on the AR(1) parameter estimates; and finally ASAP3 delivers a correlation-adjusted confidence interval based on this expansion. Regarding not only conformance to the precision and coverage-probability requirements but also the mean and variance of the half-length of the delivered confidence interval, ASAP3 compared favorably to other batch means procedures (namely, ABATCH, ASAP, ASAP2, and LBATCH) in an extensive experimental performance evaluation.

Natalie M. Steiger - One of the best experts on this subject based on the ideXlab platform.

  • SBatch: A spaced batch means procedure for Steady-State Simulation analysis
    Journal of Simulation, 2008
    Co-Authors: Emily K. Lada, Natalie M. Steiger, James R. Wilson
    Abstract:

    We discuss SBatch, a simplified procedure for Steady-State Simulation analysis that is based on spaced batch means, incorporating many advantages of its predecessors ASAP3 and WASSP while avoiding many of their disadvantages. SBatch is a sequential procedure designed to produce a confidence-interval (CI) estimator for the Steady-State mean response that satisfies user-specified precision and coverage-probability requirements. First SBatch determines a batch size and an interbatch spacer size such that beyond the initial spacer, the spaced batch means approximately form a stationary first-order autoregressive process whose lag-one correlation does not significantly exceed 0.8. Next SBatch delivers a correlation-adjusted CI based on the sample variance and lag-one correlation of the spaced batch means as well as the grand mean of all the individual observations beyond the initial spacer. In an experimental evaluation on a broad range of test problems, SBatch compared favourably with ASAP3 and WASSP.

  • Performance evaluation of recent procedures for Steady-State Simulation analysis
    IIE Transactions, 2006
    Co-Authors: Emily K. Lada, Natalie M. Steiger, James R. Wilson
    Abstract:

    The performance of the batch-means procedure ASAP3 and the spectral procedure WASSP is evaluated on test problems with characteristics typical of practical applications of Steady-State Simulation analysis procedures. ASAP3 and WASSP are sequential procedures designed to produce a confidence-interval estimator for the mean response that satisfies user-specified half-length and coverage-probability requirements. ASAP3 is based on an inverse Cornish-Fisher expansion for the classical batch-means t-ratio, whereas WASSP is based on a wavelet estimator of the batch-means power spectrum. Regarding closeness of the empirical coverage probability and average half-length of the delivered confidence intervals to their respective nominal levels, both procedures compared favorably with the Law-Carson procedure and the original ASAP algorithm. Regarding the average sample sizes required for decreasing levels of maximum confidence-interval half-length, ASAP3 and WASSP exhibited reasonable efficiency in the test problems.

  • ASAP3: a batch means procedure for Steady-State Simulation analysis
    ACM Transactions on Modeling and Computer Simulation, 2005
    Co-Authors: Natalie M. Steiger, Christos Alexopoulos, James R. Wilson, Emily K. Lada, Jeffrey A. Joines, David Goldsman
    Abstract:

    We introduce ASAP3, a refinement of the batch means algorithms ASAP and ASAP2, that delivers point and confidence-interval estimators for the expected response of a Steady-State Simulation. ASAP3 is a sequential procedure designed to produce a confidence-interval estimator that satisfies user-specified requirements on absolute or relative precision as well as coverage probability. ASAP3 operates as follows: the batch size is progressively increased until the batch means pass the Shapiro-Wilk test for multivariate normality; and then ASAP3 fits a first-order autoregressive (AR(1)) time series model to the batch means. If necessary, the batch size is further increased until the autoregressive parameter in the AR(1) model does not significantly exceed 0.8. Next, ASAP3 computes the terms of an inverse Cornish-Fisher expansion for the classical batch means t-ratio based on the AR(1) parameter estimates; and finally ASAP3 delivers a correlation-adjusted confidence interval based on this expansion. Regarding not only conformance to the precision and coverage-probability requirements but also the mean and variance of the half-length of the delivered confidence interval, ASAP3 compared favorably to other batch means procedures (namely, ABATCH, ASAP, ASAP2, and LBATCH) in an extensive experimental performance evaluation.

  • Improved batching for confidence interval construction in Steady State Simulation
    Winter Simulation Conference, 1999
    Co-Authors: Natalie M. Steiger, James R. Wilson
    Abstract:

    We describe an improved batch-means procedure for building a confidence interval on a Steady-State expected Simulation response that is centered on the sample mean of a portion of the corresponding Simulation-generated time series and satisfies a user-specified absolute or relative precision requirement. The theory supporting the new algorithm merely requires the output process to be weakly dependent (phi-mixing) so that for a sufficiently large batch size, the batch means are approximately multivariate normal but not necessarily uncorrelated. A variant of the method of nonoverlapping batch means (NOBM), the Automated Simulation Analysis Procedure (ASAP) operates as follows: the batch size is progressively increased until either (a) the batch means pass the von Neumann test for independence, in which case ASAP delivers a classical NOBM confidence interval; or (b) the batch means pass the Shapiro-Wilk test for multivariate normality, in which case ASAP delivers a corrected confidence interval. The latter correction is based on an inverted Cornish-Fisher expansion for the classical NOBM t-ratio, where the terms of the expansion are estimated via an autoregressive-moving average time series model of the batch means. An experimental performance evaluation demonstrates the advantages of ASAP versus other widely used batch-means procedures.
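
    The von Neumann test mentioned above checks whether the batch means behave like an independent sample. A hedged Python/NumPy/SciPy version is sketched below; the exact grouping and critical values used by ASAP may differ.

    import numpy as np
    from scipy import stats

    def von_neumann_test(bm, alpha=0.05):
        """Von Neumann ratio test for independence of batch means. The statistic
        C = 1 - (sum of squared successive differences) / (2 * SSQ about the mean)
        is approximately normal with mean 0 and variance (k-2)/(k^2-1) under
        independence."""
        x = np.asarray(bm, dtype=float)
        k = len(x)
        c = 1.0 - np.sum(np.diff(x) ** 2) / (2.0 * np.sum((x - x.mean()) ** 2))
        z = c / np.sqrt((k - 2) / (k ** 2 - 1))
        p_value = 2 * (1 - stats.norm.cdf(abs(z)))
        return c, p_value, p_value > alpha     # True: independence not rejected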

  • Winter Simulation Conference - Steady-State Simulation analysis using ASAP3
    Proceedings of the 2004 Winter Simulation Conference, 2004
    Co-Authors: Natalie M. Steiger, Christos Alexopoulos, James R. Wilson, Emily K. Lada, Jeffrey A. Joines, David Goldsman
    Abstract:

    We discuss ASAP3, a refinement of the batch means algorithms ASAP and ASAP2. ASAP3 is a sequential procedure designed to produce a confidence-interval estimator for the expected response of a Steady-State Simulation that satisfies user-specified precision and coverage-probability requirements. ASAP3 operates as follows: the batch size is increased until the batch means pass the Shapiro-Wilk test for multivariate normality; and then ASAP3 fits a first-order autoregressive (AR(1)) time series model to the batch means. If necessary, the batch size is further increased until the autoregressive parameter in the AR(1) model does not significantly exceed 0.8. Next ASAP3 computes the terms of an inverse Cornish-Fisher expansion for the classical batch means t-ratio based on the AR(1) parameter estimates; and finally ASAP3 delivers a correlation-adjusted confidence interval based on this expansion. ASAP3 compared favorably with other batch means procedures (namely, ABATCH, ASAP, ASAP2, and LBATCH) in an extensive experimental performance evaluation.