Worst Case Analysis

The Experts below are selected from a list of 324 Experts worldwide ranked by the ideXlab platform

Sophie Quinton - One of the best experts on this subject based on the ideXlab platform.

  • A Generic Coq Proof of Typical Worst-Case Analysis
    2018
    Co-Authors: Pascal Fradet, Maxime Lesourd, Jean-françois Monin, Sophie Quinton
    Abstract:

    This paper presents a generic proof of Typical Worst-Case Analysis (TWCA), an analysis technique for weakly-hard real-time uniprocessor systems. TWCA was originally introduced for systems with fixed-priority preemptive (FPP) schedulers and has since been extended to fixed-priority non-preemptive (FPNP) and earliest-deadline-first (EDF) schedulers. Our generic analysis is based on an abstract model that characterizes the exact properties needed to make TWCA applicable to any system model. Our results are formalized and checked using the Coq proof assistant along with the Prosa schedulability analysis library. Our experience with formalizing real-time systems analyses shows that this is not only a way to increase confidence in our claimed results: the discipline required to obtain machine-checked proofs helps in understanding the exact assumptions required by a given analysis, its key intermediate steps, and how this analysis can be generalized.
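
    The Coq/Prosa development itself is not reproduced here, but the kind of uniprocessor fixed-priority schedulability reasoning that TWCA-style bounds build on can be sketched with the classical response-time fixed-point iteration. The task set, its parameters, and the function name below are hypothetical illustrations, not taken from the paper.

        # Minimal sketch of classical response-time analysis (RTA) for sporadic tasks
        # under fixed-priority preemptive (FPP) scheduling on one processor.
        # Not the Coq/Prosa formalization; task parameters are made up.
        import math

        def response_time(task_index, tasks):
            """tasks: list of (C, T, D) tuples sorted by decreasing priority
            (index 0 = highest). Returns the worst-case response time of
            tasks[task_index], or None if the iteration exceeds the deadline."""
            C, T, D = tasks[task_index]
            higher = tasks[:task_index]
            r = C
            while True:
                # Interference from higher-priority jobs released within a window of length r.
                interference = sum(math.ceil(r / T_hp) * C_hp for (C_hp, T_hp, _) in higher)
                r_next = C + interference
                if r_next == r:
                    return r        # fixed point reached: worst-case response time
                if r_next > D:
                    return None     # bound exceeds the deadline; task deemed unschedulable
                r = r_next

        if __name__ == "__main__":
            # Hypothetical task set: (WCET C, period T, deadline D), highest priority first.
            tasks = [(1, 4, 4), (2, 6, 6), (3, 12, 12)]
            for i in range(len(tasks)):
                print(f"task {i}: worst-case response time = {response_time(i, tasks)}")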

  • RTSS - A Generic Coq Proof of Typical Worst-Case Analysis
    2018 IEEE Real-Time Systems Symposium (RTSS), 2018
    Co-Authors: Pascal Fradet, Maxime Lesourd, Jean-françois Monin, Sophie Quinton
    Abstract:

    This paper presents a generic proof of Typical Worst-Case Analysis (TWCA), an analysis technique for weakly-hard real-time uniprocessor systems. TWCA was originally introduced for systems with fixed-priority preemptive (FPP) schedulers and has since been extended to fixed-priority non-preemptive (FPNP) and earliest-deadline-first (EDF) schedulers. Our generic analysis is based on an abstract model that characterizes the exact properties needed to make TWCA applicable to any system model. Our results are formalized and checked using the Coq proof assistant along with the Prosa schedulability analysis library. Our experience with formalizing real-time systems analyses shows that this is not only a way to increase confidence in our claimed results: the discipline required to obtain machine-checked proofs helps in understanding the exact assumptions required by a given analysis, its key intermediate steps, and how this analysis can be generalized.

  • Extending typical Worst-Case Analysis using response-time dependencies to bound deadline misses
    2014
    Co-Authors: Zain A. H. Hammadeh, Sophie Quinton, Rolf Ernst
    Abstract:

    Weakly-hard time constraints have been proposed for applications where occasional deadline misses are permitted. Recently, a new approach called Typical Worst-Case Analysis (TWCA) has been introduced which exploits similar constraints to bound response times of systems with sporadic overload. In this paper, we extend that approach for static-priority preemptive and non-preemptive scheduling to determine the maximum number of deadline misses for a given deadline. The approach is based on an optimization problem which trades off higher-priority interference versus miss count. We formally derive a lattice structure for the possible combinations that lays the ground for an integer linear programming (ILP) formulation. The ILP solution is evaluated, showing the effectiveness of the approach and far better results than previous TWCA.
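
    The paper's exact ILP is not reproduced here. As a rough, deliberately simplified stand-in for the optimization it describes, the sketch below brute-forces the combinations of sporadic overload activations over a window of jobs and maximizes the number of jobs whose extra interference exceeds their slack; all task parameters, names, and the slack model are assumptions made for this illustration.

        # Deliberately simplified stand-in for the deadline-miss optimization described
        # in the abstract (NOT the paper's ILP): distribute a bounded number of sporadic
        # overload activations over a window of jobs and maximize the number of jobs
        # whose extra interference exceeds their slack. All numbers are made up.
        from itertools import product

        def max_deadline_misses(n_jobs, slack, overload_sources):
            """overload_sources: list of (C, max_activations_in_window) pairs.
            Brute-forces every assignment of overload activations to jobs and
            returns the worst-case miss count (small instances only)."""
            activations = [C for (C, n) in overload_sources for _ in range(n)]
            worst = 0
            # Each activation may interfere with any one job, or not occur at all (None).
            for assignment in product(list(range(n_jobs)) + [None], repeat=len(activations)):
                extra = [0] * n_jobs
                for C, job in zip(activations, assignment):
                    if job is not None:
                        extra[job] += C
                worst = max(worst, sum(1 for e in extra if e > slack))
            return worst

        if __name__ == "__main__":
            # 4 jobs in the window, 2 units of slack each; overload sources can add
            # up to three activations of cost 1 and two activations of cost 2.
            print(max_deadline_misses(n_jobs=4, slack=2, overload_sources=[(1, 3), (2, 2)]))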

  • EMSOFT - Extending typical Worst-Case Analysis using response-time dependencies to bound deadline misses
    Proceedings of the 14th International Conference on Embedded Software - EMSOFT '14, 2014
    Co-Authors: Zain A. H. Hammadeh, Sophie Quinton, Rolf Ernst
    Abstract:

    Weakly-hard time constraints have been proposed for applications where occasional deadline misses are permitted. Recently, a new approach called Typical Worst-Case Analysis (TWCA) has been introduced which exploits similar constraints to bound response times of systems with sporadic overload. In this paper, we extend that approach for static-priority preemptive and non-preemptive scheduling to determine the maximum number of deadline misses for a given deadline. The approach is based on an optimization problem which trades off higher-priority interference versus miss count. We formally derive a lattice structure for the possible combinations that lays the ground for an integer linear programming (ILP) formulation. The ILP solution is evaluated, showing the effectiveness of the approach and far better results than previous TWCA.

Hirofumi Shinohara - One of the best experts on this subject based on the ideXlab platform.

  • Worst-Case Analysis to obtain stable read/write DC margin of high density 6T-SRAM-array with local Vth variability
    International Conference on Computer Aided Design, 2005
    Co-Authors: Yasumasa Tsukamoto, Koji Nii, S Imaoka, Y Oda, S Ohbayashi, T Yoshizawa, H Makino, Koichiro Ishibashi, Hirofumi Shinohara
    Abstract:

    6T-SRAM cells in the sub-100 nm CMOS generation are now being exposed to a fatal risk that originates from large local Vth variability (σ_V_Local). To achieve high-yield SRAM arrays in the presence of the random σ_V_Local component, we propose Worst-Case Analysis that determines the boundary of the stable Vth region for the SRAM read/write DC margin (Vth curve). Applying this to our original 65 nm SPICE model, we demonstrate typical behavior of the Vth curve and show new criteria for discussing SRAM array stability with Vth variability.
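
    The actual analysis is performed with a SPICE model of a six-transistor cell. As a purely illustrative toy, the sketch below traces a stability boundary in a two-dimensional local-Vth plane using a made-up margin function; read_margin and all constants are assumptions of this illustration, not the paper's model.

        # Toy illustration of tracing a stability boundary in the local-Vth plane.
        # The margin model is hypothetical (a simple decreasing function of the Vth
        # shifts) and stands in for the SPICE-level read/write DC margin of a 6T cell.

        def read_margin(dvth_access, dvth_driver, nominal_margin=0.20):
            # Hypothetical: the margin shrinks when the access transistor gets stronger
            # (negative Vth shift) or the driver transistor gets weaker (positive shift).
            return nominal_margin - 0.8 * max(0.0, -dvth_access) - 0.6 * max(0.0, dvth_driver)

        def boundary_curve(sigma=0.05, n=81, span=4.0):
            """For each access-transistor Vth shift (within +/- span*sigma), find the
            largest driver-transistor shift that still leaves a positive margin.
            The resulting points trace the boundary of the stable region (the
            'Vth curve' of the abstract, in toy form)."""
            step = 2 * span * sigma / (n - 1)
            shifts = [-span * sigma + i * step for i in range(n)]
            curve = []
            for da in shifts:
                stable = [dd for dd in shifts if read_margin(da, dd) > 0.0]
                curve.append((da, max(stable) if stable else None))
            return curve

        if __name__ == "__main__":
            for da, dd in boundary_curve()[::20]:
                label = "unstable for all shifts" if dd is None else f"{dd:+.3f} V"
                print(f"access-transistor shift {da:+.3f} V -> driver shift stable up to {label}")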

  • ICCAD - Worst-Case Analysis to obtain stable read/write DC margin of high density 6T-SRAM-array with local Vth variability
    2005
    Co-Authors: Yasumasa Tsukamoto, Koji Nii, S Imaoka, Y Oda, S Ohbayashi, T Yoshizawa, H Makino, Koichiro Ishibashi, Hirofumi Shinohara
    Abstract:

    6T-SRAM cells in the sub-100 nm CMOS generation are now being exposed to a fatal risk that originates from large local Vth variability (σ_V_Local). To achieve high-yield SRAM arrays in the presence of the random σ_V_Local component, we propose Worst-Case Analysis that determines the boundary of the stable Vth region for the SRAM read/write DC margin (Vth curve). Applying this to our original 65 nm SPICE model, we demonstrate typical behavior of the Vth curve and show new criteria for discussing SRAM array stability with Vth variability.

Tim Roughgarden - One of the best experts on this subject based on the ideXlab platform.

  • Beyond the Worst-Case Analysis of Algorithms (Introduction).
    arXiv: Data Structures and Algorithms, 2020
    Co-Authors: Tim Roughgarden
    Abstract:

    One of the primary goals of the mathematical analysis of algorithms is to provide guidance about which algorithm is the "best" for solving a given computational problem. Worst-Case Analysis summarizes the performance profile of an algorithm by its worst performance on any input of a given size, implicitly advocating for the algorithm with the best-possible worst-case performance. Strong worst-case guarantees are the holy grail of algorithm design, providing an application-agnostic certification of an algorithm's robustly good performance. However, for many fundamental problems and performance measures, such guarantees are impossible and a more nuanced analysis approach is called for. This chapter surveys several alternatives to Worst-Case Analysis that are discussed in detail later in the book.
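
    As a small experiment (not taken from the chapter) showing how a worst-case summary can diverge from typical behavior, the sketch below counts comparisons made by quicksort with a naive first-element pivot: near n^2/2 on an already-sorted input versus roughly n log n on random inputs of the same size. The parameters are arbitrary.

        # Comparing quicksort's worst-case behavior with its typical behavior.
        # Pivot rule: always the first element, so an already-sorted input is a
        # worst case; random inputs of the same size do far fewer comparisons.
        import random

        def quicksort_comparisons(xs):
            """Number of element-vs-pivot comparisons made by naive quicksort."""
            if len(xs) <= 1:
                return 0
            pivot = xs[0]
            left, right = [], []
            for x in xs[1:]:                  # one comparison with the pivot per element
                (left if x < pivot else right).append(x)
            return len(xs) - 1 + quicksort_comparisons(left) + quicksort_comparisons(right)

        if __name__ == "__main__":
            n = 300
            random.seed(1)
            worst = quicksort_comparisons(list(range(n)))      # already-sorted input
            typical = sum(quicksort_comparisons(random.sample(range(10 * n), n))
                          for _ in range(20)) // 20
            print(f"n={n}: comparisons on sorted input {worst}, average on random inputs {typical}")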

  • Beyond Worst-Case Analysis
    Communications of the ACM, 2019
    Co-Authors: Tim Roughgarden
    Abstract:

    In the Worst-Case Analysis of algorithms, the overall performance of an algorithm is summarized by its worst performance on any input. This approach has countless success stories, but there are also important computational problems --- like linear programming, clustering, online caching, and neural network training --- where the Worst-Case Analysis framework does not provide any helpful advice on how to solve the problem. This article covers a number of modeling methods for going beyond Worst-Case Analysis and articulating which inputs are the most relevant.

  • CS264: Beyond Worst-Case Analysis Lecture #6: Clustering in Approximation-Stable Instances
    2014
    Co-Authors: Tim Roughgarden
    Abstract:

    In some optimization problems, the objective function can be taken quite literally. If one wants to maximize profit or accomplish some goal at minimum cost, then the goal translates directly into a numerical objective function. In other applications, an objective function is only a means to an end. Consider, for example, the problem of clustering. Given a set of data points, the goal is to cluster them into “coherent groups,” with points in the same group being “similar” and those in different groups being “dissimilar.” There is not an obvious, unique way to translate this goal into a numerical objective function, and as a result many different objective functions have been studied (k-means, k-median, k-center, etc.) with the intent of making the fuzzy notion of a “good/meaningful clustering” into a concrete optimization problem. In this case, we do not care about the objective function value per se; rather, we want to discover interesting structure in the data. So we’re perfectly happy to compute a “meaningful clustering” with suboptimal objective function value, and would be highly dissatisfied with an “optimal solution” that fails to indicate any patterns in the data (which suggests that we were asking the wrong question, or expecting structure where none exists). The point is that if we are trying to cluster a data set, then we are implicitly assuming that interesting structure exists in the data. This perspective suggests that an explicit model of data could sharpen the insights provided by a traditional Worst-Case Analysis framework (cf., modeling locality of reference in online paging). This lecture begins our exploration of the conjecture that clustering is hard only when it doesn’t matter. That is, clustering …
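
    To make concrete the point that several objective functions formalize the same fuzzy goal, the sketch below evaluates one made-up candidate clustering under the k-means, k-median, and k-center objectives, using each cluster's centroid as its center for simplicity; the data points and the centroid simplification are assumptions of this illustration, not part of the lecture.

        # Evaluating one candidate clustering under three standard objectives.
        # Points and assignment are hypothetical; the point is only that
        # "good clustering" is formalized differently by each objective.
        import math

        def dist(p, q):
            return math.hypot(p[0] - q[0], p[1] - q[1])

        def centroid(cluster):
            n = len(cluster)
            return (sum(p[0] for p in cluster) / n, sum(p[1] for p in cluster) / n)

        def objectives(clusters):
            # Use the centroid of each cluster as its center (a common simplification).
            centers = [centroid(c) for c in clusters]
            d = [dist(p, centers[i]) for i, c in enumerate(clusters) for p in c]
            return {
                "k-means (sum of squared distances)": sum(x * x for x in d),
                "k-median (sum of distances)": sum(d),
                "k-center (max distance)": max(d),
            }

        if __name__ == "__main__":
            clusters = [
                [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],        # a tight group
                [(10.0, 10.0), (12.0, 10.0), (11.0, 14.0)],  # a looser group with an outlier
            ]
            for name, value in objectives(clusters).items():
                print(f"{name}: {value:.2f}")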

  • CS264: Beyond Worst-Case Analysis Lecture #4: Parameterized Analysis of Online Paging
    2014
    Co-Authors: Tim Roughgarden
    Abstract:

    Recall our three goals for the mathematical analysis of algorithms — the Explanation Goal, the Comparison Goal, and the Design Goal. Recall that, for the online paging problem, traditional competitive analysis earns a pretty bad report card on the first two goals. First, the competitive ratios of all online paging algorithms are way too big to be taken seriously as predictions or explanations of empirical performance. Second, while competitive analysis does identify the least recently used (LRU) policy as an optimal online algorithm, it also identifies less laudable policies (like first-in first-out (FIFO) or even flush-when-full (FWF)) as optimal. Last lecture introduced resource augmentation guarantees. This approach made no compromises about Worst-Case Analysis, but rather changed the benchmark — restricting the offline optimal algorithm to a smaller cache, which of course can only make competitive ratios smaller. The primary benefit of resource augmentation is much more meaningful and interpretable performance guarantees, which is good progress on the Explanation Goal. A key drawback is that it failed to differentiate between the LRU policy and the FIFO and FWF policies. Intuitively, because the empirical superiority of LRU appears to be driven by the properties of “real-world” data, namely locality of reference, we can’t expect to separate LRU from other natural paging algorithms (like FIFO) without at least implicitly articulating such properties. The goal of this lecture is to parameterize page sequences according to a natural measure of locality, to prove parameterized upper and lower bounds on the performance of natural paging policies, and (finally!) to prove a sense in which LRU is strictly superior to FIFO (and FWF). As a bonus to this progress on the Comparison Goal, we’ll obtain good absolute performance guarantees that represent a step forward for the Explanation Goal.
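
    As a companion illustration (not part of the lecture notes), the sketch below simulates LRU and FIFO on two made-up request sequences with a cache of three pages: one sequence that keeps re-referencing recent pages, on which LRU faults strictly less than FIFO, and a cyclic sequence with no locality, on which both policies fault on every request. The sequences and cache size are arbitrary.

        # Counting page faults for LRU and FIFO on small request sequences.
        from collections import OrderedDict, deque

        def faults_lru(requests, k):
            cache, faults = OrderedDict(), 0
            for page in requests:
                if page in cache:
                    cache.move_to_end(page)            # mark as most recently used
                else:
                    faults += 1
                    if len(cache) == k:
                        cache.popitem(last=False)      # evict the least recently used page
                    cache[page] = True
            return faults

        def faults_fifo(requests, k):
            cache, order, faults = set(), deque(), 0
            for page in requests:
                if page not in cache:
                    faults += 1
                    if len(cache) == k:
                        cache.remove(order.popleft())  # evict the oldest resident page
                    cache.add(page)
                    order.append(page)
            return faults

        if __name__ == "__main__":
            k = 3
            local = [1, 2, 3, 1, 4, 1, 2, 5, 1, 2]     # recent pages are re-referenced
            cyclic = [1, 2, 3, 4] * 4                  # no locality: bad for every policy
            for name, seq in [("local", local), ("cyclic", cyclic)]:
                print(name, "LRU:", faults_lru(seq, k), "FIFO:", faults_fifo(seq, k))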

  • CS369N: Beyond Worst-Case Analysis Lecture #5: Self-Improving Algorithms ∗
    2010
    Co-Authors: Tim Roughgarden
    Abstract:

    Last lecture concluded with a discussion of semi-random graph models, an interpolation between Worst-Case Analysis and average-case analysis designed to identify robust algorithms in the face of strong impossibility results for worst-case guarantees. This lecture and the next two give three more analysis frameworks that blend aspects of worst- and average-case analysis. Today’s model, of self-improving algorithms, is the closest to traditional average-case analysis. The model and results are by Ailon, Chazelle, Comandur, and Liu [1].
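
    The actual self-improving sorter of Ailon et al. is considerably more refined; as a much simplified illustration of the model only, the sketch below learns approximate quantile boundaries during a training phase on inputs drawn from a fixed but unknown distribution and then bucket-sorts later inputs from the same distribution. The class name, bucket count, and distribution are assumptions of this illustration.

        # Much simplified illustration of the self-improving idea (not the cited
        # algorithm): learn bucket boundaries from training inputs drawn from an
        # unknown but fixed distribution, then bucket-sort later inputs.
        import bisect
        import random

        class SelfImprovingSorter:
            def __init__(self, n_buckets=32):
                self.n_buckets = n_buckets
                self.boundaries = None          # learned during the training phase

            def train(self, training_inputs):
                # Estimate evenly spaced quantiles of the element distribution.
                sample = sorted(x for xs in training_inputs for x in xs)
                step = max(1, len(sample) // self.n_buckets)
                self.boundaries = sample[step::step]

            def sort(self, xs):
                if self.boundaries is None:
                    return sorted(xs)           # no training yet: fall back to comparison sort
                buckets = [[] for _ in range(len(self.boundaries) + 1)]
                for x in xs:
                    buckets[bisect.bisect_left(self.boundaries, x)].append(x)
                out = []
                for b in buckets:
                    out.extend(sorted(b))       # buckets are small in expectation
                return out

        if __name__ == "__main__":
            random.seed(0)

            def draw():
                return [random.gauss(0, 1) for _ in range(1000)]

            sorter = SelfImprovingSorter()
            sorter.train([draw() for _ in range(20)])   # training phase
            xs = draw()
            assert sorter.sort(xs) == sorted(xs)
            print("self-improving sort matches the reference sort")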

Yasumasa Tsukamoto - One of the best experts on this subject based on the ideXlab platform.

  • Worst-Case Analysis to obtain stable read/write DC margin of high density 6T-SRAM-array with local Vth variability
    International Conference on Computer Aided Design, 2005
    Co-Authors: Yasumasa Tsukamoto, Koji Nii, S Imaoka, Y Oda, S Ohbayashi, T Yoshizawa, H Makino, Koichiro Ishibashi, Hirofumi Shinohara
    Abstract:

    6T-SRAM cells in the sub-100 nm CMOS generation are now being exposed to a fatal risk that originates from large local Vth variability (σ_V_Local). To achieve high-yield SRAM arrays in the presence of the random σ_V_Local component, we propose Worst-Case Analysis that determines the boundary of the stable Vth region for the SRAM read/write DC margin (Vth curve). Applying this to our original 65 nm SPICE model, we demonstrate typical behavior of the Vth curve and show new criteria for discussing SRAM array stability with Vth variability.

  • ICCAD - Worst-Case Analysis to obtain stable read/write DC margin of high density 6T-SRAM-array with local Vth variability
    2005
    Co-Authors: Yasumasa Tsukamoto, Koji Nii, S Imaoka, Y Oda, S Ohbayashi, T Yoshizawa, H Makino, Koichiro Ishibashi, Hirofumi Shinohara
    Abstract:

    6T-SRAM cells in the sub-100 nm CMOS generation are now being exposed to a fatal risk that originates from large local Vth variability (σ_V_Local). To achieve high-yield SRAM arrays in the presence of the random σ_V_Local component, we propose Worst-Case Analysis that determines the boundary of the stable Vth region for the SRAM read/write DC margin (Vth curve). Applying this to our original 65 nm SPICE model, we demonstrate typical behavior of the Vth curve and show new criteria for discussing SRAM array stability with Vth variability.

Felice Balarin - One of the best experts on this subject based on the ideXlab platform.

  • ICCAD - STARS in VCC: complementing simulation with Worst-Case Analysis
    2001
    Co-Authors: Felice Balarin
    Abstract:

    STARS is a methodology for Worst-Case Analysis of embedded systems. STARS manipulates abstract representations of system components to obtain upper bounds on the number of various events in the system, as well as a bound on the response time. VCC is a commercial discrete-event simulator that can be used both for functional and performance verification. We describe an extension of VCC to facilitate STARS. The extension allows the user to specify abstract representations of VCC modules. These abstractions are used by STARS, but their validity can also be checked by VCC simulation. We also propose a mostly automatic procedure to generate these abstractions. Finally, we illustrate on an example how STARS can be combined with simulation to find bugs that would be hard to find by simulation alone.
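
    The STARS/VCC implementation itself is not shown here; the sketch below only illustrates the underlying idea with made-up components: an abstraction is an upper bound on output-event counts as a function of input-event counts, such bounds compose along a processing chain, and a claimed abstraction can be checked against samples from a simulation trace. All component names and bound functions are hypothetical.

        # Simplified illustration of abstraction-based event bounds (not the
        # STARS/VCC implementation). A component abstraction bounds how many
        # output events can be emitted given a number of input events.

        def abstraction_filter(n_in):
            # Hypothetical component: emits at most one output per two inputs, plus one.
            return n_in // 2 + 1

        def abstraction_decoder(n_in):
            # Hypothetical component: emits at most three outputs per input.
            return 3 * n_in

        def composed_bound(n_in):
            # Chain: source -> filter -> decoder. Bounds compose by function
            # composition because both bound functions are monotone in n_in.
            return abstraction_decoder(abstraction_filter(n_in))

        def check_against_trace(abstraction, trace):
            """trace: list of (inputs_seen_so_far, outputs_seen_so_far) samples from a
            simulation run. Returns True if no sample violates the claimed bound."""
            return all(outs <= abstraction(ins) for ins, outs in trace)

        if __name__ == "__main__":
            print("bound on decoder outputs for 10 source events:", composed_bound(10))
            # A made-up simulation trace of the filter component.
            trace = [(0, 0), (2, 1), (4, 3), (10, 6)]
            print("filter abstraction consistent with trace:",
                  check_against_trace(abstraction_filter, trace))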

  • CODES - STARS of MPEG decoder: a Case study in Worst-Case Analysis of discrete-event systems
    Proceedings of the ninth international symposium on Hardware software codesign - CODES '01, 2001
    Co-Authors: Felice Balarin
    Abstract:

    STARS (STatic Analysis of Reactive Systems) is a methodology for Worst-Case Analysis of discrete systems. Theoretical foundations of STARS have been laid down [1, 2, 3], but no implementation has been presented so far. We introduce an implementation of STARS as an extension of YAPI, a programming interface used to model signal processing applications as process networks [7]. We apply STARS to a YAPI model of an MPEG decoder. We show that worst-case bounds computed by STARS are quite close to simulated values (within 15%). We also show that the additional effort required of the designer to build STARS models is very small compared to the effort of building the YAPI simulation model, and that the run times of STARS are negligible compared to the simulation run times.

  • DATE - Automatic abstraction for Worst-Case Analysis of discrete systems
    Proceedings of the conference on Design automation and test in Europe - DATE '00, 2000
    Co-Authors: Felice Balarin
    Abstract:

    Recently a methodology for Worst-Case Analysis of discrete systems has been proposed by the author. The methodology relies on a user-provided abstraction of system components. In this paper the author proposes a procedure to automatically generate such abstractions for system components with Boolean transition functions. The author uses a binary decision diagram (BDD) of the transition function to generate a formula in Presburger arithmetic representing the desired abstraction. The author's experiments indicate that the approach can be applied to control-dominated embedded systems.

  • ICCAD - Worst-Case Analysis of discrete systems
    1999
    Co-Authors: Felice Balarin
    Abstract:

    We propose a methodology for Worst-Case Analysis of systems with discrete observable signals. The methodology can be used to verify different properties of systems such as power consumption, timing performance or resource utilization. We also propose an application of the methodology to timing analysis of embedded systems implemented on a single processor. The analysis provides a bound on the response time of such systems. It is typically very efficient, because it does not require a state space search.

  • CODES - Worst-Case Analysis of discrete systems based on conditional abstractions
    Proceedings of the seventh international workshop on Hardware software codesign - CODES '99, 1999
    Co-Authors: Felice Balarin
    Abstract:

    Recently, a methodology for Worst-Case Analysis of systems with discrete observable signals has been proposed. We extend this methodology to make use of conditional system abstractions that are valid only in some system states. We show that the response-time analysis for single-processor systems is particularly well suited to the use of such abstractions. We use an example to demonstrate that significantly better response-time bounds can be obtained using conditional abstractions.