
Average Execution Time

The experts below are selected from a list of 327 experts worldwide, ranked by the ideXlab platform

Kees Goossens – 1st expert on this subject based on the ideXlab platform

  • ECRTS – Dynamic Command Scheduling for Real-Time Memory Controllers
    2014 26th Euromicro Conference on Real-Time Systems, 2014
    Co-Authors: Yonghui Li, Benny Akesson, Kees Goossens

    Abstract:

    Memory controller design is challenging as real-time embedded systems feature an increasing diversity of real-time and non-real-time applications with variable transaction sizes. To satisfy the requirements of the applications, tight bounds on the worst-case execution time (WCET) of memory transactions must be provided to real-time applications, while the lowest possible average execution time must be given to the rest. Existing real-time memory controllers cannot efficiently achieve this goal, as they either bound the WCET by sacrificing the average execution time, or are not scalable enough to directly support variable transaction sizes, or both. In this paper, we propose to use dynamic command scheduling, which is capable of efficiently dealing with transactions of variable sizes. The three main contributions of this paper are: 1) a back-end architecture for a real-time memory controller with a dynamic command scheduling algorithm, 2) a formalization of the timings of the memory transactions for the proposed architecture and algorithm, and 3) two techniques to bound the WCET of transactions with fixed and variable sizes, respectively. We experimentally evaluate the proposed memory controller and compare both the worst-case and average-case execution times of transactions to a state-of-the-art semi-static approach. The results demonstrate that dynamic command scheduling outperforms the semi-static approach by 33.4% in the average case and performs at least equally well in the worst case. We also show that the WCET bounds are tight for transactions with fixed and variable sizes, respectively.
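The trade-off the abstract describes, a semi-static schedule that reserves a worst-case slot for every transaction versus dynamic scheduling that charges each transaction only its actual service time, can be illustrated with a small simulation. The cost model and all numbers below are hypothetical, not the paper's controller:

```python
import random

WORST_CASE_SLOT = 64  # cycles reserved per transaction in the semi-static schedule (assumed)

def service_time_dynamic(size_words):
    # Hypothetical cost model: fixed activation overhead plus a per-word transfer cost.
    return 16 + 3 * size_words

def simulate(num_txns=10_000, seed=0):
    """Compare average and worst-case service times of the two schemes
    over a random mix of transaction sizes."""
    rng = random.Random(seed)
    sizes = [rng.choice([4, 8, 16]) for _ in range(num_txns)]
    dyn = [service_time_dynamic(s) for s in sizes]
    semi = [WORST_CASE_SLOT] * num_txns
    return sum(dyn) / num_txns, max(dyn), sum(semi) / num_txns, max(semi)
```

Under these assumptions the dynamic scheme wins on average (small transactions finish early) while its worst case never exceeds the reserved slot, mirroring the qualitative result reported in the abstract.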

Erik Larsson – 2nd expert on this subject based on the ideXlab platform

  • Level of confidence evaluation and its usage for Roll-back Recovery with Checkpointing optimization
    2011 IEEE IFIP 41st International Conference on Dependable Systems and Networks Workshops (DSN-W), 2011
    Co-Authors: Dimitar Nikolov, Virendra Singh, Urban Ingelsson, Erik Larsson

    Abstract:

    Increasing soft-error rates for semiconductor devices manufactured in recent technologies enforce the use of fault-tolerance techniques such as Roll-back Recovery with Checkpointing (RRC). However, RRC introduces a time overhead that increases the completion (execution) time. For non-real-time systems, research has focused on optimizing RRC and shown that it is possible to find the optimal number of checkpoints such that the average execution time is minimal. While a minimal average execution time is important, for real-time systems it is important to provide a high probability that deadlines are met. Hence, there is a need for probabilistic guarantees that jobs employing RRC complete before a given deadline. First, we present a mathematical framework for evaluating the level of confidence, the probability that a given deadline is met when RRC is employed. Second, we present an optimization method for RRC that finds the number of checkpoints resulting in the minimal completion time, such that the minimal completion time satisfies a given level-of-confidence requirement. Third, we use the proposed framework to evaluate probabilistic guarantees for RRC optimization in non-real-time systems.
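As a rough illustration of the optimization problem (a simplified model, not the paper's framework), assume a job of length T, a checkpoint overhead tau, and soft errors arriving as a Poisson process with rate lam, so that each segment re-executes until it runs error-free. A sketch that picks the checkpoint count minimizing the expected completion time and Monte-Carlo-estimates the level of confidence for a deadline:

```python
import math
import random

def expected_completion(T, tau, lam, n):
    """Expected completion time with n equidistant checkpoints: each segment
    of length T/n plus overhead tau is retried until error-free, with
    per-attempt failure probability p = 1 - exp(-lam * (T/n + tau))."""
    seg = T / n + tau
    p = 1 - math.exp(-lam * seg)
    return n * seg / (1 - p)

def best_n(T, tau, lam, n_max=50):
    """Checkpoint count minimizing the expected completion time."""
    return min(range(1, n_max + 1), key=lambda n: expected_completion(T, tau, lam, n))

def confidence(T, tau, lam, n, deadline, trials=20_000, seed=1):
    """Monte-Carlo estimate of the probability that the job meets the deadline."""
    rng = random.Random(seed)
    seg = T / n + tau
    p = 1 - math.exp(-lam * seg)
    met = 0
    for _ in range(trials):
        total = 0.0
        for _ in range(n):
            total += seg                  # the attempt that eventually succeeds
            while rng.random() < p:       # retry the segment on each soft error
                total += seg
        met += total <= deadline
    return met / trials
```

Too few checkpoints make re-execution after an error expensive; too many pay the checkpoint overhead repeatedly, so the expected completion time has an interior minimum, which is the shape of the trade-off the abstract refers to.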

  • Fault-tolerant Average Execution Time optimization for general-purpose multi-processor system-on-chips
    2009 Design Automation & Test in Europe Conference & Exhibition, 2009
    Co-Authors: Mikael Vayrynen, Virendra Singh, Erik Larsson

    Abstract:

    Due to semiconductor technology development, fault tolerance is important not only for safety-critical systems but also for general-purpose (non-safety-critical) systems. However, instead of guaranteeing that deadlines are always met, for general-purpose systems it is important to minimize the average execution time (AET) while ensuring fault tolerance. For a given job and a soft (transient) error probability, we define mathematical formulas for the AET that include bus communication overhead for both voting (active replication) and roll-back recovery with checkpointing (RRC). Further, for a given multi-processor system-on-chip (MPSoC), we define integer linear programming (ILP) models that minimize the AET, including bus communication overhead, when: (1) selecting the number of checkpoints when using RRC, (2) finding the number of processors and the job-to-processor assignment when using voting, and (3) selecting the fault-tolerance scheme (voting or RRC) for each job. Experiments demonstrate significant savings in AET.
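A much-simplified, non-ILP sketch of the per-job scheme choice: compare the expected time of RRC (at its best checkpoint count) against triple-modular-redundancy voting under a simple exponential soft-error model. All parameters and cost models below are assumptions, and bus communication overhead is ignored:

```python
import math

def aet_rrc(T, tau, lam, n):
    """Expected time of roll-back recovery with n equidistant checkpoints:
    each segment (length T/n plus checkpoint overhead tau) is retried until
    it runs error-free under a Poisson soft-error model with rate lam."""
    seg = T / n + tau
    p_fail = 1 - math.exp(-lam * seg)
    return n * seg / (1 - p_fail)

def aet_voting(T, v, lam):
    """Expected time of triple modular redundancy: three replicas run in
    parallel, a voter (overhead v) takes the majority, and the whole round
    repeats until at least two replicas are error-free."""
    q = 1 - math.exp(-lam * T)                   # per-replica failure probability
    p_ok = (1 - q) ** 3 + 3 * (1 - q) ** 2 * q   # at least 2 of 3 correct
    return (T + v) / p_ok

def choose_scheme(T, tau, v, lam, n_max=50):
    """Pick the fault-tolerance scheme with the lower expected time per job."""
    rrc = min(aet_rrc(T, tau, lam, n) for n in range(1, n_max + 1))
    vote = aet_voting(T, v, lam)
    return ("RRC", rrc) if rrc < vote else ("voting", vote)
```

For long jobs, voting is penalized because a single error forces the entire job to rerun, while RRC only repeats one segment; the paper's ILP additionally accounts for processor counts, job assignment, and bus traffic, none of which this sketch models.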

Jose Alberto Fernandez-zepeda – 3rd expert on this subject based on the ideXlab platform

  • Average Execution Time Analysis of a Self-stabilizing Leader Election Algorithm
    2007 IEEE International Parallel and Distributed Processing Symposium, 2007
    Co-Authors: Juan Paulo Alvarado-magana, Jose Alberto Fernandez-zepeda

    Abstract:

    This paper deals with the self-stabilizing leader election algorithm of Xu and Srimani that finds a leader in a tree graph. The worst-case execution time of this algorithm is O(N^4), where N is the number of nodes in the tree. We show that the average execution time of the algorithm, under two different scenarios, is much lower than O(N^4). In the first scenario, the algorithm assumes an equiprobable daemon that privileges only a single node at a time; the average execution time in this case is O(N^2). In the second scenario, the algorithm can privilege multiple nodes at a time; we eliminate the daemon by making random choices that avoid interference between neighboring nodes, and the execution time in this case is O(N). We also show that for specific tree graphs these bounds reduce even further.
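The daemon-elimination idea in the second scenario, letting each privileged node make a random choice so that no two neighbors execute in the same round, can be sketched generically. This is the symmetry-breaking pattern only, not the Xu-Srimani algorithm itself, and the names are illustrative:

```python
import random

def daemon_free_round(privileged, adj, rng=random):
    """One synchronous round without a central daemon: every privileged node
    tosses a fair coin and executes its move only if it tossed heads while no
    privileged neighbour did, so two adjacent nodes never move together."""
    toss = {v: rng.random() < 0.5 for v in privileged}
    return {v for v in privileged
            if toss[v] and not any(toss.get(u, False) for u in adj[v])}
```

In expectation a constant fraction of privileged nodes proceeds each round, so many non-interfering moves happen in parallel, which is what brings the average stabilization time down from the sequential-daemon case.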
