Computing Power

The experts below are selected from a list of 360 experts worldwide, ranked by the ideXlab platform.

Kenichi Hagihara - One of the best experts on this subject based on the ideXlab platform.

  • near optimal dynamic task scheduling of independent coarse grained tasks onto a computational grid
    International Conference on Parallel Processing, 2003
    Co-Authors: Noriyuki Fujimoto, Kenichi Hagihara
    Abstract:

    The most common objective function of task scheduling problems is makespan. However, on a computational grid, the second-optimal makespan may be much longer than the optimal makespan because the computing power of a grid varies over time. So, if the performance measure is makespan, there is in general no approximation algorithm for scheduling onto a grid. This paper proposes a novel criterion for a schedule, called total processor cycle consumption: the total number of instructions the grid could compute up to the completion time of the schedule. For this criterion, the paper gives a (1 + m(ln(m-1) + 1)/n)-approximation algorithm for scheduling n independent coarse-grained tasks of the same length onto a grid with m processors; see the first sketch after this list. The proposed algorithm does not use any prediction information on the performance of the underlying resources. This implies the nontrivial result that the computing power consumed by a parameter-sweep application can be kept within (1 + m(ln(m-1) + 1)/n) times that required by an optimal schedule, regardless of how the speed of each processor varies over time.

  • near optimal dynamic task scheduling of precedence constrained coarse grained tasks onto a computational grid
    International Symposium on Parallel and Distributed Computing, 2003
    Co-Authors: Noriyuki Fujimoto, Kenichi Hagihara
    Abstract:

    The most common objective function of task scheduling problems is makespan. However, on a computational grid, the second-optimal makespan may be much longer than the optimal makespan because the speed of each processor of a grid varies over time. So, if the performance measure is makespan, there is in general no approximation algorithm for scheduling onto a grid. In contrast, the authors recently proposed the computing power consumed by a schedule as a criterion of the schedule. For this criterion, this paper gives a (1 + L_cp(n)·m(ln(m-1) + 1)/n)-approximation algorithm for scheduling precedence-constrained coarse-grained tasks of the same length onto a grid, where n is the number of tasks, m is the number of processors, and L_cp(n) is the length of the critical path of the task graph; see the second sketch after this list. The proposed algorithm does not use any prediction information on the performance of the underlying resources. L_cp(n) is usually a sublinear function of n, so the performance guarantee converges to one as n grows. This implies the nontrivial result that the computing power consumed by an application on a grid can, in such a case, be kept within (1 + L_cp(n)·m(ln(m-1) + 1)/n) times that required by an optimal schedule.
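
To make the first result concrete, here is a minimal Python sketch, ours rather than the authors': it evaluates the quoted (1 + m(ln(m-1) + 1)/n) bound and shows the kind of prediction-free dynamic dispatch the abstract describes, where each processor simply pulls the next unfinished task whenever it becomes idle. The paper's actual algorithm is more involved; this is only the self-scheduling core.

```python
import math
import queue
import threading

def bound_independent(n, m):
    """Guarantee quoted in the first abstract: the total processor cycle
    consumption of the schedule is at most this factor times optimal."""
    return 1.0 + m * (math.log(m - 1) + 1.0) / n

def dynamic_dispatch(tasks, run_on_processor, m):
    """Prediction-free dispatch: m workers stand in for grid processors of
    unknown, time-varying speed; each pulls a task as soon as it is idle."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)

    def worker(pid):
        while True:
            try:
                task = q.get_nowait()
            except queue.Empty:
                return
            run_on_processor(pid, task)   # speed of pid may vary over time

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(m)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()

# The guarantee converges to 1 as the number of tasks n grows:
for n in (100, 1000, 10000):
    print(n, round(bound_independent(n, m=16), 4))
```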
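
The second result adds the critical-path factor L_cp(n). The sketch below, again ours, computes L_cp(n) for a task DAG by a topological sweep and evaluates the quoted bound; the example is a hypothetical diamond-shaped graph.

```python
import math
from collections import defaultdict, deque

def critical_path_length(num_tasks, edges):
    """L_cp(n): the number of tasks on the longest chain of the precedence
    DAG. Tasks are 0..num_tasks-1; an edge (u, v) means u precedes v."""
    succ = defaultdict(list)
    indeg = [0] * num_tasks
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    depth = [1] * num_tasks          # a lone task is a chain of length 1
    ready = deque(i for i in range(num_tasks) if indeg[i] == 0)
    while ready:
        u = ready.popleft()
        for v in succ[u]:
            depth[v] = max(depth[v], depth[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return max(depth)

def bound_dag(n, m, l_cp):
    """(1 + L_cp(n)·m(ln(m-1) + 1)/n), the guarantee quoted above."""
    return 1.0 + l_cp * m * (math.log(m - 1) + 1.0) / n

# Diamond DAG: 0 fans out to 1 and 2, which both feed 3; L_cp = 3.
l_cp = critical_path_length(4, [(0, 1), (0, 2), (1, 3), (2, 3)])
print(l_cp, round(bound_dag(n=10000, m=16, l_cp=l_cp), 4))
```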

Mark D. Plumbley - One of the best experts on this subject based on the ideXlab platform.

  • automatic environmental sound recognition performance versus computational cost
    arXiv: Sound, 2016
    Co-Authors: Siddharth Sigtia, Adam M. Stark, Sacha Krstulović, Mark D. Plumbley
    Abstract:

    In the context of the Internet of Things (IoT), sound sensing applications are required to run on embedded platforms where notions of product pricing and form factor impose hard constraints on the available computing power. Whereas Automatic Environmental Sound Recognition (AESR) algorithms are most often developed with little consideration for computational cost, this article asks which AESR algorithm can make the most of a limited amount of computing power by comparing sound classification performance as a function of computational cost. Results suggest that Deep Neural Networks yield the best classification accuracy across a range of computational costs, while Gaussian Mixture Models offer reasonable accuracy at a consistently small cost, and Support Vector Machines sit between the two in the trade-off between accuracy and computational cost.

  • Automatic Environmental Sound Recognition: Performance Versus Computational Cost
    IEEE ACM Transactions on Audio Speech and Language Processing, 2016
    Co-Authors: Siddharth Sigtia, Sacha Krstulović, Adam M. Stark, Mark D. Plumbley
    (Journal version of the arXiv preprint above; the abstract is essentially identical. A toy harness illustrating the accuracy-versus-cost comparison follows.)
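
The comparison methodology can be sketched with a small scikit-learn harness, ours and deliberately generic: synthetic features stand in for the paper's environmental-sound data, prediction time stands in for its cost metric, and the three model families are represented by off-the-shelf implementations.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

class GMMClassifier:
    """One Gaussian mixture per class; predict by highest log-likelihood."""
    def __init__(self, n_components=4):
        self.n_components = n_components

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.models_ = [GaussianMixture(self.n_components, random_state=0)
                        .fit(X[y == c]) for c in self.classes_]
        return self

    def predict(self, X):
        scores = np.column_stack([m.score_samples(X) for m in self.models_])
        return self.classes_[scores.argmax(axis=1)]

# Synthetic stand-in for acoustic feature vectors.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=20,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [("GMM", GMMClassifier()),
          ("SVM", SVC()),
          ("DNN", MLPClassifier(hidden_layer_sizes=(64, 64),
                                max_iter=500, random_state=0))]
for name, model in models:
    model.fit(X_tr, y_tr)
    start = time.perf_counter()        # cost = time spent classifying
    acc = (model.predict(X_te) == y_te).mean()
    cost_ms = (time.perf_counter() - start) * 1e3
    print(f"{name}: accuracy={acc:.3f}, prediction cost={cost_ms:.1f} ms")
```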

Noriyuki Fujimoto - One of the best experts on this subject based on the ideXlab platform.

  • near optimal dynamic task scheduling of independent coarse grained tasks onto a computational grid
    International Conference on Parallel Processing, 2003
    Co-Authors: Noriyuki Fujimoto, Kenichi Hagihara
    (Abstract as given above under Kenichi Hagihara.)

  • near optimal dynamic task scheduling of precedence constrained coarse grained tasks onto a computational grid
    International Symposium on Parallel and Distributed Computing, 2003
    Co-Authors: Noriyuki Fujimoto, Kenichi Hagihara
    (Abstract as given above under Kenichi Hagihara.)

S. Borbély - One of the best experts on this subject based on the ideXlab platform.

  • boosting the performance of embedded vision systems using a dsp fpga co processor system
    Systems, Man and Cybernetics, 2007
    Co-Authors: F. Rinnerthaler, Wilfried Kubinger, Josef Langer, Martin Humenberger, S. Borbély
    Abstract:

    Sensor systems for robotics and autonomous systems usually require small, power-aware, dedicated solutions. An embedded system is therefore the first choice, but its drawback is weaker computing power compared to state-of-the-art PC-based systems. This paper describes our approach to "boosting" the computing power of embedded vision systems. Our system consists of a platform based on a digital signal processor (DSP) enhanced by an additional field-programmable gate array (FPGA) used as a co-processor. In our novel approach, called resource-optimized co-processing, the DSP and the FPGA are driven in parallel to execute the crucial parts of the vision algorithms; through efficient use of system resources, a significant increase in system performance is possible. The paper outlines the approach and the achievable gain in computing power; a conceptual sketch follows this entry. Based on an example case, the realization of a robot-soccer embedded vision sensor, the usefulness and power of our approach are demonstrated: by applying resource-optimized co-processing, the most crucial and computation-intensive function was executed twice as fast as before, which allowed us to meet the stringent real-time requirements of the vision system.
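
The partitioning idea can be pictured with a toy Python sketch, entirely ours: two threads stand in for the DSP and the FPGA, the filter functions are placeholders for the real vision kernels, and each unit processes a disjoint half of the frame in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def dsp_filter(tile):
    """Placeholder for the share of the algorithm kept on the DSP."""
    return tile.astype(np.float32) * 0.5

def fpga_filter(tile):
    """Placeholder for the work off-loaded to the FPGA co-processor."""
    return tile.astype(np.float32) * 0.5

def co_process(frame):
    """Run both 'units' in parallel on disjoint halves of the frame and
    stitch the results, mimicking resource-optimized co-processing."""
    top, bottom = np.vsplit(frame, 2)
    with ThreadPoolExecutor(max_workers=2) as pool:
        f_top = pool.submit(dsp_filter, top)
        f_bottom = pool.submit(fpga_filter, bottom)
        return np.vstack([f_top.result(), f_bottom.result()])

frame = np.random.randint(0, 255, size=(480, 640), dtype=np.uint8)
out = co_process(frame)
assert out.shape == frame.shape
```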

François Bérard - One of the best experts on this subject based on the ideXlab platform.

  • Perceptual user interfaces: things that see
    Communications of the ACM, 2000
    Co-Authors: James L. Crowley, Joëlle Coutaz, François Bérard
    Abstract:

    The exponential decrease in the costs of computation and communication is rapidly leading to convergence and ubiquity. At the same time, inexpensive computing power is enabling a quiet revolution in the machine perception of human action. In the near future, we expect machine perception to converge with ubiquitous computing and communication.