Multiprogramming

The Experts below are selected from a list of 2016 Experts worldwide, ranked by the ideXlab platform.

Thomas J Leblanc - One of the best experts on this subject based on the ideXlab platform.

  • Multiprogramming and multiprocessing
    Wiley Encyclopedia of Electrical and Electronics Engineering, 1999
    Co-Authors: Evangelos P Markatos, Thomas J Leblanc
    Abstract:

    The sections in this article are: (1) Multiprogramming Shared-Memory Multiprocessors; (2) Multiprogramming Distributed-Memory Multiprocessors; (3) Multiprogramming Networks of Workstations; (4) Summary; (5) Acknowledgments.

  • The effects of Multiprogramming on barrier synchronization
    International Parallel and Distributed Processing Symposium, 1991
    Co-Authors: Evangelos P Markatos, Mark Crovella, P Das, Cezary Dubnicki, Thomas J Leblanc
    Abstract:

    One of the most common ways to share a multiprocessor among several applications is to give each application a set of dedicated processors. To ensure fairness, an application may receive fewer processors than it has processes. Unless an application can easily adjust the number of processes it employs during execution, several processes from the same application may have to share a processor. The authors quantify the performance penalty that arises when more than one process from the same application runs on a single processor of a NUMA (Non-Uniform Memory Access) multiprocessor. They consider programs that use coarse-grain parallelism and barrier synchronization, because such programs are particularly sensitive to Multiprogramming, and quantify the impact of quantum size, synchronization frequency, and barrier type on application performance. They conclude that dedicating processors to an application, even without migration or dynamic adjustment of the number of processes, is an effective scheduling policy for programs that synchronize frequently using barriers.
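
    To make the cost model concrete, the sketch below shows a centralized sense-reversing spin barrier in C11 (a minimal illustration under our own naming, not code from the paper). Because every waiter spins until the last process arrives, preempting any one process stalls all of its siblings for the rest of their scheduling quanta, which is exactly the penalty the study quantifies.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Centralized sense-reversing spin barrier (illustrative sketch;
 * names are ours, not the paper's). */
typedef struct {
    atomic_int  count;   /* processes still to arrive this episode */
    atomic_bool sense;   /* global sense, flipped by the last arrival */
    int         total;   /* participating processes */
} spin_barrier_t;

void spin_barrier_init(spin_barrier_t *b, int n) {
    atomic_init(&b->count, n);
    atomic_init(&b->sense, false);
    b->total = n;
}

/* local_sense is per-process state, initialized to true. */
void spin_barrier_wait(spin_barrier_t *b, bool *local_sense) {
    bool my_sense = *local_sense;
    if (atomic_fetch_sub(&b->count, 1) == 1) {
        /* Last arrival: reset for the next episode, release the rest. */
        atomic_store(&b->count, b->total);
        atomic_store(&b->sense, my_sense);
    } else {
        while (atomic_load(&b->sense) != my_sense)
            ;  /* busy-wait: wasted cycles if any sibling was preempted */
    }
    *local_sense = !my_sense;  /* reverse sense for the next episode */
}
```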

  • Multiprogramming on multiprocessors
    International Parallel and Distributed Processing Symposium, 1991
    Co-Authors: Mark Crovella, P Das, Cezary Dubnicki, Thomas J Leblanc, Evangelos P Markatos
    Abstract:

    Many solutions have been proposed to the problem of Multiprogramming a multiprocessor, but each has limited applicability or fails to address an important source of overhead. In addition, there has been little experimental comparison of the various solutions in the presence of applications with varying degrees of parallelism and synchronization. The authors explore the tradeoffs among three approaches to Multiprogramming a multiprocessor: time-slicing, coscheduling, and dynamic hardware partitions. Using applications that vary in their degree of parallelism and in the frequency and type of synchronization, they show that in most cases coscheduling is preferable to time-slicing. They also show that even in the cases where coscheduling is beneficial, dynamic hardware partitions do no worse, and will often do better. They conclude that under most circumstances, hardware partitioning is the best strategy for Multiprogramming a multiprocessor, no matter how much parallelism applications employ or how frequently synchronization occurs.
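
    As a rough illustration of the partitioning approach (an equipartition-style sketch under assumed parameters, not the mechanism evaluated in the paper), a dynamic hardware partitioner recomputes each application's dedicated share of the processors only when an application arrives or departs, so processes never time-slice against processes of other applications:

```c
#include <stdio.h>

#define NPROC 16  /* processors in the machine (assumed) */

/* Equipartition-style dynamic hardware partitioning sketch: each
 * running application gets a dedicated share of the processors,
 * recomputed only when the set of applications changes. */
static void repartition(int nprocs, int napps, int shares[]) {
    int base = nprocs / napps, extra = nprocs % napps;
    for (int i = 0; i < napps; i++)
        shares[i] = base + (i < extra ? 1 : 0);
}

int main(void) {
    int shares[8];
    for (int napps = 1; napps <= 4; napps++) {
        repartition(NPROC, napps, shares);
        printf("%d app(s):", napps);
        for (int i = 0; i < napps; i++)
            printf(" app%d=%d", i, shares[i]);
        printf("\n");
    }
    return 0;
}
```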

Mateo Valero - One of the best experts on this subject based on the ideXlab platform.

  • Enabling preemptive Multiprogramming on GPUs
    International Symposium on Computer Architecture, 2014
    Co-Authors: Ivan Tanasic, Isaac Gelado, Javier Cabezas, Alex Ramirez, Nacho Navarro, Mateo Valero
    Abstract:

    GPUs are being increasingly adopted as compute accelerators in many domains, spanning environments from mobile systems to cloud computing. These systems usually run multiple applications from one or several users. However, GPUs do not provide the support for resource sharing traditionally expected in such scenarios, so these systems are unable to meet key multiprogrammed-workload requirements such as responsiveness, fairness, or quality of service. In this paper, we propose a set of hardware extensions that allow GPUs to efficiently support multiprogrammed GPU workloads. We argue for preemptive multitasking and design two preemption mechanisms that can be used to implement GPU scheduling policies. We extend the architecture to allow concurrent execution of GPU kernels from different user processes and implement a scheduling policy that dynamically distributes the GPU cores among concurrently running kernels, according to their priorities. We extend an NVIDIA GK110 (Kepler)-like GPU architecture with our proposals and evaluate them on a set of multiprogrammed workloads with up to eight concurrent processes. Our proposals improve the execution time of high-priority processes by 15.6x, average application turnaround time by 1.5x to 2x, and system fairness by up to 3.4x.
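
    The core-distribution policy can be approximated with simple integer arithmetic. The sketch below is a hypothetical illustration, not the paper's hardware mechanism: it splits the streaming multiprocessors of a GK110-class GPU (a full GK110 has 15 SMX units) among concurrent kernels in proportion to their priorities, with a floor of one SM per kernel so every kernel keeps making progress.

```c
#include <stdio.h>

#define NUM_SM 15  /* SMX count of a full GK110 */

/* Hypothetical priority-proportional SM distribution sketch.
 * Assumes nkernels <= NUM_SM; not the paper's mechanism. */
static void distribute_sms(int nkernels, const int prio[], int sms[]) {
    int total = 0, assigned = 0;
    for (int i = 0; i < nkernels; i++) total += prio[i];
    for (int i = 0; i < nkernels; i++) {
        sms[i] = (NUM_SM * prio[i]) / total;
        if (sms[i] == 0) sms[i] = 1;  /* progress floor */
        assigned += sms[i];
    }
    /* Give any leftover SMs to the highest-priority kernel. */
    if (assigned < NUM_SM) {
        int hi = 0;
        for (int i = 1; i < nkernels; i++)
            if (prio[i] > prio[hi]) hi = i;
        sms[hi] += NUM_SM - assigned;
    }
}

int main(void) {
    int prio[] = {8, 4, 1}, sms[3];
    distribute_sms(3, prio, sms);
    for (int i = 0; i < 3; i++)
        printf("kernel %d (prio %d): %d SMs\n", i, prio[i], sms[i]);
    return 0;
}
```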

Evangelos P Markatos - One of the best experts on this subject based on the ideXlab platform.

  • See the three entries listed above under Thomas J Leblanc: Multiprogramming and multiprocessing (1999), The effects of Multiprogramming on barrier synchronization (1991), and Multiprogramming on multiprocessors (1991), all co-authored by Evangelos P Markatos.

Robert W Wisniewski - One of the best experts on this subject based on the ideXlab platform.

  • Scheduler-Conscious Synchronization
    University of Rochester, Computer Science Department, 2014
    Co-Authors: Leonidas Kontothanassis, Robert W Wisniewski, Michael L. Scott
    Abstract:

    Efficient synchronization is important for achieving good performance in parallel programs, especially on large-scale multiprocessors. Most synchronization algorithms have been designed to run on a dedicated machine, with one application process per processor, and can suffer serious performance degradation in the presence of Multiprogramming. Problems arise when running processes block or, worse, busy-wait for action on the part of a process that the scheduler has chosen not to run. In this paper we describe and evaluate a set of scheduler-conscious synchronization algorithms that perform well in the presence of Multiprogramming while maintaining good performance on dedicated machines. We consider both large and small machines, with a particular focus on scalability, and examine mutual-exclusion locks, reader-writer locks, and barriers. The algorithms we study fall into two classes: those that heuristically determine appropriate behavior and those that use scheduler information to guide their behavior. We show that while in some cases either method is sufficient, in general sharing information across the kernel-user interface both eases the design of synchronization algorithms and improves their performance.
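
    The heuristic class can be illustrated with a spin-then-yield lock (a minimal C11 sketch with assumed constants, not one of the paper's algorithms): spin briefly in case the lock holder is running, then yield the processor on the guess that the holder has been preempted. The second class would instead consult state exported by the kernel scheduler to make that decision directly.

```c
#include <stdatomic.h>
#include <sched.h>

#define SPIN_LIMIT 1000  /* assumed knob, on the order of a context switch */

/* Heuristic scheduler-conscious lock sketch: spin briefly in case
 * the holder is running, then yield on the guess that it was
 * preempted. Illustrative only. */
typedef struct {
    atomic_flag held;  /* initialize with ATOMIC_FLAG_INIT */
} yield_lock_t;

void yield_lock_acquire(yield_lock_t *l) {
    int spins = 0;
    while (atomic_flag_test_and_set_explicit(&l->held,
                                             memory_order_acquire)) {
        if (++spins >= SPIN_LIMIT) {
            sched_yield();  /* let a preempted holder run */
            spins = 0;
        }
    }
}

void yield_lock_release(yield_lock_t *l) {
    atomic_flag_clear_explicit(&l->held, memory_order_release);
}
```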

  • Using scheduler information to achieve optimal barrier synchronization performance
    ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 1993
    Co-Authors: Leonidas Kontothanassis, Robert W Wisniewski
    Abstract:

    Parallel programs frequently use barriers to synchronize successive steps in an algorithm. In the presence of Multiprogramming the choice of spinning versus blocking barriers can have a significant impact on performance. We demonstrate how competitive spinning techniques previously designed for locks can be extended to barriers, and we evaluate their performance. We design an additional competitive spinning technique that adapts more quickly in a dynamic environment. We then propose and evaluate a new method that obtains better performance than previous techniques by using scheduler information to decide between spinning and blocking. The scheduler information technique makes optimal choices while incurring little overhead.
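
    The competitive idea carries over from locks roughly as follows (a sketch assuming POSIX threads and an assumed spin limit, not the paper's implementation): each waiter spins for about as long as a block/unblock pair would cost, then blocks, so its wasted time is at most about twice that of the optimal offline choice between spinning and blocking.

```c
#include <pthread.h>

#define SPIN_LIMIT 5000  /* stands in for the cost of a block/unblock pair */

/* Competitive spin-then-block barrier sketch (illustrative). Uses
 * GCC/Clang __atomic builtins for the lock-free generation check. */
typedef struct {
    pthread_mutex_t m;
    pthread_cond_t  cv;
    int count;       /* arrivals so far in this episode */
    int total;       /* participating threads */
    int generation;  /* bumped when the barrier opens */
} barrier_t;

void barrier_init(barrier_t *b, int n) {
    pthread_mutex_init(&b->m, NULL);
    pthread_cond_init(&b->cv, NULL);
    b->count = 0;
    b->total = n;
    b->generation = 0;
}

void barrier_wait(barrier_t *b) {
    pthread_mutex_lock(&b->m);
    int gen = b->generation;
    if (++b->count == b->total) {   /* last arrival: open the barrier */
        b->count = 0;
        __atomic_fetch_add(&b->generation, 1, __ATOMIC_RELEASE);
        pthread_cond_broadcast(&b->cv);
        pthread_mutex_unlock(&b->m);
        return;
    }
    pthread_mutex_unlock(&b->m);

    /* Phase 1: spin competitively, hoping the barrier opens soon. */
    for (int spins = 0; spins < SPIN_LIMIT; spins++)
        if (__atomic_load_n(&b->generation, __ATOMIC_ACQUIRE) != gen)
            return;

    /* Phase 2: spinning has cost about as much as blocking; block. */
    pthread_mutex_lock(&b->m);
    while (b->generation == gen)
        pthread_cond_wait(&b->cv, &b->m);
    pthread_mutex_unlock(&b->m);
}
```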

Emilio Luque - One of the best experts on this subject based on the ideXlab platform.

  • Coscheduling and Multiprogramming level in a non-dedicated cluster
    Lecture Notes in Computer Science, 2004
    Co-Authors: Mauricio Hanzich, Francesc Gine, P Hernandez, Francesc Solsona, Emilio Luque
    Abstract:

    Our interest is oriented towards keeping both local and parallel jobs together in a time-sharing, non-dedicated cluster. In such systems, developing dynamic coscheduling techniques that, without memory restrictions, take into account the Multiprogramming Level of parallel applications (MPL) is a main goal of current cluster research. In this paper, a new technique called Cooperating Coscheduling (CCS), which combines a dynamic coscheduling system and a resource balancing scheme, is applied.
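
    A common trigger for dynamic coscheduling of this kind is communication: when a message arrives for a local process of a parallel job, the receiver's priority is boosted so that communicating peers on different nodes tend to run at the same time. The sketch below uses the standard POSIX setpriority() call with hypothetical hook names; it is not the CCS implementation, and negative nice values require privilege on most systems.

```c
#include <sys/resource.h>
#include <sys/types.h>

#define BOOST_NICE  -5   /* temporarily favored (assumed value) */
#define NORMAL_NICE  0

/* Hypothetical hook, called by the messaging layer when a packet
 * for local process `pid` arrives: favor the receiver so it can
 * consume the message promptly and stay coscheduled with its peers. */
void on_message_arrival(pid_t pid) {
    setpriority(PRIO_PROCESS, pid, BOOST_NICE);
}

/* Hypothetical hook, called once the receiver has drained its
 * message queue: drop back to normal so local jobs are not starved. */
void on_queue_empty(pid_t pid) {
    setpriority(PRIO_PROCESS, pid, NORMAL_NICE);
}
```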

  • Multiprogramming level of PVM jobs in a non-dedicated Linux NOW
    Lecture Notes in Computer Science, 2003
    Co-Authors: Francesc Gine, Mauricio Hanzich, P Hernandez, Francesc Solsona, Jesus Barrientos, Emilio Luque
    Abstract:

    Our research is focused on keeping both local and PVM jobs together in a time-sharing Network of Workstations (NOW) and efficiently scheduling them by means of dynamic coscheduling mechanisms. In this framework, we study the sensitivity of PVM jobs to Multiprogramming. Based on the results obtained, we propose a new extension of the dynamic coscheduling technique, named Cooperating Coscheduling, and show its feasibility experimentally in a PVM-Linux cluster.