Resource Isolation

The Experts below are selected from a list of 18,972 Experts worldwide, ranked by the ideXlab platform.

Haibing Guan - One of the best experts on this subject based on the ideXlab platform.

  • VGRIS: Virtualized GPU Resource Isolation and Scheduling in Cloud Gaming
    ACM Transactions on Architecture and Code Optimization, 2014
    Co-Authors: Jianguo Yao, Chao Zhang, Zhizhou Yang, Haibing Guan
    Abstract:

    To achieve efficient Resource management on a graphics processing unit (GPU), there is a demand for a framework that schedules virtualized Resources in cloud gaming. In this article, we propose VGRIS, a Resource management framework for virtualized GPU Resource Isolation and scheduling in cloud gaming. A set of application programming interfaces (APIs) is provided so that a variety of scheduling algorithms can be implemented within the framework without modifying the framework itself. Three scheduling algorithms are implemented through these APIs within VGRIS. Experimental results show that VGRIS can effectively schedule GPU Resources among various workloads.

  • VGRIS: Virtualized GPU Resource Isolation and Scheduling in Cloud Gaming
    Proceedings of the 22nd International Symposium on High-Performance Parallel and Distributed Computing (HPDC), 2013
    Co-Authors: Chao Zhang, Jianguo Yao, Yin Wang, Haibing Guan
    Abstract:

    Fueled by the maturity of virtualization technology for the Graphics Processing Unit (GPU), an increasing number of data centers are dedicated to GPU-related computation tasks in cloud gaming. However, GPU Resource sharing in these applications is usually poor, because typical cloud gaming service providers often allocate one GPU exclusively to a single game. Efficient management of computational Resources therefore calls for multi-task scheduling technologies that improve GPU utilization in the cloud. In this paper, we propose VGRIS, a Resource management framework for Virtualized GPU Resource Isolation and Scheduling in cloud gaming. By leveraging the mature GPU paravirtualization architecture, VGRIS resides in the host through library API interception, while the guest OS and the GPU computing applications remain unmodified. Within the proposed framework, we implemented three scheduling algorithms for different objectives, i.e., Service Level Agreement (SLA)-aware scheduling, proportional-share scheduling, and a hybrid scheduling that mixes the former two. Such a scheduling framework makes it possible to handle different kinds of GPU computation tasks for different purposes in cloud gaming. Our experimental results show that each scheduling algorithm can achieve its goals under various workloads.
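
The VGRIS entries above describe a framework in which scheduling policies plug into a fixed set of APIs, with proportional-share scheduling as one of the three implemented policies. The sketch below is only a minimal illustration of that plug-in idea under assumed names (Scheduler, ProportionalShare, VmContext, run_slice) and an assumed fixed time slice; it is not the VGRIS API itself.

```python
# Minimal sketch of a plug-in GPU scheduling framework in the spirit of VGRIS.
# All names (Scheduler, ProportionalShare, VmContext) are illustrative, not the paper's API.
from dataclasses import dataclass

@dataclass
class VmContext:
    name: str
    weight: int                   # proportional-share weight
    gpu_time_used: float = 0.0    # accumulated GPU time in ms

class Scheduler:
    """Policy interface: the framework calls pick() whenever the GPU is free."""
    def pick(self, runnable: list[VmContext]) -> VmContext:
        raise NotImplementedError

class ProportionalShare(Scheduler):
    """Give the GPU to the VM with the lowest weighted usage (stride-like)."""
    def pick(self, runnable: list[VmContext]) -> VmContext:
        return min(runnable, key=lambda vm: vm.gpu_time_used / vm.weight)

def run_slice(policy: Scheduler, vms: list[VmContext], slice_ms: float = 5.0) -> VmContext:
    """Framework side: dispatch one GPU time slice to the VM chosen by the policy."""
    vm = policy.pick(vms)
    # ... the intercepted GPU calls of `vm` would be issued here ...
    vm.gpu_time_used += slice_ms
    return vm

if __name__ == "__main__":
    vms = [VmContext("game-a", weight=2), VmContext("game-b", weight=1)]
    policy = ProportionalShare()
    picks = [run_slice(policy, vms).name for _ in range(6)]
    print(picks)  # game-a receives roughly two slices for every slice of game-b
```

Under this policy a VM with twice the weight accumulates roughly twice the GPU time, which matches the proportional-share objective described in the abstract.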

Georg Carle - One of the best experts on this subject based on the ideXlab platform.

  • Intra-Node Resource Isolation for SFC with SR-IOV
    2018 IEEE 7th International Conference on Cloud Networking (CloudNet), 2018
    Co-Authors: Simon Bauer, Daniel Raumer, Paul Emmerich, Georg Carle
    Abstract:

    Single Root I/O Virtualization (SR-IOV) is intended to provide simultaneous native access to network interface cards (NICs) from multiple virtual machines or applications. For this, SR-IOV offloads packet switching from software to hardware. We use SR-IOV outside its original purpose and establish a chaining infrastructure between virtual Service Functions, so that the Resources used to provide chaining are isolated by the NIC. We compare Service Function Chains based on SR-IOV to fully software-based Service Function Chains and study how shifting workload from the CPU to the NIC affects performance. Furthermore, we analyze the impact on performance of the virtual PCIe functions required for the use of SR-IOV. Our study provides a detailed performance evaluation of Service Function Chains implemented with Open vSwitch and DPDK, based on comparative measurements on commodity hardware including profiling of the CPU and the PCIe bus. We model the Resource constraints of both implementation approaches to identify performance bottlenecks and to determine a scenario's maximum throughput.
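
As background for the Isolation mechanism the abstract relies on, the sketch below shows the standard Linux interfaces for creating SR-IOV virtual functions and pinning each one to a MAC/VLAN identity, so that the NIC's embedded switch, rather than software, forwards traffic between chained Service Functions. The interface name, VF count, and addresses are placeholders, and the snippet is a generic illustration rather than the paper's testbed configuration.

```python
# Sketch: create SR-IOV virtual functions (VFs) and give each one a fixed identity,
# so the NIC, not software, switches traffic between chained service functions.
# eth0, the VF count, and the MAC/VLAN values are placeholders; requires root.
import subprocess

PF = "eth0"        # physical function (assumed interface name)
NUM_VFS = 2        # one VF per service function in the chain

def create_vfs(pf: str, count: int) -> None:
    # Standard Linux sysfs knob for enabling SR-IOV on a physical function.
    with open(f"/sys/class/net/{pf}/device/sriov_numvfs", "w") as f:
        f.write(str(count))

def configure_vf(pf: str, vf_index: int, mac: str, vlan: int) -> None:
    # Pin MAC and VLAN so the NIC's embedded switch isolates each VF's traffic.
    subprocess.run(
        ["ip", "link", "set", "dev", pf, "vf", str(vf_index),
         "mac", mac, "vlan", str(vlan)],
        check=True,
    )

if __name__ == "__main__":
    create_vfs(PF, NUM_VFS)
    configure_vf(PF, 0, "02:00:00:00:00:01", vlan=100)  # first SF in the chain
    configure_vf(PF, 1, "02:00:00:00:00:02", vlan=100)  # second SF in the chain
    # Each VF can now be handed to a VM or a DPDK application as its own PCIe device.
```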

Akihiro Nakao - One of the best experts on this subject based on the ideXlab platform.

  • On-Site Evaluation of a Software Cellular Based MEC System with Downlink Slicing Technology
    2018 IEEE 7th International Conference on Cloud Networking (CloudNet), 2018
    Co-Authors: Koichiro Amemiya, Yuko Akiyama, Kazunari Kobayashi, Yoshio Inoue, Shu Yamamoto, Akihiro Nakao
    Abstract:

    MEC (Mobile/Multi-access Edge Computing) has recently attracted much attention for processing data in the vicinity of user equipment (UE) to reduce latency for real-time applications. One of the challenges of MEC is to reduce the cost of deploying computational Resources near the cellular base stations. Recently, software implementations of cellular base stations have been emerging, as they allow colocation of access point functionalities and those of MEC within the same commodity hardware, e.g., using containers, thus facilitating deployment of MEC without incurring much cost. In this paper, we posit that one of the most significant challenges for realizing softwarized base stations with MEC capability is to enable Resource Isolation among slices, especially isolating the low-latency slice, since the primary concern of MEC is to enable low-latency applications. Our contributions are threefold. First, we define the architecture of MEC infrastructure in a softwarized cellular network. Second, we measure the actual latency and throughput of on-site MEC in a softwarized cellular network. Finally, we propose a novel slicing method for softwarized base stations that isolates a low-latency slice from a broadband one. Our evaluation shows that the proposed method enables reasonable Resource Isolation, achieving the same minimal latency with a competing broadband slice as without any other slice.

  • Network-Resource Isolation for Virtualization Nodes
    2012 IEEE Symposium on Computers and Communications (ISCC), 2012
    Co-Authors: Yasusi Kanada, Kei Shiraishi, Akihiro Nakao
    Abstract:

    One key requirement for achieving network virtualization is Resource Isolation among slices (virtual networks), that is, avoiding interference between slices in their use of Resources. This paper proposes two methods, per-slice shaping and per-link policing, for network-Resource Isolation (NRI) in terms of bandwidth and delay. These methods use traffic shaping and traffic policing, which are widely used traffic control methods for guaranteeing QoS. Per-slice shaping utilizes weighted fair queuing (WFQ), usually applied to a fine-grained flow such as a flow from a specific server application to a user. Since WFQ for fine-grained flows requires many queues, it may not scale to a large number of slices with a large number of virtual nodes. Considering that the purpose of NRI is not to thoroughly guarantee QoS but to avoid interference between slices, we believe per-slice shaping suffices for our objective. In contrast, per-link policing applies traffic policing per virtual link. It requires fewer Resources and achieves less strict Isolation among hundreds of slices. Our results show that both methods perform NRI well, but the former performs better in terms of delay. Accordingly, per-slice shaping is effective for delay-sensitive services, while per-link policing may be sufficient for other types of services.

  • Network-Resource Isolation for Virtualization Nodes
    2012 Fourth International Conference on Communication Systems and Networks (COMSNETS 2012), 2012
    Co-Authors: Yasusi Kanada, Kei Shiraishi, Akihiro Nakao
    Abstract:

    Two methods for network-Resource Isolation (NRI) in virtualization networks are proposed in this paper: per-slice shaping and per-link policing. The former enables NRI with 80-90% fewer queues than per-link shaping. The latter enables less strict Isolation among tens or hundreds of slices using only one queue. Evaluations of these methods show that the former performs slightly better in terms of delay and packet-drop ratio. Accordingly, the former, with or without policing, is effective for delay-sensitive services, while the latter may be sufficient for other types of services.
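
Per-link policing, as described in the NRI entries above, bounds each virtual link's bandwidth by dropping non-conforming packets instead of keeping a queue per slice. The token-bucket policer below is a generic sketch of that idea with assumed rate and burst values; it is not the authors' node implementation.

```python
# Sketch of a per-virtual-link token-bucket policer: packets that exceed the
# link's configured rate are dropped rather than queued, which bounds the
# bandwidth one slice can take without needing a queue per slice.
import time

class LinkPolicer:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0       # refill rate in bytes per second
        self.burst = burst_bytes         # bucket depth in bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_len: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True                  # conforming: forward the packet
        return False                     # exceeds the virtual link's share: drop

if __name__ == "__main__":
    policer = LinkPolicer(rate_bps=10_000_000, burst_bytes=15_000)  # assumed 10 Mbit/s link
    accepted = sum(policer.allow(1500) for _ in range(100))
    print(f"{accepted} of 100 back-to-back 1500-byte packets conformed")
```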
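
The CloudNet 2018 entry at the top of this list argues that a low-latency slice must be isolated from a broadband slice in a softwarized base station. The strict-priority two-queue downlink scheduler below is only a generic illustration of that Isolation goal, under the assumption that low-latency traffic is always served first; it is not the slicing method proposed in the paper.

```python
# Generic sketch: strict-priority downlink scheduling between a low-latency
# slice and a broadband slice, so broadband load cannot delay latency-critical
# packets. This illustrates the isolation goal, not the paper's mechanism.
from collections import deque

class DownlinkScheduler:
    def __init__(self):
        self.low_latency = deque()   # e.g., MEC application traffic
        self.broadband = deque()     # bulk traffic

    def enqueue(self, packet, slice_name: str) -> None:
        queue = self.low_latency if slice_name == "low_latency" else self.broadband
        queue.append(packet)

    def dequeue(self):
        # The low-latency slice is always served first; broadband only gets
        # whatever transmission opportunities are left over.
        if self.low_latency:
            return self.low_latency.popleft()
        if self.broadband:
            return self.broadband.popleft()
        return None

if __name__ == "__main__":
    sched = DownlinkScheduler()
    for i in range(3):
        sched.enqueue(f"bulk-{i}", "broadband")
    sched.enqueue("urgent-0", "low_latency")
    print([sched.dequeue() for _ in range(4)])  # urgent-0 comes out first
```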

Chao Zhang - One of the best experts on this subject based on the ideXlab platform.

  • VGRIS: Virtualized GPU Resource Isolation and Scheduling in Cloud Gaming
    ACM Transactions on Architecture and Code Optimization, 2014
    Co-Authors: Jianguo Yao, Chao Zhang, Zhizhou Yang, Haibing Guan
    Abstract:

    To achieve efficient Resource management on a graphics processing unit (GPU), there is a demand for a framework that schedules virtualized Resources in cloud gaming. In this article, we propose VGRIS, a Resource management framework for virtualized GPU Resource Isolation and scheduling in cloud gaming. A set of application programming interfaces (APIs) is provided so that a variety of scheduling algorithms can be implemented within the framework without modifying the framework itself. Three scheduling algorithms are implemented through these APIs within VGRIS. Experimental results show that VGRIS can effectively schedule GPU Resources among various workloads.

  • VGRIS: Virtualized GPU Resource Isolation and Scheduling in Cloud Gaming
    Proceedings of the 22nd International Symposium on High-Performance Parallel and Distributed Computing (HPDC), 2013
    Co-Authors: Chao Zhang, Jianguo Yao, Yin Wang, Haibing Guan
    Abstract:

    Fueled by the maturity of virtualization technology for the Graphics Processing Unit (GPU), an increasing number of data centers are dedicated to GPU-related computation tasks in cloud gaming. However, GPU Resource sharing in these applications is usually poor, because typical cloud gaming service providers often allocate one GPU exclusively to a single game. Efficient management of computational Resources therefore calls for multi-task scheduling technologies that improve GPU utilization in the cloud. In this paper, we propose VGRIS, a Resource management framework for Virtualized GPU Resource Isolation and Scheduling in cloud gaming. By leveraging the mature GPU paravirtualization architecture, VGRIS resides in the host through library API interception, while the guest OS and the GPU computing applications remain unmodified. Within the proposed framework, we implemented three scheduling algorithms for different objectives, i.e., Service Level Agreement (SLA)-aware scheduling, proportional-share scheduling, and a hybrid scheduling that mixes the former two. Such a scheduling framework makes it possible to handle different kinds of GPU computation tasks for different purposes in cloud gaming. Our experimental results show that each scheduling algorithm can achieve its goals under various workloads.
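
One of the three VGRIS policies listed above is SLA-aware scheduling, whose intent is to give each game just enough GPU time to meet its target frame rate so that slack can go to other VMs. The frame-pacing loop below is a minimal sketch of that intent with an assumed 30 FPS target and a placeholder render call; it does not reproduce the VGRIS implementation.

```python
# Sketch of SLA-aware pacing: after each frame, yield the GPU for the rest of
# the frame budget so a game that already meets its target FPS does not hog
# the device. TARGET_FPS and render_frame() are placeholders for illustration.
import time

TARGET_FPS = 30
FRAME_BUDGET = 1.0 / TARGET_FPS     # seconds per frame allowed by the SLA

def render_frame() -> None:
    time.sleep(0.01)                # stand-in for the real GPU work of one frame

def run_paced(frames: int) -> None:
    for _ in range(frames):
        start = time.monotonic()
        render_frame()
        spent = time.monotonic() - start
        slack = FRAME_BUDGET - spent
        if slack > 0:
            time.sleep(slack)       # release the GPU; other VMs can use the slack

if __name__ == "__main__":
    t0 = time.monotonic()
    run_paced(30)
    print(f"30 frames in {time.monotonic() - t0:.2f}s (about 1s at the 30 FPS target)")
```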

Sanjay Chaudhary - One of the best experts on this subject based on the ideXlab platform.

  • Performance Isolation and Scheduler Behavior
    2010 First International Conference On Parallel Distributed and Grid Computing (PDGC 2010), 2010
    Co-Authors: Gaurav Somani, Sanjay Chaudhary
    Abstract:

    Performance Isolation is desirable in virtual machine based infrastructures to meet Service Level Objectives (SLOs). Under performance Isolation, ideally, no virtual machine should affect the performance of other co-hosted virtual machines. The virtual machine scheduler is key to allocating Resources among virtual machines. This draws attention to scheduling, as fairness and Resource Isolation are the key requirements for which users virtualize their servers. I/O models are the main bottlenecks in sharing Resources among virtual machines. This work evaluates the performance Isolation achieved by the Xen hypervisor in different scheduler configurations with different kinds of Resource-intensive applications. Experimental results show that Isolation is critical when I/O-intensive applications run alongside CPU-intensive applications.

  • Application Performance Isolation in Virtualization
    2009 IEEE International Conference on Cloud Computing, 2009
    Co-Authors: Gaurav Somani, Sanjay Chaudhary
    Abstract:

    Modern data centers use virtual machine based implementations for numerous advantages such as Resource Isolation, hardware utilization, security, and easy management. Applications are generally hosted on different virtual machines on the same physical machine. A virtual machine monitor such as Xen is a popular tool for managing virtual machines by scheduling their use of Resources such as CPU, memory, and network. Performance Isolation is desirable in virtual machine based infrastructure to meet Service Level Objectives. Many experiments in this area measure application performance while running the applications in different domains, which gives insight into the problem of Isolation. In this paper, we run different kinds of benchmarks simultaneously in a Xen environment to evaluate the Isolation strategy provided by Xen. Results are presented and discussed for different combinations, including a case of I/O-intensive applications with low response latency.
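
Both entries above study how Xen's scheduler settings affect Isolation between co-hosted domains. The snippet below shows the kind of credit-scheduler weight and cap adjustment such experiments typically vary, using Xen's standard xl sched-credit command (the older xm sched-credit took the same options on the Xen versions contemporary with these papers); the domain names and values are placeholders.

```python
# Sketch: set Xen credit-scheduler weight and cap for two co-hosted domains,
# the kind of configuration varied when measuring performance isolation.
# Domain names and values are placeholders; requires a Xen host with the
# xl toolstack (older hosts used `xm sched-credit` with the same options).
import subprocess

def set_credit(domain: str, weight: int, cap: int) -> None:
    # weight: relative CPU share; cap: hard limit in % of one CPU (0 = no cap)
    subprocess.run(
        ["xl", "sched-credit", "-d", domain, "-w", str(weight), "-c", str(cap)],
        check=True,
    )

if __name__ == "__main__":
    set_credit("cpu-intensive-vm", weight=256, cap=50)   # capped CPU-heavy domain
    set_credit("io-intensive-vm", weight=512, cap=0)     # favored, uncapped domain
```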
