Virtualized Environment

The Experts below are selected from a list of 118,668 Experts worldwide, ranked by the ideXlab platform

Hai Jin - One of the best experts on this subject based on the ideXlab platform.

  • Spatial Locality Aware Disk Scheduling in Virtualized Environment
    IEEE Transactions on Parallel and Distributed Systems, 2015
    Co-Authors: Xiao Ling, Shadi Ibrahim, Hai Jin
    Abstract:

    Exploiting spatial locality, a key technique for improving disk I/O utilization and performance, faces additional challenges in the virtualized cloud because of the transparency feature of virtualization. This paper contributes a novel disk I/O scheduling framework, named Pregather, which improves disk I/O efficiency by exposing and exploiting the special spatial locality of the virtualized environment, thereby improving the performance of disk-intensive applications without compromising the transparency of virtualization. The key idea behind Pregather is an intelligent model that predicts the access regularity of spatial locality for each VM. Moreover, Pregather embraces an adaptive time-slice allocation scheme to further reduce resource contention and ensure fairness among VMs. We implement the Pregather disk scheduling framework and perform extensive experiments involving multiple simultaneous applications, both synthetic benchmarks and MapReduce applications, on Xen-based platforms. Our experiments demonstrate the accuracy of the prediction model and indicate that Pregather yields high disk spatial locality and a significant improvement in disk throughput and application performance.
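
    As an illustration of the idea (not the authors' Pregather implementation), the hypothetical sketch below keeps a short history of block offsets per VM, predicts a hot region from that history, and serves queued requests that fall inside the current VM's predicted region before the rest; all names and thresholds are invented for the example.

        from collections import defaultdict, deque

        class RegionPredictor:
            """Predict a per-VM hot region from recently observed block offsets."""
            def __init__(self, history=64, spread=2.0):
                self.history = defaultdict(lambda: deque(maxlen=history))
                self.spread = spread  # how many std deviations count as "local"

            def record(self, vm, offset):
                self.history[vm].append(offset)

            def hot_region(self, vm):
                offs = self.history[vm]
                if not offs:
                    return None
                mean = sum(offs) / len(offs)
                dev = (sum((o - mean) ** 2 for o in offs) / len(offs)) ** 0.5
                return (mean - self.spread * dev, mean + self.spread * dev)

        def dispatch(queue, vm, predictor):
            """Serve requests inside the VM's predicted region first (sorted by
            offset), then the rest in arrival order."""
            region = predictor.hot_region(vm)
            if region is None:
                return list(queue)
            lo, hi = region
            local = sorted((r for r in queue if lo <= r["offset"] <= hi),
                           key=lambda r: r["offset"])
            rest = [r for r in queue if not (lo <= r["offset"] <= hi)]
            return local + rest

        # Requests are plain dicts, e.g. {"vm": "vm1", "offset": 4096}.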

  • A Real-Time Scheduling Framework Based on Multi-core Dynamic Partitioning in Virtualized Environment
    2014
    Co-Authors: Like Zhou, Hai Jin, Xuanhua Shi
    Abstract:

    With the prevalence of virtualization and cloud computing, many real-time applications run in virtualized cloud environments. However, their performance cannot be guaranteed, because current hypervisors' CPU schedulers aim to share CPU resources fairly and improve system throughput; they do not consider the real-time constraints of these applications, which results in frequent deadline misses. In this paper, we present a real-time scheduling framework for the virtualized environment. In the framework, we propose a mechanism called multi-core dynamic partitioning that divides physical CPUs (PCPUs) into two pools dynamically according to the scheduling parameters of real-time virtual machines (RT-VMs). We apply different schedulers to these pools to schedule RT-VMs and non-RT-VMs respectively. In addition, we design a global earliest deadline first (vGEDF) scheduler to schedule RT-VMs. We implement a prototype in the Xen hypervisor and conduct experiments to verify its effectiveness.
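
    As a simplified illustration of how such a framework could size the real-time pool and pick the next RT-VM (hypothetical Python, not the paper's Xen implementation): the pool size here is the ceiling of the RT-VMs' total utilization (sum of budget/period), and the scheduler picks the ready RT-VM with the earliest absolute deadline.

        import math

        def size_rt_pool(rt_vms, total_pcpus):
            """PCPUs reserved for RT-VMs: ceiling of their total utilization,
            leaving at least one PCPU for non-RT VMs."""
            if not rt_vms:
                return 0
            util = sum(vm["budget"] / vm["period"] for vm in rt_vms)
            return min(max(1, math.ceil(util)), total_pcpus - 1)

        def pick_edf(ready_rt_vms):
            """Global EDF: run the ready RT-VM with the earliest absolute deadline
            (release time plus period)."""
            if not ready_rt_vms:
                return None
            return min(ready_rt_vms, key=lambda vm: vm["release"] + vm["period"])

        # Example RT-VM descriptors (budget and period in milliseconds).
        rt_vms = [
            {"name": "rtvm1", "budget": 10, "period": 50, "release": 0},
            {"name": "rtvm2", "budget": 5,  "period": 20, "release": 0},
        ]
        print(size_rt_pool(rt_vms, total_pcpus=8))  # -> 1
        print(pick_edf(rt_vms)["name"])             # -> rtvm2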

  • A Performance Optimization Mechanism for SSD in Virtualized Environment
    2012
    Co-Authors: Xiaofei Liao, Hai Jin
    Abstract:

    Applications in the cloud computing era have an urgent requirement for fast I/O support. Compared with a hard disk drive, a solid state disk (SSD) offers low latency, low energy consumption, high throughput, and other advantages. However, the semantics of an SSD cannot be recognized by current virtual machine monitors: the trim instruction, which plays an important role in the space management of an SSD, cannot be passed to the underlying SSD device in a virtualized environment. How to bridge this semantic gap between the application layer and the virtualization layer for SSD devices is therefore an important problem, and in this paper we propose Vtrim to solve it. Vtrim monitors the operations in the virtual machine and immediately sends the SSD semantics to Domain 0, where they are translated into Domain 0 operations that trigger the SSD's native instructions. To improve write performance with multiple guest operating systems, we add a Vtrim cache that buffers all instructions from the guests and flushes them to the SSD in a well-scheduled way. Experimental results in para-virtualized environments with Vtrim show that random write performance is improved by up to 100% and the average response time is reduced by up to 40%.
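
    To illustrate the buffering idea, the hypothetical sketch below collects trim (discard) ranges reported by guests, merges adjacent or overlapping ranges, and flushes them in batches; how the merged ranges are handed to the real device is left as a stub, since that step is specific to the authors' Domain 0 mechanism.

        class TrimCache:
            """Buffer and merge per-guest trim ranges, then flush in batches."""
            def __init__(self, flush_threshold=64):
                self.ranges = []                 # list of (start, length) in sectors
                self.flush_threshold = flush_threshold

            def add(self, start, length):
                self.ranges.append((start, length))
                if len(self.ranges) >= self.flush_threshold:
                    self.flush()

            def _merged(self):
                merged = []
                for start, length in sorted(self.ranges):
                    if merged and start <= merged[-1][0] + merged[-1][1]:
                        prev_start, prev_len = merged[-1]
                        merged[-1] = (prev_start,
                                      max(prev_len, start + length - prev_start))
                    else:
                        merged.append((start, length))
                return merged

            def flush(self):
                for start, length in self._merged():
                    # Stub: hand the merged range to the privileged domain /
                    # block backend, which issues the real discard to the SSD.
                    print(f"discard start={start} len={length}")
                self.ranges.clear()

        cache = TrimCache(flush_threshold=3)
        cache.add(0, 8); cache.add(8, 8); cache.add(100, 4)   # triggers one flush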

  • Adaptive Disk I/O Scheduling for MapReduce in Virtualized Environment
    International Conference on Parallel Processing, 2011
    Co-Authors: Shadi Ibrahim, Hai Jin
    Abstract:

    Virtual machine (VM) interference has long been a challenging problem for performance predictability and system throughput in large-scale virtualized environments in the cloud. Such interference is caused by intertwined factors, including the application's type, the number of concurrent VMs, and the VM scheduling algorithms used within the host. Since MapReduce has become an important data processing platform in the cloud, we investigate the impact of disk schedulers in Hadoop. Interestingly, our experimental results report a noticeable variation of Hadoop performance across applications when different pairs of disk schedulers are applied in the hypervisor and the virtual machines. Furthermore, a typical Hadoop application consists of different interleaving stages, each requiring different I/O workloads and patterns. As a result, a fixed pair of disk schedulers is not only sub-optimal across different MapReduce applications, but also sub-optimal across different sub-phases of the same job. Accordingly, this paper presents a novel approach for adaptively tuning the disk schedulers in both the hypervisor and the virtual machines during the execution of a single MapReduce job. Our results show that MapReduce performance can be significantly improved; specifically, adaptive tuning of the disk scheduler pair achieves a 25% performance improvement on a sort benchmark with Hadoop.
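
    On Linux hosts and guests, the block-layer scheduler of a device can be switched at runtime through sysfs, which is one plausible mechanism for this kind of adaptive tuning. The sketch below switches the scheduler and applies a phase-to-scheduler mapping; the mapping is purely illustrative (not the paper's policy), and the scheduler names must be ones the running kernel actually offers (listed in /sys/block/<dev>/queue/scheduler).

        def set_io_scheduler(device, scheduler):
            """Switch the I/O scheduler of a block device (requires root)."""
            path = f"/sys/block/{device}/queue/scheduler"
            with open(path, "w") as f:
                f.write(scheduler)

        # Hypothetical phase-to-scheduler policy for a MapReduce job.
        PHASE_POLICY = {
            "map":     "deadline",   # mostly sequential reads of input splits
            "shuffle": "cfq",        # many competing small reads and writes
            "reduce":  "deadline",   # large sequential writes of output
        }

        def on_phase_change(device, phase):
            scheduler = PHASE_POLICY.get(phase)
            if scheduler:
                set_io_scheduler(device, scheduler)

        # Example: on_phase_change("sda", "shuffle")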

Flavio Elias Gomes De Deus - One of the best experts on this subject based on the ideXlab platform.

  • A proposal to provide automated information technology infrastructure with integrated service catalog
    2011
    Co-Authors: Osmar Ribeiro Torres, Flavio Elias Gomes De Deus, Robson De Oliveira Albuquerque
    Abstract:

    The main objective of this article is to create a service catalog that makes it possible to automate service availability in an IT (Information Technology) infrastructure, aligning server virtualization concepts with infrastructure management tools. This work emerged from an analysis of the problems regarding service availability in the IT infrastructure. The article demonstrates that the use of a virtualized environment, with a standard service catalog and specific infrastructure management tools, provides time savings, reducing the time to fulfil a request for a new server from several days to a few hours.

  • ATC - Virtualization with automated services catalog for providing integrated information technology infrastructure
    Lecture Notes in Computer Science, 2011
    Co-Authors: Robson De Oliveira Albuquerque, Osmar Ribeiro Torres, Luis Javier García Villalba, Flavio Elias Gomes De Deus
    Abstract:

    This paper proposes a service catalog integrated with virtualized systems, aiming to raise and automate service availability in an IT (Information Technology) infrastructure. The paper demonstrates that, by aligning server virtualization concepts with infrastructure management tools, it is possible to achieve gains in time and cost compared to systems without an automated service catalog. The main results illustrate that the use of a virtualized environment, with a standard service catalog and specific infrastructure management tools, provides time savings, reducing the time to fulfil a request for a new server from several days to a few hours.

Saneyasu Yamaguchi - One of the best experts on this subject based on the ideXlab platform.

  • Dynamic Memory Allocation in Virtual Machines Based on Cache Hit Ratio
    International Symposium on Computing and Networking, 2015
    Co-Authors: Masaki Sakamoto, Saneyasu Yamaguchi
    Abstract:

    In a virtualized environment, several virtual machines run on one physical computer. Many virtualization systems have a ballooning function with which the memory allocation of a virtual machine can be changed dynamically without restarting it, so the performance of applications in a virtual machine can be expected to improve through dynamic optimization of the virtual machine's memory size. Xen also has a ballooning function, called xenballoon; however, it takes into account only the memory consumed by processes and does not consider the page cache size, so no I/O performance improvement can be expected from it. In this paper, we focus on the Xen virtualized environment and read-only applications, and discuss a method for improving I/O performance by dynamically optimizing the memory allocation of virtual machines. First, we investigate the relation among virtual machine memory size, page cache hit ratio in the guest OS, and I/O performance, and show that giving memory to virtual machines with a high cache hit ratio is effective for improving I/O performance. Second, we propose a method for improving I/O performance based on cache hit ratio. Third, we evaluate the proposed method with the filesystem benchmark FFSB and demonstrate that it can improve I/O performance by dynamically tuning virtual machine memory sizes.
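
    On a Xen host, per-VM memory can be adjusted from the control domain with the xl toolstack command xl mem-set, which drives the guest's balloon driver. The sketch below rebalances a fixed memory budget toward guests that report a higher page cache hit ratio; how the hit ratios are collected from the guests is outside the sketch, and the proportional policy is an illustration, not the authors' algorithm.

        import subprocess

        def rebalance(hit_ratios, total_mib, floor_mib=512):
            """Give each domain a floor allocation plus a share of the remaining
            budget proportional to its reported page cache hit ratio."""
            spare = total_mib - floor_mib * len(hit_ratios)
            total = sum(hit_ratios.values()) or 1.0
            for domain, ratio in hit_ratios.items():
                target_mib = floor_mib + int(spare * ratio / total)
                # 'xl mem-set <domain> <size>m' asks the balloon driver to resize.
                subprocess.run(["xl", "mem-set", domain, f"{target_mib}m"],
                               check=True)

        # Example: guest-reported hit ratios (collection mechanism not shown).
        rebalance({"vm1": 0.95, "vm2": 0.40}, total_mib=8192)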

  • SRDS Workshops - Energy Efficient Storage Management Cooperated with Data Intensive Applications in Virtual Machines
    2014 IEEE 33rd International Symposium on Reliable Distributed Systems Workshops, 2014
    Co-Authors: Shunsuke Yagai, Saneyasu Yamaguchi
    Abstract:

    In data centers, a huge number of computers are running, and they consume enormous amounts of energy. To address this issue, an energy-efficient storage management method that cooperates with data-intensive applications was previously proposed: with this method, data and storage devices are managed with application support, and the power consumption of storage devices is significantly decreased. However, that work does not take virtualized environments into account. Recently, many data-intensive applications run in virtualized environments such as cloud computing environments, so we consider power-efficient storage management in virtualized environments an important complement to that work. In this paper, we focus on a virtualized environment in which multiple virtual machines run on one physical computer and a data-intensive application runs on each virtual machine. We apply the storage management method to this environment and evaluate performance and power consumption. We then propose two storage placement methods for the virtualized environment, symmetric placement and asymmetric placement. Our experiments demonstrate that the proposed methods can create HDD access intervals long enough to save storage power, and that the asymmetric method achieves better performance than the symmetric method.
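
    The difference between the two placements can be illustrated with a toy model (hypothetical code, not the authors' evaluation): under symmetric placement every VM's data is spread over all disks, so any VM activity keeps every disk busy, while under asymmetric placement each VM's data is confined to one disk, so a disk is idle whenever its VMs are idle and can be spun down if the idle gap is long enough.

        def busy_disks(placement, active_vms):
            """Return the set of disks that must stay spinning given the active VMs.
            placement maps each VM to the list of disks holding its data."""
            return {disk for vm in active_vms for disk in placement[vm]}

        symmetric  = {"vm1": ["hdd0", "hdd1"], "vm2": ["hdd0", "hdd1"]}
        asymmetric = {"vm1": ["hdd0"],         "vm2": ["hdd1"]}

        # Only vm1 is doing I/O in this interval:
        print(busy_disks(symmetric,  {"vm1"}))   # {'hdd0', 'hdd1'} -> no disk can sleep
        print(busy_disks(asymmetric, {"vm1"}))   # {'hdd0'}         -> hdd1 may spin down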

  • Filesystem Layout Reorganization in Virtualized Environment
    Autonomic and Trusted Computing, 2012
    Co-Authors: Masaya Yamada, Saneyasu Yamaguchi
    Abstract:

    Reorganization of the on-disk data layout is a well-known technique for improving I/O performance, and many such methods have been studied; they improve I/O performance by reducing seek distances on the storage device. In a virtualized environment, a storage device holds several huge virtual machine image files, so I/O performance is severely degraded by the many long-distance seeks among these files. Accordingly, I/O performance can be expected to improve significantly by reorganizing the data layout in this environment. However, the existing reorganization methods were published before virtualized environments emerged and take no account of them, so they cannot substantially improve I/O performance in a virtualized environment. In this paper, we present a performance evaluation of the existing reorganization methods in a virtualized environment and then propose a reorganization method suited to it. Finally, we demonstrate experimentally that the proposed method outperforms the existing methods.
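
    As a small illustration of why layout matters here (a toy model, not the paper's method), the sketch below estimates the total seek distance of an access trace for a given placement of VM image files on a linear address space, so alternative layouts, for example co-locating images that are accessed in alternation, can be compared.

        def layout_offsets(order, sizes):
            """Place image files back to back in the given order; return start offsets."""
            offsets, pos = {}, 0
            for name in order:
                offsets[name] = pos
                pos += sizes[name]
            return offsets

        def seek_cost(trace, offsets):
            """Sum of head movement between consecutive accesses.
            trace is a list of (image_name, offset_within_image)."""
            cost, prev = 0, None
            for name, off in trace:
                pos = offsets[name] + off
                if prev is not None:
                    cost += abs(pos - prev)
                prev = pos
            return cost

        sizes = {"vm1.img": 10_000, "vm2.img": 10_000, "vm3.img": 10_000}
        trace = [("vm1.img", 0), ("vm3.img", 0)] * 100   # vm1 and vm3 alternate

        bad  = layout_offsets(["vm1.img", "vm2.img", "vm3.img"], sizes)
        good = layout_offsets(["vm1.img", "vm3.img", "vm2.img"], sizes)
        print(seek_cost(trace, bad), ">", seek_cost(trace, good))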

Robson De Oliveira Albuquerque - One of the best experts on this subject based on the ideXlab platform.

  • A proposal to provide automated information technology infrastructure with integrated service catalog
    2011
    Co-Authors: Osmar Ribeiro Torres, Flavio Elias Gomes De Deus, Robson De Oliveira Albuquerque
    Abstract:

    The main objective of this article is to create a service catalog that makes it possible to automate service availability in an IT (Information Technology) infrastructure, aligning server virtualization concepts with infrastructure management tools. This work emerged from an analysis of the problems regarding service availability in the IT infrastructure. The article demonstrates that the use of a virtualized environment, with a standard service catalog and specific infrastructure management tools, provides time savings, reducing the time to fulfil a request for a new server from several days to a few hours.

  • ATC - Virtualization with automated services catalog for providing integrated information technology infrastructure
    Lecture Notes in Computer Science, 2011
    Co-Authors: Robson De Oliveira Albuquerque, Osmar Ribeiro Torres, Luis Javier García Villalba, Flavio Elias Gomes De Deus
    Abstract:

    This paper proposes a service catalog integrated with virtualized systems, aiming to raise and automate service availability in an IT (Information Technology) infrastructure. The paper demonstrates that, by aligning server virtualization concepts with infrastructure management tools, it is possible to achieve gains in time and cost compared to systems without an automated service catalog. The main results illustrate that the use of a virtualized environment, with a standard service catalog and specific infrastructure management tools, provides time savings, reducing the time to fulfil a request for a new server from several days to a few hours.

Luís Henrique M. K. Costa - One of the best experts on this subject based on the ideXlab platform.

  • Vulnerabilities and solutions for isolation in FlowVisor-based virtual network Environments
    Journal of Internet Services and Applications, 2015
    Co-Authors: Victor T. Costa, Luís Henrique M. K. Costa
    Abstract:

    In a virtualized environment, different virtual networks can operate over the same physical infrastructure. Each virtual network has its own protocols and shares the available resources, which highlights the need for resource isolation mechanisms. Investigating the isolation mechanisms provided by FlowVisor, we have discovered previously unknown vulnerabilities regarding addressing-space isolation. We show that, in the presence of a malicious controller, FlowVisor's isolation can be broken, allowing different attacks. This paper addresses these vulnerabilities by proposing an Action Slicing mechanism that allows FlowVisor to limit which actions can be used by each virtual network controller, thus extending the virtual network definition. Our experimental results show that the proposed Action Slicing mechanism can effectively neutralize the discovered vulnerabilities.
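
    The core of the idea can be illustrated with a simple policy check (hypothetical code, not FlowVisor's actual API): each slice is given a whitelist of permitted OpenFlow action types, and any flow entry pushed by that slice's controller is rejected if it contains an action outside the whitelist, for example a header rewrite that would move traffic into another slice's address space.

        # Per-slice whitelist of permitted OpenFlow action types (illustrative).
        SLICE_ACTIONS = {
            "slice_a": {"OUTPUT", "SET_VLAN_VID"},
            "slice_b": {"OUTPUT"},
        }

        class ActionNotAllowed(Exception):
            pass

        def check_flow_mod(slice_name, actions):
            """Reject a flow-mod whose actions fall outside the slice's whitelist."""
            allowed = SLICE_ACTIONS.get(slice_name, set())
            for action in actions:
                if action["type"] not in allowed:
                    raise ActionNotAllowed(
                        f"{slice_name} may not use action {action['type']}")
            return True

        check_flow_mod("slice_a", [{"type": "OUTPUT", "port": 1}])   # accepted
        try:
            check_flow_mod("slice_b",
                           [{"type": "SET_DL_DST", "mac": "aa:bb:cc:dd:ee:ff"}])
        except ActionNotAllowed as err:
            print("rejected:", err)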