Virtualized Server

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 2,973 experts worldwide, ranked by the ideXlab platform

Xiaobo Zhou - One of the best experts on this subject based on the ideXlab platform.

  • Coordinated Power and Performance Guarantee with Fuzzy MIMO Control in Virtualized Server Clusters
    IEEE Transactions on Computers, 2015
    Co-Authors: Palden Lama, Xiaobo Zhou
    Abstract:

    It is important but challenging to assure the performance of multi-tier Internet applications under the power consumption cap of virtualized server clusters, mainly due to the system complexity of shared infrastructure and the dynamic and bursty nature of workloads. This paper presents PERFUME, a system that simultaneously guarantees power and performance targets with flexible tradeoffs and service differentiation among co-hosted applications while assuring control accuracy and system stability. Based on the proposed fuzzy MIMO control technique, it effectively controls both the throughput and the percentile-based response time of multi-tier applications due to its novel self-adaptive fuzzy modeling, which integrates the strengths of fuzzy logic, MIMO control and artificial neural networks. Furthermore, we address an important challenge of proactively avoiding violations of power and performance targets in anticipation of future workload changes. We implement PERFUME in a testbed of virtualized blade servers hosting multi-tier RUBiS applications. Performance evaluation based on synthetic and real-world Web workloads demonstrates its control accuracy, flexibility in selecting tradeoffs between conflicting targets, service differentiation capability and robustness against highly dynamic and bursty workloads. It outperforms a representative utility-based approach in guaranteeing the system throughput, percentile-based response time and power budget.
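The MIMO control idea is easiest to see in a stripped-down form. The sketch below is a fixed-gain linear MIMO step, not PERFUME's self-adaptive fuzzy controller: it maps the errors in two tracked outputs (throughput, percentile response time) to adjustments of two actuators (CPU cap, power cap). The gain matrix and all values are illustrative assumptions.

```python
def mimo_control_step(targets, measured, actuators, K):
    """One step of a linear MIMO control law: each actuator is adjusted
    by a weighted sum of the errors in all tracked outputs.  (PERFUME's
    real controller is a self-adaptive fuzzy MIMO model; the fixed gain
    matrix K here is only an illustration.)"""
    errors = [t - m for t, m in zip(targets, measured)]
    return [a + sum(k * e for k, e in zip(row, errors))
            for a, row in zip(actuators, K)]

# Hypothetical gains: rows = actuators (CPU cap %, power cap W),
# columns = outputs (throughput req/s, 95th-percentile response time s).
K = [[0.5, -0.3],
     [0.2, -0.4]]

# Throughput below target and response time above target -> raise both caps.
new_caps = mimo_control_step(targets=[100.0, 0.5],
                             measured=[90.0, 0.8],
                             actuators=[50.0, 200.0], K=K)
```

Because a single step couples both errors to both actuators, the controller can trade power against performance in one computation rather than running two independent loops.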

  • Automated and Agile Server Parameter Tuning by Coordinated Learning and Control
    IEEE Transactions on Parallel and Distributed Systems, 2014
    Co-Authors: Palden Lama, Changjun Jiang, Xiaobo Zhou
    Abstract:

    Automated server parameter tuning is crucial to the performance and availability of Internet applications hosted in cloud environments. It is challenging due to the high dynamics and burstiness of workloads, multi-tier service architecture, and virtualized server infrastructure. In this paper, we investigate automated and agile server parameter tuning for maximizing the effective throughput of multi-tier Internet applications. A recent study proposed a reinforcement learning based server parameter tuning approach for minimizing the average response time of multi-tier applications. Reinforcement learning is a decision-making process that determines the parameter tuning direction by trial-and-error rather than computing quantitative values for agile parameter tuning. It relies on a predefined adjustment value for each tuning action; however, it is nontrivial or even infeasible to find an optimal value under highly dynamic and bursty workloads. We design a neural fuzzy control based approach that combines the strengths of fast online learning and the self-adaptiveness of neural networks and fuzzy control. Due to its model independence, it is robust to highly dynamic and bursty workloads, and it is agile in server parameter tuning due to its quantitative control outputs. We implemented the new approach on a virtualized data center testbed hosting RUBiS and WikiBench benchmark applications. Experimental results demonstrate that the new approach significantly outperforms the reinforcement learning based approach in both improving effective system throughput and minimizing average response time.
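The distinction the abstract draws — a trial-and-error direction with a predefined fixed step versus a quantitative control output — can be sketched as follows. Both functions are illustrative stand-ins for tuning a hypothetical MaxClients-style parameter; neither is the paper's actual RL agent or neural fuzzy controller.

```python
def rl_style_step(current, reward_up, reward_down, step=25):
    """Trial-and-error tuning: pick a direction from observed rewards and
    move by a predefined fixed step (the limitation the paper points out)."""
    return current + step if reward_up >= reward_down else current - step

def control_style_step(current, target_tput, measured_tput, gain=0.5):
    """Quantitative tuning: the adjustment magnitude itself is computed
    from the tracking error, so large errors produce large, agile moves.
    (The paper uses a neural fuzzy controller; this proportional law is
    only a stand-in to show the difference in kind.)"""
    return current + gain * (target_tput - measured_tput)

# Hypothetical tuning of a server's MaxClients parameter:
fixed = rl_style_step(current=150, reward_up=0.9, reward_down=0.4)           # 175
agile = control_style_step(current=150, target_tput=400, measured_tput=300)  # 200.0
```

The fixed-step tuner always moves by 25 regardless of how far off the system is; the error-driven tuner scales its move to the shortfall, which is what "agile" means in the abstract.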

  • Automated and Agile Server Parameter Tuning with Learning and Control
    International Parallel and Distributed Processing Symposium, 2012
    Co-Authors: Palden Lama, Xiaobo Zhou
    Abstract:

    Server parameter tuning in virtualized data centers is crucial to the performance and availability of hosted Internet applications. It is challenging due to the high dynamics and burstiness of workloads, multi-tier service architecture, and virtualized server infrastructure. In this paper, we investigate automated and agile server parameter tuning for maximizing the effective throughput of multi-tier Internet applications. A recent study proposed a reinforcement learning based server parameter tuning approach for minimizing the average response time of multi-tier applications. Reinforcement learning is a decision-making process that determines the parameter tuning direction by trial-and-error rather than computing quantitative values for agile parameter tuning. It relies on a predefined adjustment value for each tuning action; however, it is nontrivial or even infeasible to find an optimal value under highly dynamic and bursty workloads. We design a neural fuzzy control based approach that combines the strengths of fast online learning and the self-adaptiveness of neural networks and fuzzy control. Due to its model independence, it is robust to highly dynamic and bursty workloads. It is agile in server parameter tuning due to its quantitative control outputs. We implement the new approach on a testbed of virtualized HP ProLiant blade servers hosting RUBiS benchmark applications. Experimental results demonstrate that the new approach significantly outperforms the reinforcement learning based approach in both improving effective system throughput and minimizing average response time.

  • aMOSS: Automated Multi-Objective Server Provisioning with Stress-Strain Curving
    International Conference on Parallel Processing, 2011
    Co-Authors: Palden Lama, Xiaobo Zhou
    Abstract:

    A modern data center built upon virtualized server clusters for hosting Internet applications has multiple correlated and conflicting objectives. Utility-based approaches are often used for optimizing multiple objectives. However, it is difficult to define a local utility function that suitably represents one objective and to apply different weights to multiple local utility functions. Furthermore, choosing weights statically may not be effective in the face of highly dynamic workloads. In this paper, we propose aMOSS, an automated multi-objective server provisioning approach with stress-strain curving. First, we formulate a multi-objective optimization problem that minimizes the number of physical machines used, the average response time and the total number of virtual servers allocated for multi-tier applications. Second, we propose a novel stress-strain curving method to automatically select the most efficient solution from a Pareto-optimal set obtained by a nondominated sorting based optimization technique. Third, we enhance the method to reduce server switching cost and improve the utilization of physical machines. Simulation results demonstrate that, compared to utility-based approaches, aMOSS automatically achieves the most efficient tradeoff between performance and resource allocation efficiency. We implement aMOSS in a testbed of virtualized blade servers and demonstrate that it outperforms a representative dynamic server provisioning approach in achieving the average response time guarantee and in resource allocation efficiency for a multi-tier Internet service. aMOSS provides a unique perspective for tackling the challenging autonomic server provisioning problem.
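The nondominated-sorting step that produces the Pareto-optimal set can be sketched directly; the stress-strain selection step that aMOSS applies afterwards is not reproduced here. The objective vectors below — (physical machines, average response time, virtual servers), all minimized — are made-up candidates.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the nondominated solutions.  aMOSS then applies its
    stress-strain curving method to pick one point from this set."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical candidates: (physical machines, avg response time s, virtual servers)
candidates = [(4, 0.30, 10), (5, 0.20, 12), (6, 0.25, 14), (4, 0.35, 11)]
front = pareto_front(candidates)
```

Here (4, 0.35, 11) is dominated by (4, 0.30, 10), and (6, 0.25, 14) by (5, 0.20, 12), so only two tradeoff points survive — the set from which a single deployment must still be chosen, which is exactly the gap stress-strain curving fills.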

  • PERFUME: Power and Performance Guarantee with Fuzzy MIMO Control in Virtualized Servers
    International Workshop on Quality of Service, 2011
    Co-Authors: Palden Lama, Xiaobo Zhou
    Abstract:

    It is important but challenging to assure the performance of multi-tier Internet applications under the power consumption cap of virtualized server clusters, mainly due to the system complexity of shared infrastructure and the dynamic and bursty nature of workloads. This paper presents PERFUME, a system that simultaneously guarantees power and performance targets with flexible tradeoffs while assuring control accuracy and system stability. Based on the proposed fuzzy MIMO control technique, it accurately controls both the throughput and the percentile-based response time of multi-tier applications due to its novel fuzzy modeling, which integrates the strengths of fuzzy logic, MIMO control and artificial neural networks. It is self-adaptive to highly dynamic and bursty workloads due to online learning of control model parameters using a computationally efficient weighted recursive least-squares method. We implement PERFUME in a testbed of virtualized blade servers hosting two multi-tier RUBiS applications. Experimental results demonstrate its control accuracy, system stability, flexibility in selecting tradeoffs between conflicting targets and robustness against highly dynamic variation and burstiness in workloads. It outperforms a representative utility-based approach in guaranteeing the system throughput, percentile-based response time and power budget in the face of highly dynamic and bursty workloads.
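The weighted recursive least-squares update credited here for self-adaptiveness is a standard identification technique and can be shown compactly. The dimensions, forgetting factor, and toy regression target below are illustrative assumptions, not PERFUME's actual model structure.

```python
def wrls_update(theta, P, phi, y, lam=0.95):
    """One weighted recursive least-squares update of parameters theta
    with covariance P and forgetting factor lam: newer samples count
    more, so the model tracks drifting workloads online."""
    n = len(phi)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    K = [v / denom for v in Pphi]                       # gain vector
    err = y - sum(phi[i] * theta[i] for i in range(n))  # prediction error
    theta = [t + k * err for t, k in zip(theta, K)]
    P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

# Toy example: learn y = 2*x + 1 online from noise-free samples, phi = [x, 1].
theta, P = [0.0, 0.0], [[100.0, 0.0], [0.0, 100.0]]
for x in [1.0, 2.0, 3.0, 4.0]:
    theta, P = wrls_update(theta, P, phi=[x, 1.0], y=2 * x + 1)
```

After a handful of samples the estimate is already close to the true coefficients; the forgetting factor lam < 1 is what lets the same recursion re-fit itself when the underlying relationship shifts.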

Guofei Jiang - One of the best experts on this subject based on the ideXlab platform.

  • Power and performance management of virtualized computing environments via lookahead control
    Cluster Computing, 2009
    Co-Authors: Dara Marie Kusic, Nagarajan Kandasamy, James E Hanson, Jeffrey O Kephart, Guofei Jiang
    Abstract:

    There is growing incentive to reduce the power consumed by large-scale data centers that host online services such as banking, retail commerce, and gaming. Virtualization is a promising approach to consolidating multiple online services onto a smaller number of computing resources. A virtualized server environment allows computing resources to be shared among multiple performance-isolated platforms called virtual machines. By dynamically provisioning virtual machines, consolidating the workload, and turning servers on and off as needed, data center operators can maintain the desired quality-of-service (QoS) while achieving higher server utilization and energy efficiency. We implement and validate a dynamic resource provisioning framework for virtualized server environments wherein the provisioning problem is posed as one of sequential optimization under uncertainty and solved using a lookahead control scheme. The proposed approach accounts for the switching costs incurred while provisioning virtual machines and explicitly encodes the corresponding risk in the optimization problem. Experiments using the Trade6 enterprise application show that a server cluster managed by the controller conserves, on average, 22% of the power required by a system without dynamic control while still maintaining QoS goals. Finally, we use trace-based simulations to analyze controller performance on server clusters larger than our testbed, and show how concepts from approximation theory can be used to further reduce the computational burden of controlling large systems.
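The lookahead formulation — sequential optimization with explicit switching costs — can be sketched as a tiny receding-horizon planner. All constants, and the exhaustive enumeration itself, are illustrative assumptions; the paper's controller additionally handles forecast uncertainty and far larger state spaces.

```python
from itertools import product

POWER_PER_SERVER = 250.0   # watts per powered-on server (illustrative)
SWITCH_COST = 300.0        # penalty per server toggled on or off
SLA_PENALTY = 1000.0       # penalty per server-equivalent of unserved demand
CAPACITY = 100.0           # requests/s one server can handle

def lookahead_plan(current_on, forecast, max_servers=4):
    """Receding-horizon sketch of lookahead control: enumerate
    provisioning sequences over the forecast window, charge power,
    switching, and SLA-violation costs, and return only the first
    action (the controller replans every period)."""
    best_cost, best_first = float("inf"), current_on
    for seq in product(range(max_servers + 1), repeat=len(forecast)):
        cost, prev = 0.0, current_on
        for n_on, demand in zip(seq, forecast):
            cost += n_on * POWER_PER_SERVER
            cost += abs(n_on - prev) * SWITCH_COST
            cost += max(0.0, demand - n_on * CAPACITY) / CAPACITY * SLA_PENALTY
            prev = n_on
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

# Demand briefly dips then recovers: with steep switching costs the
# controller holds capacity through the dip rather than cycling servers.
action = lookahead_plan(current_on=3, forecast=[90.0, 260.0, 90.0])
```

Encoding the switching cost in the objective is what makes the controller risk-aware: a myopic policy would shut servers down during the dip and pay twice to bring them back.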

  • Power and Performance Management of Virtualized Computing Environments Via Lookahead Control
    2008 International Conference on Autonomic Computing, 2008
    Co-Authors: Dara Marie Kusic, Nagarajan Kandasamy, James E Hanson, Jeffrey O Kephart, Guofei Jiang
    Abstract:

    There is growing incentive to reduce the power consumed by large-scale data centers that host online services such as banking, retail commerce, and gaming. Virtualization is a promising approach to consolidating multiple online services onto a smaller number of computing resources. A virtualized server environment allows computing resources to be shared among multiple performance-isolated platforms called virtual machines. By dynamically provisioning virtual machines, consolidating the workload, and turning servers on and off as needed, data center operators can maintain the desired quality-of-service (QoS) while achieving higher server utilization and energy efficiency. We implement and validate a dynamic resource provisioning framework for virtualized server environments wherein the provisioning problem is posed as one of sequential optimization under uncertainty and solved using a lookahead control scheme. The proposed approach accounts for the switching costs incurred while provisioning virtual machines and explicitly encodes the corresponding risk in the optimization problem. Experiments using the Trade6 enterprise application show that a server cluster managed by the controller conserves, on average, 26% of the power required by a system without dynamic control while still maintaining QoS goals.

Chenyang Lu - One of the best experts on this subject based on the ideXlab platform.

  • CloudPowerCap: Integrating Power Budget and Resource Management Across a Virtualized Server Cluster
    arXiv: Distributed Parallel and Cluster Computing, 2014
    Co-Authors: Yong Fu, Anne Holler, Chenyang Lu
    Abstract:

    In many datacenters, server racks are highly underutilized. Rack slots are left empty to keep the sum of the servers' nameplate maximum power below the power provisioned to the rack, and the servers that are placed in the rack cannot make full use of the available rack power. The root cause of this rack underutilization is that server nameplate power is often much higher than can be reached in practice. To address rack underutilization, server vendors are shipping support for per-host power caps, which provide a server-enforced limit on the amount of power that the server can draw. Using this feature, datacenter operators can set power caps on the hosts in the rack to ensure that the sum of those caps does not exceed the rack's provisioned power. While this approach improves rack utilization, it burdens the operator with managing the rack power budget across the hosts, and it does not lend itself to flexible allocation of power to handle workload usage spikes or to respond to changes in the amount of powered-on server capacity in the rack. In this paper we present CloudPowerCap, a practical and scalable solution for power budget management in a virtualized environment. CloudPowerCap manages the power budget for a cluster of virtualized servers, dynamically adjusting the per-host power caps for hosts in the cluster. We show how CloudPowerCap can provide better use of power than per-host static settings, while respecting virtual machine resource entitlements and constraints.
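One way to picture dynamic per-host cap management is a budget redistribution rule: keep every host above a safe floor, then hand out the remaining rack budget in proportion to current demand. This is a sketch of the general idea only, not CloudPowerCap's actual algorithm; all numbers are hypothetical.

```python
def rebalance_power_caps(budget, demands, floor):
    """Split a rack power budget into per-host caps: every host gets a
    floor cap, and the spare budget is divided in proportion to each
    host's measured power demand.  (Illustrative rule, not the paper's;
    the real system also honors VM entitlements and constraints.)"""
    n = len(demands)
    assert budget >= n * floor, "rack budget cannot cover host floors"
    spare = budget - n * floor
    total = sum(demands) or 1.0
    return [floor + spare * d / total for d in demands]

# 1000 W rack budget, three hosts, one of them spiking:
caps = rebalance_power_caps(budget=1000.0, demands=[100.0, 100.0, 300.0], floor=150.0)
```

The caps always sum exactly to the rack budget, so the rack-level constraint holds by construction while the spiking host receives the largest share — the flexibility that static per-host caps lack.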

  • CloudPowerCap: Integrating Power Budget and Resource Management Across a Virtualized Server Cluster
    International Conference on Autonomic Computing, 2014
    Co-Authors: Yong Fu, Anne Holler, Chenyang Lu
    Abstract:

    In many datacenters, server racks are highly underutilized. Rack slots are left empty to keep the sum of the servers' nameplate maximum power below the power provisioned to the rack, and the servers that are placed in the rack cannot make full use of the available rack power. The root cause of this rack underutilization is that server nameplate power is often much higher than can be reached in practice. To address rack underutilization, server vendors are shipping support for per-host power caps, which provide a server-enforced limit on the amount of power that the server can draw. Using this feature, datacenter operators can set power caps on the hosts in the rack to ensure that the sum of those caps does not exceed the rack's provisioned power. While this approach improves rack utilization, it burdens the operator with managing the rack power budget across the hosts, and it does not lend itself to flexible allocation of power to handle workload usage spikes or to respond to changes in the amount of powered-on server capacity in the rack. In this paper we present CloudPowerCap, a practical and scalable solution for power budget management in a virtualized environment. CloudPowerCap manages the power budget for a cluster of virtualized servers, dynamically adjusting the per-host power caps for hosts in the cluster. We show how CloudPowerCap can provide better use of power than per-host static settings, while respecting virtual machine resource entitlements and constraints.
    Keywords: power cap; resource management; virtualization; cloud computing

Xiaorui Wang - One of the best experts on this subject based on the ideXlab platform.

  • Performance-controlled server consolidation for virtualized data centers with multi-tier applications
    Sustainable Computing: Informatics and Systems, 2014
    Co-Authors: Yefu Wang, Xiaorui Wang
    Abstract:

    Modern data centers must provide performance assurance for complex system software such as multi-tier web applications. In addition, the power consumption of data centers needs to be minimized to reduce operating costs and avoid system overheating. Various power-efficient performance management strategies have been proposed based on dynamic voltage and frequency scaling (DVFS). Virtualization technologies have also made it possible to consolidate multiple virtual machines (VMs) onto a smaller number of active physical servers for even greater power savings, but at the cost of a higher overhead. This paper proposes a performance-controlled power optimization solution for virtualized server clusters with multi-tier applications. While most existing work relies on either DVFS or server consolidation in a separate manner, our solution utilizes both strategies for maximized power savings by integrating feedback control with optimization strategies. At the application level, a novel multi-input multi-output controller is designed to achieve the desired performance for applications spanning multiple VMs, on a short time scale, by reallocating the CPU resources and conducting DVFS. At the cluster level, a power optimizer is proposed to incrementally consolidate VMs onto the most power-efficient servers on a longer time scale. Empirical results on a hardware testbed demonstrate that our solution outperforms pMapper, a state-of-the-art server consolidation algorithm, by achieving greater power savings and smaller consolidation overheads while delivering the required application performance. Extensive simulation results, based on a trace file of 5,415 real servers, demonstrate the efficacy of our solution in large-scale data centers.
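The cluster-level preference for power-efficient servers can be illustrated with a simple first-fit packing over hosts ranked by capacity per watt. This sketch ignores the paper's incremental, performance-controlled aspects, and all host and VM figures are invented.

```python
def consolidate(vms, hosts):
    """First-fit packing of VM CPU demands (cores) onto hosts ordered by
    power efficiency (cores per watt), largest VMs first.  Illustrates
    the cluster-level idea of filling the most power-efficient servers
    first; the paper's optimizer is incremental and performance-aware,
    which this sketch is not."""
    hosts = sorted(hosts, key=lambda h: h["capacity"] / h["power"], reverse=True)
    placement = {}
    used = {h["name"]: 0.0 for h in hosts}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        for h in hosts:
            if used[h["name"]] + demand <= h["capacity"]:
                used[h["name"]] += demand
                placement[vm] = h["name"]
                break
        else:
            raise RuntimeError(f"no capacity for {vm}")
    return placement

# Hypothetical hosts: equal capacity, different draw -> "new" is preferred.
hosts = [{"name": "old", "capacity": 8.0, "power": 400.0},   # 0.020 cores/W
         {"name": "new", "capacity": 8.0, "power": 250.0}]   # 0.032 cores/W
vms = {"web": 3.0, "app": 3.0, "db": 4.0}
placement = consolidate(vms, hosts)
```

The efficient host fills up first, and only the overflow lands on the less efficient one, which could then be powered down once it empties.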

  • Virtual Batching: Request Batching for Server Energy Conservation in Virtualized Data Centers
    IEEE Transactions on Parallel and Distributed Systems, 2013
    Co-Authors: Yefu Wang, Xiaorui Wang
    Abstract:

    Many power management strategies have been proposed for enterprise servers based on dynamic voltage and frequency scaling (DVFS), but those solutions cannot further reduce the energy consumption of a server when the server processor is already at the lowest DVFS level and the server utilization is still low (e.g., 10 percent or lower). To achieve improved energy efficiency, request batching can be conducted to group received requests into batches and put the processor into sleep between the batches. However, it is challenging to perform request batching on a virtualized server because different virtual machines on the same server may have different workload intensities. Hence, putting the shared processor into sleep may severely impact the application performance of all the virtual machines. This paper proposes Virtual Batching, a novel request batching solution for virtualized servers with primarily light workloads. Our solution dynamically allocates CPU resources such that all the virtual machines can have approximately the same performance level relative to their allowed peak values. Based on this uniform level, Virtual Batching determines the time length for periodically batching incoming requests and putting the processor into sleep. When the workload intensity changes from light to moderate, request batching is automatically switched to DVFS to increase the processor frequency for performance guarantees. Virtual Batching is also extended to integrate with server consolidation for maximized energy conservation with performance guarantees for virtualized data centers. Empirical results based on a hardware testbed and real trace files show that Virtual Batching can achieve the desired performance with more energy conservation than several well-designed baselines, e.g., 63 percent more, on average, than a solution based on DVFS only.
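The light-load/moderate-load mode switch can be sketched as below. The 10 percent trigger echoes the abstract's example, but the batching-period formula and the DVFS levels are placeholders of my own, not Virtual Batching's actual design.

```python
FREQ_LEVELS = [1.0, 1.5, 2.0, 2.5]  # GHz, illustrative DVFS levels

def pick_mode(util_at_lowest_freq):
    """Mode selection in the spirit of Virtual Batching: when the server
    is nearly idle even at the lowest DVFS level, batch requests and
    sleep between batches; otherwise fall back to DVFS.  The batching
    period and the frequency rule are placeholder heuristics."""
    if util_at_lowest_freq <= 0.10:
        # Longer batches (deeper sleep) the lighter the load.
        period_ms = 100.0 * (0.10 / max(util_at_lowest_freq, 0.01))
        return ("batching", period_ms)
    # Otherwise pick the slowest frequency keeping utilization under 80%.
    for f in FREQ_LEVELS:
        if util_at_lowest_freq * FREQ_LEVELS[0] / f <= 0.80:
            return ("dvfs", f)
    return ("dvfs", FREQ_LEVELS[-1])

light = pick_mode(0.05)     # very light load -> batch and sleep
moderate = pick_mode(0.50)  # moderate load -> DVFS takes over
```

The point of the two regimes is that DVFS has a floor: below the lowest frequency, only sleeping between batched requests can save more energy.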

  • Coordinating Power Control and Performance Management for Virtualized Server Clusters
    IEEE Transactions on Parallel and Distributed Systems, 2011
    Co-Authors: Xiaorui Wang, Yefu Wang
    Abstract:

    Today's data centers face two critical challenges. First, various customers need to be assured by meeting their required service-level agreements, such as response time and throughput. Second, server power consumption must be controlled in order to avoid failures caused by power capacity overload or system overheating due to increasingly high server density. However, existing work controls power and application-level performance separately, and thus cannot simultaneously provide explicit guarantees on both. In addition, as power and performance control strategies may come from different hardware/software vendors and coexist at different layers, it is more feasible to coordinate various strategies to achieve the desired control objectives than to rely on a single centralized control strategy. This paper proposes Co-Con, a novel cluster-level control architecture that coordinates individual power and performance control loops for virtualized server clusters. To emulate the current practice in data centers, the power control loop changes hardware power states with no regard to the application-level performance. The performance control loop is then designed for each virtual machine to achieve the desired performance even when the system model varies significantly due to the impact of power control. Co-Con configures the two control loops rigorously, based on feedback control theory, for theoretically guaranteed control accuracy and system stability. Empirical results on a physical testbed demonstrate that Co-Con can simultaneously provide effective control of both application-level performance and underlying power consumption.
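The two coordinated loops can be caricatured as a pair of simple feedback laws: a power loop that is deliberately performance-blind, and a per-VM performance loop that compensates for the slowdown it causes. The gains and clamps below are illustrative assumptions; Co-Con derives and configures its loops rigorously from control theory.

```python
def power_loop(freq, power_measured, power_cap, gain=0.002):
    """Cluster power loop: throttle CPU frequency (GHz) on the power
    error alone, deliberately blind to application performance, as in
    the practice Co-Con emulates."""
    return min(2.5, max(0.8, freq - gain * (power_measured - power_cap)))

def performance_loop(alloc, resp_time, target, gain=20.0):
    """Per-VM performance loop: grow the VM's CPU allocation (%) when
    its response time overshoots the target, shrink it when there is
    slack.  Gains here are made up, not control-theoretically derived."""
    return max(5.0, alloc + gain * (resp_time - target))

# A power overshoot slows the CPU; the performance loop then compensates
# with a larger CPU share for the affected VM.
freq = power_loop(freq=2.0, power_measured=950.0, power_cap=800.0)
alloc = performance_loop(alloc=30.0, resp_time=0.9, target=0.5)
```

The coordination problem Co-Con solves is visible even here: the power loop's action changes the plant the performance loop is controlling, so the two must be configured jointly for the pair to remain stable.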

  • Co-Con: Coordinated Control of Power and Application Performance for Virtualized Server Clusters
    International Workshop on Quality of Service, 2009
    Co-Authors: Xiaorui Wang, Yefu Wang
    Abstract:

    Today's data centers face two critical challenges. First, various customers need to be assured by meeting their required service-level agreements, such as response time and throughput. Second, server power consumption must be controlled in order to avoid failures caused by power capacity overload or system overheating due to increasingly high server density. However, existing work controls power and application-level performance separately and thus cannot simultaneously provide explicit guarantees on both. This paper proposes Co-Con, a novel cluster-level control architecture that coordinates individual power and performance control loops for virtualized server clusters. To emulate the current practice in data centers, the power control loop changes hardware power states with no regard to the application-level performance. The performance control loop is then designed for each virtual machine to achieve the desired performance even when the system model varies significantly due to the impact of power control. Co-Con configures the two control loops rigorously, based on feedback control theory, for theoretically guaranteed control accuracy and system stability. Empirical results demonstrate that Co-Con can simultaneously provide effective control of both application-level performance and underlying power consumption.

Daniel Mosse - One of the best experts on this subject based on the ideXlab platform.

  • Optimized Management of Power and Performance for Virtualized Heterogeneous Server Clusters
    IEEE ACM International Symposium Cluster Cloud and Grid Computing, 2011
    Co-Authors: Vinicius Petrucci, Orlando Loques, Enrique V Carrera, Julius C B Leite, Daniel Mosse
    Abstract:

    This paper proposes and evaluates an approach for power and performance management in virtualized server clusters. The major goal of our approach is to reduce power consumption in the cluster while meeting performance requirements. The contributions of this paper are: (1) a simple but effective way of modeling the power consumption and capacity of servers even under heterogeneous and changing workloads, and (2) an optimization strategy based on a mixed integer programming model for achieving improvements in power efficiency while providing performance guarantees in the virtualized cluster. In the optimization model, we address application workload balancing and the often ignored switching costs due to frequent and undesirable server on/off transitions and VM relocations. We show the effectiveness of the approach applied to a server cluster testbed. Our experiments show that our approach conserves about 50% of the energy required by a system designed for peak workload scenarios, with little impact on the applications' performance goals. Also, by using prediction in our optimization strategy, further QoS improvement was achieved.
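The mixed integer programming idea reduces, in miniature, to choosing an on/off vector over heterogeneous servers that meets demand at minimum power. The enumeration below is a toy stand-in for a MIP solver, and the host figures are hypothetical; the paper's model additionally balances load and charges switching costs.

```python
from itertools import product

# Hypothetical heterogeneous hosts: (power draw in W when on, capacity in req/s)
HOSTS = [(200.0, 400.0), (300.0, 700.0), (150.0, 250.0)]

def cheapest_config(demand):
    """Exhaustive stand-in for the paper's mixed integer program: choose
    the binary on/off vector minimizing total power subject to serving
    the workload.  A MIP solver replaces this enumeration at scale."""
    best = None
    for on in product([0, 1], repeat=len(HOSTS)):
        capacity = sum(o * c for o, (_, c) in zip(on, HOSTS))
        power = sum(o * p for o, (p, _) in zip(on, HOSTS))
        if capacity >= demand and (best is None or power < best[0]):
            best = (power, on)
    return best

power, on = cheapest_config(demand=600.0)
```

Heterogeneity is what makes the choice nontrivial: serving 600 req/s with the single big host (300 W) beats combining the two small ones (350 W), even though either option has enough capacity.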

  • Dynamic Optimization of Power and Performance for Virtualized Server Clusters
    ACM Symposium on Applied Computing, 2010
    Co-Authors: Vinicius Petrucci, Orlando Loques, Daniel Mosse
    Abstract:

    In this paper we present an optimization solution for power and performance management in a platform running multiple independent applications. Our approach assumes a virtualized server environment and includes an optimization model and strategy to dynamically control the cluster's power consumption while meeting the applications' workload demands.