Server Hardware

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 966 Experts worldwide ranked by ideXlab platform

Georg Carle - One of the best experts on this subject based on the ideXlab platform.

  • ANRW - Behind the scenes: what device benchmarks can tell us
    Proceedings of the Applied Networking Research Workshop, 2018
    Co-Authors: Simon Bauer, Daniel Raumer, Paul Emmerich, Georg Carle
    Abstract:

    As software-based packet-forwarding devices and middle-boxes gain momentum, device diversity increases as well, which challenges the area of device benchmarking. In this paper, we present our new benchmarking framework for OpenFlow switches that runs on commodity Server Hardware alone. We present benchmark results and discuss how benchmarks reveal information about the inner details of devices. As a case study, we implemented a new test to determine queue sizes and service rates based on a simple queuing theory model.
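    The queue-size test described above can be sketched under a simple queuing model: the service rate is estimated from the maximum sustained forwarding rate, and the queue capacity from the extra latency observed at overload. The function names and all measurement values below are illustrative assumptions, not the paper's implementation.

```python
def estimate_service_rate(throughputs_pps):
    """Service rate ~ maximum sustained forwarding rate (packets/s)."""
    return max(throughputs_pps)

def estimate_queue_size(max_latency_s, base_latency_s, service_rate_pps):
    """At overload the queue stays full, so the latency added on top of the
    unloaded base latency is roughly queue_length / service_rate."""
    return round((max_latency_s - base_latency_s) * service_rate_pps)

# Illustrative measurements (assumed, not from the paper):
rates = [0.5e6, 0.9e6, 1.0e6, 1.0e6]   # offered loads above 1 Mpps saturate
mu = estimate_service_rate(rates)       # estimated service rate: 1 Mpps
qlen = estimate_queue_size(1.2e-3, 0.2e-3, mu)  # 1 ms of queuing at 1 Mpps
print(mu, qlen)
```

    A real test would sweep the offered load with a packet generator and measure latency percentiles at each step; the sketch only shows the arithmetic behind the inference.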

  • CloudNet - Efficient Serving of VPN Endpoints on COTS Server Hardware
    2016 5th IEEE International Conference on Cloud Networking (Cloudnet), 2016
    Co-Authors: Daniel Raumer, Sebastian Gallenmüller, Paul Emmerich, Lukas Mardian, Georg Carle
    Abstract:

    Of late, an increasing amount of functionality in computer networks is provided by commodity x86 Hardware, wherein the CPU is the main bottleneck. Relieving the CPU of a portion of its computational stress lowers the number of cycles spent on each single packet. Subsequently, Servers are able to deal with millions of packets per second. We show a case study in which we used the cryptographic offloading functionality of commodity NICs to build a VPN IPsec gateway on an x86 Server, where we required only one CPU core to serve 10 GbE line rate. The source code of the NIC-accelerated VPN gateway in our case study is publicly available. Our case study shows the trade-offs between manifold software-provided and high-performance Hardware-provided offloading functionality.
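    Serving 10 GbE line rate from a single core implies a very tight per-packet cycle budget, which a back-of-the-envelope calculation makes concrete. The 3 GHz clock below is an assumed example; the paper's exact hardware may differ.

```python
# 10 GbE with minimum-size (64 B) Ethernet frames: each frame occupies
# 64 B payload+headers plus 20 B of preamble and inter-frame gap on the wire.
LINE_RATE_BPS = 10e9
FRAME_ON_WIRE_BYTES = 64 + 20

pps = LINE_RATE_BPS / (FRAME_ON_WIRE_BYTES * 8)   # ~14.88 Mpps

CPU_HZ = 3.0e9                                    # assumed 3 GHz core
cycles_per_packet = CPU_HZ / pps                  # ~200 cycles per packet

print(f"{pps / 1e6:.2f} Mpps, {cycles_per_packet:.0f} cycles/packet")
```

    With only a couple of hundred cycles per packet, even a single cache miss or a software crypto operation dominates the budget, which is why offloading the IPsec cryptography to the NIC frees the core to sustain line rate.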

Babak Falsafi - One of the best experts on this subject based on the ideXlab platform.

  • A Case for Specialized Processors for Scale-Out Workloads
    IEEE Micro, 2014
    Co-Authors: Michael Ferdman, Almutaz Adileh, Onur Kocberber, Stavros Volos, Mohammad Alisafaee, Djordje Jevdjic, Cansu Kaynak, Adrian Daniel Popescu, Anastasia Ailamaki, Babak Falsafi
    Abstract:

    Emerging scale-out workloads need extensive amounts of computational resources. However, datacenters using modern Server Hardware face physical constraints in space and power, limiting further expansion and requiring improvements in the computational density per Server and in the per-operation energy. Continuing to improve the computational resources of the cloud while staying within physical constraints mandates optimizing Server efficiency. In this work, we demonstrate that modern Server processors are highly inefficient for running cloud workloads. To address this problem, we investigate the microarchitectural behavior of scale-out workloads and present opportunities to enable specialized processor designs that closely match the needs of the cloud.

  • Quantifying the Mismatch between Emerging Scale-Out Applications and Modern Processors
    ACM Transactions on Computer Systems, 2012
    Co-Authors: Michael Ferdman, Almutaz Adileh, Onur Kocberber, Stavros Volos, Mohammad Alisafaee, Djordje Jevdjic, Cansu Kaynak, Adrian Daniel Popescu, Anastasia Ailamaki, Babak Falsafi
    Abstract:

    Emerging scale-out workloads require extensive amounts of computational resources. However, data centers using modern Server Hardware face physical constraints in space and power, limiting further expansion and calling for improvements in the computational density per Server and in the per-operation energy. Continuing to improve the computational resources of the cloud while staying within physical constraints mandates optimizing Server efficiency to ensure that Server Hardware closely matches the needs of scale-out workloads. In this work, we introduce CloudSuite, a benchmark suite of emerging scale-out workloads. We use performance counters on modern Servers to study scale-out workloads, finding that today’s predominant processor microarchitecture is inefficient for running these workloads. We find that inefficiency comes from the mismatch between the workload needs and modern processors, particularly in the organization of instruction and data memory systems and the processor core microarchitecture. Moreover, while today’s predominant microarchitecture is inefficient when executing scale-out workloads, we find that continuing the current trends will further exacerbate the inefficiency in the future. In this work, we identify the key microarchitectural needs of scale-out workloads, calling for a change in the trajectory of Server processors that would lead to improved computational density and power efficiency in data centers.
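    The counter-based methodology can be illustrated with the standard derived metrics such studies rely on, instructions per cycle (IPC) and misses per kilo-instruction (MPKI). The raw counts below are made-up placeholders, not CloudSuite measurements.

```python
def ipc(instructions, cycles):
    """Instructions per cycle: how well the core's issue width is used."""
    return instructions / cycles

def mpki(misses, instructions):
    """Misses per kilo-instruction: pressure on the memory hierarchy."""
    return misses * 1000.0 / instructions

# Placeholder counter readings (illustrative only):
instructions = 50_000_000_000
cycles       = 80_000_000_000
l2_misses    = 1_250_000_000

print(f"IPC = {ipc(instructions, cycles):.2f}")        # 0.62, far below a wide core's peak
print(f"L2 MPKI = {mpki(l2_misses, instructions):.1f}")  # 25.0
```

    Low IPC together with high MPKI is the signature of the mismatch the paper describes: wide out-of-order cores stall on instruction and data misses that large last-level caches do little to absorb for scale-out workloads.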

  • ASPLOS - Clearing the clouds: a study of emerging scale-out workloads on modern Hardware
    Proceedings of the seventeenth international conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS '12, 2012
    Co-Authors: Michael Ferdman, Almutaz Adileh, Onur Kocberber, Stavros Volos, Mohammad Alisafaee, Djordje Jevdjic, Cansu Kaynak, Adrian Daniel Popescu, Anastasia Ailamaki, Babak Falsafi
    Abstract:

    Emerging scale-out workloads require extensive amounts of computational resources. However, data centers using modern Server Hardware face physical constraints in space and power, limiting further expansion and calling for improvements in the computational density per Server and in the per-operation energy. Continuing to improve the computational resources of the cloud while staying within physical constraints mandates optimizing Server efficiency to ensure that Server Hardware closely matches the needs of scale-out workloads. In this work, we introduce CloudSuite, a benchmark suite of emerging scale-out workloads. We use performance counters on modern Servers to study scale-out workloads, finding that today's predominant processor micro-architecture is inefficient for running these workloads. We find that inefficiency comes from the mismatch between the workload needs and modern processors, particularly in the organization of instruction and data memory systems and the processor core micro-architecture. Moreover, while today's predominant micro-architecture is inefficient when executing scale-out workloads, we find that continuing the current trends will further exacerbate the inefficiency in the future. In this work, we identify the key micro-architectural needs of scale-out workloads, calling for a change in the trajectory of Server processors that would lead to improved computational density and power efficiency in data centers.

  • Clearing the Clouds: A Study of Emerging Workloads on Modern Hardware
    2011
    Co-Authors: Michael Ferdman, Almutaz Adileh, Onur Kocberber, Stavros Volos, Mohammad Alisafaee, Djordje Jevdjic, Cansu Kaynak, Adrian Daniel Popescu, Anastasia Ailamaki, Babak Falsafi
    Abstract:

    Emerging scale-out cloud applications need extensive amounts of computational resources. However, data centers using modern Server Hardware face physical constraints in space and power, limiting further expansion and calling for improvements in the computational density per Server and in the per-operation energy use. Therefore, continuing to improve the computational resources of the cloud while staying within physical constraints mandates optimizing Server efficiency to ensure that Server Hardware closely matches the needs of scale-out cloud applications. We use performance counters on modern Servers to study a wide range of cloud applications, finding that today’s predominant processor architecture is inefficient for running these workloads. We find that inefficiency comes from the mismatch between the application needs and modern processors, particularly in the organization of instruction and data memory systems and the processor core architecture. Moreover, while today’s predominant architectures are inefficient when executing scale-out cloud applications, we find that the current Hardware trends further exacerbate the mismatch. In this work, we identify the key micro-architectural needs of cloud applications, calling for a change in the trajectory of Server processors that would lead to improved computational density and power efficiency in data centers.

Eric Kralicek - One of the best experts on this subject based on the ideXlab platform.

  • Server Hardware Strategy
    The Accidental SysAdmin Handbook, 2016
    Co-Authors: Eric Kralicek
    Abstract:

    Servers tend to last longer than laptops and desktop computers. Server Hardware is built to take a lot of punishment. Servers come preloaded with redundant devices so they don’t need to go offline as often as workstations would. Servers are generally noisy and consume a lot of power. Servers require additional air conditioning and cost more because they do more. Most Servers share basic configuration characteristics:

Daniel Raumer - One of the best experts on this subject based on the ideXlab platform.

  • ANRW - Behind the scenes: what device benchmarks can tell us
    Proceedings of the Applied Networking Research Workshop, 2018
    Co-Authors: Simon Bauer, Daniel Raumer, Paul Emmerich, Georg Carle
    Abstract:

    As software-based packet-forwarding devices and middle-boxes gain momentum, device diversity increases as well, which challenges the area of device benchmarking. In this paper, we present our new benchmarking framework for OpenFlow switches that runs on commodity Server Hardware alone. We present benchmark results and discuss how benchmarks reveal information about the inner details of devices. As a case study, we implemented a new test to determine queue sizes and service rates based on a simple queuing theory model.

  • CloudNet - Efficient Serving of VPN Endpoints on COTS Server Hardware
    2016 5th IEEE International Conference on Cloud Networking (Cloudnet), 2016
    Co-Authors: Daniel Raumer, Sebastian Gallenmüller, Paul Emmerich, Lukas Mardian, Georg Carle
    Abstract:

    Of late, an increasing amount of functionality in computer networks is provided by commodity x86 Hardware, wherein the CPU is the main bottleneck. Relieving the CPU of a portion of its computational stress lowers the number of cycles spent on each single packet. Subsequently, Servers are able to deal with millions of packets per second. We show a case study in which we used the cryptographic offloading functionality of commodity NICs to build a VPN IPsec gateway on an x86 Server, where we required only one CPU core to serve 10 GbE line rate. The source code of the NIC-accelerated VPN gateway in our case study is publicly available. Our case study shows the trade-offs between manifold software-provided and high-performance Hardware-provided offloading functionality.

Michael Ferdman - One of the best experts on this subject based on the ideXlab platform.

  • A Case for Specialized Processors for Scale-Out Workloads
    IEEE Micro, 2014
    Co-Authors: Michael Ferdman, Almutaz Adileh, Onur Kocberber, Stavros Volos, Mohammad Alisafaee, Djordje Jevdjic, Cansu Kaynak, Adrian Daniel Popescu, Anastasia Ailamaki, Babak Falsafi
    Abstract:

    Emerging scale-out workloads need extensive amounts of computational resources. However, datacenters using modern Server Hardware face physical constraints in space and power, limiting further expansion and requiring improvements in the computational density per Server and in the per-operation energy. Continuing to improve the computational resources of the cloud while staying within physical constraints mandates optimizing Server efficiency. In this work, we demonstrate that modern Server processors are highly inefficient for running cloud workloads. To address this problem, we investigate the microarchitectural behavior of scale-out workloads and present opportunities to enable specialized processor designs that closely match the needs of the cloud.

  • Quantifying the Mismatch between Emerging Scale-Out Applications and Modern Processors
    ACM Transactions on Computer Systems, 2012
    Co-Authors: Michael Ferdman, Almutaz Adileh, Onur Kocberber, Stavros Volos, Mohammad Alisafaee, Djordje Jevdjic, Cansu Kaynak, Adrian Daniel Popescu, Anastasia Ailamaki, Babak Falsafi
    Abstract:

    Emerging scale-out workloads require extensive amounts of computational resources. However, data centers using modern Server Hardware face physical constraints in space and power, limiting further expansion and calling for improvements in the computational density per Server and in the per-operation energy. Continuing to improve the computational resources of the cloud while staying within physical constraints mandates optimizing Server efficiency to ensure that Server Hardware closely matches the needs of scale-out workloads. In this work, we introduce CloudSuite, a benchmark suite of emerging scale-out workloads. We use performance counters on modern Servers to study scale-out workloads, finding that today’s predominant processor microarchitecture is inefficient for running these workloads. We find that inefficiency comes from the mismatch between the workload needs and modern processors, particularly in the organization of instruction and data memory systems and the processor core microarchitecture. Moreover, while today’s predominant microarchitecture is inefficient when executing scale-out workloads, we find that continuing the current trends will further exacerbate the inefficiency in the future. In this work, we identify the key microarchitectural needs of scale-out workloads, calling for a change in the trajectory of Server processors that would lead to improved computational density and power efficiency in data centers.

  • ASPLOS - Clearing the clouds: a study of emerging scale-out workloads on modern Hardware
    Proceedings of the seventeenth international conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS '12, 2012
    Co-Authors: Michael Ferdman, Almutaz Adileh, Onur Kocberber, Stavros Volos, Mohammad Alisafaee, Djordje Jevdjic, Cansu Kaynak, Adrian Daniel Popescu, Anastasia Ailamaki, Babak Falsafi
    Abstract:

    Emerging scale-out workloads require extensive amounts of computational resources. However, data centers using modern Server Hardware face physical constraints in space and power, limiting further expansion and calling for improvements in the computational density per Server and in the per-operation energy. Continuing to improve the computational resources of the cloud while staying within physical constraints mandates optimizing Server efficiency to ensure that Server Hardware closely matches the needs of scale-out workloads. In this work, we introduce CloudSuite, a benchmark suite of emerging scale-out workloads. We use performance counters on modern Servers to study scale-out workloads, finding that today's predominant processor micro-architecture is inefficient for running these workloads. We find that inefficiency comes from the mismatch between the workload needs and modern processors, particularly in the organization of instruction and data memory systems and the processor core micro-architecture. Moreover, while today's predominant micro-architecture is inefficient when executing scale-out workloads, we find that continuing the current trends will further exacerbate the inefficiency in the future. In this work, we identify the key micro-architectural needs of scale-out workloads, calling for a change in the trajectory of Server processors that would lead to improved computational density and power efficiency in data centers.

  • Clearing the Clouds: A Study of Emerging Workloads on Modern Hardware
    2011
    Co-Authors: Michael Ferdman, Almutaz Adileh, Onur Kocberber, Stavros Volos, Mohammad Alisafaee, Djordje Jevdjic, Cansu Kaynak, Adrian Daniel Popescu, Anastasia Ailamaki, Babak Falsafi
    Abstract:

    Emerging scale-out cloud applications need extensive amounts of computational resources. However, data centers using modern Server Hardware face physical constraints in space and power, limiting further expansion and calling for improvements in the computational density per Server and in the per-operation energy use. Therefore, continuing to improve the computational resources of the cloud while staying within physical constraints mandates optimizing Server efficiency to ensure that Server Hardware closely matches the needs of scale-out cloud applications. We use performance counters on modern Servers to study a wide range of cloud applications, finding that today’s predominant processor architecture is inefficient for running these workloads. We find that inefficiency comes from the mismatch between the application needs and modern processors, particularly in the organization of instruction and data memory systems and the processor core architecture. Moreover, while today’s predominant architectures are inefficient when executing scale-out cloud applications, we find that the current Hardware trends further exacerbate the mismatch. In this work, we identify the key micro-architectural needs of cloud applications, calling for a change in the trajectory of Server processors that would lead to improved computational density and power efficiency in data centers.