Mainframe

The experts below are selected from a list of 7,083 experts worldwide, ranked by the ideXlab platform.

Anthony Saporito - One of the best experts on this subject based on the ideXlab platform.

  • The IBM z15 High Frequency Mainframe Branch Predictor: Industrial Product
    International Symposium on Computer Architecture, 2020
    Co-Authors: Narasimha R Adiga, James J Bonanno, Adam B Collura, Matthias D Heizmann, Brian R Prasky, Anthony Saporito
    Abstract:

    The design of the modern, enterprise-class IBM z15 branch predictor is described. Implemented as a multilevel look-ahead structure, the branch predictor is capable of predicting branch direction and target addresses, augmented with multiple auxiliary direction, target, and power predictors. Predictions are made asynchronously, and later integrated into the processor pipeline. The design is optimized for the unique workloads executed on these enterprise-class systems, including compute intensive and both large instruction and data footprint workloads. This paper highlights the major operations and functions of the IBM z15 branch predictor, including its pipeline, prediction structures and verification methodology. Explanations as to how the design matured to its current state are also provided.
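
    The z15 predictor described above is a multilevel, asynchronous look-ahead structure whose details are not reproduced in the abstract. As a point of reference only, the sketch below shows the textbook building blocks such a design generalizes: a branch target buffer (BTB) for target addresses and 2-bit saturating counters for direction. The table size, index hash, and update policy are illustrative assumptions and do not reflect the IBM design.

```python
# Minimal sketch of a direction + target branch predictor.
# Table size, index hash, and update policy are illustrative assumptions,
# not the multilevel, asynchronous IBM z15 design described above.

class SimpleBranchPredictor:
    def __init__(self, entries=1024):
        self.entries = entries
        # Each BTB entry holds (branch address, predicted target address).
        self.btb = [None] * entries
        # 2-bit saturating counters: 0-1 predict not-taken, 2-3 predict taken.
        self.counters = [2] * entries

    def _index(self, pc):
        return (pc >> 2) % self.entries

    def predict(self, pc):
        """Return (taken?, target) for the branch at address pc."""
        i = self._index(pc)
        entry = self.btb[i]
        taken = self.counters[i] >= 2
        if entry is not None and entry[0] == pc and taken:
            return True, entry[1]
        return False, pc + 4  # fall through to the next sequential instruction

    def update(self, pc, taken, target):
        """Train the predictor with the resolved branch outcome."""
        i = self._index(pc)
        if taken:
            self.counters[i] = min(3, self.counters[i] + 1)
            self.btb[i] = (pc, target)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)

bp = SimpleBranchPredictor()
bp.update(0x4000, taken=True, target=0x4800)
print(bp.predict(0x4000))   # -> (True, 0x4800) once the counter predicts taken
```

    A real machine trains such structures at branch resolution time and looks them up ahead of instruction fetch; running that lookup asynchronously, ahead of the pipeline, is the "look-ahead" aspect the abstract refers to.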

  • IBM z14: Processor Characterization and Power Management for High-Reliability Mainframe Systems
    IEEE Journal of Solid-state Circuits, 2019
    Co-Authors: Christopher J Berry, David H Wolpert, Christos Vezrytzis, Richard F Rizzolo, S Carey, Yaniv Maroz, Hunter Shi, Dureseti Chidambarrao, Christian Jacobi, Anthony Saporito
    Abstract:

    The IBM z14 is the latest update in the storied history of IBM Mainframes. Reliability, availability, security, and scalability are the foundation of the IBM Mainframe line. System reliability and availability targets are in excess of 10 years, requiring rigorous chip characterization processes. In this paper, we discuss some of the many processes used to ensure that lifetime. An additional part of this reliability is power management (PM). The 5.2-GHz high-power design of the central processor chip requires advanced on-die PM capabilities to adapt to power intensive instruction streams. We also discuss a number of improvements to the critical path monitoring design used to manage power supply voltage droops, reducing response time and the impact on system performance. Finally, we compare a set of simulations and hardware results to validate our power fluctuation models.
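
    One of the techniques discussed above is critical path monitoring used to manage power-supply voltage droops. At a very rough conceptual level, this can be viewed as a feedback loop that throttles the core while timing margin shrinks and releases it once the supply recovers. The toy model below illustrates only that control idea; the voltage levels, thresholds, and response behavior are invented numbers, not z14 characterization data.

```python
# Toy model of a droop-mitigation loop driven by a critical path monitor (CPM).
# All numbers (nominal voltage, droop depth, thresholds, throttle hysteresis)
# are invented for illustration; they are not IBM z14 characterization data.

def cpm_margin(voltage, nominal=1.0):
    """Pretend timing margin: shrinks as supply voltage drops below nominal."""
    return (voltage - 0.85) / (nominal - 0.85)   # 1.0 at nominal, 0.0 at 0.85 V

def simulate_droop(cycles=20):
    voltage = 1.0
    throttled = False
    for cycle in range(cycles):
        # A power-intensive instruction burst pulls the supply down mid-run.
        if 5 <= cycle <= 10:
            voltage -= 0.02
        else:
            voltage = min(1.0, voltage + 0.01)   # supply recovers

        margin = cpm_margin(voltage)
        # Throttle when margin falls below a guard band; release with hysteresis.
        if margin < 0.4:
            throttled = True
        elif margin > 0.7:
            throttled = False
        print(f"cycle {cycle:2d}: V={voltage:.2f}  margin={margin:.2f}  "
              f"{'THROTTLE' if throttled else 'full speed'}")

simulate_droop()
```

    The hysteresis between the throttle and release thresholds is the usual way to keep such a loop from oscillating; the paper's contribution is reducing the monitor's response time so the throttle engages before timing margin is actually lost.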

Kenneth L Kraemer - One of the best experts on this subject based on the ideXlab platform.

  • Mainframe and PC Computing in American Cities: Myths and Realities
    Center for Research on Information Technology and Organizations, 1996
    Co-Authors: Donald F Norris, Kenneth L Kraemer
    Abstract:

    Mainframe and PC Computing in American Cities: Myths and Realities. Working Paper #URB-083, by Donald F. Norris (University of Maryland Baltimore County) and Kenneth L. Kraemer (University of California, Irvine).

  • Mainframe and PC Computing in American Cities: Myths and Realities
    Public Administration Review, 1996
    Co-Authors: Donald F Norris, Kenneth L Kraemer
    Abstract:

    How much can PCs aid city management? This article is based on a 1993 survey that compares computing in cities that use only personal computers (PCs) with computing in cities that use central computer systems. The authors found that claims that PCs would speed up automation of governmental functions were not substantiated. Central-system cities were more widely automated, had more widespread use among staff, and were more likely to deploy leading-edge computer technologies than PC-only cities. Moreover, respondents in central-system cities were positive about computer impacts and satisfied with computing. PC-only cities had an edge over central-system cities in that they reported fewer problems with computers, but the test of statistical significance showed only a weak relationship. The authors argue that PC-only cities' reliance on ad hoc solutions, outsourcing, or "computer gurus" results in a failure to develop ongoing support capabilities. In contrast, central-system cities have developed and enhanced these capabilities over time, thereby providing greater support for the computing function and a more stable technology platform.

    Both elected officials and professional managers in local governments believe in the value of computers, especially personal computers (PCs), to their own work and the work of government. Various academic studies have demonstrated this belief over the years (e.g., Dutton and Kraemer, 1979; Perry and Kraemer, 1980; and Norris, 1989 and 1992). Yet policy makers are continually confronted with claims about computing that they find difficult to assess and that occasionally defy rationality. For example, within recent memory it has been claimed that privatization or outsourcing would take the computing problem off the hands of local officials at less cost, and that geographic information systems would enable officials to make Solomon-like judgments about such important matters as land-use planning (e.g., Richter, 1991; Loh and Venkatraman, 1992; Public Technology Inc., 1991). More recent claims are that client-server computing is the new low-cost way to governmental automation (Gagliardi, 1994) and that desktop computers are the means to increase employee productivity and to empower workers to deliver better services to citizens (Greisemer, 1983 and 1984).

    One of the most persistent claims, which has at least a decade of history, is that the PC can effectively replace larger central computer systems in local governments (i.e., Mainframes and minicomputers). For example, it is frequently asserted that, unlike Mainframe computing, the introduction of PCs is an easy, low-cost solution to automation in government. It is believed that by adopting PCs, latecomers to computing can leap-frog the brain-dead Mainframe and minicomputer technologies and still gain all the benefits of these earlier, cumbersome technologies--and then some. All that is needed is a basic investment in the technology and the empowerment of workers to use the technology in their jobs. The Management Information Systems (MIS) department will be needed only to help make the transition and to train users in the new technology (see, for example, Greisemer, 1983 and 1984; and Voss and Eikemeier, 1984). PCs may be all that small local governments, or even some small units within larger governments, need to conduct their business. However, it is extremely unlikely that even the most powerful and sophisticated PCs on the market today can solve all of the automation needs of local government.

    Indeed, recent studies of the lifecycle cost of PCs, actual experience with PCs, and recent reports on PC-based client-server computing call several of these assertions into question. For example, while the initial cost of PC-based client-server computing has been shown to be lower than Mainframe or minicomputer alternatives by 20 to 30 percent, the five-year costs of PCs were found to be two to three times as great per employee (Nolan, Norton, and Company, 1992; Miller, 1993; Ambrosio, 1993; and "Client/Server," 1994). …
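
    The cost comparison quoted above can be made concrete with a short worked example. Only the ratios (initial cost 20 to 30 percent below the central-system alternative, but five-year per-employee costs two to three times as great) come from the article; the baseline dollar figures below are hypothetical placeholders chosen purely to illustrate the arithmetic.

```python
# Worked example of the cost claim cited above. The baseline figures are
# invented placeholders; only the ratios (20-30% lower initial cost, 2-3x
# higher five-year per-employee cost) come from the article.

mainframe_initial = 1_000_000          # hypothetical central-system purchase cost
mainframe_5yr_per_employee = 10_000    # hypothetical five-year cost per employee

pc_initial_low  = mainframe_initial * (1 - 0.30)   # 30% cheaper up front
pc_initial_high = mainframe_initial * (1 - 0.20)   # 20% cheaper up front

pc_5yr_low  = mainframe_5yr_per_employee * 2       # 2x over five years
pc_5yr_high = mainframe_5yr_per_employee * 3       # 3x over five years

print(f"PC/client-server initial cost:  ${pc_initial_low:,.0f} - ${pc_initial_high:,.0f}")
print(f"PC five-year cost per employee: ${pc_5yr_low:,.0f} - ${pc_5yr_high:,.0f}")
```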

Christian Jacobi - One of the best experts on this subject based on the ideXlab platform.

  • History of IBM Z Mainframe Processors
    IEEE Micro, 2020
    Co-Authors: Christian Jacobi, Charles F. Webb
    Abstract:

    IBM Z is both the oldest and among the most modern of computing platforms. Launched as S/360 in 1964, the Mainframe became synonymous with large-scale computing for business and remains the workhorse of enterprise computing for businesses worldwide. Most of the world's largest banks, insurers, retailers, airlines, and enterprises from many other industries have IBM Z at the center of their IT infrastructure. This article presents an overview of the evolution of the IBM Z microprocessors over the past six generations. The article discusses some of the underlying workload characteristics and how these have influenced the microarchitecture enhancements driving the performance and capacity improvements. The article then describes how the focus shifted over time from speeds and feeds to new features, functions, and accelerators, and presents some examples on improved availability, enhanced security and cryptography, and embedded data compression acceleration.

  • IBM z14: Processor Characterization and Power Management for High-Reliability Mainframe Systems
    IEEE Journal of Solid-state Circuits, 2019
    Co-Authors: Christopher J Berry, David H Wolpert, Christos Vezrytzis, Richard F Rizzolo, S Carey, Yaniv Maroz, Hunter Shi, Dureseti Chidambarrao, Christian Jacobi, Anthony Saporito
    Abstract:

    The IBM z14 is the latest update in the storied history of IBM Mainframes. Reliability, availability, security, and scalability are the foundation of the IBM Mainframe line. System reliability and availability targets are in excess of 10 years, requiring rigorous chip characterization processes. In this paper, we discuss some of the many processes used to ensure that lifetime. An additional part of this reliability is power management (PM). The 5.2-GHz high-power design of the central processor chip requires advanced on-die PM capabilities to adapt to power intensive instruction streams. We also discuss a number of improvements to the critical path monitoring design used to manage power supply voltage droops, reducing response time and the impact on system performance. Finally, we compare a set of simulations and hardware results to validate our power fluctuation models.

  • IBM zEC12: The Third-Generation High-Frequency Mainframe Microprocessor
    IEEE Micro, 2013
    Co-Authors: K Shum, Fadi Y Busaba, Christian Jacobi
    Abstract:

    The zEnterprise EC12 is the latest generation of IBM's System z Enterprise Class Mainframe servers. The microprocessor operates at an ultra-high frequency of 5.5 GHz and incorporates many pipeline-optimization and instruction-processing techniques. It also supports innovative instruction-set-architecture extensions that future software can exploit for performance gains. This article highlights the various features of the zEC12 microprocessor that contribute to achieving the best possible computing performance.

Donald F Norris - One of the best experts on this subject based on the ideXlab platform.

  • Mainframe and PC Computing in American Cities: Myths and Realities
    Center for Research on Information Technology and Organizations, 1996
    Co-Authors: Donald F Norris, Kenneth L Kraemer
    Abstract:

    Mainframe and PC Computing in American Cities: Myths and Realities. Working Paper #URB-083, by Donald F. Norris (University of Maryland Baltimore County) and Kenneth L. Kraemer (University of California, Irvine).

  • Mainframe and PC Computing in American Cities: Myths and Realities
    Public Administration Review, 1996
    Co-Authors: Donald F Norris, Kenneth L Kraemer
    Abstract:

    How much can PCs aid city management? This article is based on a 1993 survey that compares computing in cities that use only personal computers (PCs) with computing in cities that use central computer systems. The authors found that claims that PCs would speed up automation of governmental functions were not substantiated. Central-system cities were more widely automated, had more widespread use among staff, and were more likely to deploy leading-edge computer technologies than PC-only cities. Moreover, respondents in central-system cities were positive about computer impacts and satisfied with computing. PC-only cities had an edge over central-system cities in that they reported fewer problems with computers, but the test of statistical significance showed only a weak relationship. The authors argue that PC-only cities' reliance on ad hoc solutions, outsourcing, or "computer gurus" results in a failure to develop ongoing support capabilities. In contrast, central-system cities have developed and enhanced these capabilities over time, thereby providing greater support for the computing function and a more stable technology platform.

    Both elected officials and professional managers in local governments believe in the value of computers, especially personal computers (PCs), to their own work and the work of government. Various academic studies have demonstrated this belief over the years (e.g., Dutton and Kraemer, 1979; Perry and Kraemer, 1980; and Norris, 1989 and 1992). Yet policy makers are continually confronted with claims about computing that they find difficult to assess and that occasionally defy rationality. For example, within recent memory it has been claimed that privatization or outsourcing would take the computing problem off the hands of local officials at less cost, and that geographic information systems would enable officials to make Solomon-like judgments about such important matters as land-use planning (e.g., Richter, 1991; Loh and Venkatraman, 1992; Public Technology Inc., 1991). More recent claims are that client-server computing is the new low-cost way to governmental automation (Gagliardi, 1994) and that desktop computers are the means to increase employee productivity and to empower workers to deliver better services to citizens (Greisemer, 1983 and 1984).

    One of the most persistent claims, which has at least a decade of history, is that the PC can effectively replace larger central computer systems in local governments (i.e., Mainframes and minicomputers). For example, it is frequently asserted that, unlike Mainframe computing, the introduction of PCs is an easy, low-cost solution to automation in government. It is believed that by adopting PCs, latecomers to computing can leap-frog the brain-dead Mainframe and minicomputer technologies and still gain all the benefits of these earlier, cumbersome technologies--and then some. All that is needed is a basic investment in the technology and the empowerment of workers to use the technology in their jobs. The Management Information Systems (MIS) department will be needed only to help make the transition and to train users in the new technology (see, for example, Greisemer, 1983 and 1984; and Voss and Eikemeier, 1984). PCs may be all that small local governments, or even some small units within larger governments, need to conduct their business. However, it is extremely unlikely that even the most powerful and sophisticated PCs on the market today can solve all of the automation needs of local government.

    Indeed, recent studies of the lifecycle cost of PCs, actual experience with PCs, and recent reports on PC-based client-server computing call several of these assertions into question. For example, while the initial cost of PC-based client-server computing has been shown to be lower than Mainframe or minicomputer alternatives by 20 to 30 percent, the five-year costs of PCs were found to be two to three times as great per employee (Nolan, Norton, and Company, 1992; Miller, 1993; Ambrosio, 1993; and "Client/Server," 1994). …

Christopher J Berry - One of the best experts on this subject based on the ideXlab platform.

  • IBM z14: Processor Characterization and Power Management for High-Reliability Mainframe Systems
    IEEE Journal of Solid-state Circuits, 2019
    Co-Authors: Christopher J Berry, David H Wolpert, Christos Vezrytzis, Richard F Rizzolo, S Carey, Yaniv Maroz, Hunter Shi, Dureseti Chidambarrao, Christian Jacobi, Anthony Saporito
    Abstract:

    The IBM z14 is the latest update in the storied history of IBM Mainframes. Reliability, availability, security, and scalability are the foundation of the IBM Mainframe line. System reliability and availability targets are in excess of 10 years, requiring rigorous chip characterization processes. In this paper, we discuss some of the many processes used to ensure that lifetime. An additional part of this reliability is power management (PM). The 5.2-GHz high-power design of the central processor chip requires advanced on-die PM capabilities to adapt to power intensive instruction streams. We also discuss a number of improvements to the critical path monitoring design used to manage power supply voltage droops, reducing response time and the impact on system performance. Finally, we compare a set of simulations and hardware results to validate our power fluctuation models.

  • IBM z14: 14nm Microprocessor for the Next-Generation Mainframe
    International Solid-State Circuits Conference, 2018
    Co-Authors: Christopher J Berry, J Warnock, John Isakson, John Badar, Brian Bell, Frank Malgioglio, Guenter Mayer, Dina Hamid, Jesse Surprise, David Wolpert
    Abstract:

    The IBM Z microprocessor in the z14 system has been redesigned to improve performance, system capacity, and security [1] over the previous z13 system [2]. The system contains up to 24 central processor (CP) and 4 system controller (SC) chips. Each CP, shown in die photo A (Fig. 2.2.7), operates at 5.2GHz and is comprised of 10 cores, 2 PCIe Gen3 interfaces, an IO bus controller (GX), 128MB of L3 embedded DRAM (eDRAM) cache, X-BUS interfaces connecting to 2 other CP chips and one SC chip, and a redundant array of independent memory (RAIM) interface. Each core on the CP chip has 4MB of eDRAM L2 Data cache and 2MB of eDRAM L2 Instruction cache, with 128KB SRAM Instruction and 128KB SRAM Data L1 caches. Each SC, shown in die photo B (Fig. 2.2.7), operates at 2.6GHz and has 672MB of L4 eDRAM cache, X-BUS interfaces connecting to CP chips in the drawer and A-BUS interfaces connecting SCs on the other drawers. Both chips are 696mm² and are designed in GlobalFoundries 14nm high performance (14HP) SOI FinFET technology with 17 layers of copper interconnect [3]. The CP contains 6.1B transistors, while the SC contains 9.7B transistors. The total IO bandwidths of the CP and SC are 2.9Tb/s and 5.5Tb/s, respectively.
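
    The per-chip figures quoted in this abstract can be combined into rough system-level totals. The short calculation below assumes a fully populated configuration (all 24 CP and 4 SC chips present); that assumption and the derived totals are ours, not figures stated in the paper.

```python
# Aggregate resources of a fully populated z14 configuration, computed from
# the per-chip figures quoted in the abstract above. The "fully populated"
# assumption (24 CP + 4 SC chips) is ours, made for illustration only.

CP_CHIPS, SC_CHIPS = 24, 4
CORES_PER_CP       = 10
L3_PER_CP_MB       = 128      # eDRAM L3 per CP chip
L4_PER_SC_MB       = 672      # eDRAM L4 per SC chip
L2_D_PER_CORE_MB   = 4        # eDRAM L2 data cache per core
L2_I_PER_CORE_MB   = 2        # eDRAM L2 instruction cache per core

total_cores = CP_CHIPS * CORES_PER_CP
total_l2_mb = total_cores * (L2_D_PER_CORE_MB + L2_I_PER_CORE_MB)
total_l3_mb = CP_CHIPS * L3_PER_CP_MB
total_l4_mb = SC_CHIPS * L4_PER_SC_MB

print(f"cores:      {total_cores}")       # 240
print(f"L2 (eDRAM): {total_l2_mb} MB")    # 1440 MB
print(f"L3 (eDRAM): {total_l3_mb} MB")    # 3072 MB
print(f"L4 (eDRAM): {total_l4_mb} MB")    # 2688 MB
```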