Quantum Computers

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 20,688 Experts worldwide, ranked by the ideXlab platform

Moinuddin K Qureshi - One of the best experts on this subject based on the ideXlab platform.

  • Ensemble of Diverse Mappings: Improving Reliability of Quantum Computers by Orchestrating Dissimilar Mistakes
    International Symposium on Microarchitecture, 2019
    Co-Authors: Swamit S Tannu, Moinuddin K Qureshi
    Abstract:

    Near-term Quantum Computers do not have the ability to perform error correction. Such Noisy Intermediate Scale Quantum (NISQ) Computers can produce incorrect output as the computation is subjected to errors. Applications on a NISQ machine try to infer the correct output by running the same program thousands of times and logging the outputs. If the error rates are low and the errors are not correlated, then the correct answer can be inferred as the one appearing with the highest frequency. Unfortunately, Quantum Computers are subject to correlated errors, which can cause an incorrect answer to appear more frequently than the correct answer. We observe that recent work on qubit mapping (including recent work on variation-aware mapping) tries to obtain the best possible qubit allocation and uses it for all the trials. This approach significantly increases the vulnerability to correlated errors -- if the mapping becomes susceptible to a particular form of error, then all the trials will be subjected to the same error, which can cause the same wrong answer to appear as the output for a significant fraction of the trials. To mitigate the vulnerability to such correlated errors, this paper leverages the concept of diversity and proposes an Ensemble of Diverse Mappings (EDM). EDM uses diversity in qubit allocation to run copies of an input program with a diverse set of mappings, thus steering the trials towards making different mistakes. By combining the output probability distributions of the diverse ensemble, EDM amplifies the correct answer by suppressing the incorrect answers. Our experiments with the ibmq-melbourne (14-qubit) machine show that EDM improves the inference quality by 2.3x compared to the current state-of-the-art mapping algorithms.
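The combining step EDM describes -- merging per-mapping output histograms so that uncorrelated mistakes cancel while the correct answer accumulates -- can be sketched as follows (a toy illustration, not the authors' implementation; the function name, data layout, and equal-weight averaging are assumptions):

```python
from collections import Counter

def edm_infer(ensemble_counts):
    """Combine measurement histograms from runs of the same program under
    different qubit mappings, then pick the most frequent outcome.

    ensemble_counts: list of dicts mapping bitstring -> count,
    one dict per mapping in the diverse ensemble (hypothetical layout).
    """
    combined = Counter()
    for counts in ensemble_counts:
        shots = sum(counts.values())
        for bitstring, n in counts.items():
            # Normalize each mapping's histogram so every mapping
            # contributes equally, regardless of its shot count.
            combined[bitstring] += n / shots
    return combined.most_common(1)[0][0]

# Two mappings make *different* mistakes; the correct answer '00'
# dominates once their distributions are averaged.
print(edm_infer([{'00': 450, '11': 550}, {'00': 480, '10': 520}]))
```

Neither mapping alone would return '00' as the majority outcome, but because their correlated errors land on different wrong answers ('11' vs '10'), the combined distribution does.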

  • A Case for Multi-Programming Quantum Computers
    International Symposium on Microarchitecture, 2019
    Co-Authors: Poulami Das, Swamit S Tannu, Prashant J Nair, Moinuddin K Qureshi
    Abstract:

    Existing and near-term Quantum Computers face significant reliability challenges because of high error rates caused by noise. Such machines are operated in the Noisy Intermediate Scale Quantum (NISQ) model of computing. As NISQ machines exhibit high error rates, only programs that require a few qubits can be executed reliably. Therefore, NISQ machines tend to underutilize their resources. In this paper, we propose to improve the throughput and utilization of NISQ machines by using multi-programming, enabling the NISQ machine to concurrently execute multiple workloads. Multi-programming a NISQ machine is non-trivial because a multi-programmed NISQ machine can have an adverse impact on the reliability of the individual workloads. To enable multi-programming in a robust manner, we propose three solutions. First, we develop methods to partition the qubits into multiple reliable regions using error information from machine calibration so that each program can have a fair allocation of reliable qubits. Second, we observe that when two programs are of unequal lengths, measurement operations can impact the reliability of the co-running program. To reduce this interference, we propose a Delayed Instruction Scheduling (DIS) policy that delays the start of the shorter program so that all the measurement operations can be performed at the end. Third, we develop an Adaptive Multi-Programming (AMP) design that monitors the reliability at runtime and reverts to single-program mode if the reliability impact of multi-programming is greater than a predefined threshold. Our evaluations with IBM-Q16 show that our proposals can improve resource utilization and throughput by up to 2x, while limiting the impact on reliability.
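The first proposal -- partitioning qubits into reliability-balanced regions from calibration data -- can be approximated with a simple round-robin scheme over qubits sorted by error rate (a hypothetical sketch under assumed inputs; the paper's actual partitioning algorithm also accounts for connectivity and may differ substantially):

```python
def partition_qubits(error_rates, num_programs):
    """Split machine qubits into reliability-balanced regions using
    per-qubit calibration error rates (lower is better).

    Round-robin assignment over qubits sorted by error rate gives each
    program a comparable share of reliable qubits, so no co-running
    workload is stuck with only the weakest devices.
    """
    ranked = sorted(range(len(error_rates)), key=lambda q: error_rates[q])
    regions = [[] for _ in range(num_programs)]
    for i, qubit in enumerate(ranked):
        regions[i % num_programs].append(qubit)
    return regions

# Qubits 1 and 3 are the most reliable; each program gets one of them.
print(partition_qubits([0.05, 0.01, 0.04, 0.02], 2))
```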

  • Not All Qubits Are Created Equal: A Case for Variability-Aware Policies for NISQ-Era Quantum Computers
    Architectural Support for Programming Languages and Operating Systems, 2019
    Co-Authors: Swamit S Tannu, Moinuddin K Qureshi
    Abstract:

    Existing and near-term Quantum Computers are not yet large enough to support fault tolerance. Such systems with a few tens to a few hundreds of qubits are termed Noisy Intermediate Scale Quantum (NISQ) Computers, and these systems can provide benefits for a class of Quantum algorithms. In this paper, we study the problems of Qubit-Allocation (mapping of program qubits to machine qubits) and Qubit-Movement (routing qubits from one location to another for entanglement). We observe that there can be variation in the error rates of different qubits and links, which can impact the decisions for qubit movement and qubit allocation. We analyze publicly available characterization data for the IBM-Q20 to quantify the variation and show that there is indeed significant variability in the error rates of the qubits and the links connecting them. We show that this device variability has a significant impact on the overall system reliability. To exploit the variability in error rate, we propose Variation-Aware Qubit Movement (VQM) and Variation-Aware Qubit Allocation (VQA), policies that optimize the movement and allocation of qubits to avoid the weaker qubits and links and guide more operations towards the stronger qubits and links. Our evaluations, with a simulation-based model of IBM-Q20, show that Variation-Aware policies can improve the system reliability by up to 1.7x. We also evaluate our policies on the IBM-Q5 machine and demonstrate that our proposal significantly improves the reliability of real systems (up to 1.9x).
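The core idea behind Variation-Aware Qubit Movement -- prefer routes through stronger links -- reduces to a shortest-path problem once link success rates are turned into additive costs: maximizing a product of probabilities is equivalent to minimizing the sum of their negative logs. A minimal sketch, assuming a dict of per-link success probabilities (not the paper's implementation):

```python
import heapq, math

def most_reliable_path(link_success, src, dst):
    """Find the routing path between two physical qubits that maximizes
    the product of link success rates, via Dijkstra on -log(p) weights.

    link_success: dict {(u, v): success_probability} for each coupler
    (an assumed input format, standing in for calibration data).
    """
    graph = {}
    for (u, v), p in link_success.items():
        w = -math.log(p)  # product of probabilities -> sum of -logs
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

# Link 0-1 is weak (90%); detouring 0 -> 2 -> 1 (99% x 98% = 97%) wins.
links = {(0, 1): 0.90, (0, 2): 0.99, (2, 1): 0.98}
print(most_reliable_path(links, 0, 1))
```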

  • A Case for Variability-Aware Policies for NISQ-Era Quantum Computers
    arXiv: Quantum Physics, 2018
    Co-Authors: Swamit S Tannu, Moinuddin K Qureshi
    Abstract:

    Recently, IBM, Google, and Intel showcased Quantum Computers ranging from 49 to 72 qubits. While these systems represent a significant milestone in the advancement of Quantum computing, existing and near-term Quantum Computers are not yet large enough to fully support Quantum error correction. Such systems with a few tens to a few hundreds of qubits are termed Noisy Intermediate Scale Quantum (NISQ) Computers, and these systems can provide benefits for a class of Quantum algorithms. In this paper, we study the problems of Qubit-Allocation (mapping of program qubits to machine qubits) and Qubit-Movement (routing qubits from one location to another to perform entanglement). We observe that there exists variation in the error rates of different qubits and links, which can affect the decisions for qubit movement and qubit allocation. We analyze characterization data for the IBM-Q20 Quantum computer gathered over 52 days to understand and quantify the variation in the error rates, and find that there is indeed significant variability in the error rates of the qubits and the links connecting them. We define reliability metrics for NISQ Computers and show that the device variability has a substantial impact on the overall system reliability. To exploit the variability in error rate, we propose Variation-Aware Qubit Movement (VQM) and Variation-Aware Qubit Allocation (VQA), policies that optimize the movement and allocation of qubits to avoid the weaker qubits and links and guide more operations towards the stronger qubits and links. We show that our Variation-Aware policies improve the reliability of the NISQ system by up to 2.5x.

  • Taming the Instruction Bandwidth of Quantum Computers via Hardware-Managed Error Correction
    International Symposium on Microarchitecture, 2017
    Co-Authors: Swamit S Tannu, Zachary Myers, Prashant J Nair, Douglas Carmean, Moinuddin K Qureshi
    Abstract:

    A Quantum computer consists of Quantum bits (qubits) and a control processor that acts as an interface between the programmer and the qubits. As qubits are very sensitive to noise, they rely on continuous error correction to maintain the correct state. Current proposals rely on software-managed error correction and require large instruction bandwidth, which must scale in proportion to the number of qubits. While such a design may be reasonable for small-scale Quantum Computers, we show that instruction bandwidth tends to become a critical bottleneck for scaling Quantum Computers. In this paper, we show that 99.999% of the instructions in the instruction stream of a typical Quantum workload stem from error correction. Using this observation, we propose QuEST (Quantum Error-Correction Substrate), an architecture that delegates the task of Quantum error correction to the hardware. QuEST uses a dedicated programmable micro-coded engine to continuously replay the instruction stream associated with error correction. The instruction bandwidth requirement of QuEST scales in proportion to the number of active qubits (typically ≪ 0.1%) rather than the total number of qubits. We analyze the effectiveness of QuEST under area and thermal constraints and propose a scalable microarchitecture using typical Quantum Error Correction Code (QECC) execution patterns. Our evaluations show that QuEST reduces the instruction bandwidth demand of several key workloads by five orders of magnitude while ensuring deterministic instruction delivery. Apart from error correction, we also observe a large instruction bandwidth requirement for fault-tolerant Quantum instructions (magic state distillation). We extend QuEST to manage these instructions in hardware, providing an additional reduction in bandwidth. With QuEST, we reduce the total instruction bandwidth by eight orders of magnitude.
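The bandwidth claims can be checked with back-of-the-envelope arithmetic: if 99.999% of the stream is error correction handled locally in hardware, and only ~0.1% of qubits are active, the streamed instruction count drops by roughly eight orders of magnitude. A toy model (the formula, function, and parameter names are assumptions for illustration, not taken from the paper):

```python
def bandwidth_reduction(total_qubits, active_fraction, qec_share=0.99999):
    """Back-of-the-envelope model of the QuEST idea: when hardware replays
    the QEC instruction stream locally, the control processor only streams
    the non-QEC instructions for the active qubits.

    qec_share: fraction of the instruction stream that is error correction
    (the paper reports 99.999% for a typical workload).
    """
    # Software-managed baseline: instructions stream for every qubit.
    software_stream = total_qubits
    # Hardware-managed: only non-QEC instructions for active qubits stream.
    hardware_stream = total_qubits * active_fraction * (1 - qec_share)
    return software_stream / hardware_stream

# Active qubits at 0.1% and QEC at 99.999% of the stream give roughly a
# 10^8x reduction, consistent with the "eight orders of magnitude" claim.
print(f"{bandwidth_reduction(1_000_000, 0.001):.0e}")
```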

Martin J Savage - One of the best experts on this subject based on the ideXlab platform.

  • Quantum-Classical Computation of Schwinger Model Dynamics Using Quantum Computers
    Physical Review A, 2018
    Co-Authors: Natalie Klco, Alexander J Mccaskey, T D Morris, Raphael C Pooser, Eugene F Dumitrescu, Mikel Sanz, E Solano, Pavel Lougovski, Martin J Savage
    Abstract:

    We present a Quantum-classical algorithm to study the dynamics of the two-spatial-site Schwinger model on IBM's Quantum Computers. Using rotational symmetries, total charge, and parity, the number of qubits needed to perform the computation is reduced by a factor of $\sim 5$, removing exponentially large unphysical sectors from the Hilbert space. Our work opens an avenue for exploration of other lattice Quantum field theories, such as Quantum chromodynamics, where classical computation is used to find symmetry sectors in which the Quantum computer evaluates the dynamics of Quantum fluctuations.
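The qubit savings come from restricting the computation to a physical symmetry sector. A toy illustration of the idea -- counting basis states in a fixed total-charge sector versus the full Hilbert space, with a simple spin-like encoding -- follows; this is a generic sketch of sector reduction, not the paper's actual Schwinger-model mapping:

```python
from itertools import product

def sector_reduction(n_sites, charge=0):
    """Enumerate the 2^n computational basis states of n spin-like sites
    and keep only those in a fixed total-charge sector. Here a site's bit
    contributes +1 (occupied) or -1 (empty) to the net charge -- a toy
    encoding chosen for illustration.
    """
    full = 2 ** n_sites
    physical = sum(
        1 for bits in product((0, 1), repeat=n_sites)
        if sum(bits) - (n_sites - sum(bits)) == 2 * charge
    )
    return full, physical

# 16 basis states for 4 sites, but only C(4, 2) = 6 have net charge zero;
# the quantum register only needs to span the physical 6-state sector.
print(sector_reduction(4))
```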

E Solano - One of the best experts on this subject based on the ideXlab platform.

  • Quantum-Classical Computation of Schwinger Model Dynamics Using Quantum Computers
    Physical Review A, 2018
    Co-Authors: Natalie Klco, Alexander J Mccaskey, T D Morris, Raphael C Pooser, Eugene F Dumitrescu, Mikel Sanz, E Solano, Pavel Lougovski, Martin J Savage
    Abstract:

    We present a Quantum-classical algorithm to study the dynamics of the two-spatial-site Schwinger model on IBM's Quantum Computers. Using rotational symmetries, total charge, and parity, the number of qubits needed to perform the computation is reduced by a factor of $\sim 5$, removing exponentially large unphysical sectors from the Hilbert space. Our work opens an avenue for exploration of other lattice Quantum field theories, such as Quantum chromodynamics, where classical computation is used to find symmetry sectors in which the Quantum computer evaluates the dynamics of Quantum fluctuations.

Swamit S Tannu - One of the best experts on this subject based on the ideXlab platform.

  • Ensemble of Diverse Mappings: Improving Reliability of Quantum Computers by Orchestrating Dissimilar Mistakes
    International Symposium on Microarchitecture, 2019
    Co-Authors: Swamit S Tannu, Moinuddin K Qureshi
    Abstract:

    Near-term Quantum Computers do not have the ability to perform error correction. Such Noisy Intermediate Scale Quantum (NISQ) Computers can produce incorrect output as the computation is subjected to errors. Applications on a NISQ machine try to infer the correct output by running the same program thousands of times and logging the outputs. If the error rates are low and the errors are not correlated, then the correct answer can be inferred as the one appearing with the highest frequency. Unfortunately, Quantum Computers are subject to correlated errors, which can cause an incorrect answer to appear more frequently than the correct answer. We observe that recent work on qubit mapping (including recent work on variation-aware mapping) tries to obtain the best possible qubit allocation and uses it for all the trials. This approach significantly increases the vulnerability to correlated errors -- if the mapping becomes susceptible to a particular form of error, then all the trials will be subjected to the same error, which can cause the same wrong answer to appear as the output for a significant fraction of the trials. To mitigate the vulnerability to such correlated errors, this paper leverages the concept of diversity and proposes an Ensemble of Diverse Mappings (EDM). EDM uses diversity in qubit allocation to run copies of an input program with a diverse set of mappings, thus steering the trials towards making different mistakes. By combining the output probability distributions of the diverse ensemble, EDM amplifies the correct answer by suppressing the incorrect answers. Our experiments with the ibmq-melbourne (14-qubit) machine show that EDM improves the inference quality by 2.3x compared to the current state-of-the-art mapping algorithms.

  • A Case for Multi-Programming Quantum Computers
    International Symposium on Microarchitecture, 2019
    Co-Authors: Poulami Das, Swamit S Tannu, Prashant J Nair, Moinuddin K Qureshi
    Abstract:

    Existing and near-term Quantum Computers face significant reliability challenges because of high error rates caused by noise. Such machines are operated in the Noisy Intermediate Scale Quantum (NISQ) model of computing. As NISQ machines exhibit high error rates, only programs that require a few qubits can be executed reliably. Therefore, NISQ machines tend to underutilize their resources. In this paper, we propose to improve the throughput and utilization of NISQ machines by using multi-programming, enabling the NISQ machine to concurrently execute multiple workloads. Multi-programming a NISQ machine is non-trivial because a multi-programmed NISQ machine can have an adverse impact on the reliability of the individual workloads. To enable multi-programming in a robust manner, we propose three solutions. First, we develop methods to partition the qubits into multiple reliable regions using error information from machine calibration so that each program can have a fair allocation of reliable qubits. Second, we observe that when two programs are of unequal lengths, measurement operations can impact the reliability of the co-running program. To reduce this interference, we propose a Delayed Instruction Scheduling (DIS) policy that delays the start of the shorter program so that all the measurement operations can be performed at the end. Third, we develop an Adaptive Multi-Programming (AMP) design that monitors the reliability at runtime and reverts to single-program mode if the reliability impact of multi-programming is greater than a predefined threshold. Our evaluations with IBM-Q16 show that our proposals can improve resource utilization and throughput by up to 2x, while limiting the impact on reliability.

  • Not All Qubits Are Created Equal: A Case for Variability-Aware Policies for NISQ-Era Quantum Computers
    Architectural Support for Programming Languages and Operating Systems, 2019
    Co-Authors: Swamit S Tannu, Moinuddin K Qureshi
    Abstract:

    Existing and near-term Quantum Computers are not yet large enough to support fault tolerance. Such systems with a few tens to a few hundreds of qubits are termed Noisy Intermediate Scale Quantum (NISQ) Computers, and these systems can provide benefits for a class of Quantum algorithms. In this paper, we study the problems of Qubit-Allocation (mapping of program qubits to machine qubits) and Qubit-Movement (routing qubits from one location to another for entanglement). We observe that there can be variation in the error rates of different qubits and links, which can impact the decisions for qubit movement and qubit allocation. We analyze publicly available characterization data for the IBM-Q20 to quantify the variation and show that there is indeed significant variability in the error rates of the qubits and the links connecting them. We show that this device variability has a significant impact on the overall system reliability. To exploit the variability in error rate, we propose Variation-Aware Qubit Movement (VQM) and Variation-Aware Qubit Allocation (VQA), policies that optimize the movement and allocation of qubits to avoid the weaker qubits and links and guide more operations towards the stronger qubits and links. Our evaluations, with a simulation-based model of IBM-Q20, show that Variation-Aware policies can improve the system reliability by up to 1.7x. We also evaluate our policies on the IBM-Q5 machine and demonstrate that our proposal significantly improves the reliability of real systems (up to 1.9x).

  • A Case for Variability-Aware Policies for NISQ-Era Quantum Computers
    arXiv: Quantum Physics, 2018
    Co-Authors: Swamit S Tannu, Moinuddin K Qureshi
    Abstract:

    Recently, IBM, Google, and Intel showcased Quantum Computers ranging from 49 to 72 qubits. While these systems represent a significant milestone in the advancement of Quantum computing, existing and near-term Quantum Computers are not yet large enough to fully support Quantum error correction. Such systems with a few tens to a few hundreds of qubits are termed Noisy Intermediate Scale Quantum (NISQ) Computers, and these systems can provide benefits for a class of Quantum algorithms. In this paper, we study the problems of Qubit-Allocation (mapping of program qubits to machine qubits) and Qubit-Movement (routing qubits from one location to another to perform entanglement). We observe that there exists variation in the error rates of different qubits and links, which can affect the decisions for qubit movement and qubit allocation. We analyze characterization data for the IBM-Q20 Quantum computer gathered over 52 days to understand and quantify the variation in the error rates, and find that there is indeed significant variability in the error rates of the qubits and the links connecting them. We define reliability metrics for NISQ Computers and show that the device variability has a substantial impact on the overall system reliability. To exploit the variability in error rate, we propose Variation-Aware Qubit Movement (VQM) and Variation-Aware Qubit Allocation (VQA), policies that optimize the movement and allocation of qubits to avoid the weaker qubits and links and guide more operations towards the stronger qubits and links. We show that our Variation-Aware policies improve the reliability of the NISQ system by up to 2.5x.

  • Taming the Instruction Bandwidth of Quantum Computers via Hardware-Managed Error Correction
    International Symposium on Microarchitecture, 2017
    Co-Authors: Swamit S Tannu, Zachary Myers, Prashant J Nair, Douglas Carmean, Moinuddin K Qureshi
    Abstract:

    A Quantum computer consists of Quantum bits (qubits) and a control processor that acts as an interface between the programmer and the qubits. As qubits are very sensitive to noise, they rely on continuous error correction to maintain the correct state. Current proposals rely on software-managed error correction and require large instruction bandwidth, which must scale in proportion to the number of qubits. While such a design may be reasonable for small-scale Quantum Computers, we show that instruction bandwidth tends to become a critical bottleneck for scaling Quantum Computers. In this paper, we show that 99.999% of the instructions in the instruction stream of a typical Quantum workload stem from error correction. Using this observation, we propose QuEST (Quantum Error-Correction Substrate), an architecture that delegates the task of Quantum error correction to the hardware. QuEST uses a dedicated programmable micro-coded engine to continuously replay the instruction stream associated with error correction. The instruction bandwidth requirement of QuEST scales in proportion to the number of active qubits (typically ≪ 0.1%) rather than the total number of qubits. We analyze the effectiveness of QuEST under area and thermal constraints and propose a scalable microarchitecture using typical Quantum Error Correction Code (QECC) execution patterns. Our evaluations show that QuEST reduces the instruction bandwidth demand of several key workloads by five orders of magnitude while ensuring deterministic instruction delivery. Apart from error correction, we also observe a large instruction bandwidth requirement for fault-tolerant Quantum instructions (magic state distillation). We extend QuEST to manage these instructions in hardware, providing an additional reduction in bandwidth. With QuEST, we reduce the total instruction bandwidth by eight orders of magnitude.

Natalie Klco - One of the best experts on this subject based on the ideXlab platform.

  • Quantum-Classical Computation of Schwinger Model Dynamics Using Quantum Computers
    Physical Review A, 2018
    Co-Authors: Natalie Klco, Alexander J Mccaskey, T D Morris, Raphael C Pooser, Eugene F Dumitrescu, Mikel Sanz, E Solano, Pavel Lougovski, Martin J Savage
    Abstract:

    We present a Quantum-classical algorithm to study the dynamics of the two-spatial-site Schwinger model on IBM's Quantum Computers. Using rotational symmetries, total charge, and parity, the number of qubits needed to perform the computation is reduced by a factor of $\sim 5$, removing exponentially large unphysical sectors from the Hilbert space. Our work opens an avenue for exploration of other lattice Quantum field theories, such as Quantum chromodynamics, where classical computation is used to find symmetry sectors in which the Quantum computer evaluates the dynamics of Quantum fluctuations.
