1. Thanasilp S, Wang S, Cerezo M, Holmes Z. Exponential concentration in quantum kernel methods. Nat Commun 2024; 15:5200. PMID: 38890282; PMCID: PMC11189509; DOI: 10.1038/s41467-024-49287-w.
Abstract
Kernel methods in Quantum Machine Learning (QML) have recently gained significant attention as a potential candidate for achieving a quantum advantage in data analysis. Among other attractive properties, when training a kernel-based model one is guaranteed to find the optimal model parameters due to the convexity of the training landscape. However, this rests on the assumption that the quantum kernel can be efficiently obtained from quantum hardware. In this work we study the performance of quantum kernel models from the perspective of the resources needed to accurately estimate kernel values. We show that, under certain conditions, values of quantum kernels over different input data can be exponentially concentrated (in the number of qubits) towards some fixed value. Thus, on training with a polynomial number of measurements, one ends up with a trivial model whose predictions on unseen inputs are independent of the input data. We identify four sources that can lead to concentration: the expressivity of the data embedding, global measurements, entanglement, and noise. For each source, we analytically derive an associated concentration bound for quantum kernels. Lastly, we show that when dealing with classical data, training a parametrized data embedding with a kernel-alignment method is also susceptible to exponential concentration. Our results are verified through numerical simulations for several QML tasks. Altogether, we provide guidelines indicating which features should be avoided to ensure the efficient evaluation of quantum kernels and, in turn, the performance of quantum kernel methods.
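The concentration effect has a rough classical illustration (a numpy sketch under the assumption that a highly expressive embedding behaves like Haar-random states; this is not the paper's circuits or its formal bounds): the fidelity-kernel values k(x, x') = |⟨ψ(x)|ψ(x')⟩|² have mean and spread shrinking like 1/2^n in the number of qubits n, so a polynomial number of measurement shots cannot resolve differences between kernel values.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_state(dim, rng):
    # Haar-random pure state: normalized complex Gaussian vector.
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def kernel_stats(n_qubits, n_pairs=200):
    # Fidelity-kernel values k(x, x') = |<psi(x)|psi(x')>|^2 for a maximally
    # expressive embedding, modeled here by independent Haar-random states.
    dim = 2 ** n_qubits
    vals = [abs(np.vdot(haar_state(dim, rng), haar_state(dim, rng))) ** 2
            for _ in range(n_pairs)]
    return float(np.mean(vals)), float(np.std(vals))

for n in (2, 6, 10):
    mean, std = kernel_stats(n)
    print(f"{n:2d} qubits: mean ~ {mean:.2e} (1/2^n = {2 ** -n:.2e}), std ~ {std:.2e}")
```

Both the mean and the standard deviation track 1/2^n, which is the exponential concentration (here towards zero) that the paper analyzes for expressive embeddings.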
Affiliation(s)
- Supanut Thanasilp
- Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore.
- Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
- Chula Intelligent and Complex Systems, Department of Physics, Faculty of Science, Chulalongkorn University, Bangkok, Thailand.
- M Cerezo
- Information Sciences, Los Alamos National Laboratory, Los Alamos, NM, USA
- Quantum Science Center, Oak Ridge, TN, USA
- Zoë Holmes
- Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
- Information Sciences, Los Alamos National Laboratory, Los Alamos, NM, USA.
2. Doga H, Raubenolt B, Cumbo F, Joshi J, DiFilippo FP, Qin J, Blankenberg D, Shehab O. A Perspective on Protein Structure Prediction Using Quantum Computers. J Chem Theory Comput 2024; 20:3359-3378. PMID: 38703105; PMCID: PMC11099973; DOI: 10.1021/acs.jctc.4c00067.
Abstract
Despite recent advancements by deep learning methods such as AlphaFold2, in silico protein structure prediction remains a challenging problem in biomedical research. With the rapid evolution of quantum computing, it is natural to ask whether quantum computers can offer meaningful benefits for approaching this problem. Yet identifying specific problem instances amenable to quantum advantage, and estimating the quantum resources required, are equally challenging tasks. Here, we share our perspective on how to create a framework for systematically selecting protein structure prediction problems that are amenable to quantum advantage, and we estimate the quantum resources such problems would require on a utility-scale quantum computer. As a proof of concept, we validate our problem-selection framework by accurately predicting the structure of a catalytic loop of the Zika virus NS3 helicase on quantum hardware.
Affiliation(s)
- Hakan Doga
- IBM Quantum, Almaden Research Center, San Jose, California 95120, United States
- Bryan Raubenolt
- Center for Computational Life Sciences, Lerner Research Institute, The Cleveland Clinic, Cleveland, Ohio 44106, United States
- Fabio Cumbo
- Center for Computational Life Sciences, Lerner Research Institute, The Cleveland Clinic, Cleveland, Ohio 44106, United States
- Jayadev Joshi
- Center for Computational Life Sciences, Lerner Research Institute, The Cleveland Clinic, Cleveland, Ohio 44106, United States
- Frank P. DiFilippo
- Center for Computational Life Sciences, Lerner Research Institute, The Cleveland Clinic, Cleveland, Ohio 44106, United States
- Jun Qin
- Center for Computational Life Sciences, Lerner Research Institute, The Cleveland Clinic, Cleveland, Ohio 44106, United States
- Daniel Blankenberg
- Center for Computational Life Sciences, Lerner Research Institute, The Cleveland Clinic, Cleveland, Ohio 44106, United States
- Omar Shehab
- IBM Quantum, IBM Thomas J Watson Research Center, Yorktown Heights, New York 10598, United States
3. Qian Y, Wang X, Du Y, Wu X, Tao D. The Dilemma of Quantum Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:5603-5615. PMID: 36191113; DOI: 10.1109/tnnls.2022.3208313.
Abstract
The core of quantum machine learning is to devise quantum models with better trainability and lower generalization error bounds than their classical counterparts, to ensure better reliability and interpretability. Recent studies confirmed that quantum neural networks (QNNs) have the ability to achieve this goal on specific datasets. In this regard, it is of great importance to understand whether these advantages are still preserved on real-world tasks. Through systematic numerical experiments, we empirically observe that current QNNs fail to provide any benefit over classical learning models. Concretely, our results deliver two key messages. First, QNNs suffer from severely limited effective model capacity, which leads to poor generalization on real-world datasets. Second, the trainability of QNNs is insensitive to regularization techniques, in sharp contrast to the classical scenario. These empirical results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages.
4. Gil-Fuster E, Eisert J, Bravo-Prieto C. Understanding quantum machine learning also requires rethinking generalization. Nat Commun 2024; 15:2277. PMID: 38480684; PMCID: PMC10938005; DOI: 10.1038/s41467-024-45882-z.
Abstract
Quantum machine learning models have shown successful generalization performance even when trained with few data. In this work, through systematic randomization experiments, we show that traditional approaches to understanding generalization fail to explain the behavior of such quantum models. Our experiments reveal that state-of-the-art quantum neural networks accurately fit random states and random labeling of training data. This ability to memorize random data defies current notions of small generalization error, problematizing approaches that build on complexity measures such as the VC dimension, the Rademacher complexity, and all their uniform relatives. We complement our empirical results with a theoretical construction showing that quantum neural networks can fit arbitrary labels to quantum states, hinting at their memorization ability. Our results do not preclude the possibility of good generalization with few training data but rather rule out any possible guarantees based only on the properties of the model family. These findings expose a fundamental challenge in the conventional understanding of generalization in quantum machine learning and highlight the need for a paradigm shift in the study of quantum models for machine learning tasks.
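The randomization experiment described here has a simple classical analogue (a hypothetical numpy sketch, not the paper's quantum experiments): any model with enough effective capacity, here a 1-nearest-neighbour memorizer standing in for an expressive QNN, fits uniformly random labels perfectly while necessarily predicting at chance level on fresh data, so low training error alone guarantees nothing about generalization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the randomization test: random feature vectors
# (playing the role of random states) with uniformly random binary labels.
X_train = rng.normal(size=(100, 16))
y_train = rng.integers(0, 2, size=100)
X_test = rng.normal(size=(100, 16))
y_test = rng.integers(0, 2, size=100)

def nn1_predict(X_ref, y_ref, X):
    # 1-nearest-neighbour: a model that memorizes its training set perfectly.
    dists = ((X[:, None, :] - X_ref[None, :, :]) ** 2).sum(axis=-1)
    return y_ref[dists.argmin(axis=1)]

train_acc = float((nn1_predict(X_train, y_train, X_train) == y_train).mean())
test_acc = float((nn1_predict(X_train, y_train, X_test) == y_test).mean())
print(f"train accuracy on random labels: {train_acc:.2f}")  # perfect memorization
print(f"test accuracy on fresh random labels: {test_acc:.2f}")  # ~ chance level
```

This gap between perfect fitting of random labels and chance-level prediction is exactly the behavior that rules out uniform complexity-based generalization guarantees for the model family.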
Affiliation(s)
- Elies Gil-Fuster
- Dahlem Center for Complex Quantum Systems, Freie Universität Berlin, Berlin, Germany
- Fraunhofer Heinrich Hertz Institute, Berlin, Germany
- Jens Eisert
- Dahlem Center for Complex Quantum Systems, Freie Universität Berlin, Berlin, Germany.
- Fraunhofer Heinrich Hertz Institute, Berlin, Germany.
- Helmholtz-Zentrum Berlin für Materialien und Energie, Berlin, Germany.
- Carlos Bravo-Prieto
- Dahlem Center for Complex Quantum Systems, Freie Universität Berlin, Berlin, Germany.
5. Du Y, Yang Y, Tao D, Hsieh MH. Problem-Dependent Power of Quantum Neural Networks on Multiclass Classification. Physical Review Letters 2023; 131:140601. PMID: 37862647; DOI: 10.1103/physrevlett.131.140601.
Abstract
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood. Some QNNs with specific encoding methods can be efficiently simulated by classical surrogates, while others with quantum memory may perform better than classical classifiers. Here we systematically investigate the problem-dependent power of quantum neural classifiers (QCs) on multiclass classification tasks. Through the analysis of expected risk, a measure that jointly weighs the training loss and the generalization error of a classifier, we identify two key findings: first, the training loss, rather than the generalization ability, dominates the power; second, QCs undergo a U-shaped risk curve, in contrast to the double-descent risk curve of deep neural classifiers. We also reveal the intrinsic connection between optimal QCs, the Helstrom bound, and the equiangular tight frame. Using these findings, we propose a method that exploits the loss dynamics of QCs to estimate the optimal hyperparameter settings yielding the minimal risk. Numerical results demonstrate the effectiveness of our approach in explaining the superiority of QCs over multilayer perceptrons on parity datasets and their limitations relative to convolutional neural networks on image datasets. Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
Affiliation(s)
- Yuxuan Du
- JD Explore Academy, Beijing 10010, China
- Yibo Yang
- JD Explore Academy, Beijing 10010, China
- King Abdullah University of Science and Technology, Thuwal 4700, Kingdom of Saudi Arabia
- Dacheng Tao
- JD Explore Academy, Beijing 10010, China
- Sydney AI Centre, School of Computer Science, The University of Sydney, New South Wales 2008, Australia
- Min-Hsiu Hsieh
- Hon Hai (Foxconn) Research Institute, Taipei 114699, Taiwan
6. Tian J, Sun X, Du Y, Zhao S, Liu Q, Zhang K, Yi W, Huang W, Wang C, Wu X, Hsieh MH, Liu T, Yang W, Tao D. Recent Advances for Quantum Neural Networks in Generative Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence 2023; 45:12321-12340. PMID: 37126624; DOI: 10.1109/tpami.2023.3272029.
Abstract
Quantum computers are next-generation devices that hold promise to perform calculations beyond the reach of classical computers. A leading method towards achieving this goal is quantum machine learning, especially quantum generative learning. Due to the intrinsic probabilistic nature of quantum mechanics, it is reasonable to postulate that quantum generative learning models (QGLMs) may surpass their classical counterparts. As such, QGLMs are receiving growing attention from the quantum physics and computer science communities, and various QGLMs that can be efficiently implemented on near-term quantum machines, with potential computational advantages, have been proposed. In this paper, we review the current progress of QGLMs from the perspective of machine learning. In particular, we interpret these QGLMs, covering quantum circuit Born machines, quantum generative adversarial networks, quantum Boltzmann machines, and quantum variational autoencoders, as quantum extensions of classical generative learning models. In this context, we explore their intrinsic relations and their fundamental differences. We further summarize the potential applications of QGLMs in both conventional machine learning tasks and quantum physics. Lastly, we discuss the challenges and further research directions for QGLMs.
7. Faílde D, Viqueira JD, Mussa Juane M, Gómez A. Using Differential Evolution to avoid local minima in Variational Quantum Algorithms. Sci Rep 2023; 13:16230. PMID: 37758791; PMCID: PMC10533904; DOI: 10.1038/s41598-023-43404-3.
Abstract
Variational Quantum Algorithms (VQAs) are among the most promising NISQ-era algorithms for harnessing quantum computing in diverse fields. However, the underlying optimization processes within these algorithms usually suffer from local minima and barren plateau problems, preventing them from scaling efficiently. Our goal in this paper is to study alternative optimization methods that can avoid or reduce the effect of these problems. To this end, we propose applying the Differential Evolution (DE) algorithm to VQA optimization. Our hypothesis is that DE is resilient to vanishing gradients and local minima for two main reasons: (1) it does not depend on gradients, and (2) its mutation and recombination schemes allow DE to keep evolving even in these cases. To demonstrate the performance of our approach, we first use a problem with robust local minima to compare state-of-the-art local optimizers (SLSQP, COBYLA, L-BFGS-B and SPSA) against DE using the Variational Quantum Eigensolver algorithm. Our results show that DE always outperforms the local optimizers. In particular, in exact simulations of a 1D Ising chain with 14 qubits, DE reaches the ground state with a 100% success rate, while the local optimizers achieve only around 40%. We also show that combining DE with local optimizers increases the accuracy of the energy estimate once local minima are avoided. Finally, we demonstrate how our results extend to more complex problems by studying DE's performance on a 1D Hubbard model.
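The mechanism can be sketched with SciPy on a toy one-dimensional cost surface (a hypothetical rugged objective, not the paper's Ising or Hubbard VQE landscapes): gradient-free Differential Evolution searches the full parameter range, while a gradient-based local optimizer started in the wrong basin stays trapped.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Hypothetical rugged stand-in for a VQA cost landscape with many local minima.
def cost(theta):
    t = np.atleast_1d(theta)[0]
    return np.sin(5.0 * t) + 0.1 * t ** 2

# Global search with Differential Evolution (population-based, gradient-free).
res_de = differential_evolution(cost, bounds=[(-4.0, 4.0)], seed=0)

# Local gradient-based optimizer started far from the global minimum.
res_local = minimize(cost, x0=[2.0], method="L-BFGS-B", bounds=[(-4.0, 4.0)])

print(f"DE minimum:    {res_de.fun:.3f} at theta = {res_de.x[0]:.3f}")
print(f"local minimum: {res_local.fun:.3f} at theta = {res_local.x[0]:.3f}")
```

SciPy's `differential_evolution` already combines global search with a local polish step (its default `polish=True` runs L-BFGS-B on the best candidate), mirroring the DE-plus-local-optimizer combination studied in the paper.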
Affiliation(s)
- Daniel Faílde
- Centro de Supercomputación de Galicia (CESGA), 15705, Santiago de Compostela, Spain.
- José Daniel Viqueira
- Centro de Supercomputación de Galicia (CESGA), 15705, Santiago de Compostela, Spain
- Computer Graphics and Data Engineering (COGRADE), Departamento de Electrónica e Computación, Universidade de Santiago de Compostela, 15782, Santiago de Compostela, Spain
- Mariamo Mussa Juane
- Centro de Supercomputación de Galicia (CESGA), 15705, Santiago de Compostela, Spain
- Andrés Gómez
- Centro de Supercomputación de Galicia (CESGA), 15705, Santiago de Compostela, Spain
8. Li Y, Wang Z, Han R, Shi S, Li J, Shang R, Zheng H, Zhong G, Gu Y. Quantum recurrent neural networks for sequential learning. Neural Netw 2023; 166:148-161. PMID: 37487411; DOI: 10.1016/j.neunet.2023.07.003.
Abstract
Quantum neural networks (QNNs) are one of the promising directions in which near-term noisy intermediate-scale quantum (NISQ) devices could find advantageous applications over classical resources. Recurrent neural networks are the most fundamental networks for sequential learning, but there is still no canonical model of the quantum recurrent neural network (QRNN), which restricts research in quantum deep learning. In the present work, we propose a new kind of QRNN as a candidate for the canonical model: its quantum recurrent blocks (QRBs) are constructed in a hardware-efficient way, and the QRNN is built by stacking QRBs in a staggered manner that greatly reduces the algorithm's demands on the coherence time of quantum devices. That is, our QRNN is much more accessible on NISQ devices. Furthermore, the performance of the present QRNN model is verified concretely on three different kinds of classical sequential data, i.e., meteorological indicators, stock prices, and text categorization. The numerical experiments show that our QRNN achieves much better prediction (classification) accuracy than the classical RNN and state-of-the-art QNN models for sequential learning, and can predict the fine-grained changes in temporal sequence data. The practical circuit structure and superior performance indicate that the present QRNN is a promising learning model for finding quantum-advantageous applications in the near term.
Affiliation(s)
- Yanan Li
- Faculty of Information Science and Engineering, Ocean University of China, Qingdao, 266100, China
- Zhimin Wang
- Faculty of Information Science and Engineering, Ocean University of China, Qingdao, 266100, China
- Rongbing Han
- Faculty of Information Science and Engineering, Ocean University of China, Qingdao, 266100, China
- Shangshang Shi
- Faculty of Information Science and Engineering, Ocean University of China, Qingdao, 266100, China
- Jiaxin Li
- Faculty of Information Science and Engineering, Ocean University of China, Qingdao, 266100, China
- Ruimin Shang
- Faculty of Information Science and Engineering, Ocean University of China, Qingdao, 266100, China
- Haiyong Zheng
- Faculty of Information Science and Engineering, Ocean University of China, Qingdao, 266100, China
- Guoqiang Zhong
- Faculty of Information Science and Engineering, Ocean University of China, Qingdao, 266100, China
- Yongjian Gu
- Faculty of Information Science and Engineering, Ocean University of China, Qingdao, 266100, China
9. Caro MC, Huang HY, Ezzell N, Gibbs J, Sornborger AT, Cincio L, Coles PJ, Holmes Z. Out-of-distribution generalization for learning quantum dynamics. Nat Commun 2023; 14:3751. PMID: 37407571; PMCID: PMC10322910; DOI: 10.1038/s41467-023-39381-w.
Abstract
Generalization bounds are a critical tool to assess the training data requirements of Quantum Machine Learning (QML). Recent work has established guarantees for in-distribution generalization of quantum neural networks (QNNs), where training and testing data are drawn from the same data distribution. However, there are currently no results on out-of-distribution generalization in QML, where we require a trained model to perform well even on data drawn from a distribution different from the training distribution. Here, we prove out-of-distribution generalization for the task of learning an unknown unitary. In particular, we show that one can learn the action of a unitary on entangled states having trained only on product states. Since product states can be prepared using only single-qubit gates, this advances the prospects of learning quantum dynamics on near-term quantum hardware, and further opens up new methods for both the classical and quantum compilation of quantum circuits.
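The core claim has an idealized linear-algebra illustration (a numpy sketch assuming noiseless, exact state access, unlike the finite-measurement QNN training in the paper): because product states span the full Hilbert space, a linear map fitted only to product-state input-output pairs also reproduces the unknown unitary's action on entangled states.

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_unitary(dim, rng):
    # QR of a complex Gaussian matrix, phase-corrected, is Haar-distributed.
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def random_qubit(rng):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

dim = 4
U = haar_unitary(dim, rng)  # the unknown 2-qubit dynamics

# "Training set": the action of U on random product states only.
X = np.column_stack([np.kron(random_qubit(rng), random_qubit(rng))
                     for _ in range(8)])
Y = U @ X

# Fit a linear map to the product-state data (least squares via pseudoinverse).
U_est = Y @ np.linalg.pinv(X)

# Out-of-distribution test: a maximally entangled Bell state.
bell = np.zeros(dim, complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
fidelity = abs(np.vdot(U @ bell, U_est @ bell)) ** 2
print(f"fidelity on entangled input: {fidelity:.6f}")
```

Since eight generic product states span the 4-dimensional state space, the fitted map agrees with U everywhere, including on the entangled Bell state that never appeared in training.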
Affiliation(s)
- Matthias C Caro
- Department of Mathematics, Technical University of Munich, Garching, Germany.
- Munich Center for Quantum Science and Technology (MCQST), Munich, Germany.
- Dahlem Center for Complex Quantum Systems, Freie Universität Berlin, Berlin, Germany.
- Institute for Quantum Information and Matter, Caltech, Pasadena, CA, USA.
- Hsin-Yuan Huang
- Institute for Quantum Information and Matter, Caltech, Pasadena, CA, USA
- Department of Computing and Mathematical Sciences, Caltech, Pasadena, CA, USA
- Nicholas Ezzell
- Information Sciences, Los Alamos National Laboratory, Los Alamos, NM, USA
- Department of Physics & Astronomy, University of Southern California, Los Angeles, CA, USA
- Joe Gibbs
- Department of Physics, University of Surrey, Guildford, GU2 7XH, UK
- AWE, Aldermaston, Reading, RG7 4PR, UK
- Lukasz Cincio
- Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM, USA
- Patrick J Coles
- Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM, USA
- Normal Computing Corporation, New York, NY, USA
- Zoë Holmes
- Information Sciences, Los Alamos National Laboratory, Los Alamos, NM, USA
- Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015, Lausanne, Switzerland
10. Liu J, Najafi K, Sharma K, Tacchino F, Jiang L, Mezzacapo A. Analytic Theory for the Dynamics of Wide Quantum Neural Networks. Physical Review Letters 2023; 130:150601. PMID: 37115896; DOI: 10.1103/physrevlett.130.150601.
Abstract
Parametrized quantum circuits can be used as quantum neural networks and have the potential to outperform their classical counterparts when trained for addressing learning problems. To date, most of the results on their performance on practical problems are heuristic in nature. In particular, the convergence rate for the training of quantum neural networks is not fully understood. Here, we analyze the dynamics of gradient descent for the training error of a class of variational quantum machine learning models. We define wide quantum neural networks as parametrized quantum circuits in the limit of a large number of qubits and variational parameters. We then find a simple analytic formula that captures the average behavior of their loss function and discuss the consequences of our findings. For example, for random quantum circuits, we predict and characterize an exponential decay of the residual training error as a function of the parameters of the system. Finally, we validate our analytic results with numerical experiments.
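The predicted exponential decay of the residual training error has a familiar classical analogue (a hypothetical numpy sketch, not the paper's quantum-circuit derivation): gradient descent on an overparametrized linear ("wide") model, whose dynamics are exactly linear just as in the wide-network limit, drives the training residual down geometrically.

```python
import numpy as np

rng = np.random.default_rng(3)

# Overparametrized linear model: 50 parameters, 20 training points.
A = rng.normal(size=(20, 50))
y = rng.normal(size=20)

theta = np.zeros(50)
lr = 0.01  # below 2 / lambda_max(A^T A), so the iteration converges
errs = []
for _ in range(500):
    residual = A @ theta - y
    errs.append(float(residual @ residual))  # squared training error
    theta -= lr * (A.T @ residual)           # gradient of 0.5 * ||A theta - y||^2

print(f"initial error: {errs[0]:.3e}, final error: {errs[-1]:.3e}")
```

Each residual mode contracts by a constant factor per step, set by the eigenvalues of A Aᵀ, so the error decays exponentially in the number of iterations; the paper derives the analogous rate for wide QNNs analytically.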
Affiliation(s)
- Junyu Liu
- Pritzker School of Molecular Engineering, The University of Chicago, Chicago, Illinois 60637, USA
- Chicago Quantum Exchange, Chicago, Illinois 60637, USA
- Kadanoff Center for Theoretical Physics, The University of Chicago, Chicago, Illinois 60637, USA
- Khadijeh Najafi
- IBM Quantum, IBM T. J. Watson Research Center, Yorktown Heights, New York 10598, USA
- Kunal Sharma
- IBM Quantum, IBM T. J. Watson Research Center, Yorktown Heights, New York 10598, USA
- Joint Center for Quantum Information and Computer Science, University of Maryland, College Park, Maryland 20742, USA
- Liang Jiang
- Pritzker School of Molecular Engineering, The University of Chicago, Chicago, Illinois 60637, USA
- Chicago Quantum Exchange, Chicago, Illinois 60637, USA
- Antonio Mezzacapo
- IBM Quantum, IBM T. J. Watson Research Center, Yorktown Heights, New York 10598, USA
11. Caro MC, Huang HY, Cerezo M, Sharma K, Sornborger A, Cincio L, Coles PJ. Generalization in quantum machine learning from few training data. Nat Commun 2022; 13:4919. PMID: 35995777; PMCID: PMC9395350; DOI: 10.1038/s41467-022-32550-3.
Abstract
Modern quantum machine learning (QML) methods involve variationally optimizing a parameterized quantum circuit on a training data set, and subsequently making predictions on a testing data set (i.e., generalizing). In this work, we provide a comprehensive study of generalization performance in QML after training on a limited number N of training data points. We show that the generalization error of a quantum machine learning model with T trainable gates scales at worst as √(T/N). When only K ≪ T gates have undergone substantial change in the optimization process, we prove that the generalization error improves to √(K/N). Our results imply that the compiling of unitaries into a polynomial number of native gates, a crucial application for the quantum computing industry that typically uses exponential-size training data, can be sped up significantly. We also show that classification of quantum states across a phase transition with a quantum convolutional neural network requires only a very small training data set. Other potential applications include learning quantum error correcting codes or quantum dynamical simulation. Our work injects new hope into the field of QML, as good generalization is guaranteed from few training data.
Affiliation(s)
- Matthias C Caro
- Department of Mathematics, Technical University of Munich, Garching, Germany.
- Munich Center for Quantum Science and Technology (MCQST), Munich, Germany.
- Hsin-Yuan Huang
- Institute for Quantum Information and Matter, Caltech, Pasadena, CA, USA
- Department of Computing and Mathematical Sciences, Caltech, Pasadena, CA, USA
- M Cerezo
- Information Sciences, Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
- Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
- Kunal Sharma
- Joint Center for Quantum Information and Computer Science, University of Maryland, College Park, MD, 20742, USA
- Andrew Sornborger
- Information Sciences, Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
- Quantum Science Center, Oak Ridge, TN, 37931, USA
- Lukasz Cincio
- Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
- Patrick J Coles
- Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM, 87545, USA