QED-C benchmark suite
The QED-C benchmark suite [1] was introduced in 2021. The development of this benchmark suite is supported by the Quantum Economic Development Consortium (QED-C), USA. The initial development involved both quantum computing companies, such as IonQ, D-Wave Systems, Quantinuum, Rigetti, and Quantum Circuits Inc., and research institutions, such as Sandia National Laboratories, Princeton University, and the Colorado School of Mines.
Motivation
The motivation for this benchmark suite is to create a set of quantum circuits for evaluating the performance of quantum computers in executing quantum applications. The initial contribution [1] focuses on quantum circuits; analog quantum computers were added to the framework [2] in 2023. This suite is open source and aims to be an evolving codebase.
Architecture
The benchmark suite was initially built upon the Volumetric Benchmark methodology [3], which represents the performance of a quantum computer at running quantum circuits of varying width and depth on a two-dimensional map. The benchmark suite is composed of four main components:
- A set of algorithms \(\mathbb{A}\) used for the benchmark.
- For each algorithm and a specified size \(n\), a recipe that turns the algorithm into a quantum circuit expressed in a hardware-agnostic gate set; these circuits form a circuit set \(\mathbb{C}_n\) for each size \(n\).
- A procedure to randomly select circuits from the circuit set.
- A method for analyzing the data.
An individual benchmark experiment consists of fixing the following parameters:
- The size \(n\) of the algorithm being tested (equivalent to the circuit width, i.e., the number of qubits).
- The number of circuits to evaluate, drawn from the set \(\mathbb{C}_n\).
- The number of runs of the quantum circuit on the quantum computer.
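The experiment structure above can be sketched generically. In this minimal sketch, `draw_circuit`, `execute`, and `fidelity` are caller-supplied hooks with hypothetical names, not the suite's actual API:

```python
def run_benchmark(widths, num_circuits, num_shots, draw_circuit, execute, fidelity):
    """Sketch of one QED-C-style experiment: for each width n, draw
    circuits from the circuit set, run them for a fixed number of shots,
    and average a fidelity score over the drawn circuits."""
    results = {}
    for n in widths:
        scores = []
        for _ in range(num_circuits):
            circuit = draw_circuit(n)              # random member of C_n
            counts = execute(circuit, num_shots)   # run on device/simulator
            scores.append(fidelity(circuit, counts))
        results[n] = sum(scores) / len(scores)
    return results

# Trivial stub hooks, just to show the call shape
res = run_benchmark(
    widths=[2, 3], num_circuits=5, num_shots=100,
    draw_circuit=lambda n: n,
    execute=lambda c, shots: {"0" * c: shots},
    fidelity=lambda c, counts: 1.0,
)
print(res)  # → {2: 1.0, 3: 1.0}
```

Each entry of `res` is one point of the volumetric map: a width and an averaged score.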
A two-dimensional plot is then built from these experiments to visualize the region where the quantum computer performs well. The following figure is a reproduction of Fig. 1 of [1], with results on tutorial, subroutine, and functional quantum circuits (from left to right) on a noisy simulator:
The architecture of the QED-C benchmark suite was updated in 2025 to improve its modularity and ease the integration of other benchmark suites [4]. In this work, the authors illustrate architectural enhancements by integrating the pyGSTi benchmark suite, which targets low-level and system-level benchmarking, and by integrating CUDA-Q to enable distributed quantum simulations.
Fidelity
The authors use a normalized fidelity derived from the classical fidelity to measure the success of running the quantum circuit. The classical fidelity between two distributions \(p\) and \(\tilde{p}\) over bitstrings \(x\) of \(n\) bits is expressed as:

\[F(p, \tilde{p}) = \left( \sum_{x \in \{0, 1\}^n} \sqrt{p(x)\tilde{p}(x)}\right)^2\]

The authors of [1] then normalize this fidelity with respect to the uniform distribution \(p_\mathrm{uni}\). Let \(p\) be the ideal distribution and \(\tilde{p}\) the distribution of the quantum computer's output. The normalized fidelity is computed as:

\[F_\mathrm{norm}(p, \tilde{p}) = \max \left\{ \frac{F(p, \tilde{p}) - F(p, p_\mathrm{uni})}{1-F(p, p_\mathrm{uni})} , 0 \right\}\]

A detailed discussion of the limitations of this figure of merit is given in Section III.G of [1].
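Both quantities can be computed directly from measured probability distributions. A minimal sketch (function names are illustrative, not the suite's API; it assumes the ideal distribution is not itself uniform, so the denominator is nonzero):

```python
from math import sqrt

def classical_fidelity(p, q):
    """Classical fidelity between two distributions given as dicts
    mapping bitstrings to probabilities."""
    support = set(p) | set(q)
    return sum(sqrt(p.get(x, 0.0) * q.get(x, 0.0)) for x in support) ** 2

def normalized_fidelity(p_ideal, p_device, n_qubits):
    """Normalized fidelity of [1]: rescaled against the uniform
    distribution so that fully decohered (uniform) output scores 0."""
    dim = 2 ** n_qubits
    p_uni = {format(i, f"0{n_qubits}b"): 1.0 / dim for i in range(dim)}
    f = classical_fidelity(p_ideal, p_device)
    f_uni = classical_fidelity(p_ideal, p_uni)
    return max((f - f_uni) / (1.0 - f_uni), 0.0)

# A device reproducing the ideal distribution scores 1;
# a device producing uniform noise scores 0.
ideal = {"00": 1.0}
uniform = {format(i, "02b"): 0.25 for i in range(4)}
print(normalized_fidelity(ideal, ideal, 2))    # → 1.0
print(normalized_fidelity(ideal, uniform, 2))  # → 0.0
```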
Circuit depth
The authors assume that the circuit is expressed in terms of the gate set \(\{ rx, ry, rz, CNOT \}\) to remove gate-set dependence. This popular universal gate set allows the quantum circuit to be expressed in a hardware- and algorithm-independent way. The authors also assume that gates acting on disjoint sets of qubits can be applied in parallel and that the quantum chip is fully connected. The depth of a circuit expressed in this gate set is called the normalized depth and defines the reference depth in the volumetric plots. For each benchmark on a specific quantum computer, the circuit is then transpiled to the hardware-specific gate set.
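Under these assumptions (gates on disjoint qubits parallelize, full connectivity), depth reduces to a greedy layering over qubit usage. A minimal sketch, representing a circuit simply as an ordered list of qubit-index tuples (an illustrative encoding, not the suite's):

```python
def circuit_depth(gates):
    """Depth of a circuit given as an ordered list of tuples of qubit
    indices, assuming gates on disjoint qubits run in parallel and
    any pair of qubits can interact (full connectivity)."""
    frontier = {}  # qubit -> index of the last layer touching it
    depth = 0
    for qubits in gates:
        # A gate starts one layer after the latest gate on any of its qubits
        layer = max((frontier.get(q, 0) for q in qubits), default=0) + 1
        for q in qubits:
            frontier[q] = layer
        depth = max(depth, layer)
    return depth

# GHZ-style circuit already in the {rx, ry, rz, CNOT} basis:
# two single-qubit rotations on qubit 0 (a decomposed Hadamard),
# then CNOT(0,1) and CNOT(1,2)
gates = [(0,), (0,), (0, 1), (1, 2)]
print(circuit_depth(gates))  # → 4
```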
Time
The authors of [1] suggest measuring the quantum computer’s computation time in two different ways:
- Total execution time (wall clock time): the time required to complete a given number of shots, including all associated classical pre- and post-processing steps. These include circuit compilation, parameter optimization (e.g., in variational circuits), communication between classical and quantum hardware, and execution on the quantum processor.
- Quantum execution time: the time needed to run a specific number of shots on a quantum computer while excluding any classical pre- and post-processing, focusing solely on the execution of the quantum circuit itself.
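The two timing notions can be separated with a thin wrapper. In this sketch, `execute` is a hypothetical backend call assumed to return the provider-reported QPU-only time alongside the measurement counts:

```python
import time

def timed_run(execute, circuit, shots):
    """Measure total wall-clock time around a backend call; the
    quantum execution time is assumed to be reported by the provider."""
    t0 = time.perf_counter()
    counts, quantum_seconds = execute(circuit, shots)
    wall_seconds = time.perf_counter() - t0
    # wall_seconds includes compilation, queuing, and communication;
    # quantum_seconds covers only circuit execution on the QPU
    return counts, wall_seconds, quantum_seconds
```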
Benchmark instances
The following table lists the instances implemented in the QED-C benchmarking library. The QED-C benchmark suite implements algorithms in various quantum programming frameworks, including Qiskit, Cirq, Braket, and CUDA-Q. The instances are organized into four categories:
- Tutorial: Algorithms used to introduce concepts in quantum computing.
- Subroutine: Building blocks used within many quantum algorithms.
- Functional: Complete algorithms that are anticipated to be useful.
- Application: Complete algorithms dedicated to a specific application.
| Algorithm | Category | Qiskit | Cirq | Braket | CUDA-Q |
|---|---|---|---|---|---|
| Deutsch-Jozsa | Tutorial | x | x | x | |
| Bernstein-Vazirani | Tutorial | x | x | x | x |
| Hidden-shift | Tutorial | x | x | x | x |
| Quantum Fourier Transform | Subroutine | x | x | x | x |
| Phase Estimation | Subroutine | x | x | x | x |
| Amplitude Estimation | Subroutine | x | x | | |
| Hamiltonian Simulation | Functional | x | x | x | |
| HamLib Simulation | Functional | x | x | | |
| Grover’s Search | Functional | x | x | x | |
| Monte Carlo Sampling | Functional | x | x | | |
| Variational Quantum Eigensolver | Functional | x | | | |
| Shor’s Order Finding | Functional | x | | | |
| HHL Linear Solver | Application | x | | | |
| Maxcut QAOA | Application | x | | | |
| Hydrogen Lattice VQE | Application | x | | | |
| Image Recognition | Application | x | | | |
Devices being benchmarked
The QED-C benchmark suite has been used to benchmark many different quantum computers. We list the companies whose quantum computers have been benchmarked, together with the related publications:
- IonQ (Aria, Harmony), IBM (ibmq_casablanca, ibmq_guadalupe, ibmq_lagos), Quantinuum (H1-1), Rigetti (Aspen-9) in [1] for the initial evaluation of the benchmark suite.
- D-Wave (Advantage4.1), IonQ (Aria), IBM (ibmq_algiers, ibmq_guadalupe) in [2] for solving optimization problems.
- Quantinuum (H1-1) for the HHL algorithm and IBM (ibmq_guadalupe, ibmq_algiers, ibm_brisbane) for VQE algorithms in [5].
- IBM (ibm_pittsburgh, ibm_torino) in [4] for QPE, QFT and reinforcement learning tasks.
- IBM (ibm_strasbourg) in [6] for Hamiltonian simulation tasks.
- Neutral-atom simulator in [7] for tutorial, subroutine and functional instances.
Extensions
The QED-C benchmark suite has been extended to evaluate the performance of quantum computers on optimization problems in [2], with a main focus on the Max-cut problem. In this work, the authors introduce a new methodology and visualization framework for characterizing performance profiles. For each instance size (y-axis), the visualization reports the computation time together with the corresponding approximation ratio (defined as the ratio of the average sampled solution cost to the optimal cost). The following figure is a reproduction of Fig. 1a of [2], showing the performance profile of the QAOA algorithm on a noisy simulator for solving the Max-cut problem:
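The approximation ratio used in these performance profiles can be computed directly from sampled bitstrings. A sketch (the brute-force optimum is only practical for the small instance sizes typical of these benchmarks):

```python
from itertools import product

def cut_value(edges, assignment):
    """Number of edges crossing the cut for a 0/1 node assignment."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

def approximation_ratio(edges, n_nodes, sampled_assignments):
    """Average sampled cut value divided by the optimal cut value,
    with the optimum found by brute force over all 2^n assignments."""
    best = max(cut_value(edges, a) for a in product((0, 1), repeat=n_nodes))
    avg = sum(cut_value(edges, a) for a in sampled_assignments) / len(sampled_assignments)
    return avg / best

# Triangle graph: the optimal cut separates one node from the other two
# and cuts 2 of the 3 edges.
edges = [(0, 1), (1, 2), (0, 2)]
print(approximation_ratio(edges, 3, [(0, 1, 1), (0, 0, 0)]))  # → 0.5
```

An ideal sampler would concentrate on optimal assignments and reach a ratio of 1; uniform random sampling drives the ratio toward the average-cut baseline.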
The QED-C benchmark suite was further extended in [5] to include implementations of the VQE and HHL algorithms. The authors introduce a new figure of merit, termed accuracy ratio, to quantify the performance of the VQE; it is based on the energy difference between the Full Configuration Interaction result and the quantum processor result. In addition, the benchmark suite is expanded to encompass machine learning applications for image recognition tasks, and methods are added to assess the impact of compilation and error mitigation strategies on the sampled results.
A quantum reinforcement learning application was added to the benchmark suite in [4] to illustrate the updated QED-C benchmark architecture.
Hamiltonian simulation benchmarking problems were incorporated into the suite with the work of A. Chatterjee et al. [8], where five problems from the HamLib data collection are integrated in the QED-C benchmark suite. The authors conduct benchmarking studies on Trotterized quantum evolutions and use the normalized fidelity to assess the quantum computer’s performance across a range of benchmarking scenarios. These analyses examine the effects of both Trotterization error and hardware-induced noise on the performance of the quantum computer. In addition, Hamiltonian simulation is assessed using mirror benchmarking techniques.
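The effect of Trotterization error alone (without hardware noise) can be reproduced with a small numerical experiment. The two-term Hamiltonian below is an illustrative toy, not one of the HamLib instances:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def pauli_exp(P, theta):
    """exp(-i*theta*P) for a Pauli-string matrix P (uses P @ P = I)."""
    return np.cos(theta) * np.eye(P.shape[0]) - 1j * np.sin(theta) * P

def herm_exp(H, t):
    """exp(-i*H*t) for Hermitian H via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

# Toy two-qubit Hamiltonian H = X⊗X + Z⊗I
A, B = np.kron(X, X), np.kron(Z, I2)
t, r = 1.0, 50  # total evolution time, number of Trotter steps

# First-order (Lie-Trotter) product formula: (e^{-iAt/r} e^{-iBt/r})^r
step = pauli_exp(A, t / r) @ pauli_exp(B, t / r)
U_trotter = np.linalg.matrix_power(step, r)
U_exact = herm_exp(A + B, t)

# State fidelity between exact and Trotterized evolution from |00>
psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0
fidelity = abs(np.vdot(U_exact @ psi0, U_trotter @ psi0)) ** 2
print(fidelity)  # close to 1; the error shrinks as the step count r grows
```

On hardware, this Trotter error combines with gate noise, which is exactly the decomposition the benchmarking study in [8] examines.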
S. Niu et al. [6] use the QED-C framework to establish new methods for efficiently computing the observables associated with quantum simulation problems.
Implementation
The QED-C source code is open source and actively maintained. The data related to the first publication [1] are hosted on Zenodo.
References
- [1] T. Lubinski et al., “Application-oriented performance benchmarks for quantum computing,” IEEE Transactions on Quantum Engineering, vol. 4, pp. 1–32, 2023.
- [2] T. Lubinski et al., “Optimization applications as quantum performance benchmarks,” ACM Transactions on Quantum Computing, vol. 5, no. 3, pp. 1–44, 2024.
- [3] R. Blume-Kohout and K. C. Young, “A volumetric framework for quantum computer benchmarks,” Quantum, vol. 4, p. 362, Nov. 2020, doi: 10.22331/q-2020-11-15-362.
- [4] N. Patel et al., “Platform-Agnostic Modular Architecture for Quantum Benchmarking,” arXiv preprint arXiv:2510.08469, 2025.
- [5] T. Lubinski et al., “Quantum algorithm exploration using application-oriented performance benchmarks,” arXiv preprint arXiv:2402.08985, 2024.
- [6] S. Niu et al., “A Practical Framework for Assessing the Performance of Observable Estimation in Quantum Simulation,” in 2025 IEEE International Conference on Quantum Computing and Engineering (QCE), IEEE, 2025, pp. 375–386.
- [7] N. Wagner, C. Poole, T. M. Graham, and M. Saffman, “Benchmarking a neutral-atom quantum computer,” International Journal of Quantum Information, vol. 22, no. 04, p. 2450001, 2024.
- [8] A. Chatterjee et al., “A comprehensive cross-model framework for benchmarking the performance of quantum hamiltonian simulations,” IEEE Transactions on Quantum Engineering, 2025.