qrisp.qaoa.QAOABenchmark.rank

QAOABenchmark.rank(metric='approx_ratio', print_res=False, average_repetitions=False)

Ranks the runs of the benchmark according to a given metric.

The default metric is the approximation ratio. As with .evaluate, the metric can also be user-specified (see the Examples below).

Parameters:
metric : str or callable, optional

The metric according to which the runs are ranked. The default is "approx_ratio".

Returns:
list[dict]

List of dictionaries, where the first element has the highest rank.

Examples

We create a MaxCut instance and benchmark several parameter configurations:

from qrisp import QuantumVariable
from qrisp.qaoa import maxcut_problem
from networkx import Graph

# Build the graph for the MaxCut instance
G = Graph()
G.add_edges_from([[0,3],[0,4],[1,3],[1,4],[2,3],[2,4]])

max_cut_instance = maxcut_problem(G)

# Benchmark over a grid of circuit depths, shot counts and optimizer iterations
benchmark_data = max_cut_instance.benchmark(qarg = QuantumVariable(5),
                                            depth_range = [3, 4, 5],
                                            shot_range = [5000, 10000],
                                            iter_range = [25, 50],
                                            optimal_solution = "11100",
                                            repetitions = 2
                                            )

To rank the results, we call the corresponding method:

print(benchmark_data.rank()[0])
#Yields: {'layer_depth': 5, 'circuit_depth': 44, 'qubit_amount': 5, 'shots': 10000, 'iterations': 50, 'counts': {'11100': 0.4909, '00011': 0.4909, '00010': 0.002, '11110': 0.002, '00001': 0.002, '11101': 0.002, '10000': 0.0015, '01000': 0.0015, '00100': 0.0015, '11011': 0.0015, '10111': 0.0015, '01111': 0.0015, '00000': 0.0001, '10010': 0.0001, '01010': 0.0001, '11010': 0.0001, '00110': 0.0001, '10110': 0.0001, '01110': 0.0001, '10001': 0.0001, '01001': 0.0001, '11001': 0.0001, '00101': 0.0001, '10101': 0.0001, '01101': 0.0001, '11111': 0.0001, '11000': 0.0, '10100': 0.0, '01100': 0.0, '10011': 0.0, '01011': 0.0, '00111': 0.0}, 'runtime': 1.4269020557403564, 'optimal_solution': '11100'}
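
Since each entry of the returned list is a plain dictionary, the best-performing settings can be read off directly from the top-ranked run using the keys shown in the output above:

best_run = benchmark_data.rank()[0]

print("Best layer depth:", best_run["layer_depth"])
print("Best shot count:", best_run["shots"])
print("Best iterations:", best_run["iterations"])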
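
As mentioned above, the metric can also be a user-specified callable. The following is a minimal sketch, assuming for illustration that the callable receives a run dictionary of the form shown above and returns a float to sort by; the function name and the exact call contract are illustrative assumptions, not part of the documented signature.

# Hypothetical custom metric: rank runs by the measured probability of the
# optimal solution (assumes the callable receives a run dictionary as above).
def optimal_solution_probability(run_data):
    return run_data["counts"].get(run_data["optimal_solution"], 0)

print(benchmark_data.rank(metric = optimal_solution_probability)[0])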