BatchedBackend

- class BatchedBackend(batch_run_func)[source]
This class addresses the problem that many physical backends incur a high overhead for the execution of individual circuits. This overhead typically comes from network latency, authentication procedures, compilation steps etc. It is usually remedied by supporting the execution of batches of circuits, which, however, does not fit well into the Qrisp programming model, since Qrisp shields the user from handling individual circuits and automatically decodes the measurement results into human-readable labels.
To bridge these worlds while still allowing automatic decoding, the BatchedBackend lets Qrisp users evaluate measurements from a multi-threading perspective. The idea is that the circuit batch is collected through several threads, each of which executes Qrisp code until an individual backend call is required. This backend call is then held back until the batch is complete. The batch can then be sent through the .dispatch method, which resumes each thread to execute its post-processing logic.
Note

Calling the .run method of a BatchedBackend from the main thread will automatically dispatch all queries (including the query set up by the main thread); see the sketch below the parameter description.

- Parameters:
  - batch_run_func : function
    A function that receives a list of tuples of the form list[tuple[QuantumCircuit, int]], representing the quantum circuits and the corresponding shot counts to execute on the backend. It should return a list of dictionaries, where each dictionary contains the measurement results of the corresponding backend call.
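To illustrate the note above, here is a minimal sketch of a single query issued directly from the main thread. It assumes that bb is the BatchedBackend constructed in the Examples below and that get_measurement issues the backend call via bb.run; since .run is then invoked from the main thread, the one-element batch is dispatched automatically and no explicit .dispatch call is required.

from qrisp import QuantumFloat

# Sketch only: bb is assumed to be the BatchedBackend set up in the Examples.
qv = QuantumFloat(3)
qv[:] = 4
print(qv.get_measurement(backend = bb))  # no bb.dispatch(...) needed here
# Expected result (under the assumptions above): {4: 1.0}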
Examples
We set up a BatchedBackend, which sequentially executes the QuantumCircuits on the Qrisp simulator.
from qrisp import *
from qrisp.interface import BatchedBackend

def run_func_batch(batch):
    # Parameters
    # ----------
    # batch : list[tuple[QuantumCircuit, int]]
    #     The circuit and shot batch indicating the backend queries.
    #
    # Returns
    # -------
    # results : list[dict[string, int]]
    #     The list of results.

    results = []
    for i in range(len(batch)):
        qc = batch[i][0]
        shots = batch[i][1]
        results.append(qc.run(shots = shots))

    return results

# Set up batched backend
bb = BatchedBackend(run_func_batch)
Create some backend calls
a = QuantumFloat(4)
b = QuantumFloat(3)
a[:] = 1
b[:] = 2
c = a + b

d = QuantumFloat(4)
e = QuantumFloat(3)
d[:] = 2
e[:] = 3
f = d + e
Create threads
import threading

results = []
def eval_measurement(qv):
    results.append(qv.get_measurement(backend = bb))

thread_0 = threading.Thread(target = eval_measurement, args = (c,))
thread_1 = threading.Thread(target = eval_measurement, args = (f,))
Start the threads and subsequently dispatch the batch.
# Start the threads
thread_0.start()
thread_1.start()

# Call the dispatch routine
# The min_calls keyword will make it wait
# until the batch has a size of 2
bb.dispatch(min_calls = 2)

# Wait for the threads to join
thread_0.join()
thread_1.join()

# Inspect the results
print(results)
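Assuming both threads complete, results should contain [{3: 1.0}, {5: 1.0}] (the ordering may depend on which thread finishes first), corresponding to the additions 1 + 2 and 2 + 3 prepared above.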
This is automated by the batched_measurement function:

>>> batched_measurement([c,f], backend=bb)
[{3: 1.0}, {5: 1.0}]
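The batch_run_func hook is also where the execution overhead discussed above can be amortized: any expensive once-per-batch step (authentication, network setup, compilation) is performed a single time for the whole batch instead of once per circuit. Below is a minimal sketch of this idea, where time.sleep stands in for such an overhead; the names slow_connection_run_func and bb_slow are chosen purely for illustration.

import time
from qrisp.interface import BatchedBackend

def slow_connection_run_func(batch):
    # Placeholder for a costly once-per-batch step, e.g. authentication
    # or establishing a network connection to a remote backend.
    time.sleep(1)
    results = []
    for qc, shots in batch:
        # As in run_func_batch above, execute each circuit on the Qrisp simulator.
        results.append(qc.run(shots = shots))
    return results

bb_slow = BatchedBackend(slow_connection_run_func)

With this setup, the one-second delay is paid once per dispatched batch rather than once per backend query.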