Counterdiabatic Driving Protocols#
The goal of counterdiabatic driving (CD) is to evolve an initial quantum state \(\ket{\Psi_i} =\ket{\Psi(t=0)}\) over time without inducing transitions to higher excited states. This way, the state evolves into a target state after some time \(T\): \(\ket{\Psi_f} =\ket{\Psi(t=T)}\). This process, called adiabatic driving, is useful when we know how to prepare the ground state of one Hamiltonian but are interested in a state that the system evolves into. One example of a CD application is solving QUBO problems, which we will come to later in this tutorial.
First we will introduce the core concept of counterdiabatic driving and explain the driving protocols LCD (local counterdiabatic driving) and COLD (counterdiabatic optimized local driving). Then we will show how to run these algorithms in Qrisp by solving a QUBO problem. After this tutorial you will have learned what it means to drive a system adiabatically and how we can speed up adiabatic driving with LCD and COLD. You will be able to use Qrisp to solve QUBO problems and to experiment with different parameters of the CD protocols.
The concept of counterdiabatic driving#
To drive a system adiabatically, we must meet the adiabatic condition. This condition ensures that the system’s state follows the instantaneous eigenstates of the time-dependent Hamiltonian \(H(\lambda(t))\), where \(\lambda(t) \in [0, 1]\) is a scheduling parameter that interpolates between initial (\(\lambda=0\)) and final (\(\lambda=1\)) Hamiltonians. The energy gaps \(\Delta_{mn} = E_m(\lambda) - E_n(\lambda)\) between eigenstates \(\ket{m(\lambda)}\) and \(\ket{n(\lambda)}\) must be large compared to the rate of change of the Hamiltonian. Specifically, the adiabatic theorem requires:

\[\hbar \left| \frac{\partial \lambda}{\partial t} \right| \frac{\left| \bra{m(\lambda)} \partial_\lambda H(\lambda) \ket{n(\lambda)} \right|}{\Delta_{mn}^2} \ll 1 \qquad \text{for all } m \neq n.\]
Here, \(\partial_\lambda H(\lambda)\) is the derivative of the Hamiltonian with respect to \(\lambda\), and \(\frac{\partial \lambda}{\partial t}\) controls the speed of evolution. The matrix element \(\bra{m} \partial_\lambda H \ket{n}\) quantifies coupling between states.
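To make the condition concrete, here is a small standalone sketch (plain NumPy, independent of Qrisp) that evaluates the adiabatic criterion for a two-level Hamiltonian \(H(\lambda) = \lambda\,\sigma^z + \Delta\,\sigma^x\). The values of delta, lam and dlam_dt are illustrative choices, not taken from the tutorial:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

delta, lam, dlam_dt = 0.5, 0.2, 0.01   # illustrative parameters

H = lam * sz + delta * sx              # H(lambda) = lambda*sz + delta*sx
dH = sz                                # dH/dlambda

E, V = np.linalg.eigh(H)               # instantaneous eigenpairs (ascending)
gap = E[1] - E[0]                      # energy gap Delta_mn
coupling = abs(V[:, 1].conj() @ dH @ V[:, 0])   # |<m| dH/dlambda |n>|

# adiabatic criterion (hbar = 1): this ratio must be << 1
ratio = dlam_dt * coupling / gap**2
print(ratio)
```

For this two-level model the gap is \(2\sqrt{\lambda^2 + \Delta^2}\) and the coupling is \(\Delta/\sqrt{\lambda^2 + \Delta^2}\), so the ratio shrinks quickly as the gap grows — slow driving or large gaps keep the evolution adiabatic.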
The adiabatic condition roughly tells us that in order to avoid transitions to excited states, we must drive the system slowly compared to the size of the energy gaps. Unfortunately, we cannot stretch the evolution time arbitrarily, as long evolutions suffer from decoherence and disruption through noise. Therefore, we want to mitigate the diabatic transitions so that we can drive the system adiabatically while keeping the evolution time short.
To understand how diabatic transitions can be mitigated, we need to introduce the adiabatic gauge potential (AGP) \(A_{\lambda}\). This is a Hermitian operator whose diagonal elements describe the adiabatic (phase-accumulating) dynamics and whose off-diagonal elements describe the non-adiabatic (transition-inducing) dynamics of the driven system. It is related to the state evolution in the instantaneous eigenbasis (denoted by \(\sim\)):

\[i\hbar\, \partial_t \ket{\tilde{\Psi}(t)} = \left( \tilde{H} - \frac{\partial \lambda}{\partial t}\, \tilde{A}_\lambda \right) \ket{\tilde{\Psi}(t)}\]
Here, \(\tilde{H}\) is the diagonal Hamiltonian in the eigenbasis, and the term \(- \frac{\partial \lambda}{\partial t} \tilde{A}_\lambda\) encodes the non-adiabatic effects.
The idea behind counterdiabatic driving is to add a term to the Hamiltonian that counteracts the diabatic transitions (hence the name). It turns out that this can be done by adding the AGP with the opposite sign of the non-adiabatic term above:

\[H_{CD}(t) = H(\lambda) + \frac{\partial \lambda}{\partial t}\, A_\lambda\]
Since the exact calculation of the AGP quickly becomes intractable, we need to approximate \(A_{\lambda}\). This is the goal of local counterdiabatic driving (LCD) and counterdiabatic optimized local driving (COLD).
Local counterdiabatic driving (LCD)#
The idea of LCD is to approximate the AGP using only local operators. This means that our operator basis \(\mathcal{O}_{LCD}\) consists solely of single-qubit operators:

\[A_\lambda^{LCD} = \sum_j \alpha_j\, \mathcal{O}_j, \qquad \mathcal{O}_j \in \mathcal{O}_{LCD}\]
For a real system Hamiltonian, the operator basis should be purely imaginary [1]. A common choice is therefore the Pauli \(y\) matrices:

\[A_\lambda^{LCD} = \sum_{j=1}^{N} \alpha_j\, \sigma_j^y\]
The coefficients \(\alpha_j\) can be calculated by minimizing an action \(S\) associated with the AGP:

\[S = \mathrm{Tr}\!\left[ G_\lambda^2 \right], \qquad G_\lambda = \partial_\lambda H + i \left[ A_\lambda^{LCD}, H \right],\]

where \(G_\lambda\) is the negation of the generalised force operator.
Roughly speaking, the action \(S\) measures how close our operator is to the actual AGP of the system. For the derivation and physical meaning of the operators, we refer to [2].
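To build intuition for this minimization, the following standalone sketch (plain NumPy/SciPy, independent of Qrisp) minimizes \(S = \mathrm{Tr}[G_\lambda^2]\) over a one-coefficient \(\sigma^y\) ansatz for the two-level Hamiltonian \(H = \lambda\,\sigma^z + \Delta\,\sigma^x\), where the exact AGP coefficient is known in closed form. The parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

delta, lam = 0.7, 0.3              # illustrative parameters
H = lam * sz + delta * sx          # H(lambda)
dH = sz                            # dH/dlambda

def action(a):
    # G_lambda = dH + i [A, H] with the ansatz A = a * sigma_y
    A = a * sy
    G = dH + 1j * (A @ H - H @ A)
    return np.trace(G @ G).real    # action S = Tr[G^2]

a_opt = minimize_scalar(action).x
a_exact = -delta / (2 * (lam**2 + delta**2))   # exact AGP coefficient
print(a_opt, a_exact)
```

Because the action is quadratic in the single coefficient, the minimization recovers the exact AGP here; for many-body systems the restricted local basis only approximates it, which is exactly the LCD trade-off.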
Counterdiabatic optimized local driving (COLD)#
We arrive at the core idea of COLD (counterdiabatic optimized local driving) by combining the LCD approach with quantum optimal control theory. While LCD uses a restricted operator basis to find a CD protocol that minimizes the action \(S\), COLD introduces an additional term that aims to find the optimal path \(\ket{\Psi_i} \rightarrow \ket{\Psi_f}\) along which diabatic transitions are avoided. The optimization objective can be chosen depending on the nature of the physical system. One possible control Hamiltonian consists of \(z\)-pulses weighted by a sum of harmonics:

\[H_{\beta} = f(\lambda) \sum_i \sigma_i^z, \qquad f(\lambda) = \sum_{k=1}^{N_{opt}} \beta_k \sin\!\left( \frac{\pi k\, g(\lambda)}{T} \right)\]
Here, the function \(g(\lambda)\) is the inverse function of \(\lambda(t)\).
To find optimal parameters \(\beta_k\), one has to choose an optimization objective.
The objective strongly depends on the nature of the driven system as well as the information and tools available. If the desired final state is known, a sensible objective would be the state fidelity. Another approach is to minimize the coefficients of the AGP at higher orders. Here the idea is that a minimal AGP norm will minimize the non-adiabatic effects along the path in time [2]. In the case of QUBO problems, one can also choose the minimization of the classical cost function.
In the following section we will show two objectives for solving QUBO problems with COLD. But before that, let us quickly discuss how QUBO problems and counterdiabatic driving are connected.
QUBO problems as CD instances#
A QUBO problem is a combinatorial optimization problem that can be formulated as

\[\min_{x}\; y = x^T Q x,\]
where \(Q\) is a symmetric matrix and \(x\) is a binary vector with entries \(x_i \in \{0, 1\}\).
This optimization problem can be encoded into a spin-glass Hamiltonian such that the expectation value \(\bra{x}H_p\ket{x}\) is equal to \(y\). Thus, finding the ground state of \(H_p\) leads to the solution of the QUBO problem. The problem Hamiltonian reads:

\[H_p = \sum_{i<j} J_{ij}\, \sigma_i^z \sigma_j^z + \sum_i h_i\, \sigma_i^z\]
Here, \(\sigma_i^z\) denotes the Pauli \(z\) matrix acting on qubit \(i\), and \(J\) and \(h\) hold the entries of our QUBO matrix: \(J_{ij} = \frac{1}{2} Q_{ij}\) and \(h_i = -\frac{1}{2} Q_{ii} - \sum_j \frac{1}{2} Q_{ij}\).
Now we will use counterdiabatic driving to reach the ground state of \(H_p\). We prepare our qubits in a well-defined ground state of an initial Hamiltonian \(H_i\) and then drive them adiabatically so that they end up in the ground state of our problem Hamiltonian \(H_p\). The driving Hamiltonian looks like this:

\[H_0(\lambda) = (1-\lambda)\, H_i + \lambda\, H_p\]
So at \(\lambda(t)=1\), the state of our system is the minimal solution of the QUBO problem! To enforce adiabatic driving, we now add the AGP approximation from our LCD approach as well as the COLD control Hamiltonian. This way, we arrive at the complete COLD Hamiltonian:

\[H_{COLD}(\lambda) = (1-\lambda)\, H_i + \lambda\, H_p + f(\lambda) \sum_i \sigma_i^z + \frac{\partial \lambda}{\partial t} \sum_i \alpha_i(\beta)\, \sigma_i^y\]

Note that the LCD coefficients \(\alpha_i\) now depend on the optimization parameters \(\beta\).
COLD routine in Qrisp#
Now that we have gathered all the components needed for the COLD algorithm, we can implement it in Qrisp.
We will demonstrate the algorithm for a small 6x6 QUBO example. Note that, depending on the simulator or hardware that you use, you can move to significantly larger QUBOs. Let’s start by defining our QUBO matrix.
[1]:
import numpy as np

Q = np.array([[-1.1, 0.6, 0.4, 0.0, 0.0, 0.0],
              [0.6, -0.9, 0.5, 0.0, 0.0, 0.0],
              [0.4, 0.5, -1.0, -0.6, 0.0, 0.0],
              [0.0, 0.0, -0.6, -0.5, 0.6, 0.0],
              [0.0, 0.0, 0.0, 0.6, -0.3, 0.5],
              [0.0, 0.0, 0.0, 0.0, 0.5, -0.4]])
N = Q.shape[0]
Next, we set up our initial and problem Hamiltonians, as well as the control Hamiltonian:
[2]:
from qrisp.operators.qubit import X, Y, Z

h = -0.5 * np.diag(Q) - 0.5 * np.sum(Q, axis=1)
J = 0.5 * Q

H_init = 1 * sum([X(i) for i in range(N)])
H_prob = (sum([sum([J[i][j]*Z(i)*Z(j) for j in range(i)]) for i in range(N)])
          + sum([h[i]*Z(i) for i in range(N)]))
H_control = sum([Z(i) for i in range(N)])
The AGP approximation consists of the \(y\)-pulses with the coefficients \(\alpha_i\):
[3]:
A_lam = [Y(i) for i in range(N)] # non-uniform
The coefficients are computed by minimizing the action \(S\). They depend on the schedule value \(\lambda\) as well as on the rest of the Hamiltonian, including the control pulse \(f\) and its derivative \(\partial_{\lambda} f\). For our spin-glass Hamiltonian and a non-uniform AGP, this results in the following calculation:
[4]:
def alpha(lam, f, f_deriv):
    nom = [h[i] + f + (1-lam) * f_deriv
           for i in range(N)]
    denom = [2 * ((lam*h[i] + f)**2 + (1-lam)**2 +
                  lam**2 * sum([J[i][j] for j in range(N) if j != i]))
             for i in range(N)]
    alph = [nom[i]/denom[i] for i in range(N)]
    return alph
Now all that is left to define is the scheduling function \(\lambda(t, T)\) and its inverse \(g(\lambda, T)\). The simplest choice is \(\lambda(t, T) = t/T\). We define them as SymPy expressions, since they will be differentiated during the COLD routine.
[5]:
import sympy as sp

def lam():
    t, T = sp.symbols("t T", real=True)
    lam_expr = t/T
    return lam_expr

def g():
    lam, T = sp.symbols("lam T")
    g_expr = lam * T
    return g_expr
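As a quick sanity check that \(g\) really inverts the schedule, the two expressions can be composed symbolically. This snippet is independent of Qrisp and just reproduces the expressions above:

```python
import sympy as sp

t, T = sp.symbols("t T", real=True)
lam_s = sp.symbols("lam")

lam_expr = t / T      # schedule lambda(t) = t/T
g_expr = lam_s * T    # its inverse g(lambda) = lambda*T

# substituting lambda(t) into g(lambda) must give back t
roundtrip = sp.simplify(g_expr.subs(lam_s, lam_expr))
print(roundtrip)
```

The same check is useful whenever you swap in a more elaborate schedule, such as a polynomial ramp: if the composition does not simplify to t, the pair of callables is inconsistent.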
Finally, we can create a DCQO (digitized counterdiabatic quantum optimization) instance by handing over all the callables and operators.
[6]:
from qrisp.algorithms.cold import DCQOProblem
cold_problem = DCQOProblem(Q, H_init, H_prob, A_lam, alpha, lam, g, H_control)
Before running the algorithm, we need to choose the objective by which the parameters \(f\) are optimized. Right now, the two possibilities implemented in Qrisp are exp_value and agp_coeff_magnitude.

The exp_value objective uses the expectation value of \(H_p\) for the minimization and thus runs a quantum simulation for each iteration of the optimization. This makes the objective very accurate, but the simulation time scales exponentially with the problem size.

The other choice, agp_coeff_magnitude, minimizes the magnitude of the AGP coefficients of 1st and 2nd order (note that the second order introduces new coefficients \(\beta_i\), \(\gamma_i\)). The routine is handled in the AGP_params module.
We suggest that you try out both objectives for your problem, as the optimal choice strongly depends on the QUBO matrix. For our example, we stick with the default objective, which is agp_coeff_magnitude.
We create a QuantumVariable with the size of the QUBO and choose to simulate the Hamiltonian in N_steps = 4 timesteps over a total evolution time T = 8. You can decide on the number of optimization parameters \(\beta\) with the argument N_opt.
[7]:
from qrisp import QuantumVariable
qarg = QuantumVariable(N)
N_steps = 4
T = 8
method = "COLD"
N_opt = 1
bounds = (-3, 3)
result = cold_problem.run(qarg, N_steps, T, method, N_opt, bounds=bounds)
print(result)
{'101101': [0.2744, np.float64(-3.4)], '011101': [0.1568, np.float64(-3.0)], '111101': [0.1022, np.float64(-2.1)], '101100': [0.0842, np.float64(-3.0)], '011100': [0.0454, np.float64(-2.6)], '101110': [0.0454, np.float64(-2.0999999999999996)], '111100': [0.0348, np.float64(-1.7000000000000002)], '011110': [0.0244, np.float64(-1.7000000000000002)], '100101': [0.0212, np.float64(-2.0)], '111001': [0.0198, np.float64(-0.4)], '001101': [0.019, np.float64(-3.1)], '010101': [0.0176, np.float64(-1.7999999999999998)], '111110': [0.0174, np.float64(-0.7999999999999999)], '110101': [0.0158, np.float64(-1.7000000000000002)], '101011': [0.011, np.float64(-1.0)], '111000': [0.0086, np.float64(0.0)], '110100': [0.0074, np.float64(-1.3000000000000003)], '101111': [0.0066, np.float64(-1.5)], '000101': [0.0056, np.float64(-0.9)], '101000': [0.0054, np.float64(-1.3)], '100100': [0.0054, np.float64(-1.6)], '100001': [0.0054, np.float64(-1.5)], '011011': [0.0052, np.float64(-0.6000000000000001)], '100110': [0.005, np.float64(-0.7000000000000001)], '101001': [0.0048, np.float64(-1.7000000000000002)], '001100': [0.0044, np.float64(-2.7)], '100000': [0.0036, np.float64(-1.1)], '011000': [0.0036, np.float64(-0.9)], '110010': [0.0028, np.float64(-1.1)], '010001': [0.0028, np.float64(-1.3)], '010100': [0.0026, np.float64(-1.4)], '110001': [0.0026, np.float64(-1.2000000000000002)], '010110': [0.0022, np.float64(-0.5)], '011111': [0.0022, np.float64(-1.1)], '011001': [0.002, np.float64(-1.3)], '100011': [0.002, np.float64(-0.8000000000000002)], '001110': [0.0018, np.float64(-1.8)], '110011': [0.0018, np.float64(-0.5000000000000002)], '111011': [0.0018, np.float64(0.3)], '010000': [0.0016, np.float64(-0.9)], '000100': [0.0016, np.float64(-0.5)], '111111': [0.0016, np.float64(-0.19999999999999996)], '000010': [0.0012, np.float64(-0.3)], '100111': [0.001, np.float64(-0.10000000000000009)], '110111': [0.001, np.float64(0.19999999999999984)], '001000': [0.0008, np.float64(-1.0)], '101010': 
[0.0008, np.float64(-1.6)], '100010': [0.0006, np.float64(-1.4000000000000001)], '010010': [0.0006, np.float64(-1.2)], '111010': [0.0006, np.float64(-0.29999999999999993)], '000011': [0.0006, np.float64(0.3)], '010011': [0.0006, np.float64(-0.6000000000000001)], '001011': [0.0006, np.float64(-0.7000000000000001)], '001111': [0.0006, np.float64(-1.2000000000000002)], '011010': [0.0004, np.float64(-1.2000000000000002)], '001001': [0.0004, np.float64(-1.4)], '000110': [0.0002, np.float64(0.39999999999999997)], '000001': [0.0002, np.float64(-0.4)]}
The measurement dictionary returns the probability and cost of each result in the form {"state": [probability, cost]}. We can see that the most likely result is '101101' with a QUBO cost of -3.4. This is the minimal result, so the success probability is around 27%.
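Since a 6-qubit QUBO has only \(2^6 = 64\) candidate bitstrings, we can verify this result classically by brute force. The helper qubo_cost below is our own, not part of Qrisp, and the QUBO matrix is repeated so the snippet runs on its own:

```python
import numpy as np
from itertools import product

Q = np.array([[-1.1, 0.6, 0.4, 0.0, 0.0, 0.0],
              [0.6, -0.9, 0.5, 0.0, 0.0, 0.0],
              [0.4, 0.5, -1.0, -0.6, 0.0, 0.0],
              [0.0, 0.0, -0.6, -0.5, 0.6, 0.0],
              [0.0, 0.0, 0.0, 0.6, -0.3, 0.5],
              [0.0, 0.0, 0.0, 0.0, 0.5, -0.4]])

def qubo_cost(bits):
    # classical QUBO cost y = x^T Q x of a bit tuple
    x = np.array(bits)
    return float(x @ Q @ x)

# enumerate all 64 bitstrings and pick the cheapest
best = min(product([0, 1], repeat=6), key=qubo_cost)
print("".join(map(str, best)), qubo_cost(best))  # → 101101 -3.4
```

This confirms that '101101' with cost -3.4 is indeed the global minimum the quantum routine should converge to.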
Quickly run a DCQO problem#
If you want to run a DCQO problem without defining the operators and callables yourself, you can use the solve_QUBO method of the COLD module. Here, you only have to hand over the problem and run arguments as dictionaries. This method defines all the required operators, creates the problem instance and runs the algorithm.
[8]:
import numpy as np
from qrisp.algorithms.cold import solve_QUBO

Q = np.array([[-1.1, 0.6, 0.4, 0.0, 0.0, 0.0],
              [0.6, -0.9, 0.5, 0.0, 0.0, 0.0],
              [0.4, 0.5, -1.0, -0.6, 0.0, 0.0],
              [0.0, 0.0, -0.6, -0.5, 0.6, 0.0],
              [0.0, 0.0, 0.0, 0.6, -0.3, 0.5],
              [0.0, 0.0, 0.0, 0.0, 0.5, -0.4]])

problem_args = {"method": "COLD", "uniform": False}
run_args = {"N_steps": 4, "T": 8, "N_opt": 1, "CRAB": False,
            "objective": "agp_coeff_magnitude", "bounds": (-3, 3)}
result = solve_QUBO(Q, problem_args, run_args)
print(result)
{'101101': [0.2792, np.float64(-3.4)], '011101': [0.1488, np.float64(-3.0)], '111101': [0.1008, np.float64(-2.1)], '101100': [0.0868, np.float64(-3.0)], '011100': [0.0484, np.float64(-2.6)], '101110': [0.0456, np.float64(-2.0999999999999996)], '111100': [0.0354, np.float64(-1.7000000000000002)], '100101': [0.0212, np.float64(-2.0)], '011110': [0.0208, np.float64(-1.7000000000000002)], '110101': [0.0204, np.float64(-1.7000000000000002)], '111001': [0.0194, np.float64(-0.4)], '001101': [0.0188, np.float64(-3.1)], '111110': [0.0158, np.float64(-0.7999999999999999)], '010101': [0.0128, np.float64(-1.7999999999999998)], '101011': [0.011, np.float64(-1.0)], '111000': [0.009, np.float64(0.0)], '000101': [0.0082, np.float64(-0.9)], '100001': [0.0066, np.float64(-1.5)], '101000': [0.006, np.float64(-1.3)], '100100': [0.0054, np.float64(-1.6)], '110100': [0.0054, np.float64(-1.3000000000000003)], '011011': [0.0054, np.float64(-0.6000000000000001)], '101111': [0.0054, np.float64(-1.5)], '100110': [0.0048, np.float64(-0.7000000000000001)], '011000': [0.0042, np.float64(-0.9)], '010100': [0.0042, np.float64(-1.4)], '101001': [0.0036, np.float64(-1.7000000000000002)], '110010': [0.0034, np.float64(-1.1)], '011111': [0.0032, np.float64(-1.1)], '001100': [0.003, np.float64(-2.7)], '010001': [0.0028, np.float64(-1.3)], '011001': [0.0028, np.float64(-1.3)], '100000': [0.0026, np.float64(-1.1)], '100011': [0.0026, np.float64(-0.8000000000000002)], '110011': [0.0026, np.float64(-0.5000000000000002)], '111011': [0.0026, np.float64(0.3)], '001110': [0.0024, np.float64(-1.8)], '000100': [0.0022, np.float64(-0.5)], '101010': [0.0022, np.float64(-1.6)], '110001': [0.0022, np.float64(-1.2000000000000002)], '010000': [0.0018, np.float64(-0.9)], '100010': [0.0014, np.float64(-1.4000000000000001)], '010110': [0.0012, np.float64(-0.5)], '111111': [0.0012, np.float64(-0.19999999999999996)], '110111': [0.0008, np.float64(0.19999999999999984)], '110000': [0.0006, np.float64(-0.8000000000000002)], 
'001000': [0.0006, np.float64(-1.0)], '000010': [0.0006, np.float64(-0.3)], '000011': [0.0006, np.float64(0.3)], '011010': [0.0004, np.float64(-1.2000000000000002)], '010011': [0.0004, np.float64(-0.6000000000000001)], '001011': [0.0004, np.float64(-0.7000000000000001)], '000111': [0.0004, np.float64(1.0)], '100111': [0.0004, np.float64(-0.10000000000000009)], '010010': [0.0002, np.float64(-1.2)], '111010': [0.0002, np.float64(-0.29999999999999993)], '000110': [0.0002, np.float64(0.39999999999999997)], '001001': [0.0002, np.float64(-1.4)], '010111': [0.0002, np.float64(0.09999999999999998)], '001111': [0.0002, np.float64(-1.2000000000000002)]}
Run an LCD problem#
We can also use the LCD algorithm without the optimized control pulse. This is done by omitting the control Hamiltonian and choosing method = "LCD". Here is how you run the above example with LCD:
[9]:
from qrisp.algorithms.cold import solve_QUBO

Q = np.array([[-1.1, 0.6, 0.4, 0.0, 0.0, 0.0],
              [0.6, -0.9, 0.5, 0.0, 0.0, 0.0],
              [0.4, 0.5, -1.0, -0.6, 0.0, 0.0],
              [0.0, 0.0, -0.6, -0.5, 0.6, 0.0],
              [0.0, 0.0, 0.0, 0.6, -0.3, 0.5],
              [0.0, 0.0, 0.0, 0.0, 0.5, -0.4]])

problem_args = {"method": "LCD", "uniform": False}
run_args = {"N_steps": 4, "T": 8}
result = solve_QUBO(Q, problem_args, run_args)
print(result)
{'101110': [0.073, np.float64(-2.0999999999999996)], '101101': [0.0666, np.float64(-3.4)], '011110': [0.0568, np.float64(-1.7000000000000002)], '101010': [0.0516, np.float64(-1.6)], '101100': [0.0512, np.float64(-3.0)], '011101': [0.046, np.float64(-3.0)], '110010': [0.04, np.float64(-1.1)], '100010': [0.0318, np.float64(-1.4000000000000001)], '011100': [0.03, np.float64(-2.6)], '011010': [0.0286, np.float64(-1.2000000000000002)], '101001': [0.0282, np.float64(-1.7000000000000002)], '001110': [0.028, np.float64(-1.8)], '010010': [0.0262, np.float64(-1.2)], '001101': [0.0262, np.float64(-3.1)], '101011': [0.0226, np.float64(-1.0)], '101111': [0.0218, np.float64(-1.5)], '001010': [0.0202, np.float64(-1.3)], '100101': [0.0202, np.float64(-2.0)], '110101': [0.0194, np.float64(-1.7000000000000002)], '110011': [0.0194, np.float64(-0.5000000000000002)], '010101': [0.0186, np.float64(-1.7999999999999998)], '011001': [0.0162, np.float64(-1.3)], '011111': [0.0162, np.float64(-1.1)], '110100': [0.016, np.float64(-1.3000000000000003)], '001100': [0.016, np.float64(-2.7)], '101000': [0.0152, np.float64(-1.3)], '011011': [0.015, np.float64(-0.6000000000000001)], '100100': [0.014, np.float64(-1.6)], '100011': [0.0128, np.float64(-0.8000000000000002)], '001001': [0.0126, np.float64(-1.4)], '100001': [0.012, np.float64(-1.5)], '010011': [0.0106, np.float64(-0.6000000000000001)], '010100': [0.0104, np.float64(-1.4)], '001111': [0.0084, np.float64(-1.2000000000000002)], '011000': [0.0082, np.float64(-0.9)], '000110': [0.0074, np.float64(0.39999999999999997)], '010001': [0.0074, np.float64(-1.3)], '001011': [0.0074, np.float64(-0.7000000000000001)], '110001': [0.006, np.float64(-1.2000000000000002)], '100000': [0.005, np.float64(-1.1)], '111101': [0.005, np.float64(-2.1)], '110000': [0.0044, np.float64(-0.8000000000000002)], '001000': [0.0044, np.float64(-1.0)], '000101': [0.0042, np.float64(-0.9)], '111110': [0.004, np.float64(-0.7999999999999999)], '100110': [0.0034, 
np.float64(-0.7000000000000001)], '111100': [0.0032, np.float64(-1.7000000000000002)], '010110': [0.0032, np.float64(-0.5)], '000010': [0.003, np.float64(-0.3)], '111010': [0.003, np.float64(-0.29999999999999993)], '010000': [0.0026, np.float64(-0.9)], '000001': [0.0026, np.float64(-0.4)], '000100': [0.002, np.float64(-0.5)], '000011': [0.002, np.float64(0.3)], '110110': [0.0016, np.float64(-0.4000000000000002)], '111001': [0.0016, np.float64(-0.4)], '000111': [0.0016, np.float64(1.0)], '000000': [0.0014, np.float64(0.0)], '010111': [0.0012, np.float64(0.09999999999999998)], '100111': [0.0008, np.float64(-0.10000000000000009)], '111111': [0.0008, np.float64(-0.19999999999999996)], '111000': [0.0006, np.float64(0.0)], '111011': [0.0002, np.float64(0.3)]}
With LCD, our minimal result only appears as the second most likely outcome, with a success probability of about 6.7%.
Advanced usage: COLD-CRAB#
An extension of the COLD algorithm is the COLD-CRAB method. CRAB stands for chopped random-basis quantum optimization. While the COLD routine uses a fixed basis of harmonics, for instance \(\sin(\pi k\, g(\lambda) / T)\), we now add random parameters \(r_k \in (-0.5, 0.5)\) to each basis function: \(\sin(\pi k (1 + r_k)\, g(\lambda) / T)\).
This leads to a small variation of each basis vector and can reduce the risk of getting stuck in a local minimum of the optimization [2]. To use the CRAB extension of COLD, simply hand over "CRAB": True in the run_args dictionary.
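To illustrate the idea, here is a small sketch of such a randomized basis. It assumes harmonics of the form \(\sin(\pi k \lambda)\) over the schedule \(\lambda \in [0, 1]\); the exact basis used internally by Qrisp may differ:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

N_opt = 3                                # number of basis functions
r = rng.uniform(-0.5, 0.5, size=N_opt)   # random shifts r_k

lam_grid = np.linspace(0.0, 1.0, 101)    # schedule values lambda in [0, 1]

# plain harmonics sin(pi*k*lambda) vs. their randomized CRAB counterparts
plain = [np.sin(np.pi * k * lam_grid) for k in range(1, N_opt + 1)]
crab = [np.sin(np.pi * k * (1 + r[k - 1]) * lam_grid)
        for k in range(1, N_opt + 1)]
```

Each CRAB run draws a fresh set of shifts, so repeated optimizations explore slightly different pulse shapes and are less likely to all converge to the same local minimum.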
Recap#
In this tutorial we have learned:
What it means to drive a quantum system adiabatically from one state to another.
Two methods to speed up adiabatic driving: LCD and COLD.
With LCD we have seen how to estimate the adiabatic gauge potential in order to mitigate non-adiabatic transitions. The COLD algorithm builds on this and introduces an additional quantum control pulse that can further optimize the adiabatic evolution. We have also seen how to use these methods to solve QUBO problems: if we encode the QUBO matrix into a spin-glass Hamiltonian and drive the system adiabatically, the initial ground state evolves into the ground state of our QUBO cost Hamiltonian.
The Qrisp module COLD lets you either define the necessary operators yourself, or use the solve_QUBO method to run the algorithm with pre-defined instances. We encourage you to play around with the different options, such as the evolution time, the number of timesteps, the optimization objectives and the AGP types!
[1] D. Sels and A. Polkovnikov, Minimizing irreversible losses in quantum systems by local counterdiabatic driving, Proc. Natl. Acad. Sci. U.S.A. 114 (20), E3909–E3916, https://doi.org/10.1073/pnas.1619826114 (2017).
[2] I. Čepaitė et al., Counterdiabatic optimized local driving, PRX Quantum 4 (1), 010312, https://doi.org/10.1103/PRXQuantum.4.010312 (2023).