docs/src/refs.bib (+1 / −1)
@@ -172,7 +172,7 @@ @article{GoerzQ2022
@article{GoerzPRA2015,
-  Author = {Goerz, Michael H. and Gualdi, Giulia and Reich, Daniel M. and Koch, Christiane P. and Motzoi, Felix and Whaley, K. Birgitta and Vala, Ji\vr\'i and Müller, Matthias M. and Montangero, Simone and Calarco, Tommaso},
+  Author = {Goerz, Michael H. and Gualdi, Giulia and Reich, Daniel M. and Koch, Christiane P. and Motzoi, Felix and Whaley, K. Birgitta and Vala, Jiří and Müller, Matthias M. and Montangero, Simone and Calarco, Tommaso},
Title = {Optimizing for an arbitrary perfect entangler. II. Application},
-# We consider the Hamiltonian $\op{H}_{0} = - \frac{\omega}{2} \op{\sigma}_{z}$, representing a simple qubit with energy level splitting $\omega$ in the basis $\{\ket{0},\ket{1}\}$. The control field $\epsilon(t)$ is assumed to couple via the Hamiltonian $\op{H}_{1}(t) = \epsilon(t) \op{\sigma}_{x}$ to the qubit, i.e., the control field effectively drives transitions between both qubit states.
-#
-# We we will use
+# We consider the Hamiltonian ``\op{H}_{0} = - \frac{\omega}{2} \op{\sigma}_{z}``, representing a simple qubit with energy level splitting ``\omega`` in the basis ``\{\ket{0},\ket{1}\}``. The control field ``\epsilon(t)`` is assumed to couple via the Hamiltonian ``\op{H}_{1}(t) = \epsilon(t) \op{\sigma}_{x}`` to the qubit, i.e., the control field effectively drives transitions between both qubit states.
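The hunk does not show the code that builds this Hamiltonian. A minimal sketch of what it could look like, using the `hamiltonian` constructor from QuantumControl; the value of `ω` and the placeholder field `ϵ` are illustrative assumptions, not the tutorial's actual definitions:

```julia
using QuantumControl: hamiltonian

ω = 1.0     # energy level splitting (illustrative value)
ϵ(t) = 0.0  # placeholder; the shaped guess field is described below

σ̂_z = ComplexF64[1 0; 0 -1]
σ̂_x = ComplexF64[0 1; 1 0]

# drift term Ĥ₀ = -(ω/2)σ̂_z plus the control term ϵ(t)·σ̂_x
H = hamiltonian(-0.5ω * σ̂_z, (σ̂_x, ϵ))
```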
-# The control field here switches on from zero at $t=0$ to it's maximum amplitude
-# 0.2 within the time period 0.3 (the switch-on shape is half a [Blackman pulse](https://en.wikipedia.org/wiki/Window_function#Blackman_window)).
-# It switches off again in the time period 0.3 before the
-# final time $T=5$). We use a time grid with 500 time steps between 0 and $T$:
+# The control field here switches on from zero at ``t=0`` to its maximum amplitude 0.2 within the time period 0.3 (the switch-on shape is half a [Blackman pulse](https://en.wikipedia.org/wiki/Window_function#Blackman_window)). It switches off again in the time period 0.3 before the final time ``T=5``. We use a time grid with 500 time steps between 0 and ``T``:
tlist = collect(range(0, 5, length=500));
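The shaped guess field itself falls outside the hunks shown. One way to realize the switch-on/off described above is the `flattop` shape from `QuantumControl.Shapes`; the keyword names follow the QuantumControl documentation and should be checked against the installed version:

```julia
using QuantumControl.Shapes: flattop

# amplitude 0.2, Blackman switch-on/off over 0.3 time units, final time T=5
ϵ₀(t) = 0.2 * flattop(t; T=5, t_rise=0.3, func=:blackman)
```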
@@ -115,15 +111,15 @@ using LinearAlgebra #src
@test dot(ket(0), ket(1)) ≈ 0 #src
#-
-# The physical objective of our optimization is to transform the initial state $\ket{0}$ into the target state $\ket{1}$ under the time evolution induced by the Hamiltonian $\op{H}(t)$.
+# The physical objective of our optimization is to transform the initial state ``\ket{0}`` into the target state ``\ket{1}`` under the time evolution induced by the Hamiltonian ``\op{H}(t)``.
-# The full control problem includes this trajectory, information about the time grid for the dynamics, and the functional to be used (the square modulus of the overlap $\tau$ with the target state in this case).
+# The full control problem includes this trajectory, information about the time grid for the dynamics, and the functional to be used (the square modulus of the overlap ``\tau`` with the target state in this case).
using QuantumControl.Functionals: J_T_sm
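For intuition: for a single trajectory, the square-modulus functional reduces to one minus the square modulus of that overlap. A hand-rolled sketch of this convention (not the package's actual signature, which operates on a vector of propagated states and the corresponding trajectories):

```julia
using LinearAlgebra: dot

# J_T = 1 - |τ|² with τ = ⟨Ψ_tgt|Ψ(T)⟩; zero exactly when the target is reached
J_T_sm_single(Ψ, Ψ_tgt) = 1 - abs2(dot(Ψ_tgt, Ψ))
```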
@@ -142,7 +138,7 @@ problem = ControlProblem(
# ## Simulate dynamics under the guess field
-# Before running the optimization procedure, we first simulate the dynamics under the guess field $\epsilon_{0}(t)$. The following solves equation of motion for the defined trajective, which contains the initial state $\ket{\Psi_{\init}}$ and the Hamiltonian $\op{H}(t)$ defining its evolution.
+# Before running the optimization procedure, we first simulate the dynamics under the guess field ``\epsilon_{0}(t)``. The following solves the equation of motion for the defined trajectory, which contains the initial state ``\ket{\Psi_{\init}}`` and the Hamiltonian ``\op{H}(t)`` defining its evolution.
guess_dynamics = propagate_trajectory(
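The hunk cuts off the argument list. The full call presumably has roughly this shape; `trajectory` stands for the trajectory object assumed to be defined with the control problem, and the choice of `ExpProp` as the propagation method is an assumption:

```julia
using QuantumPropagators: ExpProp

guess_dynamics = propagate_trajectory(
    trajectory, tlist;
    method=ExpProp,  # assumed: propagation via matrix exponentiation
    storage=true,    # keep the propagated state at every point of `tlist`
)
```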
@@ -166,43 +162,30 @@ display(fig) #src
# ## Optimization with LBFGSB
-# In the following we optimize the guess field $\epsilon_{0}(t)$ such that the intended state-to-state transfer $\ket{\Psi_{\init}} \rightarrow \ket{\Psi_{\tgt}}$ is solved.
+# In the following, we optimize the guess field ``\epsilon_{0}(t)`` such that the intended state-to-state transfer ``\ket{\Psi_{\init}} \rightarrow \ket{\Psi_{\tgt}}`` is realized.
-# The GRAPE package performs the optimization by calculating the gradient of $J_T$ with respect to the values of the control field at each point in time. This gradient is then fed into a backend solver that calculates an appropriate update based on that gradient.
+# The GRAPE package performs the optimization by calculating the gradient of ``J_T`` with respect to the values of the control field at each point in time. This gradient is then fed into a backend solver that calculates an appropriate update based on that gradient.
using GRAPE
-# By default, this backend is [LBFGSB.jl](https://github.com/Gnimuc/LBFGSB.jl), a wrapper around the true and tested [L-BFGS-B Fortran library](http://users.iems.northwestern.edu/%7Enocedal/lbfgsb.html). L-BFGS-B is a pseudo-Hessian method: it efficiently estimates the second-order Hessian from the gradient information. The search direction determined from that Hessian dramatically improves convergence compared to using the gradient directly as a search direction. The L-BFGS-B method performs its own linesearch to determine how far to go in the search direction.
-#
-# It can be quite instructive to see how the improvement in the pseudo-Hessian search direction compares to the gradient, how the linesearch finds an appropriate step width. For this purpose, we have a [GRAPELinesearchAnalysis](https://github.com/JuliaQuantumControl/GRAPELinesearchAnalysis.jl) package that automatically generates plots in every iteration of the optimization showing the linesearch behavior
-
-using GRAPELinesearchAnalysis
-
-# We feed this into the optimization as part of the `info_hook`.
+# By default, this backend is [`LBFGSB.jl`](https://github.com/Gnimuc/LBFGSB.jl), a wrapper around the tried and tested [L-BFGS-B Fortran library](http://users.iems.northwestern.edu/%7Enocedal/lbfgsb.html). L-BFGS-B is a pseudo-Hessian method: it efficiently estimates the second-order Hessian from the gradient information. The search direction determined from that Hessian dramatically improves convergence compared to using the gradient directly as a search direction. The L-BFGS-B method performs its own linesearch to determine how far to go in the search direction.
# When going through this tutorial locally, the [generated images for the linesearch](https://github.com/JuliaQuantumControl/GRAPE.jl/tree/data-dump/TLS/Linesearch/LBFGSB) can be found in `datadir("TLS", "Linesearch", "LBFGSB")`.
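The call that produces `opt_result_LBFGSB` falls outside the hunks shown; with GRAPE as the method, it presumably has roughly this shape (the stopping criterion is an illustrative assumption):

```julia
opt_result_LBFGSB = optimize(
    problem;
    method=GRAPE,
    iter_stop=100,  # assumed iteration limit
)
```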
fig = plot_control(opt_result_LBFGSB.optimized_controls[1], tlist)
#md fig |> DisplayAs.PNG #hide
display(fig) #src
#-
@@ -212,7 +195,7 @@ display(fig) #src
# Our GRAPE implementation includes the analytic gradient of the optimization functional `J_T_sm`. Thus, we only had to pass the functional itself to the optimization. More generally, for functionals where the analytic gradient is not known, semi-automatic differentiation can be used to determine it automatically. For illustration, we may re-run the optimization forgoing the known analytic gradient and instead using an automatically determined gradient.
-# As shown in Goerz et al., arXiv:2205.15044, by evaluating the gradient of ``J_T`` via a chain rule in the propagated states, the dependency of the gradient on the final time functional is pushed into the boundary condition for the backward propagation, ``|χ_k⟩ = -∂J_T/∂⟨ϕ_k|``. For functionals that can be written in terms of the overlaps ``τ_k`` of the forward-propagated states and target states, such as the `J_T_sm` used here, a further chain rule leaves derivatives of `J_T` with respect to the overlaps ``τ_k``, which are easily obtained via automatic differentiation. The `optimize` function takes an optional parameter `chi` that may be passed a function to calculate ``|χ_k⟩``. A suitable function can be obained using
+# As shown in Goerz et al., arXiv:2205.15044, by evaluating the gradient of ``J_T`` via a chain rule in the propagated states, the dependency of the gradient on the final time functional is pushed into the boundary condition for the backward propagation, ``|χ_k⟩ = -∂J_T/∂⟨ϕ_k|``. For functionals that can be written in terms of the overlaps ``τ_k`` of the forward-propagated states and target states, such as the `J_T_sm` used here, a further chain rule leaves derivatives of `J_T` with respect to the overlaps ``τ_k``, which are easily obtained via automatic differentiation. The `optimize` function takes an optional parameter `chi` that may be passed a function to calculate ``|χ_k⟩``. A suitable function can be obtained using
using QuantumControl.Functionals: make_chi
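A sketch of that usage, re-running the optimization with an automatically determined `chi`. The `mode` keyword and the use of `Zygote` via `set_default_ad_framework` follow the QuantumControl documentation as I recall it; treat both as assumptions to verify against the installed version:

```julia
using Zygote
QuantumControl.set_default_ad_framework(Zygote)

# |χ_k⟩ = -∂J_T/∂⟨ϕ_k|, determined via automatic differentiation of J_T_sm
chi_ad = make_chi(J_T_sm, problem.trajectories; mode=:automatic)

opt_result_LBFGSB_via_χ = optimize(problem; method=GRAPE, chi=chi_ad)
```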
@@ -232,26 +215,21 @@ opt_result_LBFGSB_via_χ
# ## Optimization with Optim.jl
-# As an alternative to the default L-BFGS-B backend, we can also use any of the gradient-based optimizers in [Optiml.jl](https://github.com/JuliaNLSolvers/Optim.jl). This also gives full control over the linesearch method.
+# As an alternative to the default L-BFGS-B backend, we can also use any of the gradient-based optimizers in [`Optim.jl`](https://github.com/JuliaNLSolvers/Optim.jl). This also gives full control over the linesearch method.
import Optim
import LineSearches
-# Here, we use the LBFGS implementation that is part of Optim (which is not exactly the same as L-BFGS-B; "B" being the variant of LBFGS with optional additional bounds on the control) with a Hager-Zhang linesearch
+# Here, we use the LBFGS implementation that is part of `Optim` (which is not exactly the same as L-BFGS-B; "B" being the variant of LBFGS with optional additional bounds on the control) with a Hager-Zhang linesearch:
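A sketch of constructing such an optimizer and handing it to the optimization; passing it via an `optimizer` keyword reflects GRAPE.jl's support for Optim.jl backends, but the exact keyword should be checked against the GRAPE.jl documentation:

```julia
optimizer = Optim.LBFGS(
    alphaguess=LineSearches.InitialStatic(alpha=0.2),  # illustrative initial step length
    linesearch=LineSearches.HagerZhang(),
)

opt_result_OptimLBFGS = optimize(problem; method=GRAPE, optimizer=optimizer)
```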
-# We can see that the choice of linesearch parameters in particular strongly influence the convergence and the resulting field. Play around with different methods and parameters, and compare the different [plots generated by `GRAPELinesearchAnalysis`](https://github.com/JuliaQuantumControl/GRAPE.jl/tree/data-dump/TLS/Linesearch/OptimLBFGS)!
+# We can see that the choice of linesearch parameters in particular strongly influences the convergence and the resulting field. Play around with different methods and parameters!
#
# Empirically, we find the default L-BFGS-B to have a very well-behaved linesearch.
# ## Simulate the dynamics under the optimized field
-# Having obtained the optimized control field, we can simulate the dynamics to verify that the optimized field indeed drives the initial state $\ket{\Psi_{\init}} = \ket{0}$ to the desired target state $\ket{\Psi_{\tgt}} = \ket{1}$.
+# Having obtained the optimized control field, we can simulate the dynamics to verify that the optimized field indeed drives the initial state ``\ket{\Psi_{\init}} = \ket{0}`` to the desired target state ``\ket{\Psi_{\tgt}} = \ket{1}``.
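One way to do this, sketched under assumptions: `substitute` replaces the guess control in the trajectory with the optimized pulse, after which the propagation from before can be repeated. The use of `get_controls` and an `IdDict` of replacements here follows the QuantumControl API as I recall it and should be verified against the installed version:

```julia
using QuantumControl.Controls: get_controls, substitute

# swap the guess control for the optimized pulse values
opt_trajectory = substitute(
    trajectory,
    IdDict(get_controls(trajectory)[1] => opt_result_LBFGSB.optimized_controls[1]),
)

opt_dynamics = propagate_trajectory(
    opt_trajectory, tlist;
    method=ExpProp,  # same assumed propagation method as for the guess dynamics
    storage=true,
)
```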