Choose a Minimizer

Goal

Choose which classical optimization algorithm to use for adjusting the variational parameters in a VQE calculation. These are commonly referred to as minimizers since the end goal is to minimize the energy expectation value with respect to the variational parameters.

Overview

When constructing a VQE calculator instance, you can select the parameter optimizer via the .choose_minimizer() step:

import qrunch as qc

vqe = (
    qc.calculator_creator()
    .vqe()
    .iterative()
    .standard()
    .choose_minimizer()
      # .<pick-a-minimizer>(...)
    .create()
)

Available Minimizers

Preset Minimizers

Preset minimizers are convenient, pre-tuned configurations for typical use cases. These are the available options:

Quick Default Preset Minimizer

The quick default preset minimizer is the recommended minimizer for users who prioritize speed over accuracy.

import qrunch as qc

vqe = (
    qc.calculator_creator()
    .vqe()
    .iterative()
    .standard()
    .choose_minimizer()
    .quick_default()
    .create()
)

This choice sets the minimizer to a greedy configuration, with the FftMinimizer optimizing only the last variable. See FFT-Based Last-Variable Minimizer for details.

Balanced Default Preset Minimizer (default)

The balanced default preset minimizer is the default minimizer automatically used if no other minimizer is explicitly specified. It provides a good balance between speed and accuracy for most use cases.

import qrunch as qc

vqe = (
    qc.calculator_creator()
    .vqe()
    .iterative()
    .standard()
    .choose_minimizer()
    .balanced_default()
    .create()
)

It uses the same fast FFT-based optimizer as quick_default for normal iterations, optimizing only the last variable. Every 10th iteration, however, it switches to a reminimizer that optimizes the last 10 parameters simultaneously, using a LastNParameterMinimizer wrapping a ScipyMinimizer with default settings (COBYLA).

Precise Default Preset Minimizer

The precise default preset minimizer is the recommended minimizer for users who prioritize accuracy over speed.

import qrunch as qc

vqe = (
    qc.calculator_creator()
    .vqe()
    .iterative()
    .standard()
    .choose_minimizer()
    .precise_default()
    .create()
)

It sets the same minimizer as quick_default, optimizing the last variable only, but every 10th iteration it also optimizes all parameters. This full-parameter optimization cycles through the complete parameter list in blocks of 2, optimizing those 2 variables simultaneously with a 2D FFT-based minimizer while keeping the rest fixed. The parameters are sorted and grouped based on operator commutativity. Note that the normal iterations still use the 1D FFT for the last variable only.

Very Precise Default Preset Minimizer

The very precise default preset minimizer is the recommended minimizer for users who need optimal accuracy but cannot afford to run a full parameter space optimization with the SciPy minimizer (see SciPy Minimizer).

import qrunch as qc

vqe = (
    qc.calculator_creator()
    .vqe()
    .iterative()
    .standard()
    .choose_minimizer()
    .very_precise_default()
    .create()
)

It is similar to the precise_default minimizer, but optimizes the last 2 variables during normal iterations using the 2D FFT minimizer. Every 10th iteration, the reminimizer uses a CyclerMinimizer that optimizes all parameters in blocks of 10 using the ScipyMinimizer (COBYLA). Parameters are sorted by operator commutativity for the cycler.

Other Minimizers

These minimizers expose low-level optimization algorithms directly.

SciPy Minimizer

The SciPy minimizer wraps standard SciPy optimization algorithms.

import qrunch as qc

vqe = (
    qc.calculator_creator()
    .vqe()
    .iterative()
    .standard()
    .choose_minimizer()
    .scipy(
        method="L-BFGS-B",
        options=qc.options.ScipyMinimizerOptions(...),
    )
    .create()
)

This choice optimizes all variational parameters simultaneously using the specified SciPy method at every iteration. The optimization step becomes slower as the circuit grows, since all parameters are optimized at once, and it may quickly become intractable for large parameter sets and many iterations. However, it is a robust and well-tested optimization method that provides high accuracy.

Default method: If no method is specified, the minimizer defaults to "COBYLA".

Valid ``method`` values:

  • "Nelder-Mead"

  • "Powell"

  • "CG"

  • "BFGS"

  • "L-BFGS-B"

  • "TNC"

  • "COBYLA"

  • "COBYQA"

  • "SLSQP"

Check out ScipyMinimizerOptions for details on the available options.

Bounds support: SciPy methods have varying support for parameter bounds. Some methods (e.g., L-BFGS-B, TNC, SLSQP) fully support bounds, while others (e.g., CG, BFGS) do not and will ignore them with a warning. Methods like COBYLA support bounds but may evaluate the objective outside the bounds during optimization.

FFT-Based Last-Variable Minimizer

The FFT-based last-variable minimizer uses a Fourier transform based method to optimize the most recently added variational parameter(s). All other parameters are held fixed during the optimization.

import qrunch as qc

vqe = (
    qc.calculator_creator()
    .vqe()
    .iterative()
    .standard()
    .choose_minimizer()
    .last_variable_fft(options=qc.options.FftMinimizerOptions(...))
    .create()
)

This is particularly useful in Adaptive VQE where each iteration adds a single new parameter. It is very efficient in this context, but the efficiency comes at a cost: the accuracy of the final result may be lower since not all parameters are optimized simultaneously.

Depending on the options, this choice can be identical to using quick_default.

Function evaluations: The FFT minimizer can optimize 1 or 2 parameters simultaneously:

  • 1D case (1 parameter): Requires exactly 5 function evaluations to sample the trigonometric polynomial

  • 2D case (2 parameters): Requires exactly 25 function evaluations (5×5 grid)

Bounds support: The FFT minimizer operates on the natural periodic domain [-π, π] and ignores any provided bounds. The refinement step (if enabled) uses bounded optimization, but the initial FFT-based search is unbounded.

Check out FftMinimizerOptions for details on the available options.
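The 5-evaluation count follows from the fact that a single rotation angle enters the energy as a degree-2 trigonometric polynomial, which has exactly 5 unknown Fourier coefficients. A minimal numpy sketch of the idea (illustrative only, independent of qrunch's implementation):

```python
import numpy as np

def fft_minimize_1d(f, grid_size=10001):
    """Recover E(t) = sum_{m=-2..2} c_m e^{imt} from exactly 5 samples,
    then return the minimizing angle on [-pi, pi]."""
    thetas = 2 * np.pi * np.arange(5) / 5
    # DFT of 5 equispaced samples yields c_0, c_1, c_2, c_-2, c_-1
    coeffs = np.fft.fft([f(t) for t in thetas]) / 5
    m = np.array([0, 1, 2, -2, -1])
    grid = np.linspace(-np.pi, np.pi, grid_size)
    # reconstruct E on a dense grid and pick the minimum
    recon = np.real(np.exp(1j * np.outer(grid, m)) @ coeffs)
    k = np.argmin(recon)
    return grid[k], recon[k]

# Toy "energy": any function of one rotation angle has this form
energy = lambda t: 1.0 - np.cos(t - 0.7) + 0.3 * np.cos(2 * t)
theta_star, e_star = fft_minimize_1d(energy)   # only 5 energy evaluations
```

The 2D case follows the same logic on a 5×5 sample grid, hence 25 evaluations.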

Cycler Minimizer

The cycler minimizer optimizes a function by repeatedly cycling through fixed-size blocks of variables, optimizing each block in turn with the given sub-minimizer while the remaining variables are held fixed.

import qrunch as qc

vqe = (
    qc.calculator_creator()
    .vqe()
    .iterative()
    .standard()
    .choose_minimizer()
    .cycler(
       minimizer=minimizer,
       options=qc.options.CyclerMinimizerOptions(...),
       parameter_sorting_strategy="OperatorCommutativitySortingStrategy",
    )
    .create()
)

See CyclerMinimizerOptions for details on the available options.

The parameters are sorted according to the specified strategy before being grouped into blocks for optimization. If no parameter_sorting_strategy is provided, the cycler defaults to sorting parameters by descending absolute value (DescendingAbsoluteValueSortingStrategy).

If no minimizer is provided, the cycler defaults to using FFT-Based Last-Variable Minimizer.

Bounds support: The cycler minimizer passes bounds through to the underlying sub-minimizer, so bounds support depends on which minimizer is used for the blocks.

The sub-minimizer provided to the cycler can be any of the other available minimizers, including another cycler.

See Creating Minimizers for an example of how to create a minimizer that can be provided to the cycler.

For details on available parameter sorting strategies and their use cases, see Parameter Sorting Strategies.
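As an illustration of the mechanism only (not the qrunch implementation), block-cycling coordinate descent can be sketched in a few lines of numpy, here with the default descending-absolute-value ordering and SciPy's COBYLA standing in for the block sub-minimizer:

```python
import numpy as np
from scipy.optimize import minimize

def cycle_minimize(f, x0, block_size, n_cycles):
    """Sort parameters by descending |value|, then optimize each
    fixed-size block in turn while the remaining parameters stay fixed."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_cycles):
        order = np.argsort(-np.abs(x))          # default sorting strategy
        for start in range(0, len(x), block_size):
            idx = order[start:start + block_size]
            def restricted(block, idx=idx):
                y = x.copy()
                y[idx] = block                   # only this block varies
                return f(y)
            x[idx] = minimize(restricted, x[idx], method="COBYLA").x
    return x

# Toy separable objective with a known minimum
target = np.array([3.0, -1.0, 2.0, 0.5])
f = lambda v: np.sum((v - target) ** 2)
x_opt = cycle_minimize(f, np.zeros(4), block_size=2, n_cycles=2)
```

Re-sorting at the top of each cycle means the "most important" parameters (by the chosen strategy) are always optimized first.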

Sequential Minimizer

The sequential minimizer optimizes a function sequentially using a list of minimizers; the output of one minimizer is used as the input to the next minimizer.

import qrunch as qc

vqe = (
    qc.calculator_creator()
    .vqe()
    .iterative()
    .standard()
    .choose_minimizer()
    .sequential(
       minimizers=[minimizer1, minimizer2, ...]
    )
    .create()
)

Default behavior: If no minimizers list is provided, a default sequential minimizer is used that:

  • First optimizes parameters one by one using a CyclerMinimizer with FftMinimizer

  • Then optimizes a subset of the 10 largest parameters (by absolute value) using SubsetMinimizer with ScipyMinimizer

The minimizers in the list can be any of the other available minimizers, including another sequential minimizer.

See Creating Minimizers for an example of how to create minimizers that can be provided to the sequential minimizer.
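The chaining idea itself is simple; a toy sketch (illustrative only, not the qrunch internals) with two hand-rolled stages, a coarse global grid search followed by a local refinement:

```python
import numpy as np

def sequential_minimize(f, x0, minimizers):
    """Run minimizers in order; each stage starts from the previous result."""
    x = x0
    for m in minimizers:
        x = m(f, x)
    return x

def coarse(f, x):
    # coarse global search, spacing 0.1
    grid = np.linspace(-5.0, 5.0, 101)
    return grid[np.argmin(f(grid))]

def refine(f, x):
    # fine local search around the coarse result, spacing 0.001
    grid = np.linspace(x - 0.2, x + 0.2, 401)
    return grid[np.argmin(f(grid))]

f = lambda t: (t - 1.234) ** 2
x_star = sequential_minimize(f, 0.0, [coarse, refine])
```

The default qrunch behavior described above follows the same pattern: a cheap broad pass (cycler with FFT) followed by a more expensive focused pass (subset with SciPy).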

Subset Minimizer

The subset minimizer focuses on a subset of parameters while keeping the remaining parameters fixed.

import qrunch as qc

vqe = (
    qc.calculator_creator()
    .vqe()
    .iterative()
    .standard()
    .choose_minimizer()
    .subset(
       number_of_parameters_in_subset=10,
       minimizer=minimizer
    )
    .create()
)

The number_of_parameters_in_subset argument sets how many parameters are optimized; in this example, 10. The subset is selected by choosing the parameters with the largest absolute values.

Note that this minimizer requires another minimizer to be provided as an argument. This can be any of the other available minimizers, including another subset minimizer.

See Creating Minimizers for an example of how to create a minimizer that can be provided to the subset minimizer.
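The selection rule ("largest absolute values") is a one-liner in numpy; a small illustrative sketch of how such a subset could be picked (not the qrunch internals):

```python
import numpy as np

def largest_abs_indices(params, n):
    """Indices of the n parameters with the largest absolute values."""
    return np.argsort(-np.abs(params))[:n]

params = np.array([0.1, -2.0, 0.5, 1.5, -0.05])
idx = largest_abs_indices(params, 2)   # picks the -2.0 and 1.5 entries
```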

Last N Parameters Minimizer

The minimizer optimizes the last N (most recently appended) parameters; earlier parameters are held fixed. This pattern is useful in Adaptive VQE, where new parameters are added at each iteration and, at certain stages, only the recently introduced subset should be optimized.

import qrunch as qc

vqe = (
    qc.calculator_creator()
    .vqe()
    .iterative()
    .standard()
    .choose_minimizer()
    .last_n_variable(
       options=qc.options.LastNParameterMinimizerOptions(...),
       minimizer=minimizer
    )
    .create()
)

See LastNParameterMinimizerOptions for details on the available options.

Default settings: If no minimizer is provided, it defaults to FFT-Based Last-Variable Minimizer. If no options are provided, num_last_parameters defaults to 1.

The sub-minimizer can be any of the other available minimizers, including another last N parameters minimizer.

See Creating Minimizers for an example of how to create a minimizer that can be provided to the last N variable minimizer.
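The freeze-the-prefix mechanism can be sketched as follows (illustrative only; a naive grid search stands in for whatever sub-minimizer is supplied):

```python
import numpy as np

def last_n_minimize(f, x, n, sub_minimizer):
    """Optimize only the trailing n parameters; earlier ones are held fixed."""
    head = x[:-n]
    def restricted(tail):
        return f(np.concatenate([head, tail]))
    return np.concatenate([head, sub_minimizer(restricted, x[-n:])])

def grid_search(g, t0):
    # toy 1-D sub-minimizer (n = 1 here), spacing 0.001
    grid = np.linspace(-2.0, 2.0, 4001)
    return np.array([grid[np.argmin([g(np.array([v])) for v in grid])]])

# Only the last coordinate moves toward its optimum (0.5)
f = lambda v: np.sum((v - np.array([0.0, 0.0, 0.5])) ** 2)
x_new = last_n_minimize(f, np.array([1.0, 1.0, 1.0]), 1, grid_search)
```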

Every Nth Reminimizer Minimizer

This choice optimizes the parameters using a provided minimizer during the regular Adaptive VQE iterations, and uses a different reminimizer during every n’th VQE iteration.

import qrunch as qc

vqe = (
    qc.calculator_creator()
    .vqe()
    .iterative()
    .standard()
    .choose_minimizer()
    .every_nth_reminimizer(
       minimizer=minimizer,
       reminimizer=reminimizer,
       options=qc.options.EveryNthReminimizerMinimizerOptions(...),
    )
    .create()
)

See EveryNthReminimizerMinimizerOptions for details on the available options.

Default settings:

  • minimizer defaults to a LastNParameterMinimizer with FftMinimizer optimizing the last variable

  • reminimizer defaults to a CyclerMinimizer with FftMinimizer optimizing 1 variable at a time for 1 cycle

  • every_nth_iteration defaults to 10

Note that this minimizer accepts two minimizers as arguments: one for the regular optimization steps and another for the reminimization steps. These can be any of the other available minimizers.

See Creating Minimizers for an example of how to create minimizers that can be provided to the every n’th reminimizer minimizer.
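The dispatch rule is a simple modulo check; a toy sketch (the 1-based iteration-counting convention is an assumption, not confirmed qrunch behavior):

```python
def select_minimizer(iteration, every_nth, minimizer, reminimizer):
    """Pick the reminimizer on every n'th iteration, the regular
    minimizer otherwise (1-based iteration count assumed)."""
    return reminimizer if iteration % every_nth == 0 else minimizer

# With every_nth = 10, iterations 10 and 20 use the reminimizer
schedule = [select_minimizer(i, 10, "regular", "reminimize")
            for i in range(1, 21)]
```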

Creating Minimizers

Several of the minimizers accept another minimizer as an argument. A minimizer can be created using the minimizer_creator. Several options are available:

import qrunch as qc

fft_minimizer = (
    qc.minimizer_creator()
    .fft()
    .with_options(options=qc.options.FftMinimizerOptions(...))
    .create()
)

scipy_minimizer = (
    qc.minimizer_creator()
    .scipy()
    .with_method("L-BFGS-B")
    .with_options(options=qc.options.ScipyMinimizerOptions(...))
    .create()
)

cycler_minimizer = (
    qc.minimizer_creator()
    .cycler()
    .with_minimizer(minimizer=scipy_minimizer)
    .with_options(options=qc.options.CyclerMinimizerOptions(...))
    .with_parameter_sorting_strategy("OperatorCommutativitySortingStrategy")
    .create()
)

sequential_minimizer = (
    qc.minimizer_creator()
    .sequential()
    .with_minimizers(minimizers=[fft_minimizer, scipy_minimizer, cycler_minimizer])
    .create()
)

subset_minimizer = (
    qc.minimizer_creator()
    .subset()
    .with_minimizer(minimizer=fft_minimizer)
    .with_number_of_parameters_in_subset(5)
    .create()
)


last_n_minimizer = (
    qc.minimizer_creator()
    .last_n_variable()
    .with_minimizer(minimizer=fft_minimizer)
    .with_options(options=qc.options.LastNParameterMinimizerOptions(...))
    .create()
)

Reminimizers

The adaptive/iterative VQE calculators can always be augmented by setting a reminimizer, which you can select via the .choose_reminimizer() method:

import qrunch as qc

vqe = (
    qc.calculator_creator()
    .vqe()
    .iterative()
    .standard()
    .choose_reminimizer()
      # .<pick-a-reminimizer>(...)
    .create()
)

The available reminimizers are identical to the minimizers described above. The only difference is that the reminimizer is called once at the end of the calculation (or at particular iterations when using the every n’th reminimizer). Overall, reminimizers offer a way to refine the variational parameters after the greedy-style optimization of the main VQE iterations.

Parameter Sorting Strategies

Overview

Several minimizers (particularly the Cycler Minimizer) support parameter sorting strategies that determine the order in which parameters are optimized. The choice of ordering can significantly impact convergence speed and final accuracy in sequential optimization schemes.

Why Order Matters

Consider an ADAPT-VQE like setting where a quantum circuit is built iteratively by adding parameterized gates and optimizing their parameters. The circuit at iteration \(t\) consists of a sequence of parameterized quantum gates:

\[U(\boldsymbol{\theta}) = \prod_{i=1}^{N} G_i(\theta_i)\]

where \(G_i(\theta_i)\) represents the \(i^{\text{th}}\) parameterized gate with parameter \(\theta_i\).

In iterative optimization schemes where parameters are not all updated jointly, the order of optimization can influence convergence. Specifically, for any non-separable objective function [1], the optimal value of a parameter \(\theta_i\) is contingent on the current values of the other parameters it couples to, e.g. \(\theta_j\). As such, sequential optimization (e.g., coordinate descent) is a path-dependent process; the direction chosen at each step influences the part of the parameter landscape reachable at any subsequent step.

In these types of scenarios, a greedy heuristic is commonly employed, and the optimization of parameters that have the most substantial impact on the objective function is prioritized.

In a greedy coordinate descent, the problem reduces to determining an “importance/impact metric” that can rank each parameter based on its estimated impact on the objective function.

In our particular context, where evaluating the objective function is considered expensive as it requires evaluating the quantum circuit many times, typical gradient/Hessian-type metrics are ruled out.
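The path dependence described above can be seen in a two-coordinate numpy experiment (a toy coupled quadratic, not a quantum objective): one coordinate-descent sweep ends at different points, with different objective values, depending on which coordinate is optimized first.

```python
import numpy as np

def min_along(f, x, i):
    """Minimize f along coordinate i by dense 1-D grid search."""
    grid = np.linspace(-3.0, 3.0, 6001)
    vals = [f(np.where(np.arange(len(x)) == i, g, x)) for g in grid]
    y = x.copy()
    y[i] = grid[np.argmin(vals)]
    return y

# The coordinates are coupled through the (x + y - 1)^2 term
f = lambda v: (v[0] + v[1] - 1.0) ** 2 + 0.1 * v[0] ** 2
start = np.array([0.0, 0.0])
xy = min_along(f, min_along(f, start, 0), 1)   # optimize x first, then y
yx = min_along(f, min_along(f, start, 1), 0)   # optimize y first, then x
# The two sweep orders land at different points with different values.
```

Here the y-first order happens to reach the lower value in a single sweep, which is exactly the kind of gain a good importance ranking tries to capture.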

Available Strategies

Operator Commutativity Strategy

The operator commutativity strategy (OperatorCommutativitySortingStrategy) ranks parameters based on how strongly their corresponding gate generators interact with the Hamiltonian.

Mathematical foundation:

The sensitivity of the energy with respect to a parameter \(\theta_j\) is encoded in the gradient:

\[\frac{\partial E}{\partial \theta_j} = i \, \langle \psi(\boldsymbol{\theta}) | [H, G_j] | \psi(\boldsymbol{\theta}) \rangle,\]

where \(G_j\) is the Hermitian generator of the gate \(U_j(\theta_j) = \exp(-i \theta_j G_j)\). This expression reveals a fundamental fact: the only way in which a parameter can influence the energy is through the non-commuting part of its generator with the Hamiltonian.

If \(G_j\) commutes with \(H\), then \([H, G_j]=0\) and the corresponding gradient vanishes identically, independent of the state. Conversely, the more “extensively” \(G_j\) fails to commute with \(H\), the larger the potential for \(\theta_j\) to drive energy changes.

The heuristic:

Because evaluating the full gradient is expensive in practice, we focus on the algebraic structure of the commutator rather than the expectation value. If the Hamiltonian is expressed as a weighted sum of Pauli strings:

\[H = \sum_a c_a P_a,\]

and the generator likewise admits a Pauli decomposition:

\[G_j = \sum_r a_{j,r} S_{j,r},\]

then the commutator \([H,G_j]\) only has contributions from those pairs of Pauli operators that anti-commute. A natural relevance score for the \(j^{\text{th}}\) parameter is:

\[s_j = \sum_a |c_a| \sum_{r : [S_{j,r}, P_a] \neq 0} |a_{j,r}|^2.\]

This score \(s_j\) serves as a proxy for the maximum possible strength of the gradient \(\partial E / \partial \theta_j\), quantifying the latent capacity of a parameter to influence the energy landscape without requiring quantum circuit evaluations.
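The score can be computed classically from Pauli-string bookkeeping alone, using the fact that two Pauli strings anti-commute iff they differ on an odd number of sites where both act non-trivially. A small illustrative sketch (the string representation is hypothetical, not the qrunch data model):

```python
def anticommutes(p, q):
    """Pauli strings anti-commute iff they differ on an odd number of
    sites where both act non-trivially (neither is the identity 'I')."""
    return sum(a != b and a != 'I' and b != 'I' for a, b in zip(p, q)) % 2 == 1

def commutativity_score(hamiltonian, generator):
    """s = sum_a |c_a| * sum over anti-commuting generator terms of |a_r|^2."""
    return sum(
        abs(c) * sum(abs(a) ** 2 for s, a in generator if anticommutes(s, p))
        for p, c in hamiltonian
    )

# Hypothetical 2-qubit Hamiltonian H = 0.5 ZZ + 0.3 XI
H = [("ZZ", 0.5), ("XI", 0.3)]
score_yi = commutativity_score(H, [("YI", 1.0)])  # anti-commutes with both terms
score_zi = commutativity_score(H, [("ZI", 1.0)])  # anti-commutes with XI only
```

Ranking parameters by this score (here YI before ZI) requires no quantum circuit evaluations at all.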

Usage:

import qrunch as qc

cycler = (
    qc.minimizer_creator()
    .cycler()
    .with_parameter_sorting_strategy("OperatorCommutativitySortingStrategy")
    .create()
)

Characteristics:

  • State-independent

  • Deterministic

  • Requires the operator (Hamiltonian) to be provided

  • Best for problems where gate-operator commutation structure is informative

Descending Absolute Value Strategy

The descending absolute value strategy (DescendingAbsoluteValueSortingStrategy) sorts parameters by descending absolute magnitude \(|\theta_i|\). This is the default strategy for the CyclerMinimizer.

Rationale: Parameters with larger absolute values are often assumed to have been adjusted more significantly during previous optimization steps, potentially indicating they have greater impact on the objective function.

Usage:

import qrunch as qc

cycler = (
    qc.minimizer_creator()
    .cycler()
    .with_parameter_sorting_strategy("DescendingAbsoluteValueSortingStrategy")
    .create()
)

Random Sorting Strategy

The random sorting strategy (RandomSortingStrategy) randomly permutes the parameter order.

Usage:

import qrunch as qc

cycler = (
    qc.minimizer_creator()
    .cycler()
    .with_parameter_sorting_strategy("RandomSortingStrategy")
    .create()
)

Use Case: CyclerMinimizer

The parameter sorting strategies are particularly relevant for the CyclerMinimizer, which optimizes parameters in sequential blocks. The strategy determines which parameters are prioritized for optimization in each cycle.

Example with operator commutativity:

import qrunch as qc

vqe = (
    qc.calculator_creator()
    .vqe()
    .iterative()
    .standard()
    .choose_minimizer()
    .cycler(
        minimizer=qc.minimizer_creator().fft().create(),
        options=qc.options.CyclerMinimizerOptions(
            num_optimization_variables=2,
            max_cycler_iterations=5,
        ),
        parameter_sorting_strategy="OperatorCommutativitySortingStrategy",
    )
    .create()
)

See Also