A simple counting argument suffices to show that any general-purpose function-minimization algorithm in $n$ dimensions must involve at least $\mathcal{O}(n^2)$ function evaluations (see, for example, Press et al. [125, Section 10.6]): Suppose the function to be minimized is $f : \Re^n \to \Re$, and suppose $f$ has a local minimum near some point $x_0 \in \Re^n$. Taylor-expanding $f$ in a neighborhood of $x_0$ gives $f(x) = f(x_0) + a^T (x - x_0) + (x - x_0)^T B (x - x_0) + \mathcal{O}(\|x - x_0\|^3)$, where $a \in \Re^n$, $B \in \Re^{n \times n}$ is symmetric, and $v^T$ denotes the transpose of the column vector $v \in \Re^n$. Neglecting the higher-order terms (i.e., approximating $f$ as a quadratic form in $x$ in a neighborhood of $x_0$), and ignoring $f(x_0)$ (which does not affect the position of the minimum), there are a total of $N = n + \tfrac{1}{2} n(n+1)$ coefficients in this expression. Changing any of these coefficients may change the position of the minimum, and at each function evaluation the algorithm "learns" only a single number (the value of $f$ at the selected evaluation point), so the algorithm must make at least $N = \mathcal{O}(n^2)$ function evaluations to (implicitly) determine all the coefficients. Actual functions are not exact quadratic forms, so in practice there are additional $\mathcal{O}(1)$ multiplicative factors in the number of function evaluations. Minimization algorithms may also make additional performance and/or space-versus-time trade-offs to improve numerical robustness or to avoid explicitly manipulating $n \times n$ Jacobian matrices.
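To make the counting concrete, the following sketch (not from the source; Python/NumPy, with the expansion point taken as $x_0 = 0$ and the constant term dropped, and with hypothetical helper names such as `quadratic_features`) treats each function evaluation as one linear equation in the $N = n + \tfrac{1}{2} n(n+1)$ unknown coefficients of the quadratic model, recovers them from exactly $N$ evaluations at generic points, and reads off the minimizer:

```python
# Illustrative sketch of the counting argument: each evaluation of the
# black-box quadratic f(x) = a^T x + x^T B x yields one scalar, i.e. one
# linear equation in the N = n + n(n+1)/2 unknown coefficients (a, B).
import numpy as np

def quadratic_features(x):
    """Features multiplying the unknowns: the n entries of a, then the
    n(n+1)/2 independent entries of the symmetric matrix B."""
    n = len(x)
    feats = list(x)                              # features for a_i
    for i in range(n):
        for j in range(i, n):
            # off-diagonal B_ij (i < j) appears twice in x^T B x
            feats.append(x[i] * x[j] if i == j else 2.0 * x[i] * x[j])
    return np.array(feats)

rng = np.random.default_rng(0)
n = 4
N = n + n * (n + 1) // 2                         # number of unknown coefficients

# "True" model, known only to the black-box function being minimized.
a_true = rng.standard_normal(n)
M = rng.standard_normal((n, n))
B_true = M @ M.T + n * np.eye(n)                 # symmetric positive definite

def f(x):                                        # one scalar per evaluation
    return a_true @ x + x @ B_true @ x

# N evaluations at generic random points -> N x N linear system.
X = rng.standard_normal((N, n))
Phi = np.array([quadratic_features(x) for x in X])
y = np.array([f(x) for x in X])
coeffs = np.linalg.solve(Phi, y)                 # determines all N coefficients

# Unpack a and B; the minimizer of a^T x + x^T B x is x* = -B^{-1} a / 2.
a_est = coeffs[:n]
B_est = np.zeros((n, n))
k = n
for i in range(n):
    for j in range(i, n):
        B_est[i, j] = B_est[j, i] = coeffs[k]
        k += 1
x_min_est = -0.5 * np.linalg.solve(B_est, a_est)
x_min_true = -0.5 * np.linalg.solve(B_true, a_true)
print(N, np.allclose(x_min_est, x_min_true))     # e.g. "14 True"
```

The square $N \times N$ design matrix here is exactly the point of the argument: with fewer than $N$ evaluations the linear system for the coefficients is underdetermined, so the position of the minimum cannot be pinned down.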