numericalderivative.GillMurraySaundersWright

class numericalderivative.GillMurraySaundersWright(function, x, absolute_precision=1e-15, c_threshold_min=0.001, c_threshold_max=0.1, args=None, verbose=False)

Compute an approximately optimal step for the forward finite difference formula of the first derivative

The method is based on three steps:

  • compute an approximately optimal step \(h_\Phi\) for the second derivative using the central finite difference formula,

  • compute the approximate second derivative using the central finite difference formula,

  • compute the approximate optimal step for the first derivative using the forward finite difference formula.

Finally, this approximately optimal step can be used to compute the first derivative using the forward finite difference formula.

The goal of the method is to compute the approximation of \(f'(x)\) using the forward finite difference formula (see (G, M, S & W, 1983) eq. 1 page 311):

\[f'(x) \approx \frac{f(x + h) - f(x)}{h}\]

where \(f\) is the function, \(x \in \mathbb{R}\) is the input point and \(h > 0\) is the step. If \(f''(x) \neq 0\), then the step which minimizes the total error is:

\[h^\star = 2 \sqrt{\frac{\epsilon_f}{\left|f''(x)\right|}}\]

where \(\epsilon_f > 0\) is the absolute error of the function evaluation. The goal of the method is to compute \(h^\star\) using function evaluations only. An approximate value of the second derivative can be computed from the central finite difference formula (see (G, M, S & W, 1983) eq. 8 page 314):

\[f''(x) \approx \Phi(h_\Phi) = \frac{f(x + h_\Phi) - 2 f(x) + f(x - h_\Phi)}{h_\Phi^2}.\]

where \(\Phi\) is the approximation of \(f''(x)\) from the central finite difference formula and \(h_\Phi > 0\) is the step of the second derivative finite difference formula. The method is based on the condition error (see (G, M, S & W, 1983) eq. 1 page 315):

\[c(h_\Phi) = \frac{4 \epsilon_f}{h_\Phi^2 |\Phi|}.\]

The condition error is a decreasing function of \(h_\Phi\). The algorithm searches for a step \(h_\Phi\) such that:

\[c_{\min} \leq c(h_\Phi) \leq c_{\max}\]

where \(c_{\min}\) and \(c_{\max}\) are thresholds defined by the user.
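To make these quantities concrete, the sketch below evaluates \(\Phi\) and the condition error \(c\) for a few steps. It is an illustration of the formulas above, not the library implementation; the function \(\sin\), the point and the value of \(\epsilon_f\) are assumptions chosen for the example.

>>> import numpy as np
>>>
>>> def phi(f, x, h):
...     # Central finite difference approximation of f''(x)
...     return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2
...
>>> def condition_error(f, x, h, epsilon_f):
...     # Condition error c(h) of the second derivative formula
...     return 4.0 * epsilon_f / (h**2 * abs(phi(f, x, h)))
...
>>> epsilon_f = 1.0e-15
>>> for h in [1.0e-2, 1.0e-4, 1.0e-6]:
...     print(h, condition_error(np.sin, 1.0, h, epsilon_f))

Since the condition error decreases when the step increases, a bisection-type search can bracket a step whose condition error falls between \(c_{\min}\) and \(c_{\max}\).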

This algorithm is a simplified version of the algorithm published in (Gill, Murray, Saunders & Wright, 1983), section 3.2, page 316. While (Gill, Murray, Saunders & Wright, 1983) simultaneously considers the finite difference steps of the forward and backward formulas for the first derivative and of the central formula for the second derivative, this algorithm only searches for the optimal step of the central formula for the second derivative.

The algorithm can fail if the function is linear or approximately linear, because the second derivative is then zero or close to zero. For example, the function \(f(x) = \sin(x)\) is approximately linear at the points \(x = \pm \pi\), where its second derivative \(-\sin(x)\) vanishes. This produces a value of \(\Phi\) which is zero or close to zero, and hence an infinite value of the condition error. The same problem appears at \(x = 0\).

If this algorithm fails to produce a consistent step, one can compute an approximately optimal step using compute_step(). Since the value of the second derivative is unknown, we can make the hypothesis that \(f''(x) \approx 1\).
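For instance, with the illustrative value \(\epsilon_f = 10^{-16}\) (an assumption, not a library default), this hypothesis gives

\[h^\star = 2 \sqrt{\frac{10^{-16}}{1}} = 2 \times 10^{-8}.\]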

The method can fail if the absolute precision of the function value is set to zero. This can happen if the user computes it from the relative precision and the absolute value of the function: if the value of the function at the point \(x\) is zero, then the computed absolute precision is zero.

Parameters:
function : function

The function to differentiate.

x : float

The point where the derivative is approximated.

absolute_precision : float, optional

The absolute error of the value of the function f at the point x. If the absolute precision is unknown and the function value at point x is nonzero, then the absolute precision can be computed from the relative precision using the equation \(\epsilon_f = \epsilon_r |f(x)|\), where \(\epsilon_r > 0\) is the relative precision (see the sketch after this parameter list).

c_threshold_min : float, optional, > 0

The minimum value of the condition error.

c_threshold_max : float, optional, > c_threshold_min

The maximum value of the condition error.

args : list

A list of optional arguments that the function takes as inputs. By default, there is no extra argument and the calling sequence of the function must be y = function(x). If there are extra arguments, then the calling sequence of the function must be y = function(x, arg1, arg2, ...), where arg1, arg2, ... are the items in the args list.

verbose : bool

Set to True to print intermediate messages.
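As mentioned for absolute_precision, here is a minimal sketch of how it can be derived from an assumed relative precision before building the algorithm; the function and the value of \(\epsilon_r\) are assumptions chosen for the illustration.

>>> import numpy as np
>>> import numericalderivative as nd
>>>
>>> def f(x):
...     return np.exp(-x / 1.0e6)
...
>>> x = 1.0e-2
>>> epsilon_r = 1.0e-16  # assumed relative precision of the evaluation of f
>>> epsilon_f = epsilon_r * abs(f(x))  # epsilon_f = epsilon_r * |f(x)|
>>> algorithm = nd.GillMurraySaundersWright(f, x, absolute_precision=epsilon_f)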

Methods

compute_condition(k)

Compute the condition error for given step k.

compute_first_derivative(step)

Compute an approximate first derivative using finite differences.

compute_step_for_second_derivative(kmin, kmax)

Compute the optimal step k suitable to approximate the second derivative.

find_step(kmin, kmax[, iteration_maximum, ...])

Compute the optimal step suitable to approximate the first derivative.

get_absolute_precision()

Return the absolute precision of the function evaluation.

get_number_of_function_evaluations()

Returns the number of function evaluations.

get_step_history()

Return the history of steps during the search.

get_threshold_min_max()

Return the threshold min and max of the condition error.


References

  • Gill, P. E., Murray, W., Saunders, M. A., & Wright, M. H. (1983). Computing forward-difference intervals for numerical optimization. SIAM Journal on Scientific and Statistical Computing, 4(2), 310-321.

Examples

Compute the step of a badly scaled function.

>>> import numpy as np
>>> import numericalderivative as nd
>>>
>>> def scaled_exp(x):
...     alpha = 1.0e6
...     return np.exp(-x / alpha)
...
>>> x = 1.0e-2
>>> kmin = 1.0e-8
>>> kmax = 1.0e8
>>> algorithm = nd.GillMurraySaundersWright(
...     scaled_exp, x,
... )
>>> h_optimal, number_of_iterations = algorithm.find_step(kmin=kmin, kmax=kmax)
>>> f_prime_approx = algorithm.compute_first_derivative(h_optimal)
__init__(function, x, absolute_precision=1e-15, c_threshold_min=0.001, c_threshold_max=0.1, args=None, verbose=False)


compute_condition(k)

Compute the condition error for a given step k.

This function evaluates the condition error \(c(h_\Phi)\) of the central finite difference formula for the second derivative, depending on the step \(h_\Phi\):

\[c(h_\Phi) = \frac{4 \epsilon_f}{h_\Phi^2 |\Phi|}.\]
Parameters:
k : float

The step used for the finite difference approximation of the second derivative.

Returns:
c : float

The condition error.
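For illustration, assuming an algorithm object built as in the class example above, the condition error at a trial step can be evaluated directly; the step value is an arbitrary choice.

>>> c = algorithm.compute_condition(1.0e-4)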

compute_first_derivative(step)

Compute an approximate first derivative using finite differences.

This method uses the formula:

\[f'(x) \approx \frac{f(x + h) - f(x)}{h}\]
Parameters:
step : float, > 0

The step size.

Returns:
f_prime_approx : float

The approximation of f'(x).
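For illustration, assuming an algorithm object built as in the class example above, the first derivative can be approximated with a user-chosen step; the step value below is arbitrary, not the optimal one.

>>> f_prime_approx = algorithm.compute_first_derivative(1.0e-5)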

compute_step_for_second_derivative(kmin, kmax, iteration_maximum=53, logscale=True)

Compute the optimal step k suitable to approximate the second derivative.

Then the approximate value of the second derivative can be computed using this step.

The update formula depends on logscale. If it is True, then the logarithmic scale is used:

\[h = \exp\left(\frac{\log(k_{\min}) + \log(k_{\max})}{2}\right)\]

where \(k_{\min}\) is the current lower bound of the search interval and \(k_{\max}\) is the current upper bound. This implies that the update is the geometric mean:

\[h = \sqrt{k_{\min} k_{\max}}.\]

Otherwise, we use the arithmetic mean:

\[h = \frac{k_{\min} + k_{\max}}{2}.\]
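A minimal sketch of the two update rules (an illustration of the formulas above, not the library code):

>>> import numpy as np
>>>
>>> def update(kmin, kmax, logscale=True):
...     if logscale:
...         # Geometric mean: bisection in logarithmic scale
...         return np.exp((np.log(kmin) + np.log(kmax)) / 2.0)
...     # Arithmetic mean: bisection in linear scale
...     return (kmin + kmax) / 2.0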
Parameters:
kmin : float, > 0

The minimum finite difference step k for the second derivative.

kmax : float, > kmin

The maximum finite difference step k for the second derivative.

iteration_maximum : int, optional

The maximum number of iterations.

logscale : bool, optional

Set to True to use a logarithmic scale to update k. Set to False to use a linear scale.

Returns:
step_second_derivative : float, > 0

The optimum step step_second_derivative.

number_of_iterations : int

The number of iterations required to compute step_second_derivative.

find_step(kmin, kmax, iteration_maximum=53, logscale=True)

Compute the optimal step suitable to approximate the first derivative.

This method first computes an approximately optimal step for the second derivative and evaluates the approximate second derivative with that step. The step for the first derivative is then deduced from \(h^\star = 2 \sqrt{\epsilon_f / \left|f''(x)\right|}\).

Parameters:
kmin : float, > 0

The minimum step k for the second derivative.

kmax : float, > kmin

The maximum step k for the second derivative.

iteration_maximum : int, optional

The maximum number of iterations.

logscale : bool, optional

Set to True to use a logarithmic scale to update k. Set to False to use a linear scale.

Returns:
step : float, > 0

The optimum step for the first derivative.

number_of_iterations : int

The number of iterations required to compute the step.

get_absolute_precision()

Return the absolute precision of the function evaluation.

Returns:
absolute_precision : float

The absolute precision of evaluation of f.

get_number_of_function_evaluations()

Returns the number of function evaluations.

Returns:
number_of_function_evaluations : int

The number of function evaluations.

get_step_history()

Return the history of steps during the search.

Returns:
step_history : list(float)

The list of steps h during intermediate iterations of the search. This is updated by compute_step_for_second_derivative().
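For illustration, after a search has been performed, e.g. by find_step() as in the class example above, the intermediate steps can be retrieved as follows.

>>> step_history = algorithm.get_step_history()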

get_threshold_min_max()

Return the threshold min and max of the condition error.

Returns:
c_threshold_min : float, > 0

The minimum value of the threshold of the condition error.

c_threshold_max : float, > 0

The maximum value of the threshold of the condition error.