numericalderivative.DumontetVignes

class numericalderivative.DumontetVignes(function, x, relative_precision=1e-15, absolute_precision=1e-15, number_of_digits=53, ell_3=2.0, ell_4=15.0, args=None, verbose=False)

Compute an approximately optimal step for the central finite difference formula for the first derivative

The method is based on an estimate of the third derivative, from which the optimal step for the central finite difference formula for the first derivative is then computed.

The goal of the method is to compute the step of the central finite difference approximation of the first derivative (see (Dumontet & Vignes, 1977) eq. 2 page 13):

\[f'(x) \approx \frac{f(x + h) - f(x - h)}{2h}\]

where \(f\) is the function, \(x \in \mathbb{R}\) is the input point and \(h > 0\) is the step. The optimal step is (see (Dumontet & Vignes, 1977) eq. 24 page 18):

\[h^\star = \left( \frac{3 \epsilon_f}{\left|f^{(3)}(x)\right|} \right)^{1/3}\]

if \(f^{(3)}(x) \neq 0\), where \(\epsilon_f > 0\) is the absolute error of the function evaluation. Notice that (Dumontet & Vignes, 1977) uses the non-classical constant 1.67 instead of 3 in the previous equation, but this does not have a significant impact on the result. If the order of magnitude of the third derivative can be guessed, then compute_step() can be used. Alternatively, the goal of DumontetVignes is to compute \(h^\star\) using function evaluations only.
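For instance, if the absolute error of the function evaluation and the order of magnitude of the third derivative are both known, the step of the previous equation can be evaluated directly. The following sketch uses hypothetical values of \(\epsilon_f\) and \(f^{(3)}(x)\):

>>> # Hypothetical values (not computed by the class).
>>> epsilon_f = 1.0e-16  # absolute error of one function evaluation
>>> third_derivative = 1.0  # guessed order of magnitude of f'''(x)
>>> # Optimal step for the central first-derivative formula.
>>> h_star = (3.0 * epsilon_f / abs(third_derivative)) ** (1.0 / 3.0)  # approx. 6.7e-6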

The third derivative is approximated using the central finite difference formula (see (Dumontet & Vignes, 1977) eq. 25 page 18):

\[f^{(3)}_k(x) \approx \frac{f(x + 2k) - f(x - 2k) - 2 (f(x + k) - f(x - k))}{2k^3}\]

where \(k > 0\) is the step used for the third derivative. The method introduces \(f^{(3)}_{inf}(x)\) and \(f^{(3)}_{sup}(x)\) such that:

\[f^{(3)}_{inf}(x) \leq f^{(3)}_k(x) \leq f^{(3)}_{sup}(x).\]

We evaluate the function on 4 points (see (Dumontet & Vignes, 1977) eq. 28 page 19, with 2 errors corrected with respect to the original paper):

\[\begin{split}& T_1 = f(x + 2k), \qquad T_2 = -f(x - 2k), \\ & T_3 = -2f(x + k) , \qquad T_4 = 2f(x - k).\end{split}\]

Let \(A\) and \(B\) be defined by (see (Dumontet & Vignes, 1977) page 19):

\[A = \sum_{i = 1}^4 \max(T_i, 0), \quad B = \sum_{i = 1}^4 \min(T_i, 0).\]

The lower and upper bounds of \(f^{(3)}(x)\) are computed from (see (Dumontet & Vignes, 1977) eq. 30 page 20):

\[f^{(3)}_{inf}(x) = \frac{1}{2 k^3} \left(\frac{A}{1 + \epsilon_r} + \frac{B}{1 - \epsilon_r}\right), \qquad f^{(3)}_{sup}(x) = \frac{1}{2 k^3} \left(\frac{A}{1 - \epsilon_r} + \frac{B}{1 + \epsilon_r}\right). \]

where \(\epsilon_r > 0\) is the relative error of the function evaluation. We introduce the ratio (see (Dumontet & Vignes, 1977) eq. 32 page 20):

\[L(k) = \frac{f^{(3)}_{sup}(x)}{f^{(3)}_{inf}(x)}.\]

If \(0 < f^{(3)}_{inf}(x) \leq f^{(3)}_{sup}(x)\), then \(L(k) > 1\). If \(f^{(3)}_{inf}(x) \leq f^{(3)}_{sup}(x) < 0\), then \(L(k) < 1\). Moreover, if \(k \rightarrow 0\), then \(L(k) \rightarrow -1\).
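The following sketch shows how the bounds and the ratio \(L(k)\) could be evaluated from four function values, following the previous equations. The function ell_ratio below is a simplified illustration, not the internal implementation of the class:

>>> import numpy as np
>>>
>>> def ell_ratio(f, x, k, epsilon_r):
...     # Four signed terms T_1, ..., T_4.
...     T = [f(x + 2 * k), -f(x - 2 * k), -2 * f(x + k), 2 * f(x - k)]
...     A = sum(max(t, 0.0) for t in T)  # sum of the positive parts
...     B = sum(min(t, 0.0) for t in T)  # sum of the negative parts
...     # Lower and upper bounds of the third derivative.
...     f3_inf = (A / (1 + epsilon_r) + B / (1 - epsilon_r)) / (2 * k**3)
...     f3_sup = (A / (1 - epsilon_r) + B / (1 + epsilon_r)) / (2 * k**3)
...     return f3_sup / f3_inf, f3_inf, f3_sup
...
>>> # Example on f(x) = exp(x) at x = 1: both bounds are close to exp(1).
>>> ell, f3_inf, f3_sup = ell_ratio(np.exp, 1.0, 1.0e-4, 1.0e-15)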

We search for \(k\) such that the ratio \(L\) is:

  • neither too close to 1, which would mean that \(k\) is too large and the truncation error dominates,

  • nor too far away from 1, which would mean that \(k\) is too small and the rounding error dominates.

Let \(k_{inf}\) and \(k_{sup}\) be two real numbers representing the minimum and maximum bounds for \(k\). We search for \(k \in [k_{inf}, k_{sup}]\) such that (see (Dumontet & Vignes, 1977) eq. 33 page 20):

\[L(k) \in [L_1, L_2] \cup [L_3, L_4]\]

where:

  • \(L_3 = 2\) and \(L_2 = \frac{1}{L_3}\),

  • \(L_4 = 15\) and \(L_1 = \frac{1}{L_4}\).
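For a given value of the ratio, this acceptance test could be written as follows (a sketch using the default bounds \(L_3 = 2\) and \(L_4 = 15\)):

>>> def ell_is_acceptable(ell, ell_3=2.0, ell_4=15.0):
...     ell_2 = 1.0 / ell_3
...     ell_1 = 1.0 / ell_4
...     return (ell_1 <= ell <= ell_2) or (ell_3 <= ell <= ell_4)
...
>>> ell_is_acceptable(3.0)    # k is acceptable
True
>>> ell_is_acceptable(1.01)   # L too close to 1: k is too large
False
>>> ell_is_acceptable(50.0)   # L too far from 1: k is too small
False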

This method cannot manage the case where \(f(x) = 0\), because the relative error has no meaning in this case.

This method does not perform correctly if the third derivative is zero or close to zero (see (Dumontet & Vignes, 1977) remark page 22). This produces an L ratio which is negative, so that no value of the step \(k\) satisfies the condition. This happens for example for the PolynomialProblem, which has a zero third derivative for any \(x\).

The method can fail if the absolute precision of the function value is set to zero. This can happen if the user computes it from the relative precision and the absolute value of the function: if the value of the function at the point \(x\) is zero, then the computed absolute precision is zero.

Parameters:
function : function

The function to differentiate.

x : float

The point where the derivative is to be evaluated.

relative_precision : float, > 0, optional

The relative precision of evaluation of f.

absolute_precision : float, > 0, optional

The absolute precision of evaluation of f.

number_of_digits : int

The maximum number of digits of the floating point system.

ell_3 : float

The minimum bound of the L ratio.

ell_4 : float, > ell_3

The maximum bound of the L ratio.

args : list

A list of optional arguments that the function takes as inputs. By default, there is no extra argument and the calling sequence of the function must be y = function(x). If there are extra arguments, then the calling sequence of the function must be y = function(x, arg1, arg2, ...) where arg1, arg2, ..., are the items in the args list. See the sketch after this parameter list for an example.

verbose : bool, optional

Set to True to print intermediate messages.
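As an illustration of the args mechanism, the following sketch passes one extra argument through the constructor (the function scaled_sin and the values of x, alpha, kmin and kmax are hypothetical):

>>> import numpy as np
>>> import numericalderivative as nd
>>>
>>> def scaled_sin(x, alpha):
...     return np.sin(alpha * x)
...
>>> x = 1.0
>>> alpha = 2.0  # extra argument passed through args
>>> algorithm = nd.DumontetVignes(scaled_sin, x, args=[alpha])
>>> step, _ = algorithm.find_step(kmin=1.0e-8, kmax=1.0e0)
>>> f_prime_approx = algorithm.compute_first_derivative(step)
>>> exact = alpha * np.cos(alpha * x)  # exact derivative for comparison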

Methods

compute_ell(k)

Compute the L ratio depending on k.

compute_first_derivative(step)

Compute first derivative using central finite difference.

compute_third_derivative([kmin, kmax, ...])

Compute an approximate third derivative of the function

find_step([kmin, kmax, iteration_maximum, ...])

Compute an approximate optimum step for the first derivative

get_ell_min_max()

Return the minimum and maximum bounds of the L ratio

get_number_of_function_evaluations()

Returns the number of function evaluations.

get_relative_precision()

Return the relative precision of the function evaluation

get_step_history()

Return the history of steps during the bisection search.

References

  • Dumontet, J., & Vignes, J. (1977). Détermination du pas optimal dans le calcul des dérivées sur ordinateur. RAIRO. Analyse numérique, 11 (1), 13-25.

Examples

Compute the step of a badly scaled function.

>>> import numpy as np
>>> import numericalderivative as nd
>>>
>>> def scaled_exp(x):
...     alpha = 1.e6
...     return np.exp(-x / alpha)
...
>>> x = 1.0e-2
>>> kmin = 1.0e-10
>>> kmax = 1.0e+8
>>> algorithm = nd.DumontetVignes(scaled_exp, x)
>>> h_optimal, number_of_iterations = algorithm.find_step(kmin=kmin, kmax=kmax)
>>> f_prime_approx = algorithm.compute_first_derivative(h_optimal)
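Since the derivative of scaled_exp has the closed form \(-\exp(-x / \alpha) / \alpha\), the error of the approximation can then be checked, for instance:

>>> f_prime_exact = -np.exp(-x / 1.e6) / 1.e6
>>> absolute_error = abs(f_prime_approx - f_prime_exact)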
__init__(function, x, relative_precision=1e-15, absolute_precision=1e-15, number_of_digits=53, ell_3=2.0, ell_4=15.0, args=None, verbose=False)


compute_ell(k)

Compute the L ratio depending on k.

A negative L ratio indicates that \(f'''_{inf}(x)\) and \(f'''_{sup}(x)\) do not have the same sign, i.e. \(f'''_{inf}(x) < 0 < f'''_{sup}(x)\). This can happen if the exact third derivative is zero. In this case, the method cannot perform correctly.

It may happen that \(f'''_{inf}(x)\) is zero. This prevents the L ratio from being evaluated. This might happen if the step k is too small (i.e. too close to zero).

Parameters:
k : float, > 0

The finite difference step for the third derivative.

Returns:
ell : float

The ratio \(\frac{f'''_{sup}(x)}{f'''_{inf}(x)}\).

f3inf : float

The lower bound of the third derivative.

f3sup : float

The upper bound of the third derivative.
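For diagnostic purposes, compute_ell() can be evaluated on a grid of steps to see where the ratio enters the acceptable interval. A possible sketch, using an illustrative function and step grid:

>>> import numpy as np
>>> import numericalderivative as nd
>>>
>>> algorithm = nd.DumontetVignes(np.exp, 1.0)
>>> for k in np.logspace(-5, -2, 4):
...     ell, f3_inf, f3_sup = algorithm.compute_ell(k)
...     print(f"k = {k:.1e}, L(k) = {ell:.3f}")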

compute_first_derivative(step)

Compute first derivative using central finite difference.

This is based on the central finite difference formula:

\[f'(x) \approx \frac{f(x + h) - f(x - h)}{2h}\]
Parameters:
step : float, > 0

The finite difference step.

Returns:
f_prime_approx : float

The approximate first derivative at point x.

compute_third_derivative(kmin=None, kmax=None, iteration_maximum=53, logscale=False)

Compute an approximate third derivative of the function

To do this, the method first searches for a step h which is approximately optimal for the central finite difference approximation of the third derivative f'''(x). The finite difference formula for the third derivative is:

\[f'''(x) \approx \frac{f(x + 2 h) - f(x - 2 h) - 2 f(x + h) + 2 f(x - h)}{2 h^3}\]

The method computes the optimal step h for f'''(x). Then this step is used to compute an approximate value of f'''(x).

Parameters:
iteration_maximum : int, optional

The maximum number of iterations.

kmin : float, kmin > 0

A minimum bound for k. If no value is provided, the default is to compute the smallest possible kmin using number_of_digits and x.

kmax : float, kmax > kmin > 0

A maximum bound for k. If no value is provided, the default is to compute the largest possible kmax using number_of_digits and x.

logscale : bool, optional

Set to True to use a logarithmic scale when updating the step k during the search. Set to False to use a linear scale.

Returns:
third_derivative : float

The approximate value of the third derivative using the step k.

number_of_iterations : int

The number of iterations required to reach that optimum.
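A possible usage of this method, on an illustrative function with illustrative bounds kmin and kmax, is:

>>> import numpy as np
>>> import numericalderivative as nd
>>>
>>> algorithm = nd.DumontetVignes(np.sin, 1.0)
>>> third_derivative, number_of_iterations = algorithm.compute_third_derivative(
...     kmin=1.0e-5, kmax=1.0e-1
... )
>>> # The exact third derivative of sin at x = 1 is -cos(1).
>>> absolute_error = abs(third_derivative - (-np.cos(1.0)))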

find_step(kmin=None, kmax=None, iteration_maximum=53, logscale=False)

Compute an approximate optimum step for the first derivative

This step is approximately optimal for the central finite difference for f'. The central finite difference formula for the first derivative is:

\[f'(x) \approx \frac{f(x + h) - f(x - h)}{2 h}\]

The method first computes an approximately optimal step for f'''(x) and uses it to compute an approximate value of f'''(x). This value is then used to compute the step h for f'.

Parameters:
iteration_maximum : int, optional

The maximum number of iterations.

kmin : float, kmin > 0

A minimum bound for the finite difference step of the third derivative. If no value is provided, the default is to compute the smallest possible kmin using number_of_digits and x.

kmax : float, kmax > kmin > 0

A maximum bound for the finite difference step of the third derivative. If no value is provided, the default is to compute the largest possible kmax using number_of_digits and x.

logscale : bool, optional

Set to True to use a logarithmic scale when updating the step k during the search. Set to False to use a linear scale.

Returns:
step : float, > 0

The finite difference step for the first derivative.

number_of_iterations : int

The number of iterations used to compute the step.

get_ell_min_max()

Return the minimum and maximum bounds of the L ratio

The parameters L1 and L2 can be computed from the equations:

\[L_2 = \frac{1}{L_3}, \qquad L_1 = \frac{1}{L_4}.\]
Returns:
ell_3 : float, > 0

The lower bound of the L ratio.

ell_4 : float, > 0

The upper bound of the L ratio.

get_number_of_function_evaluations()

Returns the number of function evaluations.

Returns:
number_of_function_evaluations : int

The number of function evaluations.

get_relative_precision()

Return the relative precision of the function evaluation

Returns:
relative_precision : float

The relative precision of evaluation of f.

get_step_history()

Return the history of steps during the bisection search.

Returns:
step_history : list(float)

The list of steps k during intermediate iterations of the bisection search. This is updated by compute_third_derivative().
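For example, after a call to find_step(), the history of the steps explored by the search can be retrieved as follows (illustrative function and bounds):

>>> import numpy as np
>>> import numericalderivative as nd
>>>
>>> algorithm = nd.DumontetVignes(np.exp, 1.0)
>>> step, _ = algorithm.find_step(kmin=1.0e-7, kmax=1.0e-1)
>>> step_history = algorithm.get_step_history()  # successive values of k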