numericalderivative.GeneralFiniteDifference

class numericalderivative.GeneralFiniteDifference(function, x, differentiation_order, formula_accuracy, direction='central', args=None)

Create a general finite difference formula

Methods

compute(step)

Computes the degree d approximate derivative of f at point x.

compute_b_constant()

Compute the constant b involved in the finite difference formula.

compute_error(step[, ...])

Computes the total error

compute_step([...])

Computes the optimal step

get_coefficients()

Return the coefficients of the finite difference formula

get_differentiation_order()

Return the differentiation order

get_formula_accuracy()

Return the formula accuracy

get_function()

Returns the function whose derivative is to be approximated

get_indices_min_max()

Return the indices of the finite difference formula

get_x()

Returns the point x where the derivative is to be approximated

__init__(function, x, differentiation_order, formula_accuracy, direction='central', args=None)

Create a general finite difference formula

Let \(d \in \mathbb{N}\) be the order of differentiation with \(d \geq 1\) and \(p \in \mathbb{N}\) be the order of precision with \(p \geq 1\). The indices are:

  • if the formula is "forward", then \(i_\min = 0\) and \(i_\max = d + p - 1\),

  • if the formula is "backward", then \(i_\min = -(d + p - 1)\) and \(i_\max = 0\),

  • if the formula is "central", then \(d + p\) must be odd. Then \(i_\max = \left\lfloor \frac{d + p - 1}{2} \right\rfloor\) and \(i_\min = -i_\max\).
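The index rules above can be sketched in a few lines of Python. This is a hypothetical helper for illustration only; the class computes these bounds internally and exposes them through get_indices_min_max().

```python
def compute_indices(d, p, direction="central"):
    """Return (imin, imax) for a finite difference formula of
    differentiation order d and precision order p (illustrative sketch)."""
    if direction == "forward":
        return 0, d + p - 1
    if direction == "backward":
        return -(d + p - 1), 0
    if direction == "central":
        if (d + p) % 2 == 0:
            raise ValueError("For a central formula, d + p must be odd.")
        imax = (d + p - 1) // 2
        return -imax, imax
    raise ValueError(f"Unknown direction: {direction!r}")
```

For example, a central formula with d = 3 and p = 2 uses the function values at the five points x - 2h, ..., x + 2h.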

Uses the finite difference formula (see (Shi, Xie, Xuan & Nocedal, 2022) eq. 3.2 page 7):

\[f^{(d)}(x) = \frac{d!}{h^d} \sum_{i = i_{\min}}^{i_\max} c_i f(x + h i) - \frac{d!}{(d + p)!} b_{d + p} f^{(d + p)}(\xi) h^p\]

where \(h > 0\) is the step, \(\boldsymbol{c} \in \mathbb{R}^{d + p}\) is the vector of coefficients, \(i_\min \in \mathbb{Z}\) is the minimum index, \(i_\max \in \mathbb{Z}\) is the maximum index and \(\xi\) is a point between \(x + h \, i_\min\) and \(x + h \, i_\max\). We have \(i_\max \geq i_\min\). The particular values of \(i_\min\) and \(i_\max\) depend on the direction of the F.D. formula, the order of differentiation and the order of precision; see _compute_indices(). The coefficient \(b_{d + p}\) is (see (Shi, Xie, Xuan & Nocedal, 2022) page 7):

\[b_{d + p} = \sum_{i = i_{\min}}^{i_\max} i^{d + p} c_i.\]

The F.D. approximation has order \(p\):

\[f^{(d)}(x) = \frac{d!}{h^d} \sum_{i = i_{\min}}^{i_\max} c_i f(x + h i) + O(h^p)\]

when \(h \rightarrow 0\).

If direction is "central" and if \(p\) is odd, then the order of precision is actually \(p + 1\). This implies that any central F.D. formula has an even order of precision.
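This gain of one order can be checked numerically. The sketch below applies the classical central second-derivative formula (d = 2, p = 1, with coefficients 1/2, -1, 1/2) to the exponential function and halves the step: the error is divided by about 4, confirming an effective order of p + 1 = 2. This is an illustrative check, not part of the class API.

```python
import numpy as np

def central_second_derivative(f, x, h):
    # Central F.D. formula with d = 2, p = 1: coefficients (1/2, -1, 1/2),
    # i.e. (f(x - h) - 2 f(x) + f(x + h)) / h^2.
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / h**2

x = 1.0
exact = np.exp(x)  # the second derivative of exp at x is exp(x)
err_h = abs(central_second_derivative(np.exp, x, 1.0e-2) - exact)
err_h2 = abs(central_second_derivative(np.exp, x, 5.0e-3) - exact)
ratio = err_h / err_h2  # close to 4 = 2^2: the effective order is 2, not 1
```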

The number of coefficients in the system of equations is:

\[n_c = i_{max} - i_{min} + 1.\]

For a "forward" or "backward" finite difference formula, the number of unknowns is equal to:

\[n_c = d + p.\]

For a "central" finite difference formula, the number of unknowns is equal to:

\[\begin{split}n_c = \begin{cases} d + p - 1 & \textrm{ if } d + p \textrm{ is even}, \\ d + p & \textrm{ if } d + p \textrm{ is odd}. \end{cases}\end{split}\]

Let \(A \in \mathbb{R}^{n_c \times n_c}\) be the matrix defined by the equation:

\[a_{ji} = i^j\]

for \(i = i_\min, ..., i_\max\) and \(j = 0, ..., n_c - 1\). The matrix \(A\) is the transpose of a Vandermonde matrix. Let \(\boldsymbol{b} \in \mathbb{R}^{n_c}\) be the right-hand side vector defined by the equation:

\[\begin{split}b_{j} = \begin{cases} 0 & \textrm{ if } j \in \{0, ..., d - 1\}, \\ 1 & \textrm{ if } j = d,\\ 0 & \textrm{ if } j \in \{d + 1, ..., n_c - 1\}, \end{cases}\end{split}\]

for \(j = 0, ..., n_c - 1\). Then the vector of coefficients \(\boldsymbol{c} \in \mathbb{R}^{n_c}\) is the solution of the linear system of equations:

\[A \boldsymbol{c} = \boldsymbol{b}.\]
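The system above can be transcribed with NumPy. This is an illustrative re-implementation under the definitions above; the class solves an equivalent system and exposes the solution via get_coefficients().

```python
import numpy as np

def fd_coefficients(d, p, direction="central"):
    """Solve A c = b for the F.D. coefficients (illustrative sketch)."""
    if direction == "forward":
        imin, imax = 0, d + p - 1
    elif direction == "backward":
        imin, imax = -(d + p - 1), 0
    else:  # "central": d + p must be odd
        imax = (d + p - 1) // 2
        imin = -imax
    indices = np.arange(imin, imax + 1)
    nc = imax - imin + 1
    # A[j, k] = indices[k]**j: the transpose of a Vandermonde matrix
    A = np.vander(indices, nc, increasing=True).T
    b = np.zeros(nc)
    b[d] = 1.0  # b_j = 1 if j = d, 0 otherwise
    return indices, np.linalg.solve(A, b)

# Central first derivative (d = 1, p = 2): the classical (-1/2, 0, 1/2)
indices, c = fd_coefficients(1, 2, "central")
```

With these coefficients, \(f'(x) \approx \frac{1}{h}\left(-\frac{1}{2} f(x-h) + \frac{1}{2} f(x+h)\right)\), the classical central difference.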

These coefficients have some specific properties. Their sum is zero:

\[\sum_{i=i_{min}}^{i_{max}} c_i = 0.\]

For a central formula:

  • if \(d\) is odd, then:

\[c_i = -c_{-i}\]

for \(i=i_{min}, ..., i_{max}\) and \(c_0 = 0\);

  • if \(d\) is even, then:

\[c_i = c_{-i}\]

for \(i=i_{min}, ..., i_{max}\).

For a central formula, if \(p\) is odd, then:

\[\sum_{i=i_{min}}^{i_{max}} i^{d+p} c_i = 0.\]
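These properties can be verified on a small case. The sketch below re-solves the linear system directly for a central formula with d = 2 and p = 1 (coefficients 1/2, -1, 1/2) and checks the zero sum, the symmetry \(c_i = c_{-i}\) and, since p is odd, the vanishing of \(\sum_i i^{d+p} c_i\). This is an illustrative check, independent of the class.

```python
import numpy as np

d, p = 2, 1  # central formula: d + p = 3 is odd
imax = (d + p - 1) // 2
indices = np.arange(-imax, imax + 1)
A = np.vander(indices, len(indices), increasing=True).T  # a_{ji} = i^j
b = np.zeros(len(indices))
b[d] = 1.0
c = np.linalg.solve(A, b)  # (1/2, -1, 1/2)

sum_zero = abs(np.sum(c)) < 1.0e-14     # the sum of the coefficients is 0
symmetric = np.allclose(c, c[::-1])     # c_i = c_{-i} since d is even
b_dp = np.sum(indices ** (d + p) * c)   # vanishes since p is odd
```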
Parameters:
function : function

The function to differentiate.

x : float

The point where the derivative is to be evaluated.

differentiation_order : int

The order of the derivative. For example, differentiation_order = 1 computes the first derivative.

formula_accuracy : int

The order of precision of the formula. For the central F.D. formula, the formula accuracy is necessarily even: if required, the formula accuracy is increased by 1 unit.

direction : str, optional

The direction of the formula: "forward", "backward" or "central".

args : list, optional

A list of optional arguments that the function takes as inputs. By default, there is no extra argument and the calling sequence of the function must be y = function(x). If there are extra arguments, then the calling sequence of the function must be y = function(x, arg1, arg2, ...) where arg1, arg2, ... are the items in the args list.

References

  • David Eberly. "Derivative approximation by finite differences". 2008.

  • Nicholas Maxwell. "Notes on the derivation of Finite Difference kernels on regularly spaced grids, using arbitrary sample points". 2010.

  • Bengt Fornberg. "Classroom Note: Calculation of Weights in Finite Difference Formulas". In: SIAM Review 40.3 (1998), p. 685-691.

  • Bengt Fornberg. "Finite difference method". In: Scholarpedia 6.10 (2011), p. 9685.

  • H.Z. Hassan, A.A. Mohamad and G.E. Atteia. "An algorithm for the finite difference approximation of derivatives with arbitrary degree and order of accuracy". In: Journal of Computational and Applied Mathematics 236.10 (2012), p. 2622-2631. ISSN: 0377-0427.


compute(step)

Computes the degree d approximate derivative of f at point x.

Parameters:
step : float

The finite difference step.

Returns:
z : float

An approximation of the d-th derivative of f at point x.

Raises:
ValueError

If direction is "central", then differentiation_order + formula_accuracy must be odd.

Examples

The next example computes an approximate third derivative of the sin function. To do so, we use a central F.D. formula of order 2.

>>> import numericalderivative as nd
>>> import numpy as np
>>> x = 1.0
>>> differentiation_order = 3  # Compute f'''
>>> formula_accuracy = 2  # Use order 2 precision
>>> formula = nd.GeneralFiniteDifference(
...     np.sin, x, differentiation_order, formula_accuracy
... )
>>> step = 1.0  # A first guess
>>> third_derivative = formula.compute(step)

We can use either forward, backward or central finite differences.

>>> formula = nd.GeneralFiniteDifference(
...     np.sin,
...     x,
...     differentiation_order,
...     formula_accuracy,
...     "forward"
... )

We can compute the optimal step provided the derivative of order d + p = 5 is known. A first guess of this value can be given; otherwise, the default value (equal to 1) is used. The step can then be used to compute the derivative.

>>> formula = nd.GeneralFiniteDifference(
...     np.sin,
...     x,
...     differentiation_order,
...     formula_accuracy,
...     "central"
... )
>>> step, absolute_error = formula.compute_step()
>>> third_derivative = formula.compute(step)
>>> # Set the fifth derivative, if known
>>> fifth_derivative_value = 1.0  # This may work
>>> step, absolute_error = formula.compute_step(fifth_derivative_value)
>>> # Set the absolute error of the function evaluation
>>> absolute_precision = 1.0e-14
>>> step, absolute_error = formula.compute_step(
...     fifth_derivative_value, absolute_precision
... )

Given the step, we can compute the absolute error.

>>> absolute_error = formula.compute_error(step)

compute_b_constant()

Compute the constant b involved in the finite difference formula.

The coefficient \(b_{d + p}\) is (see (Shi, Xie, Xuan & Nocedal, 2022) page 7):

\[b_{d + p} = \sum_{i = i_{\min}}^{i_\max} i^{d + p} c_i.\]
Returns:
b_constant : float

The constant \(b_{d + p}\) of the finite difference formula

compute_error(step, higher_order_derivative_value=1.0, absolute_precision=2.220446049250313e-16)

Computes the total error

The total error is the sum of the rounding error in the finite difference formula and the truncation error in the Taylor expansion (see (Baudin, 2023) eq. (9.16) page 224 and (Shi, Xie, Xuan & Nocedal, 2022) page 7):

\[e(h) = \frac{d! \|\boldsymbol{c}\|_1 \epsilon_f}{h^d} + \frac{d!}{(d + p)!} \left|b_{d + p} f^{(d + p)}(x) \right| h^p\]

where \(h > 0\) is the step, \(d \in \mathbb{N}\) is the order of differentiation, \(p \in \mathbb{N}\) is the order of precision, \(\boldsymbol{c} \in \mathbb{R}^{d + p}\) is the vector of coefficients, \(\epsilon_f > 0\) is the absolute precision of the function evaluation and \(b_{d + p}\) is equal to:

\[b_{d + p} = \sum_{i = i_\min}^{i_\max} i^{d + p} c_i.\]

In the previous equation, the one-norm of the vector of coefficients is:

\[\|\boldsymbol{c}\|_1 = \sum_{i = i_\min}^{i_\max} |c_i|.\]
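This error model can be transcribed directly. The helper below is a hypothetical sketch of the equation above (the class implements the same model in compute_error()), shown on the central first-derivative formula with c = (-1/2, 0, 1/2) and b_3 = 1:

```python
import numpy as np
from math import factorial

def total_error(step, c, d, p, b_dp, higher_derivative=1.0,
                absolute_precision=2.220446049250313e-16):
    # Rounding error of the F.D. formula plus truncation error of the
    # Taylor expansion, as in the equation above.
    rounding = factorial(d) * np.sum(np.abs(c)) * absolute_precision / step**d
    truncation = (factorial(d) / factorial(d + p)
                  * abs(b_dp * higher_derivative) * step**p)
    return rounding + truncation

# Central first derivative: c = (-1/2, 0, 1/2), b_3 = 1
error = total_error(1.0e-5, np.array([-0.5, 0.0, 0.5]), 1, 2, 1.0)
```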
Parameters:
step : float, > 0

The finite difference step.

higher_order_derivative_value : float

The value of the derivative of order differentiation_order + formula_accuracy. For example, if differentiation_order = 2 and formula_accuracy = 3, then this must be the derivative of order 2 + 3 = 5.

absolute_precision : float, > 0

The absolute precision of the function evaluation.

Returns:
absolute_error : float

The absolute error.

References

  • Baudin (2023). Méthodes numériques. Dunod.

  • Shi, H. J. M., Xie, Y., Xuan, M. Q., & Nocedal, J. (2022). Adaptive finite-difference interval estimation for noisy derivative-free optimization. SIAM Journal on Scientific Computing, 44 (4), A2302-A2321.

compute_step(higher_order_derivative_value=1.0, absolute_precision=2.220446049250313e-16)

Computes the optimal step

This step minimizes the total error of the derivative approximation (see compute() and compute_error()), accounting for both the rounding error in the finite difference formula and the truncation error in the Taylor expansion (see (Baudin, 2023) eq. (9.16) page 224 and (Shi, Xie, Xuan & Nocedal, 2022) page 7).

The optimal step is:

\[h^\star = \left(\frac{d}{p} (d + p)! \frac{\|\boldsymbol{c}\|_1}{|b_{d + p}|} \frac{\epsilon_f}{\left|f^{(d + p)}(x)\right|}\right)^{\frac{1}{d + p}}.\]
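For the classical central first derivative (d = 1, p = 2, \(\|\boldsymbol{c}\|_1 = 1\), \(b_3 = 1\)), this formula reduces to the well-known \(h^\star = (3 \epsilon_f / |f'''(x)|)^{1/3}\). The helper below is a hypothetical sketch of the equation above; the class implements it in compute_step().

```python
from math import factorial

def optimal_step(d, p, c_norm_1, b_dp, higher_derivative=1.0,
                 absolute_precision=2.220446049250313e-16):
    # h* from the formula above, balancing rounding and truncation errors.
    return ((d / p) * factorial(d + p) * (c_norm_1 / abs(b_dp))
            * (absolute_precision / abs(higher_derivative))) ** (1.0 / (d + p))

# Central first derivative: reduces to (3 * eps_f / |f'''(x)|)**(1/3)
step = optimal_step(1, 2, 1.0, 1.0)
```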
Parameters:
higher_order_derivative_value : float

The value of the derivative of order differentiation_order + formula_accuracy. For example, if differentiation_order = 2 and formula_accuracy = 3, then this must be the derivative of order 2 + 3 = 5.

absolute_precision : float, > 0

The absolute precision of the function evaluation.

Returns:
step : float

The finite difference step.

absolute_error : float

The absolute error.

References

  • Baudin (2023). Méthodes numériques. Dunod.

  • Shi, H. J. M., Xie, Y., Xuan, M. Q., & Nocedal, J. (2022). Adaptive finite-difference interval estimation for noisy derivative-free optimization. SIAM Journal on Scientific Computing, 44 (4), A2302-A2321.

get_coefficients()

Return the coefficients of the finite difference formula

Returns:
coefficients : np.array(number_of_coefficients)

The coefficients of the F.D. formula

Examples

>>> import numericalderivative as nd
>>> import numpy as np
>>>
>>> def scaled_exp(x):
...     alpha = 1.e6
...     return np.exp(-x / alpha)
>>>
>>> x = 1.0e-2
>>> differentiation_order = 3  # Compute f'''
>>> formula_accuracy = 6  # Use order 6 precision
>>> c = nd.GeneralFiniteDifference(scaled_exp, x, differentiation_order, formula_accuracy).get_coefficients()
>>> c = nd.GeneralFiniteDifference(scaled_exp, x, differentiation_order, formula_accuracy, "forward").get_coefficients()
>>> c = nd.GeneralFiniteDifference(scaled_exp, x, differentiation_order, formula_accuracy, "backward").get_coefficients()
>>> c = nd.GeneralFiniteDifference(scaled_exp, x, differentiation_order, formula_accuracy, "central").get_coefficients()

get_differentiation_order()

Return the differentiation order

Returns:
differentiation_order : int

The differentiation order

get_formula_accuracy()

Return the formula accuracy

Returns:
formula_accuracy : int

The accuracy of the formula

get_function()

Returns the function whose derivative is to be approximated

Returns:
function : function

The function

get_indices_min_max()

Return the indices of the finite difference formula

Returns:
imin : int

The minimum index of the F.D. formula

imax : int

The maximum index of the F.D. formula

Examples

>>> import numericalderivative as nd
>>> import numpy as np
>>>
>>> def scaled_exp(x):
...     alpha = 1.e6
...     return np.exp(-x / alpha)
>>>
>>> x = 1.0e-2
>>> differentiation_order = 3  # Compute f'''
>>> formula_accuracy = 6  # Use order 6 precision
>>> imin, imax = nd.GeneralFiniteDifference(scaled_exp, x, differentiation_order, formula_accuracy).get_indices_min_max()
>>> imin, imax = nd.GeneralFiniteDifference(scaled_exp, x, differentiation_order, formula_accuracy, "forward").get_indices_min_max()
>>> imin, imax = nd.GeneralFiniteDifference(scaled_exp, x, differentiation_order, formula_accuracy, "backward").get_indices_min_max()
>>> imin, imax = nd.GeneralFiniteDifference(scaled_exp, x, differentiation_order, formula_accuracy, "central").get_indices_min_max()

get_x()

Returns the point x where the derivative is to be approximated

Returns:
x : float

The input point