numericalderivative.SteplemanWinarsky
- class numericalderivative.SteplemanWinarsky(function, x, beta=4.0, args=None, verbose=False)
Compute an approximately optimal step for the central finite difference (F.D.) formula of the first derivative.
Uses the central finite difference to compute an approximate value of f'(x). The approximately optimal step for f'(x) is computed using a monotonicity property.
The central F.D. is:
\[f'(x) \approx d(h) := \frac{f(x + h) - f(x - h)}{2 h}\]
where \(f\) is the function, \(x \in \mathbb{R}\) is the input point and \(h > 0\) is the step. In the previous equation, the function \(d\) is the central finite difference formula which is considered in this method. The goal of the method is to compute an approximately optimal \(h^\star\) for the central F.D. formula using function evaluations only. Alternatively, DumontetVignes can be used.
The method introduces the function \(g\) defined by:
\[g(t) = f(x + t) - f(x - t)\]
for any \(t \in \mathbb{R}\). We introduce the monotonic sequence of step sizes \(\{h_i\}_{i \geq 0}\) defined by the equation:
\[h_{i + 1} = \frac{h_i}{\beta}, \quad i=0,1,2,...\]
Then, under some smoothness hypotheses on \(g\), there exists \(N \in \mathbb{N}\) such that for any \(i \geq N\), we have:
\[\left|d(h_{i + 1}) - d(h_i)\right| \leq \left|d(h_{i}) - d(h_{i - 1})\right|.\]
This theorem states that, in exact arithmetic, the sequence \(\left(\left|d(h_{i}) - d(h_{i - 1})\right|\right)_{i \geq N}\) is monotonically non-increasing.
The method starts from an initial step \(h_0 > 0\). It then reduces this step iteratively until the monotonicity property is broken.
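As an illustration only, here is a minimal Python sketch of this iteration built on the plain central difference; the helper names central_fd and sw_step_sketch are hypothetical and this is not the actual implementation of the class:

>>> import numpy as np
>>>
>>> def central_fd(f, x, h):
...     # Plain central finite difference d(h).
...     return (f(x + h) - f(x - h)) / (2.0 * h)
>>>
>>> def sw_step_sketch(f, x, h0, beta=4.0, iteration_maximum=53):
...     # Hypothetical sketch: shrink the step by beta until the
...     # sequence |d(h_{i+1}) - d(h_i)| stops decreasing.
...     h_previous, h_current = h0, h0 / beta
...     diff_previous = abs(central_fd(f, x, h_current) - central_fd(f, x, h_previous))
...     for _ in range(iteration_maximum):
...         h_next = h_current / beta
...         diff_current = abs(central_fd(f, x, h_next) - central_fd(f, x, h_current))
...         if diff_current > diff_previous:
...             return h_current  # monotonicity broken: keep the previous step
...         h_previous, h_current = h_current, h_next
...         diff_previous = diff_current
...     return h_current
>>>
>>> h_near_optimal = sw_step_sketch(np.sin, 1.0, 1.0)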
- Parameters:
- function : function
The function to differentiate.
- x : float
The point where the derivative is to be evaluated.
- beta : float, > 1.0
The reduction factor of h at each iteration. A value of beta closer to 1 can improve the accuracy of the optimum step, but may increase the number of iterations.
- args : list
A list of optional arguments that the function takes as inputs. By default, there is no extra argument and the calling sequence of the function must be y = function(x). If there are extra arguments, then the calling sequence of the function must be y = function(x, arg1, arg2, ...) where arg1, arg2, ..., are the items in the args list. See the example after this parameter list.
- verbose : bool, optional
Set to True to print intermediate messages.
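As an illustration of the args mechanism, here is a hedged sketch of differentiating a function with one extra parameter; the function name scaled_exp_with_parameter is hypothetical:

>>> import numpy as np
>>> import numericalderivative as nd
>>>
>>> def scaled_exp_with_parameter(x, alpha):
...     # alpha is forwarded through the args list at each call
...     return np.exp(-x / alpha)
>>>
>>> algorithm = nd.SteplemanWinarsky(scaled_exp_with_parameter, 1.0e-2, args=[1.0e6])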
Methods
compute_first_derivative(step): Compute an approximate value of f'(x) using central finite difference.
find_step(initial_step[, iteration_maximum]): Compute an approximate optimum step for the central derivative using monotonicity properties.
get_args(): Return the arguments.
get_beta(): Return the multiplier.
get_function(): Return the function.
get_number_of_function_evaluations(): Return the number of function evaluations.
get_step_history(): Return the history of steps during the search.
get_x(): Return the point.
- Returns:
- None.
References
R. S. Stepleman and N. D. Winarsky. Adaptive numerical differentiation. Math. Comp., 33 (1979), 1257-1264.
Examples
Compute the step of a badly scaled function.
>>> import numpy as np
>>> import numericalderivative as nd
>>>
>>> def scaled_exp(x):
...     alpha = 1.0e6
...     return np.exp(-x / alpha)
>>>
>>> x = 1.0e-2
>>> initial_step = 1.0e8
>>> algorithm = nd.SteplemanWinarsky(scaled_exp, x)
>>> h_optimal, number_of_iterations = algorithm.find_step(initial_step)
>>> f_prime_approx = algorithm.compute_first_derivative(h_optimal)
- __init__(function, x, beta=4.0, args=None, verbose=False)
- compute_first_derivative(step)
Compute an approximate value of f'(x) using central finite difference.
The denominator, however, is implemented using equation 3.4 in Stepleman & Winarsky (1979).
- Parameters:
- step : float, > 0
The step size.
- Returns:
- f_prime_approx : float
The approximation of f'(x).
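Continuing the class-level example above, the result can be checked against the exact derivative of scaled_exp (an illustrative check, not part of the class API):

>>> import numpy as np
>>> x = 1.0e-2
>>> alpha = 1.0e6
>>> f_prime_exact = -np.exp(-x / alpha) / alpha  # exact derivative of scaled_exp
>>> absolute_error = abs(f_prime_approx - f_prime_exact)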
- find_step(initial_step, iteration_maximum=53)
Compute an approximate optimum step for the central derivative using monotonicity properties.
This function computes an approximate optimal step h for the central finite difference.
- Parameters:
- initial_step : float, > 0.0
The initial value of the differentiation step. The algorithm produces a sequence of decreasing steps. Hence, the initial step should be an upper bound of the true optimal step. To find such a suitable initial step, the example below provides a heuristic designed by the authors of the method. If the order of magnitude of the third derivative can be guessed, then compute_step() can be used. Moreover, find_initial_step() can help to find an appropriate initial step.
- iteration_maximum : int, optional
The maximum number of iterations.
- Returns:
- estim_step : float
A step size which is close to optimal.
- number_of_iterations : int
The number of iterations required to reach that optimum.
Examples
The following heuristic can be used to set the initial step (see Stepleman and Winarsky (1979), eq. 3.9, page 1261):
>>> beta = 4.0
>>> relative_precision = 1.0e-16
>>> x = 1.0
>>> initial_step = beta * relative_precision ** (1.0 / 3.0) * abs(x)
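The resulting step can then be passed to find_step(). The following sketch continues the lines above, reusing the scaled_exp function from the class-level example:

>>> import numpy as np
>>> import numericalderivative as nd
>>>
>>> def scaled_exp(x):
...     alpha = 1.0e6
...     return np.exp(-x / alpha)
>>>
>>> algorithm = nd.SteplemanWinarsky(scaled_exp, x)
>>> h_optimal, number_of_iterations = algorithm.find_step(initial_step)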
- get_args()
Return the arguments.
- Returns:
- args : list
A list of optional arguments that the function takes as inputs. By default, there is no extra argument and the calling sequence of the function must be y = function(x). If there are extra arguments, then the calling sequence of the function must be y = function(x, arg1, arg2, ...) where arg1, arg2, ..., are the items in the args list.
- get_beta()
Return the multiplier.
- Returns:
- beta : float, > 1.0
The reduction factor of h at each iteration. A value of beta closer to 1 can improve the accuracy of the optimum step, but may increase the number of iterations.
- get_function()
Return the function.
- Returns:
- function : function
The function to differentiate.
- get_number_of_function_evaluations()
Return the number of function evaluations.
- Returns:
- number_of_function_evaluations : int
The number of function evaluations.
- get_step_history()
Return the history of steps during the search.
Let n be the number of iterations. The best step h is not the last one (with index n-1), since that is the step which breaks the monotonicity. The best step has index n-2 (see the example below).
- Returns:
- step_history : list(float)
The list of steps h during intermediate iterations of the search. This is updated by find_step().
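For example, continuing the class-level example, the near-optimal step can be read off the history as follows:

>>> step_history = algorithm.get_step_history()
>>> n = len(step_history)
>>> best_step = step_history[n - 2]  # the last step breaks the monotonicity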
- get_x()
Return the point.
- Returns:
- x : float
The point where the derivative is to be evaluated.