numericalderivative.ShiXieXuanNocedalForward¶
- class numericalderivative.ShiXieXuanNocedalForward(function, x, absolute_precision=1e-15, minimum_test_ratio=1.5, maximum_test_ratio=6.0, args=None, verbose=False)¶
Compute an approximately optimal step for the forward F.D. formula of the first derivative
Uses forward finite difference to compute an approximate value of f'(x):
\[f'(x) \approx \frac{f(x + h) - f(x)}{h}\]where \(f\) is the function, \(x \in \mathbb{R}\) is the point and \(h > 0\) is the differentiation step. If \(f''(x) \neq 0\), then the step which minimizes the total error is (see (Shi, Xie, Xuan & Nocedal, 2022) eq. 2.2 page 4):
\[h^\star = 2 \sqrt{\frac{\epsilon_f}{\left|f''(x)\right|}}\]where \(\epsilon_f > 0\) is the absolute error of the function evaluation. If the order of magnitude of the second derivative can be guessed, then
compute_step() can be used. Alternatively, the goal of ShiXieXuanNocedalForward is to compute \(h^\star\) using function evaluations only, without estimating \(f''(x)\).
The algorithm considers the test ratio (see (Shi, Xie, Xuan & Nocedal, 2022) eq. 2.3 page 4):
\[r(h) = \frac{\left|f(x + 4h) - 4f(x + h) + 3f(x)\right|}{8 \epsilon_f}\]where \(h > 0\) is the step and \(\epsilon_f> 0\) is the absolute precision of evaluation of the function. The goal of the algorithm is to find the step such that (see (Shi, Xie, Xuan & Nocedal, 2022) eq. 2.4 page 4):
\[r_\ell \leq r(h) \leq r_u\]where \(r_\ell > 0\) is the lower bound of the test ratio and \(r_u\) is the upper bound. The algorithm is based on bisection.
If the algorithm succeeds, the method produces a step \(\widetilde{h}\) such that:
\[\widetilde{h} \in \frac{1}{\sqrt{3}} \left[\sqrt{r_\ell - 1}, \sqrt{r_u + 1}\right] h^\star.\]With \(r_\ell = 1.5\) and \(r_u = 6\), the previous interval is:
\[\widetilde{h} \in \left[0.41, 1.5\right] h^\star.\]This method can fail if the second derivative of the function is zero at the point \(x\). In this case, the test ratio is zero and no value of the step \(h\) satisfies the required inequalities. For example, the function \(f(x) = \sin(x)\) has a zero second derivative at \(x = \pm \pi\), so the algorithm fails to compute a suitable step at these points.
The method can also fail if the absolute precision of the function value is set to zero. This can happen if the user computes it from the relative precision and the absolute value of the function: if the function value at the point \(x\) is zero, then the computed absolute precision is zero. For example, the function \(f(x) = x^2\) is zero at the point \(x = 0\).
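A minimal sketch of the first failure mode (independent of the library): for an affine function, the second derivative is identically zero, the numerator of the test ratio cancels up to rounding, and no step can bring \(r(h)\) above \(r_\ell\):

```python
def f(x):
    # Affine function: its second derivative is identically zero.
    return 2.0 * x + 1.0

eps_f = 1.0e-15
x = 1.0
ratios = []
for h in (1.0e-8, 1.0e-4, 1.0, 1.0e4):
    # The numerator f(x + 4h) - 4 f(x + h) + 3 f(x) cancels (up to
    # rounding) whatever the step, so r(h) never reaches r_l = 1.5.
    numerator = abs(f(x + 4.0 * h) - 4.0 * f(x + h) + 3.0 * f(x))
    ratios.append(numerator / (8.0 * eps_f))
```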
- Parameters:
- function : function
The function to differentiate.
- x : float
The point where the derivative is to be evaluated.
- absolute_precision : float, > 0, optional
The absolute precision of evaluation of f. If the function value is close to zero (e.g. for the sin function at x = np.pi, where f(x) is close to 1.0e-16), then the absolute precision cannot always be computed from the relative precision.
- minimum_test_ratio : float, > 1
The minimum value of the test ratio.
- maximum_test_ratio : float, > minimum_test_ratio
The maximum value of the test ratio.
- args : list, optional
A list of optional arguments that the function takes as inputs. By default, there is no extra argument and the calling sequence of the function must be y = function(x). If there are extra arguments, the calling sequence must be y = function(x, arg1, arg2, ...), where arg1, arg2, ... are the items of the args list.
- verbose : bool, optional
Set to True to print intermediate messages.
Methods
compute_first_derivative(step) Compute an approximate value of f'(x) using forward finite difference.
compute_test_ratio(step[, function_values]) Compute the test ratio.
find_step([initial_step, iteration_maximum, ...]) Compute an approximate optimum step for the forward F.D. formula.
get_absolute_precision() Return the absolute precision of the function evaluation.
get_number_of_function_evaluations() Return the number of function evaluations.
get_ratio_min_max() Return the minimum and maximum of the test ratio.
get_step_history() Return the history of steps during the bisection search.
- Returns:
- None.
References
Shi, H. J. M., Xie, Y., Xuan, M. Q., & Nocedal, J. (2022). Adaptive finite-difference interval estimation for noisy derivative-free optimization. SIAM Journal on Scientific Computing, 44 (4), A2302-A2321.
Examples
Compute the step of a badly scaled function.
>>> import numpy as np
>>> import numericalderivative as nd
>>> def scaled_exp(x):
...     alpha = 1.e6
...     return np.exp(-x / alpha)
...
>>> x = 1.0e-2
>>> algorithm = nd.ShiXieXuanNocedalForward(scaled_exp, x)
>>> h_optimal, number_of_iterations = algorithm.find_step()
>>> f_prime_approx = algorithm.compute_first_derivative(h_optimal)
Set the initial step.
>>> initial_step = 1.0e8
>>> h_optimal, number_of_iterations = algorithm.find_step(initial_step)
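For comparison, a pure-math sketch (independent of the library; the names are illustrative) of the accuracy obtained with the theoretical step \(h^\star = 2\sqrt{\epsilon_f / |f''(x)|}\) on \(f = \exp\), where the exact derivative is known:

```python
import math

# For f = exp, f'(x) = f''(x) = exp(x): the optimal forward-difference
# step and the resulting error can be checked against the exact derivative.
eps_f = 1.0e-15
x = 1.0
h_star = 2.0 * math.sqrt(eps_f / math.exp(x))  # about 3.8e-8
f_prime_approx = (math.exp(x + h_star) - math.exp(x)) / h_star
absolute_error = abs(f_prime_approx - math.exp(x))
# The total error is on the order of 2 * sqrt(eps_f * |f''(x)|), i.e. ~1e-7.
```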
- __init__(function, x, absolute_precision=1e-15, minimum_test_ratio=1.5, maximum_test_ratio=6.0, args=None, verbose=False)¶
- compute_first_derivative(step)¶
Compute an approximate value of f'(x) using forward finite difference.
- Parameters:
- step : float, > 0
The step size.
- Returns:
- f_prime_approx : float
The approximation of f'(x).
- compute_test_ratio(step, function_values=None)¶
Compute the test ratio
- Parameters:
- step : float, > 0
The finite difference step.
- function_values : list of 3 floats, optional
The function values f(x), f(x + h), f(x + 4h). If function_values is None, then the function values are computed.
- Returns:
- test_ratio : float, > 0
The test ratio.
- find_step(initial_step=None, iteration_maximum=53, logscale=True)¶
Compute an approximate optimum step for the forward F.D. formula.
If it is not provided by the user, the default initial step is based on the hypothesis that the second derivative is equal to 1 and is computed from compute_step(). This is slightly different from the step suggested in (Shi, Xie, Xuan & Nocedal, 2022), which involves a \(\sqrt{3}\) factor. This initial guess is not always accurate and can lead to failure of the algorithm.
- Parameters:
- initial_step : float, > 0, optional
The initial step in the algorithm.
- iteration_maximum : int, optional
The maximum number of iterations.
- logscale : bool, optional
Set to True to update the step on a logarithmic scale during the search; set to False to use a linear scale.
- Returns:
- estim_step : float
A step size which is close to optimal.
- number_of_iterations : int
The number of iterations required to reach that optimum.
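The effect of logscale can be sketched as follows (an illustration of the idea, not the library's internals): on a logarithmic scale, the midpoint of a bracket is the geometric mean, which handles candidate steps spanning many orders of magnitude better than the arithmetic mean:

```python
import math

a, b = 1.0e-10, 1.0e-2  # bracket spanning eight orders of magnitude
linear_midpoint = (a + b) / 2.0  # about 5e-3: effectively ignores the small end
log_midpoint = math.sqrt(a * b)  # 1e-6: halfway between a and b on a log scale
```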
- get_absolute_precision()¶
Return the absolute precision of the function evaluation
- Returns:
- absolute_precision : float
The absolute precision of evaluation of f.
- get_number_of_function_evaluations()¶
Return the number of function evaluations.
- Returns:
- number_of_function_evaluations : int
The number of function evaluations.
- get_ratio_min_max()¶
Return the minimum and maximum of the test ratio
- Returns:
- minimum_test_ratio : float, > 0
The lower bound of the test ratio.
- maximum_test_ratio : float, > 0
The upper bound of the test ratio.
- get_step_history()¶
Return the history of steps during the bisection search.
- Returns:
- step_history : list of floats
The list of steps during the intermediate iterations of the bisection search. This is updated by find_step().