.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_example/plot_finite_differences.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_example_plot_finite_differences.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_example_plot_finite_differences.py:


Use the finite difference formulas
==================================

This example shows how to use finite difference (F.D.) formulas.

References
----------
- M. Baudin (2023). Méthodes numériques. Dunod.

.. GENERATED FROM PYTHON SOURCE LINES 16-20

.. code-block:: Python

    import numericalderivative as nd
    import numpy as np
    import pylab as pl

.. GENERATED FROM PYTHON SOURCE LINES 21-23

Compute the first derivative using forward F.D. formula
-------------------------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 26-27

This is the function we want to compute the derivative of.

.. GENERATED FROM PYTHON SOURCE LINES 27-32

.. code-block:: Python

    def scaled_exp(x):
        alpha = 1.0e6
        return np.exp(-x / alpha)

.. GENERATED FROM PYTHON SOURCE LINES 33-34

Use the F.D. formula.

.. GENERATED FROM PYTHON SOURCE LINES 34-40

.. code-block:: Python

    x = 1.0
    finite_difference = nd.FirstDerivativeForward(scaled_exp, x)
    step = 1.0e-3  # A first guess
    f_prime_approx = finite_difference.compute(step)
    print(f"Approximate f'(x) = {f_prime_approx}")

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Approximate f'(x) = -9.999989725174565e-07

.. GENERATED FROM PYTHON SOURCE LINES 41-42

To check our result, we define the exact first derivative.

.. GENERATED FROM PYTHON SOURCE LINES 45-50

.. code-block:: Python

    def scaled_exp_prime(x):
        alpha = 1.0e6
        return -np.exp(-x / alpha) / alpha

.. GENERATED FROM PYTHON SOURCE LINES 51-52

Compute the exact derivative and the absolute error.

.. GENERATED FROM PYTHON SOURCE LINES 52-57

.. code-block:: Python

    f_prime_exact = scaled_exp_prime(x)
    print(f"Exact f'(x) = {f_prime_exact}")
    absolute_error = abs(f_prime_approx - f_prime_exact)
    print(f"Absolute error = {absolute_error}")

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Exact f'(x) = -9.999990000005e-07
    Absolute error = 2.748304361188279e-14

.. GENERATED FROM PYTHON SOURCE LINES 58-59

Define the error function: this will be useful later.

.. GENERATED FROM PYTHON SOURCE LINES 62-74

.. code-block:: Python

    def compute_absolute_error(step, x=1.0, verbose=True):
        finite_difference = nd.FirstDerivativeForward(scaled_exp, x)
        f_prime_approx = finite_difference.compute(step)
        f_prime_exact = scaled_exp_prime(x)
        absolute_error = abs(f_prime_approx - f_prime_exact)
        if verbose:
            print(f"Approximate f'(x) = {f_prime_approx}")
            print(f"Exact f'(x) = {f_prime_exact}")
            print(f"Absolute error = {absolute_error}")
        return absolute_error

.. GENERATED FROM PYTHON SOURCE LINES 75-77

Compute the exact step for the forward F.D. formula
---------------------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 79-82

This step depends on the second derivative.
First, we assume that it is unknown and use a first guess of it, equal to 1.

.. GENERATED FROM PYTHON SOURCE LINES 84-90

.. code-block:: Python

    second_derivative_value = 1.0
    step, absolute_error = nd.FirstDerivativeForward.compute_step(second_derivative_value)
    print(f"Approximately optimal step (using f''(x) = 1) = {step}")
    print(f"Approximate absolute error = {absolute_error}")
    _ = compute_absolute_error(step, verbose=True)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Approximately optimal step (using f''(x) = 1) = 2e-08
    Approximate absolute error = 2e-08
    Approximate f'(x) = -9.992007171418978e-07
    Exact f'(x) = -9.999990000005e-07
    Absolute error = 7.982828586023276e-10
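
As a sanity check, the step reported above can be reproduced by hand.
The sketch below assumes the classical total error model of the forward
formula, ``e(h) = h |f''(x)| / 2 + 2 eps / h``, with an absolute
function-evaluation precision ``eps = 1.0e-16``; both the model and this
value of ``eps`` are assumptions here, not read from the module.

```python
import math

# Assumed error model for the forward F.D. formula (an assumption,
# not taken from the numericalderivative source):
#     e(h) = h * |f2| / 2 + 2 * eps / h
# Setting de/dh = 0 gives h_star = 2 * sqrt(eps / |f2|) and the
# corresponding minimal error e(h_star) = 2 * sqrt(eps * |f2|).
eps = 1.0e-16  # assumed absolute precision of function evaluations
f2 = 1.0  # first guess of the second derivative, as in the example
h_star = 2.0 * math.sqrt(eps / abs(f2))
e_star = 2.0 * math.sqrt(eps * abs(f2))
print(f"h* = {h_star}")  # h* = 2e-08
print(f"e(h*) = {e_star}")  # e(h*) = 2e-08
```

Under these assumptions, both values agree with the output of
`compute_step()` above.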

.. GENERATED FROM PYTHON SOURCE LINES 91-94

We see that the new step is much better than our initial guess:
the approximately optimal step is much smaller,
which leads to a smaller absolute error.

.. GENERATED FROM PYTHON SOURCE LINES 96-98

In our particular example, the second derivative is known:
let us use this information and compute the exact optimal step.

.. GENERATED FROM PYTHON SOURCE LINES 101-106

.. code-block:: Python

    def scaled_exp_2nd_derivative(x):
        alpha = 1.0e6
        return np.exp(-x / alpha) / (alpha**2)

.. GENERATED FROM PYTHON SOURCE LINES 107-115

.. code-block:: Python

    second_derivative_value = scaled_exp_2nd_derivative(x)
    print(f"Exact second derivative f''(x) = {second_derivative_value}")
    step, absolute_error = nd.FirstDerivativeForward.compute_step(second_derivative_value)
    print(f"Approximately optimal step (using exact f''(x)) = {step}")
    print(f"Approximate absolute error = {absolute_error}")
    _ = compute_absolute_error(step, verbose=True)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Exact second derivative f''(x) = 9.999990000005e-13
    Approximately optimal step (using exact f''(x)) = 0.0200000100000025
    Approximate absolute error = 1.99999900000025e-14
    Approximate f'(x) = -9.999989887714385e-07
    Exact f'(x) = -9.999990000005e-07
    Absolute error = 1.1229061571641635e-14

.. GENERATED FROM PYTHON SOURCE LINES 116-176

.. code-block:: Python

    def plot_step_sensitivity(
        finite_difference,
        x,
        function_derivative,
        step_array,
        higher_derivative_value,
        relative_error=1.0e-16,
    ):
        """
        Compute the approximate derivative using the given F.D. formula.

        Create a plot representing the absolute error depending on the step.

        Parameters
        ----------
        finite_difference : FiniteDifferenceFormula
            The F.D. formula.
        x : float
            The input point.
        function_derivative : function
            The exact derivative of the function.
        step_array : array(n_points)
            The array of steps to consider.
        higher_derivative_value : float
            The value of the higher derivative used to compute the
            optimal step of the F.D. formula.
        relative_error : float, optional
            The relative precision of the function evaluations.
        """
        number_of_points = len(step_array)
        error_array = np.zeros((number_of_points))
        for i in range(number_of_points):
            f_prime_approx = finite_difference.compute(step_array[i])
            error_array[i] = abs(f_prime_approx - function_derivative(x))
        pl.figure()
        pl.plot(step_array, error_array, label="Computed")
        pl.title(finite_difference.__class__.__name__)
        pl.xlabel("h")
        pl.ylabel("Error")
        pl.xscale("log")
        pl.legend(bbox_to_anchor=(1.0, 1.0))
        pl.yscale("log")
        # Compute the error using the model
        function = finite_difference.get_function().get_function()
        absolute_precision_function_eval = abs(function(x)) * relative_error
        error_array = np.zeros((number_of_points))
        for i in range(number_of_points):
            error_array[i] = finite_difference.compute_error(
                step_array[i], higher_derivative_value, absolute_precision_function_eval
            )
        pl.plot(step_array, error_array, "--", label="Model")
        # Compute the optimal step size and optimal error
        optimal_step, absolute_error = finite_difference.compute_step(
            higher_derivative_value, absolute_precision_function_eval
        )
        pl.plot([optimal_step], [absolute_error], "o", label=r"$(h^*, e(h^*))$")
        # pl.tight_layout()
        return

.. GENERATED FROM PYTHON SOURCE LINES 177-181

For the forward F.D. formula, the absolute error is known if the second
derivative value can be computed.
The next script uses this feature of the `compute_error()` method to plot
the upper bound of the error.

.. GENERATED FROM PYTHON SOURCE LINES 183-190

.. code-block:: Python

    number_of_points = 1000
    step_array = np.logspace(-10.0, 5.0, number_of_points)
    finite_difference = nd.FirstDerivativeForward(scaled_exp, x)
    plot_step_sensitivity(
        finite_difference, x, scaled_exp_prime, step_array, second_derivative_value
    )

.. image-sg:: /auto_example/images/sphx_glr_plot_finite_differences_001.png
    :alt: FirstDerivativeForward
    :srcset: /auto_example/images/sphx_glr_plot_finite_differences_001.png
    :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 191-194

These features are available with most F.D. formulas:
the next sections show how the module provides the exact optimal step
and the exact error for other formulas.

.. GENERATED FROM PYTHON SOURCE LINES 196-198

Central F.D. formula for first derivative
-----------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 200-201

Let us see how this behaves with central F.D.

.. GENERATED FROM PYTHON SOURCE LINES 201-210

.. code-block:: Python

    # For the central F.D. formula, the exact step depends on the
    # third derivative
    def scaled_exp_3d_derivative(x):
        alpha = 1.0e6
        return -np.exp(-x / alpha) / (alpha**3)

.. GENERATED FROM PYTHON SOURCE LINES 211-219

.. code-block:: Python

    number_of_points = 1000
    step_array = np.logspace(-10.0, 5.0, number_of_points)
    finite_difference = nd.FirstDerivativeCentral(scaled_exp, x)
    third_derivative_value = scaled_exp_3d_derivative(x)
    plot_step_sensitivity(
        finite_difference, x, scaled_exp_prime, step_array, third_derivative_value
    )

.. image-sg:: /auto_example/images/sphx_glr_plot_finite_differences_002.png
    :alt: FirstDerivativeCentral
    :srcset: /auto_example/images/sphx_glr_plot_finite_differences_002.png
    :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 220-222

Central F.D. formula for second derivative
------------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 225-226

Let us see how this behaves with central F.D. for the second derivative.

.. GENERATED FROM PYTHON SOURCE LINES 229-231

For the central F.D. formula of the second derivative, the exact step
depends on the fourth derivative.

.. GENERATED FROM PYTHON SOURCE LINES 231-236

.. code-block:: Python

    def scaled_exp_4th_derivative(x):
        alpha = 1.0e6
        return np.exp(-x / alpha) / (alpha**4)
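
Before plotting, the central second-derivative formula can also be checked
by hand. The sketch below evaluates the textbook formula
``(f(x + h) - 2 f(x) + f(x - h)) / h^2`` directly; that this is exactly the
formula behind ``nd.SecondDerivativeCentral`` is an assumption, not checked
against the source. Because ``scaled_exp`` varies very slowly, a large step
works well here.

```python
import numpy as np

def scaled_exp(x):
    alpha = 1.0e6
    return np.exp(-x / alpha)

def scaled_exp_2nd_derivative(x):
    alpha = 1.0e6
    return np.exp(-x / alpha) / (alpha**2)

x = 1.0
h = 1.0e2  # a large step suits this slowly varying function
# Textbook central F.D. approximation of the second derivative
f_second_approx = (scaled_exp(x + h) - 2.0 * scaled_exp(x) + scaled_exp(x - h)) / h**2
absolute_error = abs(f_second_approx - scaled_exp_2nd_derivative(x))
print(f"Absolute error = {absolute_error}")  # tiny compared to f''(x) itself, about 1e-12
```

A small step such as ``h = 1.0e-2`` would actually be worse here: the
difference ``f(x + h) - 2 f(x) + f(x - h)`` would be dominated by rounding
error, which is why the sensitivity plots in this section explore steps up
to 1e7.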

.. GENERATED FROM PYTHON SOURCE LINES 237-245

.. code-block:: Python

    number_of_points = 1000
    step_array = np.logspace(-5.0, 7.0, number_of_points)
    finite_difference = nd.SecondDerivativeCentral(scaled_exp, x)
    fourth_derivative_value = scaled_exp_4th_derivative(x)
    plot_step_sensitivity(
        finite_difference, x, scaled_exp_2nd_derivative, step_array, fourth_derivative_value
    )

.. image-sg:: /auto_example/images/sphx_glr_plot_finite_differences_003.png
    :alt: SecondDerivativeCentral
    :srcset: /auto_example/images/sphx_glr_plot_finite_differences_003.png
    :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 246-248

Central F.D. formula for third derivative
-----------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 251-252

Let us see how this behaves with central F.D. for the third derivative.

.. GENERATED FROM PYTHON SOURCE LINES 255-257

For the central F.D. formula of the third derivative, the exact step
depends on the fifth derivative.

.. GENERATED FROM PYTHON SOURCE LINES 257-262

.. code-block:: Python

    def scaled_exp_5th_derivative(x):
        alpha = 1.0e6
        return np.exp(-x / alpha) / (alpha**5)

.. GENERATED FROM PYTHON SOURCE LINES 263-271

.. code-block:: Python

    number_of_points = 1000
    step_array = np.logspace(-5.0, 7.0, number_of_points)
    finite_difference = nd.ThirdDerivativeCentral(scaled_exp, x)
    fifth_derivative_value = scaled_exp_5th_derivative(x)
    plot_step_sensitivity(
        finite_difference, x, scaled_exp_3d_derivative, step_array, fifth_derivative_value
    )

.. image-sg:: /auto_example/images/sphx_glr_plot_finite_differences_004.png
    :alt: ThirdDerivativeCentral
    :srcset: /auto_example/images/sphx_glr_plot_finite_differences_004.png
    :class: sphx-glr-single-img

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 0.615 seconds)

.. _sphx_glr_download_auto_example_plot_finite_differences.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: plot_finite_differences.ipynb <plot_finite_differences.ipynb>`

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: plot_finite_differences.py <plot_finite_differences.py>`

        .. container:: sphx-glr-download sphx-glr-download-zip

            :download:`Download zipped: plot_finite_differences.zip <plot_finite_differences.zip>`