.. DO NOT EDIT. .. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY. .. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE: .. "auto_example/plot_openturns.py" .. LINE NUMBERS ARE GIVEN BELOW. .. only:: html .. note:: :class: sphx-glr-download-link-note :ref:`Go to the end ` to download the full example code. .. rst-class:: sphx-glr-example-title .. _sphx_glr_auto_example_plot_openturns.py: Applies Stepleman & Winarsky method to an OpenTURNS function ============================================================ .. GENERATED FROM PYTHON SOURCE LINES 10-16 .. code-block:: Python import openturns as ot import numericalderivative as nd from openturns.usecases import chaboche_model from openturns.usecases import cantilever_beam from openturns.usecases import fireSatellite_function .. GENERATED FROM PYTHON SOURCE LINES 17-21 Chaboche model -------------- Load the Chaboche model .. GENERATED FROM PYTHON SOURCE LINES 21-27 .. code-block:: Python cm = chaboche_model.ChabocheModel() model = ot.Function(cm.model) inputMean = cm.inputDistribution.getMean() print(f"inputMean = {inputMean}") meanStrain, meanR, meanC, meanGamma = inputMean .. rst-class:: sphx-glr-script-out .. code-block:: none inputMean = [0.035,7.5e+08,2.75e+09,10] .. GENERATED FROM PYTHON SOURCE LINES 28-29 Print the derivative from OpenTURNS .. GENERATED FROM PYTHON SOURCE LINES 29-33 .. code-block:: Python derivative = model.gradient(inputMean) print(f"derivative = ") derivative .. rst-class:: sphx-glr-script-out .. code-block:: none derivative = .. raw:: html

[[ 1.93789e+09 ]
[ 1 ]
[ 0.0297619 ]
[ -1.33845e+06 ]]



.. GENERATED FROM PYTHON SOURCE LINES 34-43 Here is the derivative with default step size in OpenTURNS : .. code-block:: derivative = [[ 1.93789e+09 ] [ 1 ] [ 0.0297619 ] [ -1.33845e+06 ]] .. GENERATED FROM PYTHON SOURCE LINES 45-46 Print the gradient function .. GENERATED FROM PYTHON SOURCE LINES 46-50 .. code-block:: Python gradient = model.getGradient() gradient .. raw:: html

No analytical gradient available. Try using finite difference instead.



.. GENERATED FROM PYTHON SOURCE LINES 51-53 Derivative with respect to strain Define a function which takes only strain as input and returns a float .. GENERATED FROM PYTHON SOURCE LINES 53-59 .. code-block:: Python def functionStrain(strain, r, c, gamma): x = [strain, r, c, gamma] sigma = model(x) return sigma[0] .. GENERATED FROM PYTHON SOURCE LINES 60-61 Algorithm to detect h* for Strain .. GENERATED FROM PYTHON SOURCE LINES 61-73 .. code-block:: Python h0 = 1.0e0 args = [meanR, meanC, meanGamma] algorithm = nd.SteplemanWinarsky(functionStrain, meanStrain, args=args, verbose=True) h_optimal, iterations = algorithm.find_step(h0) number_of_function_evaluations = algorithm.get_number_of_function_evaluations() print("Optimum h =", h_optimal) print("iterations =", iterations) print("Function evaluations =", number_of_function_evaluations) f_prime_approx = algorithm.compute_first_derivative(h_optimal) print(f"Derivative wrt strain= {f_prime_approx:.17e}") .. rst-class:: sphx-glr-script-out .. code-block:: none + find_step() initial_step=1.000e+00 number_of_iterations=0, h=2.5000e-01, |FD(h_current) - FD(h_previous)|=2.1296e+12 number_of_iterations=1, h=6.2500e-02, |FD(h_current) - FD(h_previous)|=2.6233e+09 number_of_iterations=2, h=1.5625e-02, |FD(h_current) - FD(h_previous)|=1.2076e+08 number_of_iterations=3, h=3.9062e-03, |FD(h_current) - FD(h_previous)|=7.4021e+06 number_of_iterations=4, h=9.7656e-04, |FD(h_current) - FD(h_previous)|=4.6207e+05 number_of_iterations=5, h=2.4414e-04, |FD(h_current) - FD(h_previous)|=2.8877e+04 number_of_iterations=6, h=6.1035e-05, |FD(h_current) - FD(h_previous)|=1.8048e+03 number_of_iterations=7, h=1.5259e-05, |FD(h_current) - FD(h_previous)|=1.1280e+02 number_of_iterations=8, h=3.8147e-06, |FD(h_current) - FD(h_previous)|=7.0430e+00 number_of_iterations=9, h=9.5367e-07, |FD(h_current) - FD(h_previous)|=4.5312e-01 number_of_iterations=10, h=2.3842e-07, |FD(h_current) - FD(h_previous)|=0.0000e+00 Stop because zero difference. Optimum h = 9.5367431640625e-07 iterations = 10 Function evaluations = 24 Derivative wrt strain= 1.93789224675000000e+09 .. GENERATED FROM PYTHON SOURCE LINES 74-76 Derivative with respect to R Define a function which takes only R as input and returns a float .. GENERATED FROM PYTHON SOURCE LINES 76-82 .. code-block:: Python def functionR(r, strain, c, gamma): x = [strain, r, c, gamma] sigma = model(x) return sigma[0] .. GENERATED FROM PYTHON SOURCE LINES 83-84 Algorithm to detect h* for R .. GENERATED FROM PYTHON SOURCE LINES 84-96 .. code-block:: Python h0 = 1.0e9 args = [meanStrain, meanC, meanGamma] algorithm = nd.SteplemanWinarsky(functionR, meanR, args=args, verbose=True) h_optimal, iterations = algorithm.find_step(h0) number_of_function_evaluations = algorithm.get_number_of_function_evaluations() print(f"Optimum h = {h_optimal:e}") print("iterations =", iterations) print("Function evaluations =", number_of_function_evaluations) f_prime_approx = algorithm.compute_first_derivative(h_optimal) print(f"Derivative wrt R= {f_prime_approx:.17e}") .. rst-class:: sphx-glr-script-out .. code-block:: none + find_step() initial_step=1.000e+09 number_of_iterations=0, h=2.5000e+08, |FD(h_current) - FD(h_previous)|=2.2204e-16 number_of_iterations=1, h=6.2500e+07, |FD(h_current) - FD(h_previous)|=2.2204e-16 number_of_iterations=2, h=1.5625e+07, |FD(h_current) - FD(h_previous)|=0.0000e+00 Stop because zero difference. Optimum h = 6.250000e+07 iterations = 2 Function evaluations = 8 Derivative wrt R= 1.00000000000000000e+00 .. GENERATED FROM PYTHON SOURCE LINES 97-99 Derivative with respect to C Define a function which takes only C as input and returns a float .. GENERATED FROM PYTHON SOURCE LINES 99-105 .. code-block:: Python def functionR(c, strain, r, gamma): x = [strain, r, c, gamma] sigma = model(x) return sigma[0] .. GENERATED FROM PYTHON SOURCE LINES 106-107 Algorithm to detect h* for C .. GENERATED FROM PYTHON SOURCE LINES 107-119 .. code-block:: Python h0 = 1.0e15 args = [meanStrain, meanR, meanGamma] algorithm = nd.SteplemanWinarsky(functionR, meanC, args=args, verbose=True) h_optimal, iterations = algorithm.find_step(h0) number_of_function_evaluations = algorithm.get_number_of_function_evaluations() print(f"Optimum h = {h_optimal:e}") print("iterations =", iterations) print("Function evaluations =", number_of_function_evaluations) f_prime_approx = algorithm.compute_first_derivative(h_optimal) print(f"Derivative wrt C= {f_prime_approx:.17e}") .. rst-class:: sphx-glr-script-out .. code-block:: none + find_step() initial_step=1.000e+15 number_of_iterations=0, h=2.5000e+14, |FD(h_current) - FD(h_previous)|=3.4694e-18 number_of_iterations=1, h=6.2500e+13, |FD(h_current) - FD(h_previous)|=3.4694e-18 number_of_iterations=2, h=1.5625e+13, |FD(h_current) - FD(h_previous)|=0.0000e+00 Stop because zero difference. Optimum h = 6.250000e+13 iterations = 2 Function evaluations = 8 Derivative wrt C= 2.95311910281286574e-02 .. GENERATED FROM PYTHON SOURCE LINES 120-122 Derivative with respect to Gamma Define a function which takes only C as input and returns a float .. GENERATED FROM PYTHON SOURCE LINES 122-128 .. code-block:: Python def functionGamma(gamma, strain, r, c): x = [strain, r, c, gamma] sigma = model(x) return sigma[0] .. GENERATED FROM PYTHON SOURCE LINES 129-130 Algorithm to detect h* for Gamma .. GENERATED FROM PYTHON SOURCE LINES 130-142 .. code-block:: Python h0 = 1.0e0 args = [meanStrain, meanR, meanC] algorithm = nd.SteplemanWinarsky(functionGamma, meanGamma, args=args, verbose=True) h_optimal, iterations = algorithm.find_step(h0) number_of_function_evaluations = algorithm.get_number_of_function_evaluations() print(f"Optimum h = {h_optimal:e}") print("iterations =", iterations) print("Function evaluations =", number_of_function_evaluations) f_prime_approx = algorithm.compute_first_derivative(h_optimal) print(f"Derivative wrt Gamma= {f_prime_approx:.17e}") .. rst-class:: sphx-glr-script-out .. code-block:: none + find_step() initial_step=1.000e+00 number_of_iterations=0, h=2.5000e-01, |FD(h_current) - FD(h_previous)|=1.2204e+02 number_of_iterations=1, h=6.2500e-02, |FD(h_current) - FD(h_previous)|=7.6272e+00 number_of_iterations=2, h=1.5625e-02, |FD(h_current) - FD(h_previous)|=4.7670e-01 number_of_iterations=3, h=3.9062e-03, |FD(h_current) - FD(h_previous)|=2.9785e-02 number_of_iterations=4, h=9.7656e-04, |FD(h_current) - FD(h_previous)|=1.8768e-03 number_of_iterations=5, h=2.4414e-04, |FD(h_current) - FD(h_previous)|=1.2207e-04 number_of_iterations=6, h=6.1035e-05, |FD(h_current) - FD(h_previous)|=2.4414e-04 Stop because no monotony anymore. Optimum h = 2.441406e-04 iterations = 6 Function evaluations = 16 Derivative wrt Gamma= -1.33845466918945312e+06 .. GENERATED FROM PYTHON SOURCE LINES 143-144 Derivative with respect to [strain, r, c, gamma] .. GENERATED FROM PYTHON SOURCE LINES 144-156 .. code-block:: Python def genericFunction(x, xIndex, referenceInput): inputDimension = referenceInput.getDimension() complementIndices = [i for i in range(inputDimension) if i != xIndex] modelInput = ot.Point(inputDimension) modelInput[xIndex] = x for i in complementIndices: modelInput[i] = referenceInput[i] y = model(modelInput) modelOutput = y[0] return modelOutput .. GENERATED FROM PYTHON SOURCE LINES 157-158 Default step size for all components .. GENERATED FROM PYTHON SOURCE LINES 158-186 .. code-block:: Python initialStep = ot.Point([1.0e0, 1.0e8, 1.0e8, 1.0e0]) inputDimension = model.getInputDimension() referenceInput = cm.inputDistribution.getMean() inputDescription = cm.inputDistribution.getDescription() optimalStep = ot.Point(inputDimension) for xIndex in range(inputDimension): inputMarginal = referenceInput[xIndex] print( f"+ Derivative with respect to {inputDescription[xIndex]} " f"at point {inputMarginal}" ) args = [xIndex, referenceInput] algorithm = nd.SteplemanWinarsky( genericFunction, inputMarginal, args=args, verbose=True, ) h_optimal, iterations = algorithm.find_step(initialStep[xIndex]) number_of_function_evaluations = algorithm.get_number_of_function_evaluations() print(f" Optimum h = {h_optimal:e}") print(" Iterations =", iterations) print(" Function evaluations =", number_of_function_evaluations) f_prime_approx = algorithm.compute_first_derivative(h_optimal) print(f" Derivative wrt {inputDescription[xIndex]}= {f_prime_approx:.17e}") # Store optimal point optimalStep[xIndex] = h_optimal .. rst-class:: sphx-glr-script-out .. code-block:: none + Derivative with respect to Strain at point 0.035 + find_step() initial_step=1.000e+00 number_of_iterations=0, h=2.5000e-01, |FD(h_current) - FD(h_previous)|=2.1296e+12 number_of_iterations=1, h=6.2500e-02, |FD(h_current) - FD(h_previous)|=2.6233e+09 number_of_iterations=2, h=1.5625e-02, |FD(h_current) - FD(h_previous)|=1.2076e+08 number_of_iterations=3, h=3.9062e-03, |FD(h_current) - FD(h_previous)|=7.4021e+06 number_of_iterations=4, h=9.7656e-04, |FD(h_current) - FD(h_previous)|=4.6207e+05 number_of_iterations=5, h=2.4414e-04, |FD(h_current) - FD(h_previous)|=2.8877e+04 number_of_iterations=6, h=6.1035e-05, |FD(h_current) - FD(h_previous)|=1.8048e+03 number_of_iterations=7, h=1.5259e-05, |FD(h_current) - FD(h_previous)|=1.1280e+02 number_of_iterations=8, h=3.8147e-06, |FD(h_current) - FD(h_previous)|=7.0430e+00 number_of_iterations=9, h=9.5367e-07, |FD(h_current) - FD(h_previous)|=4.5312e-01 number_of_iterations=10, h=2.3842e-07, |FD(h_current) - FD(h_previous)|=0.0000e+00 Stop because zero difference. Optimum h = 9.536743e-07 Iterations = 10 Function evaluations = 24 Derivative wrt Strain= 1.93789224675000000e+09 + Derivative with respect to R at point 750000000.0000002 + find_step() initial_step=1.000e+08 number_of_iterations=0, h=2.5000e+07, |FD(h_current) - FD(h_previous)|=0.0000e+00 Stop because zero difference. Optimum h = 1.000000e+08 Iterations = 0 Function evaluations = 4 Derivative wrt R= 1.00000000000000000e+00 + Derivative with respect to C at point 2750000000.0 + find_step() initial_step=1.000e+08 number_of_iterations=0, h=2.5000e+07, |FD(h_current) - FD(h_previous)|=1.1935e-15 number_of_iterations=1, h=6.2500e+06, |FD(h_current) - FD(h_previous)|=2.3870e-15 Stop because no monotony anymore. Optimum h = 2.500000e+07 Iterations = 1 Function evaluations = 6 Derivative wrt C= 2.95311910281300556e-02 + Derivative with respect to Gamma at point 10.0 + find_step() initial_step=1.000e+00 number_of_iterations=0, h=2.5000e-01, |FD(h_current) - FD(h_previous)|=1.2204e+02 number_of_iterations=1, h=6.2500e-02, |FD(h_current) - FD(h_previous)|=7.6272e+00 number_of_iterations=2, h=1.5625e-02, |FD(h_current) - FD(h_previous)|=4.7670e-01 number_of_iterations=3, h=3.9062e-03, |FD(h_current) - FD(h_previous)|=2.9785e-02 number_of_iterations=4, h=9.7656e-04, |FD(h_current) - FD(h_previous)|=1.8768e-03 number_of_iterations=5, h=2.4414e-04, |FD(h_current) - FD(h_previous)|=1.2207e-04 number_of_iterations=6, h=6.1035e-05, |FD(h_current) - FD(h_previous)|=2.4414e-04 Stop because no monotony anymore. Optimum h = 2.441406e-04 Iterations = 6 Function evaluations = 16 Derivative wrt Gamma= -1.33845466918945312e+06 .. GENERATED FROM PYTHON SOURCE LINES 187-190 .. code-block:: Python print("The optimal step for central finite difference is") print(f"optimalStep = {optimalStep}") .. rst-class:: sphx-glr-script-out .. code-block:: none The optimal step for central finite difference is optimalStep = [9.53674e-07,1e+08,2.5e+07,0.000244141] .. GENERATED FROM PYTHON SOURCE LINES 191-193 Configure the model with the optimal step computed from SteplemanWinarsky .. GENERATED FROM PYTHON SOURCE LINES 193-200 .. code-block:: Python gradStep = ot.ConstantStep(optimalStep) # Constant gradient step model.setGradient(ot.CenteredFiniteDifferenceGradient(gradStep, model.getEvaluation())) # Now the gradient uses the optimal step sizes derivative = model.gradient(inputMean) print(f"derivative = ") print(derivative) .. rst-class:: sphx-glr-script-out .. code-block:: none derivative = [[ 1.93789e+09 ] [ 1 ] [ 0.0295312 ] [ -1.33845e+06 ]] .. GENERATED FROM PYTHON SOURCE LINES 201-210 Derivative with step size computed from SteplemanWinarsky : .. code-block:: derivative = [[ 1.93789e+09 ] [ 1 ] [ 0.0295312 ] # <- This is a change in the 3d decimal [ -1.33845e+06 ]] .. GENERATED FROM PYTHON SOURCE LINES 212-217 Compute the step of a ot.Function using Stepleman & Winarsky ------------------------------------------------------------ The function below computes the step of a finite difference formula applied to an OpenTURNS function using Stepleman & Winarsky's method. .. GENERATED FROM PYTHON SOURCE LINES 220-304 .. code-block:: Python def computeSteplemanWinarskyStep( model, initial_step, referenceInput, beta=4.0, verbose=False, ): """ Uses SteplemanWinarsky to compute a step size for central finite differences The central F.D. is: f'(x) ~ (f(x + h) - f(x - h)) / (2 * h) Parameters ---------- model : ot.Function(inputDimension, 1) The model, which output dimension equal to 1. initial_step : ot.Point(inputDimension) The initial step size. referenceInput : ot.Point(inputDimension) The point X where the derivative is to be computed. verbose : bool, optional Set to True to print intermediate messages. The default is False. Returns ------- optimalStep : ot.Point(inputDimension) The optimal step for central finite difference. """ def genericFunction(x, xIndex, referenceInput): if verbose: print("x = ", x) inputDimension = referenceInput.getDimension() complementIndices = [i for i in range(inputDimension) if i != xIndex] modelInput = ot.Point(inputDimension) modelInput[xIndex] = x for i in complementIndices: modelInput[i] = referenceInput[i] y = model(modelInput) modelOutput = y[0] if verbose: print("y = ", y) return modelOutput inputDimension = model.getInputDimension() inputDescription = model.getInputDescription() optimalStep = ot.Point(inputDimension) if verbose: print(f"Input dimension = {inputDimension}") print(f"Input description = {inputDescription}") for xIndex in range(inputDimension): inputMarginal = referenceInput[xIndex] if verbose: print( f"+ Derivative with respect to {inputDescription[xIndex]} " f"at point {inputMarginal}" ) args = [xIndex, referenceInput] algorithm = nd.SteplemanWinarsky( genericFunction, inputMarginal, beta=beta, args=args, verbose=verbose, ) h_optimal, iterations = algorithm.find_step( initial_step[xIndex], ) number_of_function_evaluations = algorithm.get_number_of_function_evaluations() f_prime_approx = algorithm.compute_first_derivative(h_optimal) if verbose: print(f" Optimum h = {h_optimal:e}") print(" Iterations =", iterations) print(" Function evaluations =", number_of_function_evaluations) print( f" Derivative wrt {inputDescription[xIndex]} = {f_prime_approx:.17e}" ) # Store optimal point optimalStep[xIndex] = h_optimal return optimalStep .. GENERATED FROM PYTHON SOURCE LINES 305-310 Cantilever beam model --------------------- Apply the same method to the cantilever beam model Load the cantilever beam model .. GENERATED FROM PYTHON SOURCE LINES 310-315 .. code-block:: Python cb = cantilever_beam.CantileverBeam() model = ot.Function(cb.model) inputMean = cb.distribution.getMean() print(f"inputMean = {inputMean}") .. rst-class:: sphx-glr-script-out .. code-block:: none inputMean = [6.70455e+10,300,2.55,1.45385e-07] .. GENERATED FROM PYTHON SOURCE LINES 316-317 Print the derivative from OpenTURNS .. GENERATED FROM PYTHON SOURCE LINES 317-321 .. code-block:: Python derivative = model.gradient(inputMean) print(f"derivative = ") print(f"{derivative}") .. rst-class:: sphx-glr-script-out .. code-block:: none derivative = [[ -2.53725e-12 ] [ 0.000567037 ] [ 0.200131 ] [ -1.17008e+06 ]] .. GENERATED FROM PYTHON SOURCE LINES 322-331 Derivative with OpenTURNS's default step size : .. code-block:: derivative = [[ -2.53725e-12 ] [ 0.000567037 ] [ 0.200131 ] [ -1.17008e+06 ]] .. GENERATED FROM PYTHON SOURCE LINES 333-336 Notice that the CantileverBeam model has an exact gradient in OpenTURNS, because it is symbolic. Hence, using the optimal step should not make any difference. .. GENERATED FROM PYTHON SOURCE LINES 338-339 Compute step from SteplemanWinarsky .. GENERATED FROM PYTHON SOURCE LINES 339-351 .. code-block:: Python initialStep = ot.Point(inputMean) / 2 optimalStep = computeSteplemanWinarskyStep(model, initialStep, inputMean) print("The optimal step for central finite difference is") print(f"optimalStep = {optimalStep}") gradStep = ot.ConstantStep(optimalStep) # Constant gradient step model.setGradient(ot.CenteredFiniteDifferenceGradient(gradStep, model.getEvaluation())) # Now the gradient uses the optimal step sizes derivative = model.gradient(inputMean) print(f"derivative = ") print(f"{derivative}") .. rst-class:: sphx-glr-script-out .. code-block:: none The optimal step for central finite difference is optimalStep = [127879,37.5,4.86374e-06,2.77299e-13] derivative = [[ -2.53725e-12 ] [ 0.000567037 ] [ 0.200131 ] [ -1.17008e+06 ]] .. GENERATED FROM PYTHON SOURCE LINES 352-363 Derivative with SteplemanWinarskyStep .. code-block:: derivative = [[ -2.53725e-12 ] [ 0.000567037 ] [ 0.200131 ] [ -1.17008e+06 ]] We see that this is the correct step size. .. GENERATED FROM PYTHON SOURCE LINES 366-371 Fire satellite model -------------------- Load the Fire satellite use case with total torque as output Print the derivative from OpenTURNS .. GENERATED FROM PYTHON SOURCE LINES 371-382 .. code-block:: Python m = fireSatellite_function.FireSatelliteModel() inputMean = m.inputDistribution.getMean() m.modelTotalTorque.setInputDescription( ["H", "Pother", "Fs", "theta", "Lsp", "q", "RD", "Lalpha", "Cd"] ) m.modelTotalTorque.setOutputDescription(["Total torque"]) model = ot.Function(m.modelTotalTorque) derivative = model.gradient(inputMean) print(f"derivative = ") print(f"{derivative}") .. rst-class:: sphx-glr-script-out .. code-block:: none derivative = 9x1 [[ -4.78066e-10 ] [ -3.46945e-13 ] [ 2.30666e-09 ] [ 4.90736e-09 ] [ 1.61453e-06 ] [ 2.15271e-06 ] [ 5.17815e-10 ] [ 0.00582819 ] [ 0.0116564 ]] .. GENERATED FROM PYTHON SOURCE LINES 383-398 From OpenTURNS with default settings: .. code-block:: derivative = 9x1 [[ -4.78066e-10 ] [ -3.46945e-13 ] [ 2.30666e-09 ] [ 4.90736e-09 ] [ 1.61453e-06 ] [ 2.15271e-06 ] [ 5.17815e-10 ] [ 0.00582819 ] [ 0.0116564 ]] .. GENERATED FROM PYTHON SOURCE LINES 400-413 There is a specific difficulty with FireSatellite model for the derivative with respect to Pother. Indeed, the gradient of the TotalTorque with respect to Pother is close to zero. Furthermore, the nominal value (mean) of Pother is 1000. Therefore, in order to get a sufficiently large number of lost digits, the algorithm is forced to use a very large step h, close to 10^4. But this leads to a negative value of Pother - h, which produces a math domain error. In other words, the model has an input range which is ignored by the algorithm. To solve this issue the interval which defines the set of possible values of x should be introduced. .. GENERATED FROM PYTHON SOURCE LINES 415-416 Compute step from SteplemanWinarsky .. GENERATED FROM PYTHON SOURCE LINES 416-426 .. code-block:: Python initialStep = ot.Point(inputMean) / 2 optimalStep = computeSteplemanWinarskyStep( model, initialStep, inputMean, verbose=False, ) print("The optimal step for central finite difference is") print(f"optimalStep = {optimalStep}") .. rst-class:: sphx-glr-script-out .. code-block:: none The optimal step for central finite difference is optimalStep = [34.3323,1.95312,0.683594,0.00732422,0.000244141,0.000244141,0.15625,0.000244141,0.00012207] .. GENERATED FROM PYTHON SOURCE LINES 427-434 .. code-block:: Python gradStep = ot.ConstantStep(optimalStep) # Constant gradient step model.setGradient(ot.CenteredFiniteDifferenceGradient(gradStep, model.getEvaluation())) # Now the gradient uses the optimal step sizes derivative = model.gradient(inputMean) print(f"derivative = ") print(f"{derivative}") .. rst-class:: sphx-glr-script-out .. code-block:: none derivative = 9x1 [[ -4.78157e-10 ] [ -2.91776e-13 ] [ 2.30671e-09 ] [ 4.90745e-09 ] [ 1.61453e-06 ] [ 2.15271e-06 ] [ 5.17805e-10 ] [ 0.00582819 ] [ 0.0116564 ]] .. GENERATED FROM PYTHON SOURCE LINES 435-450 From SteplemanWinarsky .. code-block:: derivative = 9x1 [[ -4.78157e-10 ] [ -2.91776e-13 ] # <- This is a minor change [ 2.30671e-09 ] [ 4.90745e-09 ] [ 1.61453e-06 ] [ 2.15271e-06 ] [ 5.17805e-10 ] [ 0.00582819 ] [ 0.0116564 ]] .. rst-class:: sphx-glr-timing **Total running time of the script:** (0 minutes 0.215 seconds) .. _sphx_glr_download_auto_example_plot_openturns.py: .. only:: html .. container:: sphx-glr-footer sphx-glr-footer-example .. container:: sphx-glr-download sphx-glr-download-jupyter :download:`Download Jupyter notebook: plot_openturns.ipynb ` .. container:: sphx-glr-download sphx-glr-download-python :download:`Download Python source code: plot_openturns.py ` .. container:: sphx-glr-download sphx-glr-download-zip :download:`Download zipped: plot_openturns.zip `