Lab 06 210021110
Numerical Differentiation Techniques
Theory:
Forward difference:
f'(x) = lim(Δx→0) [f(x + Δx) − f(x)] / Δx
Where,
Δx = x_{i+1} − x_i
Backward difference:
f'(x) = lim(Δx→0) [f(x) − f(x − Δx)] / Δx
Where,
Δx = x_i − x_{i−1}
Central difference:
f'(x) = lim(Δx→0) [f(x + Δx) − f(x − Δx)] / (2Δx)
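The orders of accuracy follow from Taylor expansion; a short sketch of the standard derivation is:

```latex
f(x + \Delta x) = f(x) + f'(x)\,\Delta x + \tfrac{1}{2}f''(x)\,\Delta x^2
                + \tfrac{1}{6}f'''(x)\,\Delta x^3 + \cdots

% Forward difference: the quadratic term survives after dividing by \Delta x
\frac{f(x+\Delta x) - f(x)}{\Delta x} = f'(x) + \tfrac{1}{2}f''(x)\,\Delta x + \cdots
\quad \Rightarrow \quad \text{error } O(\Delta x)

% Central difference: subtracting the expansion of f(x - \Delta x)
% cancels all even-order terms
\frac{f(x+\Delta x) - f(x-\Delta x)}{2\Delta x} = f'(x) + \tfrac{1}{6}f'''(x)\,\Delta x^2 + \cdots
\quad \Rightarrow \quad \text{error } O(\Delta x^2)
```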
Task:
Write a MATLAB code which will show that the error in approximating the first derivative of a function is O(h^2) when the central difference method is used, but only O(h) when the forward or backward difference method is used.
Code:
f = @(x) exp(x);              % function to differentiate
f_exact_deriv = @(x) exp(x);  % exact derivative of e^x
x0 = 0.5;                     % point of evaluation
exact = f_exact_deriv(x0);
n = 10;                       % number of step sizes
h = 0.1 ./ 2.^(0:n-1)';       % h starts at 0.1 and halves each time
forward_approx = zeros(n,1);
backward_approx = zeros(n,1);
central_approx = zeros(n,1);
forward_error = zeros(n,1);
backward_error = zeros(n,1);
central_error = zeros(n,1);
for i = 1:n
    % Forward difference
    forward_approx(i) = (f(x0 + h(i)) - f(x0)) / h(i);
    forward_error(i) = abs(forward_approx(i) - exact);
    % Backward difference
    backward_approx(i) = (f(x0) - f(x0 - h(i))) / h(i);
    backward_error(i) = abs(backward_approx(i) - exact);
    % Central difference
    central_approx(i) = (f(x0 + h(i)) - f(x0 - h(i))) / (2*h(i));
    central_error(i) = abs(central_approx(i) - exact);
end
% Create table to display results
T = table(h, forward_error, backward_error, central_error, ...
    'VariableNames', {'h', 'Forward_Error', 'Backward_Error', 'Central_Error'});
% Display table
disp('Error Analysis for Numerical Differentiation:');
disp(T);
% Convergence rates from the last pair of errors: since h halves each
% step, log2 of successive error ratios estimates the order of accuracy
rate_forward  = log2(forward_error(n-1)  / forward_error(n));
rate_backward = log2(backward_error(n-1) / backward_error(n));
rate_central  = log2(central_error(n-1)  / central_error(n));
fprintf('Final convergence rates:\n');
fprintf('Forward:  %.2f (expected ~1 for O(h))\n', rate_forward);
fprintf('Backward: %.2f (expected ~1 for O(h))\n', rate_backward);
fprintf('Central:  %.2f (expected ~2 for O(h^2))\n', rate_central);
% Plot errors
figure;
semilogy(h, forward_error, 'bo-', 'LineWidth', 1.5, 'DisplayName', 'Forward');
hold on;
semilogy(h, backward_error, 'r*-', 'LineWidth', 1.5, 'DisplayName', 'Backward');
semilogy(h, central_error, 'gs-', 'LineWidth', 1.5, 'DisplayName', 'Central');
grid on;
xlabel('Step Size h');
ylabel('Absolute Error');
title('Error vs Step Size for f(x) = e^x at x = 0.5');
legend('Location', 'best');
set(gca, 'XDir', 'reverse'); % Reverse x-axis to show decreasing h
hold off;
Output:
Discussion:
In this experiment we approximate the first derivative of f(x) = e^x at x = 0.5 using the forward, backward, and central difference methods. The step-size progression starts at h = 0.1 and halves at each step. The results are displayed in a table listing the forward, backward, and central errors for each h. Convergence rates are estimated as log2 of the ratio of successive errors: because h is halved at each step, log2(error(i-1)/error(i)) gives the order of accuracy, which should come out near 1 for the forward and backward methods and near 2 for the central method.
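As a sanity check on this rate estimate, assume (as a model, not a measured result) that the error behaves like its leading term:

```latex
% Central difference: e(h) \approx C h^2, so halving h quarters the error
\frac{e(h)}{e(h/2)} \approx \frac{C h^2}{C (h/2)^2} = 4,
\qquad \log_2 4 = 2

% Forward/backward difference: e(h) \approx C h, so halving h halves the error
\frac{e(h)}{e(h/2)} \approx \frac{C h}{C (h/2)} = 2,
\qquad \log_2 2 = 1
```

This is why the log2 of successive error ratios recovers the order of accuracy directly from the table of errors.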