PH206 Lab Assignment No. 9
Mayank Sharma, 180121022

This Python code applies the Gauss-Newton method to find the values of the parameters a0 and a1 that minimize the square sum error when fitting a given nonlinear function to data. The code defines functions for the nonlinear function, its partial derivatives, and the difference matrix. It then iteratively computes the partial derivative matrix, the difference matrix, and the parameter updates using matrix operations, until the relative change is less than 0.001% for both parameters. The output displays the final parameter values and the number of iterations needed for convergence.
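
In matrix form, the update that the code iterates is the standard Gauss-Newton step (restated here for clarity, using the same symbols as the code: Z is the partial derivative matrix, D is the difference matrix, and a^(j) denotes the parameter vector (a0, a1) at iteration j):

a^{(j+1)} = a^{(j)} + \left( Z^{T} Z \right)^{-1} Z^{T} D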


Python Code for the solution -


import numpy as np
import itertools

N = 5                                          # Number of data points
X = np.array([0.25, 0.75, 1.25, 1.75, 2.25])
Y = np.array([0.28, 0.57, 0.68, 0.74, 0.79])
a0 = [1]                                       # Given guess value
a1 = [1]                                       # Given guess value

def func(x, a_0, a_1):                         # Given function
    return a_0 * (1 - np.exp(-a_1 * x))

def func1(x, a_0, a_1):                        # Partial derivative with respect to a0
    return 1 - np.exp(-a_1 * x)

def func2(x, a_0, a_1):                        # Partial derivative with respect to a1
    return a_0 * x * np.exp(-a_1 * x)

def Zfunc(X, a_0, a_1):                        # Obtaining the partial derivative matrix
    Z = np.zeros((N, 2))
    for i in range(0, N):
        Z[i][0] = func1(X[i], a_0, a_1)
        Z[i][1] = func2(X[i], a_0, a_1)
    return Z

def Dfunc(X, Y, a_0, a_1):                     # Obtaining the difference matrix
    D = np.zeros((N, 1))
    for i in range(0, N):
        D[i][0] = Y[i] - func(X[i], a_0, a_1)
    return D

def sqsum(D):                                  # Calculating the square sum error
    total = 0
    for i in range(0, N):
        total = total + (D[i][0]) ** 2
    return total

def operate(X, Y, a0, a1):                     # Applying the Gauss-Newton method
    for j in itertools.count():
        Z = Zfunc(X, a0[j], a1[j])             # Partial derivative matrix at current parameters
        Ztrans = np.transpose(Z)
        C = np.linalg.inv(np.matmul(Ztrans, Z))

        D = Dfunc(X, Y, a0[j], a1[j])          # Differences (residuals) at current parameters

        if j == 0:
            sqerror = [sqsum(D)]
        else:
            sqerror = np.append(sqerror, [sqsum(D)], axis=0)

        A = np.matmul(C, np.matmul(Ztrans, D)) # Parameter update: (Z^T Z)^-1 Z^T D

        a0 = np.append(a0, [a0[j] + A[0][0]], axis=0)
        a1 = np.append(a1, [a1[j] + A[1][0]], axis=0)

        e0 = abs(((a0[j + 1] - a0[j]) / a0[j + 1]) * 100)
        e1 = abs(((a1[j + 1] - a1[j]) / a1[j + 1]) * 100)

        if e0 < 0.001 and e1 < 0.001:          # Checking the error criterion
            break

    return a0, a1, sqerror, j

print(operate(X, Y, a0, a1))

In the above code, the Gauss-Newton method has been applied to find the values of
a0 and a1 that minimize the square sum error, iterating until the relative change in each parameter is below 0.001%.
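
As an optional cross-check (an addition, not part of the original assignment), the same fit can be reproduced with SciPy's curve_fit, which solves the same nonlinear least-squares problem with a Levenberg-Marquardt-type solver:

# Cross-check of the Gauss-Newton result with SciPy (added for verification;
# SciPy is not used in the assignment code above).
import numpy as np
from scipy.optimize import curve_fit

X = np.array([0.25, 0.75, 1.25, 1.75, 2.25])
Y = np.array([0.28, 0.57, 0.68, 0.74, 0.79])

def model(x, a_0, a_1):
    return a_0 * (1 - np.exp(-a_1 * x))

popt, pcov = curve_fit(model, X, Y, p0=[1, 1])
print(popt)                                # should be close to [0.79186765, 1.67513958]
print(np.sum((Y - model(X, *popt))**2))    # should be close to 0.00066166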

Output

Value of a0 after each iteration - [1, 0.72852264, 0.7910431, 0.79185177, 0.7918669, 0.79186765]

Value of a1 after each iteration - [1, 1.50193087, 1.67770112, 1.67526844, 1.6751459, 1.67513958]

Value of square sum error after each iteration - [0.02475064, 0.02424071, 0.00066269, 0.00066166, 0.00066166]

Number of Iterations - 4

The final values are:
a0 - 0.79186765
a1 - 1.67513958
with the square sum error as 0.00066166.
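
As a final illustration (an addition to the original report; matplotlib availability is assumed), the fitted curve with these final values can be plotted against the data points:

import numpy as np
import matplotlib.pyplot as plt

X = np.array([0.25, 0.75, 1.25, 1.75, 2.25])
Y = np.array([0.28, 0.57, 0.68, 0.74, 0.79])
a0_fit, a1_fit = 0.79186765, 1.67513958    # final values from the Gauss-Newton run

x = np.linspace(0, 2.5, 200)
plt.plot(X, Y, 'o', label='Data points')
plt.plot(x, a0_fit * (1 - np.exp(-a1_fit * x)), label='Fitted curve')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()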

