Unit-6 Notes PART A

The document discusses classification metrics, including definitions and calculations for confusion matrix, precision, recall, and F1 score. It provides a theoretical comparison between accuracy and precision, emphasizing the importance of F1 score in evaluating model performance, especially in imbalanced datasets. Additionally, it includes a practical example using Python code to compute these metrics for a given dataset.


Question 1: Classification Metrics

(a) Theoretical
Q: Define the following terms in the context of classification metrics:
1. Confusion Matrix
2. Precision
3. Recall
A:
1. Confusion Matrix: A confusion matrix is a table used to evaluate the performance of a classification model. It summarizes the predictions made by the model by comparing actual labels with predicted labels. The four key components are:
   - True Positive (TP): Correctly predicted positive instances.
   - True Negative (TN): Correctly predicted negative instances.
   - False Positive (FP): Incorrectly predicted positive instances (Type I error).
   - False Negative (FN): Incorrectly predicted negative instances (Type II error).
2. Precision: Precision measures the proportion of true positive predictions out of all positive predictions made by the model. It is given by:
   $$\text{Precision} = \frac{TP}{TP + FP}$$
3. Recall: Recall (or sensitivity) measures the proportion of actual positives that were correctly identified by the model. It is given by:
   $$\text{Recall} = \frac{TP}{TP + FN}$$
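As a quick illustration of where TP, TN, FP, and FN land in practice, here is a minimal sketch (the six labels are made up) assuming scikit-learn's convention that rows are actual labels and columns are predicted labels, with labels sorted ascending so class 0 comes first:

from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]

# For 0/1 labels the returned matrix is [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)  # 2 1 1 2

Note the gotcha: with 0/1 labels, TP sits in the bottom-right cell, not the top-left as in many textbook layouts.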

(b) Mathematical
Q: Given the confusion matrix below, calculate Accuracy, Precision, Recall, and F1
Score.

                    Predicted Positive    Predicted Negative
Actual Positive             80                    20
Actual Negative             10                    90

A:
Step-by-Step Calculations
To verify the results, let’s compute the metrics manually:
The confusion matrix is:

                    Predicted Positive    Predicted Negative
Actual Positive          TP = 80               FN = 20
Actual Negative          FP = 10               TN = 90

From this:
- TP (True Positives): 80
- FN (False Negatives): 20
- FP (False Positives): 10
- TN (True Negatives): 90

1. Accuracy
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} = \frac{80 + 90}{80 + 90 + 10 + 20} = \frac{170}{200} = 0.85$$
2. Precision
$$\text{Precision} = \frac{TP}{TP + FP} = \frac{80}{80 + 10} = \frac{80}{90} \approx 0.8889$$
3. Recall
$$\text{Recall} = \frac{TP}{TP + FN} = \frac{80}{80 + 20} = \frac{80}{100} = 0.80$$
4. F1 Score
$$\text{F1 Score} = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} = 2 \cdot \frac{0.8889 \cdot 0.80}{0.8889 + 0.80} = 2 \cdot \frac{0.7111}{1.6889} \approx 0.8421$$
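As a quick sanity check before turning to scikit-learn, the same four metrics can be computed directly from the confusion-matrix counts in plain Python (a minimal sketch; the variable names are just illustrative):

# Confusion-matrix counts from the table above
TP, FN, FP, TN = 80, 20, 10, 90

accuracy = (TP + TN) / (TP + TN + FP + FN)          # 170 / 200 = 0.85
precision = TP / (TP + FP)                          # 80 / 90 ~= 0.8889
recall = TP / (TP + FN)                             # 80 / 100 = 0.80
f1 = 2 * precision * recall / (precision + recall)  # ~= 0.8421

print(accuracy, precision, recall, f1)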

1. Ground Truth (y_true)


y_true = [1]*80 + [1]*20 + [0]*10 + [0]*90
- [1]*80: 80 actual positives correctly labeled as positive.
- [1]*20: 20 actual positives incorrectly labeled as negative (false negatives).
- [0]*10: 10 actual negatives incorrectly labeled as positive (false positives).
- [0]*90: 90 actual negatives correctly labeled as negative.
This gives:
- Total actual positives = 80 + 20 = 100,
- Total actual negatives = 10 + 90 = 100.
2. Predicted Labels (y_pred)
y_pred = [1]*80 + [0]*20 + [1]*10 + [0]*90
- [1]*80: 80 predicted positives that are correct (true positives).
- [0]*20: 20 predicted negatives that are incorrect (false negatives).
- [1]*10: 10 predicted positives that are incorrect (false positives).
- [0]*90: 90 predicted negatives that are correct (true negatives).
This matches the confusion matrix perfectly:
- TP = 80, FN = 20, FP = 10, TN = 90.
3. Metrics Calculation
The following metrics are calculated using sklearn.metrics functions:
- Accuracy: Proportion of correct predictions out of total predictions.
- Precision: Proportion of true positives among all predicted positives.
- Recall: Proportion of true positives among all actual positives.
- F1 Score: Harmonic mean of precision and recall.

(c) Python Code


Q: Write Python code to compute the above metrics using scikit-learn.
A:
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Ground truth (actual labels)
y_true = [1] * 80 + [1] * 20 + [0] * 10 + [0] * 90
# 1 = Positive, 0 = Negative

# Predicted labels
y_pred = [1] * 80 + [0] * 20 + [1] * 10 + [0] * 90
# 1 = Predicted Positive, 0 = Predicted Negative

accuracy = accuracy_score(y_true, y_pred)
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)  # Harmonic mean of Precision and Recall

print(f"Accuracy: {accuracy:.4f}")
print(f"Precision: {precision:.4f}")
print(f"Recall: {recall:.4f}")
print(f"F1 Score: {f1:.4f}")

Output:
Accuracy: 0.8500
Precision: 0.8889
Recall: 0.8000
F1 Score: 0.8421
Question 2: (a) Theoretical
What are classification metrics? Compare accuracy and precision. What is the role of the F1-score in a classification report?
A:
Classification metrics are quantitative measures used to evaluate the performance of a
classification model. These metrics help assess how well a model performs in terms of
correctly classifying instances into their respective classes. The choice of metric depends
on the problem context, such as whether false positives or false negatives are more
critical.

1. Accuracy vs. Precision


Accuracy
- Definition: Accuracy measures the proportion of correct predictions (both true positives and true negatives) out of all predictions made.
- Formula:
  $$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
- Interpretation: It provides a general sense of how often the model is correct. However, it can be misleading in imbalanced datasets where one class dominates the other.
Precision
- Definition: Precision measures the proportion of true positive predictions out of all positive predictions made by the model.
- Formula:
  $$\text{Precision} = \frac{TP}{TP + FP}$$
- Interpretation: It focuses on the reliability of positive predictions. High precision indicates that the model makes fewer false positive errors, which is crucial in scenarios where false positives are costly (e.g., spam detection).
Comparison

Metric    | Focus                               | Strengths                                       | Weaknesses
Accuracy  | Overall correctness                 | Easy to interpret; useful for balanced datasets | Misleading in imbalanced datasets
Precision | Reliability of positive predictions | Useful when false positives are costly          | Ignores false negatives

- Key Difference: Accuracy considers both positive and negative predictions, while precision focuses only on positive predictions. In imbalanced datasets, accuracy may give a false sense of good performance, whereas precision highlights the quality of positive predictions.
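To make that difference concrete, here is a minimal sketch (the 95/5 class split and the always-negative model are illustrative assumptions; zero_division is available in recent scikit-learn versions) showing accuracy looking good on an imbalanced dataset even when the model is useless for the minority class:

from sklearn.metrics import accuracy_score, precision_score

# Hypothetical imbalanced data: 95 negatives, 5 positives
y_true = [0] * 95 + [1] * 5
# A degenerate model that always predicts the majority class
y_pred = [0] * 100

print(accuracy_score(y_true, y_pred))  # 0.95 -- looks impressive
# Precision for the positive class is undefined here (the model made no
# positive predictions); scikit-learn reports 0.0 in that case.
print(precision_score(y_true, y_pred, zero_division=0))  # 0.0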

2. Role of F1-Score in Classification Reports


The F1-score is a harmonic mean of precision and recall, providing a single metric that
balances both. It is particularly useful when there is an uneven class distribution or when
both false positives and false negatives are important.
Formula:
$$\text{F1-Score} = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$$
Role in Classification Reports
1. Balanced Evaluation:
   - The F1-score combines precision and recall into a single value, making it easier to compare models across different datasets or scenarios.
   - It avoids favoring models that perform well on only one metric (e.g., high precision but low recall).
2. Handling Imbalanced Data:
   - In imbalanced datasets, accuracy can be misleading, but the F1-score provides a more reliable measure because it accounts for both false positives and false negatives.
3. Threshold Selection:
   - The F1-score is often used to select the optimal threshold for binary classification problems, especially when the cost of false positives and false negatives is similar.
4. Multi-Class Problems:
   - For multi-class classification, the F1-score can be computed using macro-averaging (average over all classes) or weighted averaging (weighted by class support). This ensures that minority classes are not overlooked (see the sketch below).
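As a minimal sketch of the multi-class case (the three-class labels below are illustrative), scikit-learn's f1_score exposes both averaging modes through its average parameter:

from sklearn.metrics import f1_score

# Hypothetical 3-class labels with a small minority class (class 2)
y_true = [0, 0, 0, 0, 1, 1, 1, 2, 2, 2]
y_pred = [0, 0, 0, 1, 1, 1, 1, 2, 0, 2]

# Macro: unweighted mean of per-class F1 -- every class counts equally
print(f1_score(y_true, y_pred, average="macro"))
# Weighted: per-class F1 weighted by class support (number of true instances)
print(f1_score(y_true, y_pred, average="weighted"))

When the minority class is handled poorly, the macro average drops noticeably while the weighted average can stay high, which is exactly why macro-averaging keeps minority classes visible.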
Advantages of F1-Score:
- Provides a single metric summarizing both precision and recall.
- Particularly useful when the dataset is imbalanced or when false positives and false negatives have significant consequences.
Limitations of F1-Score:
- Does not consider true negatives (TN), which may be important in some applications.
- May not fully capture model performance if precision and recall are highly imbalanced.

Summary Table: Key Differences Between Accuracy, Precision, and F1-Score

Metric    | Focus                                | Use Case                                     | Limitations
Accuracy  | Overall correctness                  | Balanced datasets                            | Misleading in imbalanced datasets
Precision | Correctness of positive predictions  | Minimizing false positives                   | Ignores false negatives
F1-Score  | Balance between precision and recall | Imbalanced datasets; trade-off between FP/FN | Does not account for true negatives
Question 2: (b) Mathematical

Problem Overview
Calculate the following classification metrics for the given dataset:
1. Accuracy
2. Confusion Matrix
3. Precision, Recall, and F1-Score
Also write the Python code, treating Cat as the positive class.
A:
The dataset is as follows:
Instance   Actual Label (y_true)   Predicted Label (y_pred)
1          Cat                     Cat
2          Dog                     Cat
3          Dog                     Dog
4          Cat                     Dog
5          Dog                     Dog

Step-by-Step Solution
1. Accuracy
- Definition: Accuracy measures the proportion of correct predictions out of all predictions.
- Formula:
  $$\text{Accuracy} = \frac{\text{Number of Correct Predictions}}{\text{Total Number of Predictions}}$$
2. Confusion Matrix
- A confusion matrix provides a detailed breakdown of predictions (with Cat as the positive class):
  - True Positives (TP): Correctly predicted Cat.
  - False Positives (FP): Incorrectly predicted Cat (actual label is Dog).
  - False Negatives (FN): Incorrectly predicted Dog (actual label is Cat).
  - True Negatives (TN): Correctly predicted Dog.
3. Precision, Recall, and F1-Score
- Precision: Measures the accuracy of positive predictions for Cat.
  $$\text{Precision} = \frac{TP}{TP + FP}$$
- Recall: Measures the ability to find all actual instances of Cat.
  $$\text{Recall} = \frac{TP}{TP + FN}$$
- F1-Score: Harmonic mean of precision and recall.
  $$\text{F1-Score} = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$$

Python Code Implementation


Below is the Python code to compute these metrics specifically for the class Cat:
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score, f1_score

# Actual labels (y_true)
y_true = np.array(['Cat', 'Dog', 'Dog', 'Cat', 'Dog'])

# Predicted labels (y_pred)
y_pred = np.array(['Cat', 'Cat', 'Dog', 'Dog', 'Dog'])

# 1. Accuracy
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy:.2f}")

# 2. Confusion Matrix
conf_matrix = confusion_matrix(y_true, y_pred, labels=['Cat', 'Dog'])
print("Confusion Matrix:")
print(conf_matrix)

# 3. Precision, Recall, and F1-Score for 'Cat'
precision = precision_score(y_true, y_pred, pos_label='Cat')
recall = recall_score(y_true, y_pred, pos_label='Cat')
f1 = f1_score(y_true, y_pred, pos_label='Cat')
print(f"Precision for Cat: {precision:.2f}")
print(f"Recall for Cat: {recall:.2f}")
print(f"F1-Score for Cat: {f1:.2f}")

Explanation of the Code

Input Data:
- y_true: The actual labels (['Cat', 'Dog', 'Dog', 'Cat', 'Dog']).
- y_pred: The predicted labels (['Cat', 'Cat', 'Dog', 'Dog', 'Dog']).
Accuracy Calculation:
- Uses accuracy_score from sklearn.metrics to compute the proportion of correct predictions.
Confusion Matrix:
- Uses confusion_matrix to generate a matrix showing TP, TN, FP, and FN.
- The labels parameter ensures the order of classes in the matrix (['Cat', 'Dog']).
Precision, Recall, and F1-Score for Cat:
- Uses the precision_score, recall_score, and f1_score functions.
- The pos_label parameter specifies the positive class ('Cat' in this case).

Computed Metrics:
Accuracy:
- Correct predictions: 3 (Instances 1, 3, 5)
- Total predictions: 5
- Accuracy = $\frac{3}{5} = 0.60$
Confusion Matrix:
- Classes: ['Cat', 'Dog']
- Matrix:
  $$\begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}$$
  - TP (Cat): 1
  - FP (Dog predicted as Cat): 1
  - FN (Cat predicted as Dog): 1
  - TN (Dog): 2
Precision, Recall, and F1-Score for Cat:
- Precision = $\frac{TP}{TP + FP} = \frac{1}{1 + 1} = 0.50$
- Recall = $\frac{TP}{TP + FN} = \frac{1}{1 + 1} = 0.50$
- F1-Score = $2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} = 2 \cdot \frac{0.50 \cdot 0.50}{0.50 + 0.50} = 0.50$
Final Output:
Accuracy: 0.60
Confusion Matrix:
[[1 1]
[1 2]]
Precision for Cat: 0.50
Recall for Cat: 0.50
F1-Score for Cat: 0.50
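Since Question 2(a) discusses the role of the F1-score in a classification report, here is a minimal sketch (reusing the same data) showing how scikit-learn's classification_report presents per-class precision, recall, and F1, together with the macro and weighted averages, in one table:

from sklearn.metrics import classification_report

y_true = ['Cat', 'Dog', 'Dog', 'Cat', 'Dog']
y_pred = ['Cat', 'Cat', 'Dog', 'Dog', 'Dog']

# Per-class precision/recall/F1 plus support, macro avg, and weighted avg
print(classification_report(y_true, y_pred))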

Question 3: Gram-Schmidt Process


(a) Theoretical
Q: Explain the Gram-Schmidt process step-by-step. Why is it important to
orthogonalize vectors?
A:
The Gram-Schmidt process transforms a set of linearly independent vectors into an orthogonal (or orthonormal) basis. Steps:
1. Start with the first vector $v_1$ and set $u_1 = v_1$.
2. For each subsequent vector $v_i$, subtract its projection onto all previously computed orthogonal vectors $u_1, u_2, \ldots, u_{i-1}$:
   $$u_i = v_i - \sum_{j=1}^{i-1} \operatorname{proj}_{u_j}(v_i), \quad \text{where } \operatorname{proj}_{u_j}(v_i) = \frac{\langle v_i, u_j \rangle}{\langle u_j, u_j \rangle} u_j.$$

Importance:
- Orthogonalization simplifies computations like solving systems of equations and finding projections.
- Orthonormal bases improve numerical stability in algorithms like least squares regression.
(b) Mathematical
Q: Apply the Gram-Schmidt process to the vectors $v_1 = [1, 1, 0]$ and $v_2 = [1, 0, 1]$. Compute the orthogonal vectors $u_1$ and $u_2$. Normalize them to obtain an orthonormal basis.
A:
Step 1: Start with $u_1 = v_1$:
$$u_1 = [1, 1, 0].$$

Step 2: Compute $u_2$:
$$u_2 = v_2 - \operatorname{proj}_{u_1}(v_2), \quad \text{where } \operatorname{proj}_{u_1}(v_2) = \frac{\langle v_2, u_1 \rangle}{\langle u_1, u_1 \rangle} u_1.$$

1. Compute $\langle v_2, u_1 \rangle$:
$$\langle v_2, u_1 \rangle = [1, 0, 1] \cdot [1, 1, 0] = 1 \cdot 1 + 0 \cdot 1 + 1 \cdot 0 = 1.$$

2. Compute $\langle u_1, u_1 \rangle$:
$$\langle u_1, u_1 \rangle = [1, 1, 0] \cdot [1, 1, 0] = 1^2 + 1^2 + 0^2 = 2.$$

3. Compute $\operatorname{proj}_{u_1}(v_2)$:
$$\operatorname{proj}_{u_1}(v_2) = \frac{1}{2}[1, 1, 0] = [0.5, 0.5, 0].$$

4. Subtract $\operatorname{proj}_{u_1}(v_2)$ from $v_2$:
$$u_2 = v_2 - \operatorname{proj}_{u_1}(v_2) = [1, 0, 1] - [0.5, 0.5, 0] = [0.5, -0.5, 1].$$

Step 3: Normalize $u_1$ and $u_2$:

1. Compute $\|u_1\|$:
$$\|u_1\| = \sqrt{\langle u_1, u_1 \rangle} = \sqrt{2}.$$
Normalize $u_1$:
$$u_1^{\text{normalized}} = \frac{1}{\sqrt{2}}[1, 1, 0] = \left[\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}, 0\right].$$

2. Compute $\|u_2\|$:
$$\|u_2\| = \sqrt{\langle u_2, u_2 \rangle} = \sqrt{(0.5)^2 + (-0.5)^2 + 1^2} = \sqrt{0.25 + 0.25 + 1} = \sqrt{1.5}.$$
Normalize $u_2$:
$$u_2^{\text{normalized}} = \frac{1}{\sqrt{1.5}}[0.5, -0.5, 1] = \left[\frac{0.5}{\sqrt{1.5}}, \frac{-0.5}{\sqrt{1.5}}, \frac{1}{\sqrt{1.5}}\right].$$

(c) Python Code


Q: Implement the Gram-Schmidt process in Python to orthogonalize and normalize the vectors $v_1 = [1, 1, 0]$ and $v_2 = [1, 0, 1]$.
A:
import numpy as np

def gram_schmidt(vectors):
    ortho_vectors = []
    for v in vectors:
        # Use float dtype so the in-place subtraction below doesn't
        # fail when the input vectors contain integers
        v = np.array(v, dtype=float)
        for u in ortho_vectors:
            v -= (np.dot(v, u) / np.dot(u, u)) * u
        ortho_vectors.append(v)
    return ortho_vectors

def normalize(vector):
    norm = np.linalg.norm(vector)
    return vector / norm

# Input vectors
v1 = [1, 1, 0]
v2 = [1, 0, 1]

# Perform Gram-Schmidt
ortho_vectors = gram_schmidt([v1, v2])

# Normalize orthogonal vectors
normalized_vectors = [normalize(vec) for vec in ortho_vectors]

print("Orthogonal vectors:")
for i, vec in enumerate(ortho_vectors):
    print(f"u{i+1}: {vec}")

print("\nNormalized vectors:")
for i, vec in enumerate(normalized_vectors):
    print(f"u{i+1}_normalized: {vec}")

Output:
Orthogonal vectors:
u1: [1. 1. 0.]
u2: [ 0.5 -0.5 1. ]
Normalized vectors:
u1_normalized: [0.70710678 0.70710678 0. ]
u2_normalized: [ 0.40824829 -0.40824829 0.81649658]
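As a cross-check, a sketch using NumPy's built-in QR factorization (reduced mode, the default): the columns of Q form an orthonormal basis for the span of the input vectors and should match the normalized vectors above, possibly up to a sign flip depending on the implementation.

import numpy as np

# Stack the input vectors as columns of a 3x2 matrix
A = np.column_stack([[1, 1, 0], [1, 0, 1]])

# Reduced QR: columns of Q are orthonormal and span the same subspace
Q, R = np.linalg.qr(A)
print(Q)  # columns agree with u1_normalized, u2_normalized up to sign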

Question 4:
Question:
Explain the concept of orthogonalization and its significance in linear algebra. Describe the Gram-Schmidt process for orthogonalizing a set of vectors. Derive the mathematical formula for the Gram-Schmidt process, and apply it to orthogonalize the following set of vectors:
$$v_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \quad v_3 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}.$$
Show all steps of the computation and verify that the resulting vectors are orthogonal. Finally, implement the Gram-Schmidt process in Python to confirm your results.

A: Orthogonalization is the process of converting a set of linearly independent vectors into a set of orthogonal vectors, where each pair of distinct vectors is perpendicular ($u \cdot v = 0$). This simplifies computations in linear algebra, such as solving systems of equations, finding projections, and diagonalizing matrices.
The Gram-Schmidt process is an algorithm that constructs an orthogonal basis from a given set of linearly independent vectors. The key idea is to iteratively subtract projections of a vector onto previously constructed orthogonal vectors, ensuring orthogonality at each step.
Significance:
- Orthogonal bases simplify numerical computations.
- They are essential for diagonalizing matrices and solving least-squares problems.
- Orthogonal vectors are numerically stable, reducing errors in calculations.

Mathematical Formula
Let $\{v_1, v_2, \ldots, v_n\}$ be a set of linearly independent vectors. The Gram-Schmidt process constructs an orthogonal set $\{u_1, u_2, \ldots, u_n\}$ using the following steps:

1. Start with $u_1 = v_1$.
2. For $k = 2, 3, \ldots, n$:
   $$u_k = v_k - \sum_{j=1}^{k-1} \frac{\langle v_k, u_j \rangle}{\langle u_j, u_j \rangle} u_j$$
   Here:
   - $\langle a, b \rangle$ denotes the dot product of vectors $a$ and $b$.
   - The term $\frac{\langle v_k, u_j \rangle}{\langle u_j, u_j \rangle} u_j$ represents the projection of $v_k$ onto $u_j$.

Finally, normalize each $u_k$ to obtain an orthonormal set if needed:
$$e_k = \frac{u_k}{\|u_k\|}, \quad \text{where } \|u_k\| = \sqrt{\langle u_k, u_k \rangle}.$$

Mathematical Example
We are given the vectors:
$$v_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \quad v_3 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}.$$
We will apply the Gram-Schmidt process to orthogonalize these vectors.

Step 1: Start with $u_1 = v_1$:
$$u_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}.$$

Step 2: Compute $u_2$ by subtracting the projection of $v_2$ onto $u_1$:
$$\text{Projection of } v_2 \text{ onto } u_1 = \frac{\langle v_2, u_1 \rangle}{\langle u_1, u_1 \rangle} u_1.$$
Compute the dot products:
$$\langle v_2, u_1 \rangle = (1)(1) + (0)(1) + (1)(0) = 1, \qquad \langle u_1, u_1 \rangle = (1)(1) + (1)(1) + (0)(0) = 2.$$
Therefore, the projection is:
$$\text{Projection} = \frac{1}{2} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1/2 \\ 1/2 \\ 0 \end{bmatrix}.$$
Subtract this projection from $v_2$ to get $u_2$:
$$u_2 = v_2 - \text{Projection} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} - \begin{bmatrix} 1/2 \\ 1/2 \\ 0 \end{bmatrix} = \begin{bmatrix} 1/2 \\ -1/2 \\ 1 \end{bmatrix}.$$

Step 3: Compute $u_3$ by subtracting the projections of $v_3$ onto $u_1$ and $u_2$:
$$u_3 = v_3 - \left( \frac{\langle v_3, u_1 \rangle}{\langle u_1, u_1 \rangle} u_1 + \frac{\langle v_3, u_2 \rangle}{\langle u_2, u_2 \rangle} u_2 \right).$$
First, compute $\langle v_3, u_1 \rangle$:
$$\langle v_3, u_1 \rangle = (0)(1) + (1)(1) + (1)(0) = 1.$$
Next, compute $\langle v_3, u_2 \rangle$:
$$\langle v_3, u_2 \rangle = (0)\left(\tfrac{1}{2}\right) + (1)\left(-\tfrac{1}{2}\right) + (1)(1) = -\tfrac{1}{2} + 1 = \tfrac{1}{2}.$$
Now, compute $\langle u_2, u_2 \rangle$:
$$\langle u_2, u_2 \rangle = \left(\tfrac{1}{2}\right)^2 + \left(-\tfrac{1}{2}\right)^2 + (1)^2 = \tfrac{1}{4} + \tfrac{1}{4} + 1 = \tfrac{3}{2}.$$
Compute the projections:
$$\text{Projection onto } u_1 = \frac{1}{2} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1/2 \\ 1/2 \\ 0 \end{bmatrix}, \qquad \text{Projection onto } u_2 = \frac{1/2}{3/2} \begin{bmatrix} 1/2 \\ -1/2 \\ 1 \end{bmatrix} = \frac{1}{3} \begin{bmatrix} 1/2 \\ -1/2 \\ 1 \end{bmatrix} = \begin{bmatrix} 1/6 \\ -1/6 \\ 1/3 \end{bmatrix}.$$
Subtract these projections from $v_3$:
$$u_3 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} 1/2 \\ 1/2 \\ 0 \end{bmatrix} - \begin{bmatrix} 1/6 \\ -1/6 \\ 1/3 \end{bmatrix} = \begin{bmatrix} 0 - 1/2 - 1/6 \\ 1 - 1/2 + 1/6 \\ 1 - 0 - 1/3 \end{bmatrix} = \begin{bmatrix} -2/3 \\ 2/3 \\ 2/3 \end{bmatrix}.$$

Thus, the orthogonalized vectors are:
$$u_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \quad u_2 = \begin{bmatrix} 1/2 \\ -1/2 \\ 1 \end{bmatrix}, \quad u_3 = \begin{bmatrix} -2/3 \\ 2/3 \\ 2/3 \end{bmatrix}.$$

Verification of Orthogonality
To verify that the vectors are orthogonal, compute their pairwise dot products:
$$\langle u_1, u_2 \rangle = (1)\left(\tfrac{1}{2}\right) + (1)\left(-\tfrac{1}{2}\right) + (0)(1) = \tfrac{1}{2} - \tfrac{1}{2} = 0.$$
$$\langle u_1, u_3 \rangle = (1)\left(-\tfrac{2}{3}\right) + (1)\left(\tfrac{2}{3}\right) + (0)\left(\tfrac{2}{3}\right) = -\tfrac{2}{3} + \tfrac{2}{3} = 0.$$
$$\langle u_2, u_3 \rangle = \left(\tfrac{1}{2}\right)\left(-\tfrac{2}{3}\right) + \left(-\tfrac{1}{2}\right)\left(\tfrac{2}{3}\right) + (1)\left(\tfrac{2}{3}\right) = -\tfrac{1}{3} - \tfrac{1}{3} + \tfrac{2}{3} = 0.$$
Since all pairwise dot products are zero, the vectors are orthogonal.

Python Code Implementation


Below is the Python implementation of the Gram-Schmidt process:
import numpy as np

def gram_schmidt(vectors):
    """
    Perform the Gram-Schmidt process to orthogonalize a set of vectors.

    Parameters:
        vectors (list of np.ndarray): A list of linearly independent vectors.

    Returns:
        list of np.ndarray: A list of orthogonal vectors.
    """
    orthogonal_vectors = []

    for i, v in enumerate(vectors):
        # Start with the current vector (as float, so the in-place
        # subtraction below works even for integer input)
        u = v.astype(float)

        # Subtract projections of the original vector onto previous
        # orthogonal vectors (classical Gram-Schmidt)
        for j in range(i):
            proj = np.dot(v, orthogonal_vectors[j]) / np.dot(orthogonal_vectors[j], orthogonal_vectors[j])
            u -= proj * orthogonal_vectors[j]

        # Append the orthogonalized vector
        orthogonal_vectors.append(u)

    return orthogonal_vectors

# Example usage
vectors = [
    np.array([1, 1, 0]),  # First vector
    np.array([1, 0, 1]),  # Second vector
    np.array([0, 1, 1])   # Third vector
]

orthogonal_vectors = gram_schmidt(vectors)

print("Orthogonal Vectors:")
for vec in orthogonal_vectors:
    print(vec)

Output of the Code

For the input vectors $v_1 = [1, 1, 0]$, $v_2 = [1, 0, 1]$, $v_3 = [0, 1, 1]$, the printed orthogonal vectors are:
[1. 1. 0.]
[ 0.5 -0.5  1. ]
[-0.66666667  0.66666667  0.66666667]
(Note: the third vector is $[-2/3, 2/3, 2/3]$ expressed in decimal form.)
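As a final numeric check, a small sketch reusing the orthogonal_vectors list from the code above to confirm the pairwise dot products are zero up to floating-point round-off:

# Verify pairwise orthogonality numerically
for i in range(len(orthogonal_vectors)):
    for j in range(i + 1, len(orthogonal_vectors)):
        dot = np.dot(orthogonal_vectors[i], orthogonal_vectors[j])
        # Each dot product should be 0 up to floating-point tolerance
        print(f"<u{i+1}, u{j+1}> = {dot:.1e}", "OK" if abs(dot) < 1e-10 else "NOT ORTHOGONAL")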

Final Answer
The orthogonalized vectors are:
$$u_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \quad u_2 = \begin{bmatrix} 1/2 \\ -1/2 \\ 1 \end{bmatrix}, \quad u_3 = \begin{bmatrix} -2/3 \\ 2/3 \\ 2/3 \end{bmatrix}.$$
