
ECE 712 Assignment 1 2023

Due Tues. Oct. 31, 2023


James P. Reilly

This assignment will be graded. You are encouraged to discuss this assignment
with anyone you wish, particularly those in the class, but the write-up
you submit must be entirely your own. This includes the computer programs.
Acknowledge individuals who have given you ideas. Feel free to consult with me
if you need help. If you need more time, please ask.
For all questions, explain your method carefully. Hand in a copy of your
code with your assignment.

1. This problem has to do with classification of two different classes of
   signals using PCA coefficients as features. On Avenue you will find two
   matlab files x1.mat and x2.mat. Each file contains a sequence of data
   generated from its respective, distinct autoregressive (AR) model; the
   resulting sequences correspond to class 1 and class 2 data respectively.
   The objective of the classification exercise is to discriminate between
   these two classes. The class 1 AR model has four poles, located at
   0.95e^{±j2π(0.14)} and 0.98e^{±j2π(0.28)}. The class 2 AR model has its
   poles at 0.96e^{±j2π(0.1)} and 0.99e^{±j2π(0.4)}. The data sequences
   x1[n] and x2[n] were generated according to these AR definitions.
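The provided .mat files already contain these sequences, but the generation step can be sketched as follows. This is a Python/NumPy illustration rather than the matlab used in the course; the function name `ar_sequence` and the random seed are my own, and the poles are the class 1 values quoted above.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)

def ar_sequence(pole_mags, pole_freqs, n_samples):
    """Generate an AR sequence whose poles lie at mag * exp(+/- j*2*pi*freq)."""
    poles = []
    for r, f in zip(pole_mags, pole_freqs):
        p = r * np.exp(1j * 2 * np.pi * f)
        poles.extend([p, np.conj(p)])          # conjugate pole pairs
    a = np.poly(poles).real                     # AR denominator coefficients
    w = rng.standard_normal(n_samples)          # white-noise excitation
    return lfilter([1.0], a, w)                 # all-pole filter 1/A(z)

# Class 1 poles from the assignment: 0.95e^{+/-j2pi(0.14)} and 0.98e^{+/-j2pi(0.28)}
x1 = ar_sequence([0.95, 0.98], [0.14, 0.28], 2000)
```

Since all poles lie inside the unit circle, the filter is stable and the output remains bounded.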
   The features used for this classification problem are the PCA coefficients.
   From each sequence, form the respective X-matrix using a window length
   of n = 100 samples in the manner discussed in class, to yield the matrices
   X1 and X2 of size m × n, where m = 1000. Calculate the covariance
   matrices for both classes and then calculate the sorted eigenvector
   matrices V1 and V2, which are both 100 × 100. Then calculate the matrix
   of PCA coefficients C ∈ R^{2m×2n}, using both sets of eigenvectors for
   both classes 1 and 2, in the following way:

       C = [ X1 ; X2 ] [ V1  V2 ].

   It is also necessary to provide a vector of labels, ℓ ∈ R^{2m}. This can
   be achieved by constructing a vector whose first m entries have the value
   1, say, and whose remaining m entries have the value 2, corresponding to
   the class 1 and class 2 rows of C respectively.
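The construction above can be sketched in Python/NumPy (the assignment itself uses matlab; `window_matrix` and `sorted_eigvecs` are illustrative names of my own, and random stand-ins replace the data in x1.mat and x2.mat):

```python
import numpy as np

def window_matrix(x, n=100, m=1000):
    """Stack m overlapping length-n windows of x as the rows of X (m x n)."""
    return np.array([x[i:i + n] for i in range(m)])

def sorted_eigvecs(X):
    """Eigenvectors of the covariance of X, sorted by decreasing eigenvalue."""
    R = np.cov(X, rowvar=False)                   # n x n covariance matrix
    vals, vecs = np.linalg.eigh(R)
    return vecs[:, np.argsort(vals)[::-1]]

rng = np.random.default_rng(1)
x1 = rng.standard_normal(1200)                    # stand-in for x1.mat
x2 = rng.standard_normal(1200)                    # stand-in for x2.mat

X1, X2 = window_matrix(x1), window_matrix(x2)     # each 1000 x 100
V1, V2 = sorted_eigvecs(X1), sorted_eigvecs(X2)   # each 100 x 100

C = np.vstack([X1, X2]) @ np.hstack([V1, V2])     # 2m x 2n PCA coefficients
labels = np.concatenate([np.ones(1000), 2 * np.ones(1000)])
```

Note the dimensions: stacking gives a 2000 × 100 matrix, and [V1 V2] is 100 × 200, so C is 2000 × 200, matching C ∈ R^{2m×2n}.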
   Use the Classification Learner app in matlab to perform the actual classifi-
   cation task. Access the app by pressing the APPS tab in matlab, then press
   the down arrow button on the right; you will see “Classification Learner”
   in the group “Machine Learning and Deep Learning”. After you start the
   app, hit the down arrow beside “New Session” in the upper left corner and
   select “From Workspace”. Then select the data-set variable, in this case
   C, and select “Start Session”. Then hit the green “Train” button. When it
   finishes you will see a scatterplot of the classes and the training-set
   accuracy. In the Classification Learner app's parlance, the PCA
   coefficients in C are referred to as predictors, and the labels as
   responses.
   The performance of the classifier will improve greatly by preceding the
   classification process with a feature-selection step. That is, we form
   C by selecting only those features (in the form of columns of C) which
   are most discriminative between the classes; i.e., which change the most
   between class 1 and class 2 data. The matlab function fscmrmr is very
   helpful in this respect. There is a brief writeup of this method in the
   course notes.
Play around with it to try to get the best accuracy. You can change the
number of selected features and the type of classifier. Good accuracy can
be obtained using 10 selected features, but other values may work better.
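The matlab function fscmrmr ranks features by a minimum-redundancy maximum-relevance criterion. A much simpler stand-in for the same idea, ranking columns by a Fisher-style discriminability score, can be sketched in Python/NumPy (the function name `fisher_scores` and the synthetic data are my own; this is not the MRMR algorithm itself):

```python
import numpy as np

def fisher_scores(C, labels):
    """Per-column discriminability: squared gap between class means over pooled variance."""
    A, B = C[labels == 1], C[labels == 2]
    return (A.mean(0) - B.mean(0)) ** 2 / (A.var(0) + B.var(0) + 1e-12)

rng = np.random.default_rng(2)
C = rng.standard_normal((2000, 200))
C[:1000, 7] += 3.0                        # make column 7 strongly class-dependent
labels = np.concatenate([np.ones(1000), 2 * np.ones(1000)])

scores = fisher_scores(C, labels)
top10 = np.argsort(scores)[::-1][:10]     # indices of the 10 highest-scoring columns
C_selected = C[:, top10]                  # reduced feature matrix for the classifier
```

Here the artificially shifted column 7 is ranked first, as expected; with the real C matrix the ranking would instead reveal which eigenvector directions separate the two AR models.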
   Plot the most discriminating eigenvector waveforms corresponding to the
   selected features and see if you can explain what makes them discriminating
   between the classes.
Write a brief report summarizing your findings.

2. Provide an explanation of the phenomenon where the X–matrix corresponding
   to a set of observations approaches rank deficiency as the variables
   (columns of X) become more correlated.
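A small numerical illustration of this phenomenon, in Python/NumPy (the setup is my own): as one column of X approaches a scaled copy of another, the smallest singular value of X collapses toward zero, which is exactly the approach to rank deficiency.

```python
import numpy as np

rng = np.random.default_rng(3)
base = rng.standard_normal(500)

def smallest_singular_value(eps):
    """X has two columns that differ only by eps-scaled independent noise."""
    X = np.column_stack([base, base + eps * rng.standard_normal(500)])
    return np.linalg.svd(X, compute_uv=False)[-1]

# As the columns become more correlated (eps -> 0), sigma_min -> 0:
s_loose = smallest_singular_value(1.0)    # weakly correlated columns
s_tight = smallest_singular_value(1e-6)   # almost perfectly correlated columns
```

With eps = 1e-6 the second column carries almost no information independent of the first, so X is numerically rank 1 even though it has two columns.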

3. On Avenue you will find a file assig1 Q3 2023.mat, which contains a tall
matrix A and a random vector y. Describe a method to project y onto
R(A) and implement your method in matlab. Demonstrate that the pro-
jected vector x is indeed in R(A).
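One standard method, sketched here in Python/NumPy with random stand-ins for the A and y in the .mat file: solve the least-squares problem min ||y − Ac||, so that ŷ = Ac is the projection of y onto R(A); membership in R(A) can then be checked by verifying that appending ŷ to A does not increase the rank, and that the residual is orthogonal to every column of A.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 5))          # stand-in for the tall matrix A
y = rng.standard_normal(50)               # stand-in for the random vector y

# Least-squares coefficients give the projection y_hat = A c onto R(A)
c, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ c

# y_hat lies in R(A) by construction; the residual y - y_hat is
# orthogonal to the columns of A (the normal equations A^T (y - A c) = 0)
residual = y - y_hat
```

In matlab the analogous steps would use backslash (c = A\y); forming the explicit projector A(AᵀA)⁻¹Aᵀ gives the same ŷ but is less well conditioned numerically.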

4. Chapter 1, Q5 and Q6. The signals y1 (n) and y2 (n) are available in the
files y1.mat and y2.mat. The signals f1 (n) and f2 (n) are available in the
file Ch1Q6. You will need the paper by Miyoshi and Kaneda for reference
material. It is available on Avenue.
