19ECE357 - V Sem End - Odd 2023
Amrita Vishwa Vidyapeetham
Amrita School of Engineering, Coimbatore
B.Tech. Degree Examinations – December 2023
Fifth Semester
Electronics and Communication Engineering/ Computer and Communication Engineering
19ECE357 Pattern Recognition
Duration: Three hours                                        Maximum: 100 Marks
CO Course Outcomes
CO01 Able to apply the knowledge of mathematics for obtaining solutions in the pattern
recognition domain
CO02 Able to apply various algorithms for pattern recognition
CO03 Able to map the pattern recognition concepts for solving real life problems
CO04 Able to carry out implementation of algorithms using different simulation tools
Answer all questions
1. A marketing team runs an email campaign to promote a new product. Historical data indicates that
the average conversion rate for this type of campaign is 10%. Identify the probability distribution
that can model the number of successful conversions in a given number of email responses. Calculate
the probability of the following scenarios: exactly 3 successful conversions in a sample of 20 email
responses, and at least 5 successful conversions in a sample of 50 email responses. [4][CO01][BTL 2]
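The scenario in question 1 is modelled by the binomial distribution (each email response is an independent trial with success probability 0.10). A minimal sketch of the two requested probabilities in Python:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent Bernoulli trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p = 0.10  # historical conversion rate

# P(exactly 3 conversions in 20 responses)
p_exact3 = binom_pmf(3, 20, p)

# P(at least 5 conversions in 50 responses) = 1 - P(X <= 4)
p_atleast5 = 1 - sum(binom_pmf(k, 50, p) for k in range(5))

print(f"P(X = 3  | n = 20) = {p_exact3:.4f}")   # ~0.1901
print(f"P(X >= 5 | n = 50) = {p_atleast5:.4f}") # ~0.5688
```

The complement trick in the second computation avoids summing 46 terms of the upper tail.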
3. Differentiate between the Adaline and perceptron neural networks. Design an Adaline network to implement
the ANDNOT logic function with two inputs (𝑋1 , 𝑋2 ) and one output 𝑡. Use bipolar inputs and targets.
Assume the initial weights, bias, and learning rate are all equal to 0.2. Draw the neural network architecture after two epochs.
[10][CO02][BTL 3]
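The Adaline training loop of question 3 can be sketched as follows; the delta (LMS) rule updates weights from the raw net input rather than the thresholded output, which is the key difference from the perceptron. The ANDNOT targets below assume 𝑡 = +1 only for (𝑋1, 𝑋2) = (1, −1):

```python
# ANDNOT with bipolar inputs/targets: t = +1 only for (x1, x2) = (1, -1)
samples = [((1, 1), -1), ((1, -1), 1), ((-1, 1), -1), ((-1, -1), -1)]

def train_adaline(epochs, lr=0.2, w1=0.2, w2=0.2, b=0.2):
    """Delta-rule (LMS) training: each update reduces (t - y_in)^2 for the sample."""
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y_in = w1 * x1 + w2 * x2 + b   # linear net input; no threshold during training
            err = t - y_in
            w1 += lr * err * x1
            w2 += lr * err * x2
            b  += lr * err
    return w1, w2, b

w1, w2, b = train_adaline(epochs=2)
print(w1, w2, b)
```

With more epochs the weights approach the least-squares solution (0.5, −0.5, −0.5), whose thresholded output matches all four targets.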
4. Explain the need for Fisher linear discriminant analysis. For a quality control task in a manufacturing
plant, the system relies on sensor data from various machinery components. The manufacturing
process involves two distinct classes: normal operation (Class 1) and potential defects (Class 2).
Each sample corresponds to sensor readings at specific time intervals. The details are as follows:
Class 1 (Normal Operation): 𝑋1 = (2,3), 𝑋2 = (3,4), 𝑋3 = (3,5) Class 2 (Potential Defects):
𝑋4 = (5,6), 𝑋5 = (6,7), 𝑋6 = (7,8) . Apply Fisher linear discriminant analysis to enhance the
performance of a defect detection system. [12][CO03][BTL 3]
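For question 4, the Fisher direction is 𝑤 = 𝑆_𝑊⁻¹(𝑚₁ − 𝑚₂), where 𝑆_𝑊 is the within-class scatter matrix. A sketch with the given sensor readings:

```python
import numpy as np

c1 = np.array([[2, 3], [3, 4], [3, 5]], dtype=float)  # Class 1: normal operation
c2 = np.array([[5, 6], [6, 7], [7, 8]], dtype=float)  # Class 2: potential defects

m1, m2 = c1.mean(axis=0), c2.mean(axis=0)

# Within-class scatter S_W = S_1 + S_2 (sums of outer products of deviations)
S1 = (c1 - m1).T @ (c1 - m1)
S2 = (c2 - m2).T @ (c2 - m2)
Sw = S1 + S2

# Fisher direction maximizing between-class over within-class separation
w = np.linalg.solve(Sw, m1 - m2)
print(w)  # ~ [-2.6, 1.2]
```

Projecting both classes onto 𝑤 (via `c1 @ w`, `c2 @ w`) shows the two clusters fully separated along the discriminant axis.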
5. Explain the principle of the backpropagation algorithm. Imagine you are a data scientist working on
training a neural network for a predictive maintenance system in a manufacturing plant. The
architecture of the neural network, with weights and biases initialized, is shown in Fig. 1. Using the
binary sigmoid function, compute the activations for each of the units when the input vector
(𝑋1 , 𝑋2 , 𝑋3 ) = (1, 4, 5), representing the sensor readings, and the targets (𝑡1 , 𝑡2 ) = (0.1, 0.05) are
presented. Find the error component between the output and hidden layer. Using a learning rate of
𝛼 = 0.5, compute the weight corrections. Find the new weights and biases between the output and
hidden layer after one iteration. [12][CO03][BTL 3]
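The output-layer step of backpropagation in question 5 can be sketched as below. Fig. 1's weights are not reproduced in this text, so the weight matrices here are hypothetical placeholders for a 3-2-2 network; only the update formulas follow the question:

```python
import numpy as np

def sigmoid(z):
    """Binary sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder weights (Fig. 1's actual values are not available here)
V  = np.array([[0.1, 0.2], [0.3, 0.1], [0.2, 0.3]])  # input -> hidden
bv = np.array([0.1, 0.1])
W  = np.array([[0.4, 0.2], [0.1, 0.3]])              # hidden -> output
bw = np.array([0.2, 0.1])

x = np.array([1.0, 4.0, 5.0])   # sensor readings
t = np.array([0.1, 0.05])       # targets
alpha = 0.5                     # learning rate

# Forward pass
h = sigmoid(x @ V + bv)         # hidden activations
y = sigmoid(h @ W + bw)         # output activations

# Error term for each output unit: delta_k = (t_k - y_k) * f'(y_in_k)
delta_out = (t - y) * y * (1 - y)

# Weight/bias corrections for the hidden -> output layer
dW, dbw = alpha * np.outer(h, delta_out), alpha * delta_out
W_new, bw_new = W + dW, bw + dbw
print(delta_out, W_new, bw_new)
```

One such step moves the outputs toward the targets; the analogous hidden-layer update (not shown) backpropagates `delta_out` through `W`.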
6. Consider a medical diagnostics scenario where the Bayesian classifier is employed for disease
prediction based on two physiological parameters: blood pressure and cholesterol levels. The dataset
includes labels indicating whether an individual is healthy or affected. Class healthy is normally distributed with
means (110 𝑚𝑚𝐻𝑔, 180 𝑚𝑔/𝑑𝐿) and class affected is normally distributed with means
(80 𝑚𝑚𝐻𝑔, 200 𝑚𝑔/𝑑𝐿). Assume a standard deviation of (2,4), a correlation coefficient of 0.5,
and a prior probability of 0.5 for each class. Find the equation of the
optimal decision boundary. Compute the likelihood ratio to determine whether an individual with
blood pressure and cholesterol levels (120 𝑚𝑚𝐻𝑔, 200 𝑚𝑔/𝑑𝐿) is predicted healthy or not.
[8][CO03][BTL 3]
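The likelihood-ratio test of question 6 can be checked numerically. The shared covariance matrix follows from the given standard deviations (2, 4) and correlation 0.5: the off-diagonal term is 0.5 × 2 × 4 = 4. With equal priors, the class with the larger class-conditional density wins:

```python
import numpy as np

def gauss2_pdf(x, mean, cov):
    """Bivariate normal density evaluated at point x."""
    d = x - mean
    inv = np.linalg.inv(cov)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * d @ inv @ d)

cov = np.array([[4.0, 4.0], [4.0, 16.0]])  # shared covariance for both classes
mu_h = np.array([110.0, 180.0])            # healthy mean (BP, cholesterol)
mu_a = np.array([80.0, 200.0])             # affected mean

x = np.array([120.0, 200.0])
ratio = gauss2_pdf(x, mu_h, cov) / gauss2_pdf(x, mu_a, cov)
print("healthy" if ratio > 1 else "affected")
```

Because the two classes share a covariance matrix, the optimal decision boundary asked for in the question is linear in (blood pressure, cholesterol).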
7. The dataset 𝐷 with two features (𝑓1 , 𝑓2 ) is as follows: 𝐷 = {(72, 41.1), (45, 27.4), (45, 26.1), (15, 8.5),
(73, 40)}. Obtain the mean feature vector and the covariance matrix. [6][CO01][BTL 2]
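The statistics asked for in question 7 follow directly from the sample mean and the sample covariance. Note that `np.cov` divides by 𝑛 − 1 by default; some texts use 𝑛 instead:

```python
import numpy as np

D = np.array([[72, 41.1], [45, 27.4], [45, 26.1], [15, 8.5], [73, 40.0]])

mean = D.mean(axis=0)            # mean feature vector: (50, 28.62)
cov = np.cov(D, rowvar=False)    # sample covariance matrix (divisor n - 1)

print(mean)
print(cov)
```

The diagonal of `cov` holds the per-feature variances (572 for 𝑓1); the off-diagonal entry (314.8) shows the two features are strongly positively correlated.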
8. Explain the minimum squared error procedure. Two samples from class 𝐴 are located at (0,0) and
(1,0). Two samples from class 𝐵 are located at (2,0) and (2,1). We want a linear discriminant
function ‘𝐷’ equal to 1 for members of class 𝐴 and −1 for members of class 𝐵. Obtain the optimal
discriminant function. What is the discriminant function 𝐷(𝑥, 𝑦)? [10][CO02][BTL 2]
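The minimum squared error procedure of question 8 solves 𝑋𝑤 ≈ 𝑏 with the Moore–Penrose pseudoinverse, where each row of 𝑋 is an augmented sample (𝑥, 𝑦, 1) and 𝑏 holds the desired outputs:

```python
import numpy as np

# Augmented samples [x, y, 1]; targets +1 for class A, -1 for class B
X = np.array([[0, 0, 1],
              [1, 0, 1],
              [2, 0, 1],
              [2, 1, 1]], dtype=float)
b = np.array([1.0, 1.0, -1.0, -1.0])

# Minimum squared error solution: w = pinv(X) @ b
w = np.linalg.pinv(X) @ b
print(w)  # ~ [-1, -1/3, 4/3], i.e. D(x, y) = -x - y/3 + 4/3
```

The sign of 𝐷(𝑥, 𝑦) = −𝑥 − 𝑦/3 + 4/3 is positive on both class 𝐴 samples and negative on both class 𝐵 samples, so the least-squares discriminant also separates the classes.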
9. Explain the components of an artificial neuron. Obtain the Hebb neuron model of the NAND logical
function using bipolar inputs and targets. Also, draw the decision boundary after each training
sample. [10][CO03][BTL 3]
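Hebb learning, as asked for in question 9, needs a single pass: each sample adds 𝑥ᵢ𝑡 to the corresponding weight and 𝑡 to the bias. With bipolar NAND targets (−1 only for input (1, 1)):

```python
# NAND with bipolar inputs/targets: output is -1 only for (1, 1)
samples = [((1, 1), -1), ((1, -1), 1), ((-1, 1), 1), ((-1, -1), 1)]

w1 = w2 = b = 0.0
for (x1, x2), t in samples:   # Hebb rule: w_new = w + x * t, b_new = b + t
    w1 += x1 * t
    w2 += x2 * t
    b += t

print(w1, w2, b)  # final weights -2, -2 and bias 2
```

The final decision boundary is −2𝑥₁ − 2𝑥₂ + 2 = 0, i.e. 𝑥₁ + 𝑥₂ = 1, which correctly classifies all four bipolar NAND patterns.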
10. Differentiate eager learner and lazy learner. Assume we have a sample dataset with the following
information for each integrated circuit (IC) chip: power consumption in milliwatts and delay in
nanoseconds, as shown in Table 1. Using the 𝐾-nearest neighbour algorithm, predict whether the IC with
power consumption 15 𝑚𝑊 and delay 6 𝑛𝑠 is faulty or not. Obtain the decision for 𝐾 = 1, 2, 3, 4
using Euclidean distance and give your inferences. [10][CO03][BTL 2]
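The 𝐾-NN procedure of question 10 can be sketched as below. Table 1 is not reproduced in this text, so the chips listed here are a hypothetical stand-in with the same schema (power in mW, delay in ns, label); only the algorithm mirrors the question:

```python
from collections import Counter

# Hypothetical stand-in for Table 1: (power mW, delay ns, label)
chips = [(5, 2, "good"), (8, 3, "good"), (12, 5, "faulty"),
         (14, 7, "faulty"), (20, 9, "faulty"), (7, 4, "good")]

def knn_predict(query, k):
    """Majority vote among the k nearest neighbours under Euclidean distance."""
    dists = sorted(
        (((p - query[0]) ** 2 + (d - query[1]) ** 2) ** 0.5, label)
        for p, d, label in chips
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

query = (15, 6)
for k in (1, 2, 3, 4):
    print(k, knn_predict(query, k))
```

For even 𝐾 a tie is possible (here `Counter.most_common` breaks it by insertion order); noting how ties are resolved is a natural "inference" to draw for 𝐾 = 2, 4.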
11. State Bayes theorem from classification perspective. Based on the following data given in Table 2,
apply naïve Bayes’ assumption to estimate the probability that a sample with 𝑥 = 1 and 𝑦 = 1,
belongs to class 𝐴 and class 𝐵 . The prior probabilities are to be estimated from this randomly
sampled data. [8][CO02][BTL 2]
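The naïve Bayes estimate of question 11 multiplies the empirical prior by the per-feature conditional frequencies, under the assumption that 𝑥 and 𝑦 are independent given the class. Table 2 is not reproduced in this text, so the rows below are a hypothetical stand-in with the same schema (𝑥, 𝑦, class):

```python
# Hypothetical stand-in for Table 2: (x, y, class), binary features
data = [(1, 1, "A"), (1, 1, "A"), (1, 0, "A"), (0, 1, "A"),
        (0, 0, "B"), (0, 1, "B"), (1, 0, "B"), (0, 0, "B")]

def naive_bayes(x, y):
    """P(class | x, y) proportional to P(x | class) * P(y | class) * P(class)."""
    scores = {}
    for c in ("A", "B"):
        rows = [r for r in data if r[2] == c]
        prior = len(rows) / len(data)                      # estimated from the sample
        px = sum(1 for r in rows if r[0] == x) / len(rows)
        py = sum(1 for r in rows if r[1] == y) / len(rows)
        scores[c] = prior * px * py
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}       # normalized posteriors

print(naive_bayes(1, 1))
```

If a feature value never occurs in a class, the product collapses to zero; Laplace smoothing (add-one counts) is the standard remedy.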
12. The probabilistic output of a classifier for a fault detection system, used to identify the class as
𝑓𝑎𝑢𝑙𝑡𝑦 or 𝑔𝑜𝑜𝑑, in an integrated chip manufacturing company is depicted in Table 3. The variable
‘𝑥’ is the feature used for building the classifier. Formulate the confusion matrix and calculate the
following metrics: accuracy, precision and recall. [6] [CO01] [BTL 2]
Table 3 Classifier output
𝑇𝑟𝑢𝑒 𝐶𝑙𝑎𝑠𝑠 𝑃(𝑓𝑎𝑢𝑙𝑡𝑦|𝑥) 𝑃(𝑔𝑜𝑜𝑑|𝑥)
𝐹 0.7 0.3
𝐺 0.3 0.7
𝐺 0.6 0.4
𝐹 0.4 0.6
𝐹 0.7 0.3
𝐺 0.1 0.9
𝐺 0.8 0.2
𝐹 0.4 0.6
𝐹 0.7 0.3
𝐺 0.7 0.3
𝐺 0.6 0.4
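The metrics of question 12 follow from thresholding the posteriors in Table 3 at 0.5 (the maximum a posteriori rule for two classes), taking 𝑓𝑎𝑢𝑙𝑡𝑦 as the positive class:

```python
# (true class, P(faulty | x)) for each row of Table 3
rows = [("F", 0.7), ("G", 0.3), ("G", 0.6), ("F", 0.4), ("F", 0.7),
        ("G", 0.1), ("G", 0.8), ("F", 0.4), ("F", 0.7), ("G", 0.7), ("G", 0.6)]

tp = fp = fn = tn = 0
for true, p_faulty in rows:
    pred = "F" if p_faulty >= 0.5 else "G"   # MAP decision
    if true == "F" and pred == "F":   tp += 1
    elif true == "G" and pred == "F": fp += 1
    elif true == "F" and pred == "G": fn += 1
    else:                             tn += 1

accuracy = (tp + tn) / len(rows)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(tp, fp, fn, tn)              # confusion matrix entries: 3 4 2 2
print(accuracy, precision, recall) # 5/11, 3/7, 3/5
```

The high false-positive count (4 of 6 good chips flagged faulty) is what drags precision below recall here.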
***