Semester I: Discipline: Electronics and Communication Stream: EC3
Preamble: The purpose of this course is to expose students to the basic theory of linear
algebra and probability.
Course Outcomes: The COs shown are only indicative. For each course, there can be 4 to 6
COs. After the completion of the course the student will be able to
Assessment Pattern
Apply 20
Analyse 20
Evaluate 20
Create
Mark distribution
Part A
Part B
Answer ANY FIVE Questions, one from each module
(5 x 7 marks = 35 marks)
9. Find the eigen values and eigen vectors of A = [[2, 2, 1], [1, 3, 1], [1, 2, 2]].
10. Find the least square solution to the equation Ax = b, where A = [[1, 2], [1, 3], [0, 0]] and b = [4, 5, 6]^T. Obtain the projection matrix P which projects b onto the column space of A.
11. Let T be the linear transformation from R^3 to R^2 defined by T(x, y, z) = (x + y, 2z - x). Let B1, B2 be the standard ordered bases of R^3 and R^2 respectively. Compute the matrix of T relative to the pair B1, B2.
12. Let V be a finite-dimensional complex inner product space, and let T be any linear
operator on V. Show that there is an orthonormal basis for V in which the matrix of T
is upper triangular.
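Questions 9 and 10 can be checked numerically. The sketch below, assuming NumPy is available, computes the eigenvalues and eigenvectors of the matrix in question 9 and the least-squares solution and projection matrix for question 10.

```python
import numpy as np

# Question 9: eigenvalues and eigenvectors of A
A9 = np.array([[2.0, 2.0, 1.0],
               [1.0, 3.0, 1.0],
               [1.0, 2.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A9)
print("Eigenvalues:", eigvals)            # expected 5, 1, 1
print("Eigenvectors (columns):\n", eigvecs)

# Question 10: least-squares solution of Ax = b and projection matrix P
A = np.array([[1.0, 2.0],
              [1.0, 3.0],
              [0.0, 0.0]])
b = np.array([4.0, 5.0, 6.0])
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)    # minimises ||Ax - b||
P = A @ np.linalg.inv(A.T @ A) @ A.T            # projects onto the column space of A
print("Least-squares solution:", x_ls)
print("Projection matrix P:\n", P)
print("Projection of b:", P @ b)
```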
**********************************
Syllabus
Module 3 Random Processes: Poisson Process, Wiener Process, Markov Process, Birth-Death Markov Chains, Chapman-Kolmogorov Equations.
Groups, Rings, homomorphism of rings. Field. Vector Space. Subspaces. Direct sum. Linear independence, span. Basis. Dimension. Finite dimensional vector spaces. Coordinate representation of vectors. Row spaces and column spaces of matrices.
No | Topic | No. of Lectures
1 | Module I |
1.1 | Axiomatic definition of probability. Independence. Bayes' theorem and applications. | 2
1.2 | Random variables. Cumulative distribution function, Probability Mass Function. | 1
1.3 | Probability Density function, Conditional and Joint Distributions and densities, Independence of random variables. | 2
1.4 | Functions of Random Variables: Two functions of two random variables. Pdf of functions of random variables using the Jacobian. | 2
2 | Module II |
2.1 | Expectation, Fundamental theorem of expectation, Conditional expectation. | 1
2.2 | Moment generating functions, Characteristic function. | 1
2.3 | Covariance matrix. Uncorrelated random variables. Pdf of jointly Gaussian random variables. | 2
2.4 | Markov and Chebyshev inequalities, Chernoff bound. Central Limit theorem. | 2
2.5 | Convergence of random variables. Weak law of large numbers, Strong law of large numbers. | 2
3 | Module III |
3.1 | Random Processes. Poisson Process, Wiener Process. | 2
3.2 | Markov Process, Birth-Death Markov Chains, Chapman-Kolmogorov Equations. | 2
3.3 | Groups, Rings, homomorphism of rings. Field. Vector Space. Subspaces. Direct sum. | 2
3.4 | Linear independence, span. Basis. Dimension. Finite dimensional vector spaces. | 2
3.5 | Coordinate representation of vectors. Row spaces and column spaces of matrices. | 1
4 | Module IV |
4.1 | Linear Transformations. Four fundamental subspaces of a linear transformation. Rank and Rank-nullity theorem. | 2
4.2 | Matrix representation of linear transformation. Change of basis transformation. | 1
4.3 | System of linear equations. Existence and uniqueness of solutions. | 2
4.4 | Linear functionals. Dual, double dual and transpose of a linear transformation. | 2
5 | Module V |
5.1 | Eigen values, Eigen vectors, Diagonalizability. | 2
5.2 | Inner product. Norm. Projection. Least-squares solution. Cauchy-Schwarz inequality. | 2
5.3 | Orthonormal bases. Orthogonal complement. Spectral decomposition theorem. | 2
Reference Books
1. Hoffman Kenneth and Kunze Ray, Linear Algebra, Prentice Hall of India.
2. Jimmie Gilbert and Linda Gilbert, Linear Algebra and Matrix Theory, Elsevier
3. Henry Stark and John W. Woods, "Probability and Random Processes with Applications to Signal Processing", Pearson Education, Third edition.
Course Outcomes: After the completion of the course the student will be able to
CO 4 Illustrate the use of Levinson Durbin algorithm for the solution of normal equations
CO 5 Compare and contrast the LMS algorithm and the RLS algorithm for Adaptive Direct form FIR filters
CO 6 Develop the efficient realization of the QMF filter bank using polyphase decomposition and multirate identities
PO 1 PO 2 PO 3 PO 4 PO 5 PO 6 PO 7
CO 1 3 3 2 2
CO 2 3 3 2 2
CO 3 3 2 1 - -
CO 4 3 3 3 3 2
CO 5 3 3 2 2
CO 6 3 2 2 3 2
Assessment Pattern
Apply 80%
Analyse 20%
Evaluate -
Create -
Mark distribution
Total Marks | CIE | ESE | ESE Duration
100 | 40 | 60 | 2.5 hours
CORE COURSES
Evaluation shall only be based on application, analysis or design based questions (for both
internal and end semester examinations).
Test paper, 1 no.: 10 marks. The project shall be done individually; group projects are not permitted. The test paper shall include a minimum of 80% of the syllabus.
The end semester examination will be conducted by the University. There will be two parts;
Part A and Part B. Part A contains 5 numerical questions (such questions shall be useful in the
testing of knowledge, skills, comprehension, application, analysis, synthesis, evaluation and
understanding of the students), with 1 question from each module, having 5 marks for each
question. Students shall answer all questions. Part B contains 7 questions (such questions
shall be useful in the testing of overall achievement and maturity of the students in a course,
through long answer questions relating to theoretical/practical knowledge, derivations,
problem solving and quantitative evaluation), with minimum one question from each module
of which student shall answer any five. Each question can carry 7 marks. Total duration of
the examination will be 150 minutes.
Model Question Paper
PART A
1. Prove that the energy density spectrum of a deterministic signal can be obtained from the Fourier transform of that signal.
2. Differentiate between the MDL criterion and the CAT criterion for the selection of an AR model order.
3. Use the Levinson-Durbin algorithm to solve the normal equations recursively for an m-step forward predictor.
4. In adaptive filtering, iterative schemes are preferred over linear estimation. Justify.
PART B
6. With necessary equations, substantiate the use of the Welch method (Averaging Modified Periodograms) for spectrum estimation.
8. With necessary equations, explain the Yule-Walker method for AR model parameters in the context of parametric spectral estimation.
9. Illustrate how the Schur algorithm can be utilized for the solution of the normal equations.
11. What is the PR condition for a filter bank? Design the analysis filter bank for a perfect reconstruction two-channel Quadrature Mirror filter bank.
b. Express the output y[n] of the multirate system given below as a function of the input x[n].
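As an illustration of the non-parametric estimation asked about in question 6, the sketch below, assuming SciPy and NumPy, estimates the power spectrum of a noisy sinusoid with Welch's method of averaging modified periodograms; the signal and parameters are hypothetical.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)   # 50 Hz tone in noise

# Welch: split into overlapping segments, window each segment, average the periodograms
f, Pxx = welch(x, fs=fs, window="hann", nperseg=256, noverlap=128)
print("Peak located near", f[np.argmax(Pxx)], "Hz")
```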
Syllabus and Course Plan (For 3 credit courses, the content can be for 40 hrs and for 2 credit courses, the content can be for 26 hrs)
Syllabus
Course Plan
No | Topic | No. of Lectures
1 | Power spectrum estimation |
1.1 | Estimation of spectra from finite duration observation of signals: Computation of Energy density spectrum, Estimation of the Autocorrelation and power spectrum of random signals, The periodogram, Use of DFT in power spectrum estimation. | 2
1.2 | Non-parametric methods for spectral estimation: Bartlett method (Averaging Periodograms), Welch method (Averaging Modified Periodograms). | 3
1.3 | Blackman and Tukey Method, Performance characteristics and computational requirements of non-parametric methods for spectral estimation. | 3
2 | Parametric spectral estimation |
2.1 | Parametric spectral estimation: Relationship between Autocorrelation and Model parameters. | 2
2.2 | Yule-Walker method for AR model parameters, Burg method for AR model parameters. | 3
2.3 | Selection of AR model order, MA and ARMA models for power spectrum estimation. | 4
3 | Linear Prediction and optimum linear filters |
3.1 | Linear Prediction: Forward and Backward Linear Prediction, Optimum reflection coefficients for the Lattice Forward and Backward Predictors. | 3
3.2 | Solution of the Normal Equations: Levinson-Durbin Algorithm, Schur Algorithm. | 3
3.3 | Properties of Linear Prediction Filters. | 2
4 | Adaptive filters |
4.1 | Adaptive filters for adaptive channel equalization, adaptive noise cancellation and Linear Predictive Coding of Speech Signals. | 3
4.2 | Adaptive Direct form FIR filters: Minimum mean square criteria, LMS algorithm. | 2
4.3 | Adaptive Direct form filters: The RLS algorithm, Fast RLS Algorithm, Properties of Direct Form RLS algorithm. | 3
5 | Multirate Signal Processing |
5.1 | Mathematical description of sampling rate converters: Interpolator and Decimator, Multirate Identities. | 2
5.2 | The Polyphase decomposition, Applications to sub band coding, Two Channel QMF filter bank, PR condition. | 3
5.3 | Fourier transform, Short-time (windowed) Fourier transform, The discrete wavelet transform, Wavelet admissibility condition, MRA Axioms, scaling and wavelet function. | 3
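Topic 3.2 covers the Levinson-Durbin recursion for solving the normal (Yule-Walker) equations. A minimal sketch, assuming NumPy and an illustrative autocorrelation sequence, is given below.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations by the Levinson-Durbin recursion.

    r: autocorrelation sequence r[0], r[1], ..., r[order]
    Returns (a, E): AR/predictor coefficients a[1..order] and the final prediction error power E.
    """
    a = np.zeros(order + 1)
    E = r[0]
    for m in range(1, order + 1):
        # reflection coefficient k_m from the current prediction error
        k = (r[m] - np.dot(a[1:m], r[m - 1:0:-1])) / E
        a_new = a.copy()
        a_new[m] = k
        a_new[1:m] = a[1:m] - k * a[m - 1:0:-1]
        a, E = a_new, E * (1.0 - k * k)
    return a[1:], E

# Example with hypothetical autocorrelation values of an AR(2) process
r = np.array([1.0, 0.5, 0.2])
coeffs, err = levinson_durbin(r, 2)
print("Predictor coefficients:", coeffs, "prediction error power:", err)
```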
Text books:
2. Monson H. Hayes, “Statistical Digital Signal Processing and Modeling”, John Wiley and
Sons Inc., New York, 2006.
References:
1. Simon Haykin, "Adaptive Filter Theory", Prentice Hall, Englewood Cliffs, NJ, 1986.
2. Steven M. Kay, "Modern Spectral Estimation: Theory and Application", Pearson India, January 2009.
3. D. G. Manolakis, V. K. Ingle and S. M. Kogon, "Statistical and Adaptive Signal Processing", McGraw Hill, 2000.
4. Sophocles J. Orfanidis, "Optimum Signal Processing", McGraw-Hill, 2000.
5. S. K. Mitra, "Digital Signal Processing: A Computer Based Approach", Tata McGraw-Hill.
6. C. S. Burrus, R. A. Gopinath, H. Guo, "Introduction to Wavelets and Wavelet Transforms: A Primer", Prentice Hall.
CODE | COURSE NAME | CATEGORY | L | T | P | CREDIT
221TEC004 | TOPICS IN MACHINE LEARNING | PROGRAM CORE 2 | 3 | 0 | 0 | 3
Course Outcomes: After the completion of the course the student will be able to
CO 1 Understand and apply the fundamentals, concepts and terminologies in machine learning, deep learning and artificial intelligence.
CO 2 Understand and analyse the principles of supervised and unsupervised learning and illustrate the functionalities of the supervised and unsupervised learning algorithms.
CO 4 Analyze and evaluate the performance of artificial neural networks and deep learning neural architectures.
CO 5 Create and evaluate critically the domain specific applications of machine learning.
PO 1 PO 2 PO 3 PO 4 PO 5 PO 6 PO 7
CO 1 2 3 2
CO 2 2 3 2
CO 3 3 2 2 2 3
CO 4 2 2 3
CO 5 3 3 2
Assessment Pattern
Mark distribution
Total Marks | CIE | ESE | ESE Duration
Evaluation shall only be based on application, analysis or design based questions (for
both internal and end semester examinations).
The project shall be done individually. Group projects not permitted. Test paper shall
include minimum 80% of the syllabus.
The end semester examination will be conducted by the University. There will be two parts;
Part A and Part B. Part A contain 5 numerical questions (such questions shall be useful in the
testing of knowledge, skills, comprehension, application, analysis, synthesis, evaluation and
understanding of the students), with 1 question from each module, having 5 marks for each
question. Students shall answer all questions. Part B contains 7 questions (such questions
shall be useful in the testing of overall achievement and maturity of the students in a course,
through long answer questions relating to theoretical/practical knowledge, derivations,
problem solving and quantitative evaluation), with minimum one question from each module
of which student shall answer any five. Each question can carry 7 marks. Total duration of
the examination will be 150 minutes.
Model Question Paper
PART A
Answer ALL Questions. Each Question Carries 5 marks.
PART – B
Answer any 5 full questions ; each question carries 7 marks.
6. a) With neat schematics, explain the machine learning process flow. Explain the process involved in model building and validation, interpretation of the model and data visualization. (5 marks, CO4)
6. b) What do you mean by hyperparameters? Explain the process of fine tuning the hyperparameters. (2 marks, CO3)
7. a) Discuss the principles of density based clustering and explain the basic algorithm. What do you mean by the ε-neighbourhood? (3 marks, CO1)
7. b) With the help of the relevant equation, explain how you compute the cosine similarity between two feature vectors. (4 marks, CO5)
8. a) State the dual problem of optimization for SVM for linearly non-separable patterns. Obtain the equation of the optimal hyperplane, the optimum weight vector wo and the optimum bias bo for linearly non-separable patterns. Give the expression for finding the label of a test example xt. (2 marks, CO2)
8. b) Explain what is meant by hard and soft margins. Derive the expression for the margin of separation for SVM. (5 marks, CO4)
9. a) Explain the concept of error back propagation learning used in MLFFNN. Obtain the weight updation equation for an MLFFNN with one hidden layer. (3 marks, CO4)
9. b) Discuss the significance of activation functions. With the help of a neat sketch explain the sigmoidal activation function. Explain how the spread factor (β) value is selected. (3 marks, CO1)
9. c) Discuss the functionalities of different layers in a CNN. (1 mark, CO2)
10. a) Compare and contrast agglomerative and divisive techniques for clustering. Give the step-by-step algorithm for agglomerative clustering. What are the proximity measures between clusters? (2 marks, CO1)
10. b) With the help of suitable illustrations, explain the selection of the number of clusters using silhouette analysis. (5 marks, CO3)
11. a) Discuss the basic concepts of deep learning. With a neat schematic explain the principles of convolutional neural networks. (3 marks, CO1)
11. b) Discuss the need for a pooling layer in a CNN architecture. Distinguish between max pooling and average pooling. (4 marks, CO3)
12. a) Illustrate the use of machine learning algorithms in medical image segmentation for brain tumor detection. Draw neat schematics for the system and explain the functionality of each module. (3 marks, CO5)
12. b) Explain the use of machine learning algorithms and tools for rain forecasting in Kerala. Explain the data collection process, data refinement and models used for prediction. How do you validate the model? (4 marks, CO5)
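Question 7(b) asks how cosine similarity between two feature vectors is computed: cos(θ) = (x·y)/(‖x‖‖y‖). A minimal sketch, assuming NumPy and hypothetical vectors, follows.

```python
import numpy as np

def cosine_similarity(x, y):
    # cos(theta) = dot(x, y) / (||x|| * ||y||)
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

x = np.array([1.0, 2.0, 3.0])   # hypothetical feature vectors
y = np.array([2.0, 1.0, 0.5])
print("Cosine similarity:", cosine_similarity(x, y))
```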
Syllabus
Artificial neural networks - Basic principles of Back propagation, Gradient Descent, Training Neural Networks, Initialisation and activation functions. Deep learning principles and architectures - Dropout, Batch normalisation, Ensemble learning, Data augmentation, Transfer learning, Convolutional Neural Networks, Recurrent Neural Networks, LSTM, Data augmentation - GAN.
No | Topic | No. of Lectures [40 Hrs]
1 | Basics of machine learning. |
1.1 | Introduction to machine learning. | 1
1.2 | Artificial intelligence and deep learning. | 1
1.3 | Learning algorithms - overfitting and underfitting, hyperparameters and validation sets. | 2
1.4 | Estimators, bias and variance, Maximum Likelihood Estimation. | 1
 | Machine learning process flow - define problem, objective, data acquisition and preprocessing, feature engineering, model building and validation. | 2
2 | Semi-supervised and Reinforcement Learning |
2.1 | Supervised Learning - Basic principles of linear regression, logistic regression. | 2
Text Books
1. Machine Learning: The Art and Science of Algorithms that Make Sense of Data, Peter A
Flach, Cambridge University Press, ISBN-10 1107422221, 2012.
2. Applied Machine Learning, 2nd Edition, M. Gopal, Mc Graw Hill Education, ISBN-10 :
9789353160258, 2018.
3. Neural Networks and Learning Machines, Simon S. Haykin, 3rd Edition, Pearson-Prentice Hall, ISBN-10: 0-13-147139-2, 2009.
Reference Books
2. Machine Learning, 1st Edition, Saika Dutt, Subramanian Chandramouli, Amit Kumar Das,
Pearson Education, ISBN-10 9353066697, 2018.
3. Machine Learning: A First Course for Engineers and Scientists, Andreas Lindholm, Niklas
Wahlstom, Fredrik Lindsten et al., Cambridge University Press, ISBN-10 1108843603,
2022.
4. Handbook of Reinforcement Learning and Control, Kyriakos G. Vamvoudakis, Yan Wan, et al., ISBN-10 3030609898, Springer, 2021.
5. Neural Networks and Deep Learning, Charu C. Aggarwal, Springer, ISBN: 978-3-319-
94463-0, 2018.
CODE | COURSE NAME | CATEGORY | L | T | P | CREDIT
221LEC001 | SIGNAL PROCESSING LAB 1 | LABORATORY 1 | 0 | 0 | 2 | 1
Preamble: To experiment with the concepts introduced in the topics: Linear Algebra, Random Processes, Advanced Signal Processing and Machine Learning.
Course Outcomes: After the completion of the course the student will be able to
Assessment Pattern
Mark distribution
Total Marks | CIE | ESE
100 | 100 | --
Tools :
Numerical Computing Environment – MATLAB or any other equivalent tool.
Syllabus
No Topics
1 Linear Algebra
1.1 Row Reduced Echelon Form: To reduce the given mxn matrix into Row reduced
Echelon form
1.2 Gram-Schmidt Orthogonalization: To find orthogonal basis vectors for the given
set of vectors. Also find orthonormal basis.
1.3 Least Squares Fit to a Sinusoidal function
1.4 Least Squares fit to a quadratic polynomial
1.5 Eigen Value Decomposition
1.6 Singular Value Decomposition
1.7 Karhunen- Loeve Transform
2 Advanced DSP
2.1 Sampling rate conversion: To implement Down sampler and Up sampler and
study their characteristics
2.2 Two channel Quadrature Mirror Filterbank: Design and implement a two channel
Quadrature Mirror Filterbank
3 Random Processes
3.1 To generate random variables having the following probability distributions: (a) Bernoulli (b) Binomial (c) Geometric (d) Poisson (e) Uniform (f) Gaussian (g) Exponential (h) Laplacian
3.2 Central Limit Theorem: To verify that the sum of a sufficiently large number of uniformly distributed random variables is approximately Gaussian distributed, and to estimate the probability density function of the random variable (a minimal sketch of this experiment is given after this list).
4 Machine Learning
4.1 Implementation of K Nearest Neighbours Algorithm with decision region plots
4.2 Implementation of K Means Algorithm with decision region plots
4.3 Implementation of Perceptron Learning Algorithms with decision region plots
4.4 Implementation of the SVM algorithm for classification applications
5 Implement a mini project pertaining to an application of Signal Processing
in real life, make a presentation and submit a report
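For experiment 3.2, a minimal sketch is given below. It assumes NumPy and Matplotlib; MATLAB users would follow the same steps with rand, mean and histogram. The number of summed variables and trials are illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt

N = 30          # number of uniform random variables summed (illustrative)
trials = 10000  # number of independent sums

# Each row is one trial; sum N independent Uniform(0,1) samples per trial
sums = np.random.rand(trials, N).sum(axis=1)

# Standardize and compare the histogram against the standard normal density
z = (sums - N * 0.5) / np.sqrt(N / 12.0)    # Uniform(0,1): mean 1/2, variance 1/12
plt.hist(z, bins=50, density=True, alpha=0.6, label="standardized sums")
x = np.linspace(-4, 4, 200)
plt.plot(x, np.exp(-x**2 / 2) / np.sqrt(2 * np.pi), label="N(0,1) pdf")
plt.legend()
plt.title("Central Limit Theorem check")
plt.show()
```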
SEMESTER I
PROGRAM ELECTIVE I
CODE | COURSE NAME | CATEGORY | L | T | P | CREDIT
221EEC012 | ADVANCED DIGITAL COMMUNICATION | PROGRAM ELECTIVE 1 | 3 | 0 | 0 | 3
Course Outcomes: After the completion of the course the student will be able to
PO 1 PO 2 PO 3 PO 4 PO 5 PO 6 PO 7
CO 1 3 3 2 3 3
CO 2 3 3 2 3 3 3 2
CO 3 3 2 2 3 3
CO 4 3 2 2 3 3 3 2
CO 5 3 2 2 3 3 3 2
CO 6 3 3 2 3 3 3 2
Assessment Pattern
Mark distribution
Total Marks | CIE | ESE | ESE Duration
Evaluation shall only be based on application, analysis or design based questions (for both
internal and end semester examinations).
Part A will contain 5 numerical/short answer questions with 1 question from each module,
having 5 marks for each question (such questions shall be useful in the testing of knowledge,
skills, comprehension, application, analysis, synthesis, evaluation and understanding of the
students). Students should answer all questions.
Part B will contain 7 questions (such questions shall be useful in the testing of overall
achievement and maturity of the students in a course, through long answer questions relating
to theoretical/practical knowledge, derivations, problem solving and quantitative evaluation),
with minimum one question from each module of which student should answer any five.
Each question can carry 7 marks.
Note: The marks obtained for the ESE for an elective course shall not exceed 20% over the
average ESE mark % for the core courses. ESE marks awarded to a student for each elective
course shall be normalized accordingly. For example if the average end semester mark % for
a core course is 40, then the maximum eligible mark % for an elective course is 40+20 = 60
%.
Model Question Paper
Branch:
Part A
Part B
6. Three messages are transmitted over an AWGN channel with noise power spectral density N0/2. The messages are
   S1(t) = 1 for 0 ≤ t ≤ T, and 0 otherwise;
   S2(t) = −S3(t) = 1 for 0 ≤ t ≤ T/2, −1 for T/2 ≤ t ≤ T, and 0 otherwise.
Syllabus
Module I
Module II
Module III
Module IV
Multi Channel and Multi Carrier Systems: Multichannel digital communication in AWGN channels, Multicarrier communication - Discrete implementation of multicarrier modulation, FFT based multicarrier systems, Spread spectrum principles, Generation of PN sequences, DSSS & FHSS, Synchronization of spread spectrum signals.
Module V
No | Topic | No. of Lectures
1 | Characterization of Communication Signals and Systems |
1.1 | Overview of Digital Communication systems. | 1
1.2 | Communication Channels and Mathematical models. | 2
1.3 | Representation of band pass signals and systems, Signal space representation. | 2
1.4 | Representation of digitally modulated signals. | 2
1.5 | Spectral Characteristics of Digitally Modulated Signals. | 2
Reference Books
Preamble : Pattern analysis is the use of machine learning algorithms to identify and
categorise patterns. It classifies data based on statistical information or knowledge gained
from patterns and their representation. The popular pattern analysis tasks are pattern recognition, classification, clustering and retrieval. Students will be able to learn pattern
analysis fundamentals, understand the different types of algorithms in pattern analysis,
develop in-depth knowledge of pattern analysis tasks such as classification, clustering,
matching, retrieval etc.
Course Outcomes: After the completion of the course the student will be able to
CO 1 Understand and apply the fundamentals, concepts and terminologies in pattern analysis.
CO 2 Understand and analyse the principles of feature extraction and optimization and illustrate the functionalities of the feature extraction and optimization algorithms.
CO 3 Understand and analyse the principles of supervised models for pattern analysis and illustrate the functionalities of the supervised pattern analysis algorithms.
CO 4 Understand and analyse the principles of unsupervised models for pattern analysis and illustrate the functionalities of the unsupervised pattern analysis algorithms.
CO 5 Create and evaluate critically the domain specific applications of pattern analysis.
PO 1 PO 2 PO 3 PO 4 PO 5 PO 6 PO 7
CO 1 2 3 3
CO 2 2 2 2
CO 3 3 2 2 3 3
CO 4 2 2 2
CO 5 3 3 2
Assessment Pattern
Apply 20 %
Analyse 40 %
Evaluate 20 %
Create 20 %
Mark distribution
Total Marks | CIE | ESE | ESE Duration
Test paper shall include minimum 80% of the syllabus.
Note: The marks obtained for the ESE for an elective course shall not exceed 20% over the average ESE mark % for the core courses. ESE marks awarded to a student for each elective course shall be normalized accordingly. For example, if the average end semester mark % for a core course is 40, then the maximum eligible mark % for an elective course is 40+20 = 60 %.
Model Question paper
PART A
Answer ALL Questions. Each Question Carries 5 marks.
PART – B
Answer any 5 full questions ; Each question carries 7 marks.
6. a) The minimum error rate classification can be achieved by the use of discriminant functions gi(x) = ln p(x|wi) + ln P(wi). The densities p(x|wi) are assumed to be multivariate Gaussian. Determine the discriminant function for (i) Σi = σ²I, (ii) Σi = Σ, and (iii) Σi arbitrary. Discuss the nature of the decision surface in each case. (5 marks, CO4)
6. b) Illustrate the working of LDA with a suitable toy dataset. Also explain how predictions are made. (2 marks, CO3)
7. a) Explain Bayes decision theory. Give the expression for the Bayes decision rule. Discuss its significance. (3 marks, CO1)
7. b) Give the expression for the Bayesian decision function. Use the Bayes decision rule to find the answer to the following problem: Suppose a drug test is 99% sensitive and 99% specific. That is, the test will produce 99% true positive results for drug users and 99% true negative results for non-drug users. Suppose that 0.5% of people are users of the drug. If a randomly selected individual tests positive, what is the probability that they are a user? (4 marks, CO5)
8. a) Compare and contrast the use of the DFT and the DCT for extracting transform based features from image data. (2 marks, CO2)
8. b) Discuss the significance of dimensionality reduction. Explain the step-by-step algorithm for PCA. (5 marks, CO4)
9. a) Discuss the gradient descent algorithm used in MLFFNN. Also comment on local and global minima. (3 marks, CO4)
9. b) Explain the concept of error back propagation learning used in MLFFNN. Obtain the weight updation equation for an MLFFNN with one hidden layer. (3 marks, CO1)
9. c) Compare MLFFNN and perceptron based classifiers for pattern analysis. Comment on the stopping criterion for both. (1 mark, CO2)
10. a) Explain what is meant by hard and soft margins. Derive the expression for the margin of separation for SVM. (2 marks, CO1)
10. b) State the dual problem of optimization for SVM for linearly non-separable patterns. Obtain the equation of the optimal hyperplane, the optimum weight vector wo and the optimum bias bo for linearly non-separable patterns. Give the expression for finding the label of a test example xt. (5 marks, CO3)
11. a) Discuss the principles of fuzzy k-means clustering. (3 marks, CO1)
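For the drug-testing problem in question 7(b), Bayes' rule gives the posterior directly; the short Python check below works through the figures stated in the question.

```python
# Bayes' rule: P(user | +) = P(+ | user) P(user) / P(+)
p_user = 0.005               # prior: 0.5% of people are users
p_pos_given_user = 0.99      # sensitivity
p_pos_given_nonuser = 0.01   # 1 - specificity

p_pos = p_pos_given_user * p_user + p_pos_given_nonuser * (1 - p_user)
posterior = p_pos_given_user * p_user / p_pos
print(posterior)             # about 0.332: roughly a one-in-three chance the person is a user
```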
Syllabus
Text Books
1. Pattern Classification, Richard O. Duda, Peter E. Hart and David G. Stork, 2nd Edn., John Wiley & Sons Inc., 2001.
2. Neural Networks and Learning Machines, Simon S. Haykin, 3rd Edition, Pearson-Prentice Hall, ISBN-10: 0-13-147139-2, 2009.
Reference Books
2. Advances in Fuzzy Clustering and its Applications, Jose Valente de Olliveira (Editor),
Witold Pedrycz (Editor), ISBN: 978-0-470-02760-8, Wiley 2017.
3. Digital Pattern Recognition, King Sun Fu, ISBN: 978-3-642-96303-2, Springer, 1976
4. Pattern Recognition and Image Analysis, Earl Gose, Richard Johnsonbaugh, and Steve Jost, PHI Pvt. Ltd., New Delhi-1, 1999.
Course Outcomes: After the completion of the course the student will be able to
CO 1 Understand the basic concepts of speech production and apply time domain analysis methods for classification of speech sounds
CO 2 Analyse speech segments using frequency domain techniques - STFT and Cepstral analysis
CO 3 Apply LPC Analysis to speech signals
CO 4 Analyse and apply speech coding techniques for speech compression, storage and transmission
CO 5 Understand the fundamentals of speech processing applications
PO 1 PO 2 PO 3 PO 4 PO 5 PO 6 PO 7
CO 1 3 3 3 3
CO 2 3 3 3 3
CO 3 3 3 3 3
CO 4 3 3 3 3
CO 5 3 3 3 3
Assessment Pattern
Apply 40%
Analyse 40%
Evaluate 20%
Create
Mark distribution
Total Marks | CIE | ESE | ESE Duration
Test paper (shall include a minimum of 80% of the syllabus), 1 no.: 10 marks
Note: The marks obtained for the ESE for an elective course shall not exceed 20% over the
average ESE mark % for the core courses. ESE marks awarded to a student for each elective
course shall be normalized accordingly. For example if the average end semester mark % for
a core course is 40, then the maximum eligible mark % for an elective course is 40+20 = 60
%.
Model Question Paper
No. of Pages: 2 D
APJ ABDUL KALAM TECHNOLOGICAL UNIVERSITY
FIRST SEMESTER M.TECH DEGREE EXAMINATION
Branch: Electronics and Communication Engineering
Stream(s): Signal Processing
Course Code & Name: 221EEC014 - SPEECH SIGNAL PROCESSING
Max. Marks: 60    Duration: 2.5 hours
PART A
Answer all questions. Each question carries 5 marks.
1. How do you differentiate voiced and unvoiced speech segments using short time zero
crossing rate and short time energy?
2. Why do we do short time analysis in the case of speech signals? Distinguish between
narrow band and wide band spectrograms.
3. Explore the use of AR models in the analysis of speech signals.
4. How is perceptual irrelevancy removal used in speech compression?
5. Investigate the challenges in speech segmentation.
PART B
Answer any five questions. Each question carries 7 marks.
6. [Figure-based question referring to Fig. 1(a) and Fig. 1(b); state your inferences.]
7. How can we use cepstral analysis to separate source and filter characteristics of a
speech signal? Derive the steps involved in obtaining cepstral coefficients.
8. Formulate the Filter Bank Summation (FBS) method of STFT synthesis. Derive the
FBS constraint.
9. How can we use Levinson Durbin algorithm to find the LPC coefficients?
10. How can we use sub band coding for speech compression?
11. How can we convert any given text to speech ?
12. How can we automatically verify the identity of a given speaker?
13.
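Question 2 of Part A contrasts narrow band and wide band spectrograms. The sketch below, assuming SciPy and NumPy and a hypothetical speech array x sampled at fs, shows how the window length controls that trade-off: a long window gives a narrowband spectrogram (good frequency resolution), a short window a wideband one (good time resolution).

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000                       # sampling rate in Hz (assumed)
x = np.random.randn(fs)          # placeholder for one second of speech samples

# Narrowband spectrogram: long window (~32 ms) resolves individual harmonics
f_nb, t_nb, S_nb = spectrogram(x, fs=fs, window="hamming", nperseg=512, noverlap=384)

# Wideband spectrogram: short window (~4 ms) resolves formant and pitch-pulse structure in time
f_wb, t_wb, S_wb = spectrogram(x, fs=fs, window="hamming", nperseg=64, noverlap=48)

print(S_nb.shape, S_wb.shape)    # (frequency bins, time frames) for each case
```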
Syllabus
MODULE I
Speech Production and Short-Time Speech Analysis: Acoustic theory of speech production, Excitation, Vocal tract model for speech analysis, Formant structure, Pitch, Articulatory Phonetics, and Acoustic Phonetics, Time domain analysis (Short time energy, short time zero crossing rate, ACF).
MODULE II
Frequency domain analysis: Filter Banks, STFT, Spectrogram, Formant Estimation &
Analysis, Cepstral Analysis, MFCC,
MODULE III
Parametric representation of speech: AR model, ARMA model, LPC model, Autocorrelation
method, Covariance method, Levinson-Durbin Algorithm, Lattice form, Sinusoidal Model,
GMM, Hidden Markov Model,
MODULE IV
Speech coding: Phase Vocoder, LPC, Sub-band coding, Adaptive Transform Coding,
Harmonic Coding, Vector Quantization based Coders, CELP
MODULE V
Applications of speech processing: Fundamentals of Speech recognition, Speech
segmentation, Text-to-speech conversion, speech enhancement, Speaker Verification,
Language Identification
Course Plan
No | Topic | No. of Lectures
1 | MODULE I |
1.1 | Acoustic theory of speech production, Excitation, Vocal tract model for speech analysis | 2
1.2 | Formant structure, Pitch, Articulatory Phonetics, and Acoustic Phonetics | 2
1.3 | Time domain analysis (Short time energy, short time zero crossing rate, ACF). | 2
2 | MODULE II |
2.1 | Filter Banks, STFT | 3
2.2 | Spectrogram, Formant Estimation & Analysis | 2
2.3 | Cepstral Analysis, MFCC | 2
3 | MODULE III |
3.1 | AR model, ARMA model, LPC Analysis - LPC model, Autocorrelation method | 3
3.2 | Covariance method, Levinson-Durbin Algorithm, Lattice form | 3
3.3 | Sinusoidal Model, GMM, Hidden Markov Model | 3
4 | MODULE IV |
4.1 | Phase Vocoder, LPC, Sub-band coding | 3
4.2 | Adaptive Transform Coding, Harmonic Coding | 3
4.3 | Vector Quantization based Coders, CELP | 3
5 | MODULE V |
5.1 | Fundamentals of Speech recognition, Speech segmentation. | 3
5.2 | Text-to-speech conversion, speech enhancement | 3
5.3 | Speaker Verification, Language Identification | 3
Reference Books
Course Outcomes: After the completion of the course the student will be able to
PO 1 PO 2 PO 3 PO 4 PO 5 PO 6 PO 7
CO 1 3 2 2
CO 2 3 2 2 3
CO 3 3 3 2 2
CO 4 3 3 3 2
CO 5 3 2 3 3
CO 6 3 3 3 3 3
Assessment Pattern
The end semester examination will be conducted by the respective College. There will be two
parts; Part A and Part B.
Part A will contain 5 numerical/short answer questions with 1 question from each module,
having 5 marks for each question (such questions shall be useful in the testing of knowledge,
skills, comprehension, application, analysis, synthesis, evaluation and understanding of the
students). Students should answer all questions.
Part B will contain 7 questions (such questions shall be useful in the testing of overall
achievement and maturity of the students in a course, through long answer questions relating
to theoretical/practical knowledge, derivations, problem solving and quantitative evaluation),
with minimum one question from each module of which student should answer any five.
Each question can carry 7 marks.
Note: The marks obtained for the ESE for an elective course shall not exceed 20% over the
average ESE mark % for the core courses. ESE marks awarded to a student for each elective
course shall be normalized accordingly. For example, if the average end semester mark % for
a core course is 40, then the maximum eligible mark % for an elective course is 40+20 = 60
%.
Model Question Paper
Program: M.Tech.in EC3 (Signal Processing, Signal Processing & Embedded Systems,
Communication Engineering & Signal Processing)
Part A
1. Mention two applications of embedded systems in health care and explain how they can be used for monitoring purposes.
2. Discuss some of the deviations of the ARM architecture from pure RISC nature.
3. Explain the linear waterfall model of EDLC.
4. Discuss any two product enclosure development techniques
5. What is cross compilation? What are the files generated on cross compilation?
Part B
Module 1: 8 hours
Module 2: 11 hours
Module 3: 7 hours
Module 4: 6 hours
Product enclosure design and development: Concept of firmware, operating system and
application programs. Power supply Design, External Interfaces.
Module 5: 8 hours
No | Topic | No. of Lectures
1 | Introduction to Embedded systems |
1.1 | Embedded system examples, Parts of Embedded System, Typical Processor architecture, Power supply, clock, memory interface, interrupt, I/O ports, Buffers, Programmable Devices, ASIC, etc. | 4
1.2 | Simple interfacing examples, Memory Technologies, EPROM, Flash, OTP, SRAM, DRAM, SDRAM, etc. | 4
2 | ARM architecture |
2 ARM architecture
1. Van Ess, Currie and Doboli, Laboratory Manual for Introduction to Mixed-Signal,
Embedded Design, Alphagraphics, USA
2. William Hohl, ARM Assembly Language Programming, CRC Press, 2009.
3. Andrew Sloss, Dominic Symes, Christ Wright, ARM System Developer’s guide –
Designing and optimizing software, Elsevier Publishers, 2008.
CODE | COURSE NAME | CATEGORY | L | T | P | CREDIT
221EEC016 | INFORMATION HIDING AND DATA ENCRYPTION | PROGRAM ELECTIVE 1 | 3 | 0 | 0 | 3
Preamble: The course is designed to provide an insight to various data encryption and
information hiding techniques and applying these techniques in various security applications.
The course also aims to develop skills in analysing the strengths and weakness of various
techniques used for information security.
Course Outcomes: After the completion of the course the student will be able to
CO 1 Apply various techniques for data encryption and analyse the performance (K3).
CO 2 Identify and apply information hiding and digital watermarking techniques for given
problems in security systems (K3).
CO 3 Analyse publications related to encryption and information hiding in journals and
conferences and submit report (K4).
CO 4 Choose and solve a research problem in the area of data encryption /and information
hiding (K5).
PO 1 PO 2 PO 3 PO 4 PO 5 PO 6 PO 7
CO 1 3 2
CO 2 3 3 2
CO 3 3 3 3 2
CO 4 3 3 3 3 3 2 2
Assessment Pattern
Bloom's Category | Continuous Internal Evaluation | End Semester Examination
Understand | 10 |
Apply | 40 | 10
Analyse | 10 | 15
Evaluate | | 15
Create | |
Mark distribution
CO1: Apply various techniques for data encryption and analyse the performance
1. Distinguish between a synchronous and a nonsynchronous stream cipher.
2. In the DSS scheme, if Eve can find the value of r (random secret), can she forge a
message? Explain.
3. Prove that (x3 + x2 + 1) is an irreducible polynomial of degree 3.
CO2: Identify and apply information hiding and digital watermarking techniques for
given problems in secure communication systems
1. Choose a digital watermarking technique for protecting an image and justify
2. Derive a method to hide an image in a video with optimal bandwidth.
3. Describe the hiding techniques in spatial and temporal transform domains and their
applications.
CO3: Analyse publications related to encryption and information hiding in journals and
conferences and submit report.
1. Review any 10 conference papers on digital watermarking techniques, analyse and
prepare a report.
2. Analyse at least 10 journal papers on video steganography and present.
CO4: Choose and solve a research problem in the area of data encryption /and
information hiding.
1. Encrypt any text data using symmetric and asymmetric methods of encryption and
compare the performance.
2. Protect an audio file which has to be transmitted through a vulnerable and noisy channel using image steganography.
3. Hide an image using any one of the conventional steganographic techniques and also
using network steganography (Protocol) and compare your results.
Model Question Paper
Course Code: 221EEC016
Course Name: INFORMATION HIDING AND DATA ENCRYPTION
Max. Marks: 60 Duration: 2.5 Hours
PART A
Answer all Questions. Each question carries 5 marks
Sl. No. | Question | CO
1. Find the remainder when 1653 is divided by 7. (CO1)
2. Describe the parameters used for measuring the capability of hiding techniques. (CO1)
3. Identify the techniques used for tamper detection in image and audio and explain how the detection is carried out. (CO1)
4. Suggest a mathematical model for protecting bio-medical signals and justify. (CO2)
5. Explain the detection theoretic approach for steganalysis. (CO2)
PART B
Answer any one full question from each module. Each question carries 7 marks
6. Derive a general formula to calculate the number of each kind of transformation (SubBytes, ShiftRows, MixColumns, and AddRoundKey) and the number of total transformations for AES192 and AES256. The formula should be parametrized on the number of rounds. (CO1)
7. Write down the algorithm for embedding and retrieval of text data using the spread spectrum technique. (CO2)
8. Derive a mathematical model for hiding an image and explain. (CO2)
9. Differentiate between adaptive and non-adaptive techniques for information hiding. Which are the adaptive techniques used for hiding audio? (CO2)
10. Write down the encoding and decoding process for image hiding in the DCT domain and explain. (CO2)
11. Discuss temporal and transform domain techniques used in video using relevant examples. (CO2)
12. Illustrate the SVM method for steganalysis. (CO2)
Syllabus
Module1:
Module2:
Introduction to Information Hiding: Steganography. Objectives, difference, requirements,
Types – Fragile, Robust. Parameters and metrics - BER, PSNR, WPSNR, Correlation
coefficient, MSE, and Bit per pixel.
Module3:
Digital Watermarking: Algorithms, Types of Digital Watermarks, Applications, Audio
Watermarking
Module4:
Information Hiding in 1D signals: Time and transform techniques, hiding in Audio,
biomedical signals, HAS Adaptive techniques.
Module5:
Information Hiding in video: Temporal and transform domain techniques, Bandwidth
requirements, HVS Adaptive techniques.
Steg analysis: Statistical Methods, HVS based methods, SVM method, Detection theoretic
approach.
Course Plan
No | Topic | No. of Lectures
1 | Review of Number Theory and Data Encryption Methods |
1.1 | Elementary Number theory | 1
1.2 | Algebraic Structures - Groups, Rings and Finite Fields, Polynomials over Finite Fields (Fq) | 2
1.3 | Introduction to Complexity theory. | 1
1.4 | Introduction to Cryptography, Classical Cryptography, Stream Ciphers | 2
1.5 | Public Key Cryptography based on the Knapsack problem | 2
1.6 | AES. Digital Signature, Zero Knowledge Proofs. | 2
2 | Information Hiding |
2.1 | Steganography. Objectives, difference, requirements | 2
2.2 | Types - Fragile, Robust. Parameters and metrics - BER, PSNR, WPSNR, Correlation coefficient, MSE, and Bit per pixel. | 3
2.3 | Information Hiding Approaches: LSB, additive and spread spectrum methods. | 3
3 | Digital Watermarking and applications of Image Hiding |
3.1 | Digital Watermarking Algorithms | 2
3.2 | Types of Digital Watermarks and Applications | 1
3.3 | Audio Watermarking | 1
3.4 | Applications of Information Hiding: Authentication, annotation, tamper detection and Digital rights management | 2
3.5 | Hiding text and image data, mathematical formulations. | 2
4 | Information Hiding in 1D and 2D signals |
4.1 | Information Hiding in 1D signals: Time and transform techniques, hiding in Audio, biomedical signals, HAS Adaptive techniques. | 3
4.2 | Information Hiding in 2D signals: Spatial and transform techniques - hiding in images, ROI images, HVS Adaptive techniques. | 4
5 | Information Hiding in video and Steg analysis |
5.1 | Information Hiding in video: Temporal and transform domain techniques, Bandwidth requirements. | 3
5.2 | Steg analysis: Statistical Methods, HVS based methods, SVM method, Detection theoretic approach. | 4
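Topic 2.3 lists LSB substitution among the basic hiding approaches. The following minimal sketch, assuming NumPy and an 8-bit grayscale cover image array, embeds and recovers one payload bit per pixel in the least significant bit plane; the cover and payload shown are hypothetical.

```python
import numpy as np

def lsb_embed(cover, bits):
    """Embed a 0/1 bit array into the LSB plane of an 8-bit grayscale cover image."""
    stego = cover.copy().ravel()
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits   # clear the LSB, then set the payload bit
    return stego.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Recover the first n_bits payload bits from the LSB plane."""
    return stego.ravel()[:n_bits] & 1

cover = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)   # hypothetical cover image
payload = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = lsb_embed(cover, payload)
print("Recovered:", lsb_extract(stego, payload.size))            # matches the payload
print("Max pixel change:", np.abs(stego.astype(int) - cover.astype(int)).max())  # at most 1
```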
Reference Books
1. Neal Koblitz, A Course in Number Theory and Cryptography, 2nd Edition, Springer
2. Stefan Katzenbeisser, Fabien A. P. Petitcolas, Information Hiding Techniques for
Steganography and Digital Watermarking, Artech House Publishers, 2000.
3. Neil F Johnson et al Kluwer, Information hiding: Steganography and Watermarking -
Attacks and Countermeasures,Springer, 2001.
4. Ingemar J Cox, Digital Watermarking, The Morgan Kaufman Series in Multimedia
Information and Systems, 2001
5. Ira S. Moskowitz (Ed.), Information Hiding, Proceedings of the 4th International Workshop, IH 2001, Pittsburgh, USA, April 2001.
6. Handbook of Applied Cryptography, AJ Menezes, CRC Press, 2001.
CODE | COURSE NAME | CATEGORY | L | T | P | CREDIT
221EEC017 | PROGRAMMING TOOLS FOR MODELING AND SIMULATION | PROGRAM ELECTIVE 1 | 3 | 0 | 0 | 3
Preamble: This course is about learning the programming languages used in the development of embedded systems. Learners can use the concepts learned in this course for the development of processor based systems.
Course Outcomes: After the completion of the course the student will be able to
PO 1 PO 2 PO 3 PO 4 PO 5 PO 6 PO 7
CO 1 3 3 3 3
CO 2 3 3 3 3
CO 3 3 3 3 3
CO 4 3 3 3 3
CO 5 3 3 3 3
Assessment Pattern
Program: M.Tech. in EC3 (Signal Processing, Signal Processing & Embedded Systems,
Communication Engineering & Signal Processing)
PART A
6. Write a Linux X window program to create a window with a little black square in it
and exits on a key press.
7. Explain the features of the Linux system. Why is Linux regarded as a more secure operating system than other operating systems?
8. Write a C program to find the transpose of a matrix.
9. Write an embedded C ARM I/O program to display a message on the LCD using 8-
bit mode and delay.
10. Write Python code to multiply two matrices using nested loops and also perform
transpose of the resultant matrix.
11. Write a note on the image processing function in Python.
12. Write a NumPy program to create an element-wise comparison (equal, equal within
a tolerance) of two given arrays
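Questions 10 and 12 ask for Python implementations. A compact sketch of both, assuming only NumPy for the comparison part, is given below; the input matrices and arrays are hypothetical.

```python
import numpy as np

# Question 10: matrix multiplication with nested loops, then transpose of the result
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[0, 0], [0, 0]]
for i in range(2):
    for j in range(2):
        for k in range(2):
            C[i][j] += A[i][k] * B[k][j]
C_T = [[C[j][i] for j in range(2)] for i in range(2)]   # transpose of the product
print("Product:", C, "Transpose:", C_T)

# Question 12: element-wise comparison of two arrays, exact and within a tolerance
x = np.array([1.0, 2.0, 3.000001])
y = np.array([1.0, 2.5, 3.0])
print("Equal:", np.equal(x, y))
print("Equal within tolerance:", np.isclose(x, y, atol=1e-3))
```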
Syllabus
Module I (8 Hrs.)
Linux: Introduction, The shell, Shell script and programming, Shell configuration, Linux
files, directories and archives. The X window system, Xorg, and Display managers, Gnome,
KDE, Linux software management.
Module II (8 Hrs.)
Embedded Programming: C programming, Constants, variables and data types, operators and
Expressions, I/O operations, Control flow statements (if else, switch, loops), Arrays and
strings, Functions, structures and unions, Pointers, file management, Dynamic memory
allocation and Linked lists.
Embedded C Programming using embedded IDE. ARM I/O programming, LED, LCD,
Keypad interfacing, UART,SPI,I2C programming, Timer programming, Interrupt and
Exception programming, ADC, DAC, Sensor interfacing.
Module IV (8 Hrs.)
Python Programming basics: variables, Input, Output, Basic operations, String manipulation,
Loops, functions, Lists, Dictionary.
Module V (8 Hrs.)
Course Plan
No | Topic | No. of Lectures
1 | Linux: Introduction, The shell, Shell script and programming, Shell configuration, Linux files, directories and archives. The X window system, Xorg, and Display managers, Gnome, KDE, Linux software management. |
Text Books
1. David Russell, "Introduction to Embedded Systems Using ANSI C and the Arduino Development Environment", 1st edition, Morgan & Claypool Publishers, 2010.
2. E.I. Horvath, E.A. Horvath, “Learning IoT with Python and Raspberry Pi”, Learning
IoT LLC, 2019.
3. Sarmad Naimi, Muhammad Ali Mazidi, Sepehr Naimi, "The STM32F103 Arm Microcontroller and Embedded Systems: Using Assembly and C", MicroDigitalEd, 2020.
SEMESTER I
PROGRAM ELECTIVE II
CODE COURSE NAME CATEGORY L T P CREDIT
Preamble: The aim of the course is to give an overview of the commonly used DSP algorithms, their applications and various techniques for algorithmic and architecture level optimisations through algorithm to architecture mapping, which can lead to efficient hardware implementations. The course also introduces the basic features of Digital Signal Processors, microcontrollers with DSP extensions, DSP architectures with case studies, the latest architectural trends in DSPs and their programming tools.
Course Outcomes: After the completion of the course the student will be able to
Analyse the basic resource constraints in a practical DSP system and solve them
CO 1 using various techniques/transformations that map the DSP algorithms to efficient
architectures.
Apply the knowledge of various single core and multicore Digital Signal Processor
CO 2 architectures in identifying the optimal processor for solving real life signal
processing problems.
Evaluate the DSP algorithms implemented in dedicated DSP processors and the
CO 3
micro controllers with DSP extensions
Create algorithms to solve signal processing problems using the latest hardware
CO 4
platforms and software tools.
PO 1 PO 2 PO 3 PO 4 PO 5 PO 6 PO 7
CO 1 1 3 3 3 1
CO 2 1 3 3 3 1
CO 3 1 3 3 3 1
CO 4 2 3 3 3 2
Assessment Pattern
Apply 20
Analyse 15
Evaluate 15
Create 10
Mark distribution
Total Marks | CIE | ESE | ESE Duration
Preparing a review article based on peer reviewed original publications (minimum 10 publications shall be referred): 15 marks
The end semester examination question paper consists of two parts; Part A and Part B. Part
A will contain 5 numerical/short answer questions with 1 question from each module, having
5 marks for each question (such questions shall be useful in the testing of knowledge, skills,
comprehension, application, analysis, synthesis, evaluation and understanding of the
students). Students should answer all questions. Part B will contain 7 questions (such
questions shall be useful in the testing of overall achievement and maturity of the students in
a course, through long answer questions relating to theoretical/practical knowledge,
derivations, problem solving and quantitative evaluation), with minimum one question from
each module of which student should answer any five. Each question can carry 7
marks.
Syllabus
Case Study 2: Introduction to ARM Cortex-M Based Microcontrollers with DSP extensions -
ARMv7E-M architecture
No | Topic | No. of Lectures
1 | Basics of DSP Algorithm Representation to Architecture Mapping |
1.1 | DSP Algorithm representations - Block Diagram, Signal Flow Graph, Data Flow Graph, Dependence Graph. | 2
1.2 | Introduction to Filter structures - Recursive, Non-recursive and Lattice structures. | 1
1.3 | Fundamentals of DSP algorithm to architecture mapping - Loop bound, Iteration Bound, Critical Path. | 2
1.4 | Algorithms for computing Iteration Bound - Longest Path Matrix Algorithm, Minimum Cycle Mean Algorithm. | 2
Text Books
Reference Books
1. Rulph Chassaing, "Digital Signal Processing and Applications with the C6713 and C6416 DSK", John Wiley & Sons, 2005.
2. Sen M. Kuo, Woon-Seng S. Gan, "Digital Signal Processors: Architectures, Implementations, and Applications", Prentice Hall, 2004.
3. Lars Wanhammar, "DSP Integrated Circuits", Academic Press, 1999.
4. B. Venkataramani, M. Bhaskar, "Digital Signal Processors: Architecture, Programming and Applications", 2nd Ed., Tata McGraw-Hill Education, 2002.
5. A. Kharin, S. Vityazev and V. Vityazev, "Teaching multi-core DSP implementation on EVM C6678 board," 2017 25th European Signal Processing Conference (EUSIPCO), 2017, pp. 2359-2363, doi: 10.23919/EUSIPCO.2017.8081632.
6. Donald S. Reay, "Digital Signal Processing Using the ARM Cortex M4", 1st ed., Wiley Publishing, 2015.
7. Cem Ünsalan, M. Erkin Yücel, H. Deniz Gürhan, "Digital Signal Processing Using Arm Cortex-M Based Microcontrollers: Theory and Practice", ARM Education Media, 2018.
Model Question Paper
APJ ABDUL KALAM TECHNOLOGICAL UNIVERSITY
FIRST SEMESTER M. TECH DEGREE EXAMINATION, (Model Question Paper)
Course Code: 221EEC018
Course Name: DSP PROCESSORS AND ARCHITECTURE
Max. Marks: 60 Duration: 2.5 Hours
PART A
Answer all Questions. Each Carries 5 mark.
1 Differentiate between Signal Flow Graph (SFG) and Data Flow Graph (DFG) with
example.
2 What is pipelining? Explain with an example, how it helps in reducing the critical
path delay in implementing the DSP systems.
3 In what way do Super Harvard architecture-based DSPs differ from normal microprocessors?
4 What is the concept of Heterogeneous Multicore DSP Architecture? Quote an
example processor?
5 Quoting a suitable example, explain the architectural advantages of an FPGA SoC.
PART – B
(Answer any five questions, each carries 7 mark.)
6 Explain the Longest Path Matrix (LPM) Algorithm for computing the iteration bound
of a DFG.
For the DFG shown in figure below, the computation times of the nodes are shown in
parentheses. Compute the iteration bound of this DFG using the LPM algorithm.
7 For the following transfer function given, Derive the basic lattice filter and draw its
structure
8 Consider a direct-form implementation of the FIR filter
y(n) = ax(n) + bx(n-2) +cx(n-3)
Assume that the time required for 1 multiply-add operation is T
i. Pipeline this filter such that the clock period is approximately T
ii. Draw block filter architecture for a block size of three. Pipeline this
block filter such that clock period is about T. What is the system sample
9 The TMS320C6713 processor is used for an application where, it has to read the
audio data inputted through the codec and has to send the data which is band limited
to 1 KHz, to another external device for further processing. If the processor is
connected to the audio codec through the McBSPs of the TMS320C6713 processor.
a) Draw the interconnection diagram showing all the necessary signals for inputting an analog signal to the processor for processing and sending the result thereafter, with the entire data transfer initiated through the McBSPs.
b) What are the various registers that need to be programmed in order to effect the data transfer? Explain the role and functionality of each.
10 Draw a neat block schematic of the architecture of TMS320C66x series of
processor. Briefly explain the role of each block.
11 Give an overview of the memory organisation in TMS320C66xx series of
processors. Explain the role of various memory controllers and interfaces in
relieving the CPU load.
12 Give an overview of the latest architectural trends for implementing DSP algorithms. How will you compare FPGA SoCs and DSP SoCs?
CODE | COURSE NAME | CATEGORY | L | T | P | CREDIT
221EEC019 | CODING THEORY | PROGRAM ELECTIVE 1 | 3 | 0 | 0 | 3
Preamble: This course aims at a rigorous analysis of various error correction codes starting
from the earliest Hamming code to the latest polar codes used in 5G
Course Outcomes: After the completion of the course the student will be able to
PO 1 PO 2 PO 3 PO 4 PO 5 PO 6 PO 7
CO 1 3 3 2
CO 2 3 2 2
CO 3 3 3 3
CO 4 3 2 3
CO 5 3 3 2
CO 6 3 3 3
Assessment Pattern
Apply 10
Analyse 25
Evaluate 20
Create 5
Mark distribution
PART A
Answer All Questions 5 x 5 marks = 25 marks
1. Explain Hamming bound and Gilbert-Varshamov bound
2. Analyse the effect of short-cycles in the decoding of LDPC codes
3. Show that the dual of an RS code is also an RS code (and hence MDS)
4. Construct the Generator matrix of a RM(1,3) code
5. Discuss the encoding of Turbo codes
PART B
Answer any 5 questions 5 x 7 marks = 35 marks
6. The parity check bits of an (8,4) block code are generated by p0 = m0 + m1 + m3, p1 = m0 + m1 + m2, p2 = m0 + m2 + m3, p3 = m1 + m2 + m3, where m0, m1, m2 and m3 are the message digits.
a) Find the generator matrix and the parity check matrix for this code in the form G = [P : Ik].
b) Find the minimum weight of this code.
c) Find the error-detecting capabilities of this code.
d) Check whether 11101000 and 11100000 are valid codewords using the H matrix.
7. a) Explain hard decision decoding of LDPC codes using the Bit-flipping algorithm on a BSC.
b) Explain BCJR decoding of convolutional codes.
8. For a binary, narrow sense, triple error correcting BCH code of length 15, constructed
using the polynomial x4+x+1
(a) Compute a generator polynomial for this code
(b) Determine the rate of the code
(c) Construct the parity check matrix and generator matrix for this code
9. Form the generator matrix of the first order RM code RM (1,3) of length 8. What is the
minimum distance of the code? Determine its parity check sums and devise a majority logic
decoder for the code. Decode the received vector r = (01000101)
10. Describe the basic ideas of polarization. Analyse mathematically channel polarisation for
N=2 channel
12. Differentiate between the BCH Viewpoint and Vandermonde viewpoints of Reed
Solomon Codes
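Question 6 can be verified numerically. The sketch below, assuming NumPy and working over GF(2), builds G = [P : I4] from the stated parity equations, enumerates all 16 codewords to find the minimum weight, and checks the two given words against H.

```python
import numpy as np
from itertools import product

# Parity equations p0..p3 as G = [P : I4]; codeword = [p0 p1 p2 p3 m0 m1 m2 m3]
P = np.array([[1, 1, 1, 0],    # row for m0: contributes to p0, p1, p2
              [1, 1, 0, 1],    # row for m1: p0, p1, p3
              [0, 1, 1, 1],    # row for m2: p1, p2, p3
              [1, 0, 1, 1]])   # row for m3: p0, p2, p3
G = np.hstack([P, np.eye(4, dtype=int)]) % 2
H = np.hstack([np.eye(4, dtype=int), P.T]) % 2

# Minimum weight = minimum Hamming weight over all non-zero codewords
codewords = [(np.array(m) @ G) % 2 for m in product([0, 1], repeat=4)]
min_weight = min(int(c.sum()) for c in codewords if c.any())
print("Minimum weight:", min_weight)            # equals the minimum distance of the linear code

# Syndrome check: a word v is a codeword iff H v^T = 0
for word in ("11101000", "11100000"):
    v = np.array([int(b) for b in word])
    syndrome = (H @ v) % 2
    print(word, "valid" if not syndrome.any() else "not valid")
```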
Syllabus
Module 2 LDPC Codes: Regular and Irregular Low Density Parity Check Codes - Tanner graph, Message Passing decoding - Hard decision and Soft decision, Bit flipping algorithm for decoding, Belief Propagation decoding: Sum Product algorithm
Module 3 BCH and RS codes Galois Fields -- Irreducible and Primitive Polynomials- BCH
Codes - Design, BCH Bound, Decoding BCH codes – Decoding BCH – the general outline,
computation of the syndrome, error locator polynomial, Chien Search algorithm, Finding the
error locator polynomial. Berlekamp Massey Algorithm. Burst-error correction capability of
BCH codes. Reed Solomon Codes – BCH code viewpoint. Vandermonde matrix view point
Module 5 Reed-Muller Codes and Polar Codes Reed Muller Codes, Encoding and
decoding of RM (1, m) codes. Majority-logic decoding of Reed-Muller codes. Polar Codes –
Introduction, polarization of BEC channels, Polar transform and frozen bits.
Course Plan
No | Topic | No. of Lectures
1 | Linear Block Codes and Bounds |
1.1 | Course Overview - Relevance of Error correction schemes in communication systems | 1
1.2 | Repetition coding, concepts of Code rate, Hamming Distance, Minimum Distance, Error detecting and correcting capability. | 1
1.3 | Review of Groups, rings, fields and vector spaces | 2
1.4 | Linear Block codes - Generator and parity check matrices, encoding, standard array and syndrome decoding, Hamming codes | 2
1. Shu Lin, D. J. Costello Jr., Error Control Coding: Fundamentals and Applications, Prentice Hall
Reference Books:
Preamble: Multirate systems play a central role in many areas of signal processing, such as
filter bank theory and multiresolution theory. This course imparts a comprehensive
knowledge of topics in multirate signal processing and wavelets, essential in some of the
standard signal processing techniques such as signal analysis, denoising and other
applications.
Course Outcomes: After the completion of the course the student will be able to
PO 1 PO 2 PO 3 PO 4 PO 5 PO 6 PO 7
CO 1 3
CO 2 3 2
CO 3 3
CO 4 3
CO 5 3
CO 6 3 2
Assessment Pattern
Apply 30
Analyse 20
Evaluate 10
Create
Mark distribution
Total Marks | CIE | ESE | ESE Duration
Name:
Reg. No: APJ ABDUL KALAM TECHNOLOGICAL UNIVERSITY E
FIRST SEMESTER M.TECH DEGREE EXAMINATION
Part A
Part B
Answer any FIVE full questions; each question carries 7 marks. (5x7=35)
6 a) Consider the following multirate system. Obtain the expression for its
output. (3 Marks)
9 a) Derive the constraint equations for the design of a Daubechies 6-tap orthogonal wavelet
system. (7 Marks)
b) Derive the filter bank structure to obtain the fine scale coefficients sj+1(k) of
any signal f(t) from its smooth (sj (k)) and detail coefficients (dj (k)) at coarse
resolution, and hence explain it with the required block diagram. (4 Marks)
11 a) Derive the analysis and synthesis side equations for a biorthogonal wavelet system.
(7 Marks)
12. Deduce the method of decomposing a given sequence into wavelet packets using Haar
wavelets and draw the decomposition tree with an example. (7 marks)
Syllabus
Review of basic multi-rate operations: up sampling and down sampling, time domain and frequency domain analysis, Need for antialiasing and anti-imaging filters. Interpolator and decimator design, Noble identities. Type 1 and Type 2 polyphase decomposition, 2-channel and N-channel polyphase decomposition. Efficient structures for decimation and interpolation filters, efficient structures for fractional sampling rate conversion.
Overview of Maximally decimated filter banks and non-maximally decimated filter banks. Uniform DFT filter banks - design, polyphase implementation. Two-channel critically sampled filter banks. Amplitude-Complementary 2-Channel Filter Bank.
Example - Two channel Haar Filter bank and its polyphase decomposition, Quadrature mirror filter (QMF) bank, Errors in the QMF bank, conditions for perfect reconstruction, polyphase implementation. Design of perfect reconstruction M-channel Filter Banks, Overview of Uniform and non-uniform tree structured filter banks, Dyadic filter bank.
Module 3: Continuous and Discrete Wavelet Transform
Short Time Fourier Transform (STFT), STFT as a bank of filters, Choice of window function
and time frequency trade-off.The Uncertainty Principle and Time Frequency
Tiling,Continuous wavelet transform (CWT) and inverse CWT,Properties of Wavelets used
in CWT, Admissibility condition.Concept of orthogonal and orthonormal basis functions,
function spaces. Discrete Wavelet Transform.Haar Scaling Function, Nested Spaces, Haar
Wavelet Function, Orthogonality of scaling and translate functions, Normalization of Haar
Bases at different Scales, Refinement Relation with respect to normalized bases.Support of a
Wavelet System, Daubechies Wavelets.
Designing Orthogonal Wavelet systems - a direct approach, Frequency domain approach for
designing wavelets. Implementation using tree structured QMF bank and equivalent M-channel
filter bank. Designing Biorthogonal Wavelet systems: Biorthogonality in Vector
Space, Biorthogonal Wavelet Systems. Signal Representation Using Biorthogonal Wavelet
System, Biorthogonal Analysis and Biorthogonal Synthesis. Construction of Biorthogonal
Wavelet Systems - B-splines, Computation of the discrete wavelet transform using Mallat
Algorithm and Lifting Scheme.
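A minimal sketch of a Mallat-style multilevel decomposition via the lifting scheme, using the simple (unnormalized) Haar predict/update steps as an example (NumPy assumed; the depth and signal length are arbitrary choices):

import numpy as np

def haar_lift_forward(x):
    # One lifting step of the unnormalized Haar wavelet: split, predict, update.
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - even            # predict: detail = error in predicting odd samples from even ones
    s = even + d / 2.0        # update: smooth = pairwise average
    return s, d

def haar_lift_inverse(s, d):
    even = s - d / 2.0        # undo the update step
    odd = d + even            # undo the predict step
    x = np.empty(2 * len(s))
    x[0::2], x[1::2] = even, odd
    return x

x = np.random.randn(8)
details, approx = [], x
for _ in range(3):                       # three-level Mallat-style decomposition
    approx, d = haar_lift_forward(approx)
    details.append(d)
rec = approx
for d in reversed(details):              # synthesis: run the lifting steps backwards
    rec = haar_lift_inverse(rec, d)
print(np.allclose(rec, x))               # True: lossless reconstruction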
Course Plan
No Topic No. of Lectures
1 Multi-rate System Fundamentals:
1.1 Review of basic multi-rate operations: up sampling and down sampling, time domain and frequency domain analysis, Need for antialiasing and anti-imaging filters. Interpolator and decimator design, Noble identities. 3
1.2 Type 1 and Type 2 polyphase decomposition, 2-channel and N-channel polyphase decomposition 2
1.3 Efficient structures for decimation and interpolation filters, efficient structures for fractional sampling rate conversion. 3
2 Multi-rate Filter Banks
2.1 Overview of Maximally decimated filter banks and non-maximally decimated filter bank. Uniform DFT filter banks - design, polyphase implementation. Two-channel critically sampled filter banks. Amplitude-Complementary 2-Channel Filter Bank. Example - Two channel Haar Filter bank and its polyphase decomposition. 3
2.2 Quadrature mirror filter (QMF) bank, Errors in the QMF bank, conditions for perfect reconstruction, polyphase implementation 2
2.3 Design of perfect reconstruction M-channel Filter Banks 2
2.4 Overview of Uniform and non-uniform tree structured filter banks. Dyadic filter bank. 1
Reference Books
1. P. P. Vaidyanathan, Multirate Systems and Filter Banks, Pearson Education, 2006.
2. Fredric J. Harris, Multirate Signal Processing for Communication Systems, 1st Edition,
Pearson Education, 2007.
3. Sanjit K. Mitra, Digital Signal Processing: A Computer Based Approach, Special Indian
Edition, McGraw Hill, 2013.
4. Julius O. Smith III, Spectral Audio Signal Processing, W3K Publishing, 2011.
Preamble: This course introduces the design and analysis of signal processing algorithms
that automatically adjust system parameters to obtain a desired output when a stationary
random signal is applied to the system.
Course Outcomes: After the completion of the course the student will be able to
PO 1 PO 2 PO 3 PO 4 PO 5 PO 6 PO 7
CO 1 1
CO 2 2 2 2
CO 3 2 2 2
CO 4 2 2 3 2
CO 5 2 2 3 3 2
Assessment Pattern
Understand 20%
Apply 70%
Analyse 10%
Evaluate
Create
Mark distribution
Total Marks | CIE | ESE | ESE Duration
Model Question Paper
Name:
Reg. No:
1 Prove that the autocorrelation of a non-zero wide sense stationary random 5 marks
sequence is positive definite. Compute the 2x2 autocorrelation matrix of
, where a and are constants, N(.) refers to normal
distribution.
3. Evaluate the expression for MSE performance surface of a system with 5 marks
at the weight vector
Calculate .
4. Obtain the expression for weight updation in the LMS algorithm. What is 5 marks
the condition for convergence in the LMS algorithm?
5. Illustrate the method for adaptive modelling of earth’s impulse response 5 marks
for geophysical exploration
6 Compute the two most significant eigenvectors of the correlation 7 marks
matrix
[Figure: two-weight adaptive combiner with weights W0 and W1, summing junction and error output εk]
Consider the adaptive system given in the figure. Write an expression for the
performance surface. Determine a range for the adaptive gain constant.
Calculate W* and .
Module 2
9. Derive the prediction and correction equations of the discrete Kalman 7 marks
Filter.
10 a. Derive the expressions for the weight updating in adaptive recursive 5 marks
filter.
12 Derive the expression for the covariance of the noise in the gradient 7 marks
estimate in the case of the steepest descent method.
Syllabus
Review of discrete time stochastic process and auto correlation matrix - Introduction to
adaptive systems - performance functions - Gradient search methods -Gradient estimation
and its effects on the adaptation - LMS algorithm - Adaptive recursive filters - discrete
Kalman filter- application of adaptive filtering.
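A minimal LMS sketch in NumPy for system identification (the "unknown" FIR plant, step size and noise level below are illustrative assumptions), using the Widrow-Hoff update w(k+1) = w(k) + 2·mu·e(k)·X(k):

import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.8, -0.4, 0.2])      # "unknown" FIR system to be identified (assumed example)
N, L, mu = 2000, 3, 0.02                 # samples, adaptive filter length, step size
x = rng.standard_normal(N)               # stationary white input
d = np.convolve(x, h_true)[:N] + 0.01 * rng.standard_normal(N)   # desired signal = plant output + noise

w = np.zeros(L)                          # adaptive weight vector
for n in range(L, N):
    u = x[n:n - L:-1]                    # tapped delay line: the most recent L input samples
    e = d[n] - w @ u                     # a-priori error
    w = w + 2 * mu * e * u               # LMS weight update

print(np.round(w, 3))                    # converges close to h_true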
Course Plan
No Topic No. of Lectures
1 Review of discrete time stochastic process and auto correlation matrix
1.1 Univariate and Multivariate random sequences, Gaussian noise 3
1.2 Autocorrelation matrix of stationary process and its properties 2
1.3 Eigen decomposition, properties of Eigen vectors, whitening 3
2 Introduction to adaptive systems
2.1 Introduction to adaptive systems – definitions – characteristics – configurations – applications 2
2.2 Adaptive linear combiner – MSE performance function – Wiener-Hopf equation 2
2.3 Searching the MSE performance function – Newton's method for searching the performance function, stability and convergence 2
2.4 Steepest descent algorithm – Stability and convergence – Learning curve 2
3 Gradient estimation and its effects on adaptation
3.1 Performance penalty and perturbation 2
3.2 Effects on weight vector solution, covariance of weight vector 3
3.3 Excess MSE, mis adjustment and time constants 2
4 Adaptive algorithms and structures
4.1 LMS algorithm, Convergence of weight vector and learning curve 2
4.2 Adaptive recursive filters 2
4.3 Discrete Kalman Filter, filtering example 4
5 Applications of adaptive filtering
5.1 Adaptive modelling of multipath communication channel 2
5.2 Adaptive modelling for FIR filter synthesis 2
5.3 Adaptive equalization of telephone channels 2
5.4 Adapting poles and zeros for IIR digital filter synthesis 2
Reference Books
Preamble: The core modules of this elective course include introduction to Internet of
Things (IoT), IoT protocol and software, IoT point to point communication technologies, IoT
security and IoT Platform. This course aims to teach the student to understand the concepts of
IoT and its applications.
Prerequisites: NIL
Course Outcomes: After the completion of the course the student will be able to
PO 1 PO 2 PO 3 PO 4 PO 5 PO 6 PO 7
CO 1 3 3 3 3
CO 2 3 3 3 3
CO 3 3 3 3 3
CO 4 3 3 3 3
CO 5 3 3 3 3
CO 6 3 3 3 3
Assessment Pattern
Apply 20
Analyse 20
Evaluate
Create 20
Mark distribution
Evaluation shall only be based on application, analysis or design-based questions (for both
internal and end semester examinations).
Continuous Internal Evaluation Pattern:
End Semester Examination Pattern:
The end semester examination will be conducted by the respective College. There will be two
parts; Part A and Part B.
Part A will contain 5 numerical/short answer questions with 1 question from each module;
having 5 marks for each question (such questions shall be useful in the testing of knowledge,
skills, comprehension, application, analysis, synthesis, evaluation and understanding of the
students). Students should answer all questions.
Part B will contain 7 questions (such questions shall be useful in the testing of overall
achievement and maturity of the students in a course, through long answer questions relating
to theoretical/practical knowledge, derivations, problem solving and quantitative evaluation),
with minimum one question from each module of which student should answer any five.
Each question can carry 7 marks.
Note: The marks obtained for the ESE for an elective course shall not exceed 20% over the
average ESE mark % for the core courses. ESE marks awarded to a student for each elective
course shall be normalized accordingly. For example, if the average end semester mark % for
a core course is 40, then the maximum eligible mark % for an elective course is 40+20 = 60
%.
Model Question Paper
Program: M.Tech. in EC3 (Signal Processing, Signal Processing & Embedded Systems,
Communication Engineering & Signal Processing)
PART A
4. List the standard encryption protocols used in IoT. Explain any one algorithm with an
example.
5. With the help of a block diagram, explain the IoT application in healthcare. Explain each
module of the network.
PART B
7. Design an IoT based water quality monitoring system using MQTT protocol. Explain the
publish subscribe model in detail.
8. Design a LoRa based smart street lighting system for smart cities (block diagram only).
Discuss the security features of LoRaWAN.
9. Explain the requirements of privacy and security, vulnerabilities from threats, and the need
for threat analysis in IoT.
11. How is IoT related to big data analytics? Explain with an example.
12. Design an M2M smart meter which allows you to track energy consumption in real-time.
Draw the block diagram of the network and explain each module.
SYLLABUS
Module 1: 8 hours
Module 2: 8 hours
IoT protocols and Software: Connecting smart objects: communications criteria, IoT
connectivity technologies (IEEE 802.15.4, Zigbee, Sigfox, LoRa, NB-IoT, Wi-Fi, Bluetooth), IoT
communication technologies: infrastructure protocols (IPv6, 6LoWPAN), Data protocols
(MQTT, CoAP, AMQP, XMPP, SOAP, REST, WebSocket)
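As a sketch of the MQTT publish/subscribe model (assuming the paho-mqtt Python package in its v1.x API and a reachable public test broker; the broker address, topic name and payload are illustrative placeholders):

import paho.mqtt.client as mqtt          # third-party MQTT client library

BROKER, TOPIC = "test.mosquitto.org", "demo/waterquality/ph"   # placeholder broker and topic

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)              # subscriber side: register interest in the topic
    client.publish(TOPIC, "7.1")         # publisher side: send one sensor reading

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())   # the broker pushes every published sample here

client = mqtt.Client()                   # paho-mqtt v1.x style client object
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()                    # blocking network loop (stop with Ctrl+C)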
Module 3: 8 hours
Module 4: 8 hours
Module 5: 8 hours
IoT application and its Variants: Case studies: IoT for smart cities, health care, agriculture,
smart meters. M2M, Web of Things, Cellular IoT, Industrial IoT, Industry 4.0, IoT standards.
Course Plan
No Topic No. of Lectures
1 Introduction to IoT:
1.1 Basics of Networking - network types, layered network models, addressing, TCP/IP Transport layer 2
1.2 Emergence of IoT - evolution of IoT, IoT networking components 3
1.3 IoT network architecture and design - comparing IoT architectures, simplified IoT architecture. 2
1.4 Smart objects: the things in IoT - sensors, actuators and smart objects, sensor networks 1
2 IoT protocols and Software:
2.1 Connecting smart objects: communications criteria 2
2.2 IoT connectivity technologies (IEEE 802.15.4, Zigbee, Sigfox, LoRa, NB-IoT, Wi-Fi, Bluetooth) 3
2.3 IoT communication technologies: infrastructure protocols (IPv6, 6LoWPAN), Data protocols (MQTT, CoAP, AMQP, XMPP, SOAP, REST, WebSocket) 3
3 Introduction to Cloud computing and Big data analytics:
3.1 cloud computing: introduction, cloud models 1
3.2 Cloud implementation - cloud simulation, open-source cloud (OpenStack), commercial cloud: AWS 2
3.3 introduction to data analytics for IoT, machine learning 1
3.4 Big data analytics tools and technology - Hadoop 2
3.5 Edge streaming analytics, Network analytics 2
4 IoT security:
4.1 Common challenges in IoT security 1
4.3 Quadruple Trust Model for IoT-A – Threat Analysis and model for IoT-A 2
4.4 Cloud security 3
5 IoT application and its Variants:
5.1 Case studies: IoT for smart cities, health care, agriculture, smart meters. 3
Text Books
Reference Books
Preamble: This course aims to enable the students to apply suitable optimization techniques for
various applications.
Prerequisite: Nil
Course Outcomes: After the completion of the course the student will be able to
CO1 Outline the mathematical building blocks of optimization
CO3 Apply principles and techniques for solving nonlinear programming models
Assessment Pattern
Apply 30
Analyze 20
Evaluate 10
Create
Mark distribution
SYLLABUS
Module 1:
Mathematical Background:
Vector norm, Matrix norm, Inner product, Norm ball, Interior point, Closure and boundary,
Complement, scaled sets, and sum of sets, Supremum and infimum, Vector subspace, Function,
Continuity of function, Derivative and gradient, Hessian, Convex sets and convex functions.
Introduction to optimization - Optimal problem formulation, Engineering applications of optimization,
Optimization techniques - Classification.
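A small numerical sketch (NumPy assumed; the quadratic below is an illustrative example) connecting the gradient, the Hessian and the convexity test through the eigenvalues of the Hessian:

import numpy as np

Q = np.array([[2.0, 1.0],
              [1.0, 6.0]])               # Hessian of f; constant for a quadratic

def f(x):
    return 0.5 * x @ Q @ x               # f(x) = x1^2 + 3*x2^2 + x1*x2

def grad_f(x):
    return Q @ x                          # gradient of the quadratic form

x0 = np.array([1.0, -2.0])
print("f(x0) =", f(x0), " grad =", grad_f(x0))
# f is convex iff its Hessian is positive semidefinite, i.e. all eigenvalues >= 0.
print("convex:", np.all(np.linalg.eigvalsh(Q) >= 0))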
Module 2:
Linear Programming:
Linear Programming - Formulation of the problem, Graphical method, Simplex method, Artificial
variable techniques, Duality Principle, Dual simplex method.
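As an illustration (assuming SciPy), the Lego Furniture production problem that appears later in this course's model question paper can be set up and solved with scipy.optimize.linprog; since linprog minimizes, the revenue objective is negated:

from scipy.optimize import linprog

c = [-5, -7]                       # maximize 5*chairs + 7*tables  ->  minimize the negative
A_ub = [[2, 2],                    # square blocks:      2*chairs + 2*tables <= 6
        [1, 2]]                    # rectangular blocks: 1*chairs + 2*tables <= 8
b_ub = [6, 8]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)             # optimal production plan and the maximum revenue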
Module 3:
Non-linear programming:
Unimodal Function, Elimination methods – Fibonacci method, Golden section method, Direct search
methods – Random walk, Grid search method, Indirect search methods – Steepest descent method,
Newton’s method.
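A minimal golden-section search sketch for a unimodal function on an interval (the test function, interval and tolerance are illustrative assumptions):

import math

def golden_section(f, a, b, tol=1e-6):
    invphi = (math.sqrt(5.0) - 1.0) / 2.0        # 1/phi, approximately 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):          # the minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                    # the minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

print(golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0))   # approx. 2.0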
Module 4:
Convex optimization:
Standard form of convex optimization problems, Global optimality, An optimality criterion for
differentiable convex function, Lagrange dual function and conjugate function, Lagrange dual
problem, Karush–Kuhn–Tucker (KKT) optimality conditions, Lagrange dual optimization.
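A sketch of a convex problem in standard form using CVXPY (one of the libraries listed in Module 5); the data and the non-negativity constraint are illustrative, and the dual variable reported by the solver for that constraint is the corresponding KKT multiplier at the optimum:

import numpy as np
import cvxpy as cp

np.random.seed(0)
A, b = np.random.randn(5, 3), np.random.randn(5)   # illustrative problem data

x = cp.Variable(3)
prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)),   # convex objective
                  [x >= 0])                                  # affine (hence convex) constraint
prob.solve()
print(prob.status, np.round(x.value, 4))
print(np.round(prob.constraints[0].dual_value, 4))           # KKT multiplier of x >= 0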
Module 5:
Optimization algorithms:
Genetic algorithm, Neural network-based optimization, Ant colony optimization, Particle swarm
optimization. Optimization Libraries in Python: scipy.optimize, CVXPY, CVXOPT.
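A minimal particle swarm optimization sketch in NumPy (global-best topology; the swarm size, inertia and acceleration coefficients are arbitrary illustrative choices), minimizing the sphere function:

import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                # particle velocities
    pbest = x.copy()                                    # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = x + v                                                    # position update
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

print(pso(lambda z: np.sum(z ** 2)))        # converges towards the origin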
Text Books
1. Chong-Yung Chi, Wei-Chiang Li, Chia-Hsiang Lin, Convex Optimization for Signal Processing
and Communications – From Fundamentals to Applications, CRC Press.
2. Sukanta Nayak, Fundamentals of Optimization Techniques with Algorithms, Academic Press.
3. Singiresu S. Rao, Engineering Optimization: Theory and Practice, John Wiley and Sons.
Reference Books
1. Igor Griva, Ariela Sofer, Stephen G. Nash, Linear and Nonlinear Optimization, Second edition,
SIAM.
2. Kalyanmoy Deb, Optimization for Engineering Design: Algorithms and Examples, Second
edition, Prentice Hall.
3. David G. Luenberger, Linear and Nonlinear Programming, Second edition, Addison-Wesley.
No Topic No. of Lectures
1 Mathematical Background:
1.1 Vector norm, Matrix norm, Inner product, Norm ball 1
1.2 Interior point, Closure and boundary 1
1.3 Complement, scaled sets, and sum of sets, Supremum and infimum 1
1.4 Vector subspace, Function, Continuity of function, 1
1.5 Derivative, gradient and Hessian 1
1.6 Convex sets and convex functions 1
1.7 Introduction to optimization - Optimal problem formulation 1
1.8 Engineering applications of optimization, Optimization techniques - Classification 1
2 Linear Programming:
2.1 Linear Programming - Formulation of the problem, Graphical method 2
2.2 Simplex method 2
2.3 Artificial variable techniques, Duality Principle 2
2.4 Dual simplex method 2
3 Non-linear programming:
3.1 Uni-modal Function 1
3.2 Elimination Methods: (1) Fibonacci Method 1
3.3 Elimination Methods: (2) Golden Section Method 1
3.4 Direct Search Methods: (1) Random Walk 1
3.5 Direct Search Methods: (2) Grid Search Method 1
3.6 Indirect Search Method: (1) Steepest Descent Method 1
3.7 Indirect Search Method: (2) Newton’s Method 2
4 Convex optimization:
4.1 Standard form of convex optimization problems 1
4.2 Global optimality, An optimality criterion for differentiable convex function 2
4.3 Lagrange dual function and conjugate function 1
4.4 Lagrange dual problem 2
4.5 Karush–Kuhn–Tucker (KKT) optimality conditions 2
5 Optimization algorithms:
5.1 Genetic algorithm 1
5.2 Neural network-based optimization 2
5.3 Ant colony optimization 2
5.4 Particle swarm optimization. 1
5.5 Optimization Libraries in Python: scipy.optimize, CVXPY, CVXOPT 2
Model Question Paper
PART A
Answer ALL questions. Each carries 5 marks.
1. Define the gradient of a function. Demonstrate its importance in multi-variable
optimization.
2. State and prove the complementary slackness theorem.
3. Using Newton's method minimize f = (3x1 - 1)^3 + 4x1x2 + 2x2^2 by taking the initial point as (1, 2).
PART–B
Answer any FIVE full questions; each question carries 7 marks.
6.a) Given
     Ѱ1^-1 = [ 0.5000  0.5000  0.5000  0.5000
               0.6533  0.2706 -0.2706 -0.6533
               0.5000 -0.5000 -0.5000  0.5000
               0.2706 -0.6533  0.6533 -0.2706 ],
     Ѱ2^-1 = [ 1   1   1   1
               1  -j  -1   j
               1  -1   1  -1
               1   j  -1  -j ]   and   x = [4, 5, 5, 4]^T.
     Let Ѱ1 and Ѱ2 be two change of basis matrices. Obtain the representation of the
     given vector x in terms of Ѱ1 and Ѱ2. Calculate the l0, l1 and l∞ norms of the
     representation. (4 marks)
6.b) Discuss the convexity and concavity of the following functions:
     a) f(x) = (x1 + x2) e^(x1 + x2), x1 > 0, x2 > 0
     b) f(x) = x1 f1(x) + x2 f2(x), x1 ≥ 0, x2 ≥ 0, where both f1(x) and f2(x) are
        convex functions. (3 marks)
7.a) Congratulations! Upon graduating from college, you have immediately been
offered a high-paying position as president of the Lego Furniture Company.
Your company produces chairs (each requiring 2 square blocks and 1
rectangular block) as well as tables (each requiring 2 square blocks and 2
rectangular blocks) and has available resources consisting of 8 rectangular
blocks and 6 square ones. Assume chairs and tables sell for 5 and 7
respectively, and that your company sells all of what it produces.
(i) Set up an LP whose objective is to maximize your company's revenue.
(ii) Represent it in the standard form and matrix form. (3 marks)
7.b) Solve the following LPP using the dual simplex method: (4 marks)
     min z = 3x1 + 2x2
     subject to 2x1 + 3x2 ≥ 30
                -x1 + 2x2 ≤ 6
                x1 + 3x2 ≥ 20
                x1, x2 ≥ 0
8a) Solve the following optimization problem using Simplex algorithm. 3 marks
9a) Prove that in Fibonacci search algorithm, at the end of (n-1) iterations, the 4 marks
length of the interval of uncertainty is reduced from (b1-a1) to (b1-a1) / Fn.
Moreover, show that Fibonacci method is more efficient than Golden section
search algorithm.
9 b) Use Newton's method to solve: (3 marks)
     minimize f(x1, x2) = 5x1^4 + 6x2^4 - 6x1^2 + 2x1x2 + 5x2^2 + 15x1 - 7x2 + 13
     Use the initial guess (1, 1)^T.
min z = x1 + x2
subject to g1(x1, x2) = x1^3 - x2 ≥ 0
           g2(x1, x2) = x1 ≥ 0
           g3(x1, x2) = x2 ≥ 0
12 b) Describe the Particle Swarm Optimization algorithm. List its advantages, 3 marks
disadvantages and applications.