MCA Syllabus New Updated
Program – MCA (AI & ML)
Duration - 2 years
Batch – 2022-24
COURSE DETAILS
The Artificial Intelligence market size was expected to reach $65.48 billion by 2020. Projections for 2030 put this value at $1,581.70 billion, growing at a CAGR of 38.0% from 2021 to 2030. AI has become almost synonymous with the future of technology. With AI outperforming human effort on many tasks, organizations are increasingly adopting it to improve efficiency and cut costs in the long run. AI, together with Big Data, ML, and NLP, ranked 2nd among the top tech priorities for 2021-22 according to the NASSCOM Tech CEO Survey 2022.
4. Career Opportunities / Prospects and Career Path
The job outlook for AI/ML professionals is extremely promising: the number of AI start-ups in India has shot up in recent years, and this number is expected to keep rising, which implies an abundance of career opportunities. A recent LinkedIn search for "artificial intelligence" jobs returned more than 45,000 results across a range of companies. Students pursuing our Master's degree in AI and ML will have opportunities to become:
AI & ML Engineer/Developer: performs statistical analysis, runs statistical tests, and implements statistical designs; manages ML programs, implements ML algorithms, and develops deep learning systems.
AI Analyst/Specialist: delivers AI-oriented solutions and schemes to enhance the services of a given industry, using data-analysis skills to study the trends and patterns of datasets.
Business Intelligence (BI) Developer: develops, deploys, and maintains BI interfaces, including query tools, dashboards and interactive visualizations, ad hoc reporting, and data modelling tools.
Other exciting roles in the market include Human-centered Machine Learning Designer, Data Architect, Research Scientist, and NLP Engineer.
Recruiters include data-driven organizations and companies in the Artificial Intelligence and Data Science domains, focused on sectors such as Information Technology & Services, Consumer Goods, Manufacturing, and others.
PEO1: Analyze problems by applying the principles of computer science, mathematics, and scientific investigation, and design and implement industry-accepted solutions using the latest technologies.
PEO2: Apply AI/ML methods, techniques, and tools to create AI/ML solutions for various business problems, and build and deploy production-grade AI/ML applications.
PEO3: Be an ethical and socially responsible professional, engage in life-long learning and have an entrepreneurial mindset.
7. Programme Outcomes (POs) / Programme Specific Outcomes (PSOs)
PO1: The ability to apply knowledge of computing fundamentals, specializations, mathematics, and domain knowledge to problem solving and to creating computing models that represent an abstraction of requirements.
PO2: Problem Analysis: Identify, formulate, design and solve complex computing problems providing concrete conclusions
using fundamental principles of mathematics, computing sciences, and relevant domain disciplines.
PO3: Develop solutions to problems in computing by constructing components, implementing processes, and evaluating solutions
that meet defined specifications.
PO4: Modern Tool/Techniques usage: Select, adapt, and apply appropriate tools, techniques, resources to various computing
activities, with an understanding of their limitations.
PO5: Professional Ethics: Understand and commit to professional ethics and cyber regulations, responsibilities, and norms of
professional computing practices.
PO6: Life-long learning: Recognize the need and have the capability to engage in independent learning for continuous
professional development.
PO7: Communication Efficiency: Communicate effectively about computing activities with the computing community, and with
society at large, through the ability to comprehend and write effective reports, design documentation, deliver effective
presentations, and give and understand clear instructions.
PO8: Societal and Environmental Concern: Understand and assess societal, environmental, health, safety, legal, and cultural issues
within local and global contexts, and the consequential responsibilities relevant to professional computing practices.
PO9: Individual and Team Work: In multidisciplinary environments and diverse teams, function as an effective individual, member, or leader.
PO10: Innovation and Entrepreneurship: Create value and wealth for the individual and society at large by identifying a timely
opportunity and using innovation to pursue that opportunity.
PO11: Conduct Investigations of complex computing problems: Apply research-based knowledge and research methods,
including the design of experiments, the analysis and interpretation of data, and the synthesis of the information to provide
valid conclusions.
PO12: Project management and finance: The ability to understand and apply computing and management principles to one's own
work, as a leader or as a member of a team, in order to manage projects in a multidisciplinary environment.
PSO2: Develop computational knowledge and project development skills to solve societal problems in AI & ML.
PSO3: Develop the ability to qualify for Employment, higher studies and Research in Artificial Intelligence and Data science with
ethical values.
PSO4: Inculcate the ability to work in a team and act as an individual in a multidisciplinary environment with lifelong learning and
social awareness.
8. Eligibility Criteria (Tentative)
The degree requirements of a student for the MCA programme are as follows:
i) B.C.A./B.Sc./ B.Com/B.A. degree with Mathematics as one of the subjects at 10+2 level or at Graduation Level (with
additional bridge courses as per the norms of the University)
ii) Obtained at least 50% marks (45% marks in case of candidates belonging to reserved category) in the qualifying
Examination.
9. Proposed Venue: Jain (Deemed-to-be University), Jayanagar 9th T Block, Bengaluru
10. Intake of the students (Proposed) : 60
SCHEME FOR 2022-2024
1ST SEMESTER
SUBJECT CODE  TITLE OF THE SUBJECT  Hrs/WEEK (L-T-P)  CREDITS  THEORY (UE IA)  PRACTICAL (UE CA)  TOTAL
CORE COURSES
2nd SEMESTER
LEARNING LABS
22MCAC201L Advanced DBMS Lab 0-0-2 1 - - - 100 100
22MCAC202L Data structures with algorithms Lab 0-0-2 1 - - - 100 100
22MCAG203L Programming in Python Lab 0-0-2 1 - - - 100 100
Research Publication I 1-0-2 2 - - 50 50 100
PCL – Research and Entrepreneurship Project 1-0-4 3 - - 50 50 100
3rd SEMESTER
22MCAG305 Internet of Things 4-0-0 4 50 50 - - 100
Open Elective 3-0-0 3 50 50 - - 100
LEARNING LAB
4th SEMESTER
22MCAGE4031/22MCAGE4032 Business Intelligence / Robotics / Recommender Systems 4-0-0 4 50 50 - - 100
CBCS STRUCTURE
HARD CORE COURSES: HC-1, HC-2, HC-3, HC-4, HC-5, HC-6, HC-7, HC-8 (8 x 4 = 32)
SPECIALIZATION CORE / ELECTIVE COURSES: SC-1, SC-2, SC-3, SC-4, SC-5, SC-6, SC-7, SC-8, SC-9 (9 x 4 = 36)
LEARNING LABS: HC-1L, HC-2L, HC-3L, HC-6L, HC-7L, SC-1L, SC-3L, SC-4L (8 x 1 = 8)
OPEN ELECTIVE: OE-1, OE-2 (2 x 2 = 4)
RESEARCH PUBLICATION: RP-1, RP-2 (2 x 2 = 4)
3 x 2 = 6
PCL: Project I, Project II, Project III, Project IV (1 x 10 = 10)
CREDITS: 25 + 25 + 26 + 24 = 100
Semester Credits
1 25
2 29
3 30
4 16
Total 100
TITLE MACHINE LEARNING
SUBJECT CODE 22MCAG301
HOURS PER WEEK 4
CREDITS 4
COURSE OBJECTIVES:
COB1 To enable the students to understand the fundamentals of Machine Learning algorithms.
COB2 To enhance the skill of implementing different Machine Learning algorithms from scratch.
COB3 To help the student to develop knowledge to choose the appropriate Model based on the task.
COB4 To enrich the skill of differentiating Regression, Classification and Clustering tasks.
COB5 To develop the knowledge to build a generalized model by fine-tuning the hyperparameters.
COURSE OUTCOMES: Bloom’s Taxonomy Level
CO1 Describe the basics of Machine Learning. Level 2
CO2 Demonstrate various regression techniques. Level 3
CO3 Categorize the purpose of different Classification techniques. Level 4
CO4 Explain the importance of Dimensionality Reduction. Level 4
CO5 Analyse the process of building Neural Networks. Level 4
MODULE – II (12 Hours)
Regression:
Linear Regression - Gradient Descent: Batch Gradient Descent - Stochastic Gradient Descent - Mini-batch Gradient Descent. Polynomial Regression - Learning Curves - Regularized Linear Models: Ridge Regression - Lasso Regression - Elastic Net - Early Stopping. Logistic Regression: Estimating Probabilities - Training and Cost Function - Decision Boundaries - Softmax Regression. SVM Regression: Decision Function and Predictions - Training Objective - Quadratic Programming - The Dual Problem - Kernelized SVM - Online SVMs.
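The gradient-descent material in this module can be sketched in a few lines of Python. The following is an illustrative batch-gradient-descent fit of a simple linear model; the data, learning rate, and epoch count are invented for the example:

```python
# Minimal sketch (assumed data): batch gradient descent for simple
# linear regression y ≈ w*x + b.

def batch_gradient_descent(xs, ys, lr=0.05, epochs=2000):
    """Fit w, b by descending the gradient of the mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of MSE = (1/n) * sum((w*x + b - y)^2)
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

if __name__ == "__main__":
    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # exactly y = 2x + 1
    w, b = batch_gradient_descent(xs, ys)
    print(round(w, 2), round(b, 2))  # converges to w ≈ 2.0, b ≈ 1.0
```

The stochastic and mini-batch variants covered in the module differ only in how many samples contribute to each gradient step.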
MODULE – III (12 Hours)
Classification:
Training a Binary Classifier - Performance Measures - Measuring Accuracy Using Cross-Validation - Confusion Matrix - Precision and Recall - Precision/Recall Trade-off - The ROC Curve - Multiclass Classification - Error Analysis - Multilabel Classification - Multioutput Classification. Linear SVM Classification - Soft Margin Classification - Nonlinear SVM Classification - Polynomial Kernel - Adding Similarity Features - Gaussian RBF Kernel - Computational Complexity.
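The performance measures listed above (confusion matrix, precision, recall) can be computed by hand on a small example; the label vectors below are made up:

```python
# Hand-computed confusion matrix, precision, and recall for a
# binary classifier with 0/1 labels.

def confusion_matrix(y_true, y_pred):
    """Return (tp, fp, fn, tn) counts for binary labels 0/1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def precision_recall(y_true, y_pred):
    tp, fp, fn, _ = confusion_matrix(y_true, y_pred)
    precision = tp / (tp + fp)   # of predicted positives, how many are right
    recall = tp / (tp + fn)      # of actual positives, how many were found
    return precision, recall

if __name__ == "__main__":
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 0, 1, 1, 1, 0, 0]
    print(precision_recall(y_true, y_pred))  # (0.75, 0.75)
```

With tp = 3, fp = 1, and fn = 1, this example yields precision = recall = 0.75, which is the trade-off the module then explores via thresholds and the ROC curve.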
MODULE – V (12 Hours)
Artificial Neural Networks:
From Biological to Artificial Neurons - Biological Neurons - Logical Computations with Neurons - The Perceptron - Multi-Layer Perceptron and Backpropagation - Training an MLP with TensorFlow's High-Level API - Training a DNN Using Plain TensorFlow - Fine-Tuning Neural Network Hyperparameters - Number of Hidden Layers - Number of Neurons per Hidden Layer - Activation Functions. Vanishing/Exploding Gradients Problems - Reusing Pretrained Layers - Faster Optimizers - Avoiding Overfitting Through Regularization.
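The module's perceptron and its logical computations can be demonstrated with the classic perceptron learning rule before moving to TensorFlow. This sketch (learning rate and epoch budget invented; integer arithmetic keeps it exact) learns the AND gate:

```python
# A single perceptron trained with the perceptron learning rule
# on the AND truth table.

def step(z):
    """Threshold activation: fire (1) when the weighted sum is non-negative."""
    return 1 if z >= 0 else 0

def train_perceptron(samples, lr=1, epochs=20):
    """samples: list of ((x1, x2), target). Returns (w1, w2, bias)."""
    w1 = w2 = b = 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w1 * x1 + w2 * x2 + b)
            err = target - out          # perceptron update rule
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

if __name__ == "__main__":
    and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w1, w2, b = train_perceptron(and_gate)
    print([step(w1 * x1 + w2 * x2 + b) for (x1, x2), _ in and_gate])  # [0, 0, 0, 1]
```

A multi-layer perceptron trained by backpropagation generalizes this idea to functions (such as XOR) that a single perceptron cannot represent.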
TEXTBOOKS:
1. Géron, Aurélien. “Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow”, 3rd Edition, O'Reilly Media, Inc., 2022.
REFERENCE BOOKS:
1. Mitchell, Tom M. “Machine Learning”, Vol. 1(9), 1st Edition, New York: McGraw-Hill, 1997.
2. Müller, Andreas C., and Sarah Guido. “Introduction to Machine Learning with Python: A Guide for Data Scientists”, 1st Edition, O'Reilly Media, Inc., 2016.
COURSE OBJECTIVES:
COB1 To enhance the skills to both design and critique visualizations.
COB2 To develop the understanding and importance of visualization as a part of data analysis.
COB3 To impart knowledge about the components involved in visualization design.
COB4 To enhance the skills on visualization of time series, proportions and associations.
COB5 To develop the skill of visualizing data helpful for people with color-vision deficiency.
COURSE OUTCOMES: Bloom’s Taxonomy Level
CO1 Understand basics of Data Visualization Level 2
CO3 Write programs on visualization of time series, proportions & associations Level 4
SYLLABUS
MODULE – II (12 Hours)
VISUALIZING DATA: Mapping Data onto Aesthetics, Aesthetics and Types of Data, Scales Map Data Values onto Aesthetics, Coordinate Systems and Axes: Cartesian Coordinates, Nonlinear Axes, Coordinate Systems with Curved Axes, Color Scales: Color as a Tool to Distinguish, Color to Represent Data Values, Color as a Tool to Highlight, Directory of Visualizations: Amounts, Distributions, Proportions, x–y relationships, Geospatial Data
MODULE – V (12 Hours)
Histograms, Contour Lines, Common Pitfalls of Color Use: Encoding Too Much or Irrelevant Information, Using Non-monotonic Color Scales to Encode Data Values, Not Designing for Color-Vision Deficiency
TEXT BOOKS:
1. Claus Wilke, “Fundamentals of Data Visualization: A Primer on Making Informative and Compelling Figures”, 1st Edition, O’Reilly Media Inc., 2019.
2. Ossama Embarak, “Data Analysis and Visualization Using Python: Analyze Data to Create Visualizations for BI Systems”, Apress, 1st Edition, 2018.
REFERENCE BOOKS:
1. Tony Fischetti, Brett Lantz, “R: Data Analysis and Visualization”, O’Reilly, 1st Edition, 2016.
2. https://www.netquest.com/hubfs/docs/ebook-data-visualization-EN.pdf
CREDITS 4
COURSE OBJECTIVES:
COB1 To study basic inferential statistics and sampling distribution.
COB2 To understand the concept of parameter estimation using fundamental tests and testing of hypotheses.
COB3 To understand the techniques of variance analysis.
COB5 To perform a case study with any available sample data sets.
COURSE OUTCOMES:
CO1 Understand the concept of sampling.
CO3 Demonstrate the skills to perform various tests in the given data.
SYLLABUS
MODULE NO.  CONTENTS  ASSESSMENTS AND ACTIVITY  CO  PO MAPPING
MODULE – I (12 Hours)
INFERENTIAL STATISTICS – I:
Populations – samples – random sampling – probability and statistics. Sampling distribution – creating a sampling distribution – mean of all sample means – standard error of the mean – other sampling distributions. Hypothesis testing – z-test – z-test procedure – statement of the problem – null hypothesis – alternate hypotheses – decision rule – calculations – decisions – interpretations.
Assessment and Activity: Inferential Statistics Fundamentals LinkedIn Course. CO1; PO1, PO2.
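The z-test procedure listed above can be sketched end to end using invented numbers (a known population mean of 100 and SD of 15, and a sample of 36 observations with mean 106):

```python
# One-sample z-test: standard error, z statistic, and a two-tailed
# p-value from the standard normal distribution via math.erf.
import math

def z_test(sample_mean, pop_mean, pop_sd, n):
    """Return (z, two_tailed_p) for a one-sample z-test."""
    standard_error = pop_sd / math.sqrt(n)        # SE of the mean
    z = (sample_mean - pop_mean) / standard_error
    # P(|Z| >= |z|) under the standard normal: 2 * (1 - Phi(|z|))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

if __name__ == "__main__":
    # H0: population mean = 100 (SD 15); observed sample of 36 with mean 106
    z, p = z_test(sample_mean=106, pop_mean=100, pop_sd=15, n=36)
    print(round(z, 2), round(p, 4))   # z = 2.4, p < 0.05: reject H0
```

The decision rule from the module then compares p against the chosen significance level.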
MODULE – II (12 Hours)
INFERENTIAL STATISTICS – II:
Why hypothesis tests? – Strong or weak decisions – one-tailed and two-tailed tests – case studies. Influence of sample size – power and sample size. Estimation – point estimate – confidence interval – level of confidence – effect of sample size.
Assessment and Activity: Real-life case-study discussions on inferential statistics. CO2; PO1, PO2, PO3.
MODULE – III (12 Hours)
T-TEST:
T-test for one sample – sampling distribution of t – t-test procedure – degrees of freedom – estimating the standard error – case studies. T-test for two independent samples – statistical hypotheses – sampling distribution – test procedure – p-value – statistical significance – estimating effect size – meta-analysis. T-test for two related samples.
Assessment and Activity: Certification on an online course on inferential statistics. CO3; PO1, PO2, PO3, PO4.
tests – two-factor ANOVA – other types of ANOVA. Introduction to chi-square tests.
TEXT BOOKS
1. Robert S. Witte and John S. Witte, “Statistics”, Eleventh Edition, Wiley Publications, 2017.
2. Allen B. Downey, “Think Stats: Exploratory Data Analysis in Python”, Green Tea Press, 2014. [Unit V]
REFERENCES
1. David Spiegelhalter, “The Art of Statistics: Learning from Data”, Pelican Books, 4th Edition, 2019.
2. Peter Bruce, Andrew Bruce, and Peter Gedeck, “Practical Statistics for Data Scientists”, 2nd Edition, O’Reilly Publishers, 2020.
3. Charles R. Severance, “Python for Everybody: Exploring Data in Python 3”, 2nd Edition, Shroff Publishers, 2017.
4. Bradley Efron and Trevor Hastie, “Computer Age Statistical Inference”, 2nd Edition, Cambridge Press, 2016.
HOURS PER WEEK 4
CREDITS 4
COURSE OBJECTIVES:
COB1
CO 5 Understand the different modes and distribution of Hadoop L6
SYLLABUS
MODULE CONTENT HOURS
III Map Reduce 12
YARN and Hadoop Cluster:
Text Books:
1) Tom White, “Hadoop: The Definitive Guide”, 4th Edition, O'Reilly Media, 2015.
2) Jim Keogh, “J2EE: The Complete Reference”, Tata McGraw Hill, 5th Edition, 2011.
Reference Books:
1) Dirk deRoos, Paul C. Zikopoulos, Bruce Brown, “Hadoop For Dummies”, A Wiley Brand, 5th Edition, 2019.
2) Chuck Lam, “Hadoop in Action”, Manning Publications, 2nd Edition, 2014.
TITLE INTERNET OF THINGS
SUBJECT CODE 22MCAG306
HOURS PER WEEK 4
CREDITS 4
COURSE OBJECTIVES
COB 1 To put the students on a fast track to the components essential for IoT
COB 2 To learn about the design and prototyping of IoT for daily life
COB 3 To give hands-on experience in the implementation
COURSE OUTCOMES
CO 1 Understand the basics of IoT concepts L2
CO 2 Create a base to build an IoT prototype L4
CO 3 Understand the working and architecture of the chips used L3
CO 4 Understand and build the business model and implement the prototype in real-time cases L4
CO 5 Enforce the ethics and protocols of control for production L5
Module 1 (12 hours)
Introduction to Internet of Things: Definition and Characteristics of IoT, Wireless Sensor Networks, Cloud Computing, the concept of the cloud environment, issues with migration: application performance degradation, network congestion, migration time.
Module 2 (12 hours)
Design principles and connected devices: calm and ambient technology, IoT design and privacy, web thinking for connected devices, internet communication overview, prototyping an IoT (Rapid IoT Prototyping Kit), thinking about prototyping: sketching, familiarity, and the cost versus ease of prototyping. Prototyping and production.
Textbooks:
1. Adrian McEwen and Hakim Cassimally, “Designing the Internet of Things”, 2nd Edition, Wiley Publishers, 2013.
2. Sudha Jamthe, “IoT Disruptions: The Internet of Things – Innovations & Jobs”, Kindle Edition.
3. Rashid Khan, Kajari Ghoshdastidar, Ajith Vasudevan, “Learning IoT with Particle Photon and Electron”, Kindle Edition.
4. Arshdeep Bahga and Vijay Madisetti, “Internet of Things: A Hands-On Approach”.
TITLE MACHINE LEARNING LAB
SUBJECT CODE 22MCAG301L
HOURS PER WEEK 2
CREDITS 1
EXPERIMENTNO. TITLE
Experiment - 1 Installation of all the pre-requisites to build Machine Learning Models:
a. Anaconda
b. Tensorflow
c. Keras
d. Scikit-learn
e. CUDA (if the system has GPU support)
Experiment –2 Write a Python program to implement Simple Linear Regression.
Experiment –3 Write a Python program to implement Multiple Linear Regression for House Price Prediction using
sklearn.
Experiment –4 Write a Python program to implement KNN Classifier for a benchmarking dataset using sklearn.
Experiment –5 Write a Python program to implement Logistic Regression for a suitable benchmarking dataset using
sklearn.
Experiment –6 Write a Python program to implement the Naïve Bayes classifier for a sample training data set stored as a .csv file. Compute the accuracy of the classifier, considering a few test data sets.
Experiment –7 Write a Python program to classify data using Support Vector Machines (SVMs): SVM-RBF Kernels.
Experiment –8 Write a program to demonstrate the working of the decision tree based ID3 algorithm. Use an
appropriate data set for building the decision tree and apply this knowledge to classify a new sample.
Experiment –9 Write a Python program to implement XGBoost.
Experiment –10 Write a Python program to implement K-Means Clustering.
Experiment – 11 Write a Python program to build an Artificial Neural Network by implementing the Backpropagation
algorithm and test the same using appropriate data sets.
Experiment - 12 Write a Python program to perform hyperparameter optimization in an Artificial Neural Network using Grid Search.
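As a taste of the lab work, Experiment 2 can also be approached with the closed-form least-squares solution before reaching for sklearn; the study-hours data here is invented:

```python
# Simple linear regression via the least-squares closed form:
# slope = covariance(x, y) / variance(x).

def simple_linear_regression(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x   # line passes through the means
    return slope, intercept

if __name__ == "__main__":
    hours = [1, 2, 3, 4, 5]               # hours studied (invented)
    marks = [52, 54, 58, 60, 66]          # marks scored (invented)
    slope, intercept = simple_linear_regression(hours, marks)
    print(round(slope, 2), round(intercept, 2))  # 3.4 47.8
```

Comparing these coefficients against `sklearn.linear_model.LinearRegression` on the same data is a useful sanity check within the experiment.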
SEMESTER IV
COURSE OBJECTIVES:
COB1 To enable the students to understand the underlying mathematical operations in any Machine Learning
Model.
COB2 To develop the knowledge about the different types of models in Machine Learning and Deep Learning.
COB3 To enhance the skill to develop different Neural Networks.
COB4 To build the knowledge of fine-tuning different parameters and hyperparameters in modelling.
COB5 To be able to apply the concepts on to a real-time application and make a prediction based on the
historical data.
COURSE OUTCOMES: Bloom’s Taxonomy Level
CO1 Understand the mathematical foundations for ML Models. Level 2
CO2 Illustrate the various parameters and hyperparameters involved in developing an Level 3
Artificial Neural Network.
CO3 Categorize different models in Deep Learning based on the nature of the data. Level 4
CO5 Compare the different hyperparameters and fine-tune them. Level 5
MODULE – I (12 Hours)
Learning Machines: Biological Inspiration – Deep Learning. The Math behind Machine Learning: Linear Algebra – Scalars – Vectors – Matrices – Tensors – Hyperplanes – Relevant Mathematical Operations – Converting Data into Vectors – Solving Systems of Equations – Statistics for Machine Learning – Working of Machine Learning Models – Regression – Classification – Clustering – Optimization Methods – Evaluating Models.
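The "Solving Systems of Equations" topic above, in its smallest useful form: a 2x2 system solved by Cramer's rule (the coefficients are invented for the example):

```python
# Solve a*x + b*y = e ; c*x + d*y = f by Cramer's rule.

def solve_2x2(a, b, c, d, e, f):
    """Return (x, y) for the 2x2 linear system."""
    det = a * d - b * c                 # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("singular system: no unique solution")
    x = (e * d - b * f) / det           # column 1 replaced by the constants
    y = (a * f - e * c) / det           # column 2 replaced by the constants
    return x, y

if __name__ == "__main__":
    # 2x + y = 5 ; x - y = 1  ->  x = 2, y = 1
    print(solve_2x2(2, 1, 1, -1, 5, 1))
```

For larger systems the same idea gives way to Gaussian elimination or `numpy.linalg.solve`, which the module's linear-algebra material builds toward.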
MODULE – II (12 Hours)
Foundation of Neural Networks:
Neural Networks – The Perceptron – Multilayer Feed-Forward Networks – Training Neural Networks – Backpropagation Learning – Activation Functions: Linear – Sigmoid – Tanh – Hard Tanh – Softmax – Rectified Linear. Loss Functions: Loss Functions for Regression – Loss Functions for Classification – Loss Functions for Reconstruction. Hyperparameters: Learning Rate – Regularization – Momentum. Defining Deep Learning: Evolutionary Progress – Advances in Deep Neural Network Architecture – From Feature Engineering to Automated Feature Learning – Generative Modelling.
MODULE – III (12 Hours)
Architectural Principles of Deep Neural Networks – I:
Parameters – Layers – Activation Functions – Loss Functions – Optimization Algorithms – Hyperparameters – Regularization. Building Blocks of Deep Networks: Multilayer Feed-Forward Networks – Restricted Boltzmann Machines (RBM) – Unsupervised Pretrained Networks: Autoencoders – Variants of Autoencoders – Variational Autoencoders (VAE) – Deep Belief Networks (DBN) – Generative Adversarial Networks (GAN).
MODULE – IV (12 Hours)
Architectural Principles of Deep Neural Networks – II:
Convolutional Neural Networks (CNN): Architecture Overview – Input Layers – Convolutional Layers – Pooling Layers – Fully Connected Layers. Recurrent Neural Networks (RNN): Modelling the Time Dimension – Sequences and Time-Series Data – 3D Volumetric Input – Recurrent Neural Network Architecture and Time-Steps – LSTM Networks: Architecture. Recursive Neural Networks: Architecture and Applications.
TEXTBOOKS:
1. Patterson, Josh, and Adam Gibson. “Deep Learning: A Practitioner's Approach”, O'Reilly Media, Inc., 2017.
2. Moolayil, Jojo. “Learn Keras for Deep Neural Networks”, Apress, 2019.
REFERENCE BOOKS:
1. Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. “Deep Learning”, MIT Press, 2016.
2. Osinga, Douwe. “Deep Learning Cookbook: Practical Recipes to Get Started Quickly”, O'Reilly Media, Inc., 2018.
3. Gulli, Antonio, and Sujit Pal. “Deep Learning with Keras”, Packt Publishing Ltd, 2017.
SUBJECT CODE 22MCAGE4022
HOURS PER WEEK 4
CREDITS 4
COURSE OBJECTIVES:
COB1 To understand the fundamental concepts and techniques of natural language processing (NLP).
COB2 To gain an in-depth understanding of the computational properties of natural languages and the commonly used algorithms for processing linguistic information.
COB3 To provide understanding about concepts like vectorization, POS, stemming, Lemmatization etc.
COB4 To enhance the skill to develop NLP models like LSTMs, GRU, Attention models using Python’s
libraries.
COB5 To explain the process of applying predictive models to generate language models.
COB6 To develop the skill of using Python’s libraries to implement various NLP models.
COURSE OUTCOMES:
CO1 Extract information from text automatically using concepts and methods from
natural language processing (NLP) including stemming, n-grams, POS tagging, L2
and parsing.
CO2 Analyze the syntax, semantics, and pragmatics of a statement written in a natural
language. L2
CO3 Apply scripts and applications in Python to carry out natural language processing using libraries such as NLTK, Gensim, and spaCy. L3
CO4 Design NLP-based AI systems for question answering, text summarization, and
machine translation. L4
SYLLABUS
MODULE NO.  CONTENTS  ASSESSMENTS AND ACTIVITY  CO  PO MAPPING
MODULE – I (12 Hours)
Introduction to NLP: Various stages of NLP – The Ambiguity of Language: Why NLP Is Difficult. Parts of Speech: Nouns and Pronouns; Words: Determiners and Adjectives, Verbs; Phrase Structure. Statistics Essentials – Information Theory: Entropy, Perplexity, The Relation to Language, Cross Entropy.
Text Preprocessing and Morphology: Character Encoding, Word Segmentation, Sentence Segmentation, Introduction to Corpora, Corpora Analysis. Inflectional and Derivational Morphology, Morphological Analysis and Generation using Finite State Automata and Finite State Transducers.
Assessment and Activity: MCQ-based assignment for comprehension check. CO1, CO2; PO1, PO2.
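The entropy and perplexity topics in this module reduce to a few lines of Python; the word distributions below are toy examples:

```python
# Shannon entropy (in bits) and perplexity of a unigram distribution.
import math

def entropy(probs):
    """H = -sum p * log2(p), skipping zero-probability events."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def perplexity(probs):
    """Perplexity is 2 raised to the entropy."""
    return 2 ** entropy(probs)

if __name__ == "__main__":
    uniform = [0.25, 0.25, 0.25, 0.25]   # 4 equally likely words
    skewed = [0.7, 0.1, 0.1, 0.1]        # one dominant word
    print(entropy(uniform), perplexity(uniform))  # 2.0 bits, perplexity 4.0
    print(entropy(skewed) < entropy(uniform))     # skewed text is more predictable
```

Cross entropy, covered next in the module, applies the same formula but scores one distribution's probabilities against another's samples.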
MODULE – II (12 Hours)
Language Modelling:
Words: Collocations – Frequency – Mean and Variance. Hypothesis Testing: the t-test, hypothesis testing of differences, Pearson's chi-square test, likelihood ratios. Statistical Inference: n-gram models over sparse data – Bins: forming equivalence classes – n-gram model – statistical estimators – combining estimators.
Word Sense Disambiguation:
Methodological preliminaries. Supervised Disambiguation: Bayesian classification, an information-theoretic approach. Dictionary-Based Disambiguation: disambiguation based on sense, thesaurus-based disambiguation, disambiguation based on translations in a second-language corpus.
Assessment and Activity: LinkedIn certification course, assignment. CO1, CO2, CO6; PO1, PO2, PO11.
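The n-gram modelling topics above can be illustrated with a maximum-likelihood bigram model over a toy corpus (the sentences and the `<s>`/`</s>` boundary markers are invented for the example):

```python
# Maximum-likelihood bigram estimates: P(w2 | w1) = count(w1 w2) / count(w1).
from collections import Counter

def bigram_mle(sentences):
    """Return P(w2 | w1) estimates from raw counts."""
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent.split() + ["</s>"]
        unigrams.update(tokens[:-1])              # count each context word
        bigrams.update(zip(tokens[:-1], tokens[1:]))
    return {pair: count / unigrams[pair[0]] for pair, count in bigrams.items()}

if __name__ == "__main__":
    corpus = ["the cat sat", "the cat ran", "the dog sat"]
    p = bigram_mle(corpus)
    print(p[("the", "cat")])   # 2 of the 3 "the" contexts continue with "cat"
```

Unseen bigrams get zero probability under pure MLE, which is exactly the sparse-data problem that the module's smoothing estimators (and estimator combinations) address.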
MODULE – III (12 Hours)
Markov Model and POS Tagging:
Markov Model: Hidden Markov Model, fundamentals, probability of properties, parameter estimation, variants, multiple input observations. The Information Sources in Tagging: Markov model taggers, Viterbi algorithm, applying HMMs to POS tagging, applications of tagging.
Assessment and Activity: Assignment – review of several language constructs and their variation. CO2, CO3; PO1, PO4.
MODULE – IV (12 Hours)
Syntax and Semantics:
Shallow Parsing and Chunking, Lexical Semantics, WordNet, Thematic Roles, Semantic Role Labelling with CRFs. Statistical Alignment and Machine Translation, Text Alignment, Word Alignment, Information Extraction, Text Mining, Sentiment Analysis, Vector Space Models.
Assessment and Activity: Case Study: Sentiment Analysis – Model Development and Evaluation. CO3, CO5; PO3, PO5, PO11.
MODULE – V (12 Hours)
Neural Network Models for NLP:
Neural Networks for Sentiment Analysis, Recurrent Neural Networks for Language Modelling, LSTMs and Named Entity Recognition, Neural Machine Translation, Text Summarization, Question Answering, ChatGPT.
Assessment and Activity: Group presentation on NLP models generated using Python libraries. CO3, CO4, CO5; PO2, PO3, PO5, PO11.
TEXTBOOKS:
1. Christopher D. Manning and Hinrich Schütze, “Foundations of Statistical Natural Language Processing”, The MIT Press, Cambridge, Massachusetts; London, England, 2003.
2. Daniel Jurafsky and James H. Martin “Speech and Language Processing”, 3rd edition, Prentice Hall, 2009.
REFERENCE BOOKS:
1. Nitin Indurkhya and Fred J. Damerau, “Handbook of Natural Language Processing”, Second Edition, CRC Press, 2010.
2. James Allen, “Natural Language Understanding”, 8th Edition, Pearson Publication, 2012.
3. Chris Manning and Hinrich Schütze, “Foundations of Statistical Natural Language Processing”, 2nd edition, MIT Press, Cambridge, MA, 2003.
4. Hobson Lane, Cole Howard, Hannes Hapke, “Natural Language Processing in Action”, Manning Publications, 2019.
5. Alexander Clark, Chris Fox, Shalom Lappin, “The Handbook of Computational Linguistics and Natural Language Processing”,
Wiley-Blackwell, 2012
6. Rajesh Arumugam and Rajalingappaa Shanmugamani, “Hands-On Natural Language Processing with Python: A practical guide to applying deep learning architectures to your NLP applications”, Packt Publishing, 2018.
TITLE COMPUTER VISION
SUBJECT CODE MCAGE4021
COURSE OBJECTIVES:
COB1 To review image processing techniques for computer vision.
COB2 To understand shape and region analysis.
COB3 To understand Hough Transform and its applications to detect lines, circles, ellipses.
COB4 To understand three-dimensional image analysis techniques.
COB5 To understand motion analysis and study some applications of computer vision algorithms.
COURSE OUTCOMES:
CO1 Implement fundamental image processing techniques required for computer vision. L3
CO2 Perform shape analysis. L2
CO3 Implement boundary tracking techniques. L3
CO4 Apply chain codes and other region descriptors. L4
CO5 Apply Hough Transform for line, circle, and ellipse detections. L5
SYLLABUS
MODULE NO.  CONTENTS  ASSESSMENTS AND ACTIVITY  CO  PO MAPPING
MODULE – I (12 Hours)
IMAGE PROCESSING FOUNDATIONS:
Review of image processing techniques – classical filtering operations – thresholding techniques – edge detection techniques – corner and interest point detection – mathematical morphology – texture.
Assessment and Activity: Image Processing Fundamentals LinkedIn Course. CO1; PO1, PO2.
MODULE – III (12 Hours)
HOUGH TRANSFORM:
Line detection – Hough Transform (HT) for line detection – foot-of-normal method – line localization – line fitting – RANSAC for straight line detection – HT-based circular object detection – accurate center location – speed problem – ellipse detection – Case Study: Human Iris Location – hole detection – generalized Hough Transform.
Assessment and Activity: Certification on an online course on Computer Vision. CO3; PO1, PO2, PO3, PO4.
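The voting idea behind the Hough transform above can be shown without any image library: each edge point votes for all (theta, rho) line parameterizations passing through it, and collinear points pile their votes into one bin. The point set and the coarse rho rounding are synthetic choices for the example:

```python
# Bare-bones Hough transform for line detection: accumulate votes in
# (theta_degrees, rounded_rho) space using rho = x*cos(theta) + y*sin(theta).
import math
from collections import Counter

def hough_lines(points, theta_steps=180):
    """Return a Counter of votes over (theta, rho) bins."""
    votes = Counter()
    for x, y in points:
        for t in range(theta_steps):
            theta = math.radians(t)
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(t, round(rho))] += 1
    return votes

if __name__ == "__main__":
    # Five collinear points on y = x: votes peak at rho = 0, theta near 135 deg
    pts = [(i, i) for i in range(5)]
    (theta, rho), count = hough_lines(pts).most_common(1)[0]
    print(theta, rho, count)
```

Production detectors (e.g. OpenCV's `HoughLines`) use the same accumulator, with finer rho quantization and non-maximum suppression over the peaks.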
MODULE – V (12 Hours)
APPLICATIONS:
Application: Photo album – face detection – face recognition – eigenfaces – active appearance and 3D shape models of faces. Application: Object detection model (YOLOv2). Application: In-vehicle vision system – locating the roadway – road markings – identifying road signs – locating pedestrians.
Assessment and Activity: Seminar on various applications of image processing. CO5; PO1, PO2, PO3, PO4, PO5.
Textbooks:
1. D. L. Baggio et al., “Mastering OpenCV with Practical Computer Vision Projects”, Packt Publishing, 2018.
2. E. R. Davies, “Computer & Machine Vision”, Fourth Edition, Academic Press, 2021.
3. Jan Erik Solem, “Programming Computer Vision with Python: Tools and algorithms for analyzing images”, O'Reilly Media, 2017.
Reference Books:
1. Mark Nixon and Alberto S. Aguado, “Feature Extraction & Image Processing for Computer Vision”, Third Edition, Academic Press, 2012.
2. R. Szeliski, “Computer Vision: Algorithms and Applications”, Springer, 2011.
3. Simon J. D. Prince, “Computer Vision: Models, Learning, and Inference”, Cambridge University Press, 2012.
CREDITS 4
COURSE OBJECTIVES
COB1 To understand Cognitive Computing as a discipline with knowledge of its Architecture.
COB2 To understand the working of a Cognitive System with Inference and Decision Support System.
COB3 To understand the connection between Cognitive Computing and Machine Learning.
COB4 To Understand how Natural Language is a Support for Cognitive Systems.
COB5 To Apply Cognitive Systems capabilities in real time case studies.
COURSE OUTCOMES
CO1 Implement the basic concepts of Cognitive Systems.
CO2 Apply the relationship between Machine Learning, NLP, and Cognitive Systems to design a better architecture.
CO3 Design a Cognitive System and apply it in real world scenarios.
SYLLABUS
MODULE CONTENT HOURS
I Introduction: Cognitive science and cognitive Computing with AI, 12
Cognitive Computing - Cognitive Psychology - The Architecture of the
Mind - The Nature of Cognitive Psychology – Cognitive architecture –
Cognitive processes – The Cognitive Modelling Paradigms -
Declarative / Logic based Computational cognitive modelling –
connectionist models – Bayesian models. Introduction to Knowledge-
Based AI – Human Cognition on AI – Cognitive Architectures
II Cognitive Computing With Inference and Decision Support 12
Systems: Intelligent Decision making, Fuzzy Cognitive Maps,
Learning algorithms: Non linear Hebbian Learning – Data driven NHL
- Hybrid learning, Fuzzy Grey cognitive maps, Dynamic Random fuzzy
cognitive Maps.
III Cognitive Computing with Machine Learning: Machine learning 12
Techniques for cognitive decision making – Hypothesis Generation and
Scoring - Natural Language Processing - Representing Knowledge -
Taxonomies and Ontologies - Deep Learning.
IV Natural Language Processing in Support of a Cognitive System: 12
Textbooks:
1. Hurwitz, Kaufman, and Bowles, “Cognitive Computing and Big Data Analytics”, Wiley, Indianapolis, IN, 2015, ISBN: 978-1-118-89662-4.
2. Masood, Adnan, and Hashmi, Adnan, “Cognitive Computing Recipes: Artificial Intelligence Solutions Using Microsoft Cognitive Services and TensorFlow”, Apress, 2019.
Reference Books:
1. Peter Fingar, Cognitive Computing: A Brief Guide for Game Changers, PHI Publication, 2015.
2. Gerardus Blokdyk, Cognitive Computing Complete Self-Assessment Guide, 2018.
3. Rob High, Tanmay Bakshi, Cognitive Computing with IBM Watson: Build smart applications using Artificial
Intelligence as a service, IBM Book Series, 2019.
TITLE BUSINESS INTELLIGENCE
CREDITS 4
COURSE OBJECTIVES
COB1 To understand the basic concepts in Business Intelligence.
COB2 To understand how data can be presented through Data Mining.
COB3 To understand the process of ETL in a Data Warehouse.
COB4 To understand the application of BI in real-world scenarios.
COURSE OUTCOMES
CO1 Implement the concepts of BI for Data Preparation.
CO2 Learn the construction of a BI system and apply it in decision making.
SYLLABUS
MODULE CONTENT HOURS
I Business Intelligence – Introduction: Introduction - History and 12
Evolution: Effective and Timely Decisions, Data, Information and
Knowledge, Architectural Representation, Role of Mathematical
Models, Real-Time Business Intelligence Systems.
II BI –Data Mining and Data Warehousing : Data Mining - 12
Introduction to Data Mining, Architecture of Data Mining and How
Data Mining Works (Process), Functionalities & Classifications of Data
Mining, Representation of Input Data, Analysis Methodologies. Data
Warehousing - Introduction to Data Warehousing, Data Mart, Online
Analytical Processing (OLAP) – Tools, Data Modelling, Difference
between OLAP and OLTP, Schema – Star and Snowflake Schemas,
ETL Process – Role of ETL.
III BI – DATA PREPARATION: Data Validation - Introduction to Data 12
Validation, Data Transformation – Standardization and Feature
Extraction, Data Reduction – Sampling, Selection, PCA, Data
Discretization.
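The data-reduction topic in this module (PCA) reduces to projecting centred data onto its leading singular vectors. A minimal sketch, using a randomly generated data set purely for illustration:

```python
import numpy as np

def pca_reduce(X, k):
    """Project data onto its top-k principal components, i.e. the
    directions of maximal variance of the centred data."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data: rows of Vt are the principal directions,
    # ordered by decreasing singular value (explained variance).
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # 100 samples, 5 features
Z = pca_reduce(X, 2)            # reduced to 2 dimensions
print(Z.shape)                  # (100, 2)
```

Because the singular values are sorted in decreasing order, the first output column always carries at least as much variance as the second, which is the property BI data-preparation pipelines exploit when discarding the remaining components.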
IV BI – DATA ANALYTICS PROCESS - Introduction to the analytics 12
process, Types of Analytical Techniques in BI - Descriptive, Predictive,
Prescriptive, Social Media Analytics, Behavioural Analytics, Iris Datasets.
V IMPLEMENTATION OF BI – Business Activity Monitoring, 12
Complex Event Processing, Business Process Management, Metadata,
Root Cause Analysis.
Text Books:
1. Carlo Vercellis, “Business Intelligence: Data Mining and Optimization for Decision Making”, First Edition, 2019.
2. Drew Bentley, “Business Intelligence and Analytics”, Library Press, 2017, ISBN: 978-1-9789-2136-8.
3. Larissa T. Moss & Shaku Atre, “Business Intelligence Roadmap: The Complete Project Lifecycle”, 1st Edition, 2017.
Reference Books:
1. Larissa T. Moss & Shaku Atre, “Business Intelligence Roadmap: The Complete Project Lifecycle for Decision-Support Applications”, First Edition, Addison-Wesley Professional, 2012.
2. R. Kimball, M. Ross, W. Thornthwaite, J. Mundy, and B. Becker, “The Data Warehouse Lifecycle Toolkit: Practical Techniques for Building Data Warehouse and Business Intelligence Systems”, Second Edition, Wiley & Sons, 2008.
3. Cindi Howson, “Successful Business Intelligence”, Second Edition, McGraw-Hill Education, 2013.
TITLE ROBOTICS
CREDITS 4
COURSE OBJECTIVES
COB1 To study different types of sensors and transducers in Robotics.
COB3 To learn about image-processing techniques for Robotics.
Control through Vision sensors, Robot vision locating position, Robot
guidance with vision system, End effector camera Sensor.
III ELEMENTS OF IMAGE PROCESSING TECHNIQUES 12
Discretization, Neighbours of a pixel - Connectivity - Distance measures -
Pre-processing: Neighbourhood averaging, Median filtering,
Smoothing of binary images - Image Enhancement - Histogram
Equalization - Histogram Specification - Local Enhancement - Edge
detection - Gradient operator, Laplace operators - Thresholding -
Morphological image processing
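The histogram-equalization technique listed in this module amounts to mapping each grey level through the normalized cumulative histogram. A minimal sketch for 8-bit greyscale images; the low-contrast test image below is synthetic:

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization for an 8-bit greyscale image: each grey
    level is remapped through the normalized cumulative histogram,
    spreading the output levels over the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                       # normalize CDF to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]                           # apply the lookup table

# A low-contrast image whose levels are confined to the band 100-120.
img = np.random.default_rng(1).integers(100, 121, size=(64, 64), dtype=np.uint8)
out = hist_equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())
```

The output dynamic range is stretched well beyond the original 100-120 band, which is the contrast-enhancement effect the module refers to.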
IV OBJECT RECOGNITION AND FEATURE EXTRACTION 12
Image segmentation - Edge linking - Boundary detection - Region
growing - Region splitting and merging - Boundary Descriptors - Freeman
chain code - Regional Descriptors - Recognition - Structural methods -
Recognition procedure, Mahalanobis procedure
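A minimal sketch of the Freeman chain code covered in this module, assuming the common 8-direction convention (0 = east, counting counter-clockwise) and an already-traced, ordered boundary; the square boundary below is a toy example:

```python
# 8-directional Freeman chain code: each move between successive boundary
# pixels is encoded as a digit 0-7 (0 = east, 2 = north, counting
# counter-clockwise), giving a compact boundary descriptor.
# Rows increase downward, so "north" is a row decrease of 1.
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(boundary):
    """Encode an ordered list of (row, col) boundary pixels as the
    sequence of direction digits between successive pixels."""
    code = []
    for (r0, c0), (r1, c1) in zip(boundary, boundary[1:]):
        code.append(DIRS[(r1 - r0, c1 - c0)])
    return code

# Closed boundary of a 2x2 pixel square, traced clockwise in image coords.
square = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(chain_code(square))  # [0, 6, 4, 2]
```

The resulting digit string is what the module's "Boundary Descriptors" then normalize (e.g. for rotation or starting point) before recognition.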
V COLLISION FRONTS ALGORITHM 12
Introduction, Skeleton of objects, Gradients, Propagation, Definitions,
Propagation algorithm, Thinning algorithm, Skeleton lengths of topmost
objects.
Textbooks:
1. Paul W Chapman, “Smart Sensors”, an Independent Learning Module Series, 2nd Edition, 2015
2. Richard D. Klafter, Thomas A. Chmielewski, Michael Negin, “Robotics Engineering: An Integrated Approach”, PHI Learning, 3rd Edition, 2012.
3. John Iovine, “Robots, Androids and Animatrons”, McGraw-Hill, 5th Edition, 2020.
4. K.S. Fu, R.C. Gonzalez, C.S.G. Lee, “Robotics: Control, Sensing, Vision and Intelligence”, Tata McGraw-Hill Education, 4th Edition, 2019.
Reference Books:
1. Mikell P. Groover, Nicholas G. Odrey, Mitchell Weiss, Roger N. Nagel, Ashish Dutta, “Industrial Robotics: Technology, Programming and Applications”, Tata McGraw-Hill Education, 4th Edition, 2020.
2. Sabrie Soloman, “Sensors and Control Systems in Manufacturing”, McGraw-Hill Professional Publishing, 5th Edition, 2019.
TITLE RECOMMENDER SYSTEMS
CREDITS 4
COURSE OBJECTIVES
COB1 Understand the basic taxonomy of Recommender Systems.
COB2 Understand the classification of Recommender Systems.
COB3 Understand the mathematical aspects of Recommender System design.
COB4 Understand the applications of Recommender Systems in various fields.
COURSE OUTCOMES
CO1 Understand the basic concepts of recommender systems
CO2 Solve mathematical optimization problems pertaining to recommender systems
CO3 Carry out performance evaluation of recommender systems based on various metrics
CO4 Implement machine-learning and data-mining algorithms on recommender-system data sets.
CO5 Design and implement a simple recommender system.
SYLLABUS
MODULE CONTENT HOURS
I Introduction 12
Introduction and basic taxonomy of recommender systems (RSs).
Traditional and non-personalized RSs. Overview of data mining
methods for recommender systems (similarity measures, classification,
Bayes classifiers, ensembles of classifiers, clustering, SVMs,
dimensionality reduction). Overview of convex and linear optimization
principles.
II Content-based recommender systems 12
The long-tail principle. Domain-specific challenges in recommender
systems. Content-based recommender systems. Advantages and
drawbacks. Basic components of content-based RSs. Feature selection.
Item representation. Methods for learning user profiles.
III Collaborative Filtering (CF)-based RSs: Mathematical foundations 12
Mathematical optimization in CF RSs. Optimization objective. Baseline
predictor through least squares. Regularization and overfitting.
Temporal models. Step-by-step solution of the RS problem.
Collaborative Filtering (CF)-based RSs: systematic approach
Nearest-neighbour collaborative filtering (CF). User-based and
item-based CF, comparison. Components of neighbourhood methods (rating
normalization, similarity weight computation, neighbourhood
selection). Hybrid recommender systems.
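The user-based neighbourhood CF covered in this module can be sketched as cosine similarity between rating rows followed by a similarity-weighted average over the k most similar users who rated the target item. The rating matrix below is a toy example, and rating normalization is deliberately omitted:

```python
import numpy as np

def predict_user_based(R, user, item, k=2):
    """Predict R[user, item] from the k most similar users who rated
    the item (cosine similarity on full rating rows; 0 = unrated)."""
    sims = []
    for u in range(R.shape[0]):
        if u == user or R[u, item] == 0:
            continue                      # skip self and non-raters
        num = R[u] @ R[user]
        den = np.linalg.norm(R[u]) * np.linalg.norm(R[user])
        sims.append((num / den, R[u, item]))
    sims.sort(reverse=True)               # most similar neighbours first
    top = sims[:k]
    # Similarity-weighted average of the neighbours' ratings.
    return sum(s * r for s, r in top) / sum(s for s, r in top)

# Toy user-item rating matrix (rows: users, cols: items, 0 = unrated).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 4, 4]], dtype=float)
print(round(predict_user_based(R, user=1, item=1), 2))
```

The prediction lands between the neighbours' ratings, pulled toward the rating of the more similar user; the rating-normalization and neighbourhood-selection components listed in the module refine exactly this computation.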
IV Performance evaluation of RSs - Experimental settings: 12
Working with RSs data sets. Examples. The cold-start problem.
Evaluation metrics. Rating prediction and accuracy. Other metrics
(fairness, coverage, diversity, novelty, serendipity).
Context awareness and Learning principles in RSs: Context-aware
recommender systems. Contextual information models for RSs.
Incorporating context in RSs. Learning to rank. Active learning in RSs.
Multi-armed bandits and Reinforcement learning in RSs. Dynamic RSs.
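The rating-prediction accuracy metrics mentioned in this module (RMSE and MAE) reduce to short formulas; the actual/predicted ratings below are made up purely for illustration:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error for rating prediction."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error: penalizes large misses less than RMSE."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred)))

actual = [4, 3, 5, 1, 2]        # hypothetical held-out ratings
predicted = [3.5, 3, 4, 2, 2]   # hypothetical system predictions
print(rmse(actual, predicted), mae(actual, predicted))
```

RMSE is always at least as large as MAE on the same data, and the gap between them indicates how unevenly the prediction errors are distributed; the other metrics listed (coverage, diversity, novelty, serendipity) measure ranking quality rather than pointwise accuracy.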
V User behaviour understanding in RSs: 12
Foundations of behavioural science. User choice and decision models.
Choice models in RSs. Digital nudging and user choice engineering
principles. Applications and examples for recommender systems.
Applications of RSs for content media, social media and communities
Music and video RSs. Datasets. Group recommender systems. Social
recommendations. Recommending friends: link prediction models.
Similarities and differences of RSs with task assignment in mobile
crowd sensing. Social network diffusion awareness in RSs.
Textbooks:
1. C.C. Aggarwal, Recommender Systems: The Textbook, Springer, 2016.
2. F. Ricci, L. Rokach, B. Shapira and P.B. Kantor, Recommender Systems Handbook, Springer, 2010.
3. J. Leskovec, A. Rajaraman and J. Ullman, Mining of Massive Datasets, 2nd Ed., Cambridge, 2012.
4. M. Chiang, Networked Life: 20 Questions and Answers, Cambridge, 2010. (Chapter 4)