
MCA Syllabus New Updated

The Master of Computer Applications (MCA) program with a specialization in Artificial Intelligence (AI) and Machine Learning (ML) is a two-year course aimed at addressing the growing demand for intelligent decision-making systems across various industries. The program offers extensive career opportunities in AI and ML, with a projected market growth to $1,581.70 billion by 2030, and prepares students for roles such as AI Engineer, Data Analyst, and Business Intelligence Developer. The curriculum includes core courses, practical labs, and emphasizes ethical practices, lifelong learning, and project management skills.


DEPARTMENT OF MASTER OF COMPUTER APPLICATIONS

Program – MCA (AI & ML)
Duration – 2 years
Batch – 2022-24

COURSE DETAILS

1. Programme Nomenclature with Duration

Master of Computer Applications (MCA)
Duration: 2 years

2. Need and Scope of the Programme


Across all industries, there is a growing need for intelligent and accurate decision-making systems. As a result, AI and ML
technologies have seen exponential growth and are likely to remain relevant for many years to come. Machine Learning is a
subset of Artificial Intelligence that helps programs improve their prediction accuracy by learning from historical data. Artificial
Intelligence has already begun to show its effects in the convenience it provides. The main focus of AI and ML is to create
machines and programs that can solve problems and pursue goals the way human beings do. The scope of AI and ML spans
the industries of technology, financial services, military and national security, gaming, agriculture, healthcare, and many more.

3. Market Survey (demand for the programme)

The Artificial Intelligence market was valued at $65.48 billion in 2020. Projections for the year 2030 put this value at
$1,581.70 billion, a compound annual growth rate (CAGR) of 38.0% from 2021 to 2030. AI has become almost synonymous
with the future of technology. With AI outperforming human effort on many tasks, organizations are increasingly opting for AI
to increase efficiency and cut costs in the long run. AI, along with Big Data, ML and NLP, is ranked
2nd amongst the top tech priorities for 2021-22 according to the NASSCOM Tech CEO Survey 2022.

4. Career Opportunities / Prospects and Career Path

The job outlook for AI/ML professionals is extremely promising: the number of AI start-ups in India has shot up in recent
years, and this number is expected to keep rising, which implies an abundance of career opportunities. A recent LinkedIn
search for "artificial intelligence" jobs revealed more than 45,000 results at a range of companies. Students pursuing our
Master's degree in AI and ML will have opportunities to become:
AI & ML Engineer/Developer: responsible for performing statistical analysis, running statistical tests, and implementing
statistical designs. Furthermore, they manage ML programs, implement ML algorithms, and develop deep learning systems.
AI Analyst/Specialist: the key responsibility is to deliver AI-oriented solutions and schemes that enhance the services of a
given industry, using data-analysis skills to study the trends and patterns of particular datasets.
Business Intelligence (BI) Developer: an engineer in charge of developing, deploying, and maintaining BI interfaces,
including query tools, dashboards and interactive visualizations, ad hoc reporting, and data modelling tools.
Other exciting roles in the market include Human-Centered Machine Learning Designer, Data Architect, Research Scientist,
and NLP Engineer.

5. Potential employers / stakeholders identified

Data-driven organizations and companies in the Artificial Intelligence & Data Science domains, focused on sectors like
Information Technology & Services, Consumer Goods, Manufacturing, and various other sectors.

6. Programme Educational Objectives (PEOs)

PEO1: Be able to analyze problems by applying the principles of computer science, mathematics, and scientific
investigation, and to design and implement industry-accepted solutions using the latest technologies.
PEO2: Apply AI/ML methods, techniques and tools to create AI/ML solutions for various business problems, and build and
deploy production-grade AI/ML applications.
PEO3: Be an ethical and socially responsible professional, engage in life-long learning and have an entrepreneurial mindset.

7. Programme Outcomes (POs) / Programme Specific Outcomes (PSOs)

Program Outcomes (POs):

PO1: The ability to apply knowledge of fundamentals, specializations, mathematics, and domain knowledge to problem solving
and creating computing models that represent an abstraction of requirements.

PO2: Problem Analysis: Identify, formulate, design and solve complex computing problems providing concrete conclusions
using fundamental principles of mathematics, computing sciences, and relevant domain disciplines.

PO3: Develop solutions to problems in computing by constructing components, implementing processes, and evaluating solutions
that meet defined specifications.

PO4: Modern Tool/Techniques usage: Select, adapt, and apply appropriate tools, techniques, resources to various computing
activities, with an understanding of their limitations.

PO5: Professional Ethics: Understand and commit to professional ethics and cyber regulations, responsibilities, and norms of
professional computing practices.

PO6: Life-long learning: Recognize the need and have the capability to engage in independent learning for continuous
professional development.

PO7: Communication Efficiency: Communicate effectively about computing activities with the computing community, and with
society at large, through the ability to comprehend and write effective reports, design documentation, deliver effective
presentations, and give and understand clear instructions.

PO8: Societal and Environmental Concern: Understand and assess societal, environmental, health, safety, legal, and cultural issues
within local and global contexts, and the consequential responsibilities relevant to professional computing practices.

PO9: Individual and Team Work: In multidisciplinary environments and diverse teams, function as an effective individual, member, or leader.

PO10: Innovation and Entrepreneurship: Create value and wealth for the individual and society at large by identifying a timely
opportunity and using innovation to pursue that opportunity.

PO11: Conduct Investigations of complex computing problems: Apply research-based knowledge and research methods,
including the design of experiments, the analysis and interpretation of data, and the synthesis of the information to provide
valid conclusions.

PO12: Project management and finance: The ability to understand and apply computing and management principles to one's own
work, as a leader or as a member of a team, in order to manage projects in a multidisciplinary environment.

Program specific Outcomes (PSOs):

PSO1: Develop programming, analytical and logical thinking abilities.

PSO2: Develop computational knowledge and project development skills to solve societal problems in AI & ML.

PSO3: Develop the ability to qualify for Employment, higher studies and Research in Artificial Intelligence and Data science with
ethical values.

PSO4: Inculcate the ability to work in a team and act as an individual in a multidisciplinary environment with lifelong learning and
social awareness.

8. Eligibility Criteria (Tentative)
The degree requirements of a student for the MCA programme are as follows:
i) B.C.A./B.Sc./B.Com/B.A. degree with Mathematics as one of the subjects at the 10+2 level or at the graduation level (with
additional bridge courses as per the norms of the University)
ii) At least 50% marks (45% marks in the case of candidates belonging to a reserved category) in the qualifying
examination.

9. Proposed Venue: Jain (Deemed-To-Be) University, Jayanagar 9th T Block, Bengaluru
10. Intake of the students (Proposed) : 60

SCHEME FOR 2022-2024

1ST SEMESTER
(Columns: Subject Code | Title of the Subject | L-T-P Hrs/Week | Credits | Theory UE | Theory IA | Practical UE | Practical CA | Total)

CORE COURSES
22MCAC101 | Programming in Java | 4-0-0 | 4 | 50 | 50 | - | - | 100
22MCAC102 | Advanced Computer Networks | 4-0-0 | 4 | 50 | 50 | - | - | 100
22MCAC103 | Advanced Operating Systems | 4-0-0 | 4 | 50 | 50 | - | - | 100
22MCAC104 | Mathematical Foundation for Computer Applications | 4-0-0 | 4 | 50 | 50 | - | - | 100
22MCAC105 | Computer Architecture | 4-0-0 | 4 | 50 | 50 | - | - | 100

LEARNING LABS
22MCAC101L | Programming in Java Lab | 0-0-2 | 1 | - | - | - | 100 | 100
22MCAC102L | Advanced Computer Networks Lab | 0-0-2 | 1 | - | - | - | 100 | 100
22MCAC103L | Advanced Operating Systems Lab | 0-0-2 | 1 | - | - | - | 100 | 100
– | PCL – Project I | 1-0-2 | - | - | - | - | - | -

Total | | | 23 | 250 | 250 | - | 300 | 800

2nd SEMESTER
(Columns: Subject Code | Title of the Subject | L-T-P Hrs/Week | Credits | Theory UE | Theory IA | Practical UE | Practical CA | Total)

CORE COURSES
22MCAC201 | Advanced Database Management Systems | 4-0-0 | 4 | 50 | 50 | - | - | 100
22MCAC202 | Data Structures with Algorithms | 4-0-0 | 4 | 50 | 50 | - | - | 100
22MCAG203 | Artificial Intelligence | 4-0-0 | 4 | 50 | 50 | - | - | 100
22MCAG206 | Programming in Python | 4-0-0 | 4 | 50 | 50 | - | - | 100
– | Open Elective | 3-0-0 | 3 | 50 | 50 | - | - | 100

LEARNING LABS
22MCAC201L | Advanced DBMS Lab | 0-0-2 | 1 | - | - | - | 100 | 100
22MCAC202L | Data Structures with Algorithms Lab | 0-0-2 | 1 | - | - | - | 100 | 100
22MCAG203L | Programming in Python Lab | 0-0-2 | 1 | - | - | - | 100 | 100
– | Research Publication I | 1-0-2 | 2 | - | - | 50 | 50 | 100
– | PCL – Research and Entrepreneurship Project | 1-0-4 | 3 | - | - | 50 | 50 | 100

Total | | | 27 | 250 | 250 | 100 | 400 | 1000

3rd SEMESTER
(Columns: Subject Code | Title of the Subject | L-T-P Hrs/Week | Credits | Theory UE | Theory IA | Practical UE | Practical CA | Total)

CORE COURSES
22MCAG301 | Machine Learning | 4-0-0 | 4 | 50 | 50 | - | - | 100
22MCAG302 | Predictive Analytics and Data Visualization | 4-0-0 | 4 | 50 | 50 | - | - | 100
22MCAG303 | Inferential Statistics | 4-0-0 | 4 | 50 | 50 | - | - | 100
22MCAG304 | Introduction to Big Data | 4-0-0 | 4 | 50 | 50 | - | - | 100
22MCAG305 | Internet of Things | 4-0-0 | 4 | 50 | 50 | - | - | 100
– | Open Elective | 3-0-0 | 3 | 50 | 50 | - | - | 100

LEARNING LABS
22MCA301L | Machine Learning Lab | 0-0-2 | 1 | - | - | - | 100 | 100
22MCA302L | Predictive Analytics and Data Visualization Lab | 0-0-2 | 1 | - | - | - | 100 | 100
– | PCL – Project III | 1-0-2 | - | - | - | - | - | 100

Total | | | 25 | 300 | 300 | - | 200 | 800

4th SEMESTER
(Columns: Subject Code | Title of the Subject | L-T-P Hrs/Week | Credits | Theory UE | Theory IA | Practical UE | Practical CA | Total)

CORE COURSES
22MCAC401 | Deep Learning | 4-0-0 | 4 | 50 | 50 | - | - | 100
22MCAGE4021 / 22MCAGE4022 / 22MCAGE4023 | Natural Language Processing / Computer Vision / Cognitive Computing | 4-0-0 | 4 | 50 | 50 | - | - | 100
22MCAGE4031 / 22MCAGE4032 | Business Intelligence / Robotics / Recommender Systems | 4-0-0 | 4 | 50 | 50 | - | - | 100

ABILITY ENHANCEMENT COMPULSORY COURSES
– | Research Publication II | 1-0-2 | 2 | - | - | 50 | 50 | 100
– | PCL – Research and Entrepreneurship Project | 1-0-4 | 3 | - | - | 50 | 50 | 100
– | Project / Internship | 4-0-8 | 8 | - | - | 50 | 50 | 100

Total | | | 25 | 150 | 150 | 150 | 150 | 600

CBCS STRUCTURE
(Columns: Course Category | Semester I | Semester II | Semester III | Semester IV | Total Credits)

HARD CORE COURSES | HC-1 to HC-5 | HC-6 to HC-8 | | | 8 x 4 = 32
SPECIALIZATION CORE / ELECTIVE COURSES | SC-1, SC-2 | SC-3 to SC-7 | SC-8, SC-9 | | 9 x 4 = 36
LEARNING LABS | HC-1L to HC-3L | HC-6L, HC-7L, SC-1L | SC-3L, SC-4L | | 8 x 1 = 8
OPEN ELECTIVE | | OE-1 | OE-2 | | 2 x 2 = 4
RESEARCH PUBLICATION | | RP-1 | | RP-2 | 2 x 2 = 4
PCL | Project I | Project II | Project III | Project IV | 3 x 2 = 6; 1 x 10 = 10

CREDITS | 25 | 25 | 26 | 24 | 100

Semester | Credits
1 | 25
2 | 29
3 | 30
4 | 16
Total | 100

TITLE: MACHINE LEARNING
SUBJECT CODE: 22MCAG301
HOURS PER WEEK: 4
CREDITS: 4

COURSE OBJECTIVES:
COB1 To enable the students to understand the fundamentals of Machine Learning algorithms.
COB2 To enhance the skill of implementing different Machine Learning algorithms from scratch.
COB3 To help the student develop the knowledge to choose the appropriate model for a given task.
COB4 To enrich the skill of differentiating regression, classification and clustering tasks.
COB5 To develop the knowledge to build a generalized model by fine-tuning the hyperparameters.

COURSE OUTCOMES: (Bloom's Taxonomy Level)
CO1 Describe the basics of Machine Learning. (Level 2)
CO2 Demonstrate various regression techniques. (Level 3)
CO3 Categorize the purpose of different classification techniques. (Level 4)
CO4 Explain the importance of dimensionality reduction. (Level 4)
CO5 Analyse the process of building neural networks. (Level 4)

MODULE – I: The Fundamentals of Machine Learning (12 Hours)
Machine Learning - The Use of Machine Learning - Types of Machine Learning Systems: Supervised/Unsupervised Learning – Batch and Online Learning – Instance-Based versus Model-Based Learning. Main Challenges of Machine Learning: Insufficient Quantity of Training Data – Nonrepresentative Training Data – Poor-Quality Data – Overfitting and Underfitting the Training Data – Stepping Back. Testing and Validating. Working with Real Data – Get the Data - Discover and Visualize the Data to Gain Insights - Prepare the Data for Machine Learning Algorithms - Select and Train a Model - Fine-Tune the Model.
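The end-of-module workflow (get the data, hold out a test set, train, evaluate) can be sketched with scikit-learn; the Iris dataset and the logistic-regression model below are illustrative choices, not prescribed by the syllabus.

```python
# Sketch of the Module I workflow: get the data, hold out a test set,
# train a model, and validate it on unseen data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Testing and validating: never evaluate only on the data you trained on.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# A large gap between the two scores signals overfitting;
# low scores on both signal underfitting.
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy:", model.score(X_test, y_test))
```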

MODULE – II: Regression (12 Hours)
Linear Regression - Gradient Descent: Batch Gradient Descent - Stochastic Gradient Descent - Mini-Batch Gradient Descent. Polynomial Regression - Learning Curves - Regularized Linear Models: Ridge Regression - Lasso Regression - Elastic Net - Early Stopping. Logistic Regression: Estimating Probabilities - Training and Cost Function - Decision Boundaries - Softmax Regression. SVM Regression: Decision Function and Predictions - Training Objective - Quadratic Programming - The Dual Problem - Kernelized SVM - Online SVMs.
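Batch gradient descent, the first optimizer listed above, can be implemented from scratch in a few lines of NumPy; the synthetic data, learning rate, and iteration count below are arbitrary illustrative choices.

```python
import numpy as np

# Batch gradient descent for linear regression, written from scratch.
rng = np.random.default_rng(0)
X = 2 * rng.random((100, 1))
y = 4 + 3 * X[:, 0] + rng.standard_normal(100)  # true intercept 4, slope 3

Xb = np.c_[np.ones(100), X]  # prepend a bias column of ones
theta = np.zeros(2)          # parameters: [intercept, slope]
eta, m = 0.1, 100            # learning rate, number of samples

for _ in range(1000):
    # Gradient of the MSE cost with respect to theta, over the full batch.
    gradients = (2 / m) * Xb.T @ (Xb @ theta - y)
    theta -= eta * gradients

print(theta)  # should be close to [4, 3]
```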

MODULE – III: Classification (12 Hours)
Training a Binary Classifier - Performance Measures - Measuring Accuracy Using Cross-Validation - Confusion Matrix - Precision and Recall - Precision/Recall Trade-off - The ROC Curve - Multiclass Classification - Error Analysis - Multilabel Classification - Multioutput Classification. Linear SVM Classification - Soft Margin Classification - Nonlinear SVM Classification - Polynomial Kernel - Adding Similarity Features - Gaussian RBF Kernel - Computational Complexity.
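The performance measures named above (cross-validation accuracy, confusion matrix, precision and recall) can be sketched with scikit-learn; the dataset and classifier are illustrative choices.

```python
# Measuring binary-classifier performance: cross-validation accuracy,
# confusion matrix, and precision/recall.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, precision_score, recall_score
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
clf = LogisticRegression(max_iter=5000)

# Accuracy estimated with 5-fold cross-validation.
scores = cross_val_score(clf, X, y, cv=5)
print("CV accuracy:", scores.mean())

# Confusion matrix and precision/recall on the full set (for brevity;
# in practice compute these on held-out predictions).
clf.fit(X, y)
pred = clf.predict(X)
print(confusion_matrix(y, pred))
print("precision:", precision_score(y, pred))
print("recall:", recall_score(y, pred))
```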

MODULE – IV: Decision Trees and Random Forests (12 Hours)
Training and Visualizing a Decision Tree - Making Predictions - Estimating Class Probabilities - The CART Training Algorithm - Computational Complexity - Gini Impurity or Entropy? - Regularization Hyperparameters - Regression – Instability. Ensemble Learning and Random Forests: Voting Classifiers - Bagging and Pasting in Scikit-Learn - Out-of-Bag Evaluation - Random Patches and Random Subspaces - Random Forests - Extra-Trees - Feature Importance – Boosting - AdaBoost - Gradient Boosting – Stacking. Dimensionality Reduction: The Curse of Dimensionality - Main Approaches for Dimensionality Reduction - PCA - Preserving the Variance - Principal Components - PCA for Compression - Incremental PCA - Randomized PCA - Kernel PCA – LLE.
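Two ideas from this module, random-forest feature importance and variance-preserving PCA, can be sketched as follows (scikit-learn assumed; the dataset and parameters are illustrative):

```python
# Random forest feature importances and PCA variance preservation.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# An ensemble of trees; feature_importances_ sums to 1 across features.
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X, y)
print(forest.feature_importances_)

# PCA: keep just enough components to preserve 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape, pca.explained_variance_ratio_.sum())
```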

MODULE – V: Artificial Neural Networks (12 Hours)
From Biological to Artificial Neurons - Biological Neurons - Logical Computations with Neurons - The Perceptron - Multi-Layer Perceptron and Backpropagation - Training an MLP with TensorFlow's High-Level API - Training a DNN Using Plain TensorFlow - Fine-Tuning Neural Network Hyperparameters - Number of Hidden Layers - Number of Neurons per Hidden Layer - Activation Functions. Vanishing/Exploding Gradients Problems - Reusing Pretrained Layers - Faster Optimizers - Avoiding Overfitting Through Regularization.
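The module trains MLPs with TensorFlow; as a minimal sketch of the same ideas (hidden layers, activation functions, backpropagation training), scikit-learn's MLPClassifier keeps the example short. The layer sizes and dataset are illustrative choices, not the module's prescribed setup.

```python
# A small multi-layer perceptron on the digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Scaling the inputs helps keep gradients well behaved
# (related to the vanishing/exploding-gradients problem).
scaler = StandardScaler().fit(X_train)

mlp = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                    max_iter=300, random_state=42)
mlp.fit(scaler.transform(X_train), y_train)
print("test accuracy:", mlp.score(scaler.transform(X_test), y_test))
```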

TEXTBOOKS:
1. Géron, Aurélien, "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow", 3rd Edition, O'Reilly Media, Inc., 2022.
REFERENCE BOOKS:
1. Mitchell, Tom M., "Machine Learning", 1st Edition, New York: McGraw-Hill, 1997.
2. Müller, Andreas C., and Sarah Guido, "Introduction to Machine Learning with Python: A Guide for Data Scientists", 1st Edition, O'Reilly Media, Inc., 2016.

TITLE: PREDICTIVE ANALYTICS AND VISUALIZATION
SUBJECT CODE: 22MCAG302
HOURS PER WEEK: 4
CREDITS: 4

COURSE OBJECTIVES:
COB1 To enhance the skills to both design and critique visualizations.
COB2 To develop the understanding and importance of visualization as a part of data analysis.
COB3 To impart knowledge about the components involved in visualization design.
COB4 To enhance the skills on visualization of time series, proportions and associations.
COB5 To develop the skill of visualizing data helpful for people with color-vision deficiency.
COURSE OUTCOMES: (Bloom's Taxonomy Level)
CO1 Understand basics of data visualization. (Level 2)
CO2 Implement visualization of distributions. (Level 3)
CO3 Write programs on visualization of time series, proportions and associations. (Level 4)
CO4 Apply visualization on trends and uncertainty. (Level 4)
CO5 Explain principles of proportions. (Level 4)

SYLLABUS

MODULE – I: DATA GATHERING AND CLEANING (12 Hours)
Introduction to data, types of data, cleaning data: checking for missing values, handling missing values; reading data: reading and cleaning CSV data, merging and integrating data, reading data from the JSON format, reading data from the HTML format, reading data from the XML format. Data exploring and analysing: Series data structure: creating, accessing data from a series with a position, exploring and analysing a series; DataFrame data structure: creating a data frame, updating and accessing a data frame's column selection, exploring and analysing a data frame.
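The missing-value checks and handling described above can be sketched with pandas; the inline CSV data below is made up for illustration.

```python
import io

import pandas as pd

# Checking for and handling missing values in CSV data.
csv = io.StringIO("city,temp\nDelhi,31\nMumbai,\nChennai,34\n")
df = pd.read_csv(csv)

print(df.isnull().sum())  # count missing values per column

# Impute the missing temperature with the column mean.
df["temp"] = df["temp"].fillna(df["temp"].mean())
print(df)
```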

MODULE – II: VISUALIZING DATA (12 Hours)
Mapping Data onto Aesthetics, Aesthetics and Types of Data, Scales Map Data Values onto Aesthetics; Coordinate Systems and Axes: Cartesian Coordinates, Nonlinear Axes, Coordinate Systems with Curved Axes; Color Scales: Color as a Tool to Distinguish, Color to Represent Data Values, Color as a Tool to Highlight; Directory of Visualizations: Amounts, Distributions, Proportions, x–y Relationships, Geospatial Data.

MODULE – III: BASICS OF VISUALIZATION (12 Hours)
Direct plotting: line plot, bar plot, pie chart, box plot, histogram plot, scatter plot; Seaborn plotting system: strip plot, box plot, swarm plot, joint plot; Matplotlib plotting: line plot, bar chart, histogram plot, scatter plot, stack plot, pie chart. Visualizing Proportions Separately as Parts of the Total; Visualizing Nested Proportions: Nested Proportions Gone Wrong, Mosaic Plots and Treemaps, Nested Pies, Parallel Sets. Visualizing Associations Among Two or More Quantitative Variables: Scatter Plots, Correlograms, Dimension Reduction, Paired Data. Visualizing Time Series and Other Functions of an Independent Variable: Individual Time Series, Multiple Time Series and Dose–Response Curves.

MODULE – IV: VISUALIZING UNCERTAINTY (12 Hours)
Visualizing Trends: Smoothing, Showing Trends with a Defined Functional Form, Detrending and Time-Series Decomposition; Visualizing Geospatial Data: Projections, Layers, Choropleth Mapping, Cartograms; Visualizing Uncertainty: Framing Probabilities as Frequencies, Visualizing the Uncertainty of Point Estimates, Visualizing the Uncertainty of Curve Fits, Hypothetical Outcome Plots.

MODULE – V: THE PRINCIPLE OF PROPORTIONAL INK (12 Hours)
The Principle of Proportional Ink: Visualizations Along Linear Axes, Visualizations Along Logarithmic Axes, Direct Area Visualizations; Handling Overlapping Points: Partial Transparency and Jittering, 2D Histograms, Contour Lines; Common Pitfalls of Color Use: Encoding Too Much or Irrelevant Information, Using Non-monotonic Color Scales to Encode Data Values, Not Designing for Color-Vision Deficiency.

TEXT BOOKS:
1. Claus Wilke, "Fundamentals of Data Visualization: A Primer on Making Informative and Compelling Figures", 1st Edition, O'Reilly Media Inc, 2019.
2. Ossama Embarak, "Data Analysis and Visualization Using Python: Analyze Data to Create Visualizations for BI Systems", 1st Edition, Apress, 2018.
REFERENCE BOOKS:
1. Tony Fischetti, Brett Lantz, "R: Data Analysis and Visualization", 1st Edition, O'Reilly, 2016.
2. https://www.netquest.com/hubfs/docs/ebook-data-visualization-EN.pdf

TITLE: INFERENTIAL STATISTICS
SUBJECT CODE: 22MCAG303
HOURS PER WEEK: 4
CREDITS: 4

COURSE OBJECTIVES:
COB1 To study basic inferential statistics and sampling distribution.
COB2 To understand the concept of parameter estimation using fundamental tests and testing of hypotheses.
COB3 To understand the techniques of variance analysis.
COB4 To gain knowledge in predictive analytics techniques.
COB5 To perform a case study with any available sample data sets.

COURSE OUTCOMES:
CO1 Understand the concept of sampling.

CO2 Apply the knowledge to derive hypotheses for given data.

CO3 Demonstrate the skills to perform various tests in the given data.

CO4 Ability to derive inference using Predictive Analytics.

CO5 Perform statistical analytics on a data set.

SYLLABUS

MODULE – I: INFERENTIAL STATISTICS - I (12 Hours)
Populations – samples – random sampling – probability and statistics. Sampling distribution – creating a sampling distribution – mean of all sample means – standard error of the mean – other sampling distributions. Hypothesis testing – z-test – z-test procedure – statement of the problem – null hypothesis – alternate hypotheses – decision rule – calculations – decisions – interpretations.
Assessment / Activity: Inferential Statistics Fundamentals LinkedIn course. CO: CO1. PO Mapping: PO1, PO2.

MODULE – II: INFERENTIAL STATISTICS - II (12 Hours)
Why hypothesis tests? – Strong or weak decisions – one-tailed and two-tailed tests – case studies – influence of sample size – power and sample size. Estimation – point estimate – confidence interval – level of confidence – effect of sample size.
Assessment / Activity: Real-life case study discussions on inferential statistics. CO: CO2. PO Mapping: PO1, PO2, PO3.

MODULE – III: T-TEST (12 Hours)
T-test for one sample – sampling distribution of t – t-test procedure – degrees of freedom – estimating the standard error – case studies. T-test for two independent samples – statistical hypotheses – sampling distribution – test procedure – p-value – statistical significance – estimating effect size – meta-analysis. T-test for two related samples.
Assessment / Activity: Certification on an online course on inferential statistics. CO: CO3. PO Mapping: PO1, PO2, PO3, PO4.

MODULE – IV: F-TEST (12 Hours)
F-test – ANOVA – estimating effect size – multiple comparisons – case studies. Analysis of variance with repeated measures. Two-factor experiments – three F-tests – two-factor ANOVA – other types of ANOVA. Introduction to chi-square tests.
Assessment / Activity: Mathematical problem solving on the F-test through quizzes. CO: CO4. PO Mapping: PO1, PO2, PO3, PO4, PO5.

MODULE – V: PREDICTIVE ANALYTICS (12 Hours)
Linear least squares – implementation – goodness of fit – testing a linear model – weighted resampling. Regression using StatsModels – multiple regression – nonlinear relationships – logistic regression. Estimating parameters – accuracy. Time series analysis – moving averages – missing values – serial correlation – autocorrelation. Introduction to survival analysis.
Assessment / Activity: Seminar on various applications of predictive analytics. CO: CO5. PO Mapping: PO1, PO2, PO3, PO4, PO5.
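The one-sample t-test of Module III can be sketched with SciPy; the sample below is synthetic, and the hypothesized population mean and significance level are chosen for illustration.

```python
# One-sample t-test: does a sample's mean differ from a hypothesized
# population mean?
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=52, scale=5, size=30)  # synthetic data, true mean 52

# Null hypothesis H0: population mean = 50.
t_stat, p_value = stats.ttest_1samp(sample, popmean=50)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Decision rule at the 5% significance level.
print("reject H0" if p_value < 0.05 else "fail to reject H0")
```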

TEXT BOOKS
1. Robert S. Witte and John S. Witte, "Statistics", Eleventh Edition, Wiley Publications, 2017.
2. Allen B. Downey, "Think Stats: Exploratory Data Analysis in Python", Green Tea Press, 2014. [Module V]
REFERENCES
1. David Spiegelhalter, "The Art of Statistics: Learning from Data", Pelican Books, 2019.
2. Peter Bruce, Andrew Bruce, and Peter Gedeck, "Practical Statistics for Data Scientists", 2nd Edition, O'Reilly Publishers, 2020.
3. Charles R. Severance, "Python for Everybody: Exploring Data in Python 3", 2nd Edition, Shroff Publishers, 2017.
4. Bradley Efron and Trevor Hastie, "Computer Age Statistical Inference", Cambridge Press, 2016.

TITLE: Introduction to Big Data
SUBJECT CODE: 22MCAG304
HOURS PER WEEK: 4
CREDITS: 4

COURSE OBJECTIVES:
COB1 To learn the basic concepts of Big Data.
COB2 To understand different types of data.
COB3 To understand the architecture of Hadoop and YARN.

COURSE OUTCOMES: (Bloom's Taxonomy Level)
CO1 Understand the concept of Big Data. (L2)
CO2 Learn the concept of Hadoop. (L3)
CO3 Outline the concept of the storage layer and processing layer of Hadoop. (L4)
CO4 Understand the internals of MapReduce and YARN. (L5)
CO5 Understand the different modes and distributions of Hadoop. (L6)
SYLLABUS

MODULE – I: Understanding Big Data (12 Hours)
Defining data; types of data: structured data, semi-structured data, unstructured data; how data is being generated, different sources of data generation, the rate at which data is being generated; the different V's: Volume, Variety, Velocity, Veracity, Value; how a single person contributes towards Big Data; the significance of Big Data, reasons for Big Data; understanding RDBMS and why it fails to store Big Data; the future of Big Data; Big Data use cases for major IT industries.
MODULE – II: Introduction to Hadoop (12 Hours)
What is Hadoop, the Apache community, cluster, node, commodity hardware, rack awareness, history of Hadoop, the need for Hadoop, why Hadoop is important, the Apache Hadoop ecosystem, different Hadoop offerings, Hadoop 1.x architecture, the Apache Hadoop framework, master–slave architecture, advantages of Hadoop.
MODULE – III: Big Data Technologies (12 Hours)
Hadoop Distributed File System, design of HDFS, HDFS concepts, how files are stored in HDFS, Hadoop file systems, replication factor, NameNode, Secondary NameNode, JobTracker, TaskTracker, DataNode, FSImage, edit logs, the checkpointing concept, HDFS federation, HDFS high availability.

MODULE – IV: MapReduce (12 Hours)
What is MapReduce, history of MapReduce, how MapReduce works, input files, input format types and output format types: Text Input Format, Key-Value Input Format, Sequence File Input Format; input split, RecordReader, MapReduce overview, mapper phase, reducer phase, sort and shuffle phase, importance of MapReduce. Data flow, counters, combiner function, partition function, joins: map-side join, reduce-side join; MapReduce web UI, job scheduling, task scheduling, fault tolerance, writing a MapReduce application: driver class, mapper class, reducer class; serialization, file-based data structures, writing a simple MapReduce program to count the number of words, MapReduce workflows.
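The map, sort-and-shuffle, and reduce phases above can be traced with a word-count sketch in plain Python; with Hadoop Streaming the mapper and reducer would be separate scripts reading stdin, but the phases are the same.

```python
# Word count in the MapReduce style: map, sort-and-shuffle, reduce.
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in the input line.
    for word in line.lower().split():
        yield (word, 1)

def reducer(word, counts):
    # Reduce phase: sum the counts emitted for one word.
    return (word, sum(counts))

lines = ["big data big ideas", "data beats opinion"]

# Sort-and-shuffle phase: sort intermediate pairs so equal keys are adjacent,
# then group them by key before reducing.
pairs = sorted(kv for line in lines for kv in mapper(line))
result = dict(reducer(w, (c for _, c in grp))
              for w, grp in groupby(pairs, key=itemgetter(0)))
print(result)  # {'beats': 1, 'big': 2, 'data': 2, 'ideas': 1, 'opinion': 1}
```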

MODULE – V: YARN and Hadoop Cluster (12 Hours)
YARN, YARN architecture, YARN components: Resource Manager, Node Manager, Application Master; the concept of a container, differences between Hadoop 1.x and 2.x architecture, execution of a job in a YARN cluster, comparing and contrasting Hadoop with relational databases. Cluster specification, cluster setup and installation, creating a Hadoop user, installing Hadoop, SSH configuration, Hadoop configuration, Hadoop daemon properties.

Text Books:
1. Tom White, "Hadoop: The Definitive Guide", 4th Edition, O'Reilly Media, 2015.
2. Jim Keogh, "J2EE: The Complete Reference", 5th Edition, Tata McGraw Hill, 2011.

Reference Books:
1. Dirk deRoos, Paul C. Zikopoulos, Bruce Brown, "Hadoop for Dummies", Wiley, 2014.
2. Chuck Lam, "Hadoop in Action", 2nd Edition, Manning Publications, 2014.

TITLE: INTERNET OF THINGS
SUBJECT CODE: 22MCAG306
HOURS PER WEEK: 4
CREDITS: 4

COURSE OBJECTIVES
COB1 To get the students on a fast track on components essential for IoT.
COB2 To learn about the design and prototyping of IoT for daily life.
COB3 To give hands-on experience in the implementation.

COURSE OUTCOMES
CO1 Understand the basics of the IoT concepts. (L2)
CO2 Create a base ground to build a prototype of an IoT system. (L4)
CO3 Understand the working and architecture of the chips used. (L3)
CO4 Understand and build the business model and how to implement the prototype in real-time cases. (L4)
CO5 Enforce the ethics and protocols of control for production. (L5)

Module 1 (12 hours): Introduction to Internet of Things
Definition and characteristics of IoT, wireless sensor networks, cloud computing, the concept of the cloud environment, issues with migration – application performance degradation, network congestion, migration time.

Module 2 (12 hours): Design Principles and Connected Devices
Calm and ambient technology, IoT design and privacy, web thinking for connected devices, internet communication overview, prototyping an IoT device (Rapid IoT Prototyping Kit), thinking about prototyping – sketching, familiarity, and the cost versus ease of prototyping; prototyping and production.

Module 3 (12 hours): Prototyping Embedded Devices
Electronics – sensors, actuators, scaling up the electronics; embedded computing basics – microcontrollers, systems-on-chip, choosing the platform; Arduino – developing on Arduino and notes on the hardware; Raspberry Pi – development on Raspberry Pi, notes on hardware openness.

Module 4 (12 hours): Prototyping Online Components
Getting started with APIs, mashing up APIs, scraping, legalities, implementation of the API, using curl to test, real-time reaction, polling, Comet. Techniques for writing embedded code – memory management, types of memory, making the most of your RAM, performance and maintenance of battery life, libraries, debugging.

Module 5 (12 hours): Characterizing the Internet of Things
Ethics, privacy, control, disrupting control, environment, reality – business models, space and time, IoT from craft to production models – make things and sell things, subscription, customization. Domain-Specific IoTs – Home, City, Environment, Energy, Agriculture, and Industry.

Textbooks:
1. Adrian McEwen and Hakim Cassimally, "Designing the Internet of Things", Wiley Publishers, 2013.
2. Sudha Jamthe, "IoT Disruptions: The Internet of Things - Innovations & Jobs", Kindle Edition.
3. Rashid Khan, Kajari Ghoshdastidar, Ajith Vasudevan, "Learning IoT with Particle Photon and Electron", Kindle Edition.
4. Arshdeep Bahga, Vijay Madisetti, "Internet of Things: A Hands-On Approach".

29
TITLE MACHINE LEARNING LAB
SUBJECT CODE 22MCAG301L
HOURS PER WEEK 2
CREDITS 1
EXPERIMENTNO. TITLE
Experiment - 1 Installation of all the pre-requisites to build Machine Learning Models:
a. Anaconda
b. Tensorflow
c. Keras
d. Scikit-learn
e. CUDA (if the system has GPU support)
Experiment – 2 Write a Python program to implement Simple Linear Regression.
Experiment – 3 Write a Python program to implement Multiple Linear Regression for house price prediction using sklearn.
Experiment – 4 Write a Python program to implement a KNN classifier for a benchmark dataset using sklearn.
Experiment – 5 Write a Python program to implement Logistic Regression for a suitable benchmark dataset using sklearn.
Experiment – 6 Write a Python program to implement the Naïve Bayes classifier for a sample training data set stored as
a .csv file. Compute the accuracy of the classifier, considering a few test data sets.
Experiment – 7 Write a Python program to classify data using Support Vector Machines (SVMs) with the RBF kernel.
Experiment – 8 Write a program to demonstrate the working of the decision-tree-based ID3 algorithm. Use an
appropriate data set for building the decision tree and apply this knowledge to classify a new sample.
Experiment – 9 Write a Python program to implement XGBoost.
Experiment – 10 Write a Python program to implement K-Means Clustering.

Experiment – 11 Write a Python program to build an Artificial Neural Network by implementing the Backpropagation
algorithm and test the same using appropriate data sets.
Experiment – 12 Write a Python program to perform hyperparameter optimization in an Artificial Neural Network using
Grid Search.
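As a minimal sketch of Experiment 2, simple linear regression can be fit in closed form with NumPy alone (the toy data below is invented; labs may equally use sklearn's `LinearRegression`):

```python
import numpy as np

# Toy data: y ≈ 3x + 2 with a little Gaussian noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3 * x + 2 + rng.normal(0, 0.1, size=x.shape)

# Closed-form least-squares estimates: slope = cov(x, y) / var(x)
slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
intercept = y.mean() - slope * x.mean()

print(round(slope, 1), round(intercept, 1))  # close to 3.0 and 2.0
```

The same fit is what `sklearn.linear_model.LinearRegression` computes internally for one feature.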

TITLE PREDICTIVE ANALYTICS AND VISUALIZATION LAB

SUBJECT CODE 22MCAG302L


HOURS PER WEEK 2
CREDITS 1
EXPERIMENT NO. TITLE
1 Download the House Pricing dataset from Kaggle and map the values to Aesthetics
2 Use different Color scales on the Rainfall Prediction dataset
3 Create different Bar plots for variables in any dataset
4 Show an example of skewed data and removal of skewness

5 For a sales dataset do a Time Series visualization


6 Build a Scatter plot and suggest dimension reduction
7 Use geospatial data projections on datasets from http://www.gisinindia.com/directory/gis-data-for-india
8 Create a trend line with a confidence band in any suitable dataset
9 Illustrate Partial Transparency and Jittering
10 Illustrate usage of different color codes
11 Create 500 random temperature readings for six cities over a season and then plot the generated data using Matplotlib.
12 Load the well-known Tips dataset, which records the tips received by restaurant staff along with various indicator
data; then implement factor plots to visualize the total bill per day according to staff gender
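Experiment 11 can be sketched as below; the city labels and per-city seasonal means are invented for illustration, and the figure is rendered off-screen with the Agg backend so no display is needed:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
cities = ["City A", "City B", "City C", "City D", "City E", "City F"]

fig, ax = plt.subplots(figsize=(10, 4))
for i, city in enumerate(cities):
    # 500 noisy readings scattered around a per-city seasonal mean
    readings = 20 + 2 * i + rng.normal(0, 3, size=500)
    ax.plot(readings, label=city, linewidth=0.8)

ax.set_xlabel("Reading index")
ax.set_ylabel("Temperature (°C)")
ax.legend(ncol=3)
fig.savefig("season_temperatures.png")
```

A box plot per city (`ax.boxplot`) is an equally valid presentation for this experiment.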

SEMESTER IV

TITLE DEEP LEARNING


SUBJECT CODE MCAC401
HOURS PER WEEK 4
CREDITS 4

COURSE OBJECTIVES:
COB1 To enable the students to understand the underlying mathematical operations in any Machine Learning
Model.
COB2 To develop the knowledge about the different types of models in Machine Learning and Deep Learning.
COB3 To enhance the skill to develop different Neural Networks.
COB4 To build the knowledge of fine-tuning different parameters and hyperparameters in modelling.
COB5 To be able to apply the concepts on to a real-time application and make a prediction based on the
historical data.
COURSE OUTCOMES: Bloom’s Taxonomy Level
CO1 Understand the mathematical foundations for ML Models. Level 2

CO2 Illustrate the various parameters and hyperparameters involved in developing an Level 3
Artificial Neural Network.

CO3 Categorize different models in Deep Learning based on the nature of the data. Level 4

CO4 Discriminate between supervised and unsupervised models in Deep Learning. Level 5

CO5 Compare the different hyperparameters and fine-tune them. Level 5

MODULE CONTENTS HOURS

MODULE – I: Review of Machine Learning (12 Hours)
Learning Machines – Biological Inspiration – Deep Learning. The Math behind Machine
Learning: Linear Algebra – Scalars – Vectors – Matrices – Tensors – Hyperplanes – Relevant
Mathematical Operations – Converting Data into Vectors – Solving Systems of Equations –
Statistics for Machine Learning – Working of Machine Learning Models – Regression –
Classification – Clustering – Optimization Methods – Evaluating Models.
MODULE – II: Foundations of Neural Networks (12 Hours)
Neural Networks – The Perceptron – Multilayer Feed-Forward Networks – Training Neural
Networks – Backpropagation Learning – Activation Functions: Linear – Sigmoid – Tanh –
Hard Tanh – Softmax – Rectified Linear. Loss Functions: Loss Functions for Regression –
Loss Functions for Classification – Loss Functions for Reconstruction. Hyperparameters:
Learning Rate – Regularization – Momentum. Defining Deep Learning: Evolutionary Progress
– Advances in Deep Neural Network Architecture – From Feature Engineering to Automated
Feature Learning – Generative Modelling.
MODULE – III: Architectural Principles of Deep Neural Networks – I (12 Hours)
Parameters – Layers – Activation Functions – Loss Functions – Optimization Algorithms –
Hyperparameters – Regularization. Building Blocks of Deep Networks: Multilayer Feed-
Forward Networks – Restricted Boltzmann Machines (RBM) – Unsupervised Pretrained
Networks: Autoencoders – Variants of Autoencoders – Variational Autoencoders (VAE) –
Deep Belief Networks (DBN) – Generative Adversarial Networks (GAN).

MODULE – IV: Architectural Principles of Deep Neural Networks – II (12 Hours)
Convolutional Neural Networks (CNN): Architecture Overview – Input Layers –
Convolutional Layers – Pooling Layers – Fully Connected Layers. Recurrent Neural
Networks (RNN): Modelling the Time Dimension – Sequences and Time-Series Data – 3D
Volumetric Input – Recurrent Neural Network Architecture and Time-Steps – LSTM Networks:
Architecture. Recursive Neural Networks: Architecture and Applications.

MODULE – V: Building Deep Neural Networks (12 Hours)
Build Deep Learning Models in Keras: Input Data – Neuron – Activation Functions: Sigmoid
– ReLU – Model: Layers – Loss Function – Optimizers: Stochastic Gradient Descent, Adam,
Other Optimizers – Metrics – Model Configuration and Training – Model Evaluation. Case
Study 1: Regression – Predict the sales for a few identified stores, on a given day, belonging
to the Rossmann drugstore chain in Germany. Tuning the Neural Networks: Regularization
(L1, L2, Dropout) – Hyperparameter Tuning – Approaches for Hyperparameter Tuning –
Tailoring the Test Data – Saving the Models – Retraining the Models with New Data.
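The hyperparameter-tuning step in Module V can be illustrated framework-free. The sketch below (a hypothetical single-sigmoid-neuron model on synthetic data, not the Rossmann case study) grid-searches learning rate and epoch count and keeps the configuration with the lowest training loss:

```python
import numpy as np

def train(lr, epochs, X, y):
    """Train a single sigmoid neuron with batch gradient descent; return final loss."""
    rng = np.random.default_rng(0)
    w = rng.normal(0, 0.1, X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid activation
        grad_w = X.T @ (p - y) / len(y)      # gradient of binary cross-entropy
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    p = 1 / (1 + np.exp(-(X @ w + b)))
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Toy, linearly separable data
rng = np.random.default_rng(1)
X = rng.normal(0, 1, (200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Grid search: evaluate every (lr, epochs) pair and keep the best
grid = [(lr, ep) for lr in (0.01, 0.1, 1.0) for ep in (50, 200)]
best = min(grid, key=lambda cfg: train(cfg[0], cfg[1], X, y))
print("best (lr, epochs):", best)
```

In Keras the same loop would typically be delegated to `GridSearchCV` or Keras Tuner, with validation loss rather than training loss as the selection criterion.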

TEXTBOOKS:
1. Patterson, Josh, and Adam Gibson, “Deep Learning: A Practitioner's Approach”, O'Reilly Media, Inc., 2017.
2. Moolayil, Jojo, “Learn Keras for Deep Neural Networks”, Apress, 2019.
REFERENCE BOOKS:
1. Goodfellow, Ian, Yoshua Bengio, and Aaron Courville, “Deep Learning”, MIT Press, 2016.
2. Osinga, Douwe, “Deep Learning Cookbook: Practical Recipes to Get Started Quickly”, O'Reilly Media, Inc., 2018.
3. Gulli, Antonio, and Sujit Pal, “Deep Learning with Keras”, Packt Publishing Ltd, 2017.

TITLE NATURAL LANGUAGE PROCESSING

SUBJECT CODE 22MCAGE4022
HOURS PER WEEK 4

CREDITS 4

COURSE OBJECTIVES:
COB1 Understand the fundamental concepts and techniques of natural language processing (NLP).
COB2 Gain an in-depth understanding of the computational properties of natural languages and the
commonly used algorithms for processing linguistic information.

COB3 To provide an understanding of concepts such as vectorization, POS tagging, stemming, and lemmatization.
COB4 To enhance the skill to develop NLP models such as LSTMs, GRUs, and attention models using Python's
libraries.
COB5 To explain the process of applying predictive models to generate language models.

COB6 To develop the skill of using Python’s libraries to implement various NLP models.
COURSE OUTCOMES:
CO1 Extract information from text automatically using concepts and methods from
natural language processing (NLP), including stemming, n-grams, POS tagging, L2
and parsing.

CO2 Analyze the syntax, semantics, and pragmatics of a statement written in a natural
language. L2

CO3 Apply scripts and applications in Python to carry out natural language
processing using libraries such as NLTK, Gensim, and spaCy. L3

CO4 Design NLP-based AI systems for question answering, text summarization, and
machine translation. L4

CO5 Apply predictive models to generate predictions for new data. L3


CO6 Evaluate the performance of NLP tools and systems
L5

SYLLABUS
MODULE I (12 Hours)
Contents: Introduction to NLP – Various Stages of NLP – The Ambiguity of Language: Why NLP Is Difficult.
Parts of Speech: Nouns and Pronouns; Words: Determiners and Adjectives, Verbs; Phrase Structure.
Statistics Essentials – Information Theory: Entropy, Perplexity, The Relation to Language, Cross Entropy.
Text Preprocessing and Morphology – Character Encoding, Word Segmentation, Sentence Segmentation,
Introduction to Corpora, Corpora Analysis. Inflectional and Derivational Morphology, Morphological
Analysis and Generation using Finite State Automata and Finite State Transducers.
Assessment and Activity: MCQ-based assignment for comprehension check.
CO Mapping: CO1, CO2. PO Mapping: PO1, PO2.

MODULE II (12 Hours)
Contents: Language Modelling – Words: Collocations – Frequency – Mean and Variance – Hypothesis
Testing: The t-test, Hypothesis Testing of Differences, Pearson's Chi-Square Test, Likelihood Ratios.
Statistical Inference: n-gram Models over Sparse Data – Bins: Forming Equivalence Classes – N-gram
Model – Statistical Estimators – Combining Estimators.
Word Sense Disambiguation – Methodological Preliminaries, Supervised Disambiguation: Bayesian
Classification, An Information-Theoretic Approach, Dictionary-Based Disambiguation: Disambiguation
Based on Sense, Thesaurus-Based Disambiguation, Disambiguation Based on Translations in a
Second-Language Corpus.
Assessment and Activity: LinkedIn certification course, assignment.
CO Mapping: CO1, CO2, CO6. PO Mapping: PO1, PO2, PO11.
MODULE III (12 Hours)
Contents: Markov Models and POS Tagging – Markov Model: Hidden Markov Model, Fundamentals,
Probability of Properties, Parameter Estimation, Variants, Multiple Input Observations. The Information
Sources in Tagging: Markov Model Taggers, Viterbi Algorithm, Applying HMMs to POS Tagging,
Applications of Tagging.
Assessment and Activity: Assignment – review of several language constructs and their variation.
CO Mapping: CO2, CO3. PO Mapping: PO1, PO4.
MODULE IV (12 Hours)
Contents: Syntax and Semantics – Shallow Parsing and Chunking, Lexical Semantics, WordNet, Thematic
Roles, Semantic Role Labelling with CRFs. Statistical Alignment and Machine Translation, Text
Alignment, Word Alignment, Information Extraction, Text Mining, Sentiment Analysis, Vector Space
Models.
Assessment and Activity: Case Study – Sentiment Analysis: Model Development and Evaluation.
CO Mapping: CO3, CO5. PO Mapping: PO3, PO5, PO11.

MODULE V (12 Hours)
Contents: Neural Network Models for NLP – Neural Networks for Sentiment Analysis, Recurrent Neural
Networks for Language Modelling, LSTMs and Named Entity Recognition, Neural Machine Translation,
Text Summarization, Question Answering, ChatGPT.
Assessment and Activity: Group presentation on NLP models built using Python libraries.
CO Mapping: CO3, CO4, CO5. PO Mapping: PO2, PO3, PO5, PO11.
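The Viterbi algorithm from Module III fits in a few lines of Python. The two-tag HMM below (tags, words, and all probabilities are invented for illustration) recovers the most likely tag sequence for a short sentence:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable state path for the observation sequence."""
    # V[t][s] = (best probability of reaching state s at step t, best path so far)
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 0.0), [s]) for s in states}]
    for word in obs[1:]:
        V.append({})
        for s in states:
            prob, path = max(
                (V[-2][prev][0] * trans_p[prev][s] * emit_p[s].get(word, 0.0),
                 V[-2][prev][1] + [s])
                for prev in states
            )
            V[-1][s] = (prob, path)
    return max(V[-1].values())[1]

# Invented two-tag toy model
states = ("NOUN", "VERB")
start_p = {"NOUN": 0.6, "VERB": 0.4}
trans_p = {"NOUN": {"NOUN": 0.3, "VERB": 0.7}, "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emit_p = {"NOUN": {"dogs": 0.5, "cats": 0.5}, "VERB": {"chase": 0.9, "sleep": 0.1}}

print(viterbi(["dogs", "chase", "cats"], states, start_p, trans_p, emit_p))
# → ['NOUN', 'VERB', 'NOUN']
```

In practice the probabilities are estimated from a tagged corpus, and log-probabilities are used to avoid underflow on long sentences.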

TEXTBOOKS:
1. Christopher D. Manning and Hinrich Schütze, “Foundations of Statistical Natural Language Processing”, The MIT Press, Cambridge,
Massachusetts, 2003.
2. Daniel Jurafsky and James H. Martin, “Speech and Language Processing”, 3rd Edition, Prentice Hall, 2009.
REFERENCE BOOKS:
1. Nitin Indurkhya and Fred J. Damerau, “Handbook of Natural Language Processing”, Second Edition, CRC Press, 2010.
2. James Allen, “Natural Language Understanding”, 8th Edition, Pearson, 2012.
3. Chris Manning and Hinrich Schütze, “Foundations of Statistical Natural Language Processing”, 2nd Edition, MIT Press, Cambridge,
MA, 2003.
4. Hobson Lane, Cole Howard, and Hannes Hapke, “Natural Language Processing in Action”, Manning Publications, 2019.
5. Alexander Clark, Chris Fox, and Shalom Lappin, “The Handbook of Computational Linguistics and Natural Language Processing”,
Wiley-Blackwell, 2012.
6. Rajesh Arumugam and Rajalingappa Shanmugamani, “Hands-On Natural Language Processing with Python: A practical guide to applying
deep learning architectures to your NLP applications”, Packt Publishing, 2018.

TITLE COMPUTER VISION
SUBJECT CODE MCAGE4021

HOURS PER WEEK


4
CREDITS
4

COURSE OBJECTIVES:
COB1 To review image processing techniques for computer vision.
COB2 To understand shape and region analysis.
COB3 To understand Hough Transform and its applications to detect lines, circles, ellipses.
COB4 To understand three-dimensional image analysis techniques.

COB5 To understand motion analysis and study some applications of computer vision algorithms.
COURSE OUTCOMES:
CO1 Implement fundamental image processing techniques required for computer vision. L3
CO2 Perform shape analysis. L2
CO3 Implement boundary tracking techniques. L3
CO4 Apply chain codes and other region descriptors. L4
CO5 Apply Hough Transform for line, circle, and ellipse detections. L5
SYLLABUS

MODULE – I: Image Processing Foundations (12 Hours)
Contents: Review of image processing techniques – classical filtering operations – thresholding techniques
– edge detection techniques – corner and interest point detection – mathematical morphology – texture.
Assessment and Activity: Image Processing Fundamentals LinkedIn course.
CO Mapping: CO1. PO Mapping: PO1, PO2.

MODULE – II: Shapes and Regions (12 Hours)
Contents: Binary shape analysis – connectedness – object labeling and counting – size filtering – distance
functions – skeletons and thinning – deformable shape analysis – boundary tracking procedures – active
contours – shape models and shape recognition – centroidal profiles – handling occlusion.
Assessment and Activity: Presentation on various topics on shapes and regions.
CO Mapping: CO2. PO Mapping: PO1, PO2, PO3.

MODULE – III: Hough Transform (12 Hours)
Contents: Line detection – Hough Transform (HT) for line detection – foot-of-normal method – line
localization – line fitting – RANSAC for straight line detection – HT-based circular object detection –
accurate center location – speed problem – ellipse detection – Case Study: Human iris location – hole
detection – generalized Hough Transform.
Assessment and Activity: Certification in an online course on Computer Vision.
CO Mapping: CO3. PO Mapping: PO1, PO2, PO3, PO4.

MODULE – IV: 3D Vision and Motion (12 Hours)
Contents: Methods for 3D vision – projection schemes – shape from shading – photometric stereo – shape
from texture – shape from focus – active range finding – surface representations – point-based
representation – volumetric representations – 3D object recognition – 3D reconstruction – introduction to
motion – triangulation.
Assessment and Activity: Mathematical problem solving on 3D vision and motion.
CO Mapping: CO4. PO Mapping: PO1, PO2, PO3, PO4, PO5.

MODULE – V: Applications (12 Hours)
Contents: Application: Photo album – face detection – face recognition – Eigenfaces – active appearance
and 3D shape models of faces. Application: Object detection model – YOLOv2. Application: In-vehicle
vision system – locating the roadway – road markings – identifying road signs – locating pedestrians.
Assessment and Activity: Seminar on various applications of image processing.
CO Mapping: CO5. PO Mapping: PO1, PO2, PO3, PO4, PO5.
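The Hough Transform for line detection (Module III) can be sketched with plain NumPy: edge points vote in (ρ, θ) space using the normal form ρ = x·cos θ + y·sin θ, and the accumulator peak recovers the line's parameters. The synthetic points below stand in for an edge map:

```python
import numpy as np

def hough_lines(points, thetas_deg, rho_res=1.0, max_rho=100.0):
    """Accumulate votes in (rho, theta) space for a set of (x, y) edge points."""
    thetas = np.deg2rad(thetas_deg)
    n_rho = int(2 * max_rho / rho_res) + 1
    acc = np.zeros((n_rho, len(thetas)), dtype=int)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)  # normal form of a line
        idx = np.round((rhos + max_rho) / rho_res).astype(int)
        acc[idx, np.arange(len(thetas))] += 1
    return acc

# Synthetic edge points lying on the vertical line x = 20
points = [(20, y) for y in range(30)]
thetas_deg = np.arange(0, 180)
acc = hough_lines(points, thetas_deg)

# The accumulator peak gives the detected line's (rho, theta)
rho_i, theta_i = np.unravel_index(acc.argmax(), acc.shape)
print("theta =", thetas_deg[theta_i], "deg, rho =", rho_i * 1.0 - 100.0)
# A vertical line x = 20 peaks at theta = 0°, rho = 20
```

OpenCV's `cv2.HoughLines` implements the same voting scheme over a real edge image (e.g., Canny output).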

Textbooks:
1. D. L. Baggio et al., “Mastering OpenCV with Practical Computer Vision Projects”, Packt Publishing, 2018.
2. E. R. Davies, “Computer & Machine Vision”, Fourth Edition, Academic Press, 2021.
3. Jan Erik Solem, “Programming Computer Vision with Python: Tools and algorithms for analyzing images”, O'Reilly Media, 2017.
Reference Books:
1. Mark Nixon and Alberto S. Aguado, “Feature Extraction & Image Processing for Computer Vision”, Third Edition, Academic
Press, 2012.
2. R. Szeliski, “Computer Vision: Algorithms and Applications”, Springer, 2011.
3. Simon J. D. Prince, “Computer Vision: Models, Learning, and Inference”, Cambridge University Press, 2012.

TITLE COGNITIVE COMPUTING

SUBJECT CODE MCAGE4031


HOURS PER WEEK 4

CREDITS 4

COURSE OBJECTIVES
COB1 To understand Cognitive Computing as a discipline with knowledge of its Architecture.
COB2 To understand the working of a Cognitive System with Inference and Decision Support System.
COB3 To understand the connection between Cognitive Computing and Machine Learning.

COB4 To understand how Natural Language is a support for Cognitive Systems.
COB5 To apply Cognitive Systems capabilities in real-time case studies.
COURSE OUTCOMES
CO1 Implement the basic concepts of Cognitive Systems.
CO2 Apply the relationship between Machine Learning, NLP, and Cognitive Systems to design a better architecture.
CO3 Design a Cognitive System and apply it in real-world scenarios.
SYLLABUS
MODULE CONTENT HOURS
I Introduction: Cognitive science and cognitive Computing with AI, 12
Cognitive Computing - Cognitive Psychology - The Architecture of the
Mind - The Nature of Cognitive Psychology – Cognitive architecture –
Cognitive processes – The Cognitive Modelling Paradigms -
Declarative / Logic based Computational cognitive modelling –
connectionist models – Bayesian models. Introduction to Knowledge-
Based AI – Human Cognition on AI – Cognitive Architectures
II Cognitive Computing with Inference and Decision Support 12
Systems: Intelligent decision making, Fuzzy Cognitive Maps (FCMs),
Learning algorithms: Nonlinear Hebbian Learning (NHL) – Data-driven
NHL – Hybrid learning, Fuzzy Grey Cognitive Maps, Dynamic Random
Fuzzy Cognitive Maps.
III Cognitive Computing with Machine Learning: Machine learning 12
Techniques for cognitive decision making – Hypothesis Generation and
Scoring - Natural Language Processing - Representing Knowledge -
Taxonomies and Ontologies - Deep Learning.

IV Natural Language Processing in Support of a Cognitive System: 12
Role of Natural Language Processing in a Cognitive System:
Importance of Context – Connecting Words of Meaning – Understanding
Linguistics – Tokenization – Phonology – Morphology – Lexical Analysis
– Construction of Grammars – Discourse Analysis – Pragmatics – Word
Sense Disambiguation – Semantic Web. Applying Natural Language
Technologies to Business Problems: Enhancing the Shopping Experience
– Leveraging the Connected World of the Internet of Things – Fraud
Detection.
V Case Studies: Cognitive Systems in healthcare – Cognitive assistant 12
for the visually impaired – AI for cancer detection, Predictive Analytics –
Text Analytics – Image Analytics – Speech Analytics – IBM Watson.

Textbooks:
1. Hurwitz, Kaufman, and Bowles, “Cognitive Computing and Big Data Analytics”, Wiley, Indianapolis, IN, 2015,
ISBN: 978-1-118-89662-4.
2. Masood, Adnan, and Hashmi, Adnan, “Cognitive Computing Recipes: Artificial Intelligence Solutions Using Microsoft Cognitive
Services and TensorFlow”, Apress, 2019.
Reference Books:
1. Peter Fingar, Cognitive Computing: A Brief Guide for Game Changers, PHI Publication, 2015.
2. Gerardus Blokdyk, Cognitive Computing Complete Self-Assessment Guide, 2018.

3. Rob High and Tanmay Bakshi, “Cognitive Computing with IBM Watson: Build smart applications using Artificial
Intelligence as a service”, Packt Publishing, 2019.

TITLE BUSINESS INTELLIGENCE

SUBJECT CODE MCAGE4031


HOURS PER WEEK 4

CREDITS 4

COURSE OBJECTIVES
COB1 To Understand the basic concepts in Business Intelligence.
COB2 To understand how data can be presented through Data Mining.
COB3 To Understand the Process of ETL in a Data Warehouse.
COB4 To Understand the application of BI in Real World Scenarios.
COURSE OUTCOMES
CO1 Implement the concepts of BI for Data Preparation.
CO2 Learn the construction of a BI System and apply it in Decision Making
SYLLABUS

MODULE CONTENT HOURS
I Business Intelligence – Introduction: Introduction – History and 12
Evolution: Effective and Timely Decisions, Data, Information and
Knowledge, Architectural Representation, Role of Mathematical
Models, Real-Time Business Intelligence Systems.
II BI – Data Mining and Data Warehousing: Data Mining – 12
Introduction to Data Mining, Architecture of Data Mining and How
Data Mining Works (Process), Functionalities & Classifications of Data
Mining, Representation of Input Data, Analysis Methodologies. Data
Warehousing – Introduction to Data Warehousing, Data Mart, Online
Analytical Processing (OLAP) – Tools, Data Modelling, Difference
between OLAP and OLTP, Schemas – Star and Snowflake Schemas,
ETL Process – Role of ETL.
III BI – DATA PREPARATION: Data Validation - Introduction to Data 12
Validation, Data Transformation – Standardization and Feature
Extraction, Data Reduction – Sampling, Selection, PCA, Data
Discretization.
IV BI – DATA ANALYTICS PROCESS – Introduction to the analytics 12
process, Types of Analytical Techniques in BI – Descriptive, Predictive,
Prescriptive, Social Media Analytics, Behavioural Analytics, Iris Datasets.
V IMPLEMENTATION OF BI – Business Activity Monitoring, 12
Complex Event Processing, Business Process Management, Metadata,
Root Cause Analysis.

Text Books:
1. Carlo Vercellis, “Business Intelligence: Data Mining and Optimization for Decision Making”, Wiley, 2009.
2. Drew Bentley, “Business Intelligence and Analytics”, Library Press, 2017, ISBN: 978-1-9789-2136-8.
3. Larissa T. Moss and Shaku Atre, “Business Intelligence Roadmap: The Complete Project Lifecycle for Decision-Support
Applications”, First Edition, Addison-Wesley Professional, 2003.

Reference Books:
1. Kimball, R., Ross, M., Thornthwaite, W., Mundy, J., and Becker, B., “The Data Warehouse Lifecycle Toolkit: Practical Techniques
for Building Data Warehouse and Business Intelligence Systems”, Second Edition, Wiley & Sons, 2008.
2. Cindi Howson, “Successful Business Intelligence”, Second Edition, McGraw-Hill Education, 2013.
4. Cindi Howson, “Successful Business Intelligence”, Second Edition, McGraw-Hill Education, 2013.

TITLE ROBOTICS

SUBJECT CODE MCAGE4031


HOURS PER WEEK 4

CREDITS 4

COURSE OBJECTIVES
COB1 To study different types of sensors and transducers in Robotics.

COB2 To explore the concepts of vision in robots.

COB3 To learn about image processing techniques for robotics.

COB4 To understand how recognition of objects is done in robots.


COURSE OUTCOMES
CO1 Understand the basic concepts of sensors and transducers and their usage in Robotics.
CO2 Implement image processing techniques in Robotics.
CO3 Apply various techniques for object recognition and feature extraction.
SYLLABUS
MODULE CONTENT HOURS
I SENSORS IN ROBOTICS 12
An Introduction to sensors and Transducers, History and definitions,
Smart Sensing, AI sensing, Need of sensors in Robotics. Position
sensors – optical, non-optical, Velocity sensors, Accelerometers,
Proximity Sensors – Contact, non-contact, Range Sensing, touch and
Slip Sensors, Force and Torque Sensors. Different sensing variables –
smell, Heat or Temperature, Humidity, Light, Speech or Voice
recognition Systems, Tele-presence and related technologies.
II VISION IN ROBOTICS 12
The Nature of Vision – Robot vision: need, applications – image
acquisition – illumination techniques – point sensor, line sensor, planar
sensor, camera transfer characteristic, raster scan, image capture time,
volume sensors, image representation, picture coding techniques. Robot
control through vision sensors, robot vision for locating position, robot
guidance with vision systems, end-effector camera sensor.
III ELEMENTS OF IMAGE PROCESSING TECHNIQUES 12
Discretization, Neighbours of a Pixel, Connectivity, Distance Measures –
Pre-processing: Neighbourhood Averaging, Median Filtering, Smoothing
of Binary Images – Image Enhancement: Histogram Equalization,
Histogram Specification, Local Enhancement – Edge Detection: Gradient
Operators, Laplacian Operators – Thresholding – Morphological Image
Processing.
IV OBJECT RECOGNITION AND FEATURE EXTRACTION 12
Image Segmentation – Edge Linking – Boundary Detection – Region
Growing – Region Splitting and Merging – Boundary Descriptors –
Freeman Chain Code – Regional Descriptors – Recognition – Structural
Methods – Recognition Procedure, Mahalanobis Procedure.
V COLLISION FRONTS ALGORITHM 12
Introduction, skeletons of objects, gradients, propagation: definitions,
propagation algorithm, thinning algorithm, skeleton lengths of topmost
objects.
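The Freeman chain code from Module IV can be sketched directly: each step between consecutive 8-connected boundary pixels maps to one of eight direction codes (0 = east, counting counter-clockwise). The square boundary below is a made-up example:

```python
# 8-connected direction codes: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def freeman_chain_code(boundary):
    """Encode a sequence of adjacent (x, y) boundary pixels as Freeman direction codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:]):
        codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return codes

# Boundary of a unit square traversed counter-clockwise, back to the start
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(freeman_chain_code(square))  # → [0, 2, 4, 6]
```

Because the code depends only on direction changes, normalizing it (e.g., to its smallest rotation) makes the descriptor invariant to the starting pixel.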

Textbooks:
1. Paul W. Chapman, “Smart Sensors”, An Independent Learning Module Series, 2nd Edition, 2015.
2. Richard D. Klafter, Thomas A. Chmielewski, and Michael Negin, “Robotic Engineering: An Integrated Approach”,
PHI Learning, 3rd Edition, 2012.
3. John Iovine, “Robots, Androids and Animatrons”, McGraw-Hill, 5th Edition, 2020.
4. K. S. Fu, R. C. Gonzalez, and C. S. G. Lee, “Robotics: Control, Sensing, Vision and Intelligence”, Tata McGraw-Hill
Education, 4th Edition, 2019.
Reference Books:
1. Mikell P. Groover, Nicholas G. Odrey, Mitchell Weiss, Roger N. Nagel, and Ashish Dutta, “Industrial Robotics:
Technology, Programming and Applications”, Tata McGraw-Hill Education, 4th Edition, 2020.
2. Sabrie Soloman, “Sensors and Control Systems in Manufacturing”, McGraw-Hill Professional Publishing, 5th Edition, 2019.

TITLE RECOMMENDER SYSTEMS

SUBJECT CODE MCAGE4031


HOURS PER WEEK 4

CREDITS 4

COURSE OBJECTIVES
COB1 Understand the Basic Taxonomy of Recommender Systems
COB2 Understand the classification of Recommender Systems
COB3 Understand the mathematical aspects of Recommender System design
COB4 Understand the applications of Recommender Systems in various fields.
COURSE OUTCOMES
CO1 Understand the basic concepts of recommender systems

CO2 Solve mathematical optimization problems pertaining to recommender systems
CO3 Carry out performance evaluation of recommender systems based on various metrics
CO4 Implement machine-learning and data-mining algorithms on recommender-system data sets.
CO5 Design and implement a simple recommender system.

SYLLABUS
MODULE CONTENT HOURS
I Introduction 12
Introduction and basic taxonomy of recommender systems (RSs).
Traditional and non-personalized RSs. Overview of data mining
methods for recommender systems (similarity measures, classification,
Bayes classifiers, ensembles of classifiers, clustering, SVMs,
dimensionality reduction). Overview of convex and linear optimization
principles.
II Content-based recommender systems 12
The long-tail principle. Domain-specific challenges in recommender
systems. Content-based recommender systems. Advantages and
drawbacks. Basic components of content-based RSs. Feature selection.
Item representation. Methods for learning user profiles.
III Collaborative Filtering (CF)-Based RSs: Mathematical Foundations 12
Mathematical optimization in CF RSs. Optimization objective. Baseline
predictor through least squares. Regularization and overfitting.
Temporal models. Step-by-step solution of the RS problem.
Collaborative Filtering (CF)-Based RSs: Systematic Approach
Nearest-neighbour collaborative filtering (CF). User-based and item-based
CF, comparison. Components of neighbourhood methods (rating
normalization, similarity weight computation, neighbourhood
selection). Hybrid recommender systems.
IV Performance Evaluation of RSs – Experimental Settings: 12
Working with RS data sets. Examples. The cold-start problem.
Evaluation metrics. Rating prediction and accuracy. Other metrics
(fairness, coverage, diversity, novelty, serendipity).
Context Awareness and Learning Principles in RSs: Context-aware
recommender systems. Contextual information models for RSs.
Incorporating context in RSs. Learning to rank. Active learning in RSs.
Multi-armed bandits and reinforcement learning in RSs. Dynamic RSs.
V User Behaviour Understanding in RSs: 12
Foundations of behavioural science. User choice and decision models.
Choice models in RSs. Digital nudging and user choice engineering
principles. Applications and examples of recommender systems.
Applications of RSs for content media, social media, and communities:
music and video RSs. Datasets. Group recommender systems. Social
recommendations. Recommending friends: link prediction models.
Similarities and differences of RSs with task assignment in mobile
crowd sensing. Social network diffusion awareness in RSs.
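Nearest-neighbour user-based CF from Module III can be sketched in NumPy: cosine similarity between user rating vectors weights the neighbours' ratings when predicting a missing entry. The tiny rating matrix below is invented, with 0 marking "unrated":

```python
import numpy as np

def predict(R, user, item):
    """Predict R[user, item] from similarity-weighted ratings of users who rated the item."""
    sims = []
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue  # skip the target user and anyone who has not rated the item
        # Cosine similarity over the two users' full rating vectors
        sim = R[user] @ R[other] / (np.linalg.norm(R[user]) * np.linalg.norm(R[other]))
        sims.append((sim, R[other, item]))
    num = sum(s * r for s, r in sims)
    den = sum(s for s, _ in sims)
    return num / den if den else 0.0

# Rows = users, columns = items; 0 means unrated
R = np.array([[5, 4, 0, 1],
              [4, 5, 4, 1],
              [1, 1, 2, 5]], dtype=float)

print(round(predict(R, user=0, item=2), 2))  # leans toward user 1's rating of 4
```

Production systems normally mean-centre each user's ratings first (rating normalization) and restrict the sum to a top-k neighbourhood rather than all users.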

Textbooks:
1. C. C. Aggarwal, “Recommender Systems: The Textbook”, Springer, 2016.
2. F. Ricci, L. Rokach, B. Shapira, and P. B. Kantor, “Recommender Systems Handbook”, Springer, 2010.
3. J. Leskovec, A. Rajaraman, and J. Ullman, “Mining of Massive Datasets”, 2nd Edition, Cambridge, 2012.
4. M. Chiang, “Networked Life”, Cambridge, 2010. (Chapter 4)
