Syllabus
Introduction:
Research in Engineering and Technology has become multidisciplinary with the
phenomenal growth of Artificial Intelligence, Machine Learning, Big Data, Cloud Computing,
Robotics, Speech Technologies, Computer Vision, Natural Language Processing, etc.
Artificial Intelligence is today one of the most discussed technology topics worldwide.
IIT Patna, with a generous grant from the Department of Science and Technology
(DST), has set up a multidisciplinary centre named Vishlesan I-Hub Foundation under the
Technology Innovation Hub (TIH), which aims to leverage research and engineering
capabilities towards the Sustainable Development Goals and to achieve the mandate of the
National Mission on Interdisciplinary Cyber Physical Systems, especially in the areas of Video,
Speech and Text Analytics. The Detailed Project Report (DPR) of the TIH proposes starting a new M.Tech
programme, which will provide a platform to create skilled manpower in the broad areas of
Text, Speech and Video Analytics. The tuition fees collected from this
programme are expected to contribute towards making the hub self-sustainable. The TIH, under
the Centre of Excellence in Artificial Intelligence, therefore proposes to start a new programme,
M.Tech in “Artificial Intelligence”.
Objectives:
This M.Tech in Artificial Intelligence programme will offer students deep knowledge of core
and applied Artificial Intelligence, especially Speech, Video and Text Analytics. The
programme aims to impart the breadth and depth necessary for pursuing careers in
academia as well as in industry, and to extend undergraduate computing skills with
up-to-date, in-depth expertise in the specialized areas of Speech Technologies,
Computer Vision and Natural Language Processing.
At the end of this programme, students will be able to:
1. Understand the fundamental concepts of Speech Processing, Video Analytics and Text
Analytics.
2. Apply appropriate design principles, frameworks and protocols to develop cyber physical
systems.
3. Demonstrate hands-on knowledge of cutting-edge speech, video and text analytics tools.
4. Design and develop systems for speech, video and text analytics.
Duration: 2 years
Course Fees: Rs. 75,000/Semester
Total intake: 50 (TA: 10; RA: 10; Self-sponsored: 15; Sponsored: 10; Part-time: 5)
Eligibility:
1st SEMESTER
2nd SEMESTER

Sl. No. | Course Number | Course Title                               | L | T | P | C
1       | MA564         | Linear Algebra and Optimization Techniques | 3 | 0 | 0 | 6
2       | CS546         | Big Data Analytics                         | 3 | 0 | 0 | 6
3rd SEMESTER
Sl. No. | Course Number | Course Title     | L | T | P  | C
1       | CS695         | Project Thesis-I | 0 | 0 | 20 | 20
2       | CS592         | Research Seminar | 0 | 0 | 4  | 4
TOTAL: 24
4th SEMESTER
Sl. No. | Course Number | Course Title      | L | T | P  | C
1       | CS696         | Project Thesis-II | 0 | 0 | 24 | 24
TOTAL: 24
Total Credit: 37 | 34 | 24 | 24 | 122
Semester-1: Core Theory
Syllabus:
Books:
References:
1. Rohatgi, V.K. and Saleh, A.K.Md. Ehsanes, An Introduction to Probability and
Statistics, Second Edition, Wiley India, 2009.
2. Alexander M. Mood, Franklin A. Graybill and Duane C. Boes, Introduction to the
Theory of Statistics, Tata McGraw-Hill.
3. Milton, J.S. and Arnold, J.C., Introduction to Probability and Statistics, Fourth
Edition, Tata McGraw-Hill, 2009.
4. Ross, S.M., Introduction to Probability Models, Ninth Edition, Academic
Press, 2008.
5. Casella, G. and Berger, R.L., Statistical Inference, Duxbury Advanced Series, 2007.
Course No.: CS561 | Name: Artificial Intelligence | Credits: 3-0-0-6 | Prerequisites: Nil
Syllabus:
Introduction, Motivation of the course
Problem Solving: Uninformed Search, Informed Search, Local Search, Online Search;
Knowledge and Reasoning: Propositional and Predicate Calculus, Semantic Nets, Frames, Scripts,
Probabilistic Reasoning
Learning: Introduction to machine learning paradigms: unsupervised, supervised and reinforcement
learning; Naive Bayes, Decision Trees, Fundamentals of Neural Networks and Deep Learning
Evolutionary Computation: Genetic Algorithms, Multi-objective Optimization, Differential Evolution,
Particle Swarm and Ant Colony Optimization
Application Topics: Introduction to NLP, Introduction to Fuzzy Sets and Logic, AI in Social
Networks
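The informed-search topic above can be illustrated with a minimal A* sketch; the graph, edge costs and heuristic values below are invented for illustration only.

```python
import heapq

# Toy sketch of A* informed search; the graph, costs and heuristic are invented.
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 6)],
    "B": [("G", 3)],
    "G": [],
}
h = {"S": 4, "A": 3, "B": 2, "G": 0}  # admissible estimates of cost-to-goal

def a_star(start, goal):
    # Frontier entries are (f = g + h, g, node, path); lowest f pops first.
    frontier = [(h[start], 0, start, [start])]
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in visited:
            continue
        visited.add(node)
        for nxt, cost in graph[node]:
            heapq.heappush(frontier, (g + cost + h[nxt], g + cost, nxt, path + [nxt]))
    return None, float("inf")

print(a_star("S", "G"))  # (['S', 'A', 'B', 'G'], 6)
```

Because the heuristic h never overestimates the true remaining cost, A* is guaranteed to return an optimal path here.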
Books:
References:
1. S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach (Third Edition), Prentice
Hall, 2009.
2. E. Rich and K. Knight, Artificial Intelligence, Addison Wesley, 1990.
3. George Klir, U. St. Clair and B. Yuan, Fuzzy Set Theory: Foundations and Applications,
Prentice Hall, 1997.
4. Ian Goodfellow, Yoshua Bengio and Aaron Courville, Deep Learning, MIT Press, 2016.
5. Daphne Koller and Nir Friedman, Probabilistic Graphical Models: Principles and Techniques,
MIT Press, 2009.
Syllabus:
Text Books:
2. Christopher Bishop, Pattern Recognition and Machine Learning, Springer Verlag, 2006.
4. Papoulis and Pillai, Probability, Random Variables and Stochastic Processes, 4th
Edition, Tata McGraw-Hill.
5. A. K. Jain and R. C. Dubes, Algorithms for Clustering Data, Prentice Hall, 1988.
Course No.: CS61/CS571 | Name: AI Lab-I/II | Credits: 0-0-3-3 | Prerequisites: Nil
Syllabus:
Books:
References:
1. PyTorch: https://pytorch.org/assets/deep-learning/Deep-Learning-with-PyTorch.pdf
2. Jordi Torres, First Contact With TensorFlow: Get Started With Deep Learning Programming.
3. https://analyticsindiamag.com/top-10-free-books-and-resources-for-learning-tensorflow/
4. https://keras.io/getting_started/learning_resources/
5. Aurélien Géron, Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow (Second Edition).
Semester II
Course No.:CS546 Name: Big Data Analytics Credits: 3-0-0-6 Prerequisites: Nil
Why Big Data and where did it come from?; Characteristics of Big Data: Volume, Variety, Velocity,
Veracity, Valence, Value; Challenges and applications of Big Data
Introduction to the Big Data Stack, Introduction to some Big Data distribution packages
Overview of Apache Spark, HDFS, YARN; Introduction to MapReduce, the MapReduce Programming Model
with Spark; MapReduce Examples: Word Count, PageRank, etc.
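The Word Count example named above can be sketched with the standard library alone; in the course this would be written against Spark's RDD API (flatMap/reduceByKey), but the map-shuffle-reduce shape is the same. The toy documents below are invented.

```python
from itertools import groupby

# Minimal MapReduce-style word count; toy documents are invented.
docs = ["big data big ideas", "data pipelines move big data"]

# Map: emit (word, 1) for every word in every document.
mapped = [(w, 1) for doc in docs for w in doc.split()]

# Shuffle: group all pairs sharing a key, as if routed to the same reducer.
mapped.sort(key=lambda kv: kv[0])
grouped = {k: [v for _, v in grp] for k, grp in groupby(mapped, key=lambda kv: kv[0])}

# Reduce: sum the counts for each word.
counts = {word: sum(ones) for word, ones in grouped.items()}
print(counts)  # {'big': 3, 'data': 3, 'ideas': 1, 'move': 1, 'pipelines': 1}
```

In Spark the same pipeline is `rdd.flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(add)`, with the shuffle handled by the framework.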
Introduction to Big Data Storage Platforms for Large-Scale Data Storage:
CAP Theorem, Eventual Consistency, Consistency Trade-Offs, ACID and BASE, Introduction to
Zookeeper and Paxos, Introduction to Cassandra, Cassandra Internals, Introduction to HBase, HBase
Internals
Introduction to Big Data Streaming Systems, Big Data Pipelines for Real-Time computing, Introduction to
Spark Streaming, Kafka, Streaming Ecosystem
Overview of Big Data Machine Learning, Introduction to Mahout, Big Data Machine Learning Algorithms in
Mahout: k-means, Naïve Bayes, etc.
Big Data Machine Learning Algorithms in Spark- Introduction to Spark MLlib, Introduction to Deep
Learning for Big Data
Text Books:
Bart Baesens, Analytics in a Big Data World: The Essential Guide to Data Science and its Applications, Wiley,
2014
Reference Book:
2. Chuck Lam, Hadoop in Action, Manning Publications, December 2010, 336 pages, ISBN 9781935182191.
5. Erik Brynjolfsson et al., The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant
Technologies, W. W. Norton & Company, 2014
Natural language processing (NLP) is one of the most important technologies of the information age.
Understanding complex language utterances is also a crucial part of artificial intelligence.
Applications of NLP are everywhere because people communicate almost everything in language:
web search, advertisement, emails, customer service, language translation, radiology reports, etc.
There is a large variety of underlying tasks and machine learning models powering NLP
applications. Recently, deep learning approaches have obtained very high performance across
many different NLP tasks. These models can often be trained with a single end-to-end model and do
not require traditional, task-specific feature engineering. In this course students will
learn to implement, train, debug, visualize and invent their own neural network models. The course
provides a deep excursion into cutting-edge research in deep learning applied to NLP. The final
project will involve training a complex recurrent neural network and applying it to a large-scale NLP
problem. On the model side we will cover word vector representations, window-based neural
networks, recurrent neural networks, long short-term memory models, recursive neural networks,
convolutional neural networks, as well as some novel models involving a memory component.
Through lectures and programming assignments students will learn the necessary engineering tricks
for making neural networks work on practical problems.
Course Contents:
Intro to NLP and Deep Learning: Linear Algebra, Probability, Optimization and Vector Space Models
Advanced word vector representations: language models, softmax, single-layer networks; GloVe:
Global Vectors for Word Representation
Neural Networks and backpropagation: PoS tagging and named entity recognition
Recurrent neural networks -- for language modeling and other tasks: Recurrent neural network
based language model, Extensions of recurrent neural network language model, Opinion Mining
with Deep Recurrent Neural Networks
Recursive neural networks -- for parsing, Convolutional neural networks -- for sentence classification
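The word vector representations listed in the contents can be illustrated with a toy cosine-similarity check; the 3-dimensional vectors below are invented stand-ins for real learned embeddings (GloVe or word2vec vectors typically have 50 to 300 dimensions learned from corpus statistics).

```python
import math

# Invented toy embeddings: semantically related words point in similar directions.
vec = {
    "king":  [0.8, 0.65, 0.1],
    "queen": [0.75, 0.7, 0.12],
    "apple": [0.1, 0.05, 0.9],
}

def cosine(u, v):
    # Similarity of directions, independent of vector length.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

print(cosine(vec["king"], vec["queen"]) > cosine(vec["king"], vec["apple"]))  # True
```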
Dan Jurafsky and James H. Martin. Speech and Language Processing (3rd ed. draft)
Jacob Eisenstein. Natural Language Processing
Yoav Goldberg. A Primer on Neural Network Models for Natural Language Processing
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning
Delip Rao and Brian McMahan. Natural Language Processing with PyTorch
Michael A. Nielsen. Neural Networks and Deep Learning
Eugene Charniak. Introduction to Deep Learning
Conferences: ACL (Association for Computational Linguistics), EACL (European Chapter of the
ACL), COLING (International Conference on Computational Linguistics), ICML
(International Conference on Machine Learning), IJCNLP (International Joint Conference on Natural
Language Processing), AAAI (Association for the Advancement of Artificial Intelligence), ECAI (European
Conference on AI), HLT/NAACL (Human Language Technology / North American Chapter of the
ACL), ICON (International Conference on Natural Language Processing), etc.
This course will provide a basic understanding of deep learning and of how to solve classification
problems involving large amounts of data. Several public-domain tools for building deep
learning networks will be demonstrated.
Course contents: Brief introduction to the big data problem; Overview of linear
algebra, probability and numerical computation: scalars, vectors, matrices, tensors, norms, eigenvalues,
eigenvectors, singular value decomposition, determinant; Probability distributions, Bayes rule,
conditional probability, variance, covariance; Overflow, underflow, gradient-based optimization,
least squares; Neural networks: perceptron, multi-layer perceptron, universal approximation
theorem; Tutorials for tools: Keras, Theano, TensorFlow, demo using MNIST; Deep learning
networks: shallow vs deep networks, deep feedforward networks; Gradient-based learning: cost
function, softmax, sigmoid function; Hidden units: ReLU, logistic sigmoid, hyperbolic tangent;
Architecture design; Backpropagation algorithm: chain rule of calculus, SGD; Regularization:
parameter norm penalties, dropout, noise robustness, early stopping, batch normalization;
Optimization for training deep models: Adagrad, Nesterov momentum; Advanced topics:
convolutional neural networks, recurrent neural networks / sequence modeling; Practical
applications: MNIST, etc.
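The activation functions and softmax named above can be sketched directly in plain Python; a minimal illustration.

```python
import math

# Sketch of the activations above (sigmoid, ReLU) and of softmax, which turns
# raw scores into a probability distribution.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    return max(0.0, x)

def softmax(scores):
    # Subtracting the max avoids overflow for large scores (numerical stability).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

print(sigmoid(0.0))            # 0.5
print(relu(-2.0), relu(3.0))   # 0.0 3.0
probs = softmax([2.0, 1.0, 0.1])
print(round(sum(probs), 6))    # 1.0
```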
Texts/References:
Richard S. Sutton and Andrew G. Barto, “Reinforcement Learning: An Introduction” (available online)
Jerome H. Friedman, Robert Tibshirani, and Trevor Hastie, “The Elements of Statistical Learning”
CS563 | Natural Language Processing | 3-0-0-6 | CS
Course Contents:
Intro to NLP
Dan Jurafsky and James H. Martin. Speech and Language Processing (3rd ed. draft)
Jacob Eisenstein. Natural Language Processing
Yoav Goldberg. A Primer on Neural Network Models for Natural Language Processing
Delip Rao and Brian McMahan. Natural Language Processing with PyTorch
Conferences:
ACL (Association for Computational Linguistics), EACL (European Chapter of the
ACL), COLING (International Conference on Computational Linguistics),
ICML (International Conference on Machine Learning), IJCNLP (International Joint Conference
on Natural Language Processing), AAAI (Association for the Advancement of Artificial
Intelligence), ECAI (European Conference on AI), HLT/NAACL (Human Language
Technology / North American Chapter of the ACL), ICON (International Conference on Natural
Language Processing), etc.
Texts/References:
2. Milan Sonka, Vaclav Hlavac and Roger Boyle, Image Processing, Analysis and Machine
Vision, Springer
Texts/References:
CS566 | Advanced Machine Learning | 3-0-0-6 | CS
This course will concentrate on some advanced topics of machine learning like graphical models,
auto-encoders, GANs, reinforcement learning, time series forecasting, advanced unsupervised
classification algorithms, neural architectures for sequence and graph-structured predictions. When
appropriate, the techniques will be linked to applications in translation, conversation modeling,
bioinformatics, and information retrieval.
Syllabus:
Books:
● Kevin P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press 2012
● Ian Goodfellow, Yoshua Bengio and Aaron Courville. Deep Learning. MIT Press 2016
● Yoav Goldberg. 2016. A primer on neural network models for natural language processing.
J. Artif. Int. Res. 57, 1 (September 2016), 345-420.
● R. G. Cowell, A. P. Dawid, S. L. Lauritzen and D. J. Spiegelhalter. "Probabilistic Networks
and Expert Systems". Springer-Verlag. 1999.
Syllabus:
The project can span into the course Project-II. Hence it is expected that the problem specification and
the milestones to be achieved in solving the problem are clearly specified. The project is
encouraged to be carried out in collaboration with industry.
The students who work on a project are expected to work towards the goals and milestones set in
the course Project-I. At the end there will be a demonstration of the solution and of possible future
work on the same problem. A dissertation outlining the entire problem, including a literature survey
and the various results obtained, is expected to be produced. The project is
encouraged to be carried out in collaboration with industry.
Basket of Electives:
List of Electives:
Syllabus of Electives:
Texts:
1. R. O. Duda, P. E. Hart and D. G. Stork, Pattern classification, John Wiley & Sons,
2002.
2. S. Theodoridis and K. Koutroumbas, Pattern Recognition, 4th Edition, Academic Press,
2008.
References:
Books:
References:
1. S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach (Third Edition), Prentice
Hall, 2009
2. E. Rich and K. Knight, Artificial Intelligence, Addison Wesley, 1990
3. Ian Goodfellow, Yoshua Bengio and Aaron Courville, Deep Learning, MIT Press, 2016
4. Daphne Koller and Nir Friedman, Probabilistic Graphical Models: Principles and Techniques,
MIT Press, 2009.
5. Sutton and Barto. Reinforcement Learning: An Introduction. Available free online.
6. Hastie, Tibshirani, and Friedman. The Elements of Statistical Learning. Available free online.
Artificial Intelligence, Machine Learning, ACL Anthology, COLING, ICML, ECML, Proceedings
of Uncertainty in AI, ICCV, ICLR etc.
Thanks to the rising interest in chatbots from industry, conversational AI (Artificial
Intelligence) is a new, hot research field in Natural Language Processing (NLP), Machine
Learning and Deep Learning. The main goal of conversational AI is to generate human-like
conversation. However, this is a challenging task due to the complex nature of human
conversations, co-reference, etc. Conversations are broadly categorized into two classes: task-
oriented and chit-chat (also called non-task-oriented). Both kinds of conversations are
governed by different factors or pragmatics, such as topic, interlocutors’ personality,
argumentation logic, viewpoint, intent, and so on. It is thus extremely important to model
all these factors properly for effective conversational analysis. In this emerging and advanced course,
we will discuss modern deep learning technologies and word representations such as
BERT, GPT-2, Seq2seq and the Transformer for conversation analysis and generation. We will also take
a step from generation to classification by shedding light on emotion recognition in
conversation, emotion-oriented dialogue generation, conversational question-answering (QA),
intent classification, etc. This course will help students and researchers to understand both the
basic and advanced techniques of conversational AI.
Syllabus:
Natural Language Understanding: Dialogue Act, Intent detection and Slot filling
Conferences and Journals: ACL, EMNLP, COLING, NAACL, EACL, IJCNLP, AACL,
Computational Linguistics, Transaction on ACL, IEEE Transaction on Affective Computing,
ACM Transaction on Intelligent System, ACM Transaction on Human Computer Interaction
Given the dominance of text information over the Internet, mining high-quality information from
text becomes increasingly critical. The actionable knowledge extracted from text data facilitates
our life in a broad spectrum of areas, including business intelligence, information acquisition,
social behavior analysis and decision making. In this course, we will cover important topics in
text mining including: basic natural language processing techniques, document representation,
text categorization and clustering, document summarization, sentiment analysis, probabilistic
topic models.
Text categorization: Basic supervised text categorization algorithms, including Naive Bayes, k-
Nearest Neighbor (kNN) and Logistic Regression.
Text clustering: Two typical types of clustering algorithms, i.e., connectivity-based clustering
(a.k.a. hierarchical clustering) and centroid-based clustering (e.g., k-means).
Topic modeling: General idea of topic modeling, two basic topic models, i.e., Probabilistic
Latent Semantic Indexing (pLSI) and Latent Dirichlet Allocation (LDA), and their variants for
different application scenarios, including classification, collaborative filtering, and hierarchical
topical structure modeling.
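The Naive Bayes classifier from the text-categorization topic above fits in a short sketch; the two-class toy corpus is invented, and add-one (Laplace) smoothing handles unseen words.

```python
import math
from collections import Counter

# Minimal multinomial Naive Bayes text classifier; the toy corpus is invented.
train = [
    ("buy cheap pills now", "spam"),
    ("cheap offer buy now", "spam"),
    ("meeting agenda attached", "ham"),
    ("project meeting tomorrow", "ham"),
]

classes = {"spam", "ham"}
word_counts = {c: Counter() for c in classes}
doc_counts = Counter()
for text, label in train:
    doc_counts[label] += 1
    word_counts[label].update(text.split())

vocab = set(w for c in classes for w in word_counts[c])

def predict(text):
    scores = {}
    for c in classes:
        total = sum(word_counts[c].values())
        score = math.log(doc_counts[c] / len(train))  # log prior P(c)
        for w in text.split():
            # Add-one smoothed log likelihood P(w | c).
            score += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        scores[c] = score
    return max(scores, key=scores.get)

print(predict("cheap pills offer"))      # spam
print(predict("agenda for the meeting")) # ham
```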
Syllabus:
Introduction: definition, history, Logic and the foundations of mathematics, Logic in computer
science
First-Order Logic- Syntax and Semantics: Terms and formulas, Structures, assignments, and
semantics, Satisfiability, validity, and logical implication.
First-Order Logic- Normal Forms: Prenex normal form, Skolem normal form, Elimination of
function symbols, Elimination of equality.
Recommended Readings:
Search Operators: Recombination/Crossover for strings (e.g., binary strings), e.g., one-point,
multi-point, and uniform crossover operators, Mutation for strings, e.g., bit-flipping,
Recombination/Crossover and mutation rates, Recombination for real-valued representations,
e.g., discrete and intermediate recombinations, Mutation for real-valued representations, e.g.,
Gaussian and Cauchy mutations, self-adaptive mutations, etc., Why and how a recombination
or mutation operator works
Selection Schemes: Fitness proportional selection and fitness scaling, Ranking, including
linear, power, exponential and other ranking methods, Tournament selection, Selection
pressure and its impact on evolutionary search
Niching and Speciation: Fitness sharing (explicit and implicit), Crowding and mating restriction
Constraint Handling: Common techniques, e.g., penalty methods, repair methods, etc.
Learning Classifier Systems: Basic ideas and motivations, Main components and the main
cycle, Credit assignment and two approaches
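Several of the operators above (tournament selection, one-point crossover, bit-flip mutation) fit in a short genetic-algorithm sketch on the classic OneMax problem (maximize the number of 1-bits); all parameter values are illustrative choices.

```python
import random

# GA sketch on OneMax; population size, string length, rates are illustrative.
random.seed(0)
N, LEN, GENS = 30, 20, 60

def fitness(bits):
    return sum(bits)

def tournament(pop, k=3):
    # Pick k random individuals, keep the fittest (selection pressure grows with k).
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    point = random.randrange(1, LEN)  # one-point crossover
    return a[:point] + b[point:]

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]  # bit-flip

pop = [[random.randint(0, 1) for _ in range(LEN)] for _ in range(N)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(N)]

best = max(pop, key=fitness)
print(fitness(best))  # close to the optimum LEN = 20
```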
Recommended Books:
Introduction to Data Mining: Data Mining Goals, Stages of the Data Mining Process, Data
Mining Techniques, Knowledge Representation Methods, Applications
Data preprocessing: Data cleaning, Data transformation, Data reduction, Discretization and
generating concept hierarchies
Data mining knowledge representation: Task relevant data, Background knowledge,
Interestingness measures, Representing input data and output knowledge, Visualization
techniques
Attribute-oriented analysis: Attribute generalization, Attribute relevance, Class comparison,
Statistical measures, Experiments with Weka - using filters and statistics
Data mining algorithms: Association rules: Motivation and terminology, Basic idea: item sets,
Generating itemsets and rules efficiently, Correlation analysis
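The itemset-generation step above can be sketched by counting the support of candidate 2-item sets over a toy transaction list (data and threshold are invented); Apriori extends this idea by growing only itemsets all of whose subsets are already frequent.

```python
from itertools import combinations
from collections import Counter

# Support counting for 2-item sets; transactions and threshold are invented.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
]
min_support = 2

support = Counter()
for t in transactions:
    for pair in combinations(sorted(t), 2):
        support[pair] += 1

# Keep only itemsets meeting the minimum support threshold.
frequent = {pair: n for pair, n in support.items() if n >= min_support}
print(frequent)
```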
Data mining algorithms: Classification- Basic learning/mining tasks, inferring rudimentary
rules: 1R algorithm, Decision trees, Covering rules
Data mining algorithms: Prediction - The prediction task, Statistical (Bayesian) classification,
Bayesian networks, Instance-based methods (nearest neighbor), Linear models, Experiments
with Weka - Prediction
Mining real data: Preprocessing data from a real medical domain (310 patients with Hepatitis
C); applying various data mining techniques to create a comprehensive and accurate model of
the data.
Clustering: Basic issues in clustering, First conceptual clustering system: Cluster/2,
Partitioning methods: k-means, expectation maximization (EM), Hierarchical methods: distance-
based agglomerative and divisive clustering, Conceptual clustering.
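The k-means partitioning method above can be sketched on one-dimensional toy data (invented): assign each point to the nearest centroid, recompute centroids as cluster means, and repeat until the assignments stabilize.

```python
# Minimal k-means sketch; the 1-d points and initialization are invented.
points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
k = 2
centroids = [points[0], points[3]]  # simple deterministic initialization

for _ in range(10):
    # Assignment step: each point joins its nearest centroid's cluster.
    clusters = {i: [] for i in range(k)}
    for p in points:
        nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    # Update step: centroids move to their cluster means.
    new_centroids = [sum(c) / len(c) for c in clusters.values()]
    if new_centroids == centroids:
        break  # assignments stable: converged
    centroids = new_centroids

print(sorted(round(c, 2) for c in centroids))  # [1.0, 8.07]
```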
Advanced techniques, Data Mining software and applications: Text mining: extracting
attributes (keywords), structural approaches (parsing, soft parsing), Bayesian approach to
classifying text, Web mining: classifying web pages, extracting knowledge from the web.
Recommended Readings
1. Ian H. Witten and Eibe Frank, Data Mining: Practical Machine Learning Tools and
Techniques (Second Edition), Morgan Kaufmann, 2005, ISBN: 0-12-088407-0.
Introduction to Bioinformatics and Key Online Bioinformatics Resources: NCBI & EBI
Biology is an information science, History of Bioinformatics, Types of data
Application areas: Introduction to upcoming segments, NCBI & EBI resources for the
molecular domain of bioinformatics, Focus on GenBank, UniProt, Entrez and Gene Ontology.
Sequence Alignment: DNA and Protein Database Searching Homology, Sequence similarity,
Local and global alignment, Database searching with BLAST.
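Global alignment as listed above is computed by Needleman-Wunsch dynamic programming; a minimal scoring sketch, with illustrative match/mismatch/gap scores (traceback, which recovers the alignment itself, is omitted).

```python
# Needleman-Wunsch global alignment score; scoring parameters are illustrative.
def global_alignment_score(a, b, match=1, mismatch=-1, gap=-1):
    m, n = len(a), len(b)
    # dp[i][j] = best score aligning prefix a[:i] with prefix b[:j].
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap          # a[:i] aligned entirely against gaps
    for j in range(1, n + 1):
        dp[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = dp[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            dp[i][j] = max(diag, dp[i-1][j] + gap, dp[i][j-1] + gap)
    return dp[m][n]

print(global_alignment_score("GATTACA", "GCATGCU"))  # 0
```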
Structure Based Drug Discovery and Biomolecular Simulations: Small molecule docking
methods, Protein motion and conformational variants, Bioinformatics in drug discovery.
Genome Informatics and High-Throughput Sequencing: Searching genes and gene
functions, Genome databases, Variation in the genome, High-throughput sequencing
technologies, biological applications, bioinformatics analysis methods.
Genes and Disease Human examples: Genomics and human health, The promise and
potential of shifting medicine from a reactive practice of treating symptoms and diseases, to one
where disease risk is diagnosed early or even managed prior to onset.
Proteomics and the Transcriptome: Processing and extracting biological information from
proteomic and transcriptomic datasets, Analysis of RNA-Seq data, Differential expression tests,
Avoiding P-value misuse, Hands-on analysis of RNA-Seq data.
Systems Biology: From genome to phenotypes. Integration of genome-wide data sets into
their functional context, Analysis of protein-protein interactions, pathways and networks,
Modeling and simulation of systems and networks, Computational methods of network modeling
Suggested Readings
Introduction to robotics: brief history, types, classification and usage, and the science and
technology of robots. Kinematics of robots: direct and inverse kinematics problems and
workspace, inverse kinematics solution for the general 6R manipulator, redundant and over-
constrained manipulators. Velocity and static analysis of manipulators: linear and angular
velocity, Jacobian of manipulators, singularity, static analysis. Dynamics of manipulators:
formulation of equations of motion, recursive dynamics, and generation of symbolic equations of
motion by computer; simulations of robots using software and commercially available packages.
Planning and control: Trajectory planning, position control, force control, hybrid control. Industrial
and medical robotics: application in manufacturing processes, e.g., casting, welding, painting,
machining, heat treatment and nuclear power stations, etc.; medical robots: image-guided
surgical robots, radiotherapy, cancer treatment, etc. Advanced topics in robotics: Modelling and
control of flexible manipulators, wheeled mobile robots, bipeds, etc. Future of robotics.
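The direct (forward) kinematics problem above has a simple closed form for a planar 2R manipulator; a sketch with illustrative link lengths and joint angles.

```python
import math

# Forward kinematics of a planar 2R arm; link lengths and angles are illustrative.
def forward_kinematics(l1, l2, theta1, theta2):
    # End-effector position; angles in radians, theta2 measured relative to link 1.
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# First link straight up, elbow bent 90 degrees back toward the x-axis.
x, y = forward_kinematics(1.0, 1.0, math.pi / 2, -math.pi / 2)
print(round(x, 6), round(y, 6))  # 1.0 1.0
```

The inverse problem (recovering theta1, theta2 from x, y) generally has multiple solutions, which is why the 6R inverse kinematics above is the harder direction.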
Overview of Probability Theory, Bayes Networks, Independence, I-Maps, Undirected Graphical Models,
Bayes Networks and Markov Networks, Local Models, Template Based Representations, Exact
Inference: Variable Elimination; Clique Trees, Belief Propagation, Tree Construction, Intro to
Optimization, Approximate Inference: Sampling, Markov Chains, MAP Inference, Inference in Temporal
Models, Learning Graphical Models: Intro, Parameter Estimation, Bayesian Networks and Shared
Parameters, Structure Learning and search, Partially Observed Data, Gradient Descent, EM, Hidden
Variables, Undirected Models, Causality, Utility Functions, Decision Problems, Expected Utility, Value of
Information
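The exact-inference topic above can be illustrated on a tiny two-node network (Rain -> WetGrass) with invented probability tables; enumeration computes the posterior directly, and variable elimination generalizes this by caching intermediate factors.

```python
# Exact inference by enumeration in a two-node Bayes net; CPTs are invented.
p_rain = {True: 0.2, False: 0.8}
p_wet_given_rain = {True: 0.9, False: 0.1}  # P(Wet=true | Rain)

# P(Rain | Wet=true) by Bayes' rule: enumerate the joint, then normalize.
joint = {r: p_rain[r] * p_wet_given_rain[r] for r in (True, False)}
evidence = sum(joint.values())               # P(Wet=true)
posterior = {r: joint[r] / evidence for r in joint}
print(round(posterior[True], 4))  # 0.6923
```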
Suggested Reading
1. Probabilistic Graphical Models, by Daphne Koller and Nir Friedman, MIT Press, 2009.
2. Journals and conferences in AI
Texts:
1. A. M. Tekalp, “Digital Video Processing”, Prentice Hall.
References:
2. R. C. Gonzalez and R. E. Woods, “Digital Image Processing”, Addison-Wesley.
3. Dudgeon & Mersereau, “Multi-dimensional Digital Signal Processing”, Prentice Hall.
4. C. Poynton, “A Technical Introduction to Digital Video”, Wiley.
5. Y. Wang, J. Ostermann, and Y. Zhang, “Video Processing and Communications”, Prentice
Hall.
6. K. Castleman, “Digital Image Processing”, Prentice Hall.
7. S. Mitra, “Digital Signal Processing”, 2nd Edition, McGraw Hill.
Introduction to linear and non-linear programming. Problem formulation. Geometrical aspects
of LPP, graphical solution. Linear programming in standard form, simplex, Big-M and Two-
Phase methods. Revised simplex method, special cases of LP. Duality theory, dual simplex
method. Sensitivity analysis of LP problems. Transportation, assignment and travelling
salesman problems.
Integer programming problems: branch and bound method, Gomory cutting plane method for all-
integer and for mixed-integer LP.
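The geometrical view of LPP above (the optimum of an LP lies at a vertex of the feasible region) can be sketched for a small invented 2-variable problem by enumerating intersections of constraint boundaries; the simplex method walks these same vertices far more efficiently.

```python
from itertools import combinations

# Invented toy LP: maximize 3x + 2y  s.t.  x + y <= 4, x <= 2, x >= 0, y >= 0.
constraints = [  # each row (a, b, c) encodes a*x + b*y <= c
    (1, 1, 4),
    (1, 0, 2),
    (-1, 0, 0),
    (0, -1, 0),
]

def intersect(c1, c2):
    # Solve a1*x + b1*y = r1, a2*x + b2*y = r2 by Cramer's rule.
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel boundaries
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt):
    x, y = pt
    return all(a * x + b * y <= c + 1e-9 for a, b, c in constraints)

# Candidate vertices: feasible intersections of pairs of constraint boundaries.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])  # (2.0, 2.0) 10.0
```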
Books:
Hamdy A. Taha, Operations Research: An Introduction, Eighth edition, PHI, New Delhi (2007).
S. Chandra, Jayadeva, Aparna Mehra, Numerical Optimization with Applications, Narosa
Publishing House (2009).
A. Ravindran, Phillips, Solberg, Operation Research, John Wiley and Sons, New York (2005).
M. S. Bazaraa, J. J. Jarvis and H. D. Sherali, Linear Programming and Network Flows, 3rd
Edition, Wiley (2004).
References:
D. G. Luenberger, Linear and Nonlinear Programming, 2nd Edition, Kluwer, 2003.
S. A. Zenios (editor), Financial Optimization, Cambridge University Press (2002).
F. S. Hiller, G. J. Lieberman, Introduction to Operations Research, Eighth edition, McGraw Hill
(2006).
MA5XX: Linear Algebra
Suggested Readings:
1. K. Hoffman and R. Kunze, Linear Algebra, 2nd Edition, Prentice-Hall of India, 2005.
2. M. Artin, Algebra, Prentice-Hall of India, 2005.
3. S. Axler, Linear Algebra Done Right, 2nd Edition, John Wiley, 1999.
4. S. Lang, Linear Algebra, Springer UTM, 1997.
5. S. Kumaresan, Linear Algebra: A Geometric Approach, Prentice-Hall of India, 2004.