
5 BCA - Electives Syllabus

The document outlines three courses: Computer Vision, Natural Language Processing (NLP), and Augmented Reality/Virtual Reality (AR/VR), each with a total of 90 teaching hours and 100 maximum marks. Each course includes a detailed description, objectives, outcomes, units of study, lab exercises, and evaluation patterns, focusing on practical skills and theoretical knowledge in their respective fields. The courses aim to equip students with essential skills for modern technology applications, including image processing, language analysis, and immersive software development.


BCA304A-5 – Computer Vision

Total Teaching Hours for Semester: 90


Max Marks: 100        Credits: 4

Course Description
This course introduces Computer Vision, covering image processing, feature extraction,
segmentation, and 3D modeling. Students will apply techniques like Fourier Transform and Deep
Learning to enhance and analyze images, gaining hands-on experience in modern vision
applications.

Course Objectives
This course enables learners to understand various image processing and computer vision algorithms in the context of vision-oriented tasks. It also imparts knowledge of advanced concepts in image representation, analysis, object identification and object recognition, and helps learners implement vision algorithms efficiently in research or industry.

Course Outcomes
CO1: Understand fundamental concepts in computer vision and apply basic image processing
techniques for enhancement and restoration.
CO2: Implement spatial and frequency domain techniques for image enhancement, filtering, and
noise removal.
CO3: Apply segmentation and feature extraction methods for object identification and pattern
recognition.
CO4: Develop 3D modeling techniques, including image alignment, triangulation, and motion
segmentation.
CO5: Design object detection and classification models using statistical methods and deep
learning.

Unit-1        Teaching Hours: 18


INTRODUCTION TO COMPUTER VISION:
Basic Concepts of Computer Vision - Pros and cons of human vision - Computer Vision and Image Processing - Different Applications of Computer Vision - Geometric Camera Models: Image Formation, Basic Image Formats, Geometric Camera Calibration. Light and Shading: Pixel Brightness, Inference from Shading, Shape from One Shaded Image. Color: Human Color Perception, Representing Color, A Model of Image Color, Inference from Color.
Lab Exercises:
1. Image format conversion and basic operations.
2. Implement spatial transformations - convolution and correlation (use at least 2 kernels; see the sketch below).
3. Apply the concept of sampling and quantization.
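A minimal Python/OpenCV sketch for lab exercise 2 (convolution and correlation with two kernels) is given below; the input file name and the choice of kernels are illustrative assumptions, not prescribed by the syllabus.

# Sketch: correlation vs. convolution with two kernels (OpenCV + NumPy).
# "input.png" is a placeholder path; any grayscale test image will do.
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Two example kernels: a 3x3 averaging (smoothing) kernel and a Sobel-style edge kernel.
k_avg = np.ones((3, 3), np.float32) / 9.0
k_sobel = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], np.float32)

# cv2.filter2D computes correlation; flipping the kernel in both axes gives convolution.
corr_avg = cv2.filter2D(img, -1, k_avg)
conv_sobel = cv2.filter2D(img, -1, cv2.flip(k_sobel, -1))

cv2.imwrite("correlation_avg.png", corr_avg)
cv2.imwrite("convolution_sobel.png", conv_sobel)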
Unit-2        Teaching Hours: 18
IMAGE ENHANCEMENT TECHNIQUES IN DIFFERENT DOMAINS
Gray Level Transformations, Histogram Processing, Histogram Equalization, Image Degradation, Noise Models, Restoration in the Presence of Noise - Basics of Spatial Filters, Smoothing and Sharpening Spatial Filters - Introduction to the Fourier Transform and the Frequency Domain, Smoothing and Sharpening Frequency Domain Filters.
Lab Exercises:
4. Write a program to convert a given image into different color models.
5. Apply Fourier Transform concepts to decompose an image into its sine and cosine components, and apply an ideal low-pass filter in the frequency domain (see the sketch below).
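A minimal sketch of the Fourier decomposition and ideal low-pass filtering exercise above, using NumPy's FFT; the cutoff radius D0 and the file names are assumed values for illustration.

# Sketch: 2D Fourier transform of an image and an ideal low-pass filter.
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

F = np.fft.fftshift(np.fft.fft2(img))          # spectrum with zero frequency centred

rows, cols = img.shape
cy, cx = rows // 2, cols // 2
D0 = 30                                          # cutoff radius in pixels (assumed)
y, x = np.ogrid[:rows, :cols]
mask = ((y - cy) ** 2 + (x - cx) ** 2) <= D0 ** 2    # ideal low-pass: 1 inside, 0 outside

filtered = np.fft.ifft2(np.fft.ifftshift(F * mask))
result = np.clip(np.abs(filtered), 0, 255).astype(np.uint8)
cv2.imwrite("ilpf_result.png", result)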

Unit-3        Teaching Hours: 18


SEGMENTATION AND EXTRACTING IMAGE FEATURES USING COMPUTER
VISION
Region Based Segmentation – Region Growing and Region Splitting- Image features- Local and
global features-Feature detection, description and matching-Point, Line and Edge detection,
Boundary Pre Processing, Whole Image Features, Scale Invariant Feature Transform (SIFT).
Lab Exercises:
6. Demonstrate any two edge detection methods and discuss the results (see the sketch below).
7. Apply the concept of segmentation (marker-controlled watershed segmentation).
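A minimal sketch comparing two edge detectors (Sobel gradient magnitude and Canny) for the edge detection exercise above; the thresholds are illustrative assumptions and normally need tuning per image.

# Sketch: Sobel and Canny edge detection for comparison.
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobel_mag = cv2.convertScaleAbs(np.hypot(gx, gy))   # gradient magnitude scaled to 8-bit

canny = cv2.Canny(img, 100, 200)                    # hysteresis thresholds (assumed)

cv2.imwrite("sobel_edges.png", sobel_mag)
cv2.imwrite("canny_edges.png", canny)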

Unit-4        Teaching Hours: 18


HYPER SCALE THREE DIMENSIONAL MODELS
Image Alignment-Triangulation - Structure from motion- Projective reconstruction -
Self-Calibration - Constrained structure and motion-Motion Segmentation -View interpolation-
video-based rendering.
Lab Exercises:
8. Design and develop triangulation from frame motion (see the sketch below).
9. Apply the linear interpolation method to determine the model.
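A minimal sketch of two-view triangulation with OpenCV, related to the triangulation exercise above; the projection matrices and matched image points are made-up placeholders that would normally come from camera calibration and feature matching.

# Sketch: triangulating 3D points from two views with cv2.triangulatePoints.
import cv2
import numpy as np

P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float32)              # camera 1 at the origin (placeholder)
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])]).astype(np.float32)  # camera 2 shifted in x (placeholder)

pts1 = np.array([[100.0, 120.0], [150.0, 160.0]], dtype=np.float32).T   # 2xN points in image 1 (made up)
pts2 = np.array([[95.0, 120.0], [144.0, 160.0]], dtype=np.float32).T    # corresponding points in image 2 (made up)

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4xN homogeneous 3D points
X = (X_h[:3] / X_h[3]).T                          # convert to Euclidean coordinates
print(X)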

Unit-5        Teaching Hours: 18


HIGHER LEVEL VISION FOR IMAGE PATTERN CLASSIFICATION
Object Detection and Recognition - Object Detection - Instance Recognition - Classification of Image Patterns - Patterns and Pattern Classification - Optimum (Bayes) Statistical Classifiers, Neural Networks and Deep Convolutional Networks.
Lab Exercises:
10. Design and develop a simple object detection and recognition model (see the sketch below).
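As a starting point for the object detection exercise above, the sketch below uses a classical pre-trained Haar cascade shipped with OpenCV (face detection); the statistical and deep-learning classifiers covered in this unit would replace the cascade in a full solution. The input file name is a placeholder.

# Sketch: simple object (face) detection with OpenCV's bundled Haar cascade.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("group_photo.png")                  # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns bounding boxes (x, y, w, h) for detected objects.
boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in boxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detections.png", img)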

Text Books and Reference Books


1. Computer Vision: A Modern Approach, David A. Forsyth and Jean Ponce, Pearson Education, 2nd Edition, ISBN-13: 978-0-13-608592-8, 2012.
2. Digital Image Processing, R. C. Gonzalez & R. E. Woods, Pearson Education, 4th Edition, 2018.
3. Computer Vision: Algorithms and Applications, Richard Szeliski, Springer Science & Business Media, 2nd Edition, ISBN-13: 978-1848829343, 2022.

Essential Reading / Recommended Reading


1. Abhinav Dadhich, Practical Computer Vision: Extract insightful information from images using TensorFlow, Keras, and OpenCV, Packt Publishing Ltd, February 2018.
2. Daniel Lélis Baggio, Shervin Emami, David Millán Escrivá, Khvedchenia Ievgen, Naureen Mahmood, Jason Saragih and Roy Shilkrot, Mastering OpenCV with Practical Computer Vision Projects, Packt Publishing.

Note: For Lab Programs use MATLAB / PYTHON


Web Resources:
1. A Gentle Introduction to Computer Vision - https://machinelearningmastery.com/what-is-computer-vision/
2. Everything You Ever Wanted to Know About Computer Vision - https://towardsdatascience.com/everything-you-ever-wanted-to-know-about-computer-vision-heresa-look-why-it-s-so-awesome-e8a58dfb641e
3. Various MOOC courses - SWAYAM, UDEMY, COURSERA, etc.

CO – PO Mapping

      PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8
CO1    3    2    2    1    1    -    -    -
CO2    3    3    2    2    1    -    -    -
CO3    3    3    3    2    1    1    -    -
CO4    3    3    3    2    2    1    -    -
CO5    3    3    3    3    2    1    1    1

Evaluation Pattern
CIA - 50%
ESE - 50%
BCA304B-5 - NATURAL LANGUAGE PROCESSING
Total Teaching Hours for Semester: 90
Max Marks: 100        Credits: 4

Course Description
This course provides a comprehensive introduction to Natural Language Processing (NLP),
covering fundamental concepts, techniques, and applications. It explores key topics such as text
tokenization, parsing, syntax analysis, language modeling, semantic analysis, discourse
processing, and machine translation. Students will gain hands-on experience through lab
exercises that include implementing NLP algorithms, working with the Natural Language Toolkit
(NLTK), and utilizing lexical resources like WordNet and Word Embeddings. The course also
introduces information retrieval techniques, graphical models for sequence labeling, and the
challenges of NLP, including ambiguity and knowledge bottlenecks.

Course Objectives
Students who complete this course will gain a foundational understanding of natural language processing methods and strategies. They will also learn to evaluate the strengths and weaknesses of various NLP technologies and frameworks while gaining practical experience with the available NLP toolkits. Students will also learn how to employ literary-historical NLP-based analytic techniques such as stylometry, topic modeling, synsetting and named entity recognition in their own research.

Course Outcomes
CO1: To understand various approaches to syntax and semantics in NLP.
CO2: To apply various methods to discourse, generation, dialogue and summarization using
NLP.
CO3: To analyze various methodologies used in Machine Translation.

UNIT 1:        Teaching Hours: 18


INTRODUCTION
Introduction to NLP - Background and overview - NLP Applications - Why NLP is hard: Ambiguity - Algorithms and models - Knowledge Bottlenecks in NLP - Introduction to NLTK - Case study.
Lab Exercises:
1. Write a program to tokenize text.
2. Write a program to count word frequency and to remove stop words.​
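A minimal NLTK sketch covering lab exercises 1 and 2; the sample sentence is an illustrative assumption.

# Sketch: tokenization, word frequency and stop-word removal with NLTK.
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

nltk.download("punkt")
nltk.download("stopwords")

text = "Natural language processing helps computers understand human language."
tokens = word_tokenize(text.lower())

freq = nltk.FreqDist(tokens)             # word frequency counts
print(freq.most_common(5))

stop_words = set(stopwords.words("english"))
filtered = [t for t in tokens if t.isalpha() and t not in stop_words]
print(filtered)                           # tokens with stop words removed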

UNIT 2:        Teaching Hours: 18


PARSING AND SYNTAX
Word Level Analysis: Regular Expressions, Text Normalization, Edit Distance. Parsing and Syntax - Spelling Error Detection and Correction - Words and Word Classes - Part-of-Speech Tagging. Naive Bayes and Sentiment Classification: Case study.
Lab Exercises:
3. Write a program to tokenize non-English languages.
4. Write a program to get synonyms and antonyms from WordNet (see the sketch below).
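A minimal sketch for lab exercise 4, pulling synonyms and antonyms from WordNet via NLTK; the target word is an illustrative choice.

# Sketch: synonyms and antonyms from WordNet.
import nltk
from nltk.corpus import wordnet

nltk.download("wordnet")

synonyms, antonyms = set(), set()
for synset in wordnet.synsets("happy"):
    for lemma in synset.lemmas():
        synonyms.add(lemma.name())
        for ant in lemma.antonyms():
            antonyms.add(ant.name())

print("Synonyms:", synonyms)
print("Antonyms:", antonyms)
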
UNIT 3:        Teaching Hours: 18
SMOOTHED ESTIMATION AND LANGUAGE MODELLING
N-gram Language Models: N-Grams, Evaluating Language Models - The language modelling problem.
SEMANTIC ANALYSIS AND DISCOURSE PROCESSING
Semantic Analysis: Meaning Representation - Lexical Semantics - Ambiguity - Word Sense Disambiguation. Discourse Processing: Cohesion - Reference Resolution - Discourse Coherence and Structure.

Lab Exercises:
5. Implement an N-gram language model with Laplace smoothing.
6. Implement a simple Word Sense Disambiguation using the Lesk algorithm.
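A minimal sketch for lab exercises 5 and 6: a bigram model with Laplace (add-one) smoothing and word sense disambiguation with NLTK's Lesk implementation; the tiny corpus and test sentence are illustrative assumptions.

# Sketch: bigram language model with Laplace smoothing, plus the Lesk algorithm.
from collections import Counter

import nltk
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

nltk.download("punkt")
nltk.download("wordnet")

corpus = ["the cat sat on the mat", "the dog sat on the rug"]   # toy corpus
tokens = [w for sentence in corpus for w in sentence.split()]
vocab = set(tokens)

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))

def bigram_prob(w1, w2):
    # P(w2 | w1) with add-one smoothing: (count(w1, w2) + 1) / (count(w1) + |V|)
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + len(vocab))

print(bigram_prob("the", "cat"))    # seen bigram
print(bigram_prob("cat", "rug"))    # unseen bigram, still gets non-zero probability

# Lesk picks the WordNet sense whose gloss overlaps most with the context words.
context = word_tokenize("I went to the bank to deposit money")
print(lesk(context, "bank", "n"))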

UNIT 4:        Teaching Hours: 18


NATURAL LANGUAGE GENERATION AND MACHINE TRANSLATION
Natural Language Generation: Architecture of NLG Systems, Applications. Machine
Translation: Problems in Machine Translation-Machine Translation Approaches. Evaluation of
Machine Translation systems. Case study: Characteristics of Indian Languages.
Lab Exercises:
7. Write a program for lemmatizing words using WordNet.
8. Write a program to differentiate stemming and lemmatizing words (see the sketch below).
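A minimal sketch for lab exercises 7 and 8, contrasting Porter stemming with WordNet lemmatization; the word list is an illustrative assumption.

# Sketch: stemming vs. lemmatization with NLTK.
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("wordnet")

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word in ["studies", "running", "better", "geese"]:
    print(word,
          "| stem:", stemmer.stem(word),
          "| lemma (noun):", lemmatizer.lemmatize(word),
          "| lemma (verb):", lemmatizer.lemmatize(word, pos="v"))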

UNIT 5:        Teaching Hours: 12


INFORMATION RETRIEVAL AND LEXICAL RESOURCES
Information Retrieval: Design features of Information Retrieval Systems - Classical, Non-classical and Alternative Models of Information Retrieval - Evaluation. Lexical Resources: Word Embeddings - Word2vec - GloVe.
UNSUPERVISED METHODS IN NLP
Graphical Models for Sequence Labelling in NLP.

Lab Exercises:
9. Write a program for POS tagging or word embeddings (see the sketch below).
10. Implement a basic word-by-word translation system using a dictionary for machine translation.
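A minimal sketch for lab exercises 9 and 10: POS tagging with NLTK and a naive dictionary-based word-by-word translation; the English-to-French dictionary is a made-up toy example.

# Sketch: POS tagging and word-by-word dictionary translation.
import nltk
from nltk import pos_tag, word_tokenize

nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

sentence = "The students read books"
tokens = word_tokenize(sentence)
print(pos_tag(tokens))                        # list of (word, POS tag) pairs

# Word-by-word translation: look each token up; keep it unchanged if unknown.
en_fr = {"the": "le", "students": "étudiants", "read": "lisent", "books": "livres"}
print(" ".join(en_fr.get(t.lower(), t) for t in tokens))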

Text Books and Reference Books:


[1] Daniel Jurafsky and James H. Martin, Speech and Language Processing, Prentice Hall, 3rd Edition, 2023.
[2] Christopher D. Manning and Hinrich Schütze, Foundations of Statistical Natural Language Processing, MIT Press, Cambridge, MA, 1999.

Essential Reading / Recommended Reading:


[1] Roland R. Hausser, Foundations of Computational Linguistics: Human-Computer Communication in Natural Language, Springer, 2014.
[2] Steven Bird, Ewan Klein and Edward Loper, Natural Language Processing with Python,
O’Reilly Media, First edition, 2009.
Web Resources:
[1] https://web.stanford.edu/~jurafsky/slp3/
[2] https://nptel.ac.in/courses/106101007/
[3] NLTK - Natural Language Toolkit - http://www.nltk.org
CO – PO Mapping
PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8

CO1 3 2 2 2 1 2 1 1

CO2 2 2 2 1 1 1 1 1

CO3 2 1 1 3 2 2 1 1

Evaluation Pattern
CIA - 50%
ESE - 50%
BCA304C-5 - AR & VR

Total Teaching Hours for Semester: 90


Max Marks: 100        Credits: 4

Course Description
This course offers a comprehensive introduction to AR/VR software development, equipping
students with the skills to create immersive experiences using industry-standard tools such as
Unity, 3ds Max, and Blender. Students will explore the fundamentals of Augmented Reality
(AR) and Virtual Reality (VR), learning about 3D modeling, animation, interactivity, and UX/UI
design. The curriculum covers AR/VR system architecture, interactive design principles,
rendering techniques, and industry best practices. Through hands-on projects, real-world case
studies, and collaborative exercises, students will develop practical expertise in building
engaging AR/VR applications for diverse industries, including gaming, education, healthcare,
and business. By the end of the course, students will be prepared to create high-quality AR/VR
experiences and pursue exciting opportunities in this dynamic field.

Course Objectives
The course objective is to promote the understanding of this technology, underlying principles,
its potential and limits and to learn about the criteria for defining useful applications. Each
student will be exposed to the process of creating virtual environments, by developing a
complete VR or Augmented Reality (AR) application as members of a small team.

Course Outcomes
After completion of the course, students will be able to:
CO1: Describe the fundamental concepts, working principles, and applications of Augmented
Reality (AR) and Virtual Reality (VR).
CO2: Apply AR/VR hardware, software development tools, and SDKs using Unity.
CO3: Develop interactive AR/VR applications by implementing 3D modeling, spatial mapping,
and user interaction techniques in Unity.
CO4: Evaluate AR/VR projects using scripting (C# for Unity), interaction models, and industry
best practices.

Unit-1        Teaching Hours: 18


INTRODUCTION TO AUGMENTED REALITY (AR)
Fundamentals of Augmented Reality (AR) and its evolution - How AR systems work: Sensors,
cameras, and real-world overlays - AR Development Platforms and Tools (Unity, Unreal Engine,
WebAR, ARKit, ARCore) - Designing AR Experiences: Interaction principles and UI/UX
considerations - Applications of AR in gaming, education, healthcare, and industry - Challenges
& Future Trends in AR development
Lab Exercises:
1.​ Getting Started with AR Development: Setting up Unity3D for AR projects and
understanding its interface
2.​ Basic 3D Object Rendering: Creating a model using OpenGL to render fundamental
shapes (cubes, spheres, pyramids)
Unit-2        Teaching Hours: 18

INTRODUCTION TO VIRTUAL REALITY (VR)

What is Virtual Reality (VR)? A beginner-friendly introduction - How VR differs from AR: Key
differences and use cases - Core Components of VR Systems – Hardware – Software -
Key Features of VR: Immersion, interactivity, presence - Creating a Virtual Environment: Basics
of 3D spaces, objects, and interactions - Benefits & Challenges of VR: Applications in gaming,
education, healthcare, and industry - Current Trends & Future of VR: Innovations like
Metaverse, AI-powered VR, and haptic feedback

Lab Exercises:
3.​ Exploring a 3D Scene: Setting up a simple VR environment in Unity, adjusting player
controls, and navigating the space
4.​ Basic Interactions in VR: Implementing camera controls and object interactions
(grabbing, rotating, scaling)
Unit-3        Teaching Hours: 18

AR & VR HARDWARE AND SOFTWARE
Understanding AR & VR Hardware - AR Devices - VR Devices - Comparison of AR and VR
Hardware - How Sensors & Trackers Work - Introduction to AR & VR Software Development -
Overview of AR & VR SDKs - Setting Up a Development Environment - Installing and
configuring AR Foundation in Unity

Lab Exercises:
5.​ Exploring AR & VR Hardware: Hands-on session with AR smart glasses and VR
headsets
6.​ Setting Up a Basic Scene in Unity: Installing Unity, setting up an AR/VR project, and
configuring SDKs
Unit-4        Teaching Hours: 18

3D MODELING AND DESIGN

Introduction to 3D Objects in AR & VR - Understanding 3D Models - Tools for 3D Modeling -


Spatial Mapping and Surface Detection in AR - How AR Systems Detect Surfaces - Spatial
Mapping in AR - Surface Detection Algorithms - Designing User Interactions with Virtual
Objects - Basic Interaction Techniques - Implementing Interactions in a Real-World Scenario
Lab Exercises:
7.​ Building a Realistic 3D Environment: Adding skyboxes, terrain tools, and lighting for
AR/VR scenes
8.​ Designing and Importing 3D Models: Using Blender/3ds Max to create simple 3D objects
and importing them into Unity
Unit-5        Teaching Hours: 18

SCRIPTING AND FUTURE OF AR & VR

Introduction to Scripting in AR/VR - Why Scripting is Essential for AR/VR Development -


Overview of Common Scripting Languages - Future Trends in AR/VR and AI Integration -
Advancements in AR/VR Interactions - Next-Generation AR/VR Hardware - Emerging
Technologies - VRML (Virtual Reality Modeling Language) - Input Trackers & Sensors

Project: Students will work on their AR/VR projects


●​ Implement learned concepts in a mini AR/VR prototype
●​ Presentation of projects: Showcasing creativity and technical skills
Industry Visit / Experiential Learning
●​ Exploring real-world applications in AR/VR startups or research labs

Text Books and Reference Books

1. D. Schmalstieg and T. Höllerer, Augmented Reality: Principles and Practice, Addison-Wesley, Boston, 2016, ISBN-13: 978-0-32-188357-5.
2. Steven M. LaValle, Virtual Reality, Cambridge University Press, 2017, http://vr.cs.uiuc.edu/ (available online for free).
3. G. C. Burdea & P. Coiffet, Virtual Reality Technology, 2nd Edition, Wiley-India.

Essential Reading / Recommended Reading

1. D. A. Bowman et al., 3D User Interfaces: Theory and Practice, Addison-Wesley.
2. John Vince, Virtual Reality Systems, Pearson Education.
3. Grigore C. Burdea and Philippe Coiffet, Virtual Reality Technology, Wiley, 2016.

Web Resources:
1. ARCore by Google - https://developers.google.com/ar
2. ARKit by Apple - https://developer.apple.com/augmented-reality/arkit/
3. Vuforia - https://developer.vuforia.com/
4. Unity3D - https://unity.com/
5. Unreal Engine - https://www.unrealengine.com/en-US
6. Oculus Developer - https://developer.oculus.com/

CO – PO Mapping

PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8


CO1 2 3 1 2 2 2 2 2
CO2 1 3 2 3 2 1 1 1
CO3 1 2 2 3 1 1 2 2
CO4 2 3 1 3 3 2 1 2

Evaluation Pattern
CIA - 50%
ESE - 50%
