
Velocity Corporate Training Center, Pune

NLP and Generative AI Syllabus - 2025

1. Introduction to NLP and Generative AI
●​ What is NLP? Real-World Applications
●​ What is Generative AI? Use Cases and Trends
●​ Evolution of NLP: From Rule-Based to Deep Learning Approaches
●​ Overview of Generative AI Models

2. Fundamentals of NLP
●​ Text Preprocessing:
○​ Tokenization (Word, Sentence, Subword Tokenization)
○​ Lemmatization and Stemming
○​ Stop Words Removal
●​ Part-of-Speech (POS) Tagging
●​ Named Entity Recognition (NER)
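
A minimal sketch of these preprocessing steps using spaCy (one of the tools listed in section 7). It assumes the small English model has been installed separately:

    # pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

    # Tokenization, lemmatization, stop-word flags, and POS tags in one pass
    for token in doc:
        print(token.text, token.lemma_, token.pos_, token.is_stop)

    # Named Entity Recognition (NER)
    for ent in doc.ents:
        print(ent.text, ent.label_)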

3. Feature Representation in NLP
●​ Traditional Representations:
○​ Bag of Words (BoW)
○​ TF-IDF
●​ Dense Word Representations:
○ Word Embeddings: Word2Vec (Skip-gram and CBOW), GloVe
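
A minimal sketch of the traditional representations above, using scikit-learn (an assumed library choice; the syllabus does not prescribe one here). Dense vectors such as Word2Vec and GloVe are usually loaded as pretrained embeddings rather than recomputed:

    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

    corpus = ["the cat sat on the mat", "the dog sat on the log"]

    bow = CountVectorizer().fit_transform(corpus)    # Bag of Words: raw term counts
    tfidf = TfidfVectorizer().fit_transform(corpus)  # TF-IDF: down-weights common terms

    print(bow.toarray())
    print(tfidf.toarray().round(2))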

4. Sequential Models for NLP
●​ Recurrent Neural Networks (RNNs)
●​ Long Short-Term Memory (LSTM) and Bidirectional LSTMs
●​ Gated Recurrent Units (GRU)
●​ Applications of RNNs, LSTMs, and GRUs in NLP
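
A minimal bidirectional LSTM classifier sketched in PyTorch (the framework is an assumption; the syllabus does not fix one), with illustrative sizes:

    import torch
    import torch.nn as nn

    class LSTMClassifier(nn.Module):
        def __init__(self, vocab_size=10_000, embed_dim=128, hidden_dim=256, num_classes=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
            self.fc = nn.Linear(2 * hidden_dim, num_classes)  # 2x: forward + backward

        def forward(self, token_ids):                # (batch, seq_len) of token IDs
            x = self.embed(token_ids)                # (batch, seq_len, embed_dim)
            _, (h_n, _) = self.lstm(x)               # final hidden states per direction
            h = torch.cat([h_n[0], h_n[1]], dim=-1)  # concatenate both directions
            return self.fc(h)                        # (batch, num_classes)

    model = LSTMClassifier()
    logits = model(torch.randint(0, 10_000, (4, 32)))  # dummy batch: 4 sequences of 32 tokens
    print(logits.shape)  # torch.Size([4, 2])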

5. Transformers: Fundamentals and Attention Mechanisms
●​ Introduction to Transformers:
○ Why Transformers were developed
○ Limitations of RNNs and LSTMs
●​ Attention Mechanisms:
○ Self-Attention: Key, Query, and Value Vectors; Scaled Dot-Product Attention
○ Multi-Head Attention: Capturing diverse relationships within sequences
○​ Advanced Attention Mechanisms:
■​ Multi-Query Attention (MQA)
■​ Grouped-Query Attention (GQA)
■​ Sliding Window Attention (Longformer)
■​ Flash Attention (Efficient GPU-Optimized Attention)
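
Scaled dot-product self-attention is compact enough to sketch in NumPy. This follows softmax(Q K^T / sqrt(d_k)) V, with random matrices standing in for learned projections:

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, W_q, W_k, W_v):
        Q, K, V = X @ W_q, X @ W_k, X @ W_v   # query/key/value projections
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)       # scaled dot-product scores
        return softmax(scores) @ V            # attention-weighted sum of values

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 16))              # 5 tokens, model dimension 16
    W = [rng.normal(size=(16, 16)) for _ in range(3)]
    print(self_attention(X, *W).shape)        # (5, 16)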

6. Transformers: Architecture and Variants
●​ Transformer Encoder-Decoder Architecture
●​ Positional Encoding: Sinusoidal vs. Learnable
●​ Encoder Components: Self-Attention, Feedforward Network, Layer Normalization
●​ Decoder Components: Masked Self-Attention, Cross-Attention
●​ Variants of Transformers:
○​ Encoder-Only Models (BERT, RoBERTa)
○​ Decoder-Only Models (GPT, GPT-4)
○​ Encoder-Decoder Models (T5, Seq2Seq)
● Applications: Machine Translation, Summarization, Text Generation, Knowledge Retrieval
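
A sketch of the sinusoidal positional encoding above, following the original Transformer formulation (assumes an even model dimension):

    import numpy as np

    def positional_encoding(seq_len, d_model):
        pos = np.arange(seq_len)[:, None]      # token positions (seq_len, 1)
        i = np.arange(d_model // 2)[None, :]   # dimension pairs (1, d_model/2)
        angles = pos / np.power(10_000, 2 * i / d_model)
        pe = np.zeros((seq_len, d_model))
        pe[:, 0::2] = np.sin(angles)           # even dimensions
        pe[:, 1::2] = np.cos(angles)           # odd dimensions
        return pe

    print(positional_encoding(50, 512).shape)  # (50, 512)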

7. Generative AI & Pretrained Models
●​ What Makes a Model Generative?
●​ Autoregressive (GPT) vs. Autoencoding (BERT)
●​ Large Language Models (GPT-4, PaLM, Claude, LLaMA)
●​ Popular Pretrained Models:
○​ BERT, RoBERTa, GPT, T5, XLNet
●​ Transfer Learning and its Importance
●​ Tools: Hugging Face Transformers, spaCy, NLTK
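
A minimal Hugging Face Transformers sketch contrasting the two pretraining styles above. The checkpoints are common public defaults (not prescribed by the syllabus) and download on first use:

    from transformers import pipeline

    # Autoencoding (BERT-style): fill in a masked token using both directions of context
    fill = pipeline("fill-mask", model="bert-base-uncased")
    print(fill("Transformers are a [MASK] architecture.")[0]["token_str"])

    # Autoregressive (GPT-style): continue the text strictly left to right
    generate = pipeline("text-generation", model="gpt2")
    print(generate("Generative AI is", max_new_tokens=20)[0]["generated_text"])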

8. Fine-Tuning Techniques
●​ Full Fine-Tuning vs. Partial Fine-Tuning
●​ Parameter-Efficient Fine-Tuning:
○​ LoRA, QLoRA, Adapters, Prefix Tuning, Prompt Tuning
●​ Domain-Specific Fine-Tuning (Healthcare, Finance, Legal AI)
●​ Few-Shot and Zero-Shot Fine-Tuning
●​ Model Evaluation & Explainability:
○​ Perplexity (Measuring model uncertainty in predictions)
○​ BLEU, ROUGE, METEOR (Text generation quality)
○​ BERTScore (Semantic similarity-based evaluation)
○​ Exact Match (EM), F1-Score (QA and retrieval-based evaluations)
○​ Human Evaluation & GPT-Assisted Evaluation
●​ Challenges in Fine-Tuning:
○​ Overfitting in Small Datasets
○​ Catastrophic Forgetting in Domain-Specific Training
○​ Handling Large Models with Limited Compute
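
A minimal LoRA setup with Hugging Face's peft library (library choice and every hyperparameter here are illustrative assumptions):

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("gpt2")
    config = LoraConfig(
        r=8,                        # rank of the low-rank update matrices
        lora_alpha=16,              # scaling applied to the update
        target_modules=["c_attn"],  # GPT-2's fused attention projection
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # typically well under 1% of weights train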

9. Prompt Engineering
●​ What is Prompt Engineering?
●​ Importance of Prompts in LLMs
●​ Designing Effective Prompts
●​ Few-Shot, Zero-Shot, and Chain-of-Thought Prompting
●​ Optimizing Prompts:
○​ Self-Consistency & Iterative Refinement
○​ Dynamic Prompting with LangChain
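
A small sketch of few-shot prompting with a chain-of-thought cue, built as a plain string so it can be sent to any completion API (the task and examples are illustrative):

    examples = [
        ("I loved this movie!", "positive"),
        ("The plot was dull and predictable.", "negative"),
    ]

    def build_prompt(text):
        shots = "\n".join(f"Review: {t}\nSentiment: {s}" for t, s in examples)
        return (
            "Classify the sentiment of each review. "
            "Think step by step before answering.\n\n"  # chain-of-thought cue
            f"{shots}\nReview: {text}\nSentiment:"
        )

    print(build_prompt("Great acting, weak ending."))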

10. Retrieval-Augmented Generation (RAG)
●​ What is RAG?
●​ Role of RAG in Generative AI Workflows
●​ Building RAG Pipelines:
○​ Query Embedding Extraction
○​ Hybrid Search (BM25 + Dense Retrieval)
●​ Chunking Techniques:
○​ Fixed-Length, Semantic, Recursive
●​ Vector Databases:
○​ ChromaDB, Milvus, Pinecone, FAISS, CosmosDB
●​ Re-Rankers: Improving Retrieval Results
●​ Challenges in RAG Implementation:
○​ Indexing Latency
○​ Hallucination Risk
○​ Computational Cost
○​ Privacy & Security Considerations
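
A minimal end-to-end RAG sketch using ChromaDB (listed above). Its default embedder downloads a small model on first use; the final LLM call is left abstract since the syllabus does not fix one:

    import chromadb

    client = chromadb.Client()  # in-memory vector store
    docs = client.create_collection("docs")
    docs.add(
        ids=["1", "2"],
        documents=[
            "Velocity's NLP and Generative AI course runs in Pune in 2025.",
            "RAG grounds LLM answers in retrieved documents.",
        ],
    )

    question = "Where does the course run?"
    hits = docs.query(query_texts=[question], n_results=1)
    context = hits["documents"][0][0]  # top retrieved chunk

    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"
    print(prompt)  # pass this prompt to any LLM of choice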

11. Advanced Topics in Generative AI
●​ Multi-Modal Models (DALL-E, CLIP, Flamingo)
●​ Reinforcement Learning with Human Feedback (RLHF)
●​ Adaptive Fine-Tuning & Continuous Learning
●​ Generative Adversarial Networks (GANs)
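
A compact GAN skeleton in PyTorch showing the generator/discriminator pairing; layer sizes and the single generator-loss step are illustrative only:

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
    D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

    z = torch.randn(16, 64)  # latent noise
    fake = G(z)              # generator maps noise to samples
    score = D(fake)          # discriminator scores real vs. fake
    loss = nn.functional.binary_cross_entropy_with_logits(
        score, torch.ones_like(score)  # generator's goal: make D say "real"
    )
    print(fake.shape, loss.item())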

12. Agentic AI
●​ What is Agentic AI?
●​ Difference Between Task-Oriented AI and Autonomous AI
●​ Core Components of Agentic AI:
○​ Decision-Making Capabilities
○​ Feedback Loops & Self-Learning
○​ Dynamic Adaptation to Environments
●​ Advanced Agentic AI Techniques:
○​ ReAct Framework
○​ Multi-Agent AI Systems
○​ CrewAI for Task Delegation
○​ LangGraph: Graph-Based Workflow Design
●​ Applications of Agentic AI
○​ Task Automation with AI Agents
○​ Multi-Agent Systems in Gaming, Healthcare, and Logistics
○​ Collaborative Systems for Complex Workflows
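
A toy ReAct-style loop showing the thought/action/observation cycle. The LLM is stubbed out so the control flow runs without an API key, and the get_date tool is purely hypothetical:

    def fake_llm(prompt):
        # A real agent would call an LLM here; this stub always picks the tool.
        return "Thought: I need the date. Action: get_date[]"

    TOOLS = {"get_date": lambda _: "2025-01-15"}

    def react(question, max_steps=3):
        transcript = f"Question: {question}\n"
        for _ in range(max_steps):
            step = fake_llm(transcript)
            transcript += step + "\n"
            if "Action:" in step:
                name = step.split("Action:")[1].split("[")[0].strip()
                observation = TOOLS[name](None)  # execute the chosen tool
                transcript += f"Observation: {observation}\n"
                return observation               # stop once we have an answer
        return None

    print(react("What is today's date?"))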

13. AWS for Generative AI
●​ Amazon S3: Storage for datasets, model checkpoints
●​ Amazon SageMaker: Train, fine-tune, and deploy LLMs
● Amazon Bedrock: Access prebuilt foundation models
●​ AWS Lambda: Serverless inference for LLMs
● Amazon Textract: Extract text, forms, and tables from documents
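
A minimal boto3 sketch of invoking a foundation model on Bedrock. It assumes AWS credentials and model access are already configured; the model ID is one common example, not a course requirement:

    import json
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 200,
            "messages": [{"role": "user", "content": "Summarize RAG in one line."}],
        }),
    )
    print(json.loads(response["body"].read())["content"][0]["text"])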

14. Azure for Generative AI
●​ Azure Blob Storage: Storage for datasets and models
●​ Azure Form Recognizer: Extract information from documents
●​ Azure Machine Learning (AML): Train, fine-tune, and deploy models
●​ Azure OpenAI Service: Access GPT, Codex, and DALL-E APIs
● Azure Functions: Run serverless inference and preprocessing
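
A minimal Azure OpenAI call with the openai Python SDK (v1+); the endpoint, key, API version, and deployment name are placeholders to fill in:

    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com/",
        api_key="<your-key>",
        api_version="2024-02-01",
    )
    reply = client.chat.completions.create(
        model="<your-gpt-deployment>",  # an Azure deployment name, not a raw model ID
        messages=[{"role": "user", "content": "Name one use of Azure Functions."}],
    )
    print(reply.choices[0].message.content)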

15. Industry-Based Projects
●​ NLP-Based Projects:
○​ Sentiment Analysis for Social Media
○​ Named Entity Recognition (NER) for Healthcare
●​ Retrieval-Augmented Generation (RAG) Projects:
○​ Document-Based Question Answering (FAQ Systems)
○​ Legal Document Retrieval & Summarization
●​ Chatbot Development:
○​ AI Assistants for Customer Support
○​ Domain-Specific Chatbots (Healthcare, Finance)
●​ Code Conversion:
○​ Legacy Code Migration
○​ Python Code Optimization
●​ Information Extraction from Documents:
○​ Invoice Processing, Resume Parsing
○​ OCR + LLM for Document Understanding
