
Course Title: Emerging Technologies

Duration: 48 Hours

Learning Objectives: By the end of the course, learners will be able to:

1. Understand core ML/DL concepts (regression, classification, clustering, deep learning).
2. Implement models using ML/DL and Gen AI frameworks.
3. Build and fine-tune ANNs, CNNs, RNNs, GANs, and LLMs.
4. Apply NLP techniques, RAG, and agentic AI to real-world problems.

Course Outline

Module 1: Foundations of Machine Learning (3 Hours)


- Session 1 (2 hrs):
  o Introduction to ML: types (supervised, unsupervised, reinforcement, semi-supervised).
  o Applications & challenges.
- Session 2 (2 hrs):
  o Regression Analysis: Linear/Ridge/Lasso regression, evaluation metrics (RMSE, R²) (see the sketch below).
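
A minimal, illustrative sketch of the Session 2 material, assuming scikit-learn and NumPy are installed; the synthetic dataset and regularization strengths are placeholder choices, not part of the course materials:

```python
# Compare Linear, Ridge, and Lasso regression and report RMSE and R^2.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic regression data (illustrative only).
X, y = make_regression(n_samples=500, n_features=10, noise=15.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("Linear", LinearRegression()),
                    ("Ridge", Ridge(alpha=1.0)),
                    ("Lasso", Lasso(alpha=0.1))]:
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    rmse = np.sqrt(mean_squared_error(y_test, pred))
    print(f"{name}: RMSE={rmse:.2f}, R^2={r2_score(y_test, pred):.3f}")
```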

Module 2: Classification & Clustering (9 Hours)


- Session 3 (3 hrs):
  o Classification: Random Forest, k-NN, Naive Bayes, evaluation (confusion matrix, ROC-AUC) (see the sketch below).
  o Support Vector Machines: hard and soft margin, functional and geometric margin, maximum-margin linear separators, kernels for learning nonlinear functions.
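
A minimal sketch of the Session 3 evaluation workflow, assuming scikit-learn is installed; the breast-cancer dataset and the Random Forest hyperparameters are illustrative, not prescribed by the syllabus:

```python
# Train a Random Forest and evaluate it with a confusion matrix and ROC-AUC.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]   # probability of the positive class

print(confusion_matrix(y_test, clf.predict(X_test)))
print("ROC-AUC:", roc_auc_score(y_test, proba))
```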

- Session 4 (3 hrs):
  o Ensemble methods: Adaptive Boosting, Stacking and DECORATE, active learning with ensembles.
  o Clustering: K-means, hierarchical clustering, silhouette score, Expectation-Maximization algorithm, Gaussian Mixture Models (see the sketch below).
  o Dimensionality Reduction: Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Latent Variable Models (LVM), Latent Dirichlet Allocation, Independent Component Analysis (ICA).
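
A minimal sketch combining two Session 4 topics, K-means with the silhouette score and PCA for dimensionality reduction; the Iris dataset and the choice of three clusters are illustrative assumptions:

```python
# Reduce to two principal components, cluster with K-means, and score the clustering.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

X, _ = load_iris(return_X_y=True)
X_2d = PCA(n_components=2).fit_transform(X)   # PCA keeps the two strongest directions

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_2d)
print("Silhouette score:", silhouette_score(X_2d, kmeans.labels_))
```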

Module 3: Deep Learning Basics (10 Hours)


- Session 5 (5 hrs):
  o ANNs & Perceptrons: MLPs, activation functions (ReLU, sigmoid), loss functions, Gradient Descent (GD), backpropagation, vanishing and exploding gradient problems.
  o Optimization Methods: Stochastic GD, momentum-based GD & Nesterov Accelerated GD, AdaGrad, RMSProp, Adam; bias-variance tradeoff, regularization, dropout (see the sketch below).
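
A hedged sketch of the Session 5 building blocks, an MLP with ReLU activations, dropout, cross-entropy loss, and the Adam optimizer; it assumes PyTorch, and the layer sizes and random data are placeholders:

```python
# A small MLP trained for a few gradient-descent steps on dummy data.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(), nn.Dropout(0.3),   # hidden layer with dropout regularization
    nn.Linear(64, 2),                                 # two output logits
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(128, 20)             # dummy feature batch
y = torch.randint(0, 2, (128,))      # dummy class labels

for _ in range(5):                   # a few optimization steps
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()                  # backpropagation computes gradients
    optimizer.step()                 # Adam update
print("final loss:", loss.item())
```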
- Session 6 (5 hrs):
  o CNNs: architecture, pooling, filters; popular CNN architectures (ResNet, AlexNet, VGGNet); transfer learning (see the sketch below).
  o Representation learning concepts, Self-Organizing Maps, Boltzmann Machines, Restricted Boltzmann Machines.
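
A minimal transfer-learning sketch for Session 6, assuming torchvision 0.13 or newer; freezing the backbone and using a 10-class head are illustrative choices:

```python
# Load a pretrained ResNet-18, freeze it, and attach a new classification head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                            # freeze pretrained weights

backbone.fc = nn.Linear(backbone.fc.in_features, 10)   # trainable head for 10 classes

x = torch.randn(4, 3, 224, 224)                        # dummy image batch
print(backbone(x).shape)                               # torch.Size([4, 10])
```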

Module 4: Advanced Deep Learning (10 Hours)


- Session 7 (6 hrs):
  o RNNs/LSTMs: sequence modelling, Recurrent Neural Networks, bidirectional RNNs, encoder-decoder and sequence-to-sequence architectures, deep recurrent networks, recursive neural networks, Long Short-Term Memory networks (see the sketch below).
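
A short sketch of the Session 7 material using a bidirectional LSTM in PyTorch; the input size, hidden size, and dummy sequence batch are assumptions for illustration:

```python
# Run a dummy batch of sequences through a bidirectional LSTM.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, num_layers=1,
               batch_first=True, bidirectional=True)

x = torch.randn(4, 10, 8)        # (batch, sequence length, features)
out, (h, c) = lstm(x)
print(out.shape)                 # torch.Size([4, 10, 32]); bidirectional doubles the hidden size
```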
- Session 8 (4 hrs):
  o Autoencoders, Generative Adversarial Networks, GAN challenges, GAN models.
  o GANs: DCGANs, image generation.
  o Review of Seq2Seq, attention mechanisms, positional encoding.
  o Overview of Reinforcement Learning: components of reinforcement learning, Markov decision processes, model-based learning, model-free learning, Q-learning (see the sketch below).
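
A toy tabular Q-learning sketch for the Session 8 reinforcement-learning overview; the five-state chain environment and hyperparameters are hypothetical, chosen only to show the update rule:

```python
# Tabular Q-learning on a tiny 1-D chain: reach the rightmost state for a reward.
import numpy as np

n_states, n_actions = 5, 2                 # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1          # learning rate, discount, exploration

for _ in range(2000):                      # episodes
    s = 0
    while s != n_states - 1:               # episode ends at the terminal state
        a = np.random.randint(n_actions) if np.random.rand() < eps else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])   # Q-learning update
        s = s_next

print(Q.round(2))                          # action values favour moving right
```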

Module 5: Gen AI (12 Hours)

- Session 9 (3 hrs):
  o NLP Basics: tokenization, Word2Vec, sentiment analysis (see the sketch below).
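
A minimal sketch of the Session 9 basics, tokenization plus a tiny Word2Vec model; it assumes the gensim library, and the three-sentence corpus is far too small for real embeddings:

```python
# Whitespace-tokenize a toy corpus and train Word2Vec embeddings on it.
from gensim.models import Word2Vec

corpus = [
    "the course covers machine learning and deep learning".split(),
    "transformers power modern natural language processing".split(),
    "word embeddings map tokens to dense vectors".split(),
]
model = Word2Vec(sentences=corpus, vector_size=32, window=3, min_count=1, epochs=50)

print(model.wv["learning"][:5])                    # first few embedding dimensions
print(model.wv.most_similar("learning", topn=3))   # nearest neighbours in the toy space
```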
- Session 10 (4 hrs):
  o Transformers & LLMs: Transformer architecture, BERT, GPT architecture, fine-tuning, LangChain, LlamaIndex (see the sketch below).
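
A hedged sketch for Session 10 using the Hugging Face Transformers library (LangChain and LlamaIndex, also listed above, are not shown); the model name is a public sentiment checkpoint, and downloading it requires network access:

```python
# Load a pretrained BERT-style classifier and score one sentence.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("This course outline is well structured.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax())])   # e.g. POSITIVE
```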
- Session 11 (5 hrs):
  o RAG & Agentic AI: chat with PDF/CSV, evaluation metrics (see the sketch below).
  o Neural network regularization and compression approaches (quantization, compression, pruning, Low-Rank Adaptation).
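
A toy sketch of the retrieval step behind RAG for Session 11: rank document chunks against a query and keep the best one as context for an LLM prompt. TF-IDF stands in for the dense embeddings and vector store a production pipeline would use; the chunks and query are invented for illustration:

```python
# Rank text chunks by similarity to a query and pick the top one as prompt context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

chunks = [
    "Module 3 covers backpropagation and optimization methods.",
    "Module 5 introduces transformers, LLM fine-tuning, and RAG.",
    "Module 2 discusses K-means clustering and PCA.",
]
query = "Which module teaches retrieval augmented generation?"

vec = TfidfVectorizer().fit(chunks + [query])
scores = cosine_similarity(vec.transform([query]), vec.transform(chunks))[0]
best = chunks[scores.argmax()]
print("Context passed to the LLM prompt:", best)
```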
