AI overview Simplified

Machine Learning

Supervised learning
 Types of Learning – Supervised Learning
 Getting started with Classification
 Types of Regression Techniques
 Classification vs Regression
 Linear Regression
o Introduction to Linear Regression
o Implementing Linear Regression
o Univariate Linear Regression
o Multiple Linear Regression
o Python | Linear Regression using sklearn
o Linear Regression Using Tensorflow
o Linear Regression using PyTorch
o Pyspark | Linear regression using Apache MLlib
o Boston Housing Kaggle Challenge with Linear Regression
 Polynomial Regression
o Polynomial Regression ( From Scratch using Python )
o Polynomial Regression
o Polynomial Regression for Non-Linear Data
o Polynomial Regression using Turicreate
 Logistic Regression
o Understanding Logistic Regression
o Implementing Logistic Regression
o Logistic Regression using Tensorflow
o Softmax Regression using TensorFlow
o Softmax Regression Using Keras
 Naive Bayes
o Naive Bayes Classifiers
o Naive Bayes Scratch Implementation using Python
o Complement Naive Bayes (CNB) Algorithm
o Applying Multinomial Naive Bayes to NLP Problems
 Support Vector
o Support Vector Machine Algorithm
o Support Vector Machines(SVMs) in Python
o SVM Hyperparameter Tuning using GridSearchCV
o Creating linear kernel SVM in Python
o Major Kernel Functions in Support Vector Machine (SVM)
o Using SVM to perform classification on a non-linear dataset
 Decision Tree
o Decision Tree
o Implementing Decision tree
o Decision Tree Regression using sklearn
 Random Forest

o Random Forest Regression in Python
o Random Forest Classifier using Scikit-learn
o Hyperparameters of Random Forest Classifier
o Voting Classifier using Sklearn
o Bagging classifier
 K-nearest neighbor (KNN)
o K Nearest Neighbors with Python | ML
o Implementation of K-Nearest Neighbors from Scratch using Python
o K-nearest neighbor algorithm in Python
o Implementation of KNN classifier using Sklearn
o Imputation using the KNNimputer()
o Implementation of KNN using OpenCV

Comparison of common supervised learning algorithms:

Algorithm | Type | Purpose | Method | Use Cases
Linear Regression | Regression | Predict continuous output values | Linear equation minimizing the sum of squared residuals | Predicting continuous values
Logistic Regression | Classification | Predict a binary output variable | Logistic function transforming a linear relationship | Binary classification tasks
Decision Trees | Both | Model decisions and outcomes | Tree-like structure with decisions and outcomes | Classification and regression tasks
Random Forests | Both | Improve classification and regression accuracy | Combining multiple decision trees | Reducing overfitting, improving prediction accuracy
SVM | Both | Create a hyperplane for classification or predict continuous values | Maximizing the margin between classes or predicting continuous values | Classification and regression tasks
KNN | Both | Predict class or value based on the k closest neighbors | Finding the k closest neighbors and predicting based on majority or average | Classification and regression tasks; sensitive to noisy data
Gradient Boosting | Both | Combine weak learners to create a strong model | Iteratively correcting errors with new models | Classification and regression tasks to improve prediction accuracy
Naive Bayes | Classification | Predict class based on a feature-independence assumption | Bayes' theorem with a feature-independence assumption | Text classification, spam filtering, sentiment analysis, medical diagnosis

Unsupervised Learning
 Types of Learning – Unsupervised Learning
 Clustering in Machine Learning
 Different Types of Clustering Algorithm
 K means Clustering – Introduction
 Elbow Method for optimal value of k in KMeans
 K-means++ Algorithm
 Analysis of test data using K-Means Clustering in Python
 Mini Batch K-means clustering algorithm
 Mean-Shift Clustering
 DBSCAN – Density based clustering
 Implementing DBSCAN algorithm using Sklearn
 Fuzzy Clustering
 Spectral Clustering
 OPTICS Clustering
 OPTICS Clustering Implementing using Sklearn
 Hierarchical clustering (Agglomerative and Divisive clustering)
 Implementing Agglomerative Clustering using Sklearn
 Gaussian Mixture Model

Unsupervised Learning Algorithms


There are mainly three types of algorithms used for unsupervised datasets:
 Clustering
 Association Rule Learning

 Dimensionality Reduction
Clustering Algorithms
Clustering in unsupervised machine learning is the process of grouping unlabeled data into
clusters based on their similarities. The goal of clustering is to identify patterns and
relationships in the data without any prior knowledge of the data's meaning.
Broadly, this technique is applied to group data based on patterns the machine learning
model finds, such as similarities or differences. These algorithms are used to process raw,
unclassified data objects into groups. For example, when no output parameter values are
provided, this technique can group clients based only on the input parameters in the data
(a minimal K-means example follows the list below).
Some common clustering algorithms:
 K-means Clustering: Groups data into K clusters based on how close the points are to
each other.
 Hierarchical Clustering : Creates clusters by building a tree step-by-step, either merging or
splitting groups.
 Density-Based Clustering (DBSCAN) : Finds clusters in dense areas and treats scattered
points as noise.
 Mean-Shift Clustering : Discovers clusters by moving points toward the most crowded
areas.
 Spectral Clustering : Groups data by analyzing connections between points using graphs.
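As a concrete starting point, here is a minimal K-means sketch using scikit-learn; the synthetic make_blobs data and the parameter values are illustrative choices, not part of the original outline.

```python
# A minimal K-means sketch with scikit-learn; the synthetic blobs and
# parameter values are illustrative choices.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)  # unlabeled 2-D points

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)          # cluster index for every point

print(labels[:10])
print(kmeans.cluster_centers_)          # coordinates of the three centroids
```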
Association Rule Learning
Association rule learning, also known as association rule mining, is a common technique used
to discover associations in unsupervised machine learning. It is a rule-based ML technique
that finds useful relations between the parameters of a large dataset.
It is mainly used for market basket analysis, which helps to better understand the
relationships between different products. For example, shopping stores use algorithms based
on this technique to find the relationship between the sale of one product and the sales of
another based on customer behavior: if a customer buys milk, they may also buy bread, eggs,
or butter. Once trained well, such models can be used to increase sales by planning targeted
offers (a minimal Apriori example follows the list below).
 Apriori Algorithm: Finds patterns by exploring frequent item combinations step-by-step.
 FP-Growth Algorithm : An Efficient Alternative to Apriori. It quickly identifies frequent
patterns without generating candidate sets.
 Eclat Algorithm: Uses intersections of itemsets to efficiently find frequent patterns.
 Efficient Tree-based Algorithms : Scales to handle large datasets by organizing data in tree
structures.
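A minimal market basket sketch, assuming the mlxtend library for the Apriori implementation; the toy transactions are invented for illustration.

```python
# A minimal market basket sketch using the Apriori algorithm from mlxtend
# (the transactions below are invented toy data).
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

transactions = [["milk", "bread"], ["milk", "eggs", "butter"],
                ["bread", "eggs"], ["milk", "bread", "eggs"]]

# One-hot encode the baskets into a boolean item matrix.
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions),
                      columns=te.columns_)

# Mine frequent itemsets, then derive rules such as {milk} -> {bread}.
itemsets = apriori(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```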
Dimensionality Reduction
Dimensionality reduction is the process of reducing the number of features in a dataset while
preserving as much information as possible. This technique is useful for improving the
performance of machine learning algorithms and for data visualization. Common
dimensionality reduction algorithms include the following (a minimal PCA example appears
after the list):

 Principal Component Analysis (PCA) : Reduces dimensions by transforming data into
uncorrelated principal components.
 Linear Discriminant Analysis (LDA) : Reduces dimensions while maximizing class
separability for classification tasks.
 Non-negative Matrix Factorization (NMF ): Breaks data into non-negative parts to simplify
representation.
 Locally Linear Embedding (LLE) : Reduces dimensions while preserving the relationships
between nearby points.
 Isomap: Captures global data structure by preserving distances along a manifold.
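A minimal PCA sketch with scikit-learn, using the iris dataset as an illustrative input; standardizing first matters because PCA is sensitive to feature scale.

```python
# A minimal PCA sketch: project the 4-D iris features onto 2 principal
# components (the dataset choice is illustrative).
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = load_iris().data                          # 150 samples, 4 features

X_scaled = StandardScaler().fit_transform(X)  # PCA is sensitive to feature scale

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)

print(X_2d.shape)                             # (150, 2)
print(pca.explained_variance_ratio_)          # variance captured per component
```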

Reinforcement Learning: An Overview


Reinforcement Learning (RL) is a branch of machine learning focused on making decisions to
maximize cumulative rewards in a given situation. Unlike supervised learning, which relies on
a training dataset with predefined answers, RL involves learning through experience. In RL, an
agent learns to achieve a goal in an uncertain, potentially complex environment by
performing actions and receiving feedback through rewards or penalties.
Key Concepts of Reinforcement Learning
 Agent: The learner or decision-maker.
 Environment: Everything the agent interacts with.
 State: A specific situation in which the agent finds itself.
 Action: All possible moves the agent can make.
 Reward: Feedback from the environment based on the action taken.
How Reinforcement Learning Works
RL operates on the principle of learning optimal behavior through trial and error. The agent
takes actions within the environment, receives rewards or penalties, and adjusts its behavior
to maximize the cumulative reward (a minimal interaction loop is sketched after the list
below). This learning process is characterized by the following elements:
 Policy: A strategy used by the agent to determine the next action based on the current
state.
 Reward Function: A function that provides a scalar feedback signal based on the state and
action.
 Value Function: A function that estimates the expected cumulative reward from a given
state.
 Model of the Environment: A representation of the environment that helps in planning by
predicting future states and rewards.
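A minimal sketch of the agent-environment loop, assuming the Gymnasium library and its CartPole-v1 environment; the agent acts randomly here, standing in for a learned policy.

```python
# A minimal agent-environment interaction loop, assuming the Gymnasium
# library and its CartPole-v1 environment; the random agent stands in
# for a learned policy.
import gymnasium as gym

env = gym.make("CartPole-v1")
state, info = env.reset(seed=42)

total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()  # random action in place of a policy
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward              # accumulate the reward signal
    done = terminated or truncated

print("episode return:", total_reward)
env.close()
```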

Difference between Reinforcement learning and Supervised learning:

Reinforcement learning | Supervised learning
Decisions are made sequentially: the output depends on the state of the current input, and the next input depends on the output of the previous input. | The decision is made on the initial input, or the input given at the start.
Decisions are dependent, so labels are given to sequences of dependent decisions. | Decisions are independent of each other, so a label is given to each decision.
Example: chess game, text summarization | Example: object recognition, spam detection

Types of Reinforcement:
1. Positive: Positive reinforcement occurs when an event, triggered by a particular behavior,
increases the strength and frequency of that behavior. In other words, it has a positive
effect on behavior.
Advantages of positive reinforcement:

 Maximizes performance
 Sustains change for a long period of time
Disadvantage:
 Too much reinforcement can lead to an overload of states, which can diminish the results
2. Negative: Negative reinforcement is the strengthening of a behavior because a negative
condition is stopped or avoided.
Advantages of negative reinforcement:

 Increases behavior
 Helps enforce a minimum standard of performance
Disadvantage:
 It only provides enough to meet the minimum behavior

Reinforcement learning methods are broadly categorized into Model-Based and Model-Free
methods; these approaches differ in how they interact with the environment.
1. Model-Based Methods
These methods use a model of the environment to predict outcomes and help the agent plan
actions by simulating potential results.

 Markov decision processes (MDPs)
 Bellman equation
 Value iteration algorithm
 Monte Carlo Tree Search
2. Model-Free Methods
These methods do not build or rely on an explicit model of the environment. Instead, the
agent learns directly from experience by interacting with the environment and adjusting its
actions based on feedback. Model-free methods can be further divided into Value-Based
and Policy-Based methods:
Value-Based Methods: Focus on learning the value of different states or actions, where the
agent estimates the expected return from each action and selects the one with the highest
value (a minimal tabular Q-learning sketch follows this list).
 Q-Learning
 SARSA
 Monte Carlo Methods
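A minimal tabular Q-learning sketch on a toy five-state chain; the environment and hyperparameters are invented for illustration.

```python
# A minimal tabular Q-learning sketch on a toy 5-state chain
# (environment and hyperparameters are invented for illustration).
import numpy as np

n_states, n_actions = 5, 2            # states 0..4; action 1 = right, 0 = left
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != n_states - 1:                      # state 4 is the goal
        # epsilon-greedy action selection, breaking ties randomly
        if rng.random() < epsilon or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(n_actions))
        else:
            a = int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.round(2))   # after training, action 1 (right) has the higher value
```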
Policy-based Methods: Directly learn a policy (a mapping from states to actions) without
estimating values; the agent continuously adjusts its policy to maximize rewards.
 REINFORCE Algorithm
 Actor-Critic Algorithm
 Asynchronous Advantage Actor-Critic (A3C)

Deep Learning
 Introduction to Deep Learning
 Introduction to Artificial Neural Networks
 Implementing Artificial Neural Network training process in Python
 A single neuron neural network in Python
 Convolutional Neural Networks
o Introduction to Convolution Neural Network
o Introduction to Pooling Layer
o Introduction to Padding
o Types of padding in convolution layer
o Applying Convolutional Neural Network on mnist dataset
 Recurrent Neural Networks
o Introduction to Recurrent Neural Network
o Recurrent Neural Networks Explanation
o seq2seq model
o Introduction to Long Short Term Memory
o Long Short Term Memory Networks Explanation
o Gated Recurrent Unit Networks (GRU)
o Text Generation using Gated Recurrent Unit Networks
 GANs – Generative Adversarial Network
o Introduction to Generative Adversarial Network
o Generative Adversarial Networks (GANs)
o Use Cases of Generative Adversarial Networks

o Building a Generative Adversarial Network using Keras
o Mode Collapse in GANs

Introduction to Neural Networks

Neural networks are the foundation of deep learning, inspired by the human brain. They
consist of layers of interconnected nodes, or "neurons," each designed to perform specific
calculations. These nodes receive input data, process it through various mathematical
functions, and pass the output to subsequent layers.
 Biological Neurons vs Artificial Neurons
 Single Layer Perceptron
 Multi-Layer Perceptron
 Artificial Neural Networks (ANNs)
Basic Components of Neural Networks
The basic components of a neural network are listed below, followed by a minimal
single-neuron forward-pass sketch:
 Neurons
 Layers in Neural Networks
 Weights and Biases
 Forward Propagation
 Activation Functions
 Loss Functions
 Backpropagation
 Learning Rate
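To make forward propagation concrete, here is a minimal NumPy sketch of a single neuron; the input, weight, and bias values are arbitrary.

```python
# A minimal NumPy sketch of forward propagation through one neuron:
# weighted sum of inputs plus bias, passed through a sigmoid activation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # input features (arbitrary values)
w = np.array([0.4, 0.7, -0.2])   # one weight per input
b = 0.1                          # bias

z = np.dot(w, x) + b             # weighted sum
print(sigmoid(z))                # neuron output, squashed into (0, 1)
```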
Optimization Algorithm in Deep Learning
Optimization algorithms in deep learning are used to minimize the loss function by adjusting
the weights and biases of the model. The most common ones are listed below, followed by a
minimal gradient descent sketch:
 Gradient Descent
 Stochastic Gradient Descent (SGD)
 Mini-batch Gradient Descent
 RMSprop (Root Mean Square Propagation)
 Adam (Adaptive Moment Estimation)
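A minimal sketch of plain gradient descent on a one-dimensional loss; the loss f(w) = (w - 3)^2 is chosen only because its gradient is easy to verify by hand.

```python
# A minimal gradient descent sketch on f(w) = (w - 3)^2, whose gradient
# is f'(w) = 2 * (w - 3); the minimum is at w = 3.
w = 0.0    # initial weight
lr = 0.1   # learning rate

for step in range(50):
    grad = 2 * (w - 3)   # gradient of the loss at the current w
    w -= lr * grad       # step against the gradient

print(w)   # close to 3.0
```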
Convolutional Neural Networks (CNNs)

Convolutional Neural Networks (CNNs) are a class of deep neural networks designed for
processing grid-like data, such as images. They use convolutional layers to automatically
detect patterns like edges, textures, and shapes in the data (a minimal Keras sketch follows
the list of key components below).
 Basics of Digital Image Processing
 Need for CNN
 Strides
 Padding
 Convolutional Layers
 Pooling Layers
 Fully Connected Layers
 Batch Normalization

 Backpropagation in CNNs
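A minimal Keras sketch of such a network, sized for 28x28 grayscale inputs (e.g., MNIST); the layer widths are illustrative choices.

```python
# A minimal Keras CNN sketch for 28x28 grayscale images (e.g., MNIST);
# layer widths are illustrative choices.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),  # detect local patterns
    layers.MaxPooling2D(pool_size=2),                     # downsample feature maps
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),               # 10-class output
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```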
To learn about the implementation, you can explore the following articles:
 CNN based Image Classification using PyTorch
 CNN based Images Classification using TensorFlow
CNN Based Architectures
There are various architectures in CNNs that have been developed for specific kinds of
problems, such as:
1. LeNet-5
2. AlexNet
3. VGG-16 Network
4. VGG-19 Network
5. GoogLeNet/Inception
6. ResNet (Residual Network)
7. MobileNet
Recurrent Neural Networks (RNNs)

Recurrent Neural Networks (RNNs) are a class of neural networks used for modeling sequence
data such as time series or natural language (a minimal LSTM sketch follows the list below).
 Vanishing Gradient and Exploding Gradient Problem
 How RNN Differs from Feedforward Neural Networks
 Backpropagation Through Time (BPTT)
 Types of Recurrent Neural Networks
 Bidirectional RNNs
 Long Short-Term Memory (LSTM)
 Bidirectional Long Short-Term Memory (Bi-LSTM)
 Gated Recurrent Units (GRU)
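A minimal Keras sketch of an LSTM-based sequence classifier; the vocabulary size, sequence length, and layer sizes are illustrative assumptions.

```python
# A minimal Keras LSTM sketch for binary sequence classification;
# vocabulary size, sequence length, and layer sizes are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(50,)),                          # sequences of 50 token ids
    layers.Embedding(input_dim=10000, output_dim=64),  # token ids -> dense vectors
    layers.LSTM(64),                                   # summarize the whole sequence
    layers.Dense(1, activation="sigmoid"),             # binary prediction
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```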
Generative Models in Deep Learning

Generative models generate new data that resembles the training data. The key types of
generative models include:
 Generative Adversarial Networks (GANs)
 Autoencoders
 Restricted Boltzmann Machines (RBMs)
Variants of Generative Adversarial Networks (GANs)
GANs consist of two neural networks, the generator and the discriminator, that compete with
each other in a game-like framework. The variants of GANs include the following:
 Deep Convolutional GAN (DCGAN)
 Conditional GAN (cGAN)
 Cycle-Consistent GAN (CycleGAN)
 Super-Resolution GAN (SRGAN)
 Wasserstein GAN (WGAN)
 StyleGAN
Types of Autoencoders

Autoencoders are neural networks used for unsupervised learning that learn to compress
and reconstruct data. Different types of autoencoders serve different purposes, such as noise
reduction, generative modelling, and feature learning (a minimal autoencoder sketch follows
the list below).
 Sparse Autoencoder
 Denoising Autoencoder
 Undercomplete Autoencoder
 Contractive Autoencoder
 Convolutional Autoencoder
 Variational Autoencoder
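A minimal Keras sketch of an undercomplete autoencoder, assuming flattened 784-dimensional inputs (e.g., 28x28 images); the 32-dimensional latent size is an arbitrary choice.

```python
# A minimal Keras sketch of an undercomplete autoencoder: 784-dimensional
# inputs (e.g., flattened 28x28 images) compressed to a 32-dimensional code.
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))
encoded = layers.Dense(32, activation="relu")(inputs)       # encoder: compress
decoded = layers.Dense(784, activation="sigmoid")(encoded)  # decoder: reconstruct

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")  # reconstruction loss vs. input
autoencoder.summary()
```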
Deep Reinforcement Learning (DRL)

Deep Reinforcement Learning combines the representation learning power of deep learning
with the decision-making ability of reinforcement learning. It enables agents to learn optimal
behaviors in complex environments through trial and error, using high-dimensional sensory
inputs.
 Reinforcement Learning
 Markov Decision Processes
 Function Approximation
Key Algorithms in Deep Reinforcement Learning
 Deep Q-Networks (DQN)
 REINFORCE
 Actor-Critic Methods
 Proximal Policy Optimization (PPO)

Natural Language Processing


 Introduction to Natural Language Processing
 Text Preprocessing in Python | Set – 1
 Text Preprocessing in Python | Set 2
 Removing stop words with NLTK in Python
 Tokenize text using NLTK in python
 How tokenizing text, sentence, words works
 Introduction to Stemming
 Stemming words with NLTK
 Lemmatization with NLTK
 Lemmatization with TextBlob
 How to get synonyms/antonyms from NLTK WordNet in Python?

Phases of Natural Language Processing

There are two components of Natural Language Processing:
 Natural Language Understanding
 Natural Language Generation
Libraries for Natural Language Processing

Some natural language processing libraries include:


 NLTK (Natural Language Toolkit)
 spaCy
 Transformers (by Hugging Face)
 Gensim
Normalizing Textual Data in NLP

Text normalization transforms text into a consistent format, which improves its quality and
makes it easier to process in NLP tasks.
Key steps in text normalization include the following (a minimal NLTK pipeline sketch
appears after the numbered steps):
1. Regular Expressions (RE) are sequences of characters that define search patterns.
 How to write Regular Expressions?
 Properties of Regular Expressions
 RegEx in Python
 Email Extraction using RE
2. Tokenization is a process of splitting text into smaller units called tokens.
 How Tokenizing Text, Sentences, and Words Works
 Word Tokenization
 Rule-based Tokenization
 Subword Tokenization
 Dictionary-Based Tokenization
 Whitespace Tokenization
 WordPiece Tokenization
3. Lemmatization reduces words to their base or root form.
4. Stemming reduces words to their root by removing suffixes. Types of stemmers include:
 Porter Stemmer
 Lancaster Stemmer
 Snowball Stemmer
 Lovins Stemmer
 Rule-based Stemming

5. Stopword removal is the process of removing common words from the document.
6. Parts of Speech (POS) Tagging assigns a part of speech to each word in a sentence based
on its definition and context.
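A minimal NLTK sketch combining several of these steps on an invented sentence; the exact nltk.download resource names can vary by NLTK version.

```python
# A minimal NLTK sketch chaining tokenization, stopword removal, stemming,
# and lemmatization; download resource names can vary by NLTK version.
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.tokenize import word_tokenize

nltk.download("punkt")
nltk.download("stopwords")
nltk.download("wordnet")

text = "The cats were running quickly through the gardens"
tokens = word_tokenize(text.lower())                 # tokenization

stop_words = set(stopwords.words("english"))
tokens = [t for t in tokens if t not in stop_words]  # stopword removal

stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()
print([stemmer.stem(t) for t in tokens])             # e.g. "running" -> "run"
print([lemmatizer.lemmatize(t) for t in tokens])     # e.g. "cats" -> "cat"
```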
Text Representation or Text Embedding Techniques in NLP

Text representation converts textual data into numerical vectors. Common methods include
the following (a minimal TF-IDF example appears after the list):
 One-Hot Encoding
 Bag of Words (BOW)
 N-Grams
 Term Frequency-Inverse Document Frequency (TF-IDF)
 N-Gram Language Modeling with NLTK
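A minimal TF-IDF sketch with scikit-learn on a toy three-document corpus:

```python
# A minimal TF-IDF sketch with scikit-learn on a toy corpus.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs can be friends",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)       # sparse matrix: documents x vocabulary

print(vectorizer.get_feature_names_out())  # learned vocabulary
print(X.shape)                             # (3, vocabulary size)
```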
Text embedding techniques refer to the methods and models used to create these vector
representations, including traditional methods (like TF-IDF and BOW) and more advanced
approaches (a minimal Word2Vec sketch follows the list):
1. Word Embedding
 Word2Vec (SkipGram, Continuous Bag of Words – CBOW )
 GloVe (Global Vectors for Word Representation)
 fastText
2. Pre-Trained Embedding
 ELMo (Embeddings from Language Models)
 BERT (Bidirectional Encoder Representations from Transformers)
3. Document Embedding – Doc2Vec
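A minimal Word2Vec sketch, assuming the Gensim library; the toy corpus and hyperparameters are illustrative, and real embeddings need far more text.

```python
# A minimal Word2Vec sketch with Gensim on a toy tokenized corpus
# (corpus and hyperparameters are illustrative assumptions).
from gensim.models import Word2Vec

sentences = [
    ["machine", "learning", "is", "fun"],
    ["deep", "learning", "extends", "machine", "learning"],
    ["word", "embeddings", "capture", "meaning"],
]

# sg=1 selects skip-gram; sg=0 would select CBOW.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(model.wv["learning"].shape)          # (50,) embedding vector
print(model.wv.most_similar("learning"))   # nearest words in the toy space
```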
Deep Learning Techniques for NLP

Deep learning has revolutionized Natural Language Processing (NLP) by enabling models to
automatically learn complex patterns and representations from raw text. Below are some of
the key deep learning techniques used in NLP:
 Artificial Neural Networks (ANNs)
 Recurrent Neural Networks (RNNs)
 Long Short-Term Memory (LSTM)
 Gated Recurrent Unit (GRU)
 Seq2Seq Models
 Transformer Models

Pre-Trained Language Models


Pre-trained models capture language patterns, context, and semantics. These models are
trained on massive corpora and can be fine-tuned for specific tasks.
 GPT (Generative Pre-trained Transformer)
 Transformers XL
 T5 (Text-to-Text Transfer Transformer)
 RoBERTa
To learn how to fine-tune a model, refer to this article: Transfer Learning with Fine-tuning
Natural Language Processing Tasks

1. Text Classification
 Dataset for Text Classification
 Text Classification using Naive Bayes
 Text Classification using Logistic Regression
 Text Classification using RNNs
 Text Classification using CNNs
2. Information Extraction
 Information Extraction
 Named Entity Recognition (NER) using SpaCy
 Named Entity Recognition (NER) using NLTK
 Relationship Extraction
3. Sentiment Analysis
 What is Sentiment Analysis?
 Sentiment Analysis using VADER
 Sentiment Analysis using Recurrent Neural Networks (RNN)
4. Machine Translation
 Statistical Machine Translation of Language
 Machine Translation with Transformer
5. Text Summarization
 What is Text Summarization?
 Text Summarizations using Hugging Face Model
 Text Summarization using Sumy
6. Text Generation
 Text Generation using Fnet
 Text Generation using Recurrent Long Short Term Memory Network
 Text2Text Generations using HuggingFace Model

Computer Vision
Mathematical prerequisites for Computer Vision

1. Linear Algebra
 Vectors
 Matrices and Tensors
 Eigenvalues and Eigenvectors
 Singular Value Decomposition
2. Probability and Statistics
 Probability Distributions
 Bayesian Inference and Bayes’ Theorem
 Markov Chains
 Kalman Filters
3. Signal Processing
 Image Filtering and Convolution
 Discrete Fourier Transform (DFT)

 Fast Fourier Transform (FFT)
 Principal Component Analysis (PCA)
Image Processing

Image processing refers to a set of techniques for manipulating and analyzing digital images.
The techniques include:
1. Image Transformation is the process of modifying or changing an image.
 Geometric Transformations
 Fourier Transform
 Intensity Transformation
2. Image Enhancement improves the visual quality or clarity of an image to highlight
important features or details and to minimize noise or distortions.
 Histogram Equalization
 Contrast Enhancement
 Image Sharpening
 Color Correction
3. Noise Reduction Techniques remove unwanted noise from images while preserving
important features like edges and texture.
 Gaussian Smoothing
 Median Filtering
 Bilateral Filtering
 Wavelet Denoising
4. Morphological Operations process images based on their structure and shape. Common
morphological operations, several of which appear in the OpenCV sketch after this list,
include:
 Erosion and Dilation
 Opening
 Closing
 Morphological Gradient
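A minimal OpenCV sketch of noise reduction and morphological operations; "input.png" is an assumed local grayscale image path.

```python
# A minimal OpenCV sketch of noise reduction and morphological operations;
# "input.png" is an assumed local grayscale image path.
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

blurred = cv2.GaussianBlur(img, (5, 5), 0)  # Gaussian smoothing
median = cv2.medianBlur(img, 5)             # median filtering (salt-and-pepper noise)

kernel = np.ones((3, 3), np.uint8)          # 3x3 structuring element
eroded = cv2.erode(img, kernel, iterations=1)
dilated = cv2.dilate(img, kernel, iterations=1)

cv2.imwrite("blurred.png", blurred)
```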
Feature Extraction

1. Edge Detection Techniques identify significant changes in intensity or color that
correspond to the boundaries of objects within an image.
 Canny Edge Detector
 Sobel Operator
 Prewitt Operator
 Laplacian of Gaussian (LoG)
2. Corner and Interest Point Detection identifies points in an image that are distinctive and
can be detected across different views, transformations, or scales.
 Harris Corner Detection
 Shi-Tomasi Corner Detector
3. Feature Descriptors generate a compact representation of the local image region around
keypoints, making it easier to match features across different images (a minimal edge and
keypoint detection sketch follows the list).
 SIFT (Scale-Invariant Feature Transform)
 SURF (Speeded-Up Robust Features)

 ORB (Oriented FAST and Rotated BRIEF)
 HOG (Histogram of Oriented Gradients)
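A minimal OpenCV sketch of edge detection and keypoint description; again, "input.png" is an assumed local image path.

```python
# A minimal OpenCV sketch of edge detection and keypoint description;
# "input.png" is again an assumed local image path.
import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

edges = cv2.Canny(img, 100, 200)        # Canny with hysteresis thresholds 100/200

orb = cv2.ORB_create(nfeatures=500)     # ORB keypoints + binary descriptors
keypoints, descriptors = orb.detectAndCompute(img, None)

print(len(keypoints), "keypoints")
print(descriptors.shape)                # (num_keypoints, 32)
```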
Deep Learning for Computer Vision

Deep learning has revolutionized the field of computer vision by enabling machines to
understand and interpret visual data in ways that were previously unimaginable.
1. Convolutional Neural Networks (CNNs)
Convolutional Neural Networks are designed to learn spatial hierarchies of features from
images. Key components include:
 Convolutional Layers
 Pooling Layers
 Fully Connected Layers
2. Generative Adversarial Networks (GANs)
Generative Adversarial Networks (GANs) consist of two networks (a generator and a
discriminator) that work against each other to create realistic images. There are various types
of GANs, each designed for specific tasks and improvements:
 Deep Convolutional GAN (DCGAN)
 Conditional GAN (cGAN)
 Cycle-Consistent GAN (CycleGAN)
 Super-Resolution GAN (SRGAN)
 Wasserstein GAN (WGAN)
 StyleGAN
3. Variational Autoencoders (VAEs)
Variational Autoencoders (VAEs) are a probabilistic version of autoencoders that forces the
model to learn a distribution over the latent space rather than a fixed point. Other
autoencoders used in computer vision are:
 Vanilla Autoencoders
 Denoising Autoencoders (DAE)
 Convolutional Autoencoder (CAE)
4. Vision Transformers (ViT)
Vision Transformers (ViT) are inspired by transformer models: they treat images as sequences
of patches and process them using self-attention mechanisms. Common vision transformers
include:
 DeiT (Data-efficient Image Transformer)
 Swin Transformer
 CvT (Convolutional Vision Transformer)
 T2T-ViT (Tokens-to-Token Vision Transformer)
5. Vision Language Models
Vision language models integrate visual and textual information to perform image processing
and natural language understanding.
 CLIP (Contrastive Language-Image Pre-training)
 ALIGN (A Large-scale ImaGe and Noisy-text)
 BLIP (Bootstrapping Language-Image Pre-training)
Computer Vision Tasks

1. Image Classification assigns a label or category to an entire image based on its content.
 Multiclass classification assigns an image to one of several predefined classes.
 Multilabel classification involves assigning multiple labels to a single image.
 Zero-shot classification classifies images into categories that the model has never seen
during training.
You can perform image classification using the following methods:
 Image Classification using Support Vector Machine (SVM)
 Image Classification using RandomForest
 Image Classification using CNN
 Image Classification using TensorFlow
 Image Classification using PyTorch Lightning
 Image Classification using InceptionResNetV2
To learn about the datasets for image classification, you can go through the article on Dataset
for Image Classification.
2. Object Detection involves identifying and locating objects within an image by drawing
bounding boxes around them. Object detection includes the following concepts (a minimal
IoU computation follows the list):
 Bounding Box Regression
 Intersection over Union (IoU)
 Region Proposal Networks (RPN)
 Non-Maximum Suppression (NMS)
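Since IoU is the core overlap measure here, a minimal from-scratch computation on two invented boxes:

```python
# A minimal sketch of Intersection over Union (IoU) between two boxes,
# each given as (x1, y1, x2, y2) corner coordinates.
def iou(box_a, box_b):
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Intersection is 50x50 = 2500; union is 10000 + 10000 - 2500 = 17500.
print(iou((0, 0, 100, 100), (50, 50, 150, 150)))  # 2500 / 17500, about 0.143
```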
Types of Object Detection Approaches
1. Single-Stage Object Detection
 YOLO (You Only Look Once)
 SSD (Single Shot Multibox Detector)
2. Two-Stage Object Detection
 Region-Based Convolutional Neural Networks (R-CNNs)
 Fast R-CNN
 Faster R-CNN
 Mask R-CNN
You can perform object detection using the following methods:
 Object Detection using TensorFlow
 Object Detection using PyTorch
3. Image Segmentation involves partitioning an image into distinct regions or segments to
identify objects or boundaries at a pixel level. Types of image segmentation are:
 Semantic Segmentation
 Instance Segmentation
 Panoptic Segmentation
You can perform image segmentation using the following methods:
 Image Segmentation using K Means Clustering
 Image Segmentation using UNet
 Image Segmentation using UNet++
 Image Segmentation using TensorFlow
 Image Segmentation with Mask R-CNN

