Categorization of ML-DL Algorithms

Artificial Intelligence

At the top level, artificial intelligence divides into Machine Learning and Deep Learning. Machine Learning is further categorized into Supervised, Unsupervised, Semi-Supervised, Reinforcement, and Ensemble learning; Deep Learning into Supervised, Unsupervised, and Hybrid approaches. The outline below lists representative algorithms for each branch.

Machine Learning

Supervised learning
Classification
1. Logistic Regression
2. Naive Bayes
3. K-Nearest Neighbour
4. Support Vector Machine (SVM)
5. Decision Tree
6. Gradient Boosting Machines
7. Random Forests
8. Neural Networks (DL)

Regression
1. Linear Regression
2. Ridge Regression
3. Ordinary Least Squares Regression
4. Stepwise Regression
5. Polynomial Regression
6. Elastic Net
7. Lasso Regression
8. Decision Trees for Regression
9. Neural Networks (DL)
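
To make the supervised branch concrete, here is a minimal scikit-learn sketch fitting two of the classifiers listed above. The synthetic dataset and the hyperparameter values are illustrative assumptions, not part of the original chart.

# Minimal sketch: fitting two of the listed classifiers with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=100)):
    model.fit(X_train, y_train)                 # supervised training on labelled data
    print(type(model).__name__, model.score(X_test, y_test))  # held-out accuracy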
Semi-Supervised learning
1. Self-training
2. Co-training
3. Tri-training
4. Label Propagation
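
A minimal sketch of self-training and label propagation (both named above) using scikit-learn. The toy data and the fraction of hidden labels are illustrative assumptions; the -1 convention for unlabelled points follows sklearn.semi_supervised.

# Minimal sketch: semi-supervised learning with partly hidden labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelPropagation, SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)
y_partial = y.copy()
y_partial[np.random.default_rng(0).random(300) < 0.7] = -1  # hide 70% of the labels

self_training = SelfTrainingClassifier(SVC(probability=True)).fit(X, y_partial)
label_prop = LabelPropagation().fit(X, y_partial)
print(self_training.score(X, y), label_prop.score(X, y))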
Unsupervised learning

Clustering
1. K-Means
2. Prototype-Based Clustering (e.g., K-Medians & K-Means++)
3. Hierarchical Clustering
4. Expectation Maximization
5. DBSCAN
6. Mean Shift
7. Density Peak Clustering (DPC)
8. Spectral Clustering
9. Fuzzy C-Means (FCM)
10. Graph-Based Clustering (e.g., Girvan-Newman & Louvain Modularity)
11. Deep Embedding Clustering (DEC)
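
A minimal sketch of two of the clustering algorithms listed above, one centroid-based and one density-based. The blob data and parameter choices are illustrative assumptions.

# Minimal sketch: K-Means (centroid-based) vs. DBSCAN (density-based).
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
dbscan = DBSCAN(eps=0.8, min_samples=5).fit(X)   # -1 labels mark noise points
print(kmeans.labels_[:10], dbscan.labels_[:10])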
Association Analysis
1. APRIORI
2. Eclat
3. FP-Growth
4. PrefixSpan Algorithm
5. Maximal and Closed Itemset Mining
6. CARMA (Class Association Rule Mining Algorithm)
7. Multi-Level Association Rule Mining
8. Temporal Association Rule Mining
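
A compact sketch of the APRIORI idea (the first algorithm above): frequent itemsets are grown level by level and pruned by minimum support. The transactions and support threshold are illustrative assumptions.

# Minimal sketch: level-wise frequent itemset mining in plain Python.
transactions = [{"milk", "bread"}, {"milk", "eggs"},
                {"milk", "bread", "eggs"}, {"bread"}]
min_support = 2

def frequent_itemsets(transactions, min_support):
    items = {i for t in transactions for i in t}
    k, current, result = 1, [], {}
    while True:
        candidates = ([frozenset([i]) for i in items] if k == 1 else
                      {a | b for a in current for b in current if len(a | b) == k})
        # prune: keep only candidates meeting the minimum support
        current = [c for c in candidates
                   if sum(c <= t for t in transactions) >= min_support]
        if not current:
            return result
        result.update({c: sum(c <= t for t in transactions) for c in current})
        k += 1

print(frequent_itemsets(transactions, min_support))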
Dimensionality Reduction
1. Feature Extraction: Principal Component Analysis (PCA)
2. Feature Selection (Wrapper, Filter, and Embedded methods)
3. Singular Value Decomposition (SVD)
4. t-Distributed Stochastic Neighbor Embedding (t-SNE)
5. Linear Discriminant Analysis (LDA)
6. Independent Component Analysis (ICA)
7. Autoencoders
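
A minimal sketch of PCA-based feature extraction, the first technique above. The dataset and component count are illustrative assumptions.

# Minimal sketch: reduce 4 iris features to 2 principal components.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
pca = PCA(n_components=2)            # keep the two directions of largest variance
X_reduced = pca.fit_transform(X)     # 4 features -> 2 features
print(X_reduced.shape, pca.explained_variance_ratio_)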
Reinforcement learning

Model-Free
1. Q-Learning
2. Hybrid
3. Policy Optimization
4. Deep Q-Networks (DQN)
5. Policy Gradient Methods (REINFORCE, PPO)
6. Actor-Critic Models (A2C, A3C)
7. SARSA (State-Action-Reward-State-Action)
8. Deep Deterministic Policy Gradient (DDPG)

Model-Based (Learn the Model / Given the Model)
1. Dynamic Programming (DP)
2. Model Predictive Control (MPC)
3. Monte Carlo Methods
4. Temporal Difference Learning
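
A minimal sketch of tabular Q-Learning, the first model-free method above, on a hypothetical 5-state chain where only reaching the last state gives reward 1. The environment and all hyperparameters are illustrative assumptions.

# Minimal sketch: tabular Q-Learning with epsilon-greedy exploration.
import numpy as np

n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

for _ in range(2000):
    s = 0
    while s != n_states - 1:
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-Learning update: bootstrap from the greedy value of the next state
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # greedy policy; states 0-3 should learn action 1 (right)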
Ensemble learning

Bagging (Bootstrap Aggregating)
1. Random Forest Classifier

Boosting
1. AdaBoost (Adaptive Boosting)
2. Gradient Boosting

Stacking
1. Stacked Generalization
2. Voting Classifiers/Regressors
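
A minimal sketch of the three ensemble families above (bagging, boosting, stacking) via scikit-learn. The data and all settings are illustrative assumptions.

# Minimal sketch: one representative per ensemble family.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=0)

bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50)
boosting = AdaBoostClassifier(n_estimators=50)
stacking = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier()), ("lr", LogisticRegression())],
    final_estimator=LogisticRegression())   # meta-learner combines base outputs

for model in (bagging, boosting, stacking):
    print(type(model).__name__, model.fit(X, y).score(X, y))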
Deep Learning

Supervised

Multi-Layer Perceptron (MLP)
1. Backpropagation (Backpropagation Through Time for sequential data)
2. Gradient Descent (SGD, Mini-batch Gradient Descent)
3. Adam, RMSprop, SGD with Momentum (optimization algorithms)
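
A bare-bones sketch of MLP training with backpropagation and plain gradient descent, as itemized above. The network size, toy data, and learning rate are illustrative assumptions (numpy only, one hidden layer, MSE loss).

# Minimal sketch: forward pass, backpropagation, gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                          # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy targets

W1, b1 = rng.normal(size=(3, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)
lr = 0.1

for _ in range(500):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid output
    # backward pass (backpropagation of the MSE gradient)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    # gradient descent updates
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

print(((out > 0.5) == y).mean())               # training accuracy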
Convolutional Neural Network (CNN)
1. Convolutional layers
2. Pooling layers (Max Pooling, Average Pooling)
3. Activation functions (ReLU, Leaky ReLU)
4. Dropout regularization
5. Batch normalization
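
A minimal sketch wiring together the five CNN building blocks enumerated above. The layer sizes and the 28x28 single-channel input are illustrative assumptions (PyTorch).

# Minimal sketch: one block of each listed CNN component.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1. convolutional layer
    nn.BatchNorm2d(16),                          # 5. batch normalization
    nn.ReLU(),                                   # 3. activation function
    nn.MaxPool2d(2),                             # 2. pooling layer
    nn.Dropout(0.25),                            # 4. dropout regularization
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),                 # classifier head
)

x = torch.randn(8, 1, 28, 28)                    # a dummy batch of images
print(model(x).shape)                            # torch.Size([8, 10])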
Recurrent Neural Network (RNN)
1. Long Short-Term Memory (LSTM)
2. Bidirectional Long Short-Term Memory (Bi-LSTM)
3. Gated Recurrent Unit (GRU)
4. Vanilla RNN
5. Echo State Networks (ESNs)
6. Hierarchical RNNs
7. Recurrent Convolutional Neural Networks (RCNNs)
8. Transformer-based architectures (e.g., GPT, BERT)
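
A minimal sketch of an LSTM sequence classifier, the first recurrent variant above. The dimensions and the random batch are illustrative assumptions (PyTorch).

# Minimal sketch: LSTM over a sequence, classify from the last hidden state.
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, n_features=8, hidden=32, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)        # h_n: final hidden state per layer
        return self.head(h_n[-1])

x = torch.randn(4, 20, 8)                 # 4 sequences of 20 steps
print(LSTMClassifier()(x).shape)          # torch.Size([4, 3])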
Unsupervised

Generative Adversarial Network (GAN)
1. GAN loss functions
2. Wasserstein GAN (WGAN)
3. Conditional GAN (CGAN)
4. CycleGAN
5. Progressive GAN (PGAN)
6. StyleGAN

Self-Organizing Map (SOM)
1. Kohonen's learning algorithm
2. Topological ordering of neurons
3. SOM training process (competitive learning, neighborhood function)
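
A condensed sketch of one GAN training step using the standard GAN loss functions named above. The tiny generator/discriminator and the 2-D stand-in data are illustrative assumptions (PyTorch).

# Minimal sketch: alternating discriminator and generator updates.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))   # noise -> sample
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))   # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, 2) + 3.0            # stand-in for real data
fake = G(torch.randn(32, 8))

# discriminator step: push real toward 1, fake toward 0
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# generator step: fool the discriminator (non-saturating loss)
g_loss = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))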
Restricted Boltzmann Machine (RBM)
1. Contrastive Divergence (CD) algorithm
2. Gibbs sampling
3. Training using Contrastive Divergence (CD) or Persistent Contrastive Divergence (PCD)

Deep Belief Network (DBN)
1. Greedy layer-wise pretraining
2. SGD algorithm for fine-tuning
3. CD or PCD

Auto-Encoder
1. Sparse Autoencoder (SAE)
2. Denoising Autoencoder (DAE)
3. Contractive Autoencoder (CAE)
4. Variational Autoencoder (VAE)
5. Adversarial Autoencoder (AAE)
6. Capsule Autoencoder Networks
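
A minimal sketch of a denoising autoencoder (variant 2 above): reconstruct clean inputs from corrupted ones. The sizes, noise level, and data are illustrative assumptions (PyTorch).

# Minimal sketch: compress, reconstruct, and train against the clean input.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(20, 4), nn.ReLU())   # compress to 4 dims
decoder = nn.Sequential(nn.Linear(4, 20))              # reconstruct
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()),
                       lr=1e-3)

x = torch.randn(256, 20)                               # stand-in dataset
for _ in range(200):
    noisy = x + 0.3 * torch.randn_like(x)              # corrupt the input
    recon = decoder(encoder(noisy))
    loss = ((recon - x) ** 2).mean()                   # target is the clean x
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))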

Hybrid

Semi-Supervised Learning
1. Contrastive Predictive Coding (CPC)
2. Bidirectional Encoder Representations from Transformers (BERT)
3. Contrastive Multiview Coding (CMC)
4. Rotation Prediction
5. Co-training (Tri-training & Self-labeled Co-training)

Transfer Learning
1. Fine-Tuning
2. Domain Adaptation (Adversarial Domain Adaptation & Domain-Adversarial Neural Networks (DANN) with Gradient Reversal Layer (GRL))
3. Model Distillation
4. Meta-Learning (Model-Agnostic Meta-Learning (MAML))
5. Pre-trained CNNs
6. Pre-trained Language Models (LMs)

Joint Distribution Matching
1. Generative Adversarial Networks for Joint Distribution Matching (JDM-GANs)
2. CycleGAN
3. Adversarial Discriminative Domain Adaptation (ADDA)

Multi-Task Learning
1. Shared Parameterization
2. Task Attention Mechanisms
3. Gradient Sharing
4. Loss Weighting
5. Task Ensemble

Ensemble Methods
1. Mixture of Experts (MoE)
2. Co-regularized Ensemble Methods
3. Stacked Generalization (Stacking)
4. Bootstrap Aggregating (Bagging)

Multi-View Learning
1. Canonical Correlation Analysis (CCA)
2. Deep Canonical Correlation Analysis (DCCA)
3. Multiple Kernel Learning (MKL)

Multi-Modal Learning
1. Cross-Modal Retrieval
2. Fusion-Based Models

Hybridization of Reinforcement Learning with Supervision
1. Imitation Learning from Observation (ILO)
2. Reward Augmented Maximum Likelihood (RAML)
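
A minimal sketch of fine-tuning a pre-trained CNN (items 1 and 5 under Transfer Learning above): freeze the backbone and retrain only a new head. The 10-class head and torchvision's ResNet-18 weights are illustrative assumptions (PyTorch/torchvision).

# Minimal sketch: transfer learning by freezing a pre-trained backbone.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # pre-trained backbone
for param in model.parameters():
    param.requires_grad = False                    # freeze learned features
model.fc = nn.Linear(model.fc.in_features, 10)     # new task-specific head

trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))           # only the head is trained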

Prepared by researchbrains.com (+91 9940778788)
