Models

Machine Learning (Data Science)

Classical Learning: Regression, Classification

Clustering

Pattern search

Ensemble methods: Bagging, Boosting, Stacking

Neural Nets: CNN, RNN, Generative Adversarial Network, Auto Encoders

Reinforcement

Time series

Decomposition: Linear, Non-Linear

Matrix Factorization

Manifold Learning

Linear Regression

Polynomial Regression

Ridge Regression

Lasso

Elastic Net

Support Vector Machine

XGBoost

Decision Trees

Random forest

Bayesian Linear Regression

K-Nearest neighbours

Naïve Bayes

Support Vector Machine

Decision Trees

Logistic Regression

Gradient Boost

Random forest

K-Means

Mean shift
Hierarchical clustering

Density-Based spatial clustering of applications with noise (DBSCAN)

Agglomerative

Apriori
Eclat

FP-Growth

Random forest

Extreme Gradient Boosting (XGBoost)

AdaBoost

LightGBM

Gradient Boosting Machine (GBM)

CatBoost

Stochastic Gradient Boosting

Diffusion Convolutional Recurrent Network


Deep residual network

LeNet-5

AlexNet

VGGNet

GoogLeNet
MobileNet

EfficientNet

Dense Convolutional Network (DenseNet)

Long Short Term Memory

Gated Recurrent Unit (GRU)

Recurrent Convolutional Network

Hierarchical Network

Bidirectional RNN

Generator

Discriminator

seq2seq

Genetic Algorithm

Asynchronous Advantage Actor-Critic (A3C)

State Action Reward State Action (SARSA)

Deep Q-Network

Q-Learning
ARIMA

SARIMA

Exponential Smoothing

Seasonal Decomposition

Vector Autoregression (VAR)

State Space Model

Long short term memory

Gaussian Processes

Prophet

Neural Prophet

Wavelet Transforms

Dynamic Linear Models

Principal Component Analysis

Linear Discriminant Analysis

Factor Analysis

Independent Component Analysis

Multidimensional Scaling

t-Distributed Stochastic Neighbor Embedding (t-SNE)

Isomap

Locally Linear Embedding

Kernel PCA
Self-Organising Maps

Auto Encoders

Singular Value Decomposition

Non-Negative Matrix Factorization

Gaussian Process Latent Variable Model

Diffusion Maps
Predicts the target variable as a linear function of a single predictor variable.

Predicts the target variable as a linear function of multiple predictor variables.

Adds a penalty equal to the square of the magnitude of coefficients to the loss function to prevent
overfitting.
Adds a penalty equal to the absolute value of the magnitude of coefficients to the loss function,
leading to sparse models.
Combines L1 and L2 regularization penalties to take advantage of both ridge and lasso regression.
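
For illustration, a minimal sketch of these three penalized regressors, assuming scikit-learn; the synthetic data and the alpha / l1_ratio values below are placeholders.

import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                                   # toy feature matrix
y = X @ np.array([1.5, 0.0, -2.0, 0.0, 3.0]) + rng.normal(scale=0.1, size=100)

ridge = Ridge(alpha=1.0).fit(X, y)                              # L2 penalty on coefficient magnitudes
lasso = Lasso(alpha=0.1).fit(X, y)                              # L1 penalty, drives some coefficients to zero
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)            # mix of L1 and L2 penalties

print(ridge.coef_)
print(lasso.coef_)                                              # note the exact zeros (sparsity)
print(enet.coef_)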

Uses support vector machine principles for regression, capable of fitting complex, non-linear
functions.
Builds an ensemble of trees sequentially, where each tree corrects errors made by the previous
ones. Combines the predictions of weak learners to form a strong prediction.
Splits the data into regions and fits a simple model (usually a constant) within each region.
Prediction is the mean value of the target variable in each leaf node.
An ensemble of decision trees, where each tree is trained on a random subset of the data and
predictions are averaged. Aggregates predictions from multiple decision trees.

Incorporates prior distributions for the model parameters and updates these distributions based on
the data. Bayesian inference is used to derive the posterior distribution of the model parameters.

Classifies a data point based on the majority class among its k-nearest neighbors in the feature
space. Simple and effective for small datasets with clear class boundaries.

Assumes independence among predictors given the class. Simple, fast, and works well with small
datasets and text classification.
Finds the hyperplane that best separates classes in the feature space. Uses kernel functions to
handle non-linear boundaries.
Splits the data into subsets based on feature values, forming a tree-like structure. Can handle both
numerical and categorical data.
Models the probability of a binary outcome. Uses the logistic function to constrain the output
between 0 and 1.
Builds an ensemble of trees sequentially, where each tree corrects errors made by the previous
ones. Includes popular implementations like XGBoost, LightGBM, and CatBoost.
An ensemble of decision trees, where each tree is trained on a random subset of the data.
Combines predictions from multiple trees to improve accuracy and reduce overfitting.
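
A short comparison sketch of several of the classifiers described above, assuming scikit-learn and a synthetic dataset; the accuracy numbers it prints will vary.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "SVM (RBF)": SVC(C=1.0, kernel="rbf", gamma="scale"),
    "Logistic Regression": LogisticRegression(penalty="l2", C=1.0, solver="lbfgs"),
    "Random Forest": RandomForestClassifier(n_estimators=100, max_depth=5),
}
for name, model in models.items():
    print(name, model.fit(X_tr, y_tr).score(X_te, y_te))       # held-out accuracy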
One of the most popular clustering algorithms, K-means partitions the data into K clusters by
iteratively assigning data points to the nearest cluster centroid and updating the centroids based
on the mean of the points assigned to each cluster. It aims to minimize the within-cluster variance.
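
A minimal K-means sketch, assuming scikit-learn; n_clusters and max_iter are the hyperparameters listed later in this sheet.

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
km = KMeans(n_clusters=3, max_iter=300, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)   # centroids after convergence
print(km.inertia_)           # within-cluster sum of squares being minimized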

Mean Shift is a non-parametric clustering algorithm that doesn't require specifying the number of
clusters beforehand. It works by iteratively shifting each data point towards the mode (peak) of the
kernel density estimate of the data until convergence.
This method creates a hierarchy of clusters by either bottom-up (agglomerative) or top-down
(divisive) approaches. In agglomerative clustering, each data point starts in its own cluster, and
pairs of clusters are merged as one moves up the hierarchy. In divisive clustering, all data points
start in one cluster, which is recursively split as one moves down the hierarchy.

DBSCAN groups together points that are closely packed together, based on a specified distance
(epsilon) and minimum number of points (minPts) within that distance. It can discover clusters of
arbitrary shapes and is robust to noise.
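
A minimal DBSCAN sketch, assuming scikit-learn; eps and min_samples correspond to the epsilon and minPts parameters described above.

from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
print(set(labels))   # cluster ids; -1 marks points treated as noise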

This hierarchical clustering algorithm starts with each point as its own cluster and merges the
closest pair of clusters iteratively until only one cluster remains.

Identifies frequent itemsets and generates association rules. Used in market basket analysis.
Uses a vertical database format for finding frequent itemsets. Efficient for datasets with many
frequent itemsets.
More efficient than Apriori by using a prefix tree to store frequent patterns. Avoids generating
candidate sets explicitly.
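
A hedged sketch of frequent-itemset mining and rule generation, assuming the third-party mlxtend package; the transactions below are toy examples.

import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

transactions = [["milk", "bread"], ["milk", "diapers"],
                ["milk", "bread", "diapers"], ["bread"]]
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

itemsets = apriori(onehot, min_support=0.5, use_colnames=True)            # frequent itemsets
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])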
An extension of bagged decision trees where each tree is trained on a bootstrap sample of the data
and, additionally, a random subset of features is considered for splitting at each node. This
decorrelates the trees and improves performance.

An optimized and scalable implementation of gradient boosting, XGBoost includes regularization
terms to prevent overfitting and supports parallel processing.
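
A short sketch assuming the xgboost package; reg_alpha and reg_lambda are the L1/L2 regularization terms mentioned above, and all values are placeholders.

from xgboost import XGBRegressor
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=500, n_features=10, random_state=0)
model = XGBRegressor(n_estimators=200, learning_rate=0.1, max_depth=4,
                     reg_alpha=0.1, reg_lambda=1.0, n_jobs=-1)
model.fit(X, y)                                   # trees are added sequentially to reduce residual error
print(model.predict(X[:5]))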
AdaBoost adjusts the weights of incorrectly classified instances, increasing their influence on the
learning process. It combines multiple weak classifiers to form a strong classifier by focusing on
difficult cases in each iteration.
A gradient boosting framework that uses tree-based learning algorithms, LightGBM is designed for
efficiency and scalability. It uses a histogram-based approach for faster training.
GBM builds an ensemble of decision trees sequentially, where each tree is trained to correct the
errors of the previous trees. It uses gradient descent to minimize the loss function.
Specifically designed to handle categorical features efficiently, CatBoost uses ordered boosting to
reduce prediction shift and supports categorical data directly.

A variant of gradient boosting that introduces randomness by subsampling the training data and
features. This can help reduce overfitting.
Combines multiple base models (e.g., decision trees, logistic regression, SVMs) and trains a meta-
model on their predictions. The base models are typically trained on the original data, and the
meta-model is trained on the outputs (predictions) of the base models.
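
A minimal stacking sketch, assuming scikit-learn's StackingClassifier; the choice of base models and meta-model is illustrative only.

from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, random_state=0)
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("svm", SVC(probability=True))],
    final_estimator=LogisticRegression(),   # meta-model trained on the base models' predictions
    cv=5,                                   # out-of-fold predictions are used to fit the meta-model
)
print(stack.fit(X, y).score(X, y))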

Introduced the concept of residual learning to address the degradation problem in deep networks.
Uses skip connections to allow gradients to flow more easily through the network.
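
A sketch of a single residual block, assuming TensorFlow/Keras; the skip connection adds the block's input back to its output so gradients can bypass the convolutions. Shapes are placeholders.

from tensorflow import keras
from tensorflow.keras import layers

def residual_block(x, filters):
    # main path: two 3x3 convolutions (assumes x already has `filters` channels)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([x, y])                 # skip connection
    return layers.Activation("relu")(y)

inputs = keras.Input(shape=(32, 32, 64))
outputs = residual_block(inputs, 64)
model = keras.Model(inputs, outputs)
model.summary()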
One of the earliest CNN architectures designed by Yann LeCun for handwritten digit recognition
(MNIST dataset).
A deeper architecture that popularized CNNs in large-scale image classification tasks. It introduced
the use of ReLU activation and dropout for regularization.

Characterized by its use of very small (3x3) convolutional filters and a deep architecture with up to
19 layers.
Introduced the Inception module, which allows for more efficient computation by combining
multiple convolutions with different filter sizes.
Designed for mobile and embedded vision applications, it uses depthwise separable convolutions
to reduce the number of parameters.
Uses a compound scaling method to balance network depth, width, and resolution for improved
efficiency.
Uses dense blocks where each layer receives input from all previous layers, improving gradient flow
and feature reuse.
An extension of RNN designed to overcome the vanishing gradient problem. LSTMs have a more
complex architecture that includes gates (input gate, forget gate, and output gate) to control the
flow of information.
A simplified version of LSTM that combines the input and forget gates into a single update gate and
merges the cell state and hidden state. GRUs are computationally more efficient than LSTMs while
still addressing the vanishing gradient problem.
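
A minimal LSTM sketch, assuming TensorFlow/Keras; the layer arguments mirror the hyperparameters listed later in this sheet (units, activation, recurrent_activation, dropout), and the input shape is a placeholder.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(30, 8)),                       # 30 time steps, 8 features per step
    layers.LSTM(units=64, activation="tanh",
                recurrent_activation="sigmoid", dropout=0.2),
    layers.Dense(1),                                  # e.g. one-step-ahead prediction
])
model.compile(optimizer="adam", loss="mse")
model.summary()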

Combines the strengths of CNNs and RNNs by applying convolutional layers to capture spatial
features and recurrent layers to capture temporal features.
Organizes RNN layers hierarchically to capture different levels of abstraction at different temporal
scales.
Consists of two RNNs running in parallel, one in the forward direction and one in the backward
direction. This allows the network to have both past and future context.
The generator algorithm is responsible for creating new data samples. It takes random noise as
input and generates data that should ideally be indistinguishable from real data.
The discriminator algorithm is like a binary classifier. It tries to distinguish between real data
samples and the fake samples generated by the generator. Its goal is to correctly classify real data
as real (label 1) and fake data as fake (label 0).
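
A minimal generator/discriminator sketch, assuming TensorFlow/Keras; the layer sizes, latent dimension, and 784-dimensional sample shape are placeholders.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 32

generator = keras.Sequential([               # noise -> fake sample (here a flattened 28x28 image)
    keras.Input(shape=(latent_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(784, activation="sigmoid"),
])

discriminator = keras.Sequential([           # binary classifier: real (1) vs. fake (0)
    keras.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

noise = tf.random.normal((16, latent_dim))
fake_samples = generator(noise)              # generator output
scores = discriminator(fake_samples)         # discriminator's real/fake scores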

Consists of an encoder and a decoder, widely used for tasks like machine translation and text summarization,
where it maps input sequences to output sequences, effectively capturing contextual information
and generating meaningful responses. It utilizes recurrent neural networks or transformers to
encode source sequences into fixed-length representations and decode them into target
sequences, enabling the modeling of complex relationships between inputs and outputs.

Genetic Algorithms are optimization techniques inspired by natural selection, using selection,
crossover, and mutation to evolve candidate solutions towards optimal outcomes across diverse
problem domains. They efficiently explore large search spaces, providing robust solutions to
complex optimization problems, albeit with sensitivity to parameter settings and computational
demands.

Actor-Critic methods combine value-based and policy-based approaches by maintaining both a
policy network (actor) and a value network (critic). The critic evaluates actions, providing feedback
to the actor to improve its policy. In A3C, several such actor-critic workers run asynchronously in
parallel copies of the environment and periodically update a shared global network.

DQN is an extension of Q-Learning that uses deep neural networks to approximate the action-value
function. It employs experience replay and target networks to stabilize training and improve
sample efficiency.
Q-Learning is a model-free RL algorithm that learns an action-value function Q(s, a), which
estimates the expected cumulative reward of taking action a in state s. It uses the Bellman
equation to update the Q-values iteratively.
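
A tabular Q-learning sketch in plain NumPy; the state/action counts and the learning_rate, discount_factor, and epsilon values are placeholders.

import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning_rate, discount_factor, exploration rate

def q_update(s, a, r, s_next):
    # Bellman update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def choose_action(s):
    # epsilon-greedy policy over the current Q estimates
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(Q[s].argmax())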
ARIMA is a popular linear model used for time series forecasting. It combines autoregression (AR),
differencing (I), and moving average (MA) components to capture different aspects of the time
series data.
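
A minimal ARIMA sketch, assuming statsmodels; order=(p, d, q) sets the AR lags, differencing order, and MA lags, and the series below is synthetic.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

y = np.cumsum(np.random.randn(200))          # toy non-stationary series
model = ARIMA(y, order=(2, 1, 1)).fit()      # AR(2), first differencing, MA(1)
print(model.forecast(steps=10))              # 10-step-ahead forecast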
SARIMA extends ARIMA by incorporating seasonal components into the model to account for
periodic fluctuations in the data.
Exponential smoothing methods, such as Simple Exponential Smoothing (SES), Double Exponential
Smoothing (DES), and Triple Exponential Smoothing (Holt-Winters), are simple yet effective
techniques for forecasting time series data. They assign exponentially decreasing weights to past
observations.

STL decomposes time series data into seasonal, trend, and remainder components using a process
of repeated filtering.
VAR models are used for multivariate time series forecasting, where multiple time series variables
are modeled simultaneously as a system of equations.
State space models, including the Kalman Filter and Bayesian Structural Time Series (BSTS) models,
represent time series data as a latent state process evolving over time.
LSTM networks are a type of recurrent neural network (RNN) capable of learning long-term
dependencies in sequential data, making them well-suited for time series forecasting tasks.
Gaussian processes are a flexible non-parametric Bayesian approach for modeling time series data.
They model the distribution over functions and can capture complex patterns in the data.
Prophet is a forecasting tool developed by Facebook that decomposes time series data into trend,
seasonality, and holiday effects and utilizes a piecewise linear or logistic growth curve model.

Neural Prophet is an extension of Facebook's Prophet that incorporates neural network
architectures to capture complex patterns in time series data.
Wavelet transforms are used for time-frequency analysis of non-stationary time series data,
allowing for both time and frequency localization.

DLMs are Bayesian state space models that can be used for time series analysis and forecasting by
specifying dynamic relationships between observed and unobserved variables.
Projects data onto the directions of maximum variance. Commonly used for feature extraction and
data visualization.
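
A minimal PCA sketch, assuming scikit-learn.

from sklearn.decomposition import PCA
from sklearn.datasets import load_iris

X = load_iris().data
X_2d = PCA(n_components=2).fit_transform(X)   # 4 features projected onto 2 principal components
print(X_2d.shape)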
Projects data to a lower-dimensional space by maximizing class separability. Often used for
supervised classification tasks.
Models observed variables and their linear combinations using latent factors. Used for uncovering
hidden relationships in the data.
Decomposes multivariate signals into additive, independent components. Commonly used in signal
processing and for separating mixed signals.
Projects data into a lower-dimensional space while preserving pairwise distances as well as
possible. Used for data visualization.
Reduces dimensions by modeling pairwise similarities and preserving local structure. Primarily used
for visualization of high-dimensional data.
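
A minimal t-SNE sketch, assuming scikit-learn; perplexity controls the effective neighborhood size.

from sklearn.manifold import TSNE
from sklearn.datasets import load_digits

X = load_digits().data
X_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(X_2d.shape)   # 64-dimensional digits embedded in 2D for visualization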
Preserves geodesic distances between all points. Captures the intrinsic geometry of the data
manifold.
Preserves local linear relationships among neighboring points. Good for unwrapping manifolds in
high-dimensional data.
Extends PCA using kernel methods to handle non-linear relationships. Used for capturing non-
linear structures.
Neural network-based method that reduces dimensions while preserving topological properties.
Used for clustering and visualization.
Neural networks trained to compress and reconstruct data. Capture complex non-linear
relationships and are used for unsupervised learning.
Decomposes a matrix into singular vectors and singular values. Used for latent semantic analysis,
noise reduction, and feature extraction.
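
A minimal SVD sketch with NumPy, including the kind of low-rank reconstruction used for noise reduction and latent semantic analysis.

import numpy as np

A = np.random.randn(6, 4)
U, s, Vt = np.linalg.svd(A, full_matrices=False)   # singular vectors and singular values
A_rank2 = (U[:, :2] * s[:2]) @ Vt[:2]              # rank-2 reconstruction
print(np.allclose(A, U @ np.diag(s) @ Vt))         # exact reconstruction from all components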
Factorizes a matrix into non-negative factors. Useful for parts-based representation in image
processing and text mining.
Uses Gaussian processes to learn a latent space representation. Captures uncertainty in the
embedding process.
Uses diffusion processes to model the data geometry. Preserves the global data structure and is
robust to noise.
Hyperparameters

fit_intercept, normalize

degree, include_bias

alpha, solver

alpha, max_iter

alpha, l1_ratio

C, kernel, gamma

alpha, lambda, fit_intercept

n_neighbors, weights

C, kernel, gamma

max_depth, min_samples_split, min_samples_leaf

penalty, C, solver

n_estimators, max_features, max_depth

n_clusters, max_iter
n_estimators, learning_rate
units, activation, recurrent_activation, dropout

units, activation, recurrent_activation, dropout

learning_rate, discount_factor, batch_size, memory_size

learning_rate, discount_factor, epsilon


Role and Range
'fit_intercept' determines whether the model will include an intercept term (the y-intercept) in the equation. Range: Boolean (True or False).
'normalize' determines whether the input features X should be normalized before fitting the model. Range: Boolean (True or False).
'degree' determines the highest power of x (the independent variable) to include in the polynomial features. Range: any positive integer.
'include_bias' determines whether an additional column of ones (the bias term, or the intercept) is included in the transformed features. Range: Boolean (True or False).
'alpha' controls the regularization strength applied to the linear regression model. Range: non-negative real number.
'solver' specifies the algorithm used to optimize the ridge regression model.
'alpha' controls the strength of the regularization applied to the model. Range: non-negative real number.
'max_iter' determines the maximum number of iterations allowed for the solver to converge to a solution. Range: positive integer.
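
A short sketch showing where these hyperparameters appear in scikit-learn estimators; the values are illustrative.

from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.preprocessing import PolynomialFeatures

lin = LinearRegression(fit_intercept=True)                # include the y-intercept in the fitted equation
# note: 'normalize' has been removed from recent scikit-learn releases;
# scale features with sklearn.preprocessing.StandardScaler instead
poly = PolynomialFeatures(degree=3, include_bias=False)   # highest power of x; no extra column of ones
ridge = Ridge(alpha=1.0, solver="auto")                   # regularization strength and optimizer choice
lasso = Lasso(alpha=0.1, max_iter=10000)                  # iteration cap for the coordinate-descent solver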
Regression
  Linear
  Polynomial
  Ridge
  Lasso
  Elastic Net
  Bayesian
  Quantile
  Support vector

Classification
  Logistic
  Support vector
  KNN
