DLunit 1
Probabilistic Modeling:
Probabilistic modeling is an approach to modeling and analyzing data that incorporates
uncertainty and probability theory. It allows us to reason and make predictions in situations
where there is inherent variability or noise in the data. In probabilistic modeling, we represent
uncertain quantities as probability distributions and use statistical inference techniques to learn
and make inferences from the available data.
Here are some key aspects and applications of probabilistic modeling:
1. Probability Distributions: In probabilistic modeling, we assign probability distributions to
uncertain variables. These distributions describe the likelihood of different values the variables
can take. Commonly used probability distributions include the Gaussian (normal) distribution,
Bernoulli distribution, Poisson distribution, and more.
2. Bayesian Inference: Bayesian inference is a fundamental approach in probabilistic modeling
that allows us to update our beliefs about uncertain variables based on observed data. It combines
prior knowledge or beliefs (expressed as prior distributions) with observed data to obtain
posterior distributions, which represent our updated beliefs.
3. Generative Models: Probabilistic modeling enables the construction of generative models,
which can generate new samples that resemble the observed data. Generative models learn the
underlying probabilistic structure of the data and can be used for tasks such as data generation,
anomaly detection, and missing data imputation.
4. Bayesian Networks: Bayesian networks, also known as probabilistic graphical models, are
graphical representations of probabilistic dependencies among variables. They use directed
acyclic graphs to model the conditional dependencies and allow efficient inference and reasoning
about the joint distribution of variables.
5. Uncertainty Quantification: Probabilistic modeling provides a natural framework for
quantifying and expressing uncertainty. By representing uncertain variables as probability
distributions, we can estimate confidence intervals, calculate probabilities of different outcomes,
and assess the uncertainty associated with predictions or decisions.
6. Applications: Probabilistic modeling finds applications in various fields, including finance,
healthcare, natural language processing, computer vision, and more. It is used for tasks such as
risk assessment, fraud detection, recommendation systems, sentiment analysis, image
recognition, and predictive modeling.
Notable probabilistic modeling techniques include Bayesian regression, Hidden Markov
Models (HMMs), Gaussian Processes (GPs), and Variational Autoencoders (VAEs). These
techniques provide powerful tools for modeling complex systems and making principled
inferences in the presence of uncertainty.
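To make point 2 (Bayesian inference) concrete, here is a minimal Python sketch assuming a made-up coin-flip experiment: a Beta prior over the coin's bias is combined with Bernoulli observations, and conjugacy gives the posterior in closed form. The prior parameters and counts are illustrative only.

# Bayesian updating with a Beta prior and a Bernoulli likelihood (conjugate pair).
# Illustrative numbers: prior Beta(2, 2), data = 7 heads out of 10 flips.
from scipy import stats

alpha_prior, beta_prior = 2.0, 2.0           # prior beliefs about the coin's bias
heads, tails = 7, 3                          # observed data

# Conjugacy: posterior is Beta(alpha + heads, beta + tails)
posterior = stats.beta(alpha_prior + heads, beta_prior + tails)

print("Posterior mean of the bias:", posterior.mean())       # updated belief
print("95% credible interval:", posterior.interval(0.95))    # uncertainty quantification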
Kernel Methods:
Kernel methods are a family of machine learning techniques that operate in a high-dimensional
feature space implicitly through a kernel function. They are particularly useful for solving
complex nonlinear problems while preserving the computational efficiency of linear methods.
Kernel methods have applications in various fields, including classification, regression,
dimensionality reduction, and anomaly detection.
Here are some key aspects of kernel methods:
1. Kernel Functions: A kernel function measures the similarity or distance between pairs of data
points in the input space. It takes two inputs and returns a similarity measure or inner product in
a high-dimensional feature space. Popular kernel functions include the linear kernel, polynomial
kernel, Gaussian (RBF) kernel, and sigmoid kernel.
2. Kernel Trick: The kernel trick is a central concept in kernel methods. It allows us to implicitly
map the original input space into a higher-dimensional feature space without explicitly
computing the transformed features. This is computationally efficient as it avoids the need to
compute and store the high-dimensional feature representations explicitly.
3. Support Vector Machines (SVM): SVM is a widely used kernel-based algorithm for
classification and regression tasks. It aims to find a hyperplane that separates data points of
different classes while maximizing the margin between the classes. SVMs use kernel functions to
implicitly operate in a high-dimensional feature space and find the optimal decision boundary.
4. Kernel PCA: Kernel Principal Component Analysis (PCA) is an extension of traditional PCA
that uses kernel functions to perform nonlinear dimensionality reduction. It captures nonlinear
relationships in the data by mapping it to a high-dimensional feature space and computing
principal components in that space.
5. Gaussian Processes (GPs): Gaussian processes are probabilistic models that use kernel
functions to define the covariance structure between data points. GPs are flexible and can model
complex nonlinear relationships while providing uncertainty estimates. They are used for
regression, classification, and Bayesian optimization tasks.
6. Kernel-based Clustering: Kernel methods can also be applied to clustering algorithms, such as
Kernel K-means and Spectral Clustering. These methods use kernel functions to measure
similarity or dissimilarity between data points and group them into clusters.
Kernel methods have several advantages, including their ability to handle nonlinear
relationships, their mathematical elegance, and their interpretability. However, they may face
challenges with scalability and hyperparameter selection. Nevertheless, kernel methods have had
a significant impact on the field of machine learning, providing powerful tools for solving a wide
range of problems.
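To illustrate the kernel trick and the Gaussian (RBF) kernel described above, the following sketch (with invented toy data) computes the kernel matrix explicitly once, just to show what the kernel measures, and then lets a scikit-learn SVM use the same kernel implicitly, so no high-dimensional feature map is ever constructed.

# RBF kernel: k(x, z) = exp(-gamma * ||x - z||^2); the SVM uses it via the kernel trick.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))                       # toy 2-D inputs
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)  # nonlinear (circular) labels

gamma = 0.5
# Explicit Gram matrix of pairwise similarities (for illustration only)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq_dists)
print("Kernel matrix shape:", K.shape)             # (40, 40)

# The SVM operates with the same kernel implicitly; no explicit feature map is built
clf = SVC(kernel="rbf", gamma=gamma).fit(X, y)
print("Training accuracy:", clf.score(X, y))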
Decision Trees:
A Decision Tree is a supervised learning technique that can be used for both
classification and regression problems, but it is mostly preferred for solving
classification problems. It is a tree-structured classifier in which internal nodes represent
the features of a dataset, branches represent the decision rules, and each leaf node
represents the outcome.
A decision tree contains two kinds of nodes: decision nodes and leaf nodes.
Decision nodes are used to make a decision and have multiple branches, whereas leaf
nodes are the outputs of those decisions and do not contain any further branches.
The decisions or tests are performed on the basis of the features of the given dataset.
A decision tree is a graphical representation for obtaining all the possible solutions to a
problem/decision based on given conditions.
It is called a decision tree because, similar to a tree, it starts with a root node, which
expands into further branches and constructs a tree-like structure.
Decision Tree Terminologies
Root Node: The node from which the decision tree starts. It represents the entire
dataset, which further gets divided into two or more homogeneous subsets.
Leaf Node: A final output node; the tree cannot be split any further after a leaf node.
Splitting: The process of dividing a decision node/root node into sub-nodes
according to the given conditions.
Branch/Sub-tree: A tree formed by splitting a node of the main tree.
Pruning: The process of removing unwanted branches from the tree.
Parent/Child node: A node that is split into sub-nodes is called the parent node of those
sub-nodes, and the sub-nodes are called its child nodes.
Algorithm
Step-1: Begin the tree with the root node, say S, which contains the complete dataset.
Step-2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM),
such as information gain or the Gini index (a sketch of an entropy-based ASM follows below).
Step-3: Divide S into subsets that contain the possible values of the best attribute.
Step-4: Generate the decision tree node that contains the best attribute.
Step-5: Recursively build new decision trees using the subsets of the dataset created in
Step-3. Continue this process until a stage is reached where the nodes cannot be classified
further; each such final node is a leaf node.
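As an illustration of the Attribute Selection Measure in Step-2, here is a small sketch that computes entropy-based information gain for a candidate split; the tiny label arrays are invented for the example.

# Information gain = entropy(parent) - weighted entropy(children); higher is better.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(parent, left, right):
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

# Hypothetical split of 10 samples on some attribute
parent = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
left   = np.array([1, 1, 1, 1, 0])     # subset where the attribute takes value A
right  = np.array([1, 0, 0, 0, 0])     # subset where the attribute takes value B
print("Information gain:", information_gain(parent, left, right))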
Random Forest:
Random Forest is an ensemble learning method that combines multiple decision trees to make
predictions or classifications. It is a powerful and widely used algorithm known for its
robustness and ability to handle complex datasets. Random Forest overcomes the limitations of
individual decision trees by reducing overfitting and improving generalization.
Here are the key characteristics and concepts of Random Forest:
1. Ensemble of Decision Trees: Random Forest consists of a collection of decision trees, where
each tree is trained on a random subset of the training data. Each tree independently makes
predictions, and the final prediction is determined by combining the predictions of all the trees.
2. Random Sampling: Random Forest uses two types of random sampling. The first type is
random sampling with replacement, also known as bootstrap sampling. It creates multiple
bootstrap samples by randomly selecting data points from the training dataset, allowing some
data points to be present in multiple subsets. The second type is random feature selection,
where only a subset of features is considered for splitting at each node of the decision tree.
3. Voting for Predictions: Random Forest employs a majority voting scheme for classification
tasks and averaging for regression tasks. Each decision tree in the ensemble makes an
individual prediction, and the class with the most votes or the average of the predicted values is
chosen as the final prediction.
4. Feature Importance: Random Forest can provide a measure of feature importance based on
the average impurity decrease (such as Gini impurity or entropy) caused by the feature across
all decision trees in the forest. This information helps identify the most informative features for
the task at hand.
5. Parallelizable: Random Forest can be easily parallelized since each decision tree in the
ensemble can be trained independently. This allows for efficient computation, especially for
large datasets.
6. Versatility: Random Forest is applicable to both classification and regression problems. It
handles a mixture of feature types, such as categorical and numerical features, without
requiring extensive preprocessing.
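A minimal sketch of these ideas with scikit-learn, assuming the bundled Iris dataset as toy data: trees are grown on bootstrap samples with random feature subsets, predictions are combined by voting, and feature_importances_ exposes the impurity-based importance from point 4.

# Random Forest: bagged decision trees with random feature selection at each split.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

forest = RandomForestClassifier(
    n_estimators=200,      # number of trees voting on the final class
    max_features="sqrt",   # random subset of features considered at each split
    bootstrap=True,        # each tree sees a bootstrap sample of the training data
    random_state=0,
).fit(X_tr, y_tr)

print("Test accuracy:", forest.score(X_te, y_te))
print("Impurity-based feature importances:", forest.feature_importances_)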
Gradient Boosting Machines:
Gradient Boosting Machines (GBMs) are an ensemble learning method that builds a strong model
by sequentially adding weak learners, each of which corrects the errors made by the ensemble so far.
Here are the key characteristics and concepts of Gradient Boosting Machines:
1. Boosting: GBMs belong to the boosting family of algorithms, where weak models are sequentially
trained to correct the mistakes of the previous models. Each subsequent model in the ensemble focuses
on reducing the errors made by the previous models, leading to an ensemble with improved overall
predictive performance.
2. Gradient Descent: GBMs optimize the ensemble by minimizing a differentiable loss function using
gradient descent. The loss function measures the discrepancy between the predicted values and the
true values of the target variable. Gradient descent updates the model parameters in the direction of
steepest descent to iteratively improve the model's predictions.
3. Weak Learners: GBMs use weak learners as building blocks, typically decision trees with a small depth
(often referred to as "shallow trees" or "decision stumps"). These weak learners are simple models that
make predictions slightly better than random guessing. They are usually shallow to prevent overfitting
and to focus on capturing the specific patterns missed by previous models.
4. Residuals: In GBMs, the subsequent weak learners are trained to predict the residuals (the differences
between the true values and the predictions of the ensemble so far). By focusing on the residuals, the
subsequent models are designed to correct the errors made by the previous models and improve the
overall prediction accuracy.
5. Learning Rate: GBMs introduce a learning rate parameter that controls the contribution of each weak
learner to the ensemble. A smaller learning rate makes the learning process more conservative, slowing
down the convergence but potentially improving the generalization ability.
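The following hand-rolled sketch (not a production GBM) illustrates residual fitting and the learning rate for squared-error loss on made-up data: shallow regression stumps are fit sequentially to the residuals of the current ensemble, and each contribution is shrunk by the learning rate.

# Toy gradient boosting for regression: each stump fits the current residuals.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)    # noisy nonlinear target

learning_rate = 0.1
prediction = np.full_like(y, y.mean())              # initial constant model
stumps = []

for _ in range(100):
    residuals = y - prediction                      # errors of the ensemble so far
    stump = DecisionTreeRegressor(max_depth=1).fit(X, residuals)  # weak learner
    prediction += learning_rate * stump.predict(X)  # shrunken correction
    stumps.append(stump)

print("Final training MSE:", np.mean((y - prediction) ** 2))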
Types of Machine Learning:
Machine learning is a subset of AI that enables a machine to automatically learn from
data, improve its performance from past experience, and make predictions. Machine learning
comprises a set of algorithms that work on huge amounts of data. Data is fed to these algorithms
to train them, and on the basis of that training they build a model and perform a specific task.
These ML algorithms help to solve different business problems such as regression, classification,
forecasting, clustering, association, etc.
Based on the methods and the way of learning, machine learning is divided into mainly four types:
1. Supervised Machine Learning
2. Unsupervised Machine Learning
3. Semi-Supervised Machine Learning
4. Reinforcement Learning
In this topic, we will provide a detailed description of the types of machine learning along with
their respective algorithms.
1. Supervised Machine Learning
Supervised learning is a technique in which the machine is trained on a labelled dataset and, on
the basis of that training, predicts the output for new inputs.
Let's understand supervised learning with an example. Suppose we have an input dataset of cat
and dog images. First, we train the machine to understand the images using features
such as the shape and size of the tail of a cat and a dog, the shape of the eyes, colour, and height
(dogs are taller, cats are smaller). After training is complete, we input a picture of a cat and ask
the machine to identify the object and predict the output. Since the machine is well trained, it
will check all the features of the object, such as height, shape, colour, eyes, ears, and tail, and
conclude that it is a cat, so it will put it in the Cat category. This is how the machine
identifies objects in supervised learning.
The main goal of the supervised learning technique is to map the input variable(x) with the
output variable(y). Some real-world applications of supervised learning are Risk Assessment,
Fraud Detection, Spam filtering, etc.
Supervised machine learning can be divided into two types of problems:
Classification
Regression
a) Classification
Classification algorithms are used to solve classification problems in which the output
variable is categorical, such as Yes or No, Male or Female, Red or Blue, etc.
Classification algorithms predict the categories present in the dataset. Some real-world examples
of classification are spam detection, email filtering, etc.
Popular classification algorithms include Logistic Regression, Support Vector Machines,
Decision Trees, Random Forest, and Naive Bayes.
Advantages:
Since supervised learning works with a labelled dataset, we have an exact idea
about the classes of objects.
These algorithms are helpful for predicting the output on the basis of prior experience.
Disadvantages:
Labelled training data can be costly and time-consuming to obtain.
These algorithms may predict the wrong output if the test data differs from the training data.
Applications of Supervised Learning
Image Segmentation:
Supervised Learning algorithms are used in image segmentation. In this process,
image classification is performed on different image data with pre-defined labels.
Medical Diagnosis:
Supervised algorithms are also used in the medical field for diagnosis. This is
done using medical images and past data labelled with disease conditions. Through
such a process, the machine can identify a disease for new patients.
Fraud Detection - Supervised learning classification algorithms are used for identifying
fraudulent transactions, fraudulent customers, etc. This is done by using historical data to
identify the patterns that can lead to possible fraud.
Spam detection - In spam detection & filtering, classification algorithms are used. These
algorithms classify an email as spam or not spam. The spam emails are sent to the spam
folder.
Speech Recognition - Supervised learning algorithms are also used in speech
recognition. The algorithm is trained with voice data, and various identifications can be
done using the same, such as voice-activated passwords, voice commands, etc.
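As a small, hedged sketch of the supervised classification workflow described above (the tiny message list and labels are invented), a bag-of-words model with logistic regression can stand in for a simple spam filter:

# Minimal supervised classification sketch: spam vs. not-spam on toy messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "meeting at 10 am tomorrow",
            "free lottery ticket claim now", "project report attached"]
labels = [1, 0, 1, 0]                      # 1 = spam, 0 = not spam (labelled data)

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(messages, labels)                # learn the mapping x -> y

print(model.predict(["claim your free prize"]))   # expected: [1] (spam)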
2. Unsupervised Machine Learning
Unsupervised learning is different from the Supervised learning technique; as its name suggests,
there is no need for supervision. It means, in unsupervised machine learning, the machine is
trained using the unlabeled dataset, and the machine predicts the output without any supervision.
In unsupervised learning, the models are trained with the data that is neither classified nor
labelled, and the model acts on that data without any supervision.
The main aim of an unsupervised learning algorithm is to group or categorize the unsorted
dataset according to similarities, patterns, and differences. Machines are instructed to
find the hidden patterns in the input dataset.
Let's take an example to understand this more precisely: suppose there is a basket of fruit images,
and we input it into the machine learning model. The images are totally unknown to the model,
and the task of the machine is to find the patterns and categories of the objects.
The machine will discover the patterns and differences on its own, such as differences in colour
and shape, and predict the output when it is tested with the test dataset.
Unsupervised learning can be further classified into two types of problems:
Clustering
Association
1) Clustering
The clustering technique is used when we want to find the inherent groups from the data. It is a
way to group the objects into a cluster such that the objects with the most similarities remain in
one group and have fewer or no similarities with the objects of other groups. An example of the
clustering algorithm is grouping the customers by their purchasing behaviour.
Popular clustering algorithms include K-Means, Mean-Shift, DBSCAN, and hierarchical
clustering.
Advantages:
These algorithms can be used for more complicated tasks than the supervised ones,
because they work on unlabelled data.
Unsupervised algorithms are preferable for many tasks, as obtaining an unlabelled dataset
is easier than obtaining a labelled one.
Disadvantages:
The output of an unsupervised algorithm can be less accurate, as the dataset is not
labelled and the algorithms are not trained with the exact output in advance.
Working with unsupervised learning is more difficult, as it uses unlabelled data
that does not map to known outputs.
Applications of Unsupervised Learning
Network Analysis: Unsupervised learning is used in document network analysis of text
data, for example to identify plagiarism and copyright issues in scholarly articles.
Recommendation Systems: Recommendation systems widely use unsupervised learning
techniques for building recommendation applications for different web applications and
e-commerce websites.
Anomaly Detection: Anomaly detection is a popular application of unsupervised
learning, which can identify unusual data points within the dataset. It is used to discover
fraudulent transactions.
Singular Value Decomposition: Singular Value Decomposition (SVD) is used to
extract particular information from a dataset, for example extracting the information of
each user located at a particular location.
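As a brief sketch of the clustering idea above, assuming made-up customer features (annual spend and visit frequency), K-Means can group customers by purchasing behaviour:

# K-means groups customers with similar purchasing behaviour into clusters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical customers: [annual spend, visits per month], two rough groups
low_spenders = rng.normal([200, 2], [30, 0.5], size=(50, 2))
high_spenders = rng.normal([900, 8], [60, 1.0], size=(50, 2))
customers = np.vstack([low_spenders, high_spenders])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print("Cluster centres:\n", kmeans.cluster_centers_)
print("First five cluster labels:", kmeans.labels_[:5])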
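For the Singular Value Decomposition point, here is a tiny sketch on an invented user-item rating matrix; the singular values and a low-rank reconstruction show how SVD extracts the dominant structure from the data.

# Singular Value Decomposition of a small (made-up) user-item rating matrix.
import numpy as np

ratings = np.array([[5, 4, 0, 1],
                    [4, 5, 0, 0],
                    [1, 0, 5, 4],
                    [0, 1, 4, 5]], dtype=float)

U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
print("Singular values:", s)                       # strength of each latent factor
rank2 = (U[:, :2] * s[:2]) @ Vt[:2, :]             # low-rank reconstruction
print("Rank-2 approximation:\n", rank2.round(2))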
3. Semi-Supervised Learning
Semi-Supervised learning is a type of Machine Learning algorithm that lies between
Supervised and Unsupervised machine learning. It represents the intermediate ground
between Supervised (With Labelled training data) and Unsupervised learning (with no labelled
training data) algorithms and uses the combination of labelled and unlabeled datasets during the
training period.
Semi-supervised learning sits between supervised and unsupervised learning: it operates on
data that contains a few labels but mostly consists of unlabelled examples. Because labelling is
costly, organisations often have only a small number of labelled samples for practical purposes.
This distinguishes it from supervised and unsupervised learning, which are defined by the
complete presence or absence of labels, respectively.
The concept of semi-supervised learning was introduced to overcome the drawbacks of
supervised and unsupervised learning algorithms. Its main aim is to make effective use of all
the available data, rather than only the labelled data as in supervised learning. Typically,
similar data points are first grouped with an unsupervised learning algorithm, and this grouping
then helps to label the unlabelled data, since labelled data is considerably more expensive to
acquire than unlabelled data.
We can picture these approaches with an example. Supervised learning is like a student who is
under the supervision of an instructor at home and at college. If that student analyses the
same concept on their own without any help from an instructor, that is unsupervised learning.
Under semi-supervised learning, the student first studies the concept under the guidance of an
instructor at college and then revises it on their own.
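A small sketch of this idea using scikit-learn's SelfTrainingClassifier: most labels are hidden (marked -1, the library's convention for unlabelled samples), a base classifier is fit on the few labelled points, and its confident predictions pseudo-label the rest. The 80% hidden fraction and the Iris data are arbitrary choices for the demo.

# Semi-supervised learning: -1 marks unlabelled samples for SelfTrainingClassifier.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

y_partial = y.copy()
unlabeled = rng.random(len(y)) < 0.8       # hide ~80% of the labels
y_partial[unlabeled] = -1                  # convention: -1 means "no label"

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_partial)                    # uses labelled + pseudo-labelled data

print("Accuracy against the full (true) labels:", model.score(X, y))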
Overfitting and Underfitting:
1. Overfitting:
Overfitting occurs when a model learns the training data too well, capturing noise and random
variations that are specific to the training set but do not exist in the underlying population or the
test data. Signs of overfitting include:
- High training accuracy but poor performance on the test/validation data.
- The model captures the noise and outliers in the training data, leading to poor generalization.
- The model is excessively complex and has too many parameters, which allows it to memorize
the training examples instead of learning the underlying patterns.
- Overly flexible models like deep neural networks can be prone to overfitting, especially with
limited training data.
To mitigate overfitting, the following strategies can be employed:
- Increase the size of the training dataset to provide more diverse examples.
- Use techniques like cross-validation or train/test split to evaluate the model's performance on
unseen data.
- Regularization methods like L1 or L2 regularization can be applied to penalize complex models
and reduce the impact of noise in the training data.
- Simplify the model by reducing the number of parameters, limiting the depth of decision trees,
or reducing the complexity of neural networks.
- Feature selection or dimensionality reduction techniques can help remove irrelevant or noisy
features.
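The hedged sketch below illustrates the signs above and the L2 regularization strategy: an unregularized high-degree polynomial fit on a small synthetic sample shows a large gap between training and test scores, while Ridge regression (an L2 penalty) reduces it. The data, degree, and penalty strength are arbitrary choices for the demo.

# Overfitting demo: unregularized degree-12 polynomial vs. ridge (L2) regression.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.normal(size=30)     # small, noisy dataset

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

overfit = make_pipeline(PolynomialFeatures(12), LinearRegression()).fit(X_tr, y_tr)
ridge = make_pipeline(PolynomialFeatures(12), Ridge(alpha=1.0)).fit(X_tr, y_tr)

print("Unregularized  train/test R^2:", overfit.score(X_tr, y_tr), overfit.score(X_te, y_te))
print("L2-regularized train/test R^2:", ridge.score(X_tr, y_tr), ridge.score(X_te, y_te))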
2. Underfitting:
Underfitting occurs when a model is too simple to capture the underlying patterns in the data. It
fails to learn the important relationships between the input features and the target variable,
resulting in poor performance on both the training and test data. Signs of underfitting include:
- Low training accuracy and poor performance on both the training and test/validation data.
- The model is too simple and does not capture the complexities of the data.
- The model fails to learn important patterns or relationships in the data.