
AI using Python

Q1. What is Machine learning? Explain its types in detail.

Ans. Machine learning is a subset of AI which enables a machine to automatically learn from data, improve its performance from past experience, and make predictions. Machine learning contains a set of algorithms that work on huge amounts of data. It uses massive amounts of structured and semi-structured data so that a machine learning model can generate accurate results or make predictions based on that data. Machine learning is used in many places, such as online recommender systems, Google search ranking, email spam filters, Facebook auto friend-tagging suggestions, etc.

Types of Machine Learning:-

1. Supervised Machine Learning
2. Unsupervised Machine Learning
3. Semi-Supervised Machine Learning
4. Reinforcement Learning
1. Supervised machine learning
Supervised machine learning is based on supervision. In the supervised learning technique, we train the machines using a "labelled" dataset, and based on that training, the machine predicts the output. Here, the labelled data specifies that some of the inputs are already mapped to the output. More precisely, we first train the machine with the input and corresponding output, and then we ask the machine to predict the output on a test dataset.

Example: Suppose we have an input dataset of cat and dog images. First, we train the machine to understand the images: the shape and size of the tail of a cat and a dog, the shape of the eyes, colour, height (dogs are taller, cats are smaller), etc. After training, we input a picture of a cat and ask the machine to identify the object and predict the output. Since the machine is now well trained, it will check all the features of the object, such as height, shape, colour, eyes, ears, tail, etc., and find that it is a cat, so it will put it in the Cat category. This is how a machine identifies objects in supervised learning.

The main goal of the supervised learning technique is to map the input variable(x)
with the output variable(y). Some real-world applications of supervised learning
are Risk Assessment, Fraud Detection, Spam filtering, etc.

Categories of Supervised Machine Learning:-

Supervised machine learning can be classified into two types of problems, which
are given below:

o Classification
o Regression
a) Classification

Classification algorithms are used to solve classification problems, in which the output variable is categorical, such as "Yes" or "No", "Male" or "Female", "Red" or "Blue", etc. Classification algorithms predict the categories present in the dataset. Some real-world examples of classification are spam detection, email filtering, etc.

Some popular classification algorithms are given below:

o Random Forest Algorithm
o Decision Tree Algorithm
o Logistic Regression Algorithm
o Support Vector Machine Algorithm
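
To make this concrete, here is a minimal classification sketch, assuming scikit-learn is installed; the iris dataset and logistic regression are illustrative choices, and any algorithm from the list above could be swapped in:

    # Train a classifier on a labelled dataset and predict categories.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)                  # labelled data: inputs mapped to outputs
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    clf = LogisticRegression(max_iter=1000)            # one of many possible classifiers
    clf.fit(X_train, y_train)                          # training phase
    print("Test accuracy:", clf.score(X_test, y_test)) # prediction on the test dataset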

b) Regression

Regression algorithms are used to solve regression problems, in which there is a relationship between the input and output variables and the output variable is continuous. They are used to predict continuous quantities, such as market trends, weather, etc.

Some popular Regression algorithms are given below:

o Simple Linear Regression Algorithm
o Multivariate Regression Algorithm
o Decision Tree Algorithm
o Lasso Regression
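
A matching regression sketch, again assuming scikit-learn; the numbers are made up and roughly follow y = 2x:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.array([[1], [2], [3], [4]])   # input variable
    y = np.array([2.1, 3.9, 6.2, 8.1])   # continuous output variable
    reg = LinearRegression().fit(X, y)
    print(reg.predict([[5]]))            # predicts a value near 10
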
2. Unsupervised Machine Learning

Unsupervised learning is different from the supervised learning technique; as its name suggests, there is no need for supervision. In unsupervised machine learning, the machine is trained using an unlabelled dataset and predicts the output without any supervision.

In unsupervised learning, the models are trained with the data that is neither
classified nor labelled, and the model acts on that data without any supervision.

The main aim of an unsupervised learning algorithm is to group or categorize the unsorted dataset according to similarities, patterns, and differences. Machines are instructed to find the hidden patterns in the input dataset.

Example: Suppose there is a basket of fruit images, and we input it into the
machine learning model. The images are totally unknown to the model, and the
task of the machine is to find the patterns and categories of the objects.

Now the machine will discover the patterns and differences on its own, such as differences in colour and shape, and predict the output when it is tested with the test dataset.

Categories of Unsupervised Machine Learning:-

Unsupervised Learning can be further classified into two types, which are given
below:

o Clustering
o Association
1) Clustering

The clustering technique is used when we want to find the inherent groups from
the data. It is a way to group the objects into a cluster such that the objects with
the most similarities remain in one group and have fewer or no similarities with
the objects of other groups. An example of the clustering algorithm is grouping
the customers by their purchasing behaviour.

Some of the popular clustering algorithms are given below:

o K-Means Clustering algorithm
o Mean-shift algorithm
o DBSCAN Algorithm
o Principal Component Analysis
o Independent Component Analysis
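
A minimal clustering sketch with K-Means (scikit-learn assumed; the customer numbers below are made up to echo the purchasing-behaviour example):

    import numpy as np
    from sklearn.cluster import KMeans

    # Columns: annual spend, visits per month (illustrative values only)
    X = np.array([[500, 4], [520, 5], [90, 1], [110, 2], [980, 9], [1010, 8]])
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_)           # cluster index assigned to each customer
    print(kmeans.cluster_centers_)  # centre of each discovered group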

2) Association

Association rule learning is an unsupervised learning technique that finds interesting relations among variables within a large dataset. The main aim of this learning algorithm is to find the dependency of one data item on another and map the variables accordingly so that it can generate maximum profit. This algorithm is mainly applied in market basket analysis, web usage mining, continuous production, etc.

Some popular association rule learning algorithms are the Apriori algorithm, Eclat, and the FP-Growth algorithm.
3. Semi-Supervised Learning

Semi-supervised learning is a type of machine learning algorithm that lies between supervised and unsupervised machine learning. It represents the intermediate ground between supervised learning (with labelled training data) and unsupervised learning (with no labelled training data) and uses a combination of labelled and unlabelled datasets during the training period.

The concept of semi-supervised learning was introduced to overcome the drawbacks of supervised and unsupervised learning algorithms. Its main aim is to use all the available data effectively, rather than only the labelled data as in supervised learning.

We can picture these algorithms with an example. Supervised learning is where a student is under the supervision of an instructor at home and college. If that student analyses the same concept on their own without any help from an instructor, it comes under unsupervised learning. Under semi-supervised learning, the student first studies a concept under the guidance of an instructor at college and then revises it on their own.

4. Reinforcement Learning

Reinforcement learning works on a feedback-based process, in which an AI agent (a software component) automatically explores its surroundings by hit and trial, taking actions, learning from experience, and improving its performance. The agent gets rewarded for each good action and punished for each bad action; hence the goal of a reinforcement learning agent is to maximize the rewards.

In reinforcement learning, there is no labelled data as in supervised learning; agents learn from their experiences only.
The reinforcement learning process is similar to how a human being learns; for example, a child learns various things from experience in day-to-day life. An example of reinforcement learning is playing a game, where the game is the environment, the agent's moves at each step define states, and the goal of the agent is to get a high score. The agent receives feedback in terms of punishments and rewards.

Due to the way it works, reinforcement learning is employed in different fields such as game theory, operations research, information theory, and multi-agent systems.

Categories of Reinforcement Learning:-

Reinforcement learning is categorized mainly into two types of methods/algorithms:

o Positive Reinforcement Learning: Positive reinforcement learning means increasing the tendency that the required behaviour will occur again by adding something. It strengthens the agent's behaviour and impacts it positively.
o Negative Reinforcement Learning: Negative reinforcement learning works exactly opposite to positive RL. It increases the tendency that a specific behaviour will occur again by avoiding a negative condition.
Q2. Explain Naive Bayes algorithm.

Ans. Naive Bayes Classifier Algorithm

o The Naïve Bayes algorithm is a supervised learning algorithm, based on Bayes' theorem and used for solving classification problems.
o It is mainly used in text classification with high-dimensional training datasets.
o The Naïve Bayes classifier is one of the simplest and most effective classification algorithms, and it helps build fast machine learning models that can make quick predictions.
o It is a probabilistic classifier, which means it predicts on the basis of
the probability of an object.
o Some popular examples of the Naïve Bayes algorithm are spam filtering, sentiment analysis, and classifying articles.

Why is it called Naive Bayes?

The Naive Bayes algorithm comprises two words, Naive and Bayes, which can be described as:

o Naïve: It is called naïve because it assumes that the occurrence of a certain feature is independent of the occurrence of other features. For example, if a fruit is identified on the basis of colour, shape, and taste, then a red, spherical, and sweet fruit is recognized as an apple; each feature individually contributes to identifying it as an apple, without depending on the others.
o Bayes: It is called Bayes because it depends on the principle of Bayes'
Theorem.
Bayes' Theorem:

o Bayes' theorem is also known as Bayes' Rule or Bayes' law, which is used
to determine the probability of a hypothesis with prior knowledge. It
depends on the conditional probability.
o The formula for Bayes' theorem is given as:

P(A|B) = [ P(B|A) × P(A) ] / P(B)

Where,

P(A|B) is the posterior probability: the probability of hypothesis A given the observed event B.

P(B|A) is the likelihood: the probability of the evidence B given that hypothesis A is true.

P(A) is the prior probability: the probability of the hypothesis before observing the evidence.

P(B) is the marginal probability: the probability of the evidence.

Working of Naïve Bayes' Classifier:

Working of Naïve Bayes' Classifier can be understood with the help of the below
example:

Suppose we have a dataset of weather conditions and a corresponding target variable "Play". Using this dataset, we need to decide whether we should play or not on a particular day according to the weather conditions. To solve this problem, we need to follow the steps below:
1. Convert the given dataset into frequency tables.
2. Generate Likelihood table by finding the probabilities of given features.
3. Now, use Bayes theorem to calculate the posterior probability.
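
A worked sketch of these three steps in Python; the counts below are hypothetical (a 14-day dataset with 9 "Yes" days), not taken from the text:

    # Frequency/likelihood-table values (hypothetical counts)
    p_yes = 9 / 14               # P(Yes): prior probability of playing
    p_sunny_given_yes = 2 / 9    # P(Sunny | Yes): likelihood
    p_sunny = 5 / 14             # P(Sunny): marginal probability

    # Bayes' theorem: P(Yes | Sunny) = P(Sunny | Yes) * P(Yes) / P(Sunny)
    p_yes_given_sunny = p_sunny_given_yes * p_yes / p_sunny
    print(p_yes_given_sunny)     # 0.4, the posterior probability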

Advantages of Naïve Bayes Classifier:

o Naïve Bayes is one of the fastest and easiest ML algorithms for predicting the class of a dataset.
o It can be used for Binary as well as Multi-class Classifications.
o It performs well in multi-class predictions compared to other algorithms.
o It is the most popular choice for text classification problems.

Disadvantages of Naïve Bayes Classifier:

o Naive Bayes assumes that all features are independent or unrelated, so it cannot learn relationships between features.

Applications of Naïve Bayes Classifier:

o It is used for credit scoring.
o It is used in medical data classification.
o It can be used in real-time predictions because Naïve Bayes Classifier is
an eager learner.
o It is used in Text classification such as Spam filtering and Sentiment
analysis.
Types of Naïve Bayes Model:

There are three types of Naive Bayes Model, which are given below:

o Gaussian: The Gaussian model assumes that features follow a normal distribution. This means that if predictors take continuous values instead of discrete ones, the model assumes these values are sampled from a Gaussian distribution.
o Multinomial: The Multinomial Naïve Bayes classifier is used when the data is multinomially distributed. It is primarily used for document classification problems, i.e., deciding which category a particular document belongs to, such as Sports, Politics, Education, etc. The classifier uses word frequencies as the predictors.
o Bernoulli: The Bernoulli classifier works like the Multinomial classifier, but the predictor variables are independent Boolean variables, such as whether a particular word is present in a document or not. This model is also popular for document classification tasks.
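
A minimal Multinomial Naïve Bayes sketch for document classification (scikit-learn assumed; the sentences and labels are invented):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    texts = ["the team won the match", "parliament passed the bill",
             "the striker scored a goal", "the minister gave a speech"]
    labels = ["Sports", "Politics", "Sports", "Politics"]

    vec = CountVectorizer()                 # word frequencies as predictors
    clf = MultinomialNB().fit(vec.fit_transform(texts), labels)
    print(clf.predict(vec.transform(["a late goal won it"])))  # likely ['Sports']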

Q3. What is a Neural network? Describe ANN & CNN, their structures and types in detail.

Ans. A neural network is a method in artificial intelligence that teaches computers to process data in a way that is inspired by the human brain. It is a type of machine learning process, called deep learning, that uses interconnected nodes or neurons in a layered structure resembling the human brain. It creates an adaptive system that computers use to learn from their mistakes and improve continuously. Thus, artificial neural networks attempt to solve complicated problems, like summarizing documents or recognizing faces, with greater accuracy.

Artificial Neural Network (ANN):

An Artificial Neural Network (ANN) is a group of multiple neurons at each layer. An ANN is also known as a feed-forward neural network because inputs are processed only in the forward direction. This type of neural network is one of the simplest variants of neural networks: information passes in one direction, through various input nodes, until it reaches the output node. The network may or may not have hidden layers of nodes, which makes its functioning more interpretable.
 Architecture: Made up of layers with a unidirectional flow of data (from the input layer through the hidden layers to the output layer).
 Training: Backpropagation is often used during training, with the main aim of reducing prediction errors.
 Applications: Visual and voice recognition, NLP, financial forecasting, and recommender systems.

 Input Layer: Receives the initial data (features) for processing.
 Hidden Layers: Perform intermediate computations and feature extraction. There can be multiple hidden layers in a deep neural network.
 Output Layer: Produces the final prediction or classification.
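
A minimal feed-forward ANN sketch in Keras (TensorFlow assumed; the layer sizes are illustrative), showing the input, hidden, and output structure described above:

    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        layers.Input(shape=(4,)),               # input layer: 4 features
        layers.Dense(16, activation="relu"),    # hidden layer
        layers.Dense(3, activation="softmax"),  # output layer: 3 classes
    ])
    model.compile(optimizer="adam",             # backpropagation-based training
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()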

Types of Artificial Neural Network:

Feedback ANN:

In this type of ANN, the output is fed back into the network to achieve the best-evolved results internally. As per the University of Massachusetts Lowell Centre for Atmospheric Research, feedback networks feed information back into themselves and are well suited to solving optimization problems. Feedback ANNs are used for internal system error corrections.

Feed-Forward ANN:

A feed-forward network is a basic neural network comprising an input layer, an output layer, and at least one layer of neurons. By assessing its output against its input, the strength of the network can be judged from the group behaviour of the associated neurons, and the output is decided. The primary advantage of this network is that it learns to evaluate and recognize input patterns.

Convolutional Neural Network (CNN):

A Convolutional Neural Network (CNN) is an extended version of the artificial neural network (ANN), predominantly used to extract features from grid-like matrix datasets, for example visual datasets such as images or videos, where spatial data patterns play an extensive role. CNNs are one of the most popular models used today. This computational model uses a variation of multilayer perceptrons and contains one or more convolutional layers that can be either entirely connected or pooled. These convolutional layers create feature maps that record a region of the image, which is ultimately broken into rectangles and sent out for nonlinear processing.
CNN architecture

Convolutional Neural Network consists of multiple layers like the input layer,
Convolutional layer, Pooling layer, and fully connected layers.

(Figure: a simple CNN architecture)

The Convolutional layer applies filters to the input image to extract features,
the Pooling layer down-samples the image to reduce computation, and the
fully connected layer makes the final prediction. The network learns the
optimal filters through backpropagation and gradient descent.
 Convolutional Layers: Apply filters to the input data to produce feature
maps.
 Pooling Layers: Reduce the dimensionality of the feature maps.
 Dense Layer: A dense layer (also known as a fully connected layer) performs the final classification or regression task. Every neuron in it is connected to every neuron in the previous layer, making it "fully connected".
 Output Layer: It is the final layer that produces the result of the network’s
computations. It provides the final predictions or classifications for a given
input.
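
A minimal CNN sketch in Keras (TensorFlow assumed; shapes and sizes are illustrative) wiring these layers together:

    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        layers.Input(shape=(28, 28, 1)),               # e.g. 28x28 grayscale images
        layers.Conv2D(32, (3, 3), activation="relu"),  # convolutional layer -> feature maps
        layers.MaxPooling2D((2, 2)),                   # pooling layer -> down-sampling
        layers.Flatten(),
        layers.Dense(64, activation="relu"),           # dense (fully connected) layer
        layers.Dense(10, activation="softmax"),        # output layer: 10 classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
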
Types of CNN:-
1. LeNet
2. VGGNet
3. AlexNet
4. GoogLeNet

Q4. Describe face recognition and detection using OpenCV.

Ans. Face recognition and detection using OpenCV involves using machine learning techniques to identify and locate faces in images or video streams. OpenCV (Open Source Computer Vision Library) is a popular library for computer vision tasks.

Face recognition is a technique to identify or verify a face from digital images or video frames. A human can quickly identify faces without much effort; it is an effortless task for us, but a difficult one for a computer. There are various complexities, such as low resolution, occlusion, illumination variations, etc., and these factors strongly affect a computer's ability to recognize faces accurately. First, it is necessary to understand the difference between face detection and face recognition.

Face Detection: Face detection is generally considered as finding the faces (their location and size) in an image, and possibly extracting them to be used by the face recognition algorithm.

Face Recognition: The face recognition algorithm finds features that uniquely describe the face in the image. The facial image has already been extracted, cropped, resized, and usually converted to grayscale.
Basic Concept of HAAR Cascade Algorithm

The HAAR cascade is a machine learning approach in which a cascade function is trained from many positive and negative images. Positive images are those that contain faces, and negative images are those without faces. In face detection, image features are treated as numerical information extracted from the pictures that can distinguish one image from another.

HAAR-Cascade Detection in OpenCV

OpenCV provides the trainer as well as the detector. We can train a classifier for any object, such as cars, planes, or buildings, by using OpenCV. There are two primary stages of the cascade image classifier: the first is training and the other is detection.

OpenCV provides two applications to train a cascade classifier: opencv_haartraining and opencv_traincascade. These two applications store the classifier in different file formats.

For training, we need a set of samples. There are two types of samples:

o Negative samples: images that do not contain the object.
o Positive samples: images that contain the object to detect.
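
A minimal detection sketch using OpenCV's pretrained frontal-face cascade ("image.jpg" is a placeholder path):

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    img = cv2.imread("image.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # detection works on grayscale
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:                        # location and size of each face
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detected.jpg", img)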

Face recognition using OpenCV

Face recognition is a simple task for humans. Successful face recognition tends to rely on effective recognition of the inner features (eyes, nose, mouth) or outer features (head, face, hairline).

The basic idea of face recognition is based on the geometric features of a face. It is the most feasible and intuitive approach to face recognition. The first automated face recognition system described a face by the positions of the eyes, ears, and nose. These positioning points form a feature vector (the distances between the points).

Face recognition is then achieved by calculating the Euclidean distance between the feature vectors of a probe image and a reference image. This method is by nature robust to illumination changes, but it has a considerable drawback: correct registration of the marker points is very hard.

The face recognition system can operate basically in two modes:

1. Authentication or verification of a facial image:

It compares the input facial image with the facial image of the user who requires authentication. It is a 1:1 comparison.

2. Identification or facial recognition:

It compares the input facial image with images from a dataset to find the user that matches the input face. It is a 1:N comparison.

There are various types of face recognition algorithms, for example:

o Eigenfaces (1991)
o Local Binary Patterns Histograms (LBPH) (1996)
o Fisherfaces (1997)
o Scale Invariant Feature Transform (SIFT) (1999)
o Speeded-Up Robust Features (SURF) (2006)
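
A minimal recognition sketch with LBPH from the list above. The cv2.face module requires the opencv-contrib-python package, and the random images below merely stand in for real cropped grayscale face images:

    import cv2
    import numpy as np

    rng = np.random.default_rng(0)
    faces = [rng.integers(0, 256, (100, 100), dtype=np.uint8) for _ in range(4)]
    labels = np.array([0, 0, 1, 1])            # two persons, two samples each

    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.train(faces, labels)

    label, confidence = recognizer.predict(faces[0])
    print(label, confidence)                   # lower confidence = closer match
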
Q5. What is NLP? Describe its components in detail. Also
describe its steps.

Ans. NLP stands for Natural Language Processing, a field at the intersection of computer science, human language, and artificial intelligence. It is the technology used by machines to understand, analyse, manipulate, and interpret human languages. It helps developers organize knowledge for performing tasks such as translation, automatic summarization, Named Entity Recognition (NER), speech recognition, relationship extraction, and topic segmentation.

Components of NLP:-

1. Natural Language Understanding (NLU)

Natural Language Understanding (NLU) helps the machine understand and analyse human language by extracting metadata from content, such as concepts, entities, keywords, emotion, relations, and semantic roles.

NLU is mainly used in business applications to understand the customer's problem in both spoken and written language.

NLU involves the following tasks:

o It is used to map the given input into a useful representation.
o It is used to analyze different aspects of the language.

2. Natural Language Generation (NLG)

Natural Language Generation (NLG) acts as a translator that converts computerized data into a natural language representation. It mainly involves text planning, sentence planning, and text realization.

Applications of NLP:-

1. Tokenization: Tokenization is the process of breaking down text into smaller units called tokens, which could be words, phrases, or even characters. This is the first step in many NLP tasks (a short sketch of tokenization and normalization appears after this list).
2. Text Normalization:
o Description: Text normalization involves converting text into a
standard format, which includes lowercasing, removing punctuation,
stemming, and lemmatization.
o Lowercasing: Converting all characters in the text to lowercase.
o Removing Punctuation: Stripping out punctuation marks from the
text.
o Stemming: Reducing words to their root form (e.g., "running" to
"run").
o Lemmatization: Reducing words to their base or dictionary form
(e.g., "better" to "good").
3. Sentiment Analysis: Sentiment analysis determines the sentiment or
emotion expressed in a piece of text, categorizing it as positive, negative,
or neutral.
4. Machine Translation: Machine translation automatically translates text
from one language to another.
5. Text Summarization:
o Description: Text summarization reduces a large body of text to a
shorter version while retaining its main points and overall meaning.
o Example: Generating a summary of a long news article.
6. Question Answering:
o Description: Question answering systems provide answers to
questions posed in natural language by extracting relevant
information from a dataset or corpus.
o Example: Asking a virtual assistant, "What is the capital of France?"
and receiving the answer "Paris."
7. Language Modeling:
o Description: Language modeling predicts the next word in a
sentence given the preceding words, which is fundamental for tasks
like text generation and speech recognition.
o Example: Predicting the next word in the sentence "The weather
today is" as "sunny."
Phases of NLP:-

There are the following five phases of NLP:

1. Lexical and Morphological Analysis: The first phase of NLP is lexical analysis. This phase scans the source text as a stream of characters and converts it into meaningful lexemes. It divides the whole text into paragraphs, sentences, and words.

2. Syntactic Analysis (Parsing): Syntactic analysis is used to check grammar and word arrangement, and to show the relationships among words.

Example: "Agra goes to the Poonam." In the real world, this sentence does not make any sense, so it is rejected by the syntactic analyser.

3. Semantic Analysis: Semantic analysis is concerned with meaning representation. It mainly focuses on the literal meaning of words, phrases, and sentences.

4. Discourse Integration: Discourse integration means that the meaning of a sentence depends upon the sentences that precede it and may also influence the meaning of the sentences that follow it.
5. Pragmatic Analysis: Pragmatic analysis is the fifth and last phase of NLP. It helps you discover the intended effect by applying a set of rules that characterize cooperative dialogues.

For Example: "Open the door" is interpreted as a request instead of an order.

Advantages of NLP:-

o NLP helps users to ask questions about any subject and get a direct
response within seconds.
o NLP offers exact answers to a question, meaning it does not return unnecessary or unwanted information.
o NLP helps computers to communicate with humans in their languages.
o It is very time efficient.
o Most companies use NLP to improve the efficiency of documentation processes, the accuracy of documentation, and the identification of information in large databases.

Disadvantages of NLP:-
A list of disadvantages of NLP is given below:

o NLP may not capture context.
o NLP can be unpredictable.
o NLP may require more keystrokes.
o NLP systems are often unable to adapt to a new domain and have limited functionality, which is why they are usually built for a single, specific task.

Q6. Describe ensemble methods classification.

Ans. Ensemble methods in classification are techniques that combine multiple machine learning models to improve overall performance compared to individual models. The idea is to leverage the strengths, and mitigate the weaknesses, of various models to create a more accurate and robust classifier. Ensemble methods can be broadly categorized into two types, bagging and boosting; there are also other methods like stacking and voting. Here is a detailed description of these techniques:

1. Bagging (Bootstrap Aggregating)

Description: Bagging involves training multiple instances of the same learning algorithm on different subsets of the training data (created using bootstrapping, i.e., random sampling with replacement) and then aggregating their predictions to make the final prediction.

Steps:

1. Generate multiple bootstrapped subsets from the original training dataset.
2. Train a model (e.g., a decision tree) on each subset.
3. Aggregate the predictions from all models (e.g., majority voting for classification).

Example Algorithm: Random Forest

A Random Forest consists of a collection of decision trees, each trained on a different bootstrapped subset of the training data. The final prediction is made by aggregating the predictions of all trees (majority vote for classification).
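
A minimal bagging sketch with a Random Forest (scikit-learn assumed; the synthetic dataset is illustrative):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=200, random_state=0)
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X, y)              # each tree sees a different bootstrapped subset
    print(forest.predict(X[:5]))  # final prediction = majority vote of the trees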

2. Boosting:-

Description: Boosting involves training multiple models sequentially, where each model tries to correct the errors of the previous one. The final prediction is a weighted sum of the predictions from all models.

Steps:

1. Initialize weights for all training samples.
2. Train a base model and evaluate its performance.
3. Increase the weights of incorrectly classified samples and decrease the
weights of correctly classified ones.
4. Train the next model on the re-weighted samples.
5. Repeat the process for a specified number of iterations.
6. Aggregate the predictions using a weighted sum.

Example Algorithms:

o AdaBoost (Adaptive Boosting): Adjusts the weights of misclassified samples, making them more likely to be chosen in subsequent rounds.
o Gradient Boosting: Fits new models to the residual errors made by
previous models, effectively reducing the overall error.
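
A minimal boosting sketch with AdaBoost (scikit-learn assumed):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier

    X, y = make_classification(n_samples=200, random_state=0)
    boost = AdaBoostClassifier(n_estimators=50, random_state=0)
    boost.fit(X, y)           # each round re-weights misclassified samples
    print(boost.score(X, y))  # weighted combination of the weak learners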

3. Stacking (Stacked Generalization):-

Description: Stacking involves training multiple models (base learners) and then
using another model (meta-learner) to combine their predictions.

Steps:

1. Split the training data into several folds.
2. Train base learners on different folds and get their predictions.
3. Use these predictions as inputs to train the meta-learner.
4. The meta-learner makes the final prediction.

Example: Combining logistic regression, decision trees, and support vector machines as base learners, with a neural network as the meta-learner.
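
A minimal stacking sketch (scikit-learn assumed); for simplicity, logistic regression serves here as the meta-learner instead of the neural network mentioned above:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=200, random_state=0)
    stack = StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier()), ("svm", SVC())],
        final_estimator=LogisticRegression())  # meta-learner combines base predictions
    print(stack.fit(X, y).score(X, y))
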
4. Voting:-

Description: Voting is a simple ensemble method where multiple models (which can be of different types) are trained on the same data, and their predictions are combined using a majority vote (for classification) or averaging (for regression).

Types:

o Hard Voting: The class label that gets the most votes is the final prediction.
o Soft Voting: The probabilities of each class are averaged, and the class with
the highest average probability is chosen.
o Example: Combining predictions from a logistic regression model, a decision tree, and a k-nearest neighbours model using hard or soft voting.
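
A minimal voting sketch combining the three models from the example (scikit-learn assumed):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=200, random_state=0)
    vote = VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("dt", DecisionTreeClassifier()),
                    ("knn", KNeighborsClassifier())],
        voting="hard")             # "soft" would average class probabilities
    print(vote.fit(X, y).score(X, y))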

Advantages of Ensemble Methods:-

1. Improved Accuracy: By combining multiple models, ensemble methods can achieve better performance than individual models.
2. Robustness: Ensembles reduce the risk of overfitting and are more
resilient to noisy data.
3. Versatility: They can be used with different types of base learners and can
be applied to various types of data.

Disadvantages of Ensemble Methods:-

1. Complexity: Building and training multiple models increases computational complexity and time.
2. Interpretability: Ensembles are often harder to interpret and understand
compared to single models.
3. Resource Intensive: Requires more memory and processing power.
