
Artificial Intelligence

Artificial Intelligence is the mechanism of incorporating human intelligence into machines through a set of rules (algorithms). AI is a combination of two words: "Artificial", meaning something made by humans or non-natural, and "Intelligence", meaning the ability to understand or think. Another definition is that AI is the study of training machines (computers) to mimic the human brain and its thinking capabilities.

AI focuses on three major aspects (skills): learning, reasoning, and self-correction, in order to obtain the maximum possible efficiency.

Machine Learning:

Machine Learning is the study/process that enables a system (computer) to learn automatically from its own experience and improve accordingly, without being explicitly programmed. ML is an application, or subset, of AI. ML focuses on developing programs that can access data and use it to learn for themselves. The process makes observations on the data to identify possible patterns and make better future decisions based on the examples provided. The major aim of ML is to allow systems to learn by themselves through experience, without any kind of human intervention or assistance.

Deep Learning:

Deep Learning is a sub-part of the broader Machine Learning family that makes use of neural networks (similar to the neurons working in our brain) to mimic human brain-like behaviour. DL algorithms focus on information-processing patterns to identify patterns much like the human brain does and to classify the information accordingly. DL works on larger sets of data than ML, and the prediction mechanism is self-administered by the machine.

Below is a point-by-point comparison of the differences between Artificial Intelligence, Machine Learning, and Deep Learning:

1.
   AI: AI stands for Artificial Intelligence and is the study/process that enables machines to mimic human behaviour through a particular algorithm.
   ML: ML stands for Machine Learning and is the study that uses statistical methods to enable machines to improve with experience.
   DL: DL stands for Deep Learning and is the study that makes use of neural networks (similar to the neurons present in the human brain) to imitate functionality just like a human brain.

2.
   AI: AI is the broader family consisting of ML and DL as its components.
   ML: ML is a subset of AI.
   DL: DL is a subset of ML.

3.
   AI: AI is a computer algorithm that exhibits intelligence through decision making.
   ML: ML is an AI algorithm that allows a system to learn from data.
   DL: DL is an ML algorithm that uses deep (more than one layer) neural networks to analyze data and provide output accordingly.

4.
   AI: Search trees and much more complex math are involved in AI.
   ML: If you have a clear idea of the logic (math) involved and can visualize complex functionalities such as K-Means or Support Vector Machines, that defines the ML aspect.
   DL: If you are clear about the math involved but have no idea about the features, and you break the complex functionality into linear/lower-dimension features by adding more layers, that defines the DL aspect.

5.
   AI: The aim is basically to increase the chances of success, not accuracy.
   ML: The aim is to increase accuracy, without caring much about the success ratio.
   DL: It attains the highest rank in terms of accuracy when trained with a large amount of data.

6.
   AI: Three broad categories/types of AI are Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI).
   ML: Three broad categories/types of ML are Supervised Learning, Unsupervised Learning, and Reinforcement Learning.
   DL: DL can be considered as neural networks with a large number of parameters and layers, lying in one of four fundamental network architectures: Unsupervised Pre-trained Networks, Convolutional Neural Networks, Recurrent Neural Networks, and Recursive Neural Networks.

7.
   AI: The efficiency of AI is basically the efficiency provided by ML and DL respectively.
   ML: Less efficient than DL, as it cannot handle higher dimensions or larger amounts of data.
   DL: More powerful than ML, as it can easily work with larger sets of data.

8.
   AI: Examples of AI applications include Google's AI-powered predictions, ridesharing apps like Uber and Lyft, and the AI autopilot used on commercial flights.
   ML: Examples of ML applications include virtual personal assistants (Siri, Alexa, Google Assistant, etc.) and email spam and malware filtering.
   DL: Examples of DL applications include sentiment-based news aggregation and image analysis and caption generation.

9.
   AI: AI refers to the broad field of computer science that focuses on creating intelligent machines able to perform tasks that would normally require human intelligence, such as reasoning, perception, and decision-making.
   ML: ML is a subset of AI that focuses on developing algorithms that can learn from data and improve their performance over time without being explicitly programmed.
   DL: DL is a subset of ML that focuses on developing deep neural networks that can automatically learn and extract features from data.

10.
   AI: AI can be further broken down into various subfields such as robotics, natural language processing, computer vision, expert systems, and more.
   ML: ML algorithms can be categorized as supervised, unsupervised, or reinforcement learning. In supervised learning, the algorithm is trained on labeled data, where the desired output is known. In unsupervised learning, the algorithm is trained on unlabeled data, where the desired output is unknown.
   DL: DL algorithms are inspired by the structure and function of the human brain, and they are particularly well suited to tasks such as image and speech recognition.

11.
   AI: AI systems can be rule-based, knowledge-based, or data-driven.
   ML: In reinforcement learning, the algorithm learns by trial and error, receiving feedback in the form of rewards or punishments.
   DL: DL networks consist of multiple layers of interconnected neurons that process data in a hierarchical manner, allowing them to learn increasingly complex representations of the data.

AI vs. Machine Learning vs. Deep Learning Examples:

Artificial Intelligence (AI) refers to the development of computer systems that can perform tasks that
would normally require human intelligence.

There are numerous examples of AI applications across various industries. Here are some common ones:

 Speech recognition: Speech recognition systems use AI and deep learning algorithms to recognize and transcribe spoken language. These systems are used in a variety of applications, such as virtual assistants, call centers, and other voice-driven services.

 Personalized recommendations: E-commerce sites and streaming services like Amazon and
Netflix use AI algorithms to analyze users’ browsing and viewing history to recommend
products and content that they are likely to be interested in.

 Predictive maintenance: AI-powered predictive maintenance systems analyze data from sensors and other sources to predict when equipment is likely to fail, helping to reduce downtime and maintenance costs.

 Medical diagnosis: AI-powered medical diagnosis systems analyze medical images and other
patient data to help doctors make more accurate diagnoses and treatment plans.

 Autonomous vehicles: Self-driving cars and other autonomous vehicles use AI algorithms and sensor data, such as cameras and lidar, to analyze their environment and make decisions about speed, direction, navigation, obstacle avoidance, and route planning.

 Virtual Personal Assistants (VPA) like Siri or Alexa – these use natural language processing
to understand and respond to user requests, such as playing music, setting reminders, and
answering questions.

 Fraud detection – financial institutions use AI to analyze transactions and detect patterns
that are indicative of fraud, such as unusual spending patterns or transactions from
unfamiliar locations.

 Image recognition – AI is used in applications such as photo organization, security systems, and autonomous robots to identify objects, people, and scenes in images.

 Natural language processing – AI is used in chatbots and language translation systems to understand and generate human-like text.

 Predictive analytics – AI is used in industries such as healthcare and marketing to analyze large amounts of data and make predictions about future events, such as disease outbreaks or consumer behavior.

 Game-playing AI – AI algorithms have been developed to play games such as chess, Go, and
poker at a superhuman level, by analyzing game data and making predictions about the
outcomes of moves.

Examples of Machine Learning:

Machine Learning (ML) is a subset of Artificial Intelligence (AI) that involves the use of algorithms and
statistical models to allow a computer system to “learn” from data and improve its performance over
time, without being explicitly programmed to do so.

Here are some examples of Machine Learning:


 Image recognition: Machine learning algorithms are used in image recognition systems to
classify images based on their contents. These systems are used in a variety of applications,
such as self-driving cars, security systems, and medical imaging.

 Speech recognition: Machine learning algorithms are used in speech recognition systems to transcribe speech and identify the words spoken, enabling voice-controlled interfaces and dictation software. These systems are used in virtual assistants like Siri and Alexa, as well as in call centers and other applications.

 Natural language processing (NLP): Machine learning algorithms are used in NLP systems to
understand and generate human language. These systems are used in chatbots, virtual
assistants, and other applications that involve natural language interactions.

 Recommendation systems: Machine learning algorithms are used in recommendation systems to analyze users' browsing and purchase history and recommend products or services that are likely to be of interest. These systems are used in e-commerce sites, streaming services, and other applications.

 Sentiment analysis: Machine learning algorithms are used in sentiment analysis systems to
classify the sentiment of text or speech as positive, negative, or neutral. These systems are
used in social media monitoring and other applications.

 Predictive maintenance: Machine learning algorithms are used in predictive maintenance systems, particularly in manufacturing, to analyze data from sensors and other sources and predict when equipment is likely to fail, allowing for proactive maintenance and helping to reduce downtime and maintenance costs.

 Spam filters in email – ML algorithms analyze email content and metadata to identify and
flag messages that are likely to be spam.

 Credit risk assessment – ML algorithms are used by financial institutions to assess the credit
risk of loan applicants, by analyzing data such as their income, employment history, and
credit score.

 Customer segmentation – ML algorithms are used in marketing to segment customers into different groups based on their characteristics and behavior, allowing for targeted advertising and promotions.

 Fraud detection – ML algorithms are used in financial transactions to detect patterns of behavior that are indicative of fraud, such as unusual spending patterns or transactions from unfamiliar locations.

Examples of Deep Learning:

Deep Learning is a type of Machine Learning that uses artificial neural networks with multiple layers
to learn and make decisions.
Here are some examples of Deep Learning:

 Image and video recognition: Deep learning algorithms are used in image and video
recognition systems to classify and analyze visual data. These systems are used in self-driving
cars, security systems, and medical imaging.

 Generative models: Deep learning algorithms are used in generative models to create new
content based on existing data. These systems are used in image and video generation, text
generation, and other applications.

 Autonomous vehicles: Deep learning algorithms are used in self-driving cars and other
autonomous vehicles to analyze sensor data and make decisions about speed, direction, and
other factors.

 Image classification – Deep Learning algorithms are used to recognize objects and scenes in
images, such as recognizing faces in photos or identifying items in an image for an e-
commerce website.

 Speech recognition – Deep Learning algorithms are used to transcribe spoken words into
text, allowing for voice-controlled interfaces and dictation software.

 Natural language processing – Deep Learning algorithms are used for tasks such as
sentiment analysis, language translation, and text generation.

 Recommender systems – Deep Learning algorithms are used in recommendation systems to make personalized recommendations based on users' behavior and preferences.

 Fraud detection – Deep Learning algorithms are used in financial transactions to detect
patterns of behavior that are indicative of fraud, such as unusual spending patterns or
transactions from unfamiliar locations.

 Game-playing AI – Deep Learning algorithms have been used to develop game-playing AI that can compete at a superhuman level, such as the AlphaGo AI that defeated the world champion in the game of Go.

 Time series forecasting – Deep Learning algorithms are used to forecast future values in time
series data, such as stock prices, energy consumption, and weather patterns.
Decision Tree Classification Algorithm
o Decision Tree is a Supervised learning technique that can be used for both Classification and Regression problems, but it is mostly preferred for solving Classification problems. It is a tree-structured classifier, where internal nodes represent the features of a dataset, branches represent the decision rules, and each leaf node represents the outcome.

o In a Decision tree, there are two types of nodes: the Decision Node and the Leaf Node. Decision nodes are used to make any decision and have multiple branches, whereas leaf nodes are the outputs of those decisions and do not contain any further branches.

o The decisions or tests are performed on the basis of features of the given dataset.

o It is a graphical representation for getting all the possible solutions to a problem/decision based on given conditions.

o It is called a decision tree because, similar to a tree, it starts with the root node, which
expands on further branches and constructs a tree-like structure.

o In order to build a tree, we use the CART algorithm, which stands for Classification and
Regression Tree algorithm.

o A decision tree simply asks a question, and based on the answer (Yes/No), it further splits the tree into subtrees.

o Below diagram explains the general structure of a decision tree:

Note: A decision tree can contain categorical data (YES/NO) as well as numeric data.

Why use Decision Trees?


There are various algorithms in Machine Learning, so choosing the best algorithm for the given dataset and problem is the main point to remember while creating a machine learning model. Below are two reasons for using the Decision Tree:

o Decision Trees usually mimic human thinking ability while making a decision, so it is easy to
understand.

o The logic behind the decision tree can be easily understood because it shows a tree-like
structure.

Decision Tree Terminologies

 Root Node: Root node is from where the decision tree starts. It represents the entire dataset,
which further gets divided into two or more homogeneous sets.

 Leaf Node: Leaf nodes are the final output nodes, and the tree cannot be segregated further after reaching a leaf node.

 Splitting: Splitting is the process of dividing the decision node/root node into sub-nodes according
to the given conditions.

 Branch/Sub Tree: A subtree formed by splitting the tree.

 Pruning: Pruning is the process of removing the unwanted branches from the tree.

 Parent/Child node: The root node of the tree is called the parent node, and other nodes are called
the child nodes.

How does the Decision Tree algorithm Work?

In a decision tree, to predict the class of a given dataset, the algorithm starts from the root node of the tree. The algorithm compares the value of the root attribute with the record's (real dataset) attribute and, based on the comparison, follows the branch and jumps to the next node.

For the next node, the algorithm again compares the attribute value with the other sub-nodes and moves further. It continues this process until it reaches a leaf node of the tree. The complete process can be better understood using the algorithm below:

o Step-1: Begin the tree with the root node, say S, which contains the complete dataset.

o Step-2: Find the best attribute in the dataset using Attribute Selection Measure (ASM).

o Step-3: Divide S into subsets that contain possible values of the best attribute.

o Step-4: Generate the decision tree node, which contains the best attribute.

o Step-5: Recursively make new decision trees using the subsets of the dataset created in Step-3. Continue this process until a stage is reached where the nodes cannot be classified further; these final nodes are called leaf nodes.

Example: Suppose there is a candidate who has a job offer and wants to decide whether he should accept the offer or not. To solve this problem, the decision tree starts with the root node (the Salary attribute, chosen by ASM). The root node splits further into the next decision node (distance from the office) and one leaf node based on the corresponding labels. The next decision node further splits into one decision node (cab facility) and one leaf node. Finally, the decision node splits into two leaf nodes (Accepted offer and Declined offer). Consider the diagram below:
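As a complement to the diagram, the same job-offer example can be sketched in code. The following is a minimal illustration using scikit-learn's DecisionTreeClassifier; the tiny dataset and the 0/1 encoding of Salary, Distance, and Cab facility are hypothetical and exist only to show how the tree asks a question at each node and splits into subtrees.

# A minimal, hypothetical sketch of the job-offer example using scikit-learn.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features encoded as 0/1 flags: [good salary, near office, cab facility]
X = [
    [1, 1, 1],  # good salary, near office, cab provided
    [1, 1, 0],  # good salary, near office, no cab
    [1, 0, 1],  # good salary, far from office, cab provided
    [1, 0, 0],  # good salary, far from office, no cab
    [0, 1, 1],  # low salary, near office, cab provided
    [0, 0, 0],  # low salary, far from office, no cab
]
y = ["Accept", "Accept", "Accept", "Decline", "Decline", "Decline"]

tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X, y)

# Print the learned decision rules (root node, decision nodes, leaf nodes).
print(export_text(tree, feature_names=["Salary", "Distance", "Cab facility"]))

# Predict for a new candidate: good salary, far from office, cab provided.
print(tree.predict([[1, 0, 1]]))

The printed rules show exactly the root-to-leaf question-and-split structure described in the steps above.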
Attribute Selection Measures

While implementing a Decision tree, the main issue that arises is how to select the best attribute for the root node and for the sub-nodes. To solve such problems there is a technique called the Attribute Selection Measure, or ASM. Using this measure, we can easily select the best attribute for the nodes of the tree. There are two popular techniques for ASM:

o Information Gain

o Gini Index

1. Information Gain:

o Information gain is the measurement of changes in entropy after the segmentation of a dataset based on an attribute.

o It calculates how much information a feature provides us about a class.

o According to the value of information gain, we split the node and build the decision tree.

o A decision tree algorithm always tries to maximize the value of information gain, and a
node/attribute having the highest information gain is split first. It can be calculated using the
below formula:

Information Gain = Entropy(S) − (Weighted Avg) × Entropy(each feature)

Entropy: Entropy is a metric to measure the impurity in a given attribute. It specifies randomness in
data. Entropy can be calculated as:

Entropy(S) = −P(yes) log2 P(yes) − P(no) log2 P(no)

Where,
o S = the total sample set

o P(yes)= probability of yes

o P(no)= probability of no
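To make the two formulas concrete, here is a small illustrative sketch in Python that computes entropy and information gain for a labelled sample; the label and feature values used at the end are made up purely for demonstration.

import math
from collections import Counter

def entropy(labels):
    # Entropy(S) = -P(yes)*log2(P(yes)) - P(no)*log2(P(no)), generalized to any labels.
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(labels, feature_values):
    # Entropy(S) minus the weighted average entropy of the subsets produced by the split.
    total = len(labels)
    weighted = 0.0
    for value in set(feature_values):
        subset = [lab for lab, f in zip(labels, feature_values) if f == value]
        weighted += (len(subset) / total) * entropy(subset)
    return entropy(labels) - weighted

# Hypothetical example: 9 "yes" and 5 "no" samples split by a binary feature.
labels = ["yes"] * 9 + ["no"] * 5
feature = ["A"] * 7 + ["B"] * 7
print(entropy(labels))                    # about 0.940
print(information_gain(labels, feature))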

2. Gini Index:

o Gini index is a measure of impurity or purity used while creating a decision tree in the
CART(Classification and Regression Tree) algorithm.

o An attribute with a low Gini index should be preferred over an attribute with a high Gini index.

o It creates only binary splits; the CART algorithm uses the Gini index to create these binary splits.

o Gini index can be calculated using the below formula:

Gini Index = 1 − Σj Pj²
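The same can be done for the Gini index; the following one-function sketch (with made-up labels) mirrors the formula above.

from collections import Counter

def gini_index(labels):
    # Gini = 1 - sum over classes j of Pj^2
    total = len(labels)
    return 1.0 - sum((count / total) ** 2 for count in Counter(labels).values())

# Hypothetical example: 9 "yes" and 5 "no" samples.
print(gini_index(["yes"] * 9 + ["no"] * 5))  # 1 - (9/14)^2 - (5/14)^2, about 0.459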

Pruning: Getting an Optimal Decision tree

Pruning is a process of deleting the unnecessary nodes from a tree in order to get the optimal decision tree.

A tree that is too large increases the risk of overfitting, while a tree that is too small may not capture all the important features of the dataset. A technique that decreases the size of the learning tree without reducing accuracy is therefore known as pruning. There are mainly two types of tree pruning techniques used:

o Cost Complexity Pruning

o Reduced Error Pruning.
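As an illustration of cost complexity pruning in practice, scikit-learn exposes it through the ccp_alpha parameter of DecisionTreeClassifier. The sketch below uses the bundled Iris dataset purely as a stand-in, and the alpha value is picked from the middle of the pruning path only for illustration; in practice it would be chosen by cross-validation.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Compute the candidate alpha values along the cost complexity pruning path.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)
alpha = path.ccp_alphas[len(path.ccp_alphas) // 2]  # illustrative choice only

# A larger ccp_alpha prunes more aggressively, giving a smaller tree.
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_train, y_train)
print(pruned.get_n_leaves(), pruned.score(X_test, y_test))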

Advantages of the Decision Tree

o It is simple to understand, as it follows the same process that a human follows while making any decision in real life.

o It can be very useful for solving decision-related problems.

o It helps to think about all the possible outcomes for a problem.

o It requires less data cleaning compared to other algorithms.

Disadvantages of the Decision Tree

o The decision tree contains lots of layers, which makes it complex.

o It may have an overfitting issue, which can be resolved using the Random Forest algorithm.

o With more class labels, the computational complexity of the decision tree may increase.
Random Forest Algorithm

Random Forest is a popular machine learning algorithm that belongs to the supervised learning
technique. It can be used for both Classification and Regression problems in ML. It is based on the
concept of ensemble learning, which is a process of combining multiple classifiers to solve a complex
problem and to improve the performance of the model.

As the name suggests, "Random Forest is a classifier that contains a number of decision trees on various subsets of the given dataset and takes the average to improve the predictive accuracy of that dataset." Instead of relying on one decision tree, the random forest takes the prediction from each tree and, based on the majority vote of predictions, predicts the final output.

A greater number of trees in the forest leads to higher accuracy and helps prevent the problem of overfitting.

The below diagram explains the working of the Random Forest algorithm:

Note: To better understand the Random Forest Algorithm, you should have knowledge of the
Decision Tree Algorithm.

Assumptions for Random Forest

Since the random forest combines multiple trees to predict the class of the dataset, it is possible that
some decision trees may predict the correct output, while others may not. But together, all the trees
predict the correct output. Therefore, below are two assumptions for a better Random forest
classifier:

o There should be some actual values in the feature variable of the dataset so that the
classifier can predict accurate results rather than a guessed result.

o The predictions from each tree must have very low correlations.

Why use Random Forest?

Below are some points that explain why we should use the Random Forest algorithm:

<="" li="" style="box-sizing: border-box;">

o It takes less training time as compared to other algorithms.

o It predicts output with high accuracy, and it runs efficiently even for large datasets.

o It can also maintain accuracy when a large proportion of data is missing.

How does Random Forest algorithm work?

Random Forest works in two phases: the first is to create the random forest by combining N decision trees, and the second is to make predictions using the trees created in the first phase.

The Working process can be explained in the below steps and diagram:

Step-1: Select K random data points from the training set.

Step-2: Build the decision trees associated with the selected data points (Subsets).

Step-3: Choose the number N for decision trees that you want to build.

Step-4: Repeat Step 1 & 2.

Step-5: For new data points, find the predictions of each decision tree, and assign the new data
points to the category that wins the majority votes.

The working of the algorithm can be better understood by the below example:

Example: Suppose there is a dataset that contains multiple fruit images, and this dataset is given to the Random Forest classifier. The dataset is divided into subsets and given to each decision tree. During the training phase, each decision tree produces a prediction result, and when a new data point occurs, the Random Forest classifier predicts the final decision based on the majority of results. Consider the image below:
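Alongside the image, the two phases can be sketched with scikit-learn's RandomForestClassifier. The synthetic dataset from make_classification below is only a stand-in for the fruit-image dataset in the example; the point is to show each tree producing a prediction and the forest returning the aggregated decision.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in dataset (synthetic), used only to illustrate the voting mechanism.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Phase 1: build N decision trees on bootstrapped subsets of the data.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)

# Phase 2: each tree predicts the class of a new data point; scikit-learn then
# aggregates the trees (by averaging their predicted probabilities, which here
# behaves like the majority vote described above).
new_point = X[:1]
votes = [tree.predict(new_point)[0] for tree in forest.estimators_]
print("First ten tree votes:", votes[:10])
print("Forest prediction:", forest.predict(new_point))
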
Applications of Random Forest

There are mainly four sectors where Random Forest is mostly used:

1. Banking: The banking sector mostly uses this algorithm for the identification of loan risk.

2. Medicine: With the help of this algorithm, disease trends and risks of the disease can be
identified.

3. Land Use: We can identify the areas of similar land use by this algorithm.

4. Marketing: Marketing trends can be identified using this algorithm.

Advantages of Random Forest

o Random Forest is capable of performing both Classification and Regression tasks.

o It is capable of handling large datasets with high dimensionality.

o It enhances the accuracy of the model and prevents the overfitting issue.

Disadvantages of Random Forest

o Although Random Forest can be used for both classification and regression tasks, it is not as well suited for regression tasks.

Python Implementation of Random Forest Algorithm

Now we will implement the Random Forest algorithm using Python. For this, we will use the same dataset, "user_data.csv", which we have used in previous classification models. By using the same dataset, we can compare the Random Forest classifier with other classification models such as the Decision Tree classifier, KNN, SVM, Logistic Regression, etc.

Implementation Steps are given below:

o Data Pre-processing step

o Fitting the Random forest algorithm to the Training set

o Predicting the test result

o Test accuracy of the result (Creation of Confusion matrix)

o Visualizing the test set result.
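A hedged sketch of these steps is shown below. It assumes that "user_data.csv" has the same layout as in the earlier classification models (the two feature columns Age and EstimatedSalary in positions 2 and 3, and the purchase label in the last column); adjust the column indices if your copy of the file differs. The visualization step is only noted in a comment, since the plotting code depends on the feature ranges.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

# 1. Data pre-processing (assumed column layout: features in columns 2-3, label last).
dataset = pd.read_csv("user_data.csv")
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, -1].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# 2. Fitting the Random Forest algorithm to the training set.
classifier = RandomForestClassifier(n_estimators=10, criterion="entropy", random_state=0)
classifier.fit(X_train, y_train)

# 3. Predicting the test result.
y_pred = classifier.predict(X_test)

# 4. Test accuracy of the result (creation of the confusion matrix).
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))

# 5. Visualizing the test set result would typically plot the two scaled features
#    with a decision-boundary contour (matplotlib), omitted here for brevity.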
