AIML III (JNTUK R20): Artificial Intelligence and Machine Learning

UNIT– III:

Bayesian and Computational Learning: Bayes theorem, concept learning, maximum likelihood,
minimum description length principle, Gibbs algorithm, Naïve Bayes classifier algorithm, instance-based
learning (k-nearest neighbour learning).
Introduction to Machine Learning (ML): definition, evolution, need, applications of ML in
industry and the real world, classification; differences between supervised and unsupervised learning
paradigms.

1. Experiment

An experiment is defined as a planned operation carried out under controlled conditions. Examples:

1. Tossing a coin
2. Drawing a card
3. Rolling a die

2. Sample Space

Each result obtained during an experiment is called an outcome, and the set of all possible outcomes of an experiment is known as the sample space.

For example, if we are rolling a die, the sample space will be:

S1 = {1, 2, 3, 4, 5, 6}

Similarly, if our experiment is tossing a coin and recording the outcome, then the sample space will be:

S2 = {Head, Tail}

3. Event

An event is defined as a subset of the sample space of an experiment.

Assume in our experiment of rolling a die there are two events A and B such that:
A = event that an even number is obtained = {2, 4, 6}
B = event that a number greater than 4 is obtained = {5, 6}
Probability of event A: P(A) = number of favourable outcomes / total number of possible outcomes
P(A) = 3/6 = 1/2 = 0.5
Similarly, probability of event B: P(B) = number of favourable outcomes / total number of possible outcomes
P(B) = 2/6 = 1/3 ≈ 0.333

For the same events:
Union of A and B: A ∪ B = {2, 4, 5, 6}
Intersection of A and B: A ∩ B = {6}

Disjoint events: if the intersection of events A and B is the empty set, then the events are known as disjoint events, also called mutually exclusive events.

4. Random Variable:

A random variable takes on random values, each value having some probability.
A random variable can be discrete, continuous, or a combination of both.

5. Exhaustive Event:

Two events A and B are said to be exhaustive if at least one of them must occur. If they are also mutually exclusive, exactly one of them occurs at a time:

e.g., while tossing a coin, the outcome will be either a Head or a Tail.

6. Independent Event:

Two events are said to be independent when the occurrence of one event does not affect the occurrence of the other.
In simple words, the probability of the outcome of one event does not depend on the other.
Mathematically, two events A and B are said to be independent if:

P(A ∩ B) = P(A) * P(B)

7. Conditional Probability:

Conditional probability is defined as the probability of an event A, given that another event B has already occurred (i.e., A conditioned on B).
It is represented by P(A|B) and defined as:

P(A|B) = P(A ∩ B) / P(B)
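These identities can be checked with a short script; a minimal sketch (not part of the original notes) using the dice events A and B defined earlier:

```python
from fractions import Fraction

# Sample space for rolling a fair die
S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}        # event: even number
B = {5, 6}           # event: number greater than 4

def prob(event, space=S):
    # Classical probability: favourable outcomes / total outcomes
    return Fraction(len(event), len(space))

p_a_and_b = prob(A & B)             # P(A ∩ B) = 1/6
p_a_given_b = p_a_and_b / prob(B)   # P(A|B) = (1/6)/(2/6) = 1/2

print(prob(A), prob(B), p_a_and_b, p_a_given_b)
```

Note that here P(A ∩ B) = 1/6 = P(A) * P(B), so these particular events also happen to be independent.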

Bayes' theorem in Artificial intelligence

 Bayes' theorem is named after Thomas Bayes, an English statistician and philosopher of the 18th century.
 Bayes contributed to decision theory, which makes extensive use of probability, an important branch of mathematics.
 Bayes' theorem is also widely used in machine learning, where we need to predict outputs accurately.
 An important concept built on Bayes' theorem is the Bayesian method.
 It is used to calculate conditional probability in machine learning applications, including classification tasks.
 Further, a simplified version of Bayes' theorem (the Naïve Bayes classifier) is also used to reduce computation time and project cost.
 Bayes' theorem is also known by other names such as Bayes' rule or Bayes' law.
 It is used to calculate the probability of one event occurring given that another event has already occurred.

What is Bayes Theorem?


Bayes' theorem is one of the most popular concepts in machine learning. It helps to calculate the probability of one event occurring, with uncertain knowledge, given that another event has already occurred.

Bayes' theorem can be derived using the product rule and the conditional probability of event X with known event Y:

o According to the product rule, the probability of event X together with known event Y is:

P(X ∩ Y) = P(X|Y) P(Y)   {equation 1}

o Similarly, the probability of event Y together with known event X is:

P(X ∩ Y) = P(Y|X) P(X)   {equation 2}

Mathematically, Bayes' theorem is obtained by equating the right-hand sides of the two equations and dividing by P(Y):

P(X|Y) = P(Y|X) P(X) / P(Y)

Note that X and Y need not be independent here; Bayes' theorem relates the two conditional probabilities P(X|Y) and P(Y|X).

The above equation is called Bayes' rule or Bayes' theorem.


 P(X|Y) is called the posterior, which we need to calculate. It is the updated probability of the hypothesis after considering the evidence.
 P(Y|X) is called the likelihood. It is the probability of the evidence given that the hypothesis is true.
 P(X) is called the prior probability, the probability of the hypothesis before considering the evidence.
 P(Y) is called the marginal probability. It is the overall probability of the evidence.

Hence, Bayes' theorem can be written as:

posterior = likelihood * prior / evidence
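A minimal sketch of this formula in Python; the numbers (a disease-testing scenario) are assumed purely for illustration:

```python
def bayes_posterior(likelihood, prior, evidence):
    # posterior = likelihood * prior / evidence
    return likelihood * prior / evidence

# Assumed values: P(test positive | disease) = 0.9,
# P(disease) = 0.01, P(test positive) = 0.05
p_disease_given_positive = bayes_posterior(0.9, 0.01, 0.05)
print(p_disease_given_positive)  # 0.18
```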

Minimum Description Length Principle

The less complex the representation of the data, the more efficiently we can process it: it is time effective and cost effective. That is why we compress the data into a model.

L(D) = L(M) + L(D|M)

Description length of the data = length of the model + length of the data encoded with the help of the model.

This formula can be motivated from Bayes' theorem: choosing the hypothesis with maximum posterior probability corresponds to minimizing the total description length.

Compact representation: MDL does not miss the essential data.

Error versus complexity: if one hypothesis gives more error, we select another hypothesis with less error, trading off model complexity against the cost of encoding the errors.
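A toy sketch of this trade-off; the bit costs below are invented for illustration, assuming we can measure the model's description length and the bits needed to encode its errors:

```python
# Toy MDL comparison: total cost = model bits + bits to encode errors.
# All bit costs below are invented for illustration.
hypotheses = {
    "simple model":  {"model_bits": 10, "error_bits": 50},
    "complex model": {"model_bits": 40, "error_bits": 5},
}

def description_length(h):
    # L(D) = L(M) + L(D|M)
    return h["model_bits"] + h["error_bits"]

best = min(hypotheses, key=lambda name: description_length(hypotheses[name]))
print(best)  # "complex model": 45 bits total beats 60 bits
```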

Gibbs Algorithm
The Bayes optimal classifier gives the best performance by combining the posterior probabilities of all hypotheses in H.
But it becomes costly, because it must combine the predictions of every hypothesis in H.

The Gibbs algorithm is an alternative to the Bayes optimal classifier.

The Gibbs algorithm is defined as:

 Choose a hypothesis h from H at random, according to the posterior probability distribution over H.
 Use this h to predict the classification of the next instance.
{Here the random hypotheses h1, h2, h3 are members of H.}
A hypothesis with higher posterior probability (fewer errors on the training data) is more likely to be chosen.
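A minimal sketch of the Gibbs algorithm; the hypotheses and their posterior probabilities are assumed toy values:

```python
import random

# Toy hypotheses: each maps an input x to a boolean label.
# The posterior probabilities are assumed for illustration.
hypotheses = [
    (lambda x: x > 0,  0.5),   # h1, posterior 0.5
    (lambda x: x > 1,  0.3),   # h2, posterior 0.3
    (lambda x: x > -1, 0.2),   # h3, posterior 0.2
]

def gibbs_predict(x):
    # Sample one hypothesis h according to P(h | D), then predict with it.
    hs, posteriors = zip(*hypotheses)
    h = random.choices(hs, weights=posteriors, k=1)[0]
    return h(x)

print(gibbs_predict(0.5))
```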

Maximum Likelihood
What Are Likelihood and Probability?

 Probability is a branch of mathematics that deals with the possible outcomes of a random experiment. The term "probability" refers to the chance of something happening.
 Likelihood refers to the process of determining the data distribution that best explains the observed data.
 The goal of maximum likelihood is to find the parameter values that give the distribution which maximizes the probability of observing the data.
Normally we assume some distribution for, say, mouse weights, but the likelihood of the actually observed mouse weights under that assumed distribution may be low.

Maximum likelihood means we estimate from the outcomes (for example, taking their average) and shift or update the normal distribution to the position where the observed data has maximum likelihood.
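A small sketch (the "mouse weight" values are invented) of maximum likelihood estimation for a normal distribution; for a Gaussian, the MLE of the mean is the sample average and the MLE of the variance divides by n:

```python
import math

# Invented "mouse weight" observations (grams)
weights = [17.5, 18.2, 19.0, 20.1, 21.3, 22.0]

n = len(weights)
mu_mle = sum(weights) / n                              # MLE of the mean
var_mle = sum((w - mu_mle) ** 2 for w in weights) / n  # MLE of the variance (divide by n)
sigma_mle = math.sqrt(var_mle)

print(mu_mle, sigma_mle)
```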
Naïve Bayes Classifier Algorithm

 The Naïve Bayes algorithm is a supervised learning algorithm based on Bayes' theorem, used for solving classification problems.
 It is mainly used in text classification with high-dimensional training datasets.
 The Naïve Bayes classifier is one of the simplest and most effective classification algorithms, helping to build fast machine learning models that can make quick predictions.
 It is a probabilistic classifier, which means it predicts on the basis of the probability of an object belonging to a class.
 Some popular applications of the Naïve Bayes algorithm are spam filtering, sentiment analysis, and classifying articles.

Why is it called Naïve Bayes?

The Naïve Bayes algorithm combines two words, Naïve and Bayes, which can be described as:

o Naïve: It is called naïve because it assumes that the occurrence of a certain feature is independent of the occurrence of other features.

For example, if a fruit is identified on the basis of colour, shape, and taste, then a red, spherical, and sweet fruit is recognized as an apple.

Hence each feature individually contributes to identifying it as an apple, without depending on the others.

o Bayes: It is called Bayes because it depends on the principle of Bayes' theorem.

Bayes' Theorem:

o Bayes' theorem is also known as Bayes' rule or Bayes' law; it is used to determine the probability of a hypothesis with prior knowledge. It depends on conditional probability.
o The formula for Bayes' theorem is given as:

P(A|B) = P(B|A) P(A) / P(B)

Working of Naïve Bayes' Classifier:

Working of Naïve Bayes' Classifier can be understood with the help of the below example:

Suppose we have a dataset of weather conditions and a corresponding target variable "Play". Using this dataset, we need to decide whether we should play or not on a particular day according to the weather conditions. To solve this problem, we need to follow the below steps:

1. Convert the given dataset into frequency tables.


2. Generate Likelihood table by finding the probabilities of given features.
3. Now, use Bayes theorem to calculate the posterior probability.

Problem: If the weather is sunny, should the player play or not?
Solution: To solve this, first consider the below dataset:

   Outlook    Play
0  Rainy      Yes
1  Sunny      Yes
2  Overcast   Yes
3  Overcast   Yes
4  Sunny      No
5  Rainy      Yes
6  Sunny      Yes
7  Overcast   Yes
8  Rainy      No
9  Sunny      No
10 Sunny      Yes
11 Rainy      No
12 Overcast   Yes
13 Overcast   Yes
Frequency table for the weather conditions:

Weather    Yes   No
Overcast    5     0
Rainy       2     2
Sunny       3     2
Total      10     4

Likelihood table for the weather conditions:

Weather    No            Yes           P(Weather)
Overcast   0             5             5/14 = 0.35
Rainy      2             2             4/14 = 0.29
Sunny      2             3             5/14 = 0.35
All        4/14 = 0.29   10/14 = 0.71

Applying Bayes' theorem:

P(Yes|Sunny) = P(Sunny|Yes) * P(Yes) / P(Sunny)

P(Sunny|Yes) = 3/10 = 0.3
P(Sunny) = 0.35
P(Yes) = 0.71

So P(Yes|Sunny) = 0.3 * 0.71 / 0.35 ≈ 0.61

P(No|Sunny) = P(Sunny|No) * P(No) / P(Sunny)

P(Sunny|No) = 2/4 = 0.5
P(No) = 0.29
P(Sunny) = 0.35

So P(No|Sunny) = 0.5 * 0.29 / 0.35 ≈ 0.41

As we can see from the above calculation, P(Yes|Sunny) > P(No|Sunny).

Hence, on a sunny day, the player can play the game.
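A minimal sketch (plain Python, no libraries) reproducing the calculation above from the dataset; with a single feature, the naïve independence assumption is not yet exercised:

```python
from collections import Counter

# (Outlook, Play) pairs from the worked example
data = [("Rainy","Yes"),("Sunny","Yes"),("Overcast","Yes"),("Overcast","Yes"),
        ("Sunny","No"),("Rainy","Yes"),("Sunny","Yes"),("Overcast","Yes"),
        ("Rainy","No"),("Sunny","No"),("Sunny","Yes"),("Rainy","No"),
        ("Overcast","Yes"),("Overcast","Yes")]

n = len(data)
play_counts = Counter(play for _, play in data)   # {"Yes": 10, "No": 4}
joint = Counter(data)                             # counts of (outlook, play) pairs

def posterior(play, outlook="Sunny"):
    prior = play_counts[play] / n                             # P(Yes) or P(No)
    likelihood = joint[(outlook, play)] / play_counts[play]   # P(Sunny|play)
    evidence = sum(1 for o, _ in data if o == outlook) / n    # P(Sunny)
    return likelihood * prior / evidence

print(posterior("Yes"), posterior("No"))  # ≈ 0.6 and ≈ 0.4
```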

Referred from: https://www.javatpoint.com/machine-learning-naive-bayes-classifier


What is Euclidean Distance?
 In mathematics, the Euclidean distance is defined as the distance between two points.
 In other words, the Euclidean distance between two points in Euclidean space is the length of the line segment between them.

Euclidean Distance Formula


 The Euclidean distance formula gives the length of a line segment.
 Let us assume two points (x1, y1) and (x2, y2) in the two-dimensional coordinate plane.
 Thus, the Euclidean distance formula is given by:

d = √((x2 − x1)² + (y2 − y1)²)

Where,
"d" is the Euclidean distance,
(x1, y1) is the coordinate of the first point, and
(x2, y2) is the coordinate of the second point.
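A small sketch of this formula in Python, generalized to any number of coordinates:

```python
import math

def euclidean_distance(p1, p2):
    # d = sqrt((x2 - x1)^2 + (y2 - y1)^2), extended to any dimension
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

print(euclidean_distance((1, 2), (4, 6)))  # 5.0
```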

K-Nearest Neighbor (KNN) Algorithm for Machine Learning

Why do we need a K-NN Algorithm?

1. Suppose there are two categories, Category A and Category B, and we have a new data point x1.
2. In which of these categories will this data point lie?
3. To solve this type of problem, we need the K-NN algorithm.
4. With the help of K-NN, we can easily identify the category or class of a particular data point.

How does K-NN work?

The working of K-NN can be explained on the basis of the following algorithm:
 Step 1: Select the number K of neighbours.
 Step 2: Calculate the Euclidean distance from the new data point to the training data points.
 Step 3: Take the K nearest neighbours, i.e., the K points with the smallest Euclidean distance.
 Step 4: Among these K neighbours, count the number of data points in each category.
 Step 5: Assign the new data point to the category for which the number of neighbours is maximum.
 Step 6: Our model is ready.

Example: We have a new data point and we need to put it in the required category.

Firstly, we will choose the number of neighbours: say K = 5.
Next, we will calculate the Euclidean distance from the new point to the existing data points.
If 3 of the 5 nearest neighbours are from Category A and 2 are from Category B, the new data point must belong to Category A.
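A compact sketch of these steps on a toy 2-D dataset (the points and labels are invented for illustration):

```python
import math
from collections import Counter

# Invented toy training data: (x, y) point -> category
train = [((1, 1), "A"), ((2, 1), "A"), ((1.5, 2), "A"),
         ((5, 5), "B"), ((6, 5), "B"), ((5.5, 6), "B")]

def knn_predict(point, k=5):
    # Steps 2-3: sort training points by Euclidean distance, keep the k nearest
    dist = lambda p: math.sqrt(sum((a - b) ** 2 for a, b in zip(p, point)))
    nearest = sorted(train, key=lambda item: dist(item[0]))[:k]
    # Steps 4-5: majority vote among the k neighbours
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_predict((2, 2), k=3))  # "A"
```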

How to select the value of K in the K-NN Algorithm?

Below are some points to remember while selecting the value of K in the K-NN algorithm:

 There is no particular way to determine the best value of K, so we need to try several values and pick the best among them; one practical selection method is sketched below. The most commonly preferred value of K is 5.
 A very low value of K, such as K = 1 or K = 2, can be noisy and sensitive to outliers in the data.
 Larger values of K reduce noise, but too large a value can blur the boundary between categories.
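A common way to try several values of K in practice is cross-validation; a sketch using scikit-learn (a library the notes do not mention) on a built-in dataset:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Try several K values and keep the one with the best cross-validated accuracy
scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
          for k in range(1, 16)}
best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])
```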

Advantages of KNN Algorithm:

 It is simple to implement.
 It is robust to noisy training data.
 It can be effective when the training data is large.

Disadvantages of KNN Algorithm:

 We always need to determine the value of K, which may be complex at times.
 The computation cost is high because the distance to every training sample must be calculated.

Referred from: https://www.javatpoint.com/k-nearest-neighbor-algorithm-for-machine-learning


What is Machine Learning
In the real world, we are surrounded by humans who can learn everything from their experiences with their learning capability, and we have computers or machines which work on our instructions.

But can a machine also learn from experiences or past data like a human does? This is where machine learning comes in.

Machine learning is a subset of artificial intelligence that is mainly concerned with the development of algorithms which allow a computer to learn from data and past experiences on its own.

The term machine learning was first introduced by Arthur Samuel in 1959.

With the help of sample historical data, which is known as training data, machine learning algorithms build
a mathematical model that helps in making predictions or decisions without being explicitly programmed.

How does Machine Learning work

A machine learning system learns from historical data, builds prediction models, and, whenever it receives new data, predicts the output for it.

 The accuracy of the predicted output depends upon the amount of data: a larger amount of data helps to build a better model which predicts the output more accurately.
 Suppose we have a complex problem where we need to perform some predictions. Instead of writing code for it, we just need to feed the data to generic algorithms; with the help of these algorithms, the machine builds the logic from the data and predicts the output, as in the sketch below.
 Machine learning has changed our way of thinking about such problems.
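A minimal sketch of this feed-data-and-predict workflow, assuming scikit-learn and its bundled iris dataset (neither is named in the notes):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Historical (training) data -> model -> prediction on new data
X, y = load_iris(return_X_y=True)
X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)   # learn from historical data
print(model.predict(X_new[:5]))                          # predict output for new data
print(model.score(X_new, y_new))                         # accuracy on held-out data
```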
Machine learning is used in self-driving cars, cyber fraud detection, face recognition, and friend suggestion by
Facebook, etc.

Various top companies such as Netflix and Amazon have built machine learning models that use a vast amount of data to analyze user interest and recommend products accordingly.

Following are some key points which show the importance of Machine Learning:

 Rapid increase in the production of data
 Solving complex problems which are difficult for a human
 Decision making in various sectors, including finance
 Finding hidden patterns and extracting useful information from data

Classification of Machine Learning


At a broad level, machine learning can be classified into three types:

1. Supervised learning
2. Unsupervised learning
3. Reinforcement learning

1) Supervised Learning
 Supervised learning is a type of machine learning method in which we provide labeled sample data to train the system.
 The system creates a model using the sample data to understand the dataset and learn about each example.
 Once training and processing are done, we test the system by providing sample test data
 and check whether it predicts the correct output or not.

2) Unsupervised Learning
 Unsupervised learning is a learning method in which a machine learns without any supervision.
 The training is provided to the machine with a set of data that has not been labeled, classified, or categorized,
 and the algorithm needs to act on that data without any supervision, as in the sketch below.
 The goal of unsupervised learning is to restructure the input data into new features or groups of objects with similar patterns.
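A small sketch of learning without supervision, assuming scikit-learn's KMeans on invented, unlabeled points:

```python
from sklearn.cluster import KMeans
import numpy as np

# Unlabeled input data: no target values are provided
X = np.array([[1, 1], [1.5, 2], [2, 1], [8, 8], [8.5, 9], [9, 8]])

# The algorithm groups the points into clusters on its own
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1]: two groups of similar points
```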

History of Machine Learning - How did it Evolve?


Year 1950:
 Alan Turing developed the Turing Test during this year.
 The Turing Test checks whether a machine is able to think like a human.
 While the technique is quite primitive by today's standards, its philosophical implications have had a big impact on the development of AI.
 The Turing Test is defined as a game of questions and answers played by a human and a machine.
 Chatbots are the best example.

Year 1957: Perceptron


 The first ever neural network, the perceptron, was designed this year by Frank Rosenblatt.
 Neural networks underpin a very popular subset of machine learning called deep learning.
 It is one of the most promising machine learning tools we have at our disposal today.

Year 1960: (Language Processing program)


 MIT developed a natural language processing program to act as a therapist.
 The program was called ELIZA, and it was quite a success experimentally.
 It was a key milestone for the development of NLP (Natural Language Processing).
 NLP is again a subset of machine learning and is widely used today.
Year 1967:
 The advent of the nearest neighbour algorithm, prominently used in search and approximation.
 K-Nearest Neighbour (KNN) is one of the most popular machine learning algorithms.

Year 1970:
 Backpropagation is an algorithm used extensively in deep learning; it dynamically adjusts a deep neural network so that it can effectively self-correct.
 The scientific paper behind backpropagation was published by Seppo Linnainmaa, though at that time it was called Automatic Differentiation (AD).

Year 1980:
 Kunihiko Fukushima successfully built a multilayered artificial neural network, the neocognitron.
 It acted as a platform for the development of convolutional neural networks down the line.
Year 1981:
Gerald Dejong introduced a new way to teach machines, which he called Explanation-Based Learning. This was a very early machine learning implementation: it processed data to create a set of rules, which is another way of saying that it created an algorithm.

Year 1989:
 Reinforcement learning is finally realized: the Q-learning algorithm is developed by Christopher Watkins, making it possible to teach a machine to play risk-and-reward games.

Year 1995:
 Rise of two very important algorithms in the machine learning space: the Random Forest algorithm and Support Vector Machines.


Year 1997/98: (Machine learning algorithms for handwriting recognition)

 LSTM was introduced by Sepp Hochreiter and Jürgen Schmidhuber; LSTM revolutionized NLP research and applications.
 Along with this, the MNIST database was also developed, courtesy of a team led by Yann LeCun.
 The MNIST database is regarded as a benchmark for training machine learning algorithms for handwriting recognition.

Year 2009:
 ImageNet is created, facilitating computer vision research by giving researchers access to a vast image database categorized by objects and features.
 It was a project initiated by Fei-Fei Li of Stanford University.

Year 2010 till now:

 Google Brain and Facebook's DeepFace are now revolutionizing machine learning.
 Google Brain's network famously learned from YouTube videos to correctly predict or identify which videos contain a cat.
 On the other hand, Facebook's DeepFace can now identify people with an accuracy figure exceeding 97%.

Referred from: https://www.zeolearn.com/magazine/what-is-machine-learning

Applications of Machine learning


Machine learning is a buzzword in today's technology, and it is growing very rapidly day by day. We use machine learning in our daily life even without knowing it, for example in Google Maps, Google Assistant, Alexa, etc. Below are some of the most trending real-world applications of machine learning:

1. Image Recognition:
 Image recognition is one of the most common applications of machine learning.
 It is used to identify objects, persons, and places in digital images.
 A popular use case of image recognition and face detection is automatic friend-tagging suggestion:
 Facebook provides us a feature of auto friend-tagging suggestion.
 Whenever we upload a photo with our Facebook friends, we automatically get a tagging suggestion with names, and the technology behind this is machine learning's face detection and recognition algorithm.
 It is based on the Facebook project named "DeepFace," which is responsible for face recognition and person identification in pictures.

2. Speech Recognition
 While using Google we get an option of "Search by voice"; this comes under speech recognition, a popular application of machine learning.
 Speech recognition is the process of converting voice instructions into text; it is also known as "speech to text" or "computer speech recognition."
 At present, machine learning algorithms are widely used in various speech recognition applications.
 Google Assistant, Siri, Cortana, and Alexa use speech recognition technology to follow voice instructions.

3. Traffic prediction:
 If we want to visit a new place, we take help of Google Maps, which shows us the correct path with the
shortest route and predicts the traffic conditions.
 It predicts the traffic conditions, such as whether traffic is clear, slow-moving, or heavily congested, with the help of two sources:
Real-time location of the vehicle from the Google Maps app and sensors

The average time taken on past days at the same time.

 Everyone who is using Google Maps is helping this app to become better.
 It takes information from the user and sends it back to its database to improve performance.

4. Product recommendations:
 Machine learning is widely used by various e-commerce and entertainment companies such
as Amazon, Netflix, etc., for product recommendation to the user.
 Whenever we search for some product on Amazon, we start getting advertisements for the same product while surfing the internet on the same browser, and this is because of machine learning.
 Google understands user interest using various machine learning algorithms and suggests products as per customer interest.
 Similarly, when we use Netflix, we find recommendations for entertainment series, movies, etc., and this is also done with the help of machine learning.

5. Self-driving cars:
 One of the most exciting applications of machine learning is self-driving cars. Machine learning plays a significant
role in self-driving cars.
 Tesla, a popular car manufacturing company, is working on self-driving cars.
 It uses an unsupervised learning method to train the car models to detect people and objects while driving.

6. Email Spam and Malware Filtering:

Whenever we receive a new email, it is filtered automatically as important, normal, or spam. We always receive important mail in our inbox, marked with the important symbol, and spam emails in our spam box; the technology behind this is machine learning.

Below are some spam filters used by Gmail:

o Content Filter
o Header filter
o General blacklists filter
o Rules-based filters
o Permission filters

Some machine learning algorithms such as Multi-Layer Perceptron, Decision tree, and Naïve Bayes
classifier are used for email spam filtering and malware detection.

7. Virtual Personal Assistant:


 We have various virtual personal assistants such as Google Assistant, Alexa, Cortana, and Siri.
 As the name suggests, they help us find information using our voice instructions.
 These assistants can help us in various ways just by our voice instructions, such as playing music, calling someone, opening an email, scheduling an appointment, etc.
 These virtual assistants use machine learning algorithms as an important component.

8. Online Fraud Detection:

Machine learning is making our online transactions safe and secure by detecting fraudulent transactions. Whenever we perform an online transaction, there are various ways a fraudulent transaction can take place, such as fake accounts, fake IDs, and stealing money in the middle of a transaction.

To detect this, a feed-forward neural network helps us by checking whether a transaction is genuine or fraudulent.

For each genuine transaction, the output is converted into some hash values, and these values become the input for the next round. Each genuine transaction has a specific pattern which changes for a fraudulent transaction; hence the network detects it and makes our online transactions more secure.

9. Stock Market trading:

Machine learning is widely used in stock market trading. In the stock market there is always a risk of ups and downs in share prices, so machine learning's long short-term memory (LSTM) neural network is used for the prediction of stock market trends.

10. Medical Diagnosis:


In medical science, machine learning is used for disease diagnosis. With this, medical technology is growing very fast and is able to build 3D models that can predict the exact position of lesions in the brain.

It helps in finding brain tumors and other brain-related diseases easily.

11. Automatic Language Translation:

Nowadays, if we visit a new place and are not aware of the language, it is not a problem at all; machine learning helps us here as well, by converting the text into languages we know.

Google's GNMT (Google Neural Machine Translation) provides this feature. It is a neural machine translation system that translates text into our familiar language; this is called automatic machine translation.
Difference between Supervised and Unsupervised Learning
Supervised and Unsupervised learning are the two techniques of machine learning.
But both the techniques are used in different scenarios and with different datasets.

Supervised Machine Learning:


Supervised learning is a machine learning method in which models are trained using labeled data. In supervised learning, models need to find the mapping function that maps the input variable (X) to the output variable (Y).

Supervised learning needs supervision to train the model, similar to how a student learns in the presence of a teacher.
Supervised learning can be used for two types of problems:
classification and regression.

Unsupervised Machine Learning:


Unsupervised learning is another machine learning method, in which patterns are inferred from unlabeled input data.

The goal of unsupervised learning is to find the structure and patterns in the input data.

Unsupervised learning does not need any supervision; instead, it finds patterns in the data on its own.

Unsupervised learning can be used for two types of problems: clustering and association.

Example:

 To understand unsupervised learning, consider a set of fruit images given as input.
 Unlike supervised learning, here we will not provide any supervision to the model.
 We will just provide the input dataset to the model and allow the model to find patterns in the data.
 With the help of a suitable algorithm, the model will train itself and divide the fruits into groups according to their similar patterns.

Supervised Learning vs Unsupervised Learning:

 Supervised learning algorithms are trained using labeled data; unsupervised learning algorithms are trained using unlabeled data.
 A supervised learning model takes direct feedback to check whether it is predicting the correct output; an unsupervised learning model does not take any feedback.
 A supervised learning model predicts the output; an unsupervised learning model finds the hidden patterns in data.
 In supervised learning, input data is provided to the model along with the output; in unsupervised learning, only input data is provided to the model.
 The goal of supervised learning is to train the model so that it can predict the output when given new data; the goal of unsupervised learning is to find hidden patterns and useful insights from the unknown dataset.
 Supervised learning needs supervision to train the model; unsupervised learning does not need any supervision.
 Supervised learning can be categorized into classification and regression problems; unsupervised learning can be classified into clustering and association problems.
 Supervised learning can be used for cases where we know the inputs as well as the corresponding outputs; unsupervised learning can be used for cases where we have only input data and no corresponding output data.
 A supervised learning model produces an accurate result; an unsupervised learning model may give a less accurate result in comparison.
 Supervised learning is not close to true artificial intelligence, as we first train the model for each datum and only then can it predict the correct output; unsupervised learning is closer to true artificial intelligence, as it learns similarly to how a child learns daily routine things from experience.
 Supervised learning includes algorithms such as linear regression, logistic regression, Support Vector Machines, multi-class classification, decision trees, and Bayesian logic; unsupervised learning includes algorithms such as clustering, KNN, and the Apriori algorithm.
