
Unit 5

Machine Learning
A rapidly developing field of technology, machine learning allows computers to
learn automatically from previous data. Machine learning employs a variety of
algorithms to build mathematical models and make predictions based on historical data
or information. It is currently used for a variety of tasks, including speech recognition,
email filtering, auto-tagging on Facebook, recommender systems, and image recognition.

In this machine learning tutorial, you will learn about the main methods of machine
learning: supervised learning, unsupervised learning, and reinforcement learning.
Regression and classification models, clustering techniques, hidden Markov models,
and various sequential models will all be covered.

What is Machine Learning


In the real world, we are surrounded by humans who can learn from their experiences,
and we have computers or machines that simply follow our instructions. But can a
machine also learn from experiences or past data the way a human does? This is where
machine learning comes in.

Introduction to Machine Learning


A subset of artificial intelligence known as machine learning focuses primarily on the
creation of algorithms that enable a computer to independently learn from data and previous
experiences. Arthur Samuel first used the term "machine learning" in 1959. It could be
summarized as follows:

Without being explicitly programmed, machine learning enables a machine to
automatically learn from data, improve performance from experiences, and predict things.

Machine learning algorithms create a mathematical model that, without being explicitly
programmed, aids in making predictions or decisions with the help of sample historical
data, known as training data. For the purpose of developing predictive models, machine
learning brings together statistics and computer science. Algorithms that learn from
historical data are either constructed or utilized in machine learning, and performance
generally improves as we supply more data.

A machine is said to learn if its performance improves as it gains more data.

How does Machine Learning work


A machine learning system builds prediction models from previous data and predicts the
output for new data whenever it receives it. The more data available, the better the
model that can be built, and the more accurate the predicted output becomes.

Let's say we have a complex problem in which we need to make predictions. Instead of
writing code by hand, we just feed the data to generic algorithms, which build the logic
from the data and predict the output. Machine learning has changed our perspective
on such problems.
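As a concrete illustration of this feed-data-then-predict workflow, here is a minimal
sketch in Python using scikit-learn; the library choice and the iris dataset are
assumptions for illustration, since the text names no specific tool:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Historical data: measurements (inputs) paired with known flower species (outputs).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "generic algorithm": it builds the logic from the data during fit().
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Predict the output for new, unseen data.
print(model.predict(X_test[:5]))
print("accuracy:", model.score(X_test, y_test))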

Features of Machine Learning:


o Machine learning uses data to detect various patterns in a given dataset.
o It can learn from past data and improve automatically.
o It is a data-driven technology.
o Machine learning is similar to data mining, as both deal with huge amounts of
data.

Need for Machine Learning


The demand for machine learning is steadily rising, because it can perform tasks that
are too complex for a person to implement directly. Humans cannot manually sift through
vast amounts of data, so we need computer systems to do it for us, and this is where
machine learning comes in to simplify our lives.

We can train machine learning algorithms by providing them with a large amount of data
and letting them automatically explore the data, build models, and predict the required
output. A cost function can be used to measure how well a machine learning algorithm
performs on the available data. Machine learning can save both time and money.
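For illustration, a common cost function is the mean squared error; the following tiny
sketch (the numbers are invented) shows how it scores predictions against true outputs:

# Mean squared error: the average squared gap between true and predicted values.
def mean_squared_error(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# A lower score means the model fits the data better.
print(mean_squared_error([3.0, 5.0, 7.0], [2.5, 5.0, 8.0]))  # 0.4166...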

The significance of machine learning can easily be seen in its use cases. It is
currently used in self-driving cars, cyber-fraud detection, face recognition, friend
suggestions on Facebook, and so on. Top companies such as Netflix and Amazon have built
machine learning models that use vast amounts of data to analyze user interest and
recommend products accordingly.

Following are some key points which show the importance of Machine Learning:

o Rapid increase in the production of data
o Solving complex problems that are difficult for a human
o Decision-making in various sectors, including finance
o Finding hidden patterns and extracting useful information from data

Classification of Machine Learning


At a broad level, machine learning can be classified into three types:

1. Supervised learning
2. Unsupervised learning
3. Reinforcement learning
1) Supervised Learning
In supervised learning, sample labeled data are provided to the machine learning system for
training, and the system then predicts the output based on the training data.

The system uses labeled data to build a model that understands the datasets and learns about
each one. After the training and processing are done, we test the model with sample data to
see if it can accurately predict the output.

The objective of supervised learning is to map input data to output data. Supervised
learning is based on supervision, much as a student learns under the guidance of a
teacher. Spam filtering is an example of supervised learning.

Supervised learning can be grouped further into two categories of algorithms (a brief
sketch of each follows the list):

o Classification
o Regression
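Here is a minimal, hedged sketch of both categories in Python with scikit-learn; the
toy data, features, and labels are invented purely for illustration:

from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

# Classification: labeled examples mapped to discrete classes ("spam"/"ham").
X_cls = [[0, 1], [1, 1], [5, 0], [6, 0]]      # invented feature vectors
y_cls = ["ham", "ham", "spam", "spam"]        # labels supplied for training
clf = DecisionTreeClassifier().fit(X_cls, y_cls)
print(clf.predict([[5, 1]]))                  # -> ['spam']

# Regression: labeled examples mapped to a continuous value.
X_reg = [[1], [2], [3], [4]]
y_reg = [2.0, 4.1, 5.9, 8.2]                  # roughly y = 2x
reg = LinearRegression().fit(X_reg, y_reg)
print(reg.predict([[5]]))                     # close to 10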

2) Unsupervised Learning
Unsupervised learning is a learning method in which a machine learns without any
supervision.

The training is provided to the machine with the set of data that has not been labeled,
classified, or categorized, and the algorithm needs to act on that data without any supervision.
The goal of unsupervised learning is to restructure the input data into new features or a group
of objects with similar patterns.
In unsupervised learning, we don't have a predetermined result; the machine tries to
find useful insights from huge amounts of data. It can be further classified into two
categories of algorithms (a clustering sketch follows the list):

o Clustering
o Association
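Below is a minimal sketch of clustering, the first category above, using scikit-learn's
KMeans on invented, unlabeled points:

from sklearn.cluster import KMeans

# Unlabeled points: two loose groups, but no labels are provided.
X = [[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]]

# The algorithm must discover the grouping on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                     # cluster assignment discovered per point
print(km.predict([[0, 0], [12, 3]]))  # assign new points to the found clusters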

3) Reinforcement Learning
Reinforcement learning is a feedback-based learning method in which a learning agent
gets a reward for each right action and a penalty for each wrong one. The agent learns
automatically from this feedback and improves its performance. In reinforcement
learning, the agent interacts with the environment and explores it. The goal of the
agent is to collect the most reward points, and in doing so it improves its performance.

A robotic dog that automatically learns the movement of its limbs is an example of
reinforcement learning.
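As a sketch of the reward/penalty feedback loop described above, the following toy
Q-learning program teaches an agent to walk right along a five-state corridor; the
environment, rewards, and hyperparameters are all invented for illustration:

import random

N_STATES = 5                            # states 0..4; state 4 is the goal
ACTIONS = [0, 1]                        # 0 = move left, 1 = move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1       # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise pick the action with the best Q-value.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[s][x])
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N_STATES - 1 else -0.01   # reward at goal, tiny penalty otherwise
        # Q-learning update: move Q[s][a] toward the reward plus discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned policy should be "move right" (action 1) in every non-goal state.
print([max(ACTIONS, key=lambda x: Q[s][x]) for s in range(N_STATES - 1)])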

History of Machine Learning


Some 40-50 years ago, machine learning was science fiction; today it is part of our
daily life, making things easier, from self-driving cars to Amazon's virtual assistant
"Alexa". The idea behind machine learning, however, is quite old and has a long
history. Below are some milestones in the history of machine learning:
The early history of Machine Learning (Pre-1940):

o 1834: Charles Babbage, the father of the computer, conceived a device that could be
programmed with punch cards. Although the machine was never built, all modern
computers rely on its logical structure.
o 1936: Alan Turing published a theory of how a machine could determine and execute a
set of instructions.

The era of stored program computers:

o 1943: Warren McCulloch and Walter Pitts modeled a neural network with an electrical
circuit. Around 1950, scientists began applying this idea to analyze how human
neurons might work.
o 1945: ENIAC, the first electronic general-purpose computer, was completed.
Stored-program computers such as EDSAC (1949) and EDVAC followed.

Computing machinery and intelligence:

o 1950: Alan Turing published a seminal paper, "Computing Machinery and
Intelligence," on the topic of artificial intelligence. In the paper, he asked,
"Can machines think?"

Machine intelligence in Games:

o 1952: Arthur Samuel, a pioneer of machine learning, created a program that helped
an IBM computer play checkers. The program improved the more it played.
o 1959: In 1959, the term "Machine Learning" was first coined by Arthur Samuel.

The first "AI" winter:

o The period from 1974 to 1980 was a tough time for AI and ML researchers; it became
known as the first AI winter.
o During this period, machine translation efforts failed, public interest in AI fell,
and government funding for research was cut.

Machine Learning from theory to reality


o 1959: In 1959, the first neural network was applied to a real-world problem to remove
echoes over phone lines using an adaptive filter.
o 1985: In 1985, Terry Sejnowski and Charles Rosenberg invented a neural
network NETtalk, which was able to teach itself how to correctly pronounce 20,000
words in one week.
o 1997: IBM's Deep Blue computer won a chess match against world champion Garry
Kasparov, becoming the first computer to defeat a reigning human world chess
champion.

Machine Learning in the 21st century


2006:

o Geoffrey Hinton and his group presented the idea of deep learning using deep belief
networks.
o The Elastic Compute Cloud (EC2) was launched by Amazon to provide scalable
computing resources that made it easier to create and implement machine learning
models.

2007:

o The Netflix Prize competition began, tasking participants with improving the
accuracy of Netflix's recommendation algorithm.
o Reinforcement learning made notable progress when a group of researchers used it to
train a computer to play backgammon at a high level.


2008:

o Google released the Google Prediction API, a cloud-based service that allowed
developers to integrate machine learning into their applications.
o Restricted Boltzmann Machines (RBMs), a kind of generative neural network, gained
attention for their ability to model complex data distributions.

2009:

o Deep learning gained ground as researchers demonstrated its effectiveness in various
tasks, including speech recognition and image classification.
o The term "Big Data" gained popularity, highlighting the challenges and
opportunities associated with handling huge datasets.

2010:

o The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) was introduced,
driving advances in computer vision and prompting the development of deep
convolutional neural networks (CNNs).

2011:

o IBM's Watson defeated human champions on Jeopardy!, demonstrating the potential of
question-answering systems and natural language processing.

2012:

o AlexNet, a deep CNN created by Alex Krizhevsky, won the ILSVRC, substantially
improving image-classification accuracy and establishing deep learning as a dominant
approach in computer vision.
o Google's Brain project, led by Andrew Ng and Jeff Dean, used deep learning to train
a neural network to recognize cats in unlabeled YouTube videos.

2013:

o Ian Goodfellow introduced generative adversarial networks (GANs), which made it
possible to create realistic synthetic data (the GAN paper itself appeared in 2014).
o Google acquired the startup DeepMind Technologies, which focused on deep learning
and artificial intelligence.

2014:

o Facebook presented the DeepFace system, which achieved near-human accuracy in
facial recognition.
o AlphaGo, a Go-playing program created by Google's DeepMind, went on to defeat
world-champion players (its landmark matches took place in 2015-2016),
demonstrating the potential of reinforcement learning in challenging games.

2015:

o Microsoft released the Cognitive Toolkit (formerly known as CNTK), an open-source
deep learning library.
o The introduction of attention mechanisms enhanced the performance of
sequence-to-sequence models in tasks like machine translation.

2016:

o The goal of explainable AI, which focuses on making machine learning models easier
to understand, received growing attention.
o Google's DeepMind created AlphaGo Zero, which achieved superhuman Go play without
any human game data, using only reinforcement learning (the work was published in
2017).

2017:

o Transfer learning gained prominence, allowing pretrained models to be reused for
different tasks with limited data.
o Generative models such as variational autoencoders (VAEs) and Wasserstein GANs
enabled better synthesis and generation of complex data.

These are only some of the notable advances and milestones in machine learning during
this period. The field has continued to advance rapidly beyond 2017, with new
breakthroughs, techniques, and applications emerging.

Machine Learning at present:


The field of machine learning has made significant strides in recent years, and its
applications are numerous, including self-driving cars, Amazon Alexa, chatbots, and
recommender systems. It incorporates clustering, classification, decision-tree and SVM
algorithms, and reinforcement learning, as well as unsupervised and supervised learning.

Modern machine learning models can be used to make various predictions, including
weather forecasting, disease prediction, and stock market analysis.

What is Distributed Computing?



Distributed computing refers to a system where processing and data storage are
distributed across multiple devices or systems, rather than being handled by a single
central device. In a distributed system, each device or system has its own processing
capabilities and may also store and manage its own data. These devices or systems
work together to perform tasks and share resources, with no single device serving as
the central hub.
One example of a distributed computing system is a cloud computing system, where
resources such as computing power, storage, and networking are delivered over the
Internet and accessed on demand. In this type of system, users can access and use
shared resources through a web browser or other client software.

Components
There are several key components of a Distributed Computing System:
o Devices or Systems: The devices or systems in a distributed system have their own
processing capabilities and may also store and manage their own data.
o Network: The network connects the devices or systems in the distributed system,
allowing them to communicate and exchange data.
o Resource Management: Distributed systems often have some type of resource
management system in place to allocate and manage shared resources such as
computing power, storage, and networking.
The architecture of a Distributed Computing System is typically a Peer-to-Peer
Architecture, where devices or systems can act as both clients and servers and
communicate directly with each other.
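To make the peer-to-peer idea concrete, here is a minimal Python sketch of a single
peer that acts as both server and client over a local socket; the port and message
format are invented, and a real system would add peer discovery, retries, and security:

import socket
import threading
import time

def serve(port):
    # Server role: accept one request from a peer and answer it.
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", port))
        srv.listen()
        conn, _ = srv.accept()
        with conn:
            conn.sendall(b"pong: " + conn.recv(1024))

def ask(port, msg):
    # Client role: send a request to a peer and return its reply.
    with socket.socket() as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(msg)
        return cli.recv(1024)

# One process playing both roles, as in a peer-to-peer architecture.
threading.Thread(target=serve, args=(5001,), daemon=True).start()
time.sleep(0.2)                    # give the server thread time to start listening
print(ask(5001, b"ping"))          # b'pong: ping'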

Characteristics
There are several characteristics that define a Distributed Computing System:
o Multiple Devices or Systems: Processing and data storage are distributed across
multiple devices or systems.
o Peer-to-Peer Architecture: Devices or systems in a distributed system can act as
both clients and servers, as they can both request and provide services to other
devices or systems in the network.
o Shared Resources: Resources such as computing power, storage, and networking are
shared among the devices or systems in the network.
o Horizontal Scaling: Scaling a distributed computing system typically involves
adding more devices or systems to the network to increase processing and storage
capacity, whether by upgrading hardware or by adding more machines to the network.
Advantages and Disadvantages
Advantages of a Distributed Computing System:
o Scalability: Distributed systems are generally more scalable than centralized
systems, as new devices or systems can easily be added to the network to increase
processing and storage capacity.
o Reliability: Distributed systems are often more reliable than centralized systems,
as they can continue to operate even if one device or system fails.
o Flexibility: Distributed systems are generally more flexible than centralized
systems, as they can be configured and reconfigured more easily to meet changing
computing needs.
Limitations of a Distributed Computing System:
o Complexity: Distributed systems can be more complex than centralized systems, as
they involve multiple devices or systems that need to be coordinated and managed.
o Security: It can be more challenging to secure a distributed system, as security
measures must be implemented on each device or system to ensure the security of the
entire system.
o Performance: Distributed systems may not offer the same level of performance as
centralized systems, as processing and data storage are spread across multiple
devices or systems.
Applications
Distributed Computing Systems have a number of applications, including:
o Cloud Computing: Cloud computing systems are a type of distributed computing system
used to deliver resources such as computing power, storage, and networking over the
Internet.
o Peer-to-Peer Networks: Peer-to-peer networks are a type of distributed computing
system used to share resources such as files and computing power among users.
o Distributed Architectures: Many modern computing systems, such as microservices
architectures, use distributed architectures to spread processing and data storage
across multiple devices or systems.

Difference between Supervised and Unsupervised Learning


Navigating the realm of machine learning, many grapple with understanding the key
disparities between supervised and unsupervised learning. This article aims to
elucidate these differences, addressing questions on input data, computational
complexities, real-time analysis, and the reliability of results.
Supervised learning
When an algorithm is trained on a labelled dataset—that is, when the input data used
for training is paired with corresponding output labels—it is referred to as supervised
learning. Supervised learning aims to find a mapping or relationship between the input
variables and the desired output, which enables the algorithm to produce precise
predictions or classifications when faced with fresh, unobserved data.
During a supervised learning process, the algorithm is given a training set of
input-output pairs. For every example in the training set, the algorithm iteratively
adjusts its parameters to minimize the discrepancy between its predicted output and
the actual output (the ground truth). This procedure continues until the algorithm
performs at an acceptable level; a minimal sketch of this loop follows the list below.
Supervised learning can be divided into two main types:
1. Regression: In regression problems, the goal is to predict a continuous
output or value. For example, predicting the price of a house based on its
features, such as the number of bedrooms, square footage, and location.
2. Classification: In classification problems, the goal is to assign input data to
one of several predefined categories or classes. Examples include spam
email detection, image classification (e.g., identifying whether an image
contains a cat or a dog), and sentiment analysis.
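The following is a minimal sketch of the iterative parameter-adjustment loop described
above, applied to a one-parameter regression problem; the data points and learning rate
are invented for illustration:

# One parameter w, fit to invented data that follows roughly y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]          # the "ground truth" outputs
w, lr = 0.0, 0.01                  # initial parameter and learning rate

for step in range(500):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad                 # adjust the parameter to shrink the discrepancy

print(round(w, 2))                 # about 2.01: close to the true slope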
Why supervised learning?
The basic aim is to approximate the mapping function (mentioned above) so well that
when new input data (x) arrives, the corresponding output variable (y) can be
predicted. It is called supervised learning because the process of learning (from the
training dataset) can be thought of as a teacher supervising the entire learning
process. The "learning algorithm" iteratively makes predictions on the training data
and is corrected by the "teacher", and learning stops when the algorithm achieves an
acceptable level of performance (or the desired accuracy).
Supervised Learning Example
Suppose there is a basket filled with fresh fruits, and the task is to arrange fruits
of the same type in one place. Suppose the fruits are apple, banana, cherry, and grape,
and suppose one already knows from previous work (or experience) the shape of every
fruit in the basket, so it is easy to arrange the same type of fruits in one place.
Here, the previous work is called training data in data mining terminology, so the
learner learns from the training data. This works because there is a response variable
y that says: if a fruit has such-and-such features, then it is a grape, and similarly
for every other fruit. This kind of information is deciphered from the data used to
train the model. This type of learning is called Supervised Learning, and such
problems fall under classical Classification Tasks.
Unsupervised Learning
Unsupervised learning is a type of machine learning where the algorithm is given
input data without explicit instructions on what to do with it. In unsupervised learning,
the algorithm tries to find patterns, structures, or relationships in the data without the
guidance of labelled output.
The main goal of unsupervised learning is often to explore the inherent structure
within a set of data points. This can involve identifying clusters of similar data points,
detecting outliers, reducing the dimensionality of the data, or discovering patterns and
associations.
There are several common types of unsupervised learning techniques (a
dimensionality-reduction sketch follows the list):
1. Clustering: Clustering algorithms aim to group similar data points into
clusters based on some similarity metric. K-means clustering and
hierarchical clustering are examples of unsupervised clustering techniques.
2. Dimensionality Reduction: These techniques aim to reduce the number of
features (or dimensions) in the data while preserving its essential
information. Principal Component Analysis (PCA) and t-distributed
Stochastic Neighbor Embedding (t-SNE) are examples of dimensionality
reduction methods.
3. Association: Association rule learning is used to discover interesting
relationships or associations between variables in large datasets. The
Apriori algorithm is a well-known example used for association rule
learning.
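Here is a minimal sketch of dimensionality reduction with PCA, the second technique
above; the synthetic five-feature dataset (built from two underlying factors) is
invented for illustration, with scikit-learn assumed as the library:

import numpy as np
from sklearn.decomposition import PCA

# Five observed features generated from only two underlying factors.
rng = np.random.default_rng(0)
factors = rng.normal(size=(100, 2))
X = np.hstack([factors, factors @ rng.normal(size=(2, 3))])

# PCA finds that two components capture essentially all of the variance.
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_.sum())  # close to 1.0
X_reduced = pca.transform(X)                # compressed 100 x 2 representation
print(X_reduced.shape)                      # (100, 2)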
Why Unsupervised Learning?
The main aim of unsupervised learning is to model the distribution of the data in
order to learn more about it. It is called unsupervised learning because there is no
correct answer and no teacher (unlike supervised learning). Algorithms are left to
their own devices to discover and present interesting structure in the data.
Unsupervised Learning example
Again, suppose there is a basket filled with fresh fruits, and the task is to arrange
fruits of the same type in one place. This time there is no prior information about
the fruits; it is the first time they are being seen. So how can similar fruits be
grouped without any prior knowledge? First, some physical characteristic of the fruit
is selected, say color. Then the fruits are arranged by color:
o RED COLOR GROUP: apples and cherries.
o GREEN COLOR GROUP: bananas and grapes.
Now take another physical characteristic, say size, and the groups become:
o RED COLOR AND BIG SIZE: apple.
o RED COLOR AND SMALL SIZE: cherries.
o GREEN COLOR AND BIG SIZE: bananas.
o GREEN COLOR AND SMALL SIZE: grapes.
The job is done: there was no need to know or learn anything beforehand, meaning no
training data and no response variable. This type of learning is known as
Unsupervised Learning.
Difference between Supervised and Unsupervised Learning
The distinction between supervised and unsupervised learning depends on whether the
learning algorithm uses pattern-class information. Supervised learning assumes the
availability of a teacher or supervisor who classifies the training examples, whereas
unsupervised learning must identify the pattern-class information as a part of the
learning process.
Supervised learning algorithms use information on the class membership of each
training instance. This allows them to detect pattern misclassifications and use them
as feedback. Unsupervised learning algorithms use unlabeled instances and process them
blindly or heuristically. Unsupervised learning algorithms often have higher
computational complexity and lower accuracy than supervised learning algorithms.

Supervised Learning vs. Unsupervised Learning:

o Input data: Supervised learning uses known, labeled data as input; unsupervised
learning uses unlabeled data as input.
o Computational complexity: Supervised learning has less computational complexity;
unsupervised learning is more computationally complex.
o Analysis: Supervised learning uses off-line analysis; unsupervised learning uses
real-time analysis of data.
o Number of classes: The number of classes is known in supervised learning and not
known in unsupervised learning.
o Accuracy of results: Supervised learning gives accurate and reliable results;
unsupervised learning gives moderately accurate and reliable results.
o Output data: The desired output is given in supervised learning; it is not given in
unsupervised learning.
o Model: In supervised learning it is not possible to learn larger and more complex
models than in unsupervised learning.
o Training data: In supervised learning, training data is used to infer the model; in
unsupervised learning, no labeled training data is used.
o Another name: Supervised learning is also called classification; unsupervised
learning is also called clustering.
o Test of model: A supervised model can be tested; an unsupervised model cannot.
o Example: Optical character recognition (supervised); finding a face in an image
(unsupervised).

Conclusion
In conclusion, the article unravels the intricate tapestry of supervised and
unsupervised learning, shedding light on their roles in data analysis. Whether
classifying known data or exploring uncharted territories, these methodologies play
crucial roles in shaping the landscape of artificial intelligence.
