Unit 5 Machine Learning
Machine Learning
Machine learning is a rapidly developing field of technology that allows computers to learn automatically from previous data. It employs a variety of algorithms to build mathematical models and make predictions based on historical data or information. It is currently used for a variety of tasks, including speech recognition, email filtering, auto-tagging on Facebook, recommender systems, and image recognition.
In this unit you will learn about the main methods of machine learning, including supervised learning, unsupervised learning, and reinforcement learning. Regression and classification models, clustering techniques, hidden Markov models, and various sequential models will all be covered.
Machine learning algorithms build a mathematical model from sample historical data, known as training data, that helps in making predictions or decisions without being explicitly programmed. Machine learning brings together statistics and computer science for the purpose of developing predictive models. In machine learning, algorithms that learn from historical data are either constructed or reused, and performance rises in proportion to the quantity of information we provide.
A machine can learn if it can gain more data to improve its performance.
Let's say we have a complex problem in which we need to make predictions. Instead of writing code by hand, we just feed the data to generic algorithms, which build the logic from the data and predict the output. Machine learning has changed our perspective on such problems. In outline, a machine learning algorithm takes in historical data, builds a logical model from it, and uses the model to predict the output for new data.
We can train machine learning algorithms by providing them with a large amount of data and allowing them to automatically explore the data, build models, and predict the required output. A cost function can be used to measure how well the model fits the data, and hence how well the algorithm is performing. Using machine learning can save both time and money.
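To make this workflow concrete, here is a minimal sketch in Python. The use of scikit-learn, the toy study-hours dataset, and the choice of linear regression are all illustrative assumptions; the point is only the feed-data, build-model, predict-output loop and the mean-squared-error cost mentioned above.

# A minimal sketch of the train/predict workflow described above,
# using scikit-learn (an assumption; any ML library would do).
import numpy as np
from sklearn.linear_model import LinearRegression

# Historical (training) data: hours studied -> exam score (toy numbers).
X_train = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y_train = np.array([52.0, 57.0, 61.0, 68.0, 75.0])

# The generic algorithm builds the logic (a fitted model) from the data.
model = LinearRegression().fit(X_train, y_train)

# Predict the output for an unseen input.
print(model.predict(np.array([[6.0]])))  # predicted score for 6 hours

# A simple cost function (mean squared error) measures how well the
# model fits the training data, as mentioned in the text.
predictions = model.predict(X_train)
mse = np.mean((predictions - y_train) ** 2)
print(f"training MSE: {mse:.3f}")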
At a broad level, machine learning can be classified into three types:
1. Supervised learning
2. Unsupervised learning
3. Reinforcement learning
1) Supervised Learning
In supervised learning, sample labeled data are provided to the machine learning system for
training, and the system then predicts the output based on the training data.
The system uses labeled data to build a model that understands the datasets and learns about
each one. After the training and processing are done, we test the model with sample data to
see if it can accurately predict the output.
The objective of supervised learning is to map input data to output data. Supervised learning is based on supervision: it is similar to a student learning under the supervision of a teacher. Spam filtering is an example of supervised learning. Supervised learning can be grouped further into two categories of algorithms (a minimal sketch of the first follows the list):
o Classification
o Regression
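As an illustration of the classification category, here is a minimal sketch of the spam-filtering example mentioned above. The tiny message set and the scikit-learn bag-of-words plus Naive Bayes pipeline are assumptions chosen for brevity, not the only way to build such a filter.

# A minimal sketch of supervised classification: spam filtering.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Labeled training data: each message comes with its known output.
messages = ["win a free prize now", "meeting at noon tomorrow",
            "free offer click now", "lunch with the team"]
labels = ["spam", "ham", "spam", "ham"]

# Turn text into word-count features.
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(messages)

# The system builds a model from the labeled data.
model = MultinomialNB().fit(X_train, labels)

# Test the trained model on new sample data.
X_test = vectorizer.transform(["free prize offer"])
print(model.predict(X_test))  # expected: ['spam']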
2) Unsupervised Learning
Unsupervised learning is a learning method in which a machine learns without any
supervision.
The training is provided to the machine with the set of data that has not been labeled,
classified, or categorized, and the algorithm needs to act on that data without any supervision.
The goal of unsupervised learning is to restructure the input data into new features or a group
of objects with similar patterns.
In unsupervised learning, we don't have a predetermined result. The machine tries to find useful insights from the huge amount of data. It can be further classified into two categories of algorithms (a clustering sketch follows this list):
o Clustering
o Association
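Here is a minimal sketch of the clustering category in Python; the toy 2-D points and the choice of k-means with two clusters are assumptions made for illustration.

# A minimal sketch of unsupervised learning: the data carry no labels,
# and k-means groups them into clusters of similar points.
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: two loose groups of 2-D points.
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)           # cluster assignment for each point
print(kmeans.cluster_centers_)  # the discovered group centers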
3) Reinforcement Learning
Reinforcement learning is a feedback-based learning method in which a learning agent gets a reward for each right action and a penalty for each wrong action. The agent learns automatically from this feedback and improves its performance. In reinforcement learning, the agent interacts with and explores its environment. The agent's goal is to collect the most reward points, and it improves its performance accordingly.
A robotic dog that automatically learns the movements of its limbs is an example of reinforcement learning; a minimal sketch of the idea follows.
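Below is a minimal Q-learning sketch of this reward-and-penalty loop in plain Python. The one-dimensional corridor environment (move left or right to reach a goal), the reward values, and the learning parameters are all assumptions made to keep the example short; Q-learning is one common reinforcement-learning algorithm, not the only one.

# A minimal Q-learning sketch of the reward/penalty loop described above.
import random

N_STATES = 5                     # states 0..4; state 4 is the goal
ACTIONS = [0, 1]                 # 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # action-value table

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit the best known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        if action == 1:
            next_state = min(state + 1, N_STATES - 1)
        else:
            next_state = max(state - 1, 0)
        # Reward for reaching the goal, a small penalty otherwise.
        reward = 10.0 if next_state == N_STATES - 1 else -1.0
        # Q-learning update: learn from the feedback signal.
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action]
        )
        state = next_state

print(Q)  # learned values favour moving right, toward the reward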
History of Machine Learning
o 1834: Charles Babbage, the father of the computer, conceived a device that could be programmed with punch cards. The machine was never built, but all modern computers rely on its logical structure.
o 1936: Alan Turing described how a machine could determine and execute a set of instructions.
o 1940s: "ENIAC," the first electronic general-purpose computer, was completed in 1945. Stored-program computers such as EDSAC in 1949 and EDVAC in 1951 followed.
o 1943: A neural network modeled on the human brain was represented with an electrical circuit. In 1950, scientists started putting the idea to work and analyzed how human neurons might function.
o 1950: Alan Turing published his seminal paper "Computing Machinery and Intelligence" on the topic of artificial intelligence, in which he asked, "Can machines think?"
o 1952: Arthur Samuel, a pioneer of machine learning, created a program that enabled an IBM computer to play checkers. The program performed better the more it played.
o 1959: The term "Machine Learning" was first coined by Arthur Samuel.
o 1974-1980: This period was a tough time for AI and ML researchers and came to be called the AI winter.
o During this period, machine translation efforts failed, public interest in AI declined, and government funding for research was cut.
2006:
o Geoffrey Hinton and his group presented the idea of deep learning using deep belief networks.
o Amazon launched the Elastic Compute Cloud (EC2) to provide scalable computing resources, which made it easier to create and implement machine learning models.
2008:
o Google released the Google Prediction API, a cloud-based service that allowed developers to incorporate machine learning into their applications.
o Restricted Boltzmann Machines (RBMs), a type of generative neural network, gained attention for their ability to model complex data distributions.
2009:
o Deep learning gained ground as researchers demonstrated its effectiveness in various tasks, including speech recognition and image classification.
o The term "Big Data" gained popularity, highlighting the challenges and opportunities associated with handling huge datasets.
2016:
o Explainable AI, which focuses on making machine learning models easier to understand, began to receive attention.
2017:
o Google's DeepMind created AlphaGo Zero, which achieved superhuman Go-playing ability without any human game data, using only reinforcement learning.
Machine learning at present: Modern machine learning models can be used to make many kinds of predictions, including weather prediction, disease prediction, stock market analysis, and so on.
Distributed Computing Systems
Components
There are several key components of a Distributed Computing System:
Devices or Systems: The devices or systems in a distributed system have
their own processing capabilities and may also store and manage their own
data.
Network: The network connects the devices or systems in the distributed
system, allowing them to communicate and exchange data.
Resource Management: Distributed systems often have some type of
resource management system in place to allocate and manage shared
resources such as computing power, storage, and networking.
The architecture of a Distributed Computing System is typically a Peer-to-Peer
Architecture, where devices or systems can act as both clients and servers and
communicate directly with each other.
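To make the peer-to-peer idea concrete, here is a minimal Python sketch in which a single process plays both roles, serving requests on a socket while also acting as a client. The localhost address, the port number, and the one-shot echo protocol are assumptions for illustration only.

# A minimal sketch of a peer that is both server and client.
import socket
import threading
import time

def serve(port):
    # Server role: accept one connection and echo the request back.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", port))
        srv.listen()
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"echo: " + data)

def request(port, message):
    # Client role: connect to another peer and exchange a message.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(message)
        return cli.recv(1024)

# One peer serves on a port while another (here, the same process)
# queries it as a client; in a real network every device runs both roles.
threading.Thread(target=serve, args=(9101,), daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening
print(request(9101, b"hello from a peer"))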
Characteristics
There are several characteristics that define a Distributed Computing System:
Multiple Devices or Systems: Processing and data storage are distributed
across multiple devices or systems.
Peer-to-Peer Architecture: Devices or systems in a distributed system can
act as both clients and servers, as they can both request and provide services
to other devices or systems in the network.
Shared Resources: Resources such as computing power, storage, and
networking are shared among the devices or systems in the network.
Horizontal Scaling: Scaling a distributed computing system typically
involves adding more devices or systems to the network to increase
processing and storage capacity, rather than upgrading individual
machines.
Advantages and Disadvantages
Advantages of the Distributed Computing System are:
Scalability: Distributed systems are generally more scalable than
centralized systems, as they can easily add new devices or systems to the
network to increase processing and storage capacity.
Reliability: Distributed systems are often more reliable than centralized
systems, as they can continue to operate even if one device or system fails.
Flexibility: Distributed systems are generally more flexible than centralized
systems, as they can be configured and reconfigured more easily to meet
changing computing needs.
There are a few limitations of Distributed Computing Systems:
Complexity: Distributed systems can be more complex than centralized
systems, as they involve multiple devices or systems that need to be
coordinated and managed.
Security: It can be more challenging to secure a distributed system, as
security measures must be implemented on each device or system to ensure
the security of the entire system.
Performance: Distributed systems may not offer the same level of
performance as centralized systems, as processing and data storage
are distributed across multiple devices or systems.
Applications
Distributed Computing Systems have a number of applications, including:
Cloud Computing: Cloud Computing systems are a type of distributed
computing system that are used to deliver resources such as computing
power, storage, and networking over the Internet.
Peer-to-Peer Networks: Peer-to-Peer Networks are a type of distributed
computing system that is used to share resources such as files and
computing power among users.
Distributed Architectures: Many modern computing systems, such as
microservices architectures, use distributed architectures to distribute
processing and data storage across multiple devices or systems.
Supervised vs. Unsupervised Learning
o Model: In supervised learning, it is not possible to learn larger and more complex models than with unsupervised learning. In unsupervised learning, it is possible to learn larger and more complex models than with supervised learning.
o Training data: In supervised learning, training data is used to infer the model. In unsupervised learning, training data is not used.
o Test of model: In supervised learning, we can test our model. In unsupervised learning, we cannot test our model.
o Example: Optical character recognition (supervised); finding a face in an image (unsupervised).
Conclusion
In conclusion, this unit has examined supervised and unsupervised learning and their roles in data analysis. Whether classifying known, labeled data or exploring unlabeled data for hidden structure, these methodologies play crucial roles in shaping the landscape of artificial intelligence.