FAM Unit4

Machine learning is a subset of artificial intelligence that enables computers to learn from data and experiences, allowing them to detect patterns and improve automatically. The process involves a life cycle consisting of data gathering, preparation, analysis, model training, testing, and deployment. Its applications range from self-driving cars to financial decision-making, demonstrating its significance in solving complex problems and managing large datasets.


What is Machine Learning

In the real world, humans learn from their experiences, while computers and machines simply follow our instructions. But can a machine also learn from experience or past data the way a human does? This is where machine learning comes in.

Introduction to Machine Learning

A subset of artificial intelligence known as machine learning focuses primarily on the creation of algorithms that enable a computer to independently learn from data and previous experiences. Arthur Samuel first used the term "machine learning" in 1959.

How does Machine Learning work

A machine learning system learns from historical data, builds prediction models, and, whenever it receives new data, predicts the output for it.

Features of Machine Learning:

o Machine learning uses data to detect various patterns in a given dataset.
o It can learn from past data and improve automatically.
o It is a data-driven technology.
o Machine learning is similar to data mining, as both deal with huge amounts of data.

Importance of Machine Learning:

o Rapid increase in the production of data
o Solving complex problems that are difficult for a human
o Decision making in various sectors, including finance
o Finding hidden patterns and extracting useful information from data

History of Machine Learning


About 40 to 50 years ago, machine learning was science fiction; today it is part of our daily life, making things easier through everything from self-driving cars to Amazon's virtual assistant "Alexa". The idea behind machine learning, however, is old and has a long history. Some milestones in the history of machine learning are given below:

The early history of Machine Learning (Pre-1940):

o 1834: Charles Babbage, the father of the computer, conceived a device that could be programmed with punch cards. Although the machine was never built, all modern computers rely on its logical structure.
o 1936: Alan Turing published a theory of how a machine can determine and execute a set of instructions.

The era of stored program computers:

o 1943: Warren McCulloch and Walter Pitts modeled a neural network with an electrical circuit. Around 1950, scientists began putting the idea to work and analyzing how human neurons might function.
o 1945: ENIAC, the first electronic general-purpose computer, was completed; it had to be programmed manually by setting switches and plugging cables. Stored-program computers such as EDSAC (1949) and EDVAC (1951) followed.

Computer machinery and intelligence:

o 1950: Alan Turing published a seminal paper, "Computing Machinery and Intelligence," on the topic of artificial intelligence. In the paper, he asked, "Can machines think?"

Machine intelligence in Games:


o 1952: Arthur Samuel, a pioneer of machine learning, created a program that enabled an IBM computer to play checkers. The program performed better the more it played.
o 1959: The term "machine learning" was first coined by Arthur Samuel.

The first "AI" winter:

o The period from 1974 to 1980 was a tough time for AI and ML researchers, and it came to be called the AI winter.
o During this period, machine translation efforts failed and interest in AI declined, which led to reduced government funding for research.

Machine Learning from theory to reality

o 1959: The first neural network was applied to a real-world problem, using an adaptive filter to remove echoes over phone lines.
o 1985: Terry Sejnowski and Charles Rosenberg created NETtalk, a neural network that taught itself to correctly pronounce 20,000 words in one week.
o 1997: IBM's Deep Blue won a chess match against world champion Garry Kasparov, becoming the first computer to defeat a reigning world chess champion.

Machine Learning in the 21st century

2006:

o Geoffrey Hinton and his group presented the idea of deep learning using deep belief networks.
o Amazon launched the Elastic Compute Cloud (EC2), providing scalable computing resources that made it easier to build and deploy machine learning models.

2007:

o The Netflix Prize competition began, tasking participants with improving the accuracy of Netflix's recommendation algorithm.
o Reinforcement learning made notable progress when researchers used it to train a computer to play backgammon at an expert level.

2008:

o Google released the Google Prediction API, a cloud-based service that allowed developers to integrate machine learning into their applications.
o Restricted Boltzmann Machines (RBMs), a kind of generative neural network, gained attention for their ability to model complex data distributions.

2009:

o Deep learning gained ground as researchers demonstrated its effectiveness in various tasks, including speech recognition and image classification.
o The term "Big Data" gained popularity, highlighting the challenges and opportunities of handling huge datasets.

2010:

o The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) was introduced, driving progress in computer vision and prompting the development of deep convolutional neural networks (CNNs).

2011:

o IBM's Watson defeated human champions on Jeopardy!, demonstrating the potential of question-answering systems and natural language processing.
2012:

o AlexNet, a deep CNN created by Alex Krizhevsky, won the ILSVRC, significantly improving image classification accuracy and establishing deep learning as a dominant approach in computer vision.
o The Google Brain project, led by Andrew Ng and Jeff Dean, used deep learning to train a neural network to recognize cats from unlabeled YouTube videos.

2013:

o Ian Goodfellow introduced generative adversarial networks (GANs), which made it possible to create realistic synthetic data.
o Google later acquired the startup DeepMind Technologies, which focused on deep learning and artificial intelligence.

2014:

o Facebook introduced the DeepFace system, which achieved near-human accuracy in facial recognition.
o AlphaGo, a program created by Google DeepMind, later defeated a world champion Go player, demonstrating the potential of reinforcement learning in challenging games.

2015:

o Microsoft released the Cognitive Toolkit (CNTK), an open-source deep learning library.
o The introduction of attention mechanisms improved the performance of sequence-to-sequence models in tasks such as machine translation.

2016:

o Explainable AI, which focuses on making machine learning models easier to understand, began to receive attention.
o Google DeepMind created AlphaGo Zero, which achieved superhuman Go play without human game data, using only reinforcement learning.

2017:

o Transfer learning gained prominence, allowing pretrained models to be reused for different tasks with limited data.
o Generative models such as variational autoencoders (VAEs) and Wasserstein GANs enabled better synthesis and generation of complex data.

These are only some of the notable advances and milestones in machine learning during this period. The field continued to advance rapidly beyond 2017, with new breakthroughs, techniques, and applications emerging.

Machine Learning at present:


The field of machine learning has made significant strides in recent years, and its applications are numerous, including self-driving cars, Amazon Alexa, chatbots, and recommender systems. It encompasses supervised and unsupervised learning as well as reinforcement learning, using techniques such as clustering, classification, decision trees, and SVM algorithms.

Present-day machine learning models can be used to make many kinds of predictions, including weather forecasting, disease prediction, stock market analysis, and so on.
Machine Learning Life Cycle
Machine learning has given computer systems the ability to learn automatically without being explicitly programmed. But how does a machine learning system work? It can be described using the machine learning life cycle, a cyclic process for building an efficient machine learning project. The main purpose of the life cycle is to find a solution to the problem or project.

Machine learning life cycle involves seven major steps, which are given
below:

o Gathering Data
o Data preparation
o Data Wrangling
o Analyse Data
o Train the model
o Test the model
o Deployment

The most important thing in the complete process is to understand the problem and to know its purpose. Therefore, before starting the life cycle, we need to understand the problem, because a good result depends on a good understanding of the problem.

In the complete life cycle, to solve a problem we create a machine learning system called a "model", and this model is created by "training" it. But to train a model we need data, so the life cycle starts with collecting data.

1. Gathering Data:
Data gathering is the first step of the machine learning life cycle. The goal of this step is to identify and obtain the data related to the problem.

In this step, we need to identify the different data sources, as data can be collected from sources such as files, databases, the internet, or mobile devices. It is one of the most important steps of the life cycle. The quantity and quality of the collected data determine the efficiency of the output: the more data we have, the more accurate the prediction will be.

This step includes the tasks below:

o Identify various data sources
o Collect data
o Integrate the data obtained from different sources

By performing the above tasks, we get a coherent set of data, also called a dataset, which is used in the further steps.
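To make this step concrete, here is a minimal sketch (not from the original text) that gathers data from two sources and integrates it with pandas; the file name "orders.csv", the SQLite database "crm.db", and the join key "customer_id" are all hypothetical.

```python
import sqlite3

import pandas as pd

# Hypothetical file source: a CSV export of order records.
orders = pd.read_csv("orders.csv")

# Hypothetical database source: a customers table in a local SQLite database.
conn = sqlite3.connect("crm.db")
customers = pd.read_sql_query("SELECT * FROM customers", conn)
conn.close()

# Integrate the data obtained from the different sources into one dataset,
# joining on a shared key (assumed here to be "customer_id").
dataset = orders.merge(customers, on="customer_id", how="left")
print(dataset.shape)
```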

2. Data Preparation
After collecting the data, we need to prepare it for the further steps. Data preparation is the step where we put our data into a suitable place and prepare it for use in machine learning training.

In this step, we first put all the data together and then randomize its ordering.

This step can be further divided into two processes:

o Data exploration:
This is used to understand the nature of the data we have to work with. We need to understand the characteristics, format, and quality of the data; a better understanding of the data leads to a more effective outcome. Here, we look for correlations, general trends, and outliers.
o Data pre-processing:
The next step is to preprocess the data for analysis (a brief sketch of both processes follows this list).
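A minimal sketch of these two processes, assuming the hypothetical `dataset` DataFrame from the previous step: the rows are shuffled, then the data is explored for characteristics, correlations, trends, and possible outliers.

```python
# Put the data together and randomize its ordering (fixed seed for reproducibility).
dataset = dataset.sample(frac=1, random_state=42).reset_index(drop=True)

# Data exploration: characteristics, format, and quality of the data.
print(dataset.dtypes)                    # column formats
print(dataset.describe())                # general trends per numeric column
print(dataset.corr(numeric_only=True))   # correlations between numeric columns

# A rough outlier check: values more than 3 standard deviations from the column mean.
numeric = dataset.select_dtypes("number")
outliers = (numeric - numeric.mean()).abs() > 3 * numeric.std()
print(outliers.sum())                    # number of flagged values per column
```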

3. Data Wrangling
Data wrangling is the process of cleaning raw data and converting it into a usable format. It involves cleaning the data, selecting the variables to use, and transforming the data into a proper format so that it is more suitable for analysis in the next step. It is one of the most important steps of the complete process, because cleaning the data is required to address quality issues.

Not all of the data we have collected is necessarily useful to us. In real-world applications, collected data may have various issues, including:

o Missing Values
o Duplicate data
o Invalid data
o Noise

So, we use various filtering techniques to clean the data, as sketched below.

It is mandatory to detect and remove these issues because they can negatively affect the quality of the outcome.
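As an illustration only, the following sketch applies such filtering with pandas, continuing the hypothetical `dataset` from earlier; the "churned" target column and the "age" column are assumptions, not part of the original text.

```python
# Remove exact duplicate rows.
dataset = dataset.drop_duplicates()

# Handle missing values: drop rows missing the (assumed) target column,
# and fill gaps in numeric columns with the column median.
dataset = dataset.dropna(subset=["churned"])          # "churned" is a hypothetical target
numeric_cols = dataset.select_dtypes("number").columns
dataset[numeric_cols] = dataset[numeric_cols].fillna(dataset[numeric_cols].median())

# Filter out clearly invalid records, e.g. impossible ages (hypothetical "age" column).
dataset = dataset[dataset["age"].between(0, 120)]
```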

4. Data Analysis
Now the cleaned and prepared data is passed on to the analysis step. This
step involves:

o Selection of analytical techniques
o Building models
o Review the result

The aim of this step is to build a machine learning model that analyses the data using various analytical techniques, and to review the outcome. It starts with determining the type of problem, where we select a machine learning technique such as classification, regression, cluster analysis, or association, then build the model using the prepared data and evaluate it.

Hence, in this step, we take the data and use machine learning algorithms to build the model.
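The sketch below illustrates this step with scikit-learn, assuming the problem has been identified as a classification task on the hypothetical `dataset` (predicting the assumed "churned" column): an analytical technique is selected and the prepared data is split so the model can later be trained and evaluated.

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Separate the features from the target; the problem type here is classification.
X = dataset.select_dtypes("number").drop(columns=["churned"])
y = dataset["churned"]

# Hold out part of the data for the later testing step.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Selected analytical technique: a decision tree classifier.
model = DecisionTreeClassifier(max_depth=5, random_state=42)
```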

5. Train Model
The next step is to train the model. In this step, we train the model to improve its performance and obtain a better outcome for the problem.

We use training datasets and various machine learning algorithms to do this. Training is required so that the model can learn the various patterns, rules, and features in the data.
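Continuing the same hypothetical sketch, training simply means fitting the selected model on the training portion of the data:

```python
# Fit the decision tree on the training data; the model learns the patterns
# and rules that relate the features to the target.
model.fit(X_train, y_train)

# Training accuracy gives a first, usually optimistic, sense of how well it fits.
print("Training accuracy:", model.score(X_train, y_train))
```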

6. Test Model
Once our machine learning model has been trained on a given dataset, we test it. In this step, we check the accuracy of the model by providing it with a test dataset.

Testing the model determines its percentage accuracy against the requirements of the project or problem.
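A matching sketch of the testing step, under the same assumptions: the held-out test set is used to measure the model's percentage accuracy.

```python
from sklearn.metrics import accuracy_score

# Evaluate on data the model has never seen during training.
y_pred = model.predict(X_test)
test_accuracy = accuracy_score(y_test, y_pred)
print(f"Test accuracy: {test_accuracy:.1%}")
```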

7. Deployment
The last step of the machine learning life cycle is deployment, where we deploy the model in a real-world system.

If the prepared model produces accurate results that meet our requirements at an acceptable speed, we deploy it in the real system. But before deploying, we check whether the model continues to improve its performance using the available data. The deployment phase is similar to preparing the final report for a project.
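One simple way to hand a trained model over to a real-world system, sketched here under the same assumptions, is to serialize it with joblib so the deployed application can load it and serve predictions; the file name is hypothetical.

```python
import joblib

# Save the trained model to disk so a real-world system can load it.
joblib.dump(model, "churn_model.joblib")

# Later, inside the deployed application:
loaded_model = joblib.load("churn_model.joblib")
prediction = loaded_model.predict(X_test.iloc[:1])   # prediction for one new record
print(prediction)
```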
