
Machine learning (ML) is a subdomain of artificial intelligence (AI) that focuses on developing systems that learn from, or improve their performance based on, the data they ingest. Artificial intelligence is a broad term that refers to systems or machines that resemble human intelligence. Machine learning and AI are frequently discussed together, and the terms are occasionally used interchangeably, although they do not mean the same thing. A crucial distinction is that, while all machine learning is AI, not all AI is machine learning.

What is Machine Learning?


Machine Learning is the field of study that gives computers the capability to learn without being explicitly programmed. ML is one of the most exciting technologies one could come across. As is evident from the name, it gives the computer that which makes it more similar to humans: the ability to learn. Machine learning is actively being used today, perhaps in many more places than one would expect.

Features of Machine Learning


• Machine learning is a data-driven technology. A large amount of data is generated by organizations daily, enabling them to identify notable relationships and make better decisions.

• Machines can learn from past data and automatically improve their performance.

• Given a dataset, ML can detect various patterns in the data.

• For large organizations, branding is crucial, and targeting a relatable customer base becomes easier.

• It is similar to data mining, as both deal with substantial amounts of data.

Definition of Learning
A computer program is said to learn from experience E concerning
some class of tasks T and performance measure P, if its
performance at tasks T, as measured by P, improves with
experience E.

Examples

• Handwriting recognition learning problem

o Task T: Recognizing and classifying handwritten words within images

o Performance P: Percent of words correctly classified

o Training experience E: A dataset of handwritten words with given classifications

• A robot driving learning problem

o Task T: Driving on highways using vision sensors

o Performance P: Average distance traveled before an error

o Training experience E: A sequence of images and steering commands recorded while observing a human driver

Classification of Machine Learning


Machine learning implementations are classified into four major categories, depending on the nature of the learning “signal” or “response” available to the learning system. They are as follows:

1. Supervised learning:

Supervised learning is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. The given data is labeled. Both classification and regression problems are supervised learning problems.

• Example: Consider the following data regarding patients entering a clinic. The data consists of the gender and age of the patients, and each patient is labeled as “healthy” or “sick”.

Supervised learning can be further divided into several different types, each with its own characteristics and applications. Here are some of the most common types of supervised learning algorithms (a minimal fitting sketch follows this list):

• Linear Regression: Linear regression is a supervised learning regression algorithm used to predict a continuous output value. It is one of the simplest and most widely used algorithms in supervised learning.

• Logistic Regression: Logistic regression is a supervised learning classification algorithm used to predict a binary output variable.

• Decision Trees: A decision tree is a tree-like structure used to model decisions and their possible consequences. Each internal node in the tree represents a decision, while each leaf node represents a possible outcome.

• Random Forests: Random forests are made up of multiple decision trees that work together to make predictions. Each tree in the forest is trained on a different subset of the input features and data. The final prediction is made by aggregating the predictions of all the trees in the forest.

• Support Vector Machine (SVM): The SVM algorithm creates a hyperplane to segregate n-dimensional space into classes and identify the correct category of new data points. The extreme cases that help create the hyperplane are called support vectors, hence the name Support Vector Machine.

• K-Nearest Neighbors (KNN): KNN works by finding the k training examples closest to a given input and then predicts the class or value based on the majority class or average value of these neighbors. The performance of KNN can be influenced by the choice of k and the distance metric used to measure proximity.

• Gradient Boosting: Gradient Boosting combines weak learners, like decision trees, to create a strong model. It iteratively builds new models that correct errors made by previous ones.

• Naive Bayes Algorithm: The Naive Bayes algorithm is a supervised machine learning algorithm based on applying Bayes’ Theorem with the “naive” assumption that features are independent of each other given the class label.
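All of these algorithms follow the same basic fit-and-predict workflow on labeled data. As a rough illustration, here is a minimal sketch of training and evaluating one such classifier with scikit-learn; it assumes the library is installed, and the dataset is synthetic, used only for demonstration.

# A minimal sketch of supervised classification with scikit-learn
# (assumes scikit-learn is installed; the data here is synthetic).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Labeled examples: X holds the input features, y the known classes.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a logistic regression classifier on the labeled training pairs.
model = LogisticRegression()
model.fit(X_train, y_train)

# Accuracy on held-out data estimates how well the learned mapping generalizes.
print("test accuracy:", model.score(X_test, y_test))

Any of the other classifiers listed above (decision trees, random forests, SVMs, and so on) could be swapped in for LogisticRegression with the same fit and score calls.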

2. Unsupervised learning:

Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labeled responses. In unsupervised learning algorithms, classification or categorization is not included in the observations.

Example: Consider the following data regarding patients entering a clinic. The data consists of the gender and age of the patients.

Clustering

Clustering is a type of unsupervised learning that is used to group similar data points together. Many clustering algorithms work by iteratively assigning data points to the nearest cluster center and updating the centers, so that points end up closer to their own cluster center and further away from data points in other clusters. A minimal k-means sketch follows the lists below.

Clustering approaches:

1. Exclusive (partitioning)

2. Agglomerative

3. Overlapping

4. Probabilistic

Common clustering and related unsupervised techniques:

1. Hierarchical clustering

2. K-means clustering

3. Principal Component Analysis

4. Singular Value Decomposition

5. Independent Component Analysis

6. Gaussian Mixture Models (GMMs)

7. Density-Based Spatial Clustering of Applications with Noise (DBSCAN)
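As an illustration of the partitioning idea, here is a minimal k-means sketch with scikit-learn; the library and the synthetic blob data are assumptions used only for demonstration.

# A minimal k-means clustering sketch with scikit-learn (synthetic, unlabeled data).
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Unlabeled points drawn from three loose groups.
X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

# KMeans alternates between assigning points to the nearest center
# and moving each center to the mean of its assigned points.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

print(labels[:10])               # cluster index assigned to the first ten points
print(kmeans.cluster_centers_)   # learned cluster centers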

Association rule learning

Association rule learning is a type of unsupervised learning that is used to identify patterns in data. Association rule learning algorithms work by finding relationships between different items in a dataset.

Some common association rule learning algorithms include (a toy counting sketch follows this list):

• Apriori Algorithm

• Eclat Algorithm

• FP-Growth Algorithm
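To give a feel for what these algorithms compute, here is a toy, pure-Python sketch that counts how often pairs of items co-occur in a handful of made-up transactions and keeps the frequent ones; real Apriori or FP-Growth implementations handle itemsets of any size far more efficiently.

# A toy sketch of the idea behind association rule mining: count how often
# items co-occur in transactions and keep the frequent pairs.
from itertools import combinations
from collections import Counter

transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"milk", "butter"},
    {"bread", "butter"},
]

min_support = 0.5   # a pair must appear in at least half of the transactions
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

n = len(transactions)
frequent_pairs = {p: c / n for p, c in pair_counts.items() if c / n >= min_support}
print(frequent_pairs)   # e.g. {('bread', 'milk'): 0.5, ...}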

3.Reinforcement Learning:

Reinforcement learning is the problem of getting an agent to act in the world so as to maximize its rewards.

A learner is not told what actions to take, as in most other forms of machine learning, but instead must discover which actions yield the most reward by trying them. For example, consider teaching a dog a new trick: we cannot tell it what to do or what not to do, but we can reward or punish it if it does the right or wrong thing.

In a typical demonstration, the learning agent is initially clumsy and unskilled but steadily improves with training until it masters the task.
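As a concrete illustration of trial-and-error learning, here is a tiny tabular Q-learning sketch on a made-up one-dimensional corridor; the environment, the reward of 1 at the goal, and the hyperparameters are all assumptions chosen only for demonstration.

# A tiny tabular Q-learning sketch: the agent starts at position 0 and
# receives a reward only when it reaches position 4.
import random

n_states = 5
actions = [-1, +1]                       # step left or step right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration rate

def greedy(state):
    # Pick the action with the highest current value estimate, breaking ties randomly.
    best = max(Q[(state, a)] for a in actions)
    return random.choice([a for a in actions if Q[(state, a)] == best])

for episode in range(200):
    state = 0
    for step in range(100):              # cap episode length so the loop always ends
        action = random.choice(actions) if random.random() < epsilon else greedy(state)
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update: move the estimate toward reward + discounted future value.
        target = reward + gamma * max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = next_state
        if state == n_states - 1:
            break

# After training, the greedy policy should be to move right in every state.
print({s: greedy(s) for s in range(n_states - 1)})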
4. Semi-Supervised Learning:

In semi-supervised learning, an incomplete training signal is given: a training set with some (often many) of the target outputs missing. A special case of this principle is known as transduction, where the entire set of problem instances is known at learning time, except that part of the targets are missing. Semi-supervised learning is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training. Semi-supervised learning falls between unsupervised learning and supervised learning.
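As a rough sketch of this idea, scikit-learn's SelfTrainingClassifier can wrap an ordinary classifier and let it pseudo-label its own unlabeled data; the synthetic data and the 10% labeling rate below are assumptions for illustration.

# A minimal semi-supervised sketch using scikit-learn's SelfTrainingClassifier
# (unlabeled points are marked with the label -1).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=300, n_features=4, random_state=0)

# Pretend only about 10% of the labels are known; hide the rest with -1.
rng = np.random.default_rng(0)
y_partial = y.copy()
y_partial[rng.random(len(y)) > 0.1] = -1

# The wrapped classifier is trained on the labeled points, then repeatedly
# pseudo-labels the most confident unlabeled points and retrains.
model = SelfTrainingClassifier(LogisticRegression())
model.fit(X, y_partial)
print("accuracy on all points:", model.score(X, y))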

Data is a crucial component in the field of Machine Learning. It refers to the set of observations or measurements that can be used to train a machine-learning model. The quality and quantity of data available for training and testing play a significant role in determining the performance of a machine-learning model. Data can be in various forms such as numerical, categorical, or time-series data, and can come from various sources such as databases, spreadsheets, or APIs. Machine learning algorithms use data to learn patterns and relationships between input variables and target outputs, which can then be used for prediction or classification tasks.

Data is typically divided into two types:

1. Labeled data
2. Unlabeled data

Labeled data includes a label or target variable that the model is trying to predict, whereas unlabeled data does not include a label or target variable. The data used in machine learning is typically numerical or categorical. Numerical data includes values that can be ordered and measured, such as age or income. Categorical data includes values that represent categories, such as gender or type of fruit.

Data can be divided into training and testing sets. The training set is
used to train the model, and the testing set is used to evaluate the
performance of the model. It is important to ensure that the data is
split in a random and representative way.
Data preprocessing is an important step in the machine learning
pipeline. This step can include cleaning and normalizing the data,
handling missing values, and feature selection or engineering.
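As a small illustration of these preprocessing steps, the sketch below fills missing values and scales numeric features using pandas and scikit-learn; the tiny table of ages and incomes is made up for demonstration.

# A small sketch of typical preprocessing: filling missing values and scaling.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({"age": [25, None, 40, 31], "income": [40000, 52000, None, 61000]})

# Replace missing values with the column mean, then standardize each column
# to zero mean and unit variance so features are on comparable scales.
imputed = SimpleImputer(strategy="mean").fit_transform(df)
scaled = StandardScaler().fit_transform(imputed)
print(scaled)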

DATA: It can be any unprocessed fact, value, text, sound, or picture that has not yet been interpreted and analyzed. Data is the most important part of all Data Analytics, Machine Learning, and Artificial Intelligence. Without data, we can’t train any model, and all modern research and automation would be in vain. Big enterprises spend lots of money just to gather as much data as possible.

Example: Why did Facebook acquire WhatsApp by paying a huge price of $19 billion?

The answer is very simple and logical: to have access to users’ information that Facebook may not have but WhatsApp does. This information about users is of paramount importance to Facebook, as it facilitates the improvement of their services.

INFORMATION: Data that has been interpreted and manipulated and now has some meaningful inference for the users.

KNOWLEDGE: A combination of inferred information, experiences, learning, and insights. It results in awareness or concept building for an individual or organization.
How do we split data in Machine Learning?

• Training Data: The part of the data we use to train our model. This is the data that the model actually sees (both input and output) and learns from.

• Validation Data: The part of the data used for frequent evaluation of the model as it is fit on the training dataset, and for tuning hyperparameters (parameters set before the model begins learning). This data plays its part while the model is training.

• Testing Data: Once our model is completely trained, testing data provides an unbiased evaluation. We feed in the inputs from the testing data, and the model predicts values without seeing the actual outputs. After prediction, we evaluate the model by comparing its predictions with the actual outputs present in the testing data. This is how we measure how much the model has learned from the training data. A minimal split sketch follows this list.
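Here is a minimal sketch of such a split using scikit-learn's train_test_split; the 60/20/20 proportions and the synthetic data are illustrative assumptions, not a fixed rule.

# A minimal train/validation/test split: two calls to train_test_split
# carve off the test set first, then the validation set.
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))   # roughly 600 / 200 / 200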

Consider an example:

There is a shopping mart owner who conducted a survey and now has a long list of questions and answers collected from customers; this list of questions and answers is DATA. Whenever he wants to infer anything, he cannot simply go through every question from thousands of customers to find something relevant, as that would be time-consuming and unhelpful. To reduce this overhead and wasted time and to make the work easier, the data is manipulated through software, calculations, graphs, and so on, as per one’s convenience; the inference drawn from this manipulated data is Information. So, data is a prerequisite for information. Knowledge, in turn, is what differentiates two individuals who have the same information. Knowledge is not really technical content but is linked to the human thought process.

Different Forms of Data


• Numeric Data: If a feature represents a characteristic measured in numbers, it is called a numeric feature.

• Categorical Data: A categorical feature is an attribute that can take on one of a limited, and usually fixed, number of possible values on the basis of some qualitative property. A categorical feature is also called a nominal feature.

• Ordinal Data: This denotes a nominal variable with categories falling in an ordered list. Examples include clothing sizes such as small, medium, and large, or a measurement of customer satisfaction on a scale from “not at all happy” to “very happy”. A small encoding sketch follows this list.
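As a small illustration, the pandas sketch below represents one numeric, one nominal, and one ordinal feature and encodes the latter two; the column names and category values are made up for demonstration.

# A small sketch of numeric, categorical (nominal), and ordinal features in pandas.
import pandas as pd

df = pd.DataFrame({
    "age": [23, 35, 41],                     # numeric: ordered and measurable
    "fruit": ["apple", "banana", "apple"],   # nominal: categories with no order
    "size": ["small", "large", "medium"],    # ordinal: categories with an order
})

# One-hot encode the nominal column; map the ordinal column to its rank.
df = pd.get_dummies(df, columns=["fruit"])
df["size"] = pd.Categorical(df["size"], categories=["small", "medium", "large"], ordered=True).codes
print(df)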
Advantages of using data in Machine Learning:

• Improved accuracy: With large amounts of data, machine learning algorithms can learn more complex relationships between inputs and outputs, leading to improved accuracy in predictions and classifications.

• Automation: Machine learning models can automate decision-making processes and can perform repetitive tasks more efficiently and accurately than humans.

• Personalization: With the use of data, machine learning algorithms can personalize experiences for individual users, leading to increased user satisfaction.

• Cost savings: Automation through machine learning can result in cost savings for businesses by reducing the need for manual labor and increasing efficiency.

Disadvantages of using data in Machine Learning:

• Bias: Data used for training machine learning models can be biased, leading to biased predictions and classifications.

• Privacy: Collection and storage of data for machine learning can raise privacy concerns and can lead to security risks if the data is not properly secured.

• Quality of data: The quality of data used for training machine learning models is critical to the performance of the model. Poor quality data can lead to inaccurate predictions and classifications.

• Lack of interpretability: Some machine learning models can be complex and difficult to interpret, making it challenging to understand how they are making decisions.

Use of Machine Learning:

Machine learning is a powerful tool that can be used in a wide range of applications. Here are some of the most common uses of machine learning:

• Predictive modeling: Machine learning can be used to build predictive models that can predict future outcomes based on historical data. This can be used in many applications, such as stock market prediction, fraud detection, weather forecasting, and customer behavior prediction.

• Image recognition: Machine learning can be used to train models that can recognize objects, faces, and other patterns in images. This is used in many applications, such as self-driving cars, facial recognition systems, and medical image analysis.

• Natural language processing: Machine learning can be used to analyze and understand natural language, which is used in many applications, such as chatbots, voice assistants, and sentiment analysis.

• Recommendation systems: Machine learning can be used to build recommendation systems that can suggest products, services, or content to users based on their past behavior or preferences.

• Data analysis: Machine learning can be used to analyze large datasets and identify patterns and insights that would be difficult or impossible for humans to detect.

• Robotics: Machine learning can be used to train robots to perform tasks autonomously, such as navigating through a space or manipulating objects.

Issues of using data in Machine Learning:

• Data quality: One of the biggest issues with using data in machine learning is ensuring that the data is accurate, complete, and representative of the problem domain. Low-quality data can result in inaccurate or biased models.

• Data quantity: In some cases, there may not be enough data available to train an accurate machine learning model. This is especially true for complex problems that require a large amount of data to accurately capture all the relevant patterns and relationships.

• Bias and fairness: Machine learning models can sometimes perpetuate bias and discrimination if the training data is biased or unrepresentative. This can lead to unfair outcomes for certain groups of people, such as minorities or women.

• Overfitting and underfitting: Overfitting occurs when a model is too complex and fits the training data too closely, resulting in poor generalization to new data. Underfitting occurs when a model is too simple and does not capture all the relevant patterns. A small sketch contrasting the two follows this list.
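The sketch below illustrates this trade-off by fitting polynomial models of increasing degree to noisy synthetic data and comparing training and test scores; the degrees, noise level, and sample size are arbitrary choices for demonstration.

# A small sketch contrasting underfitting and overfitting with polynomial fits.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 60)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 60)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree in (1, 4, 15):    # too simple, about right, too flexible
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    # A high training score with a much lower test score signals overfitting;
    # low scores on both signal underfitting.
    print(degree, round(model.score(X_train, y_train), 2), round(model.score(X_test, y_test), 2))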

Applications of Machine Learning


Machine learning is one of the most exciting technologies that one would have ever come across. As is evident from the name, it gives the computer that which makes it more similar to humans: the ability to learn. Machine learning is actively being used today, perhaps in many more places than one would expect.

Today, companies are using Machine Learning to improve business decisions, increase productivity, detect disease, forecast weather, and do many more things. With the exponential growth of technology, we not only need better tools to understand the data we currently have, but we also need to prepare ourselves for the data we will have. To achieve this goal we need to build intelligent machines. We can write a program to do simple things, but most of the time, hardwiring intelligence into it is difficult. The best way to do it is to have some way for machines to learn things themselves: if a machine can learn from input, then it does the hard work for us. This is where Machine Learning comes into action. Some of the most common examples are:

• Image Recognition

• Speech Recognition

• Recommender Systems

• Fraud Detection

• Self Driving Cars

• Medical Diagnosis

• Stock Market Trading

• Virtual Try On

Image Recognition

Image Recognition is one of the reasons behind the boom in the field of Deep Learning. The task, which started with classifying images of cats and dogs, has now evolved to the level of face recognition and real-world use cases built on it, such as employee attendance tracking.

Image recognition has also helped revolutionize the healthcare industry by employing smart systems for disease recognition and diagnosis.

Speech Recognition

Most of us have come across speech-recognition-based smart systems like Alexa and Siri and used them to communicate. In the backend, these systems rely on speech recognition and are designed to convert voice instructions into text.

Another application of speech recognition that we encounter in our day-to-day life is performing Google searches just by speaking.

Recommender Systems

As our world has become more and more digital, nearly every tech giant tries to provide customized services to its users. This is possible because of recommender systems, which can analyze a user’s preferences and search history and, based on that, recommend content or services to them.

A very common example of these services is YouTube, which recommends new videos and content based on the user’s past search patterns. Netflix recommends movies and series based on the interests a user provides when creating an account for the very first time.

Fraud Detection

In today’s world, most things have been digitalized: from buying a toothbrush to making transactions of millions of dollars, everything is accessible and easy to do. But with this process of digitization, cases of fraudulent transactions and fraudulent activities have increased. Identifying them is not easy, but machine learning systems are very efficient at these tasks.

Thanks to these applications, whenever the system detects red flags in a user’s activity, a suitable notification is provided to the administrator so that these cases can be monitored properly for any spam or fraudulent activity.

Self Driving Cars


It would have been assumed that there is certainly some ghost who is
driving a car if we ever saw a car being driven without a driver but all
thanks to machine learning and deep learning that in today’s world,
this is possible and not a story from some fictional book. Even though
the algorithms and tech stack behind these technologies are highly
advanced but at the core it is machine learning which has made these
applications possible.

The most common example of this use case is that of the Tesla cars
which are well-tested and proven for autonomous driving.

Medical Diagnosis

If you are a machine learning practitioner, or even a student, you have probably heard about projects like breast cancer classification, Parkinson’s disease classification, pneumonia detection, and many more health-related tasks that are performed by machine learning models with more than 90% accuracy.

These models are not limited to diagnosing disease in human beings; they also work well for plant disease-related tasks, whether predicting the type of a disease or detecting whether a disease is going to occur in the future.

Stock Market Trading

The stock market has remained a hot topic among working professionals and even students, because with sufficient knowledge of the markets and the forces that drive them, one can make a fortune in this domain. Attempts have been made to create intelligent systems that can predict future price trends and market value as well.

This can be considered an application of time series forecasting, because stock price data is sequential data in which the time at which each observation was taken is of utmost importance.

Virtual Try On

Have you ever purchased your specs or lenses from Lenskart? If yes, then you must have come across its feature that lets you try different frames virtually without actually purchasing them or visiting the outlet. This has become possible because of machine learning systems that identify certain landmarks on a person’s face and then place the specs virtually on the face using those landmarks.
