
How Machine Learning Works
Machine learning (ML) is a transformative technology that empowers
computers to learn from data without explicit programming. In this
presentation, we'll dive into the intricate process of how ML algorithms
analyze data, uncover patterns, and make intelligent decisions.
The Machine Learning Workflow

Step 1: Data Collection
ML models are only as good as the data they are trained on. This initial step involves gathering a representative dataset. For example, to develop a system for recognizing handwritten digits, you would collect thousands of labeled images of handwritten numbers.

Step 2: Data Preprocessing
Raw data is typically messy and requires cleaning, formatting, and feature selection. This step ensures the data is ready for the learning process. For example, you might remove duplicates, fill missing values, or scale features to a common range.
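A minimal preprocessing sketch in Python (the DataFrame, column names, and values below are invented purely for illustration):

import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Hypothetical raw dataset with a duplicate row and a missing value.
df = pd.DataFrame({
    "age":    [25, 32, None, 32, 47],
    "income": [40000, 52000, 61000, 52000, 88000],
})

df = df.drop_duplicates()                        # remove duplicate rows
df["age"] = df["age"].fillna(df["age"].mean())   # fill missing values
df[["age", "income"]] = MinMaxScaler().fit_transform(df[["age", "income"]])  # scale to [0, 1]
print(df)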

Workflow for building a machine learning model:

1. Data Collection: Gathering relevant and high-quality data is the foundation of any machine learning project.
2. Model Training: Algorithms are trained on the data to identify patterns and learn how to make predictions.
3. Model Deployment: The trained model is integrated into applications or systems to make real-world decisions.
Choosing the Right Algorithm

1. Supervised Learning
Algorithms learn from labeled data, where each input example has a known output. Examples include predicting house prices based on features like size and location, or classifying emails as spam or not spam.

2. Unsupervised Learning
Algorithms learn from unlabeled data, discovering hidden patterns and structures. Examples include grouping customers into clusters based on their purchase history, or reducing the dimensionality of complex datasets.

3. Reinforcement Learning
Algorithms learn through trial and error, receiving feedback in the form of rewards or penalties. Examples include training robots to navigate, or developing game-playing AI systems.
Supervised Learning: Algorithms learn from labeled data to make predictions or decisions.

Unsupervised Learning: Algorithms discover hidden patterns and structures in unlabeled data.

Reinforcement Learning: Algorithms learn by interacting with an environment and receiving rewards or penalties.
Supervised Learning

Supervised learning is a type of machine learning in which machines are trained using well-labelled training data, and on the basis of that data, machines predict the output. Labelled data means each input example is already tagged with the correct output.

In the real world, supervised learning is used for risk assessment, image classification, fraud detection, spam filtering, and more.

Example: The labelled data contains pictures of dogs and cats (it is our job to label each picture as dog or cat). The algorithm uses this information for training. Once the model is trained, it can take new, unlabelled input (test data) and classify the picture as a dog or a cat.
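As a small sketch of the same idea (the two numeric "features" standing in for each picture are invented purely for this illustration), a scikit-learn classifier trained on labelled examples might look like:

from sklearn.linear_model import LogisticRegression

# Toy stand-in for the cat/dog example: each "picture" is reduced to two
# made-up numeric features (weight in kg, ear length in cm).
X_train = [[4.0, 6.5], [3.5, 7.0], [25.0, 10.0], [30.0, 11.0]]
y_train = ["cat", "cat", "dog", "dog"]          # the labels we provide

model = LogisticRegression()
model.fit(X_train, y_train)                     # training on labelled data

print(model.predict([[28.0, 10.5]]))            # classifies an unseen input, e.g. ['dog']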
Unsupervised Learning

Unsupervised learning is a type of machine learning in which models are trained on an unlabelled dataset and are allowed to act on that data without any supervision. The goal of unsupervised learning is to find the underlying structure of the dataset, group the data according to similarities, and represent the dataset in a compressed format.

Example: Suppose the unsupervised learning algorithm is given an input dataset containing images of different types of cats and dogs. The algorithm is never told which images show cats and which show dogs; it groups the images on its own, based on the similarities it discovers.
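A minimal clustering sketch of this idea (again using invented numeric features in place of real images):

from sklearn.cluster import KMeans

# Unlabelled toy data: no labels are provided at all.
X = [[4.0, 6.5], [3.5, 7.0], [25.0, 10.0], [30.0, 11.0], [3.8, 6.8]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
groups = kmeans.fit_predict(X)   # the algorithm groups similar items by itself

print(groups)                    # e.g. [0 0 1 1 0]: two clusters, but no names for them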
Reinforcement Learning

Observe: The agent observes the current state of the environment.
Act: The agent selects an action to take based on its current policy.
Reward: The environment provides a reward or penalty based on the agent's action.
Learn: The agent updates its policy to maximize future rewards.
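One way to make the observe-act-reward-learn loop concrete is a tiny tabular Q-learning sketch on a made-up five-state corridor (every number here is an illustrative choice, not a standard benchmark):

import random

n_states, n_actions = 5, 2                       # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)] # the agent's policy, as a value table
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(200):
    state = 0                                    # start at the left end of the corridor
    while state != 4:
        # Observe + act: explore occasionally, otherwise follow the current policy.
        if random.random() < epsilon or Q[state][0] == Q[state][1]:
            action = random.randrange(n_actions)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state = max(0, state - 1) if action == 0 else min(4, state + 1)
        reward = 1.0 if next_state == 4 else 0.0  # feedback from the environment
        # Learn: nudge this state-action value toward reward plus discounted future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print(Q[0])  # after training, the "move right" action should carry the higher value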
Training the Model

Splitting the Data
The data is divided into three sets: a training set (used to teach the model), a validation set (used to tune hyperparameters), and a test set (used to evaluate the model's performance on unseen data).

Iterative Adjustment
During training, the algorithm iteratively adjusts its parameters to minimize the error between its predictions and the actual values. This process is often guided by optimization techniques like gradient descent.
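A quick sketch of the three-way split using scikit-learn (the toy data and the 60/20/20 proportions are illustrative choices):

from sklearn.model_selection import train_test_split

# Hypothetical feature matrix X and labels y; any dataset would do.
X = [[i] for i in range(100)]
y = [i % 2 for i in range(100)]

# First split off the test set, then carve a validation set out of the remainder.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 60 20 20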
Evaluating Model Performance

Accuracy: Measures the proportion of correct predictions made by the model.

Precision: Measures the proportion of true positive predictions out of all positive predictions.

Recall: Measures the proportion of true positive predictions out of all actual positive instances.
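These metrics can be computed directly from a list of predictions and ground-truth labels; a small sketch with invented values:

from sklearn.metrics import accuracy_score, precision_score, recall_score

# Toy predictions vs. ground truth (1 = positive class, 0 = negative class).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print("Accuracy: ", accuracy_score(y_true, y_pred))   # correct / total
print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("Recall:   ", recall_score(y_true, y_pred))     # TP / (TP + FN)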
Deployment and Real-World Applications
1 Deployment
Once a model performs well on the test set, it is deployed to
make predictions or decisions in real-world applications.

2 Examples
ML models are used in diverse applications, from fraud
detection and medical diagnosis to personalized
recommendations and self-driving cars.
Deep Dive: Neural Networks

1. Input Layer
2. Hidden Layers
3. Output Layer

Neural networks are a powerful type of ML model, inspired by the structure of the human brain. They consist of layers of interconnected nodes (neurons), which process information and learn complex patterns.
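As a sketch of this layered structure (the layer sizes and toy data below are arbitrary illustrative choices), scikit-learn's MLPClassifier lets you declare the hidden layers directly:

from sklearn.neural_network import MLPClassifier

# Toy inputs with 4 features each.
X = [[0, 0, 1, 1], [1, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1]]
y = [0, 1, 1, 0]

# Input layer (4 features) -> two hidden layers (16 and 8 neurons) -> output layer.
net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
net.fit(X, y)

print(net.predict([[1, 1, 0, 1]]))  # the output layer yields the predicted class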
How Neural Networks Learn

Forward Propagation
Data flows through the network, starting from the input layer and passing through hidden layers to the output layer, generating a prediction.

Backward Propagation
The network calculates the error between its prediction and the actual value. This error information is used to adjust the weights of the connections between neurons, improving the accuracy of future predictions.

Optimization
Gradient descent is a common optimization technique used to adjust weights in the right direction to minimize errors and improve the model's performance.
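A bare-bones NumPy sketch of one hidden layer learning XOR shows all three steps in miniature (the network size, learning rate, and iteration count are arbitrary choices for illustration):

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))   # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))   # hidden -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5                                              # learning rate

for step in range(10000):
    # Forward propagation: data flows input -> hidden -> output, producing a prediction.
    hidden = sigmoid(X @ W1 + b1)
    pred = sigmoid(hidden @ W2 + b2)

    # Backward propagation: push the prediction error back through the layers.
    err_out = (pred - y) * pred * (1 - pred)
    err_hid = (err_out @ W2.T) * hidden * (1 - hidden)

    # Gradient descent: move each weight a small step in the error-reducing direction.
    W2 -= lr * (hidden.T @ err_out)
    b2 -= lr * err_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ err_hid)
    b1 -= lr * err_hid.sum(axis=0, keepdims=True)

print(pred.round(2).ravel())  # predictions should drift toward [0, 1, 1, 0]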
Key Components of Learning

Features: The input variables used by the model to learn patterns. For example, in predicting house prices, features could include size, location, and number of rooms.

Labels: The known outputs for supervised learning tasks. For example, in predicting spam, the label would be "Spam" or "Not Spam".

Loss Function: A function that measures the error between the model's predictions and the actual values. The goal is to minimize this loss during training to improve accuracy.
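One common choice of loss function is mean squared error; a small sketch with made-up house prices:

def mean_squared_error(y_true, y_pred):
    """Average squared gap between the actual values and the model's predictions."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Labels are the known house prices; predictions come from the model (toy numbers here).
actual_prices    = [200000, 350000, 150000]
predicted_prices = [210000, 330000, 160000]

print(mean_squared_error(actual_prices, predicted_prices))  # lower is better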
Practical Analogy: Learning to Ride a Bike

1. Data Collection: Observing others ride a bike.
2. Training: Practice pedaling and balancing (trial and error).
3. Model Adjustment: Adjust your movements based on falls (feedback).
4. Deployment: Ride confidently in real-world scenarios.

Just like learning to ride a bike, machine learning involves a process of collecting data, training the model, adjusting based on feedback, and deploying the learned knowledge to make predictions and decisions.
