AI Lecture 02

MACHINE LEARNING

 Machine Learning (ML) is an area of artificial intelligence (AI) that studies the creation of algorithms and models that allow computers to learn from data and make predictions or decisions without being explicitly programmed for each task.

 It involves the creation of systems that can automatically learn and improve from experience, enabling them to perform tasks more accurately or efficiently over time.



KEY STEPS IN MACHINE LEARNING
1. Problem Definition

First things first, where are we headed? Clearly define the problem you want to solve and the
goals you aim to achieve. Are you predicting customer churn, diagnosing diseases, or generating
creative content?

2. Data Collection

Gather relevant data from various sources, ensuring it is clean, relevant, and
sufficient for the task.
3. Data Preprocessing

Data preprocessing is like cleaning, sorting, and preparing those ingredients. Address missing values, remove duplicates and outliers, and normalize or encode features so the data is ready for modeling.
4. Feature Engineering

Select, extract, or create meaningful features from the data to improve model
performance.

5. Model Selection

Choose appropriate machine learning algorithms based on the problem type, data characteristics, and desired outcomes.
6. Model Training

Train the selected model on the training data using appropriate techniques such as cross-validation to optimize performance.
7. Model Evaluation

Evaluate the trained model's performance using appropriate metrics and validate it on separate test data to assess generalization.

8. Model Tuning

Fine-tune hyperparameters and adjust model complexity to improve performance. Fine-tuning is like adjusting the seasoning in your dish.
9. Deployment

Deploy the trained model into production, integrating it into existing systems or applications for real-world use.
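
To make these steps concrete, here is a minimal sketch of the core workflow (data collection through evaluation) in Python, assuming scikit-learn and using its built-in Iris dataset as a stand-in for real project data:

# A minimal end-to-end sketch of the key steps, assuming scikit-learn.
# The Iris dataset stands in for real, problem-specific data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Steps 2-3: collect and prepare the data (Iris is already clean)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Steps 5-6: select a model and train it, using cross-validation
model = LogisticRegression(max_iter=1000)
print("cross-validation accuracy:", cross_val_score(model, X_train, y_train, cv=5).mean())
model.fit(X_train, y_train)

# Step 7: evaluate on held-out test data to assess generalization
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))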
KEY CONCEPTS IN ARTIFICIAL INTELLIGENCE
SUPERVISED LEARNING
 It’s a type of machine learning where the algorithm learns from labeled data, meaning each
training example consists of an input-output pair.

 The algorithm learns to map inputs to outputs based on the provided examples.

 Supervised machine learning is a powerful technique for solving classification and regression problems by learning from labeled data.

 By training models to recognize patterns and relationships in data, supervised learning enables the development of predictive models that can make accurate predictions on unseen inputs.

 Classification: In classification tasks, the algorithm learns to predict a categorical label or class for a given input. For example, classifying emails as spam or not spam, or identifying the digit shown in an image of a handwritten number.
LABELED DATA

 Labeled data is made up of input-output pairs: the algorithm accepts the data as input and learns to output the matching label or category for that data.

 Labels can be numerical (e.g., property prices, temperature readings) or categorical

(e.g., dog/cat, spam/not spam).
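
As a small illustration, labeled data can be represented as inputs paired with their correct outputs; the examples below are invented for demonstration:

# Hypothetical labeled data for spam classification:
# each input text is paired with its correct categorical label.
emails = ["Win a free prize now!!!", "Meeting moved to 3pm"]
labels = ["spam", "not spam"]

# Labeled data with numerical labels: house sizes (inputs) and prices (outputs).
sizes = [[50.0], [80.0], [120.0]]
prices = [150000, 230000, 340000]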


TRAINING PROCEDURE
 In the course of training, the algorithm builds a model that
encapsulates the underlying patterns and relationships between
inputs and outputs by iteratively learning from the labeled data.

 To decrease the discrepancy between the expected outputs and the


actual labels in the training data, the algorithm modifies its internal
parameters or structure.
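
The sketch below illustrates this idea with a single parameter: starting from an arbitrary value, the parameter is repeatedly adjusted to reduce the error between predictions and labels (the data and learning rate are made up for illustration):

# Tiny illustration of the training idea: repeatedly adjust a parameter w
# to decrease the discrepancy between predictions (w * x) and labels y.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]        # underlying relationship: y = 2x

w = 0.0                     # internal parameter, initially wrong
learning_rate = 0.05
for _ in range(100):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad   # adjust w to reduce the error

print("learned parameter w:", round(w, 3))   # approaches 2.0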
SUPERVISED LEARNING ALGORITHMS

LINEAR REGRESSION

Linear regression is a simple and widely used regression algorithm that models the relationship between a dependent variable (target) and one or more independent variables (features) by fitting a linear equation to the observed data points.

It is commonly used for predicting continuous numerical values.

Imagine you are trying to estimate the price of a car based on its mileage. Linear regression can help you model this relationship by fitting a straight line to the data, where each point represents a car's mileage and price.
HOW LINEAR REGRESSION WORKS
1.Data Points: It analyzes data points, each with an independent
variable (like size) and a dependent variable (like price).

2.Finding the Line: It draws a straight line that best fits the data
points, minimizing the overall error between the line's predictions and
the actual values.

3.Prediction Power: Once the line is fitted, you can use it to predict the dependent variable for new data points. For example, if a new house has a certain size and location, the line can estimate its price.
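
Below is a minimal sketch of linear regression in Python with scikit-learn; the house sizes and prices are made-up values for illustration:

# Minimal linear regression sketch (scikit-learn) with invented data:
# house size in square metres (feature) versus price (target).
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[50], [80], [100], [120]])        # sizes
y = np.array([150000, 230000, 290000, 340000])  # prices

model = LinearRegression()
model.fit(X, y)                                 # fit the best straight line

print("slope:", model.coef_[0], "intercept:", model.intercept_)
print("predicted price for 90 m^2:", model.predict([[90]])[0])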
LOGISTIC REGRESSION
 Logistic regression is a classification algorithm used for binary classification
tasks, where the output variable (target) is binary (0 or 1).

 It models the probability that a given input belongs to a particular class using the
logistic function (sigmoid function).

 Imagine you want to classify emails as spam or not spam.

 That's where logistic regression comes in.

 It takes an input, analyzes it, and predicts the probability of it belonging to one of two classes (in this case, spam or not spam).
HOW LOGISTIC REGRESSION WORKS
1.Data Points: It analyzes data points, each with features and a binary target
variable (0 or 1).

2.Probability Power: It uses the logistic function (sigmoid function), which squishes any input value between 0 and 1. This essentially translates the analysis into a probability of belonging to class 1 (e.g., spam).

3.Decision Time: Based on a chosen threshold (usually 0.5), it predicts class 1 if the probability is above the threshold, and class 0 otherwise. So, if the spam probability is 0.8, it's classified as spam; if it's 0.3, it's likely not spam.
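
A minimal sketch with scikit-learn, assuming a tiny invented dataset of numeric features, shows how the model outputs a probability that is then thresholded:

# Minimal logistic regression sketch (scikit-learn) on invented data:
# each row is a feature vector, each label is 0 (not spam) or 1 (spam).
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
y = np.array([0, 1, 0, 1])

clf = LogisticRegression()
clf.fit(X, y)

p_spam = clf.predict_proba([[0.7, 0.6]])[0][1]  # probability of class 1
print("P(spam) =", p_spam)
print("predicted class:", 1 if p_spam >= 0.5 else 0)  # threshold at 0.5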
LOGISTIC REGRESSION GRAPH

DECISION TREES
 Decision trees are versatile supervised learning algorithms that can
perform both classification (predicting categories) and regression
(predicting numerical values) tasks.

 They work by recursively partitioning the feature space into regions based on the values of input features, with each partition representing a decision or split in the tree.

 Decision trees are like intelligent decision-makers in the world of machine learning.
 Decision tree algorithms are a widely used machine learning tool for classification and regression
tasks.

 They divide the dataset repeatedly into subsets based on feature values, resulting in a tree-like
structure with each node representing a feature and each branch representing a choice based on that
feature.

 The splitting continues until the data is fully partitioned or a stopping criterion is met.

 Decision trees are interpretable and adaptable, able to handle both numerical and categorical
information.

 They improve comprehension of decision-making processes and can deal with nonlinear
relationships.

 They may, however, experience overfitting if not pruned or regularized adequately.
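
A minimal sketch of training a decision tree classifier with scikit-learn, using its built-in Iris dataset as example data:

# Minimal decision tree sketch (scikit-learn) on the Iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Limiting the depth is one simple way to regularize and reduce overfitting.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))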
RANDOM FOREST
 Random forests are ensemble learning methods that combine
multiple decision trees to improve predictive accuracy and reduce
overfitting.

 They train multiple decision trees on random subsets of the training data and aggregate their predictions to make the final prediction.

 Random Forest is an ensemble learning technique for classification and regression applications.
 During training, it constructs numerous decision trees and outputs their mode (for
classification) or mean prediction (for regression).

 Each tree is constructed using a random selection of features and bootstrapped samples from the training data, which introduces randomness while reducing overfitting.

 During prediction, the ensemble of trees aggregates their outputs, yielding robust
and reliable results.

 Random Forest is well-known for its simplicity, scalability, and capacity to handle
high-dimensional data, making it a popular choice for machine learning
applications.
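
A minimal sketch with scikit-learn's RandomForestClassifier, again using the Iris dataset as example data:

# Minimal random forest sketch (scikit-learn): an ensemble of decision
# trees, each trained on bootstrapped samples with random feature subsets.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

# The final prediction aggregates the votes of all trees in the ensemble.
print("test accuracy:", forest.score(X_test, y_test))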
SUPPORT VECTOR MACHINES (SVM)
Support Vector Machines (SVMs) are a distinct breed of supervised learning algorithms, specifically designed for classification tasks.

They work by creating a clear dividing line, called a hyperplane, between different groups of data points. But unlike other algorithms, SVMs don't just draw any line – they strive for the optimal hyperplane that maximizes the margin between the classes.

Think of it like separating two groups of friends at a party. You wouldn't just draw a random line down the middle, would you? You'd try to find the widest possible gap, ensuring the two groups are as distinct as possible. That's the essence of SVMs.
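
A minimal sketch of a linear SVM in scikit-learn, separating two clusters of made-up 2-D points with a maximum-margin hyperplane:

# Minimal SVM sketch (scikit-learn): fit a maximum-margin hyperplane
# between two classes of 2-D points (invented data).
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 1], [1, 2], [2, 1], [6, 6], [7, 6], [6, 7]])
y = np.array([0, 0, 0, 1, 1, 1])

svm = SVC(kernel="linear")    # a linear kernel gives a straight dividing hyperplane
svm.fit(X, y)

print("support vectors:", svm.support_vectors_)
print("prediction for [3, 3]:", svm.predict([[3, 3]])[0])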
NEURAL NETWORKS
Neural networks are a class of machine learning models inspired by the structure and function of the human brain. They consist
of interconnected layers of neurons (nodes) that process and transform input data to produce output predictions. Neural
networks can learn complex patterns and relationships in data through a process called training, where the model adjusts its
parameters to minimize the difference between predicted and actual outputs.

Components of a Neural Network:

•Input Layer: Receives input data and passes it to the next layer.

•Hidden Layers: Intermediate layers between the input and output layers. Each neuron in a hidden layer performs a
computation based on the inputs it receives and passes the result to the next layer.

•Output Layer: Produces the final predictions or outputs of the model.

•Weights and Biases: Parameters of the model that are learned during training to adjust the strength of connections between
neurons and the neuron's activation threshold, respectively.

•Activation Functions: Non-linear functions applied to the output of neurons to introduce non-linearity into the model, enabling it to learn complex relationships.
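
As a minimal sketch of these components, assuming the Keras API (tensorflow.keras), a small feedforward network with one hidden layer can be written as follows; the layer sizes are arbitrary choices for illustration:

# Minimal feedforward network sketch (Keras): an input layer, one hidden
# layer with a non-linear activation, and an output layer.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(4,)),                # input layer: 4 features
    layers.Dense(16, activation="relu"),    # hidden layer: weights, biases, activation
    layers.Dense(3, activation="softmax"),  # output layer: probabilities for 3 classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()   # lists the trainable parameters (weights and biases) per layer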
Types of Neural Networks

Feedforward Neural Networks (FNN): Neurons are organized in layers, and information flows in one direction, from input to output.

Convolutional Neural Networks (CNN): Designed for processing grid-like data, such
as images, by using convolutional layers to detect spatial patterns.

Recurrent Neural Networks (RNN): Designed for sequential data, such as time series
or natural language, by introducing connections between neurons to capture temporal
dependencies.

Long Short-Term Memory Networks (LSTM): A type of RNN that can learn long-
term dependencies by maintaining a memory state over time.
EXAMPLE
Consider an image classification task where we want to classify images of
handwritten digits (0-9) using a neural network.

We can use a convolutional neural network (CNN) for this task.

The CNN will learn to detect features like edges and shapes in the input images through convolutional layers and make predictions about the digit present in the image through its output layer.
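
A minimal sketch of such a CNN in Keras, assuming 28x28 grayscale digit images (as in the MNIST dataset) and ten output classes:

# Minimal CNN sketch (Keras) for classifying 28x28 grayscale digit images.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),   # detect edges and shapes
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),                # one probability per digit 0-9
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=5) would then train it on labeled digit images.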
