Unit 5

Q-learning is a model-free, value-based reinforcement learning algorithm that finds the best actions for an agent based on its current state to maximize future rewards. It utilizes a Q-table to store and update the expected values of actions in various states, employing techniques like the Bellman equation and the Epsilon Greedy Strategy for exploration and exploitation. While effective for small to medium problems, Q-learning faces challenges in large state spaces, slow convergence, and requires careful parameter tuning, leading to the development of enhancements like Deep Q-Networks (DQNs) for improved scalability and performance.


• UNIT-5

• What is Q-Learning?
• Q-learning is a model-free, value-based, off-policy algorithm
that will find the best series of actions based on the agent's
current state.
• The “Q” stands for quality. Quality represents how valuable the
action is in maximizing future rewards.
• The model-based algorithms use transition and reward
functions to estimate the optimal policy and create the model.
• In contrast, model-free algorithms learn the consequences of
their actions through experience, without explicit transition and
reward functions.
• The value-based method trains a value function to learn
which states are more valuable and acts accordingly.

• On the other hand, policy-based methods train the policy directly to learn which action to take in a given state.
• In the off-policy, the algorithm evaluates and updates a
policy that differs from the policy used to take an action.

• Conversely, the on-policy algorithm evaluates and improves the same policy used to take an action.
• Key Terminologies in Q-learning
• Before we jump into how Q-learning works, we need to learn a
few useful terminologies to understand Q-learning's
fundamentals.
• State (s): the current position of the agent in the
environment.
• Action (a): a step taken by the agent in a particular state.
• Rewards: for every action, the agent receives a reward or a
penalty.
• Episodes: an episode ends when the agent can take no new
action, either because it has reached the goal or because it has
failed.
• Q(St+1, a): the expected optimal Q-value of taking action a in
the next state St+1.
• How Does Q-Learning Work?
• We will learn in detail how Q-learning works by using the
example of a frozen lake. In this environment, the agent must
cross the frozen lake from the start to the goal, without falling
into the holes.

• The best strategy is to reach the goal by taking the shortest path.
• Q-Table
• The agent will use a Q-table to take the best possible action
based on the expected reward for each state in the
environment.

• In simple words, a Q-table is a data structure of sets of actions and states, and we use the Q-learning algorithm to update the values in the table.

• Q-Function
• The Q-function uses the Bellman equation and takes state (s)
and action (a) as input. The equation simplifies the calculation
of state values and state-action values.
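
For reference, the Bellman form of the Q-function can be written as follows (a standard formulation, not taken from these slides; s' denotes the next state and R(s, a) the immediate reward):

```latex
Q(s, a) = R(s, a) + \gamma \max_{a'} Q(s', a')
```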
• Q-learning algorithm
• Initialize Q-Table
• We will first initialize the Q-table. We will build the table with
columns based on the number of actions and rows based on
the number of states.
• In our example, the character can move up, down, left, and
right. We have four possible actions and four states (start, idle,
wrong path, and end). You can also treat the wrong path as
falling into the hole. We will initialize the Q-Table with all
values set to 0.
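
A minimal sketch of this initialization in Python with NumPy, assuming the four states and four actions of the frozen lake example (the state and action ordering is illustrative):

```python
import numpy as np

n_states = 4    # start, idle, wrong path (hole), end
n_actions = 4   # up, down, left, right

# Rows correspond to states, columns to actions; every Q-value starts at 0.
q_table = np.zeros((n_states, n_actions))
```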
• Choose an Action
• The second step is quite simple. At the start, the agent will
choose a random action (down or right), and on the
second run, it will use the updated Q-Table to select the action.
• Perform an Action
• Choosing an action and performing the action will repeat
multiple times until the training loop stops. The first action and
state are selected using the Q-Table. In our case, all values of
the Q-Table are zero.
• Then, the agent will move down and update the Q-Table using
the Bellman equation. With every move, we will be updating
values in the Q-Table and also using it for determining the best
course of action.
• Initially, the agent is in exploration mode and chooses a
random action to explore the environment. The Epsilon Greedy
Strategy is a simple method to balance exploration and
exploitation: epsilon is the probability of choosing to explore,
and the agent exploits its current knowledge the rest of the time.
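
A hedged sketch of the Epsilon Greedy Strategy described above; the function name and the q_table array (from the initialization sketch) are illustrative assumptions:

```python
import numpy as np

def choose_action(q_table, state, epsilon):
    """Explore with probability epsilon, otherwise exploit the best-known action."""
    if np.random.rand() < epsilon:
        return np.random.randint(q_table.shape[1])   # explore: random action
    return int(np.argmax(q_table[state]))            # exploit: greedy action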
• At the start, the epsilon rate is high, meaning the agent is in
exploration mode. As the agent explores the environment,
epsilon decreases and the agent starts to exploit what it has
learned. With every iteration, the agent becomes more
confident in its Q-value estimates.
• In the frozen lake example, the agent is initially unaware of the
environment, so it takes a random action (move down) to start.
The Q-Table is then updated using the Bellman equation.
• Measuring the Rewards
• After taking the action, we will measure the outcome and the
reward.
• The reward for reaching the goal is +1
• The reward for taking the wrong path (falling into the hole) is 0
• The reward for Idle or moving on the frozen lake is also 0.
• Update Q-Table
• We will update the function Q(St, At) using the update equation.
It uses the previous Q-value estimate, the learning rate, and the
temporal-difference (TD) error. The TD error is calculated from
the immediate reward, the discounted maximum expected future
reward, and the former Q-value estimate.
• The process is repeated multiple times until the Q-Table is
updated and the Q-value function is maximized.
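
The update described above is commonly written as Q(St, At) ← Q(St, At) + α[Rt+1 + γ · max_a Q(St+1, a) − Q(St, At)]. A minimal Python sketch of one such update, reusing the q_table from the earlier sketch (the alpha and gamma defaults are illustrative):

```python
def update_q(q_table, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Apply one Q-learning update using the temporal-difference (TD) error."""
    td_target = reward + gamma * np.max(q_table[next_state])   # best estimated future value
    td_error = td_target - q_table[state, action]              # TD error
    q_table[state, action] += alpha * td_error
```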
• At the start, the agent explores the environment to update
the Q-table. Once the Q-Table is ready, the agent starts
exploiting it and making better decisions.
• In the case of a frozen lake, the agent will learn to take the
shortest path to reach the goal and avoid jumping into the
holes.
• Q-learning Algorithm: Brief Overview
• Q-learning is a model-free reinforcement learning algorithm used to find the
optimal action-selection policy for an agent interacting with an environment.
It aims to learn the best actions to take under certain conditions to maximize
the cumulative reward.
• Key Components:
• Agent: The decision-maker.
• Environment: The world with which the agent interacts.
• State (S): A specific situation in the environment.
• Action (A): The moves the agent can take.
• Reward (R): Feedback from the environment based on the agent's action.
• Q-value (Q): The expected cumulative reward of taking an action in a given
state.
• Benefits:
• Model-Free: It does not require a model of the environment, making it
versatile.
• Simple Implementation: Easy to implement in discrete action spaces.
• Convergence: It converges to the optimal policy when using proper learning
parameters.
• Uses:
• Robotics: Q-learning is used to teach robots how to navigate environments
autonomously.
• Game AI: It is commonly applied in video games for AI agents to learn optimal
strategies.
• Finance: Q-learning can be used to optimize trading strategies by learning
from historical data.
• Game-Based Example:
• A popular use case is training a Pac-Man agent:
• States: The grid locations of Pac-Man and the ghosts.
• Actions: Moving up, down, left, or right.
• Rewards: Positive for eating dots and negative for being caught by ghosts.
• The agent learns the best policy to maximize its score while avoiding ghosts.
• Evaluation:
• Q-learning is evaluated by:
1.Cumulative Rewards: Measuring how much reward the agent accumulates
over episodes.
2.Policy Performance: Checking the optimality of the learned policy (whether it
selects actions that maximize long-term rewards).
• Numerical Problem Example:
• Consider a simple environment where:
• 3 states: S1, S2, S3
• 2 actions: A1 (left), A2 (right)
• Transition and reward matrix:
• From S1, taking A1 moves to S2 with reward +5; A2 moves to S3 with reward 0.
• From S2, taking A1 moves to S1 with reward 0; A2 moves to S3 with reward +10.
• From S3, both actions lead to terminal state with no reward.
• Assume learning rate α=0.5, discount γ=0.9, and initial Q-values as 0.
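
To make the numbers concrete, here is one possible first update, assuming the agent is in S1 and takes A1 while all Q-values are still 0 (an illustration, not part of the original problem statement):

```latex
Q(S_1, A_1) \leftarrow Q(S_1, A_1) + \alpha\big[r + \gamma \max_{a} Q(S_2, a) - Q(S_1, A_1)\big]
            = 0 + 0.5\,[5 + 0.9 \times 0 - 0] = 2.5
```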
• Advantages of Q-Learning:
1.Model-Free:
1. Q-learning does not require a model of the environment, meaning the agent doesn't
need to know the environment's dynamics beforehand. This makes it highly versatile
for different types of environments.
2.Guaranteed Convergence:
1. Given infinite exploration and a suitable learning rate, Q-learning will converge to an
optimal policy. This is a significant advantage when searching for the best action-
selection policy over time.
3.Simple Implementation:
1. The algorithm is relatively simple and easy to implement, especially in discrete state-
action spaces. It requires only a Q-table and simple updates.
• Exploration-Exploitation Balance:
• Q-learning naturally supports the exploration-exploitation trade-off, allowing
the agent to explore the environment and gradually learn the optimal
strategy.
• Off-Policy Learning:
• Q-learning is off-policy, meaning it can learn from actions taken outside the
current policy. This makes it more flexible and allows it to learn from different
sources (like simulated experiences or another agent's actions).
• Widely Applicable:
• Q-learning can be applied to a variety of domains, including robotics, game AI,
finance, and more. It is effective in situations where learning from trial and
error is necessary.
• Disadvantages of Q-Learning:
1.High Memory Usage for Large State Spaces:
1. Q-learning uses a Q-table to store values for every state-action pair. In large or
continuous state spaces, this leads to high memory consumption. The algorithm
doesn't scale well for very large environments.
2.Slow Convergence:
1. In complex environments with many states and actions, Q-learning may take a long
time to converge to an optimal policy, especially if the agent needs to explore many
different paths.
3.Lack of Generalization:
1. Since Q-learning assigns a specific value to each state-action pair, it does not generalize
well. Small changes in states (e.g., nearby grid cells in a game) are treated
independently, leading to inefficiencies.
4.Sensitive to Parameter Tuning:
1. The performance of Q-learning depends heavily on the selection of parameters like the
learning rate (α), discount factor (γ), and exploration rate (ϵ). Poor choices can result in
suboptimal learning or slow convergence.
• Inefficient in Continuous Action Spaces:
• In environments with continuous actions, Q-learning struggles since it needs
to maintain and update Q-values for all possible action-state pairs. This leads
to inefficiencies, and alternative methods like Deep Q-Networks (DQN) are
preferred in such cases.
• Exploration vs. Exploitation Balance:
• While Q-learning addresses the exploration-exploitation dilemma, balancing
them effectively over time is still challenging. Excessive exploration can slow
down learning, while too little exploration might result in a suboptimal policy.
• Note: Q-learning is an efficient and effective algorithm for small to medium-
sized problems, but it faces scalability issues in larger, more complex
environments.

• It’s also sensitive to the right hyperparameter settings, which requires careful
tuning for successful learning
• Neural Network Refinement in Q-Learning (Deep Q-Networks - DQNs)
• Neural Network Refinement in Q-Learning enhances the original Q-learning
algorithm by replacing the Q-table with a neural network.

• This approach is particularly useful for environments with large or continuous state spaces, such as video games or robotic control systems, where maintaining a Q-table is infeasible.
• Key Components of Neural Network Refinement in Q-Learning:
1.Q-Network: A neural network (NN) is used as a function approximator to
estimate the Q-values. The NN takes the current state as input and outputs
the Q-values for all possible actions. This helps the agent determine the best
action for any given state.
2.Experience Replay: Experience replay stores the agent's experiences (state,
action, reward, next state) in a buffer. Random batches of experiences are
sampled to train the neural network, breaking the correlation between
consecutive experiences and improving stability.
3.Target Network: A second, slower-updating target network is used to provide
stable Q-value updates. This prevents oscillations and divergence during
training. The weights of the target network are updated less frequently than
the primary Q-network.
4.Loss Function: The neural network is trained by minimizing a loss function
that measures the difference between the predicted Q-values and the target
values derived from the Bellman equation:
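
A standard form of this loss, with θ denoting the Q-network weights and θ⁻ the target-network weights, is:

```latex
L(\theta) = \mathbb{E}\Big[\big(r + \gamma \max_{a'} Q(s', a'; \theta^{-}) - Q(s, a; \theta)\big)^{2}\Big]
```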
• Example: Deep Q-Network in Games
• Consider a game like Atari Breakout, where the agent (a paddle) learns to hit
a ball to break blocks. The agent observes the game's frame (image) as the
state and must decide whether to move left or right. The neural network helps
the agent learn which actions will lead to higher scores, improving its
performance over time by updating Q-values for each action in each game
state.
• Advantages of Using Neural Networks in Q-Learning:
• Scalability: It can handle large or continuous state spaces.
• Generalization: Neural networks can generalize across unseen states,
improving learning efficiency.
• Experience Replay: It helps in stabilizing training by using random past
experiences.
• Disadvantages:
• Training Instability: Without techniques like target networks or experience
replay, training can become unstable.
• Resource Intensive: Requires more computational power and memory than
traditional Q-learning.

• Note: Neural Network Refinement in Q-learning, such as with DQNs, is a powerful extension of the Q-learning algorithm that scales to complex tasks, offering better learning capabilities for environments with large state-action spaces.
• Advantages:
1.Scalability:
1. Traditional Q-learning struggles with large or continuous state spaces, but Deep Q-Networks
(DQNs) can handle such environments efficiently. The neural network approximates Q-values,
allowing the algorithm to scale to more complex tasks like video games or robotic control.
2.Generalization:
1. Neural networks generalize across similar states, which enables better performance in unseen
environments. This is especially beneficial in environments where similar states may lead to
the same optimal actions.
3.Stabilized Learning:
1. With techniques like experience replay and target networks, the learning process in DQNs
becomes more stable compared to basic Q-learning. Experience replay avoids overfitting by
training on a variety of past experiences, while target networks help reduce fluctuations
during updates.
4.Continuous Actions:
1. Neural networks allow for Q-learning to be extended to environments with continuous action
spaces, a major limitation of basic Q-learning. Techniques such as Deep Deterministic Policy
Gradient (DDPG) enable the use of continuous actions.
• Disadvantages:
• High Computational Cost:
• Training a deep neural network can be resource-intensive, requiring substantial
computational power and memory, which can be prohibitive for real-time or embedded
applications.
• Training Instability:
• Without careful tuning (learning rate, exploration-exploitation balance, etc.), neural
network-based Q-learning can be prone to instability, oscillations, or divergence in the Q-
values.
• Longer Training Time:
• DQNs often require a longer time to converge compared to simpler algorithms due to the
complexity of neural networks and the large number of training episodes needed to
update weights effectively.
• Hyperparameter Sensitivity:
• Neural network-based algorithms are sensitive to hyperparameters such as learning rate,
discount factor, and exploration parameters, making fine-tuning critical for optimal
performance.
• A common real-world application of Deep Q-Learning is in Atari Games.
Google DeepMind’s DQN was famously used to achieve human-level
performance on a variety of Atari games.

• The neural network takes raw pixel input (game screens), processes it through
convolutional layers, and outputs Q-values for each possible action (e.g.,
moving left, right, or firing).
• Over time, the network learns to master the game by updating its Q-values to
maximize rewards, like breaking bricks in Breakout or eating pellets in Pac-
Man.
• Another use case is in robotics, where DQNs can help robots navigate, grasp
objects, or perform other tasks in complex environments by learning from
interaction rather than being explicitly programmed.
• Note: Neural Network Refinement in Q-learning, especially using Deep Q-
Networks (DQNs), significantly enhances the original algorithm's ability to
handle complex environments with large or continuous state spaces.

• While this method is resource-intensive and requires careful tuning, it has proven highly effective in tasks like gaming, robotics, and navigation, achieving state-of-the-art results where traditional Q-learning falls short.
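
Putting these components together, here is a minimal sketch of a DQN training step in Python with PyTorch; the network sizes, hyperparameters, and replay-tuple format are illustrative assumptions, not details from the original slides:

```python
import random
from collections import deque

import torch
import torch.nn as nn

def make_net(n_states, n_actions):
    # Small fully connected Q-network; layer sizes are illustrative.
    return nn.Sequential(nn.Linear(n_states, 64), nn.ReLU(), nn.Linear(64, n_actions))

n_states, n_actions, gamma = 4, 2, 0.99
q_net = make_net(n_states, n_actions)
target_net = make_net(n_states, n_actions)
target_net.load_state_dict(q_net.state_dict())      # target network starts as a copy
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                        # experience replay buffer

def train_step(batch_size=32):
    if len(replay) < batch_size:
        return
    # Sample a random batch of (state, action, reward, next_state, done) tuples.
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = zip(*batch)
    s = torch.tensor(s, dtype=torch.float32)
    a = torch.tensor(a, dtype=torch.int64)
    r = torch.tensor(r, dtype=torch.float32)
    s2 = torch.tensor(s2, dtype=torch.float32)
    done = torch.tensor(done, dtype=torch.float32)

    q_pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)    # Q(s, a) from the Q-network
    with torch.no_grad():
        q_next = target_net(s2).max(dim=1).values             # max_a' Q(s', a') from target net
    target = r + gamma * (1.0 - done) * q_next                # Bellman target

    loss = nn.functional.mse_loss(q_pred, target)             # loss from the Bellman equation
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```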
• Greedy Policy Refinement in Q-Learning
• In Q-Learning, the greedy policy refers to a strategy where the agent selects
actions based on the maximum Q-value for the current state.

• This ensures the agent exploits the best-known action according to its learned
Q-values.

• However, the basic greedy approach may fail in cases where the agent needs
to explore the environment to discover better actions.
• To address this, refinements like epsilon-greedy or softmax exploration are
often used.
• 1. Epsilon-Greedy Policy:
• In epsilon-greedy refinement, the agent follows the greedy policy most of the
time but occasionally takes random actions to explore the environment. The
policy is refined to balance exploration and exploitation:
• With probability ϵ, the agent explores by choosing a random action.
• With probability 1−ϵ, it exploits by selecting the action with the highest Q-
value.
• Over time, ϵ decays, so the agent explores less and focuses more on exploiting
the learned policy.
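
A small sketch of such a decaying-epsilon schedule (the decay rate, bounds, and episode count are illustrative assumptions):

```python
epsilon, epsilon_min, decay = 1.0, 0.05, 0.995

for episode in range(1000):
    # ... run one episode, exploring with probability epsilon ...
    epsilon = max(epsilon_min, epsilon * decay)   # gradually shift from exploration to exploitation
```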
• 2. Advantages:
• Exploration: Ensures the agent explores the environment and doesn't get
stuck in local optima by selecting random actions at times.
• Simplicity: Easy to implement and tune with a decaying ϵ value over
time.
• Efficiency: Works well with smaller environments where optimal actions can
be found through exploration.
• 3. Disadvantages:
• Suboptimal Behavior: During the early phases of training, the random actions
might lead to suboptimal rewards.
• Fixed Exploration: Even with epsilon decay, the exploration might not be
sufficient for large state spaces. A fixed decay rate might be inappropriate for
some environments.
• Inefficiency in Large State Spaces: In environments with large or continuous
state spaces, random exploration becomes less efficient as the chance of
finding optimal actions decreases.
• Neural networks extract identifying features from data without any
pre-programmed understanding. Network components include
neurons, connections, weights, biases, propagation functions,
and a learning rule. Neurons receive inputs, governed by
thresholds and activation functions.

• Connections involve weights and biases that regulate information transfer. Learning, i.e., adjusting the weights and biases, occurs in three stages: input computation, output generation, and iterative refinement, which enhances the network's proficiency in diverse tasks.
• These stages include:
1.The neural network is stimulated by an environment.
2.The free parameters of the neural network are changed
as a result of this stimulation.
3.The neural network then responds in a new way to the
environment because of the changes in its free parameters.
• Importance of Neural Networks
• The ability of neural networks to identify patterns, solve
intricate puzzles, and adjust to changing surroundings is
essential. Their capacity to learn from data has far-reaching
effects, ranging from revolutionizing technology like
natural language processing and self-driving automobiles to
automating decision-making processes and increasing
efficiency in numerous industries.

• The development of artificial intelligence is largely dependent on neural networks, which also drive innovation and influence the direction of technology.
• Working of a Neural Network
• Neural networks are complex systems that mimic some
features of the functioning of the human brain.
• A network is composed of an input layer, one or more hidden layers,
and an output layer, each made up of interconnected artificial
neurons. The basic process has two stages: forward propagation
and backpropagation.
• Forward Propagation
• Input Layer: Each feature in the input layer is represented by
a node on the network, which receives input data.
• Weights and Connections: The weight of each neuronal
connection indicates how strong the connection is. Throughout
training, these weights are changed.
• Hidden Layers: Each hidden layer neuron processes inputs
by multiplying them by weights, adding them up, and then
passing them through an activation function. By doing this,
non-linearity is introduced, enabling the network to recognize
intricate patterns.
• Output: The final result is produced by repeating the process
until the output layer is reached.
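
A minimal NumPy sketch of forward propagation through one hidden layer; the layer sizes and the ReLU activation are illustrative choices:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def forward(x, w1, b1, w2, b2):
    """Weighted sums plus activation in the hidden layer, then the output layer."""
    hidden = relu(x @ w1 + b1)    # hidden layer introduces non-linearity
    return hidden @ w2 + b2       # output layer produces the final result

# Illustrative shapes: 3 input features, 5 hidden neurons, 1 output.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 3))
w1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
w2, b2 = rng.normal(size=(5, 1)), np.zeros(1)
print(forward(x, w1, b1, w2, b2))
```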
• Backpropagation
• Loss Calculation: The network’s output is compared against the true target
values, and a loss function is used to compute the difference. For a regression
problem, the Mean Squared Error (MSE) is commonly used as the cost
function.
• Gradient Descent: Gradient descent is then used by the network to reduce the
loss. Weights are adjusted based on the derivative of the loss with respect to
each weight (see the update rule after this list).
• Adjusting weights: The weights are adjusted at each connection by applying
this iterative process, or backpropagation, backward across the network.
• Training: During training with different data samples, the entire process of
forward propagation, loss calculation, and backpropagation is done iteratively,
enabling the network to adapt and learn patterns from the data.
• Activation Functions: Model non-linearity is introduced by activation functions
like the rectified linear unit (ReLU) or sigmoid. Whether a neuron “fires” is
determined from its total weighted input.
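
In symbols, with MSE as the loss and η as the learning rate, each weight is moved against its gradient:

```latex
L = \frac{1}{N}\sum_{i=1}^{N} (y_i - \hat{y}_i)^2, \qquad
w \leftarrow w - \eta\,\frac{\partial L}{\partial w}
```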
• Types of Neural Networks
• Several types of neural networks are commonly used.
• Feedforward Networks: A feedforward neural network is a
simple artificial neural network architecture in which data
moves from input to output in a single direction. It has input,
hidden, and output layers; feedback loops are absent. Its
straightforward architecture makes it appropriate for a number
of applications, such as regression and pattern recognition.
• Multilayer Perceptron (MLP): MLP is a type of feedforward
neural network with three or more layers, including an input
layer, one or more hidden layers, and an output layer. It uses
nonlinear activation functions.
• Convolutional Neural Network (CNN): A
Convolutional Neural Network (CNN) is a specialized artificial
neural network designed for image processing. It employs
convolutional layers to automatically learn hierarchical
features from input images, enabling effective image
recognition and classification. CNNs have revolutionized
computer vision and are pivotal in tasks like object detection
and image analysis.
• Recurrent Neural Network (RNN): An artificial neural
network type intended for sequential data processing is called
a Recurrent Neural Network (RNN). It is appropriate for
applications where contextual dependencies are critical, such
as time series prediction and natural language processing,
since it makes use of feedback loops, which enable information
to survive within the network.
• Long Short-Term Memory (LSTM): LSTM is a type of RNN designed to
capture long-term dependencies in sequential data, using gated memory
cells to retain information over long sequences.
• Advantages of Neural Networks
• Neural networks are widely used in many different applications
because of their many benefits:
• Adaptability: Neural networks are useful for activities where the link
between inputs and outputs is complex or not well defined because
they can adapt to new situations and learn from data.
• Pattern Recognition: Their proficiency in pattern recognition
makes them effective in tasks such as audio and image
recognition, natural language processing, and other intricate data
patterns.
• Parallel Processing: Because neural networks are capable of
parallel processing by nature, they can process numerous jobs at
once, which speeds up and improves the efficiency of computations.
• Non-Linearity: Neural networks are able to model and comprehend
complicated relationships in data by virtue of the non-linear activation
functions found in neurons, which overcome the drawbacks of linear
models.
• Disadvantages of Neural Networks
• Neural networks, while powerful, are not without drawbacks and
difficulties:
• Computational Intensity: Training large neural networks can
be a slow, computationally demanding process that requires
substantial computing power.
• Black box Nature: As “black box” models, neural networks
pose a problem in important applications since it is difficult to
understand how they make decisions.
• Overfitting: Overfitting is a phenomenon in which neural
networks memorize the training data rather than learning
generalizable patterns. Although regularization approaches
help to alleviate this, the problem still exists.
• Need for Large datasets: For efficient training, neural
networks frequently need sizable, labeled datasets; otherwise,
their performance may suffer from incomplete or skewed data.
• Example:
• Image Recognition (e.g., Google Photos)
• Neural networks, particularly Convolutional Neural Networks (CNNs), are widely
used in image recognition tasks like Google Photos automatically classifying and
organizing images based on the objects or people in them.
• How it works:
• Convolutional layers scan the image to detect features like edges, textures, and
colors.
• As data moves through deeper layers of the CNN, more abstract features like faces or
objects (e.g., dogs, trees) are detected.
• The final output layer classifies the image into predefined categories (e.g., cat, car,
person).
• Training: The network is trained on millions of labeled images, learning the patterns
that distinguish between different categories.
• Application: Google Photos uses this to automatically tag images with objects and
even suggest groupings based on people’s faces.
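
A tiny sketch of such a CNN classifier in Python with PyTorch; the input size (3×32×32 images) and the 10 output categories are illustrative assumptions, not details of Google Photos:

```python
import torch.nn as nn

# Convolutional layers extract features; the final linear layer classifies.
cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # low-level features: edges, textures
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # more abstract features
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),   # assumes 32x32 inputs -> 8x8 feature maps, 10 classes
)
```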
• Self-Driving Cars (e.g., Tesla Autopilot)
• Self-driving cars, such as Tesla’s Autopilot, use Deep Neural Networks (DNNs) and
CNNs to interpret and understand their surroundings, make driving decisions, and
navigate safely.
• How it works:
• Cameras, radar, and sensors on the car feed visual and environmental data into a CNN,
which processes the information.
• CNNs are used to detect objects on the road (pedestrians, other vehicles, traffic signs)
and lane markings.
• The system also uses Reinforcement Learning to learn driving strategies (e.g., when to
change lanes, adjust speed) based on continuous feedback from driving conditions.
• Training: The model is trained on large datasets collected from driving in various
environments, including city streets, highways, and parking lots.
• Application: The neural network helps the car navigate, avoid obstacles, maintain
lanes, and follow traffic rules, enabling semi-autonomous or fully autonomous driving.
• Fraud Detection (e.g., Banking Systems)
• Banks and credit card companies use neural networks, especially Feedforward
Neural Networks and Autoencoders, to detect fraudulent transactions.
• How it works:
• Transaction data, such as the time, location, amount, and user behavior, is fed
into a neural network.
• The network has learned patterns of typical (legitimate) transactions and can
identify anomalies that might indicate fraud (e.g., an unusual purchase in a
foreign country).
• The output is a probability score, indicating whether a transaction is fraudulent.
• Training: The network is trained on historical data that includes both legitimate
and fraudulent transactions. It learns to differentiate between them by
recognizing patterns.
• Application: This system flags suspicious transactions for further investigation,
potentially preventing unauthorized access to funds.
• Healthcare (e.g., Disease Diagnosis)
• Neural networks are being used to assist doctors in diagnosing diseases such
as cancer by analyzing medical images (e.g., X-rays, MRIs) and patient data.
• How it works:
• A CNN can process medical images to detect features such as abnormal
growths, which might indicate cancer or other diseases.
• For other types of data, such as patient history or lab results, Feedforward
Neural Networks analyze structured data and predict the likelihood of a
certain disease.
• Training: The neural network is trained on medical images or patient data
labeled by doctors (e.g., positive/negative cases of a disease).
• Application: This technology can assist radiologists in identifying tumors more
quickly and accurately, leading to early detection and improved treatment
plans.
• Personalized Recommendations (e.g., Netflix, Amazon)
• Recommendation Systems used by platforms like Netflix and Amazon rely
heavily on neural networks to suggest content or products based on a user’s
past behavior.
• How it works:
• A Collaborative Filtering model, often using Neural Networks, analyzes user
behavior (what they watch or buy) and predicts what they may be interested
in next.
• The network learns the preferences of millions of users and uses this data to
find similarities between users and items.
• Training: The model is trained on historical user data, including what content
or products users have interacted with.
• Application: Netflix suggests shows and movies based on viewing history,
while Amazon recommends products you are likely to purchase based on past
shopping behavior.
