
A.I Paper 1
Q.1] Attempt all of the following
A] MCQs
1) SVMs create a linear separating hyperplane, but they have the ability to
embed the data into a higher-dimensional space using the so-called __________.
A) Kernel trick B) Support Vectors C) Margin D) Decision Boundary
2) Software agents are also called __________.
A) Hard Bots B) Soft Bots C) Firmware D) Ghosts
3) The ______________ states that a program cannot give a mind to a computer.
A) Chinese room experiment B) Turing test C) Agent D) None of these
4) A* uses __________________.
A) f(n) = g(n) + h(n) B) f(n) = g(n) C) f(n) = h(n) D) f(n) = c

B] Answer the following.


1) A Recurrent Neural Network (RNN) feeds its output back as its own input.
2) An agent solving a crossword problem by itself is in a static environment.
3) Overfitting happens when a model learns the detail and noise in the training
data to the extent that it negatively impacts the performance of the model on
new data.
4) The breadth-first search algorithm uses a queue data structure.

C] Answer in 1-2 sentences.


1) What is meant by informed search?
→ It is a type of search algorithm that uses domain-specific knowledge to
guide its search through a problem space.
2) What is state space?
→ It is a discrete space representing the set of all possible configurations of a
system.
3) State Ockham’s Razor.
→ When there are multiple hypotheses that solve a problem, the simplest one
should be preferred.
4) What is the goal test function?
→ It is a test that determines whether a given state is a goal state.
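
As a minimal illustration (a toy route-finding problem is assumed here), a
goal test is simply a predicate on states:

GOAL_STATE = "Bucharest"   # hypothetical goal for illustration

def goal_test(state):
    # True exactly when the given state is a goal state
    return state == GOAL_STATE

print(goal_test("Arad"), goal_test("Bucharest"))   # False True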

Q.2] Answer any THREE of the following.


I) Explain "Goal-Based Agent" with a schematic diagram and pseudocode.

→ A goal-based agent is an intelligent agent that selects its actions based on the
desirability of their outcomes. It uses a process called goal formulation to
determine what it wants to achieve, and then selects actions that will bring it closer
to its goals. The agent has a goal or set of goals that it is trying to achieve, and it
uses its knowledge about the world and its actions to determine the best course of
action to take in order to achieve those goals.

Here is a schematic diagram that illustrates the basic architecture of a
goal-based agent:
+----------------+
|      Goal      |
|  Formulation   |
+-------+--------+
        |
        v
+-------+--------+
|   Knowledge    |
|      Base      |
+-------+--------+
        |
        v
+-------+--------+
|     Action     |
|   Selection    |
+----------------+
And here is an example of pseudocode for a simple goal-based agent:

# Define the agent's goals
goals = ["achieve_goal_1", "achieve_goal_2", ...]

# Define the agent's knowledge base
knowledge_base = {...}

# Define the agent's possible actions
actions = ["action_1", "action_2", ...]

# Evaluate the desirability of an action
def evaluate_action(action):
    # Use the knowledge base and goals to determine how desirable the action is
    desirability = ...
    return desirability

# Select the best action to take
best_action = max(actions, key=evaluate_action)

# Take the selected action
take_action(best_action)

II) Explain any two applications of Artificial Intelligence.


→ Artificial Intelligence (AI) has a wide range of applications in various
industries. Here are two examples of how AI is being used:

E-Commerce: AI is used in e-commerce to create personalized shopping
experiences for customers. For example, AI technology is used to create
recommendation engines that suggest products to customers based on their
browsing history, preferences, and interests. This helps improve customer
engagement and loyalty towards the brand. Additionally, AI-powered virtual
shopping assistants and chatbots can help improve the user experience while
shopping online. Natural Language Processing (NLP) is used to make the
conversation with these assistants as human-like and personal as possible.

Healthcare: AI has various applications in the healthcare industry. For
example, AI can be used for medical diagnosis, where it can analyze patient
data and medical images to help doctors make more accurate diagnoses. AI can
also be used to develop personalized treatment plans for patients based on
their medical history and genetic information. Additionally, AI can help with
administrative tasks such as scheduling appointments and managing patient
records, freeing up time for healthcare professionals to focus on providing
high-quality care to their patients.
III) Formulate the “8-Queens Puzzle” as a search problem.
→ The 8-Queens puzzle requires placing eight queens on a chessboard so that
no queen attacks any other. It can be formulated as a search problem as
follows (the incremental formulation):
• States: any arrangement of 0 to 8 queens on the board in which no queen
attacks another.
• Initial state: the empty board (no queens placed).
• Actions (successor function): add a queen to the leftmost empty column, in
any square that is not attacked by a queen already on the board.
• Goal test: 8 queens are on the board and no queen attacks another.
• Path cost: of no interest here, since only the final configuration matters;
each move can be given a cost of zero (or one).
• This formulation fills the board column by column, which avoids generating
redundant child states.
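
A minimal sketch of this formulation in Python (the depth-first driver at the
end is included only to show the formulation in action):

def attacks(r1, c1, r2, c2):
    # Queens attack along rows and diagonals (columns differ by construction).
    return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

def successors(state):
    # A state is a tuple of queen rows, one per filled column from the left.
    col = len(state)
    return [state + (row,) for row in range(8)
            if not any(attacks(row, col, r, c) for c, r in enumerate(state))]

def goal_test(state):
    return len(state) == 8   # 8 queens placed; none attack, by construction

def solve(state=()):
    if goal_test(state):
        return [state]
    return [sol for nxt in successors(state) for sol in solve(nxt)]

print(len(solve()))   # 92, the number of 8-Queens solutions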

IV) Consider a Block Puzzle as given below. The task is to reach the goal
state from the initial state by moving only one block at a time. We assign
a heuristic function h(n) such that for every block that is resting on an
incorrect block, we count the blocks below it and give -1 for each;
otherwise, we give +1 for each block below it. Calculate the heuristic
values of the states that are generated from the initial state.
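
The puzzle diagram is not reproduced here, so the following is only a hedged
sketch of how such heuristic values could be computed. It assumes a state is
represented as a list of stacks (each listed bottom to top) and reads the
heuristic as "+1 per block below a correctly supported block, -1 per block
below an incorrectly supported one":

def h(state, goal_below):
    # goal_below maps each block to the block (or "table") it should rest on.
    score = 0
    for stack in state:
        for height, block in enumerate(stack):
            below = stack[height - 1] if height > 0 else "table"
            if goal_below[block] == below:
                score += height   # correct support: +1 per block below it
            else:
                score -= height   # wrong support: -1 per block below it
    return score

# Hypothetical example: the goal is the single stack A-B-C-D (A on the table).
goal_below = {"A": "table", "B": "A", "C": "B", "D": "C"}
print(h([["A", "B", "C", "D"]], goal_below))   # 0 + 1 + 2 + 3 = 6
print(h([["D", "C", "B", "A"]], goal_below))   # 0 - 1 - 2 - 3 = -6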

V) Explain the following with respect to task environment.


Deterministic:
In the field of Artificial Intelligence (AI), a deterministic approach refers
to a method where the outcome can be determined from a specific state. This
means that for a given input, the AI will always produce the same output. It
follows its algorithm exactly; there is no "free will" in AI, it is all
mathematics and algorithms.
Deterministic AI techniques are predictable, fast, and easy to implement,
understand, test, and debug. They are commonly used in game AI and are
straightforward to build. However, deterministic approaches place the task of
anticipating all circumstances and explicitly writing all behavior on the
developers' shoulders.

In terms of task environment, deterministic AI environments are those where
the outcome of an action is completely determined by the state and the action
itself. This means that if we know the current state and the action taken by
the AI, we can predict with certainty what the next state will be.

On the other hand, in non-deterministic environments, even if we know the
current state and action, we cannot predict with certainty what the next
state will be, due to randomness or other factors.

For example, a chess game is a deterministic environment because the next state
of the game (i.e., the positions of all pieces on the board) is completely
determined by the current state and the move made by a player. Conversely, a
card game like poker is non-deterministic because even if we know the current
state (i.e., the cards in each player's hand) and action (i.e., a player's decision to
draw a card), we cannot predict with certainty what card will be drawn next.

In summary, deterministic AI approaches are particularly useful in
environments where outcomes can be accurately predicted based on current
states and actions. However, they may not be as effective in environments
with inherent uncertainty or randomness.
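
A minimal sketch of the distinction (the toy step functions below are
illustrative assumptions, not a standard API):

import random

def deterministic_step(state, action):
    # The next state is fully determined by (state, action).
    return state + action

def stochastic_step(state, action):
    # With 20% probability the action "slips" and has no effect, so the
    # next state cannot be predicted with certainty.
    return state + action if random.random() < 0.8 else state

print(deterministic_step(3, 1), deterministic_step(3, 1))   # always 4 4
print(stochastic_step(3, 1), stochastic_step(3, 1))         # usually 4, sometimes 3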

Multiagent:
A multi-agent system (MAS) in Artificial Intelligence (AI) is a system
composed of multiple interacting intelligent agents. These agents, which can
be autonomous or semi-autonomous, are capable of perceiving their
environment, making decisions, and taking actions to achieve a common
objective.
In terms of the task environment, multi-agent systems can be applied in a
variety of fields, including AI, economics, and sociology. They are
particularly useful in environments where tasks are complex and require the
cooperation of multiple agents. For example, in a traffic management system,
multiple agents (representing individual vehicles) can interact with each
other to optimize traffic flow and reduce congestion.

Multi-agent systems have several advantages over traditional single-agent
systems:
1. **Increased Efficiency**: A MAS can automate tasks that would otherwise be
completed by humans, freeing up time for humans to focus on other tasks.
2. **Improved Accuracy**: A MAS can provide more data to work with, helping
to reduce the chances of errors being made.
3. **Increased Flexibility**: A MAS can adapt to changing conditions, making
it more robust.
4. **Reduced Costs**: A MAS can automate tasks that would otherwise be
completed by humans, saving money on labor costs.
5. **Increased Scalability**: A MAS can handle more data, making it more
effective.

However, designing and managing a multi-agent system can be complex. Agents
may have conflicting goals and may need to be coordinated. Also, agents may
need to learn and adapt to changing conditions.

Despite these challenges, multi-agent systems offer a powerful approach for
solving complex problems in AI.

Q.3] Attempt any THREE of the following:


1) Explain Artificial Neural Network.
→ Artificial Neural Networks (ANNs) are computing systems designed to
simulate the way the human brain analyzes and processes information. They are
modeled after the neurons in the human brain.

An Artificial Neural Network contains artificial neurons, which are called
units. These units are arranged in a series of layers that together
constitute the whole Artificial Neural Network in a system. A typical
Artificial Neural Network has an input layer, an output layer, as well as
hidden layers.

The input layer receives data from the outside world which the neural network
needs to analyze or learn about. This data then passes through one or more
hidden layers that transform the input into data that is valuable for the
output layer. Finally, the output layer provides an output in the form of the
network's response to the input data provided.

In most neural networks, units are interconnected from one layer to another.
Each of these connections has a weight that determines the influence of one
unit on another. As data transfers from one unit to another, the neural
network learns more and more about the data, which eventually results in an
output from the output layer.

Artificial Neural Networks have self-learning capabilities that enable them
to produce better results as more data becomes available. They are a subset
of machine learning and are at the heart of deep learning algorithms.
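
A minimal sketch of a forward pass through such a network using NumPy (the
layer sizes, random weights, and sigmoid activation are illustrative
assumptions):

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # weights: 3 input units -> 4 hidden units
W2 = rng.normal(size=(4, 1))   # weights: 4 hidden units -> 1 output unit

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0, 2.0])   # one input example
hidden = sigmoid(x @ W1)         # hidden-layer activations
output = sigmoid(hidden @ W2)    # the network's response to the input
print(output)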

2) Explain logistic regression with an example.

→ Logistic regression is a statistical method used to model the relationship
between a binary dependent variable and one or more independent variables.
The goal of logistic regression is to predict the probability of an event
occurring based on a set of predictor variables.

In logistic regression, the dependent variable is binary, meaning it can only
take on two values, typically labeled 0 or 1. The independent variables can
be either continuous or categorical. The logistic regression model is based
on the logistic function, an S-shaped curve that maps any continuous input to
a probability value between 0 and 1.

The equation for logistic regression is:

$$p = \frac{1}{1 + e^{-z}}$$

where `p` is the probability of the dependent variable taking on the value 1,
and `z` is a linear combination of the independent variables and their
coefficients:

$$z = b_0 + b_1 x_1 + b_2 x_2 + \dots + b_n x_n$$

where `b_0` is the intercept term and `b_1, b_2, ..., b_n` are the
coefficients of the independent variables `x_1, x_2, ..., x_n`, respectively.

For example, let's say we want to predict whether a student will pass (1) or fail
(0) an exam based on their hours of study. Here, 'pass or fail' is our dependent
variable (y), and 'hours of study' is our independent variable (x). We could fit a
logistic regression model to our data and use it to predict the probability of
passing the exam based on hours of study.
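
A hedged sketch of this example with scikit-learn (the study hours and
pass/fail labels below are made-up illustrative data):

import numpy as np
from sklearn.linear_model import LogisticRegression

hours = np.array([[1], [2], [3], [4], [5], [6], [7], [8]])   # x: hours of study
passed = np.array([0, 0, 0, 0, 1, 1, 1, 1])                  # y: fail/pass

model = LogisticRegression().fit(hours, passed)

# Predicted probability of passing after 4.5 hours of study
print(model.predict_proba([[4.5]])[0, 1])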

Logistic regression is commonly used in fields such as healthcare, marketing,
finance, and the social sciences to predict the likelihood of an event
occurring, such as whether a patient has a certain disease or whether a
customer will buy a product.

3) Explain Support Vector Machine.


→ Support Vector Machine (SVM) is a powerful machine learning algorithm used
for linear or nonlinear classification, regression, and even outlier
detection tasks. SVMs can be used for a variety of tasks, such as text
classification, image classification, spam detection, handwriting
identification, gene expression analysis, face detection, and anomaly
detection.

The main objective of the SVM algorithm is to find the optimal hyperplane in
an N-dimensional space that separates the data points of different classes in
the feature space. The hyperplane is chosen so that the margin between the
closest points of the different classes is as large as possible. The
dimension of the hyperplane depends upon the number of features: if the
number of input features is two, the hyperplane is just a line; if the number
of input features is three, the hyperplane becomes a 2-D plane.

SVM chooses the extreme points/vectors that help in creating the hyperplane.
These extreme cases are called support vectors, and hence the algorithm is
termed a Support Vector Machine.

SVM can be of two types:
- **Linear SVM**: Linear SVM is used for linearly separable data, which means
that if a dataset can be classified into two classes by using a single
straight line, then such data is termed linearly separable data.
- **Non-linear SVM**: Non-linear SVM is used for non-linearly separable data,
which means that if a dataset cannot be classified by using a straight line,
then such data is termed non-linear data.

SVMs are adaptable and efficient in a variety of applications because they
can manage high-dimensional data and nonlinear relationships. They are very
effective as they try to find the maximum separating hyperplane between the
different classes in the target feature.
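
A minimal sketch of both types with scikit-learn (the two-moons toy dataset
is an illustrative assumption):

from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

linear_svm = SVC(kernel="linear").fit(X, y)   # linear separating hyperplane
rbf_svm = SVC(kernel="rbf").fit(X, y)         # kernel trick: nonlinear boundary

print(linear_svm.score(X, y), rbf_svm.score(X, y))   # the RBF kernel fits better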

4) Calculate the info below.



Q.4) Answer any three of the following.
1) Explain reinforcement learning.
→ Reinforcement Learning is a feedback-based machine learning technique in
which an agent learns to behave in an environment by performing actions and
seeing the results of those actions. For each good action, the agent gets
positive feedback, and for each bad action, the agent gets negative feedback
or a penalty.
• In Reinforcement Learning, the agent learns automatically from feedback,
without any labeled data, unlike supervised learning.
• Since there is no labeled data, the agent is bound to learn from its
experience only.
• RL solves a specific type of problem where decision making is sequential
and the goal is long-term, such as game playing, robotics, etc.
• The agent interacts with the environment and explores it by itself. The
primary goal of an agent in reinforcement learning is to improve its
performance by getting the maximum positive reward.
• The agent learns by trial and error, and based on its experience, it learns
to perform the task in a better way. Hence, we can say that "Reinforcement
learning is a type of machine learning method where an intelligent agent
(computer program) interacts with the environment and learns to act within
it." How a robotic dog learns the movement of its legs is an example of
reinforcement learning.
• It is a core part of Artificial Intelligence, and many AI agents work on
the concept of reinforcement learning. Here we do not need to pre-program the
agent, as it learns from its own experience without any human intervention.
• Example: Suppose there is an AI agent within a maze environment, and its
goal is to find the diamond. The agent interacts with the environment by
performing actions; based on those actions, the state of the agent changes,
and it also receives a reward or penalty as feedback.
• The agent continues doing these three things (take an action, change state
or remain in the same state, and get feedback), and by doing so it learns and
explores the environment.
• The agent learns which actions lead to positive feedback or rewards and
which lead to negative feedback or penalties. As a positive reward, the agent
gets a positive point, and as a penalty, it gets a negative point.
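
A minimal Q-learning sketch of this action-feedback loop (the one-dimensional
corridor environment below is an illustrative assumption):

import random

n_states = 5                    # states 0..4; the "diamond" is at state 4
actions = [-1, +1]              # move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(s, a):
    s2 = max(0, min(n_states - 1, s + a))
    return s2, (1.0 if s2 == n_states - 1 else 0.0)   # reward at the goal

def choose(s):
    if random.random() < epsilon:                     # explore
        return random.choice(actions)
    # exploit, breaking ties randomly
    return max(actions, key=lambda a: (Q[(s, a)], random.random()))

for episode in range(200):
    s = 0
    while s != n_states - 1:
        a = choose(s)
        s2, r = step(s, a)
        best_next = max(Q[(s2, a2)] for a2 in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

print(max(actions, key=lambda a: Q[(0, a)]))   # learned first move: +1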
2) Explain Naive Bayes model with an example.
→ Naive Bayes is a classification algorithm based on Bayes' Theorem that
assumes all predictors are independent. This means that the presence of a
particular feature in a class is unrelated to the presence of any other
feature.

For example, let's consider a dataset that describes the weather conditions
for playing a game of golf. Given the weather conditions, each tuple
classifies the conditions as fit ("Yes") or unfit ("No") for playing golf.
Here is a tabular representation of our dataset:

| Outlook  | Temperature | Humidity | Windy | Play Golf |
|----------|-------------|----------|-------|-----------|
| Rainy | Hot | High | False | No |
| Rainy | Hot | High | True | No |
| Overcast | Hot | High | False | Yes |
| Sunny | Mild | High | False | Yes |
| Sunny | Cool | Normal | False | Yes |
| Sunny | Cool | Normal | True | No |
| Overcast | Cool | Normal | True | Yes |
| Rainy | Mild | High | False | No |
| Rainy | Cool | Normal | False | Yes |
| Sunny | Mild | Normal | False | Yes |
| Rainy | Mild | Normal | True | Yes |
| Overcast | Mild | High | True | Yes |
| Overcast | Hot | Normal | False | Yes |
| Sunny | Mild | High | True | No |

In this dataset, the features are 'Outlook', 'Temperature', 'Humidity', and
'Windy', and the class variable is 'Play Golf'. The Naive Bayes model assumes
that no pair of features is dependent: for example, the temperature being
'Hot' has nothing to do with the humidity, and the outlook being 'Rainy' has
no effect on the wind. Hence, the features are assumed to be independent.
Secondly, each feature is given the same weight (importance): knowing only
the temperature and humidity alone cannot predict the outcome accurately. No
attribute is irrelevant, and all are assumed to contribute equally to the
outcome.

Despite these assumptions often not being met in real-world data, Naive Bayes
classifiers can be surprisingly effective and are particularly popular in text
classification tasks.
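
A minimal sketch of the Naive Bayes computation on the golf dataset above, in
plain Python (the query day is a made-up example, and no smoothing is
applied):

data = [
    ("Rainy", "Hot", "High", False, "No"),
    ("Rainy", "Hot", "High", True, "No"),
    ("Overcast", "Hot", "High", False, "Yes"),
    ("Sunny", "Mild", "High", False, "Yes"),
    ("Sunny", "Cool", "Normal", False, "Yes"),
    ("Sunny", "Cool", "Normal", True, "No"),
    ("Overcast", "Cool", "Normal", True, "Yes"),
    ("Rainy", "Mild", "High", False, "No"),
    ("Rainy", "Cool", "Normal", False, "Yes"),
    ("Sunny", "Mild", "Normal", False, "Yes"),
    ("Rainy", "Mild", "Normal", True, "Yes"),
    ("Overcast", "Mild", "High", True, "Yes"),
    ("Overcast", "Hot", "Normal", False, "Yes"),
    ("Sunny", "Mild", "High", True, "No"),
]

def cond_prob(idx, value, cls):
    # P(feature_idx = value | Play Golf = cls), estimated from counts
    rows = [r for r in data if r[4] == cls]
    return sum(1 for r in rows if r[idx] == value) / len(rows)

query = ("Sunny", "Cool", "High", True)   # a hypothetical new day
scores = {}
for cls in ("Yes", "No"):
    score = sum(1 for r in data if r[4] == cls) / len(data)   # prior P(cls)
    for idx, value in enumerate(query):
        score *= cond_prob(idx, value, cls)   # independence assumption
    scores[cls] = score

print(scores)
print("Prediction:", max(scores, key=scores.get))   # "No" for this query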
3) Explain any two applications of reinforcement learning.
→ 1. RL in Marketing
Marketing is all about promoting and then selling the products or services of
your own brand or someone else's. In the process of marketing, finding the
right audience, the one that yields a larger return on the investment you or
your company is making, is a challenge in itself.

This is one of the reasons companies invest in digitally managing various
marketing campaigns. Through real-time bidding, which is well supported by
the fundamental capabilities of RL, companies small or large can expect:

• more display ad impressions in real time;
• increased ROI and profit margins;
• better prediction of the choices, reactions, and behavior of customers
towards their products/services.
2. RL in Broadcast Journalism
Through different types of reinforcement learning, attracting likes and views
and tracking a reader's behavior become much simpler. Besides, recommending
news that suits the frequently changing preferences of readers and other
online users becomes achievable, since journalists can now be equipped with
an RL-based system that keeps an eye on intuitive news content as well as the
headlines. Reinforcement learning also offers readers around the world the
following advantages:

• News producers are now able to receive feedback from their users
instantaneously.
• Increased communication, as users are more expressive now.
• Less room for disinformation and hatred.

4) Explain Policy search.

→ Policy search in Artificial Intelligence (AI) is a method used in
reinforcement learning algorithms to find the optimal policy. A policy
defines the learning agent's way of behaving at a given time.

In other words, a policy is a strategy that the agent employs to determine
the next action based on the current state. Policy search algorithms aim to
find the policy that maximizes the agent's total reward or minimizes its cost
over time.
Policy search can be categorized into three types:
1. **Deterministic Policy Search**: here the policy is a function that maps
states to actions.
2. **Stochastic Policy Search**: here the policy outputs a probability
distribution over actions.
3. **Genetic Algorithms**: optimization algorithms based on the principles of
genetics and natural selection, used when the policy space is discrete.

Policy search methods can be particularly useful in high-dimensional action
spaces where traditional value-based methods (like Q-learning) may struggle.

For example, consider an AI-driven robot that needs to learn how to navigate
through a maze to reach a goal. The robot can take actions like moving
forward, turning left, or turning right. The policy defines what action the
robot should take at each state (location in the maze). The robot uses policy
search to find the policy that gets it to the goal as quickly as possible.

In summary, policy search is a critical component of reinforcement learning
algorithms, enabling them to learn optimal behaviors for complex tasks.
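
A minimal sketch of direct policy search (random hill climbing over a single
policy parameter; the toy episode and reward function are illustrative
assumptions):

import random

def run_episode(theta):
    # A threshold policy: act 1 if the state exceeds theta, else act 0.
    # Reward +1 when the action matches the sign of the state, else -1,
    # so the optimal threshold is theta = 0.
    total = 0.0
    for _ in range(20):
        state = random.uniform(-1, 1)
        action = 1 if state > theta else 0
        total += 1.0 if (state > 0) == (action == 1) else -1.0
    return total

theta, best = 0.5, float("-inf")
for _ in range(200):
    candidate = theta + random.gauss(0, 0.1)              # perturb the policy
    score = sum(run_episode(candidate) for _ in range(10))
    if score > best:
        theta, best = candidate, score                    # keep better policies

print(theta)   # drifts toward 0, the optimal threshold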
Q.5) Answer any three of the following.
1) Write a pseudocode implementation of a simple search algorithm.

→ Here is simple pseudocode for a linear search algorithm, one of the
simplest search algorithms:

Procedure linear_search(A, n, x)
    Input:  an array A[0..n-1] and a search key x
    Output: the index of the first occurrence of x in A, or -1 if x is not in A

    for i from 0 to n-1 do
        if A[i] == x then
            return i
    return -1
In this pseudocode:

• A is the array to be searched.
• n is the size of the array.
• x is the value to be searched for in the array.
• The algorithm iterates over each element in the array from the first index
(0) to the last index (n-1).
• If it finds an element that matches x, it returns the index of that element.
• If it doesn't find x in the array, it returns -1.

This algorithm has a time complexity of O(n), where n is the number of
elements in the array, because in the worst case it might have to look at
every element in the array once. This makes it less efficient for large lists
compared to other search algorithms like binary search. However, it is very
simple and works well for small lists or unsorted data.

2) Explain ensemble learning algorithm.

→ Ensemble learning is a general meta-approach to machine learning that seeks
better predictive performance by combining the predictions from multiple
models. The idea of ensemble learning is closely related to the idea of the
"wisdom of crowds", where many different independent decisions, choices, or
estimates are combined into a final outcome that is often more accurate than
any single contribution.

There are three main classes of ensemble learning methods:

1. **Bagging**: fitting many decision trees on different samples of the same
dataset and averaging the predictions.

2. **Stacking**: fitting many different model types on the same data and
using another model to learn how best to combine the predictions.

3. **Boosting**: adding ensemble members sequentially that correct the
predictions made by prior models, and outputting a weighted average of the
predictions.

Each of these methods is a field of study that has spawned many more
specialized methods. The main challenge in developing ensemble models is not
to obtain highly accurate base models, but rather to obtain base models that
make different kinds of errors. For example, if ensembles are used for
classification, high accuracy can be achieved if the different base models
misclassify different training examples, even if the accuracy of each base
classifier is low.
Ensemble learning helps improve machine learning results by combining several
models. This approach allows the production of better predictive performance
compared to a single model.
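
A hedged sketch of bagging and a simple model-combination (voting) ensemble
with scikit-learn (the synthetic dataset is an illustrative assumption):

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)

# Bagging: many trees fitted on bootstrap samples of the same dataset
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50)

# Voting: different model types combined into one prediction
voting = VotingClassifier([
    ("tree", DecisionTreeClassifier()),
    ("lr", LogisticRegression(max_iter=1000)),
    ("nb", GaussianNB()),
])

for model in (bagging, voting):
    print(model.fit(X, y).score(X, y))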

3) Write a short note on Active reinforcement learning.

→ Active Reinforcement Learning (ARL) is a variant of reinforcement learning
where the agent does not observe the reward unless it chooses to pay a query
cost. The central question of ARL is how to quantify the long-term value of
reward information.
In traditional reinforcement learning, an agent interacts with its
environment by taking actions, receiving feedback from the environment, and
updating its knowledge with the aim of maximizing long-term rewards. However,
in active reinforcement learning, the agent actively chooses the actions to
perform based on the current state of the environment. This means that the
agent has complete control over its actions and is free to explore different
options to determine the best way to maximize its reward.

For example, consider an agent navigating a maze. In traditional
reinforcement learning, the agent would take an action (like moving forward),
observe the result (like hitting a wall or finding a path), and receive a
reward or penalty. But in active reinforcement learning, the agent might need
to decide whether it is worth paying a cost to find out whether moving
forward would result in hitting a wall.

Active reinforcement learning combines the strengths of offline planning and
online exploration. It allows domain experts to specify possibly inaccurate
models of the world offline; however, instead of using this model for
planning, the algorithm uses it as a blueprint for exploration.

4) Explain local maxima problem faced by hill climbing algorithm.

→ The hill climbing algorithm is a local search algorithm that continuously
moves in the direction of increasing value to find the peak of the mountain,
i.e., the best solution to the problem. However, one of the main issues with
hill climbing algorithms is that they may end up in local maxima.

A local maximum is a state that is better than its neighboring states, even
though another, higher state exists elsewhere. Once the algorithm reaches a
point where all neighboring states have a value worse than the current state,
it stops: hill climbing uses a greedy approach and will not move to a worse
state.

The problem with this is that the search ends even though a better solution
may exist. This means that hill climbing will not necessarily find the global
maximum, but may instead converge on a local maximum. This problem does not
occur if the heuristic is convex; however, as many functions are not convex,
hill climbing may often fail to reach the global maximum.
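
A minimal sketch of the problem (the two-peaked function below is an
illustrative assumption): starting near the lower peak, greedy hill climbing
stops there instead of reaching the global maximum.

def f(x):
    # global maximum near x = 3, local maximum near x = -2
    return -(x - 3) ** 2 + 9 if x > 0 else -(x + 2) ** 2 + 4

def hill_climb(x, step=0.1):
    while True:
        best = max([x - step, x + step], key=f)
        if f(best) <= f(x):   # no better neighbor: greedy search stops
            return x
        x = best

print(hill_climb(-3.0))   # converges near -2 (local maximum), not 3 (global)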

5) Explain PEAS description for automated vacuum cleaners.

→ PEAS stands for Performance measure, Environment, Actuators, and Sensors.
It is a framework for defining a rational agent. Here is the PEAS description
for an automated vacuum cleaner:
1. **Performance Measure**: This is the criterion that determines how
successful the vacuum cleaner is. It could include measures such as the
amount of dirt collected, the time taken to clean a certain area, the amount
of battery used, and whether it avoided obstacles effectively. The goal of
the vacuum cleaner is to maximize this performance measure.

2. **Environment**: This is where the vacuum cleaner operates. It could be a
room with a variety of surfaces (like carpets, tiles, or hardwood floors) and
obstacles (like furniture, stairs, or pets). The environment could also
change over time as people move around and rearrange furniture.

3. **Actuators**: These are the parts of the vacuum cleaner that carry out
actions. They include the wheels that move the vacuum cleaner around, the
brushes that dislodge dirt from surfaces, and the vacuum system that sucks up
the dirt.

4. **Sensors**: These are what the vacuum cleaner uses to perceive its
environment and make decisions. They include a camera for visual input, a
dirt detection sensor to identify dirty areas, a cliff sensor to avoid
falling off edges like stairs, bump sensors to detect obstacles and walls,
and infrared wall sensors to navigate around rooms.
The automated vacuum cleaner uses these components to navigate its
environment, clean efficiently, and avoid obstacles. It uses its sensors to
perceive its environment and its actuators to take actions based on those
perceptions. Its goal, as defined by its performance measure, is to clean as
much dirt as possible while efficiently using its battery and avoiding
damage.
