AI Endsem
14, 16, 17, 20, 21, 23, 24, 25, 26,
CO3: 4, 6, 7, 8, 9
CO4: 5, 7, 10, 11,
CO5: 1, 2, 3, 5, 13, 16, 18, 22-28
CO1: introduction
1. Define artificial intelligence. What are different AI applications? Enlist any five AI
applications.
Artificial intelligence (AI) is a scientific field that involves creating machines and computers that can
learn, reason, and act in ways that normally require human intelligence. AI can perform a variety of
advanced functions, including:
Analyzing data
Making recommendations
Applications:
1. AI in E-commerce
Artificial Intelligence is widely used in the field of E-commerce because it helps organizations build good
engagement between the user and the company. AI makes suggestions and recommendations based on the
user's search history and viewing preferences, and AI chatbots provide instant customer support, greatly
reducing complaints and queries. Let's take a closer look at AI applications in E-commerce.
Personalization: Customers see products matched to their interest patterns, which eventually drives more
conversions.
Enhanced Support: Attending to every customer's query is vital for reducing the churn ratio, and AI-powered
chatbots are well capable of handling most of these queries, 24×7.
Dynamic Pricing Structure: The price of a given product is adjusted intelligently by analyzing data from
different sources and predicting the optimal price.
2. AI in Education
Until a few years ago, the education sector was organized and managed entirely through human involvement.
These days it too is coming under the influence of Artificial Intelligence, which helps faculty as well as
students by making course recommendations, analyzing student data to support decisions, and sending
automated messages to students and parents about vacations and test results. Let's take a closer look at AI
applications in Education.
Voice Assistant: With the help of AI algorithms, voice assistants can be used in many broad ways to
save time, provide convenience, and assist users as and when required.
Gamification: This feature has enabled e-learning companies to design attractive game modes so that kids
can learn in a fun way. This not only keeps kids engaged while learning but also ensures that they grasp
the concepts, thanks to AI.
Smart Content Creation: AI uses algorithms to detect, predict and design content & provide valuable
insights based on the user’s interest which can include videos, audio, infographics, etc. Following this, with
the introduction of AR/VR technologies, e-learning companies are likely to start creating games (for
learning), and video content for the best experience.
3. AI in Robotics
Artificial Intelligence is one of the major technologies giving the robotics field a boost in efficiency.
AI enables robots to make decisions in real time and increases productivity. Let's take a closer look at AI
applications in Robotics.
NLP: Natural Language Processing plays a vital role in robotics, allowing a robot to interpret commands as a
human gives them. This relies on AI techniques such as sentiment analysis and syntactic parsing.
Object Recognition & Manipulation: This functionality enables robots to detect objects within their
perimeter and to understand each object's size and shape. The technique has two parts: one identifies the
object, and the other handles the physical interaction with it.
4. AI in Navigation
GPS technology uses Artificial Intelligence to compute and provide the best available route to users for
traveling. It also helps users choose their type of lane and road, which improves safety.
Personalization (Intelligent Routing): The personalized system gets active based on the user’s pattern &
behavior of preferred routes. Irrespective of the time & duration, the GPS will always provide suggestions
based on multiple patterns & analyses.
Traffic Prediction: AI uses a Linear Regression algorithm that helps in preparing and analyzing the traffic
data. This clearly helps an individual in saving time and alternate routes are provided based on congestion
ahead of the user.
Positioning & Planning: GPS and navigation require enhanced AI support for better positioning and
planning to avoid congested zones. AI-based techniques such as Kalman filtering and sensor fusion are used
for this. Besides this, AI prediction methods analyze real-time data to find the fastest and most efficient
route.
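As a small illustration of the linear-regression idea mentioned above, the sketch below fits a least-squares line to hypothetical hour-of-day vs. travel-time data (the numbers are invented for illustration; real systems use far richer features):

```python
# Hypothetical (hour, travel_time_minutes) observations.
data = [(6, 12), (7, 18), (8, 25), (9, 22), (10, 15), (11, 13)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# Ordinary least squares for one feature: travel_time ~ slope * hour + intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in data) \
        / sum((x - mean_x) ** 2 for x, _ in data)
intercept = mean_y - slope * mean_x

def predict_travel_time(hour):
    """Predict travel time (minutes) at the given hour from the fitted line."""
    return slope * hour + intercept
```

A fitted line like this always passes through the mean point of the data, which gives a quick sanity check on the implementation.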
5. Healthcare
Artificial Intelligence is widely used in the field of healthcare and medicine. The various algorithms of Artificial
Intelligence are used to build precise machines that are able to detect minor diseases inside the human body.
Also, Artificial Intelligence uses the medical history and current situation of a particular human being to predict
future diseases. Artificial Intelligence is also used to find the current vacant beds in the hospitals of a city that
saves the time of patients who are in emergency conditions. Let’s take a closer look at AI applications in
Healthcare.
Insights & Analysis: With the help of AI, large datasets (clinical data, research studies, and public health
data) are analyzed to identify trends and patterns. This in turn aids disease surveillance and public health
planning.
Telehealth: This feature enables doctors and healthcare experts to monitor patients closely while analyzing
their data, so that emerging health issues can be prevented. Patients who are at high risk and require
intensive care benefit most from this AI-powered feature.
Patient Monitoring: AI systems detect abnormal activity and raise alerts during patient care, enabling early
intervention. Besides this, Remote Patient Monitoring (RPM) has been growing significantly and is expected
to reach USD 6 billion by 2025.
Surgical Assistance: AI algorithms guide surgeons through a streamlined procedure, providing insights that
help them take effective decisions and ensure no further risks arise during the operation.
Regular Computing: Traditional systems are proficient at solving specific problems for
which they are programmed. Their utility extends to a wide array of applications but
remains confined to predefined tasks.
Decision-Making:
Regular Computing: Decisions are made strictly by following rules programmed in advance; the system
cannot handle situations its rules do not cover.
AI: Systems learn from data and can make decisions under uncertainty, in situations they were never
explicitly programmed for.
Human-Like Capabilities:
Regular Computing: Traditional systems lack the capacity for human-like reasoning,
learning, or understanding. They can be powerful tools but don’t attempt to emulate
cognitive functions.
AI: Artificial intelligence aims to replicate and augment human cognitive abilities.
Natural language processing, image recognition, and even creativity (especially
generative AI applications such as MidJourney, DALL-E, Adobe Firefly) are within the
realm of AI applications.
6. State Space Search explain in detail and give steps for state space search
Steps in State Space Search
The process of state space search involves several key steps that guide the search from the initial state to the goal state.
Here’s a step-by-step explanation:
1. Define the Initial State
This is where the problem begins. For example, in a puzzle, the initial state would be the starting arrangement of the pieces.
2. Define the Goal State
The goal state is the desired arrangement or condition that solves the problem. In our puzzle example, it would be the
completed picture.
3. Generate Successor States
From the current state, create a set of possible 'next moves' or states that are reachable directly from the current state.
4. Apply Search Strategy
Choose a path based on the chosen search strategy, such as depth-first, breadth-first, or best-first search. This decision is
crucial and can affect the efficiency of the search.
5. Check for Goal State
At each new state, check to see if the goal has been reached. If it has, the search is successful.
6. Path Cost
Calculate the cost of the current path. If the cost is too high, or if a dead end is reached, backtrack and try a different path.
7. Loop and Expand
Repeat the process: generate successors, apply the search strategy, and check for the goal state until the goal is reached or
no more states can be generated.
These steps form the core of the state space search process, allowing for systematic exploration of all possible actions until a
solution is found or all options are exhausted.
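The steps above can be sketched as a generic breadth-first state space search (a minimal illustration; the function names are assumptions, not a standard library API):

```python
from collections import deque

def state_space_search(initial, goal_test, successors):
    """Breadth-first state space search following the steps above:
    start at the initial state, generate successors, check for the goal,
    and loop until a solution path is found or the space is exhausted."""
    frontier = deque([(initial, [initial])])   # (state, path so far)
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):                   # step 5: check for goal state
            return path
        for nxt in successors(state):          # step 3: generate successors
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                                # all options exhausted

# Toy example: reach 5 starting from 1, using the moves +1 and *2.
path = state_space_search(1, lambda s: s == 5,
                          lambda s: [s + 1, s * 2])   # path == [1, 2, 4, 5]
```

Swapping the `deque` for a stack or a priority queue changes the strategy to depth-first or best-first search (step 4) without touching the rest of the loop.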
7. Give the structure of agents. Compare Model based agent with Utility based Agent
with the help of suitable block diagrams.
Agents can be grouped into five classes based on their degree of perceived intelligence and capability:
Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept.
Percept history is the history of all that an agent has perceived to date. The agent function is based on
the condition-action rule. A condition-action rule is a rule that maps a state i.e., a condition to an action. If
the condition is true, then the action is taken, else not. This agent function only succeeds when the
environment is fully observable. For simple reflex agents operating in partially observable environments,
infinite loops are often unavoidable. It may be possible to escape from infinite loops if the agent can randomize
its actions.
Problems with Simple reflex agents are:
They have very limited intelligence.
They have no knowledge of the non-perceptual parts of the current state.
Their rules must be updated whenever the environment changes.
They cannot work well in partially observable environments and may get stuck in infinite loops.
A model-based reflex agent is one that uses internal memory and a percept history to create a model of the
environment in which it's operating and make decisions based on that model. The term percept means
something that has been observed or detected by the agent. The model-based reflex agent stores the past
percepts in its memory and uses them to create a model of the environment. The agent then uses this model
to determine which action should be taken in any given situation.
It works by finding a rule whose condition matches the current situation. A model-based agent can
handle partially observable environments by the use of a model about the world. The agent has to keep
track of the internal state which is adjusted by each percept and that depends on the percept history. The
current state is stored inside the agent which maintains some kind of structure describing the part of the world
which cannot be seen.
Updating the state requires information about:
How the world evolves independently of the agent.
How the agent's own actions affect the world.
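A minimal sketch of a model-based reflex agent for the classic two-square vacuum world (the class and world here are illustrative assumptions): the agent perceives only its current square, but its internal model remembers what it has already seen.

```python
class ModelBasedVacuumAgent:
    """Model-based reflex agent for a two-square world ("A" and "B")."""

    def __init__(self):
        # Internal state: what the agent believes about each square.
        # None means the square has never been perceived.
        self.model = {"A": None, "B": None}

    def act(self, location, status):
        self.model[location] = status          # update the model from the percept
        if status == "Dirty":
            self.model[location] = "Clean"     # suction will clean this square
            return "Suck"
        if all(v == "Clean" for v in self.model.values()):
            return "NoOp"                      # model says the whole world is clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act("A", "Dirty"))   # "Suck"
```

A simple reflex agent with the same percepts could never return "NoOp": without the stored model it cannot know that the square it is not standing on is already clean.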
Goal-Based Agents
These kinds of agents take decisions based on how far they are currently from their goal(description of
desirable situations). Their every action is intended to reduce their distance from the goal. This allows the
agent a way to choose among multiple possibilities, selecting the one which reaches a goal state. The
knowledge that supports its decisions is represented explicitly and can be modified, which makes these agents
more flexible. They usually require search and planning. The goal-based agent’s behavior can easily be
changed.
Utility-Based Agents
They choose actions based on a preference (utility) for each state. Sometimes achieving the desired goal is
not enough. We may look for a quicker, safer, cheaper trip to reach a destination. Agent happiness should be
taken into consideration. Utility describes how “happy” the agent is. Because of the uncertainty in the world, a
utility agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real
number which describes the associated degree of happiness.
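The idea of maximizing expected utility can be shown in a few lines (the routes, probabilities, and utilities below are invented for illustration):

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical trip-planning choice: each action has uncertain outcomes.
actions = {
    "highway":    [(0.7, 80), (0.3, 20)],   # fast, but risk of congestion
    "back_roads": [(1.0, 60)],              # slower but predictable
}

# The utility-based agent picks the action with the highest expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
```

Here the highway wins (0.7·80 + 0.3·20 = 62 > 60) even though its worst case is bad, which is exactly the kind of trade-off a goal-based agent, with only a binary "goal reached or not" test, cannot express.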
Learning Agent
A learning agent in AI is the type of agent that can learn from its past experiences or it has learning
capabilities. It starts to act with basic knowledge and then is able to act and adapt automatically through
learning. A learning agent has mainly four conceptual components, which are:
1. Learning element: It is responsible for making improvements by learning from the environment.
2. Critic: The learning element takes feedback from critics which describes how well the agent is doing with
respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external action.
4. Problem Generator: This component is responsible for suggesting actions that will lead to new and
informative experiences.
8. What is Production System and which are the rules of Production System
Inference Rules
There are many production rules in Artificial Intelligence. One of them is the inference rule. It is a type of rule that consists of a
logical form used for transformation. Let us look at the types of inference rules in AI:
Deductive Inference: It consists of logic that helps reason from multiple statements to reach a conclusion.
Example: We have two statements: “All mammals are animals” and “Dogs are mammals.” We can use deductive
inference to draw a logical conclusion based on these statements.
Using the deductive inference rule of categorical syllogism, which states that if the major premise (“All mammals are animals”)
and the minor premise (“Dogs are mammals”) are true, then the conclusion (“Therefore, dogs are animals”) is also true.
By applying deductive inference to the given example, we can conclude that dogs are indeed animals based on the statements
provided.
Abductive Inference: This rule picks the explanation that accounts for the given observations most simply.
Example:
In this example, we have two observations: “The ground is wet” and “There are dark clouds in the sky.” We can use abductive
inference to generate a plausible explanation or hypothesis that best explains these observations.
The abductive inference rule suggests that the simplest and most likely explanation that can account for the given observations
should be considered. In this case, the most straightforward explanation is that it might have rained. The wet ground and the
presence of dark clouds in the sky are consistent with the hypothesis that rain occurred.
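A production system of this kind can be sketched as a working memory of facts plus condition-action rules applied by forward chaining; the rules below encode the syllogism from the deductive example (a toy illustration, not a real inference engine):

```python
# Each production rule is (set of conditions, conclusion to add).
rules = [
    ({"dog"}, "mammal"),       # Dogs are mammals
    ({"mammal"}, "animal"),    # All mammals are animals
]

def forward_chain(facts, rules):
    """Apply rules to working memory until no rule can fire (recognize-act cycle)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)    # fire the rule: add its conclusion
                changed = True
    return facts

derived = forward_chain({"dog"}, rules)   # {"dog", "mammal", "animal"}
```

Starting from the single fact "dog", the system derives "mammal" and then "animal", matching the conclusion of the syllogism.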
9. What are PEAS descriptors? For the following activity, give a PEAS description of
the task environment and characterize it in terms of the properties
i)Robot meant for cleaning the house.
PEAS System is used to categorize similar agents together. The PEAS system delivers the performance
measure with respect to the environment, actuators, and sensors of the respective agent. Most of the highest
performing agents are Rational Agents.
PEAS stands for a Performance measure, Environment, Actuator, Sensor.
1. Performance Measure: Performance measure is the unit to define the success of an agent. Performance
varies with agents based on their different percepts.
2. Environment: Environment is the surrounding of an agent at every instant. It keeps changing with time if
the agent is set in motion. There are 5 major types of environments:
Fully Observable & Partially Observable
Episodic & Sequential
Static & Dynamic
Discrete & Continuous
Deterministic & Stochastic
3. Actuator: An actuator is a part of the agent that delivers the output of action to the environment.
4. Sensor: Sensors are the receptive parts of an agent that take in the input for the agent.
i) PEAS description for the robot meant for cleaning the house:
Performance Measure: The primary measure is the level of cleanliness achieved in the house. This can be
measured by factors like dust and debris removal, floor sanitation, and surface cleanliness.
Additionally, factors like efficiency (cleaning time), coverage (percentage of area cleaned),
and user satisfaction might be considered.
Environment:
Partially Observable: The robot can only sense its immediate surroundings through its
sensors. It might not have complete information about dirt or obstacles beyond its sensor
range.
Sequential: Cleaning tasks are typically done in a sequence, like vacuuming before mopping.
Dynamic: The environment can change over time with new dirt appearing, furniture being
moved, or people walking through.
Continuous: The environment has continuous aspects like dust level or floor space, requiring
sensors with fine resolution.
Stochastic: Events like dropped objects or spills can occur randomly, affecting the cleaning
needs.
Actuators: Wheel motors for movement, a vacuum pump, and brushes for cleaning.
Sensors: Cameras, bump sensors, cliff sensors, and dirt sensors for perceiving the surroundings.
Characterization: Overall, the task environment is partially observable, sequential, dynamic, continuous,
and stochastic.
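The PEAS description above can be captured in a small data structure (field values summarize the discussion; this is an illustrative sketch, not a formal specification):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A PEAS descriptor: Performance measure, Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

cleaning_robot = PEAS(
    performance=["cleanliness", "efficiency", "coverage", "user satisfaction"],
    environment=["partially observable", "sequential", "dynamic",
                 "continuous", "stochastic"],
    actuators=["wheel motors", "vacuum pump", "brushes"],
    sensors=["camera", "bump sensor", "cliff sensor", "dirt sensor"],
)
```

Writing PEAS descriptors as data makes them easy to compare across agents, e.g. a soccer player and a self-driving car differ only in the field values, not the structure.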
10. Define in your own words: (a) intelligence, (b) artificial intelligence, (c) agent, (d)
logical reasoning.
Intelligence has been defined in many ways: the capacity for abstraction, logic,
understanding, self-awareness, learning, emotional knowledge, reasoning, planning,
creativity, critical thinking, and problem-solving.
Artificial intelligence (AI) is a scientific field that involves creating machines and computers that can
learn, reason, and act in ways that normally require human intelligence. AI can perform a variety of
advanced functions, including:
Analyzing data
Making recommendations
In artificial intelligence, an agent is a computer program or system that is designed to perceive its
environment, make decisions and take actions to achieve a specific goal or set of goals. The agent operates
autonomously, meaning it is not directly controlled by a human operator. Agents can be classified into different
types based on their characteristics, such as whether they are reactive or proactive, whether they have a fixed
or dynamic environment, and whether they are single or multi-agent systems.
Reasoning plays a great role in the process of artificial Intelligence. Thus Reasoning can be defined as the
logical process of drawing conclusions, making predictions or constructing approaches towards a particular
thought with the help of existing knowledge. In artificial intelligence, reasoning is very important because to
understand the human brain, how the brain thinks, how it draws conclusions towards particular things for all
these sorts of works we need the help of reasoning.
11. For each of the following activities, give a PEAS description of the task environment.
i. Playing soccer.
ii. Exploring the subsurface oceans of Titan.
iii. Shopping for used AI books on the Internet.
iv. Playing a tennis match.
v. Self-driving car
vi. Picking Robot.
i. Playing Soccer
Performance Measure: Scoring goals while adhering to the rules of soccer and working
collaboratively with teammates to win the game.
Environment:
o Partially Observable: Players can only see a portion of the field and rely on
teammates and communication for complete awareness.
o Episodic: The game consists of distinct halves or periods with a defined start and
end.
o Dynamic: The ball and players constantly move, and the environment changes with
each action.
o Discrete: The game has a finite set of actions (kicks, passes, etc.) and a defined
duration.
o Stochastic: Unpredictable events like bounces and opponent actions can occur.
Actuators: Legs for running, kicking, and jumping.
Sensors: Vision for ball and player position, balance sensors for movement control.
ii. Exploring the Subsurface Oceans of Titan
Performance Measure: Gathering scientific data about the composition, temperature, and
potential life forms in the ocean.
Environment:
o Fully Observable (limited): Sensors provide data on the immediate surroundings,
but the overall environment is largely unknown.
o Episodic (potentially): Missions might consist of distinct exploration phases with
specific goals.
o Static (potentially): The ocean itself might be relatively stable, but external factors
could change.
o Continuous: Ocean properties like pressure and temperature likely vary
continuously.
o Stochastic: Unexpected events like equipment malfunctions or environmental
hazards could occur.
Actuators: Propulsion systems for movement, manipulators for sample collection.
Sensors: Cameras, sonar, chemical sensors to analyze the environment and collect data.
iii. Shopping for Used AI Books on the Internet
Performance Measure: Finding and purchasing used AI books at a reasonable price and with
good condition.
Environment:
o Partially Observable: Search results and product information provide limited details
about book quality and condition.
o Sequential: The shopping process involves browsing, selecting, and purchasing,
often in a specific order.
o Dynamic: Product availability, prices, and seller information can change rapidly.
o Discrete: Search options and purchase actions are typically well-defined.
o Deterministic (mostly): Website behavior is generally predictable, except for
potential system errors.
Actuators: User interface controls for searching, selecting, and purchasing books.
Sensors: Web scraping or API access to gather information from online marketplaces.
iv. Playing a Tennis Match
Performance Measure: Winning the match by scoring more points than the opponent while
adhering to the rules of tennis.
Environment:
o Partially Observable: Players cannot see the entire court at once and rely on
anticipation of opponent actions.
o Episodic: The match consists of sets and games with defined scoring and breaks.
o Dynamic: The ball and players constantly move, changing the environment with
every shot.
o Discrete: Actions like strokes and serves are well-defined with specific rules.
o Stochastic: Unpredictable factors like wind or ball bounces can influence the game.
Actuators: Arms and racket for swinging and hitting the ball.
Sensors: Vision to track the ball and opponent, balance sensors for movement control.
v. Self-Driving Car
Performance Measure: Safely navigating to a destination while following traffic rules and
avoiding obstacles.
Environment:
o Partially Observable: Sensors provide information on the immediate surroundings,
but visibility can be limited by weather or other vehicles.
o Sequential (potentially): The driving task can be broken down into sequential
actions like lane changes and intersections.
o Dynamic: Other vehicles, pedestrians, and weather conditions constantly change the
environment.
o Continuous: Traffic flow, speed, and distance require continuous monitoring and
adjustment.
o Stochastic: Unexpected events like accidents or sudden maneuvers by other drivers
can occur.
Actuators: Steering, acceleration, and braking mechanisms to control the car's movement.
Sensors: Cameras, LiDAR, radar, and GPS for obstacle detection, traffic monitoring, and self-
localization.
vi. Picking Robot
Performance Measure: Accurately selecting and picking desired objects from a cluttered
environment, placing them in designated locations with minimal damage.
Environment:
o Partially Observable: Sensors provide information on the immediate area but might
not capture all object details or occlusions by other objects.
o Episodic (potentially): Picking tasks might involve distinct pick-and-place cycles with
specific targets.
o Dynamic: The environment can change with object removal or addition by other
robots or human workers.
o Discrete (mostly): Picking actions (grasp, lift, place) are well-defined, but object
shapes and sizes can vary.
o Deterministic (mostly): Robot movements are generally predictable, except for
potential sensor errors or object fragility.
Actuators: Robotic arm for manipulation and gripping.
Sensors: Cameras or depth sensors for object recognition and location. Force sensors for
grip control and object fragility detection.
Fully Observable vs. Partially Observable: A fully observable environment offers complete
information to the agent through its sensors. Imagine a chessboard, where the agent (a chess
AI) has perfect knowledge of all the pieces and their positions. In contrast, a partially
observable environment limits the agent's awareness. A self-driving car, for instance, can
only sense its immediate surroundings through cameras and LiDAR, leaving blind spots and
relying on predictions for what's beyond sensor range.
Static vs. Dynamic: A static environment remains constant throughout the agent's interaction.
A robot playing chess again has a static environment; the board layout doesn't change on its
own. On the other hand, a dynamic environment constantly evolves. The house cleaning
robot encounters a dynamic environment as dirt appears, furniture moves, and people walk
around. The robot must adapt its cleaning strategy based on these changes.
14. Assume that now there are 3 rooms and 2 Roombas (autonomous robotic vacuum
cleaners). Each room can be either dirty/clean and each Roomba is present in one of
the 3 rooms. What are the number of states in propositional/factored knowledge
representation?
There are 3 rooms and 2 Roombas.
Cleanliness: each of the 3 rooms can be either clean or dirty, giving 2^3 = 8 cleanliness configurations.
Positions: each Roomba can be in any one of the 3 rooms, so the total number of position configurations is
3 × 3 = 3^2 = 9.
Combined State:
To find the total number of states in the system, we multiply the number of cleanliness configurations for
the rooms by the number of position configurations for the Roombas.
Therefore, the total number of states is:
8 × 9 = 72
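The count can be verified by enumerating the states directly (a quick sketch using Python's itertools):

```python
from itertools import product

rooms = ["R1", "R2", "R3"]

# Each of the 3 rooms is either clean or dirty: 2^3 combinations.
cleanliness = list(product(["clean", "dirty"], repeat=len(rooms)))
# Each of the 2 Roombas is in one of the 3 rooms: 3^2 combinations.
positions = list(product(rooms, repeat=2))

# An atomic state pairs one cleanliness assignment with one position assignment.
states = [(c, p) for c in cleanliness for p in positions]
print(len(cleanliness), len(positions), len(states))   # 8 9 72
```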
15. Explain the different types of task environment. Consider an example of automated
taxi. List the environment the taxi has to operate and justify.
An environment in artificial intelligence is the surrounding of the agent. The agent takes
input from the environment through sensors and delivers the output to the environment
through actuators. There are several types of environments:
Fully Observable vs Partially Observable
Single-agent vs Multi-agent
Static vs Dynamic
Discrete vs Continuous
Episodic vs Sequential
Environment types
When an agent's sensors can access the complete state of the environment at each point in time, the
environment is said to be fully observable; otherwise it is partially observable.
A fully observable environment is easier to deal with, as there is no need to keep track of the history of
the surroundings.
An environment is called unobservable when the agent has no sensors at all.
Examples:
Chess – the board is fully observable, and so are the opponent’s moves.
Driving – the environment is partially observable because what’s around the corner is not known.
2. Single-agent vs Multi-agent
An environment involving only one agent is a single-agent environment (e.g., solving a maze). When multiple
agents operate and influence each other, it is a multi-agent environment (e.g., a game of chess involves
two agents).
3. Dynamic vs Static
An environment that keeps changing while the agent is acting is said to be dynamic.
A roller coaster ride is dynamic as it is set in motion and the environment keeps changing every instant.
An idle environment with no change in its state is called a static environment.
An empty house is static as there’s no change in the surroundings when an agent enters.
4. Discrete vs Continuous
If an environment consists of a finite number of actions that can be deliberated in the environment to obtain
the output, it is said to be a discrete environment.
The game of chess is discrete as it has only a finite number of moves. The number of moves might vary
with every game, but still, it’s finite.
An environment in which the actions cannot be enumerated, i.e., is not discrete, is said to be continuous.
Self-driving cars are an example of continuous environments as their actions are driving, parking, etc.
which cannot be numbered.
5. Episodic vs Sequential
In an Episodic task environment, each of the agent’s actions is divided into atomic incidents or episodes.
There is no dependency between current and previous incidents. In each incident, an agent receives input
from the environment and then performs the corresponding action.
Example: Consider a pick-and-place robot used to detect defective parts on a conveyor belt. Each time, the
robot (agent) makes a decision about the current part only, i.e., there is no dependency between current
and previous decisions.
In a Sequential environment, previous decisions can affect all future decisions. The agent's next action
depends on what actions it has taken previously and what it plans to take in the future.
Example:
Checkers- Where the previous move can affect all the following moves.
For the automated taxi, the task environment can be characterized as follows:
1. Partially Observable: The environment is partially observable. While the taxi has sensors
to detect its immediate surroundings (traffic lights, pedestrians, other vehicles), it cannot
see beyond obstacles or predict future events (sudden stops, accidents).
2. Competitive (to an extent): The environment has some competitive aspects. While some
vehicles might cooperate by following traffic rules, others might behave aggressively,
creating a need for the taxi to optimize its path and speed for efficiency.
3. Multi-Agent: The environment is multi-agent. The taxi shares the road with numerous
other vehicles, pedestrians, and cyclists, all making independent decisions that can affect
the taxi's performance.
4. Dynamic: The environment is highly dynamic. Traffic conditions, weather, and pedestrian
activity are constantly changing, requiring the taxi to adapt its behavior in real-time.
5. Sequential: The environment is sequential. The taxi's decisions (route, speed) at one
point depend on the current state and influence its future actions and the overall trip
efficiency.
6. Partially Known: The environment is partially known. While the taxi has a map and traffic
data, unexpected events (accidents, road closures) or changes in traffic patterns can
introduce unknown elements.
16. Formulate vacuum cleaner problem, states can be represented by [<block>, clean]
or [<block>, dirty]. Assume suitable initial state.
Agent:
A vacuum cleaner (agent) can be in one of the two blocks (Block A or Block B).
The agent has two actions: Move (switch to the other block) and Suck (clean the current block).
States:
<block> is either "A" or "B", indicating the current location of the vacuum cleaner.
<clean> is a boolean value, True if the block the agent is currently in is clean, False
otherwise.
Examples: [A, dirty], [B, clean].
Goal: Reach a state in which both blocks are clean.
Successor Function:
The successor function S(state, action) takes a state and an action and returns the
resulting state after the action is performed.
Initial State: The state from which the agent starts working toward the specified goal; it initializes the
problem domain.
Actions: The set of all actions that can be performed from a given state.
Transition Model: Describes the state that results from performing an action in a given state.
Goal Test: Determines whether the goal state has been reached; once it has, the agent stops acting and the
cost of achieving the goal is determined.
Path Cost: A numeric cost assigned to the path taken to reach the goal; it accounts for all hardware,
software, and human working costs.
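These components can be made concrete for the two-block vacuum world (a sketch under the assumption that each action costs one step; the names are illustrative):

```python
from collections import deque

# A state is (location, status_A, status_B); status is "clean" or "dirty".
ACTIONS = ["Left", "Right", "Suck"]

def transition(state, action):
    """Transition model: the state that results from performing an action."""
    loc, a, b = state
    if action == "Left":
        return ("A", a, b)
    if action == "Right":
        return ("B", a, b)
    # "Suck" cleans the block the agent is currently in.
    return ("A", "clean", b) if loc == "A" else ("B", a, "clean")

def goal_test(state):
    """Goal test: both blocks are clean."""
    return state[1] == "clean" and state[2] == "clean"

def solve(initial):
    """Breadth-first search; path cost is the number of actions taken."""
    frontier, visited = deque([(initial, [])]), {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for action in ACTIONS:
            nxt = transition(state, action)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))

print(solve(("A", "dirty", "dirty")))   # ['Suck', 'Right', 'Suck']
```

Because BFS expands states in order of path length, the first goal state found gives a minimum-cost plan under the one-step-per-action assumption.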
It can be seen that for n = 1 the problem has a trivial solution, and no solution exists for n = 2 and
n = 3. So we first consider the 4-queens problem and then generalize it to the n-queens problem.
Given a 4 × 4 chessboard, number the rows and columns of the chessboard 1 through 4.
Since we have to place 4 queens q1, q2, q3, and q4 on the chessboard such that no two queens attack each
other, each queen must be placed on a different row, i.e., we put queen i on row i.
Now we place queen q1 in the first acceptable position (1, 1). Next, we place queen q2 so that the two
queens do not attack each other. We find that if we place q2 in column 1 or 2, a dead end is encountered.
Thus the first acceptable position for q2 is column 3, i.e., (2, 3), but then no position is left for
placing queen q3 safely. So we backtrack one step and place q2 in (2, 4), the next possible position. Then
we obtain a position for q3 at (3, 2). But this position also leads to a dead end, and no place is found
where q4 can be placed safely. We then backtrack all the way to q1 and move it to (1, 2); now all the other
queens can be placed safely: q2 at (2, 4), q3 at (3, 1), and q4 at (4, 3). That is, we get the solution
(2, 4, 1, 3). This is one possible solution for the 4-queens problem. For other solutions, the whole method
is repeated for all partial solutions. The other solution for the 4-queens problem is (3, 1, 4, 2).
The implicit tree for 4 - queen problem for a solution (2, 4, 1, 3) is as follows:
Fig shows the complete state space for the 4-queens problem, but with the backtracking method we generate only the necessary nodes and stop whenever the next node violates the constraint, i.e., whenever two queens attack each other.
4 - Queens solution space with nodes numbered in DFS
It can be seen that all the solutions to the 4 queens problem can be represented as 4 - tuples (x 1, x2, x3, x4) where xi represents
the column on which queen "qi" is placed.
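The backtracking procedure described above can be sketched in Python. This is an illustrative implementation (the function names and the 1-based tuple output are my own choices, matching the (x1, x2, x3, x4) notation of the notes):

```python
def solve_n_queens(n):
    """Return all solutions as tuples (x1..xn), where xi is the column
    (1-based) of the queen placed in row i."""
    solutions = []

    def safe(cols, col):
        row = len(cols)  # row we are trying to fill (0-based)
        for r, c in enumerate(cols):
            # attack if same column, or same diagonal (|row diff| == |col diff|)
            if c == col or abs(c - col) == abs(r - row):
                return False
        return True

    def place(cols):
        if len(cols) == n:
            solutions.append(tuple(c + 1 for c in cols))  # report 1-based
            return
        for col in range(n):
            if safe(cols, col):
                place(cols + [col])  # extend; backtracking happens on return

    place([])
    return solutions
```

Running `solve_n_queens(4)` yields exactly the two solutions worked out above, (2, 4, 1, 3) and (3, 1, 4, 2), in the order the backtracking search discovers them.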
Vacuum cleaner problem is a well-known search problem for an agent which works on Artificial Intelligence. In this
problem, our vacuum cleaner is our agent. It is a goal based agent, and the goal of this agent, which is the vacuum
cleaner, is to clean up the whole area. So, in the classical vacuum cleaner problem, we have two rooms and one vacuum
cleaner. There is dirt in both the rooms and it is to be cleaned. The vacuum cleaner is present in any one of these rooms.
So, we have to reach a state in which both the rooms are clean and are dust free.
So, there are eight possible states in our vacuum cleaner problem. These can be illustrated with the help of the following diagrams:
Here, states 1 and 2 are our initial states and state 7 and state 8 are our final states (goal states). This means that,
initially, both the rooms are full of dirt and the vacuum cleaner can reside in any room. And to reach the final goal
state, both the rooms should be clean and the vacuum cleaner again can reside in any of the two rooms.
The vacuum cleaner can perform the following functions: move left, move right, move forward, move backward and to
suck dust. But as there are only two rooms in our problem, the vacuum cleaner performs only the following functions
here: move left, move right and suck.
Here the performance of our agent (vacuum cleaner) depends upon many factors such as time taken in cleaning, the
path followed in cleaning, the number of moves the agent takes in total, etc. But we consider two main factors for
estimating the performance of the agent. They are:
1. Search cost: how long the agent takes to come up with the solution.
2. Path cost: how expensive each action in the solution is.
By considering the above factors, the agent can also be classified as a utility-based agent.
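The two-room world above can be sketched in Python. The state encoding (location, dirt-in-A, dirt-in-B), which gives exactly the eight states mentioned, and the breadth-first planner are illustrative assumptions, not part of the original notes:

```python
from itertools import product
from collections import deque

def successors(state):
    """Yield (action, next_state) pairs for the two-room vacuum world."""
    loc, dirt_a, dirt_b = state  # loc: 'A' or 'B'; dirt flags True = dirty
    yield 'Left',  ('A', dirt_a, dirt_b)   # moving left always ends in room A
    yield 'Right', ('B', dirt_a, dirt_b)   # moving right always ends in room B
    if loc == 'A':
        yield 'Suck', ('A', False, dirt_b)
    else:
        yield 'Suck', ('B', dirt_a, False)

def plan(start):
    """Breadth-first search for a shortest action sequence to an all-clean state."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, actions = frontier.popleft()
        if not state[1] and not state[2]:   # goal test: both rooms clean
            return actions
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))

# Enumerate the full state space: 2 locations x 2 dirt flags x 2 dirt flags = 8 states
all_states = list(product('AB', [True, False], [True, False]))
```

For example, starting in room A with both rooms dirty, the shortest plan found is Suck, Right, Suck.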
18. Explain any two uninformed search strategies with example. What are the
advantages of Iterative Deepening Depth First Search over other uninformed search
strategies? Explain in detail.
1. Breadth-first Search:
o Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches
breadthwise in a tree or graph, so it is called breadth-first search.
o The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
o The breadth-first search algorithm is an example of a general-graph search algorithm.
o Breadth-first search is implemented using a FIFO queue data structure.
Advantages:
o BFS will provide a solution if any solution exists.
o If there is more than one solution for a given problem, BFS will find the minimal solution, i.e., the one requiring the fewest steps.
Disadvantages:
o It requires lots of memory since each level of the tree must be saved into memory to expand the next level.
o BFS needs lots of time if the solution is far away from the root node.
Example:
In the below tree structure, we have shown the traversing of the tree using BFS algorithm from the root node S to goal node K.
BFS search algorithm traverse in layers, so it will follow the path which is shown by the dotted arrow, and the traversed path will
be:
1. S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of BFS is given by the number of nodes traversed until the shallowest goal node: O(b^d), where d is the depth of the shallowest solution and b is the branching factor (the maximum number of successors of any node).
Space Complexity: The space complexity of BFS is the memory needed to store the frontier, which is also O(b^d).
Completeness: BFS is complete, which means if the shallowest goal node is at some finite depth, then BFS will find a solution.
Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the node.
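The layer-by-layer traversal can be sketched in Python. Since the figure's tree is not reproduced in these notes, the adjacency list below is an assumed example shaped like the S-to-K traversal described, not the actual figure:

```python
from collections import deque

def bfs(graph, start, goal):
    """Return the nodes in the order BFS discovers them, stopping at goal."""
    frontier = deque([start])  # FIFO queue
    visited = [start]
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return visited
        for child in graph.get(node, []):
            if child not in visited:
                visited.append(child)      # discovered in layer order
                frontier.append(child)
    return visited

# Small example tree (assumed for illustration, not the figure from the notes)
tree = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E', 'F'], 'F': ['K']}
```

`bfs(tree, 'S', 'K')` visits S, then the whole second layer (A, B), then the third (C, D, E, F), and finally K, illustrating the breadthwise order.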
2. Depth-first Search
o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
o It is called the depth-first search because it starts from the root node and follows each path to its greatest depth
node before moving to the next path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is similar to the BFS algorithm.
Note: Backtracking is an algorithm technique for finding all possible solutions using recursion.
Advantage:
o DFS requires much less memory, as it only needs to store the stack of nodes on the path from the root node to the current node.
o It can reach the goal node in less time than BFS (if it happens to traverse the right path).
Disadvantage:
o There is the possibility that many states keep re-occurring, and there is no guarantee of finding a solution.
o The DFS algorithm searches deep down a path, and it may sometimes enter an infinite loop.
Example:
In the below search tree, we have shown the flow of depth-first search, and it will follow the order as:
It will start searching from root node S, and traverse A, then B, then D and E, after traversing E, it will backtrack the tree as E has
no other successor and still goal node is not found. After backtracking it will traverse node C and then G, and here it will
terminate as it found goal node.
Completeness: DFS search algorithm is complete within finite state space as it will expand every node within a limited search
tree.
Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm: O(b^m), where m is the maximum depth of any node, which can be much larger than d (the depth of the shallowest solution).
Space Complexity: The DFS algorithm needs to store only a single path from the root node, so its space complexity is equivalent to the size of the fringe set, which is O(bm).
Optimal: The DFS algorithm is non-optimal, as it may take a large number of steps or incur a high cost to reach the goal node.
Iterative Deepening Depth-First Search (IDDFS) offers several advantages over other
uninformed search strategies, particularly Depth-First Search (DFS) and Breadth-First Search
(BFS). Here's a breakdown of its key benefits:
Completeness (advantage over DFS): Unlike plain DFS, IDDFS is complete: it guarantees finding a solution if one exists at any finite depth, just as BFS does. This is crucial when the depth of the goal node is unknown.
Space efficiency (advantage over BFS): IDDFS is more space-efficient than BFS. BFS stores every node at the current depth before moving to the next, which can lead to a large memory footprint, especially for problems with high branching factors (number of children per node) and large depths. IDDFS, like DFS, only stores the path down to the current depth limit, requiring only O(bd) memory.
IDDFS inherits the advantage of DFS in potentially finding shallow solutions (solutions closer
to the root node) faster. Since it iteratively deepens the search, it explores shallower depths
first, potentially finding the goal earlier than BFS, which explores all levels evenly.
IDDFS is particularly well-suited for scenarios where the depth of the goal node is unknown.
By iteratively increasing the depth limit, it avoids the potentially excessive exploration of
irrelevant parts of the search space that can occur with BFS.
In summary, IDDFS offers a good balance between completeness, space efficiency, and the
ability to find shallow solutions quickly. It's a strong choice for uninformed search
problems with unknown depths and potentially large branching factors where memory
limitations might exist.
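The idea can be sketched as a depth-limited DFS wrapped in an increasing depth limit. The example tree is assumed for illustration:

```python
def dls(graph, node, goal, limit, path):
    """Depth-limited DFS: return the path to goal, or None on cutoff/failure."""
    path.append(node)
    if node == goal:
        return list(path)
    if limit > 0:
        for child in graph.get(node, []):
            found = dls(graph, child, goal, limit - 1, path)
            if found:
                return found
    path.pop()          # backtrack: only the current path is ever stored
    return None

def iddfs(graph, start, goal, max_depth=20):
    """Iteratively deepen the limit: complete like BFS, O(bd) memory like DFS."""
    for limit in range(max_depth + 1):
        result = dls(graph, start, goal, limit, [])
        if result:
            return result
    return None

# Small example tree (assumed for illustration)
tree = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E', 'F'], 'F': ['K']}
```

Shallow levels are re-explored on each iteration, but because the deepest level dominates the node count, the repeated work adds only a constant factor while memory stays proportional to the current depth.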
25. Analyze depth Limited search algorithm and give time and space complexity.
Give the initial state , goal test, successor function, and cost function for each of the
following
a. A 3-foot-tall monkey is in a room where some bananas are suspended from the 8-foot
ceiling. He would like to get the bananas. The room contains 2 stackable, moveable,
climbable 3 foot high crates.
b. You have three jugs, measuring 12 gallons, 8 gallons, and 3 gallons, and a water
faucet. You can fill the jugs up, empty them out onto the ground, or pour from one jug
into another. You need to measure out exactly one gallon.
A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit. Depth-limited search overcomes the drawback of infinite paths in depth-first search. In this algorithm, a node at the depth limit is treated as if it has no further successors.
o Standard failure value: It indicates that problem does not have any solution.
o Cutoff failure value: It defines no solution for the problem within a given depth limit.
Advantages:
o Depth-limited search is memory-efficient.
Disadvantages:
o It is incomplete if the solution lies below the depth limit.
o It is not optimal if the problem has more than one solution.
Completeness: The DLS algorithm is complete if the solution lies within the depth limit ℓ.
Time Complexity: O(b^ℓ). Space Complexity: O(b×ℓ).
Optimal: Depth-limited search can be viewed as a special case of DFS, and it is likewise not optimal, even when ℓ > d.
a. Monkey and bananas
Initial State: The monkey is on the floor at some location, the two crates are at other locations (unstacked), and the bananas hang from the 8-foot ceiling.
Goal Test: The monkey has grabbed the bananas. Since the bananas are at 8 feet and each crate is 3 feet high, the monkey can reach the bananas when standing on a stack of two crates (3 feet + 3 feet of crates plus the monkey's own 3-foot height).
Successor Function: Walk to a location; push a crate from one location to another; stack one crate on top of the other; climb onto or off a crate; grab the bananas when within reach.
Cost Function: Each action costs one unit (i.e., the total number of actions taken).
b. Water jugs
Initial State: Represent a state as (x, y, z), the gallons of water in the 12-, 8-, and 3-gallon jugs respectively. All jugs start empty, so the initial state is:
(0, 0, 0)
Goal Test: Some jug contains exactly one gallon of water.
Successor Function:
1. Fill a jug completely from the faucet.
2. Empty a jug onto the ground.
3. Pour water from one jug into another until either the first jug is empty or the second jug is full.
Cost Function: Each action costs one unit (i.e., the total number of fill/empty/pour operations).
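Under this formulation, a shortest sequence reaching one gallon can be found by breadth-first search over jug states. This is an illustrative sketch (the function names and state encoding are my own):

```python
from collections import deque

CAPS = (12, 8, 3)  # jug capacities in gallons

def moves(state):
    """Yield every state reachable in one fill / empty / pour action."""
    for i in range(3):
        # fill jug i from the faucet
        yield state[:i] + (CAPS[i],) + state[i + 1:]
        # empty jug i onto the ground
        yield state[:i] + (0,) + state[i + 1:]
        # pour jug i into jug j until i is empty or j is full
        for j in range(3):
            if i != j:
                amount = min(state[i], CAPS[j] - state[j])
                nxt = list(state)
                nxt[i] -= amount
                nxt[j] += amount
                yield tuple(nxt)

def solve(start=(0, 0, 0)):
    """BFS over jug states; goal: some jug holds exactly 1 gallon."""
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if 1 in state:
            return path
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
```

BFS finds a three-action solution: fill the 12-gallon jug, pour it into the 8-gallon jug (leaving 4), then pour into the 3-gallon jug (leaving exactly 1).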
23.What is the state of the queue at each iteration of BFS if it is called from node 'a'?
24. Fill out the following graph by labeling each node 1 through 12 according to the
order in which the depth-first search would visit the nodes:
25. Here is an example that compares the order that the graph is searched in when
using a BFS and then a DFS (by each of the three approaches)
26. Find a route from node 1 to node 16 using Bidirectional Search; explain the iterations and search directions with a tree.
CO3. Informed Search
1. Simulated Annealing is a variation of Hill Climbing algorithm. Explain how
Simulated Annealing algorithm overcomes the limitations of Hill Climbing algorithm.
Hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing
elevation/value to find the peak of the mountain or best solution to the problem. It terminates when it
reaches a peak value where no neighbor has a higher value.
Hill Climbing is a heuristic optimization process that iteratively advances towards a better solution at each
step in order to find the best solution in a given search space. It is a straightforward and quick technique that
iteratively improves the initial solution by making little changes to it. Hill Climbing only accepts
solutions that are better than the current solution and employs a greedy technique to iteratively move
towards the best solution at each stage.
Hill Climbing may not locate the global optimum because it is susceptible to becoming caught in local optima. This makes it unsuitable for complex problems with numerous local optima. On the other hand, Hill Climbing is simple to implement and requires little tuning.
In order to discover the best solution in a given search space, the probabilistic optimization algorithm
Simulated Annealing simulates the annealing process used in metalworking. The algorithm begins with a
randomly generated initial solution and incrementally improves it by accepting less desirable solutions
with a certain probability. The probability of accepting a worse solution decreases as the algorithm
progresses, which enables it to escape local optima and find the global optimum.
Simulated annealing explores the search space and avoids local optimum by employing a probabilistic
method to accept a worse solution with a given probability . The initial temperature, cooling schedule, and
acceptance probability function are just a few of the tuning parameters. Hill Climbing is faster, but Simulated
Annealing is better at locating the global optimum, particularly for complex issues with numerous local optima.
Several fields, including logistics, scheduling, and circuit design, use simulated annealing. The approach is
especially helpful for optimization issues when the objective function is challenging to evaluate or where the
search space is intricate and high-dimensional.
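The contrast above can be made concrete with a minimal sketch. The toy objective, temperature schedule, and parameter values below are illustrative assumptions; the key line is the probabilistic acceptance of worse moves, which plain hill climbing lacks:

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=10.0, cooling=0.95, steps=2000, seed=0):
    """Maximize f starting from x0. Unlike hill climbing, a worse neighbor is
    accepted with probability exp(delta / T); as the temperature T cools, the
    search becomes greedy, but early on it can escape local optima."""
    rng = random.Random(seed)
    current = best = x0
    t = t0
    for _ in range(steps):
        candidate = neighbor(current, rng)
        delta = f(candidate) - f(current)
        # accept improvements always; accept worse moves with probability exp(delta/T)
        if delta > 0 or rng.random() < math.exp(delta / t):
            current = candidate
            if f(current) > f(best):
                best = current
        t = max(t * cooling, 1e-6)  # cooling schedule
    return best

# Toy objective (assumed example): local maximum near x = -2, global maximum near x = 2
def f(x):
    return -(x * x - 4) ** 2 + x
```

Starting at x = -2 (inside the local basin), hill climbing would stop near the local peak, whereas simulated annealing can cross the valley at x = 0 while the temperature is still high.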
2. Explain in detail what mean Informed Search Algorithms? Give advantage and
disadvantage of it?
A key idea in artificial intelligence (AI) and search algorithms is informed search,
which improves problem-solving effectiveness by using more information about the
issue at hand.
Advantages
1. Efficiency:
Guided Search: Informed search algorithms use heuristics to guide the search process towards
the goal, often resulting in fewer nodes being explored compared to uninformed search
methods.
Faster Solution: By focusing on more promising paths, these algorithms can find solutions
faster, reducing the overall search time.
2. Optimal Solutions:
Admissible Heuristics: Algorithms such as A* are guaranteed to return optimal solutions when the heuristic never overestimates the true remaining cost.
3. Heuristic Pruning: Heuristics can help prune large parts of the search space that are unlikely to lead to a solution, thereby saving computational resources and time.
4. Scalability:
Applicable to Large Problems: These algorithms can be applied to large and complex
problems where uninformed search methods would be infeasible due to their high
computational requirements.
5. Flexibility:
Tunable Heuristics: Heuristics can be tailored to the problem domain, so the same search framework can be reused across many different problems.
Disadvantages
1. Heuristic Design:
Complexity: Designing effective heuristics can be complex and often requires domain-
specific knowledge. Poor heuristics can lead to inefficient searches, negating the advantages
of informed search.
Accuracy: If the heuristic is not accurate, it can misguide the search process, potentially
leading to longer search times or suboptimal solutions.
2. Computational Overhead:
Heuristic Calculation: The computation of heuristics can add overhead, especially if they are
complex or require significant computation themselves. This can sometimes outweigh the
benefits gained from using the heuristic.
3. Memory Usage:
Resource Intensive: Some informed search algorithms, like A*, can be memory-intensive as
they need to store all explored nodes and their associated costs. This can be a limitation for
very large search spaces.
4. Incomplete Information:
Dependency on Heuristics: The performance of informed search algorithms heavily depends
on the quality and availability of heuristic information. In cases where heuristic information is
incomplete or unavailable, the algorithms may not perform well.
5. Overfitting:
Specificity: Heuristics tailored too specifically for certain problems may not generalize well to
other problems. This can lead to overfitting where the heuristic works well for a particular
instance but poorly on others.
3. Define hill climbing and describe its disadvantages. Also write the solution to these
disadvantages?
4. Find the route from Vasai fort to Panhala Fort Using A* search method (It’s just
problem formation locations and distance is imaginary) .
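A hedged sketch of how A* would be applied to such a route problem follows. The fort names, edge distances, and heuristic values are imaginary (as the question itself notes); the heuristic table is chosen to be admissible so the result is optimal:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: always expand the node minimizing f(n) = g(n) + h(n).
    graph: {node: [(neighbor, step_cost), ...]}; h: {node: heuristic estimate}."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None, float('inf')

# Imaginary fort-to-fort map and admissible heuristic (all values made up)
graph = {
    'Vasai':   [('Lohagad', 90), ('Raigad', 120)],
    'Lohagad': [('Panhala', 150)],
    'Raigad':  [('Panhala', 100)],
}
h = {'Vasai': 200, 'Lohagad': 140, 'Raigad': 95, 'Panhala': 0}
```

With these numbers, A* expands Raigad (f = 120 + 95 = 215) before Lohagad (f = 90 + 140 = 230) and returns the cheaper route Vasai → Raigad → Panhala with total cost 220.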
5. Explain concept of Genetic Algorithm. State the taxonomy of the crossover operator.
After calculating the fitness of every individual in the population, a selection process is used to determine which individuals get to reproduce and produce the offspring that will form the coming generation.
Once the initial generation is created, the algorithm evolves the generation using following operators –
1) Selection Operator: The idea is to give preference to the individuals with good fitness scores and allow
them to pass their genes to successive generations.
2) Crossover Operator: This represents mating between individuals. Two individuals are selected using
selection operator and crossover sites are chosen randomly. Then the genes at these crossover sites are
exchanged thus creating a completely new individual (offspring). For example –
3) Mutation Operator: The key idea is to insert random genes in offspring to maintain the diversity in the
population to avoid premature convergence. For example –
Encoding scheme-dependent:
Binary crossover: Applicable to GAs where chromosomes are represented as binary strings
(0s and 1s). Examples include single-point crossover, two-point crossover, and uniform
crossover.
Real-coded crossover: Applicable to GAs where chromosomes represent real numbers.
Examples include arithmetic crossover and simulated binary crossover.
Permutation crossover: Applicable to GAs where chromosomes represent orderings or
permutations (e.g., scheduling tasks). Examples include order crossover and cycle crossover.
Tree-based crossover: Applicable to GAs where chromosomes are represented as tree
structures. Examples include subtree crossover and single-point crossover (on tree
structures).
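The three operators can be sketched for a binary encoding (single-point crossover from the taxonomy above). The OneMax-style objective, tournament selection, and parameter values are illustrative assumptions, not a prescribed GA design:

```python
import random

def single_point_crossover(p1, p2, rng):
    """Binary single-point crossover: cut both parents at a random site and swap tails."""
    point = rng.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(bits, rate, rng):
    """Flip each gene with a small probability to maintain diversity."""
    return [b ^ 1 if rng.random() < rate else b for b in bits]

def evolve(fitness, pop, generations=40, rate=0.02, seed=0):
    """One generation = selection (tournament of two) -> crossover -> mutation."""
    rng = random.Random(seed)
    for _ in range(generations):
        def select():
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < len(pop):
            c1, c2 = single_point_crossover(select(), select(), rng)
            nxt.extend([mutate(c1, rate, rng), mutate(c2, rate, rng)])
        pop = nxt[:len(pop)]
    return max(pop, key=fitness)
```

Using `sum` as the fitness function turns this into the classic OneMax problem (maximize the number of 1-bits), a common smoke test for GA implementations.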
6. Consider the search problem below with start state S and Goal state G. The transition
cost are next to the edges and the heuristic values are as shown in the table. Calculate
the final cost using A * search algorithm.
A value accompanies each board state. If the maximizer has the upper hand in a particular state, the board score will typically be positive; if the minimizer has the advantage, it will typically be negative.
7. Using Game theory “Tic Tac Toe” explain Min –Max Search?
https://levelup.gitconnected.com/minimax-algorithm-explanation-using-tic-tac-toe-
game-22668694aa13
9. Evaluate the optimal value using alpha-beta pruning for the following example.
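Since the figure is not reproduced here, the minimax-with-alpha-beta procedure can be sketched on the standard textbook tree instead (the nested-list encoding and leaf values are an assumed example, not the tree from the question):

```python
import math

def minimax(state, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning over a game tree given as nested lists:
    a leaf is a number (board score), an internal node is a list of children."""
    if isinstance(state, (int, float)):
        return state
    if maximizing:
        value = -math.inf
        for child in state:
            value = max(value, minimax(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:      # remaining children cannot change the result
                break
        return value
    value = math.inf
    for child in state:
        value = min(value, minimax(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:          # prune
            break
    return value

# Classic example: MAX root over three MIN nodes with these leaf scores
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
```

On this tree the optimal value is 3; after the second MIN node sees the leaf 2 (below alpha = 3), its remaining leaves are pruned, which is exactly the saving alpha-beta provides.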
CO4: Knowledge reasoning
1) What is knowledge-based agent.
o An intelligent agent needs knowledge about the real world for taking decisions
and reasoning to act efficiently.
o Knowledge-based agents are those agents who have the capability of maintaining an
internal state of knowledge, reason over that knowledge, update their knowledge after
observations and take actions. These agents can represent the world with some formal
representation and act intelligently.
o Knowledge-based agents are composed of two main parts:
o Knowledge-base and
o Inference system.
The above diagram represents a generalized architecture for a knowledge-based agent. The knowledge-based agent (KBA) takes input by perceiving the environment. The input is passed to the inference engine of the agent, which also communicates with the KB to decide on an action as per the knowledge stored in the KB. The learning element of the KBA regularly updates the KB by learning new knowledge.
Knowledge base: Knowledge-base is a central component of a knowledge-based agent, it is also known as KB. It is a collection
of sentences (here 'sentence' is a technical term and it is not identical to sentence in English). These sentences are expressed in
a language which is called a knowledge representation language. The Knowledge-base of KBA stores fact about the world.
Knowledge-base is required for updating knowledge for an agent to learn with experiences and take action as per the
knowledge.
Inference system
Inference means deriving new sentences from old. Inference system allows us to add a new sentence to the knowledge base. A
sentence is a proposition about the world. Inference system applies logical rules to the KB to deduce new information.
The inference system generates new facts so that an agent can update the KB. An inference system works mainly in two modes, which are:
o Forward chaining
o Backward chaining
Following are three operations which are performed by KBA in order to show the intelligent behavior:
1. TELL: This operation tells the knowledge base what it perceives from the environment.
2. ASK: This operation asks the knowledge base what action it should perform.
3. Perform: It performs the selected action.
There are mainly four ways of knowledge representation which are given as follows:
1. Logical Representation
2. Semantic Network Representation
3. Frame Representation
4. Production Rules
1. Logical Representation
Logical representation is a language with some concrete rules which deals with propositions and has no ambiguity in
representation. Logical representation means drawing a conclusion based on various conditions. This representation lays down
some important communication rules. It consists of precisely defined syntax and semantics which supports the sound inference.
Each sentence can be translated into logics using syntax and semantics.
Syntax:
o Syntaxes are the rules which decide how we can construct legal sentences in the logic.
o It determines which symbol we can use in knowledge representation.
o How to write those symbols.
Semantics:
o Semantics are the rules by which we can interpret the sentence in the logic.
o Semantic also involves assigning a meaning to each sentence.
a. Propositional Logics
b. Predicate logics
Note: We will discuss Propositional Logic and Predicate Logic in later chapters.
1. Logical representations have some restrictions and are challenging to work with.
2. Logical representation technique may not be very natural, and inference may not be so efficient.
Note: Do not be confused with logical representation and logical reasoning as logical representation is a representation
language and reasoning is a process of thinking logically.
2. Semantic Network Representation
Semantic networks are an alternative to predicate logic for knowledge representation. In semantic networks, we represent knowledge in the form of graphical networks. Such a network consists of nodes representing objects and arcs describing the relationships between those objects. Semantic networks can categorize objects in different forms and can also link those objects. Semantic networks are easy to understand and can be easily extended.
Example: Following are some statements which we need to represent in the form of nodes and arcs.
Statements:
a. Jerry is a cat.
b. Jerry is a mammal
c. Jerry is owned by Priya.
d. Jerry is brown colored.
e. All Mammals are animal.
In the above diagram, we have represented the different type of knowledge in the form of nodes and arcs. Each object is
connected with another object by some relation.
1. Semantic networks take more computational time at runtime as we need to traverse the complete network tree to
answer some questions. It might be possible in the worst case scenario that after traversing the entire tree, we find
that the solution does not exist in this network.
2. Semantic networks try to model human-like memory (which has on the order of 10^15 neurons and links) to store information, but in practice it is not possible to build such a vast semantic network.
3. These types of representations are inadequate as they do not have any equivalent quantifier, e.g., for all, for some,
none, etc.
4. Semantic networks do not have any standard definition for the link names.
5. These networks are not intelligent and depend on the creator of the system.
3. Frame Representation
A frame is a record-like structure consisting of a collection of attributes and their values that together describe an entity in the world. Frames are the AI data structure that divides knowledge into substructures representing stereotyped situations. A frame consists of a collection of slots and slot values; these slots may be of any type and size. Slots have names and values, which are called facets.
Facets: The various aspects of a slot are known as facets. Facets are features of frames that enable us to put constraints on the frames. Example: IF-NEEDED facets are invoked when the data for a particular slot is needed. A frame may consist of any number of slots, a slot may include any number of facets, and a facet may have any number of values. Frame representation is also known as slot-filler knowledge representation in artificial intelligence.
Frames are derived from semantic networks and later evolved into our modern-day classes and objects. A single frame is not
much useful. Frames system consist of a collection of frames which are connected. In the frame, knowledge about an object or
event can be stored together in the knowledge base. The frame is a type of technology which is widely used in various
applications including Natural language processing and machine visions.
1. The frame knowledge representation makes the programming easier by grouping the related data.
2. The frame representation is comparably flexible and used by many applications in AI.
3. It is very easy to add slots for new attribute and relations.
4. It is easy to include default data and to search for missing values.
5. Frame representation is easy to understand and visualize.
4. Production Rules
A production rule system consists of (condition, action) pairs, meaning "if condition then action". It has mainly three parts: the set of production rules, working memory, and the recognize-act cycle.
The agent checks the condition of each rule; if the condition holds, the production rule fires and the corresponding action is carried out. The condition part of a rule determines which rule may be applied to a problem, and the action part carries out the associated problem-solving steps. This complete process is called the recognize-act cycle.
The working memory contains the description of the current state of problem solving, and rules can write knowledge to the working memory. This knowledge may then match and fire other rules.
If a new situation (state) is generated, multiple production rules may be triggered together; this set of rules is called the conflict set. The agent then needs to select one rule from the set, which is called conflict resolution.
Example:
o IF (at bus stop AND bus arrives) THEN action (get into the bus)
o IF (on the bus AND paid AND empty seat) THEN action (sit down).
o IF (on bus AND unpaid) THEN action (pay charges).
o IF (bus arrives at destination) THEN action (get down from the bus).
1. A production rule system does not exhibit any learning capability, as it does not store the results of problems for future use.
2. During the execution of the program, many rules may be active, hence rule-based production systems can be inefficient.
Predicate logic deals with predicates, which are propositions containing variables.
Quantifiers:
The variables of predicates are quantified by quantifiers. There are two types of quantifiers in predicate logic: the existential quantifier and the universal quantifier.
Existential Quantifier:
If p(x) is a proposition over the universe U, then ∃x p(x) is read as "There exists at least one value in the universe of variable x such that p(x) is true." The quantifier ∃ is called the existential quantifier.
There are several ways to write a proposition with an existential quantifier, i.e.,
(∃x∈A)p(x), or ∃x∈A such that p(x), or (∃x)p(x), or "p(x) is true for some x∈A".
Universal Quantifier:
If p(x) is a proposition over the universe U, then ∀x p(x) is read as "For every x∈U, p(x) is true." The quantifier ∀ is called the universal quantifier.
The two rules for the negation of quantified propositions, also called DeMorgan's laws, are:
¬(∀x p(x)) ≡ ∃x ¬p(x)
¬(∃x p(x)) ≡ ∀x ¬p(x)
Example: Write the negation for each of the following. Determine whether the resulting statement is
true or false. Assume U = R.
1. ∀x ∃m (x² < m)
Sol: The negation of ∀x ∃m (x² < m) is ∃x ∀m (x² ≥ m), which asserts that there exists some x such that x² ≥ m for every m. This negation is false: for any x we can choose m = x² + 1, so no x satisfies x² ≥ m for every m. (Equivalently, the original statement is true.)
2. ∃m ∀x (x² < m)
Sol: The negation of ∃m ∀x (x² < m) is ∀m ∃x (x² ≥ m), which asserts that for every m there exists some x such that x² ≥ m. This negation is true: for every m, a sufficiently large x satisfies x² ≥ m.
5) Explain forward chaining and Backward chaining algorithm with the help of
example.
Inference engine:
The inference engine is the component of the intelligent system in artificial intelligence, which applies logical rules to the
knowledge base to infer new information from known facts. The first inference engine was part of the expert system. Inference
engine commonly proceeds in two modes, which are:
a. Forward chaining
b. Backward chaining
Horn clause and definite clause are the forms of sentences, which enables knowledge base to use a more restricted and efficient
inference algorithm. Logical inference algorithms use forward and backward chaining approaches, which require KB in the form
of the first-order definite clause.
Definite clause: A clause which is a disjunction of literals with exactly one positive literal is known as a definite clause or strict
horn clause.
Horn clause: A clause which is a disjunction of literals with at most one positive literal is known as horn clause. Hence all the
definite clauses are horn clauses.
A. Forward Chaining
Forward chaining is also known as forward deduction or forward reasoning when using an inference engine. It is a form of reasoning that starts with the atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to derive more data until a goal is reached.
The forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.
Properties of Forward-Chaining:
Consider the following famous example which we will use in both approaches:
Example:
"As per the law, it is a crime for an American to sell weapons to hostile nations. Country A, an enemy of America, has
some missiles, and all the missiles were sold to it by Robert, who is an American citizen."
To solve the above problem, first, we will convert all the above facts into first-order definite clauses, and then we will use a
forward-chaining algorithm to reach the goal.
o It is a crime for an American to sell weapons to hostile nations. (Let's say p, q, and r are variables)
American (p) ∧ weapon(q) ∧ sells (p, q, r) ∧ hostile(r) → Criminal(p) ...(1)
o Country A has some missiles: ∃p Owns(A, p) ∧ Missile(p). By Existential Instantiation, introducing the new constant T1, this can be written as two definite clauses:
Owns(A, T1) ......(2)
Missile(T1) .......(3)
o All of the missiles were sold to country A by Robert.
Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
o Missiles are weapons.
Missile(p) → Weapon(p) .......(5)
o Enemy of America is known as hostile.
Enemy(p, America) →Hostile(p) ........(6)
o Country A is an enemy of America.
Enemy (A, America) .........(7)
o Robert is American
American(Robert). ..........(8)
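The forward-chaining run over clauses (1)-(8) can be sketched propositionally by grounding the rules with Robert, A, and T1 (a deliberate simplification of first-order forward chaining, assumed for illustration):

```python
def forward_chain(facts, rules):
    """Repeatedly fire every rule whose premises are all in the fact set,
    adding its conclusion, until no new facts appear (propositional sketch)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)   # Modus Ponens in the forward direction
                changed = True
    return facts

# Ground (variable-free) versions of clauses (1), (4), (5), (6)
rules = [
    (['Missile(T1)'], 'Weapon(T1)'),
    (['Enemy(A, America)'], 'Hostile(A)'),
    (['Missile(T1)', 'Owns(A, T1)'], 'Sells(Robert, T1, A)'),
    (['American(Robert)', 'Weapon(T1)', 'Sells(Robert, T1, A)', 'Hostile(A)'],
     'Criminal(Robert)'),
]
# Known facts: clauses (2), (3), (7), (8)
facts = ['American(Robert)', 'Missile(T1)', 'Owns(A, T1)', 'Enemy(A, America)']
```

Starting from the four known facts, the loop derives Weapon(T1), Hostile(A), and Sells(Robert, T1, A), and finally reaches the goal Criminal(Robert), mirroring the inference chain of the worked example.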
B. Backward Chaining:
Backward-chaining is also known as a backward deduction or backward reasoning method when using an inference engine. A
backward chaining algorithm is a form of reasoning, which starts with the goal and works backward, chaining through rules to
find known facts that support the goal.
Example:
In backward-chaining, we will use the same above example, and will rewrite all the rules.
https://www.javatpoint.com/ai-resolution-in-first-order-logic
7) Explain the steps involved in converting the propositional logic statement into CNF
with a suitable example.
https://youtu.be/Jf2T8RdCYfA?si=tIoIZnONEsdeUPbr
Introduction:
Prolog is a logic programming language that plays an important role in artificial intelligence. Unlike many other programming languages, Prolog is intended primarily as a declarative programming language: logic is expressed as relations (called facts and rules), and formulation or computation is carried out by running a query over these relations.
Syntax and Basic Fields :
In Prolog, we declare facts, and these facts constitute the Knowledge Base of the system. We
can then query the Knowledge Base: the output is affirmative if the query is already in
the Knowledge Base or is implied by it; otherwise the output is negative.
So the Knowledge Base can be considered similar to a database against which we can run queries. Prolog
facts are expressed in a definite pattern: a fact contains entities and their relation. The entities are
written within parentheses, separated by commas (,), and their relation is written before the
opening parenthesis. Every fact/rule ends with a dot (.). A typical Prolog fact looks as
follows :
Example :
friends(raju, mahesh).
singer(sonu).
odd_number(5).
Explanation :
These facts can be interpreted as :
raju and mahesh are friends.
sonu is a singer.
5 is an odd number.
Key Features :
1. Unification : The basic idea is to determine whether two given terms can be made to represent the same structure.
2. Backtracking : When a goal fails, Prolog traces backwards and tries to re-satisfy the previous goal with a different choice.
3. Recursion : Recursion is the basis for any search in a Prolog program.
Running queries :
A typical prolog query can be asked as :
Query 1 : ?- singer(sonu).
Output : Yes.
Query 2 : ?- odd_number(7).
Output : No.
Advantages :
1. Easy to build database. Doesn’t need a lot of programming effort.
2. Pattern matching is easy. Search is recursion based.
3. It has built in list handling. Makes it easier to play with any algorithm involving lists.
Disadvantages :
1. LISP (another language widely used in AI, though a functional rather than a logic programming
language) dominates over Prolog with respect to I/O features.
2. Input and output handling is sometimes not easy.
Applications :
Prolog is widely used in artificial intelligence (AI). Prolog is also used for pattern matching over
natural language parse trees.
https://www.javatpoint.com/bayesian-belief-network-in-artificial-intelligence
10) Explain the steps involved in converting the propositional logic statement into CNF.
Consider the following Axioms.
All people who are graduating are happy.
All happy people smile.
Someone is Graduating.
1) Represent these axioms in FOL.
2) Convert the FOL to CNF.
3) Prove that someone is smiling using resolution technique
11) Explain the steps involved in converting the propositional logic statement into CNF.
Consider the following Axioms.
Rani is hungry.
If rani is hungry she barks.
If rani is barking then raja is angry.
1) Represent these axioms in FOL.
2) Convert the FOL to CNF.
3) Prove that Raja is angry by using resolution technique
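For the axioms of question 11, the resolution refutation can be sketched as a tiny ground resolution prover — an illustrative sketch, workable here because the clauses contain no variables once rani and raja are instantiated:

```python
# Clauses as frozensets of literals; "~" marks negation. These are the CNF
# clauses for question 11 plus the negated goal ~Angry(raja).
clauses = {
    frozenset({"Hungry(rani)"}),
    frozenset({"~Hungry(rani)", "Barks(rani)"}),  # Hungry(rani) -> Barks(rani)
    frozenset({"~Barks(rani)", "Angry(raja)"}),   # Barks(rani) -> Angry(raja)
    frozenset({"~Angry(raja)"}),                  # negation of the goal
}

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two ground clauses."""
    return [frozenset((c1 - {lit}) | (c2 - {negate(lit)}))
            for lit in c1 if negate(lit) in c2]

def refutes(clauses):
    """Saturate under resolution; True iff the empty clause is derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolve(c1, c2):
                    if not r:          # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:             # nothing new: no refutation exists
            return False
        clauses |= new

print(refutes(clauses))  # True, so Angry(raja) follows from the axioms
```

The prover resolves Hungry(rani) with the first implication to get Barks(rani), then Angry(raja), which contradicts the negated goal — exactly the pen-and-paper refutation.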
5. Obtain the subsethood and equality measures S(A,B) and E(A,B) for the
following fuzzy sets
a. A = 0.1/0.1 + 0.2/0.2 + 0.3/0.3 + 0.4/0.4 + 0.5/0.5
b. B = 0.2/0.1 + 0.2/0.2 + 0.4/0.3 + 0.4/0.4 + 0.6/0.5
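A worked sketch for this question, assuming the common sigma-count definitions S(A,B) = |A ∩ B| / |A| and E(A,B) = |A ∩ B| / |A ∪ B| (with min for intersection and max for union) — check these against your course notes:

```python
# Fuzzy sets as {element: membership}; the x/y notation above means
# membership/element.
A = {0.1: 0.1, 0.2: 0.2, 0.3: 0.3, 0.4: 0.4, 0.5: 0.5}
B = {0.1: 0.2, 0.2: 0.2, 0.3: 0.4, 0.4: 0.4, 0.5: 0.6}

inter = sum(min(A[x], B[x]) for x in A)  # |A ∩ B| = 1.5
union = sum(max(A[x], B[x]) for x in A)  # |A ∪ B| = 1.8
S = inter / sum(A.values())              # subsethood: 1.5 / 1.5 = 1.0
E = inter / union                        # equality:   1.5 / 1.8 ≈ 0.833
print(round(S, 3), round(E, 3))
```

Here S(A,B) = 1 because every membership grade of A is below the corresponding grade of B, i.e. A is fully contained in B.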
Fuzzy propositions are statements within the framework of fuzzy logic, which deals with
reasoning that is approximate rather than fixed and exact. Traditional logic involves
propositions that are either true or false, but fuzzy logic allows for propositions to have a
degree of truth ranging between completely true and completely false. This concept is
crucial in dealing with real-world scenarios where information is often uncertain, imprecise,
or vague.
1. Fuzziness: Fuzziness arises from the ambiguity and vagueness present in many real-world situations. A
fuzzy proposition can reflect this uncertainty by having a truth value that is not just true (1)
or false (0), but any value in between.
2. Fuzzy Sets: Instead of classical sets where elements either belong or don't belong, fuzzy sets allow
elements to have degrees of membership. For example, in the fuzzy set of "tall people,"
someone might belong to this set with a membership degree of 0.7 if they are somewhat
tall.
3. Linguistic Variables: These are variables whose values are words or sentences from a natural language, instead of
numerical values. For instance, "temperature" might be a linguistic variable with values like
"cold," "warm," and "hot," which can be represented by fuzzy sets.
4. Truth Values: In fuzzy logic, truth values are not binary. A proposition like "John is tall" might have a truth
value of 0.8, indicating that John is mostly tall but not completely so.
5. Fuzzy Operators: Logical operators such as AND, OR, and NOT are extended to handle fuzzy propositions. For
example, the fuzzy AND operation might return the minimum of two truth values, reflecting
the idea that both conditions need to be sufficiently true.
Example:
1. Temperature Control: Proposition: "The room is warm."
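The extended operators above can be sketched in a few lines, using the common min/max/complement choices (other operator families exist):

```python
# Standard fuzzy connectives on degrees of truth in [0, 1]:
# AND = min, OR = max, NOT = complement.
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

warm, humid = 0.7, 0.4      # truth degrees of "room is warm", "room is humid"
print(f_and(warm, humid))   # 0.4 — "warm AND humid"
print(f_or(warm, humid))    # 0.7 — "warm OR humid"
print(f_not(warm))          # ≈0.3 — "NOT warm"
```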
________________________________________________
The word fuzzy refers to things which are not clear or are vague. Any
event, process, or function that is changing continuously cannot always
be defined as either true or false, which means that we need to define
such activities in a Fuzzy manner.
In other words, we can say that fuzzy logic is not logic that is fuzzy, but
logic that is used to describe fuzziness. There can be numerous other
examples like this with the help of which we can understand the concept
of fuzzy logic.
Fuzzification is the process of converting a crisp input value into a fuzzy value, which involves
mapping an input value to a corresponding degree of membership in a fuzzy set. This
process is a crucial step in fuzzy logic systems, enabling the system to handle imprecise and
vague data. Here's a detailed look at the fuzzification method:
Components of Fuzzification
1. Fuzzy Sets: These are sets with boundaries that are not sharply defined. Each element in the
fuzzy set has a degree of membership, ranging from 0 to 1, which indicates how strongly the
element belongs to the set.
2. Membership Functions: These functions define how each point in the input space is
mapped to a degree of membership. Common types of membership functions include
triangular, trapezoidal, Gaussian, and sigmoid functions.
Steps in Fuzzification
1. Define the Universe of Discourse: Determine the range of input values for which the fuzzy
sets are defined.
2. Establish Fuzzy Sets and Membership Functions: For each input variable, create fuzzy sets
and their corresponding membership functions. For example, for the input variable
"temperature," you might have fuzzy sets like "Cold," "Warm," and "Hot," each with its
membership function.
3. Convert Crisp Input to Fuzzy Values: Take the crisp input value and determine its degree
of membership in each fuzzy set using the membership functions.
Example of Fuzzification
Consider an example where the input variable is "temperature," measured in degrees Celsius.
Let's define three fuzzy sets for this variable:
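Since the actual set definitions are not reproduced here, the following sketch assumes three illustrative sets — Cold, Warm, Hot — with made-up breakpoints, and fuzzifies a crisp reading of 22 °C:

```python
# Fuzzification sketch. The three sets and their breakpoints below are
# assumptions for illustration, not taken from the notes.
def tri(x, a, b, c):
    """Triangular membership: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(temp):
    return {
        "Cold": tri(temp, -1, 0, 20),   # shoulder approximated by a peak at 0
        "Warm": tri(temp, 10, 25, 40),
        "Hot":  tri(temp, 30, 50, 51),  # shoulder approximated by a peak at 50
    }

print(fuzzify(22))  # crisp 22 °C -> {'Cold': 0.0, 'Warm': 0.8, 'Hot': 0.0}
```

A crisp 22 °C thus belongs to "Warm" with degree 0.8 and to the other two sets with degree 0 — that mapping from one crisp number to a vector of membership degrees is fuzzification.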
The architecture of Fuzzy Logic Control (FLC) has the following major components −
Fuzzifier − The role of fuzzifier is to convert the crisp input values into
fuzzy values.
Fuzzy Knowledge Base − It stores the knowledge about all the input-
output fuzzy relationships. It also has the membership function which
defines the input variables to the fuzzy rule base and the output variables
to the plant under control.
Fuzzy Rule Base − It stores the knowledge about the operation of the
process domain. The rule base is configured by formulating the
relationships between the fuzzy inputs and outputs.
Defuzzifier − The role of the defuzzifier is to convert the fuzzy output
values produced by the inference process back into crisp values.
Applications:
FLC systems find a wide range of applications in various industrial and commercial products and systems. In
several applications- related to nonlinear, time-varying, ill-defined systems and also complex systems – FLC
systems have proved to be very efficient in comparison with other conventional control systems. The
applications of FLC systems include:
1. Traffic Control
2. Steam Engine
3. Aircraft Flight Control
4. Missile Control
5. Adaptive Control
6. Liquid-Level Control
7. Helicopter Model
8. Automobile Speed Controller
9. Braking System Controller
10. Process Control (includes cement kiln control)
11. Robotic Control
12. Elevator (Automatic Lift) control;
13. Automatic Running Control
14. Cooling Plant Control
15. Water Treatment
16. Boiler Control;
17. Nuclear Reactor Control;
18. Power Systems Control;
19. Air Conditioner Control (Temperature Controller)
20. Biological Processes
21. Knowledge-Based System
22. Fault Detection Control Unit
23. Fuzzy Hardware implementation and Fuzzy Computers
Fuzzy logic deals with reasoning that is approximate or imprecise. Fuzzy singleton rules are a
specific type of rule used in fuzzy inference systems. Here's a breakdown of the concept:
1. Fuzzy Sets:
Fuzzy logic relies on fuzzy sets, which represent gradations of membership rather than crisp
categories. Imagine a set for "temperature" instead of just hot or cold. It can include values
with varying degrees of membership like "very cold," "cold," "neutral," "warm," and "very
hot."
2. Membership Functions:
Membership functions define the grade of membership in a fuzzy set. They map an element
to a value between 0 (not in the set) and 1 (fully in the set). Visualize a bell curve for
"temperature" where the y-axis represents the degree of membership.
3. Singleton Fuzzifier:
A singleton fuzzifier is a specific type of membership function that acts like a spike with a
membership grade of 1 at a single point and 0 everywhere else. Imagine a sharp peak on the
temperature curve representing a specific crisp value, say "70 degrees."
Fuzzy inference systems use a set of rules in an "if-then" format. The antecedent (if part)
describes conditions based on fuzzy sets, and the consequent (then part) specifies the
outcome.
Fuzzy singleton rules are where the consequent of a fuzzy rule uses a singleton fuzzifier. In
simpler terms, the "then" part of the rule outputs a specific crisp value.
For example, a fuzzy singleton rule for a thermostat might be: "If the temperature is 'cold,'
then set heating to 'high.'" Here, "cold" is a fuzzy set with a membership function, and
"high" is a crisp value.
Fuzzy operator tuning refers to the process of adjusting various aspects of a fuzzy inference
system (FIS) to achieve optimal performance. The goal is to fine-tune the FIS for the specific
application it's controlling.
Membership Function (MF) Tuning:
This involves adjusting the parameters of the membership functions associated with the
fuzzy sets used in the system. These functions define the degree of membership for an input
value in a particular fuzzy set.
By tweaking parameters like shape, spread, and position of the membership functions, you
can influence how the FIS interprets and reacts to input values.
Rule Tuning:
This focuses on refining the fuzzy rules themselves, which are the "if-then" statements that
dictate the system's behavior.
You might adjust the rule base by adding, removing, or modifying existing rules to better
capture the desired system response.
Tuning approaches include:
Manual Tuning: This involves adjusting MFs and rules based on expert knowledge and
desired system behavior. It requires a good understanding of the system and fuzzy logic
principles.
Data-driven Tuning: This leverages historical data or training data to automatically adjust
MFs and rules. Optimization algorithms like genetic algorithms or particle swarm
optimization can be used to find the best configuration based on the data.
Neuro-adaptive Learning: This technique, particularly useful for Sugeno FIS (a specific type
of fuzzy system), borrows from neural network training methods to adjust MFs.
The choice of tuning method depends on factors like the complexity of the system,
availability of data, and desired level of control.
13. Draw the profile of a membership function for a fuzzy set called "Tall
men". Take your own values for different heights.
14. Describe the different properties of fuzzy sets. Prove whether the laws of
excluded middle and contradiction hold true for fuzzy sets.
https://youtu.be/JNEqzIBkUV8?si=wgqV9T5FhXp-d0pb
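As a quick numeric check of why those two laws fail (a sketch using the standard complement μA'(x) = 1 − μA(x) and max/min for union/intersection):

```python
# For crisp sets, A ∪ A' = X (all memberships 1) and A ∩ A' = ∅ (all 0).
# With fuzzy sets, both laws fail wherever μA(x) is strictly between 0 and 1.
A = {"x1": 0.3, "x2": 0.7, "x3": 1.0}

union = {x: max(m, 1 - m) for x, m in A.items()}   # A ∪ A'
inter = {x: min(m, 1 - m) for x, m in A.items()}   # A ∩ A'

print(union)  # x1 and x2 get 0.7, not 1 -> law of excluded middle fails
print(inter)  # x1 and x2 get ≈0.3, not 0 -> law of contradiction fails
```

Only the crisp element x3 (membership exactly 1) behaves classically, which is why these laws hold for crisp sets but not for fuzzy sets in general.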
Regular fuzzy sets (type-1) are a powerful tool for dealing with imprecise or subjective data.
However, they have limitations when it comes to handling uncertainty in the membership
function itself. This is where type-2 fuzzy sets come in.
Core Concept: Type-2 fuzzy sets address the issue of uncertainty in membership
functions by introducing an additional layer of fuzziness. Imagine a fuzzy set for
"temperature" like "warm." In a type-1 set, the membership function would be a
fuzzy curve. A type-2 fuzzy set, however, allows for a footprint of uncertainty within
this curve.
Footprint of Uncertainty (FOU): This is the key feature of type-2 sets. It represents
the ambiguity or vagueness associated with the membership grade of each element
in the set. Visualize a shaded area around the original fuzzy curve for "temperature,"
encompassing possible variations in how "warm" might be defined.
Three-Dimensional Membership Function: Due to the FOU, the membership
function of a type-2 fuzzy set becomes three-dimensional. The x and y axes
represent the regular domain and membership grade like in type-1 sets. The third
dimension (often denoted by z) depicts the uncertainty level within the membership
function.
Imagine a temperature sensor with some inherent measurement error. A type-1 fuzzy set
might represent "room temperature" with a fuzzy curve. A type-2 fuzzy set could account
for the sensor's uncertainty by including a footprint of uncertainty around the curve. This
footprint might widen or narrow depending on the sensor's known level of error. Elements
closer to the center of the footprint have higher confidence in their membership grade
("room temperature"), while those on the edges have more ambiguity.
While type-2 fuzzy sets offer advantages, they can also be computationally more complex
than type-1 sets. The choice between them depends on the specific application and the
level of uncertainty you need to model.
_____________________________________________
16. Let fuzzy sets A and B be given as A = 0.5/3 + 1/5 + 0.6/7 + 0.8/8 and B
= 1/3 + 0.5/5 + 0.1/7 + 1/8 where the universe of discourse being X = {3, 5,
7, 8}. Now obtain the following:
a. A + B , the Algebraic Sum
b. A.B , the Algebraic Product
c. S(A,B), the subsethood measure
d. E(A,B), the equality measure.
https://youtu.be/dkPAdqaMf5c?si=xO2Z2J23tVCe66LM
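A worked sketch for question 16. The algebraic sum (a + b − ab) and algebraic product (ab) are standard; the subsethood and equality measures are assumed to be the sigma-count versions, as in question 5:

```python
# Fuzzy sets over X = {3, 5, 7, 8}, written as {element: membership}.
A = {3: 0.5, 5: 1.0, 7: 0.6, 8: 0.8}
B = {3: 1.0, 5: 0.5, 7: 0.1, 8: 1.0}

alg_sum  = {x: A[x] + B[x] - A[x] * B[x] for x in A}  # a + b − ab
alg_prod = {x: A[x] * B[x] for x in A}                # ab

inter = sum(min(A[x], B[x]) for x in A)  # |A ∩ B| = 1.9
union = sum(max(A[x], B[x]) for x in A)  # |A ∪ B| = 3.6
S = inter / sum(A.values())              # subsethood: 1.9 / 2.9 ≈ 0.655
E = inter / union                        # equality:   1.9 / 3.6 ≈ 0.528
print(alg_sum, alg_prod)
print(round(S, 3), round(E, 3))
```

Note the algebraic sum saturates at 1 wherever either grade is 1 (elements 3, 5, and 8), leaving only element 7 with 0.6 + 0.1 − 0.06 = 0.64.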
18. Given two fuzzy sets X and Y. Prove
CON(X ∪ Y) = CON(X) ∪ CON(Y)
CON(X ∩ Y) = CON(X) ∩ CON(Y)
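The proof rests on one observation: concentration CON squares each membership grade, and t → t² is monotonically increasing on [0, 1], so squaring commutes with max and min, i.e. max(a, b)² = max(a², b²) and min(a, b)² = min(a², b²). A numeric sketch of both identities:

```python
# CON squares each grade; ∪ uses max, ∩ uses min (example grades are arbitrary).
X = {"u1": 0.2, "u2": 0.9, "u3": 0.5}
Y = {"u1": 0.6, "u2": 0.4, "u3": 0.5}

def con(F):        return {u: m * m for u, m in F.items()}
def f_union(F, G): return {u: max(F[u], G[u]) for u in F}
def f_inter(F, G): return {u: min(F[u], G[u]) for u in F}

assert con(f_union(X, Y)) == f_union(con(X), con(Y))  # CON(X ∪ Y) = CON(X) ∪ CON(Y)
assert con(f_inter(X, Y)) == f_inter(con(X), con(Y))  # CON(X ∩ Y) = CON(X) ∩ CON(Y)
print("both identities hold")
```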
25. A fuzzy set is given by B = {(1,0.1), (2,0.2), (3,0.3), (4,0.9), (0,0.0)}. What is
the crisp set that can be concluded from it?
26.
Temperature(in ℃)={10,20, 27,30,40}
Fan speed(in rpm)={20,40,60,80}
a. Using triangular membership function calculate membership value for fuzzy
set
b. Represent in graphical format.
c. Calculate Temperature ∪ Fan speed.
27.
Obstacle distance(in mm)={10,20, 30,40,60,80}
Angle of steering (in degree)={20,40,60,80}
1. Using triangular membership function calculate membership value for fuzzy
set
2. Represent in graphical format.
3. Calculate Obstacle distance ∩ Angle of steering.
28.
Fuzzy If then else rule R has the form If “x is A” Then “y is B” Else “Y is C”
Consider R: If “distance is long” Then “speed is high” Else “speed is moderate”.
The relevant sets (crisp and fuzzy) are distance = {100,500,1000,5000} is the universe of
the fuzzy set long distance, speed = {30,50,70,90,120} is the universe of the fuzzy sets
high speed as well as moderate speed, and Long-distance =
CO6 PLANNING
1. What is planning? Explain different types of planning in AI.
Planning is an important part of Artificial Intelligence which deals with the tasks and
domains of a particular problem. Planning is considered the logical side of acting.
In other words, Planning is about deciding the tasks to be performed by the
artificial intelligence system and the system's functioning under domain-independent
conditions.
What is a Plan?
We require domain description, task specification, and goal description for any
planning system. A plan is considered a sequence of actions, and each action has its
preconditions that must be satisfied before it can act and some effects that can be
positive or negative.
So, we have Forward State Space Planning (FSSP) and Backward State Space
Planning (BSSP) at the basic level.
1. Forward State Space Planning (FSSP)
FSSP behaves in the same way as forward state-space search. Given an
initial state S in any domain, we perform an applicable action and obtain a new
state S' (which also contains some new terms), called a progression. This continues until
we reach the target state. In FSSP, the chosen action must be applicable in the current state.
2. Backward State Space Planning (BSSP)
BSSP behaves similarly to backward state-space search. In this, we move from the
target state g to a sub-goal g', tracing back the action that achieves that goal. This
process is called regression (going back to the previous goal or sub-goal). These sub-
goals should also be checked for consistency. In BSSP, the chosen action must be relevant
to the current goal.
So for an efficient planning system, we need to combine the features of FSSP and
BSSP.
Execution of the plan is about choosing a sequence of tasks with a high probability
of accomplishing a specific task.
AI planning comes in different types, each suitable for a particular situation. Popular
types of planning in AI include:
Classical Planning: In this style of planning, a series of actions is created to accomplish
a goal in a predetermined setting. It assumes that everything is static and predictable.
Hierarchical planning: By dividing large problems into smaller ones, hierarchical
planning makes planning more effective. A hierarchy of plans must be established, with
higher-level plans supervising the execution of lower-level plans.
Temporal Planning: Planning for the future considers time restrictions and
interdependencies between actions. It ensures that the plan is workable within a certain
time limit by taking into account the duration of tasks.
AI planning is used in many different fields, demonstrating its adaptability and efficiency. A few
significant applications are:
Robotics: To enable autonomous robots to properly navigate their surroundings, carry
out activities, and achieve goals, planning is crucial.
Gaming: AI planning is essential to the gaming industry because it enables game
characters to make thoughtful choices and design difficult and interesting gameplay
scenarios.
Logistics: To optimize routes, timetables, and resource allocation and achieve effective
supply chain management, AI planning is widely utilized in logistics.
Healthcare: AI planning is used in the industry to better the quality and effectiveness of
healthcare services by scheduling patients, allocating resources, and planning
treatments.
2. Blocks World Problem
Problem Description
In the block-world problem, you have a set of blocks, a table, and a robot arm. The
blocks can be stacked on top of each other or placed on the table. The goal is to
transform an initial configuration of blocks into a specified goal configuration using a
series of actions.
Actions
Each action has preconditions that must be satisfied for the action to be executed,
and effects that describe the outcome of the action.
PickUp(x):
Preconditions: Block x is clear (nothing on top of it), and the robot arm is empty.
Effects: Block x is held by the robot arm, and x is no longer on the table or another
block.
PutDown(x):
Preconditions: Block x is held by the robot arm.
Effects: Block x is on the table, x is clear, and the robot arm is empty.
Stack(x, y):
Preconditions: Block x is held by the robot arm, and block y is clear.
Effects: Block x is on top of block y, and the robot arm is empty.
Unstack(x, y):
Preconditions: Block x is clear, x is on top of block y, and the robot arm is empty.
Effects: Block x is held by the robot arm, x is no longer on y, and block y is clear.
The block-world problem can be represented formally using a planning language like
STRIPS (Stanford Research Institute Problem Solver).
STRIPS Representation
Initial State:
On(A, Table) ∧ On(B, Table) ∧ On(C, A) ∧ Clear(B) ∧ Clear(C) ∧ HandEmpty
Goal State:
On(A, B) ∧ On(B, C) ∧ On(C, Table) ∧ Clear(A) ∧ HandEmpty
Actions:
Action: PickUp(x)
Preconditions: Clear(x), On(x, y), HandEmpty
Effects: Holding(x), ¬On(x, y), ¬Clear(x), Clear(y), ¬HandEmpty
Action: PutDown(x)
Preconditions: Holding(x)
Effects: On(x, Table), ¬Holding(x), HandEmpty, Clear(x)
Action: Stack(x, y)
Preconditions: Holding(x), Clear(y)
Effects: On(x, y), Clear(x), ¬Clear(y), ¬Holding(x), HandEmpty
Action: Unstack(x, y)
Preconditions: Clear(x), On(x, y), HandEmpty
Effects: Holding(x), ¬On(x, y), ¬Clear(x), Clear(y), ¬HandEmpty
Planning Algorithm
The planning algorithm (e.g., STRIPS, heuristic search) would take the initial state,
goal state, and actions as input and output a sequence of actions that transforms the
initial state to the goal state.
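The action-application step such an algorithm relies on can be sketched as follows (a simplified illustration: states are sets of ground facts, each action carries precondition, add, and delete sets, and the two-action plan at the end is hand-picked for illustration, not produced by search):

```python
# STRIPS-style action application for the blocks problem above.
init = {("On", "C", "A"), ("On", "A", "Table"), ("On", "B", "Table"),
        ("Clear", "C"), ("Clear", "B"), ("HandEmpty",)}

def unstack(x, y):
    return ({("Clear", x), ("On", x, y), ("HandEmpty",)},   # preconditions
            {("Holding", x), ("Clear", y)},                 # add list
            {("On", x, y), ("Clear", x), ("HandEmpty",)})   # delete list

def putdown(x):
    return ({("Holding", x)},
            {("On", x, "Table"), ("Clear", x), ("HandEmpty",)},
            {("Holding", x)})

def apply_action(state, action):
    pre, add, delete = action
    assert pre <= state, "preconditions not satisfied"
    return (state - delete) | add       # remove delete list, add add list

state = apply_action(init, unstack("C", "A"))
state = apply_action(state, putdown("C"))
print(("On", "C", "Table") in state and ("Clear", "A") in state)  # True
```

A planner searches over exactly these transitions: from each state it considers the actions whose precondition set is a subset of the state, applies them, and stops when the goal facts are all present.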
3. Planning in detail