AI Endsem

Numericals:

14, 16, 17, 20, 21, 23, 24, 25, 26

CO3: 4, 6, 7, 8, 9
CO4: 5, 7, 10, 11,
CO5: 1, 2, 3, 5, 13, 16, 18, 22-28

CO1: introduction

1. Define artificial intelligence. What are different AI applications? Enlist any five AI
applications.

Artificial intelligence (AI) is a scientific field that involves creating machines and computers that can
learn, reason, and act in ways that normally require human intelligence. AI can perform a variety of
advanced functions, including:

1. Understanding and translating spoken and written language

2. Analyzing data

3. Making recommendations

Applications:

1. Artificial Intelligence in E-Commerce

Artificial Intelligence is widely used in E-commerce, where it helps organizations build good
engagement between users and the company. AI makes appropriate suggestions and recommendations
based on the user's search history and viewing preferences. AI chatbots also provide instant
customer support and greatly reduce complaints and queries.
Let’s take a closer look at AI applications in E-commerce.

 Personalization: Customers see products matched to their interest patterns, which eventually
drives more conversions.
 Enhanced Support: Attending to every customer query is vital for reducing the churn ratio, and
AI-powered chatbots can handle most of those queries, 24×7.
 Dynamic Pricing Structure: The price of a given product is adjusted intelligently by analyzing
data from different sources, on the basis of which a price prediction is made.

2. AI in Education Purpose

Until a few years ago, the education sector was organized and managed entirely by humans. Today it
too is coming under the influence of Artificial Intelligence, which helps faculty and students by
recommending courses, analyzing data to support decisions about students, and sending automated
messages to students and parents about vacations and test results. Let’s take a closer look at AI
applications in Education.

 Voice Assistant: With the help of AI algorithms, this feature can be used in many broad ways to
save time, provide convenience, and assist users whenever required.
 Gamification: E-learning companies can design attractive game modes so that kids learn in a fun
way. This keeps children engaged while ensuring they grasp the concepts, thanks to AI.
 Smart Content Creation: AI uses algorithms to detect, predict, and design content such as
videos, audio, and infographics, and to provide valuable insights based on the user’s interests.
With the introduction of AR/VR technologies, e-learning companies are likely to create learning
games and video content for a better experience.

3. Artificial Intelligence in Robotics

Artificial Intelligence is one of the major technologies giving the robotics field a boost in
efficiency. AI enables robots to make decisions in real time and increases productivity. Let’s
take a closer look at AI applications in Robotics.

 NLP: Natural Language Processing plays a vital role in robotics, allowing a robot to interpret
commands as a human gives them. It relies on AI techniques such as sentiment analysis and
syntactic parsing.
 Object Recognition & Manipulation: This functionality lets robots detect objects in their
surroundings and understand each object's size and shape. It has two parts: one identifies the
object, and the other handles physical interaction with it.

4. GPS and Navigations

GPS technology uses Artificial Intelligence to compute and offer the best available route to the
user. It also helps the user choose lanes and roads, which improves safety.

 Personalization (Intelligent Routing): The system learns the user’s patterns and preferred
routes. Regardless of time and duration, the GPS suggests routes based on these learned patterns
and analyses.
 Traffic Prediction: AI can use a linear regression algorithm to prepare and analyze traffic
data. This saves the user time, and alternate routes are suggested when congestion lies ahead.
 Positioning & Planning: GPS and navigation need AI support for better positioning and planning
to avoid congested zones. AI-based techniques such as Kalman filtering and sensor fusion are used,
and AI prediction methods analyze real-time data to surface the fastest and most efficient route.
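The traffic-prediction idea above can be sketched with a one-feature linear regression. The data
points and the function name below are invented for illustration; a real navigation system would
use far richer features and models.

```python
# Minimal sketch: predict travel time from traffic density with
# ordinary least squares (illustrative, made-up observations).

# Hypothetical observations: (vehicles_per_km, travel_minutes)
data = [(20, 12.0), (80, 35.0), (40, 18.0), (90, 40.0), (30, 14.0)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# Closed-form least squares for y = slope * x + intercept
slope = sum((x - mean_x) * (y - mean_y) for x, y in data) / \
        sum((x - mean_x) ** 2 for x, _ in data)
intercept = mean_y - slope * mean_x

def predict_minutes(vehicles_per_km: float) -> float:
    """Estimated travel time for a given traffic density."""
    return slope * vehicles_per_km + intercept

print(round(predict_minutes(60), 1))  # heavier traffic -> longer estimate
```

A production system would fit such a model continuously on live probe data and re-rank candidate
routes as predictions change.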

5. Healthcare

Artificial Intelligence is widely used in healthcare and medicine. AI algorithms are used to build
precise machines able to detect even minor diseases inside the human body. AI also uses a person's
medical history and current condition to predict future diseases, and it can locate vacant hospital
beds in a city, saving time for patients in emergency conditions. Let’s take a closer look at AI
applications in Healthcare.

 Insights & Analysis: AI analyzes large datasets, including clinical data, research studies, and
public health data, to identify trends and patterns. This in turn aids disease surveillance and
public health planning.
 Telehealth: This feature lets doctors and healthcare experts monitor patients closely while
analyzing data to prevent emerging health issues. High-risk patients who require intensive care
are likely to benefit most from this AI-powered feature.
 Patient Monitoring: AI systems flag abnormal activity and raise alerts during patient care,
enabling early intervention. Remote Patient Monitoring (RPM) has been growing significantly and is
expected to exceed USD 6 billion by 2025.
 Surgical Assistance: AI algorithms guide and streamline procedures, helping surgeons make
effective decisions based on the provided insights and ensuring no further risks arise during the
operation.

2. Explain the difference between AI and non-AI (regular computing).

Learning and Adaptation:


 Regular Computing: Traditional computing operates on fixed instructions. The
programme executes a predefined set of commands without the ability to adapt or learn
from new information.

 AI: AI - particularly machine learning - excels at learning from data. Algorithms iteratively
improve their performance, making predictions or decisions based on patterns identified
in massive datasets.

Flexibility and Problem Solving:

 Regular Computing: Traditional systems are proficient at solving specific problems for
which they are programmed. Their utility extends to a wide array of applications but
remains confined to predefined tasks.

 AI: AI thrives in dynamic environments, adapting to unforeseen challenges. The ability
to generalise knowledge allows AI systems to tackle diverse problem sets, often
outperforming traditional computing in complex, ambiguous scenarios.

Decision-Making:

 Regular Computing: Decisions in traditional computing are deterministic, following
predefined rules without the inherent capacity for nuance or context awareness.

 AI: Decision-making in AI involves probabilistic reasoning. Machine learning models
evaluate probabilities based on patterns in data, providing a nuanced approach to
decision-making that can be more akin to human cognition.
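The deterministic-vs-probabilistic contrast can be sketched in code. The spam-filter setting,
keywords, weights, and threshold below are all invented for illustration.

```python
# Contrast: a fixed, deterministic rule vs. a probabilistic score.
# Keywords and weights are made up for this example.

def rule_based_filter(subject: str) -> bool:
    # Traditional computing: one hard-coded condition, no nuance.
    return "winner" in subject.lower()

def probabilistic_filter(subject: str, threshold: float = 0.4) -> bool:
    # AI-style: combine weighted evidence into a probability-like score.
    weights = {"winner": 0.6, "free": 0.3, "urgent": 0.2}  # assumed weights
    score = sum(w for word, w in weights.items() if word in subject.lower())
    return min(score, 1.0) >= threshold

print(rule_based_filter("Free upgrade, urgent!"))      # False: the rule misses it
print(probabilistic_filter("Free upgrade, urgent!"))   # True: evidence accumulates
```

In a real ML system the weights would be learned from data rather than hand-assigned, but the
shape of the decision (score against threshold) is the same.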

Human-Like Capabilities:

 Regular Computing: Traditional systems lack the capacity for human-like reasoning,
learning, or understanding. They can be powerful tools but don’t attempt to emulate
cognitive functions.

 AI: Artificial intelligence aims to replicate and augment human cognitive abilities.
Natural language processing, image recognition, and even creativity (especially
generative AI applications such as MidJourney, DALL-E, Adobe Firefly) are within the
realm of AI applications.

3. Difference between Explicit knowledge and Tacit Knowledge.


4. Which kinds of knowledge need to be represented in AI systems?
5. Types of knowledge; explain any two in detail.

6. Explain State Space Search in detail and give the steps for state space search.
Steps in State Space Search
The process of state space search involves several key steps that guide the search from the initial state to the goal state.
Here’s a step-by-step explanation:
1. Define the Initial State
This is where the problem begins. For example, in a puzzle, the initial state would be the starting arrangement of the pieces.
2. Define the Goal State
The goal state is the desired arrangement or condition that solves the problem. In our puzzle example, it would be the
completed picture.
3. Generate Successor States
From the current state, create a set of possible 'next moves' or states that are reachable directly from the current state.
4. Apply Search Strategy
Choose a path based on the chosen search strategy, such as depth-first, breadth-first, or best-first search. This decision is
crucial and can affect the efficiency of the search.
5. Check for Goal State
At each new state, check to see if the goal has been reached. If it has, the search is successful.
6. Path Cost
Calculate the cost of the current path. If the cost is too high, or if a dead end is reached, backtrack and try a different path.
7. Loop and Expand
Repeat the process: generate successors, apply the search strategy, and check for the goal state until the goal is reached or
no more states can be generated.

These steps form the core of the state space search process, allowing for systematic exploration of all possible actions until a
solution is found or all options are exhausted.
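The steps above can be sketched as a generic search loop. Breadth-first is used as the search
strategy here, and the toy problem (reach 10 from 1 using *2 or +1 moves) is invented for
illustration.

```python
# The state space search steps, sketched as breadth-first search.
from collections import deque

def bfs(initial, goal_test, successors):
    """Generic state space search with a breadth-first strategy."""
    frontier = deque([(initial, [initial])])   # (state, path so far)
    visited = {initial}                        # avoid revisiting states
    while frontier:                            # loop and expand
        state, path = frontier.popleft()
        if goal_test(state):                   # check for goal state
            return path
        for nxt in successors(state):          # generate successor states
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                                # search space exhausted

# Toy problem: start at 1, reach 10; moves are "double" and "add one".
path = bfs(1, lambda s: s == 10, lambda s: [s * 2, s + 1])
print(path)  # -> [1, 2, 4, 5, 10]
```

Swapping the `deque` for a stack gives depth-first search, and for a priority queue keyed on a
heuristic gives best-first search, without changing the rest of the loop.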

7. Give the structure of agents. Compare Model based agent with Utility based Agent
with the help of suitable block diagrams.
Agents can be grouped into five classes based on their degree of perceived intelligence and capability:

 Simple Reflex Agents
 Model-Based Reflex Agents
 Goal-Based Agents
 Utility-Based Agents
 Learning Agents

Simple Reflex Agents

Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept.
Percept history is the history of all that an agent has perceived to date. The agent function is based on
the condition-action rule. A condition-action rule is a rule that maps a state i.e., a condition to an action. If
the condition is true, then the action is taken, else not. This agent function only succeeds when the
environment is fully observable. For simple reflex agents operating in partially observable environments,
infinite loops are often unavoidable. It may be possible to escape from infinite loops if the agent can randomize
its actions.
Problems with simple reflex agents:

 Very limited intelligence.
 No knowledge of non-perceptual parts of the state.
 The rule table is usually too big to generate and store.
 If any change occurs in the environment, the collection of rules needs to be updated.

[Block diagram: Simple Reflex Agent]

Model-Based Reflex Agents

A model-based reflex agent is one that uses internal memory and a percept history to create a model of the
environment in which it's operating and make decisions based on that model. The term percept means
something that has been observed or detected by the agent. The model-based reflex agent stores the past
percepts in its memory and uses them to create a model of the environment. The agent then uses this model
to determine which action should be taken in any given situation.

It works by finding a rule whose condition matches the current situation. A model-based agent can
handle partially observable environments by the use of a model about the world. The agent has to keep
track of the internal state which is adjusted by each percept and that depends on the percept history. The
current state is stored inside the agent which maintains some kind of structure describing the part of the world
which cannot be seen.
Updating the state requires information about:

 how the world evolves independently of the agent, and
 how the agent’s actions affect the world.
[Block diagram: Model-Based Reflex Agent]

Goal-Based Agents

These kinds of agents take decisions based on how far they are currently from their goal(description of
desirable situations). Their every action is intended to reduce their distance from the goal. This allows the
agent a way to choose among multiple possibilities, selecting the one which reaches a goal state. The
knowledge that supports its decisions is represented explicitly and can be modified, which makes these agents
more flexible. They usually require search and planning. The goal-based agent’s behavior can easily be
changed.

[Block diagram: Goal-Based Agent]

Utility-Based Agents

They choose actions based on a preference (utility) for each state. Sometimes achieving the desired goal is
not enough. We may look for a quicker, safer, cheaper trip to reach a destination. Agent happiness should be
taken into consideration. Utility describes how “happy” the agent is. Because of the uncertainty in the world, a
utility agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real
number which describes the associated degree of happiness.

[Block diagram: Utility-Based Agent]

Learning Agent

A learning agent in AI is the type of agent that can learn from its past experiences or it has learning
capabilities. It starts to act with basic knowledge and then is able to act and adapt automatically through
learning. A learning agent has mainly four conceptual components, which are:

1. Learning element: It is responsible for making improvements by learning from the environment.
2. Critic: The learning element takes feedback from critics which describes how well the agent is doing with
respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external action.
4. Problem Generator: This component is responsible for suggesting actions that will lead to new and
informative experiences.
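The comparison the question asks for (model-based vs. utility-based) can be sketched in code. The
two-room vacuum world, the utility numbers, and the function names below are invented for
illustration; this is a minimal sketch, not a full agent architecture.

```python
# Minimal sketch: model-based vs. utility-based agent in a two-room
# vacuum world. Rooms are "A" and "B"; a percept is (location, is_dirty).

def model_based_agent(percept, state):
    """Keeps an internal model of which rooms are dirty."""
    location, is_dirty = percept
    state[location] = is_dirty               # update the model from the percept
    if is_dirty:
        return "suck", state
    # Use the model for the part of the world it cannot currently see.
    other = "B" if location == "A" else "A"
    action = "move_to_" + other if state.get(other, True) else "idle"
    return action, state

def utility_based_agent(percept, state):
    """Also keeps a model, but picks the action with the highest utility."""
    location, is_dirty = percept
    state[location] = is_dirty
    other = "B" if location == "A" else "A"
    utilities = {                            # invented utility values
        "suck": 10 if is_dirty else 0,       # cleaning dirt is most valuable
        "move_to_" + other: 5 if state.get(other, True) else 1,
        "idle": 2,
    }
    return max(utilities, key=utilities.get), state

action, _ = model_based_agent(("A", True), {})
print(action)   # -> suck
action, _ = utility_based_agent(("A", False), {"B": True})
print(action)   # -> move_to_B
```

The structural difference is visible in the last line of each function: the model-based agent
follows a fixed condition-action rule over its model, while the utility-based agent scores every
candidate action and maximizes.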
8. What is a Production System, and what are the rules of a Production System?
Inference Rules
There are many kinds of production rules in Artificial Intelligence. One of them is the inference
rule: a rule with a logical form that is used for transformation. Let us look at the types of
inference rules in AI:

Deductive Inference Rule

It uses logic to reason from multiple statements to a conclusion.

Let us understand with the help of an example:


Example:

Statement 1: All mammals are animals.

Statement 2: Dogs are mammals.

Conclusion: Therefore, dogs are animals.

In this example, we have two statements: “All mammals are animals” and “Dogs are mammals.” We can use deductive
inference to draw a logical conclusion based on these statements.

Using the deductive inference rule of categorical syllogism, which states that if the major premise (“All mammals are animals”)
and the minor premise (“Dogs are mammals”) are true, then the conclusion (“Therefore, dogs are animals”) is also true.

By applying deductive inference to the given example, we can conclude that dogs are indeed animals based on the statements
provided.

Abductive Inference Rule

This rule infers the simplest and most likely explanation for a set of given observations.

Let’s explore an example to understand the abductive inference rule:

Example:

Observation 1: The ground is wet.

Observation 2: There are dark clouds in the sky.

Conclusion: It might have rained.

In this example, we have two observations: “The ground is wet” and “There are dark clouds in the sky.” We can use abductive
inference to generate a plausible explanation or hypothesis that best explains these observations.

The abductive inference rule suggests that the simplest and most likely explanation that can account for the given observations
should be considered. In this case, the most straightforward explanation is that it might have rained. The wet ground and the
presence of dark clouds in the sky are consistent with the hypothesis that rain occurred.

A production system is classified into four main classes:

 Monotonic Production System: Applying one rule never prevents the later application of another
rule that could also have been applied when the first rule was selected. Hence, rules can be
applied independently of ordering.
 Partially Commutative Production System: If a set of rules transforms state A into state B, then
any allowable ordering of those rules also transforms state A into state B.
 Non-Monotonic Production System: This production system increases problem-solving efficiency by
not keeping a record of changes made during the search. Such systems are convenient from an
implementation point of view because they do not backtrack to a previous state when an incorrect
path is discovered.
 Commutative Production System: These production systems are used when the order of operations is
not important and the changes are reversible.
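A production system's recognize-act cycle can be sketched as simple forward chaining over
condition-action rules. The facts and rules below reuse the mammal/dog syllogism from the
deductive-inference example; the extra "pet" rule is invented for illustration.

```python
# Minimal production system: rules fire repeatedly until no rule applies.
# A rule is (set_of_conditions, conclusion).
rules = [
    ({"mammal"}, "animal"),            # All mammals are animals
    ({"dog"}, "mammal"),               # Dogs are mammals
    ({"animal", "pet"}, "needs_care"), # invented extra rule
]

def forward_chain(facts):
    """Apply rules until the fact set stops growing (recognize-act cycle)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire the rule if its conditions hold and it adds a new fact.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"dog", "pet"})))
# -> ['animal', 'dog', 'mammal', 'needs_care', 'pet']
```

Note this loop is monotonic: firing a rule only ever adds facts, so it never prevents another
rule from firing later, matching the monotonic class described above.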

9. What are PEAS descriptors? For the following activity, give a PEAS description of
the task environment and characterize it in terms of the properties
i)Robot meant for cleaning the house.
The PEAS system is used to categorize similar agents together. It specifies an agent's performance
measure together with its environment, actuators, and sensors. Most of the highest-performing
agents are rational agents.
PEAS stands for Performance measure, Environment, Actuators, Sensors.

1. Performance Measure: The unit that defines the success of an agent. Performance varies between
agents based on their different percepts.
2. Environment: Environment is the surrounding of an agent at every instant. It keeps changing with time if
the agent is set in motion. There are 5 major types of environments:
 Fully Observable & Partially Observable
 Episodic & Sequential
 Static & Dynamic
 Discrete & Continuous
 Deterministic & Stochastic
3. Actuator: An actuator is the part of the agent that delivers the output of an action to the
environment.
4. Sensor: Sensors are the receptive parts of an agent that take in input for the agent.

PEAS Description of a House Cleaning Robot


Performance Measure:

 The primary measure is the level of cleanliness achieved in the house. This can be measured
by factors like dust and debris removal, floor sanitation, and surface cleanliness.
 Additionally, factors like efficiency (cleaning time), coverage (percentage of area cleaned),
and user satisfaction might be considered.

Environment:

 Partially Observable: The robot can only sense its immediate surroundings through its
sensors. It might not have complete information about dirt or obstacles beyond its sensor
range.
 Sequential: Cleaning tasks are typically done in a sequence, like vacuuming before mopping.
 Dynamic: The environment can change over time with new dirt appearing, furniture being
moved, or people walking through.
 Continuous: The environment has continuous aspects like dust level or floor space, requiring
sensors with fine resolution.
 Stochastic: Events like dropped objects or spills can occur randomly, affecting the cleaning
needs.

Actuators:

 Wheels or treads for movement.
 Vacuum cleaner or sweeper for dust and debris collection.
 Mopping or wiping mechanisms for cleaning surfaces.
 Grippers or arms for manipulating objects and furniture (optional).

Sensors:

 LiDAR or cameras for obstacle detection and navigation.
 Dirt sensors to identify areas needing cleaning.
 Cliff sensors to avoid falls from stairs.
 Bumpers for soft collisions and course correction.
 Battery level sensors for self-docking and recharge.

Characterization:

 This robotic cleaning system operates in a partially observable, sequential, dynamic,
continuous, and stochastic environment.
 The robot uses a combination of actuators and sensors to achieve the performance measure
of a clean house.
 Since it can adapt its cleaning strategy based on sensor data, this cleaning robot can
be considered a rational agent within its PEAS definition.
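The PEAS description above can be captured in a small data structure. The class and field names
below are our own sketch; the field contents are taken from the answer.

```python
# A PEAS description as a simple data structure (sketch, not a standard API).
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list  # how success is measured
    environment: list  # properties of the task environment
    actuators: list    # means of acting on the environment
    sensors: list      # means of perceiving the environment

cleaning_robot = PEAS(
    performance=["cleanliness level", "coverage", "cleaning time",
                 "user satisfaction"],
    environment=["partially observable", "sequential", "dynamic",
                 "continuous", "stochastic"],
    actuators=["wheels/treads", "vacuum/sweeper", "mop", "gripper (optional)"],
    sensors=["LiDAR/camera", "dirt sensor", "cliff sensor",
             "bumper", "battery level"],
)
print(cleaning_robot.environment[0])  # -> partially observable
```

Writing PEAS entries this way makes the descriptions in question 11 easy to tabulate and compare
programmatically.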

10. Define in your own words: (a) intelligence, (b) artificial intelligence, (c) agent, (d)
logical reasoning.

Intelligence has been defined in many ways: the capacity for abstraction, logic,
understanding, self-awareness, learning, emotional knowledge, reasoning, planning,
creativity, critical thinking, and problem-solving.

Artificial intelligence (AI) is a scientific field that involves creating machines and computers that can
learn, reason, and act in ways that normally require human intelligence. AI can perform a variety of
advanced functions, including:

1. Understanding and translating spoken and written language

2. Analyzing data

3. Making recommendations

In artificial intelligence, an agent is a computer program or system that is designed to perceive its
environment, make decisions and take actions to achieve a specific goal or set of goals. The agent operates
autonomously, meaning it is not directly controlled by a human operator. Agents can be classified into different
types based on their characteristics, such as whether they are reactive or proactive, whether they have a fixed
or dynamic environment, and whether they are single or multi-agent systems.

Reasoning plays a large role in artificial intelligence. Reasoning can be defined as the logical
process of drawing conclusions, making predictions, or constructing approaches to a particular
thought with the help of existing knowledge. Reasoning is essential in AI because understanding
the human brain, how it thinks and how it draws conclusions about particular things, requires the
help of reasoning.

11. For each of the following activities, give a PEAS description of the task environment.
i. Playing soccer.
ii. Exploring the subsurface oceans of Titan.
iii. Shopping for used AI books on the Internet.
iv. Playing a tennis match.
v. Self-driving car
vi. Picking Robot.

i. Playing Soccer

 Performance Measure: Scoring goals while adhering to the rules of soccer and working
collaboratively with teammates to win the game.
 Environment:
o Partially Observable: Players can only see a portion of the field and rely on
teammates and communication for complete awareness.
o Sequential: Each action (pass, tackle, shot) affects future play; the game is
not a series of independent episodes.
o Dynamic: The ball and players constantly move, and the environment changes with
each action.
o Continuous: Player and ball positions, speeds, and directions vary
continuously.
o Stochastic: Unpredictable events like bounces and opponent actions can occur.
 Actuators: Legs for running, kicking, and jumping.
 Sensors: Vision for ball and player position, balance sensors for movement control.

ii. Exploring the Subsurface Oceans of Titan

 Performance Measure: Gathering scientific data about the composition, temperature, and
potential life forms in the ocean.
 Environment:
o Partially Observable: Sensors provide data on the immediate surroundings,
o Episodic (potentially): Missions might consist of distinct exploration phases with
specific goals.
o Static (potentially): The ocean itself might be relatively stable, but external factors
could change.
o Continuous: Ocean properties like pressure and temperature likely vary
continuously.
o Stochastic: Unexpected events like equipment malfunctions or environmental
hazards could occur.
 Actuators: Propulsion systems for movement, manipulators for sample collection.
 Sensors: Cameras, sonar, chemical sensors to analyze the environment and collect data.
iii. Shopping for Used AI Books on the Internet

 Performance Measure: Finding and purchasing used AI books at a reasonable price and with
good condition.
 Environment:
o Partially Observable: Search results and product information provide limited details
about book quality and condition.
o Sequential: The shopping process involves browsing, selecting, and purchasing,
often in a specific order.
o Dynamic: Product availability, prices, and seller information can change rapidly.
o Discrete: Search options and purchase actions are typically well-defined.
o Deterministic (mostly): Website behavior is generally predictable, except for
potential system errors.
 Actuators: User interface controls for searching, selecting, and purchasing books.
 Sensors: Web scraping or API access to gather information from online marketplaces.

iv. Playing a Tennis Match

 Performance Measure: Winning the match by scoring more points than the opponent while
adhering to the rules of tennis.
 Environment:
o Partially Observable: Players cannot see the entire court at once and rely on
anticipation of opponent actions.
o Sequential: Points build on each other within games and sets; earlier shots
and scores affect later play.
o Dynamic: The ball and players constantly move, changing the environment with
every shot.
o Continuous: Ball and player positions, speeds, and shot angles vary
continuously.
o Stochastic: Unpredictable factors like wind or ball bounces can influence the game.
 Actuators: Arms and racket for swinging and hitting the ball.
 Sensors: Vision to track the ball and opponent, balance sensors for movement control.

v. Self-Driving Car

 Performance Measure: Safely navigating to a destination while following traffic rules and
avoiding obstacles.
 Environment:
o Partially Observable: Sensors provide information on the immediate surroundings,
but visibility can be limited by weather or other vehicles.
o Sequential (potentially): The driving task can be broken down into sequential
actions like lane changes and intersections.
o Dynamic: Other vehicles, pedestrians, and weather conditions constantly change the
environment.
o Continuous: Traffic flow, speed, and distance require continuous monitoring and
adjustment.
o Stochastic: Unexpected events like accidents or sudden maneuvers by other drivers
can occur.
 Actuators: Steering, acceleration, and braking mechanisms to control the car's movement.
 Sensors: Cameras, LiDAR, radar, and GPS for obstacle detection, traffic monitoring, and self-
localization.

vi. Picking Robot

 Performance Measure: Accurately selecting and picking desired objects from a cluttered
environment, placing them in designated locations with minimal damage.
 Environment:
o Partially Observable: Sensors provide information on the immediate area but might
not capture all object details or occlusions by other objects.
o Episodic (potentially): Picking tasks might involve distinct pick-and-place cycles with
specific targets.
o Dynamic: The environment can change with object removal or addition by other
robots or human workers.
o Discrete (mostly): Picking actions (grasp, lift, place) are well-defined, but object
shapes and sizes can vary.
o Deterministic (mostly): Robot movements are generally predictable, except for
potential sensor errors or object fragility.
 Actuators: Robotic arm for manipulation and gripping.
 Sensors: Cameras or depth sensors for object recognition and location. Force sensors for
grip control and object fragility detection.

12. Give details of any two pairs of environment features.

Observability and Dynamicity:


This pair deals with how much information an agent can perceive about its surroundings and
how those surroundings change over time.

Fully Observable vs. Partially Observable: A fully observable environment offers complete
information to the agent through its sensors. Imagine a chessboard, where the agent (a chess
AI) has perfect knowledge of all the pieces and their positions. In contrast, a partially
observable environment limits the agent's awareness. A self-driving car, for instance, can
only sense its immediate surroundings through cameras and LiDAR, leaving blind spots and
relying on predictions for what's beyond sensor range.

Static vs. Dynamic: A static environment remains constant throughout the agent's interaction.
A robot playing chess again has a static environment; the board layout doesn't change on its
own. On the other hand, a dynamic environment constantly evolves. The house cleaning
robot encounters a dynamic environment as dirt appears, furniture moves, and people walk
around. The robot must adapt its cleaning strategy based on these changes.
14. Assume that now there are 3 rooms and 2 Roombas (autonomous robotic vacuum
cleaners). Each room can be either dirty/clean and each Roomba is present in one of
the 3 rooms. What are the number of states in propositional/factored knowledge
representation?

To determine the number of states in a propositional/factored knowledge representation for the
given scenario, we need to account for all possible configurations of the rooms' cleanliness and
the positions of the Roombas.

Here are the steps to calculate the total number of states:

Cleanliness of the Rooms:

Each of the 3 rooms can be either dirty or clean, so each room has 2 possible states.
Since the states of the rooms are independent of each other, the total number of configurations
for the rooms' cleanliness is:
2 × 2 × 2 = 2³ = 8

Positions of the Roombas:

There are 3 rooms and 2 Roombas. Each Roomba can be in any one of the 3 rooms.
The position of Roomba 1 can be in any of the 3 rooms, and similarly, the position of
Roomba 2 can also be in any of the 3 rooms.
The total number of configurations for the positions of the Roombas is:
3 × 3 = 3² = 9

Combined State:

To find the total number of states in the system, we multiply the number of configurations for
the rooms' cleanliness by the number of configurations for the Roombas' positions.
Therefore, the total number of states is:
8 × 9 = 72

Thus, the number of states in the propositional/factored knowledge representation for this
scenario is 72.
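The 8 × 9 = 72 count can be checked by brute-force enumeration; a minimal sketch:

```python
# Enumerate every (cleanliness, Roomba 1 room, Roomba 2 room) combination.
from itertools import product

rooms = [0, 1, 2]
states = [(clean, r1, r2)
          for clean in product(["dirty", "clean"], repeat=3)  # 2^3 = 8 combos
          for r1 in rooms                                     # Roomba 1: 3 rooms
          for r2 in rooms]                                    # Roomba 2: 3 rooms
print(len(states))  # -> 72
```

If the two Roombas were indistinguishable, unordered placements would give 6 instead of 9
position configurations; the question treats them as distinct, hence 3 × 3.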

15. Explain the different types of task environment. Consider the example of an automated
taxi: list the environment properties the taxi has to operate under and justify them.

An environment in artificial intelligence is the surrounding of the agent. The agent takes
input from the environment through sensors and delivers the output to the environment
through actuators. There are several types of environments:
 Fully Observable vs Partially Observable
 Single-agent vs Multi-agent
 Static vs Dynamic
 Discrete vs Continuous
 Episodic vs Sequential

Environment types

1. Fully Observable vs Partially Observable

 When an agent's sensors can access the complete state of the environment at each point in time, it
is said to be a fully observable environment; otherwise it is partially observable.
 Maintaining a fully observable environment is easy as there is no need to keep track of the history of the
surrounding.
 An environment is called unobservable when the agent has no sensors at all.
 Examples:
 Chess – the board is fully observable, and so are the opponent’s moves.
 Driving – the environment is partially observable because what’s around the corner is not known.

2. Single-agent vs Multi-agent

 An environment consisting of only one agent is said to be a single-agent environment.


 A person left alone in a maze is an example of the single-agent system.
 An environment involving more than one agent is a multi-agent environment.
 The game of football is multi-agent as it involves 11 players in each team.

3. Dynamic vs Static

 An environment that keeps constantly changing itself when the agent is up with some action is said to be
dynamic.
 A roller coaster ride is dynamic as it is set in motion and the environment keeps changing every instant.
 An idle environment with no change in its state is called a static environment.
 An empty house is static as there’s no change in the surroundings when an agent enters.

4. Discrete vs Continuous

 If an environment consists of a finite number of actions that can be deliberated in the environment to obtain
the output, it is said to be a discrete environment.
 The game of chess is discrete as it has only a finite number of moves. The number of moves might vary
with every game, but still, it’s finite.
 The environment in which the actions are performed cannot be numbered i.e. is not discrete, is said to be
continuous.
 Self-driving cars are an example of continuous environments as their actions are driving, parking, etc.
which cannot be numbered.
5.Episodic vs Sequential

 In an Episodic task environment, each of the agent’s actions is divided into atomic incidents or episodes.
There is no dependency between current and previous incidents. In each incident, an agent receives input
from the environment and then performs the corresponding action.
 Example: Consider an example of Pick and Place robot, which is used to detect defective parts from the
conveyor belts. Here, every time robot(agent) will make the decision on the current part i.e. there is no
dependency between current and previous decisions.
 In a Sequential environment, the previous decisions can affect all future decisions. The next action of the
agent depends on what action he has taken previously and what action he is supposed to take in the future.
 Example:
 Checkers- Where the previous move can affect all the following moves.

Automated Taxi Environment Breakdown


An automated taxi operates in a complex environment that can be categorized based on
several factors:

1. Partially Observable: The environment is partially observable. While the taxi has sensors
to detect its immediate surroundings (traffic lights, pedestrians, other vehicles), it cannot
see beyond obstacles or predict future events (sudden stops, accidents).

2. Stochastic (Probabilistic): The environment is stochastic. Even with perfect perception of


its surroundings, the taxi cannot be certain about future actions of other drivers,
pedestrians, or sudden environmental changes (weather, road closures).

3. Competitive (to an extent): The environment has some competitive aspects. While some
vehicles might cooperate by following traffic rules, others might behave aggressively,
creating a need for the taxi to optimize its path and speed for efficiency.

4. Multi-Agent: The environment is multi-agent. The taxi shares the road with numerous
other vehicles, pedestrians, and cyclists, all making independent decisions that can affect
the taxi's performance.

5. Dynamic: The environment is highly dynamic. Traffic conditions, weather, and pedestrian
activity are constantly changing, requiring the taxi to adapt its behavior in real-time.

6. Continuous: The environment is continuous. The taxi's actions (steering, acceleration)


and the state of the environment (vehicle positions, speeds) have an infinite range of
possibilities.

7. Sequential: The environment is sequential. The taxi's decisions (route, speed) at one
point depend on the current state and influence its future actions and the overall trip
efficiency.
8. Partially Known: The environment is partially known. While the taxi has a map and traffic
data, unexpected events (accidents, road closures) or changes in traffic patterns can
introduce unknown elements.

16. Formulate vacuum cleaner problem, states can be represented by [<block>, clean]
or [<block>, dirty]. Assume suitable initial state.

Vacuum Cleaner Problem Formulation with Block


States
World:

 There are two blocks (Block A and Block B).


 Each block can be either clean or dirty.

Agent:

 A vacuum cleaner (agent) can be in one of the two blocks (Block A or Block B).
 The agent has two actions:

o Suck: Attempts to clean the block it's currently in (if dirty).


o Move: Moves to the other block.

States:

The state of the environment is represented by a tuple (<block>, <clean>), where:

 <block> is either "A" or "B", indicating the current location of the vacuum cleaner.
 <clean> is a boolean value, True if the block the agent is currently in is clean, False
otherwise.

Examples:

 (A, True): The vacuum cleaner is in Block A, and Block A is clean.


 (B, False): The vacuum cleaner is in Block B, and Block B is dirty.

Goal:

The goal is to achieve a state where both blocks are clean.

Successor Function:
The successor function S(state, action) takes a state and an action and returns the
resulting state after the action is performed.

 S((block, clean), Suck):

o If clean is True, the state remains the same.


o If clean is False (block is dirty), the new state becomes (block, True) (block is
now clean).

 S((block, clean), Move):

o If block is "A", the new state becomes (B, clean).


o If block is "B", the new state becomes (A, clean).
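The successor function above can be sketched directly in code (a minimal illustration; as in the formulation above, the simplified (block, clean) state carries the clean flag across a Move, since the destination block's cleanliness is not tracked separately):

```python
def successor(state, action):
    """Successor function for the two-block vacuum world.

    `state` is a tuple (block, clean) where block is "A" or "B" and
    clean is True iff the block the agent occupies is clean.
    """
    block, clean = state
    if action == "Suck":
        return (block, True)          # sucking in a clean block changes nothing
    if action == "Move":
        return ("B" if block == "A" else "A", clean)
    raise ValueError(f"unknown action: {action}")

print(successor(("A", False), "Suck"))   # ('A', True)
print(successor(("A", True), "Move"))    # ('B', True)
```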

CO2: Uninformed Search

17. What are main components of problem formulation?

Components to formulate the associated problem:

 Initial State: The state from which the agent starts; it sets the agent on its way toward the specified goal.
 Actions: The set of actions available to the agent in a given state.
 Transition Model: A description of what each action does, i.e., the state that results from performing an action in a given state.
 Goal Test: Determines whether a given state is a goal state; once the goal is reached, the search stops and the solution can be returned.
 Path Cost: Assigns a numerical cost to each path (the sum of the step costs); it accounts for all hardware, software and human effort involved in reaching the goal.

a. Give a complete problem formulation for 8-queens problem: To place eight


queens on a chessboard such that no queen attacks any other. Formulate with
the help of diagrams.
N-Queens Problem
N - Queens problem is to place n - queens in such a manner on an n x n chessboard that no queens attack each other by being
in the same row, column or diagonal.

It can be seen that for n =1, the problem has a trivial solution, and no solution exists for n =2 and n =3. So first we will consider
the 4 queens problem and then generate it to n - queens problem.

Given a 4 x 4 chessboard and number the rows and column of the chessboard 1 through 4.
Since we have to place 4 queens q1, q2, q3 and q4 on the chessboard such that no two queens attack each other, each
queen must be placed on a different row, i.e., we put queen "i" on row "i".


Now, we place queen q1 in the very first acceptable position (1, 1). Next, we put queen q2 so that both these queens do not
attack each other. We find that if we place q2 in columns 1 or 2, a dead end is encountered. Thus the first acceptable
position for q2 is column 3, i.e. (2, 3), but then no position is left for placing queen q3 safely. So we backtrack one step and
place queen q2 in (2, 4), the next best possible position. Then we obtain the position for placing q3, which is (3, 2). But later
this position also leads to a dead end, and no place is found where q4 can be placed safely. Then we have to backtrack till q1
and place it at (1, 2), and then all other queens are placed safely by moving q2 to (2, 4), q3 to (3, 1) and q4 to (4, 3). That is, we get
the solution (2, 4, 1, 3). This is one possible solution for the 4-queens problem. For another possible solution, the whole method
is repeated for all partial solutions. Another solution for the 4-queens problem is (3, 1, 4, 2).
The implicit tree for 4 - queen problem for a solution (2, 4, 1, 3) is as follows:
Fig shows the complete state space for 4 - queens problem. But we can use backtracking method to generate the necessary
node and stop if the next node violates the rule, i.e., if two queens are attacking.
4 - Queens solution space with nodes numbered in DFS

It can be seen that all the solutions to the 4-queens problem can be represented as 4-tuples (x1, x2, x3, x4), where xi represents
the column on which queen "qi" is placed.

One possible solution for 8 queens problem is shown in fig:


b. Vacuum cleaner problem

Vacuum cleaner problem is a well-known search problem for an agent which works on Artificial Intelligence. In this
problem, our vacuum cleaner is our agent. It is a goal based agent, and the goal of this agent, which is the vacuum
cleaner, is to clean up the whole area. So, in the classical vacuum cleaner problem, we have two rooms and one vacuum
cleaner. There is dirt in both the rooms and it is to be cleaned. The vacuum cleaner is present in any one of these rooms.
So, we have to reach a state in which both the rooms are clean and are dust free.

So, there are eight possible states in our vacuum cleaner problem. These can be well illustrated with the help
of the following diagrams:

Here, states 1 and 2 are our initial states and state 7 and state 8 are our final states (goal states). This means that,
initially, both the rooms are full of dirt and the vacuum cleaner can reside in any room. And to reach the final goal
state, both the rooms should be clean and the vacuum cleaner again can reside in any of the two rooms.

The vacuum cleaner can perform the following functions: move left, move right, move forward, move backward and to
suck dust. But as there are only two rooms in our problem, the vacuum cleaner performs only the following functions
here: move left, move right and suck.
Here the performance of our agent (vacuum cleaner) depends upon many factors such as time taken in cleaning, the
path followed in cleaning, the number of moves the agent takes in total, etc. But we consider two main factors for
estimating the performance of the agent. They are:

1. Search Cost: How long the agent takes to come up with the solution.
2. Path cost: How expensive each action in the solution are.

By considering the above factors, the agent can also be classified as a utility-based agent.

18. Explain any two uninformed search strategies with example. What are the
advantages of Iterative Deepening Depth First Search over other uninformed search
strategies? Explain in detail.

1. Breadth-first Search:

o Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches
breadthwise in a tree or graph, so it is called breadth-first search.
o BFS algorithm starts searching from the root node of the tree and expands all successor node at the current level
before moving to nodes of next level.
o The breadth-first search algorithm is an example of a general-graph search algorithm.
o Breadth-first search implemented using FIFO queue data structure.

Advantages:

o BFS will provide a solution if any solution exists.


o If there are more than one solutions for a given problem, then BFS will provide the minimal solution which requires
the least number of steps.

Disadvantages:


o It requires lots of memory since each level of the tree must be saved into memory to expand the next level.
o BFS needs lots of time if the solution is far away from the root node.

Example:

In the below tree structure, we have shown the traversing of the tree using BFS algorithm from the root node S to goal node K.
BFS search algorithm traverse in layers, so it will follow the path which is shown by the dotted arrow, and the traversed path will
be:

1. S → A → B → C → D → G → H → E → F → I → K
Time Complexity: The time complexity of BFS is given by the number of nodes traversed until the shallowest goal node, where d = depth of the shallowest solution and b = branching factor (number of successors per node):

T(b) = 1 + b + b² + b³ + … + b^d = O(b^d)

Space Complexity: The space complexity of BFS is given by the memory size of the frontier, which is O(b^d).

Completeness: BFS is complete, which means if the shallowest goal node is at some finite depth, then BFS will find a solution.

Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the node.
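As a rough sketch of how BFS with a FIFO frontier can be implemented (the graph below is hypothetical, loosely modeled on the S-to-K example above):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: FIFO frontier, returns the shallowest path found."""
    frontier = deque([[start]])          # queue of paths, not just nodes
    explored = set()
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in graph.get(node, []):
            frontier.append(path + [child])
    return None

# Hypothetical tree roughly following the S ... K example.
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["G", "H"],
         "D": ["I"], "G": ["K"]}
print(bfs(graph, "S", "K"))   # ['S', 'B', 'G', 'K']
```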


2. Depth-first Search

o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
o It is called the depth-first search because it starts from the root node and follows each path to its greatest depth
node before moving to the next path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is similar to the BFS algorithm.

Note: Backtracking is an algorithm technique for finding all possible solutions using recursion.

Advantage:

o DFS requires very less memory as it only needs to store a stack of the nodes on the path from root node to the
current node.
o It takes less time to reach to the goal node than BFS algorithm (if it traverses in the right path).
Disadvantage:

o There is the possibility that many states keep re-occurring, and there is no guarantee of finding the solution.
o DFS algorithm goes for deep down searching and sometime it may go to the infinite loop.

Example:

In the below search tree, we have shown the flow of depth-first search, and it will follow the order as:

Root node--->Left node ----> right node.

It will start searching from root node S, and traverse A, then B, then D and E, after traversing E, it will backtrack the tree as E has
no other successor and still goal node is not found. After backtracking it will traverse node C and then G, and here it will
terminate as it found goal node.

Completeness: DFS search algorithm is complete within finite state space as it will expand every node within a limited search
tree.

Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm:

T(n) = 1 + n + n² + … + n^m = O(n^m)

where m = maximum depth of any node, which can be much larger than d (the shallowest solution depth).

Space Complexity: The DFS algorithm needs to store only the single path from the root node, hence the space complexity of DFS is
equivalent to the size of the fringe set, which is O(b·m).

Optimal: DFS search algorithm is non-optimal, as it may generate a large number of steps or high cost to reach to the goal
node.
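A corresponding DFS sketch with an explicit stack (again on a small hypothetical graph; the visit order S, A, B, D, E, C, G mirrors the example above):

```python
def dfs(graph, start, goal):
    """Depth-first search: LIFO stack of paths, explores leftmost branch first."""
    stack = [[start]]
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        # Push children in reverse so the leftmost child is expanded first.
        for child in reversed(graph.get(node, [])):
            if child not in path:        # avoid cycles along the current path
                stack.append(path + [child])
    return None

graph = {"S": ["A", "C"], "A": ["B"], "B": ["D", "E"], "C": ["G"]}
print(dfs(graph, "S", "G"))   # ['S', 'C', 'G']
```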
Iterative Deepening Depth-First Search (IDDFS) offers several advantages over other
uninformed search strategies, particularly Depth-First Search (DFS) and Breadth-First Search
(BFS). Here's a breakdown of its key benefits:

1. Completeness with Space Efficiency (combines benefits of DFS and BFS):

 Completeness (like BFS): Unlike plain DFS, IDDFS is complete. It guarantees finding a solution if
one exists at any finite depth. This is crucial when the depth of the goal node
is unknown.
 Space efficiency (like DFS): IDDFS is more space-efficient than BFS. BFS explores all nodes at a given
depth before moving to the next depth, which can lead to a large memory footprint, especially
for problems with large branching factors (number of children per node) and large depths. IDDFS,
like DFS, only stores the path down to the current depth limit, requiring far less memory.

2. Fast Finding of Shallow Solutions:

 IDDFS inherits the advantage of DFS in potentially finding shallow solutions (solutions closer
to the root node) faster. Since it iteratively deepens the search, it explores shallower depths
first, potentially finding the goal earlier than BFS, which explores all levels evenly.

3. Suitable for Unknown Depths:

 IDDFS is particularly well-suited for scenarios where the depth of the goal node is unknown.
By iteratively increasing the depth limit, it avoids the potentially excessive exploration of
irrelevant parts of the search space that can occur with BFS.

In summary, IDDFS offers a good balance between completeness, space efficiency, and the
ability to find shallow solutions quickly. It's a strong choice for uninformed search
problems with unknown depths and potentially large branching factors where memory
limitations might exist.
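The iterative-deepening idea can be sketched as a depth-limited DFS wrapped in a loop over increasing limits (a minimal illustration on a hypothetical graph):

```python
def depth_limited(graph, node, goal, limit, path=None):
    """DFS that stops expanding below `limit`; returns a path or None."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        result = depth_limited(graph, child, goal, limit - 1, path)
        if result:
            return result
    return None

def iddfs(graph, start, goal, max_depth=20):
    """Iterative deepening: run depth-limited search with limits 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit)
        if result:
            return result
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"]}
print(iddfs(graph, "S", "G"))   # ['S', 'B', 'G']
```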

19. Compare DFS and BFS?


20. Given branching factor b=9, d=6, find out the number of nodes generated in BFS
and IDS.
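One way to answer this (a sketch using the standard formulas, counting the root node): BFS generates every node up to depth d once, while IDS re-generates the shallow levels on every iteration, so nodes at depth i are generated (d + 1 − i) times:

```python
b, d = 9, 6

# BFS: 1 + b + b^2 + ... + b^d
n_bfs = sum(b**i for i in range(d + 1))

# IDS: (d+1)*b^0 + d*b^1 + ... + 1*b^d
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))

print(n_bfs)   # 597871
print(n_ids)   # 672604
```

So BFS generates 597,871 nodes and IDS 672,604 — only about 12% more, which is why the repeated work of iterative deepening is considered cheap for large branching factors.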
24. You are given two jugs with measuring marks, a 4-gallon one and a 3-gallon one.
There is a pump to fill the jugs with water. How can you get exactly 2 gallons of
water into the 4- gallon jug? For the agent of water jug, develop a state space
description.
https://youtu.be/WtRWfbUZzZw?si=_vV5m7tu34TWF2ob

25. Analyze depth Limited search algorithm and give time and space complexity.
Give the initial state , goal test, successor function, and cost function for each of the
following
a. A 3-foot-tall monkey is in a room where some bananas are suspended from the 8-foot
ceiling. He would like to get the bananas. The room contains 2 stackable, moveable,
climbable 3 foot high crates.
b.You have three jugs, measuring 12 gallons , 8 gallons and 3 gallons and a water
faucet. You can fill the jugs up or empty them out from one to another or onto the
ground. You need to measure out exactly one gallon.

Depth-Limited Search Algorithm:

A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit. Depth-limited search overcomes the
drawback of infinite paths in depth-first search. In this algorithm, a node at the depth limit is treated as if it has no
further successors.


Depth-limited search can be terminated with two Conditions of failure:

o Standard failure value: indicates that the problem has no solution.
o Cutoff failure value: indicates that there is no solution within the given depth limit.

Advantages:

Depth-limited search is Memory efficient.

Disadvantages:

o Depth-limited search also has a disadvantage of incompleteness.


o It may not be optimal if the problem has more than one solution.

Example:

Completeness: The DLS algorithm is complete if a solution exists within the depth limit ℓ.

Time Complexity: Time complexity of the DLS algorithm is O(b^ℓ).

Space Complexity: Space complexity of DLS algorithm is O(b×ℓ).

Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal even if ℓ>d.

Problem A: The Monkey and the Bananas

Initial State

 The monkey is on the floor.


 The crates are positioned in the room (initial positions can be arbitrary, e.g., both on the floor at specific coordinates).
 The monkey is not holding or stacking any crates.

State representation can be a tuple:

(Monkey Position, Crate1 Position, Crate2 Position, Crate1 Stacked on Crate2?)

Example initial state:


((0, 0), (1, 1), (2, 2), False)

Goal Test

 The monkey has the bananas.

Goal state representation:


Monkey has reached the bananas.

Since the bananas are at 8 feet and each crate is 3 feet high, the monkey can reach the bananas if he stands on a stack of two
crates (3 feet + 3 feet, plus the monkey's own 3-foot height, gives roughly 9 feet of reach).

Successor Function

The possible actions include:

1. Move monkey (left, right, forward, backward).

2. Move crate (left, right, forward, backward).

3. Stack crate1 on crate2.

4. Stack crate2 on crate1.

5. Climb up or down a stack of crates.

Each action results in a new state. For example:

 Moving the monkey changes the monkey's position.


 Moving a crate changes the crate's position.
 Stacking a crate updates the boolean indicating whether crates are stacked.
 Climbing changes the monkey's position to the top of the crates.

Cost Function

Assume a uniform cost for simplicity:

 Each action (move, stack, climb) has a cost of 1.

Problem B: Measuring One Gallon with Jugs

Initial State

 All jugs are empty.

State representation:


(Jug12, Jug8, Jug3)

Example initial state:


(0, 0, 0)

Goal Test

 One of the jugs contains exactly one gallon.

Goal state representation:


Jug12 == 1 or Jug8 == 1 or Jug3 == 1

Successor Function

The possible actions include:

1. Fill any jug from the faucet.

2. Empty any jug onto the ground.

3. Pour water from one jug into another until either the first jug is empty or the second jug is full.

Each action results in a new state. For example:

 Filling a jug changes its amount to its capacity.


 Emptying a jug changes its amount to 0.
 Pouring water updates the amounts in both jugs involved.

Cost Function

Assume a uniform cost for simplicity:

 Each action (fill, empty, pour) has a cost of 1.
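Under this formulation the problem can be solved mechanically by BFS over the (Jug12, Jug8, Jug3) state space (a sketch; the jug ordering and goal test follow the formulation above):

```python
from collections import deque

CAPS = (12, 8, 3)   # jug capacities in gallons

def successors(state):
    """All states reachable by one fill, empty, or pour action."""
    results = []
    for i in range(3):
        filled = list(state); filled[i] = CAPS[i]        # fill jug i from the faucet
        results.append(tuple(filled))
        emptied = list(state); emptied[i] = 0            # empty jug i onto the ground
        results.append(tuple(emptied))
        for j in range(3):                               # pour jug i into jug j
            if i != j:
                amount = min(state[i], CAPS[j] - state[j])
                poured = list(state)
                poured[i] -= amount
                poured[j] += amount
                results.append(tuple(poured))
    return results

def solve(start=(0, 0, 0)):
    """BFS to the first state in which some jug holds exactly 1 gallon."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if 1 in path[-1]:
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])

print(solve())
```

The search finds a three-action solution: fill the 12-gallon jug, pour it into the 8-gallon jug (leaving 4 gallons), then pour into the 3-gallon jug (leaving exactly 1 gallon in the 12-gallon jug).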

23. What is the state of the queue at each iteration of BFS if it is called from node 'a'?
24. Fill out the following graph by labeling each node 1 through 12 according to the
order in which the depth-first search would visit the nodes:

25. Here is an example that compares the order that the graph is searched in when
using a BFS and then a DFS (by each of the three approaches)

.
26. Find a route from the 1st node that is 1 to the last element 16.using Bidirectional
Search, explain with interactions and direction with tree.
CO3. Informed Search
1. Simulated Annealing is a variation of Hill Climbing algorithm. Explain how
Simulated Annealing algorithm overcomes the limitations of Hill Climbing algorithm.

Hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing
elevation/value to find the peak of the mountain or best solution to the problem. It terminates when it
reaches a peak value where no neighbor has a higher value.
Hill Climbing is a heuristic optimization process that iteratively advances towards a better solution at each
step in order to find the best solution in a given search space. It is a straightforward and quick technique that
iteratively improves the initial solution by making little changes to it. Hill Climbing only accepts
solutions that are better than the current solution and employs a greedy technique to iteratively move
towards the best solution at each stage.
Hill Climbing may not locate the global optimum because it is susceptible to becoming caught in local
optima. Because of this, it is inappropriate for complex issues with numerous local optima. Hill Climbing is
simple to create and has no tweaking requirements.
In order to discover the best solution in a given search space, the probabilistic optimization algorithm
Simulated Annealing simulates the annealing process used in metalworking. The algorithm begins with a
randomly generated initial solution and incrementally improves it by accepting less desirable solutions
with a certain probability. The probability of accepting a worse solution decreases as the algorithm
progresses, which enables it to escape local optima and find the global optimum.
Simulated annealing explores the search space and avoids local optimum by employing a probabilistic
method to accept a worse solution with a given probability . The initial temperature, cooling schedule, and
acceptance probability function are just a few of the tuning parameters. Hill Climbing is faster, but Simulated
Annealing is better at locating the global optimum, particularly for complex issues with numerous local optima.
Several fields, including logistics, scheduling, and circuit design, use simulated annealing. The approach is
especially helpful for optimization issues when the objective function is challenging to evaluate or where the
search space is intricate and high-dimensional.
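The acceptance rule described above can be sketched in a few lines (a toy illustration: the objective function, cooling rate, and neighbor function below are arbitrary choices for the sketch, not part of any standard):

```python
import math, random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.995, steps=5000):
    """Minimise f, accepting worse moves with probability exp(-delta / T)."""
    x, t = x0, t0
    best = x
    for _ in range(steps):
        x_new = neighbor(x)
        delta = f(x_new) - f(x)
        # Always accept improvements; accept worse moves with prob e^(-delta/T),
        # which shrinks as the temperature T cools.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = x_new
            if f(x) < f(best):
                best = x
        t *= cooling          # geometric cooling schedule
    return best

# Toy objective with many local minima; global minimum at x = 0.
f = lambda x: x * x + 10 * math.sin(x) ** 2
random.seed(0)
best = simulated_annealing(f, x0=8.0, neighbor=lambda x: x + random.uniform(-1, 1))
print(round(best, 2))
```

A plain hill climber started at x0 = 8.0 would tend to stop in a nearby local minimum; the probabilistic acceptance lets the search escape it while the temperature is still high.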

2. Explain in detail what mean Informed Search Algorithms? Give advantage and
disadvantage of it?
A key idea in artificial intelligence (AI) and search algorithms is informed search,
which improves problem-solving effectiveness by using more information about the
issue at hand.

Informed search is a type of search algorithm in artificial intelligence that uses


additional information or heuristics to make more accurate decisions about which
paths to explore first. These heuristics provide estimates of how close a given state is
to the goal, guiding the search toward more promising solutions. Informed search is
particularly useful in solving complex problems efficiently, as it can significantly
reduce the search space and improve the speed of finding solutions.

Informed search algorithms can quickly reject irrelevant or less promising


alternatives, allowing the search to concentrate on the most reliable options, by
employing domain-specific knowledge to drive the search. Heuristics are used by
these kinds of AI search algorithms to increase the search’s effectiveness and speed.

Advantages

1. Efficiency:

Guided Search: Informed search algorithms use heuristics to guide the search process towards
the goal, often resulting in fewer nodes being explored compared to uninformed search
methods.

Faster Solution: By focusing on more promising paths, these algorithms can find solutions
faster, reducing the overall search time.

2. Optimal Solutions:

A* Algorithm: When using an admissible and consistent heuristic, the A* algorithm


guarantees finding the optimal solution. This ensures that the solution is not just quick, but
also the best possible according to the given criteria.

3. Reduced Search Space:

Heuristic Pruning: Heuristics can help prune large parts of the search space that are unlikely to
lead to a solution, thereby saving computational resources and time.

4. Scalability:
Applicable to Large Problems: These algorithms can be applied to large and complex
problems where uninformed search methods would be infeasible due to their high
computational requirements.

5. Flexibility:

Adjustable Heuristics: Heuristics can be tailored or adjusted based on specific problem


domains, making informed search algorithms versatile and adaptable to various types of
problems.

Disadvantages

1. Heuristic Design:

Complexity: Designing effective heuristics can be complex and often requires domain-
specific knowledge. Poor heuristics can lead to inefficient searches, negating the advantages
of informed search.

Accuracy: If the heuristic is not accurate, it can misguide the search process, potentially
leading to longer search times or suboptimal solutions.

2. Computational Overhead:

Heuristic Calculation: The computation of heuristics can add overhead, especially if they are
complex or require significant computation themselves. This can sometimes outweigh the
benefits gained from using the heuristic.

3. Memory Usage:

Resource Intensive: Some informed search algorithms, like A*, can be memory-intensive as
they need to store all explored nodes and their associated costs. This can be a limitation for
very large search spaces.

4. Incomplete Information:
Dependency on Heuristics: The performance of informed search algorithms heavily depends
on the quality and availability of heuristic information. In cases where heuristic information is
incomplete or unavailable, the algorithms may not perform well.

5. Overfitting:

Specificity: Heuristics tailored too specifically for certain problems may not generalize well to
other problems. This can lead to overfitting where the heuristic works well for a particular
instance but poorly on others.

3. Define hill climbing and describe its disadvantages. Also write the solution to these
disadvantages?

Figure shows the tentative mapping of Fort in Maharashtra

4. Find the route from Vasai fort to Panhala Fort Using A* search method (It’s just
problem formation locations and distance is imaginary) .

5. Explain concept of Genetic Algorithm. State the taxonomy of the crossover operator.

A genetic algorithm is an adaptive heuristic search algorithm inspired by


"Darwin's theory of evolution in Nature." It is used to solve optimization problems
in machine learning. It is one of the important algorithms as it helps solve complex
problems that would take a long time to solve.

terminologies to better understand this algorithm:

o Population: Population is the subset of all possible or probable solutions,


which can solve the given problem.
o Chromosomes: A chromosome is one of the solutions in the population for
the given problem, and the collection of gene generate a chromosome.
o Gene: A chromosome is divided into a different gene, or it is an element of
the chromosome.
o Fitness Function: The fitness function is used to determine the individual's
fitness level in the population. It means the ability of an individual to compete
with other individuals. In every iteration, individuals are evaluated based on
their fitness function.
o Genetic Operators: genetic operators play a role in changing the genetic
composition of the next generation.
o Selection

After calculating the fitness of every individual in the population, a selection process is
used to determine which individuals will get to reproduce and produce the offspring
that will form the coming generation.

The genetic algorithm works on the evolutionary generational cycle to generate


high-quality solutions. These algorithms use different operations that either enhance
or replace the population to give an improved fit solution.

Operators of Genetic Algorithms

Once the initial generation is created, the algorithm evolves the generation using following operators –
1) Selection Operator: The idea is to give preference to the individuals with good fitness scores and allow
them to pass their genes to successive generations.
2) Crossover Operator: This represents mating between individuals. Two individuals are selected using
selection operator and crossover sites are chosen randomly. Then the genes at these crossover sites are
exchanged thus creating a completely new individual (offspring). For example –

3) Mutation Operator: The key idea is to insert random genes in offspring to maintain the diversity in the
population to avoid premature convergence. For example –

Encoding scheme-dependent:

 Binary crossover: Applicable to GAs where chromosomes are represented as binary strings
(0s and 1s). Examples include single-point crossover, two-point crossover, and uniform
crossover.
 Real-coded crossover: Applicable to GAs where chromosomes represent real numbers.
Examples include arithmetic crossover and simulated binary crossover.
 Permutation crossover: Applicable to GAs where chromosomes represent orderings or
permutations (e.g., scheduling tasks). Examples include order crossover and cycle crossover.
 Tree-based crossover: Applicable to GAs where chromosomes are represented as tree
structures. Examples include subtree crossover and single-point crossover (on tree
structures).
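Single-point binary crossover, the simplest entry in the taxonomy above, can be sketched as follows (the parent strings are arbitrary examples):

```python
import random

def single_point_crossover(p1, p2):
    """Binary single-point crossover: swap the tails after a random cut point."""
    point = random.randint(1, len(p1) - 1)   # cut somewhere strictly inside
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

random.seed(1)
a, b = "110110", "001001"
c1, c2 = single_point_crossover(a, b)
print(c1, c2)
```

Each offspring takes its head from one parent and its tail from the other, so every gene position still holds exactly the pair of alleles the parents carried.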

6. Consider the search problem below with start state S and Goal state G. The transition
cost are next to the edges and the heuristic values are as shown in the table. Calculate
the final cost using A * search algorithm.

Table : Heuristic Values – Straight line distance to G

6. Explain min-max search algorithm with its applications?

o Mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making


and game theory. It provides an optimal move for the player assuming that opponent is also
playing optimally.
o Mini-Max algorithm uses recursion to search through the game-tree.
o Min-Max algorithm is mostly used for game playing in AI, such as Chess, Checkers, Tic-
Tac-Toe, Go, and various other two-player games. This algorithm computes the minimax
decision for the current state.
o In this algorithm two players play the game; one is called MAX and the other is called MIN.
o Each player tries to ensure that the opponent gets the minimum benefit while they
themselves get the maximum benefit.
o Both Players of the game are opponent of each other, where MAX will select the maximized
value and MIN will select the minimized value.
o The minimax algorithm performs a depth-first search algorithm for the exploration of the
complete game tree.
o The minimax algorithm proceeds all the way down to the terminal node of the tree, then
backtrack the tree as the recursion.
Properties of Mini-Max algorithm:
o Complete- Min-Max algorithm is Complete. It will definitely find a solution (if exist), in the
finite search tree.
o Optimal- Min-Max algorithm is optimal if both opponents are playing optimally.
o Time complexity- As it performs DFS on the game-tree, the time complexity of the Min-Max
algorithm is O(b^m), where b is the branching factor of the game-tree and m is the maximum
depth of the tree.
o Space Complexity- The space complexity of the Mini-max algorithm is O(bm), similar to DFS.

A value accompanies each board state. If the maximizer has the upper hand in a particular situation, the
board score will typically be positive. In that board condition, if the minimizer has the advantage, it will
typically be some negative number.
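The backed-up values described above can be computed with a short recursive sketch (a generic game-tree illustration; the tree literal below is a hypothetical example, not from the notes):

```python
def minimax(node, is_max):
    """Return the backed-up minimax value of a game-tree node.

    A node is either a number (terminal utility) or a list of child nodes.
    """
    if isinstance(node, (int, float)):     # terminal state: return utility
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# Hypothetical 2-ply tree: MAX to move, each MIN node has two terminal leaves.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))   # MIN backs up 3, 2, 0; MAX then chooses 3
```

The recursion is a depth-first walk to the terminal nodes, then values are backed up alternately by max and min, exactly as the algorithm above describes.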

Limitation of the minimax Algorithm:


The main drawback of the minimax algorithm is that it gets really slow for complex games such as
Chess and Go. These games have a huge branching factor, and the player has lots of choices to
decide among. This limitation of the minimax algorithm can be improved by alpha-beta pruning, which
we have discussed in the next topic.

7. Using Game theory “Tic Tac Toe” explain Min –Max Search?
https://levelup.gitconnected.com/minimax-algorithm-explanation-using-tic-tac-toe-
game-22668694aa13

9. Evaluate the optimal value using alpha-beta pruning for the following example.
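Since the example figure for this question is not reproduced in these notes, the following is a generic sketch of alpha-beta pruning on a hypothetical tree. It returns the same value as plain minimax while skipping branches that cannot affect the result.

```python
def alphabeta(node, is_max, alpha=float("-inf"), beta=float("inf")):
    # A node is either a number (terminal utility) or a list of child nodes.
    if isinstance(node, (int, float)):
        return node
    if is_max:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:          # beta cutoff: MIN will never allow this
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:              # alpha cutoff: MAX will never allow this
            break
    return value

tree = [[3, 5], [2, 9], [0, 7]]        # hypothetical 2-ply tree, MAX to move
print(alphabeta(tree, True))           # same value as plain minimax
```

In the second branch, once MIN finds 2 (below alpha = 3), the leaf 9 need not be examined; that is exactly the pruning the exam question asks you to trace by hand.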
CO4: Knowledge reasoning
1) What is knowledge-based agent.

o An intelligent agent needs knowledge about the real world for taking decisions
and reasoning to act efficiently.
o Knowledge-based agents are those agents who have the capability of maintaining an
internal state of knowledge, reason over that knowledge, update their knowledge after
observations and take actions. These agents can represent the world with some formal
representation and act intelligently.
o Knowledge-based agents are composed of two main parts:
o Knowledge-base and
o Inference system.

A knowledge-based agent must be able to do the following:

o An agent should be able to represent states, actions, etc.


o An agent should be able to incorporate new percepts
o An agent can update the internal representation of the world
o An agent can deduce the internal representation of the world
o An agent can deduce appropriate actions.

The architecture of knowledge-based agent:

The above diagram represents a generalized architecture for a knowledge-based agent. The knowledge-based agent (KBA)
takes input from the environment by perceiving the environment. The input is taken by the inference engine of the agent,
which also communicates with the KB to decide as per the knowledge stored in the KB. The learning element of the KBA regularly updates the
KB by learning new knowledge.

Knowledge base: Knowledge-base is a central component of a knowledge-based agent, it is also known as KB. It is a collection
of sentences (here 'sentence' is a technical term and it is not identical to sentence in English). These sentences are expressed in
a language which is called a knowledge representation language. The Knowledge-base of KBA stores fact about the world.

Why use a knowledge base?

Knowledge-base is required for updating knowledge for an agent to learn with experiences and take action as per the
knowledge.


Inference system

Inference means deriving new sentences from old. Inference system allows us to add a new sentence to the knowledge base. A
sentence is a proposition about the world. Inference system applies logical rules to the KB to deduce new information.

Inference system generates new facts so that an agent can update the KB. An inference system works mainly in two rules which
are given as:

o Forward chaining
o Backward chaining

Operations Performed by KBA

Following are three operations which are performed by KBA in order to show the intelligent behavior:

1. TELL: This operation tells the knowledge base what it perceives from the environment.
2. ASK: This operation asks the knowledge base what action it should perform.
3. Perform: It performs the selected action.

2) Give PEAS descriptors of WUMPUS World.


https://www.geeksforgeeks.org/ai-the-wumpus-world-description/

3) Explain various methods of knowledge representation technique.

There are mainly four ways of knowledge representation which are given as follows:

1. Logical Representation
2. Semantic Network Representation
3. Frame Representation
4. Production Rules
1. Logical Representation

Logical representation is a language with some concrete rules which deals with propositions and has no ambiguity in
representation. Logical representation means drawing a conclusion based on various conditions. This representation lays down
some important communication rules. It consists of precisely defined syntax and semantics which supports the sound inference.
Each sentence can be translated into logics using syntax and semantics.

Syntax:


o Syntaxes are the rules which decide how we can construct legal sentences in the logic.
o It determines which symbol we can use in knowledge representation.
o How to write those symbols.

Semantics:

o Semantics are the rules by which we can interpret the sentence in the logic.
o Semantic also involves assigning a meaning to each sentence.

Logical representation can be categorised into mainly two logics:

a. Propositional Logics
b. Predicate logics

Note: We will discuss Prepositional Logics and Predicate logics in later chapters.

Advantages of logical representation:

1. Logical representation enables us to do logical reasoning.


2. Logical representation is the basis for the programming languages.
Disadvantages of logical Representation:

1. Logical representations have some restrictions and are challenging to work with.
2. Logical representation technique may not be very natural, and inference may not be so efficient.

Note: Do not be confused with logical representation and logical reasoning as logical representation is a representation
language and reasoning is a process of thinking logically.

2. Semantic Network Representation

Semantic networks are alternative of predicate logic for knowledge representation. In Semantic networks, we can represent our
knowledge in the form of graphical networks. This network consists of nodes representing objects and arcs which describe the
relationship between those objects. Semantic networks can categorize the object in different forms and can also link those
objects. Semantic networks are easy to understand and can be easily extended.




This representation consists of mainly two types of relations:

a. IS-A relation (Inheritance)


b. Kind-of-relation

Example: Following are some statements which we need to represent in the form of nodes and arcs.

Statements:

a. Jerry is a cat.
b. Jerry is a mammal
c. Jerry is owned by Priya.
d. Jerry is brown colored.
e. All Mammals are animal.
In the above diagram, we have represented the different type of knowledge in the form of nodes and arcs. Each object is
connected with another object by some relation.

Drawbacks in Semantic representation:

1. Semantic networks take more computational time at runtime as we need to traverse the complete network tree to
answer some questions. It might be possible in the worst case scenario that after traversing the entire tree, we find
that the solution does not exist in this network.
2. Semantic networks try to model human-like memory (which has about 10^15 neurons and links) to store the information, but
in practice, it is not possible to build such a vast semantic network.
3. These types of representations are inadequate as they do not have any equivalent quantifier, e.g., for all, for some,
none, etc.
4. Semantic networks do not have any standard definition for the link names.
5. These networks are not intelligent and depend on the creator of the system.

Advantages of Semantic network:

1. Semantic networks are a natural representation of knowledge.


2. Semantic networks convey meaning in a transparent manner.
3. These networks are simple and easily understandable.

3. Frame Representation

A frame is a record like structure which consists of a collection of attributes and its values to describe an entity in the world.
Frames are the AI data structure which divides knowledge into substructures by representing stereotypes situations. It consists
of a collection of slots and slot values. These slots may be of any type and sizes. Slots have names and values which are called
facets.

Facets: The various aspects of a slot are known as facets. Facets are features of frames which enable us to put constraints on the
frames. Example: IF-NEEDED facets are called when data of any particular slot is needed. A frame may consist of any number of
slots, a slot may include any number of facets, and facets may have any number of values. A frame is also known as slot-filler
knowledge representation in artificial intelligence.

Frames are derived from semantic networks and later evolved into our modern-day classes and objects. A single frame is not
much useful. A frame system consists of a collection of connected frames. In a frame, knowledge about an object or
event can be stored together in the knowledge base. The frame is a type of technology which is widely used in various
applications including natural language processing and machine vision.

Advantages of frame representation:

1. The frame knowledge representation makes the programming easier by grouping the related data.
2. The frame representation is comparably flexible and used by many applications in AI.
3. It is very easy to add slots for new attribute and relations.
4. It is easy to include default data and to search for missing values.
5. Frame representation is easy to understand and visualize.

Disadvantages of frame representation:

1. In the frame system, the inference mechanism cannot be easily processed.
2. The inference mechanism cannot proceed smoothly in frame representation.
3. Frame representation has a much generalized approach.
4. Production Rules

Production rules system consist of (condition, action) pairs which mean, "If condition then action". It has mainly three parts:

o The set of production rules


o Working Memory
o The recognize-act-cycle

In production rules agent checks for the condition and if the condition exists then production rule fires and corresponding
action is carried out. The condition part of the rule determines which rule may be applied to a problem. And the action part
carries out the associated problem-solving steps. This complete process is called a recognize-act cycle.

The working memory contains the description of the current state of problems-solving and rule can write knowledge to the
working memory. This knowledge match and may fire other rules.

If there is a new situation (state) generates, then multiple production rules will be fired together, this is called conflict set. In this
situation, the agent needs to select a rule from these sets, and it is called a conflict resolution.

Example:

o IF (at bus stop AND bus arrives) THEN action (get into the bus)
o IF (on the bus AND paid AND empty seat) THEN action (sit down).
o IF (on bus AND unpaid) THEN action (pay charges).
o IF (bus arrives at destination) THEN action (get down from the bus).
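The recognize-act cycle over these bus rules can be sketched as follows (a simplified illustration; real production systems add richer conflict-resolution strategies such as specificity or recency):

```python
# Each production rule is a (condition-set, action) pair: "IF condition THEN action".
rules = [
    ({"at bus stop", "bus arrives"}, "get into the bus"),
    ({"on the bus", "paid", "empty seat"}, "sit down"),
    ({"on bus", "unpaid"}, "pay charges"),
    ({"bus arrives at destination"}, "get down from the bus"),
]

# Working memory holds the current state of problem-solving.
working_memory = {"on the bus", "paid", "empty seat"}

# Recognize: collect every rule whose condition part matches working memory
# (this is the conflict set), then resolve the conflict by firing the first match.
conflict_set = [action for condition, action in rules
                if condition <= working_memory]
fired = conflict_set[0] if conflict_set else None
print(fired)
```

Here only the second rule's conditions are all present in working memory, so the conflict set has a single entry and "sit down" fires.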

Advantages of Production rule:

1. The production rules are expressed in natural language.


2. The production rules are highly modular, so we can easily remove, add or modify an individual rule.

Disadvantages of Production rule:

1. Production rule system does not exhibit any learning capabilities, as it does not store the result of the problem for the
future uses.
2. During the execution of the program, many rules may be active hence rule-based production systems are inefficient.

4) Write a short note on predicate logic.

Predicate Logic deals with predicates, which are propositions containing variables.

Predicate Logic - Definition

A predicate is an expression of one or more variables defined on some specific domain. A
predicate with variables can be made a proposition by either assigning a value to the variable or by
quantifying the variable.

The following are some examples of predicates.


o Consider E(x, y) denote "x = y"
o Consider X(a, b, c) denote "a + b + c = 0"
o Consider M(x, y) denote "x is married to y."

Quantifier:
The variable of predicates is quantified by quantifiers. There are two types of quantifier in predicate
logic - Existential Quantifier and Universal Quantifier.

Existential Quantifier:
If p(x) is a proposition over the universe U. Then it is denoted as ∃x p(x) and read as "There exists at
least one value in the universe of variable x such that p(x) is true. The quantifier ∃ is called the
existential quantifier.

There are several ways to write a proposition, with an existential quantifier, i.e.,

(∃x∈A)p(x) or ∃x∈A such that p (x) or (∃x)p(x) or p(x) is true for some x ∈A.

Universal Quantifier:
If p(x) is a proposition over the universe U. Then it is denoted as ∀x,p(x) and read as "For every
x∈U,p(x) is true." The quantifier ∀ is called the Universal Quantifier.

There are several ways to write a proposition, with a universal quantifier.

∀x∈A,p(x) or p(x), ∀x ∈A Or ∀x,p(x) or p(x) is true for all x ∈A.

Negation of Quantified Propositions:


When we negate a quantified proposition, i.e., when a universally quantified proposition is negated,
we obtain an existentially quantified proposition,and when an existentially quantified proposition is
negated, we obtain a universally quantified proposition.

The two rules for negation of quantified proposition are as follows. These are also called DeMorgan's
Law.
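Over a finite universe these two negation rules can be checked directly, since ∀ and ∃ correspond to Python's all and any. The universe and predicate below are illustrative choices, and a finite-domain check is of course not a proof over R:

```python
U = range(-5, 6)                      # a finite stand-in universe
p = lambda x: x ** 2 >= x             # sample predicate p(x)

forall_p = all(p(x) for x in U)       # corresponds to  ∀x p(x)
exists_p = any(p(x) for x in U)       # corresponds to  ∃x p(x)

# De Morgan duality for quantifiers:
#   ¬∀x p(x)  ≡  ∃x ¬p(x)      and      ¬∃x p(x)  ≡  ∀x ¬p(x)
assert (not forall_p) == any(not p(x) for x in U)
assert (not exists_p) == all(not p(x) for x in U)
print(forall_p, exists_p)
```

Swapping in any other predicate for p leaves both assertions true, which is the content of the two negation rules.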

Propositions with Multiple Quantifiers:


The proposition having more than one variable can be quantified with multiple quantifiers. The
multiple universal quantifiers can be arranged in any order without altering the meaning of the
resulting proposition. Also, the multiple existential quantifiers can be arranged in any order without
altering the meaning of the proposition.
The proposition which contains both universal and existential quantifiers, the order of those
quantifiers can't be exchanged without altering the meaning of the proposition, e.g., the proposition
∃x ∀ y p(x,y) means "There exists some x such that p (x, y) is true for every y."


Example: Write the negation for each of the following. Determine whether the resulting statement is
true or false. Assume U = R.

1. ∀x ∃m (x² < m)

Sol: The negation of ∀x ∃m (x² < m) is ∃x ∀m (x² ≥ m). The meaning of ∃x ∀m (x² ≥ m) is that there
exists some x such that x² ≥ m for every m. The statement is false: for any x we can choose
m = x² + 1, so no x satisfies x² ≥ m for every m.

2. ∃m ∀x (x² < m)

Sol: The negation of ∃m ∀x (x² < m) is ∀m ∃x (x² ≥ m). The meaning of ∀m ∃x (x² ≥ m) is that for every m,
there exists some x such that x² ≥ m. The statement is true, as for every m there exists some
large enough x such that x² ≥ m.

5) Explain forward chaining and Backward chaining algorithm with the help of
example.

Inference engine:

The inference engine is the component of the intelligent system in artificial intelligence, which applies logical rules to the
knowledge base to infer new information from known facts. The first inference engine was part of the expert system. Inference
engine commonly proceeds in two modes, which are:

a. Forward chaining
b. Backward chaining

Horn Clause and Definite clause:

Horn clause and definite clause are the forms of sentences, which enables knowledge base to use a more restricted and efficient
inference algorithm. Logical inference algorithms use forward and backward chaining approaches, which require KB in the form
of the first-order definite clause.


Definite clause: A clause which is a disjunction of literals with exactly one positive literal is known as a definite clause or strict
horn clause.

Horn clause: A clause which is a disjunction of literals with at most one positive literal is known as horn clause. Hence all the
definite clauses are horn clauses.

Example: (¬ p V ¬ q V k). It has only one positive literal k.


It is equivalent to p ∧ q → k.

A. Forward Chaining

Forward chaining is also known as a forward deduction or forward reasoning method when using an inference engine. Forward
chaining is a form of reasoning which start with atomic sentences in the knowledge base and applies inference rules (Modus
Ponens) in the forward direction to extract more data until a goal is reached.

The Forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and add their
conclusion to the known facts. This process repeats until the problem is solved.

Properties of Forward-Chaining:

o It is a bottom-up approach, as it moves from bottom to top.


o It is a process of making a conclusion based on known facts or data, by starting from the initial state and reaches the
goal state.
o Forward-chaining approach is also called as data-driven as we reach to the goal using available data.
o Forward -chaining approach is commonly used in the expert system, such as CLIPS, business, and production rule
systems.

Consider the following famous example which we will use in both approaches:

Example:

"As per the law, it is a crime for an American to sell weapons to hostile nations. Country A, an enemy of America, has
some missiles, and all the missiles were sold to it by Robert, who is an American citizen."

Prove that "Robert is criminal."

To solve the above problem, first, we will convert all the above facts into first-order definite clauses, and then we will use a
forward-chaining algorithm to reach the goal.

Facts Conversion into FOL:

o It is a crime for an American to sell weapons to hostile nations. (Let's say p, q, and r are variables)
American (p) ∧ weapon(q) ∧ sells (p, q, r) ∧ hostile(r) → Criminal(p) ...(1)
o Country A has some missiles. ∃p Owns(A, p) ∧ Missile(p). It can be written in two definite clauses by using Existential
Instantiation, introducing a new constant T1.
Owns(A, T1) ......(2)
Missile(T1) .......(3)
o All of the missiles were sold to country A by Robert.
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
o Missiles are weapons.
Missile(p) → Weapons (p) .......(5)
o Enemy of America is known as hostile.
Enemy(p, America) →Hostile(p) ........(6)
o Country A is an enemy of America.
Enemy (A, America) .........(7)
o Robert is American
American(Robert). ..........(8)
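The forward-chaining run over these clauses can be sketched as follows. This is a propositional simplification: the ground instances are written out directly with the constant T1, so unification is side-stepped.

```python
# Definite clauses (1), (4), (5), (6) as (premises, conclusion) pairs.
rules = [
    ({"American(Robert)", "Weapon(T1)", "Sells(Robert,T1,A)", "Hostile(A)"},
     "Criminal(Robert)"),                                   # clause (1)
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),  # clause (4)
    ({"Missile(T1)"}, "Weapon(T1)"),                        # clause (5)
    ({"Enemy(A,America)"}, "Hostile(A)"),                   # clause (6)
]
# Known facts: clauses (2), (3), (7), (8).
facts = {"Owns(A,T1)", "Missile(T1)", "Enemy(A,America)", "American(Robert)"}

# Trigger every rule whose premises are satisfied, add its conclusion,
# and repeat until no new facts appear (the goal is then in the KB).
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("Criminal(Robert)" in facts)
```

The first pass derives Sells, Weapon, and Hostile from the known facts; the second pass then satisfies clause (1) and adds Criminal(Robert), matching the hand derivation.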

B. Backward Chaining:
Backward-chaining is also known as a backward deduction or backward reasoning method when using an inference engine. A
backward chaining algorithm is a form of reasoning, which starts with the goal and works backward, chaining through rules to
find known facts that support the goal.

Properties of backward chaining:

o It is known as a top-down approach.


o Backward-chaining is based on modus ponens inference rule.
o In backward chaining, the goal is broken into sub-goal or sub-goals to prove the facts true.
o It is called a goal-driven approach, as a list of goals decides which rules are selected and used.
o Backward -chaining algorithm is used in game theory, automated theorem proving tools, inference engines, proof
assistants, and various AI applications.
o The backward-chaining method mostly used a depth-first search strategy for proof.

Example:


In backward-chaining, we will use the same above example, and will rewrite all the rules.

o American (p) ∧ weapon(q) ∧ sells (p, q, r) ∧ hostile(r) → Criminal(p) ...(1)


Owns(A, T1) ........(2)
o Missile(T1) .......(3)
o ∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
o Missile(p) → Weapons (p) .......(5)
o Enemy(p, America) →Hostile(p) ........(6)
o Enemy (A, America) .........(7)
o American(Robert). ..........(8)

6) Write short note on Resolution.

https://www.javatpoint.com/ai-resolution-in-first-order-logic

7) Explain the steps involved in converting the propositional logic statement into CNF
with a suitable example.

https://youtu.be/Jf2T8RdCYfA?si=tIoIZnONEsdeUPbr

8) Explain PROLOG in detail.

Introduction :

Prolog is a logic programming language. It has an important role in artificial intelligence. Unlike
many other programming languages, Prolog is intended primarily as a declarative
programming language. In Prolog, logic is expressed as relations (called Facts and Rules).
The core of Prolog lies in the logic being applied. Computation is carried out
by running a query over these relations.
Syntax and Basic Fields :

In Prolog, we declare some facts. These facts constitute the Knowledge Base of the system. We
can query against the Knowledge Base. We get an affirmative output if our query is already in
the Knowledge Base or is implied by it; otherwise we get a negative output.
So, the Knowledge Base can be considered similar to a database, against which we can query. Prolog
facts are expressed in a definite pattern. Facts contain entities and their relation. Entities are
written within parentheses separated by commas (,). Their relation is expressed at the start,
outside the parentheses. Every fact/rule ends with a dot (.). So, a typical Prolog fact goes as
follows :

Format : relation(entity1, entity2, ....k'th entity).

Example :
friends(raju, mahesh).
singer(sonu).
odd_number(5).

Explanation :
These facts can be interpreted as :
raju and mahesh are friends.
sonu is a singer.
5 is an odd number.

Key Features :
1. Unification : The basic idea is, can the given terms be made to represent the same structure.
2. Backtracking : When a task fails, prolog traces backwards and tries to satisfy previous task.
3. Recursion : Recursion is the basis for any search in program.

Running queries :
A typical prolog query can be asked as :

Query 1 : ?- singer(sonu).
Output : Yes.

Explanation : As our knowledge base contains


the above fact, so output was 'Yes', otherwise
it would have been 'No'.

Query 2 : ?- odd_number(7).
Output : No.

Explanation : As our knowledge base does not


contain the above fact, so output was 'No'.

Advantages :
1. Easy to build database. Doesn’t need a lot of programming effort.
2. Pattern matching is easy. Search is recursion based.
3. It has built in list handling. Makes it easier to play with any algorithm involving lists.

Disadvantages :
1. LISP (another AI programming language) dominates over Prolog with respect to I/O
features.
2. Sometimes input and output is not easy.

Applications :

Prolog is highly used in artificial intelligence(AI). Prolog is also used for pattern matching over
natural language parse trees.

8) Write a short note on belief network.

https://www.javatpoint.com/bayesian-belief-network-in-artificial-intelligence

10) Explain the steps involved in converting the propositional logic statement into CNF.
Consider the following Axioms.
All people who are graduating are happy.
All happy people smile.
Someone is Graduating.
1) Represent these axioms in FOL.
2) Convert the FOL to CNF.
3) Prove that someone is smiling using resolution technique

11) Explain the steps involved in converting the propositional logic statement into CNF.
Consider the following Axioms.
Rani is hungry.
If rani is hungry she barks.
If rani is barking then raja is angry.
1) Represent these axioms in FOL.
2) Convert the FOL to CNF.
3) Prove that Raja is angry by using resolution technique

CO5 Fuzzy Logic


1. For the following fuzzy set, find out whether the given set is normal. Find out its height, support, core
and cardinality.

2. U=Flowers= {Jasmine, Rose, Lotus, Daffodil, Sunflower, Hibiscus,


Chrysanthemum}. Be the universe on which two fuzzy sets one for beautiful flowers
and other for fragrant flowers are defined as shown in below,

3. X={5, 10, 20, 30, 40, 50, 60, 7}


Label this fuzzy set with the labels infant, young, adult, and old (assume suitable
membership grades). Find out all values for the 4 classes:
a. Support
b. Young(0.8)
c. If old is a subset of adult, give justification
d.

e.

4. Derive cardinality and relative cardinality of a fuzzy set.

5. Obtain the subset hood and equality measures S(A,B) and E(A,B) among the
following fuzzy sets
a. A = 0.1/0.1 + 0.2/0.2 + 0.3/0.3 + 0.4/0.4 + 0.5/0.5
b. B = 0.2/0.1 + 0.2/0.2 + 0.4/0.3 + 0.4/0.4 + 0.6/0.5
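One common pair of definitions (assumed here, since the notes do not state the formulas) takes the subsethood S(A,B) = |A ∩ B| / |A| and the equality measure E(A,B) = |A ∩ B| / |A ∪ B|, with min/max for intersection/union and the sigma-count (sum of memberships) as cardinality. Computing them for the two sets above:

```python
# Fuzzy sets written as element -> membership (notation μ/x from the question).
A = {0.1: 0.1, 0.2: 0.2, 0.3: 0.3, 0.4: 0.4, 0.5: 0.5}
B = {0.1: 0.2, 0.2: 0.2, 0.3: 0.4, 0.4: 0.4, 0.5: 0.6}

inter = sum(min(A[x], B[x]) for x in A)    # |A ∩ B| = 1.5  (min per element)
union = sum(max(A[x], B[x]) for x in A)    # |A ∪ B| = 1.8  (max per element)
card_A = sum(A.values())                   # |A|     = 1.5  (sigma-count)

S = inter / card_A                         # subsethood S(A,B)
E = inter / union                          # equality measure E(A,B)
print(S, round(E, 3))
```

Since every membership in B is at least the corresponding membership in A, S(A,B) = 1.0 (A is fully contained in B), while E(A,B) = 1.5/1.8 ≈ 0.833.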

6.Define Reflexivity and symmetry of a binary fuzzy relation on a single set.

7. What are fuzzy propositions?

Fuzzy propositions are statements within the framework of fuzzy logic, which deals with
reasoning that is approximate rather than fixed and exact. Traditional logic involves
propositions that are either true or false, but fuzzy logic allows for propositions to have a
degree of truth ranging between completely true and completely false. This concept is
crucial in dealing with real-world scenarios where information is often uncertain, imprecise,
or vague.

Key Concepts of Fuzzy Propositions

1. Fuzziness: Fuzziness arises from the ambiguity and vagueness present in many real-world
situations. A fuzzy proposition can reflect this uncertainty by having a truth value that is not
just true (1) or false (0), but any value in between.
2. Fuzzy Sets: Instead of classical sets where elements either belong or don't belong, fuzzy sets
allow elements to have degrees of membership. For example, in the fuzzy set of "tall people,"
someone might belong to this set with a membership degree of 0.7 if they are somewhat tall.
3. Linguistic Variables: These are variables whose values are words or sentences from a natural
language, instead of numerical values. For instance, "temperature" might be a linguistic
variable with values like "cold," "warm," and "hot," which can be represented by fuzzy sets.
4. Truth Values: In fuzzy logic, truth values are not binary. A proposition like "John is tall" might
have a truth value of 0.8, indicating that John is mostly tall but not completely so.
5. Fuzzy Operators: Logical operators such as AND, OR, and NOT are extended to handle fuzzy
propositions. For example, the fuzzy AND operation might return the minimum of two truth
values, reflecting the idea that both conditions need to be sufficiently true.

Examples of Fuzzy Propositions

1. Temperature Control:
 Proposition: "The room is warm."
 In classical logic: true or false.
 In fuzzy logic: the truth value could be 0.6, indicating the room is somewhat warm.
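The min/max/complement behavior of fuzzy operators can be written directly (the standard Zadeh connectives; the degrees of truth below are hypothetical):

```python
# Standard (Zadeh) fuzzy connectives on truth values in [0, 1]:
# AND -> minimum, OR -> maximum, NOT -> complement.

def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

def f_not(a):
    return 1.0 - a

warm, humid = 0.6, 0.8          # hypothetical degrees of truth
print(f_and(warm, humid))       # "warm AND humid": limited by the weaker truth
print(f_or(warm, humid))        # "warm OR humid": carried by the stronger truth
print(f_not(warm))              # "NOT warm"
```

Note that for truth values restricted to {0, 1} these reduce exactly to the classical Boolean connectives, which is why fuzzy logic is described as a generalization of two-valued logic.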

________________________________________________

The word fuzzy refers to things which are not clear or are vague. Any
event, process, or function that is changing continuously cannot always
be defined as either true or false, which means that we need to define
such activities in a Fuzzy manner.

What is Fuzzy Logic?

Fuzzy Logic resembles the human decision-making methodology. It deals
with vague and imprecise information. It is based on degrees of truth rather
than the usual true/false (1/0) Boolean logic, which is a gross
oversimplification of real-world problems.
Take a look at the following diagram. It shows that in fuzzy systems, the
values are indicated by a number in the range from 0 to 1. Here 1.0
represents absolute truth and 0.0 represents absolute falseness. The
number which indicates the value in fuzzy systems is called the truth
value.

In other words, we can say that fuzzy logic is not logic that is fuzzy, but
logic that is used to describe fuzziness. There can be numerous other
examples like this with the help of which we can understand the concept
of fuzzy logic.

Fuzzy Logic was introduced in 1965 by Lotfi A. Zadeh in his research
paper "Fuzzy Sets". He is considered the father of Fuzzy Logic.

Fuzzy Logic - Classical Set Theory

A set is an unordered collection of different elements. It can be written


explicitly by listing its elements using the set bracket. If the order of the
elements is changed or any element of a set is repeated, it does not make
any changes in the set.

Example

 A set of all positive integers.


 A set of all the planets in the solar system.
 A set of all the states in India.
 A set of all the lowercase letters of the alphabet.
8. Explain a fuzzification method.

Fuzzification is the process of converting a crisp input value into a fuzzy value, which involves
mapping an input value to a corresponding degree of membership in a fuzzy set. This
process is a crucial step in fuzzy logic systems, enabling the system to handle imprecise and
vague data. Here's a detailed look at the fuzzification method:

Components of Fuzzification

1. Fuzzy Sets: These are sets with boundaries that are not sharply defined. Each element in the
fuzzy set has a degree of membership, ranging from 0 to 1, which indicates how strongly the
element belongs to the set.
2. Membership Functions: These functions define how each point in the input space is
mapped to a degree of membership. Common types of membership functions include
triangular, trapezoidal, Gaussian, and sigmoid functions.

Steps in the Fuzzification Process

1. Define the Universe of Discourse: Determine the range of input values for which the fuzzy
sets are defined.
2. Establish Fuzzy Sets and Membership Functions: For each input variable, create fuzzy sets
and their corresponding membership functions. For example, for the input variable
"temperature," you might have fuzzy sets like "Cold," "Warm," and "Hot," each with its
membership function.
3. Convert Crisp Input to Fuzzy Values: Take the crisp input value and determine its degree
of membership in each fuzzy set using the membership functions.

Example of Fuzzification

Consider an example where the input variable is "temperature," measured in degrees Celsius.
Let's define three fuzzy sets for this variable:

 Cold: Represented by a triangular membership function.


 Warm: Represented by a trapezoidal membership function.
 Hot: Represented by a Gaussian membership function.
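The fuzzification step for this temperature example can be sketched as follows. For brevity all three sets use a triangular membership function here (the notes suggest trapezoidal and Gaussian shapes for Warm and Hot), and the parameter values are illustrative guesses, not taken from the notes:

```python
def triangular(x, a, b, c):
    """Degree of membership for a triangle with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)     # rising edge
    return (c - x) / (c - b)         # falling edge

# Fuzzify a crisp temperature of 22 °C against three fuzzy sets.
temp = 22.0
memberships = {
    "Cold": triangular(temp, -10, 0, 15),
    "Warm": triangular(temp, 10, 20, 30),
    "Hot":  triangular(temp, 25, 35, 45),
}
print(memberships)
```

At 22 °C the crisp input falls on the falling edge of "Warm" and outside the supports of "Cold" and "Hot", so the fuzzifier outputs a degree of 0.8 for Warm and 0 for the other two sets.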

9. Draw the typical architecture of an FLC

Architecture and Operations of FLC System:


1. The basic architecture of a fuzzy logic controller is shown in Figure 2. The principal components of an FLC
system are a fuzzifier, a fuzzy rule base, a fuzzy knowledge base, an inference engine, and a defuzzifier. It
also includes parameters for normalization. When the output from the defuzzifier is not a control action for a
plant, then the system is a fuzzy logic decision system. The fuzzifier converts crisp quantities into
fuzzy quantities. The fuzzy rule base stores knowledge about the operation of the process from domain
expertise. The fuzzy knowledge base stores the knowledge about all the input-output fuzzy relationships. It
includes the membership functions defining the input variables to the fuzzy rule base and the output variables
to the plant under control. The inference engine is the kernel of an FLC system, and it possesses the
capability to simulate human decisions by performing approximate reasoning to achieve the desired control
strategy. The defuzzifier converts the fuzzy quantities into crisp quantities from an inferred fuzzy control
action by the inference engine.

Architecture of Fuzzy Logic Control

The following diagram shows the architecture of Fuzzy Logic Control (FLC).

Major Components of FLC

Following are the major components of the FLC, as shown in the above figure −

Fuzzifier − The role of fuzzifier is to convert the crisp input values into
fuzzy values.

Fuzzy Knowledge Base − It stores the knowledge about all the input-
output fuzzy relationships. It also has the membership function which
defines the input variables to the fuzzy rule base and the output variables
to the plant under control.

Fuzzy Rule Base − It stores the knowledge about the operation of the
process of domain.

Inference Engine − It acts as the kernel of any FLC. Basically, it simulates human decisions by performing approximate reasoning.

Defuzzifier − The role of the defuzzifier is to convert the fuzzy values obtained from the fuzzy inference engine into crisp values.

Steps in Designing FLC

Following are the steps involved in designing FLC −

Identification of variables − Here, the input, output and state variables of the plant under consideration must be identified.

Fuzzy subset configuration − The universe of information is divided into a number of fuzzy subsets and each subset is assigned a linguistic label. Always make sure that these fuzzy subsets include all the elements of the universe.

Obtaining membership function − Now obtain the membership function for each fuzzy subset obtained in the above step.

Fuzzy rule base configuration − Now formulate the fuzzy rule base by assigning relationships between the fuzzy inputs and outputs.

Fuzzification − The fuzzification process is initiated in this step.

Combining fuzzy outputs − By applying fuzzy approximate reasoning, locate the fuzzy outputs and merge them.

Defuzzification − Finally, initiate the defuzzification process to form a crisp output.

Applications:
FLC systems find a wide range of applications in various industrial and commercial products and systems. In several applications related to nonlinear, time-varying, ill-defined, and complex systems, FLC systems have proved to be very efficient in comparison with conventional control systems. The applications of FLC systems include:

1. Traffic Control
2. Steam Engine
3. Aircraft Flight Control
4. Missile Control
5. Adaptive Control
6. Liquid-Level Control
7. Helicopter Model
8. Automobile Speed Controller
9. Braking System Controller
10. Process Control (includes cement kiln control)
11. Robotic Control
12. Elevator (Automatic Lift) control;
13. Automatic Running Control
14. Cooling Plant Control
15. Water Treatment
16. Boiler Control;
17. Nuclear Reactor Control;
18. Power Systems Control;
19. Air Conditioner Control (Temperature Controller)
20. Biological Processes
21. Knowledge-Based System
22. Fault Detection Control Unit
23. Fuzzy Hardware implementation and Fuzzy Computers

10. List the advantages of fuzzy logic control systems.


Let us now discuss the advantages of Fuzzy Logic Control.

Cheaper − Developing an FLC is comparatively cheaper than developing a model-based or other controller with comparable performance.

Robust − FLCs are more robust than PID controllers because of their capability to cover a huge range of operating conditions.

Customizable − FLCs are customizable.

Emulate human deductive thinking − Basically, an FLC is designed to emulate human deductive thinking, the process people use to infer conclusions from what they know.

Reliability − An FLC is more reliable than a conventional control system.

Efficiency − Fuzzy logic provides more efficiency when applied in a control system.

11. What are fuzzy singleton rules?

Fuzzy logic deals with reasoning that is approximate or imprecise. Fuzzy singleton rules are a
specific type of rule used in fuzzy inference systems. Here's a breakdown of the concept:

1. Fuzzy Sets:

 Fuzzy logic relies on fuzzy sets, which represent gradations of membership rather than crisp
categories. Imagine a set for "temperature" instead of just hot or cold. It can include values
with varying degrees of membership like "very cold," "cold," "neutral," "warm," and "very
hot."

2. Membership Functions:

 Membership functions define the grade of membership in a fuzzy set. They map an element
to a value between 0 (not in the set) and 1 (fully in the set). Visualize a bell curve for
"temperature" where the y-axis represents the degree of membership.

3. Singleton Fuzzifier:

 A singleton fuzzifier is a specific type of membership function that acts like a spike with a
membership grade of 1 at a single point and 0 everywhere else. Imagine a sharp peak on the
temperature curve representing a specific crisp value, say "70 degrees."

4. Fuzzy Inference Rules:

 Fuzzy inference systems use a set of rules in an "if-then" format. The antecedent (if part)
describes conditions based on fuzzy sets, and the consequent (then part) specifies the
outcome.

5. Fuzzy Singleton Rules:

 Fuzzy singleton rules are where the consequent of a fuzzy rule uses a singleton fuzzifier. In
simpler terms, the "then" part of the rule outputs a specific crisp value.
For example, a fuzzy singleton rule for a thermostat might be: "If the temperature is 'cold,'
then set heating to 'high.'" Here, "cold" is a fuzzy set with a membership function, and
"high" is a crisp value.

Advantages of Fuzzy Singleton Rules:

 Simpler to interpret compared to fuzzy rules with fuzzy outputs.
 Computationally efficient due to the use of crisp values in the consequent.
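The computational advantage can be seen in a small sketch of inference with singleton consequents (the zero-order Sugeno style). The membership breakpoints and the crisp heating outputs below are invented for illustration, not taken from any standard rule base.

```python
# Singleton-rule inference: firing strengths weight crisp consequents.
def tri(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Each rule: (antecedent triangle over temperature, crisp heating level)
rules = [
    ((-10, 0, 15), 90.0),   # If temperature is Cold -> heating = 90
    ((10, 22, 30), 40.0),   # If temperature is Warm -> heating = 40
    ((25, 35, 50),  0.0),   # If temperature is Hot  -> heating = 0
]

def infer(temp):
    """Weighted average of the singleton outputs by firing strength."""
    num = den = 0.0
    for (a, b, c), out in rules:
        w = tri(temp, a, b, c)      # firing strength of this rule
        num += w * out
        den += w
    return num / den if den else 0.0
```

Because the consequents are crisp numbers, defuzzification reduces to this weighted average, which is why singleton rules are computationally cheap.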


12. What is fuzzy operator tuning?

Fuzzy operator tuning refers to the process of adjusting various aspects of a fuzzy inference
system (FIS) to achieve optimal performance. The goal is to fine-tune the FIS for the specific
application it's controlling.

There are two main areas of focus in fuzzy operator tuning:

1. Membership Function (MF) Tuning:

 This involves adjusting the parameters of the membership functions associated with the
fuzzy sets used in the system. These functions define the degree of membership for an input
value in a particular fuzzy set.
 By tweaking parameters like shape, spread, and position of the membership functions, you
can influence how the FIS interprets and reacts to input values.

2. Rule Tuning:

 This focuses on refining the fuzzy rules themselves, which are the "if-then" statements that
dictate the system's behavior.
 You might adjust the rule base by adding, removing, or modifying existing rules to better
capture the desired system response.

Here are some common techniques for fuzzy operator tuning:

 Manual Tuning: This involves adjusting MFs and rules based on expert knowledge and
desired system behavior. It requires a good understanding of the system and fuzzy logic
principles.
 Data-driven Tuning: This leverages historical data or training data to automatically adjust
MFs and rules. Optimization algorithms like genetic algorithms or particle swarm
optimization can be used to find the best configuration based on the data.
 Neuro-adaptive Learning: This technique, particularly useful for Sugeno FIS (a specific type
of fuzzy system), borrows from neural network training methods to adjust MFs.

The choice of tuning method depends on factors like the complexity of the system,
availability of data, and desired level of control.
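A toy illustration of data-driven tuning: grid-search the peak of a triangular membership function to minimize squared error against (input, target membership) training pairs. The training data and the parameter grid here are invented; real systems would use gradient methods, genetic algorithms, or ANFIS-style neuro-adaptive learning instead of this brute-force scan.

```python
# Grid search over the peak b of tri(., 0, b, 30) against toy data.
def tri(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

data = [(5, 0.3), (10, 0.7), (15, 1.0), (20, 0.6)]  # hypothetical targets

def loss(b):
    """Sum of squared errors of tri(., 0, b, 30) over the training data."""
    return sum((tri(x, 0, b, 30) - target) ** 2 for x, target in data)

best_b = min(range(1, 30), key=loss)  # coarse grid search over the peak
print(best_b)
```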

13. Draw the profile of membership function for a fuzzy set called “Tall
men”.Take your own values for different heights.

14. Describe the different properties of fuzzy sets. Prove whether the laws of
excluded middle and contradiction true for fuzzy sets.

https://youtu.be/JNEqzIBkUV8?si=wgqV9T5FhXp-d0pb
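As a supplement to the linked video, here is a numeric check (not a proof) that the law of the excluded middle (A ∪ A′ = X) and the law of contradiction (A ∩ A′ = ∅) fail for fuzzy sets under the standard max/min/1−μ operators. The membership values are arbitrary sample values.

```python
# Wherever 0 < mu < 1, the complement laws of crisp sets break down.
A  = {1: 0.25, 2: 0.75, 3: 1.0}
Ac = {x: 1 - m for x, m in A.items()}        # standard complement

union = {x: max(A[x], Ac[x]) for x in A}     # A union A'
inter = {x: min(A[x], Ac[x]) for x in A}     # A intersect A'

# A union A' is not the whole universe (memberships below 1), and
# A intersect A' is not empty (memberships above 0).
print(union)
print(inter)
```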

15. What are type2 fuzzy sets? Give example.

Regular fuzzy sets (type-1) are a powerful tool for dealing with imprecise or subjective data.
However, they have limitations when it comes to handling uncertainty in the membership
function itself. This is where type-2 fuzzy sets come in.

Type-2 Fuzzy Sets: Accounting for Uncertainty

Core Concept: Type-2 fuzzy sets address the issue of uncertainty in membership
functions by introducing an additional layer of fuzziness. Imagine a fuzzy set for
"temperature" like "warm." In a type-1 set, the membership function would be a
fuzzy curve. A type-2 fuzzy set, however, allows for a footprint of uncertainty within
this curve.


Footprint of Uncertainty (FOU): This is the key feature of type-2 sets. It represents
the ambiguity or vagueness associated with the membership grade of each element
in the set. Visualize a shaded area around the original fuzzy curve for "temperature,"
encompassing possible variations in how "warm" might be defined.



Three-Dimensional Membership Function: Due to the FOU, the membership
function of a type-2 fuzzy set becomes three-dimensional. The x and y axes
represent the regular domain and membership grade like in type-1 sets. The third
dimension (often denoted by z) depicts the uncertainty level within the membership
function.

Example: Sensor Reading Uncertainty

Imagine a temperature sensor with some inherent measurement error. A type-1 fuzzy set
might represent "room temperature" with a fuzzy curve. A type-2 fuzzy set could account
for the sensor's uncertainty by including a footprint of uncertainty around the curve. This
footprint might widen or narrow depending on the sensor's known level of error. Elements
closer to the center of the footprint have higher confidence in their membership grade
("room temperature"), while those on the edges have more ambiguity.

Benefits of Type-2 Fuzzy Sets:

 Improved Modeling of Uncertainty: They provide a more realistic representation of real-


world scenarios where measurements or classifications might not be perfectly crisp.
 Increased Robustness: By incorporating uncertainty, type-2 fuzzy systems can handle
unexpected variations or noise in the data without significant performance degradation.

Applications:

 Signal processing (handling noisy sensor data)


 Control systems (accounting for actuator or system variations)
 Pattern recognition (dealing with ambiguities in object classification)
 Decision making under uncertainty (incorporating different expert opinions)

While type-2 fuzzy sets offer advantages, they can also be computationally more complex
than type-1 sets. The choice between them depends on the specific application and the
level of uncertainty you need to model.

_____________________________________________
16. Let fuzzy sets A and B be given as A = 0.5/3 + 1/5 + 0.6/7 + 0.8/8 and B
= 1/3 + 0.5/5 + 0.1/7 + 1/8 where the universe of discourse being X = {3, 5,
7, 8}. Now obtain the following:
a. A + B , the Algebraic Sum
b. A.B , the Algebraic Product
c. S (A,B) the subset hood measure
d. E (A,B) the equality measure.
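A worked computation for this exercise. The definitions assumed here (conventions vary by textbook): algebraic sum a + b − ab, algebraic product ab, sigma-count cardinality, S(A,B) = |A ∩ B| / |A|, and E(A,B) = |A ∩ B| / |A ∪ B|.

```python
# Q16: algebraic sum/product, subsethood and equality measures.
X = [3, 5, 7, 8]
A = {3: 0.5, 5: 1.0, 7: 0.6, 8: 0.8}
B = {3: 1.0, 5: 0.5, 7: 0.1, 8: 1.0}

alg_sum  = {x: A[x] + B[x] - A[x] * B[x] for x in X}   # a + b - ab
alg_prod = {x: A[x] * B[x] for x in X}                 # ab

inter = {x: min(A[x], B[x]) for x in X}
union = {x: max(A[x], B[x]) for x in X}
card = lambda F: sum(F.values())                       # sigma-count

S = card(inter) / card(A)      # degree to which A is a subset of B
E = card(inter) / card(union)  # equality measure

print(alg_sum, alg_prod, S, E)
```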

17. Define Dilation, Concentration and Contrast intensification on fuzzy sets.

https://youtu.be/dkPAdqaMf5c?si=xO2Z2J23tVCe66LM
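As a supplement to the linked video, the three operations can be sketched directly. The definitions below are the common ones (assumed here, check your text): DIL(A) = μ^0.5, CON(A) = μ², and contrast intensification INT(A) = 2μ² for μ ≤ 0.5 and 1 − 2(1 − μ)² otherwise.

```python
# Hedge operations on a fuzzy set given as {element: membership}.
def DIL(A):
    """Dilation ("more or less A"): raises low memberships."""
    return {x: m ** 0.5 for x, m in A.items()}

def CON(A):
    """Concentration ("very A"): suppresses low memberships."""
    return {x: m ** 2 for x, m in A.items()}

def INT(A):
    """Contrast intensification: pushes memberships away from 0.5."""
    return {x: 2 * m * m if m <= 0.5 else 1 - 2 * (1 - m) ** 2
            for x, m in A.items()}

A = {1: 0.2, 2: 0.5, 3: 0.9}
print(DIL(A), CON(A), INT(A))
```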
18. Given two fuzzy sets X and Y, prove:
CON(X ∪ Y) = CON(X) ∪ CON(Y)
CON(X ∩ Y) = CON(X) ∩ CON(Y)
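A numeric spot-check (not a proof) of these identities with CON(A)(x) = μ_A(x)²: squaring is monotone on [0, 1], so it commutes with pointwise max (union) and min (intersection). The sample memberships are arbitrary.

```python
# Concentration distributes over max-union and min-intersection.
CON   = lambda A: {x: m ** 2 for x, m in A.items()}
union = lambda A, B: {x: max(A[x], B[x]) for x in A}
inter = lambda A, B: {x: min(A[x], B[x]) for x in A}

X = {1: 0.3, 2: 0.8, 3: 0.55}
Y = {1: 0.6, 2: 0.2, 3: 0.55}

assert CON(union(X, Y)) == union(CON(X), CON(Y))
assert CON(inter(X, Y)) == inter(CON(X), CON(Y))
```

The full proof follows the same observation: (max(a, b))² = max(a², b²) for a, b ≥ 0, and likewise for min.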

19. Given a binary fuzzy relation R(X,Y)

a. Obtain the domain of R.
b. Obtain the range of R.
c. What is the height of R.
d. Obtain inverse of R.
e. Obtain R ° R and R ∎ R
f. Express R(X ,Y) in its resolution form.
g. Define max min transitivity of a binary fuzzy relation.

23. Suppose A is a fuzzy set defined over a universe of discourse X. If Core(A) denotes the core of the fuzzy set A, then Core(A) is a crisp set. What about the Support(A)?

24. A crisp set A defined over X = {1,2,3,5,7} is A = {1,3,7}. What would be its equivalent fuzzy set?

25. A fuzzy set is given by B = {(1,0.1), (2,0.2), (3,0.3), (4,0.9), (0,0.0)}. What is
the crisp set that can be concluded from it?
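One common way to read a crisp set out of a fuzzy set is the λ-cut (alpha-cut), keeping elements whose membership reaches a threshold. λ = 0.5 is a typical convention but is an assumption here, since the problem statement does not fix it.

```python
# Alpha-cut of the fuzzy set B from Q25 at the conventional 0.5 level.
B = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.9, 0: 0.0}

def alpha_cut(F, alpha):
    """Crisp set of elements with membership >= alpha."""
    return {x for x, m in F.items() if m >= alpha}

print(alpha_cut(B, 0.5))   # only element 4 reaches 0.5
```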

26.
Temperature(in ℃)={10,20, 27,30,40}
Fan speed(in rpm)={20,40,60,80}
a. Using triangular membership function calculate membership value for fuzzy
set
b. Represent in graphical format.
c. Calculate Temperature U fan speed.
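One possible reading of this exercise, sketched in code: give each universe a single triangular membership function spanning its range (peak at the midpoint, which is an assumption), then take the union as a pointwise max over the combined universe, treating missing elements as membership 0.

```python
# Triangular memberships and max-union for the Q26 universes.
def tri(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

temp  = {t: round(tri(t, 10, 25, 40), 3) for t in [10, 20, 27, 30, 40]}
speed = {s: round(tri(s, 20, 50, 80), 3) for s in [20, 40, 60, 80]}

universe = sorted(set(temp) | set(speed))
fuzzy_union = {x: max(temp.get(x, 0.0), speed.get(x, 0.0)) for x in universe}
print(temp, speed, fuzzy_union)
```

Question 27 below is the same computation with min in place of max for the intersection.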

27.
Obstacle distance(in mm)={10,20, 30,40,60,80}
Angle of steering (in degree)={20,40,60,80}
1. Using triangular membership function calculate membership value for fuzzy
set
2. Represent in graphical format.
3. Calculate Obstacle distance ∩ angle of steering

28.
Fuzzy If-Then-Else rule R has the form If “x is A” Then “y is B” Else “y is C”
Consider R: If “distance is long” Then “speed is high” Else “speed is moderate”.
The relevant sets (crisp and fuzzy) are distance = {100,500,1000,5000} is the universe of
the fuzzy set long distance, speed = {30,50,70,90,120} is the universe of the fuzzy sets
high speed as well as moderate speed, and Long-distance =

C06 PLANNING
1. What is planning? Explain different types of planning in AI.

Planning is an important part of Artificial Intelligence which deals with the tasks and domains of a particular problem. Planning is considered the logical side of acting: it is about deciding the tasks to be performed by the artificial intelligence system and the system's functioning under domain-independent conditions.

What is a Plan?

We require domain description, task specification, and goal description for any
planning system. A plan is considered a sequence of actions, and each action has its
preconditions that must be satisfied before it can act and some effects that can be
positive or negative.

So, we have Forward State Space Planning (FSSP) and Backward State Space
Planning (BSSP) at the basic level.
1. Forward State Space Planning (FSSP)

FSSP behaves in the same way as forward state-space search. Given an initial state S in any domain, we apply some applicable action and obtain a new state S' (which also contains some new terms), called a progression. This continues until we reach the target state. The action chosen must be applicable in the current state.

o Disadvantage: Large branching factor

o Advantage: The algorithm is sound

2. Backward State Space Planning (BSSP)

BSSP behaves similarly to backward state-space search. In this, we move from the target state g to a sub-goal g', tracing the previous action needed to achieve that goal. This process is called regression (going back to the previous goal or sub-goal). These sub-goals should also be checked for consistency. The action should be relevant in this case.

o Disadvantage: not a sound algorithm (sometimes an inconsistency can be found)

o Advantage: Small branching factor (much smaller than FSSP)

So for an efficient planning system, we need to combine the features of FSSP and
BSSP.

Planning in artificial intelligence is about decision-making actions performed by robots or computer programs to achieve a specific goal.

Execution of the plan is about choosing a sequence of tasks with a high probability
of accomplishing a specific task.

AI planning comes in different types, each suitable for a particular situation. Popular types of planning in AI include:
 Classical Planning: In this style of planning, a series of actions is created to accomplish
a goal in a predetermined setting. It assumes that everything is static and predictable.
 Hierarchical planning: By dividing large problems into smaller ones, hierarchical
planning makes planning more effective. A hierarchy of plans must be established, with
higher-level plans supervising the execution of lower-level plans.
 Temporal Planning: Planning for the future considers time restrictions and
interdependencies between actions. It ensures that the plan is workable within a certain
time limit by taking into account the duration of tasks.

Components of planning system in AI


A planning system in AI is made up of many crucial parts that cooperate to produce successful plans. These components consist of:
 Representation: The component that describes how the planning problem is represented
is called representation. The state space, actions, objectives, and limitations must all be
defined.
 Search: To locate a series of steps that will get you where you want to go, the search
component searches the state space. To locate the best plans, a variety of search
techniques, including depth-first search & A* search, can be used.
 Heuristics: Heuristics are used to direct search efforts and gauge the expense or
benefit of certain actions. They aid in locating prospective routes and enhancing the
effectiveness of the planning process.

AI planning is used in many different fields, demonstrating its adaptability and efficiency. A few
significant applications are:
 Robotics: To enable autonomous robots to properly navigate their surroundings, carry
out activities, and achieve goals, planning is crucial.
 Gaming: AI planning is essential to the gaming industry because it enables game
characters to make thoughtful choices and design difficult and interesting gameplay
scenarios.
 Logistics: To optimize routes, timetables, and resource allocation and achieve effective
supply chain management, AI planning is widely utilized in logistics.
 Healthcare: AI planning is used in the industry to better the quality and effectiveness of
healthcare services by scheduling patients, allocating resources, and planning
treatments.

2. Blocks

The block-world planning problem is a classic problem in artificial intelligence that involves manipulating blocks on a table to achieve a desired configuration. This problem is often used to illustrate concepts in automated planning and problem solving.

Problem Description

In the block-world problem, you have a set of blocks, a table, and a robot arm. The
blocks can be stacked on top of each other or placed on the table. The goal is to
transform an initial configuration of blocks into a specified goal configuration using a
series of actions.

Components of the Problem

1. Blocks: Identifiable objects (e.g., A, B, C).
2. Table: A surface where blocks can be placed.
3. Robot Arm: An agent capable of picking up, moving, and placing blocks.
4. Initial State: The starting arrangement of blocks.
5. Goal State: The desired arrangement of blocks.
6. Actions: The allowed operations that can change the state.

Actions

Typical actions in a block-world planning problem include:

1. PickUp(x): Pick up block x from the table or another block.
2. PutDown(x): Put down block x on the table.
3. Stack(x, y): Place block x on top of block y.
4. Unstack(x, y): Remove block x from on top of block y.

Preconditions and Effects

Each action has preconditions that must be satisfied for the action to be executed,
and effects that describe the outcome of the action.

PickUp(x):

 Preconditions: Block x is clear (nothing on top of it), and the robot arm is empty.
 Effects: Block x is held by the robot arm, and x is no longer on the table or another
block.

PutDown(x):

 Preconditions: Block x is held by the robot arm.
 Effects: Block x is on the table, and the robot arm is empty.

Stack(x, y):

 Preconditions: Block x is held by the robot arm, and block y is clear.
 Effects: Block x is on top of block y, and the robot arm is empty.

Unstack(x, y):

 Preconditions: Block x is on top of block y, and the robot arm is empty.
 Effects: Block x is held by the robot arm, and y is clear.

The block-world problem can be represented formally using a planning language like
STRIPS (Stanford Research Institute Problem Solver).

STRIPS Representation

Initial State:

On(A, Table)
On(B, Table)
On(C, A)
Clear(B)
Clear(C)
HandEmpty

Goal State:

On(A, B)
On(B, C)
On(C, Table)
Clear(A)
HandEmpty

Actions:

Action: PickUp(x)

 Preconditions: Clear(x), On(x, y), HandEmpty
 Effects: Holding(x), ¬On(x, y), ¬Clear(x), Clear(y), ¬HandEmpty

Action: PutDown(x)

 Preconditions: Holding(x)
 Effects: On(x, Table), ¬Holding(x), HandEmpty, Clear(x)

Action: Stack(x, y)

 Preconditions: Holding(x), Clear(y)
 Effects: On(x, y), ¬Holding(x), HandEmpty, Clear(x), ¬Clear(y)

Action: Unstack(x, y)

 Preconditions: On(x, y), Clear(x), HandEmpty
 Effects: Holding(x), ¬On(x, y), Clear(y), ¬Clear(x)

Planning Algorithm

The planning algorithm (e.g., STRIPS, heuristic search) would take the initial state,
goal state, and actions as input and output a sequence of actions that transforms the
initial state to the goal state.
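The pieces above can be wired into a minimal forward breadth-first planner. This is a sketch, not the original STRIPS algorithm: predicates are encoded as plain tuples, the three block names and the initial/goal states are taken from the example, and plain BFS stands in for heuristic search.

```python
# Forward BFS planner for the 3-block STRIPS instance above.
from collections import deque

BLOCKS = ("A", "B", "C")

def successors(state):
    """Applicable actions and resulting states; predicates are tuples
    like ("On","C","A"), ("Clear","B"), ("HandEmpty",), ("Holding","C")."""
    s = set(state)
    moves = []
    if ("HandEmpty",) in s:
        for x in BLOCKS:
            if ("Clear", x) not in s:
                continue
            if ("On", x, "Table") in s:                  # PickUp(x)
                ns = s - {("On", x, "Table"), ("Clear", x), ("HandEmpty",)}
                moves.append((f"PickUp({x})", ns | {("Holding", x)}))
            else:                                        # Unstack(x, y)
                y = next(p[2] for p in s if p[:2] == ("On", x))
                ns = s - {("On", x, y), ("Clear", x), ("HandEmpty",)}
                moves.append((f"Unstack({x},{y})",
                              ns | {("Holding", x), ("Clear", y)}))
    else:
        x = next(p[1] for p in s if p[0] == "Holding")
        base = s - {("Holding", x)}
        moves.append((f"PutDown({x})",                   # PutDown(x)
                      base | {("On", x, "Table"), ("Clear", x),
                              ("HandEmpty",)}))
        for y in BLOCKS:
            if y != x and ("Clear", y) in s:             # Stack(x, y)
                moves.append((f"Stack({x},{y})",
                              (base - {("Clear", y)})
                              | {("On", x, y), ("Clear", x),
                                 ("HandEmpty",)}))
    return [(a, frozenset(ns)) for a, ns in moves]

def plan(init, goal):
    """Breadth-first search from init until every goal predicate holds."""
    init = frozenset(init)
    frontier, seen = deque([(init, [])]), {init}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for action, ns in successors(state):
            if ns not in seen:
                seen.add(ns)
                frontier.append((ns, path + [action]))

init = {("On", "A", "Table"), ("On", "B", "Table"), ("On", "C", "A"),
        ("Clear", "B"), ("Clear", "C"), ("HandEmpty",)}
goal = {("On", "A", "B"), ("On", "B", "C"), ("On", "C", "Table")}
print(plan(init, goal))
```

With the example's initial and goal states this finds the six-step plan Unstack(C,A), PutDown(C), PickUp(B), Stack(B,C), PickUp(A), Stack(A,B).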

3. Planning in detail
