
UNIT-1

1. Definition of Artificial Intelligence.

Artificial intelligence is the ability of machines to perform cognitive functions the way humans do. An artificial intelligence system can learn, reason, and solve difficult problems; it can also recognize human speech and work toward a desired goal. The benchmark for AI is human-level performance in reasoning, speech, and vision.
2. Components of AI.

 Machine Learning
 Deep Learning
 Natural Language Processing
 Expert System
 Robotics
 Machine Vision
 Speech Recognition

3. Applications of AI.
 Healthcare.
 Automobile.
 Finance.
 Gaming.
 Robotics.
 Education.
 Space Exploration.
 Social Media.
 Entertainment.
 Agriculture.
 E-commerce.
 Surveillance.

4. The Future of Artificial Intelligence: What Can We Expect in the
Next 10 Years?

The world of artificial intelligence is rapidly advancing and is poised to transform every industry in the next decade. Experts have weighed in on the potential of AI, the automation of jobs, and where AI will be in five to ten years. Here are five things to expect in the future of AI.

1. AI and ML Will Transform the Scientific Method

AI and machine learning (ML) will enable an unprecedented ability to analyze enormous data sets and computationally discover complex relationships and patterns. This will revolutionize the scientific research process and unleash a new golden age of scientific discovery.

2. AI Will Become a Pillar of Foreign Policy

The U.S. government is investing heavily in AI to maintain and strengthen global U.S. competitiveness. AI will be imperative to the continuing economic resilience and geopolitical leadership of the United States.

3. AI Will Enable Next-Gen Consumer Experiences

AI algorithms have the potential to bridge the feedback loops between the
digital and physical realms, enabling next-generation consumer
experiences like the metaverse and cryptocurrencies.

4. Addressing the Climate Crisis Will Require AI

AI algorithms have the potential to learn much more quickly in a digital world, and will be necessary to mitigate the socioeconomic threats posed by climate change. AI will be used to create digital "twin Earth" simulations, detect nuanced trends, and anticipate unintended consequences.

5. AI Will Enable Truly Personalized Medicine

AI solutions have the potential to construct and analyze "digital twin" models of individual biology and synthesize individualized therapies for patients. AI will be used to make sense of massive datasets from an individual's physiology, and to reduce persistent health inequities.
5. What is Deep Learning, and how is it used in the real world?

Deep learning is a subset of machine learning that mimics the working of the human brain. It is inspired by human brain cells, called neurons, and works on the concept of neural networks to solve complex real-world problems. It is also known as deep neural learning or deep neural networks.

Some real-world applications of deep learning are:


o Adding color to black & white images
o Computer vision
o Text generation
o Deep-Learning Robots, etc.

6. Why do we need Artificial Intelligence?

The goal of artificial intelligence is to create intelligent machines that can mimic human behavior. We need AI in today's world to solve complex problems, make our lives smoother by automating routine work, save manpower, and perform many other tasks.

7. Machine Learning:

 Machine Learning is basically the study/process that enables a system (computer) to learn automatically from its own experiences and improve accordingly, without being explicitly programmed. ML is an application or subset of AI. ML focuses on developing programs that can access data and use it for themselves. The entire process makes observations on data to identify possible patterns and make better future decisions based on the examples provided. The major aim of ML is to allow systems to learn by themselves through experience, without any kind of human intervention or assistance.

8. Deep Learning:

 Deep Learning is basically a sub-field of the broader family of Machine Learning which makes use of neural networks (similar to the neurons working in our brain) to mimic human brain-like behavior. DL algorithms focus on information-processing patterns to identify patterns just as the human brain does, and classify the information accordingly. DL works on larger sets of data than ML, and the prediction mechanism is self-administered by the machine.

9. Differences between Artificial Intelligence, Machine Learning and Deep Learning:

o Origin of the term: The term Artificial Intelligence was first coined in 1956 by John McCarthy; the term ML was first coined in 1959 by Arthur Samuel; the term DL was first coined in 2000 by Igor Aizenberg.
o Relationship: AI is a technology used to create intelligent machines that can mimic human behavior; ML is a subset of AI that learns from past data and experiences; DL is a subset of machine learning and AI that is inspired by human brain cells, called neurons, and imitates the working of the human brain.
o Data handled: AI deals with structured and semi-structured data; ML deals with structured and semi-structured data; deep learning deals with structured and unstructured data.
o Data requirements: AI requires a huge amount of data to work; ML can work with less data than deep learning and AI; DL requires a huge amount of data compared to ML.
o Goal: The goal of AI is to enable the machine to think without any human intervention; the goal of ML is to enable the machine to learn from past experiences; the goal of deep learning is to solve complex problems as the human brain does, using various algorithms.

10. What is an Agent in Artificial Intelligence?

An agent in Artificial Intelligence (AI) is a computer program or system that is designed to perceive its environment, make decisions, and take actions to achieve a specific goal or set of goals. Agents can be reactive, proactive, single or multi-agent systems, and operate in either fixed or dynamic environments. Examples of agents include intelligent personal assistants, autonomous robots, gaming agents, fraud detection agents, and traffic management agents.

We should first know about sensors, effectors, and actuators.

Sensor: A sensor is a device that detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.

Actuators: Actuators are the components of a machine that convert energy into motion. Actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.

Effectors: Effectors are the devices that affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and a display screen.

Structure of Agent

Types of AI Agents
o Simple Reflex Agent
o Model-based reflex agent
o Goal-based agents
o Utility-based agent
o Learning agent

1. Simple Reflex agent:


o The Simple reflex agents are the simplest agents. These agents take
decisions on the basis of the current percepts and ignore the rest of
the percept history.
o These agents only succeed in the fully observable environment.
o The Simple reflex agent does not consider any part of percepts
history during their decision and action process.
o The Simple reflex agent works on the condition-action rule, which maps the current state to an action. For example, a room-cleaner agent works only if there is dirt in the room (a small code sketch follows below).
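A minimal Python sketch of such a condition-action agent is shown below; the two-square vacuum world, location names, and percept format are illustrative assumptions, not part of the notes.

```python
# Simple reflex agent for a hypothetical two-square vacuum world.
# The agent looks only at the current percept (location, status) and
# applies condition-action rules; it keeps no percept history.

def simple_reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"                   # rule: dirt present -> clean it
    if location == "A":
        return "MoveRight"              # rule: square A is clean -> go to B
    return "MoveLeft"                   # rule: square B is clean -> go to A

if __name__ == "__main__":
    for percept in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty"), ("B", "Clean")]:
        print(percept, "->", simple_reflex_vacuum_agent(percept))
```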

2. Model-based reflex agent


o The Model-based agent can work in a partially observable
environment, and track the situation.
o A model-based agent has two important factors:
o Model: It is knowledge about "how things happen in the world,"
so it is called a Model-based agent.
o Internal State: It is a representation of the current state based
on percept history.

o These agents have the model, which is knowledge of the world, and they perform actions based on that model.

3. Goal-based agents
o Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
o The agent needs to know its goal, which describes desirable situations.
o Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
o They choose an action so that they can achieve the goal.

4. Utility-based agents
o These agents are similar to the goal-based agent but provide an extra component of utility measurement, which makes them different by providing a measure of success at a given state.
o A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
o The utility-based agent is useful when there are multiple possible alternatives, and the agent has to choose the best action to perform.
o The utility function maps each state to a real number to check how efficiently each action achieves the goals.

11. What is the concept of rationality in AI?

The concept of rationality in AI refers to the idea that agents (such as artificial intelligence systems) should make decisions based on all available information and choose the option that is most likely to lead to the desired outcome. This means taking into account all relevant information and making choices that are most likely to lead to the best possible result. This principle is often used in conjunction with the principle of utility, which states that agents should choose the option that will maximize utility (in other words, the option that will lead to the best possible outcome). Rationality is a key principle in AI development because it allows us to create agents that can make decisions in a way that is similar to humans.

12. What are Problem Solving Agents in Artificial Intelligence?

Problem Solving Agents are a type of goal-based agent used in Artificial Intelligence (AI). They are used when the direct mapping from states to actions of a simple reflex agent is too large to store for a complex environment. Problem Solving Agents consider future actions and the desirability of outcomes in order to maximize their performance measure.

Problem Solving Agents are used to solve typical AI problems, such as finding the shortest path between two points or determining the best move in a game. To solve these problems, the agent must first formulate a goal, then analyze the problem, represent the knowledge, and search for a sequence of actions that will reach the goal. Finally, the agent must execute the recommended actions.
13. What is Strong AI, and how is it different from Weak AI?

Strong AI: Strong AI is about creating real intelligence artificially: a human-made intelligence that has sentiments, self-awareness, and emotions similar to humans. It is still a hypothetical concept of building AI agents with thinking, reasoning, and decision-making capabilities similar to humans.

Weak AI: Weak AI is the current development stage of artificial intelligence, which deals with creating intelligent agents and machines that can help humans and solve real-world complex problems. Siri and Alexa are examples of Weak AI programs.

14. Explain rational agents and rationality?

A rational agent is an agent that has clear preferences, models uncertainty, and always performs the right actions. A rational agent is able to take the best possible action in any situation.

Rationality is a status of being reasonable and sensible, with a good sense of judgment.

UNIT-2
15. Search Algorithms in AI.
Search algorithms are an important part of Artificial Intelligence (AI). They
are used to find solutions to problems in a given search space. There are
two main types of search algorithms: Uninformed Search and Informed
Search.
Uninformed search is a class of general-purpose search algorithms which operate in a brute-force way. Uninformed search algorithms do not have additional information about the state or search space other than how to traverse the tree, so uninformed search is also called blind search.
Informed search algorithms use domain knowledge. In an informed
search, problem information is available which can guide the search.
Informed search strategies can find a solution more efficiently than an
uninformed search strategy. Informed search is also called a Heuristic
search.
Heuristic function: A heuristic is a function used in informed search that finds the most promising path. It takes the current state of the agent as its input and produces an estimate of how close the agent is to the goal.
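As an illustration (an assumption for this example, not from the notes), a heuristic for a 3x3 sliding-tile puzzle can estimate closeness to the goal with Manhattan distance:

```python
# Manhattan-distance heuristic for a 3x3 sliding-tile puzzle.
# A state is a tuple of 9 tiles in row-major order, 0 marking the blank.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def manhattan(state, goal=GOAL):
    """Estimate how close `state` is to `goal` (lower is closer)."""
    distance = 0
    for index, tile in enumerate(state):
        if tile == 0:
            continue                      # the blank tile does not count
        goal_index = goal.index(tile)
        distance += abs(index // 3 - goal_index // 3)   # row difference
        distance += abs(index % 3 - goal_index % 3)     # column difference
    return distance

print(manhattan((1, 2, 3, 4, 5, 6, 7, 0, 8)))  # 1: one tile is one move away
```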
In order to compare the efficiency of different search algorithms, four
essential properties are used: completeness, optimality, time complexity,
and space complexity. Uninformed search algorithms are generally less
efficient than informed search algorithms, as they have no additional
information on the goal node.

16. The different types of Uninformed Search Algorithms are Breadth-
first Search, Depth-first Search, Depth-limited Search, Iterative
deepening depth-first search, Uniform cost search, and Bidirectional
Search.

Breadth-first Search is a general graph-search algorithm that searches breadthwise in a tree or graph. It starts from the root node and expands all successor nodes at the current level before moving to the nodes of the next level. It is complete, meaning that if the shallowest goal node is at some finite depth, then BFS will find a solution. However, it requires a lot of memory and time if the solution is far away from the root node.

Depth-first Search is a recursive algorithm for traversing a tree or graph data structure. It starts from the root node and follows each path to its greatest depth before moving to the next path. It requires very little memory, as it only needs to store a stack of the nodes on the path from the root node to the current node. However, there is the possibility that many states keep re-occurring, and there is no guarantee of finding the solution.

Depth-limited Search is similar to depth-first search with a predetermined depth limit. It can solve the drawback of the infinite path in depth-first search. It is complete within a finite state space, as it will expand every node within the limited search tree. However, it may not be optimal if the problem has more than one solution.

Uniform-cost Search is used for traversing a weighted tree or graph. It is implemented with a priority queue that gives maximum priority to the lowest cumulative cost. It is complete, meaning that if there is a solution, UCS will find it. It is always optimal, as it only selects the path with the lowest path cost.

Iterative deepening depth-first Search is a combination of the DFS and BFS algorithms. It finds the best depth limit by gradually increasing the limit until a goal is found. It combines the benefits of breadth-first search's fast search and depth-first search's memory efficiency. However, the main drawback of IDDFS is that it repeats all the work of the previous phase.
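The following sketch shows breadth-first search on a small hand-made graph; the graph and node names are invented purely for illustration:

```python
from collections import deque

# Breadth-first search: expand all nodes at the current depth before
# moving deeper, so the shallowest goal node is found first.

def breadth_first_search(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for successor in graph.get(node, []):
            if successor not in explored:
                explored.add(successor)
                frontier.append(path + [successor])
    return None                          # no solution in this finite graph

toy_graph = {"S": ["A", "B"], "A": ["C"], "B": ["C", "G"], "C": ["G"]}
print(breadth_first_search(toy_graph, "S", "G"))   # ['S', 'B', 'G']
```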

17. Types of Informed Search Algorithms
Best-First Search (Greedy Search)
Best-First Search (also known as Greedy Search) combines aspects of Breadth-First Search (BFS) and Depth-First Search (DFS). It uses a heuristic function, the estimated cost of reaching the goal from a node, to decide which node to expand next: it expands the node with the lowest value of f(n) = h(n) and terminates when the goal node is found. Best-First Search can switch between BFS-like and DFS-like behavior, making it more efficient than either of them. However, it can get stuck in a loop and is not optimal.

A* Search
A* Search is a combination of UCS and Greedy Best-First Search. It uses both the heuristic h(n) and the cost to reach the node g(n), expanding the node with the lowest value of f(n) = g(n) + h(n), and terminates when the goal node is found. A* Search is complete, and it is optimal when the heuristic is admissible (it never overestimates the true cost), but it has significant time and memory complexity.

A* Graph Search
A* Graph Search is an extension of A* tree search that avoids re-expanding states it has already explored. It is optimal when the heuristic is consistent, that is, when the heuristic value never drops by more than the step cost between two neighboring nodes. By eliminating repeated expansions it saves work and can be applied to large-scale search problems.

18. What is Local Search in AI?

Local search algorithms operate using a single current node (rather than multiple paths) and generally move only to neighbors of that node. Typically, the paths followed by the search are not retained.
 Optimization problems: Local search algorithms are useful for solving pure optimization problems, in which the aim is to find the best state according to an objective function.
Hill-climbing search (steepest-ascent version): A loop that continually moves in the direction of increasing value (uphill) and terminates when it reaches a "peak" where no neighbor has a higher value.

The algorithm does not maintain a search tree, so the data structure for the current node only records the state and the value of the objective function.
Hill climbing often gets stuck for the following reasons:
1. Local maxima: Hill-climbing algorithms that reach the vicinity of a local maximum are drawn upward toward it but then get stuck.
2. Ridges: Ridges result in a sequence of local maxima that are not directly connected to each other.
3. Plateaux: A flat area of the state-space landscape; it can be a flat local maximum (no uphill exit exists) or a shoulder (progress is possible).
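A minimal sketch of steepest-ascent hill climbing; the objective function and neighbour generator are toy assumptions chosen only to show the loop:

```python
# Steepest-ascent hill climbing: keep only the current state and move to
# the best neighbour until no neighbour improves the objective value.

def hill_climb(start, objective, neighbours):
    current = start
    while True:
        best = max(neighbours(current), key=objective, default=current)
        if objective(best) <= objective(current):
            return current               # a peak (or plateau/ridge) reached
        current = best

# Toy problem: maximise f(x) = -(x - 3)^2 over integer x.
objective = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(0, objective, neighbours))   # 3
```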

19. Searching with nondeterministic actions


When the environment is either partially observable or nondeterministic (or
both), the future percepts cannot be determined in advance, and the
agent’s future actions will depend on those future percepts.
Nondeterministic problems:
 The transition model is defined by a RESULTS function that returns a set of possible outcome states.
 The solution is not a sequence but a contingency plan (strategy).
In nondeterministic environments, agents can apply AND-OR search to generate contingent plans that reach the goal regardless of which outcomes occur during execution.
AND-OR search trees
 OR nodes: In a deterministic environment, the only branching is introduced by the agent's own choices in each state; we call these nodes OR nodes.
 AND nodes: In a nondeterministic environment, branching is also introduced by the environment's choice of outcome for each action; we call these nodes AND nodes.
 AND-OR tree: OR nodes and AND nodes alternate. State nodes are OR nodes, where some action must be chosen. At the AND nodes, every outcome must be handled.

Informed Search vs. Uninformed Search:
 Informed search is also known as Heuristic Search; uninformed search is also known as Blind Search.
 Informed search uses knowledge during the searching process; uninformed search does not.
 Informed search finds a solution more quickly; uninformed search finds a solution more slowly.
 Informed search may or may not be complete; uninformed search is always complete.
 The cost of informed search is low; the cost of uninformed search is high.
 Informed search consumes less time because of quick searching; uninformed search consumes moderate time because of slow searching.
 Informed search gives a direction toward the solution; uninformed search gives no suggestion regarding the solution.
 Informed search is less lengthy to implement; uninformed search is more lengthy to implement.
 Informed search is more efficient, as efficiency takes into account both cost and performance: the incurred cost is less and solutions are found quickly. Uninformed search is comparatively less efficient, as the incurred cost is more and the speed of finding solutions is slow.
 Informed search has lower computational requirements; uninformed search has comparatively higher computational requirements.
 Informed search has a wide scope in terms of handling large search problems; with uninformed search, solving a massive search task is challenging.
 Examples of informed search: Greedy Search, A* Search, AO* Search, Hill Climbing. Examples of uninformed search: Depth-First Search (DFS), Breadth-First Search (BFS), Branch and Bound.

20. Online Search Agents

Online search is a necessary idea for unknown environments. An online search agent interleaves computation and action: first it takes an action, then it observes the environment and computes the next action.

21. Give the steps for the A* algorithm.

The A* algorithm is a popular form of best-first search. It tries to find the shortest path by combining the heuristic function with the path cost to reach the goal node. The steps of the A* algorithm are given below:

Step 1: Put the start node in the OPEN list.

Step 2: Check whether the OPEN list is empty; if it is empty, return failure and stop.

Step 3: Select the node n from the OPEN list that has the smallest value of the evaluation function (g + h). If node n is the goal node, return success and stop; otherwise, continue.

Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute its evaluation function and place it into the OPEN list.

Step 5: If node n' is already in the OPEN or CLOSED list, attach it to the back pointer that reflects the lowest g(n') value.

Step 6: Return to Step 2.
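The steps above can be sketched compactly with a priority queue ordered on f = g + h; the example graph, edge costs, and heuristic values below are made up for illustration:

```python
import heapq

# A* search over a weighted graph: expand the OPEN node with the lowest
# f(n) = g(n) + h(n); stop when the goal is popped from the OPEN list.

def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    closed = {}                                   # best g seen per expanded node
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in closed and closed[node] <= g:
            continue                              # a cheaper route was already expanded
        closed[node] = g
        for successor, cost in graph.get(node, []):
            g2 = g + cost
            heapq.heappush(open_list, (g2 + h[successor], g2, successor, path + [successor]))
    return None, float("inf")

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)], "B": [("G", 3)]}
h = {"S": 7, "A": 6, "B": 2, "G": 0}
print(a_star(graph, h, "S", "G"))   # (['S', 'A', 'B', 'G'], 6)
```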

UNIT-3

22. What is game theory? How is it important in AI?

Game theory is the logical and scientific study that models the possible interactions between two or more rational players. Here, rational means that each player assumes the others are just as rational and have the same level of knowledge and understanding. In game theory, players deal with a given set of options in a multi-agent situation, which means the choice of one player affects the choice of the other (opponent) players.

Game theory and AI are closely related and useful to each other. In AI, game theory is widely used to enable some of the key capabilities required in multi-agent environments, in which multiple agents interact with each other to achieve a goal.

Popular games such as Poker and Chess are logical games with specified rules. To play these games online or digitally, such as on a mobile phone or laptop, one has to create algorithms for them, and these algorithms are applied with the help of artificial intelligence.

Types of Games:
Games are commonly classified into the following five types:

1. Zero-Sum and Non-Zero Sum Games: In non-zero-sum games,
there are multiple players and all of them have the option to gain a
benefit due to any move by another player. In zero-sum games,
however, if one player earns something, the other players are bound
to lose a key payoff.
2. Simultaneous and Sequential Games: Sequential games are the
more popular games where every player is aware of the movement of
another player. Simultaneous games are more difficult as in them, the
players are involved in a concurrent game. BOARD GAMES are the
perfect example of sequential games and are also referred to as turn-
based or extensive-form games.
3. Imperfect Information and Perfect Information Games: In a perfect
information game, every player is aware of the movement of the other
player and is also aware of the various strategies that the other player
might be applying to win the ultimate payoff. In imperfect information
games, however, no player is aware of what the other is up to.
CARDS are an amazing example of Imperfect information games
while CHESS is the perfect example of a Perfect Information game.
4. Asymmetric and Symmetric Games: Asymmetric games are those in which each player has a different and usually conflicting final goal. Symmetric games are those in which all players have the same ultimate goal but the strategy being used by each is completely different.
5. Co-operative and Non-Co-operative Games: In non-co-operative
games, every player plays for himself while in co-operative games,
players form alliances in order to achieve the final goal.

23. What is optimal decision making in games?

Optimal decision making in games is a strategy used to determine the best move for a player in a game. It involves using game theory, optimization methods, and a utility function to calculate the best move for a player in a given game. The Nash equilibrium is a fundamental concept in game theory, used to predict the outcome of a strategic interaction in the social sciences. Mixed Integer Linear Programming (MILP) and Linear Programming (LP) are two optimization methods used to find the Nash equilibrium. The Lemke-Howson algorithm is a common algorithm used to compute the Nash equilibrium; it uses iterated pivoting, much like the simplex algorithm used in linear programming. The minimax value of each node is used to determine the optimal strategy for a player, which is the move that will result in the highest payoff.

24. What is Alpha-Beta Search?

Alpha-beta search is an optimization of the minimax algorithm, which is used in two-player games. It reduces the number of game states that need to be examined, since the number of game states is exponential in the depth of the tree. Alpha-beta pruning uses two threshold parameters, alpha and beta, to determine which nodes to prune. Alpha is the best (highest-value) choice found so far by the Maximizer, and beta is the best (lowest-value) choice found so far by the Minimizer. Alpha and beta are initially set to -∞ and +∞, respectively. Alpha-beta pruning works by comparing the values of alpha and beta at each node: if alpha is greater than or equal to beta, the remaining successors of that node are pruned.

Move ordering is an important aspect of alpha-beta pruning, as it determines how effective the pruning is. Alpha-beta pruning is used in conjunction with the minimax algorithm to create a more efficient search algorithm. A further refinement is the "minimum-window alpha-beta search algorithm", which eliminates the need for a value stack.

25. What Are Stochastic Games in Artificial Intelligence?

Stochastic games are games that involve elements of chance, such as dice rolling or card drawing. They are used in artificial intelligence (AI) to simulate unpredictable external occurrences and to create a more realistic environment for AI agents to interact with. In a stochastic game, players must make decisions based on incomplete information, as the outcome of each move is determined by a random event. For example, in a game of backgammon, players must choose their moves based on the roll of the dice, and in a card game, players must choose their moves based on the cards they are dealt. Stochastic games are also used to model interactions between multiple decision makers, and to evaluate the expected value of a given position.

26. What are Partially Observable Games in AI?

Partially observable games in AI are games where the agent does not
have access to all the information that affects the distribution of the next
state or reward. Examples of partially observable games include chess
when played against an opponent with an unknown strategy, and Connect
4. In these games, the agent must make decisions based on incomplete
information, and may need to employ strategies such as minimax theory to
make the best decisions. Additionally, agents may need to use memory
systems to remember previously dealt cards or states of the game in order
to make the best decisions.

27. What is a Constraint Satisfaction Problem (CSP)?

A Constraint Satisfaction Problem (CSP) is a mathematical question defined as a set of objects whose state must satisfy a number of constraints or limitations. CSPs represent the entities in a problem as a homogeneous collection of finite constraints over variables, which is solved by constraint satisfaction methods. Examples of problems that can be modeled as CSPs include type inference, the eight queens puzzle, the map coloring problem, the maximum cut problem, and various logic puzzles such as Sudoku, Crosswords, Futoshiki, Kakuro (Cross Sums), Numbrix, and Hidato.
CSP (constraint satisfaction problem): Use a factored representation (a
set of variables, each of which has a value) for each state, a problem that is
solved when each variable has a value that satisfies all the constraints on
the variable is called a CSP.
Constraint propagation: Using the constraints to reduce the number of
legal values for a variable, which in turn can reduce the legal values for
another variable, and so on.
Local consistency: If we treat each variable as a node in a graph and
each binary constraint as an arc, then the process of enforcing local
consistency in each part of the graph causes inconsistent values to be
eliminated throughout the graph.

28. Backtracking Search for Constraint Satisfaction Problems.

Backtracking search is a basic uninformed search algorithm for Constraint Satisfaction Problems (CSPs). CSPs are problems where a set of variables must be assigned values in such a way that all constraints are satisfied. Backtracking search is an effective way to find a solution to a CSP, but it can be slow and inefficient.

So, how can we improve the efficiency of backtracking search in CSPs? One way is to use heuristics to order the variables and values, so that the most promising ones are tried first. This reduces the time spent searching, as the algorithm can quickly determine whether a particular variable or value will lead to a solution.

Another way to improve the efficiency of backtracking search in CSPs is to use constraint propagation. This involves propagating information from one variable to another so that the search space can be reduced. For example, if a variable has a certain value, then other variables may be constrained to certain values as a result, which prunes assignments that cannot lead to a solution.

Finally, backtracking search can be improved by using local search algorithms. These algorithms look for solutions in the local neighborhood of the current solution, rather than searching the entire search space, which further reduces the time spent searching.

Overall, backtracking search is an effective way to find a solution to a CSP, but it can be slow and inefficient. By using heuristics, constraint propagation, and local search algorithms, its efficiency can be improved.
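A minimal backtracking-search sketch for a small map-colouring CSP; the variables, neighbour relation, and colours are made-up example data:

```python
# Backtracking search for a tiny map-colouring CSP: assign one variable at a
# time and backtrack as soon as a constraint (neighbours differ) is violated.

NEIGHBOURS = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
              "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"]}
COLOURS = ["red", "green", "blue"]

def consistent(var, value, assignment):
    return all(assignment.get(n) != value for n in NEIGHBOURS[var])

def backtrack(assignment, variables):
    if len(assignment) == len(variables):
        return assignment                       # every variable assigned
    var = next(v for v in variables if v not in assignment)
    for value in COLOURS:
        if consistent(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment, variables)
            if result is not None:
                return result
            del assignment[var]                 # undo and try the next value
    return None                                 # no value works: backtrack

print(backtrack({}, list(NEIGHBOURS)))
```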

29. Local Search for Constraint Satisfaction Problems.

Constraint satisfaction problems (CSPs) are a type of problem in which a set of variables must be assigned values in such a way that all constraints are satisfied. Local search is a technique used to solve CSPs by searching for solutions in the local neighborhood of a given solution.

Local search algorithms start with an initial solution and then iteratively improve it by searching the local neighborhood of the current solution. The local neighborhood of a solution is defined by the constraints of the problem. For example, in a CSP with two variables, the local neighborhood of a solution might be all solutions that differ from the current solution by one variable.

The goal of local search is to find a solution that is better than the current solution. This can be done by using heuristics to guide the search process. Heuristics are rules of thumb that can be used to evaluate the quality of a solution and determine which solutions should be explored next.

Local search algorithms are often used in combination with other techniques, such as constraint propagation, to reduce the search space and make the search process more efficient. Constraint propagation is a technique used to reduce the domains of variables by enforcing local consistency.

Local search algorithms can be used to solve a variety of CSPs, including graph coloring, scheduling, and Sudoku. They are also used in artificial intelligence applications, such as game playing and robotics.
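A minimal sketch of a min-conflicts style local search on the same kind of map-colouring CSP (the variables and domains are again illustrative assumptions):

```python
import random

# Min-conflicts local search: start from a complete (possibly inconsistent)
# assignment and repeatedly reassign a conflicted variable to the value that
# violates the fewest constraints.

NEIGHBOURS = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
              "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"]}
COLOURS = ["red", "green", "blue"]

def conflicts(var, value, assignment):
    return sum(assignment[n] == value for n in NEIGHBOURS[var])

def min_conflicts(max_steps=1000):
    assignment = {v: random.choice(COLOURS) for v in NEIGHBOURS}
    for _ in range(max_steps):
        conflicted = [v for v in assignment if conflicts(v, assignment[v], assignment) > 0]
        if not conflicted:
            return assignment                    # all constraints satisfied
        var = random.choice(conflicted)
        assignment[var] = min(COLOURS, key=lambda c: conflicts(var, c, assignment))
    return None                                  # give up after max_steps

print(min_conflicts())
```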

UNIT-4

30. What is the inference engine, and why is it used in AI?

In artificial intelligence, the inference engine is the part of an intelligent system that derives new information from the knowledge base by applying logical rules.

It mainly works in two modes:

o Backward Chaining: It begins with the goal and proceeds backward to deduce the facts that support the goal.
o Forward Chaining: It starts with known facts and asserts new facts.

31. What is knowledge representation in AI?

Knowledge representation is the part of AI that is concerned with the thinking of AI agents. It is used to represent knowledge about the real world to the AI agents so that they can understand and utilize this information for solving complex problems in AI.

The following elements of knowledge are represented to the agent in the AI system:

o Objects
o Events
o Performance
o Meta-Knowledge
o Facts
o Knowledge-base

32. What are the various techniques of knowledge representation in AI?

Knowledge representation techniques are given below:

o Logical Representation
o Semantic Network Representation
o Frame Representation
o Production Rules

33. Propositional logic in Artificial Intelligence
Propositional logic (PL) is the simplest form of logic, where all statements are made up of propositions. A proposition is a declarative statement which is either true or false. It is a technique of representing knowledge in logical and mathematical form.

Example:
a) It is Sunday.
b) The Sun rises from West (False proposition)
c) 3+3= 7(False proposition)
d) 5 is a prime number.
Following are some basic facts about propositional logic:

o Propositional logic is also called Boolean logic as it works on 0 and 1.


o In propositional logic, we use symbolic variables to represent the logic, and we can use any symbol for representing a proposition, such as A, B, C, P, Q, R, etc.
o Propositions can be either true or false, but it cannot be both.
o Propositional logic consists of an object, relations or function,
and logical connectives.
o These connectives are also called logical operators.
o The propositions and connectives are the basic elements of the
propositional logic.
o Connectives can be said as a logical operator which connects two
sentences.
o A proposition formula which is always true is called tautology, and it
is also called a valid sentence.
o A proposition formula which is always false is called Contradiction.
o A proposition formula which has both true and false values is called a contingency.
o Statements which are questions, commands, or opinions, such as "Where is Rohini?", "How are you?", and "What is your name?", are not propositions.

Syntax of propositional logic:
The syntax of propositional logic defines the allowable sentences for the
knowledge representation. There are two types of Propositions:

a. Atomic Propositions
b. Compound propositions

o Atomic Propositions: Atomic propositions are simple propositions. They consist of a single proposition symbol. These are sentences which must be either true or false.
Example:
a) "2 + 2 is 4" is an atomic proposition, as it is a true fact.
b) "The Sun is cold" is also a proposition, as it is a false fact.
o Compound propositions: Compound propositions are constructed by combining simpler or atomic propositions, using parentheses and logical connectives.
Example:
a) "It is raining today, and the street is wet."
b) "Ankit is a doctor, and his clinic is in Mumbai."

34. How is propositional theorem proving used in AI?

Propositional theorem proving is a powerful tool used in Artificial Intelligence (AI) to determine whether a statement is true or false. This is done by using inference rules to create a proof: a series of conclusions that leads to the desired result. The most well-known rule is Modus Ponens, which states that if an implication and its premise are given, the conclusion may be deduced. Another helpful inference rule is And-Elimination, which states that any of the conjuncts can be inferred from a conjunction. These rules can be applied wherever they fit, producing sound conclusions without the need to enumerate models. Resolution is a single inference rule that, when combined with any complete search algorithm, yields a complete inference method; it can be used, for example, to infer the absence of pits in a given environment.

35. What are knowledge-based agents in Artificial Intelligence?

Knowledge-based agents are agents that reason by operating on internal representations of knowledge. They use propositional logic, which is based on propositions: statements about the world that can be either true or false. Propositional logic consists of objects, relations or functions, and logical connectives. These connectives are also called logical operators, and they are used to connect two simpler propositions or to represent a sentence logically. Examples of logical connectives include Negation (¬), Conjunction (∧), Disjunction (∨), Implication (→), and Biconditional (↔). By understanding propositional logic, logical connectives, models, knowledge bases, entailment, and inference, AI can use logic to draw conclusions from existing knowledge.

36. What is an intelligent agent in AI, and where are they used?

An intelligent agent can be any autonomous entity that perceives its environment through sensors and acts on it using actuators to achieve its goal.

Intelligent agents in AI are used in the following applications:

o Information access and navigation, such as search engines
o Repetitive activities
o Domain experts
o Chatbots, etc.

37. What is First-Order Logic in Artificial Intelligence?

First-Order Logic (FOL) represents knowledge in Artificial Intelligence (AI) in terms of objects, their attributes, and the relations among them. It is also known as Predicate Logic and is more expressive than propositional logic. FOL is used to design logics for machines to understand and respond to. The basic syntax of FOL is attribute/function(object(s)). FOL algorithms are easy to comprehend and can be executed easily.
38. Syntax:
 It refers to the rules and regulations for writing any statement in a programming language like C/C++.
 It has nothing to do with the meaning of the statement.
 A statement is syntactically valid if it follows all the rules.
 It is related to the grammar and structure of the language.
39. Semantics:
 It refers to the meaning associated with a statement in a programming language.
 It is all about the meaning of the statement, which determines how the program is interpreted.
 Semantic errors are handled at runtime.

Syntax vs. Semantics:
 Syntax defines the rules and regulations that help to write any statement in a programming language; semantics refers to the meaning of the associated line of code in a programming language.
 Syntax does not have any relationship with the meaning of the statement; semantics tells about the meaning.
 Syntax errors are caught at compile time, before the program is executed; semantic errors are encountered at runtime.
 Syntax errors are easy to catch; semantic errors are difficult to catch.

40. What is Inference in First-Order Logic in Artificial Intelligence?

Inference in First-Order Logic is the process of deriving new facts or sentences from existing ones. It involves the use of substitution, equality, FOL inference rules for quantifiers, Universal Generalization, Universal Instantiation, Existential Instantiation, Existential Introduction, and the Generalized Modus Ponens rule. Substitution is a basic operation applied to terms and formulas, while equality is used to form atomic sentences from a predicate and terms. FOL inference rules for quantifiers include Universal Generalization, Universal Instantiation, Existential Instantiation, and Existential Introduction. The Generalized Modus Ponens rule is a lifted form of Modus Ponens used for inference in FOL.
41. Difference between Forward Chaining and Backward Chaining:

 Forward chaining: when a decision is taken based on the available data, the process is called forward chaining. Backward chaining starts from the goal and works backward to determine what facts must be asserted so that the goal can be achieved.
 Forward chaining is known as a data-driven technique because we reach the goal using the available data; backward chaining is known as a goal-driven technique because we start from the goal and reach the initial state in order to extract the facts.
 Forward chaining is a bottom-up approach; backward chaining is a top-down approach.
 Forward chaining applies a breadth-first strategy; backward chaining applies a depth-first strategy.
 The goal of forward chaining is to reach the conclusion; the goal of backward chaining is to obtain the possible facts or the required data.
 Forward chaining is slow, as it has to use all the rules; backward chaining is fast, as it has to use only a few rules.
 Forward chaining operates in the forward direction, i.e., it works from the initial state to the final decision; backward chaining operates in the backward direction, i.e., it works from the goal to the initial state.
 Forward chaining is used for planning, monitoring, control, and interpretation applications; backward chaining is used in automated inference engines, theorem provers, proof assistants, and other artificial intelligence applications.
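A compact sketch of data-driven forward chaining over simple if-then rules; the rule base and facts are invented for the example:

```python
# Forward chaining over propositional if-then rules: starting from known
# facts, repeatedly fire any rule whose premises are all satisfied and add
# its conclusion, until no new facts can be derived.

RULES = [
    ({"croaks", "eats_flies"}, "frog"),
    ({"frog"}, "green"),
    ({"chirps", "sings"}, "canary"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)        # fire the rule, assert a new fact
                changed = True
    return facts

print(forward_chain({"croaks", "eats_flies"}))
# {'croaks', 'eats_flies', 'frog', 'green'}
```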

UNIT-5

42. What is Ontological Engineering in Artificial Intelligence?

Ontology engineering is a field of computer science, information science, and systems engineering that studies the methods and methodologies for building ontologies. Ontologies are formal representations of knowledge that define the categories, properties, and relationships between concepts, data, and entities. Ontology engineering is used to make explicit the knowledge contained within software applications, enterprises, and business procedures for a particular domain.

So, what are the categories, objects, and events that ontology engineering deals with?

Categories

Categories are sets or collections of various objects. In the context of ontology engineering, categories are used to define concepts, such as movie genres, types of people, and other abstract concepts.

Objects

Objects, also known as instances of concepts, are the atomic level of an ontology. In the context of ontology engineering, objects can be films, directors, actors, and other entities.

Events

Events are ways in which concepts are related to one another. In the context of ontology engineering, events can be the relationships between a movie and its script, director, and actors.

43. Reasoning Systems: Questions and Answers

Reasoning systems are an important tool for understanding how we make decisions and draw conclusions. In this section, we'll explore some of the key questions and answers related to reasoning systems.

What is a Reasoning System?

A reasoning system is a set of logical rules and principles used to make decisions and draw conclusions. It is based on the idea that if certain conditions are met, then a certain result will follow. Reasoning systems are used in many different fields, such as mathematics, computer science, and philosophy.

What is the Difference Between Deductive and Inductive Reasoning?

Deductive reasoning is a type of reasoning that starts with a general statement and then moves to a specific conclusion. For example, if we know that all cats are animals, and we know that Fluffy is a cat, then we can conclude that Fluffy is an animal.

Inductive reasoning is a type of reasoning that starts with specific observations and then moves to a general conclusion. For example, if we observe that Fluffy, Mittens, and Whiskers are all cats and that each of them is an animal, then we might conclude that all cats are animals.

What is the Difference Between Validity and Soundness?

Validity is the property of an argument that makes it logically valid: an argument is valid if its conclusion follows logically from the premises.

Soundness is the property of an argument that makes it both logically valid and true: an argument is sound if its conclusion follows logically from the premises and the premises are true.

What is the Difference Between Syllogistic and Non-Syllogistic Reasoning?

Syllogistic reasoning is a type of reasoning that uses syllogisms, which are logical arguments composed of three parts: a major premise, a minor premise, and a conclusion. For example, the syllogism "All cats are animals; Fluffy is a cat; therefore, Fluffy is an animal" is an example of syllogistic reasoning.

Non-syllogistic reasoning is a type of reasoning that does not use syllogisms. It is based on the idea that if certain conditions are met, then a certain result will follow. For example, if we know that Fluffy is a cat and cats like to eat fish, then we can conclude that Fluffy likes to eat fish.
Deductive Reasoning:

Deductive reasoning is a type of logic that starts with a general statement and works its way down to a specific conclusion. It is often used to make predictions or draw conclusions from a set of facts. For example, if we know that all cats are mammals and that all mammals have fur, we can deduce that all cats have fur.

Inductive Reasoning:

Inductive reasoning is the opposite of deductive reasoning. It starts with specific observations and works its way up to a general conclusion. This type of reasoning is often used to form hypotheses or theories. For example, if we observe that every mammal we have examined has fur, we can inductively conclude that all mammals have fur.

Abductive Reasoning:

Abductive reasoning is a type of logic that is used to explain a phenomenon or solve a problem. It involves making an educated guess based on the available evidence. For example, if we observe that cats have fur and that all mammals have fur, we can make an educated guess that cats are mammals.

Systematic Reasoning:

Systematic reasoning is a type of logic that is used to solve problems. It involves breaking a problem down into smaller parts and then solving each part individually. For example, if we want to solve a complex math problem, we can use systematic reasoning to break the problem down into smaller parts and then solve each part one at a time.

Creative Reasoning:

Creative reasoning is a type of logic that is used to come up with new ideas
or solutions. It involves thinking outside the box and coming up with
creative solutions to problems. For example, if we want to come up with a
new way to catch mice, we can use creative reasoning to come up with a
unique solution.

44. Classical Planning:

Classical planning is planning where an agent takes advantage of the problem structure to construct complex plans of action. The agent performs three tasks in classical planning:

 Planning: The agent plans after knowing what the problem is.
 Acting: It decides what action it has to take.
 Learning: The actions taken by the agent make it learn new things.

A language known as PDDL (Planning Domain Definition Language) is used to represent all actions in action schemas. PDDL describes the four basic things needed in a search problem:
 Initial state: It is the representation of each state as a conjunction of ground, functionless atoms.
 Actions: They are defined by a set of action schemas which implicitly define the ACTION() and RESULT() functions.
 Result: It is obtained by the set of actions used by the agent.
 Goal: It is the same as a precondition: a conjunction of literals (whose values are either positive or negative).
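A rough sketch of how an action schema with preconditions and add/delete effects (in the spirit of PDDL) could be represented and applied to a state; the "Move" action and atoms below are entirely made up for illustration:

```python
# A PDDL-style action schema represented as preconditions plus add/delete
# effects, and helpers that apply it to a state (a set of ground atoms).

ACTION_MOVE_A_TO_B = {
    "name": "Move(Robot, RoomA, RoomB)",
    "preconditions": {"At(Robot, RoomA)"},
    "add": {"At(Robot, RoomB)"},
    "delete": {"At(Robot, RoomA)"},
}

def applicable(action, state):
    # An action is applicable when all its preconditions hold in the state.
    return action["preconditions"] <= state

def apply_action(action, state):
    # RESULT: remove the delete list, then add the add list.
    return (state - action["delete"]) | action["add"]

initial_state = {"At(Robot, RoomA)", "Clean(RoomB)"}
goal = {"At(Robot, RoomB)"}

if applicable(ACTION_MOVE_A_TO_B, initial_state):
    result = apply_action(ACTION_MOVE_A_TO_B, initial_state)
    print(result)
    print("goal reached:", goal <= result)   # goal reached: True
```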

45. What is a Heuristic for Planning in AI?

A heuristic for planning in AI is a shortcut method used to find approximate solutions to complex problems. It is a type of algorithm that uses a set of rules to quickly identify a solution that is close to the optimal solution. Heuristic algorithms are often used in artificial intelligence (AI) to help computers find solutions when an exact solution is not possible or would require too much time or processing power. Heuristics can be used to solve problems such as the traveling salesman problem, where the goal is to find the most efficient route to visit all the cities on a list. Heuristic algorithms can also be used to solve two-person games, such as chess, and to detect viruses in antivirus software.

46. Step-by-step process for classical planning in artificial intelligence:

Problem Definition: Detailed specification of inputs and acceptable system solutions.

Problem Analysis: Analyse the problem thoroughly.

Knowledge Representation: Collect detailed information about the problem and define all possible techniques.

Problem-Solving: Selection of the best techniques.

Initial State: The problem requires an initial state, which starts the AI agent toward the specified goal.

Action: This stage of problem formulation works with a function of a specific class taken from the initial state; all possible actions are considered in this stage.

Transition: This stage of problem formulation integrates the actual action done by the previous action stage and collects the resulting state to forward it to the next stage.

Goal Test: This stage determines whether the specified goal has been achieved by the integrated transition model. Once the goal is achieved, the action stops and control moves to the next stage, which determines the cost of achieving the goal.

Path Costing: This component of problem solving assigns a numerical cost to achieving the goal. It accounts for all hardware, software, and human working costs.

End.

47. What is the Role of Planning in Artificial Intelligence?

Artificial intelligence is an important technology of the future. Whether it is intelligent robots, self-driving cars, or smart cities, they will all use different aspects of artificial intelligence, and planning is essential to any such AI project.

Planning is an important part of artificial intelligence that deals with the tasks and domains of a particular problem. Planning is considered the logical side of acting.

Everything we humans do is with a definite goal in mind, and all our actions are oriented towards achieving that goal. Similarly, planning is also done for artificial intelligence.

For example, planning is required to reach a particular destination. It is necessary to find the best route, but the tasks to be done at a particular time and why they are done are also very important.

That is why planning is considered the logical side of acting. In other words, planning is about deciding the tasks to be performed by the artificial intelligence system and how the system functions under domain-independent conditions.

48. Hierarchical Planning:

Hierarchical planning is an Artificial Intelligence (AI) problem-solving approach for a certain kind of planning problem: the kind focusing on problem decomposition, where problems are step-wise refined into smaller and smaller ones until the problem is finally solved. A solution here is a sequence of actions that is executable in a given initial state (and a refinement of the initial compound tasks that needed to be refined). This form of hierarchical planning is usually referred to as Hierarchical Task Network (HTN) planning, but many variants and extensions exist.

49. Non-deterministic Domains, Time, Schedules, and Resources in Artificial Intelligence:

Non-deterministic domains, time, schedules, and resources are important concepts in artificial intelligence. In this section, we will discuss what these terms mean, how they are used in AI, and how they can help improve AI performance.

What is a Non-deterministic Domain?

A non-deterministic domain is a type of problem-solving environment in which the outcome of a given action is not predetermined. This means that the same action can lead to different results depending on the context and environment in which it is performed. Non-deterministic domains are often used in AI applications such as robotics, natural language processing, and computer vision.

What is Time in AI?

Time is an important factor in AI applications. AI algorithms must be able to process data quickly and accurately in order to produce meaningful results. The amount of time required for an AI algorithm to complete a task is referred to as its "time complexity".

What is a Schedule in AI?

A schedule is a set of tasks that must be completed in order for an AI application to function properly. Schedules are used to ensure that tasks are completed in the correct order and on time. Schedules are often used in robotics, natural language processing, and computer vision applications.

What are Resources in AI?

Resources are the hardware and software components that are necessary
for an AI application to function properly. Resources can include
processors, memory, storage, and other hardware components. Resources
are also used to store data and to provide access to external data sources.

How Can Non-deterministic Domains, Time, Schedules, and Resources Help Improve AI Performance?

Non-deterministic domains, time, schedules, and resources are all important factors in AI applications. By understanding how these factors interact, AI developers can create more efficient and accurate algorithms. Additionally, by optimizing the use of resources, AI developers can reduce the amount of time required for an AI algorithm to complete a task. This can lead to improved performance and better results.

