Unit 1

What Is Artificial Intelligence?

AI is a rapidly advancing field focused on creating intelligent machines that mimic human abilities like learning and problem-solving. It has transformative potential across industries.
The Core Goal of AI

• Reasoning: Analyze information and draw logical conclusions.
• Learning: Acquire new knowledge and skills from data.
• Problem-Solving: Identify and solve problems in a goal-oriented way.

The fundamental aim of artificial intelligence is to emulate human intelligence in machines, enabling them to perform tasks like reasoning, learning, and problem-solving more efficiently and effectively.
How Does AI Work?
1 Input
Data is collected from various sources and sorted into categories.

2 Processing
The AI system sorts and deciphers the data using programmed
patterns until it recognizes similar patterns.

3 Outcomes
The AI can then use those patterns to predict outcomes.

AI works through a cyclical process of data input, processing, outcome prediction, and
self-adjustment, enabling the system to continuously learn and improve its
performance.
Types of AI
1 Narrow AI 2 General AI
S pecialized for specific tasks, like speech Aims to exhibit human-like intelligence across a
recognition or image classification. wide range of tasks.

3 Machine Learning 4 Natural Language Proces s ing


Enables learning from data to improve Enables understanding and generation of human
performance. language.

AI encompasses diverse techniques, from specialized to general human-like intelligence.


Practical Applications of AI
Healthcare
AI is used for medical diagnosis, analyzing medical images to identify diseases.

Finance
AI helps in credit scoring by analyzing financial data to predict creditworthiness.

Retail
AI is used for product recommendations based on customer browsing and purchase history.

Manufacturing
AI helps in quality control by inspecting products for defects.

Artificial Intelligence has a wide range of practical applications across industries, from healthcare and
finance to retail and manufacturing, revolutionizing how we approach problem-solving and decision-
making.
The Need for AI

• Increased Efficiency: AI automates repetitive tasks, freeing up human time and resources.

• Enhanced Decision-Making: AI can analyze vast amounts of data to identify patterns and trends.

• Innovation and Progress: AI can accelerate scientific discovery and technological advancements.

• Improved Quality of Life: AI has the potential to revolutionize various sectors, leading to a better quality of life.
AI-Based Technologies

• Machine Learning: Enables systems to learn from data and make predictions without explicit programming.

• Natural Language Processing: Focuses on enabling computers to understand, interpret, and generate human language.

• Computer Vision: Deals with the processing and analysis of visual information using computer algorithms.

Artificial Intelligence encompasses a wide range of technologies, including machine learning, natural language processing, and computer vision, each playing a crucial role in the development and application of intelligent systems.
The Future of AI

• Robotics: AI-powered robots and automation systems that can perform tasks in various industries.

• Neural Networks: A type of machine learning algorithm modeled after the human brain.

• Expert Systems: AI systems that mimic the decision-making ability of human experts in specific fields.

• Chatbots: AI-powered virtual assistants that can interact with users through text or voice interfaces.
As Artificial Intelligence continues to evolve, it will likely lead to the development of increasingly
sophisticated technologies, from advanced robotics and neural networks to expert systems and
intelligent chatbots, transforming the way we live and work.
Searching Algorithms in Artificial Intelligence

• Search forms the core component of many intelligent processes.

• Search is the systematic examination of states to find a path from the start or root node to the goal state.

• Many traditional search algorithms are used in AI applications.

• For complex problems, the traditional algorithms are unable to find the solution within some practical time and space limits.

• Consequently, many techniques have been developed using heuristic functions.
Requirements of search algorithms

• The first requirement is that it should cause motion.

Whenever we apply an algorithm, we should be able to move from one state to another state.

• The second requirement is that it should be systematic.

It should have a step-by-step approach so that we can move from the source state to the goal state.
Types of searching algorithms

Uninformed Search

• Does not contain domain knowledge, such as closeness or the location of the goal.

• Operates in a brute-force way.

Only includes information on how to traverse the tree, how to identify a leaf node, and how to identify the goal node.

• Examines each node until it achieves the goal node.
Informed Search

• Uses domain knowledge.

Problem information is available which can guide the search.

• Informed search strategies can find a solution more efficiently than an uninformed search strategy.

• Also called heuristic search.

• Might not always be guaranteed to find the best solution, but is guaranteed to find a good solution in reasonable time.
Breadth First Search Algorithm

1. Create a variable called NODE-LIST and set it to the initial state.

2. Until a goal state is found, or NODE-LIST is empty:

a) Remove the first element from NODE-LIST and call it E. If NODE-LIST was
empty, then quit.

b) For element E do the following:

i. Apply each rule to generate a new state.

ii. If the new state is a goal state, quit and return this state.

iii. Otherwise, add the new state to the end of NODE-LIST.
Breadth First Search Example

Step 1: Initially NODE-LIST contains only one node corresponding to the source state A.

NODE-LIST: A

Breadth First Search Example

Step 2: A is removed from NODE-LIST. The node is expanded, and its children B and C
are generated. They are placed at the back of NODE-LIST.

NODE-LIST: B C

Breadth First Search Example

Step 3: Node B is removed from NODE-LIST. The node is expanded, and its children D
and E are generated. They are placed at the back of NODE-LIST.

NODE-LIST: C D E

Breadth First Search Example

Step 4: Node C is removed from NODE-LIST. The node is expanded, and its children D
and G are added to the back of NODE-LIST.

NODE-LIST: D E D G

Breadth First Search Example

Step 5: Node D is removed from NODE-LIST. Its children C and F are generated and
added to the back of NODE-LIST.

NODE-LIST: E D G C F

Breadth First Search Example

Step 6: Node E is removed from NODE-LIST. It has no children.

NODE-LIST: D G C F
Breadth First Search Example

Step 7: D is expanded; B and F are placed at the back of NODE-LIST.

NODE-LIST: G C F B F
Breadth First Search Example

Step 8: G is selected for expansion. It is found to be a goal node. Hence the algorithm
returns the path A-C-G by following the parent pointers of the node corresponding to G.
Breadth First Search

Advantages:

• One of the simplest search strategies.

• BFS is complete. If there is a solution, BFS is guaranteed to find it.

• If there are multiple solutions, then a minimal solution will be found.

Disadvantages:

• The breadth first search algorithm cannot be effectively used unless the
search space is quite small.
Search Problem

• A search problem consists of:

• A State Space. The set of all possible states where you can be.
• A Start State. The state from where the search begins.
• A Goal Test. A function that looks at the current state and returns whether or not it is the goal state.
• The Solution to a search problem is a sequence of actions, called the plan, that transforms the start state to the goal state.
• This plan is achieved through search algorithms.
Exploring AI Search Strategies

In Artificial Intelligence, search strategies are crucial for solving complex problems and unlocking new possibilities. These fundamental techniques underpin many AI applications, enabling the traversal of vast search spaces and optimization of decision-making.

This presentation will delve into the intricacies of search strategies, exploring their importance, diverse approaches, and transformative impact on the world of AI.
Depth-First Search

Depth-First Search (DFS) is a fundamental uninformed search strategy in AI that explores as far as possible along each branch before backtracking. DFS systematically visits all the nodes of a graph or tree-like structure, prioritizing depth over breadth.

• DFS starts at the root node and explores the first child node until it reaches a dead-end or the goal node.
• It then backtracks to the most recent node with unexplored children and continues the search.
• DFS is often implemented using a stack data structure, making it efficient for deep, narrow search spaces.
• DFS can be useful for problems like maze solving, where the goal is to find a path from the start to the end.
The Essence of Search Strategies
Search strategies are the driving force behind AI systems. They enable AI to navigate through many possible solutions and find the optimal one.

AI has a variety of search methods, from simple ones like Breadth-First Search and Depth-First Search to more
advanced techniques like Greedy Search and A* Algorithm. Each approach has its own strengths and weaknesses,
allowing AI systems to adapt and perform efficiently for different problems.

The choice of search strategy depends on the specific problem at hand. This diversity of search strategies is what
makes AI systems so powerful and versatile.
Types of Search Strategies

Uninformed Search Strategies: Depth-First Search and Breadth-First Search explore the search space without any additional information, systematically visiting all possible solutions.

Heuristic Search Strategies: Hill Climbing and A* Search use educated guesses about the promising directions to explore, allowing them to find solutions more efficiently.
Breadth-First Search

Breadth-First Search (BFS) is a fundamental uninformed search strategy that explores the search space level by level. It starts at the root node, visiting all neighboring nodes before moving to the next depth level.

The key steps are: 1) Initialize a queue and add the root. 2) Dequeue a node. 3) Enqueue all its unvisited neighbors. 4) Repeat until the queue is empty or the goal is found.

BFS is efficient for finding the shortest path in an unweighted graph, as it explores all possible solutions at the current depth before moving on.
Conclusion: Unlocking the Potential
of Search Strategies

Versatility
Search strategies are the foundation of many AI-driven applications, showcasing their versatility and
adaptability to a wide range of problem domains.

Optimization
By continuously refining and improving search algorithms, AI systems can achieve greater efficiency
and find more optimal solutions to complex problems.

Innovation
As the field of AI continues to evolve, the advancement of search strategies will play a crucial role in
driving innovation and unlocking new possibilities for intelligent systems.
DFS Algorithm

function findPath(robot, start, goal):
    stack ← empty stack
    visited ← empty set
    stack.push(start)

    while not stack.isEmpty():
        current ← stack.pop()
        if current == goal:
            return "Path found"
        if current in visited:
            continue            // skip states that were already expanded
        visited.add(current)
        neighbors ← robot.getNeighbors(current)

        for neighbor in neighbors:
            if neighbor not in visited:
                stack.push(neighbor)
    return "Path not found"
Depth Limited Search
• Modified version of DFS that imposes a limit
on the depth of the search.
• Will only explore nodes up to a certain depth,
effectively preventing it from going down
excessively deep paths that are unlikely to lead
to the goal.
• By setting a maximum depth limit, DLS aims
to improve efficiency and ensure more
manageable search times.
Iterative Deepening Search (IDS) or Iterative
Deepening Depth-First Search (IDDFS)
• Iterative Deepening Depth-First Search (IDDFS) combines the
benefits of both Depth-First Search (DFS) and Breadth-First
Search (BFS).
• Used to find the shortest path or to search through a tree or graph.
• Depth-Limited DFS: IDDFS starts by running a Depth-First
Search (DFS) with a limited depth. This means it only explores
paths up to a certain length before stopping.
• Increasing Depth: If it doesn't find the goal at that depth, it
increases the depth limit and runs DFS again. This process is
repeated, gradually increasing the depth, until the goal is found.
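The two ideas above, a depth-limited DFS plus a gradually increasing limit, can be sketched as follows. The graph and node names are hypothetical, chosen only to illustrate the mechanism.

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """DLS: plain DFS that refuses to expand below `limit` levels."""
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:                       # depth budget exhausted: stop here
        return None
    for child in graph.get(node, []):
        if child not in path:            # avoid cycles along the current path
            found = depth_limited_search(graph, child, goal, limit - 1, path + [child])
            if found:
                return found
    return None

def iddfs(graph, start, goal, max_depth=20):
    """IDDFS: run DLS with limit 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["G"]}
print(iddfs(graph, "A", "G"))  # ['A', 'C', 'G']
```

Because the shallow levels are re-explored on every iteration, IDDFS pays a small overhead in exchange for BFS-like optimality with DFS-like memory use.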
Uniform Cost Search (UCS)
• Used to find the least-cost path in search problems, especially when paths
have different costs associated with them.
• Priority Queue: UCS uses a priority queue to explore nodes. The priority
of a node is determined by the cost to reach that node from the start.
• Expand the Cheapest Node First: At each step, the algorithm expands (or
explores) the node with the lowest cumulative cost. This means it always
chooses the path that has the least total cost so far.
• Path Costs: If a node can be reached by multiple paths, UCS keeps track
of the cost for each path. It only keeps the path with the lowest cost and
discards the rest.
• Goal Test: The search continues until the goal node is expanded. Since
UCS explores nodes in increasing order of cost, the first time it reaches the
goal, it guarantees that the path found is the least costly.
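The bullet points above can be sketched with a priority queue. The weighted graph below is a made-up example; the goal test happens only when a node is expanded, which is what guarantees the least-cost path.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """UCS: expand the frontier node with the lowest cumulative path cost.
    `graph[u]` maps each neighbor v of u to the edge cost u->v."""
    frontier = [(0, start, [start])]      # priority queue ordered by cost
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:                  # goal test on expansion => optimal
            return cost, path
        for neighbor, step in graph.get(node, {}).items():
            new_cost = cost + step
            if new_cost < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = new_cost   # keep only the cheapest path
                heapq.heappush(frontier, (new_cost, neighbor, path + [neighbor]))
    return None

graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}}
print(uniform_cost_search(graph, "A", "D"))  # (3, ['A', 'B', 'C', 'D'])
```

Note how the direct edge B→D (cost 5) is discarded once the cheaper route through C is found, exactly as the "Path Costs" bullet describes.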
Generate and Test Search

• The generate-and-test search algorithm is a very simple algorithm that is guaranteed to find a solution, if one exists, when done systematically.

• Algorithm: Generate-And-Test
– 1. Generate a possible solution.
– 2. Test to see if this is the expected solution.
– 3. If the solution has been found, quit; else go to step 1.
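The three steps can be sketched generically; the generator and tester below, and the digit-sum toy problem, are assumed purely for illustration.

```python
from itertools import product

def generate_and_test(generator, is_solution):
    """Step 1: generate a candidate; Step 2: test it;
    Step 3: quit on success, otherwise generate the next one."""
    for candidate in generator:
        if is_solution(candidate):
            return candidate
    return None

# Toy problem (assumed): find a 3-digit combination whose digits sum to 24.
candidates = product(range(10), repeat=3)          # systematic generation
answer = generate_and_test(candidates, lambda c: sum(c) == 24)
print(answer)  # (6, 9, 9)
```

Swapping the systematic `product` generator for a random one turns this into the British Museum variant discussed below, at the price of losing the guarantee.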
Generate and Test

• The potential solutions that need to be generated vary depending on the kind of problem.

• For some problems the possible solutions may be particular points in the problem space, and for other problems, paths from the start state.
Generate and Test

• Generate-and-test, like depth-first search, requires that complete solutions be generated for testing.

• In its most systematic form, it is simply an exhaustive search of the problem space.

• Solutions can also be generated randomly, but then a solution is not guaranteed.

• This approach is what is known as the British Museum algorithm: finding an object in the British Museum by wandering randomly.
Systematic Generate and Test

• While generating complete solutions and generating random solutions are the two extremes, there exists another approach that lies in between.

• In this approach the search process proceeds systematically, but some paths that are unlikely to lead to the solution are not considered.

• This evaluation is performed by a heuristic function.
Systematic Generate and Test

• A depth-first search tree with backtracking can be used to implement the systematic generate-and-test procedure.

• As per this procedure, if some intermediate states are likely to appear often in the tree, it would be better to modify the procedure to traverse a graph rather than a tree.
Generate and Test Planning

• First, the planning process uses constraint-satisfaction techniques and creates lists of recommended substructures.

• Then the generate-and-test procedure uses the lists generated and is required to explore only a limited set of structures.

• Constrained in this way, generate-and-test proves highly effective.
Example: Coloured Blocks

• "Arrange four 6-sided cubes in a row, with each side of each cube painted one of four colours, such that on all four sides of the row one block face of each colour is showing."

• Heuristic: If there are more red faces than faces of other colours, then, when placing a block with several red faces, use as few of them as possible as outside faces.
Example: Travelling Salesman Problem (TSP)

• A salesman has a list of cities, each of which he must visit exactly once. There are direct roads between each pair of cities on the list. Find the route the salesman should follow for the shortest possible round trip that both starts and finishes at any one of the cities.

– The traveller needs to visit n cities.
– The distance between each pair of cities is known.
– We want the shortest route that visits all the cities once.
Example: Search flow with Generate and Test — enumerate the candidate tours, then select the path whose length is least.
Hill Climbing

• The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain or the best solution to the problem.

• It terminates when it reaches a peak value where no neighbor has a higher value.

• Hill climbing is a technique used for optimizing mathematical problems.

• One of the widely discussed examples of the hill climbing algorithm is the Travelling Salesman Problem, in which we need to minimize the distance travelled by the salesman.
Hill Climbing

• It is also called greedy local search, as it only looks to its good immediate neighbor state and not beyond that.

• A node of the hill climbing algorithm has two components: state and value.

• Hill climbing is mostly used when a good heuristic is available.

• In this algorithm, we don't need to maintain and handle the search tree or graph, as it only keeps a single current state.
Hill Climbing Features

• Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.

• Greedy approach: Hill climbing search moves in the direction which optimizes the cost.

• No backtracking: It does not backtrack in the search space, as it does not remember previous states.
State-space Diagram

• The state-space landscape is a graphical representation of the hill-climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.

• On the Y-axis we take the function, which can be an objective function or a cost function, and the state space on the X-axis.

• If the function on the Y-axis is cost, then the goal of the search is to find the global minimum.

• If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum.
Different regions in the state space

• Local maximum: A local maximum is a state which is better than its neighbor states, but there is also another state which is higher than it.

• Global maximum: The global maximum is the best possible state of the state-space landscape. It has the highest value of the objective function.

• Current state: The state in the landscape diagram where an agent is currently present.

• Flat local maximum: A flat space in the landscape where all the neighbor states of the current state have the same value.

• Shoulder: A plateau region which has an uphill edge.
Types of Hill Climbing Algorithm

• Simple Hill Climbing
• Steepest-Ascent Hill Climbing
• Stochastic Hill Climbing
Simple Hill Climbing

• Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates only one neighbor node state at a time, selects the first one which optimizes the current cost, and sets it as the current state.

• It only checks one successor state and, if it finds one better than the current state, moves to it; otherwise it stays in the same state. This algorithm has the following features:
– Less time consuming
– Less optimal solution, and the solution is not guaranteed
Simple Hill Climbing

• Step 1: Evaluate the initial state; if it is a goal state then return success and stop.

• Step 2: Loop until a solution is found or there is no new operator left to apply.

• Step 3: Select and apply an operator to the current state.

• Step 4: Check the new state:
– If it is a goal state, then return success and quit.
– Else, if it is better than the current state, then assign the new state as the current state.
– Else, if it is not better than the current state, then return to step 2.

• Step 5: Exit.
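A minimal sketch of these steps, assuming a toy one-dimensional landscape; the objective function `f` and the neighbor operator `step` are made up for illustration.

```python
def simple_hill_climbing(start, neighbors, value):
    """Move to the FIRST neighbor that is better than the current
    state; stop when no operator yields an improvement."""
    current = start
    while True:
        for candidate in neighbors(current):       # Step 3: apply an operator
            if value(candidate) > value(current):  # Step 4: better? move there
                current = candidate
                break
        else:                                      # no better neighbor: local peak
            return current

# Toy landscape (assumed): maximize f(x) = -(x - 7)**2 over the integers.
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(0, step, f))  # 7
```

On this single-peaked landscape the climb always reaches the global maximum; the local maximum, plateau, and ridge problems discussed later arise only on rougher landscapes.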
Steepest-Ascent Hill Climbing

• The steepest-ascent algorithm is a variation of the simple hill climbing algorithm.

• This algorithm examines all the neighboring nodes of the current state and selects the one neighbor node which is closest to the goal state.

• This algorithm consumes more time as it searches multiple neighbors.
Steepest-Ascent Hill Climbing

• Step 1: Evaluate the initial state; if it is a goal state then return success and stop, else make the current state the initial state.

• Step 2: Loop until a solution is found or the current state does not change.
– Let SUCC be a state such that any successor of the current state will be better than it.
– For each operator that applies to the current state:
• Apply the operator and generate a new state.
• Evaluate the new state.
• If it is a goal state, then return it and quit; else compare it to SUCC.
• If it is better than SUCC, then set the new state as SUCC.
• If SUCC is better than the current state, then set the current state to SUCC.

• Step 3: Exit.
Stochastic Hill Climbing

• Stochastic hill climbing does not examine all its neighbors before moving.

• Rather, this search algorithm selects one neighbor node at random and decides whether to choose it as the current state or examine another state.
Problems in Hill Climbing Algorithm

• 1. Local Maximum:
– A local maximum is a peak state in the landscape which is better than each of its neighboring states, but there is another state present which is higher than the local maximum.
– Solution: The backtracking technique can be a solution to the local maximum in the state-space landscape. Create a list of the promising paths so that the algorithm can backtrack in the search space and explore other paths as well.
Problems in Hill Climbing Algorithm

• 2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state contain the same value; because of this, the algorithm cannot find the best direction to move. A hill-climbing search might be lost in the plateau area.

• Solution: The solution for the plateau is to take big steps or very little steps while searching. Randomly select a state which is far away from the current state, so that it is possible for the algorithm to find a non-plateau region.
Problems in Hill Climbing Algorithm

• 3. Ridges: A ridge is a special form of the local maximum. It has an area which is higher than its surrounding areas, but which itself has a slope and cannot be climbed in a single move.

• Solution: With the use of bidirectional search, or by moving in different directions, we can improve on this problem.
Constraint Satisfaction Problem
• A Constraint Satisfaction Problem (CSP) is a search formulation in which a problem is defined by a set of variables.
• Each variable has a domain of possible values, and constraints specify allowable combinations of values.
• The goal is to find a solution that satisfies all constraints.
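A common way to solve a CSP is backtracking search, sketched below on an assumed toy map-coloring instance: three mutually adjacent regions, each of which must get a different color (region names are illustrative only).

```python
def backtrack(assignment, variables, domains, constraints):
    """Assign variables one at a time; undo (backtrack) whenever a
    value violates a constraint with the values chosen so far."""
    if len(assignment) == len(variables):
        return assignment                      # all variables assigned consistently
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(ok(assignment | {var: value}) for ok in constraints):
            result = backtrack(assignment | {var: value},
                               variables, domains, constraints)
            if result:
                return result
    return None                                # no value works: backtrack

# Toy CSP (assumed): color 3 mutually adjacent regions with 3 colors.
variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
def differ(a, b):
    # Constraint: a and b take different values (vacuously true if unassigned).
    return lambda asg: a not in asg or b not in asg or asg[a] != asg[b]
constraints = [differ("WA", "NT"), differ("WA", "SA"), differ("NT", "SA")]
print(backtrack({}, variables, domains, constraints))
# {'WA': 'red', 'NT': 'green', 'SA': 'blue'}
```

The constraints prune partial assignments early, which is what distinguishes CSP solving from blind enumeration of complete assignments.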
Recursive Best First Search
• Recursive Best-First Search (RBFS) is an informed search algorithm that uses limited memory by recursively exploring the most promising paths first.
• It uses a heuristic function to guide the search and backtracks when a path becomes less promising than others.
• RBFS efficiently balances between depth-first and breadth-first strategies.
Game Playing
• Game playing is a popular application of artificial
intelligence that involves the development of computer
programs to play games, such as chess, checkers, or Go.
• The goal of game playing in artificial intelligence is to
develop algorithms that can learn how to play games and
make decisions that will lead to winning outcomes.
• Game playing algorithms use search algorithms to help AI
agents make strategic decisions in competitive games.
Game Playing
• Game playing is a search problem defined by
▫ Initial state
▫ Successor function
▫ Goal test
▫ Path cost/utility/payoff function
• A game must feel natural
▫ Obey the laws of the game
▫ Characters aware of the environment
▫ Path finding
▫ Decision making
▫ Planning
Game Playing
• Two kinds of game playing algorithms are
▫ Minimax algorithm
▫ Alpha-Beta pruning algorithm
Minimax Algorithm Example
Minimax Search Algorithm
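The minimax idea can be sketched in a few lines. The two-ply game tree below is hypothetical: MAX plays at the root, MIN replies, and the leaves carry payoff values; MAX picks the move whose worst-case (MIN-chosen) outcome is best.

```python
def minimax(node, maximizing, tree):
    """Return the game value of `node`: MAX takes the largest child
    value, MIN the smallest; leaves carry their payoff directly."""
    children = tree.get(node)
    if children is None:               # terminal node: static evaluation
        return node
    values = [minimax(child, not maximizing, tree) for child in children]
    return max(values) if maximizing else min(values)

# Hypothetical 2-ply tree: MAX at the root, MIN below, payoffs at the leaves.
tree = {"root": ["L", "R"], "L": [3, 5], "R": [2, 9]}
print(minimax("root", True, tree))  # 3
```

MIN would hold MAX to 3 on the left branch and 2 on the right, so MAX's best guaranteed value is 3. Alpha-Beta pruning computes the same value while skipping branches that cannot change the result.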
Expert System
• Expert systems are computer applications developed to solve complex problems in a particular domain, at the level of extraordinary human intelligence and expertise.
• An expert system is AI software that uses knowledge stored in a knowledge base to solve problems that would usually require a human expert, thus preserving a human expert's knowledge in its knowledge base.
• They can advise users as well as provide explanations about how they reached a particular conclusion or advice.
Expert System
• Characteristics of expert systems:
– High performance
– Understandable
– Reliable
– Highly responsive
• Expert systems are capable of:
– Advising
– Assisting and instructing humans in decision making
– Deriving a solution
– Interpreting input
– Predicting results
– Suggesting alternative options to a problem
Expert System
• Expert systems are incapable of:
– Substituting human decision makers
– Producing accurate output for an inadequate knowledge base
– Refining their own knowledge
Components of Expert System
• Knowledge Base
– The knowledge base represents facts and rules. It consists
of knowledge in a particular domain as well as rules to
solve a problem, procedures and data relevant to the
domain.
• Inference Engine
– The function of the inference engine is to fetch the relevant knowledge from the knowledge base, interpret it, and find a solution relevant to the user's problem. The inference engine acquires the rules from its knowledge base and applies them to the known facts to infer new facts. Inference engines can also include explanation and debugging abilities.
Components of Expert System
• Knowledge Acquisition and Learning Module –
The function of this component is to allow the expert
system to acquire more and more knowledge from
various sources and store it in the knowledge base.
• User Interface –
This module makes it possible for a non-expert user to
interact with the expert system and find a solution to
the problem.
• Explanation Module –
This module helps the expert system to give the user an
explanation about how the expert system reached a
particular conclusion.
Inference Engine
• The Inference Engine generally uses two
strategies for acquiring knowledge from the
Knowledge Base, namely
– Forward Chaining
– Backward Chaining
Forward Chaining
• Forward Chaining is a strategic process used by the expert system to answer the question: what will happen next? This strategy is mostly used for tasks like producing a conclusion, result or effect.
• Example: predicting share market movement.
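A minimal forward-chaining sketch: each rule is a (premises, conclusion) pair, and the engine keeps firing any rule whose premises are all known facts until no new fact appears. The share-market rules are invented for illustration.

```python
def forward_chain(facts, rules):
    """Data-driven inference: repeatedly fire any rule whose premises
    are all known facts, adding its conclusion to the fact set."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)          # new fact derived: keep iterating
                changed = True
    return facts

# Toy knowledge base (assumed) for a share-market advisor:
rules = [(["rates_falling"], "bond_prices_rising"),
         (["bond_prices_rising", "earnings_strong"], "buy_stocks")]
derived = forward_chain(["rates_falling", "earnings_strong"], rules)
print(derived)  # the derived facts include 'buy_stocks'
```

Backward chaining would run the same rules in reverse: starting from the hypothesis "buy_stocks" and working back to see whether its premises can be established.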
Backward Chaining
• Backward Chaining is a strategy used by the expert system to answer the question: why has this happened? This strategy is mostly used to find the root cause or reason behind something, considering what has already happened.
• Example: diagnosis of stomach pain, blood cancer or dengue, etc.
Genetic Algorithm
