
Artificial Intelligence (TCS-501)

Artificial Intelligence (AI) is the branch of computer science that creates intelligent
machines which think and act like humans. It is one of the most transformative
technologies of our time, in large part because of how closely it relates to people's
daily lives. AI enables machines to think, learn, and adapt, enhancing and
automating tasks across industries.

Artificial Intelligence has many subfields, each focusing on a different aspect of mimicking
human beings. Machine Learning is one of the most popular; others
include Deep Learning, Natural Language Processing, and
Robotics.

Historical Context:

The roots of AI trace back to Alan Turing, who in 1950 posed the question, "Can
machines think?" His development of the Turing Test laid a foundation for thinking
about machine intelligence.

Early AI focused on solving simple mathematical problems, but today it
encompasses everything from self-driving cars to voice assistants.

Importance of AI:
Artificial Intelligence is not just a part of computer science; it is a vast field that
draws on many other disciplines. To create AI, we first need to understand how
intelligence is composed. Intelligence is an intangible capability of the brain that
combines reasoning, learning, problem solving, perception, language understanding,
and more.

To reproduce these capabilities in a machine or software, Artificial Intelligence requires
the following disciplines:

• Mathematics
• Biology
• Psychology
• Sociology
• Computer Science
• Neuroscience (the study of neurons)
• Statistics

Types of AI:
1. Weak AI or Narrow AI:
• Narrow AI is a type of AI that can perform a dedicated task with
intelligence. It is the most common and currently available form of AI in
the world of Artificial Intelligence.
• Narrow AI cannot perform beyond its field or limitations, as it is only
trained for one specific task; hence it is also termed weak AI. Narrow
AI can fail in unpredictable ways if pushed beyond its limits.
• Apple's Siri is a good example of Narrow AI; it operates within a limited,
pre-defined range of functions.
• IBM's Watson supercomputer also comes under Narrow AI, as it combines an
expert-system approach with machine learning and natural
language processing.
• Some examples of Narrow AI are playing chess, purchase suggestions
on e-commerce sites, self-driving cars, speech recognition, and image
recognition.

2. General AI:
• General AI is a type of intelligence that could perform any intellectual
task with human-level efficiency.
• The idea behind general AI is to build a system that can be smart and
think like a human on its own.
• Currently, no such system exists that qualifies as general
AI and can perform every task as well as a human.
• Researchers worldwide are now focused on developing machines
with General AI.
• Systems with General AI are still under research, and it will take
considerable effort and time to develop them.

3. Super AI:
• Super AI is a level of system intelligence at which machines could
surpass human intelligence and perform any task better than a
human, with cognitive abilities of their own. It is an outcome of general AI.
• Some key characteristics of strong AI include the
ability to think, reason, solve puzzles, make judgments, plan, learn,
and communicate on its own.
• Super AI is still a hypothetical concept of Artificial Intelligence.
Developing such systems in the real world remains a world-changing task.
Types of AI Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and
capability. All of these agents can improve their performance and generate better actions over time.
They are listed below:

• Simple Reflex Agent


• Model-based reflex agent
• Goal-based agents
• Utility-based agent
• Learning agent

1. Simple Reflex agent:


• The Simple reflex agents are the simplest agents. These agents make decisions on the
basis of the current percept and ignore the rest of the percept history.
• These agents only succeed in a fully observable environment.
• The Simple reflex agent does not consider any part of the percept history during its
decision and action process.
• The Simple reflex agent works on the Condition-action rule, which maps the
current state directly to an action (a minimal sketch follows this list). For example, a
room-cleaner agent acts only if there is dirt in the room.
• Problems with the simple reflex agent design approach:
o They have very limited intelligence.
o They have no knowledge of non-perceptual parts of the current state.
o The set of condition-action rules is usually too large to generate and store.
o They are not adaptive to changes in the environment.
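The following is a minimal sketch, in Python, of the condition-action rule for the room-cleaner example above. The percept format (location, status) and the action names are illustrative assumptions rather than part of any standard library.

# Simple reflex agent: map the current percept directly to an action,
# ignoring all percept history.
def simple_reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "MoveRight"
    else:
        return "MoveLeft"

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # -> MoveLeft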
2. Model-based reflex agent
• The Model-based agent can work in a partially observable environment and track the
situation.
• A model-based agent has two important factors:
o Model: knowledge about "how things happen in the world"; this is why it is called
a Model-based agent.
o Internal State: a representation of the current state based on the percept
history.
• These agents maintain a model, i.e., knowledge of the world, and perform actions based
on that model (a minimal sketch follows this list).
• Updating the agent's state requires information about:
o how the world evolves, and
o how the agent's actions affect the world.
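Below is a minimal sketch of a model-based reflex agent for a hypothetical two-room vacuum world; the internal state (the agent's model of each room) is what lets it act sensibly when it cannot observe both rooms at once. The class and action names are assumptions for illustration.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: the agent's best guess about each room, built from percept history.
        self.world = {"A": "Unknown", "B": "Unknown"}

    def act(self, percept):
        location, status = percept
        self.world[location] = status          # update the model from the current percept
        if status == "Dirty":
            self.world[location] = "Clean"     # model how our own action changes the world
            return "Suck"
        other = "B" if location == "A" else "A"
        if self.world[other] != "Clean":       # the unseen room may still need cleaning
            return "MoveRight" if location == "A" else "MoveLeft"
        return "NoOp"                          # everything known to be clean

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Dirty")))   # -> Suck
print(agent.act(("A", "Clean")))   # -> MoveRight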
3. Goal-based agents
• Knowledge of the current state of the environment is not always sufficient for
an agent to decide what to do.
• The agent needs to know its goal, which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by adding the
"goal" information.
• They choose actions so that they can achieve the goal.
• These agents may have to consider a long sequence of possible actions before
deciding whether the goal can be achieved. Such consideration of different
scenarios is called searching and planning, and it makes an agent proactive.
4. Utility-based agents
• These agents are similar to goal-based agents but add an extra component of
utility measurement, which provides a measure of success
at a given state.
• Utility-based agents act based not only on goals but also on the best way to achieve the goal.
• A utility-based agent is useful when there are multiple possible alternatives and
the agent has to choose the best action among them.
• The utility function maps each state to a real number to measure how well each
action achieves the goals (a small illustration follows this list).
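The following small illustration contrasts a goal test with a utility function for a hypothetical route-planning state. The state fields and the weights in the utility function are arbitrary assumptions chosen only for the example.

def goal_test(state):
    # Goal-based view: a state either satisfies the goal or it does not.
    return state["at_destination"]

def utility(state):
    # Utility-based view: map each state to a real number measuring how good it is
    # (here: prefer arriving, then being fast, then using little fuel).
    return (100.0 * state["at_destination"]
            - 1.0 * state["travel_time_min"]
            - 0.5 * state["fuel_used_l"])

fast_route = {"at_destination": True, "travel_time_min": 30, "fuel_used_l": 6}
slow_route = {"at_destination": True, "travel_time_min": 55, "fuel_used_l": 4}

# Both routes satisfy the goal, but the utility function can rank them.
print(goal_test(fast_route), goal_test(slow_route))   # -> True True
print(utility(fast_route) > utility(slow_route))      # -> True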
5. Learning Agents
• A learning agent in AI is an agent that can learn from its past experiences;
that is, it has learning capabilities.
• It starts acting with basic knowledge and is then able to act and adapt automatically
through learning.
• A learning agent has four main conceptual components (a structural sketch follows this list):
o Learning element: responsible for making improvements by learning
from the environment.
o Critic: the learning element takes feedback from the critic, which describes how
well the agent is doing with respect to a fixed performance standard.
o Performance element: responsible for selecting external actions.
o Problem generator: responsible for suggesting actions
that will lead to new and informative experiences.
• Hence, learning agents are able to learn, analyze their performance, and look for new ways
to improve that performance.
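The structural sketch below shows how the four conceptual components of a learning agent fit together. The class and the way the components are wired are illustrative assumptions, not a complete learning algorithm.

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # improves behaviour from feedback
        self.critic = critic                            # scores behaviour against a fixed standard
        self.problem_generator = problem_generator      # suggests new, informative actions

    def step(self, percept):
        feedback = self.critic(percept)            # how well is the agent doing?
        self.learning_element(feedback)            # learn from that feedback
        exploratory = self.problem_generator(percept)
        return exploratory or self.performance_element(percept)

# Tiny usage example with placeholder components.
agent = LearningAgent(
    performance_element=lambda p: "default_action",
    learning_element=lambda feedback: None,
    critic=lambda p: 0.0,
    problem_generator=lambda p: None,
)
print(agent.step("some percept"))   # -> default_action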
Sensors: A sensor is a device that detects changes in the environment and sends
the information to other electronic devices. An agent observes its environment
through sensors.

Actuators: Actuators are the components of a machine that convert energy into
motion. The actuators are responsible for moving and controlling a system. An
actuator can be an electric motor, gears, rails, etc.

Effectors: Effectors are the devices that actually affect the environment. Effectors can be
legs, wheels, arms, fingers, wings, fins, or a display screen.
PEAS Representation
PEAS is a model used to describe an AI agent. When we define an AI
agent or rational agent, we can group its properties under the PEAS representation
model. It is made up of four terms:

• P: Performance measure
• E: Environment
• A: Actuators
• S: Sensors
Here, the performance measure defines the criteria for success of the agent's behavior.

PEAS for self-driving cars:

For a self-driving car, the PEAS representation would be (a sketch of this as a data structure follows):

Performance: Safety, time, legal driving, comfort

Environment: Roads, other vehicles, road signs, pedestrians

Actuators: Steering, accelerator, brake, signal, horn

Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
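As a minimal sketch, the PEAS description above can be written down as a plain data structure; the field values simply restate the self-driving-car example and are not tied to any particular library.

from dataclasses import dataclass, field

@dataclass
class PEAS:
    performance: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

self_driving_car = PEAS(
    performance=["safety", "time", "legal driving", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer", "accelerometer", "sonar"],
)

print(self_driving_car.performance)   # -> ['safety', 'time', 'legal driving', 'comfort']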


Concept of an Intelligent agent in AI:

The concept of an intelligent agent is fundamental to understanding how artificial
intelligence (AI) systems operate. In AI, an intelligent agent is an autonomous entity that
perceives its environment, makes decisions, and takes actions to achieve specific goals.
These agents are designed to interact with their environment in ways that optimize their
performance according to a set of objectives.

Key Components of an Intelligent Agent:

1. Perception: The agent collects information from its environment through sensors.

2. Decision-Making: Based on the information gathered, the agent processes the data and
decides on a course of action.

3. Action: The agent executes actions in the environment through actuators to achieve its
goals.

4. Learning: Some agents can learn from experience, improving their decision-making over
time. A minimal perceive-decide-act loop is sketched below.
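The following is a minimal sketch of the perceive-decide-act loop, assuming a dummy environment that simply feeds the agent a fixed list of percepts; the class and function names are invented for illustration.

class DummyEnvironment:
    def __init__(self, percepts):
        self.percepts = list(percepts)

    def perceive(self):
        # Sensors: return the next percept, or None when nothing is left.
        return self.percepts.pop(0) if self.percepts else None

    def apply(self, action):
        # Actuators: here we just print the chosen action.
        print(f"acting: {action}")

def decide(percept):
    # Decision-making: trivially derive an action from the percept.
    return f"respond_to({percept})"

env = DummyEnvironment(["obstacle ahead", "clear road"])
while (percept := env.perceive()) is not None:   # 1. perception
    action = decide(percept)                     # 2. decision-making
    env.apply(action)                            # 3. action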

Environment in AI:

An environment refers to everything external to the intelligent agent that the agent interacts
with. The environment provides the conditions and stimuli that the agent perceives through
sensors and responds to through actuators. It plays a crucial role in determining how an
agent behaves, as the agent's actions and decisions are based on the state of the
environment and the information it receives from it.

Types of environments in AI:

a-> Deterministic environment: A deterministic environment is one in which the outcome
of every action is entirely predictable. Once the agent takes an action, the next state of the
environment is determined with certainty, with no randomness or unexpected events.
Example: solving a maze.

b-> Stochastic environment: A stochastic environment is one where the outcome of an
agent's actions is uncertain or involves some level of randomness. In such environments,
the same action can lead to different results, and the agent must account for the
probability of various outcomes when making decisions. Example: an autonomous car in
traffic (a small code contrast with the deterministic case is sketched after this list of
environment types).
c-> Episodic environment: An episodic environment in the context of AI is one where an
agent's actions and decisions in one stage do not affect future stages. Each episode is
independent of the others, meaning the agent does not need to remember or consider past
actions or their consequences when deciding on current actions. Example: Image
Classification Task

d-> Sequential environment: A sequential environment is one where the outcome of an
agent's current action can influence future actions and their outcomes. In such
environments, decisions are not isolated; instead, each action is part of a series of steps
that collectively determine the agent's success in achieving its goals. This means that the
agent must consider the sequence of actions over time rather than evaluating each action
independently.

e-> Discrete environment: The environment consists of a limited set of distinct states or actions. For
example, a turn-based game like tic-tac-toe, where each move is discrete.

f-> Continuous environment: The environment has a range of possible states or actions. For example,
controlling a robot arm, where movements occur in continuous space.
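The small example below contrasts deterministic and stochastic transitions: the same action from the same state always yields the same next state in the first function, but may yield different ones in the second. The grid-world rules and the slip probability are invented purely for illustration.

import random

def deterministic_step(position, action):
    # Maze-like world: moving right always increases x by exactly 1.
    x, y = position
    return (x + 1, y) if action == "right" else (x, y)

def stochastic_step(position, action, slip_prob=0.2):
    # Traffic-like world: with probability slip_prob the move fails.
    x, y = position
    if action == "right" and random.random() > slip_prob:
        return (x + 1, y)
    return (x, y)   # slipped or blocked, state unchanged

print(deterministic_step((0, 0), "right"))   # always (1, 0)
print(stochastic_step((0, 0), "right"))      # usually (1, 0), sometimes (0, 0)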

Search Algorithm Terminologies:

• Search: Searching is a step-by-step procedure to solve a search problem
in a given search space. A search problem can have three main factors
(a minimal problem formulation in code is sketched after this list):

o Search Space: The set of all possible solutions
that a system may have.
o Start State: The state from which the agent begins the search.

o Goal state: The state in which the goal is achieved and the search terminates.

• Search tree: A tree representation of the search problem is called a search
tree. The root of the search tree is the node corresponding
to the initial state.

• Actions: A description of all the actions available to the agent.

• Transition model: A description of what each action does; it can be
represented as a transition model.

• Path Cost: A function that assigns a numeric cost to each path.

• Solution: An action sequence that leads from the start node to the
goal node.

• Optimal Solution: A solution that has the lowest path cost among all solutions.
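The sketch below shows one way the terminology above can map onto code: a search problem with a start state, goal test, actions, transition model and path cost, plus the node structure used to build a search tree. The tiny graph is a made-up example, not taken from the text.

GRAPH = {                      # search space as an adjacency list with step costs
    "S": {"A": 1, "B": 4},
    "A": {"G": 5},
    "B": {"G": 2},
    "G": {},
}

class SearchProblem:
    start_state = "S"

    def is_goal(self, state):              # goal test
        return state == "G"

    def actions(self, state):              # actions available in a state
        return list(GRAPH[state])

    def result(self, state, action):       # transition model
        return action

    def step_cost(self, state, action):    # contributes to the path cost
        return GRAPH[state][action]

class Node:
    # A node in the search tree: a state plus its parent and accumulated path cost.
    def __init__(self, state, parent=None, path_cost=0):
        self.state, self.parent, self.path_cost = state, parent, path_cost

problem = SearchProblem()
root = Node(problem.start_state)
print(problem.actions(root.state))   # -> ['A', 'B']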

Properties of Search Algorithms:

The following four properties of search algorithms are used to compare their
efficiency:

Completeness: A search algorithm is said to be complete if it is guaranteed to return a
solution whenever at least one solution exists for the given input.

Optimality: If the solution found by an algorithm is guaranteed to be the best
(lowest path cost) among all possible solutions, the algorithm is said to be
optimal.

Time Complexity: Time complexity is a measure of the time an algorithm takes to complete
its task.

Space Complexity: The maximum storage space required at any point during the
search, expressed in terms of the complexity of the problem.

Types of search algorithms


Based on the information available about the search problem, search algorithms can be classified into
uninformed (blind) search and informed (heuristic) search
algorithms.
Uninformed/Blind Search:
Uninformed search does not use any domain knowledge, such as the closeness or
location of the goal. It operates in a brute-force way, using only information
about how to traverse the tree and how to identify leaf and goal nodes. Because the
search tree is explored without any information about the search space beyond
the initial state, the operators, and the goal test, it is also called
blind search. It examines the nodes of the tree until it reaches the goal node.

It can be divided into six main types:

• Breadth-first search
• Uniform cost search
• Depth-first search
• Depth limited search
• Iterative deepening depth-first search
• Bidirectional Search
1.Breadth-first Search:
• Breadth-first search is the most common search strategy for traversing
a tree or graph. The algorithm searches breadthwise in a tree or graph,
which is why it is called breadth-first search.
• The BFS algorithm starts searching from the root node of the tree and
expands all successor nodes at the current level before moving to nodes
at the next level.
• The breadth-first search algorithm is an example of a general graph
search algorithm.
• Breadth-first search is implemented using a FIFO queue data structure
(a sketch appears at the end of this section).
Advantages:

• BFS will provide a solution if any solution exists.

• If there is more than one solution for a given problem, BFS will
find the minimal solution, i.e., the one requiring the fewest steps.
• It also helps in finding the shortest path to the goal, since it expands all
nodes at the same level before moving to nodes at
deeper levels.
• It is also easy to understand and implement.
Disadvantages:

• It requires a lot of memory, since each level of the tree must be saved in
memory in order to expand the next level.
• BFS needs a lot of time if the solution is far away from the root node.
• It can be a very inefficient approach for searching through deeply layered
spaces, as it needs to thoroughly explore all nodes at each level before
moving on to the next.

Example:
In the example tree (figure omitted here), we traverse from the root node S to the goal
node K using the BFS algorithm. BFS traverses the tree level by level, so it follows the
path indicated by the dotted arrow in the figure, and the traversal order will be:

S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of the BFS algorithm is given by the number
of nodes traversed by BFS until the shallowest goal node, where d = the depth of the
shallowest solution and b = the branching factor (the number of successors of each node):

T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)

Space Complexity: The space complexity of the BFS algorithm is given by the memory size of
the frontier, which is O(b^d).

Completeness: BFS is complete, which means if the shallowest goal node is at some
finite depth, then BFS will find a solution.

Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of
the node.
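A minimal sketch of breadth-first search using a FIFO queue, as described above. Since the original figure is not reproduced here, the example graph is an assumption chosen only to give the function something to search.

from collections import deque

def breadth_first_search(graph, start, goal):
    # Expand nodes level by level using a FIFO queue; return a path or None.
    frontier = deque([[start]])          # queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for successor in graph.get(node, []):
            if successor not in explored:
                explored.add(successor)
                frontier.append(path + [successor])
    return None

# Illustrative graph (not the exact tree from the omitted figure).
graph = {
    "S": ["A", "B", "C"],
    "A": ["D"], "B": ["G"], "C": ["H"],
    "D": ["E"], "G": ["F"], "H": ["I"],
    "E": [], "F": ["K"], "I": ["K"], "K": [],
}
print(breadth_first_search(graph, "S", "K"))   # -> ['S', 'B', 'G', 'F', 'K']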

2. Depth-first Search
• Depth-first search is a recursive algorithm for traversing a tree or graph
data structure.
• It is called depth-first search because it starts from the root node and
follows each path to its greatest depth before moving to the next
path.
• DFS uses a stack data structure for its implementation (a sketch appears at the end of this section).
• The process of the DFS algorithm is similar to that of the BFS algorithm.

Note: Backtracking is an algorithm technique for finding all possible solutions using
recursion.
Advantage:

• DFS requires very little memory, as it only needs to store the stack of
nodes on the path from the root node to the current node.
• It can reach the goal node faster than the BFS algorithm (if it
happens to traverse the right path).
• Only the route currently being explored needs to be kept in
memory at any one time, which saves space.
Disadvantage:

• There is a possibility that many states keep reoccurring, and there is
no guarantee of finding a solution.
• The DFS algorithm searches deep into the tree and may sometimes end up in
an infinite loop.
• The depth-first search (DFS) algorithm does not always find the shortest
path to a solution.

Example:
In the example search tree (figure omitted here), the flow of depth-first search
follows the order:

Root node ---> Left node ----> Right node.

It starts searching from root node S and traverses A, then B, then D and E. After
traversing E, it backtracks up the tree because E has no other successors and the goal node
has not yet been found. After backtracking it traverses node C and then G, where it
terminates because it has found the goal node.
Completeness: The DFS algorithm is complete within a finite state space, as it will
expand every node within a bounded search tree.

Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by
the algorithm, where m = the maximum depth of any node (which can be much larger than d,
the depth of the shallowest solution) and b = the branching factor:

T(b) = 1 + b + b^2 + b^3 + ... + b^m = O(b^m)

Space Complexity: The DFS algorithm needs to store only a single path from the root node,
hence the space complexity of DFS is equivalent to the size of the fringe set, which is
O(bm).

Optimality: The DFS search algorithm is non-optimal, as it may generate a large number of
steps or incur a high cost to reach the goal node.
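A minimal sketch of depth-first search using an explicit LIFO stack, matching the description above. The example graph is again an illustrative assumption, chosen so that the traversal order resembles S, A, B, D, E, C, G from the example.

def depth_first_search(graph, start, goal):
    # Follow each path to its greatest depth before backtracking.
    frontier = [[start]]                 # stack of paths (LIFO)
    explored = set()
    while frontier:
        path = frontier.pop()            # take the most recently added path
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        # Push successors in reverse so the leftmost child is explored first.
        for successor in reversed(graph.get(node, [])):
            if successor not in explored:
                frontier.append(path + [successor])
    return None

graph = {"S": ["A", "C"], "A": ["B", "D"], "B": [], "D": ["E"],
         "E": [], "C": ["G"], "G": []}
print(depth_first_search(graph, "S", "G"))   # -> ['S', 'C', 'G']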
Informed Search
Informed search algorithms use domain knowledge. In an informed search, problem
information is available which can guide the search. Informed search strategies can
find a solution more efficiently than an uninformed search strategy. Informed search
is also called a Heuristic search.

A heuristic is a technique that is not guaranteed to find the best solution, but
aims to find a good solution in a reasonable time.

Informed search can solve much more complex problems that could not be solved
otherwise.

An example of a problem solved with informed search algorithms is the traveling salesman problem.

1. Greedy Search
2. A* Search

A* Search Algorithm in Artificial Intelligence
A* is a powerful graph traversal and pathfinding algorithm widely used in artificial
intelligence and computer science. It is mainly used to find the shortest path between
two nodes in a graph, given the estimated cost of getting from the current node to
the destination node. The main advantage of the algorithm is its ability to provide an
optimal path by exploring the graph in a more informed way compared to traditional
search algorithms such as Dijkstra's algorithm.

The A* algorithm combines the advantages of two other search algorithms: Dijkstra's
algorithm and Greedy Best-First Search. Like Dijkstra's algorithm, A* ensures that the
path found is as short as possible, but does so more efficiently by directing its search
using a heuristic, similar to Greedy Best-First Search. A heuristic function, denoted
h(n), estimates the cost of getting from any given node n to the destination node.

The main idea of A* is to evaluate each node based on two parameters:

1. g(n): the actual cost of getting from the initial node to node n. It is the sum
of the edge costs along the path from the start node to n.
2. h(n): the heuristic cost (also known as the "estimated cost") from node n to the
destination node. This problem-specific heuristic function must be admissible, meaning
it never overestimates the actual cost of reaching the goal. The evaluation
function of node n is defined as f(n) = g(n) + h(n).
A* selects the node to be expanded next based on the lowest value of f(n),
preferring the nodes with the lowest estimated total cost to reach the goal. A minimal
sketch of the algorithm is given below.
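Below is a minimal sketch of A* on a small weighted graph, following the f(n) = g(n) + h(n) evaluation described above. The graph and the heuristic values are invented for illustration (the heuristic shown is admissible for this graph).

import heapq

def a_star(graph, h, start, goal):
    # Expand the node with the lowest f(n) = g(n) + h(n) first.
    frontier = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for successor, step_cost in graph.get(node, {}).items():
            new_g = g + step_cost
            if new_g < best_g.get(successor, float("inf")):
                best_g[successor] = new_g
                new_f = new_g + h[successor]
                heapq.heappush(frontier, (new_f, new_g, successor, path + [successor]))
    return None, float("inf")

graph = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 12}, "B": {"G": 3}, "G": {}}
h = {"S": 5, "A": 4, "B": 2, "G": 0}          # admissible estimates of cost to G
print(a_star(graph, h, "S", "G"))             # -> (['S', 'A', 'B', 'G'], 6)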