1] Artificial Intelligence (AI) is the ability of computer systems to
perform tasks typically requiring human intelligence, such as learning, problem-solving, reasoning,
and decision-making. It enables machines to mimic cognitive functions like perception, language
understanding, and interaction with the environment.
2] Benefits of AI:
1. Increased Efficiency: AI can automate repetitive tasks, improving speed and accuracy in
industries like manufacturing, healthcare, and finance.
2. Improved Decision-Making: AI analyzes large datasets quickly, providing insights that help
businesses and governments make informed decisions.
3. Cost Reduction: Automation through AI reduces operational costs, such as labor and time,
while improving productivity.
Risks of AI:
2. Security Threats: AI can be exploited for cyberattacks, deepfakes, and other malicious
activities, leading to privacy and security concerns.
3] In AI, an agent is an entity that perceives its environment through sensors and acts upon it using
actuators to achieve a specific goal. It takes input from its surroundings, processes the information,
and makes decisions to perform actions that maximize its performance or success in completing
tasks. Examples of agents include robots, self-driving cars, and software systems like virtual
assistants.
4] 1. Sensors: These are used by the agent to perceive the environment. They gather information,
like a robot's camera or a software program's data input.
2. Actuators: These enable the agent to take actions in the environment, for example a robot's
wheels or arms, or, in software, actions like sending a message or making a decision.
3. Perception: This is the processing of sensor data to understand what’s happening in the
environment. The agent interprets the information it receives.
4. Decision-Making: The agent makes decisions or selects actions based on the perceived
information to achieve its goals. This can involve simple rules or complex algorithms.
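A minimal Python sketch of this sense-think-act cycle is given below; the percepts, rules, and action names are invented purely for illustration and do not come from any particular system.

def simple_reflex_agent(percept):
    # Decision-making: map the perceived situation directly to an action.
    if percept == "obstacle_ahead":
        return "turn_left"
    return "move_forward"

def run_agent(percept_stream):
    actions = []
    for percept in percept_stream:             # sensors deliver percepts
        action = simple_reflex_agent(percept)  # perception + decision-making
        actions.append(action)                 # actuators carry out the action
    return actions

print(run_agent(["clear", "obstacle_ahead", "clear"]))
# ['move_forward', 'turn_left', 'move_forward']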
5]
6] Time Complexity:
Time complexity of a search algorithm refers to the amount of time it takes to find a solution (or
explore all possible states) based on the size of the input or search space. It is usually expressed in
terms of Big O notation (e.g., O(n), O(b^d)), where:
b is the branching factor (the maximum number of successors of any node), and
d is the depth of the solution (the number of steps from the root to the goal).
Space Complexity:
Space complexity refers to the amount of memory or storage the algorithm requires during its
execution, including data structures such as queues, stacks, and the number of nodes stored at a
time.
For example:
Breadth-First Search (BFS) has time complexity O(b^d) and space complexity O(b^d), as it
stores all nodes at each level.
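A small Python sketch of BFS over a graph is shown below; the neighbors function and the toy graph are hypothetical, and the FIFO queue used as the frontier is what drives the O(b^d) space cost.

from collections import deque

def bfs(start, goal, neighbors):
    # Breadth-first search: expand nodes level by level using a FIFO queue.
    frontier = deque([start])
    parent = {start: None}             # doubles as the visited set
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []                  # reconstruct the path back to the start
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path))
        for nxt in neighbors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}   # toy example
print(bfs("A", "D", lambda s: graph[s]))                     # ['A', 'B', 'D']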
7]
8] A heuristic function in AI is a method used to estimate the cost or distance from the current state
to the goal state in search algorithms. It helps guide the search process by prioritizing paths that are
more likely to lead to the goal, making the search more efficient. For example, in a pathfinding
problem, a heuristic function could estimate the distance to the destination, allowing the algorithm
to choose the shortest or best path. It’s often used in informed search algorithms like A*.
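For instance, a Manhattan-distance heuristic for grid pathfinding can be written in a few lines of Python; the coordinates in the example call are illustrative.

def manhattan_heuristic(state, goal):
    # Estimated remaining cost: number of grid steps if there were no walls.
    # It never overestimates the true cost, so it is admissible for A*.
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan_heuristic((0, 0), (4, 4)))   # 8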
9] • S0: The initial state, which specifies how the game is set up at the start.
• RESULT(s, a): The transition model, which defines the state resulting from taking action a in
state s.
• IS-TERMINAL(s): A terminal test, which is true when the game is over and false otherwise. States
where the game has ended are called terminal states.
• UTILITY(s, p): A utility function (also called an objective function or payoff function), which
defines the final numeric value to player p when the game ends in terminal state s.
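A minimax value computation can be sketched directly on top of this formulation. The sketch below additionally assumes helper functions ACTIONS(s) (legal moves in s) and TO_MOVE(s) (whose turn it is), which a concrete game would supply; the TinyGame tree and its utility values are made up for illustration.

def minimax_value(game, state, player):
    # Terminal states are scored by the utility function.
    if game.IS_TERMINAL(state):
        return game.UTILITY(state, player)
    values = [minimax_value(game, game.RESULT(state, a), player)
              for a in game.ACTIONS(state)]
    # Maximize on our turn; assume the opponent minimizes our utility.
    return max(values) if game.TO_MOVE(state) == player else min(values)

class TinyGame:
    TREE = {"S0": ["L", "R"], "L": ["LL", "LR"], "R": ["RL", "RR"]}
    UTIL = {"LL": 3, "LR": 5, "RL": 2, "RR": 9}     # values from MAX's viewpoint
    def ACTIONS(self, s): return self.TREE.get(s, [])
    def RESULT(self, s, a): return a                # actions are named after successor states
    def IS_TERMINAL(self, s): return s in self.UTIL
    def UTILITY(self, s, p): return self.UTIL[s]
    def TO_MOVE(self, s): return "MAX" if s == "S0" else "MIN"

print(minimax_value(TinyGame(), "S0", "MAX"))   # 3: MAX plays L, MIN then replies LL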
10]
1. Initial State:
The starting point of the search, representing the problem in its initial configuration.
Example: In a maze-solving problem, the initial state would be the position of the agent at
the start of the maze.
2. Actions:
The set of all possible moves or actions that the agent can take from a given state.
Example: In the maze, possible actions could be "move up," "move down," "move left," or
"move right."
3. Transition Model:
Defines the result of applying an action to a state. It maps a current state and action to the
next state.
Example: If the agent moves "up" from its current position, the transition model will show
the new position of the agent in the maze.
4. Goal State:
The desired final state that the algorithm is searching for. The goal test checks if the current
state is the goal.
Example: In the maze, the goal state is reaching the exit of the maze.
5. Path Cost:
A cost function that assigns a numerical value to the cost of a path from the initial state to
the current state. Typically, it accumulates the cost of individual actions.
Example: In a maze-solving problem, each move could have a cost (e.g., 1 for each step), and
the path cost is the total number of steps taken.
6. Search Strategy:
The strategy determines the order in which the search explores different states. It can be
uninformed (blind) or informed (heuristic-based).
Example: Breadth-First Search (uninformed) explores all possible paths level by level, while
A* Search (informed) uses a heuristic to explore the paths most likely to lead to the goal.
Problem: Find the shortest path in a simple grid maze from the top-left corner to the bottom-right
corner.
Key Components:
Initial State: The agent at the top-left corner, (0, 0).
Actions: Move up, down, left, or right.
Transition Model: Moving from (0, 0) to (0, 1) results in the new state (0, 1).
Goal State: The position at the bottom-right corner, say (4, 4).
Path Cost: Each move has a cost of 1, so a path of 5 moves has a path cost of 5.
Search Strategy: If using A* search, it would prioritize paths based on the cost so far plus the
estimated distance from the current position to the goal.
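A compact A* sketch for this maze is shown below; it assumes an obstacle-free 5 x 5 grid and uses the Manhattan distance as the heuristic.

import heapq

def astar_grid(start, goal, size=5):
    # f(n) = g(n) + h(n): path cost so far plus Manhattan estimate to the goal.
    def h(s):
        return abs(s[0] - goal[0]) + abs(s[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]    # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        x, y = state
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:   # actions
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size:              # transition model
                ng = g + 1                                             # path cost
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None, float("inf")

path, cost = astar_grid((0, 0), (4, 4))
print(cost)   # 8 moves on an obstacle-free 5 x 5 grid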
12] Comparison of BFS (non-recursive) and DFS (recursive):
BFS:
• Non-recursive; implemented using a queue (FIFO).
• Higher time and space complexity, since the frontier can hold every node of a level.
• Complete, and optimal when all step costs are equal.
• No backtracking; the tree is traversed level by level (breadth-wise).
• Less memory-efficient.
DFS:
• Recursive; implemented using a stack (or the call stack).
• Lower space complexity.
• Neither complete nor optimal in general, since it can run down an infinite path.
• Backtracking occurs at dead ends; the tree is traversed depth-wise.
• More memory-efficient.
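A recursive DFS sketch is given here as a counterpart to the BFS code earlier; the toy graph is the same illustrative one.

def dfs(state, goal, neighbors, visited=None):
    # Depth-first search: follow one branch as deep as possible, then backtrack.
    if visited is None:
        visited = set()                  # prevents infinite loops on cyclic graphs
    visited.add(state)
    if state == goal:
        return [state]
    for nxt in neighbors(state):
        if nxt not in visited:
            path = dfs(nxt, goal, neighbors, visited)
            if path is not None:
                return [state] + path
    return None                          # dead end: backtrack

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dfs("A", "D", lambda s: graph[s]))   # ['A', 'B', 'D']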
13] A*: Evaluates nodes by f(n) = g(n) + h(n), the path cost so far plus the heuristic estimate; it is
complete and optimal when the heuristic is admissible.
GREEDY: Evaluates nodes by the heuristic h(n) alone; it is incomplete and not optimal.
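A tiny sketch of the only difference between the two evaluation functions; the g and h values in the example call are arbitrary.

def greedy_f(g, h):
    # Greedy best-first search ignores the cost already paid.
    return h

def astar_f(g, h):
    # A* adds the cost so far to the heuristic estimate.
    return g + h

# A node reached after 3 steps whose heuristic estimates 2 steps remain:
print(greedy_f(3, 2), astar_f(3, 2))   # 2 5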
14] pruning
15] Map colouring:
A Constraint Satisfaction Problem (CSP) is a mathematical problem defined by a set of variables, each
with a domain of possible values, and a set of constraints that specify allowable combinations of
values. The goal of a CSP is to assign values to the variables such that all constraints are satisfied.
1. Variables: These represent the entities involved in the problem. For example, in a scheduling
problem, variables could be timeslots.
2. Domains: Each variable has a domain, which is the set of possible values it can take. For
example, a variable representing a day of the week might have a domain of {Monday,
Tuesday, ... Sunday}.
3. Constraints: These define relationships between variables and limit the combinations of
values that can be assigned. For example, a constraint could be that two meetings cannot
happen at the same time.
Types of CSPs:
Binary CSP: Involves constraints between pairs of variables (e.g., graph coloring, where no
two adjacent nodes can have the same color).
N-ary CSP: Involves constraints between more than two variables (e.g., Sudoku puzzles
where rows, columns, and boxes must contain unique digits).
Solution to a CSP:
A solution is an assignment of values to variables such that all constraints are satisfied. A partial
solution satisfies only a subset of constraints, while a complete solution satisfies all constraints.
Arc Consistency: Ensures that for every pair of variables, each value in the domain of one variable
has a corresponding allowable value in the domain of the other, pruning the search space.
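A small sketch of arc-consistency enforcement in the style of AC-3; the two-region example and the not-equal constraint are illustrative.

from collections import deque

def revise(domains, xi, xj, constraint):
    # Remove values of xi that have no consistent partner value in xj.
    removed = False
    for v in list(domains[xi]):
        if not any(constraint(v, w) for w in domains[xj]):
            domains[xi].remove(v)
            removed = True
    return removed

def ac3(domains, arcs, constraint):
    # Process arcs until no more domain values can be pruned.
    queue = deque(arcs)
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, constraint):
            if not domains[xi]:
                return False             # an empty domain means no solution
            # xi's domain shrank, so re-check every arc pointing at xi
            queue.extend((xk, xm) for (xk, xm) in arcs if xm == xi and xk != xj)
    return True

domains = {"A": {"red"}, "B": {"red", "green"}}
arcs = [("A", "B"), ("B", "A")]
print(ac3(domains, arcs, lambda a, b: a != b), domains)
# True {'A': {'red'}, 'B': {'green'}}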
Applications:
Sudoku: Each cell is a variable, domains are the possible digits, and constraints are the rules
of Sudoku.
Map Coloring: Assigning colors to regions on a map such that no adjacent regions share the
same color.
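To tie the pieces together, here is a minimal backtracking search for the map-colouring CSP; the map used (Australia's mainland regions) is a common textbook example, and the three colours form the domain of every variable.

NEIGHBORS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"],
    "Q": ["NT", "SA", "NSW"], "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"],
}
COLOURS = ["red", "green", "blue"]                # domain of every variable

def consistent(region, colour, assignment):
    # Constraint: adjacent regions must not share a colour.
    return all(assignment.get(nb) != colour for nb in NEIGHBORS[region])

def backtrack(assignment):
    if len(assignment) == len(NEIGHBORS):         # complete assignment found
        return assignment
    region = next(r for r in NEIGHBORS if r not in assignment)   # pick a variable
    for colour in COLOURS:                        # try each value in its domain
        if consistent(region, colour, assignment):
            result = backtrack({**assignment, region: colour})
            if result is not None:
                return result
    return None                                   # no value works: backtrack

print(backtrack({}))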