AI Midsem
Prolog
Declarative Language: Focuses on what the problem is, not how to solve it. Uses facts and rules to
represent knowledge.
Pattern Matching & Backtracking: Excellent for symbolic reasoning, natural language processing, and
logic-based tasks. Uses backtracking to explore all possible solutions.
LISP
Symbolic Expression: Uses lists for both code and data representation, allowing powerful
metaprogramming capabilities.
Dynamic Typing & Garbage Collection: Manages data types at runtime and includes automatic memory
management.
Applications: Widely used in AI research, natural language processing, and symbolic reasoning.
Agent in AI
Definition:
An agent in AI is an entity that perceives its environment through sensors and acts upon that
environment through actuators. It operates autonomously, making decisions to achieve specific
goals based on its perceptions and knowledge.
Types of Agents:
1. Simple Reflex Agents:
Act solely based on the current percept, ignoring the rest of the percept history.
Example: A thermostat that turns on the heater when the temperature drops below a certain
threshold.
2. Model-Based Reflex Agents:
Maintain an internal state to keep track of the world, allowing them to handle partially observable
environments.
Example: A robotic vacuum cleaner that keeps track of cleaned and uncleaned areas.
3. Goal-Based Agents:
Consider goals and take actions to achieve them, requiring a way to measure progress.
Example: A GPS navigation system that plans a route to reach a specific destination.
4. Utility-Based Agents:
Choose actions by maximizing a utility function that measures how desirable each resulting state
is, not just whether a goal is met.
Example: A self-driving car that picks the safest and fastest route among many feasible routes.
5. Learning Agents:
Improve their performance over time by learning from past experiences and feedback.
Example: An AI playing chess that improves its strategy by learning from past games.
Components of an Agent:
1. Perception: Sensing the environment using sensors (e.g., cameras, microphones, temperature
sensors).
2. Decision-Making: Processing the percepts to make decisions based on predefined rules, models,
or learned knowledge.
3. Action: Acting upon the environment using actuators (e.g., motors, displays, speakers).
4. Learning (Optional): Adapting and improving behavior based on experiences.
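
As a concrete illustration of these components, here is a minimal Python sketch of the
perceive-decide-act loop for the thermostat example above; the class name, threshold value, and
action strings are illustrative assumptions, not from any standard library.

    # Minimal simple reflex agent: a thermostat (illustrative sketch).
    class ThermostatAgent:
        def __init__(self, threshold=20.0):
            self.threshold = threshold  # degrees Celsius (assumed unit)

        def perceive(self, temperature):
            # Perception: the current percept is the measured temperature.
            return temperature

        def decide(self, percept):
            # Decision-making: a condition-action rule on the current percept only.
            return "heater_on" if percept < self.threshold else "heater_off"

        def act(self, action):
            # Action: here we just report it; a real agent would drive an actuator.
            print(action)

    agent = ThermostatAgent()
    agent.act(agent.decide(agent.perceive(18.5)))  # prints "heater_on"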
Applications:
Virtual assistants, robotic vacuum cleaners, GPS navigation systems, game-playing programs, and
autonomous vehicles.
History of AI
Early Beginnings:
Ancient Myths and Philosophies: Concepts of artificial beings with intelligence date back to
ancient civilizations with myths of automata and mechanical men.
Automata and Mechanical Devices: 18th and 19th centuries saw inventors creating mechanical
devices like Jacques de Vaucanson's Digesting Duck.
Theoretical Foundations:
Alan Turing: Proposed the concept of a "universal machine" (1936) and the Turing Test (1950) to
determine whether a machine can exhibit intelligent behavior.
John McCarthy (1956): Coined the term "Artificial Intelligence" during the Dartmouth Conference,
marking AI's official birth as a field of study.
Logic Theorist (1956): Developed by Allen Newell and Herbert A. Simon, proved mathematical
theorems.
LISP (1958): Created by John McCarthy, a primary language for AI research.
General Problem Solver (1957): Aimed to solve a wide range of problems using a symbolic
approach.
SHRDLU (1968): Demonstrated natural language understanding by interacting with a virtual world
of blocks.
Expert Systems (1970s): Mimicked human expertise in specific domains, e.g., MYCIN for
diagnosing bacterial infections.
AI Winters (1970s-1980s):
Funding Cuts and Challenges: Over-optimism led to periods of decreased funding and interest,
known as "AI winters."
Early AI Systems' Limitations: Faced challenges in processing power and handling real-world
complexity.
AI Resurgence (1990s-Present):
Machine Learning and Data: Availability of large datasets and increased computational power
revitalized AI research.
Deep Learning (2000s): Enabled significant advancements in image recognition, natural language
processing, and more.
AI Applications: Healthcare, finance, autonomous vehicles, robotics, and entertainment.
Technologies like virtual assistants, recommendation systems, and self-driving cars emerged.
Future Prospects:
Ethics and Regulation: Ensuring AI systems are fair, accountable, and aligned with human values
is a key focus.
Advancements: Research continues to address current limitations and explore new frontiers like
general AI, explainable AI, and quantum AI.
Depth-Limited Search (DLS)
Overview:
Traversal Method: Similar to Depth-First Search (DFS) but with a predetermined limit on the depth
to which it will search.
Depth Limit: The maximum depth level that the algorithm will explore.
Steps:
1. Start at the root node (or any arbitrary node) with an initial depth of 0.
2. Push the starting node onto the stack and mark it as visited.
3. Pop a node from the stack.
4. If the depth of the popped node is less than the depth limit, explore its unvisited neighbors.
5. For each unvisited neighbor, push it onto the stack, mark it as visited, and increase the depth.
6. Repeat steps 3-5 until the goal is found or the stack is empty.
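
A minimal Python sketch of these steps, assuming the graph is given as an adjacency dictionary;
the graph and node names are illustrative:

    def depth_limited_search(graph, start, goal, limit):
        # Stack holds (node, depth) pairs; start at depth 0.
        stack = [(start, 0)]
        visited = {start}
        while stack:
            node, depth = stack.pop()
            if node == goal:
                return node  # goal found within the depth limit
            if depth < limit:
                # Explore unvisited neighbors one level deeper.
                for neighbor in graph.get(node, []):
                    if neighbor not in visited:
                        visited.add(neighbor)
                        stack.append((neighbor, depth + 1))
        return None  # no goal found within the depth limit

    graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
    print(depth_limited_search(graph, "A", "D", limit=2))  # "D"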
Complexity:
Time Complexity: O(b^l), where b is the branching factor and l is the depth limit. This is because it
explores up to b branches at each depth level up to l.
Space Complexity: O(bl), since the stack stores at most b nodes at each of the l depth levels along
the current path.
Applications:
Useful in scenarios where the search space is large or infinite, and a solution is expected within a
certain depth.
Uniform Cost Search (UCS)
Overview:
Traversal Method: Expands the least-cost node first. Optimal whenever all step costs are
non-negative, so the cost of a path never decreases as it grows.
Data Structure Used: Utilizes a priority queue to keep track of nodes, prioritizing those with the
lowest cost.
Steps:
1. Start at the root node (or any arbitrary node) with a path cost of 0.
2. Enqueue the starting node into the priority queue with its path cost.
3. Dequeue the node with the lowest path cost from the priority queue.
4. If the dequeued node is the goal node, return the path and its cost.
5. For each neighbor of the dequeued node, calculate the total path cost to reach that neighbor.
6. Enqueue the neighbors into the priority queue with their respective path costs.
7. Repeat steps 3-6 until the goal node is reached or the priority queue is empty.
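
A minimal Python sketch of these steps, assuming graph maps each node to a list of
(neighbor, step_cost) pairs; all names are illustrative:

    import heapq

    def uniform_cost_search(graph, start, goal):
        # Priority queue of (path_cost, node, path); lowest cost dequeued first.
        frontier = [(0, start, [start])]
        best_cost = {start: 0}
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, cost  # first dequeue of the goal is optimal
            for neighbor, step_cost in graph.get(node, []):
                new_cost = cost + step_cost
                # Only enqueue if this is the cheapest known path to neighbor.
                if new_cost < best_cost.get(neighbor, float("inf")):
                    best_cost[neighbor] = new_cost
                    heapq.heappush(frontier, (new_cost, neighbor, path + [neighbor]))
        return None, float("inf")

    graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
    print(uniform_cost_search(graph, "A", "C"))  # (['A', 'B', 'C'], 2)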
Complexity:
Time Complexity: O(b^(C*/ε)), where C* is the cost of the optimal solution, b is the branching
factor, and ε is the minimum cost of any action. This is because UCS explores paths in increasing
order of cost.
Space Complexity: O(b^(C*/ε)), since all nodes must be stored in memory to find the goal.
Applications:
Optimal for finding the shortest path in graphs where the cost to reach a node is known and non-
negative.
Suitable for scenarios where the path cost varies, such as in road networks or transportation
systems.
Problem Reduction in AI
Overview:
Problem Reduction: A problem-solving strategy that replaces a complex problem with simpler
subproblems whose solutions can be combined into a solution for the original problem.
Key Concepts:
1. Divide and Conquer:
The main idea is to divide a problem into smaller subproblems, solve each subproblem, and
combine their solutions.
Example: Sorting algorithms like Merge Sort and Quick Sort use divide and conquer to sort
elements.
2. AND/OR Graphs:
Represent problem reduction explicitly: an AND node requires all of its subproblems to be
solved, while an OR node requires only one of its alternatives to succeed.
3. Recursion:
Recursive algorithms naturally align with problem reduction, where a problem is solved by
recursively solving its subproblems.
Example: The Tower of Hanoi problem, where the task is broken down into moving smaller
subsets of disks.
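
As a concrete sketch of this recursive reduction, here is the Tower of Hanoi in Python; the peg
labels are illustrative:

    def hanoi(n, source, target, spare):
        # Reduce the problem: move n-1 disks aside, move the largest, move them back.
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)
        print(f"move disk {n} from {source} to {target}")
        hanoi(n - 1, spare, target, source)

    hanoi(3, "A", "C", "B")  # prints the 7 moves for 3 disks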
Applications:
Search Algorithms: Problem reduction is used in search algorithms to explore solution spaces by
breaking down complex states into simpler states.
Optimization Problems: Many optimization problems, such as dynamic programming, involve
breaking down a problem into overlapping subproblems.
Expert Systems: In AI expert systems, problem reduction is used to diagnose issues by narrowing
down potential causes through a series of smaller diagnostic tests.
Advantages:
Manageability: Breaking down complex problems makes them easier to understand, solve, and
manage.
Parallelism: Subproblems can often be solved in parallel, improving efficiency.
Reusability: Solutions to subproblems can be reused in different contexts, saving computation
time.
Example:
Consider the problem of solving a jigsaw puzzle. The problem can be reduced by:
1. Separating the edge pieces from the interior pieces.
2. Grouping the remaining pieces by color or pattern.
3. Assembling the border from the edge pieces.
4. Solving each section independently and then combining them to complete the puzzle.
Means-End Analysis
Overview:
Means-End Analysis (MEA): A problem-solving technique that repeatedly compares the current
state with the goal state and applies operators to reduce the difference between them.
Key Concepts:
1. Current State and Goal State:
The current state is the situation as it stands; the goal state is the desired situation or condition
that needs to be achieved.
2. Operators:
Operators are actions or steps that can be taken to transform the current state into the goal
state.
Each operator has preconditions that must be met before it can be applied.
3. Difference Reduction:
MEA focuses on reducing the difference between the current state and the goal state by
applying the most appropriate operators.
Steps:
1. Identify the Difference:
Determine the differences between the current state and the goal state.
2. Select an Operator:
Choose the operator that best reduces the identified difference.
3. Apply the Operator:
Apply the selected operator to move the current state closer to the goal state; if its
preconditions are not met, treat them as subgoals and solve them first.
4. Evaluate the New State:
Compare the resulting state with the goal state.
5. Repeat:
Repeat the process of identifying differences and applying operators until the goal state is
achieved.
Applications:
Planning Systems: MEA is used in AI planning systems to generate plans that achieve specific
goals by breaking down tasks into smaller, manageable steps.
Robotics: MEA helps robots navigate and perform tasks by determining the actions needed to
reach a target location or complete a task.
Problem Solving: MEA is applied in problem-solving scenarios where a series of steps are required
to reach a solution, such as puzzle-solving and game-playing.
Example:
Consider a robot that needs to move from point A to point B in a grid:
1. Difference: Robot's current location (A) is different from the target location (B).
2. Operators: Move up, move down, move left, move right.
3. Apply Operator: If point B is to the right of point A, apply the "move right" operator.
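
A minimal Python sketch of this grid example, treating the coordinate gap between the current
and goal positions as the difference and each move as an operator that reduces it; the grid
representation is an assumption:

    def means_end_analysis(current, goal):
        # Repeatedly apply the operator that reduces the remaining difference.
        path = []
        x, y = current
        gx, gy = goal
        while (x, y) != (gx, gy):
            # Identify the difference and pick an operator that shrinks it.
            if x < gx:
                x += 1; path.append("move right")
            elif x > gx:
                x -= 1; path.append("move left")
            elif y < gy:
                y += 1; path.append("move up")
            else:
                y -= 1; path.append("move down")
        return path

    print(means_end_analysis((0, 0), (2, 1)))
    # ['move right', 'move right', 'move up']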
Credit Assignment in AI
Overview:
Credit Assignment: A problem in reinforcement learning and neural networks where the goal is to
determine which actions or components contribute to achieving a desired outcome. It involves
assigning credit or blame to actions or decisions based on their impact on the final result.
Key Concepts:
1. Temporal Credit Assignment:
Addresses the challenge of determining which actions taken over time are responsible for the
eventual outcome.
2. Structural Credit Assignment:
Focuses on identifying which components (e.g., neurons in a neural network) are responsible for
a specific outcome.
Example: In a neural network, determining which weights and biases contributed to the error in
prediction.
Methods:
1. Reinforcement Learning:
Uses reward signals to assign credit to actions that lead to positive outcomes and blame to
actions that lead to negative outcomes.
Techniques like Q-learning and Policy Gradient Methods help address the credit assignment
problem by updating the value of actions based on their outcomes.
2. Backpropagation:
A method used in training neural networks to assign credit to individual weights and biases
based on their contribution to the error.
The error is propagated backward through the network, allowing the optimization algorithm to
adjust the weights and biases to minimize the error.
3. Eligibility Traces:
Used in reinforcement learning to assign credit to actions based on both their immediate and
delayed effects.
Combines ideas from both temporal and structural credit assignment by keeping a trace of
which actions and states have been visited and adjusting their values accordingly.
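
As a concrete illustration of temporal credit assignment, here is a minimal tabular Q-learning
update in Python; the states, actions, learning rate, and discount factor are illustrative
assumptions:

    from collections import defaultdict

    # Q-table: maps (state, action) to an estimated value.
    Q = defaultdict(float)
    alpha, gamma = 0.1, 0.9  # learning rate and discount factor (assumed values)

    def q_update(state, action, reward, next_state, actions):
        # Credit the action with its immediate reward plus discounted future value.
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

    # One illustrative transition: action "right" in state 0 earns reward 1.
    q_update(state=0, action="right", reward=1.0, next_state=1, actions=["left", "right"])
    print(Q[(0, "right")])  # 0.1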
Applications:
Game Playing: Identifying which moves or strategies led to winning or losing a game.
Robotics: Determining which actions taken by a robot contributed to successfully completing a
task.
Neural Network Training: Adjusting the weights and biases in a neural network based on their
contribution to the error in prediction.
Example:
Consider a reinforcement learning agent playing a game of chess:
1. The agent receives a reward signal based on the game's outcome (win, lose, or draw).
2. Temporal credit assignment helps the agent identify which moves throughout the game contributed
to the final result.
3. Structural credit assignment helps the neural network adjust the weights and biases responsible for
making those moves.
British Museum Algorithm
Overview:
British Museum Algorithm: A metaphorical term used to describe a brute-force search method.
The name comes from the idea of searching for a specific artifact in the vast collection of the British
Museum by examining each item one by one.
Key Concepts:
1. Brute-Force Search: Involves systematically checking all possible solutions to find the correct one.
2. Exhaustive Search: Another term for brute-force search, where every possible option is explored
until the solution is found.
Steps:
1. Generate All Possible Solutions: List all potential solutions to the problem.
2. Evaluate Each Solution: Check each solution to see if it meets the criteria or solves the problem.
3. Select the Correct Solution: Once the correct solution is found, stop the search.
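
A minimal generate-and-test sketch of these steps in Python; the candidate space and goal test
are illustrative:

    def british_museum_search(candidates, is_solution):
        # Examine every candidate one by one, like checking each museum item.
        for candidate in candidates:
            if is_solution(candidate):
                return candidate  # stop as soon as the correct solution is found
        return None

    # Illustrative use: find a number whose square is 49.
    print(british_museum_search(range(100), lambda n: n * n == 49))  # 7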
Complexity:
Time Complexity: O(n!) in the worst case, where n is the number of elements, because for
permutation-style problems every possible ordering of the elements may be examined.
Space Complexity: O(n), as it requires storing the elements being checked.
Applications:
Suitable for small problem spaces where the number of possible solutions is manageable.
Exhaustive Search
Overview:
Exhaustive Search: A problem-solving technique that involves exploring all possible solutions to
find the best one. It is synonymous with brute-force search.
Key Concepts:
1. Complete Enumeration: Every candidate solution is generated and examined, so no possibility is
missed.
2. Optimal Solution: Guarantees finding the best solution, as every option is evaluated.
Steps:
1. Generate All Possible Solutions: Enumerate every candidate solution to the problem.
2. Evaluate Each Solution: Score each candidate against the problem's objective.
3. Select the Best Solution: Choose the solution that best satisfies the problem requirements.
Complexity:
Time Complexity: O(n^k), where n is the number of elements and k is the number of positions to
fill. This is because every combination of elements is considered.
Applications:
Suitable for problems with small solution spaces or when an optimal solution is required.
Used in combinatorial problems, such as the traveling salesman problem, where all possible routes
are evaluated to find the shortest path.
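
A minimal brute-force sketch for the traveling salesman problem in Python, enumerating every
route with itertools.permutations; the distance matrix is illustrative:

    from itertools import permutations

    def brute_force_tsp(dist):
        # dist[i][j] is the travel cost from city i to city j (illustrative data).
        cities = range(1, len(dist))  # fix city 0 as the start to avoid duplicates
        best_route, best_cost = None, float("inf")
        for perm in permutations(cities):
            route = (0,) + perm + (0,)
            cost = sum(dist[a][b] for a, b in zip(route, route[1:]))
            if cost < best_cost:
                best_route, best_cost = route, cost
        return best_route, best_cost

    dist = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]
    print(brute_force_tsp(dist))  # ((0, 1, 2, 0), 17)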