
AI Midsem

Prolog
Declarative Language: Focuses on what the problem is, not how to solve it. Uses facts and rules to
represent knowledge.

Pattern Matching & Backtracking: Excellent for symbolic reasoning, natural language processing, and
logic-based tasks. Uses backtracking to explore all possible solutions.

Examples: Suitable for expert systems and solving logic puzzles.

LISP

Symbolic Expressions (S-expressions): Uses lists for both code and data representation, allowing
powerful metaprogramming capabilities (code can be manipulated as data).

Dynamic Typing & Garbage Collection: Manages data types at runtime and includes automatic memory
management.

Macros: Enables creation of new syntactic constructs, extending the language.

Applications: Widely used in AI research, natural language processing, and symbolic reasoning.

Agent in AI

Definition:

An agent in AI is an entity that perceives its environment through sensors and acts upon that
environment through actuators. It operates autonomously, making decisions to achieve specific
goals based on its perceptions and knowledge.

Types of Agents:

1. Simple Reflex Agents:

Act solely based on the current percept, ignoring the rest of the percept history.

Example: A thermostat that turns on the heater when the temperature drops below a certain
threshold.

2. Model-Based Reflex Agents:

Maintain an internal state to keep track of the world, allowing them to handle partially observable
environments.
Example: A robotic vacuum cleaner that keeps track of cleaned and uncleaned areas.
3. Goal-Based Agents:

Consider goals and take actions to achieve them, requiring a way to measure progress.
Example: A GPS navigation system that plans a route to reach a specific destination.

4. Utility-Based Agents:

Evaluate actions based on a utility function to maximize a certain measure of performance.
Example: An autonomous trading system that makes decisions to maximize financial gain.

5. Learning Agents:

Improve their performance over time by learning from their experiences.

Example: An AI playing chess that improves its strategy by learning from past games.

Components of an Agent:

1. Perception: Sensing the environment using sensors (e.g., cameras, microphones, temperature
sensors).
2. Decision-Making: Processing the percepts to make decisions based on predefined rules, models,
or learned knowledge.

3. Action: Acting upon the environment using actuators (e.g., motors, displays, speakers).
4. Learning (Optional): Adapting and improving behavior based on experiences.
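As a minimal illustration of the perception, decision-making, and action components, here is a sketch of the thermostat-style simple reflex agent mentioned above (the class name and threshold value are assumptions for the example):

```python
class ThermostatAgent:
    """Minimal simple reflex agent: percept -> condition-action rule -> action."""

    def __init__(self, threshold=20.0):
        self.threshold = threshold  # temperature below which the heater turns on

    def perceive_and_act(self, temperature):
        # Decision-making uses only the current percept, not percept history.
        return "heater on" if temperature < self.threshold else "heater off"

agent = ThermostatAgent(threshold=20.0)
print(agent.perceive_and_act(15.0))  # heater on
print(agent.perceive_and_act(25.0))  # heater off
```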

Applications:

Autonomous vehicles (self-driving cars)
Personal assistants (Siri, Alexa)
Robotics (industrial robots, service robots)
Game AI (NPCs in video games)
Smart home systems (automated lighting, security)

History of AI
Early Beginnings:

Ancient Myths and Philosophies: Concepts of artificial beings with intelligence date back to
ancient civilizations with myths of automata and mechanical men.
Automata and Mechanical Devices: 18th and 19th centuries saw inventors creating mechanical
devices like Jacques de Vaucanson's Digesting Duck.

Theoretical Foundations:

Alan Turing (1936, 1950): Proposed the concept of a "universal machine" in 1936 and, in
1950, the Turing Test to determine machine intelligence.
John McCarthy (1956): Coined the term "Artificial Intelligence" during the Dartmouth Conference,
marking AI's official birth as a field of study.

Early AI Research (1950s-1960s):

Logic Theorist (1956): Developed by Allen Newell and Herbert A. Simon, proved mathematical
theorems.
LISP (1958): Created by John McCarthy, a primary language for AI research.
General Problem Solver (1957): Aimed to solve a wide range of problems using a symbolic
approach.

The Rise of AI (1960s-1970s):

SHRDLU (1968): Demonstrated natural language understanding by interacting with a virtual world
of blocks.
Expert Systems (1970s): Mimicked human expertise in specific domains, e.g., MYCIN for
diagnosing bacterial infections.

AI Winters (1970s-1980s):

Funding Cuts and Challenges: Over-optimism led to periods of decreased funding and interest,
known as "AI winters."
Early AI Systems' Limitations: Faced challenges in processing power and handling real-world
complexity.

Revival and Modern AI (1990s-Present):

Machine Learning and Data: Availability of large datasets and increased computational power
revitalized AI research.
Deep Learning (2000s): Enabled significant advancements in image recognition, natural language
processing, and more.
AI Applications: Healthcare, finance, autonomous vehicles, robotics, and entertainment.
Technologies like virtual assistants, recommendation systems, and self-driving cars emerged.

Future Prospects:

Ethics and Regulation: Ensuring AI systems are fair, accountable, and aligned with human values
is a key focus.
Advancements: Research continues to address current limitations and explore new frontiers like
general AI, explainable AI, and quantum AI.

Depth-Limited Search (DLS)

Overview:

Traversal Method: Similar to Depth-First Search (DFS) but with a predetermined limit on the depth
to which it will search.
Depth Limit: The maximum depth level that the algorithm will explore.

Steps:

1. Start at the root node (or any arbitrary node) with an initial depth of 0.
2. Push the starting node onto the stack and mark it as visited.
3. Pop a node from the stack.
4. If the depth of the popped node is less than the depth limit, explore its unvisited neighbors.
5. For each unvisited neighbor, push it onto the stack with a depth one greater than the current node's depth, and mark it as visited.

6. Repeat steps 3-5 until the stack is empty or the depth limit is reached.
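The steps above can be sketched in Python using an explicit stack of (node, depth) pairs; the adjacency-list graph and node names are illustrative:

```python
def depth_limited_search(graph, start, goal, limit):
    """Iterative DLS: explore like DFS, but never expand nodes past `limit`."""
    stack = [(start, 0)]          # each entry is (node, depth)
    visited = {start}
    while stack:
        node, depth = stack.pop()
        if node == goal:
            return True
        if depth < limit:         # only expand nodes shallower than the limit
            for neighbor in graph.get(node, []):
                if neighbor not in visited:
                    visited.add(neighbor)
                    stack.append((neighbor, depth + 1))
    return False

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(depth_limited_search(graph, "A", "E", limit=2))  # True: E is at depth 2
print(depth_limited_search(graph, "A", "E", limit=1))  # False: E lies beyond the limit
```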

Complexity:

Time Complexity: O(b^l), where b is the branching factor and l is the depth limit, since the
algorithm may expand up to b successors at each of l depth levels.
Space Complexity: O(bl), since the stack holds at most b nodes for each of the l levels along
the current path.

Applications:

Useful in scenarios where the search space is large or infinite, and a solution is expected within a
certain depth.

Uniform Cost Search (UCS)

Overview:

Traversal Method: Expands the least-cost node first. Optimal whenever all step costs are
non-negative, since path cost then never decreases along any path.
Data Structure Used: Utilizes a priority queue to keep track of nodes, prioritizing those with the
lowest cost.

Steps:

1. Start at the root node (or any arbitrary node) with a path cost of 0.
2. Enqueue the starting node into the priority queue with its path cost.
3. Dequeue the node with the lowest path cost from the priority queue.
4. If the dequeued node is the goal node, return the path and its cost.
5. For each neighbor of the dequeued node, calculate the total path cost to reach that neighbor.

6. Enqueue the neighbors into the priority queue with their respective path costs.
7. Repeat steps 3-6 until the goal node is reached or the priority queue is empty.
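The steps above can be sketched in Python using the standard-library heapq module as the priority queue (the example road graph and its costs are illustrative):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Return (cost, path) of the cheapest path, or None if the goal is unreachable.
    `graph` maps each node to a list of (neighbor, edge_cost) pairs."""
    frontier = [(0, start, [start])]       # priority queue ordered by path cost
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:                   # goal test on dequeue keeps UCS optimal
            return cost, path
        if cost > best_cost.get(node, float("inf")):
            continue                       # stale queue entry; a cheaper path exists
        for neighbor, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost, neighbor, path + [neighbor]))
    return None

roads = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
print(uniform_cost_search(roads, "A", "C"))  # (2, ['A', 'B', 'C'])
```

Note that the direct edge A-C costs 5, yet UCS correctly returns the cheaper two-step route through B.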

Complexity:
Time Complexity: O(b^(C*/ε)), where C* is the cost of the optimal solution, b is the branching
factor, and ε is the minimum cost of any action. This is because UCS explores paths in increasing
order of cost.
Space Complexity: O(b^(C*/ε)), since all nodes must be stored in memory to find the goal.

Applications:

Optimal for finding the shortest path in graphs where the cost to reach a node is known and non-
negative.
Suitable for scenarios where the path cost varies, such as in road networks or transportation
systems.

Problem Reduction in AI

Overview:

Problem Reduction: Also known as "problem decomposition" or "problem refinement," it involves
breaking down a complex problem into smaller, more manageable subproblems. Each subproblem
can be solved independently, and their solutions are then combined to solve the original problem.

Key Concepts:

1. Divide and Conquer:

The main idea is to divide a problem into smaller subproblems, solve each subproblem, and
combine their solutions.
Example: Sorting algorithms like Merge Sort and Quick Sort use divide and conquer to sort
elements.

2. AND/OR Graphs:

AND nodes: Require all child nodes (subproblems) to be solved.
OR nodes: Require only one child node (subproblem) to be solved.
AND/OR graphs are used to represent problem reduction, where nodes represent subproblems
and edges represent the relationship between them.

3. Recursive Problem Solving:

Recursive algorithms naturally align with problem reduction, where a problem is solved by
recursively solving its subproblems.
Example: The Tower of Hanoi problem, where the task is broken down into moving smaller
subsets of disks.
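The Tower of Hanoi reduction can be sketched in Python: moving n disks reduces to two (n-1)-disk subproblems plus a single move, which is exactly the recursive problem-reduction pattern described above:

```python
def hanoi(n, source, target, spare, moves):
    """Reduce moving n disks to two (n-1)-disk subproblems plus one move."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # subproblem: clear the top n-1 disks
    moves.append((source, target))              # move the largest disk directly
    hanoi(n - 1, spare, target, source, moves)  # subproblem: restack the n-1 disks

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # 7 moves, i.e. 2**3 - 1
```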

Applications:

Search Algorithms: Problem reduction is used in search algorithms to explore solution spaces by
breaking down complex states into simpler states.
Optimization Problems: Many optimization problems, such as dynamic programming, involve
breaking down a problem into overlapping subproblems.
Expert Systems: In AI expert systems, problem reduction is used to diagnose issues by narrowing
down potential causes through a series of smaller diagnostic tests.

Advantages:

Manageability: Breaking down complex problems makes them easier to understand, solve, and
manage.
Parallelism: Subproblems can often be solved in parallel, improving efficiency.
Reusability: Solutions to subproblems can be reused in different contexts, saving computation
time.

Example:
Consider the problem of solving a jigsaw puzzle. The problem can be reduced by:

1. Sorting pieces by edge types (corners, edges, inner pieces).
2. Assembling the border first.
3. Dividing the remaining pieces into smaller sections based on color or pattern.
4. Solving each section independently and then combining them to complete the puzzle.

Means-End Analysis

Overview:

Means-End Analysis (MEA): A problem-solving technique used in AI to reduce the difference
between the current state and the goal state. It involves identifying actions (means) that can help
achieve the desired end.

Key Concepts:

1. Current State vs. Goal State:

The current state is the situation or condition as it is now.

The goal state is the desired situation or condition that needs to be achieved.

2. Operators:

Operators are actions or steps that can be taken to transform the current state into the goal
state.

Each operator has preconditions that must be met before it can be applied.

3. Difference Reduction:

MEA focuses on reducing the difference between the current state and the goal state by
applying the most appropriate operators.

Steps:
1. Identify the Difference:

Determine the differences between the current state and the goal state.

2. Select an Operator:

Choose an operator that can reduce the identified difference.

3. Apply the Operator:

Apply the chosen operator to move closer to the goal state.

4. Update the Current State:

Update the current state based on the applied operator's effects.

5. Repeat:

Repeat the process of identifying differences and applying operators until the goal state is
achieved.

Applications:

Planning Systems: MEA is used in AI planning systems to generate plans that achieve specific
goals by breaking down tasks into smaller, manageable steps.

Robotics: MEA helps robots navigate and perform tasks by determining the actions needed to
reach a target location or complete a task.

Problem Solving: MEA is applied in problem-solving scenarios where a series of steps are required
to reach a solution, such as puzzle-solving and game-playing.

Example:
Consider a robot that needs to move from point A to point B in a grid:

1. Current State: Robot is at point A.
2. Goal State: Robot needs to be at point B.

3. Difference: Robot's current location (A) is different from the target location (B).
4. Operators: Move up, move down, move left, move right.

5. Apply Operator: If point B is to the right of point A, apply the "move right" operator.

6. Update State: Update the robot's location to reflect the movement.
7. Repeat: Continue applying operators until the robot reaches point B.
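The robot example can be sketched in Python; the grid coordinates, operator names, and difference test are illustrative:

```python
def means_end_analysis(current, goal):
    """Repeatedly pick the operator that reduces the remaining difference."""
    operators = {
        "move right": (1, 0), "move left": (-1, 0),
        "move up": (0, 1), "move down": (0, -1),
    }
    plan = []
    x, y = current
    gx, gy = goal
    while (x, y) != (gx, gy):
        # 1. Identify the difference, 2. select an operator that shrinks it.
        if gx > x:
            op = "move right"
        elif gx < x:
            op = "move left"
        elif gy > y:
            op = "move up"
        else:
            op = "move down"
        dx, dy = operators[op]
        x, y = x + dx, y + dy   # 3-4. apply the operator and update the current state
        plan.append(op)
    return plan                 # 5. loop repeated until goal state reached

print(means_end_analysis((0, 0), (2, 1)))
# ['move right', 'move right', 'move up']
```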

Credit Assignment in AI

Overview:

Credit Assignment: A problem in reinforcement learning and neural networks where the goal is to
determine which actions or components contribute to achieving a desired outcome. It involves
assigning credit or blame to actions or decisions based on their impact on the final result.

Key Concepts:
1. Temporal Credit Assignment:

Addresses the challenge of determining which actions taken over time are responsible for the
eventual outcome.

Example: In a game, identifying which moves contributed to winning or losing.

2. Structural Credit Assignment:

Focuses on identifying which components (e.g., neurons in a neural network) are responsible for
a specific outcome.
Example: In a neural network, determining which weights and biases contributed to the error in
prediction.

Methods:

1. Reinforcement Learning:

Uses reward signals to assign credit to actions that lead to positive outcomes and blame to
actions that lead to negative outcomes.
Techniques like Q-learning and Policy Gradient Methods help address the credit assignment
problem by updating the value of actions based on their outcomes.

2. Backpropagation:

A method used in training neural networks to assign credit to individual weights and biases
based on their contribution to the error.

The error is propagated backward through the network, allowing the optimization algorithm to
adjust the weights and biases to minimize the error.

3. Eligibility Traces:

Used in reinforcement learning to assign credit to actions based on both their immediate and
delayed effects.
Combines ideas from both temporal and structural credit assignment by keeping a trace of
which actions and states have been visited and adjusting their values accordingly.
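As a small illustration of how reinforcement learning assigns credit, here is a single Q-learning update in Python (the state names, action set, and learning parameters are assumptions for the example):

```python
def q_update(q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One Q-learning step: credit the taken action with the observed reward
    plus the discounted value of the best action available afterwards."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    target = reward + gamma * best_next          # estimated return for this action
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (target - old)  # move Q(s, a) toward the target

q = {}
q_update(q, "s0", "right", 1.0, "s1", actions=["left", "right"])
print(q[("s0", "right")])  # 0.1  (0 + 0.1 * (1.0 + 0.9*0 - 0))
```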

Applications:

Game Playing: Identifying which moves or strategies led to winning or losing a game.
Robotics: Determining which actions taken by a robot contributed to successfully completing a
task.
Neural Network Training: Adjusting the weights and biases in a neural network based on their
contribution to the error in prediction.

Example:
Consider a reinforcement learning agent playing a game of chess:

1. The agent receives a reward signal based on the game's outcome (win, lose, or draw).
2. Temporal credit assignment helps the agent identify which moves throughout the game contributed
to the final result.
3. Structural credit assignment helps the neural network adjust the weights and biases responsible for
making those moves.

British Museum Algorithm

Overview:

British Museum Algorithm: A metaphorical term used to describe a brute-force search method.
The name comes from the idea of searching for a specific artifact in the vast collection of the British
Museum by examining each item one by one.

Key Concepts:

1. Brute-Force Search: Involves systematically checking all possible solutions to find the correct one.

2. Exhaustive Search: Another term for brute-force search, where every possible option is explored
until the solution is found.

Steps:

1. Generate All Possible Solutions: List all potential solutions to the problem.

2. Evaluate Each Solution: Check each solution to see if it meets the criteria or solves the problem.
3. Select the Correct Solution: Once the correct solution is found, stop the search.
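A minimal Python sketch of the brute-force idea, assuming the candidate solutions are the permutations of a small list (the puzzle and the acceptance test are illustrative):

```python
from itertools import permutations

def british_museum(items, is_solution):
    """Generate every ordering and test each one until a solution is found."""
    for candidate in permutations(items):   # 1. generate all possible solutions
        if is_solution(candidate):          # 2. evaluate each solution
            return candidate                # 3. stop once the correct one is found
    return None

# Illustrative puzzle: find an ordering of the digits that is sorted ascending.
result = british_museum([3, 1, 2], lambda c: list(c) == sorted(c))
print(result)  # (1, 2, 3)
```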

Complexity:

Time Complexity: Up to O(n!) when the candidates are the permutations of n elements, since
every ordering may have to be examined before the solution is found.
Space Complexity: O(n), as it requires storing the elements being checked.

Applications:

Suitable for small problem spaces where the number of possible solutions is manageable.

Used as a baseline to compare the efficiency of more advanced algorithms.

Exhaustive Search Algorithm

Overview:

Exhaustive Search: A problem-solving technique that involves exploring all possible solutions to
find the best one. It is synonymous with brute-force search.

Key Concepts:

1. Complete Search: Ensures that all possible solutions are considered.

2. Optimal Solution: Guarantees finding the best solution, as every option is evaluated.
Steps:

1. List All Possible Solutions: Generate a comprehensive list of potential solutions.
2. Evaluate Each Solution: Assess each solution to determine if it meets the desired criteria.
3. Select the Best Solution: Choose the solution that best satisfies the problem requirements.

Complexity:

Time Complexity: O(n^k), where n is the number of elements and k is the number of positions to
fill. This is because every combination of elements is considered.

Space Complexity: O(n), as it requires storing the elements being evaluated.

Applications:

Suitable for problems with small solution spaces or when an optimal solution is required.

Used in combinatorial problems, such as the traveling salesman problem, where all possible routes
are evaluated to find the shortest path.
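A minimal Python sketch of exhaustive search applied to the traveling salesman example, assuming a small symmetric distance table (the city names and distances are illustrative):

```python
from itertools import permutations

def tsp_exhaustive(cities, dist):
    """Evaluate every possible tour and keep the cheapest one (guaranteed optimal)."""
    start, *rest = cities
    best_cost, best_tour = float("inf"), None
    for order in permutations(rest):                       # list all possible tours
        tour = [start, *order, start]
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))  # evaluate each tour
        if cost < best_cost:                               # keep the best so far
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

dist = {
    "A": {"A": 0, "B": 1, "C": 4},
    "B": {"A": 1, "B": 0, "C": 2},
    "C": {"A": 4, "B": 2, "C": 0},
}
print(tsp_exhaustive(["A", "B", "C"], dist))  # (7, ['A', 'B', 'C', 'A'])
```

Because every one of the (n-1)! tours is evaluated, the answer is optimal, but the approach only scales to very small n.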
