Note: Artificial Intelligence

The document provides an overview of Artificial Intelligence (AI) and Machine Learning (ML), explaining their definitions, differences, and applications. It discusses the Turing Test, requirements for passing it, state-of-the-art AI applications, and the advantages and disadvantages of AI. Additionally, it highlights the foundational interdisciplinary fields contributing to AI's development, including philosophy and mathematics.

Lecture-1

1. What is Artificial Intelligence and Machine Learning?


Ans: AI (Artificial Intelligence): The simulation of human intelligence in machines that are
programmed to think, learn, and perform tasks that typically require human intelligence, such as
problem-solving, understanding language, and recognizing patterns.
ML (Machine Learning): A subset of AI that focuses on creating algorithms and models that
enable computers to learn from and make predictions or decisions based on data without being
explicitly programmed for each task.
2. Difference between AI and ML.

AI (Artificial Intelligence) vs. ML (Machine Learning):
1) Scope: AI is the broad concept of machines being able to carry out tasks in a way that mimics human intelligence; ML is a subset of AI that involves teaching machines to learn from data without being explicitly programmed.
2) Coverage: AI encompasses a wide range of technologies, including ML, expert systems, and robotics; ML focuses specifically on algorithms and models that learn from and make predictions based on data.
3) Goal: AI aims to create systems that can perform complex tasks and exhibit intelligent behavior; ML aims to enable machines to learn from data and improve their performance on specific tasks.
4) Methods: AI may use various methods, including rule-based systems, genetic algorithms, and neural networks; ML primarily relies on statistical models and algorithms that allow systems to learn from data.
5) Data dependence: AI systems can function with or without learning from data, depending on the method used; ML systems depend heavily on data to learn and make decisions.
6) Task design: AI can be designed for specific or generalized tasks, depending on the technique used; ML is generally designed for specific tasks, with models trained for particular purposes.
7) Subfields: AI includes ML, natural language processing, computer vision, robotics, and expert systems; ML includes supervised learning, unsupervised learning, reinforcement learning, and deep learning.
8) Capabilities: AI may include learning, reasoning, and self-correction, depending on the type of AI; ML focuses specifically on learning from data to improve over time.
9) Complexity: AI can handle both simple and highly complex tasks, potentially mimicking human-level intelligence; ML is typically used for specific tasks that involve pattern recognition and data-driven predictions.
10) Maturity: AI is a broader, older field that includes various approaches and methodologies; ML is a more recent, rapidly advancing field within AI that focuses on data-driven learning.
3. Difference between AI Approach and Traditional Programming Approach.

AI Approach vs. Traditional Programming Approach:
1) Learning: The AI approach learns from data and improves over time; traditional programming follows explicit rules and instructions provided by a programmer.
2) Decisions: AI makes decisions based on patterns and data; traditional programs base decisions on predefined logic and rules.
3) Adaptability: AI is adaptable to new data and can improve with more information; traditional code is rigid, and changes require manual updates.
4) Data needs: AI requires large amounts of data to learn and function; traditional programs can function with minimal or no data, just code logic.
5) Error handling: AI can handle errors by learning from mistakes and adjusting; traditional programming requires explicit error handling and does not learn from errors.
6) Suitability: AI suits complex, unstructured, or evolving tasks; traditional programming is best for straightforward, well-defined tasks.
7) Workflow: AI involves training models on data and refining them over time; traditional programming involves writing and testing code against predefined specifications.
8) Outputs: AI produces probabilistic outputs based on learned patterns; traditional programs produce deterministic outputs based on coded logic.
9) Maintenance: AI needs minimal intervention after initial training; traditional software requires ongoing maintenance and updates by developers.
10) Typical uses: AI is used in dynamic fields like image recognition, speech processing, and recommendation systems; traditional programming is used in static fields like accounting software, payroll systems, and database management.
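The contrast can be sketched in code. Below, a spam check is written both ways: once as a hand-coded rule and once as a tiny classifier that learns word counts from labelled examples. The keyword list and training messages are invented for illustration, not taken from any real system:

```python
# Contrast sketch: a hard-coded rule vs. a model that learns from data.
from collections import Counter

# Traditional approach: a programmer writes the rule explicitly.
def rule_based_is_spam(message):
    return any(word in message.lower() for word in ("winner", "free", "prize"))

# ML-style approach: learn word counts from labelled examples instead.
def train(examples):
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def learned_is_spam(counts, message):
    words = message.lower().split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return spam_score > ham_score

examples = [
    ("free prize winner claim now", "spam"),
    ("cheap free offer winner", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on monday?", "ham"),
]
model = train(examples)
print(rule_based_is_spam("You are a WINNER"))          # rule fires
print(learned_is_spam(model, "claim your free offer")) # learned from data
```

Changing the rule requires editing code; changing the learned model only requires more labelled data, which is the adaptability difference listed above.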

4. Difference between ML, Neural Network and Deep Learning.

Machine Learning (ML) vs. Neural Networks (NN) vs. Deep Learning (DL):
1) Definition: ML is a field of AI where machines learn from data to make predictions; an NN is a type of ML model inspired by the human brain's neuron structure; DL is a subset of NN that uses multiple layers (deep architectures) to model complex patterns.
2) Complexity: ML handles simpler models like decision trees and regression; NNs are more complex than basic ML algorithms, mimicking brain neurons; DL uses extremely complex models with many layers for handling intricate data patterns.
3) Depth: ML typically uses shallow architectures (few layers); NNs use one or two hidden layers for processing input; DL uses multiple layers (often dozens or hundreds), forming "deep" networks.
4) Data needs: ML can work with smaller datasets, depending on the algorithm; NNs require more data than basic ML algorithms to perform well; DL requires vast amounts of data to train deep models effectively.
5) Hardware: ML can run on standard hardware for simpler tasks; NNs may require more advanced hardware for larger models; DL requires specialized hardware like GPUs due to its high computational demands.
6) Feature engineering: ML often needs manual feature engineering to improve performance; NNs require some feature engineering but less than traditional ML; DL automatically learns features from raw data, reducing the need for manual feature engineering.
7) Performance: ML performs well on straightforward tasks and smaller datasets; NNs beat basic ML on complex tasks but are limited by shallow depth; DL excels at very complex tasks like image and speech recognition, but only with deep networks.
8) Training time: ML training time is relatively short for simpler models; NNs take longer to train than basic ML models due to higher complexity; DL can take a very long time to train due to multiple layers and large datasets.
9) Example algorithms: ML includes linear regression, decision trees, and k-nearest neighbors; NN includes feedforward networks and the perceptron; DL includes convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformers.
10) Applications: ML is used in fraud detection, recommendation systems, and predictive maintenance; NNs are common in classification and simple pattern recognition; DL is applied in computer vision, natural language processing, and deep reinforcement learning.
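The perceptron mentioned above is the simplest neural network: one neuron, trained by nudging its weights whenever it misclassifies. A minimal sketch (the learning rate, epoch count, and AND-gate dataset are illustrative choices, not from the notes):

```python
# A minimal single-neuron perceptron, trained on the AND function.
# Real NN/DL work uses libraries and many layers; this shows the core idea.
def train_perceptron(data, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred            # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
print([predict(w, b, x1, x2) for (x1, x2), _ in and_data])  # [0, 0, 0, 1]
```

Deep learning stacks many such neurons into layers, which is why it needs far more data and compute, as the comparison above notes.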

5. How can we tell whether a problem can be solved with ML?


Ans: To determine whether a problem can be solved using Machine Learning (ML), ask the following questions:
 Is the task based on learning from examples or data?
If the problem involves data (e.g., customer preferences, past sales, images) and you need the
system to learn patterns or behaviors from that data, it’s likely a good fit for ML.
 Is it difficult to program rules for this problem?
If writing explicit rules for solving the problem would be too complicated or impossible (like
recognizing handwriting or predicting stock prices), ML is useful because it can learn these rules
from data.
 Does the solution need to improve over time?
ML is beneficial when you need a solution that improves with more data or over time. If the
problem involves dynamic environments (e.g., fraud detection, recommendation systems), ML can
adapt and get better.
 Is there a need for predictions or classifications?
If the problem involves predicting an outcome (e.g., weather forecasting) or classifying things into
categories (e.g., spam or not spam), it can likely be solved using ML.
 Is there a pattern but not an obvious rule?
If the solution relies on detecting complex patterns that are hard to define manually, such as
identifying objects in images or trends in data, ML can learn these patterns automatically.
In simple terms: If the problem needs to learn from data, detect patterns, make predictions, or
adapt over time—and if writing out manual rules is too difficult—then ML is a suitable approach!

6. When should we use ML to solve a problem?


Ans: Machine learning (ML) can be a powerful tool for solving problems, but it's not always the
best approach. Here are some situations where using ML is appropriate:
 Complex Patterns and Relationships: When the problem involves complex patterns or
relationships that are difficult to define with traditional algorithms. ML excels at discovering
patterns in large datasets that are not immediately obvious.
 Large and High-Dimensional Data: If you have a large amount of data with many features,
ML algorithms can help find correlations and make predictions. Traditional methods might
struggle with high-dimensional data or might not scale well.
 Automation of Repetitive Tasks: For tasks that involve repetitive decision-making or
classification, ML models can automate these processes and improve efficiency. For example,
spam filtering or image recognition.
 Personalization: When you need to provide personalized experiences or recommendations,
such as in e-commerce or content platforms, ML can help tailor suggestions to individual users
based on their behavior and preferences.
 Dynamic and Evolving Environments: If the problem domain is constantly changing or
evolving, ML models can adapt and learn from new data over time, which can be challenging
for static rule-based systems.
 Predictive Analytics: For tasks involving forecasting or predicting future outcomes based on
historical data, such as predicting stock prices or demand forecasting, ML can be very
effective.

7. Explain Turing Test.


Ans: The Turing Test is a concept introduced by British mathematician and computer scientist Alan
Turing in 1950. It is designed to assess whether a machine can exhibit intelligent behavior that is
indistinguishable from that of a human.
How the Turing Test works:
 Imitation Game Setup: The test involves three participants: a human evaluator (judge), a
human participant, and a machine (AI). The evaluator interacts with both the human and the
machine, but only through text-based communication, without knowing which is which.
 Objective: The evaluator's task is to determine which participant is the human and which is
the machine. The machine's goal is to generate responses that convince the evaluator it is
human.
 Pass Condition: If the evaluator cannot reliably distinguish between the human and the
machine, and the machine fools the evaluator into thinking it is human at least 50% of the
time, the machine is said to have passed the Turing Test.
Significance:
The Turing Test serves as a benchmark for artificial intelligence (AI) and measures a machine's
ability to imitate human-like thinking and conversation. However, it focuses on external behavior
rather than internal understanding or reasoning.
While the Turing Test is a key early concept in AI, some modern researchers argue that it is
limited, as passing the test doesn't necessarily indicate true intelligence or consciousness, but
rather the ability to mimic human communication.

8. What are Requirements to Pass the Turing Test?


Ans: To pass the Turing Test, a machine must meet the following key requirements:
1) Natural Language Processing (NLP): The machine must be able to understand and generate
human-like text responses in real time. It should be capable of handling complex linguistic
tasks such as sentence formation, grammar, context, and meaning.
2) Human-like Responses: The machine needs to produce answers that are coherent,
contextually appropriate, and indistinguishable from those of a human. The responses should
mimic human behavior, including the ability to make small talk, answer questions, or handle
vague or ambiguous queries.
3) Adaptability to Varied Topics: The machine should be able to handle a wide range of topics
during the conversation, showing the versatility of knowledge and understanding across
multiple domains.
4) Convincing Dialogue: The machine must be able to engage in meaningful conversation,
using reasoning and knowledge. It should show an ability to give thoughtful responses and
avoid giving robotic or repetitive answers.
5) Deception: The machine's primary goal is to "deceive" the evaluator into believing it is human.
To do this, it needs to convincingly imitate human thinking patterns, emotions, and nuances,
such as humor, hesitation, or even evasion, when necessary.
6) Limited Timeframe: The Turing Test is conducted within a limited timeframe. The machine
needs to maintain the illusion of being human long enough to fool the evaluator before the time
runs out.
7) Inability of the Evaluator to Differentiate: If, at the end of the test, the evaluator cannot
reliably determine which participant is human and which is the machine, and is unable to
distinguish between them at least 50% of the time, the machine is said to have passed the
Turing Test.

9. State of the Art of AI Applications.


Ans: Here are some state-of-the-art AI applications, explained simply:
1) Chatbots and Voice Assistants: AI like ChatGPT, Siri, and Alexa help people with tasks
like answering questions, setting reminders, or having conversations.
2) Self-Driving Cars: Companies like Tesla and Waymo are using AI to create cars that can
drive themselves by "seeing" the road, detecting obstacles, and making decisions safely.
3) Healthcare AI: AI helps doctors detect diseases early by analyzing medical scans (like X-
rays) and even predicting how proteins in the body work. AI-powered robots assist
surgeons in performing more precise surgeries.
4) Creating Art, Music, and Images: AI can now generate artwork, music, or images based
on instructions, like the AI systems DALL·E or AIVA that create visuals or music
automatically.
5) AI in Finance: AI helps banks detect fraud by spotting unusual patterns in transactions. It
also powers apps that help you manage money or automatically invest, like robo-advisors.
6) Personalized Learning: AI tailors lessons to individual students, helping them learn at their
own pace. It’s used in platforms like Khan Academy.
7) Gaming: AI makes video game characters (NPCs) smarter and more realistic, allowing
them to adapt to the way you play the game.
8) AI in Cybersecurity: AI helps detect and stop cyberattacks by analyzing data for signs of
hacking or viruses in real time.

10. Advantages and Disadvantages of AI.


Ans: Here are advantages and disadvantages of Artificial Intelligence (AI):
Advantages of AI:
1) AI can perform repetitive tasks, freeing up time for humans to focus on creative and complex
work. Examples include data entry, customer support, and manufacturing.
2) AI can process large amounts of data faster than humans, improving efficiency in industries
like finance, healthcare, and logistics.
3) Unlike humans, AI systems can work continuously without breaks, leading to higher
productivity and service availability.
4) AI systems can perform tasks with high accuracy, reducing errors that humans might make due
to fatigue or oversight.
5) AI can analyze vast datasets, identifying patterns and trends that humans might miss,
providing valuable insights for decision-making.
6) AI enables personalized experiences, such as tailored recommendations in streaming services
(e.g., Netflix), online shopping (e.g., Amazon), or targeted advertising.
7) AI helps in diagnosing diseases, developing treatments, and even performing surgeries,
improving patient outcomes and healthcare efficiency.
8) By automating processes and reducing the need for human labor, AI can lower operational
costs for businesses.
9) AI-powered robots can be used in hazardous environments (e.g., mining, bomb defusal, space
exploration), reducing risks to human workers.
10) AI technologies like speech recognition and text-to-speech systems help people with disabilities, making technology more accessible.
Disadvantages of AI:
1) As AI automates tasks, it can lead to job loss in sectors like manufacturing, customer service,
and transportation, affecting many workers.
2) Developing and implementing AI systems can be expensive, requiring significant investment in
technology, infrastructure, and expertise.
3) While AI can mimic certain aspects of creativity, it lacks the true creative thinking and emotional
intelligence that humans have.
4) AI can raise ethical issues, such as bias in algorithms, surveillance, and privacy violations. AI
decisions are often opaque, making accountability difficult.
5) AI systems rely heavily on data. If the data is biased, incomplete, or incorrect, the AI’s
performance and decisions may be flawed.
6) AI systems can be vulnerable to cyberattacks, manipulation, or misuse, leading to data
breaches or exploitation of AI systems for malicious purposes.
7) As AI becomes more autonomous, there is concern about losing control over AI systems,
particularly in critical areas like military or law enforcement.
8) AI systems can be difficult to understand, maintain, and upgrade. Their complexity can make
them less transparent and harder to fix when problems arise.
9) While AI creates new job opportunities, it may also widen the gap between those who have the
skills to work with AI and those who don’t, increasing economic inequality.
10) AI lacks the ability to understand and respond to human emotions, which is critical in roles that require empathy, such as healthcare or customer service.

11. Foundation of AI.


Ans: The foundation of Artificial Intelligence (AI) lies in interdisciplinary fields, each contributing
key insights and techniques to the development of intelligent machines. Here's how these fields
shape AI:
1) Philosophy plays a crucial role by exploring fundamental questions about intelligence,
reasoning, and learning. Philosophers study logic to understand how we draw conclusions
and develop rules for reasoning. They also consider the mind as a physical system, laying
the groundwork for modeling learning processes and exploring the relationship between
language and rationality.
2) Mathematics is central to AI, providing tools for representing knowledge, proving
correctness, and creating algorithms. Concepts like computation help us design programs
that can solve problems, while studies of decidability and tractability define the limits of
what AI can achieve. Probability theory is essential for making decisions under uncertainty.
3) Economics introduces formal theories of rational decision-making, which guide AI systems
in optimizing outcomes. Game theory, a part of economics, helps in understanding
interactions among multiple decision-makers, such as in strategic planning or negotiations.
4) Neuroscience offers insights into the physical basis of mental activities by studying how
the brain functions. This understanding inspires AI systems to mimic processes like
perception, learning, and decision-making.
5) Psychology examines how humans and animals think, adapt, and act. By understanding
cognitive processes, perception, and motor control, AI developers can design systems that
replicate human-like behaviors.
6) Computer Engineering is essential for building efficient machines that can process vast
amounts of data and execute AI algorithms quickly. Advances in hardware, such as GPUs
and specialized chips, have made it possible to train complex AI models.
7) Control Theory and Cybernetics focus on creating systems that can regulate themselves.
These areas explore concepts like stability and homeostasis, which are important for
designing autonomous systems that operate reliably under their own control.
8) Linguistics investigates how language relates to thought and communication. AI systems
rely on linguistic principles for understanding and generating human language, using
grammar and knowledge representation to interact effectively.
Together, these disciplines provide the theoretical and practical foundation needed to develop AI,
enabling machines to think, learn, and act intelligently.

12. Define Weak AI and Strong AI.


Ans: Weak AI (Narrow AI):
Definition: Weak AI, also known as Narrow AI, refers to artificial intelligence systems designed to
perform a specific task or a narrow range of tasks. These systems are highly specialized and
operate under limited contexts. They do not possess general intelligence or self-awareness; they
follow predefined algorithms and rules to accomplish tasks like image recognition, language
translation, or playing chess.
Examples:
 Siri or Alexa: Virtual assistants that perform specific tasks like setting reminders or
answering questions based on programmed responses.
 Recommendation Systems: AI used by platforms like Netflix or Amazon to suggest movies
or products based on user behavior.

Strong AI (General AI):


Definition: Strong AI, also known as Artificial General Intelligence (AGI), refers to AI systems that
possess the ability to understand, learn, and apply intelligence across a wide range of tasks,
similar to human intelligence. A strong AI system would not only perform specialized tasks but
could also think, reason, and potentially have self-awareness and consciousness, making
decisions in a way that a human would.
Examples:
 Hypothetical Future AI: No current AI systems meet the definition of Strong AI. AGI would
be capable of everything a human can do intellectually, from solving mathematical problems
to understanding emotions.

13. Explain What is AI.


Ans: There are four classic perspectives on defining Artificial Intelligence (AI), categorized by whether systems think or act, and whether they do so like humans or rationally:
1) Thinking Humanly (The Cognitive Modeling Approach): This perspective aims to create
AI systems that replicate how humans think. It focuses on modeling the human brain's
cognitive processes, such as reasoning, learning, and memory. This approach often draws
inspiration from neuroscience and psychology to mimic human thought processes.
2) Thinking Rationally (The "Laws of Thought" Approach): Here, AI systems are designed
to reason logically and make decisions based on established rules of logic or rationality.
This approach is rooted in mathematics and philosophy, focusing on the creation of
algorithms that strictly adhere to logical principles.
3) Acting Humanly (The Turing Test Approach): This perspective evaluates whether an AI
system can behave like a human. The Turing Test, proposed by Alan Turing, is a
benchmark for this approach. If an AI can interact with humans in a way that makes it
indistinguishable from a human, it passes the test.
4) Acting Rationally (The Rational Agent Approach): This approach focuses on designing
AI systems that act to achieve the best possible outcomes, given their goals and available
information. These systems are called "rational agents," and they aim to make optimal
decisions based on reasoning, probability, and learning.
Lecture-2
1. What is an (Intelligent) Agent?
 An agent is any entity capable of operating autonomously, perceiving its environment
through sensors, reasoning (computing based on inputs), and acting upon the environment
via actuators. Examples include:
o Human agents: sensors like eyes and ears; actuators like hands and legs.
o Robotic agents: Sensors like cameras, and actuators like motors.
 An intelligent agent is a rational agent that perceives its environment and performs actions
optimally to achieve its goals. The measure of success is determined by an objective
performance criterion.
 Components of an intelligent agent:
o Perception: Receiving input from the environment.
o Reasoning: Computing or deciding on actions.
o Actuation: Acting on the environment.

2. Agents and Environments


 The relationship between agents and their environments is critical. An agent interacts with
the environment through:
o Sensors to perceive the environment.
o Actuators to perform actions.
 Agent functions and programs:
o The agent function maps a history of percepts (percept sequence) to actions.
o The agent program is a concrete implementation of the agent function on the
agent's physical architecture.
 Example: A robotic vacuum cleaner operates in an apartment's environment. Sensors
detect dirt and obstacles, while actuators move the robot and clean the floor.
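The agent function for this example can be sketched as a condition-action mapping from percept to action. A minimal version for a two-square world (square names "A"/"B" and the percept encoding are illustrative assumptions):

```python
# Simple reflex agent for a two-square vacuum world.
# Percept: (location, status); actuator commands: "Suck", "Left", "Right".
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"                       # condition-action rule: if dirty, clean
    return "Right" if location == "A" else "Left"  # otherwise move to the other square

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```

Note that this agent program uses only the current percept, not the full percept sequence, which is exactly the "simple reflex" design discussed later in these notes.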

3. The Concept of Rationality


 Rationality measures how well an agent acts to achieve its objectives based on:
o Performance measure: Criteria to evaluate success.
o Percept sequence: All inputs received so far.
o Knowledge: What the agent knows about its environment.
o Actions: The choices the agent can make.
 A rational agent maximizes its performance measure based on the information available
and constraints like limited computing power or incomplete knowledge.
 Key differences:
o Rationality ≠ Perfection: Rational agents make the best decision with the
information they have but aren't omniscient.
o Autonomy: Rational agents learn from experiences rather than solely relying on
built-in knowledge.
 Example: A vacuum-cleaner agent cleans efficiently by prioritizing dirt removal, minimizing
power use, and avoiding obstacles.

4. PEAS Analysis
 PEAS (Performance, Environment, Actuators, Sensors) is a framework for designing
intelligent agents.
 For each agent:
1. Performance measure: What constitutes success? (e.g., cleanliness for a vacuum
cleaner).
2. Environment: Where does the agent operate? (e.g., an apartment floor).
3. Actuators: What tools does the agent use? (e.g., wheels, brushes).
4. Sensors: How does the agent perceive the environment? (e.g., dirt detectors).
 Examples:
o Medical diagnosis system:

 Performance measure: Healthy patients, cost minimization.


 Environment: Patient records, hospitals.
 Actuators: Screens to display diagnoses and treatments.
 Sensors: Keyboards or microphones for data input.
o Part-picking robot:

 Performance measure: Correctly sorted parts.


 Environment: Conveyor belts, bins.
 Actuators: Robotic arms.
 Sensors: Cameras, joint angle sensors.
5. The Nature of Environments
Environments have distinct characteristics:
1. Fully observable vs. Partially observable: Does the agent have complete access to the
environment's state?
2. Deterministic vs. Stochastic: Are the outcomes predictable or random?
3. Episodic vs. Sequential: Do actions affect future decisions?
4. Static vs. Dynamic: Does the environment change over time?
5. Discrete vs. Continuous: Are there a finite number of states and actions?
6. Single agent vs. Multi-agent: Does the agent interact with others?
Examples:
 Chess: Fully observable, sequential, static, discrete, multi-agent.
 Taxi driving: Partially observable, stochastic, sequential, dynamic, continuous, multi-agent.

6. How Components of an Agent Program Work


 The agent program is implemented as a combination of:
o Sensors: Collect percepts from the environment.
o State update logic: Maintains internal memory based on percepts and actions.
o Decision-making logic: Determines the best action to take.
o Actuators: Execute the chosen action.
 Each component contributes to the agent's ability to respond intelligently to its environment.
For example:
o A learning agent uses feedback from the critic to refine its decision-making logic over
time.
o The problem generator ensures the agent explores new possibilities for better
learning.
7. The Structure of Agents
Agents can be classified based on their design and functionality:
1. Table-driven agents:
o Use a large table of percept-action pairs.
o Limitations: Inefficient for complex environments due to table size and inability to
adapt.
2. Simple reflex agents:
o Operate using condition-action rules (e.g., "If dirty, then clean").
o Stateless: They do not remember past percepts.
o Suitable only for fully observable environments.
3. Model-based reflex agents:
o Maintain an internal state to track the environment's dynamics.
o Use models of how the environment changes with or without actions.
o More adaptive than simple reflex agents.
4. Agents with goals:
o Have explicit goals that define desirable outcomes.
o Plan and search for actions to achieve those goals.
o More flexible but computationally demanding.
5. Utility-based agents:
o Use a utility function to evaluate how desirable each state is.
o Handle trade-offs between conflicting goals (e.g., speed vs. safety).
6. Learning agents:
o Adapt and improve over time by learning from experiences.
o Consist of:
 Performance element: Executes actions.
 Critic: Evaluates performance.
 Learning element: Modifies the agent's behavior.
 Problem generator: Suggests exploratory actions.
Lecture-3
1. Introduction
 Problem-solving is a core activity for intelligent agents, involving searching through
possible actions to find a solution.
 Goal formulation: The agent decides what desirable state (goal) it wants to achieve based
on its current situation and performance measure.
 Problem formulation: The process of defining actions and states that the agent must
consider to achieve its goal.
 Search phase: The agent looks for a sequence of actions leading to the goal state.
 Execution phase: The agent performs the actions that achieve the goal.
2. Problem-Solving Agents (Goal-Based Agent)
 A problem-solving agent is a type of goal-based agent that decides its actions by finding a
sequence of steps leading to a desirable state (goal).
 Steps in Problem Solving:
1) Goal formulation: Define the desired goal state.
2) Problem formulation: Identify actions and states relevant to achieving the goal.
3) Search for a solution: Explore possible sequences of actions to find one leading to
the goal state.
4) Execution of solution: Implement the solution actions.
 A problem consists of:
1) An initial state: Where the agent starts.
2) A set of actions: Possible moves.
3) A goal test: Determines if the agent has reached the goal.
4) A path cost function: Evaluates the cost of the solution (e.g., time, energy,
distance).
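These four components can be made concrete in code. Below is a sketch for the two-square vacuum world (a toy problem in the next section); the state encoding as (location, dirt-status map) is an assumption for illustration:

```python
# Problem components for the two-square vacuum world.
def actions(state):
    return ["Left", "Right", "Suck"]   # in this tiny world, always applicable

def result(state, action):             # transition model
    loc, dirt = state
    dirt = dict(dirt)                  # copy so states stay immutable
    if action == "Suck":
        dirt[loc] = "Clean"
    elif action == "Left":
        loc = "A"
    elif action == "Right":
        loc = "B"
    return (loc, dirt)

def goal_test(state):                  # goal: all squares clean
    _, dirt = state
    return all(v == "Clean" for v in dirt.values())

def step_cost(state, action):
    return 1                           # each action costs 1

initial = ("A", {"A": "Dirty", "B": "Dirty"})
s = result(result(result(initial, "Suck"), "Right"), "Suck")
print(goal_test(s))  # True: Suck, Right, Suck solves the problem
```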
3. Example Problems
 Toy Problems:
o Vacuum World:
 States: Two locations, clean or dirty.
 Actions: Move left, move right, suck dirt.
 Goal: All locations clean.
 Path cost: Each action has a cost of 1.
o 8-Queens Problem:
 States: Chessboard configurations with 0–8 queens.
 Actions: Place or move queens such that no two threaten each other.
 Goal: 8 queens on the board without any attacking.
 Path cost: Not relevant; only the final state matters.
 Real-world problems:
o Touring Problem: Visit every city exactly once, starting and ending at the same city.
o Romanian Holiday Problem:
 Initial state: In(Arad).
 Goal state: In(Bucharest).
 Actions: Drive between cities.
 Path cost: Distance, time, or fuel consumption.
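For the 8-queens problem, only the goal test matters, and it can be sketched directly. The board encoding is an assumption: board[row] gives the column of the queen in that row, so two queens can never share a row by construction:

```python
# Goal test for 8-queens: no two queens share a column or a diagonal.
def no_queens_attack(board):
    n = len(board)
    for i in range(n):
        for j in range(i + 1, n):
            if board[i] == board[j]:              # same column
                return False
            if abs(board[i] - board[j]) == j - i:  # same diagonal
                return False
    return True

print(no_queens_attack([0, 4, 7, 5, 2, 6, 1, 3]))  # a known solution: True
print(no_queens_attack([0, 1, 2, 3, 4, 5, 6, 7]))  # all on one diagonal: False
```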
4. Searching for Solutions
 Search involves exploring the state space to find a path from the initial state to the goal
state.
 Search tree: Represents the exploration process:
o Nodes: States in the state space.
o Arcs: Actions leading from one state to another.
 Steps:
1) Initialize the search tree with the initial state.
2) Expand nodes using the successor function.
3) Use a strategy to choose which node to expand next.
4) Stop when a goal state is found.
 Example: Romanian Holiday Problem
o State space: Cities in Romania.
o Solution: Sequence of cities from Arad to Bucharest.
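The search loop above can be sketched with breadth-first search on the Romania example. The road list below is an abbreviated reconstruction for illustration; distances are omitted because BFS ignores step costs, so it finds the path with the fewest cities, not the fewest kilometers.

```python
from collections import deque

# Abbreviated, undirected road map of Romania (illustrative assumption).
roads = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Arad", "Oradea"],
    "Oradea": ["Zerind", "Sibiu"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
    "Timisoara": ["Arad", "Lugoj"],
    "Lugoj": ["Timisoara", "Mehadia"],
    "Mehadia": ["Lugoj", "Drobeta"],
    "Drobeta": ["Mehadia", "Craiova"],
    "Craiova": ["Drobeta", "Rimnicu Vilcea", "Pitesti"],
    "Rimnicu Vilcea": ["Sibiu", "Craiova", "Pitesti"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Pitesti": ["Rimnicu Vilcea", "Craiova", "Bucharest"],
    "Bucharest": ["Fagaras", "Pitesti"],
}

def bfs(start, goal):
    # Each frontier entry stores the full path so we can return the solution.
    frontier = deque([[start]])
    explored = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for city in roads[path[-1]]:
            if city not in explored:
                explored.add(city)
                frontier.append(path + [city])
    return None  # no path exists

print(bfs("Arad", "Bucharest"))  # → ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```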
5. Search Tree Terminology
 State: A representation of a physical configuration.
 Node: A data structure in the search tree representing:
1) State: Current configuration.
2) Parent node: The node that generated this one.
3) Action: Action taken to reach this node.
4) Path cost: Cost from the initial state to this node.
5) Depth: Number of steps from the root to this node.
 State space vs. Search tree:
o State space: All possible states reachable from the initial state.
o Search tree: Representation of exploration paths; could be infinite.
 Frontier: The set of leaf nodes available for expansion.
 Queuing function:
o Manages nodes to explore using strategies like FIFO, LIFO, or priority queues.
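A minimal sketch of the node structure and the three queuing disciplines; the class and field names are illustrative, not from any particular library.

```python
import heapq
from collections import deque

# A search-tree node bundles state, parent, action, path cost, and depth.
class Node:
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = (parent.path_cost if parent else 0) + step_cost
        self.depth = (parent.depth + 1) if parent else 0

root = Node("Arad")
child = Node("Sibiu", parent=root, action="Drive", step_cost=140)
print(child.depth, child.path_cost)  # → 1 140

# Three frontier disciplines:
fifo = deque()          # FIFO queue -> breadth-first search
fifo.append(child); first = fifo.popleft()

lifo = []               # LIFO stack -> depth-first search
lifo.append(child); last = lifo.pop()

priority = []           # priority queue keyed on path cost -> uniform-cost search
heapq.heappush(priority, (child.path_cost, id(child), child))
cheapest = heapq.heappop(priority)[2]
```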
6. Measuring Problem-Solving Performance
1) Completeness
 Definition: Determines whether the algorithm is guaranteed to find a solution if one exists.
 Importance: Ensures that the search method does not overlook valid solutions, especially
in problems where solutions might exist at great depths or in vast state spaces.
 Examples:
o Breadth-First Search (BFS) is complete because it systematically explores all nodes
level by level.
o Depth-First Search (DFS) is not complete in infinite state spaces as it can get stuck
exploring one branch indefinitely.
2) Time Complexity
 Definition: The amount of time required to find a solution, typically measured as the
number of nodes generated during the search.
 Key Influencing Factors:
o b: Branching factor (maximum number of successors per node).
o d: Depth of the shallowest goal node.
o m: Maximum depth of the state space (can be infinite).
 Worst-case Time Complexity:
o If the branching factor is b and the shallowest solution is at depth d, the total nodes
generated can be as large as O(b^d).
 Examples:
o For large branching factors and deep solutions, the time complexity grows
exponentially, making exhaustive searches impractical.
3) Space Complexity
 Definition: The maximum memory required during the search process, which is often the
number of nodes stored in memory at any given time.
 Components:
o Frontier: Nodes yet to be expanded.
o Explored Set: Nodes that have already been expanded (used in some algorithms
like A*).
 Worst-case Space Complexity:
o For most uninformed search algorithms, the space requirement is proportional to the
size of the frontier.
o Breadth-First Search (BFS): Requires memory proportional to all nodes at the
deepest level, O(b^d).
o Depth-First Search (DFS): Memory is proportional to the maximum depth of the tree,
O(b·m), where m is the maximum depth.
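This difference can be measured directly on a complete synthetic tree: a FIFO frontier (BFS-style) must hold an entire level, roughly b^d nodes, while a LIFO frontier (DFS-style) holds only about b nodes per level of the current path. A small sketch:

```python
from collections import deque

def max_frontier(b, depth, discipline):
    # Expand a complete b-ary tree down to `depth`, tracking the peak
    # frontier size. We store only each node's depth, since that is all
    # the memory measurement needs.
    frontier = deque([0])
    peak = 1
    while frontier:
        level = frontier.popleft() if discipline == "fifo" else frontier.pop()
        if level < depth:
            for _ in range(b):
                frontier.append(level + 1)
        peak = max(peak, len(frontier))
    return peak

b, d = 3, 5
print(max_frontier(b, d, "fifo"))  # BFS-style frontier peaks at b**d = 243
print(max_frontier(b, d, "lifo"))  # DFS-style frontier peaks at (b-1)*(d-1)+b = 11
```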
4) Optimality
 Definition: Whether the algorithm guarantees finding the solution with the least path cost.
 Criteria for Optimality:
o The search strategy must prioritize paths with the lowest cost.
o For uniform-cost search or A*, the search is optimal as it expands nodes in order of
increasing cost.
 Examples:
o BFS is optimal when all step costs are equal.
o DFS is not optimal because it does not consider path costs during exploration.
Additional Concepts
Branching Factor (b)
 The maximum number of children (successors) a node can have.
 A higher branching factor leads to exponential growth in both time and space complexity.
Depth (d)
 The number of steps in the shortest path from the initial state to the goal state.
 Determines how early the solution can be found in the tree.
Maximum Path Length (m)
 The longest possible path in the state space.
 For infinite state spaces, m = ∞, which can make some algorithms like DFS problematic.
Real-World Implications
 Trade-offs:
o Algorithms with better completeness and optimality often require more time and
memory.
o Example: BFS is complete and optimal but has high space complexity, making it
unsuitable for memory-constrained systems.
 Selection of Algorithms:
o For small problems or problems with finite depth, BFS might be a good choice.
o For problems with infinite state spaces, Iterative Deepening Search (IDS) combines
the benefits of DFS (low memory usage) and BFS (completeness and optimality).
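IDS can be sketched as repeated depth-limited DFS with an increasing limit: each round is a fresh DFS, so memory stays proportional to the depth, yet the shallowest solution is still found first. A minimal version on a toy state space (cycle detection omitted for brevity):

```python
def depth_limited(state, goal, successors, limit):
    # Depth-first search that gives up below `limit`; returns a path or None.
    if state == goal:
        return [state]
    if limit == 0:
        return None
    for nxt in successors(state):
        sub = depth_limited(nxt, goal, successors, limit - 1)
        if sub is not None:
            return [state] + sub
    return None

def iterative_deepening(start, goal, successors, max_depth=50):
    # Try limits 0, 1, 2, ... so the first solution found is a shallowest one.
    for limit in range(max_depth + 1):
        path = depth_limited(start, goal, successors, limit)
        if path is not None:
            return path
    return None

# Toy state space: integers, where n's successors are n+1 and 2n.
succ = lambda n: [n + 1, 2 * n]
print(iterative_deepening(1, 10, succ))  # → [1, 2, 4, 5, 10]
```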
Lecture-4
1. What is Adversarial Search?
 Definition: Adversarial search is a search strategy used in environments where agents
must compete against each other, such as in games. Unlike simple search problems, the
outcome depends on the actions of both the agent and its adversary.
 Characteristics:
o Involves two or more agents with conflicting goals.
o Often involves zero-sum games, where one player's gain is equivalent to the other
player's loss.
o The environment is typically deterministic and fully observable, making strategic
planning essential.
 Example: Chess, where each move by one player counters the other.
2. Application in Games
Adversarial search is widely applied in games requiring strategy and competition. Some
applications include:
 Board Games:
o Chess
o Checkers
o Go
 Card Games:
o Poker
o Bridge
 Video Games:
o Real-time strategy games like StarCraft or Age of Empires.
 Economic Simulations:
o Multi-agent systems simulating competitive marketplaces.
3. Different Types of Games
Games can be categorized based on various factors:
 Deterministic vs. Stochastic:
o Deterministic: No element of chance (e.g., Chess, Tic-Tac-Toe).
o Stochastic: Involves randomness (e.g., Backgammon, Monopoly).
 Perfect Information vs. Imperfect Information:
o Perfect Information: All players have complete knowledge of the state (e.g., Go,
Chess).
o Imperfect Information: Players have limited knowledge (e.g., Poker, Battleship).
 Zero-Sum vs. Non-Zero-Sum:
o Zero-Sum: One player’s gain is exactly equal to the other’s loss (e.g., Chess,
Checkers).
o Non-Zero-Sum: Players can achieve mutual gains or losses (e.g., Trading
simulations).
 Single-Player vs. Multiplayer:
o Single-Player: Solving puzzles or achieving personal objectives (e.g., Sudoku).
o Multiplayer: Competing with others (e.g., card games, multiplayer strategy games).
4. Optimal Decisions in Games
 The goal of an agent in a game is to maximize its chances of winning while minimizing its
opponent's success.
 Optimal decisions depend on:
o Game tree exploration: Evaluating all possible moves and counter-moves.
o Payoff values: Quantifying the desirability of game states for each player.
o Strategies:
 Min-Max Strategy: Aims to minimize the opponent's maximum possible gain
while maximizing the agent's own minimum gain.
 Alpha-Beta Pruning: Reduces the number of nodes explored in the game
tree, making the search more efficient.
5. Multiplayer Games
 In multiplayer games, more than two players compete, requiring additional strategies to
handle complex interactions.
 Features:
o Coalitions: Players may form temporary alliances.
o Dynamic Objectives: Each player may have unique or evolving goals.
o Complex Evaluation Functions: Utility values need to consider multiple players.
 Example: Board games like Risk or economic simulations with multiple competing agents.
6. Min-Max Algorithm
 A fundamental algorithm used in adversarial search for two-player games.
 Concept:
o Maximizing Player: Chooses moves to maximize their payoff.
o Minimizing Player: Chooses moves to minimize the maximizing player’s payoff.
 Steps:
1) Generate the game tree from the current state.
2) Evaluate terminal nodes using a utility function.
3) Backpropagate values up the tree:
 At max nodes, choose the maximum value of child nodes.
 At min nodes, choose the minimum value of child nodes.
 Output: The best move for the maximizing player assuming the opponent plays optimally.
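These steps can be sketched on a tiny hand-built game tree; the leaf utilities below are invented for illustration.

```python
def minimax(node, maximizing):
    # Leaves carry utility values; internal nodes are lists of children.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth-2 tree: MAX to move at the root, MIN at the middle layer.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]

# MIN reduces the branches to 3, 2, and 2; MAX then picks 3.
print(minimax(tree, True))  # → 3
```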
7. Alpha-Beta Pruning
 An optimization technique for the Min-Max algorithm.
 Purpose: Prune branches of the game tree that cannot affect the final decision, reducing
computation time.
 Key Terms:
o Alpha: The best value the maximizing player can guarantee at that level or above.
o Beta: The best value the minimizing player can guarantee at that level or above.
 Process:
o Traverse the game tree in depth-first order.
o Prune branches where:
 The value of a branch is worse than the alpha value for the maximizing player.
 The value of a branch is worse than the beta value for the minimizing player.
 Result: Same decision as Min-Max but with fewer node evaluations.
 Efficiency: Reduces the number of nodes to O(b^(d/2)), where b is the branching factor and d
is the depth.
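The pruning rule can be sketched as an extension of the minimax recursion. The visited list shows that, on an invented depth-2 tree, alpha-beta evaluates only 7 of the 9 leaves that plain minimax would examine:

```python
import math

visited = []  # leaves actually evaluated, for comparison with plain minimax

def alphabeta(node, alpha, beta, maximizing):
    # Leaves carry utility values; internal nodes are lists of children.
    if isinstance(node, (int, float)):
        visited.append(node)
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: MIN above will never allow this branch
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break  # alpha cutoff: MAX above already has a better option
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, -math.inf, math.inf, True))  # → 3, same answer as minimax
print(len(visited))  # → 7, fewer than the 9 leaves plain minimax evaluates
```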
8. State-of-the-Art Game Programs
 Advanced programs use adversarial search and other AI techniques to outperform human
players.
 Examples:
o Deep Blue: Defeated world chess champion Garry Kasparov using advanced game
tree exploration and evaluation.
o AlphaGo: Used reinforcement learning and neural networks to master the game of
Go, defeating top human players.
o Poker AI: AI systems like Libratus and Pluribus excel in imperfect information games
like Poker.
o Real-time Strategy Games:
 AI programs for games like StarCraft II use a mix of adversarial search,
machine learning, and planning to defeat professional players.
9. Deterministic Games in Practice
Definition
 Deterministic games are games where the outcome of every action is entirely predictable
and depends solely on the current state and the players' actions.
 These games lack randomness or chance elements, ensuring that the same initial
conditions and actions always lead to the same outcomes.
Characteristics
 Perfect Information: Players have complete knowledge of the game state at all times (e.g.,
Chess, Checkers).
 No Randomness: No dice rolls, shuffled cards, or other stochastic elements.
 Strategy-Dependent: Winning depends purely on skill and strategy, with no reliance on
luck.
Examples
1) Chess:
o Two-player game with perfect information.
o Each player alternates moves, aiming to checkmate the opponent's king.
o Enormous state space but entirely deterministic.
2) Checkers:
o Also a two-player deterministic game.
o Players move pieces diagonally and aim to capture the opponent's pieces or block
their moves.
3) Go:
o Deterministic game with simple rules but immense complexity.
o Players alternately place stones on a grid, trying to control territory.
4) Tic-Tac-Toe:
o Simplest example of a deterministic game.
o Played on a 3x3 grid, aiming to align three symbols in a row.
5) Connect Four:
o Two players drop discs into a vertical grid, aiming to align four discs in a row.
Adversarial Search in Deterministic Games
 Min-Max Algorithm: Ideal for two-player deterministic games. It evaluates the best move
assuming both players play optimally.
 Alpha-Beta Pruning: Optimizes the Min-Max algorithm by pruning irrelevant branches of
the search tree.
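As a concrete instance, applying minimax (with memoization to keep the full search fast) to Tic-Tac-Toe confirms the well-known result that perfect play by both sides ends in a draw:

```python
from functools import lru_cache

# Boards are 9-character strings ("X", "O", or "."), read row by row.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    # Minimax value: +1 if X can force a win, -1 if O can, 0 for a draw.
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0  # board full, no winner: draw
    moves = [i for i, cell in enumerate(board) if cell == "."]
    scores = [value(board[:i] + player + board[i + 1:],
                    "O" if player == "X" else "X")
              for i in moves]
    return max(scores) if player == "X" else min(scores)

print(value("." * 9, "X"))  # → 0: perfect play from both sides is a draw
```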
Real-World Applications
1) Game AI Development:
o Deterministic games like Chess and Go have driven advancements in artificial
intelligence.
o Algorithms such as Deep Blue (Chess) and AlphaGo (Go) were developed to solve
deterministic game challenges.
2) AI Benchmarks:
o Deterministic games serve as benchmarks for evaluating AI algorithms' performance.
o Examples include competitions between AI agents in Chess and Go.
3) Education and Research:
o These games are commonly used to teach concepts like search algorithms,
optimization, and decision-making in AI courses.
Advantages
 Predictable Outcomes: Perfect for developing and testing AI algorithms since results are
repeatable and predictable.
 Focus on Skill: Emphasizes strategic thinking and planning over chance.
Limitations
 Lack of Real-World Complexity: The absence of uncertainty and imperfect information
makes deterministic games less representative of real-world decision-making scenarios.
 Finite State Space: While large, the state space is ultimately finite, allowing for exhaustive
computation in some cases.