Unit-I: Foundations of AI
What is AI:
Artificial Intelligence (AI) refers to the simulation of human intelligence by machines, particularly
computer systems. It involves creating systems or algorithms that can perform tasks typically
requiring human intelligence, such as learning, reasoning, problem-solving, understanding language,
and perceiving the environment.
1. Narrow AI (Weak AI): This is AI designed to perform specific tasks, like voice assistants (e.g.,
Siri), recommendation algorithms (e.g., Netflix), or autonomous vehicles. These systems are
good at one thing but lack general intelligence.
2. General AI (Strong AI): This hypothetical form of AI would have the ability to understand,
learn, and apply intelligence across any task, much like a human. Strong AI doesn't exist yet
but is a subject of much research and debate.
3. Machine Learning (ML): A subset of AI, where machines are given large amounts of data and
learn to make decisions or predictions based on patterns they detect. Algorithms adapt as
they process more data, improving their performance.
4. Deep Learning: A further subset of machine learning, deep learning uses neural networks
with many layers to model complex patterns in large datasets, often achieving very high
performance in tasks like image recognition, natural language processing, and more.
AI applications include virtual assistants, autonomous vehicles, facial recognition, and personalized
recommendations, to name a few. AI is rapidly evolving, bringing opportunities but also ethical
considerations related to jobs, privacy, and decision-making.
Foundations of AI:
The foundation of Artificial Intelligence (AI) is rooted in a combination of multiple disciplines such
as mathematics, computer science, neuroscience, and cognitive psychology. The development of AI
has been shaped by both theoretical advances and technological progress. Here are the key
foundational concepts and fields that contribute to AI:
1. Philosophy
Philosophical Roots: AI traces its origins to ancient philosophical questions about the
nature of intelligence, reasoning, and human cognition. Philosophers like Aristotle
developed formal systems of logic that have influenced AI's goal of mimicking human
reasoning.
Logic and Reasoning: Early AI was grounded in symbolic logic and rule-based systems
(deductive reasoning). The goal was to create formal systems where reasoning could be
automated, laying the groundwork for early AI systems.
2. Mathematics
Linear Algebra: AI, particularly machine learning and deep learning, heavily relies on linear
algebra for manipulating data in the form of vectors, matrices, and tensors.
Probability and Statistics: Probabilistic reasoning is crucial for AI systems to make decisions
under uncertainty. Bayesian reasoning, in particular, is fundamental in modern AI
approaches like machine learning.
Calculus and Optimization: Derivatives and gradient descent methods from calculus are
essential in training machine learning models, allowing them to minimize error functions
and improve over time.
Graph Theory: Many AI algorithms, particularly in networking and search, rely on graph
theory to model relationships between data points.
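To make the role of calculus and optimization concrete, here is a minimal Python sketch of gradient descent minimizing a simple squared-error function. The function, learning rate, and step count are illustrative choices, not taken from any particular system.

```python
# Gradient descent sketch: minimize the error function f(w) = (w - 3)^2.
# Its derivative is f'(w) = 2 * (w - 3); the minimum lies at w = 3.

def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Repeatedly step against the gradient to reduce the error."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)   # move opposite the slope of the error surface
    return w

w_min = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(round(w_min, 4))      # prints 3.0
```

Training a machine learning model follows the same pattern, except that w is a vector (or matrix) of parameters and the error is measured over a dataset.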
3. Neuroscience
Human Brain as Inspiration: Early AI research took inspiration from how the brain
processes information. The concept of neural networks, which form the basis of deep
learning, was modeled after biological neurons.
4. Computer Science
Search Algorithms: Many AI problems can be reduced to searching for the best solution
from a set of possibilities. Algorithms like A* (A-star) and Minimax are used in tasks like
pathfinding and game-playing.
Data Structures: Efficient data structures, such as trees, graphs, and hash maps, are
essential for the functioning of AI algorithms, especially in areas like decision-making and
knowledge representation.
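The search idea above can be illustrated with a small A* implementation on a grid. The grid size, wall positions, and Manhattan-distance heuristic below are illustrative choices.

```python
import heapq

def a_star(start, goal, walls, size):
    """A* on a size x size grid: always expand the cell with the lowest f = g + h."""
    def h(p):  # Manhattan distance: an admissible heuristic (never overestimates)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]   # priority queue of (f, g, cell)
    best_g = {start: 0}
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g                    # cost of the cheapest path found
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size) or nxt in walls:
                continue
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None                         # goal unreachable

# Shortest path length on a 5x5 grid with a partial wall at x = 1:
print(a_star((0, 0), (4, 4), {(1, 0), (1, 1), (1, 2), (1, 3)}, 5))  # prints 8
```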
5. Machine Learning
Supervised Learning: AI systems that learn from labeled examples, mapping inputs to known outputs.
Unsupervised Learning: AI systems that learn patterns and relationships in data without labeled outcomes.
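As a concrete example of unsupervised learning, the sketch below runs a tiny one-dimensional k-means clustering. The data points, initial centers, and iteration count are made up for illustration.

```python
# Minimal k-means sketch (1-D, k=2): grouping unlabeled points by proximity.

def kmeans(points, centers, iters=10):
    """Alternate two steps: assign points to the nearest center, then recompute centers."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # assignment step: nearest center by absolute distance
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # update step: each center moves to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups (around 1 and around 9) are found without any labels:
print(kmeans([1.0, 2.0, 0.0, 9.0, 10.0, 8.0], centers=[0.0, 5.0]))  # prints [1.0, 9.0]
```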
7. Knowledge Representation
Expert Systems: These systems are designed to replicate the decision-making abilities of
human experts by encoding knowledge into rules.
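The rule-encoding idea behind expert systems can be sketched with a toy forward-chaining engine. The rules and facts below are invented for illustration, not taken from any real medical system.

```python
# Toy rule-based expert system sketch: forward chaining over if-then rules.
# Each rule fires when all of its condition facts are known, adding its conclusion.

rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)          # assert the newly derived fact
                changed = True
    return facts

result = forward_chain({"fever", "cough", "short_of_breath"}, rules)
print("refer_to_doctor" in result)             # prints True
```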
8. Natural Language Processing (NLP)
NLP is the branch of AI that focuses on enabling machines to understand, interpret, and
generate human language. Early AI systems used rule-based approaches, while modern
NLP relies on machine learning and deep learning techniques.
9. Robotics
Robotics combines AI with physical systems, enabling machines to interact with the
physical world. AI in robotics involves decision-making, perception (such as computer
vision), and the integration of sensory data to navigate environments.
10. Key Historical Milestones
Turing Test (1950): Proposed by Alan Turing to assess a machine’s ability to exhibit
intelligent behavior indistinguishable from a human.
First AI Programs: Early AI programs, like the Logic Theorist (1955) and General Problem
Solver (1957), aimed to replicate human problem-solving processes.
AI Winter: Periods in the history of AI where progress slowed due to unmet expectations
and reduced funding (1970s, 1980s).
Deep Learning Breakthrough: Starting in the 2010s, deep learning revolutionized fields like
image recognition and natural language processing, leading to the modern AI boom.
Together, these fields form the foundation of AI, enabling advancements in systems capable of
learning, reasoning, and interacting with the world.
History of AI:
The history of Artificial Intelligence (AI) spans decades of research and development, marked by
significant breakthroughs, periods of hype, and intervals of disillusionment. Here's a timeline of
key events and milestones in AI history:
1943: Warren McCulloch and Walter Pitts publish a paper on neural networks, describing
how neurons in the brain could be modeled using mathematical logic, laying the
groundwork for future neural networks.
1950: Alan Turing publishes the paper "Computing Machinery and Intelligence," proposing
the Turing Test as a way to determine if a machine can exhibit intelligent behavior
indistinguishable from a human.
1951: The first AI programs are written. Christopher Strachey’s checkers program and Alan
Turing’s chess program were among the earliest examples of AI applied to games.
1956: The term Artificial Intelligence is coined by John McCarthy during the Dartmouth
Conference, widely considered the founding event of AI as a field of study. This conference
brought together key figures such as Marvin Minsky, Allen Newell, and Herbert Simon.
1961: The first industrial robot, Unimate, is introduced, automating tasks in manufacturing.
1965: Joseph Weizenbaum develops ELIZA, one of the first natural language processing
programs, designed to simulate human conversation by mimicking a psychotherapist.
1969: Marvin Minsky and Seymour Papert publish "Perceptrons," a book on neural networks that highlights the limitations of single-layer perceptrons, which temporarily slows research into neural networks.
1969: Shakey the Robot, developed at the Stanford Research Institute (SRI), becomes the first mobile robot able to reason about its own actions, navigating environments and performing simple tasks.
1970s: The early optimism surrounding AI fades due to technical challenges, lack of
computing power, and unmet expectations. Funding cuts lead to the first AI Winter, a
period of reduced interest and progress in AI research.
1972: The Prolog programming language is developed, focusing on symbolic reasoning and
logic programming.
1976: MYCIN, an expert system for diagnosing bacterial infections, demonstrates the
potential of rule-based systems in specialized domains like medicine. Expert systems
become a major focus of AI research during the 1970s and 1980s.
1980: The development of XCON, an expert system for configuring computers at Digital
Equipment Corporation (DEC), shows the commercial potential of AI in business
applications.
1982: John Hopfield revives interest in neural networks with the development of the
Hopfield Network.
Late 1980s: Despite advancements in AI, the limitations of expert systems become
apparent due to their inability to generalize beyond predefined rules, leading to another
decline in enthusiasm and the start of the second AI Winter in the early 1990s.
1990s: AI research shifts towards machine learning techniques, which focus on statistical
methods and data-driven approaches rather than rule-based systems.
1997: IBM's Deep Blue defeats world chess champion Garry Kasparov, marking a significant
achievement in AI. Deep Blue uses brute-force search and expert heuristics to evaluate
millions of chess positions per second.
1998: Kismet, developed at MIT, becomes one of the first robots capable of demonstrating
emotional expressions, hinting at advancements in AI for social interaction.
2006: Geoffrey Hinton introduces the concept of deep learning as we know it today, using
deep neural networks with many layers to model complex data patterns. This marks the
beginning of the modern AI revolution.
2009: Google introduces the self-driving car project, a major leap for AI in robotics and
autonomous systems, demonstrating AI's potential in real-world applications.
2011: IBM's Watson wins Jeopardy! by using natural language processing and machine
learning to understand and answer complex questions. This showcases AI's ability to
process large volumes of unstructured data.
2012: The AlexNet model, developed by Alex Krizhevsky, wins the ImageNet competition,
revolutionizing computer vision through deep learning and showing how neural networks
can outperform traditional algorithms.
2021: AlphaFold, developed by DeepMind, solves the 50-year-old protein folding problem,
demonstrating AI's potential to revolutionize fields like biology and healthcare.
2022: Text-to-image AI models like DALL-E 2 and Stable Diffusion show AI's creative
abilities, producing highly realistic images from textual prompts.
2023: Large language models (LLMs) like GPT-4 continue to advance natural language
understanding and generation, finding applications in education, customer service, and
coding. However, ethical concerns, particularly around bias, misinformation, and the
impact on jobs, remain significant.
Ongoing Research: While AI continues to make strides in specialized tasks, the pursuit of
Artificial General Intelligence (AGI)—AI that can perform any intellectual task that a human
can—is still a distant goal. Current AI systems excel in narrow domains but lack true
general intelligence or consciousness.
Ethics and Governance: As AI becomes more integrated into everyday life, debates
surrounding AI ethics, transparency, and regulation intensify. Issues like job displacement,
privacy concerns, and decision-making transparency are central to ongoing discussions.
The history of AI is characterized by alternating cycles of optimism, hype, and setbacks. Today, AI is
advancing rapidly, with potential applications in nearly every field, from healthcare to finance to
education, but it also raises complex social, ethical, and economic questions that will shape its
future.
State of the Art in AI:
The "state of the art" in Artificial Intelligence (AI) refers to the most advanced and cutting-edge
technologies, methods, and systems currently being developed and deployed. As of the 2020s, AI
has seen rapid progress, especially in the following key areas:
1. Natural Language Processing (NLP)
Large Language Models (LLMs): AI models like OpenAI's GPT-4, Google's PaLM, and
Anthropic's Claude are state-of-the-art in generating human-like text, understanding
context, and performing various language tasks. These models can write essays, generate
code, and answer complex questions, often exhibiting capabilities close to human-level in
many text-related tasks.
Transformers: Introduced in 2017, the transformer architecture (used in models like BERT,
GPT, T5) revolutionized NLP. Transformers use attention mechanisms to process sequences
of data and have been highly successful in tasks such as machine translation, text
summarization, question answering, and more.
Multimodal Models: Models like GPT-4 and CLIP from OpenAI are capable of handling both
text and image data, pushing the boundaries of how AI understands and generates
information from multiple types of input simultaneously.
2. Deep Learning
Neural Networks and Deep Learning: Deep learning continues to dominate AI with neural
networks that have many layers (hence "deep"), allowing them to model highly complex
patterns in data. These models are now the standard for tasks such as image recognition,
speech recognition, and autonomous driving.
Self-Supervised Learning: Techniques that allow models to learn from large amounts of
unlabeled data have become a crucial innovation, helping models like SimCLR and BYOL in
unsupervised learning and reducing reliance on labeled datasets.
3. Computer Vision
Vision Transformers (ViTs): These models have redefined how AI processes images by
applying the transformer architecture (previously dominant in NLP) to visual tasks, often
surpassing convolutional neural networks (CNNs) in performance for tasks like image
classification.
Image Synthesis: AI tools like DALL-E 2, Stable Diffusion, and MidJourney are at the
forefront of generating highly realistic and artistic images from text descriptions. These
models are transforming industries such as design, marketing, and content creation.
Object Detection and Segmentation: Models like YOLOv7 and Mask R-CNN are state-of-the-
art in identifying and segmenting objects in images, with applications in autonomous
driving, surveillance, medical imaging, and augmented reality.
4. Reinforcement Learning
Deep Reinforcement Learning (DRL): Systems like DeepMind’s AlphaGo, AlphaZero, and
MuZero represent the state of the art in reinforcement learning, where AI learns to make
decisions by interacting with an environment. MuZero, for example, can master complex
games like chess and Go without being explicitly programmed with the rules.
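The systems above rely on far more sophisticated machinery, but the core reinforcement-learning idea, learning action values from interaction and rewards, can be sketched with tabular Q-learning on a toy corridor. The environment, reward scheme, and hyperparameters here are illustrative, not DeepMind's algorithms.

```python
import random

# Toy tabular Q-learning: a 1-D corridor of states 0..4, with reward on reaching 4.
# Actions are -1 (left) and +1 (right); hyperparameters are illustrative choices.

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    q = {(s, a): 0.0 for s in range(5) for a in (-1, +1)}   # the Q-table
    random.seed(0)                                          # reproducible runs
    for _ in range(episodes):
        s = 0
        while s != 4:
            # epsilon-greedy: mostly exploit the Q-table, sometimes explore
            if random.random() < epsilon:
                a = random.choice((-1, +1))
            else:
                a = max((-1, +1), key=lambda a: q[(s, a)])
            s2 = min(max(s + a, 0), 4)                      # environment transition
            r = 1.0 if s2 == 4 else 0.0
            # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            best_next = 0.0 if s2 == 4 else max(q[(s2, b)] for b in (-1, +1))
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# After training, the greedy policy should prefer moving right in every state.
print(all(q[(s, 1)] > q[(s, -1)] for s in range(4)))
```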
Self-Driving Cars: Companies like Tesla, Waymo, and Cruise continue to push the
boundaries of autonomous driving technology, using AI for tasks like perception, planning,
and control in real-world environments.
6. Generative AI
Text-to-Image and Text-to-Video: AI models like DALL-E 2 and MidJourney are leading
innovations in text-to-image generation, where users provide a text prompt, and the AI
generates realistic or artistic visuals. This is extended to text-to-video with platforms like
RunwayML pushing AI-generated video content.
AI in Art and Music: AI is increasingly being used to generate art, music, and creative
content. Generative models are creating new possibilities in creative industries, where AI
can compose music, write poetry, and even assist in film production.
7. AI in Healthcare and Science
AI in Drug Discovery: AI is accelerating the drug discovery process, using machine learning
models to predict the effectiveness of drug compounds and their potential side effects,
significantly shortening development cycles.
8. Autonomous Decision-Making
AI in Supply Chains: AI systems optimize supply chains, managing logistics, inventory, and
distribution more effectively by predicting demand, optimizing routes, and minimizing
waste.
Bias Detection and Fairness: Research into AI fairness is a key focus, with new algorithms
developed to detect and mitigate biases in AI systems. Tools like Fairness Indicators are
used to monitor bias in machine learning models.
Quantum Computing: Although in its infancy, quantum AI represents a frontier area where
quantum computers are expected to solve certain types of AI problems (such as
optimization tasks) exponentially faster than classical computers. Companies like IBM and
Google are exploring quantum machine learning.
Differential Privacy: Techniques that ensure data privacy while allowing AI models to learn
from sensitive data are increasingly used, particularly in industries where user privacy is
paramount (e.g., healthcare, finance).
Bias and Fairness: Even state-of-the-art models suffer from biases, as they often reflect the
biases present in their training data. Ensuring fairness and avoiding discriminatory
outcomes is an ongoing area of concern.
Explainability: As AI models become more complex, understanding why they make certain
decisions becomes harder. This lack of transparency, especially in critical domains like
healthcare or criminal justice, raises ethical questions.
Ethics of Generative AI: As AI generates realistic text, images, and videos, concerns about
deepfakes, misinformation, and intellectual property rights are growing. These issues
present new regulatory and ethical challenges.
INTELLIGENT AGENTS
Intelligent agents are autonomous entities that perceive their environment, reason or plan to
achieve goals, and take actions to affect their environment in some way. They are a fundamental
concept in Artificial Intelligence (AI) and can be thought of as systems that act in an intelligent
manner, typically by making decisions to solve complex problems.
Key Characteristics:
1. Autonomy: Intelligent agents operate without direct human intervention, deciding and acting on their own.
2. Perception: They sense and interpret data from their environment using sensors or input devices. For example, a robot uses cameras and sensors to perceive its surroundings.
3. Reasoning: Based on the input data, intelligent agents use reasoning, decision-making, or
problem-solving processes to make choices.
4. Goal-Oriented Behavior: Agents have specific objectives or goals they seek to achieve,
which guide their decision-making.
5. Learning and Adaptation: Some agents are capable of learning from their environment and
past experiences, adapting their actions to improve performance over time.
6. Action: Agents take actions that affect their environment. These actions can be physical (in
robots) or digital (in software systems).
Components of an Intelligent Agent:
1. Sensors: Devices or inputs that collect raw data from the environment.
2. Perception Component: A system that processes and interprets the data collected from sensors.
3. Reasoning/Planning Engine: The core decision-making part that uses algorithms or rules to
plan and reason about the best course of action.
4. Actuators: Mechanisms or actions the agent can perform to interact with the environment.
5. Learning Mechanism (optional): In learning agents, this component allows the system to
improve its actions based on past experiences.
Examples of Intelligent Agents:
2. Virtual Assistants: AI-driven personal assistants like Siri, Google Assistant, or Alexa that
perceive voice commands, process them, and perform actions like sending messages or
setting reminders.
3. Autonomous Vehicles: Self-driving cars that use sensors (cameras, radar, LiDAR) to perceive
the road and surroundings, make decisions (e.g., braking, steering), and drive
autonomously.
4. Software Agents: Systems like email filters that automatically sort messages, or AI in video
games that controls non-player characters (NPCs).
5. Recommender Systems: Netflix or Amazon recommendation engines that use data about
users' preferences to suggest movies, shows, or products.
6. Trading Bots: Automated trading agents that operate in financial markets, making buy/sell
decisions based on real-time data analysis.
Applications of Intelligent Agents:
1. Robotics: Intelligent agents are used in autonomous robots for navigation, object
manipulation, and interaction with humans.
2. Healthcare: AI agents assist in diagnosing diseases, suggesting treatments, and monitoring
patient health through wearable sensors.
5. Gaming: AI agents control NPCs and simulate complex behaviors, improving the gaming
experience.
6. Smart Homes: Devices like thermostats, lights, and security systems are controlled by
intelligent agents to optimize energy usage and security.
7. Autonomous Vehicles: Self-driving cars use AI agents to perceive the environment, plan
routes, and make driving decisions.
In Artificial Intelligence (AI), agents and environments are fundamental concepts that define how
intelligent systems interact with the world around them. Understanding these concepts is crucial
for designing AI systems that can make decisions, solve problems, and achieve goals in complex
settings.
Agents:
An agent is any entity that perceives its environment through sensors and acts upon it through
actuators to achieve its goals. Agents can be robots, software programs, humans, or even animals. They are designed to operate autonomously, meaning they can take actions without direct human intervention, based on the information they perceive.
Characteristics of Agents:
1. Perception: Agents perceive the environment using sensors (e.g., cameras for a robot,
input data for a software agent).
2. Action: Agents perform actions on the environment using actuators (e.g., motors in robots
or actions like sending a message in software agents).
3. Autonomy: Agents can make decisions and take actions on their own, based on the
information they have gathered from their environment.
4. Rationality: Agents aim to maximize their performance measure by choosing the most
appropriate actions. Rational agents make decisions that lead to the best expected
outcome given their knowledge.
ENVIRONMENT:
The environment refers to the external world with which the agent interacts. It encompasses
everything the agent perceives and acts upon. The environment can be physical (as in the real
world) or virtual (as in a computer game or a simulated world). Different environments pose
different challenges for agents, such as uncertainty, partial observability, and changing conditions.
Characteristics of Environments:
Fully Observable vs. Partially Observable:
o In a fully observable environment, the agent has complete access to all the relevant information about the current state of the environment.
Example: A chess game, where all pieces and possible moves are visible.
o In a partially observable environment, the agent has only limited information and must make decisions based on incomplete or uncertain data.
Episodic vs. Sequential:
o In an episodic environment, the agent’s actions in one episode do not affect future episodes. Each action is independent of previous actions.
o In a sequential environment, current actions can affect future states and decisions, requiring the agent to consider the long-term impact of its actions.
Static vs. Dynamic:
o A static environment does not change while the agent is deliberating or making decisions.
Example: Solving a puzzle where the state remains the same during the decision process.
o A dynamic environment changes over time, either autonomously or as a result of other agents or forces in the environment.
Discrete vs. Continuous:
o A discrete environment has a finite number of states or actions, and the agent can clearly identify them.
Example: A board game like checkers, where the board and pieces have distinct, finite positions.
Single-Agent vs. Multi-Agent:
o In a single-agent environment, there is only one agent acting to achieve its goals, with no other agents influencing its actions.
o In a multi-agent environment, several agents act in the same environment and influence one another, whether cooperatively or competitively.
Example: A stock market, where multiple agents (traders) buy and sell stocks, influencing each other's decisions.
Agent-Environment Interaction:
The agent's interaction with the environment can be described as a cycle where:
1. The agent perceives the current state of the environment through its sensors.
2. Based on its perception, the agent uses its decision-making process to select an action.
3. The agent performs the chosen action through its actuators.
4. The environment changes in response to the agent's action, and the agent perceives the new state, continuing the cycle.
This cycle is often formalized using the concept of perception-action loops or feedback loops in
which the agent continually adapts its actions based on how the environment evolves.
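The perception-action loop can be sketched in Python with a thermostat-style agent. The environment model, target temperature, and thresholds are illustrative choices.

```python
# Sketch of a perception-action loop: a thermostat-style agent.

def perceive(env):
    return env["temperature"]            # sensor reading

def decide(temp, target=20.0):
    """Simple decision rule keeping temperature within one degree of the target."""
    if temp < target - 1:
        return "heat_on"
    if temp > target + 1:
        return "heat_off"
    return "no_op"

def act(env, action):
    # The environment changes in response to the agent's action.
    if action == "heat_on":
        env["temperature"] += 0.5
    elif action == "heat_off":
        env["temperature"] -= 0.5

env = {"temperature": 16.0}
for step in range(12):                   # perceive -> decide -> act, repeatedly
    action = decide(perceive(env))
    act(env, action)
print(env["temperature"])                # prints 19.0
```

The loop stabilizes once the temperature enters the acceptable band, at which point the agent's decision becomes a no-op.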
Performance Measure:
The performance measure is the criterion used to evaluate how successfully an agent's behavior achieves its goals in its environment (e.g., the amount of dirt cleaned by a vacuum robot, or the safety and speed of a self-driving car).
Designing an Agent:
When designing an intelligent agent, several factors need to be considered:
1. Task Environment: Understanding the characteristics of the environment the agent will
operate in (e.g., is it dynamic, partially observable, or multi-agent?).
2. Agent Architecture: Defining the structure and components of the agent, such as its
sensors, actuators, decision-making algorithms, and memory for tracking past experiences.
3. Agent’s Knowledge: Determining how much prior knowledge the agent should have about
the environment and how it can learn or adapt to new situations.
4. Learning Capability: Deciding whether the agent should learn from past experiences
(learning agent) or if it will be programmed with a fixed set of rules (non-learning agent).
Examples of Agents and Their Environments:
1. Warehouse robot:
o The agent (robot) perceives the environment through cameras and sensors.
o The environment includes obstacles (e.g., shelves, workers) and tasks (e.g., pick up, drop off).
o The robot adapts its actions based on sensor data and changing conditions in the warehouse.
2. Self-driving car:
o The agent (self-driving car) perceives its environment using sensors like cameras, LiDAR, and GPS.
o It decides how to drive, steer, and brake based on its perception of the road, traffic, and pedestrians.
3. Virtual assistant:
o The agent (virtual assistant, like Siri or Alexa) perceives its environment through voice inputs.
o The environment is a digital space, where it interacts with apps, databases, and the internet to fulfill user requests.
4. Chess-playing AI:
o The agent (chess AI) perceives the game state by reading the positions of the pieces on the board.
o It decides its next move based on an evaluation of potential future game states.
o The environment is fully observable and deterministic because all possible moves and outcomes are known.
RATIONALITY:
The concept of rationality in Artificial Intelligence (AI) refers to the idea that an
intelligent agent makes decisions and takes actions that lead to the best possible outcome,
based on the information available to it. A rational agent is designed to act in a way that
maximizes its performance, given its knowledge and capabilities.
1. Performance Measure:
o The criterion that defines how the success of the agent's behavior is judged.
o Example: For a self-driving car, the performance measure could be reaching the destination safely and efficiently while following traffic rules.
2. Perception and Knowledge:
o A rational agent bases its actions on the perceptual information it receives from its environment. The better the perception, the more informed the decisions.
o It also uses its prior knowledge about the environment, which might include rules, previous experiences, and learned information.
3. Action:
o The agent chooses actions that, based on its current knowledge, are expected to
maximize its performance measure. This involves reasoning about the outcomes of
potential actions and selecting the best one.
4. Future Consequences:
o Rationality involves reasoning about the future consequences of actions, not just
their immediate effects.
o Example: In a multi-step task like a robot navigating a maze, the agent must think
ahead about how its actions will influence future states and the final goal.
A rational agent is one that, for every possible sequence of percepts (i.e., information received from the environment), selects an action that maximizes its expected performance, given the available evidence. What is rational at any given time depends on four things:
1. Performance measure: The criterion that defines success.
2. Prior knowledge: What the agent already knows about the environment.
3. Percept sequence: The observations the agent has received to date.
4. Actions available: The set of possible actions the agent can choose from.
Types of Rationality:
1. Perfect Rationality:
o A perfectly rational agent would always select the best action given the available
information, environment, and performance measure.
2. Bounded Rationality:
o In practice, agents have limited time, memory, and computational resources, so perfect rationality is rarely achievable.
o Agents must make decisions that are "good enough" within these limitations, even if they are not the optimal ones.
o Example: In real-time applications like autonomous driving, the car must make
decisions quickly based on incomplete or uncertain data, meaning the decision
may be rational given the time constraints, but not perfect.
Rationality also depends on the characteristics of the environment in which the agent operates:
o In a deterministic environment, the agent knows exactly how its actions will affect the environment, leading to predictable outcomes.
o In a stochastic environment, the outcomes of actions are uncertain, so the agent must reason about probabilities and expected results.
o In a static environment, the agent does not need to worry about changes in the
environment while it is deliberating.
o In a dynamic environment, the agent must account for the possibility that the
environment may change (e.g., other agents acting, or the passage of time).
It is important to note that rationality is not the same as omniscience. An omniscient agent would
know the actual outcomes of all actions in advance and could always make the best choice.
However, rational agents do not have access to perfect knowledge; they make the best decisions
given what they know and the information available at the time.
Example of Rationality:
Consider a vacuum-cleaning robot:
The environment is a room divided into a grid, some cells of which are dirty.
The robot has sensors to detect dirt and walls, and it can move in four directions (up,
down, left, right).
The robot’s goal (performance measure) is to clean the entire room in the shortest time
possible.
To act rationally, the robot should:
Use its sensors to perceive the environment (e.g., detect dirt and walls).
Use prior knowledge about the layout of the room (if it has been there before).
Plan its movements to maximize efficiency (e.g., not revisiting already cleaned areas).
Make decisions based on the current and expected future states, aiming to clean the room
quickly and avoid unnecessary movements.
A rational vacuum agent would make decisions that maximize the amount of dirt cleaned in the
shortest time, given the limitations of its sensors and the information available to it.
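A stripped-down, two-cell version of this vacuum world (after the classic textbook example) can be written as a simple reflex agent. The two-cell world, action names, and step count below are a simplification of the grid described above, chosen for illustration.

```python
# Minimal two-cell vacuum world (cells "A" and "B"): the agent sucks if the
# current cell is dirty, otherwise it moves to the other cell.

def reflex_vacuum_agent(location, dirty):
    if dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

def run(world, location, steps=4):
    """Run the agent for a fixed number of steps; return cells cleaned."""
    cleaned = 0
    for _ in range(steps):
        action = reflex_vacuum_agent(location, world[location])
        if action == "Suck":
            world[location] = False      # the cell is now clean
            cleaned += 1
        elif action == "Right":
            location = "B"
        else:
            location = "A"
    return cleaned

print(run({"A": True, "B": True}, "A"))  # prints 2
```

This agent is rational for this tiny world given its percepts, even though it is purely reactive: no sequence of its actions leaves dirt uncleaned longer than necessary.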
Rationality in AI is about making the best possible decisions to achieve the highest
performance based on the available information, within the limitations of the agent's
knowledge and computational capabilities. While perfect rationality may be unattainable
in most practical scenarios, AI systems are designed to approximate rational behavior as
closely as possible through the use of advanced decision-making techniques.
NATURE OF ENVIRONMENTS:
The nature of environments in which an agent operates is a critical factor in the design of
AI systems. The environment defines the challenges and constraints an agent must consider while
interacting and making decisions. Depending on the characteristics of the environment, the
complexity of the agent's behavior may vary. Here, we'll explore the key features and
classifications of environments in AI.
Key Characteristics of Environments:
o Fully Observable:
The agent has complete access to all relevant information about the current state of the environment.
o Partially Observable:
The agent may need to infer hidden information or make decisions under uncertainty.
Example: A self-driving car may have sensors that fail to detect all aspects of its surroundings, such as a pedestrian hidden behind a parked vehicle.
o Deterministic:
The next state of the environment is completely determined by the current state and the agent’s action, so outcomes are predictable.
o Stochastic:
The outcomes of actions are uncertain; the same action in the same state may lead to different results.
o Episodic:
The agent’s experience is divided into independent episodes; actions in one episode do not affect the next.
o Sequential:
Current actions affect future states and decisions.
Example: Driving a car, where each steering decision influences the car’s future state and position on the road.
o Static:
The environment does not change while the agent is deliberating.
Example: A crossword puzzle, where the puzzle state doesn’t change while the agent is solving it.
o Dynamic:
The environment changes over time, independently of or in response to the agent.
o Discrete:
The environment has a finite number of distinct states and actions.
Example: A board game like chess or tic-tac-toe, where the game board has a limited number of possible configurations.
o Continuous:
In a continuous environment, the state space and action space are infinite, and variables can take any value within a range. The agent must deal with smooth changes in state and action.
Example: The physical world for a robot, where the robot’s position, velocity, and other factors are continuous variables.
o Single-Agent:
Only one agent acts in the environment to achieve its goals.
o Multi-Agent:
Multiple agents act and interact, cooperatively or competitively.
Examples:
1. Chess game:
o The entire state of the game is visible to both players (fully observable).
o The game consists of discrete steps or moves, and each step depends on the
previous one (sequential, discrete).
o The game board doesn’t change unless one of the players makes a move (static).
2. Self-driving car:
o The car’s sensors may not capture all necessary information, such as occluded pedestrians (partially observable).
o The car operates in a continuous space, adjusting speed and steering continuously
(continuous).
3. Medical diagnosis system:
o The system may not have full access to all patient information or symptoms (partially observable).
o Diagnoses and treatment outcomes may vary due to uncertainty in how a patient
will respond to treatment (stochastic).
o The system deals with distinct states, such as diagnosing different diseases
(discrete).
o The state of the patient doesn’t change during the decision-making process (static).
o There is only one agent (the diagnostic AI system) making decisions (single-agent).
4. Stock-trading agent:
o The agent doesn’t have complete knowledge of the market, and there are hidden factors (partially observable).
o Each trading decision affects future states, like portfolio performance (sequential).
o Multiple traders (agents) interact in the market, influencing each other's decisions
(multi-agent).
The nature of environments significantly influences how AI systems are designed and
operate. Understanding the environment’s characteristics—whether it is fully observable or
partially observable, deterministic or stochastic, static or dynamic, episodic or sequential—guides
the design of agents and algorithms to ensure they can effectively achieve their goals. The
complexity of the environment directly affects the sophistication required for the agent’s decision-making processes.
Structure of Agents:
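The environment dimensions discussed above can be made concrete with a small sketch. The `EnvironmentProfile` class and the two example profiles below are illustrative, not a standard API; the classifications follow the chess and self-driving-car analyses given earlier.

```python
from dataclasses import dataclass

# Each environment is described along the standard dimensions above.
@dataclass
class EnvironmentProfile:
    fully_observable: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool
    single_agent: bool

chess = EnvironmentProfile(
    fully_observable=True,   # the whole board is visible to both players
    deterministic=True,      # a move has exactly one result
    episodic=False,          # each move depends on the previous ones
    static=True,             # the board waits while a player thinks
    discrete=True,           # finitely many board configurations
    single_agent=False,      # two competing players
)

self_driving_car = EnvironmentProfile(
    fully_observable=False,  # sensors can miss occluded pedestrians
    deterministic=False,     # other drivers behave unpredictably
    episodic=False,          # each steering decision affects the future
    static=False,            # traffic changes while the car deliberates
    discrete=False,          # speed and steering vary continuously
    single_agent=False,      # many vehicles interact on the road
)
```

Comparing two profiles side by side makes it clear why a chess engine and an autonomous vehicle need very different agent designs.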
The structure of agents in Artificial Intelligence (AI) refers to how an intelligent agent is
designed to perceive its environment, process information, and take actions. An agent's structure
includes its internal components, the way it interprets inputs, and how it decides on actions to
maximize its performance in the environment.
1. Perception (Sensors):
o Perceptual inputs are obtained from the environment through sensors. These
sensors collect data that help the agent understand its current situation or state.
o Example: In a self-driving car, cameras, radar, and LiDAR serve as sensors that
collect information about the road, other vehicles, pedestrians, and traffic signals.
2. Actuators:
o Actuators are the components that allow the agent to take actions in its
environment. They execute the decisions made by the agent and influence the
environment.
o Example: In a robot, actuators could be wheels or robotic arms that allow the
agent to move or manipulate objects.
3. Perceptual Sequence:
o A perceptual sequence is the history of all percepts (inputs) that the agent has
received up to the current moment. This helps the agent make decisions based on
its previous experiences and current observations.
4. Action:
o Based on its perception of the environment, the agent takes an action to achieve a
goal. The action is selected to maximize the performance measure, which could
involve following a rule, a learned strategy, or searching for the optimal solution.
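The perceive-decide-act cycle described in points 1-4 can be sketched as a minimal loop. The `Agent` class and its method names are illustrative assumptions, not a standard API; the decision policy is a deliberately trivial stub.

```python
# Hedged sketch of the perceive-decide-act loop: sensors record a
# percept sequence, a policy selects an action, actuators execute it.
class Agent:
    def __init__(self):
        self.percept_sequence = []  # history of all percepts received

    def perceive(self, percept):
        # Sensors deliver a percept; the agent records it.
        self.percept_sequence.append(percept)

    def decide(self):
        # Stub policy based on the latest percept only.
        latest = self.percept_sequence[-1]
        return "turn" if latest == "obstacle" else "forward"

    def act(self, environment_log):
        # Actuators execute the chosen action in the environment.
        action = self.decide()
        environment_log.append(action)
        return action

env_log = []
robot = Agent()
robot.perceive("clear")
robot.act(env_log)        # -> "forward"
robot.perceive("obstacle")
robot.act(env_log)        # -> "turn"
```

A real agent would replace the stub in `decide` with rules, a model, a planner, or a learned policy, which is exactly what distinguishes the agent types below.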
Types of Agents:
1. Simple Reflex Agents:
o Structure:
These agents select actions using condition-action rules (“if condition, then action”) applied only to the current percept; they keep no history and no model of the world.
o Behavior:
They react directly to what they currently perceive.
Example: A thermostat that switches the heater on when the temperature falls below a set point.
o Advantages:
Simple, fast, and easy to implement.
o Disadvantages:
Fail in partially observable environments and cannot handle situations not covered by their rules.
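A simple reflex agent can be sketched as a list of condition-action rules checked against the current percept only. The thresholds below are invented for illustration.

```python
# Condition-action rules: each pair is (condition on percept, action).
# No percept history, no internal model.
RULES = [
    (lambda p: p["temperature"] < 18, "heater_on"),
    (lambda p: p["temperature"] > 22, "heater_off"),
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):
            return action
    return "no_op"  # default when no rule fires
```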
2. Model-Based Reflex Agents:
o Structure:
These agents maintain an internal model of the environment. This model
helps the agent understand how the environment works and keeps track of
unobservable aspects of the environment.
The agent updates its internal state based on both current percepts and
prior actions.
o Behavior:
Model-based agents use the model to predict the effects of actions and
choose actions that lead to the desired outcome.
Example: A self-driving car that tracks the positions of other vehicles over
time, even when they are momentarily obscured by objects.
o Advantages:
Can operate in partially observable environments by tracking aspects of the world that are not currently visible.
o Disadvantages:
Building and maintaining an accurate model of the environment can be difficult and computationally costly.
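The self-driving-car example above (tracking vehicles even when momentarily occluded) can be sketched as an internal state that is updated from each percept but never forgotten. `ModelBasedTracker` is an illustrative name, not a real API.

```python
# A model-based agent's internal state: it remembers the last known
# position of each vehicle even when a percept omits it (occlusion).
class ModelBasedTracker:
    def __init__(self):
        self.world_model = {}  # vehicle id -> last known position

    def update(self, percept):
        # percept: dict of currently visible vehicles and positions.
        # Vehicles absent from the percept keep their old entries.
        self.world_model.update(percept)
        return self.world_model

tracker = ModelBasedTracker()
tracker.update({"car_A": (0, 0), "car_B": (5, 2)})
tracker.update({"car_A": (1, 0)})  # car_B occluded this step
# The model still holds car_B's last observed position.
```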
3. Goal-Based Agents:
o Structure:
These agents act based on achieving specific goals rather than following
condition-action rules or reflexes.
o Behavior:
A goal-based agent considers not only the current state of the environment
but also the future states that result from its actions.
o Advantages:
More flexible than reflex agents since they can change their behavior
dynamically to achieve different goals.
o Disadvantages:
Less efficient than reflex agents, since reaching a goal may require search and planning, which can be computationally expensive.
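Goal-based behavior (considering future states that result from actions) is typically implemented as search. The sketch below uses breadth-first search over a tiny invented state graph; the map and names are illustrative.

```python
from collections import deque

# Invented toy map: each state lists the states reachable from it.
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def plan_to_goal(start, goal):
    # Breadth-first search: returns a shortest path of states, or None.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable from start
```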
4. Utility-Based Agents:
o Structure:
The utility function assigns a numerical value to each state, indicating the
level of happiness or satisfaction the agent experiences in that state.
o Behavior:
The agent chooses actions that maximize its overall utility, selecting the
one that leads to the highest expected utility.
o Advantages:
Can make nuanced decisions, weighing multiple factors like risks, costs, and benefits.
o Disadvantages:
Defining an accurate utility function is difficult, and computing expected utilities can be computationally expensive.
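A utility-based agent can be sketched as a function that scores each predicted successor state and picks the action whose outcome maximizes that score. The utility formula and the two candidate actions below are invented for illustration.

```python
# Invented utility function: reward progress, penalize risk.
def utility(state):
    return state["progress"] - 2.0 * state["risk"]

def choose_action(state, actions, transition):
    # transition(state, action) -> predicted next state.
    return max(actions, key=lambda a: utility(transition(state, a)))

def transition(state, action):
    if action == "highway":
        return {"progress": 10, "risk": 3}  # fast but riskier
    return {"progress": 6, "risk": 0}       # side road: slower, safe

best = choose_action({"progress": 0, "risk": 0},
                     ["highway", "side_road"], transition)
# highway utility = 10 - 6 = 4; side_road utility = 6 - 0 = 6
```

Note how the trade-off is made explicit in one number, which is exactly what lets the agent weigh risks, costs, and benefits against each other.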
5. Learning Agents:
o Structure:
Learning agents have the ability to improve their performance over time by
learning from their experiences.
o Behavior:
The agent starts with initial knowledge and improves its choice of actions using feedback (e.g., rewards or critiques) from the environment.
o Advantages:
Can adapt to new or changing environments without being reprogrammed.
o Disadvantages:
Learning can be slow and may require large amounts of data or experience before performance becomes acceptable.
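A minimal sketch of learning from feedback: the agent keeps a running value estimate per action and shifts toward actions that earned higher rewards. This is a simplified value-update rule, not a full reinforcement-learning algorithm; the class and action names are invented.

```python
# Learning agent sketch: per-action value estimates updated by feedback.
class LearningAgent:
    def __init__(self, actions, lr=0.5):
        self.values = {a: 0.0 for a in actions}
        self.lr = lr  # learning rate: how fast estimates move

    def choose(self):
        # Exploit: pick the action with the highest estimated value.
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Move the estimate toward the observed reward.
        self.values[action] += self.lr * (reward - self.values[action])

agent = LearningAgent(["left", "right"])
agent.learn("right", 1.0)  # "right" was rewarded
agent.learn("left", 0.0)
# After feedback the agent prefers "right".
```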
Architecture of Agents:
1. Table-Driven Agents:
o Use a predefined table of percept-action pairs, which maps each percept directly to
an action.
o Example: A thermostat that simply turns the heater on or off based on the current
temperature.
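The thermostat example maps directly to a lookup table. The percept labels below are invented; the point is that every percept has a precomputed action, which is workable only for tiny percept spaces.

```python
# Table-driven agent: every percept maps directly to an action.
PERCEPT_ACTION_TABLE = {
    "too_cold": "heater_on",
    "too_hot":  "heater_off",
    "ok":       "no_op",
}

def table_driven_agent(percept):
    return PERCEPT_ACTION_TABLE[percept]
```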
2. Rule-Based Agents:
o Use a set of condition-action rules to select actions based on the current percepts.
This is an extension of simple reflex agents but with more flexible decision-making
based on rules.
o Example: A spam filter that classifies emails based on rules derived from email
content (e.g., if an email contains certain keywords, it is classified as spam).
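The spam-filter example can be sketched as a single keyword rule. The keyword list is invented for illustration; a real filter would combine many weighted rules or a learned classifier.

```python
# Rule-based agent: condition-action rule over email text.
SPAM_KEYWORDS = {"winner", "free", "prize", "urgent"}

def classify_email(text):
    words = set(text.lower().split())
    # Rule: if any spam keyword appears, classify as spam.
    return "spam" if words & SPAM_KEYWORDS else "ham"
```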
3. Planning Agents:
o Consider sequences of future actions, using search and planning algorithms to construct a plan that achieves the goal before acting.
o Example: A navigation system that plans a complete route before the journey begins.
4. Learning Architecture:
o Include a learning component that improves the agent’s performance over time based on feedback from the environment.
o Example: A machine learning-based chatbot that refines its ability to answer user
questions by learning from conversations.
Example: A Robot Vacuum Cleaner:
Sensors: Cameras and proximity sensors detect obstacles (e.g., walls or furniture).
Simple Reflex Agent: If an obstacle is detected (percept), the robot turns (action).
Model-Based Agent: The robot uses a map of the room to avoid repeatedly cleaning the
same area.
Goal-Based Agent: The goal is to clean the entire room efficiently, avoiding obstacles and
covering all areas.
Learning Agent: Over time, the robot learns which areas of the room get dirtier faster and
prioritizes those areas.
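The vacuum-robot layers above can be combined in one small sketch: a reflex rule for obstacles, a simple model (the set of cleaned cells) so the robot avoids re-cleaning, and a fallback that keeps it moving toward uncovered areas. All names are illustrative; the learning layer is omitted to keep the sketch short.

```python
# One decision step for the vacuum robot, combining agent layers.
def vacuum_step(position, percept, cleaned):
    if percept == "obstacle":
        return "turn"              # simple reflex layer
    if position not in cleaned:
        cleaned.add(position)      # model-based layer: remember work done
        return "clean"
    return "move"                  # goal layer: seek uncovered areas

cleaned = set()
a1 = vacuum_step((0, 0), "clear", cleaned)     # "clean"
a2 = vacuum_step((0, 0), "clear", cleaned)     # "move" (already cleaned)
a3 = vacuum_step((0, 1), "obstacle", cleaned)  # "turn"
```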