
Fundamentals of AI

(BE02000041)

Unit – I
Introduction

Prof. Hitesh D. Rajput


Asst. Prof., Computer Engineering Department,
L. D. College of Engineering, Ahmedabad
Outline
• History & overview of Artificial Intelligence
• Definition of Artificial Intelligence
• Artificial Narrow Intelligence, Artificial General
Intelligence, Artificial Super Intelligence
• Concepts of Production, Agents and Environments
• Characteristics of Intelligent Agents, Concept of
Rationality, Nature of Environments.
Introduction to AI
What is AI?
• Definition: AI is the simulation of human intelligence in
machines that are programmed to think, learn, and problem-
solve.
• According to Gemini: AI is a collection of technologies that
allow computers to perform tasks that humans normally do. AI
systems use math and logic to simulate human reasoning and
make decisions.
• According to IBM: AI is technology that enables computers and
machines to simulate human learning, comprehension,
problem solving, decision making, creativity and autonomy.
Early Concepts of AI
Ancient Mythology:
• Stories of automatons (mechanical beings) like Talos (a man of
bronze who protected Crete from pirates and invaders) in
Greek mythology.
Philosophers:
• Aristotle and the idea of logical reasoning.
• Early concepts of machine intelligence in the 17th century from
thinkers such as René Descartes and Thomas Hobbes (who used
rationalism to explain the physical and mental world).
Alan Turing:
• 1936: Developed the Turing Machine concept, laying the
groundwork for computer science.
• 1950: Published the famous paper, "Computing Machinery and
Intelligence", introducing the Turing Test for machine intelligence.
Early Concepts of AI
Turing Test: (Acting Humanly)
• The Turing Test is a widely recognized benchmark for evaluating
a machine’s ability to demonstrate human-like intelligence.
• The core idea is simple: A human judge engages in a text-based
conversation with both a human and a machine.
• The judge’s task is to determine which participant is human
and which is the machine. If the judge is unable to distinguish
between the human and the machine based solely on the
conversation, the machine is said to have passed the Turing
Test.
Early Concepts of AI
Thinking humanly: The cognitive modeling approach
• Once we gather enough data, we can create a model to simulate the
human process. This model can be used to create software that can
think like humans.
• All we care about is the output of the program given a particular
input. If the program behaves in a way that matches human behavior,
then we can say that humans have a similar thinking mechanism.
• Within computer science, there is a field of study called Cognitive
Modeling that deals with simulating the human thinking process.
• It tries to understand how humans solve problems. It takes the
mental processes that go into problem solving and turns
them into a software model. This model can then be used to simulate
human behavior.
• Cognitive modeling is used in a variety of AI applications such as
deep learning, expert systems, Natural Language Processing,
robotics, and so on.
Early Concepts of AI
Thinking rationally: The “laws of thought” approach
• Rationality refers to doing the right thing in a given circumstance.

• The Greek philosopher Aristotle was one of the first to attempt to
codify “right thinking,” that is, irrefutable reasoning processes.
• His syllogisms provided patterns for argument structures that always
yielded correct conclusions when given correct premises.
• For example, “Socrates is a man; all men are mortal; therefore,
Socrates is mortal.” These laws of thought were supposed to govern
the operation of the mind; their study initiated the field called logic.
Early Concepts of AI
Acting rationally: The rational agent approach
• An agent is just something that acts (agent comes from the Latin
agere, to do).
• Of course, all computer programs do something, but computer
agents are expected to do more: operate autonomously, perceive
their environment, persist over a prolonged time period, adapt to
change, and create and pursue goals.
• Rational actions need to be performed in such a way that there is
maximum benefit to the entity performing the action.
• An agent is said to act rationally if, given a set of rules, it takes
actions to achieve its goals.
• It just perceives and acts according to the information that’s
available. This approach is used a lot in AI to design robots that are
sent to navigate unknown terrain.
Early Concepts of AI

Image Courtesy: https://elizaycayilmaz.medium.com/definitions-of-ai-5efb8989db09


The Birth of AI (1950s - 1960s)
1956: Dartmouth Conference:
• The official birth of AI as a field of study.
• Organizers: John McCarthy (a mathematics professor), Marvin
Minsky, Nathaniel Rochester, and Claude Shannon.
• Term "Artificial Intelligence" coined by John McCarthy.

Early Programs:
• Logic Theorist (1955): Developed by Allen Newell and Herbert
A. Simon, the first AI program that could prove mathematical
theorems.
• General Problem Solver (1959): Another Newell-Simon
development, aiming to simulate human problem-solving.
Growth and Setbacks (1960s - 1970s)
ELIZA (1966):
• A natural language processing program created by Joseph
Weizenbaum that simulated conversation.

Expert Systems:
• AI programs designed to solve specific problems in specialized
domains
• Example: medical diagnosis done by MYCIN in the 1970s.

The AI Winter (1970s):
• Limited progress led to disillusionment and reduced funding for
AI research, known as the "AI Winter."
AI Resurgence (1980s - 1990s)
Expert Systems and Neural Networks:
• Expert systems became more widely used in business and
medicine.
• Backpropagation: Rediscovery of the neural network
algorithm, spurring advancements in machine learning.
Deep Blue:
• 1997: IBM's Deep Blue defeated world chess champion Garry
Kasparov, showcasing AI's ability to handle complex tasks.
The Rise of Machine Learning and Big Data
(2000s - 2010s)
Machine Learning:
• AI systems began to "learn" from data, improving their
performance over time.
Big Data and Computing Power:
• Growth of data and increased computing power enabled more
sophisticated AI models.
Breakthroughs in Speech Recognition and Image Processing:
• Voice assistants like Siri (2011), Google Now (2012), and
Amazon Alexa emerged.
• AlexNet: an image classification model that transformed deep
learning. It was introduced in 2012 by Alex Krizhevsky, Ilya Sutskever,
and Geoffrey Hinton, and marked a key event in the history of deep
learning, showcasing the strengths of CNN architectures and their vast
applications.
Modern AI (2020s and Beyond)
AI in Everyday Life:
• AI-powered tools like self-driving cars, AI assistants, and
recommendation systems.
Generative AI:
• Tools like GPT (Generative Pretrained Transformer) and DALL·E
for generating text and images.
AI Ethics and Governance:
• Growing discussions on the ethical implications of AI, privacy
concerns, and regulation.
Categories of AI
Categories of AI:
• ANI (Artificial Narrow Intelligence)
• AGI (Artificial General Intelligence)
• ASI (Artificial Super Intelligence)
Artificial Narrow Intelligence
Definition: ANI refers to AI systems designed to perform specific
tasks or solve particular problems.
Characteristics of ANI:
• Highly specialized
• Task-specific
• Operates within a limited context (e.g., solving a problem,
answering questions)
• Can outperform humans in specific tasks (but cannot perform
beyond its scope)
Artificial Narrow Intelligence (contd..)
Examples:
• Voice Assistants: Siri, Alexa, Google Assistant
• Recommendation Systems: Netflix, Amazon, Spotify
• Autonomous Vehicles: Tesla's autopilot
• Image Recognition: Facial recognition software
Current State: All AI we interact with today falls under ANI.
Limitations:
• Cannot adapt to new tasks beyond its programming.
• Lack of common sense reasoning.
Artificial General Intelligence (AGI)
Definition: AGI refers to a hypothetical AI system that can
perform any intellectual task that a human being can do.
Characteristics of AGI:
• Capable of learning, reasoning, and understanding across
various domains
• Flexible and adaptable to new situations, just like humans
• Exhibits human-like cognitive abilities such as problem-solving,
language understanding, and creativity
Artificial General Intelligence (AGI) (Contd..)
Key Aspects:
• Autonomy: The ability to make independent decisions.
• Learning: The ability to learn from experience across multiple
domains.
• Reasoning: The ability to use logic and understanding to make
decisions.
Current Status: AGI is still theoretical. We have not yet developed
an AGI system.
Challenges:
• Complexity of Human Cognition: Recreating human-like
intelligence in machines is extremely complex.
• Ethical Concerns: The potential risks of creating an AGI with
equal or superior intelligence to humans.
Artificial Super Intelligence (ASI)
Definition: ASI refers to AI that surpasses human intelligence in
every aspect, including creativity, problem-solving, and
emotional intelligence.
Characteristics of ASI:
• Superhuman intelligence in every field
• Exponential growth: ASI would potentially be capable of rapidly
improving itself.
• Potential to outperform human beings in all domains—
intellectual, emotional, social, and more.
Potential Benefits:
• Could solve complex global problems (e.g., climate change,
disease eradication).
• Potential for breakthroughs in science and technology.
Artificial Super Intelligence (ASI) (contd..)
Risks and Concerns:
• Control problem: The difficulty of ensuring that ASI aligns with
human values and objectives.
• Existential risk: The possibility that ASI could act in ways that
threaten humanity's survival.
• Ethical issues: The consequences of creating an intelligence
superior to human beings.

Current Status: ASI remains speculative and hypothetical. Experts
disagree on when (or if) it will be achieved.
Comparing ANI, AGI, and ASI
Scope of Tasks:
• ANI: Narrow (specific tasks only)
• AGI: Broad (all tasks a human can perform)
• ASI: Vast (surpasses all human capabilities)

Learning Ability:
• ANI: Pre-programmed, limited learning
• AGI: Ability to learn and adapt across domains
• ASI: Continual self-improvement and learning

Autonomy:
• ANI: Limited to pre-programmed rules
• AGI: High autonomy, can make independent decisions
• ASI: Autonomous decision-making on a superhuman level

Current Status:
• ANI: Real, existing in various applications
• AGI: Theoretical, not yet realized
• ASI: Hypothetical, a potential future scenario

Examples:
• ANI: Siri, Tesla Autopilot, AlphaGo
• AGI: None yet, a theoretical AI model
• ASI: None yet, a theoretical superintelligence
The Evolution of AI
Transition from ANI to AGI:
• ANI: Highly specialized, limited tasks.
• AGI: One system capable of learning and performing a wide
variety of tasks.
• ASI: A super intelligent system that far exceeds human
intelligence.
Challenges:
• The path from ANI to AGI is not clear, and there are significant
technological, ethical, and safety concerns.
• Some believe AGI could emerge in the next few decades, while
others predict a longer timeline or even doubt its feasibility.
Agents in Artificial Intelligence
• An agent is a computer program or system that is designed to
perceive its environment, make decisions and take actions to
achieve a specific goal or set of goals.
• The agent operates autonomously, meaning it is not directly
controlled by a human operator.

• Agents can be classified into different types based on their
characteristics, such as whether they are reactive or proactive,
whether they have a fixed or dynamic environment, and
whether they are single or multi-agent systems.
Agents in Artificial Intelligence
• Reactive agents are those that respond to immediate stimuli from their
environment and take actions based on those stimuli.
• Proactive agents, on the other hand, take initiative and plan ahead to
achieve their goals.
• The environment in which an agent operates can also be fixed or dynamic.
Fixed environments have a static set of rules that do not change, while
dynamic environments are constantly changing and require agents to adapt
to new situations.
• Multi-agent systems involve multiple agents working together to achieve a
common goal. These agents may have to coordinate their actions and
communicate with each other to achieve their objectives.
• Agents are used in a variety of applications, including robotics, gaming, and
intelligent systems.
• They can be implemented using different programming languages and
techniques, including machine learning and natural language processing.
Agents in Artificial Intelligence
• Artificial intelligence is defined as the study of rational agents.
• A rational agent could be anything that makes decisions, such as a person,
firm, machine, or software.
• It carries out an action with the best outcome after considering past and
current percepts (the agent’s perceptual inputs at a given instant).
• An AI system is composed of an agent and its environment. The agents act
in their environment. The environment may contain other agents.
• An agent is anything that can be viewed as:
• Perceiving its environment through sensors and
• Acting upon that environment through actuators
Structure of an AI Agent
• To understand the structure of Intelligent Agents, we should be familiar
with Architecture and Agent programs.
• Architecture is the machinery that the agent executes on. It is a device with
sensors and actuators, for example, a robotic car, a camera, and a PC. An
agent program is an implementation of an agent function.
• An agent function is a map from the percept sequence (the history of all that
an agent has perceived to date) to an action.
Agent = Architecture + Agent Program
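To make this concrete, here is a minimal Python sketch of the idea that the
agent program implements the agent function (a map from the percept sequence
to an action) while the architecture runs it. The class names, the placeholder
percepts, and the "NoOp" action are illustrative assumptions, not part of any
standard library.

class AgentProgram:
    """Implements the agent function: percept sequence -> action."""

    def __init__(self):
        self.percept_sequence = []          # history of everything perceived so far

    def __call__(self, percept):
        self.percept_sequence.append(percept)
        return self.select_action()

    def select_action(self):
        # Placeholder policy: a real agent would reason over the percept sequence.
        return "NoOp"


def run(environment_percepts, program):
    """Tiny 'architecture': feeds percepts to the program and collects actions."""
    return [program(p) for p in environment_percepts]


if __name__ == "__main__":
    actions = run(["dirty", "clean", "dirty"], AgentProgram())
    print(actions)   # ['NoOp', 'NoOp', 'NoOp'] for this placeholder policy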
Types of Agents
• Agents can be grouped into five classes based on their degree
of perceived intelligence and capability:
• Simple Reflex Agents
• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agents
• Other related agent types include:
• Rational Agents
• Multi-agent systems
• Hierarchical agents
Simple Reflex Agents
• Simple reflex agents ignore the rest of the percept history and
act only on the basis of the current percept.
• Percept history is the history of all that an agent has perceived
to date. The agent function is based on the condition-action
rule.
• A condition-action rule is a rule that maps a state i.e., a
condition to an action. If the condition is true, then the action
is taken, else not.
• This agent function only succeeds when the environment is
fully observable.
• For simple reflex agents operating in partially observable
environments, infinite loops are often unavoidable.
• It may be possible to escape from infinite loops if the agent can
randomize its actions.
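As an illustration, the following sketch shows a simple reflex agent for a
two-location vacuum world (a common textbook toy example); the location names
and percept format are assumptions made for this example. The agent ignores
percept history and acts only on the current percept via condition-action
rules.

def simple_reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    # Condition-action rules:
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    if location == "B":
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(simple_reflex_vacuum_agent(("A", "Clean")))   # Right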
Simple Reflex Agents
Problems with Simple reflex agents are :
• Very limited intelligence.
• No knowledge of non-perceptual parts of the state.
• The table of condition-action rules is usually too big to generate and store.
• If there occurs any change in the environment, then the collection of rules
needs to be updated.
Model-Based Reflex Agents
• It works by finding a rule whose condition matches the current
situation.
• A model-based agent can handle partially observable
environments by the use of a model about the world.
Model-Based Reflex Agents
• The agent has to keep track of the internal state which is
adjusted by each percept and that depends on the percept
history.
• The current state is stored inside the agent which maintains
some kind of structure describing the part of the world which
cannot be seen.

Updating the state requires information about:
• How the world evolves independently of the agent.
• How the agent’s actions affect the world.
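A possible sketch of a model-based reflex agent for the same toy vacuum world
follows; the internal model, its update rule, and the stopping condition are
illustrative assumptions. The point is that the agent keeps an internal state,
updated from each percept, and uses it to cope with what it cannot currently
see.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: believed status of each location (initially unknown).
        self.model = {"A": None, "B": None}

    def update_state(self, percept):
        location, status = percept
        self.model[location] = status      # incorporate the new percept

    def act(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "Dirty":
            return "Suck"
        # Use the internal model: if every known location is clean, stop.
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Dirty")))   # Suck
print(agent.act(("A", "Clean")))   # Right (B's status is still unknown)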
Goal-Based Agents
• These kinds of agents take decisions based on how far they are
currently from their goal (a description of desirable situations).
• Their every action is intended to reduce their distance from the
goal.
Goal-Based Agents
• This allows the agent a way to choose among multiple
possibilities, selecting the one which reaches a goal state.
• The knowledge that supports its decisions is represented
explicitly and can be modified, which makes these agents more
flexible.
• They usually require search and planning. The goal-based
agent’s behavior can easily be changed.
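The sketch below illustrates the idea on a toy grid: the agent compares the
results of its possible actions and picks the one that most reduces its
distance to the goal. The grid, the action set, and the Manhattan-distance
measure are assumptions chosen for the example; real goal-based agents
typically use search and planning over much richer state spaces.

GOAL = (3, 3)
ACTIONS = {"Up": (0, 1), "Down": (0, -1), "Left": (-1, 0), "Right": (1, 0)}

def manhattan_distance(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def goal_based_agent(state):
    # Evaluate each action by how close its resulting state is to the goal.
    def result(action):
        dx, dy = ACTIONS[action]
        return (state[0] + dx, state[1] + dy)
    return min(ACTIONS, key=lambda a: manhattan_distance(result(a), GOAL))

print(goal_based_agent((0, 0)))   # 'Up' (ties with 'Right' broken by dictionary order)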
Utility-Based Agents
• Agents that are developed with their end uses, or utilities, as
building blocks are called utility-based agents.
• When there are multiple possible alternatives, then to decide
which one is best, utility-based agents are used.
• They choose actions based on a preference (utility) for each
state.
• Sometimes achieving the desired goal is not enough. We may
look for a quicker, safer, cheaper trip to reach a destination.
• Agent happiness should be taken into consideration. Utility
describes how “happy” the agent is.
• Because of the uncertainty in the world, a utility agent chooses
the action that maximizes the expected utility.
• A utility function maps a state onto a real number which
describes the associated degree of happiness.
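The following sketch illustrates choosing an action by maximizing expected
utility; the routes, probabilities, and utility values are made-up numbers used
only for illustration.

# Each action maps to a list of (probability, utility) pairs for its outcomes.
actions = {
    "highway":  [(0.7, 10), (0.3, -5)],   # usually fast, sometimes jammed
    "backroad": [(1.0, 4)],               # slower but predictable
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action, expected_utility(actions[best_action]))   # highway 5.5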
Learning Agents
• A learning agent in AI is a type of agent that can learn from
its past experiences; that is, it has learning capabilities.
• It starts to act with basic knowledge and then is able to act and
adapt automatically through learning.
Learning Agents
• A learning agent has mainly four conceptual components,
which are:

• Learning element: It is responsible for making improvements
by learning from the environment.
• Critic: The learning element takes feedback from critics which
describes how well the agent is doing with respect to a fixed
performance standard.
• Performance element: It is responsible for selecting external
action.
• Problem Generator: This component is responsible for
suggesting actions that will lead to new and informative
experiences.
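A very small sketch of how these four components might be wired together is
shown below; all names, the feedback rule, and the toy action values are
illustrative assumptions rather than a standard design.

import random

class LearningAgent:
    def __init__(self):
        # Knowledge used by the performance element.
        self.action_values = {"A": 0.0, "B": 0.0}

    def performance_element(self):
        # Selects the external action using current knowledge.
        return max(self.action_values, key=self.action_values.get)

    def critic(self, reward, standard=0.0):
        # Compares the outcome against a fixed performance standard.
        return reward - standard

    def learning_element(self, action, feedback, rate=0.1):
        # Improves the knowledge the performance element relies on.
        self.action_values[action] += rate * feedback

    def problem_generator(self):
        # Suggests an exploratory action that may yield new, informative experience.
        return random.choice(list(self.action_values))

agent = LearningAgent()
action = agent.performance_element()        # act using current knowledge
feedback = agent.critic(reward=1.0)         # feedback relative to the standard
agent.learning_element(action, feedback)    # improve from that feedback
print(agent.action_values, agent.problem_generator())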
Rational Agents
• A rational agent can be described as one that does the right thing.
• It is an autonomous entity designed to perceive its
environment, process information, and act in a way that
maximizes the achievement of its predefined goals or
objectives.
• Rational agents always aim to produce an optimal solution.
• Example: A self-driving car maneuvering through city traffic is an
example of a rational agent. It uses sensors to observe the
environment, analyzes data on road conditions, traffic flow, and
pedestrian activity, and makes choices to arrive at its
destination in a safe and effective manner.
• The self-driving car shows rational agent traits by constantly
improving its path through real-time information and lessons
from past situations like roadblocks or traffic jams.
Understanding Intelligent Agents
• Intelligent agents represent a subset of AI systems
demonstrating intelligent behaviour, including adaptive
learning, planning, and problem-solving.
• They operate in dynamic environments, where they make decisions
based on the information available to them.
• These agents dynamically adjust their behaviour, learning from
past experiences to improve their approach and aiming for
accurate solutions.
Understanding Intelligent Agents (Contd..)
• The design of an intelligent agent typically involves four key
components:

• Perception: Agents have sensors or mechanisms to observe
and perceive aspects of their environment. This may involve
collecting data from the physical world, accessing databases, or
receiving input from other software components.
• Reasoning: Agents possess computational or cognitive
capabilities to process the information they perceive. They use
algorithms, logic, or machine learning techniques to analyze
data, make inferences, and derive insights from the available
information.
Understanding Intelligent Agents (Contd..)
• Decision-Making: Based on their perception and reasoning,
agents make decisions about the actions they should take to
achieve their goals. These decisions are guided by predefined
objectives, which may include optimizing certain criteria or
satisfying specific constraints.
• Action: Agents execute actions in their environment to affect
change and progress towards their goals. These actions can
range from simple operations, such as sending a message or
adjusting parameters, to more complex tasks, such as
navigating a virtual world or controlling physical devices.
• Examples: self-driving cars, recommendation systems, virtual
assistants, and game-playing AI.
Rational Agents and Rationality in Decision-Making
• Intelligent agents are characterized by their rationality in
decision-making, which aims to attain optimal outcomes or, in
uncertain scenarios, the best-expected outcome.

• A rational agent can be described as one that does the right thing. It
is an autonomous entity designed to perceive its environment,
process information, and act in a way that maximizes the
achievement of its predefined goals or objectives. Rational
agents always aim to produce an optimal solution.
• Rationality in AI refers to the principle that such agents should
consistently choose actions that are expected to lead to the
best possible outcomes, given their current knowledge and the
uncertainties present in the environment.
Rational Agents and Rationality in Decision-Making
• This principle of rationality guides the behavior of intelligent
agents in the following ways:
• Perception and Information Processing: Rational agents strive
to perceive and process information efficiently to gain the most
accurate understanding of their environment.
• Reasoning and Inference: They employ logical reasoning and
probabilistic inference to make informed decisions based on
available evidence and prior knowledge.
• Decision-Making Under Uncertainty: When faced with
uncertainty, rational agents weigh the probabilities of different
outcomes and choose actions that maximize their expected
utility or achieve the best possible outcome given the available
information.
Rational Agents and Rationality in Decision-Making
• This principle of rationality guides the behavior of intelligent
agents in the following ways:
• Adaptation and Learning: Rational agents adapt their behavior
over time based on feedback and experience, continuously
refining their decision-making strategies to improve
performance and achieve their goals more effectively.
• Example of a rational agent is a chess-playing AI, which selects
moves with the highest likelihood of winning.
PEAS Representation of AI agent
• PEAS stands for performance measure, environment, actuators
and sensors.
• It is a framework that is used to describe an AI agent. It's a
structured approach to design and understand AI systems.
• Performance measure: The performance measure is a criterion that
measures the success of the agent. It is used to evaluate how
well the agent is achieving its goal.
• For example, in a spam filter system, the performance measure
could be minimizing the number of spam emails reaching the
inbox.
• Environment: The environment represents the domain or
context in which the agent operates and interacts. This can
range from physical spaces like rooms to virtual environments
such as game worlds or online platforms like the internet.
PEAS Representation of AI agent (contd..)
• Actuators: Actuators are the mechanisms through which the AI
agent performs actions or interacts with its environment to
achieve its goals. These can include physical actuators like
motors and robotic hands, as well as digital actuators like
computer screens and text-to-speech converters.
• Sensors: Sensors enable the AI agent to gather information
from its environment, providing data that informs its decision-
making process and actions. These sensors can capture various
environmental parameters such as temperature, sound,
movement, or visual input. Examples of sensors include
cameras, microphones, temperature sensors, and motion
sensors.
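One convenient way to write down a PEAS description is as a simple record per
agent; the sketch below does this for two of the agents mentioned above. The
entries are illustrative examples, not definitive specifications.

peas = {
    "spam filter": {
        "Performance measure": "minimize spam reaching the inbox, avoid false positives",
        "Environment":         "email server, incoming message stream, users",
        "Actuators":           "move message to spam folder, deliver to inbox, flag message",
        "Sensors":             "message text, headers, sender metadata",
    },
    "self-driving car": {
        "Performance measure": "safety, legality, passenger comfort, trip time",
        "Environment":         "roads, traffic, pedestrians, weather",
        "Actuators":           "steering, accelerator, brake, indicators, horn",
        "Sensors":             "cameras, lidar, radar, GPS, speedometer",
    },
}

for agent_name, spec in peas.items():
    print(agent_name)
    for component, value in spec.items():
        print(f"  {component}: {value}")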
Applications of Intelligent Agents
• Intelligent agents find applications across a wide range of
domains, revolutionizing industries and enhancing human
capabilities. Some notable applications include:

• Autonomous Systems: Intelligent agents power autonomous
vehicles, drones, and robots, enabling them to perceive their
surroundings, navigate complex environments, and make
decisions in real-time.
• Personal Assistants: Virtual personal assistants like Siri, Alexa,
and Google Assistant employ intelligent agents to understand
user queries, retrieve relevant information, and perform tasks
such as scheduling appointments, setting reminders, and
controlling smart home devices.
Applications of Intelligent Agents (contd..)
• Recommendation Systems: E-commerce platforms, streaming
services, and social media platforms utilize intelligent agents to
analyze user preferences and behavior, providing personalized
recommendations for products, movies, music, and content.
• Financial Trading: Intelligent agents are employed in
algorithmic trading systems to analyze market data, identify
trading opportunities, and execute trades autonomously,
maximizing returns and minimizing risks.
Challenges for Intelligent Agents
• Despite their immense potential, intelligent agents also pose
several challenges and considerations:
• Ethical and Legal Implications: Intelligent agents raise ethical
concerns regarding privacy, bias, transparency, and
accountability. Developers must ensure that agents behave
ethically and comply with legal regulations and societal norms.
• Robustness and Reliability: Agents must be robust and reliable
in dynamic and uncertain environments. They should be
capable of handling unexpected situations, adversarial attacks,
and noisy or incomplete data.
• Interpretability: Understanding and interpreting the decisions
made by intelligent agents is crucial for building trust and
transparency. Explainable AI techniques are essential for
providing insights into the reasoning process and decision-
making of agents.
Challenges for Intelligent Agents (Contd..)
• Despite their immense potential, intelligent agents also pose
several challenges and considerations:
• Scalability and Efficiency: As AI systems become increasingly
complex and data-intensive, scalability and efficiency become
critical considerations. Designing agents that can scale to large-
scale deployments and operate efficiently with limited
computational resources is essential.
Uses of Agents
• Agents are used in a wide range of applications in artificial
intelligence, including:
• Robotics: Agents can be used to control robots and automate
tasks in manufacturing, transportation, and other industries.
• Smart homes and buildings: Agents can be used to control
heating, lighting, and other systems in smart homes and
buildings, optimizing energy use and improving comfort.
• Transportation systems: Agents can be used to manage traffic
flow, optimize routes for autonomous vehicles, and improve
logistics and supply chain management.
• Healthcare: Agents can be used to monitor patients, provide
personalized treatment plans, and optimize healthcare
resource allocation.
• Finance: Agents can be used for automated trading, fraud
detection, and risk management in the financial industry.
Uses of Agents
• Games: Agents can be used to create intelligent opponents in
games and simulations, providing a more challenging and
realistic experience for players.
• Natural language processing: Agents can be used for language
translation, question answering, and chatbots that can
communicate with users in natural language.
• Cyber security: Agents can be used for intrusion detection,
malware analysis, and network security.
• Environmental monitoring: Agents can be used to monitor and
manage natural resources, track climate change, and improve
environmental sustainability.
• Social media: Agents can be used to analyze social media data,
identify trends and patterns, and provide personalized
recommendations to users.
Types of Environments in AI
• An environment in artificial intelligence is the surrounding of
the agent.
• The agent takes input from the environment through sensors
and delivers the output to the environment through actuators.
• There are several types of environments:
• Fully Observable vs Partially Observable
• Deterministic vs Stochastic
• Competitive vs Collaborative
• Single-agent vs Multi-agent
• Static vs Dynamic
• Discrete vs Continuous
• Episodic vs Sequential
• Known vs Unknown
Fully Observable vs Partially Observable
• When an agent’s sensors are capable of sensing or accessing the
complete state of the environment at each point in time, it is said to
be a fully observable environment; otherwise it is partially observable.
• Maintaining a fully observable environment is easy as there is
no need to keep track of the history of the surrounding.
• An environment is called unobservable when the agent has no
sensors at all.
Examples:
• Chess – the board is fully observable, and so are the
opponent’s moves.
• Driving – the environment is partially observable because
what’s around the corner is not known.
Deterministic vs Stochastic
• When the agent’s current state and selected action completely
determine the next state of the environment, the environment is said
to be deterministic.
• A stochastic environment is random in nature: the next state is not
unique and cannot be completely determined by the agent.
Examples:
• Chess – there would be only a few possible moves for a chess
piece at the current state and these moves can be determined.
• Self-Driving Cars: the outcomes of a self-driving car’s actions are not
unique; they vary from time to time.
Competitive vs Collaborative
• An agent is said to be in a competitive environment when it
competes against another agent to optimize the output.
• The game of chess is competitive as the agents compete with
each other to win the game which is the output.
• An agent is said to be in a collaborative environment when
multiple agents cooperate to produce the desired output.
• When multiple self-driving cars are found on the roads, they
cooperate with each other to avoid collisions and reach their
destination which is the output desired.
Single-agent vs Multi-agent
• An environment consisting of only one agent is said to be a
single-agent environment.
• A person left alone in a maze is an example of the single-agent
system.
• An environment involving more than one agent is a multi-agent
environment.
• The game of football is multi-agent as it involves 11 players in
each team.
Dynamic vs Static
• An environment that keeps constantly changing itself when the
agent is up with some action is said to be dynamic.
• A roller coaster ride is dynamic as it is set in motion and the
environment keeps changing every instant.
• An idle environment with no change in its state is called a static
environment.
• An empty house is static as there’s no change in the
surroundings when an agent enters.
Discrete vs Continuous
• If an environment consists of a finite number of actions that
can be deliberated in the environment to obtain the output, it
is said to be a discrete environment.
• The game of chess is discrete as it has only a finite number of
moves. The number of moves might vary with every game, but
still, it’s finite.
• The environment in which the actions are performed cannot be
numbered i.e. is not discrete, is said to be continuous.
• Self-driving cars are an example of continuous environments as
their actions are driving, parking, etc. which cannot be
numbered.
Episodic vs Sequential
• In an Episodic task environment, each of the agent’s actions is
divided into atomic incidents or episodes. There is no dependency
between current and previous incidents. In each incident, an agent
receives input from the environment and then performs the
corresponding action.
• Example: A pick-and-place robot, which is used to detect defective
parts on conveyor belts. Here, each time the robot (agent) makes a
decision about the current part only, i.e. there is no dependency
between current and previous decisions.
• In a Sequential environment, the previous decisions can affect all
future decisions. The next action of the agent depends on what
actions it has taken previously and what action it is supposed to
take in the future.
• Example: Checkers- Where the previous move can affect all the
following moves.
Known vs Unknown
• In a known environment, the output for all probable actions is
given.
• In an unknown environment, by contrast, for an agent to
make a decision, it has to gain knowledge about how the
environment works.
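To tie these properties together, the short sketch below records, as a plain
data structure, how the two recurring examples from this section are
classified. The keys and dictionary layout are illustrative choices; the
classifications themselves follow the examples discussed above.

environment_properties = {
    "chess": {
        "observable": "fully", "deterministic": True,
        "interaction": "competitive", "discrete": True,
    },
    "self-driving car": {
        "observable": "partially", "deterministic": False,
        "interaction": "collaborative", "discrete": False,
    },
}

for task, properties in environment_properties.items():
    print(task, properties)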
