AI Unit-1
Uploaded by Jyoti Mishra

Artificial Intelligence

Syllabus
UNIT-I INTRODUCTION: - Introduction to Artificial Intelligence, Foundations
and History of Artificial Intelligence, Applications of Artificial Intelligence,
Intelligent Agents, Structure of Intelligent Agents, Computer Vision, Natural
Language Processing.
UNIT-II INTRODUCTION TO SEARCH: - Searching for solutions, uninformed
search strategies, informed search strategies, Local search algorithms and
optimization problems, Adversarial Search, Search for Games, Alpha-Beta pruning.
UNIT-III KNOWLEDGE REPRESENTATION & REASONING: - Propositional
logic, Theory of first order logic, Inference in First order logic, Forward &
Backward chaining, Resolution, Probabilistic reasoning, Utility theory, Hidden
Markov Models (HMM), Bayesian Networks.
Syllabus
UNIT-IV MACHINE LEARNING: - Supervised and unsupervised learning,
Decision trees, Statistical learning models, learning with complete data - Naive
Bayes models, Learning with hidden data – EM algorithm, Reinforcement
learning.

UNIT-V PATTERN RECOGNITION: - Introduction, Design principles of
pattern recognition system, Statistical Pattern recognition, Parameter estimation
methods - Principal Component Analysis (PCA) and Linear Discriminant Analysis
(LDA), Classification Techniques – Nearest Neighbor (NN) Rule, Bayes Classifier,
Support Vector Machine (SVM), K – means clustering.
UNIT-I
Artificial Intelligence?

Artificial intelligence (AI) is a wide-ranging branch of computer science concerned
with building smart machines capable of performing tasks that typically require
human intelligence.
AI can be divided into two categories: Strong AI and Weak AI.
1. Strong AI
• Definition: The machine can actually think and perform tasks on its own, just like a human being.
• Functionality: The algorithm is stored by a computer program.
• Examples: There are no proper examples of Strong AI yet.
• Progress: Initial stage.
2. Weak AI
• Definition: The device cannot perform these tasks on its own but is made to look intelligent.
• Functionality: Tasks are entered manually to be performed.
• Examples: An automatic car or remote-control devices.
• Progress: Advanced stage.
Artificial Intelligence?

Definition of AI: AI is a mapping of intelligence, where intelligence itself is
boundless. The field of AI is bounded by four approaches:
• Acting Humanly
• Thinking Humanly
• Thinking Rationally
• Acting Rationally
Acting humanly: The Turing Test approach
• The Turing Test was designed by Alan
Turing in 1950.
• According to Turing, “Instead of asking
whether machines can think, we should ask
whether machines can pass a behavioural
intelligence test.”
• The program has a conversation (via
online typed messages) with an interrogator
for 5 minutes. The interrogator must guess
whether the conversation is with a program or a
person. The program passes the test if it fools
the interrogator 30% of the time.
Computer programs like ELIZA, MGONZ, NATACHATA, and CYBERLOVER have fooled many
users in the past, and the users never knew that they were talking to a computer program.
Acting humanly: The Turing Test approach

Capabilities needed to pass the Turing Test:


• Natural Language Processing to enable computers to communicate
successfully in English
• Knowledge Representation to store what the computer knows or hears;
• Automated Reasoning to use the stored information to answer questions and to
draw new conclusions;
• Machine Learning to adapt to new circumstances and to detect and extrapolate
patterns.
To pass the total Turing Test, the computer will need:
• Computer vision to perceive objects, and
• Robotics to manipulate objects and move about.
Thinking humanly: The cognitive modeling approach

Thinking humanly means making a system or program think like a human. To achieve
that, we need to know how a human thinks. To understand the exact process of how we
think, we would need to look inside the human mind and see how this giant machine
works. There are three ways to interpret how the human mind thinks in theory:
• Introspection method - Catch our own thoughts and see how they flow
• Psychological Inspection method - Observe a person in action
• Brain Imaging method - Observe a person’s brain in action

Using the above methods, if we are able to capture the human brain’s actions and
express them as a theory, then we can convert that theory into a computer program. If
the input/output of the computer program matches human behavior, then it may be that
a part of the program behaves like a human brain. Allen Newell and Herbert Simon
developed the General Problem Solver (GPS) program along these lines.
Thinking rationally: The “laws of thought” approach

The Greek philosopher Aristotle was the first to codify “right-thinking”
reasoning processes. Aristotle’s syllogisms provided patterns for argument structures
that always yield correct conclusions when given correct premises.
For example, “Socrates is a man; all men are mortal; therefore, Socrates is mortal.”
These arguments initiated the field called logic. Notations were developed for
statements about all kinds of objects and for the relations among them.
Remember, logic is a prerequisite for studying AI. There are two main obstacles to this
approach.
• It is not easy to take informal knowledge and state it in the formal terms
required by logical notation, particularly when the knowledge is less than
100% certain.
• There is a big difference between solving a problem “in principle” and solving
it in practice.
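The syllogism above can be sketched as a tiny rule-based inference loop. This is an illustrative forward-chaining sketch, not a full logic engine: facts and rules are plain strings, and the function name is hypothetical.

```python
# Minimal forward-chaining sketch of Aristotle's syllogism (illustrative).
# Each rule maps one premise to one conclusion; we apply rules repeatedly
# until no new fact can be derived.

def forward_chain(facts, rules):
    """Return the set of all facts derivable from the initial facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = {"Socrates is a man"}
rules = [("Socrates is a man", "Socrates is mortal")]
derived = forward_chain(facts, rules)
print("Socrates is mortal" in derived)  # True
```

Real logical systems generalize this idea with variables and unification (covered under first-order logic in Unit-III).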
Acting rationally: The rational agent approach
A rational agent is an agent that acts to achieve its best performance for a given task.
The “Logical Approach” to AI emphasizes correct inferences and achieving a
correct inference is a part of being a rational agent. Being able to give a logical
reason is one way of acting rationally. But not all correct inference can be called
rationality, because there are situations in which there is no provably correct thing
to do. It is also possible to act rationally without involving inference: our reflex
actions are good examples of acting rationally without inference.
The rational agent approach to AI has a couple of advantages over other approaches:
• A correct inference is considered a possible way to achieve rationality but is not
always required to achieve rationality.
• It is a more manageable scientific approach to define rationality than others that
are based on human behavior or human thought.
Foundation of AI
The foundation provides the disciplines that contributed ideas, viewpoints and
techniques to Artificial Intelligence
1. Philosophy
2. Mathematics & Statistics
3. Economics
4. Neuroscience
5. Psychology
6. Computer Science & Engineering
7. Control Theory & Cybernetics
8. Linguistics
Philosophy
• Can formal rules be used to draw valid conclusions?
• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?
Mathematics
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?
• Three fundamental areas are logic, computation and probability.
Economics
• How should we make decisions so as to maximize profit?
Neuroscience
• How do brains process information?
Psychology
• How do humans and animals think and act?
Computer Science & Engineering
• How can we build an efficient computer?
Linguistics
• How does language relate to thought?
Control theory and cybernetics
• How can artifacts operate under their own control?
History of AI
Applications of AI
Intelligent Agents

An intelligent agent is a program or software that perceives (or observes or senses) its
environment through sensors, thinks intelligently and acts upon that environment
through its actuators (to perform actions on the environment or produce output).
Example: Robotics Agent
Intelligent Agents

Agents and its environment:


• Percept
• Percept sequence
• Agent function
• Agent table
• Agent program
Example—the vacuum-cleaner world
• This particular world has just two locations: squares A and B.
• The vacuum agent perceives which square it is in and whether there is dirt in
the square.
• It can choose to move left, move right, suck up the dirt, or do nothing.
Percept: The term “percept” refers to the agent’s perceptual inputs at any given
instant. The percepts for the vacuum-cleaner world are:
• Location: Square A/ Square B
• Status: Dirty/ Clean
Percept sequence: An agent’s percept sequence is the complete history of
everything the agent has ever perceived. Percept
sequence for vacuum-cleaner world:
• {A, Dirty}
• {A, Clean}
• {B, Dirty}
• {B, Clean} ….
Agent Function: An agent’s behavior is described by the agent function that
maps any given percept sequence to an action
PERCEPT ACTION
{A, Dirty} Suck
{A, Clean} Right
{B, Dirty} Suck
{B, Clean} Left
Agent table (percept table): We can imagine tabulating the agent function that
describes any given agent. For most agents, this would be a very large table –
infinite, in fact, unless we place a bound on the length of percept sequences
Agent program: The percept table is an external characterization of an agent.
Internally the agent function for an artificial agent will be implemented by an
agent program.
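The percept/action table above can be implemented directly as a table-driven agent program. This is a minimal sketch (names are illustrative): the table maps each percept (location, status) to an action, exactly mirroring the table in the text.

```python
# Table-driven agent for the two-square vacuum world.
# The agent function is literally a lookup table from percept to action.

AGENT_TABLE = {
    ("A", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
}

def table_driven_agent(percept):
    """Look up the action for the current percept in the agent table."""
    return AGENT_TABLE[percept]

print(table_driven_agent(("A", "Dirty")))  # Suck
print(table_driven_agent(("B", "Clean")))  # Left
```

Note that a real table-driven agent indexes on the whole percept sequence, which is why the table becomes infeasibly large; here a single percept suffices because the vacuum world only needs the current percept.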
Structure of an Agent

The job of AI is to design an agent program that implements the agent function - the
mapping from percepts to actions. The program will run on some sort of computing
device with physical sensors and actuators – This is the architecture (The platform
to run the program, not necessarily the hardware)
Agent = Architecture + Program
An agent program takes the current percept as input from the sensors and returns an
action to the actuators.
Concept of Rationality

What is rational at any given time depends on four things:


• The performance measure that defines the criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date.

A Rational Agent is an agent which takes the right action for every perception. By
doing so, it maximizes the performance measure, which makes the agent the most
successful.
Omniscience, learning, and autonomy

An omniscient agent knows the actual outcome of its actions and can act
accordingly; but omniscience is impossible in reality
A rational agent should not only gather information but also learn as much as possible
from what it perceives. The agent’s initial configuration could reflect some prior
knowledge of the environment, but as the agent gains experience this may be
modified and augmented.
To the extent that an agent relies on the prior knowledge of its designer rather than
on its own percepts, we say that the agent lacks autonomy. A rational
agent should be autonomous – it should learn what it can to compensate for partial
or incorrect prior knowledge.
Omniscience vs. Rational Agent
Task Environment

Task environments are essentially the “problems” to which rational agents are the
“solutions”.
Properties of TASK environment:
• Fully observable vs. partially observable
• Single agent vs. multi-agent
• Deterministic vs. stochastic
• Episodic vs. sequential
• Static vs. dynamic
• Discrete vs. continuous
• Known vs. unknown
Task Environment

An environment is called fully observable when the information received by an
agent at any point in time is sufficient to make the optimal decision. Examples: Tic-
Tac-Toe, Chess. An environment is called partially observable when the agent
needs a memory in order to make the best possible decision. Examples: Poker and
other card games.

An environment is called deterministic when the agent’s actions uniquely determine
the outcome. Example: Chess. An environment is called stochastic when the agent’s
actions do not uniquely determine the outcome. Example: Dice games.
Task Environment

In an episodic environment, there is a series of one-shot actions, and only the
current percept is required for the actions. Example: Part-picking Robot. However,
in a sequential environment, an agent requires memory of past actions to determine
the next best actions. Example: Chess

If the environment can change while an agent is deliberating then we say the
environment is dynamic for that agent; otherwise, it is static. Example: Crossword
Puzzle has static environment and Automated Taxi Driver as dynamic environment.

Discrete environments are those in which a finite set of possibilities determines the
outcome of the task. Example: Chess with a clock. Continuous environments rely on
unknown and rapidly changing data sources. Example: Automated Taxi Driver.
Task Environment

If only one agent is involved in an environment and operates by itself, then such an
environment is called a single-agent environment. Example: Crossword Puzzle.
However, if multiple agents are operating in an environment, then such an
environment is called a multi-agent environment. Example: Chess

In a known environment, the results of all actions are known to the agent.
Example: Solitaire, Card games. In an unknown environment, the agent needs to
learn how it works in order to perform an action. Example: A new video game.
Task Environment

Task Environment            Observable   Agents         Deterministic/   Episodic/    Static/        Discrete/
                                                        Stochastic       Sequential   Dynamic        Continuous
Crossword Puzzle            Fully        Single-Agent   Deterministic    Sequential   Static         Discrete
Chess (with a clock)        Fully        Multi-Agent    Deterministic    Sequential   Semi-dynamic   Discrete
Taxi-Driving                Partially    Multi-Agent    Stochastic       Sequential   Dynamic        Continuous
Medical Diagnosis           Partially    Single-Agent   Stochastic       Sequential   Dynamic        Continuous
Image Analysis              Fully        Single-Agent   Deterministic    Episodic     Static         Continuous
Part-Picking Robot          Partially    Single-Agent   Stochastic       Episodic     Dynamic        Continuous
Interactive English Tutor   Partially    Multi-Agent    Stochastic       Sequential   Dynamic        Discrete
Specifying the task environment

For the acronymically minded, we call this the PEAS (Performance, Environment,
Actuators, Sensors) description. In designing an agent, the first step must always be
to specify the task environment as fully as possible.
• Performance – which qualities it should have?
• Environment – where it should act?
• Actuators – how will it perform actions?
• Sensors – how will it perceive environment?
PEAS (Performance, Environment, Actuators, Sensors) description of Agent Types:

Agent Type: Medical Diagnosis System
• Performance Measure: Healthy patients, minimized costs
• Environment: Patients, hospital, staff
• Actuators: Display of questions, tests, diagnoses, treatments, referrals
• Sensors: Keyboard entry of symptoms, findings, patient’s answers

Agent Type: Satellite Image Analysis System
• Performance Measure: Correct image categorization
• Environment: Downlink from orbiting satellite
• Actuators: Display of scene categorization
• Sensors: Colour pixel arrays

Agent Type: Part-Picking Robot
• Performance Measure: Percentage of parts in correct bins
• Environment: Conveyor belt with parts, bins
• Actuators: Jointed arm and hand
• Sensors: Camera, joint angle sensors

Agent Type: Interactive English Tutor
• Performance Measure: Maximize student’s score on test
• Environment: Set of students, examination agency
• Actuators: Display of exercises, suggestions, corrections
• Sensors: Keyboard entry
Intelligent Agent Types & its functionality

Four basic kinds of agent programs embody the principles underlying almost all
intelligent systems:
• Simple reflex agents;
• Model-based reflex agents;
• Goal-based agents; and
• Utility-based agents.
In addition, a learning agent can be built on top of any of these.
Intelligent
Agent Types

Simple reflex agents


A simple reflex agent is the simplest of all the agent programs. Decisions or
actions are taken based on the current percept alone, not on the rest of the
percept history. These agents react only according to predefined rules. They work
best when the environment is fully observable.

The agent takes input from the environment through sensors and delivers the
output to the environment through actuators.

The agent function, in this case, is based on condition-action rules, where a
condition (the perceived state) is mapped to an action: the action is taken only
when its condition is true.
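A simple reflex vacuum agent can be sketched with a few condition-action rules. This is an illustrative sketch (the function name is hypothetical): each `if` is one condition-action rule tested against the current percept only.

```python
# Simple reflex agent for the vacuum world: the action depends only on the
# current percept, chosen by condition-action rules.

def simple_reflex_agent(percept):
    location, status = percept
    # Condition-action rules, checked in order against the current percept.
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    if location == "B":
        return "Left"

print(simple_reflex_agent(("A", "Dirty")))  # Suck
print(simple_reflex_agent(("B", "Clean")))  # Left
```

Unlike the table-driven version, the rules generalize over percepts (e.g. "Suck whenever dirty, wherever you are"), which keeps the program compact.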
Intelligent
Agent Types

Model-based reflex
agents
Model-based Reflex Agents works by finding a rule whose condition
matches the current situation.

A model-based agent can handle partially observable environments by using
a model of the world. The agent has to keep track of an internal
state, which is adjusted by each percept and depends on the percept
history.
The current state is stored inside the agent which maintains some kind of
structure describing the part of the world which cannot be seen.

Updating the state requires information about:


• How the world evolves independently from the agent?
• How do the agent’s actions affect the world?
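The internal-state idea can be sketched for the vacuum world: the agent remembers what it believes about the square it cannot currently see. This is an illustrative sketch with hypothetical names; the model update encodes both how the world evolves and how the agent's own actions (Suck) change it.

```python
# Model-based reflex agent sketch: internal state tracks the believed status
# of each square, so the agent can act sensibly under partial observability.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: believed status of each square (None = unknown).
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        self.model[location] = status          # update state from the percept
        if status == "Dirty":
            self.model[location] = "Clean"     # model the effect of Suck
            return "Suck"
        # If every square we know about is clean, stop; otherwise go look.
        if all(v == "Clean" for v in self.model.values()):
            return "NoOp"
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Dirty")))  # Suck
print(agent.act(("A", "Clean")))  # Right (B's status still unknown)
print(agent.act(("B", "Clean")))  # NoOp (both squares believed clean)
```

A simple reflex agent in the same situation would wander forever, because it has no memory of which squares it has already found clean.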
Intelligent
Agent Types

Goal-based agents
Goal-based AI agents are an extension of model-based AI agents. They can perform
all the tasks that model-based AI agents can perform: they work on the current
perception of the environment, collected via sensors, together with the knowledge
gained from past events. Both are required for the correct functioning of a
model-based agent and a goal-based agent, but a goal-based agent additionally
requires a description of the desired outcome – the goal state.

In goal-based agents, the user provides the input and knows the expected
output. The model performs the actions while keeping the goal state in
perspective. The whole technique of the goal-based agent to reach a goal or a
final state is based on searching and planning.
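The "searching and planning" part can be sketched with a breadth-first search over vacuum-world states. This is an illustrative sketch (function names and the state encoding are assumptions): the agent plans a sequence of actions that reaches the goal state "both squares clean".

```python
# Goal-based agent sketch: plan a path to the goal state with breadth-first
# search. A state is (location, status_A, status_B).
from collections import deque

def bfs_plan(start, goal_test, successors):
    """Return a list of actions from start to the first state passing goal_test."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

def successors(state):
    loc, a, b = state
    # Moving is always possible; moving "into" the current square is a no-op
    # state, which the visited set prunes.
    result = [("Right", ("B", a, b)), ("Left", ("A", a, b))]
    if loc == "A" and a == "Dirty":
        result.append(("Suck", ("A", "Clean", b)))
    if loc == "B" and b == "Dirty":
        result.append(("Suck", ("B", a, "Clean")))
    return result

plan = bfs_plan(("A", "Dirty", "Dirty"),
                lambda s: s[1] == "Clean" and s[2] == "Clean",
                successors)
print(plan)  # ['Suck', 'Right', 'Suck']
```

Search strategies like this one are the subject of Unit-II.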
Intelligent
Agent Types

Utility-based agents
Utility-based agents are similar to goal-based agents but add an extra component
of utility measurement, which distinguishes them by providing a measure of success
in a given state.

Utility-based agents act based not only on goals but also on the best way to achieve
those goals.

The Utility-based agent is useful when there are multiple possible alternatives,
and an agent has to choose in order to perform the best action.

The utility function maps each state to a real number to check how efficiently
each action achieves the goals.
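The utility function described above can be sketched directly: map each candidate successor state to a real number and pick the action with the maximal value. All names and the utility values are illustrative assumptions.

```python
# Utility-based agent sketch: choose the action whose resulting state has the
# highest utility.

def utility(state):
    """Map a state to a real number: more clean squares is better."""
    return sum(1 for status in state.values() if status == "Clean")

def utility_based_action(state, actions):
    """Pick the action leading to the successor state with maximal utility.
    `actions` maps each candidate action to its predicted resulting state."""
    return max(actions, key=lambda a: utility(actions[a]))

current = {"A": "Dirty", "B": "Dirty"}
candidates = {
    "Suck":  {"A": "Clean", "B": "Dirty"},
    "Right": {"A": "Dirty", "B": "Dirty"},
}
print(utility_based_action(current, candidates))  # Suck
```

When several alternatives all reach the goal, the utility function breaks the tie by preferring the cheaper, faster, or safer one.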
Intelligent
Agent Types

Learning agents
A learning agent in AI is the type of agent which can learn from its past
experiences, or it has learning capabilities.
It starts acting with basic knowledge and is then able to adapt automatically
through learning.
A learning agent has mainly four conceptual components, which are:
• Learning element: It is responsible for making improvements by learning
from environment
• Critic: The learning element takes feedback from the critic, which describes
how well the agent is doing with respect to a fixed performance standard.
• Performance element: It is responsible for selecting external actions.
• Problem generator: This component is responsible for suggesting actions
that will lead to new and informative experiences.
Hence, learning agents are able to learn, analyze performance, and look for new
ways to improve the performance.
Computer Vision

Computer vision is a field of Artificial Intelligence that enables computers to see,
identify and process images in the same way that human vision does, and then provide
appropriate output. Applications include face recognition, object recognition,
location recognition and tracking, forensics, virtual reality, robotics, navigation,
security, and so on.
Computer Vision Hierarchy: Computer vision is divided into three basic
categories that are as following:
• Low-level vision: It includes processing an image for feature extraction.
• Intermediate-level vision: It includes object recognition and 3D scene
interpretation
• High-level vision: It includes conceptual description of a scene like activity,
intention and behavior.
Computer Vision vs. Image Processing
Computer Vision vs. Image Processing - Example
Natural Language Processing (NLP)

Natural language processing (NLP) is a branch of artificial intelligence that helps
computers understand, interpret and manipulate human language. NLP draws from
many disciplines, including computer science and computational linguistics, in its
pursuit to fill the gap between human communication and computer understanding.

The field of NLP involves making computers perform useful tasks with the
natural languages humans use. The input and output of an NLP system can be:
• Speech
• Written Text
Components of NLP:
1. Natural Language Understanding (NLU)
2. Natural Language Generation (NLG)
Natural Language Understanding (NLU) is
the ability of a computer to understand human
language. Natural language understanding is
taking in an input text string and analyzing
what it means. The most basic form of NLU is
parsing, which takes text written in natural
language and converts it into a structured
format that computers can understand.

Natural Language Generation (NLG) is the


process of producing meaningful phrases and
sentences in the form of natural language from
some internal representation. It involves text
planning, sentence planning & text realization
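A toy round-trip through both components can be sketched as follows. This is only an illustration with hypothetical names: NLU parses a sentence of the fixed form "X is Y" into a structured record, and NLG realizes that record back into a sentence. Real systems use far richer grammars and statistical models.

```python
# Toy NLU + NLG sketch for sentences of the form "X is Y".

def understand(sentence):
    """NLU: parse 'X is Y.' into a structured (subject, predicate) record."""
    subject, _, predicate = sentence.partition(" is ")
    return {"subject": subject, "predicate": predicate.rstrip(".")}

def generate(record):
    """NLG: realize the internal representation back into a sentence."""
    return f'{record["subject"]} is {record["predicate"]}.'

parsed = understand("Socrates is mortal.")
print(parsed)            # {'subject': 'Socrates', 'predicate': 'mortal'}
print(generate(parsed))  # Socrates is mortal.
```

The structured record in the middle is the "internal representation" the slides refer to: NLU produces it from text, and NLG consumes it to produce text.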
UNIT-II
Syllabus
UNIT-I INTRODUCTION: - Introduction to Artificial Intelligence, Foundations and
History of Artificial Intelligence, Applications of Artificial Intelligence, Intelligent
Agents, Structure of Intelligent Agents, Computer Vision, Natural Language
Processing.
UNIT-II INTRODUCTION TO SEARCH: - Searching for solutions, uninformed
search strategies, informed search strategies, Local search algorithms and
optimization problems, Adversarial Search, Search for Games, Alpha-Beta
pruning.
UNIT-III KNOWLEDGE REPRESENTATION & REASONING: - Propositional
logic, Theory of first order logic, Inference in First order logic, Forward &
Backward chaining, Resolution, Probabilistic reasoning, Utility theory, Hidden
Markov Models (HMM), Bayesian Networks.
