Unit-1 PPT AI

The document outlines the vision and mission of an institute and its computer science department, emphasizing the provision of high-quality technical education and the development of research and entrepreneurial skills. It details program educational objectives and outcomes, focusing on engineering knowledge, problem analysis, and ethical practices. Additionally, it covers the fundamentals of artificial intelligence, including intelligent agents, learning, and task environments, while highlighting the importance of rationality and autonomy in AI systems.

VISION & MISSION (INSTITUTE)

 VISION

To achieve a prominent position among the top technical institutions.


 MISSION
M1: To bestow standard technical education par excellence through state-of-the-art infrastructure, competent faculty and high ethical standards.
M2: To nurture research and entrepreneurial skills among students in cutting-edge technologies.
M3: To provide education for developing high-quality professionals to transform the society.
VISION & MISSION (DEPARTMENT)
 VISION

To create eminent professionals of Computer Science and Engineering by imparting quality education.

 MISSION

M1: To provide technical exposure in the field of Computer Science and Engineering through state-of-the-art infrastructure and ethical standards.
M2: To engage the students in research and development activities in the field of
Computer Science and Engineering.
M3: To empower learners to engage in industrial and multi-disciplinary projects that address societal needs.
PROGRAM EDUCATIONAL
OBJECTIVES (PEOs)
Our graduates shall
PEO1: Analyse, design and create innovative
products for addressing social needs.
PEO2: Equip themselves for employability, higher
studies and research.
PEO3: Nurture the leadership qualities and
entrepreneurial skills for their successful career.
PROGRAM OUTCOMES

PO1 Engineering knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals and an engineering specialization to the solution of complex engineering
problems.
PO2 Problem analysis: Identify, formulate, review research literature, and analyze
complex engineering problems reaching substantiated conclusions using first principles of
mathematics, natural sciences, and engineering sciences.
PO3 Design/development of solutions: Design solutions for complex engineering
problems and design system components or processes that meet the specified needs with
appropriate consideration for the public health and safety, and the cultural, societal, and
environmental considerations.
PO4 Conduct investigations of complex problems: Use research-based knowledge and
research methods including design of experiments, analysis and interpretation of data, and
synthesis of the information to provide valid conclusions.
PO5 Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modeling to complex engineering
activities with an understanding of the limitations.
PO6 The engineer and society: Apply reasoning informed by the contextual knowledge to
assess societal, health, safety, legal and cultural issues and the consequent responsibilities
relevant to the professional engineering practice.
PO7 Environment and sustainability: Understand the impact of the professional
engineering solutions in societal and environmental contexts, and demonstrate the
knowledge of, and need for sustainable development.
PO8 Ethics: Apply ethical principles and commit to professional ethics and responsibilities
and norms of the engineering practice.
PO9 Individual and team work: Function effectively as an individual, and as a member or
leader in diverse teams, and in multidisciplinary settings.
PO10 Communication: Communicate effectively on complex engineering activities with
the engineering community and with society at large, such as, being able to comprehend
and write effective reports and design documentation, make effective presentations, and
give and receive clear instructions.
PO11 Project management and finance: Demonstrate knowledge and understanding of
the engineering and management principles and apply these to one's own work, as a
member and leader in a team, to manage projects and in multidisciplinary environments.
PO12 Life-long learning: Recognize the need for, and have the preparation and ability to
engage in independent and life-long learning in the broadest context of technological
change.
Program Specific Outcomes
(PSOs)
Students will be able to
PSO1: Apply the basic and advanced knowledge in
developing software, hardware and firmware solutions
addressing real life problems.
PSO2: Design, develop, test and implement IT based
solutions on Android and iOS based apps meeting the
requirements of common people.
Revised Bloom’s Taxonomy
UCB1613 – Artificial Intelligence
UNIT I INTRODUCTION 9
Introduction–Definition — Future of Artificial Intelligence — Characteristics of
Intelligent Agents–Typical Intelligent Agents — Problem Solving Approach to Typical AI
problems.
UNIT II PROBLEM SOLVING METHODS 9
Problem solving Methods — Search Strategies- Uninformed — Informed — Heuristics
— Local Search Algorithms and Optimization Problems — Searching with Partial
Observations — Constraint Satisfaction Problems — Constraint Propagation —
Backtracking Search — Game Playing — Optimal Decisions in Games — Alpha — Beta
Pruning — Stochastic Games
UNIT III KNOWLEDGE REPRESENTATION 9
First Order Predicate Logic — Prolog Programming — Unification — Forward Chaining-
Backward Chaining — Resolution — Knowledge Representation — Ontological
Engineering-Categories and Objects — Events — Mental Events and Mental Objects —
Reasoning Systems for Categories —
Reasoning with Default Information
UNIT IV SOFTWARE AGENTS 9
Architecture for Intelligent Agents — Agent communication — Negotiation and Bargaining —
Argumentation among Agents — Trust and Reputation in Multi-agent systems.
UNIT V APPLICATIONS 9
AI applications — Language Models — Information Retrieval- Information Extraction —
Natural Language Processing — Machine Translation — Speech Recognition — Robot —
Hardware —
Perception — Planning — Moving
What is Artificial Intelligence ?

• making computers that think?


• the automation of activities we associate with human thinking,
like decision making, learning ... ?
• the art of creating machines that perform functions that require
intelligence when performed by people ?
• the study of mental faculties through the use of computational
models ?
                HUMAN                       RATIONAL (maths + engineering)
THOUGHT         Systems that think          Systems that think
                like humans                 rationally
BEHAVIOUR       Systems that act            Systems that act
                like humans                 rationally
Systems that act like humans

• You enter a room which has a computer terminal. You have a
fixed period of time to type what you want into the terminal, and
study the replies. At the other end of the line is either a human
being or a computer system.
• If it is a computer system, and at the end of the period you cannot
reliably determine whether it is a system or a human, then the
system is deemed to be intelligent.
• These cognitive tasks include:
• Natural language processing
• for communication with human
• Knowledge representation
• to store information effectively & efficiently
• Automated reasoning
• to retrieve & answer questions using the stored
information
• Machine learning
• to adapt to new circumstances
The total Turing Test

• Includes two more issues:


• Computer vision
• to perceive objects (seeing)
• Robotics
• to move objects (acting)
Systems that act rationally:
“Rational agent”
• Rational behavior: doing the right thing
• The right thing: that which is expected to maximize goal
achievement, given the available information
• Giving answers to questions is ‘acting’.
• I don't care whether a system:
• replicates human thought processes
• makes the same decisions as humans
• uses purely logical reasoning
History of Artificial Intelligence
• The gestation of artificial intelligence (1943-1955)
• The birth of artificial intelligence (1956)
• Early enthusiasm, great expectations (1952-1969)
• Knowledge-based systems: (1969-1979)
• AI becomes an industry (1980-present)
• The return of neural networks (1986-present)
• AI becomes a science (1987-present)
FUTURE OF AI
• Autonomous planning and scheduling
• Game playing
• Autonomous control
• Diagnosis
• Logistics Planning
• Robotics
• Language understanding and problem solving
• Typical areas to which AI methods are applied
Intelligent Agents
• What is an agent ?
• An agent is anything that perceives its environment through sensors and
acts upon that environment through actuators.
• Example:
• Human is an agent
• A robot is also an agent with cameras and motors
• A thermostat detecting room temperature.
Intelligent Agents
[Diagram of an agent: sensors and actuators connect the agent to its environment; the agent program in between is "what AI should fill".]
Simple Terms

Percept
 The agent's perceptual inputs at any given instant.
Percept sequence
 The complete history of everything that the agent has ever perceived.
Agent function & program
 The agent's behavior is mathematically described by an agent function:
 a function mapping any given percept sequence to an action.
 Practically, it is described by an agent program: the real implementation.
Vacuum-cleaner world

Perception: Is the current square clean or dirty? Which square is the agent in?
Actions: Move left, Move right, Suck, Do nothing
Program implementing the agent function tabulated in Fig. 2.3:

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
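The pseudocode above translates directly into a few lines of Python (a minimal sketch of the two-square world from the slide; the percept is a (location, status) pair):

```python
# Simple reflex vacuum agent for the two-square world (locations A and B).
# The percept is a (location, status) pair, as in the slide's pseudocode.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

# Example percepts and the actions they trigger:
print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```

Note that the program inspects only the current percept; it is the agent function's tabulation compressed into three rules.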
Concept of Rationality

Rational agent
 One that does the right thing
 = every entry in the table for the agent function is correct (rational).
What is correct?
 The actions that cause the agent to be most successful
 So we need ways to measure success.
Performance measure
 An objective function that determines how successfully the agent behaves.
 E.g., 90% or 30%?
An agent, based on its percepts, produces an action sequence:
 if the sequence is desirable, the agent is said to be performing well.
 There is no universal performance measure for all agents.
A general rule:
 Design performance measures according to what one actually wants in the environment,
 rather than how one thinks the agent should behave.
E.g., in the vacuum-cleaner world:
 We want the floor clean, no matter how the agent behaves;
 we don't restrict how the agent behaves.
Rationality

What is rational at any given time depends on four things:
 The performance measure defining the criterion of success
 The agent's prior knowledge of the environment
 The actions that the agent can perform
 The agent's percept sequence up to now
Rational agent
For each possible percept sequence,
 a rational agent should select an action expected to maximize its performance measure,
 given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
E.g., an exam
 Maximize marks, based on
the questions on the paper & your knowledge
Example of a rational agent

Performance measure
 Awards one point for each clean square
 at each time step, over 10000 time steps
Prior knowledge about the environment
 The geography of the environment
 Only two squares
 The effect of the actions
Actions that the agent can perform
 Left, Right, Suck and NoOp
Percept sequence
 Where is the agent?
 Does the location contain dirt?

Under these circumstances, the agent is rational.
Omniscience

An omniscient agent
 knows the actual outcome of its actions in advance;
 there are no other possible outcomes.
 However, omniscience is impossible in the real world.
Based on its circumstances, an agent can be rational without being omniscient:
 rationality maximizes expected performance,
 while perfection maximizes actual performance.
Hence rational agents are not omniscient.
Learning

Does a rational agent depend only on the current percept?
 No, the past percept sequence should also be used.
 This is called learning.
 After experiencing an episode, the agent should adjust its behavior to perform better at the same job next time.
Autonomy
If an agent just relies on the prior knowledge of its designer rather than its
own percepts then the agent lacks autonomy
A rational agent should be autonomous- it should learn what it can to
compensate for partial or incorrect prior knowledge.
E.g., a clock
 No input (percepts)
 Runs only on its own algorithm (prior knowledge)
 No learning, no experience, etc.
Software Agents
Sometimes, the environment may not be the real world.
 E.g., a flight simulator, video games, the Internet
 These are all artificial but very complex environments.
 Agents working in these environments are called software agents (softbots),
 because all parts of the agent are software.
Task environments
Task environments are the problems,
 while rational agents are the solutions.
Specifying the task environment: the PEAS description, as fully as possible
 Performance
 Environment
 Actuators
 Sensors
In designing an agent, the first step must always be to specify the task environment as fully as possible.
We use an automated taxi driver as an example.
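The PEAS description for the taxi can be recorded as a simple structure (a sketch; the entries mirror the examples on the following slides, and a few sensor names such as cameras and the speedometer are illustrative additions):

```python
# PEAS description of the automated taxi driver task environment.
# Entries follow the lecture's examples; some sensors are illustrative.
peas_taxi = {
    "Performance": ["correct destination", "minimize fuel consumption",
                    "minimize trip time/cost", "minimize traffic violations",
                    "maximize safety and comfort"],
    "Environment": ["roads", "traffic lights", "other vehicles", "pedestrians",
                    "stray animals", "road works", "police cars", "customers"],
    "Actuators":   ["accelerator", "steering", "gear shifting", "brake", "display"],
    "Sensors":     ["cameras", "speedometer", "GPS", "odometer"],
}

for component, items in peas_taxi.items():
    print(f"{component}: {', '.join(items)}")
```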
Performance measure
 How can we judge the automated driver? Which factors are considered?
 getting to the correct destination
 minimizing fuel consumption
 minimizing the trip time and/or cost
 minimizing violations of traffic laws
 maximizing safety and comfort, etc.
Environment
 A taxi must deal with a variety of roads:
 traffic lights, other vehicles, pedestrians, stray animals, road works, police cars, etc.
 It must also interact with the customer.
Actuators (for outputs)
 Control over the accelerator, steering, gear shifting and braking
 A display to communicate with the customers
Sensors (for inputs)
 Detect other vehicles and road situations
 GPS (Global Positioning System) to know where the taxi is
 Many more devices are necessary
[Sketch of the automated taxi driver's PEAS description.]
Properties of task environments
1. Fully observable vs. partially observable
 If an agent's sensors give it access to the complete state of the environment at each point in time, the environment is fully observable.
 An environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action.
Partially observable
An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data.
Example:
 The cleaner's local dirt sensor cannot tell whether other squares are clean or not.
2. Deterministic vs. stochastic
 If the next state of the environment is completely determined by the current state and the actions executed by the agent, the environment is deterministic; otherwise, it is stochastic.
 Strategic environment: deterministic except for the actions of other agents.
The cleaner and the taxi driver are stochastic because of some unobservable aspects: noise or unknown factors.
3. Episodic vs. sequential
 An episode = the agent's single pair of perception & action.
 The quality of the agent's action does not depend on other episodes;
 every episode is independent of the others.
 An episodic environment is simpler: the agent does not need to think ahead.
 Ex: an agent that has to spot defective parts on an assembly line.
Sequential
 The current action may affect all future decisions.
 Ex: taxi driving and chess.
4. Static vs. dynamic
A dynamic environment is always changing over time.
 E.g., the number of people in the street; taxi driving
A static environment does not change.
 E.g., crossword puzzles
Semi-dynamic
 The environment does not change over time,
 but the agent's performance score does.
 E.g., chess when played with a clock
5. Discrete vs. continuous
 If there are a limited number of distinct states, clearly defined percepts and actions, the environment is discrete.
 E.g., a chess game
 Continuous: taxi driving
6. Single agent vs. multiagent
 Playing a crossword puzzle: single agent
 Chess playing: two agents
 Competitive multiagent environment: chess playing
 Cooperative multiagent environment: automated taxi drivers avoiding collisions
7. Known vs. unknown
This distinction refers not to the environment itself but to the agent's (or designer's) state of knowledge about the environment.
 In a known environment, the outcomes for all actions are given (example: solitaire card games).
 If the environment is unknown, the agent will have to learn how it works in order to make good decisions (example: a new video game).
Examples of task environments
Structure of agents

Agent = architecture + program
 Architecture = some sort of computing device (with sensors + actuators)
 (Agent) program = some function that implements the agent mapping ("?")
 Writing the agent program is the job of AI.
Agent programs

Input for the agent program
 Only the current percept
Input for the agent function
 The entire percept sequence
 The agent must remember all of them
The agent program can be implemented as
 A lookup table (tabulating the agent function)

Skeleton design of an agent program



Let P be the set of possible percepts, and T the lifetime of the agent (the total number of percepts it receives).
Size of the lookup table:

 number of entries = |P| + |P|^2 + ... + |P|^T (the sum of |P|^t for t = 1 to T)

Consider playing chess:
 |P| = 10, T = 150
 This will require a table of at least 10^150 entries.
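The table-size claim can be checked directly (a quick sketch, using |P| = 10 and T = 150 as in the chess example):

```python
# Number of entries in a lookup-table agent: sum over t = 1..T of |P|**t.
P = 10    # number of possible percepts per time step
T = 150   # lifetime of the agent (total percepts received)

entries = sum(P**t for t in range(1, T + 1))

# The total dwarfs any feasible storage: it has 151 digits,
# i.e. at least 10**150 entries, as claimed on the slide.
print(len(str(entries)))   # 151
print(entries >= 10**150)  # True
```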
Despite its huge size, the lookup table does what we want.
The key challenge of AI:
 Find out how to write programs that, to the extent possible, produce rational behavior
 from a small amount of code,
 rather than from a large number of table entries.
 E.g., a five-line program implementing Newton's method
 vs. huge tables of square roots, sines, cosines, ...
Types of agent programs

Four types
 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents
Simple reflex agents
A simple reflex agent performs actions based only on the current situation,
ignoring the previous history of percepts.
Such agents work only if the environment is fully observable.
They use condition-action rules
 of the form "if ... then ...",
 drawn from predetermined rules present in the knowledge base.
E.g., "if the car in front brakes, then initiate braking."
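The condition-action rules can be sketched as a small rule list (illustrative only: the percept fields and action names below are made up for the example):

```python
# Condition-action rules for a simple reflex agent: each rule maps a
# feature of the *current* percept directly to an action, with no memory.
# Percept fields and action names here are illustrative.

RULES = [
    (lambda p: p.get("car_in_front_is_braking"), "initiate_braking"),
    (lambda p: p.get("obstacle_ahead"),          "swerve"),
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):
            return action
    return "no_op"  # no rule matched the current percept

print(simple_reflex_agent({"car_in_front_is_braking": True}))  # initiate_braking
print(simple_reflex_agent({}))                                 # no_op
```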
Model-based Reflex Agents
A model-based reflex agent is an intelligent agent that uses the percept history and internal memory to make decisions.
For a world that is partially observable,
 the agent has to keep track of an internal state
 that depends on the percept history,
 reflecting some of the unobserved aspects.
 E.g., driving a car and changing lanes
This requires two types of knowledge:
 how the world evolves independently of the agent, and
 how the agent's actions affect the world.
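The internal-state idea can be sketched on the vacuum world (a deliberately tiny, illustrative "model": the agent remembers what each square looked like when last seen):

```python
# Model-based reflex agent: keeps an internal state updated from the
# percept history, so it can act sensibly in a partially observable world.
# The world model here is deliberately tiny and illustrative.

class ModelBasedVacuum:
    def __init__(self):
        # Internal state: beliefs about squares the sensor cannot see now.
        self.belief = {"A": "Unknown", "B": "Unknown"}

    def update_state(self, percept):
        location, status = percept
        self.belief[location] = status  # record what the sensor just saw

    def act(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "Dirty":
            return "Suck"
        other = "B" if location == "A" else "A"
        # Use the model: only travel if the other square might need cleaning.
        if self.belief[other] != "Clean":
            return "Right" if location == "A" else "Left"
        return "NoOp"  # both squares believed clean

agent = ModelBasedVacuum()
print(agent.act(("A", "Dirty")))  # Suck
print(agent.act(("A", "Clean")))  # Right  (B's status still unknown)
```

Unlike the simple reflex agent, this one can decide to do nothing once its memory says both squares are clean.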
The agent maintains a memory (internal state).
Goal-based agents

They choose their actions in order to achieve goals.
This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state.
They usually require search and planning.
E.g., a GPS system finding a path to a certain destination.
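The "search for a path to a goal" idea can be sketched with a breadth-first search over a toy road map (the map and place names are made up for illustration):

```python
from collections import deque

# Goal-based agent as search: find a sequence of moves on a small road map
# that reaches the goal state. The map and place names are illustrative.

ROADS = {
    "Home":    ["Market", "School"],
    "Market":  ["Home", "Station"],
    "School":  ["Home", "Station"],
    "Station": ["Market", "School"],
}

def find_route(start, goal):
    frontier = deque([[start]])      # paths to extend, breadth-first
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path              # BFS: first path found is a shortest one
        for nxt in ROADS[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(find_route("Home", "Station"))  # ['Home', 'Market', 'Station']
```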
Goal-based agents
Utility-based agents

 It is an agent that acts based not only on what the goal is, but on the best way to reach that goal.
 Goals alone are not enough:
 utility-based agents choose actions based on a preference (utility) for each state.
 E.g., a GPS system finding the shortest/fastest/safest route to a certain destination.
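A utility-based choice among routes can be sketched as maximizing a scoring function (the routes, attribute values and weights below are all illustrative assumptions):

```python
# Utility-based agent: among routes that all reach the goal, pick the one
# maximizing a utility that trades off time, toll cost and safety.
# Routes and weights are illustrative.

routes = [
    {"name": "highway",  "time_min": 20, "toll": 5.0, "safety": 0.90},
    {"name": "backroad", "time_min": 35, "toll": 0.0, "safety": 0.70},
    {"name": "city",     "time_min": 30, "toll": 0.0, "safety": 0.95},
]

def utility(route):
    # Higher is better: penalize time and toll, reward safety.
    return -1.0 * route["time_min"] - 2.0 * route["toll"] + 20.0 * route["safety"]

best = max(routes, key=utility)
print(best["name"])  # city
```

A goal-based agent would accept any of the three routes; the utility function is what lets this agent prefer one of them.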
Utility-based agents
Learning Agents
A learning agent can learn from its past experiences: it has learning capabilities.
It starts acting with basic knowledge and then adapts automatically through learning.
In AI,
 once an agent is built,
 we teach it by giving it a set of examples
 and test it using another set of examples.
We then say the agent learns: it is a learning agent.
Four conceptual components:
 Learning element (improvement)
  Responsible for making improvements by observing performance.
 Performance element (action selection)
  Takes in percepts and decides on external actions.
 Critic (feedback)
  Tells the learning element how well the agent is doing with respect to a fixed performance standard
  (feedback from the user or from examples: good or not?).
 Problem generator (exploration)
  Responsible for suggesting actions that will lead to new and informative experiences.
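The four components can be wired together in a toy loop (everything here is illustrative: the actions, the critic's scores and the update rule are made-up stand-ins, not a real learning algorithm from the course):

```python
import random

# Toy learning agent with the four components from the slide:
# performance element (chooses actions), critic (scores them against a
# fixed standard), learning element (updates action preferences), and
# problem generator (occasionally tries an exploratory action).
# All actions, scores and weights are illustrative.

random.seed(0)
ACTIONS = ["Left", "Right", "Suck"]
preferences = {a: 0.0 for a in ACTIONS}   # learned action values

def performance_element():
    return max(preferences, key=preferences.get)

def critic(action):
    return 1.0 if action == "Suck" else -0.1   # fixed performance standard

def learning_element(action, feedback):
    preferences[action] += 0.5 * (feedback - preferences[action])

def problem_generator():
    return random.choice(ACTIONS)   # suggest a new experience to try

for step in range(50):
    explore = random.random() < 0.2
    action = problem_generator() if explore else performance_element()
    learning_element(action, critic(action))

print(performance_element())  # Suck (learned as the best action)
```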
Applications
