AI Unit 1 Notes

What is AI

We see eight definitions of AI, laid out along two dimensions:

1. Thought vs. behaviour
2. Human vs. rational

• The definitions on top are concerned with thought processes and reasoning,
• whereas the definitions on the bottom address behaviour.
• The definitions on the left measure success in terms of fidelity (faithfulness) to human performance,
• whereas the ones on the right measure against an ideal performance measure, called rationality.
A system is rational if it does the “right thing,” given what it knows.
Historically, all four approaches to AI have been followed, each by different people with
different methods.

Thinking Humanly
• "The exciting new effort to make computers think ... machines with minds, in the full and literal sense." (Haugeland, 1985)
• "[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning ..." (Bellman, 1978)

Thinking Rationally
• "The study of mental faculties through the use of computational models." (Charniak and McDermott, 1985)
• "The study of the computations that make it possible to perceive, reason, and act." (Winston, 1992)

Acting Humanly
• "The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)
• "The study of how to make computers do things at which, at the moment, people are better." (Rich and Knight, 1991)

Acting Rationally
• "Computational Intelligence is the study of the design of intelligent agents." (Poole et al., 1998)
• "AI ... is concerned with intelligent behaviour in artifacts." (Nilsson, 1998)

Acting humanly: The Turing Test approach


The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory
operational definition of intelligence.
A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer.

To pass the total Turing Test, the computer will need to possess the following capabilities:
• Natural language processing: required to communicate with the interrogator in a natural human language such as English.
• Knowledge representation: to store and retrieve information during the test.
• Automated reasoning: to use the previously stored information to answer the questions.
• Machine learning: to adapt to new circumstances and detect generalized patterns.
• Vision (for the total Turing test): to recognize the interrogator's actions and other objects during the test.
• Robotics (for the total Turing test): to act upon objects if requested.
• The Turing test is based on a party game, the "Imitation Game," with some modifications.
• The game involves three players: one player is a computer, another is a human responder, and the third is a human interrogator, who is isolated from the other two players and whose job is to find out which of the two is the machine.
• Consider: Player A is a computer, Player B is a human, and Player C is the interrogator. The interrogator knows that one of them is a machine but needs to identify which one on the basis of questions and their responses.
Thinking humanly: The cognitive modelling approach
If we are going to say that a given program thinks like a human,
we must have some way of determining how humans think. We need to get inside the actual
workings of human minds.
There are three ways to do this:
• Through introspection—trying to catch our own thoughts as they go by;
• Through psychological experiments—observing a person in action; and
• Through brain imaging—observing the brain in action.
Once we have a sufficiently precise theory of the mind, it becomes possible to express the
theory as a computer program. If the program's input–output behaviour matches corresponding human behaviour, that is evidence that some of the program's mechanisms could also be operating in humans. This approach is known as cognitive science.
Thinking rationally: The “laws of thought” approach
• The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes.
• A SYLLOGISM is an instance of a form of reasoning in which a conclusion is drawn from two given or assumed propositions.
• His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises. For example:
• "Socrates is a man; all men are mortal; therefore, Socrates is mortal." (A small rule-based sketch of this kind of inference appears after the limitations listed below.)
• Another example: "All TVs use energy; energy always generates heat; therefore, all TVs generate heat."
• These laws of thought were supposed to govern the operation of the mind; their study initiated the field called LOGIC.
There are two limitations to this approach:
1. First, it is not easy to take informal knowledge and state it in the formal terms required by logical notation, particularly when the knowledge is less than completely certain.
2. Second, there is a big difference between solving a problem "in principle" and solving it in practice.
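The syllogism above can be mimicked by a very small rule-based program. The following is a minimal sketch, not taken from the notes: facts and if-then rules are plain strings, and a forward-chaining loop keeps applying rules until nothing new can be derived.

```python
# A minimal sketch of "laws of thought" style inference.
# The rule names and fact strings are illustrative, not from the notes.

facts = {"Socrates is a man"}
rules = [("Socrates is a man", "Socrates is mortal"),   # all men are mortal
         ("TV uses energy", "TV generates heat")]       # energy generates heat

def forward_chain(facts, rules):
    """Repeatedly apply if-then rules until no new conclusions appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'Socrates is a man', 'Socrates is mortal'}
```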
Acting rationally: The rational agent approach
A traditional computer program blindly executes the code that we write. It neither acts on its own nor adapts to change based on the outcome.
The agent program that we refer to here is expected to do more than a traditional computer program: it is expected to create and pursue goals, change state, and operate autonomously.
A rational agent is an agent that acts to achieve the best performance for a given task.
The "laws of thought" approach to AI emphasizes correct inference (a conclusion reached on the basis of evidence and reasoning), and making correct inferences is one part of being a rational agent: being able to reason logically is one way of acting rationally. But correct inference is not all of rationality, because there are situations in which there is no provably correct thing to do. It is also possible to act rationally without involving inference at all; our reflex actions are good examples of acting rationally without inference.
The rational agent approach to AI has a couple of advantages over the other approaches:
1. Correct inference is one possible way to achieve rationality but is not always required to achieve it.
2. Rationality is more amenable to a precise scientific definition than approaches based on human behaviour or human thought.
Foundation of AI
The following disciplines contributed ideas, viewpoints, and techniques to AI:
 Philosophy
 Mathematics
 Economics
 Neuroscience
 Psychology
 Computer Engineering
 Control theory and cybernetics
 Linguistics
Philosophy: the study of the philosophy of mind and the philosophy of computer science, which asks questions such as:

• Can formal rules be used to draw valid conclusions?


• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?

Rationalism: a method of thinking marked by a deductive and abstract way of reasoning.
Dualism: the division of something conceptually into two opposed or contrasted aspects, or the state of being so divided, e.g., "a dualism between man and nature."
Materialism: holds that the brain's operation according to the laws of physics constitutes the mind.
Empiricism: knowledge or justification comes only or primarily from sensory experience.
Induction: The general rules are acquired by exposure to repeated associations between their
elements.
Logical positivism: This theory of knowledge asserts that only statements verifiable through
direct observation or logical proof are meaningful in terms of conveying truth value,
information or factual content
Confirmation theory: is the study of the logic by which scientific hypotheses may be
confirmed or disconfirmed (or supported or refuted) by evidence.

Mathematics:
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?
The three fundamental mathematical areas that contributed to AI are logic, computation, and probability.
Logic: Propositional logic is a mathematical system for reasoning about propositions and
how they relate to one another.
Computation: the study of what can be done with logic and computation, and of its limits.
Algorithm: a set of instructions to be followed in calculations or other operations.
Incompleteness Theorem: a "sufficiently powerful" formal system cannot prove every true statement of number theory; some true statements are unprovable within the system.
Computable: a function is computable if it is capable of being computed by an effective procedure.
Tractability: a problem is called intractable if the time required to solve instances of the problem grows exponentially with the size of the instances.
NP-completeness: the theory of NP-completeness provides a method for recognizing intractable problems.
Probability: probability theory uses the concept of random variables and probability distributions to reason about the outcomes of uncertain situations. It is the branch of mathematics that deals with the likelihood of events occurring, as in the sketch below.
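As a small illustration of reasoning with uncertain information, the sketch below applies Bayes' rule to a hypothetical diagnostic-test scenario; all of the numbers are made-up illustrative values, not from the notes.

```python
# A minimal sketch of reasoning with uncertain information via Bayes' rule.
# The probabilities below are invented for illustration.

p_disease = 0.01            # prior probability of the condition
p_pos_given_disease = 0.95  # probability of a positive test if the condition is present
p_pos_given_healthy = 0.05  # false-positive rate

# total probability of a positive test
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# posterior probability of the condition given a positive test
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # ~0.161
```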

Economics
• How should we make decisions so as to maximize payoff?
• How should we do this when others may not go along?
• How should we do this when the payoff may be far in the future?
Utility: utility represents the satisfaction that consumers receive from choosing and consuming a product or service among different options.
Decision theory: combines probability theory with utility theory; decisions are made by assigning probabilities to the possible outcomes and numerical utilities to their consequences, as in the sketch below.
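A minimal sketch of this idea, assuming made-up probabilities and utilities: the agent computes the expected utility of each option and picks the best one.

```python
# A minimal sketch of decision theory: choose the option with the highest
# expected utility. The probabilities and utilities are illustrative only.

options = {
    # option: list of (probability of outcome, utility of outcome)
    "take umbrella": [(0.4, 8), (0.6, 6)],   # rain / no rain
    "no umbrella":   [(0.4, 1), (0.6, 9)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(options, key=lambda o: expected_utility(options[o]))
print(best, expected_utility(options[best]))  # take umbrella 6.8
```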
Game theory: Game theory studies decision-making in situations where different players
interact and their outcomes depend on each other's choices.
Operations Research: operations research (OR) is an analytical method of problem-solving and decision-making that is useful in the management of organizations.
Neuroscience
• How do brains process information?
Neuroscience is the study of the nervous system, especially the brain. We are still a long way
from understanding how cognitive processes actually work. The truly amazing conclusion is
that a collection of simple cells can lead to thought, action, and consciousness or, brains
cause minds. The only real alternative theory is mysticism: that minds operate in some
mystical realm that is beyond physical science.
Psychology
• How do humans and animals think and act?
Behaviorism: Behaviorism is a theory of learning based on the idea that all behaviors are
acquired through conditioning, and conditioning occurs through interaction with the
environment.
• Behaviorists believe that our actions are shaped by environmental stimuli.
Cognitive psychology: Cognitive psychology involves the study of internal mental processes
—all of the workings inside your brain, including perception, thinking, memory, attention,
language, problem-solving, and learning.
The three key steps of a knowledge-based agent:
1. the stimulus must be translated into an internal representation,
2. the representation is manipulated by cognitive processes to derive new internal representations, and
3. these are in turn retranslated back into action.
Computer engineering
• How can we build an efficient computer?
• Operational computer: An operation, in computing, is an action that is carried out to
accomplish a given task. There are five basic types of computer operations: inputting,
processing, outputting, storing and controlling.
• Operational programmable computer: A computer that follows a set of software
instructions, which is essential in every computer.
AI has pioneered many ideas that have made their way back to mainstream computer science,
including time sharing, interactive interpreters, personal computers with windows and mice,
rapid development environments, the linked list data type, automatic storage management,
and key concepts of symbolic, functional, declarative, and object-oriented programming.
Control theory and cybernetics
• How can artifacts operate under their own control?
Control Theory: In AI, control theory is the study of how agents can best interact with their
environment to achieve a desired goal.
• The goal of control theory is to design algorithms that enable agents to make optimal
decisions, while taking into account the uncertainty of the environment.
Homeostasis and the objective function: homeostasis is the self-regulating process by which an organism maintains internal stability while adjusting to changing external conditions; an objective function is a measure of how well the controlled system is doing, which the controller tries to optimize.
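As an illustration of homeostasis-style feedback control, here is a minimal sketch of a proportional controller driving a temperature toward a setpoint; the setpoint, gain, and dynamics are invented for the example and are not from the notes.

```python
# A minimal sketch of feedback control (homeostasis-style self-regulation).
# Setpoint, gain, and environment response are made-up illustrative values.

setpoint = 22.0   # desired room temperature (degrees C)
temp = 15.0       # current temperature
gain = 0.3        # proportional gain

for step in range(10):
    error = setpoint - temp    # how far we are from the goal
    heating = gain * error     # control action proportional to the error
    temp += heating            # environment responds to the action
    print(f"step {step}: temp={temp:.2f}")
# the temperature converges toward the setpoint
```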
Linguistics
• How does language relate to thought?
Verbal Behavior — behaviorist approach to language learning
Computational linguistics: Computational linguists build systems that can perform tasks
such as speech recognition (e.g., Siri), speech synthesis, machine translation (e.g., Google
Translate), grammar checking, text mining and other “Big Data” applications, and many
others.
Natural language processing and knowledge representation:
Natural language processing: it enables computers to understand natural language, whether spoken or written, as humans do.
• Natural language processing uses artificial intelligence to take real-world input,
process it, and make sense of it in a way a computer can understand.
Knowledge representation: Knowledge representation and reasoning (KR, KRR) is the part of artificial intelligence concerned with how AI agents think and how thinking contributes to their intelligent behaviour.
• It is responsible for representing information about the real world so that a computer can understand it and utilize this knowledge to solve complex real-world problems, such as diagnosing a medical condition or communicating with humans in natural language.
History of AI
The gestation of Artificial Intelligence (1943-1952)
o Year 1943: The first work that is now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
o Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning.
o Year 1950: Alan Turing, an English mathematician, pioneered machine learning in 1950. He published "Computing Machinery and Intelligence", in which he proposed a test, now called the Turing test, that checks a machine's ability to exhibit intelligent behaviour equivalent to human intelligence.

The birth of Artificial Intelligence (1952-1956)


o Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", named the "Logic Theorist". This program proved 38 of 52 mathematics theorems and found new and more elegant proofs for some of them.
o Year 1956: The term "Artificial Intelligence" was first adopted by the American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was established as an academic field.
At that time, high-level computer languages such as FORTRAN, LISP, and COBOL were being invented, and enthusiasm for AI was very high.

The golden years-Early enthusiasm (1956-1974)


o Year 1966: Researchers emphasized developing algorithms that could solve mathematical problems. Joseph Weizenbaum created the first chatbot, named ELIZA, in 1966.
o Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.

The first AI winter (1974-1980)


o The period between 1974 and 1980 was the first AI winter. An AI winter refers to a time period in which computer scientists dealt with a severe shortage of government funding for AI research.
o During AI winters, public interest in artificial intelligence decreased.

A boom of AI (1980-1987)
o Year 1980: After the AI winter, AI came back in the form of "expert systems", programs that emulate the decision-making ability of a human expert.
o In the year 1980, the first national conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford University.

The second AI winter (1987-1993)


o The period between 1987 and 1993 was the second AI winter.
o Investors and governments again stopped funding AI research because of high costs and inefficient results, even though expert systems such as XCON had initially been very cost effective.

The emergence of intelligent agents (1993-2011)


o Year 1997: In 1997, IBM's Deep Blue beat the world chess champion, Garry Kasparov, and became the first computer to defeat a reigning world chess champion.
o Year 2002: For the first time, AI entered the home in the form of Roomba, a vacuum cleaner.
o Year 2006: AI entered the business world by 2006. Companies like Facebook, Twitter, and Netflix also started using AI.

Deep learning, big data and artificial general intelligence (2011-present)


o Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to answer complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
o Year 2012: Google launched the Android app feature "Google Now", which was able to provide information to the user as predictions.
o Year 2014: In 2014, the chatbot "Eugene Goostman" won a competition based on the famous "Turing test."
o Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and performed extremely well.
o Google demonstrated an AI program, "Duplex", a virtual assistant that booked a hairdresser appointment over the phone; the person on the other end did not notice that she was talking to a machine.

Now AI has developed to a remarkable level. The concepts of deep learning, big data, and data science are booming. Companies like Google, Facebook, IBM, and Amazon are working with AI and creating amazing devices. The future of artificial intelligence is inspiring and will bring higher intelligence.

Intelligent Agents in AI
AI can be defined as the study of rational agents and their environments. Agents sense the environment through sensors and act on it through actuators. An AI agent can have mental properties such as knowledge, beliefs, and intentions.

What is an Agent?
An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting.
An agent can be:
• Human agent: a human agent has eyes, ears, and other organs that work as sensors, and hands, legs, and the vocal tract that work as actuators.
• Robotic agent: a robotic agent can have cameras and infrared range finders as sensors and various motors as actuators.
• Software agent: a software agent takes keystrokes and file contents as sensory input, acts on those inputs, and displays output on the screen.
Hence the world around us is full of agents, such as thermostats, cellphones, and cameras, and even we ourselves are agents.
Before moving forward, we should first know about sensors, effectors, and actuators.
Sensor: a sensor is a device that detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.
Actuators: actuators are the components of machines that convert energy into motion. Actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
Effectors: effectors are the devices that affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and a display screen.

Structure of an AI Agent
The task of AI is to design an agent program which implements the agent function. The
structure of an intelligent agent is a combination of architecture and agent program.
It can be viewed as:
Agent = Architecture + Agent program
Following are the main three terms involved in the structure of an AI agent:
Architecture: the machinery that the AI agent executes on.
Agent Function: the agent function maps a percept sequence to an action:
f: P* → A
Agent program: an implementation of the agent function. The agent program runs on the physical architecture to produce the function f (a minimal sketch follows).
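A minimal sketch of these three terms, assuming a table-driven agent program and invented percepts: the table plays the role of the agent function f, the class is the agent program, and the surrounding loop stands in for the architecture.

```python
# A minimal sketch of "Agent = Architecture + Agent program".
# The table entries and percept format are illustrative, not from the notes.

class TableDrivenAgentProgram:
    """Implements the agent function f: P* -> A by looking up the
    percept sequence seen so far in a table."""

    def __init__(self, table):
        self.table = table
        self.percepts = ()   # percept history P*

    def __call__(self, percept):
        self.percepts += (percept,)
        return self.table.get(self.percepts, "NoOp")

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}

# The "architecture" (here, a plain loop) feeds percepts to the program
# and carries out the returned actions.
agent = TableDrivenAgentProgram(table)
for percept in [("A", "Dirty"), ("A", "Clean")]:
    print(percept, "->", agent(percept))
```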

Specifying the task environment


The task environment specifies the performance measure, the environment, and the agent's actuators and sensors. It is also called PEAS (Performance, Environment, Actuators, Sensors). PEAS is the model with which an AI agent's task is specified.
Example: PEAS for a self-driving car:

If we consider a self-driving car, the PEAS representation will be:

Performance: safety, time, legal driving, comfort
Environment: roads, other vehicles, road signs, pedestrians
Actuators: steering, accelerator, brake, signal, horn
Sensors: camera, GPS, speedometer, odometer, accelerometer, sonar
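One simple way to write down a PEAS description in code is as a small data structure. The sketch below uses a Python dataclass whose field names are just the four PEAS components; it records the self-driving car example above and is illustrative only.

```python
# A minimal sketch of a PEAS description as data (illustrative only).
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

self_driving_car = PEAS(
    performance=["safety", "time", "legal driving", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer", "accelerometer", "sonar"],
)
print(self_driving_car)
```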

Some more examples:

1. Medical Diagnosis agent
• Performance measure: healthy patient, minimized cost
• Environment: patient, hospital, staff
• Actuators: tests, treatments
• Sensors: keyboard (entry of symptoms)

2. Vacuum Cleaner agent
• Performance measure: cleanness, efficiency, battery life, security
• Environment: room, table, wood floor, carpet, various obstacles
• Actuators: wheels, brushes, vacuum extractor
• Sensors: camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor

3. Part-picking Robot
• Performance measure: percentage of parts in correct bins
• Environment: conveyor belt with parts, bins
• Actuators: jointed arms, hand
• Sensors: camera, joint angle sensors

Properties of task environments

The range of task environments that might arise in AI is obviously vast. We can, however, identify a fairly small number of dimensions along which task environments can be categorized:
• Fully observable vs Partially Observable
• Single-agent vs multi-agent
• Deterministic vs Stochastic
• Episodic vs sequential
• Static vs Dynamic
• Discrete vs Continuous
• Known vs Unknown

Fully observable vs Partially Observable:


• If an agent's sensors can sense or access the complete state of the environment at each point in time, then it is a fully observable environment; otherwise it is partially observable.
• A fully observable environment is convenient because there is no need to maintain internal state to keep track of the history of the world.
• If an agent has no sensors at all, then the environment is called unobservable.
Single-agent vs Multi-agent
• If only one agent is involved in an environment and it operates by itself, then it is a single-agent environment.
• However, if multiple agents are operating in an environment, then it is a multi-agent environment.
• The agent design problems in a multi-agent environment are different from those in a single-agent environment.

Deterministic vs Stochastic:
• If an agent's current state and selected action completely determine the next state of the environment, then the environment is deterministic.
• A stochastic environment is random in nature and cannot be determined completely by the agent.
• In a deterministic, fully observable environment, the agent does not need to worry about uncertainty.

Episodic vs Sequential:
• In an episodic environment, the agent's experience is divided into a series of one-shot episodes, and only the current percept is required to choose an action.
• However, in a sequential environment, an agent requires memory of past actions to determine the next best action.

Static vs Dynamic:
• If the environment can change while an agent is deliberating, then it is a dynamic environment; otherwise it is static.
• Static environments are easy to deal with because the agent does not need to keep looking at the world while deciding on an action.
• In a dynamic environment, however, the agent needs to keep looking at the world before each action.
• Taxi driving is an example of a dynamic environment, whereas a crossword puzzle is an example of a static environment.

Discrete vs Continuous:
• If there are a finite number of percepts and actions that can be performed in an environment, then it is a discrete environment; otherwise it is continuous.
• A chess game is a discrete environment, as there is a finite number of moves that can be performed.
• A self-driving car is an example of a continuous environment.

Known vs Unknown
• Known and unknown are not actually features of the environment but of the agent's (or designer's) state of knowledge.
• In a known environment, the results of all actions are known to the agent, while in an unknown environment the agent needs to learn how the environment works in order to act.
• It is quite possible for a known environment to be partially observable and for an unknown environment to be fully observable.
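The dimensions above can be summarized per environment as a small table of properties. The sketch below tags two of the examples mentioned in this section (taxi driving and a crossword puzzle) with commonly assumed classifications; it is illustrative only.

```python
# A sketch of tagging example task environments with the dimensions above.
# The classifications are the commonly assumed ones, for illustration.

environments = {
    "crossword puzzle": {"observable": "fully", "agents": "single",
                         "deterministic": True, "episodic": False,
                         "static": True, "discrete": True},
    "taxi driving":     {"observable": "partially", "agents": "multi",
                         "deterministic": False, "episodic": False,
                         "static": False, "discrete": False},
}

for name, props in environments.items():
    print(name, "->", props)
```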
Types of AI Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All of these agents can improve their performance and generate better actions over time. They are listed below:
 Simple Reflex Agent
 Model-based reflex agent
 Goal-based agents
 Utility-based agent
 Learning agent

1. Simple Reflex agent:


o Simple reflex agents are the simplest agents. They take decisions on the basis of the current percept and ignore the rest of the percept history.
o These agents only succeed in a fully observable environment.
o The simple reflex agent does not consider any part of the percept history during its decision and action process.
o The simple reflex agent works on condition-action rules, which map the current state directly to an action. For example, a room-cleaner agent acts only if there is dirt in the room (see the sketch after this list).
o Problems with the simple reflex agent design approach:
o They have very limited intelligence.
o They have no knowledge of non-perceptual parts of the current state.
o The condition-action rules are usually too numerous to generate and store.
o They are not adaptive to changes in the environment.
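A minimal sketch of a simple reflex agent in the spirit of the room-cleaner example; the location/status percept format and the rule set are invented for illustration.

```python
# A minimal sketch of a simple reflex agent using condition-action rules.
# The two-location vacuum world and percept format are illustrative only.

def simple_reflex_vacuum_agent(percept):
    """Decide purely from the current percept, ignoring history."""
    location, status = percept
    if status == "Dirty":        # condition-action rule: dirty -> suck
        return "Suck"
    elif location == "A":        # clean at A -> move right
        return "Right"
    else:                        # clean at B -> move left
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # Left
```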
2. Model-based reflex agent
o The model-based agent can work in a partially observable environment and track the situation.
o A model-based agent has two important factors:
o Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.
o Internal state: a representation of the current state based on the percept history.
o These agents have a model, which is knowledge of the world, and perform actions based on that model (a minimal sketch appears after this list).
o Updating the agent's state requires information about:
a. how the world evolves, and
b. how the agent's actions affect the world.
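A minimal sketch of a model-based reflex agent for the same invented vacuum world: it maintains an internal state from the percept history and uses a tiny model of how its actions change the world. All names and the update rule are illustrative.

```python
# A minimal sketch of a model-based reflex agent (illustrative only).

class ModelBasedVacuumAgent:
    def __init__(self):
        # internal state: what we believe about each location
        self.state = {"A": "Unknown", "B": "Unknown"}

    def update_state(self, percept):
        """How the world evolves, as observed through the percept."""
        location, status = percept
        self.state[location] = status

    def act(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "Dirty":
            self.state[location] = "Clean"   # model: sucking makes it clean
            return "Suck"
        # go to the location we do not yet know is clean
        other = "B" if location == "A" else "A"
        return "NoOp" if self.state[other] == "Clean" else f"MoveTo{other}"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Dirty")))   # Suck
print(agent.act(("A", "Clean")))   # MoveToB
```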

3. Goal-based agents
o Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
o The agent needs to know its goal, which describes desirable situations.
o Goal-based agents expand the capabilities of the model-based agent by adding "goal" information.
o They choose their actions so as to achieve the goal.
o These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved. Such consideration of different scenarios is called searching and planning, and it makes an agent proactive (a small search sketch follows this list).
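A minimal sketch of the "searching" such an agent performs: breadth-first search over an invented state graph to find a sequence of moves that reaches the goal.

```python
# A minimal sketch of goal-based search. The state graph is made up.
from collections import deque

graph = {"home": ["office", "store"],
         "store": ["mall"],
         "office": ["mall"],
         "mall": []}

def search(start, goal):
    """Breadth-first search for a path of moves that reaches the goal."""
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            frontier.append(path + [nxt])
    return None

print(search("home", "mall"))  # ['home', 'office', 'mall']
```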
4. Utility-based agents
o These agents are similar to goal-based agents but add an extra component of utility measurement, which provides a measure of success in a given state.
o A utility-based agent acts based not only on goals but also on the best way to achieve them.
o Utility-based agents are useful when there are multiple possible alternatives and the agent has to choose the best action to perform.
o The utility function maps each state to a real number that measures how efficiently each action achieves the goals, as in the sketch below.
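A minimal sketch of a utility-based choice, with an invented utility function and outcomes: each candidate action's resulting state is mapped to a real number, and the agent picks the action with the highest utility.

```python
# A minimal sketch of a utility-based agent's choice (illustrative only).

def utility(state):
    # happier with a clean room and more remaining battery
    return (10 if state["clean"] else 0) + state["battery"]

actions = {
    "clean now":    {"clean": True,  "battery": 3},
    "charge first": {"clean": False, "battery": 9},
}

best_action = max(actions, key=lambda a: utility(actions[a]))
print(best_action)  # 'clean now' (utility 13 vs 9)
```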
5. Learning Agents
o A learning agent in AI is an agent that can learn from its past experiences; it has learning capabilities.
o It starts acting with basic knowledge and then adapts automatically through learning.
o A learning agent has four main conceptual components:
a. Learning element: responsible for making improvements by learning from the environment.
b. Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
c. Performance element: responsible for selecting the external action.
d. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
o Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it, as in the sketch below.
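A minimal sketch of the four components, with a toy rule-learning update; the percept, action, and reward values are invented for illustration.

```python
# A minimal sketch of a learning agent's four components (illustrative only).

class LearningAgent:
    def __init__(self):
        self.rules = {}                      # knowledge used by the performance element

    def performance_element(self, percept):  # selects the external action
        return self.rules.get(percept, "explore")

    def critic(self, percept, action, reward):
        # compares the outcome against a fixed performance standard (reward > 0)
        return reward > 0

    def learning_element(self, percept, action, good):
        if good:                             # improve future behaviour
            self.rules[percept] = action

    def problem_generator(self):             # suggest informative new experiences
        return "try a random action"

agent = LearningAgent()
percept, action, reward = "dirty floor", "vacuum", 1
agent.learning_element(percept, action, agent.critic(percept, action, reward))
print(agent.performance_element("dirty floor"))  # 'vacuum'
```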
