
Chapter – 1

Artificial Intelligence
Artificial Intelligence
AI is the part of computer science concerned with designing intelligent computer systems, that
is, systems that exhibit the characteristics we associate with intelligence in human behavior.
AI is a branch of computer science which deals with helping machines find solutions to complex
problems in a more human-like fashion. This generally involves borrowing characteristics from
human intelligence and applying them as algorithms in a computer-friendly way.
Brief History of Artificial Intelligence
The development of AI can be understood under the following headings:
1. The gestation of artificial intelligence (1943-1955)
 The first work on AI was done by Warren McCulloch and Walter Pitts (1943). They drew
on three sources: knowledge of the basic physiology and function of neurons in the brain;
a formal analysis of propositional logic due to Russell and Whitehead; and Turing's theory
of computation. They showed, for example, that any computable function could be
computed by some network of connected neurons, and that all the logical connectives
(and, or, not, etc.) could be implemented by simple net structures. McCulloch and Pitts
also suggested that suitably defined networks could learn.
 In 1950, Alan Turing published the article "Computing Machinery and Intelligence", in
which he introduced the Turing Test, machine learning, genetic algorithms, and reinforcement
learning.
 In 1949, Donald Hebb demonstrated a simple updating rule (now known as Hebbian
learning) for modifying the connection strengths between neurons.
 In 1950, two graduate students from Princeton (Minsky and Edmonds) built the first neural
network computer, which was named "SNARC".
2. The birth of artificial intelligence (1956)
 In 1956, McCarthy, Minsky, Claude Shannon, and Rochester organized a two-month
workshop at Dartmouth College in Hanover, New Hampshire, which brought together
U.S. researchers interested in automata theory, neural networks, and the study of
intelligence. The objective of the workshop was to find out how to make machines use
language, form abstractions and concepts, solve kinds of problems now reserved for
humans, and improve themselves. There were altogether 10 attendees.
 John McCarthy first coined the term "artificial intelligence" at the Dartmouth
conference. He defined AI as the science and engineering of making intelligent
machines.
3. Early enthusiasm, great expectations (1952-1969)
 Newell and Simon developed the General Problem Solver (GPS). This program could
imitate human problem-solving protocols, but could only handle a limited class of
puzzles.
 In 1952, Arthur Samuel wrote a series of programs for checkers (draughts) that
eventually learned to play at a strong amateur level.
 In 1958, in MIT AI Lab Memo No. 1, McCarthy defined the high-level AI programming
language "Lisp". In the same year, McCarthy published a paper entitled "Programs
with Common Sense" in which he described the Advice Taker, a hypothetical program
that can be seen as the first complete AI system.
 In 1959, Herbert Gelernter developed the Geometry Theorem Prover, which could prove
theorems using explicitly represented axioms.
 In 1963, McCarthy started the AI Lab at Stanford.
 In 1965, J. A. Robinson discovered the resolution method, a complete theorem-proving
algorithm for first-order logic.
 In 1963, James Slagle developed the SAINT program, which was able to solve closed-form
calculus integration problems typical of first-year college courses.
 In 1967, Daniel Bobrow developed the STUDENT program, which solved algebra story
problems, such as the following:
If the number of customers Rabindra gets is twice the square of 10% of the number of
advertisements he runs, and the number of advertisements he runs is 50, what is the
number of customers Rabindra gets?
 Tom Evans' ANALOGY program (1968) solved geometry analogy problems that appear
in IQ tests.
 Hebb's learning methods were enhanced by Bernie Widrow (Widrow and Hoff, 1960;
Widrow, 1962), who called his networks "adalines", and by Frank Rosenblatt (1962)
with his "perceptrons".
4. A dose of reality (1966-1973)
In 1957, Simon made concrete predictions that within 10 years a computer would be a chess
champion and would prove a significant mathematical theorem. These predictions came true
within about 40 years rather than 10. There were several reasons behind this:
 The first reason was that most early programs knew nothing of their subject matter; they
succeeded by means of simple syntactic manipulations. This led to problems in early
machine translation efforts.
 The second reason was the intractability of many of the problems that AI was attempting
to solve. Most of the early AI programs solved problems by trying out different
combinations of steps until the solution was found. This strategy worked initially but later
failed in many situations.
 The third reason was that there were some fundamental limitations on the basic structures
being used to generate intelligent behavior.
5. Knowledge-based systems: The key to power? (1969-1979)
In this period, more powerful, domain-specific knowledge was used that allowed larger
reasoning steps.
 The DENDRAL program (Buchanan et al., 1969) was developed at Stanford and was the
first successful knowledge-intensive system: its expertise derived from a large number of
special-purpose rules. The program was used to solve the problem of inferring molecular
structure from the information provided by a mass spectrometer.
 Feigenbaum, Buchanan, and Edward Shortliffe developed MYCIN to diagnose blood
infections. With about 450 rules, MYCIN was able to perform as well as some experts.
 Winograd developed the SHRDLU system for understanding natural language; it was
designed specifically for one area, the blocks world.
 At Yale, Roger Schank and his students built a series of programs that all had the task of
understanding natural language. The emphasis was less on language in itself and more on
the problems of representing and reasoning with the knowledge required for language
understanding.
 A large number of different representation and reasoning languages were developed.
Some were based on logic, such as the Prolog language (popular in Europe) and PLANNER
(popular in the United States). Others, following Minsky's idea of frames (1975), adopted
a more structured approach.
6. AI becomes an industry (1980 - present)
 The first successful commercial expert system, R1, began operation at the Digital
Equipment Corporation (McDermott, 1982). The program helped configure orders for
new computer systems; by 1986, it was saving the company an estimated $40 million a
year.
 Nearly every major U.S. corporation had its own AI group and was either using or
investigating expert systems.
 In 1981, Japan announced the "Fifth Generation" project to build intelligent computers
running Prolog. In response, the United States formed the Microelectronics and
Computer Technology Corporation (MCC) as a research consortium.
Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in
1988, including hundreds of companies building expert systems, vision systems, robots,
and software and hardware specialized for these purposes.
7. The return of neural networks (1986 - present)
 In the mid-1980s, the back-propagation learning algorithm was reinvented by several
groups and applied to many learning problems in computer science and psychology.
 Modern neural network research has bifurcated into two fields, one concerned with
creating effective network architectures and algorithms and understanding their
mathematical properties, the other concerned with careful modeling of the empirical
properties of actual neurons and ensembles of neurons.
8. AI adopts the scientific method (1987 - present)
AI has finally come firmly under the scientific method. It is now possible to replicate experiments
by using shared repositories of test data and code.
 Approaches based on "hidden Markov models (HMMs)" have come to dominate the field
of speech recognition. HMMs are based on a rigorous mathematical theory and are
generated by a process of training on a large corpus of real speech data.
 Machine translation has followed the same course as speech recognition.
 Neural networks also fit this trend. As neural networks have started using improved
methodology and theoretical frameworks, "data mining" technology has spawned a
vigorous new industry.
 The "Bayesian network" formalism was invented to allow efficient representation of, and
rigorous reasoning with, uncertain knowledge.
 Similar gentle revolutions have occurred in robotics, computer vision, and knowledge
representation.
9. The emergence of intelligent agents (1995 - present)
 One of the most important environments for intelligent agents is the Internet. AI systems
have become more common in Web-based applications. Moreover, AI technologies
underlie many Internet tools such as search engines, recommender systems, and Web
site aggregators.
 The first conference on "Artificial General Intelligence (AGI)" was held in 2008. AGI looks
for a universal algorithm for learning and acting in any environment.
10. The availability of very large data sets (2001- present)
 More emphasis is now placed on data than on algorithms, given the increasing availability
of very large data sources: for example, trillions of words of English and billions of images
from the Web, and billions of base pairs of genomic sequences.
 Today, many thousands of AI applications are deeply embedded in the infrastructure of
every industry.

Different types of AI / Approaches to AI


AI can be understood as the human-like intelligence exhibited by a machine. The field of AI is
interdisciplinary: several sciences and professions converge in it, such as computer science,
psychology, linguistics, neuroscience, etc. AI can be defined in terms of the following four
categories, i.e., four views.
I. Acting like a human / acting humanly
The art of creating machines that perform functions which require intelligence when
performed by people is understood as acting like a human. This approach is also called the
Turing test approach. The Turing test, proposed by Alan Turing (1950), was designed to
provide a satisfactory operational definition of intelligence. The Turing test measures the
performance of an intelligent machine against that of a human being.
The Turing test is a method for determining whether or not a computer is capable of thinking
like a human. According to this test, a computer is deemed to have artificial intelligence if it
can mimic human responses under specific conditions.

Consider the following scenario. There are two rooms, A and B. One of the rooms contains a
computer, and the other room contains a human. The interrogator is outside and does not
know which room has the computer. He can ask questions through a teletype and receives
answers from both rooms A and B. The interrogator needs to identify whether the human is in
room A or in room B. To pass the Turing test, the computer has to fool the interrogator into
believing that it is human.

To pass the Turing test, the computer would need the following capabilities:
Natural language processing (NLP): to enable it to communicate successfully.
Knowledge representation: to store what it knows.
Automated reasoning: to use the stored information to answer questions and to draw new
conclusions.
Machine learning: to adapt to new circumstances and to detect and extrapolate patterns.

II. Thinking humanly: The cognitive modelling approach

This refers to the automation of activities that we associate with human thinking, such as
decision making, problem solving, learning, etc. This is also known as the cognitive modelling
approach, in which the focus is not just on the behaviour and the input; the output is also
examined in terms of the reasoning process. Cognitive science brings together computer
models from AI and experimental techniques from psychology to try to construct precise and
testable theories of the workings of the human mind.
This approach requires understanding how humans think. There are different ways to
determine how a human thinks, such as through experience, investigation, introspection, and
observation of current perceptions. E.g., the General Problem Solver.
III. Thinking rationally: The "laws of thought" approach
Thinking, here, means the study of the computations that make it possible to perceive,
reason, and act. This is also known as the "laws of thought" approach. These laws of thought
were supposed to govern the operation of the mind; their study initiated the field called logic.
A system is said to be rational if it does the right thing given what it knows. This approach
attempts to codify the normative premises in terms of logic, deriving the result through
correct reasoning. Logic provides precise notations for statements about all kinds of things
and the relations between them.
E.g., from the premises "All men are students" and "Ram is a man", the result is: Ram is a
student.
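As a minimal sketch (using a toy rule and fact representation invented here for illustration,
not any standard logic library), the syllogism above can be encoded and its conclusion derived
mechanically by forward chaining:

facts = {("man", "Ram")}            # premise: Ram is a man
rules = [("man", "student")]        # premise: all men are students

def forward_chain(facts, rules):
    # Repeatedly apply rules of the form antecedent(x) -> consequent(x)
    # until no new facts can be derived.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            for predicate, subject in list(derived):
                new_fact = (consequent, subject)
                if predicate == antecedent and new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

print(forward_chain(facts, rules))
# {('man', 'Ram'), ('student', 'Ram')}  ->  Ram is a student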
IV. Acting rationally: The rational agent approach

Computational intelligence is the study of the design of intelligent agents. This model is
also known as the rational agent approach. A rational agent is one that acts so as to achieve
the best outcome or, when there is uncertainty, the best expected outcome.

The rational agent approach is more general than the "laws of thought" approach, which
emphasizes correct inference. Making correct inferences is sometimes part of being a rational
agent, because one way to act rationally is to reason logically to the conclusion that a given
action will achieve one's goals and then to act on that conclusion. E.g., a reflex agent.

AI and Related fields


a) Logical AI
What a program knows about the world in general, the facts of the specific situation in which
it must act, and its goals are all represented by sentences of some mathematical logical
language. The program decides what to do by inferring that certain actions are appropriate
for achieving its goals.
b) Search
AI programs often examine large numbers of possibilities, e.g., moves in a chess game
or inferences by a theorem-proving program. Discoveries are continually made about how to
do this more efficiently in various domains.
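As a minimal sketch of this idea (the graph below is a made-up toy state space, not taken from
any particular problem), a breadth-first search examines possibilities level by level until it
reaches a goal state:

from collections import deque

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
    "F": [],
}

def breadth_first_search(start, goal):
    frontier = deque([[start]])          # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for successor in graph[state]:
            if successor not in visited:
                visited.add(successor)
                frontier.append(path + [successor])
    return None

print(breadth_first_search("A", "F"))    # ['A', 'B', 'D', 'F']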
c) Pattern recognition
When a program makes observations of some kind, it is often programmed to compare what
it sees with a pattern. For example, a vision program may try to match a pattern of eyes and
a nose in a scene to find a face. More complex patterns, e.g., in a natural language text, in a
chess position, or in the history of some event, are also studied.
d) Representation
Facts about the world must be represented in some way. Usually, languages of mathematical
logic are used.
e) Inference
Mathematical-logical deduction is adequate for some purposes, but new methods of non-
monotonic inference have been added to logic since the 1970s. The simplest kind of non-
monotonic reasoning is default reasoning, in which a conclusion is inferred by default,
but the conclusion can be withdrawn if there is evidence to the contrary. For example, when
we hear of a bird, we can infer that it can fly, but this conclusion can be reversed when we
hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that
constitutes the non-monotonic character of the reasoning. Ordinary logical reasoning is
monotonic in that the set of conclusions that can be drawn from a set of premises is a
monotonically increasing function of the premises.
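A minimal sketch of the bird/penguin example above, assuming a toy rule format rather than a
full non-monotonic logic: the default conclusion "it flies" is drawn from "it is a bird" and
withdrawn when the contrary evidence "it is a penguin" is added.

def can_fly(animal, known_facts):
    # Default rule: birds fly, unless a known exception applies.
    properties = known_facts.get(animal, set())
    if "bird" not in properties:
        return False
    return "penguin" not in properties       # exception withdraws the default

facts = {"tweety": {"bird"}}
print(can_fly("tweety", facts))              # True  (conclusion drawn by default)

facts["tweety"].add("penguin")               # new evidence to the contrary
print(can_fly("tweety", facts))              # False (conclusion withdrawn)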
f) Learning from experience
The approaches to AI based on connectionism and neural nets specialize in this. There is also
learning of laws expressed in logic. Programs can only learn what facts or behaviors their
formalisms can represent, and unfortunately learning systems are almost all based on very
limited abilities to represent information.
g) Ontology
Ontology is the study of the kinds of things that exist. In AI, programs and sentences deal
with various kinds of objects, and we study what these are and what their basic properties
are. Emphasis on ontology began in the 1990s.

Importance and application of Artificial Intelligence

We are a privileged generation, living in an era full of technological advancements. Gone are
the days when almost everything was done manually; now we live in a time where a lot of
work is taken over by machines, software, and various automatic processes. In this regard,
artificial intelligence has a special place among all the advancements made today. AI is the
science of developing computers and machines with intelligence like that of humans. With this
technology, machines can do some of the simplest to the most complex tasks that humans
need to do regularly. Artificial intelligence applications help get work done faster and with
more accurate results. Error-free and efficient work is the main motive behind artificial
intelligence.
A concise answer to what AI can do today is difficult because there are so many activities in so
many subfields. There are numerous real-world applications of AI, which signify how important
AI is in today's world. The following applications explain the importance of artificial
intelligence in today's world:
a. Game playing
b. Speech recognition
c. Natural Language Processing
d. Computer Vision
e. Expert Systems
f. Consumer marketing
g. Intrusion Detection
h. Autonomous planning and scheduling
i. Medical diagnosis
j. Media and entertainment.
k. Scientific research
l. Finance

Definition and importance of knowledge and learning


Data
Data are streams of raw facts representing events before they have been arranged into a form
that people can understand and use.
Information
Information is that property of data which represents and measures the effects of
processing them. By information, we mean data that have been shaped into a form that is
meaningful and useful to human beings.
Knowledge
1. Knowledge Definition
Knowledge is understood as the fact or condition of knowing something with familiarity
gained through experience or association. It is something that we come to know by seeing,
hearing, touching, feeling, and tasting. It can be referred to as the information that we receive
or acquire through any medium. It gives us the power to explore things and to make decisions
accordingly. Solving many problems requires a great deal of knowledge, and this knowledge
must be represented and stored in a reliable way.
2. Knowledge storing
To acquire knowledge, we need to understand it first. We learn and gain knowledge in our
own preferred language. Every human or domain has its own way of understanding
knowledge and storing it as required. For people, natural language is the natural medium for
understanding knowledge.
Symbols for the computer: in a computer, knowledge is stored as numbers or character
strings which represent objects or ideas (the internal representation of the knowledge).
The core concept: the mapping from facts to an internal computer representation and to a
form that people can understand.
3. Knowledge representation
There is an unlimited amount of knowledge in the world, each piece holding meaning and
being valuable in its own way. An important feature of intelligence is the ability to create
knowledge from data.
Knowledge can be of various types:
 It can be a simple fact or a complex relationship.
 Mathematical formulas or rules for natural language syntax also represent
knowledge.
 Associations between related concepts provide knowledge, which helps a lot in solving
many problems.
 Inheritance hierarchies between classes of objects have made programming concepts
easier (see the sketch below).

Knowledge is general, which means it may be applied to situations we have not explicitly
been programmed for.
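As a minimal sketch of the inheritance-hierarchy point above (the class names are chosen only
for illustration), properties defined on a general class are inherited by more specific classes,
and exceptions can override them:

class Vehicle:
    wheels = 4                # general knowledge about vehicles

class Car(Vehicle):
    pass                      # inherits wheels = 4

class Motorcycle(Vehicle):
    wheels = 2                # more specific knowledge overrides the default

print(Car().wheels)           # 4
print(Motorcycle().wheels)    # 2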
Importance of Knowledge
A sufficient amount of knowledge leads to intelligence: if we have enough knowledge, then
we can achieve intelligence. Knowledge plays a major role in building intelligent systems. Not
only that, knowledge is very important even in our daily lives. It makes us superior and gives us
wisdom.
Learning
In simple terms, learning refers to the cognitive process of acquiring skill or knowledge.
It is making a useful change in our minds. Learning can simply be understood as the process or
phase of gaining knowledge or skills. Learning means constructing or modifying representations
of what is being experienced.
According to Herbert Simon (1983), "Learning is the phenomenon of knowledge acquisition in
the absence of explicit programming". Learning denotes changes in a system that are adaptive
in the sense that they enable the system to do the same task, or tasks drawn from the same
population, more efficiently and more effectively the next time.
Learning basically involves three factors, which are presented below:
Changes:
Learning changes the learner. For machine learning, the problem is determining the nature of
these changes and how best to represent them.
Generalization:
Learning leads to generalization: performance must improve not only on the same task but
on similar tasks.
Improvement:
Learning leads to improvement. Machine learning must address the possibility that changes
may degrade performance and find ways to prevent this.

Intelligent Agents
Agents

An agent is anything that can be viewed as perceiving its environment through sensors and acting
upon that environment through actuators.

 Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other
body parts for actuators.
 Robotic agent: cameras and infrared range finders for sensors; various motors for
actuators.
Agents and Environment

The term "percept" refers to the agent's perceptual input at any given instant, that is, location
and state of the environment. An agent's percept sequence is the complete history of everything
the agent has ever perceived.
The agent function maps from percept histories to actions.
[f: P* → A]
The agent program runs on the physical architecture to produce f,
Agent = architecture + program
e.g., Vacuum-cleaner world
Percept sequence Actions
[A, clean] Right
[A, dirty] Suck
[B, clean] Left
[B, dirty] Suck
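As a minimal sketch of an agent program for this vacuum-cleaner world (indexed here by the
current percept only, for brevity; the names are illustrative), the table above can be stored as
a lookup dictionary so that the agent function maps a percept to an action:

ACTION_TABLE = {
    ("A", "clean"): "Right",
    ("A", "dirty"): "Suck",
    ("B", "clean"): "Left",
    ("B", "dirty"): "Suck",
}

def vacuum_agent(percept):
    # percept is a (location, status) pair; return the action from the table
    return ACTION_TABLE[percept]

print(vacuum_agent(("A", "dirty")))   # Suck
print(vacuum_agent(("B", "clean")))   # Left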

Rational Agents

For each possible percept sequence, a rational agent should select an action that is expected to
maximize its performance measure, based on the evidence provided by the percept sequence
and whatever built-in knowledge the agent has. A performance measure is an objective criterion
for the success of an agent's behaviour. E.g., a performance measure for a vacuum-cleaner agent
could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity
consumed, the amount of noise generated, etc.

Rationality is distinct from omniscience. An omniscient agent knows the actual outcome of its
actions and can act accordingly, but omniscience is impossible in practice; rationality, by
contrast, maximizes expected performance.
Agents can act to modify future percepts in order to obtain useful information (information
gathering, exploration). An agent is autonomous if its behaviour is determined by its own
percepts and experiences (with the ability to learn and adapt) without depending solely on
built-in knowledge.
In our discussion of the rationality of the simple vacuum-cleaner agent, we had to specify the
performance measure, the environment, and the agent's actuators and sensors. We group all
of these together under the task environment.
Before we design an intelligent agent, we must specify its task environment (PEAS).

 Performance measure P
 Environment E
 Actuators A
 Sensors S

E.g., Agent: Taxi driver


Performance measure: Safe, fast, legal, comfortable trip, maximise profits
Environment: Roads, other traffic, customers
Actuators: Steering wheel, accelerator, brake, signal, horn
Sensors: Cameras, GPS, speedometer, engine sensors, keyboard.
Agent type: Medical diagnosis system
 Performance measure: Healthy patient, minimize costs
 Environment: Patient, hospital, staff
 Actuators: Display of questions, tests, diagnoses, treatments, referrals
 Sensors: Keyboard entry of symptoms, findings, patient's answers

Agent type: Interactive English tutor
 Performance measure: Maximize student's score on test
 Environment: Set of students, testing agency
 Actuators: Display of exercises, suggestions
 Sensors: Typed words

Agent type: Satellite image analysis system
 Performance measure: Correct image categorization
 Environment: Downlink from orbiting satellite
 Actuators: Display of scene categorization
 Sensors: Color pixel arrays

Agent type: Refinery controller
 Performance measure: Maximize purity, yield, safety
 Environment: Refinery, operators
 Actuators: Valves, pumps, heaters, displays
 Sensors: Temperature, pressure, chemical sensors

Agent type: Taxi driver
 Performance measure: Safe, fast, legal, comfortable trip, maximise profits
 Environment: Roads, other traffic, customers
 Actuators: Steering wheel, accelerator, brake, signal, horn
 Sensors: Cameras, GPS, speedometer, engine sensors, keyboard

Agent Types
There are five basic types of agents, listed in order of increasing generality.
1. Table-driven agents
These agents use a table of percept sequences and corresponding actions stored in memory
to find the next action. They are implemented by a large lookup table.
2. Simple reflex agents
These agents choose actions based on condition-action rules, implemented with an appropriate
production system. They are stateless devices which do not keep a memory of past world states.
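A minimal sketch of a simple reflex agent in the same vacuum-cleaner world (the rules below
are illustrative only): it applies condition-action rules to the current percept alone, with no
memory of past percepts.

RULES = [
    (lambda percept: percept["status"] == "dirty",  "Suck"),
    (lambda percept: percept["location"] == "A",    "Right"),
    (lambda percept: percept["location"] == "B",    "Left"),
]

def simple_reflex_agent(percept):
    # Return the action of the first condition-action rule that matches.
    for condition, action in RULES:
        if condition(percept):
            return action
    return "NoOp"

print(simple_reflex_agent({"location": "A", "status": "dirty"}))   # Suck
print(simple_reflex_agent({"location": "B", "status": "clean"}))   # Left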

3. Model-based reflex agents


Agents with memory have an internal state, which is used to keep track of past states of
the world.
4. Goal-based agents
Agents with goals are agents that, in addition to state information, have goal information
that describes desirable situations. Agents of this kind consider future events.

5. Utility-based agents


Utility-based agents base their decisions on classical axiomatic utility theory in order to act
rationally.
